Porting a cross-platform GUI application to Rust - Mozilla Hacks - the Web developer blog
BB_C @programming.dev · Posts 10 · Comments 366 · Joined 2 yr. ago
The crash reporter has a very unique requirement: it must use as little as possible of the Firefox code base, ideally none!
we already ship GTK with Firefox on Linux to make a modern-feeling GUI, so we can use it for the crash reporter, too.
I'm almost hoping for some GTK-caused crashes. They can enjoy the native look and feel while debugging that!
Maybe then they will learn how to stick fully to logical requirements instead of going for "meh big dependency" and "meh look and feel".
Rust in Thunderbird
Thunderbird is still going?
What's next? Rust in Songbird?
To be fair, until a few months ago the latest stable version of hyper (pre-v1) did offer a usable high-level API. What you describe only strictly applies to hyper v1, which hasn't been around (in stable release form) for long.
On the other hand, I'm not sure why the parent commenter thinks a lack of heavy core development is a bad thing, or why they think hyper "needs help".
As a user of both libcurl (though I haven't followed its development for years) and hyper, I'd say either commit to making hyper the default at some point and make that a priority, or drop it altogether. And since there is no intention/plan to do the former, the latter does indeed follow logically.
Yep.
I think we are way past the point where a random release of a project that happens to use Rust as an implementation language would meet the "interesting" threshold.
Being webshit-related doesn't help of course, but maybe that's just me.
the shittiest
subjective
slowest
nope
Thank you for participating.
Were there actually any real-world use-cases affected by this? Do any of them not deserve to be named and shamed, regardless of this vulnerability?
If it were up to me, I would nuke the custom cmd implementation, leave some helpful compile error messages behind, and direct users to some 3rd-party crates to choose from.
Didn't read the post yet. But I've been keeping half an eye on Ratatui for a while. I decided to put off writing anything utilizing it because I was hoping things would move along here at some point. Nothing yet on that front, unfortunately.
I don't have an informed answer, and maybe you know all of this already.
(I only used pyo3 once in one of my projects, and didn't know about this new experimental feature, I also only used it to call python code from Rust, which is the other half of the crate, so to speak.)
https://github.com/PyO3/pyo3/blob/main/guide/src/async-await.md
Python awaitables instantiated with this method can only be awaited in asyncio context. Other Python async runtime may be supported in the future.
So the runtime here is from the python side. And my first guess would be that support doesn't extend to use-cases where special (Rust) runtime support is needed, like the typical io (and sometimes time) features of the tokio runtime that are often used.
Maybe the reference to "Other Python async runtime" in the quote above is hinting at something that may support such use-cases in the future.
If your Rust code doesn't work without calling one of the enable_*() methods here, then that's probably your answer.
No
struct Shapes<const N: usize>([Shape; N]);

impl<const N: usize> Shapes<N> {
    const fn area(&self) -> f64 { /* ... */ }
}
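For context, a compilable sketch of the above, assuming a trivial Shape with a precomputed area field (the Shape type and the total_area name are mine, and const is dropped here because iterators aren't usable in const fn):

```rust
// Hypothetical Shape just to make the snippet compile.
#[derive(Clone, Copy)]
struct Shape {
    area: f64,
}

// N is a const generic: the array length is part of the type.
struct Shapes<const N: usize>([Shape; N]);

impl<const N: usize> Shapes<N> {
    // Sum of all contained areas (not const: iterator adaptors
    // are not allowed in const fn).
    fn total_area(&self) -> f64 {
        self.0.iter().map(|s| s.area).sum()
    }
}
```

`Shapes([...])` then infers `N` from the array literal at the call site.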
Bad article 🤨
Other than a couple already mentioned, I like match_block_trailing_comma. Besides the symmetry, trailing commas in general are good for potentially reducing diffs in the future.
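For reference, that option goes in a rustfmt.toml at the project root. It's a real rustfmt option, though last I checked it was still nightly-gated (treat that stability note as an assumption):

```toml
# rustfmt.toml
match_block_trailing_comma = true
```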
But really, I'm not that bothered.
Well, obviously that will depend on which defaults (and how many?!) a developer is going to change.
https://doc.rust-lang.org/cargo/reference/profiles.html#default-profiles
And the debug (dev) profile has its uses. It's just not necessarily the best for typical day-to-day development in many projects.
I actually use two steps of profile inheritance, with -cl (for Cranelift) inheriting from a special release-dev profile. A developer does not have to be limited in how they modify or construct their profiles.
Yes. And to complete the pro tips, the choice of linker can be very relevant. Using mold would come recommended nowadays.
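For anyone wanting to try it, mold can be enabled per-project via .cargo/config.toml (the target triple shown is an assumption; adjust to yours, and mold/clang must be installed):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```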
Forgot to mention, and this is tangentially related to my comments from yesterday:
A paper from 2020 showed that Cranelift was an order of magnitude faster than LLVM, while producing code that was approximately twice as slow on some benchmarks. Cranelift was still slower than the paper's authors' custom copy-and-patch JIT compiler, however.
Cranelift is itself written in Rust, making it possible to use as a benchmark to compare itself to LLVM. A full debug build of Cranelift itself using the Cranelift backend took 29.6 seconds on my computer, compared to 37.5 with LLVM (a reduction in wall-clock time of 20%).
Notes:
- It's easy to gloss over the "order of magnitude" part in the presence of concrete and current numbers mentioned later.
- It's actually "orders of magnitude" faster.
But the numbers only show a 20% speed increase!
The inattentive reader will be left with the impression that Cranelift compiles 20% faster for a 2x slowdown. Some comments below the article confirm that.
What the article author missed (again) is that the biggest Cranelift wins come when used in release/optimized/multi-pass mode. I mention multi-pass because the author should have noticed that the (relatively old) 2020 research paper he linked to tested Cranelift twice, with one mode having the single-pass tag attached to it.
Any Rust user knows that slow builds (sometimes boringly so) are actually release builds. These are the builds where the slowness of LLVM optimizing passes is felt. And these are the builds where Cranelift shines, and is indeed orders of magnitude faster than LLVM.
The fact that Cranelift manages to build non-optimized binaries 20% faster than LLVM is actually impressively good for Cranelift, or impressively bad for LLVM, however you want to look at it.
And that is the problem with researchers/authors with no direct field expertise. They can easily miss some very relevant subtleties, leading readers to draw grossly wrong conclusions.
I read the rest of the article, and it appears to have been partially written before support for codegen backends landed in cargo.
The latest progress report from bjorn3 includes additional details on how to configure Cargo to use the new backend by default, without an elaborate command-line dance.
That "latest progress report" has the relevant info ;)
So, basically, you would add this to the top of Cargo.toml:
cargo-features = ["codegen-backend"]
Then add a custom profile, for example:
[profile.release-dev-cl]
inherits = "release"
lto = "off"
debug = "full"
codegen-backend = "cranelift"
Then build with:
cargo build --profile release-dev-cl
Users can now use Cranelift as the code-generation backend for debug builds of projects written in Rust
Didn't read the rest. But this is clearly inaccurate, as most Rustaceans probably already know.
Cranelift can be used in release builds. The performance is not competitive with LLVM. But some projects are completely useless (too slow) when built with the debug profile. So, some of us use a special release profile where Cranelift backend is used, and debug symbols are not stripped. This way, one can enjoy a quicker edit/compile/debug cycle with usable, if not the best, performance in built binaries.
In case you missed it like me yesterday, a new RFC with an initial experimental implementation (tracking issue) is where things are at now.
And we can add a third word to delegate and forward, which is reuse ;)
I wanted to mention that this reminds me of the old Delegation RFC, which some of us didn't like, exactly because it felt like a problem for proc-macros to solve. It eventually got postponed.
But then, the delegation terminology is used in hereditary, so you're probably aware of all that already ;)
Anyway, other crates like delegate, ambassador and portrait are mentioned in the latest comments below that RFC. I wanted to quickly check how many dependants each one of those have, but https://crates.io can't do that at the moment. Nice error message though!
Unfortunately, I have no time to check what kind of code each one of those crates generates, so I have nothing of value to add.
If you're not familiar with the tracing crate, give the instrument page a read. You may find the #[instrument(err)] part particularly useful.
As for the errors themselves, if you're using thiserror (you should be), then the variants in your error enum would/should contain the relevant context data. And the chain of errors would be tracked via source/#[source] fields, as explained in the crate docs. You can see such chains if you use anyhow in your application code.
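To illustrate the source-chain part, here is roughly what a thiserror variant with a #[source] field expands to, hand-rolled in plain std (ConfigError and its fields are hypothetical names):

```rust
use std::error::Error;
use std::fmt;

// Hand-rolled equivalent of a thiserror variant carrying
// context (path) plus a #[source] field (source).
#[derive(Debug)]
struct ConfigError {
    path: String,
    source: std::io::Error,
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to read config at {}", self.path)
    }
}

impl Error for ConfigError {
    // This is the link that lets callers walk the whole
    // chain of causes via Error::source().
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.source)
    }
}
```

anyhow walks exactly this source() chain when it prints an error with its alternate/Debug formatting.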
iced or slint