Dioxus Labs + “High-level Rust”
BB_C @programming.dev · Posts 10 · Comments 365 · Joined 2 yr. ago
> But why can’t we fight to make Rust better and be that “good enough” tool for the next generation of plasma physicists, biotech researchers, and AI engineers?

Because to best realize and appreciate Rust's added value, one has to be aware of, and hindered by, the problems Rust tries to fix.
Because Rust expects good software engineering practices to be put front and center, while in some fields, they are a footnote at best.
Because the idea of a uni-language (uni-anything, really) is unattainable, not because the blasé egalitarian "best tool for the job" mantra is true, but because "best tool" from a productivity PoV is primarily a question of who's going to use it, not the job itself.
Even if a uni-language was the best at everything, that doesn't mean every person who will theoretically use it will be fit, now or ever, to maximize its potential. If a person is able to do more with an assumed worse tool than he does with a better one, that doesn't necessarily invalidate the assumption, nor is it necessarily the fault of the assumed better tool.
> Rust’s success is not a technical feat, but rather a social one

fighting the urge to close tab
> Projects like Rust-Analyzer, rustfmt, cargo, miri, rustdoc, mdbook, etc are all social byproducts of Rust’s success.

fighting much harder
> LogLog’s post makes it clear we need to start pushing the language forward.

One man's pushing the language forward is another man's pushing the language backward.
> A quick table of contents

Stopped here after all the marketing talk inserted in the middle.
May come back later.
Side Note: I don't know what part of the webshit stack may have caused this, but selecting text (e.g. by triple-clicking on a paragraph) after the page has been loaded for a while is broken for me on Firefox. A lot of errors get printed in the JS console too. Doesn't happen in a Blink-based browser.
> From my experience, when people say “don’t unwrap in production code” they really mean “don’t call panic! in production code.” And that’s a bad take.

What should be a non-absolutist mantra can be bad if applied absolutely. Yes.
> Annotating unreachable branches with a panic is the right thing to do; mucking up your interfaces to propagate errors that can’t actually happen is the wrong thing to do.

What should be a non-absolutist mantra can be bad if applied absolutely.
(DISCLAIMER: I haven't read the post yet.)
> For example, if you know you’re popping from a non-empty vector, unwrap is totally the right too(l) for the job.

That would/should be .expect(). You register your assumption once, at the source level, and at the panic level if the assumption ever gets broken. And it's not necessarily a (local) logical error that may cause this. It could be a logical error somewhere else, or a broken running environment where sound logic is broken by hardware or external system issues.
> If you would be writing comments around your `.unwrap()`s anyway (which you should be), then `.expect()` is a strictly superior choice.

One could say `.unwrap()` was a mistake. It's not even that short of a shortcut (typing-wise). And the maximally lazy could have always written `.expect("")` instead anyway.
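A minimal sketch of the difference (the vector and the invariant here are made up for illustration):

```rust
fn main() {
    let mut samples = vec![1.0_f32, 2.0, 3.0];

    // With .unwrap(), a broken invariant panics with only the generic
    // "called `Option::unwrap()` on a `None` value" message.
    let _last = samples.pop().unwrap();

    // With .expect(), the assumption is registered at the source level,
    // and shows up in the panic message if it ever gets broken.
    let _next = samples
        .pop()
        .expect("samples is non-empty: three elements were pushed above");
}
```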
You should have mentioned OP, @fil.
```rust
/// # Panics
///
/// - if `samples.len()` does not match the `sample_count` passed to [Self::new]
/// - if there are `NaN`s in the sample slice
```
Since this is library code, why not make the function return a Result?
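For illustration, a hedged sketch of what that could look like; the names here (`Analyzer`, `push_samples`, `SampleError`) are hypothetical, not from the crate being quoted:

```rust
/// Hypothetical error type replacing the two documented panics.
#[derive(Debug)]
pub enum SampleError {
    /// `samples.len()` did not match the `sample_count` passed to `new`.
    LengthMismatch { expected: usize, got: usize },
    /// The sample slice contained a `NaN` at this index.
    NanSample { index: usize },
}

pub struct Analyzer {
    sample_count: usize,
}

impl Analyzer {
    pub fn push_samples(&mut self, samples: &[f32]) -> Result<(), SampleError> {
        if samples.len() != self.sample_count {
            return Err(SampleError::LengthMismatch {
                expected: self.sample_count,
                got: samples.len(),
            });
        }
        if let Some(index) = samples.iter().position(|s| s.is_nan()) {
            return Err(SampleError::NanSample { index });
        }
        // ... actual processing would go here ...
        Ok(())
    }
}
```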
I thought it's been a while, but apparently the rate of NOT A CONTRIBUTIONs is up, not down.
> start a process within a specific veth

That sentence doesn't make any sense.
Processes run in network namespaces (netns), and that's exactly what `ip netns exec` does.
A newly created netns via `ip netns add` has no network connectivity at all. Even (private) localhost is down, and you have to run `ip link set lo up` to bring it up.
You use veth pairs to connect a virtual device in a network namespace with a virtual device in the default namespace (or another namespace with internet connectivity).
You route the VPN server address via the netns veth device and nothing else. Then you run WireGuard/OpenVPN inside the netns.
Avoid using systemd, since it runs in the default netns by default, even if called from a process running in another netns.
The way I do it is:

1. A script for all the network setup:

   ```
   ns_con AA
   ```

2. A script to run a process in a netns (basically a wrapper around `ip netns exec`):

   ```
   ns_run AA <cmd>
   ```

3. Run a terminal app using script 2.

4. Run a tmux session on a separate socket inside that terminal app, e.g.:

   ```
   export DISPLAY=:0 # for X11
   export XDG_RUNTIME_DIR=/run/user/1000 # to connect to already running pipewire...
   # double-check this is running in the AA netns
   tmux -f <alternative_config_file_if_needed> -L NS_AA
   ```
I have this in my tmux config:

```
set-option -g status-left "[#{b:socket_path}:#I] "
```
So I always know which socket a tmux session is running on. You can include network info there if you're still not confident in your setup.
Now, I can detach that tmux session. Reattaching with `tmux -L NS_AA attach` from anywhere will give me the session still running in AA.
You don't even need full-fledged containers for that btw.
Learn how to script with `ip netns` and veth.
> What alternative would you suggest?

A rolling-release-first distro (e.g. Arch or Void) with no DE installed.
But you're probably not ready for that.
For me, a terminal and Firefox are the only GUI apps really needed. mpv too if it counts.
But I'm someone who has been running Arch+AwesomeWM for ~15 years (and been using Arch for even longer). So I probably can't meaningfully put myself in new users' shoes.
Is your browser Firefox?
What kind of storage devices do you have? NVMe?
Did you check with tools like iotop to see if something is going on IO wise?
You assumed that the problem is caused by the CPU being utilized at 100%.
This may not be the case.
A lot of us don't run a DE at all. I myself use Awesome WM.
For non-tilers, Openbox with some toolbar would be the ideal setup.
I mention this because we (non-DE users) would have no experience with some funky stuff like a possible KDE indexer running in the background killing IO performance and thrashing buffered/cached memory.
Also, some of us run firefox with eatmydata because we hate fsync 🤨
Neither KDE nor Gnome is the peak Desktop Linux experience.
Ubuntu and its flavors are not the peak distro experience either.
If you want to try Desktop Linux for real, you will need to dip your toes a little bit deeper.
P.S. Since it wasn't mentioned already, look up cgroups.
Back when I had a humble laptop (pre-Rust), using `nice` and co. didn't help much. Custom schedulers come with their own stability and worst-case-scenario baggage. cgroups should give you supported and well-tested tunable kernel-level resource-usage control.
This hasn't been my experience when no swapping is involved (no longer a concern for me with 32GiB of physical RAM and 28GiB of zram).
And I've been Rusting since v1.0, and Linuxing for even longer.
And my setup is boring (and stable), using Arch's LTS kernel, which is built with `CONFIG_HZ=300`. Long gone are the days of running linux-ck.
Although I do use the cranelift backend now day to day, so compiles don't take too long anyway.
Yes, but then the concrete type of None literals becomes unknown, which is what I was trying to point out.
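A minimal illustration of that point (the function is made up):

```rust
fn takes_opt<S: Into<String>>(v: Option<S>) -> Option<String> {
    v.map(Into::into)
}

fn main() {
    // Does not compile: "type annotations needed" — with a bare `None`
    // literal, the compiler has no way to pick a concrete `S`.
    //let a = takes_opt(None);

    // Works once the concrete type is spelled out:
    let b = takes_opt(None::<&str>);
    assert_eq!(b, None);
}
```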
> The macro rules are all used.

Oops. I was looking at it wrong.
> I didn’t make `Option<&str>` an option because the struct is for type `Option<String>`.

Re-read the end of OP's requirements.
* Two of your macro rules are not used 😉 (expand to see which ones).
* This doesn't support `Option<&str>`. If it did, we would lose literal `None` support 😉
A generic impl is impossible.
Imagine you want to turn an `Into<String>` into `Some(val.into())`, and an `Option<impl Into<String>>` into `val.map(Into::into)`.
Now, what if there is a type `T` where `impl From<Option<T>> for String` is implemented?
Then we would have a conflict.
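A sketch of that conflict, assuming a helper trait with the two blanket impls described above (all names are illustrative); rustc rejects this pair up front:

```rust
trait IntoOptionString {
    fn into_option_string(self) -> Option<String>;
}

// Blanket impl for plain values: T -> Some(t.into())
impl<T: Into<String>> IntoOptionString for T {
    fn into_option_string(self) -> Option<String> {
        Some(self.into())
    }
}

// Blanket impl for optional values: Option<T> -> t.map(Into::into)
// rustc refuses to compile this with E0119 ("conflicting implementations"):
// if any `T` ever had `impl From<Option<T>> for String`, then `Option<T>`
// would satisfy `Into<String>` and match both impls, and the error note
// points out that upstream crates may add such an impl in the future.
impl<T: Into<String>> IntoOptionString for Option<T> {
    fn into_option_string(self) -> Option<String> {
        self.map(Into::into)
    }
}
```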
If you only need this for `&str` and `String`, then you can add a wrapper type `OptionStringWrapper(Option<String>)` and implement `From<T> for OptionStringWrapper` for all concrete type cases you want to support, and go from there.
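A sketch of that workaround (the wrapper name is from the comment above; the `takes_name` helper is made up):

```rust
struct OptionStringWrapper(Option<String>);

impl From<&str> for OptionStringWrapper {
    fn from(s: &str) -> Self {
        OptionStringWrapper(Some(s.to_owned()))
    }
}

impl From<String> for OptionStringWrapper {
    fn from(s: String) -> Self {
        OptionStringWrapper(Some(s))
    }
}

impl From<Option<&str>> for OptionStringWrapper {
    fn from(s: Option<&str>) -> Self {
        OptionStringWrapper(s.map(str::to_owned))
    }
}

impl From<Option<String>> for OptionStringWrapper {
    fn from(s: Option<String>) -> Self {
        OptionStringWrapper(s)
    }
}

// A function can then accept &str, String, Option<&str>,
// and Option<String> alike through one parameter.
fn takes_name(name: impl Into<OptionStringWrapper>) -> Option<String> {
    name.into().0
}
```

Note that with both `Option` impls present, a bare `None` at a call site still needs a type annotation (e.g. `takes_name(None::<&str>)`), which is the earlier point about `None` literals.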
> Why Aren't We Embracing IPFS?

Because it's an overhyped joke successfully utilized by crypto scammers.
Neither content addressing nor distributed hash tables (or key-value stores, or whatever) were novel ideas.
The combination of the two is not a novel idea.
For p2p, torrents work, as another user already pointed out (initial release 2001).
For a distributed filesystem, look at Tahoe-LAFS (initial release 2007).
For a full anonymous p2p distributed filesystem, check out (real) Freenet, called Hyphanet now (initial release 2000).
And no, if you need anonymity, an anonymous transport (e.g. using libp2p) is not enough. You need to consider anonymity at each step like Freenet does.
These are three real non-overhyped products one can draw inspiration from. IPFS? Not so much.
You can look around for more examples. I always found this Wikipedia page about file sharing in Japan interesting, since it mentions networks not well known to the rest of the world: https://en.wikipedia.org/wiki/File_sharing_in_Japan
DNS blockers became a thing in part because `/etc/hosts` can't do stuff like glob subdomain blocking, no?
e.g.

```
*.bla.tld 127.0.0.1
```
Don't get angry with me, my friend. We are more in agreement than not re panics (not `.unwrap()`, another comment coming).

Maybe I'm wrong, but I understood 'literally' in 'literally never' in the way young people use it, which doesn't really mean 'literally', and is just used to convey exaggeration.