Posts: 10 · Comments: 364 · Joined: 2 yr. ago

  • /me putting my Rust (post-v1.0 era) historian hat on.

    The list of (language-level) reasons why people liked Rust was already largely covered by the bullet points on the real original Rust website homepage, before some "community" people decided to nuke that website because they didn't like the person who wrote those points (or rather, what that person was "becoming"). They tasked some faultless volunteers who didn't even know much Rust with developing a new website, and then rushed it out. It was ugly. It lacked supposedly important components like internationalization, which the original site had. But what was important to those "community people" (not to be confused with the larger body of people who develop Rust, and/or with Rust) is that the very much technically relevant bullet points were gone. And it was then, and only then, that useless meaningless "empowerment" speak came into the picture.

  • less likely to be insecure

    Evidenced by?

    requires reviewing all source code

    This is exactly the la-la-land view of what distributors do that I was dispelling with facts and reality checks. No one is reviewing all the source code of anything, except in cases where a distro developer and an upstream member are the same person. And even then, that may not be the case, depending on the upstream project, its size, and the distro developer's role within that project.

    to make sure it meets interoperability

    Doesn't mean anything other than "it builds", "the API is not broken" (e.g. within the same .so version), and "it seems to work".

    These concerns hardly exist with the good tooling provided by cargo.

    and open-source standards.

    Doesn't mean anything outside of licensing (for code and assets), and "seems to work".

    Your argument that crates.io is a known organization therefore we should trust the packages distributed is undermined by your acknowledgement that crates.io does not produce any code. Instead we are relying on the individual crate developers, who can be as anonymous as they want.

    Largely correct. But that was me comparing middle-man vs. middle-man. That is, if crates.io operators can even be described as middle-men, since their responsibilities (and consequently, their attack surface) are much smaller.

    Barring organizational attacks from within, with crates.io you have one presumably competent/knowledgeable, possibly anonymous, source, and operators who don't do much. With a binary distro, you have that, AND another "middle-man" source, possibly anonymous, with competence and applicable knowledge <= upstream's (being charitable), yet put in a position to decide what to do with what upstream provides. Or rather, provided... X years ago, if we are talking about the claimed "stable" release channel.

    The middle-man pulls sources from places like crates.io anyway. So, applying trivial "logic"/"maths", it can't be "better" in the context being discussed.

    Software doesn't get depended on out of thin air. Either you are first in line, directly depending on a library, and thus you would naturally at least make the minimum effort to ensure it's minimally "fit for purpose". Or you are an indirect dependant, and thus looking at your direct dependencies, and maybe "trusting" them with the "trickle down".

    More processes, especially automated ones, are always welcome to help catch "stuff" early. But it is no surprise that the "success stories" concern crates with fat ZERO dependants.

    Processes that help dependants share their knowledge about their dependencies (à la cargo vet) are unquestionably good additions. They sure trump the dogmatic blind faith in distros doing something they simply don't have the knowledge or resources to do, or the slightly less dogmatic faith in some library being "trustable" if it's packaged by X or XX distros, assuming at least someone knowledgeable/competent must have given it a thorough look (this has a rough equivalent in the number of dependants anyway).

    This is all obvious, and doesn't take much thought, for anyone active on the inside (upstreams or distros), as opposed to going by the surface "knowledge" that leaks, and possibly gets manipulated, en route to the outside.

  • While it may never be "enough" depending on your requirements (which you didn't specifically and coherently define), the amount of "review", and the required know-how to do it competently, is much higher among your crate dependants than among your distro packagers.

    It's not rare for a distro packager to not know much about the programming language (let alone the specific code) of some packages they package. And it's very rare for a packager to know much about the specific code of what they package (they may or may not have some level of familiarity with a handful of codebases).

    So what you get is someone who pulls source packages (from the interwebs), possibly patching them (and possibly breaking them), compiling them, and giving you the binaries (libs/execs). With source distros, you don't have the compiling and binary-package part. With crates.io, you don't have the middle-man at all. Which is why the comparison was never right from the start. That's the pondering I left you to do on your own two comments ago.

    Almost all sufficiently complex user-space software on your system right now has a lot of dependencies (vendored or packaged); you just don't think of them because they are not in your face, and/or because you are oblivious to the realities of how distros work, and what distro developers/packagers actually do (described above). You can see for yourself with whatever the Debian equivalent is of pactree (from pacman).

    At least with cargo, you can have all your dependencies in their source form one command away from you (cargo vendor), so you can trivially inspect as much as you like/require. The only part that adds unknowns/complexities is crates that use build.rs. But just like unsafe{}, this factor is actually useful, because it tells you where you should look first with the biggest magnifying glass (see the sketch below). And just like cargo itself, the streamlining of the process means there aren't thousands of ways/places in the build process to do something.
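
    To make that concrete, here is a minimal sketch of a build.rs (hypothetical crate layout and generated file name assumed). It shows the typical legitimate pattern, and why these scripts deserve the first look: they run arbitrary code with your user's privileges at build time.

    ```rust
    // build.rs -- a minimal sketch; the generated file name is made up.
    use std::{env, fs, path::Path};

    fn main() {
        // Typical legitimate use: generate code into OUT_DIR at build time.
        let out_dir = env::var("OUT_DIR").expect("cargo sets OUT_DIR");
        fs::write(
            Path::new(&out_dir).join("generated.rs"),
            "pub const BUILT_WITH_BUILD_RS: bool = true;\n",
        )
        .expect("failed to write generated file");

        // Nothing stops this same script from reading files, spawning
        // processes, or talking to the network, which is exactly why
        // build.rs is where the biggest magnifying glass goes first.
        println!("cargo:rerun-if-changed=build.rs");
    }
    ```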

  • Debian (and other "community" distros) is distributed collaboration, not an organization in the sense you're describing. You're trusting a large, scattered number of individuals (some anonymous), infrastructure, and processes. The individuals themselves change all the time. The founder of the whole project is no longer even with us, for example.

    Not only did the processes do nothing to stop shipping the already-mentioned xz backdoor (malicious upstream), but the well-known blasé attitude towards patching upstream code without good reason within some Debian developer circles has actually directly caused Debian-only security holes in the past (if you're young, check this XKCD and the explanation below it). And it just happens that it's the same blasé attitude that ended up causing the xz backdoor to affect PID 1 (systemd) in the first place, while that particular malicious attack wasn't effective/applicable in distros that don't have such an attitude in their "culture" (e.g. Arch).

    On the other hand, other Debian developer(s) were the first to put a lot of effort into making reproducible builds a thing. That was an invaluable contribution.

    So there is good, and there is very much some bad. Overall, Debian is nothing special in the world of "traditional" binary distros. And in any case, it's the stipulation "trusting an organization because it has a long track record of being trustworthy", in the context of Debian, that would be weird.

    (The "stable distro" model of shipping old patched upstreams itself is problematic, but this comment is too long already.)

    crates.io is a 10+ year old upstream-submitted repository of language-specific source packages. It's both not that comparable to a binary distro, and happens to come with no track record of own goals. It can't come with own goals like the "OpenSSL fiasco" in any case, because the source packages ARE the upstreams. It is also not operated by anonymous people, which is the first practical requirement for having some logically-coherent trustworthiness in an individual or a group. Most community distros can't have this as a hard requirement by their very nature, although top developers and infrastructure people tend to be known. But it takes one (intentionally or accidentally) malicious binary packager...

    You don't seem to have a coherent picture of a threat model, or actual specific facts about Debian, or crates.io, or anything really, in mind. Just regurgitations of "crates.io BAD" that have been fed mostly by non-techies to non-techies.

  • So, we established that "pulled in from the interwebs" is not a valid differentiator.

    which has existed for much longer than has crates.io

    True, and irrelevant/invalid (see below). Among the arguments that could be made for <some_distro> packages vs. crates.io, age is not one of them. And that's before we get to the validity of such arguments.

    In this case, it is also an apples-to-oranges comparison, since Debian is a binary distro, and crates.io is a source package repository. Which one is "better", if we were to consider this aspect alone, is left for you to ponder.

    and has had fewer malicious packages get into it.

    The xz backdoor was discovered on a Debian Sid system, my friend. Can you point to such "malicious packages" that actually had valid users/dependants on crates.io?

  • fastrand has zero dependencies.

    And all external dependencies are "pulled from the interwebs" nowadays (in source and/or binary form), irrespective of language. This includes core, alloc, and std, which are crates that came with your compiler, a compiler you also pulled from the interwebs.

  • Not knowing about opt-in telemetry doesn't convey lack of experience, or lack of (relevant) knowledgeability. Especially considering that Arch purposefully keeps its existence low-key to avoid the possibility of pissing anyone off.

    I was already an Arch user when that opt-in telemetry was introduced, and only heard about it because I was relatively active in Arch communities back then (relatively young, relatively new to Arch). If pkgstats had been introduced two years later, I would have never heard of it. Because believe it or not, Arch is just a reliable OS, where you don't have to interact with anything other than reading the odd announcement every other year. It's not a "community", or a "way of life", or anything in that bracket.

  • The premise of the question is wrong, since it assumes a general preference.

    If you're asking 👉 this 👈 Arch user, the answer is "NONE".

    EDIT: The majority of users, especially experienced ones, don't enable pkgstats. So such stats always suffer from some form of self-selection bias (towards users who would use a DE, in this case).

  • They know you can just do if ((age < 18)) in bash, right?

    Or rather if ((10#$age < 18)), because age=021 would not be an adult 😉 Hopefully, they protect against that at least.

    (I had to double-check that this stupid default is still a thing, since I moved to zsh many years ago.)

  • With GPU rendering, you should learn about GPU processing and memory usage too, not that it would matter much for such a use-case.

    nvtop is nice for displaying all that info (it's not nvidia-specific).

    Also, % CPU usage is not a good metric, especially when most people forget to pin CPU frequencies to fixed values before measuring. And heterogeneous architectures (e.g. big.LITTLE) make such numbers meaningless anyway (without additional context). But again, none of this really matters for this use-case.

  • A quick shallow look.

    • Avoid single hard-coded paths. Provide fallbacks. Make them all configurable. Use XDG (properly), etc.
    • Avoid .unwrap() or any other source of panic!() for non-fatal things that can actually fail.
    • Make fields that aren't strictly necessary optional in your model, if that helps.
    • Use .filter_map() and .collect() in your parsing code, instead of all the matches and continues in a for loop. You can use .ok()? to early-return with None on errors (see the sketch after this list).
    • And finally, since you're micro-benchmarking, try speedy or borsh instead of bincode, unless you need the serde compat for some reason.
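
    A minimal sketch of that parsing shape; Entry and the "name = value" line format are hypothetical stand-ins for whatever the original code models:

    ```rust
    // `Entry` and the "name = value" format are hypothetical stand-ins.
    struct Entry {
        name: String,
        value: u32,
    }

    // Returning Option lets `?` and .ok()? early-return None on any failure.
    fn parse_line(line: &str) -> Option<Entry> {
        let (name, value) = line.split_once('=')?;
        let value = value.trim().parse().ok()?; // parse error -> None
        Some(Entry { name: name.trim().to_string(), value })
    }

    fn main() {
        let input = "a = 1\nnot a valid line\nb = 2";
        // filter_map drops the lines that failed to parse, replacing
        // the match/continue-in-a-for-loop pattern.
        let entries: Vec<Entry> = input.lines().filter_map(parse_line).collect();
        assert_eq!(entries.len(), 2);
    }
    ```
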
  • I gave this a quick look at 2X speed with a lot of fast seeking, and my brain still hurts.

    First of all, concerning Rust: please familiarize yourself with the std::mem module and its functions, at least. You didn't even get near a situation where using unsafe{} was actually required (see the sketch below).
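
    For example, a minimal sketch of moving data out from behind a &mut using std::mem, no unsafe{} needed (the function is a hypothetical stand-in):

    ```rust
    use std::mem;

    // mem::take moves the value out and leaves Default::default() behind.
    fn drain(buf: &mut String) -> String {
        mem::take(buf)
    }

    fn main() {
        let mut s = String::from("hello");
        let owned = drain(&mut s);
        assert_eq!(owned, "hello");
        assert!(s.is_empty()); // the original is left valid, just empty

        // mem::replace and mem::swap cover the cases without Default.
        let old = mem::replace(&mut s, String::from("new"));
        assert_eq!(old, "");
    }
    ```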

    Second of all, concerning the task at hand itself: for someone who knew to make the distinction between bytes and chars, you should have known about grapheme clusters too. There are a lot of multi-char (not just multi-byte) graphemes out there. You could make a "Fun With Flags" 😉 segment to show that off (no attribution required). Just don't do anything silly; make sure to utilize the unicode-segmentation crate (see the sketch below).
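
    A minimal sketch of the byte/char/grapheme distinction, assuming unicode-segmentation is added as a dependency:

    ```rust
    // Cargo.toml (assumed): unicode-segmentation = "1"
    use unicode_segmentation::UnicodeSegmentation;

    fn main() {
        // One flag emoji = two regional-indicator chars = eight bytes,
        // but a single grapheme cluster (one user-perceived character).
        let flag = "🇺🇳";
        assert_eq!(flag.len(), 8);                   // bytes
        assert_eq!(flag.chars().count(), 2);         // chars
        assert_eq!(flag.graphemes(true).count(), 1); // grapheme clusters

        // Same story with combining accents: "e" + U+0301.
        let e_acute = "e\u{301}";
        assert_eq!(e_acute.chars().count(), 2);
        assert_eq!(e_acute.graphemes(true).count(), 1);
    }
    ```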

  • sudo is NOT a part of coreutils. Anyone with basic *nix knowledge would have known this.

    sudo-rs, as expected, is also NOT a part of uutils. And the two projects happen to be very different in many aspects. uutils started from scratch as a hobby side-project, and was developed from the start in idiomatic Rust. It can't directly take anything from the GNU implementation anyway, as explained in its README. sudo-rs, however, is a funded effort to translate some C projects into Rust with as little unsafe{} as possible. Some of the code was directly translated from the original implementation. And if you look at the code in general, you will see that it's rather low-level and looks more like C than Rust in many parts. Some of this is arguably necessary given the nature of sudo's functionality, but not all of it.

    Both projects do share the fact that they probably didn't push distros, Ubuntu or anyone else, to switch to either of them by default this early, and both were probably surprised it happened this soon.

    And yes, this exposure, negative as it may seem for now, is an unavoidable "teething" period, and it's going to be of great benefit to both projects in the long run. Hopefully, Ubuntu users living on the edge won't face too much trouble in the meantime.

    (I don't use Ubuntu, but have been using sudo-rs by default for months.)

  • Like you, I'm not well versed in the web**** world (self-censored), but from my observations, Leptos appears to be the most popular (community) Rust/wasm web framework currently. Why? I wouldn't know.

  • I get it – abstractions are cool. They’re supposed to hide complexity so we can focus on cooler stuff. And Rust loves that idea. Traits, generics, lifetimes – layer upon layer of "don’t worry about it honey."

    That's such a fundamental misunderstanding of something so basic, that I almost had to stop reading.

  • Programming @programming.dev

    koto v0.16.0 released (koto is a scripting programming language)

    Programming Circlejerk @programming.dev

    When I found out even Rust needed the clib, it was like seeing an iron-clad fortress only to look closer and see it was being held up by sticks, ducktape, and prayers.

    Programming @programming.dev

    Rust tops a diverse list of implementation languages in projects getting NLnet grants, Python 2nd, C is alive, and C++ is half dead!

    Rust @programming.dev

    Rust tops a diverse list of implementation languages in projects getting NLnet grants, Python 2nd, C is alive, and C++ is half dead!

    Rust @programming.dev

    Koto: a simple and expressive programming language, usable as an extension language for Rust applications, or as a standalone scripting language

    Programming @programming.dev

    Koto: a simple and expressive programming language, usable as an extension language for Rust applications, or as a standalone scripting language

    Rust @programming.dev

    kdl 6.0.0-alpha.1 (first version with a KDL v2 implementation)

    Rust @programming.dev

    COSMIC ALPHA 1 Released (Desktop Environment Written In Rust From System76)

    Rust @programming.dev

    cushy v0.3.0 Released

    Rust @programming.dev

    slint 1.6.0 Released