Zed on Linux is out!
BB_C @programming.dev
It's not you who needs it.
It's for buzzword chasers and cost cutters.
Rust (=> fast and hip)
Shared (=> outsourced)
AI generated (=> robot devs)
Get it?
If NULL was a billion dollar mistake, imagine how many billions it's going to be for AI-generated code.
Yeah, sorry. My comment was maybe too curt.
My thoughts are similar to those shared by @Domi in a top comment. If an API user is expected to be wary enough to check for such a header, then they would also be wary enough to check the response of an endpoint dedicated to communicating such deprecation info, or wary enough to notice API requests being redirected to a path indicating deprecation.
I mentioned Zapier or Clearbit as examples of doing it in what I humbly consider the wrong way, but still a way that doesn't bloat the HTTP standard.
Proper HTTP implementations in proper languages utilize header-name enums for strict checking/matching, and for performance, e.g. by skipping unnecessary string allocations, not keeping known strings around, etc. Every standard header name has to be added as a variant to such enums, and its string representation as a constant/static.
Not sure how you thought that shares equivalency with random JSON field names.
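To make that comparison concrete, here is a minimal sketch; the host, paths, and endpoint below are made up purely for illustration and not taken from any real API.

```bash
# Hypothetical sketch: api.example.com, /v1/widgets, and /v1/deprecations
# are made-up names; only the shape of the two checks matters.

# A client wary enough to watch for a deprecation header on every response...
curl -sI https://api.example.com/v1/widgets | grep -i '^deprecation:'

# ...is wary enough to query a dedicated endpoint communicating the same info.
curl -s https://api.example.com/v1/deprecations
```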
Weak use-case.
Wrong solution (IMHO).
If one must use a header for this, how Zapier or Clearbit do it, as mentioned in appendix A.2, is the way to go.
Bloating HTTP and its implementations for REST-specific use-cases shouldn't be generally accepted.
That's like the most trivial of theories one can test for.
Save response time every minute
```bash
while true; do /usr/bin/time -f "%e `date`" dig '@1.1.1.1' +noall programming.dev &>>/tmp/dns_clf_perf.txt; sleep 60; done
```
Then after a while (maybe a couple of days), check the worst numbers:
```bash
sort -n /tmp/dns_clf_perf.txt | tail
```
Run the same script with a different DNS server at the same time, and compare numbers.
Dates included in case there are any patterns regarding the time of day/week.
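A minimal sketch of that side-by-side run, assuming 9.9.9.9 as an arbitrary second resolver and a separate output file:

```bash
# Run the same loop against a second resolver (9.9.9.9 chosen arbitrarily),
# logging to its own file, in a second terminal.
while true; do /usr/bin/time -f "%e `date`" dig '@9.9.9.9' +noall programming.dev &>>/tmp/dns_quad9_perf.txt; sleep 60; done

# Later, compare the worst response times from each file.
sort -n /tmp/dns_clf_perf.txt | tail
sort -n /tmp/dns_quad9_perf.txt | tail
```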
> making points that aren't even particularly that new.
(putting my Rust historian hat on)
Even the name stdx[1][2] is not original.
It was one of multiple attempts to officially or semi-officially present a curated list of crates. Thankfully, all these attempts failed, as the larger community pushed against them, and more relevantly, as the swarm refused to circle around any of them.
This reminds me of a little-known and long-forgotten demo tool named cargo-esr[1][2]. But it's not the tool itself that is worth a historical mention, rather the events it was supposedly created in response to, namely these blog posts[1][2], and the commotion that followed them[1][2][3][4].
For those who were not around back then, there was an obscure crate named mio, created by an obscure developer named Carl Lerche, that was like the libevent/libuv equivalent for Rust. mio was so obscure I actually knew it existed before Rust even hit v1.0. Carl continued to do more obscure things like tokio, whatever that is.
So, the argument was that there was absolutely no way whatsoever that one could figure out they needed to depend on mio for a good event loop interface. It was totally an insurmountable task!
That was the circus, and "no clown left behind" was the mindset, that gave birth to all these std-extending attempts.
So, let's fast forward a bit. NTPsec didn't actually get (re)written in Go, and ended up being a trimming, hardening, and improving job on the original C impl. The security improvements were a huge success! Just the odd vulnerability here and there. You know, stuff like NULL dereferences, buffer over-reads, out-of-bounds writes, the kind of semantic errors Rust famously doesn't protect from 🙂
To be fair, I'm not aware of any big NTP implementations written in Rust popping up around that time either. But we do finally have the now-funded ntpd-rs effort progressing nicely.
And on the crates objective-metrics front, kornel of lib.rs fame started collecting, and continues to collect, A LOT of them for his service. That said, he and lib.rs are self-admittedly NOT opinion-free.
DISCLAIMER: I didn't even visit OP's link.
Permanently Deleted
> why gcc couldnt do this automatically? i mean its supposed to do this right?
Because gcc is a compiler, not a build tool.
Maybe you come from a language where the two tasks are combined, but that's not the case here.
> and another important issue that clang ls (language server) showed the same error? i thought fixing this would fix that too, but that isnt the case here.
For the same reason stated above, clangd needs to know how you build your code. This is done via a JSON file called compile_commands.json.
In your trivial case, running this should be enough:
```bash
clang -MJ- main.c `pkg-config --cflags --libs dbus-1` > compile_commands.json
```
Permanently Deleted
```
% pkg-config --cflags dbus-1
-I/usr/include/dbus-1.0 -I/usr/lib/dbus-1.0/include
% pkg-config --libs dbus-1
-ldbus-1
```

```bash
gcc main.c `pkg-config --cflags --libs dbus-1`
```
You don't need to link against the library yet, but you will.
Linking might become a separate step when you have multiple files, not just main.
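A sketch of what that separate linking step could look like once there is more than one file; helper.c is a hypothetical second source file:

```bash
# Compile each translation unit to an object file (no linking yet).
gcc -c main.c   `pkg-config --cflags dbus-1`
gcc -c helper.c `pkg-config --cflags dbus-1`   # helper.c is hypothetical

# Link the object files against the library in a separate step.
gcc main.o helper.o `pkg-config --libs dbus-1` -o myprog
```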
> how does that help when I'm searching a non-Rust project via the GitHub web search interface
Fair.
But you are writing a comment under a topic regarding a Rust-flavored IDE, posted to a Rust community.
With neither the IDE nor Rust involved, your quoted problem statement is 100% off-topic.
There is a YouTube video on Servo's homepage.
The first few minutes of that video answer your question.
A reminder that the Servo project resumed active development at the start of 2023, and is making good progress every month.
If you're looking for a serious in-progress effort to create a new open, safe, performant, independent, and fully-featured web engine, that's the one you should be keeping an eye on.
It won't be easy catching up to continuously evolving web standards, but that's the only effort with a chance.
> I for one am happy we're getting an alternative to the Chrome/Firefox duality we're stuck with.
Anyone serious about that would be sending their money towards Servo, which resumed active development at the start of 2023 and is making good progress every month.
I would say nothing but "Good Luck" to other from-scratch efforts, but it's hard not to see them as memes with small cultist followings living on hope and hype.
My post was a showcase of why there is no substitute for knowing your tools properly, and how when you know them properly, you will never have to wait for 5 minutes, let alone 5 years, for anything, because you never used or needed to use an IDE anyway.
This applies universally. No minimum smartness or specialness scores required.
Not sure how what I write is this confusing to you.
- Tests don't necessarily live in paths containing `test`.
- Code in paths containing `test` is not necessarily all tests.
- `cargo expand` gives you options for correctly and coherently expanding Rust code, and doesn't expand tests by default.
- `rg` was half a joke since it's Rust's grep. You can just pipe `cargo expand [OPTIONS] [ITEM]` output to `vim '+set ft=rust' -` or `bat --filename t.rs` and search from there.
What part are you struggling with?
The ripgrep (rg) part, or the cargo expand part?
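For concreteness, a sketch of those pipelines; the item path and search pattern are placeholders:

```bash
# Expand a specific item (placeholder path) and search the generated code.
cargo expand some_module::some_item | rg -n 'pattern_you_care_about'

# Or page through the expanded output with Rust syntax highlighting.
cargo expand some_module | bat -l rust
```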
You two bring shame to the programming community.
Just ripgrep cargo expanded output for f sake.
I thought I saw this weeks ago.
May 21, 2024
yep
Anyway, neovim+rust-analyzer+ra-multiplex is all I need.
> that's not what I'm looking for when I'm looking at a backtrace. I don't mind plain unwraps or assertions without messages.
You're assuming the PoV of a developer in an at least partially controlled environment.
Don't underestimate the power of (preferably specific/unique) text. Text a user (who is more likely to be experiencing a partially broken environment) can put in a search engine after copying or memorizing it. The backtrace itself may be gone by that point, because the user didn't care, or couldn't copy it anyway.
In a technical context, yes. I'm a Rustacean myself.
In a business/marketing context, ...