There’s a difference between programming and software development, after all.
Yes, absolutely, but only because we're the customers.
The art in software design (imo) comes from understanding the problem and creating a clever, efficient and cost-effective solution that is durable and secure. (This hardly ever happens in practice, which is why we're constantly rewriting stuff.) This is good and useful, and in this case Art is Good. The artist has ascended to seeing the whole problem from the beginning and a short path from A to B, not just starting to code and seeing where it goes, as so many of us do.
A human programmer writing "artistic code" is often someone showing off by doing something in an unusual or clever way. In that case, I think boring, non-artistic code is better since it's easier to maintain. Once smarty-pants has gone elsewhere, someone else has to pick up their "art" and try to figure it out. In this case, Art is Bad. Boring is Good. LLMs are good at boring.
So the customer thing - by that I mean, we set the targets. We tell coders (AI or human) what we want, so it's us that judge what's good and if it meets our spec. The difficulty for the coders is not so much writing the code, but understanding the target, and that barrier is one that's mostly our fault. We struggle to tell other humans what we want, let alone machines, which is why development meetings can go on for hours and a lot of time is wasted showing progress for approval. Once the computers are defining the targets, they'll be fixing them before we're even aware. This means a change from the LLM prompt -> answer methodology, and a number of guardrails being removed, but that's going to happen sometime.
At the moment it's all new and we're watching changes carefully. But we'll tire of doing that and get complacent, after all we're only human. Our focus is limited and we're sometimes lazy. We'll relax those guardrails. We'll get AIs to tell other AIs what to do to save ourselves even the work of prompting. We'll let them work in our codebase without checking every line. It'll go wrong, probably spectacularly. But we won't stop using it.
I think it's... not wise to underplay or predict the growth of LLMs and AI. Five years ago we couldn't have predicted their impact on many roles today. In another five years it will be different again.
My point is that you don't know the actual truth. Nor do I. We can't.
Bots and paid agents are not a new technique - in ancient times, countries would send spies undercover into enemy territory to sow discord. To rabble rouse and change public opinion. It's the same now, just the tools have changed. No news source is entirely unbiased, even word of mouth is influenced. The only way you can determine the truth is by seeing it with your own, naked eyes. And even then, your own personal bias can change the context.
Reddit is a platform where it's easy to get the ears of a lot of people, so it's a big target. It's not Reddit's fault, and Lemmy would suffer exactly the same if we had the numbers they do.
What is different now on the world stage, mostly thanks to Trump, is that there's no longer even any pretence to truth. The most powerful person in the world lies constantly, and his example proves that works. No shame, no integrity, no honesty - just lies and crude manipulation.
Every iteration of the major models is better, faster, and has more context. They're improving at an accelerating rate. They're already relied upon to write code for production systems in thousands of companies. Today's reality is already as good as I'm saying. Tomorrow's will be better.
Give it, what, ten or twenty years and the thought of a human being writing computer code will be anachronistic.
But it's wrong to assume those algorithms don't change. They do improve with iterative changes, and they'll continue to become less distinguishable from real intelligence with time. (Clarke's quote about "sufficiently advanced technology being indistinguishable from magic" springs to mind.)
As for my point - writing good code is exactly the sort of task that LLMs will be good at. They're just not always there /yet/. Their context histories are short, their references are still small (in comparison), they're slow compared to what they will be. I'm an old coder and I've known many others, some define their code as art and there is some truth in that, and art is of course something any AI will struggle with, but code doesn't need to be artistic to work well.
There's also the possibility there will be a real milestone and true AI will emerge. That's a scary thought and we've no way of telling if that's close or far away.
I know I'm not reading the room here, but you mentioned "long term" and I think that's an important term.
AI tools will improve, and I'm pretty confident that in the near future one of the things they'll be able to do is solve the tech debt their previous generations caused.
"Hey, ChatGPT 8.0, go fix the fucking mess ChatGPT 5.0 created"... and it will do it. It will understand security and reliability and all the context it needs, and it will work and be good. There is no reason why it won't.
That doesn't help us if things break before that point, of course, so let's keep a copy of the code that we knew worked okay.
Same as everything else in life - like the bits that are useful to you and ignore the rest.
As for doing what you're told at work, who said we had to like it provided it's a reasonable request?
"I'm at a point where I must adapt"
What's wrong with adapting? The one constant in life is that things change. This is a change and you're not the only person who has faced their job changing - at least you still have it. Adapt or go raise goats.
This, along with any other mistake, erodes that trust and will have damaged Sectigo's reputation at least as much as Rustdesk's.
I doubt there's any conspiracy or higher figure at work here. Just human error.
Rustdesk will probably have a claim for financial losses, and good luck to them if they pursue it - the admission of a mistake and a breach of protocol makes a quick settlement likely. The tone of this report suggests that's the direction they're heading, and I suspect Sectigo will pay handsomely to keep this story short-lived.
I was a heavy smoker for 15 years (40+/day). Giving that up was really hard, both emotionally and physically (they don't warn you about the physical withdrawal effects - sweats, hyperactivity, insomnia, nausea etc) and habit breaking is a bastard.
But at least with smoking you can stop completely. It's binary: you're either a smoker or you're not. I've found managing diet to be harder than that.
Quitting outright is easier than not overeating, because you have to eat, and psychologically I've found that harder. Every meal feels like a little failure.
I used Mounjaro this year, which has helped me lose 10 kg, but even that's levelled off. Am also still a fat ass.
It should work fine in your ThinkCentre using the onboard SATA power and data cables. Being an enterprise drive doesn't change this. 7200 rpm makes it a bit noisier, hotter and faster, and it uses a tiny bit more power, but it's a 2.5" drive so it's never going to be that hungry.
If there's no sound/vibration on start, and your BIOS doesn't detect it, it's probably dead.
Vendors do lie about testing second hand drives, but couriers also drop stuff all the time.
Contact the seller, explain the situation and ask for a replacement.
I love Eurovision and have always admired how it's strived not to be political.
But I respect the Netherlands' position here even more. I suspect they are just the first, and this could be a Eurovision to remember for the wrong reasons.
Back when the very first Chinese IP cameras started arriving in the west in the noughties - after we'd only had the first-gen, very expensive professional ones from the likes of Axis - they nearly all shared the same firmware. It had factory-set credentials of admin/admin and a common port (8080, IIRC).
Back then, UPnP was also commonly enabled by default on routers, so the camera would ask for the port to be opened automatically and the router would just do it, letting the internet into the camera. A simple scan of IPs on port 8080 would yield a lot of prompts with the distinctive login page for this firmware, and around 90% of the time the default credentials would still let you in and you could see the camera.
Fortunately, routers have improved: UPnP was recognised as incredibly stupid, so it isn't seen much now and is disabled by default where it does exist. Some IP cameras have improved too, but there are still a lot at the lower end with almost no security, or that prioritise convenience and cloud features first.
(I researched the above when I found one of my company's cameras broadcasting and tried to educate people about it back in the day, but I doubt it did much good)
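For the curious, the kind of default-credential check described above can be sketched in a few lines. This is purely illustrative: the host, port and admin/admin credentials are the assumptions from the story, and the demo runs against a local stub server that mimics the old firmware's Basic-auth login rather than any real device.

```python
# Sketch of probing a host for the old factory credentials (admin/admin)
# over HTTP Basic auth. Demo-only: it targets a local fake "camera".
import base64
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DEFAULT_CREDS = ("admin", "admin")  # the factory defaults from the story


def accepts_default_creds(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an HTTP GET with admin/admin Basic auth succeeds."""
    token = base64.b64encode(":".join(DEFAULT_CREDS).encode()).decode()
    req = urllib.request.Request(
        f"http://{host}:{port}/",
        headers={"Authorization": f"Basic {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:  # 401, refused connection, timeout, etc.
        return False


class FakeCamera(BaseHTTPRequestHandler):
    """Local stub imitating the firmware: 200 for admin/admin, else 401."""

    def do_GET(self):
        token = base64.b64encode(b"admin:admin").decode()
        if self.headers.get("Authorization") == f"Basic {token}":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"camera feed")
        else:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="camera"')
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass


if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), FakeCamera)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(accepts_default_creds("127.0.0.1", server.server_address[1]))
    server.shutdown()
```

The fix, of course, was simply forcing a password change on first login - which most reputable firmware does now.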
Why would he give a shit what people think about him? Other rich people don't, because when you've got enough money you can insulate yourself entirely from what the world thinks.
"You don't know what he's actually responsible for"
Nor do the people judging him so harshly.
"You don't see the pharmaceutical investments he's made"
The fuck? Why would he donate money and save countless lives just to benefit from it via some claimed business link?
Brave of you to hold a nuanced opinion! So many people have a very binary view of others, and Lemmy's the same, as the downvoting shows.
And yes, totally, he was a typical morally corrupt businessman and one of the first tech bros in a time before most of Lemmy was even born. But he's also done a lot of good in the second half of his life. People are dismissive of that but they bloody well shouldn't be.
Who else has contributed $2bn specifically to fight malaria? Nobody. There are quite a few now who could have helped, but nobody else has. The Gates Foundation has also contributed that much again towards fighting tuberculosis and AIDS. These are big numbers and they've had a real effect. Those of us who live comfortable lives, where these diseases aren't everyday killers of friends and family, can't fully appreciate the benefit this work has done.
Does this offset his earlier negative behaviour? I honestly think it might do.
All good points and well argued. Thank you.