Am I the only software engineer greatly worried and disturbed by AI?
  • I think your job, in its current form, is likely in danger.

    SOTA foundation models like GPT-4 and Gemini Ultra can already write, execute, and debug code with special chain-of-thought prompting techniques, and large-scale process verification on synthetic data plus RL search for correct outputs will make this 10x better (rough sketch of the loop below). The silver lining is that I expect this to require an absolute shit ton of compute, constantly generating LLM output hundreds of times per internal prompt across multiple prompts, and possibly taking longer than an ordinary software engineer would. I suspect early full-stack-developer LLMs will mainly be used for a few very tedious coding tasks, and SWEs will stay cheaper for a fair length of time.
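
    A minimal sketch of what that sample-and-verify loop could look like. To be clear, this is my own illustration, not anything from a real system: `call_llm` is a hypothetical stand-in for whatever model API you'd use, and the "verifier" here is nothing fancier than unit tests:

    ```python
    def call_llm(prompt: str) -> str:
        # Hypothetical: sample one chain-of-thought completion at
        # non-zero temperature from GPT-4/Gemini/whatever.
        raise NotImplementedError

    def passing_tests(code: str) -> int:
        """Crude verifier: exec the candidate and count passing asserts."""
        tests = ["assert add(2, 3) == 5", "assert add(-1, 1) == 0"]
        namespace: dict = {}
        try:
            exec(code, namespace)
        except Exception:
            return 0
        passed = 0
        for test in tests:
            try:
                exec(test, namespace)
                passed += 1
            except Exception:
                pass
        return passed

    def best_of_n(task: str, n: int = 100) -> str:
        # n full generations plus n verification runs PER TASK is
        # exactly where the "absolute shit ton of compute" goes.
        prompt = f"Think step by step, then write the code.\n\nTask: {task}"
        candidates = [call_llm(prompt) for _ in range(n)]
        return max(candidates, key=passing_tests)
    ```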

    I expect it will be 2-3 years before this happens, so for that short window I expect workers to be "super-productive" by using LLMs in the coding process. But I expect the crossover point, where the LLM becomes the better option, to come quite soon, perhaps within the next 5 years as compute requirements fall.

  • Microsoft says a Copilot key is coming to keyboards on Windows PCs starting this month
  • I suppose having worked with LLMs a whole bunch over the past year I have a better sense of what I meant by "automate high level tasks".

    I'm talking about an assistant where, say, you need to edit a podcast video to add graphics and cut out dead space or mistakes that you corrected in the recording. You could tell the assistant to do that, and it would open the video in Adobe Premiere Pro, do the necessary work, then ask you to review the result for mistakes.

    Or if you had an issue with a particular device, e.g. your display, the assistant would research the issue and perform the necessary steps to troubleshoot and fix the issue.

    These are currently hypothetical scenarios, but GPT-4 can already perform some of these tasks today, and specifically training it as a desktop assistant on more agentic tasks will make this a reality within a few years.

    LLMs are additionally already useful for reading and editing long documents, and they will only get better on this front. You can already use one to query your documents for summaries, or feed documents in as instructions/research to aid in performing a task (sketch below).
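
    As a rough sketch of that document-querying workflow (again my own illustration; `call_llm` is a hypothetical stand-in for any chat API or local model):

    ```python
    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for any chat API or local model.
        raise NotImplementedError

    def ask_document(path: str, question: str) -> str:
        """Stuff the document into the context and ask about it. For texts
        longer than the context window you'd chunk first, summarize each
        chunk, then ask the question over the combined summaries."""
        text = open(path, encoding="utf-8").read()
        prompt = (
            f"Document:\n{text}\n\n"
            f"Using only the document above, answer: {question}"
        )
        return call_llm(prompt)

    # e.g. ask_document("meeting_notes.txt", "Summarize the action items.")
    ```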

  • Microsoft says a Copilot key is coming to keyboards on Windows PCs starting this month
  • Current LLMs are manifestly different from Cortana (🤢) because they are actually somewhat intelligent. Microsoft's Copilot can do web search and perform basic tasks on the computer, and because of their exclusive contract with OpenAI they're gonna have access to more advanced versions of GPT that can do more high-level control and automation on the desktop. It will 100% be useful for users to have this available, and I expect even Linux desktops will eventually add local LLM support (once consumer compute and the tech mature). It is not just glorified autocomplete; its output correlates fairly well with real human language cognition.

    The main issue for me is that they get all the data you input and mine it for better models without your explicit consent. This isn't an area where open source can catch up without significant capital behind it, so we have to hope Meta, Mistral, and government-funded projects give us what we need for a competitor.

  • What distros have you tried and thought, "Nope, this one's not for me"?
  • Yeah, I think Nix is a good concept, but I feel like 99% of the config work could be handled by the OS itself, with a GUI for everything else. I also feel like flakes should be the default instead of the weird legacy channels setup. I also wish most apps had a sandbox built in, because Nix apps would then rival Flatpak and, if ported to Windows, Nix could become a universal package manager. Overall a good concept, but not there yet.

  • The vast majority of NFTs are now worthless, new report shows
  • NFTs are stupid AF for most of the tasks people currently use them for and definitely shouldn't be used as proof of ownership of physical assets.

    However, I think NFTs make a lot of sense as proof of ownership of purely digital assets, especially those which are scarce.

    For example, there are several projects for domain-name resolution based on NFT ownership (e.g. you look up crypto.eth, your browser checks that the site is signed by the owner of the crypto.eth NFT, and then you are connected to the site). It could replace our current system, which literally has 7 guys holding the private key at the backbone of the DNS system, plus a bunch of registrars you have to go through to get a domain. This won't happen anytime soon, but it is an interesting concept (rough sketch below).
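
    Roughly, the resolution flow would look like this sketch. Both helpers are hypothetical placeholders of mine, since the real projects (ENS and friends) each define their own contracts and record formats:

    ```python
    def nft_owner(name: str) -> str:
        # Hypothetical: registry contract lookup returning the address
        # that currently owns the "crypto.eth" token.
        raise NotImplementedError

    def fetch_signed_content(name: str) -> tuple[bytes, str]:
        # Hypothetical: fetch the site content from wherever the record
        # points (IPFS, a plain server, ...) plus the address recovered
        # from the signature attached to it.
        raise NotImplementedError

    def resolve(name: str) -> bytes:
        """Trust the content only if its signer is the NFT's owner --
        no registrars and no handful of root-key holders in the loop."""
        content, signer = fetch_signed_content(name)
        if signer.lower() != nft_owner(name).lower():
            raise ValueError(f"{name}: content not signed by the name's owner")
        return content
    ```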

    I also think an NFT would work well as a decentralized alternative to something like Google sign-in, where you sign up for a service with the NFT and sign in by proving your ownership of it (sketch after this paragraph).
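
    That sign-in boils down to a simple challenge/response. Here's a sketch using the real eth-account library for signature recovery; `owner_of` and `wallet_sign` are hypothetical stand-ins of mine for an ERC-721 `ownerOf` contract call and the user's wallet:

    ```python
    import secrets

    from eth_account import Account
    from eth_account.messages import encode_defunct

    def owner_of(token_id: int) -> str:
        # Hypothetical: on-chain ownerOf(tokenId) query on the NFT contract.
        raise NotImplementedError

    def login(token_id: int, wallet_sign) -> bool:
        """Server issues a one-time nonce, the user's wallet signs it, and
        the server checks the recovered signer against the NFT's owner."""
        nonce = secrets.token_hex(16)                  # fresh per attempt: no replay
        message = encode_defunct(text=f"login:{nonce}")
        signature = wallet_sign(message)               # happens client-side, in the wallet
        signer = Account.recover_message(message, signature=signature)
        return signer.lower() == owner_of(token_id).lower()
    ```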

    In general, though, I find NFTs a precarious concept. The experience I've had with crypto is that you literally have one seed phrase for your wallet, and if it gets stolen, all your funds are drained. And with an NFT, if you approve the wrong smart contract, all your monkeys could be gone in an instant. There is in general no legal recourse to reverse crypto transactions, and that is frankly the biggest issue with the technology as it stands today.

  • *Permanently Deleted*
  • "I use Signal to hide my data from the US government and big tech"

    "Wait, you seriously still use Reddit? Everyone switched to the Fediverse!"

    "Wow, can't believe you use Apple! Android is so much better."

    No one who isn't terminally online understands what these statements mean. If you want people to use something else, don't make it about privacy; pick something with fancy buttons and cool features that looks close enough to what they already have. They do not care about privacy and are literally of the mindset "if I have nothing to hide, I have nothing to fear". They sleep well at night.

  • Anti-Piracy Lessons Enter the School Curriculum: Are You a Thief?
  • Hello, kids! Pirates are very bad! Never use qBittorrent to download copyrighted material, and certainly do NOT connect it to a VPN to avoid getting caught. Additionally, you should NEVER download illegal material over an HTTPS connection, because it is fully encrypted and you won't get caught!

  • Did anyone try to return to reddit and notice it just didn't do it for you anymore?
  • Reddit has since changed the UI again, which killed my interest in scrolling r/all. I still have to go there to view r/LocalLLaMA, r/singularity, and r/UFOs, none of which have a sizeable Feddit equivalent. I could do without the speculation of the latter two in my life, but I need LocalLLaMA because it's a great source of news and advice on LLMs.

  • Scientists at Fermilab close in on fifth force of nature
  • This is another reminder that the Standard Model prediction for the anomalous magnetic moment of the muon was recalculated by two different groups using higher-precision lattice QCD techniques and found to be consistent with the Brookhaven/Fermilab measurements, which would erase the "discrepancy". More work needs to be done to check for errors in both the original and the newer calculations, but it seems quite likely to me that this will ultimately confirm the Standard Model exactly as we know it, with no new insight and no new force particle.

    My hunch is that unknown particles like dark matter will come from a relatively simple extension of the Standard Model (e.g. supersymmetry, axions, etc.), and that the new physics combining gravity and QM is something completely different from what we are currently working on, unobservable with current colliders or any other experiment on Earth.

    So we will probably keep finding nothing interesting for quite some time, until we can get a large ML model crunching through every possible model to check its fit against the data, and hopefully derive some better insight from there.

    Though I'm not an expert and I'm talking out of my ass so take this all with a grain of salt.

  • which linux distro do you NOT like, and why?
  • Yeah there's no way a viable Linux phone could be made without the ability to run Android apps.

    I think we're probably at least a few years away from being able to daily-drive Linux on modern phones, with things like NFC payments working and a decent native app selection. It's definitely coming, but it has far less momentum than even the Linux desktop does.

  • TIL turn signal neglect is twice as dangerous as distracted driving
  • "Ban cars" is a lot like "defund the police".

    It makes a whole lot of sense when you stop and examine the reasoning, but it compresses the wide-reaching social and infrastructure reforms needed to accomplish it into a short meme that people misinterpret.

  • “AI” Hurts Consumers and Workers -- and Isn’t Intelligent
  • I think this is downplaying what LLMs do. Yeah, they are not the best at doing things in general, but the fact that they were able to learn the structure and semantic context of language is quite impressive, even if they don't know what the words behind the tokens actually mean. I suspect we will be able to use an LLM as one part of a full digital "brain": some model analogous to our own prefrontal cortex calls the LLM (and other parts like a vision model, a sound model, etc.) and uses their output to reason about a task and take an action. That's where I think the hype will be validated: when you put all these parts we've been working on together and Frankenstein a new, actually intelligent system.
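
    To make the architecture concrete, here's a toy sketch of what I mean. Every model function is a hypothetical placeholder of mine:

    ```python
    def vision_model(image) -> str: ...          # e.g. image -> caption
    def sound_model(audio) -> str: ...           # e.g. audio -> transcript
    def language_model(prompt: str) -> str: ...  # the LLM as just one module

    def controller(task: str, image=None, audio=None) -> str:
        """The 'prefrontal cortex' part: pool observations from the
        specialist models, then let the LLM reason over them and pick
        the next action, rather than doing everything end to end."""
        observations = []
        if image is not None:
            observations.append(f"seen: {vision_model(image)}")
        if audio is not None:
            observations.append(f"heard: {sound_model(audio)}")
        prompt = (
            f"Task: {task}\n"
            f"Observations: {'; '.join(observations) or 'none'}\n"
            f"Decide the next action:"
        )
        return language_model(prompt)
    ```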

  • AI model output quality decreases when trained with AI models
  • For the love of God, please stop posting the same story about AI model collapse. This paper has been out since May, has been discussed multiple times, and presents a highly unrealistic scenario.

    Training on the raw internet is known to produce shit model output, which is why humans curate their own high-quality datasets to feed to these models. That is also why we have techniques like fine-tuning, LoRAs, and RLHF, as well as countless curated datasets to feed to models.

    Yes, if a model were for some reason trained on internet output for several iterations, it would collapse and produce garbage. But the current frontier approach is for strong LLMs (e.g. GPT-4) to produce high-quality datasets for new LLMs to train on. This has been shown to work with Phi-1 (really good at writing Python code, trained on high-quality textbook-level content generated with GPT-3.5) and Orca/OpenOrca (a GPT-3.5-level model trained on millions of examples from GPT-4 and GPT-3.5). Additionally, GPT-4 itself has likely been trained on synthetic data, and future iterations will train on more and more of it.

    Notably, by selecting a narrow, high-quality slice of the outputs instead of the whole distribution, we are able to avoid model collapse and in fact produce even better models (sketch below).
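
    A sketch of that curation step, with `generate` and `score` as hypothetical stand-ins (the scorer could be unit tests, a reward model, an LLM judge, etc.):

    ```python
    def generate(prompt: str, n: int) -> list[str]:
        # Hypothetical: n sampled outputs from a strong teacher model (e.g. GPT-4).
        raise NotImplementedError

    def score(sample: str) -> float:
        # Hypothetical quality signal: unit tests, a reward model, an LLM judge...
        raise NotImplementedError

    def curated_dataset(prompts: list[str], n: int = 8, keep: int = 1) -> list[str]:
        """Best-of-n filtering: train the student only on the top slice
        of the teacher's output distribution, not the raw firehose --
        the opposite of the recursive-garbage setup in the collapse paper."""
        data: list[str] = []
        for prompt in prompts:
            best = sorted(generate(prompt, n), key=score, reverse=True)[:keep]
            data.extend(best)
        return data
    ```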

  • Why do people hate Manjaro and how to replicate Manjaro sway in arch or arco?
  • I've never used Manjaro, but the perception I get is that it's a noob-friendly distro with good GUIs and config defaults (good) that then catastrophically fails when you monkey around with updates and the AUR. That's a pain for technical users and a back-to-Windows experience for the very people it targets. Overall, significantly worse than EndeavourOS or plain ol' vanilla Arch Linux.

  • AI is a "tragedy of the commons." We’ve got solutions for that.
  • "We Have No Moat, and Neither Does OpenAI" is the leaked document you're talking about.

    It's a pretty interesting read. Time will tell if it's right, but given the speed at which advancements stack on top of each other in the open source community, I think it could be. If open source figures out scalable distributed training, I think it's Joever for the AI companies.
