
PA
Posts
0
Comments
204
Joined
2 wk. ago

  • Boy am I glad I didn’t go to school (and don’t have to teach) during AI.

    Back in my day you knew people were copying and pasting from Stack Overflow because Python would complain about mixed indentation and there’d be comments in only one function.

    I do feel for the TAs having to read our printed assignments and handwritten code on tests.

  • nat 20

  • Guys, I really didn’t plan for you to try placing the king inside your bag of holding so now you’re criminals and have no hook to continue the plot.

    I need to go home and rewrite the whole first chapter of the story.

  • The new Ryzen AI chips and Apple’s Neural Engine (or whatever it’s called) have great performance per watt and can run strong local models.

    Intel also announced they’re going this route.

    I know soldered memory isn’t popular, but right now the performance and energy benefits are big; you just have to buy the premium models.

    I think NVIDIA will keep doing their massive GPU toaster ovens; Project Digits was supposed to be their low-energy competitor and has been underwhelming.

  • Local AI is decent these days.

    It’s about 6 months behind state-of-the-art frontier models, and 6 months ago those were still really good; they just hadn’t figured out agentic tool calls.

    Qwen3 is supposed to be good at that now.

  • The local model scene is getting really good. I just wish I’d sprung for more RAM and GPU when I bought my M1 MacBook.

    Even then, I can still run 8-12 GB models that are decently good, and I’m looking forward to the new Qwen3 30B to move my tool use local.