Posts
7
Comments
2,239
Joined
2 yr. ago

I dunno

  • Treating a + b/c + d as a + b/(c + d) I can almost understand; I was guilty of doing that in school with multiplication. But auto-parenthesising the first part is a really crazy take, imo
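    A quick sketch (in Python, with arbitrary values) of the three readings of that expression:

```python
a, b, c, d = 1.0, 2.0, 3.0, 4.0

# Standard precedence: division binds tighter than addition
left_to_right = a + b / c + d        # a + (b/c) + d

# The misreading: everything after the slash becomes the divisor
misread = a + b / (c + d)            # a + b/(c + d)

# The "crazy take": auto-parenthesising the first part too
double_misread = (a + b) / (c + d)   # (a + b)/(c + d)

print(left_to_right, misread, double_misread)
```

    With these values the three readings give roughly 5.67, 1.29, and 0.43, so the choice of parenthesisation is anything but cosmetic.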

  • I'm not sure that not having anyone on salary is part of that deal. Or, if you were referring to the part where I said salaries can go negative, that's from experience: a couple of directors of a non-profit I know had to donate their salary and add more on top when times were rough (in that organisation, that was pretty often). Large non-profits probably don't have that issue

  • Now I wonder, because both 999k and 100k are six-figure sums, and I find one of them much more reasonable than the other.

    But yeah, running a non-profit often costs money instead of earning you money, and if they have spare money to pay the CEO a salary, maybe they're doing all right

  • For someone like me, who was a bit lost on the meaning of "framework": a framework here is what you use to build an engine (I thought it sat on top of the engine instead)

    Also, the beginning of the article is a bit messy and the author jumps between thoughts, but it makes for an interesting read, and they even talk about how to actually use AI for benefit instead of for multiplying bugs:

    "It's hard for me to talk about it without sounding like a cult member," Hall said sheepishly, when describing how he uses ChatGPT while working in Brutal. But he and Falanghe agreed—using LLMs has made language-based coding an easier task.

    Not that much easier, to be clear. They both said that when querying an LLM, they rarely copy and paste whatever code it generates. Instead they ask questions about C# libraries or Vulkan documentation, and the software is able to return high-quality answers. Answers that normally require programmers to sit down for hours to pore over documentation or scour ancient forums to find that one post with the solution (which was probably written in 2014).

    "An LLM is essentially tokenizing language, then putting masses of vectors around that to build linkages between those tokens," said Hall. "What could be better than a highly-structured, in fact brutally structured language?" Vulkan and the latest version of C# are very "highly structured, with very clear syntax."

    Developers critical of ChatGPT maker OpenAI should be able to replicate this process on open source models like DeepSeek, Falanghe said (though he hasn't tried this himself).

    This process doesn't work as well with Unity and Unreal because they're both "highly spatial" as a result of their visual scripting tools. A solution for one game's problem may not work with another because of the different scripted elements. LLMs scouring the web can't produce consistent answers.

    It is also the opposite of vibe coding, a method where programmers tell an LLM what they want a system to do and it generates code—and it isn't code completion, where AI tools "predict" what someone is typing and finish the string for them to speed up their workflow. The only thing the LLM does for Brutal developers is speed up access to information, letting them research without watching a 40 minute YouTube video.

    Maybe we will finally see no-vibe solutions, like we saw no-code solutions 🌚

  • I think they have a point about the spec being both enormous and underspecified, and that there should be other ways to store and query relational data.

    But yeah, it looks like some of the points are a bit blown out of proportion. I especially liked those monstrous example queries where the same query computes different results (which really shouldn't be allowed)

  • I'm more interested in how you navigate system menus and the like, or does the DE handle that? I recently tried one Linux distro without a mouse attached, and it was painful because some elements of the system UI aren't accessible in any way

  • This is what should've been in the description, imo

    The new methods developed by DeepSeek (and published in its latest paper) could help to overcome this issue. Instead of storing words as tokens, its system packs written information into image form, almost as if it’s taking a picture of pages from a book. This allows the model to retain nearly the same information while using far fewer tokens, the researchers found.

    Besides using visual tokens instead of just text tokens, the model is built on a type of tiered compression that is not unlike how human memories fade: Older or less critical content is stored in a slightly more blurry form in order to save space. Despite that, the paper’s authors argue, this compressed content can still remain accessible in the background while maintaining a high level of system efficiency.
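    The paper's actual method is far more involved, but the "older memories get blurrier" idea can be sketched as a toy tiered budget, where the tier sizes and halving rule here are my own made-up numbers, not DeepSeek's:

```python
# Toy sketch of age-based tiered compression (NOT the paper's real scheme):
# older entries keep a smaller token budget, i.e. a "blurrier" copy.

def tokens_for(age_in_steps: int, full_tokens: int) -> int:
    """Halve the token budget once per 10 steps of age, capped at 4 tiers."""
    tier = min(age_in_steps // 10, 4)
    return max(full_tokens >> tier, 1)

full = 256  # hypothetical token cost of one page image at full fidelity
for age in (0, 10, 25, 40, 100):
    print(f"age {age:3d}: {tokens_for(age, full)} tokens")
```

    Recent content stays at full fidelity, while anything past the last tier settles at a fixed low-resolution floor, which is roughly the space-saving trade-off the quote describes.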