How do you justify buying something you want but don’t need?
  • I’m lucky enough to be able to budget for things I want. If it’s in the budget, no justification is required. If it’s in the budget but expensive, then I just have to figure out if I want it more than the other things I want (or will want) that I won’t be able to afford as a result.

  • We all know grammar Nazis. What incorrect grammar are you completely in defence of?
  • I hate how much I agree with you in principle and how ugly it looks in practice - with doubled periods, at least; different marks don’t trigger that same reaction. For example, a question mark inside the quotes, followed by a period or comma outside, feels right.

  • We all know grammar Nazis. What incorrect grammar are you completely in defence of?
  • It’s not grammatically incorrect to end a sentence with a preposition. It’s a common misconception that it is a rule, basically because one guy argued in favor of the rule back in the 1600s and it had some support in formal writing in the 1700s. But it’s never been a broad rule, and even in formal contexts it’s not a rule in any current, reputable style or usage guide (so far as I know, at least).

    Some more info on the topic: https://www.merriam-webster.com/grammar/prepositions-ending-a-sentence-with

  • Ass Ads
  • > Glaring doesn't imply a negative meaning. In this case it's used to mean "obvious".

    Unless you’re suggesting that “glaring” means “obviously staring” (it doesn’t - that would be “glaringly staring”), this doesn’t make any sense.

    “[He’s] glaring at [direct object]” is an example of a sentence that uses the present participle form of the verb “glare,” which explicitly communicates anger or fierceness.

    If you’re not convinced, read on.

    —————

    The verb form that takes an object is:

    Glare (verb with object): to express with a glare. They glared their anger at each other.

    The noun form the above definition references is:

    Glare (noun): a fiercely or angrily piercing stare.

    “Glaring” can be an adjective, and one of those definitions does mean “obvious” or “conspicuous,” but that form of the word doesn’t make sense in her sentence. Think about a comparable sentence like “The undercover operative is conspicuous at the bar,” where the bar is the location. (Even then, most people wouldn’t use “glaring” in that sentence, as “conspicuous” or “obvious” are much less ambiguous; “glaring” could mean the operative is staring piercingly or angrily at the bar rather than being conspicuous while at the bar.) Another example that makes a bit more sense is “The effect of the invasive plants is glaring at the park.”

    But for that interpretation to be valid here, you’d have to:

    • believe that the dude is trying to hide/blend in, or otherwise explain how he - not what he’s doing, but the dude himself - is conspicuous
    • believe that the woman’s referring to her own ass as a location
    • assume that she isn’t commenting on how the guy is looking at her ass, even though the joke depends on giving him something different to look at

    That’s a bit of a stretch.

  • How do I manage docker&Traefik behind a reverse proxy not on docker.
  • This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile

    x-logging:
      &default-logging
      options:
        max-size: '10m'
        max-file: '5'
      driver: json-file
    
    services:
      mongo:
        image: mongo:4.4
        volumes:
          - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
        logging: *default-logging
    
      nightscout:
        image: nightscout/cgm-remote-monitor:latest
        container_name: nightscout
        restart: always
        depends_on:
          - mongo
        logging: *default-logging
        ports:
          - 1337:1337
        environment:
          ### Variables for the container
          NODE_ENV: production
          TZ: [removed]
    
          ### Overridden variables for Docker Compose setup
          # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
          # and manage TLS certificates
          INSECURE_USE_HTTP: 'true'
    
          # For all other settings, please refer to the Environment section of the README
          ### Required variables
          # MONGO_CONNECTION - The connection string for your Mongo database.
          # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
          # The default connects to the `mongo` included in this docker-compose file.
          # If you change it, you probably also want to comment out the entire `mongo` service block
          # and `depends_on` block above.
          MONGO_CONNECTION: mongodb://mongo:27017/nightscout
    
          # API_SECRET - A secret passphrase that must be at least 12 characters long.
          API_SECRET: [removed]
    
          ### Features
          # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
          # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
          ENABLE: careportal rawbg iob
    
          # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
          # When readable, anyone can view Nightscout without a token. Setting it to denied will require
          # a token from every visit, using status-only will enable api-secret based login.
          AUTH_DEFAULT_ROLES: denied
    
          # For all other settings, please refer to the Environment section of the README
          # https://github.com/nightscout/cgm-remote-monitor#environment
    
    
  • How do I manage docker&Traefik behind a reverse proxy not on docker.
  • To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,

    services:
      nightscout:
        ports:
          - 3000:3000
    

    You can remove the labels, since those are only used by Traefik, as well as the Traefik service itself.

    Then just point Nginx to that port (e.g., 3000) on your local machine.
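
    For example, a minimal Nginx server block might look something like this (just a sketch - the server_name is a placeholder, TLS is omitted, and you should adjust it to your setup):

    server {
        listen 80;
        server_name nightscout.example.com;  # placeholder hostname

        location / {
            # Forward everything to the Nightscout container published on localhost:3000
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }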

    —————

    Traefik has to know the port, too, but it will auto-detect the port that a local Docker service is running on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
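
    If you ever wanted to make the port explicit, the label would look something like this (assuming Traefik v2 label syntax and a Traefik service named nightscout - adjust to match your actual config):

    services:
      nightscout:
        labels:
          - traefik.http.services.nightscout.loadbalancer.server.port=1337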

  • Arrrrr! Welcome Aboard The Good Ship Matey 🏴‍☠️
  • JustWatch is still useful if you want to act like you watched it legitimately, e.g., if a coworker asks where they can watch it. Even if your coworker also pirates, they might not have an account on your private tracker, Usenet, etc.

    I may be wrong, as I haven’t actually torrented anything substantial since Demonoid was still a thing, but it all feels less accessible than it used to be.

  • Study finds AI tools made open source software developers 19 percent slower
  • Ars points out that these findings contradict those of other experiments and then goes on to postulate as to why. I clicked on the link to the other experiment:

    > when data is combined across three experiments and 4,867 developers, our analysis reveals a 26.08% increase (SE: 10.3%) in completed tasks among developers using the AI tool

    By comparison, this experiment considered 16 developers. That’s 0.3% as many as the experiments its findings contradict. Fortunately, the authors don’t claim their findings are broadly applicable. They even have a table that reads:

    | We do not provide evidence that | Clarification |
    | --- | --- |
    | AI systems do not currently speed up many or most software developers | We do not claim that our developers or repositories represent a majority or plurality of software development work |
    | AI systems do not speed up individuals or groups in domains other than software development | We only study software development |
    | AI systems in the near future will not speed up developers in our exact setting | Progress is difficult to predict, and there has been substantial AI progress over the past five years [2] |
    | There are not ways of using existing AI systems more effectively to achieve positive speedup in our exact setting | Cursor does not sample many tokens from LLMs, it may not use optimal prompting/scaffolding, and domain/repository-specific training/finetuning/few-shot learning could yield positive speedup |

    That said, the study has been an interesting read so far. I highly recommend reading it directly rather than just the news posts about it. Check out their own blog post: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

    I personally find the psychological effect - the devs thought they were 20% faster even afterward - to be pretty interesting, as it suggests that even if more time overall is spent, use of AI could reduce cognitive load and potentially side effects like burnout.

    I’d like to see much larger scale studies set up like this, as well as studies of other real-world situations. For example, how does this affect the amount of time it takes 10,000 different developers to onboard onto an unfamiliar repository?

  • Four Eyes Principle
  • There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research as soon as it’s solved.

    Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”

    Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).

    In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).

    https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI - because in fiction, talking computers were AI.

    The author also quotes Jeff Hawkins’ book On Intelligence:

    > Now we can see the entire picture. Nature first created animals such as reptiles with sophisticated senses and sophisticated but relatively rigid behaviors. It then discovered that by adding a memory system and feeding the sensory stream into it, the animal could remember past experiences. When the animal found itself in the same or a similar situation, the memory would be recalled, leading to a prediction of what was likely to happen next. Thus, intelligence and understanding started as a memory system that fed predictions into the sensory stream. These predictions are the essence of understanding. To know something means that you can make predictions about it. …
    >
    > The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear, and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence.
    >
    > If Searle’s Chinese Room contained a similar memory system that could make predictions about what Chinese characters would appear next and what would happen next in the story, we could say with confidence that the room understood Chinese and understood the story. We can now see where Alan Turing went wrong. Prediction, not behavior, is the proof of intelligence.

    Another reason why LLMs are still considered AI, in my opinion, is that we still don’t understand how they work - and by that, I of course mean that LLMs have emergent capabilities that we don’t understand, not that we don’t understand how the technology itself works.

  • Why is AI so wrong all the time???
  • It may be aware of them, but not in that context. If you asked it how to solve the problem rather than to solve the problem for you, there’s a chance it would suggest you use a reverse image search.

  • Why is AI so wrong all the time???
  • LLM image processing doesn’t work the same way reverse image lookup does.

    Tldr explanation: Multimodal LLMs turn pictures into ~~a thousand~~ 200-500 or so ~~words~~ tokens, but reverse image lookups create perceptual hashes of images and look the hash of your uploaded image up in a database.

    Much longer explanation:

    Multimodal LLMs (technically, LMMs - large multimodal models) use vision transformers to turn images into tokens. They use tokens for words, too, but these tokens don’t also correspond to words. There are multiple ways this could be implemented, but a common approach is to break the image down into a grid, then transform each “patch” of a specific size, e.g., 16x16, into a single token. The patches aren’t transformed individually - the whole image is processed together, in context - but it still comes out of it with basically 200 or so tokens that allow it to respond to the image, the same way it would respond to text.
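
    If it helps to make the patch math concrete, here’s a rough sketch in Python (illustrative only - the 224x224 input, 16x16 patches, and 768-dimensional embeddings are common ViT defaults rather than any particular model’s numbers, and the random projection stands in for learned weights):

    import numpy as np

    def image_to_patch_tokens(image: np.ndarray, patch: int = 16, dim: int = 768) -> np.ndarray:
        """Split an HxWxC image into flattened patches and project each one to a token embedding."""
        h, w, c = image.shape
        assert h % patch == 0 and w % patch == 0, "image must divide evenly into patches"
        patches = (
            image.reshape(h // patch, patch, w // patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)        # group each patch's pixels together
                 .reshape(-1, patch * patch * c)  # one flattened row per patch
        )
        projection = np.random.randn(patch * patch * c, dim) * 0.02  # stand-in for learned weights
        return patches @ projection  # shape: (num_patches, dim)

    tokens = image_to_patch_tokens(np.random.rand(224, 224, 3))
    print(tokens.shape)  # (196, 768) - a 224x224 image becomes 14 * 14 = 196 tokens

    That 196 lines up with the “basically 200 or so tokens” mentioned above.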

    Current vision transformers also struggle with spatial awareness. They embed basic positional data into the tokens, but it’s fragile and unsophisticated. Fortunately, there’s a lot to explore in that area, so I’m sure there will continue to be improvements.

    One example improvement, beyond improved spatial embeddings, would be a dynamic vision transformer that’s dependent on the context, or one that can re-evaluate an image based on new information. Outside the use of vision transformers, simply training LMMs to use other tools on images when appropriate can potentially help with many of LMM image processing’s current shortcomings.

    Given all that, asking an LLM to find the album for you - assuming you’ve given it the ability and permission to search the web - is like showing the image to someone with no context and asking them to find a music video they’ve never seen, by an artist whose appearance they can only describe with 10-20 generic words (none of which are the artist’s name), and hoping the details they remember are specific enough to put it in the top ten results on Google. That’s a convoluted way to say that it’s a hard task.

    By contrast, reverse image lookup basically uses a perceptual hash generated for each image. It’s the tool that should be used for your particular problem, because it’s well suited for it. LLMs were the hammer and this problem was a torx screw.
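
    As a rough sketch of that approach in Python, using the third-party Pillow and imagehash libraries (the filenames are placeholders):

    from PIL import Image
    import imagehash  # pip install imagehash pillow

    # Perceptual hashes are designed so that visually similar images produce similar hashes.
    query_hash = imagehash.phash(Image.open("query.jpg"))
    candidate_hash = imagehash.phash(Image.open("candidate.jpg"))

    # Subtracting two hashes gives the Hamming distance; a small distance means the images
    # are likely near-duplicates. A real reverse image lookup service indexes millions of
    # these hashes in a database and returns the closest matches.
    print(query_hash - candidate_hash)

    That’s the key design difference: unlike LLM image tokens, these hashes are built so near-duplicate images land near each other, which is exactly what a lookup needs.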

    Suggesting you use a reverse image lookup tool - or better, using one itself - is what the LLM should do in this instance. But it would need to have been trained to suggest this, be capable of using a tool that could do the lookup, and have both access and permission to do the lookup.

    Here’s a paper that might help in understanding the gaps between LMMs and models built for a specific purpose: https://arxiv.org/html/2305.07895v7

  • AI coders think they’re 20% faster — but they’re actually 19% slower
  • From the blog post referenced:

    > We do not provide evidence that:
    >
    > AI systems do not currently speed up many or most software developers

    Seems the article should be titled “16 AI coders think they’re 20% faster — but they’re actually 19% slower” - though I guess making us think the finding was statistically relevant was the point.

    That all said, this was genuinely interesting and is in-line with my understanding of the human psychology that’s at play. It would be nice to see this at a wider scale, broken down across different methodologies / toolsets and models.

  • Software subscriptions: you own nothing and you'll be happy
  • Current generation iPad Pros and Airs have the same processing power as Apple Silicon Macs. That’s more than enough for Blender. Even the base iPad and the iPad Mini likely have enough processing power - though I don’t think the base iPad has enough RAM.

  • shittysuperpowers @lemm.ee
    You can make people misinterpret homophones

    This only applies when the homophone is spoken or part of an audible phrase, so written text is safe.

    It doesn’t change reality, just how people interpret something said aloud. You could change “Bare hands” to be interpreted as “Bear hands,” for example, but the person wouldn’t suddenly grow bear hands.

    You can only change the meaning of the homophones.

    It’s not all or nothing. You can change how a phrase is interpreted for everyone, or:

    • You can affect only a specific instance of a phrase - including all recordings of it, if you want - but you need to hear that instance - or a recording of it - to do so. If you hear it live, you can affect everyone else’s interpretation as it’s spoken.
    • You can choose not to affect how it is perceived by people when they say it aloud, and only when they hear it.
    • You can affect only the perception of particular people for a given phrase, but you must either point at them (pictures work) or be able to refer to them with five or fewer words, at least one of which is a homophone - for example, “my aunt.” Note that if you do this, both interpretations of the homophone are affected, if relevant (e.g., “my ant”).
    • You can make it so there’s a random chance (in 5% intervals, from 5% to 95%) that a phrase is misinterpreted.
    Meta trained its AI on almost all public posts since 2007
    www.theverge.com - Meta fed its AI on almost everything you’ve posted publicly since 2007

    Making Facebook and Instagram private won’t delete that data.

    cross-posted from: https://lemmy.world/post/19716272

    > Meta fed its AI on almost everything you’ve posted publicly since 2007

    Video - Palworld Modded with Pokemon

    The video teaser yesterday about this was already DMCAed by Nintendo, so I don’t think this video will be up long.
