  • Therapy Chatbot Tells Recovering Addict to Have a Little Meth as a Treat
  • It's because technological change has reached a staggering pace, but social change, cultural change, and political change can't keep up. They're not designed to handle this pace.

  • Therapy Chatbot Tells Recovering Addict to Have a Little Meth as a Treat
  • Not engagement; that's what social media maximizes. LLMs just maximize what they're trained for, which is increasingly math proofs and user preference. And people like flattery.

  • White House releases health report written by LLM, with hallucinated citations
  • Do you mean when they do a web search, or just when generating text?

  • Please, UI designers
  • finally someone who gets it

  • Please, UI designers
  • If it’s not implemented properly, resources (images, videos, ads) don’t get unloaded when they’re no longer visible.

    Doing this causes its own problems. Try searching on a page that unloads everything out of view. Or try saving it.
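To illustrate the trade-off: a minimal, framework-free sketch (all names hypothetical) of the windowing logic such pages use. Only rows inside the computed range stay mounted, which is exactly why in-page search and "save page" only see the currently loaded slice:

```typescript
// Given a scroll position, compute which fixed-height rows should stay
// mounted. Everything outside [first, last] is unloaded/released.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  rowCount: number,
  overscan = 2 // keep a few extra rows mounted as a scroll buffer
): [number, number] {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return [first, last];
}

// e.g. scrolled 1000px into a 600px viewport of 100px rows:
// only rows 8..18 of the whole list are mounted.
```

Real implementations (virtualized lists) add caching and restore-on-scroll, but the core idea is the same: the DOM never holds the full page.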

  • Is that bad?
  • Half the changes in Linux are also editing random config files, so it's about the same

  • Is that bad?
  • You can. I've had it off for years. It just needs a registry update, and persists across updates.

  • The Daily Wire is now trying to pass off AI slop as actual footage from Palestine.
  • Tbh it doesn't matter; the video doesn't show anything. It's just some random guy and people cheering. It could have been shot in Croatia for all we know.

  • This is Hostile to Business
  • That's not the same. In that case Copilot is also doing a search; they're talking about the model itself.

  • The Windows Subsystem for Linux is now open source
  • It's so annoying, because both are technically grammatically correct, but the current one just sounds like the opposite meaning.

  • Microsoft employee disrupts Satya Nadella’s keynote with ‘Free Palestine’ protest
  • This was several weeks ago

    edit: I guess it happened again

  • First Nestle now AI
  • Water in data centers is cyclic. AI uses a minuscule amount of water compared to most things in our economy.

  • Left rule
  • I remember someone tested this. They lasted like eight hours until some neighbor called a cop

  • A cyberpunk anime girl!
  • An unattractive anime girl? Is that even possible?

  • [InkyRickshaw] An Elephant Never Forgets
  • Dude if your wife hasn't figured it out, you might want to consider if maybe you are just a little stupid

  • Do they just think about the same fuckin' thing forever?
  • Pretty sure everyone does, but they'll walk you through it first, not drop the topic change on you without context.

    Also, it's considered weird and off-topic, so even if they think it, they don't bring it up.

  • Einstein-Landauer culinary units
  • What about lemmings :-)

  • [paper] Evidence of a social evaluation penalty for using AI

    cross-posted from: https://lemmy.ml/post/30013197

    > ### Significance
    >
    > As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.

    > ### Abstract
    >
    > Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.

    1
    [paper] Evidence of a social evaluation penalty for using AI

    cross-posted from: https://lemmy.ml/post/30013147


    0
    [paper] Evidence of a social evaluation penalty for using AI


    10
    Anyone experienced insta-crashing?

    Was working fine this morning for me. No updates.

    But now it keeps crashing and my phone shows popups saying "something went wrong with summit". Clearing the cache and force killing the app didn't help

    0
    I'm tempted to use Discord-esque "black hole" platforms due to AI scraping

    discord is a black hole for information

    Traditional reasoning says you should prefer open forums like Lemmy that are available and searchable on the open web. After all, you're posting to help people, and that helps people the most. The platform (like Reddit) may profit off it, but that's fine; they're providing the platform for you to post. Fair deal.

    Plus, people coming for high-quality information give back to the community and topic. You attract other high-quality contributors, more people partake in the topic you're discussing, and the platform often improves with the revenue, etc. It's not perfect, but it worked.

    AI scrapers break all that. The company profiting is the AI company, and they give nothing back. The model just holds all the information in its weights; it doesn't drive people to the source. Even the platform doesn't benefit from bot scraping. The addition of high-quality data may improve the model on that topic and thus push people to engage with it more, but not much, because of how AIs are trained: while you need some high-quality data, what matters far more, especially for lesser-known topics, is sheer quantity of data.

    So as more of the world moves to AI models, I don't really feel like posting on public forums as much, helping the AI companies get richer, even if I do benefit from AI myself.

    12
    www.msnbc.com Opinion | I held three town halls in GOP districts. I heard one question over and over.

    Republicans are cancelling town halls in their districts. I’m a Democrat and I went to find out what those voters would like to say to their elected officials.

    Opinion | I held three town halls in GOP districts. I heard one question over and over.
    7
    Lemmy Users are willfully ignorant of AI's capabilities

    Other platforms too, but I'm on lemmy. I'm mainly talking about LLMs in this post

    First, let me acknowledge that AI is not perfect; it has limitations, e.g.:

    • tendency to hallucinate responses instead of refusing/saying it doesn't know
    • different models/models sizes with varying capabilities
    • lack of knowledge of recent topics without explicitly searching it
    • tendency to be patternistic/repetitive
    • inability to hold on to too much context at a time, etc.

    The following are also true:

    • People often overhype LLMs without understanding their limitations
    • Many of those people are those with money
    • The term "AI" has been used to label everything under the sun that contains an algorithm of some sort
    • Banana poopy banana (just to make sure ppl are reading this)
    • There have been a number of companies that overpromised on AI, and often were using humans as a "temporary" solution until they figured out the AI, which they never did (hence the gag that "AI" stands for "An Indian")

    But I really don't think they're nearly as bad as most Lemmy users make them out to be. I was going to respond to all the takes, but there are so many that I'll just make some general points:

    • SOTA (State of the Art) models match or beat most humans besides experts in most fields that are measurable
    • I personally find AI is better than me in most fields except ones I know well. So maybe it's only 80-90% there, but it's there in like every single field whereas I am in like 1-2
    • LLMs can also do all this in like 100 languages. You and I can do it in like... 1, with limited performance in a couple others
    • Companies often use smaller/cheaper models in various products (e.g. Google Search), which are understandably much worse. People then use these to conclude that all AI sucks
    • LLMs aren't just memorizing their training data. They can reason, as recent reasoning models show more clearly. Also, we now have near-frontier models that are around 32B parameters, roughly 21 GB in size. You cannot fit the entire internet in 21 GB. There is clearly higher-level synthesizing going on
    • People often seize on superficial questions like the strawberry question (which is essentially an LLM blind spot) to claim LLMs are dumb
    • In the past few years, researchers have had to come up with countless newer harder benchmarks because LLMs kept blowing through previous ones (partial list here: https://r0bk.github.io/killedbyllm/)
    • People and AI are often not compared fairly. With code, for instance, people usually compare a human with compiler feedback, working iteratively and debugging for hours, to an LLM doing it in one go with no feedback beyond maybe a couple of back-and-forths in a chat
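A quick back-of-envelope check on the "you cannot fit the internet in ~21 GB" point. The bits-per-parameter figure below is an assumption on my part, picked to match a roughly 21 GB checkpoint for a 32B model; it isn't from any real model card:

```typescript
// Estimate the on-disk size of a 32B-parameter model checkpoint.
const params = 32e9;        // 32 billion parameters
const bitsPerParam = 5.3;   // assumed effective precision after quantization
const modelGB = (params * bitsPerParam) / 8 / 1e9;

// modelGB works out to ~21 GB -- orders of magnitude smaller than a
// web-scale text corpus, so verbatim storage of the training data is
// physically impossible; the weights must compress and generalize.
console.log(modelGB.toFixed(1) + " GB");
```

Even at full 16-bit precision (~64 GB) the conclusion is the same: the model cannot be a lookup table of its training set.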

    ---

    Also I did say willfully ignorant. This is because you can go and try most models for yourself right now. There are also endless benchmarks constantly being published showing how well they are doing. Benchmarks aren't perfect and are increasingly being gamed, but they are still decent.

    17
    donmoynihan.substack.com Real chilling effects

    An extraordinary pattern of government censorship and threats to speech

    Real chilling effects
    1
    Give JXL a Chance [song]
    1
    fediversereport.com Fediverse Report’s deep research on Deep Research’s fediverse report

    ChatGPT released a new mode, called Deep Research. Tech writer Casey Newton asked Deep Research to write a report about the fediverse. But how good is the quality of the report that ChatGPT puts out? Fediverse Report does some deep research on Deep Research's fediverse report.

    Fediverse Report’s deep research on Deep Research’s fediverse report
    4
    Rep. Ro Khanna (D-Calif.) Turns Republicans' Words Against Them in 'Drain the Swamp' Act Against Lobbyist Gifts: 'Trump Can Fulfill His Promise'
    www.latintimes.com Democratic Lawmaker Turns Republicans' Words Against Them in 'Drain the Swamp' Act Against Lobbyist Gifts: 'Trump Can Fulfill His Promise'

    Rep. Ro Khanna (D-CA) put Republicans on the spot with the Drain the Swamp Act, aimed at banning White House officials from accepting gifts from lobbyists.

    Democratic Lawmaker Turns Republicans' Words Against Them in 'Drain the Swamp' Act Against Lobbyist Gifts: 'Trump Can Fulfill His Promise'

    > Rep. Ro Khanna (D-Calif.) put Republicans on the spot with the introduction of his Drain the Swamp Act, a bill aimed at banning White House officials from accepting gifts from lobbyists and preventing them from becoming lobbyists.

    > The bill directly challenges Trump to uphold his long-standing campaign promise to "drain the swamp" by eliminating government corruption.

    > "President Trump campaigned around the country to 'drain the swamp', yet one of the first things he did was reverse President Biden's executive order that banned White House officials from accepting gifts from lobbyists," Khanna said on the House floor. "I believe that this bill will have support, not just from progressives, not just from independents, but from the MAGA movement."

    > Khanna's move forces Trump-aligned Republicans to either support stricter ethics reforms—aligning with Trump's past rhetoric—or reject the bill, which could be seen as backtracking on promises to clean up Washington.

    > Last month, Sen. Elizabeth Warren (D-Mass.) accused Trump of breaking his promise to "drain the swamp" during his first term in a letter urging him to address "key corruption risks," a likely reference to Elon Musk, who holds a government role while maintaining extensive private business interests.

    > "The American people have seen that, all too often, government officials use their positions to benefit their own pocketbooks," Warren wrote. "Even the appearance of such corruption is enough to damage Americans' trust in government."

    > Khanna's bill is the latest effort from Democrats to test whether Trump and his allies are willing to follow through on anti-corruption rhetoric—or if "draining the swamp" was just a campaign slogan.

    15
    Zig Programming Language @lemm.ee morrowind @lemmy.ml
    Good News! Zig 0.14.0 Delayed
    1
    [OC] U.S. Aviation Fatalities by year

    cross-posted from: https://lemmy.ml/post/26350717

    15
    morrowind morrowind @lemmy.ml

    If you're here, there's still hope for the internet

    Don't let it fall

    Posts 299
    Comments 2.6K
    Moderates