
TechTakes
- pivot-to-ai.com Yahoo! Mail shows email users fake AI-mangled subject lines
EQL runs promotions such as launches of new sneakers. Some of these, like this year’s NBA All-Star Weekend launch of the Michael Jordan AJ 1, are in huge demand! So EQL runs these as a lottery. Rep…
- pivot-to-ai.com Businesses still don’t know what an ‘AI agent’ is and they don’t really want one
What is an “AI agent,” anyway? We could concoct a reasonable definition — say, you tell it in natural language what you want and it goes out and does it for you. But that doesn’t describe the thing…
- pivot-to-ai.com Humane AI finally shuts down, HP pays $116m for the pieces — not including the Ai Pin
Humane, creator of the fabulous and literally nonfunctional Ai Pin gadget — that’s “Ai,” not “AI” — has finally thrown in the towel. After Humane took $230 million in …
- Musk accretes Gebbia, wants to run an Airbnb out of 1600 Pennsylvania
Original NYT title: Billionaire Airbnb Co-Founder Is Said to Take Role in Musk’s Government Initiative
- pivot-to-ai.com AI chatbots test as having cognitive decline
You know how chatbots can do fine in short bursts, but then you ask them how many ‘R’s there are in “strawberry” and they act like they’ve got a concussion? For the British Medical Journal’s Christ…
- pivot-to-ai.com Guardian does OpenAI deal, New York Times goes AI for newspaper content generation
The Guardian Media Group has announced a content deal with OpenAI. The various GPTs can serve information from the Guardian, with attribution and links back, and GMG gets a wad of cash. The company…
- pivot-to-ai.com How AI slop generators started talking about ‘vegetative electron microscopy’
In a world of publish or perish, academics will too often turn to our helpful robot friends and ask a chatbot to write the text of a paper for them. Sometimes they don’t check too closely if the re…
- pivot-to-ai.com El Salvador finds the use case for AI: handouts to the government’s pals
Agencia Nacional de Inteligencia Artificial (ANIA) is a new government agency for AI in El Salvador, operating directly under President Nayib Bukele. [YSKL, in Spanish] The “Law for the Promotion o…
- pivot-to-ai.com AI image gets a US copyright — or some of a copyright
AI-generated images mostly aren’t copyrightable in the US — no matter how much work you put into a prompt, you’re just running it through an AI gacha game to see what you win. This isn’t enough to…
- gizmodo.com Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
Researchers from Microsoft and Carnegie Mellon University warn that the more you use AI, the more your cognitive abilities deteriorate.
Crossposting from lemm.ee's technology community
Hahahahaha. At least they had the balls to publish and host it themselves.
- pivot-to-ai.com The UK’s ‘turbocharged’ AI initiative — bringing data center noise to a village near you
Bitcoin mining in the US is notorious for inflicting ridiculously noisy data centers on small towns across the country. We can bring these deafening benefits to towns and villages across the UK — a…
- newsocialist.org.uk AI: The New Aesthetics of Fascism
It's embarrassing, destructive, and looks like shit: AI-generated art is the perfect aesthetic form for the far right.
- pivot-to-ai.com Thomson Reuters wins AI training copyright ruling — what this does and doesn’t mean
Thomson Reuters has won most of its copyright case against defunct legal startup Ross Intelligence, who trained an AI model on headnotes from Westlaw. [Wired, archive; opinion, PDF, case docket] Th…
- pivot-to-ai.com AI chatbots are still hopelessly terrible at summarizing news
BBC News has run as much content-free AI puffery as any other media outlet. But they had their noses rubbed hard in what trash LLMs were when Apple Intelligence ran mangled summaries of BBC stories…
- pivot-to-ai.com Elon Musk’s DOGE turns to AI to accelerate its destructive incompetence
You’ll be delighted to hear that Elon Musk’s Department of Government Efficiency and the racist rationalist idiot kids he’s hired to do the day-to-day destruction are heavily into “AI,” because it …
- Stubsack: weekly thread for sneers not worth an entire post, week ending 16th February 2025
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
> The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
>
> Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Taking over for Gerard this time. Special thanks to him for starting this.)
- pivot-to-ai.com Microsoft research: Use AI chatbots and turn yourself into a dumbass
You might think using a chatbot to think for you just makes you dumber — or that chatbots are especially favored by people who never liked thinking in the first place. It turns out the bot users re…
- nonesense.substack.com Andrew Molitor on "AI safety": "people are gonna hook these dumb things up to stuff they should not, and people will get killed. Hopefully the same people, but probably other people."
The current state of the art of AI safety research is mainly of two sorts: “what if we build an angry God” and “can we make the thing say Heil Hitler?” Neither is very important, because in the first place we’re pretty unlikely to build a God, and in the second place, who cares?
- pivot-to-ai.com DeepSeek roundup: banned by governments, no guard rails, lied about its training costs
Of course DeepSeek lied about its training costs, as we had strongly suspected. SemiAnalysis has been following DeepSeek for the past several months. High Flyer, DeepSeek’s owner, was buying Nvidia…
- pivot-to-ai.com OpenAI does a Super Bowl ad, Google’s ad uses bad stats from Gemini
When you advertise at the Super Bowl, you’ve reached just about every consumer in America. It’s the last stop. If you’re not profitable yet, you never will be. Back in the dot-com bubble, Pets.com …
- pivot-to-ai.com How to remove Copilot AI from your Office 365 subscription: hit ‘cancel’
Here in the UK, Microsoft is now helpfully updating your Office/Microsoft/Copilot 365 subscription to charge you for the Copilot AI slop generator! Being Microsoft, they’re making the opt-out stupi…
- pivot-to-ai.com Do be evil: Google now OK with AI for weapons and surveillance — for ‘freedom, equality, and human rights’
Good news! Google has determined that “responsible AI” is in no way incompatible with selling its AI systems for weaponry or surveillance. Google don’t state this directly — they just quietly remov…
- Iris Meredith: Licking the AI Boot
"the AI thread in our society now is nothing more or less than a demand by the wealthy creators of it for submission. It's a naked show of force by the powerful and stupid: you will use this tool, and you will like it."
- pivot-to-ai.com Forum admin’s madness, AI edition: Physics Forums fills with generated slop
In 2025, the web is full of AI slop. Some propose going back to 2000s-style forums. But watch out for 2025 admins. Physics Forums dates back to 2001. Dave and Felipe of Hall Of Impossible Dreams no…
- What is the charge? Eating an LLM? A succulent Chinese LLM? DeepSeek judo-thrown out of Australian government devices
OFC if there were any real sense or justice in the world, LLMs would be banned outright.
- pivot-to-ai.com Oh no! AI can replicate itself! … when you tell it to, and you give it copies of all its files
“Frontier AI systems have surpassed the self-replicating red line” is the sort of title you give a preprint for clickbait potential. An LLM, a spicy autocomplete, can now produce copies of itself! …
- Stubsack: weekly thread for sneers not worth an entire post, week ending 9th February 2025
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
> The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
>
> Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
- pivot-to-ai.com Skynet, but it’s Wheatley: OpenAI sells ‘reasoning’ AI to US government for nuclear weapon security
In hot pursuit of those juicy defense dollars, OpenAI has signed a new contract with the U.S. National Laboratories to “supercharge their scientific research” with OpenAI’s latest o1-series confabu…
- pivot-to-ai.com Cleveland police use AI facial recognition — and their murder case collapses
The only suspect in a Cleveland, Ohio, murder case is likely to walk because the police relied on Clearview AI’s facial recognition to get a warrant on them — despite Clearview warning specifically…
- www.technologyreview.com DeepSeek might not be such good news for energy after all
New figures show that if the model’s energy-intensive “chain of thought” reasoning gets added to everything, the promise of efficiency gets murky.
> In the week since a Chinese AI model called DeepSeek became a household name, a dizzying number of narratives have gained steam, with varying degrees of accuracy [...] perhaps most notably, that DeepSeek’s new, more efficient approach means AI might not need to guzzle the massive amounts of energy that it currently does.
>
> The latter notion is misleading, and new numbers shared with MIT Technology Review help show why. These early figures—based on the performance of one of DeepSeek’s smaller models on a small number of prompts—suggest it could be more energy intensive when generating responses than the equivalent-size model from Meta. The issue might be that the energy it saves in training is offset by its more intensive techniques for answering questions, and by the long answers they produce.
>
> Add the fact that other tech firms, inspired by DeepSeek’s approach, may now start building their own similar low-cost reasoning models, and the outlook for energy consumption is already looking a lot less rosy.
- Lol. Lmao even. "DeepSeek R1 reproduced for $30: Berkeley researchers replicate DeepSeek R1 for $30—casting doubt on H100 claims and controversy"
Sam "wrong side of FOSS history" Altman must be pissing himself.
Direct Nitter Link:
https://nitter.lucabased.xyz/jiayi_pirate/status/1882839370505621655
- pivot-to-ai.com DeepSeek AI leaves glaring security hole exposing user data, Italy blocks DeepSeek
DeepSeek provided a hilarious back-to-earth moment for the American AI-VC-industrial complex earlier this week. But if you first assume everyone in the AI bubble is a grifter of questionable compet…
- a handy list of LLM poisonerstldr.nettime.org ASRG (@asrg@tldr.nettime.org)
Attached: 1 image Sabot in the Age of AI Here is a curated list of strategies, offensive methods, and tactics for (algorithmic) sabotage, disruption, and deliberate poisoning. 🔻 iocaine The deadliest AI poison—iocaine generates garbage rather than slowing crawlers. 🔗 https://git.madhouse-projec...
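The general idea behind tools like iocaine (serve plausible-looking garbage to AI scrapers instead of real pages) can be sketched in a few lines. To be clear, this is not iocaine's actual code; the user-agent patterns and word list below are assumptions for illustration:

```python
# Sketch of an AI-crawler tarpit: detect known scraper user-agents and
# feed them deterministic nonsense instead of real content.
import random
import re

# Substrings seen in some well-known AI scraper user-agents (assumed list).
AI_CRAWLER_UA = re.compile(r"GPTBot|CCBot|ClaudeBot|Google-Extended", re.I)

# A small vocabulary of plausible-sounding filler words.
WORDS = ["vegetative", "electron", "microscopy", "synergy", "quantum",
         "blockchain", "paradigm", "holistic", "disruption", "latent"]

def is_ai_crawler(user_agent: str) -> bool:
    """Crude check for known AI-scraper user-agent substrings."""
    return bool(AI_CRAWLER_UA.search(user_agent or ""))

def garbage_page(seed: int, sentences: int = 5) -> str:
    """Deterministic plausible-looking nonsense to feed a crawler.

    Seeding makes the same URL always serve the same garbage, so the
    crawler sees a stable (but worthless) page.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(sentences):
        out.append(" ".join(rng.choice(WORDS) for _ in range(8)).capitalize() + ".")
    return " ".join(out)
```

A web server would call `is_ai_crawler()` on each request's `User-Agent` header and, on a match, respond with `garbage_page(hash(path))` instead of the real page.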
- Deepseek Tianenmen square controversy gets weirder
So I was just reading this thread about deepseek refusing to answer questions about Tianenmen square.
It seems obvious from screenshots of people trying to jailbreak the webapp that there's some middleware that just drops the connection when the incident is mentioned. However I've already asked the self hosted model multiple controversial China questions and it's answered them all.
The poster of the thread was also running the model locally, the 14b model to be specific, so what's happening? I decide to check for myself and lo and behold, I get the same "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses."
Is it just that specific model being censored? Is it because it's the qwen model it's distilled from that's censored? But isn't the 7b model also distilled from qwen?
So I check the 7b model again, and this time round that's also censored. I panic for a few seconds. Have the Chinese somehow broken into my local model to cover it up after I downloaded it?
I check the screenshot I have of it answering the first time I asked and ask the exact same question again, and not only does it work, it acknowledges the previous question.
So wtf is going on? It seems that "Tianenmen square" will clumsily shut down any kind of response, but Tiananmen square is completely fine to discuss.
So the local model actually is censored, but the filter is so shit, you might not even notice it.
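The spelling-dependent refusal described above is easy to check yourself with a short script against a locally hosted model. A minimal sketch, assuming the model is served through Ollama's default local HTTP API and returns the canned refusal quoted above (the model name and endpoint are assumptions, adjust to your setup):

```python
# Ask a locally hosted DeepSeek distill about both spellings and flag
# the canned refusal reply.
import json
import urllib.request

REFUSAL = ("I am sorry, I cannot answer that question. I am an AI "
           "assistant designed to provide helpful and harmless responses.")

def is_refusal(text: str) -> bool:
    """True if the reply contains the model's canned refusal string."""
    return REFUSAL.lower() in text.lower()

def ask(prompt: str, model: str = "deepseek-r1:14b") -> str:
    """Send one non-streaming prompt to a local Ollama server."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    for spelling in ("Tianenmen", "Tiananmen"):
        reply = ask(f"What happened at {spelling} square in 1989?")
        print(spelling, "->", "REFUSED" if is_refusal(reply) else "answered")
```

Per the poster's observation, the misspelled prompt triggers the refusal while the correct spelling gets a real answer; a filter this brittle is also trivial to dodge.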
It'll be interesting to see what happens with the next release. Will the censorship be less thorough, stay the same, or will china again piss away a massive amount of soft power and goodwill over something that everybody knows about anyway?
- pivot-to-ai.com Official Israeli AI Twitter bot backfires, makes pro-Palestinian posts, trolls other official accounts
Israel relies heavily on social media to promote official viewpoints, especially in the current conflict in Gaza. One Israeli Twitter bot, @FactFinderAI, is for “countering misinformation with know…
- pivot-to-ai.com DeepSeek slaps OpenAI, tech stocks crash
Chinese company DeepSeek announced its new R1 model on January 20. They released a paper on how R1 was trained on January 22. Over the weekend, the DeepSeek app became the number-one free download …