Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills

Corporations and politicians: "oh great news everyone... It worked. Time to kick off phase 2..."
that "ow, my balls" reference caught me off-guard
- Handjobs at Starbucks
Well that's just solid policy right there, cum on.
It would wake me up more than coffee that's for sure
Bullet point 3 was my single issue vote
You mean an AI that literally generated text based on applying a mathematical function to input text doesn't do reasoning for me? (/s)
I'm pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.
It's funny because I never get what I want out of AI. I've been thinking this whole time "am I just too dumb to ask the AI to do what I need?" Now I'm beginning to think "am I not dumb enough to find AI tools useful?"
You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn't do otherwise (I'm not a [good] coder), it does not make me worse at critical thinking.
I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.
Like any tool, it's only as good as the person wielding it.
I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.
Legit, being able to say "I want these questions... but not these..." and get them back in a moment's notice really does let me say "FUCK it. Pop quiz. Let's go, class." and be ready with brand new questions on the board that I didn't have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and a great way to ram through writer's block with a "yeah, and—!" machine, if for no other reason than saying "uhh... no, not that, NAI..." and then correcting it my way.
I've spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.
AI/LLMs are great for bouncing ideas off of and for tweaking things. I gave it a prompt on what I was looking for (the guardian of dusk steps out and says: "The dawn brings the warmth of the sun, and awakens the world. So does your trial begin." He is a druid and the party is five level-1 players) and asked it for a stat block and XP amount for this situation.
I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (the item gains 5 levels as the player does specific things to earn leveling points for it).
I also ran a short campaign with it as the DM. It did a great job acting out the different NPCs it created and adjusting to both the tone and situation of the campaign. It adjusted pretty well to what I did, too.
Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70B of DeepSeek, and it definitely doesn't understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 roll used to determine character class, with the classes falling into certain parts of the distribution (see the sketch below). I did it this way since some classes are intended to be rarer than others.
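For reference, the kind of weighted table I mean fits in a few lines of Python (the class names and ranges here are made up, purely to illustrate the rarity buckets):

```python
import random

# Hypothetical d100 class table: rarer classes get narrower slices of 1-100.
CLASS_TABLE = [
    (range(1, 41), "Fighter"),   # 1-40: common
    (range(41, 71), "Rogue"),    # 41-70
    (range(71, 91), "Cleric"),   # 71-90
    (range(91, 101), "Wizard"),  # 91-100: rare
]

def roll_class() -> str:
    roll = random.randint(1, 100)  # 1d100, inclusive on both ends
    for bucket, name in CLASS_TABLE:
        if roll in bucket:
            return name
    raise AssertionError("unreachable: the buckets cover 1-100")

print(roll_class())
```

The models I've tried are fine generating flavor text around a table like this; actually honoring the ranges when "rolling" is where the distilled ones fall over.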
I literally created an iOS app with zero experience and distributed it on the App Store. AI is an amazing tool and will continue to get better. Many people bash the technology but it seems like those people misunderstand it or think it’s all bad.
But I agree that relying on it to think for you is not a good thing.
Let me ask chatgpt what I think about this
Well thank goodness that Microsoft isn't pushing AI on us as hard as it can, via every channel that it can.
Learning how to evade and disable AI is becoming a critical thinking skill unto itself. Feels a bit like how I've had to learn to navigate around advertisements and other intrusive 3rd party interruptions while using online services.
Well, at least they communicate such findings openly and don't try to hide them. Unlike ExxonMobil, which saw global warming coming thanks to internal studies from the 1970s onward and tried to hide or dispute it, because it was bad for business.
No shit.
Of course. Relying on a lighter kills your ability to start a fire without one. It's nothing new.
Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance>
😀 Unlike those others, Microsoft could do something about this, considering they are literally part of the problem.
And yet I doubt Copilot will be going anywhere.
Remember the line: personal computers were "bicycles for the mind"?
I guess with AI and social media it's more like melting your mind or something. I can't find another analogy; "a baseball bat to the leg for the mind" doesn't roll off the tongue.
I know Primeagen has turned off Copilot because he said the "copilot pause" is daunting and affects how he codes.
Cars for the mind.
Cars are killing people.
Really? I just asked ChatGPT and this is what it had to say:
This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.
Not sure if sarcasm..
I agree with the output, for legitimate reasons, but it's not black-and-white right or wrong. I think AI is wildly misjudged, and while there are plenty of valid reasons behind that, I still think there is much to be gained from what AI in general can do for us, both as a whole and individually.
Today I had it analyze 8 medical documents: I told it to provide analysis, cross-reference its output with scientific studies (including sources), and handle other lengthy queries. These documents deal at length with bacterial colonies and multiple GI and bodily systems on a per-document basis. Some of the most advanced testing science offers.
It was able to not only provide me with accurate numbers, which I fact-checked against my documents side by side, but also explain methods to counter multifaceted systemic issues, matching the advice of multiple specialty doctors. Which is fairly impressive, given that seeing a doctor takes 3 to 9 months or longer, and that doctor may or may not give a shit, being overworked and understaffed; pick your reasoning.
I also tried having it scan the documents from multiple fresh, blank chat tabs and even different computers, just to really test it out.
Overall, some of the numbers were off, say 3 or 4 individual colony counts across all 8 documents. I corrected the values, told it that it was incorrect and to reassess, giving it more time and insisting on accuracy, and supplied a bit more context about how to understand the tables. I mean broad context, such as "page 6 shows gene expression; use this as a reference to find all underlying issues," since it isn't a mind reader. It managed to identify the dysbiosis and other systemic issues with reasonable accuracy, on par with physicians I have worked with. On antibiotic resistance gene analysis, it found multiple approaches to therapies against resistant bacteria in a fraction of the time it would take a human to research them.
I would not bet my life solely on its responses, as it's far from perfected, and as always, any info should be cross-referenced and fact-checked through various sources. But those who speak such ill of using it, while they have some valid points, I find largely unfounded. My 2 cents.
Totally agree with you! I'm in a different field but I see it in the same light. Let it get you 80-90% of the way through whatever the task is, then refine from there. It saves you the time that stretch would've taken, which you can spend adding all the extra cool shit. So many people assume you have to take it at 100% face value. Just treat what it gives you as a jumping-off point.
The one thing that I learned when talking to chatGPT or any other AI on a technical subject is you have to ask the AI to cite its sources. Because AIs can absolutely bullshit without knowing it, and asking for the sources is critical to double checking.
I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can't even do basic math or summaries of a PDF and the data contained within.
I was impressed with how good these things are at interpreting human fiction, jokes, writing, and feelings. Which is really weird in the context of our perceptions of what AI would be like; it's the exact opposite. The first AIs aren't emotionless robots; they're whiny, inaccurate, delusional, and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but certainly not worth upending society over. It's still just a huge novelty.
It makes HAL 9000 from 2001: A Space Odyssey seem realistic. In the movie he is a highly technical AI but doesn't understand the implications of what he wants to do. He sees Dave as a detriment to the mission, which can be better accomplished without him... not stopping to think about the implications of what he is doing.
I've found questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage and it couldn't give me any working suggestions or correct information. Stuff I've asked about Docker was much better.
The ability of AI to write things with lots of boilerplate like Kubernetes manifests is astounding. It gets me 90-95% of the way there and saves me about 50% of my development time. I still have to understand the result before deployment because I'm not going to blindly deploy something that AI wrote and it rarely works without modifications, but it definitely cuts my development time significantly.
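To give a sense of how much of a manifest is pure structure, here's a rough sketch of the minimal skeleton a Deployment needs; built with Python and PyYAML only to keep the example in code form, and every name and image below is a placeholder:

```python
import yaml  # PyYAML

# Skeleton of a minimal Kubernetes Deployment manifest. Everything except the
# image, ports, and replica count is effectively boilerplate.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-app"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "example-app"}},
        "template": {
            "metadata": {"labels": {"app": "example-app"}},
            "spec": {
                "containers": [{
                    "name": "example-app",
                    "image": "example/app:1.0",
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```

Multiply that by Services, ConfigMaps, and Ingresses, and the time savings add up.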
Well, it's obvious why, isn't it!?
Microsoft's LLM, whatever its name is, gives sources, or at least it did for me yesterday.
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
I mostly use it for wordy things, like filling out the review forms HR makes us do and writing templates for messages to customers.
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.
I was talking to someone who does software development, and he described his experiments with AI for coding.
He said that he was able to use it successfully and come to a solution that was elegant and appropriate.
However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.
I'm a senior software dev who uses AI to help with my job daily. There are endless tools in the software world, all with their own instructions on how to use them. Often they have issues, and the solutions aren't included in those instructions. It used to be that I had to hunt down any references to the problem I was having through online forums, in the hope that somebody else had figured out how to solve the issue; now I can ask AI, and it generally gives me the answer I'm looking for.
If I had AI when I was still learning core engineering concepts I think shortcutting the learning process could be detrimental but now I just need to know how to get X done specifically with Y this one time and probably never again.
100% this. I generally use AI to help with edge cases in software or languages that I already know well, or for situations where I really don't care to learn the material because I'm never going to touch it again. In my case, for Python or Go, I'll use AI to get me started in the right direction on a problem, then go read the docs to develop my solution. For some weird ugly regex that I just need to fix and never touch again, I just ask AI, test the answer it gives, then play with it until it works (see the sketch below), because I'm never going to remember how to properly use a negative look-behind in regex when I need it again in five years.
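For the curious, this is roughly what I mean; a minimal Python sketch, with toy strings:

```python
import re

# Negative look-behind: match "count" only when it is NOT directly preceded
# by "re", so the "count" inside "recount" is skipped.
pattern = re.compile(r"(?<!re)count")

print(pattern.findall("count the votes, then recount them"))
# -> ['count']
```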
I do think AI could be used to help the learning process, too, if used correctly. That said, it requires the student to be proactive in asking the AI questions about why something works or doesn't, then going to read additional information on the topic.
I feel you, but I've asked it why questions too.
When it was new to me I tried ChatGPT out of curiosity, like with any tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of inputs. "Give me a list of 3 X" led to fluff-filled paragraphs for each. The bastard child of a bad encyclopedia and the annoying kid in school.
I realized I was understanding it wrong, and it was supposed to be understood not as a useful tool, but as close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind people say they use it when writing.
Weren't these assholes just gung-ho about forcing their shitty "AI" chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.
Training those AIs was expensive. It swallowed very large sums of VCs' cash, and they will make it back.
Remember, their money is way more important than your life.
Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.
Is that it?
One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).
Would some people not give a fuck about what it says and just copy & paste unknowingly? Sure, but that happened in my teenage days too, when all the info was spread across many blogs and wikis...
As usual, it is not the AI tool that could fuck up our critical thinking, but ourselves.
I see it exactly the same; I bet you could find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.
Society will handle it sooner or later
It’s going to remove all individuality and turn us into a homogeneous jelly-like society. We all think exactly the same since AI “smoothes out” the edges of extreme thinking.
Copilot told me you're wrong and that I can't play with you anymore.
Vs. textbooks? What's the difference?
Duh?
Buh?
The same could be said about people who search for answers anywhere on the internet, or even the world, and don’t have some level of skepticism about their sources of information.
It’s more like, not having critical thinking skills perpetuates a lack of critical thinking skills.
Yeah, if you repeated this test with the person having access to Stack Exchange or not, you'd see the same results. There's not much difference between someone mindlessly copying an answer from Stack Overflow and copying it from AI. Both lead to more homogeneous answers and lower critical thinking skills.
Copying isn't the same as using your brain to form logical conclusions. Instead you're taking someone else's wild interpretation, research, or study and blindly copying it as fact. That lowers critical thinking because you're not thinking at all. Bad information is always bad, no matter how far it spreads. Incomplete info is no different.
I’d agree that anybody who just takes the first answer offered them by any means as fact would have the same results as this study.
Garbage in, garbage out. Ingesting all that internet blather didn't make the AI much smarter, if at all.
I'm surprised they even published this finding, given how hard they're pushing AI.
That's because they're bragging, not warning.
I use it to write code for me sometimes, saving me from remembering the different syntax and syntactic sugar when I hop between languages. And I use it to answer questions about things I wonder about; it always provides references. So far it's been quite useful. And for all that people bitch and piss and cry giant crocodile tears while gnashing their teeth, I quite enjoy Apple AI. Its summaries have been amazing, even scarily accurate. No, it doesn't mean Siri's good now, but the rest of it is pretty amazing.
Never used it in any practical function. I tested it to see if it was realistic, and I found it extremely wanting. As in, it sounded nothing like the prompts I gave it.
The absolutely galling and frightening part is that the tech companies think this is the next big innovation they should be pursuing and have given up on innovating anywhere else. It was obvious to me when I saw them all pushing AI shit on me in everything from keyboards to search results. I only use voice commands to do simple things, and they work about half the time; AI is built on the back of that, which is why I never use voice commands for anything anymore.
I once asked ChatGPT who I was, and it hallucinated this weird thing about me being a motivational speaker for businesses. I have a very unusual name, and there is only one other person in the U.S. (now the only person in the U.S., since I just emigrated) with my name. And we don't even have the same middle name. Neither of us is a motivational speaker or ever was.
Then I asked it again and it said it had no idea who I was. Which is kind of insulting to my namesake since he won an Emmy award. Sure, it was a technical Emmy, but that's still insulting.
Edit: HAHAHAHA! I just asked it who I was again. It got my biography right... for when I was in my 20s and in college. It says I'm a college student. I'm 47. Also, I dropped out of college. I'm most amused that it called the woman I've been married to since the year 2000, when I was 23, my girlfriend. And yet it mentions a project I worked on in 2012.
Good thing most Americans already don't possess those!
i use my thinking skills to tell the LLM to quit fucking up and try again or I'm gonna fire his ass
Keep it on its toes... ask ChatGPT, then copy-paste the answer and ask Perplexity why that's wrong, and go back and forth... human, AI, human, AI... until you get a satisfactory answer.
i like to say "are you sure you even understand this? do you know what you’re doing or do i need to spell it out for you?!"
Can confirm. I've stopped using my brain at work. More so.
Gemini told me critical thinking wasn't important. So I guess that's ok.
The definition of critical thinking is not relying on only one source. Next up: rain will make you wet. Stay tuned.
I find this very offensive, wait until my chatgpt hears about this! It will have a witty comeback for you just you watch!
Misleading headline: No such thing as "AI". No such thing as people "relying" on it. No objective definition of "critical thinking skills". Just a bunch of meaningless buzzwords.
Why do you think AI doesn't exist? Or that there's "no such thing as people 'relying' on it"? "AI" is commonly used to refer to LLMs right now. Within the context of a gizmodo article summarizing a study on the subject, "AI" does exist. A lack of precision doesn't mean it's not descriptive of a real thing.
Also, I don't personally know anyone who "relies" on generative AI, but I don't see why it couldn't happen.
Do you want the entire article in the headline or something? Go read the article and the journal article that it cites. They expand upon all of those terms.
Also, I'm genuinely curious, what do you mean when you say that there is "No such thing AS "AI""?
I felt it happen in real time, every time. I still use it for questions, but I know I'm about to be unable to think critically for the rest of the day. It's a last resort if I can't find any info online or any response from Discords/forums.
It's still useful for coding, imo. I still have to think critically; it just fills in some tedious stuff.
It was hella useful for research in college, and it made me think more, because it kept giving me useful sources and telling me the context and where to find them. I still did the work, and it actually took longer because I wouldn't commit to topics and kept adding more information. Just don't have it spit out your essay; it sucks at that. Have it spit out topics and info on those topics, with sources, then use that to build your work.
No way!
I've only used it to write cover letters for me. I tried to also use it to write some code but it would just cycle through the same 5 wrong solutions it could think of, telling me "I've fixed the problem now"
It was already soooooo dead out there that I doubt they considered this systematically in the study...
So no real Chinese LLMs... who would have thought... not the Chinese, apparently... and yet they think their "culture" of oppression and stone-like thinking will get them anywhere. The honey badger Xi calls himself an anti-intellectual. This is how I perceive most students from China I get to know. I pity the Chinese kids for the regime they live under.
Well no shit Sherlock.
That's the same company that approved Clippy and the magic wizard.
Unless you suffer from ADHD with object permanence issues, in which case you can go fuck yourself.
Sounds a bit bogus to call this causation. It's much more likely that people who are more gullible in general also believe AI, whatever it says.
This isn't a profound extrapolation. It's akin to saying "Kids who cheat on the exam do worse in practical skills tests than those that read the material and did the homework." Or "kids who watch TV lack the reading skills of kids who read books".
Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.
This isn't predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.
It's just not something particular to AI. If you use any kind of third-party analysis in lieu of personal interrogation, you're going to suffer in your capacity for future inquiry.
All tools can be abused, tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids: "copy the first answer and hope for the best" memes.
After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing, or for a simple implementation example you can extrapolate into your complex code and then confirm against the docs, both enriches the mind and teaches you new techniques for the future.
Good programmers do what I described, bad programmers copy and run without reading. It's just like SO kids.
Seriously, ask AI about anything you have expert knowledge in. It's laughable sometimes... However you need to know, to know it's wrong. At face value, if you have no expertise it sounds entirely plausible, however the details can be shockingly incorrect. Do not trust it implicitly about anything.