
Will Artificial Intelligence make us dumber?

  • Writer: Korca Boom
  • Aug 10
  • 5 min read

Creativity and critical thinking may be affected but there are ways to mitigate the consequences, writes *The Economist*.


As anyone who has ever sat an exam knows, writing an essay in under 20 minutes requires intense concentration.


Having unlimited access to Artificial Intelligence (AI) would significantly ease this mental load.


But, as suggested by a recent study from researchers at the Massachusetts Institute of Technology (MIT), this assistance may come at a cost.


During several essay-writing sessions, students working both with and without ChatGPT’s help were connected to electroencephalograms (EEG) to measure brain activity during the task.


In all cases, AI users showed significantly less activity in brain areas linked to creative functions and focus.


Students who wrote with the help of the chatbot also found it harder to accurately recall any part of the essay they had just written.


These findings are part of a growing body of research examining the potentially harmful effects of using AI on creativity and the learning process.


The research raises important questions about whether the impressive short-term benefits of generative AI might come with a hidden long-term cost.

The MIT study complements the picture painted by two other high-profile pieces of research on the relationship between AI use and critical thinking.


The first, conducted by researchers at Microsoft Research, involved 319 knowledge workers (individuals engaged in tasks requiring analysis, critical thinking, decision-making, creativity, or information processing) who used generative AI at least once a week.


Respondents described over 900 tasks they had completed with AI assistance — from summarizing long documents to creating marketing campaigns.


According to their self-assessments, only 555 of these tasks required critical thinking — for example, reviewing AI-generated results before sending them to a client, or reformulating a prompt after an unsatisfactory initial output.


The rest of the tasks were considered essentially trivial.


Overall, most employees reported needing less — or much less — mental effort to complete tasks when using tools like ChatGPT, Google Gemini, or Microsoft Copilot compared to doing them without AI.


Another study, by Professor Michael Gerlich at SBS Swiss Business School, involved 666 individuals in the UK who were asked how often they used AI and how much they trusted it, before being presented with questions based on a critical thinking test.


Participants who used AI more frequently scored lower across the board.


Gerlich says that after the study was published, he was contacted by hundreds of high school and university teachers who are facing growing AI use among students.


According to him, they “felt that this study described exactly what they are experiencing right now.”


Whether AI will make people mentally weaker in the long term remains an open question.


Researchers in all three studies stress that further research is needed to establish a causal link between high AI use and weakened mental functions.


In Gerlich’s study, for example, it could be that people with stronger critical thinking skills are less inclined to rely on AI.


Meanwhile, the MIT study had a very small sample (just 54 participants) and focused on a single task.


Moreover, generative AI tools are explicitly designed to reduce users’ cognitive load, much like many earlier technologies.


As far back as the 5th century BCE, Socrates complained that writing was “not an elixir for memory, but only for reminding.” Calculators help cashiers with arithmetic.


Navigation apps have removed the need to read maps. Yet few would argue that these tools have made us less capable.


There is little evidence to suggest that allowing machines to take over certain mental functions alters the brain’s innate capacity for thought, says Evan Risko, a psychology professor at the University of Waterloo, who, along with colleague Sam Gilbert, coined the term *cognitive offloading* — shifting mental tasks to external aids.


What is concerning, according to Risko, is that generative AI allows people to “offload a much more complex set of processes.”


Offloading a simple arithmetic task, with limited applications, is not the same as offloading a process like writing or problem-solving. And once the brain learns to do this, it is hard to unlearn.


The tendency to look for the easiest way to solve a problem, known as *cognitive miserliness*, can create what Gerlich calls a reinforcing cycle.


Individuals who rely on AI will find it increasingly difficult to think critically, and their brains will become more “miserly,” leading to more offloading.


One participant in his study, a frequent AI user, said: “I rely so much on AI that I don’t think I’d know how to solve certain problems without it.”


Many companies expect productivity gains from large-scale AI adoption — but there may be unintended consequences.

"The long-term decline in critical thinking is likely to lead to a decline in competitiveness," says Barbara Larson, a management professor at Northeastern University.


Prolonged use of AI may make workers less creative.


In a University of Toronto study, 460 participants were instructed to suggest creative uses for everyday objects, such as a car tire or a pair of pants.


Those who had first seen AI-generated ideas gave responses that were less creative and less diverse than those who worked without assistance.


For example, for the pants, the chatbot suggested stuffing them with straw to make half of a scarecrow, essentially still using them as pants.


Meanwhile, one participant without AI help suggested putting nuts in the pockets to make a special bird feeder.


There are ways to keep the brain in shape. Larson suggests that the smartest way to benefit from AI is to treat it as “an enthusiastic but somewhat naïve assistant.”


Gerlich recommends that instead of asking the chatbot to instantly produce the final result, the user should guide it step by step toward the solution.


Instead of asking, “Where should I go for a sunny vacation?”, you might start with, “Where does it rain the least?” and develop from there.


The Microsoft team has also tested AI assistants that occasionally intervene with “prompts” to encourage deeper thinking.


In a similar spirit, researchers from Emory University and Stanford have proposed that chatbots be reprogrammed to act as “thinking assistants” that ask deep questions, rather than simply giving answers. Socrates might well have approved of this.


However, these strategies may not prove very effective in practice, not least because they would make AI tools slower or less accommodating to use.


They may even have unwanted consequences. A study from Abilene Christian University in Texas found that AI assistants that constantly interrupted users with prompts worsened the performance of weaker programmers on a simple coding task.


Other possible measures to keep the brain active are more straightforward, though more heavy-handed.


Overzealous AI users could be required to answer a question themselves, or to wait a few minutes, before being given access to the AI.


This “cognitive forcing” could improve user performance, according to Zana Buçinca, a Microsoft researcher working on such techniques — but it would be less popular.


“People don’t like to be pushed to engage,” she says. The demand for ways to bypass such restrictions would be high.


In a representative survey of 16 countries conducted by consultancy Oliver Wyman, 47% of respondents said they would use generative AI even if their employer banned it.


The technology is so new that for many tasks, the human brain is still the sharpest tool to use.


But over time, AI users and its regulators will need to assess whether its broad benefits are worth the mental costs.


If strong evidence emerges that AI makes people less sharp, will they even care?


