Shockingly, using a machine to do your thinking for you leads to your not thinking for yourself. That's according to a new study from the Massachusetts Institute of Technology:
Let's not stop to consider how much money MIT probably spent on the study that resulted in one of the most common-sense findings ever. Instead, let's look at the details of what they found.
Here's more from The Hill:
Researchers at MIT's Media Lab asked subjects to write several SAT essays and separated subjects into three groups: using OpenAI's ChatGPT, using Google's search engine, and using nothing, which they called the 'brain-only' group. Each subject's brain was monitored through electroencephalography (EEG), which measured the writer's brain activity through multiple regions in the brain.
Ah, the "brain only" group. The old-fashioned way, when human beings used to, you know, think and stuff.
Over the course of the study, the findings (though somewhat expected) were pretty striking.
They discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels,' according to the study.
The study found that the ChatGPT group initially used the large language model, or LLM, to ask structural questions for their essay, but near the end of the study, they were more likely to copy and paste their essay.
Those who used Google's search engine were found to have moderate brain engagement, but the 'brain-only' group showed the 'strongest, wide-ranging networks.'
The study's findings don't inspire much confidence in the future, especially if the education system insists on introducing AI tools to students at an early age.

Then again, what if the researchers at MIT used ChatGPT to write the study?