
Why AI Use Reduces New Skill Formation

Abstract digital visualization of artificial intelligence coding with glowing neural network connections on dark background

Fifteen years ago, if you wanted to learn coding, you opened a blank text file and struggled through every error message yourself. Today, you paste a prompt into an AI tool and get working code in seconds. That convenience comes with a hidden cost researchers are just starting to measure: you may not be building the skills you think you are learning.

What Cognitive Offloading Actually Means

Psychologists have a term for relying on external tools to handle mental work. They call it cognitive offloading. You do this every day without thinking about it. You offload memory to your phone's contacts app. You offload navigation to Google Maps. You offload math to a calculator. In each case, you trade a mental skill for speed and convenience.

The trade-off usually feels worth it. Nobody seriously argues we should all memorize phone numbers again. But the balance shifts when the skill you are offloading is one you are actively trying to learn. That is exactly what happens when people use AI tools to pick up new abilities like coding, writing, or data analysis.

A new study by Judy Hanwen Shen and Alex Tamkin, conducted as part of the Anthropic Fellows Program, put this problem under a microscope. Published as a preprint on arXiv, the study asked a straightforward question: when someone uses AI assistance while learning a new skill, do they form that skill the same way as someone who learns without AI?

The answer is no.

The Evidence: How AI Assistance Changes Skill Formation

Shen and Tamkin recruited 52 professional and freelance programmers and split them into two groups. Both groups had to learn a new asynchronous programming library and complete a series of coding tasks. First, everyone did a warm-up task without AI. Then came the main task: the treatment group could use AI, while the control group coded on their own. Afterward, both groups took a quiz testing how well they understood the library they had just worked with.
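To give a flavor of the kind of material participants had to learn: asynchronous programming lets a program start several slow operations and wait for all of them together instead of one at a time. The study's specific library is not named here; the sketch below uses Python's standard asyncio module purely as an illustration, with made-up task names.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for a slow operation such as a network call
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> list[str]:
    # gather() runs the coroutines concurrently, so total time is
    # roughly the longest delay rather than the sum of all delays.
    # Results come back in the order the coroutines were passed in.
    return await asyncio.gather(
        fetch("users", 0.05),
        fetch("posts", 0.03),
        fetch("tags", 0.01),
    )

results = asyncio.run(main())
print(results)  # ['users: done', 'posts: done', 'tags: done']
```

Even this tiny example involves concepts a learner must internalize, like why results arrive in call order rather than completion order. That is exactly the kind of understanding the post-task quiz probed.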

The results were striking. The AI-assisted group was not faster than the control group. But they performed much worse on the quiz. Their average score was 17% lower, a statistically significant gap. Subgroup analysis showed this effect held across beginner, intermediate, and expert programmers alike. Experience level did not protect anyone from the offloading penalty.

Participants who fully delegated code generation to AI scored particularly badly. Those who asked the AI to generate code and also explain what it produced fared better. The distinction matters: it suggests that how you interact with AI, not just whether you use it, determines whether learning happens.

Psychology Today covered the study and summarized the finding bluntly: AI use significantly reduces new skill formation. The mechanism is not mysterious. When you write code yourself, you confront every syntax error, every logic flaw, every gap in your understanding. That struggle is not a bug in the learning process. It is the learning process.

Why Struggle Is Not the Enemy

Frontiers in Psychology published a related analysis exploring whether AI creates cognitive offloading or cognitive overload when people use it to cope with demanding tasks. The distinction matters. Offloading feels like relief in the moment. You skip the hard part. But overload, ironically, is what forces your brain to adapt and grow.

Think about weightlifting. If someone else lifts the weight for you, your muscles never get stronger. The resistance is the point. Cognitive skill formation works the same way. When AI removes the cognitive resistance, it also removes the trigger for neural adaptation.

The SendFull podcast, hosted by cognitive scientist Stef Hutka, explored this dynamic in an episode titled 'Cognitive Offloading to AI: The Peril and the Promise.' Hutka pointed out that the promise of AI is clear: faster output, fewer errors, less frustration. The peril is less visible but more important over time. You slowly lose the capacity to do the work without the tool. Hutka noted that a separate Anthropic study found educators were 2.5 to 3 times more likely than average workers to report witnessing cognitive atrophy firsthand, describing it as over-reliance causing skill loss, intellectual passivity, and critical thinking decline.

The Broader Pattern: Beyond Coding

Coding is the easiest domain to study because the output is verifiable. You can measure whether code runs correctly. But the cognitive offloading problem does not stop at programming.

Consider writing. If you use AI to draft every email, report, and presentation, you are offloading the skill of organizing thoughts into coherent prose. At first, you might tell yourself you are just saving time on routine tasks. But over months, you may notice that starting a blank document feels harder than it used to. Your ability to structure an argument from scratch has eroded because you stopped exercising it.

The Tougher Minds platform highlighted this concern in their coverage of the Shen and Tamkin study, noting that cognitive offloading to AI creates a risk of cognitive atrophy where underused mental skills deteriorate over time. The term is not meant to be alarmist. It describes a well-documented principle in neuroscience: neural pathways that are not regularly activated become weaker.

NewsBreak also reported on the findings, emphasizing that while AI can speed up certain tasks, its impact on genuine learning is fundamentally different from its impact on task completion. Completing a task is not the same as acquiring a skill. This distinction gets blurred when the AI is doing the heavy cognitive lifting and you are just reviewing the output.

So what does this actually mean for people who want to keep learning in an AI-saturated world?

What You Can Actually Do About It

The research does not say AI is harmful in all contexts. If you already know how to code and you use AI to speed up boilerplate work, that is efficient offloading, not skill erosion. The problem is specific to the learning phase, when the goal is to build a new capability from scratch.

Shen and Tamkin identified six distinct AI interaction patterns in their study, three of which involved genuine cognitive engagement and preserved learning outcomes even when participants received AI assistance. This finding is actually encouraging. It means the tool itself is not the problem. How you use it is.

There are practical strategies that align with what the research suggests. First, use AI as a hint, not a solution. Instead of asking an AI to write the code, ask it to point out where your logic might be flawed or to explain an approach. You stay in the driver's seat. The cognitive resistance stays intact.
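Here is a concrete, hypothetical illustration of the difference between the two prompting styles, using a deliberately buggy function (the code and prompts are ours, not from the study):

```python
# You wrote this and notice the results are too high:
def average(scores):
    total = 0
    for s in scores:
        total += s
    return total / (len(scores) - 1)  # bug: off-by-one in the divisor

# Solution-style prompt (heavy offloading):
#   "Rewrite this function so it works."
# Hint-style prompt (keeps you engaged):
#   "My average() returns values that are too high.
#    Where might the logic be flawed?"
# An AI that points at the divisor lets YOU make the fix yourself:
def average_fixed(scores):
    return sum(scores) / len(scores)

print(average([80, 90, 100]))        # 135.0 -- the symptom
print(average_fixed([80, 90, 100]))  # 90.0  -- your own repair
```

The fix is trivial once you see it, and having located the flaw yourself, you are far more likely to remember the pattern next time.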

Second, alternate between assisted and unassisted practice. Spend part of your learning time with AI tools available, and part without. The unassisted sessions are where actual skill consolidation happens. They feel uncomfortable. That discomfort is the signal that your brain is working.

Third, test yourself without the tool. After using AI assistance on a task, close the tool and try to reproduce the work from memory. If you cannot, the AI did the learning, not you. This simple self-check can reveal gaps that feel invisible while the AI is active.

The Qoshe aggregator summarized the core tension well in its coverage: AI can accelerate some tasks, but its effect on learning new skills is not well understood, and early evidence suggests the effect is not positive. That uncertainty should make you cautious, not fearful. You do not need to abandon AI tools. You need to be deliberate about when and how you use them.

The Long-Term Question We Are Not Asking Yet

There is a deeper layer to this conversation that most coverage has not touched. If an entire generation of professionals learns skills through AI-assisted practice, what happens to collective expertise in fields like software engineering, medicine, law, and scientific research?

Right now, we have experienced practitioners who built their skills before AI existed. They can evaluate AI output because they have deep internal models of their domains. But in ten or twenty years, many senior professionals will have learned with AI as a constant companion. Their expertise may be fundamentally different in ways we cannot yet predict.

Shen and Tamkin's study is an early data point in what will likely become a massive research area. It is worth noting that the paper has not yet undergone peer review, so the results should be considered preliminary. We need more studies across different domains, different age groups, and different types of AI tools. We need longitudinal research that tracks skill retention over months and years, not just hours.

What we know right now is enough to act on. Using AI to skip the hard parts of learning does not make you faster at learning. It makes you slower at forming the skill, even if it feels effortless in the moment.

The next time you sit down to learn something new and an AI tool is open in your browser, ask yourself a simple question: am I using this tool to learn, or am I using it to avoid learning? The honest answer might change how you approach the next hour of your life.
