Learning Has Not Become Obsolete in the Age of AI: The Focus Has Shifted
2026-03-12
Lately, I have been using AI almost every day to deal with problems at work. There are things I used to know very little about, yet now, as long as I can describe the task clearly, AI can often give me a workable starting point within minutes. It has dramatically shortened the distance between "I do not know how" and "at least I can make a first version." Experiences like that make it easy to feel that many old barriers are rapidly losing their importance.
But the more I use it, the more I notice that something else is not being supplied along with that speed. A task becoming easier to start does not mean my understanding of it deepens at the same pace. A result appearing quickly does not mean I am suddenly better at judging whether it is good. It was through these very ordinary experiences that I slowly realized AI may not be changing whether people still need to learn. It is changing what, exactly, they need to learn. And the best place to begin is with this tension itself: AI lowers the barrier, but it does not build judgment for us.
AI lowers the barrier, but it does not build judgment for us
The problem I keep running into is not whether AI can produce something. It usually can. The real question is whether, once it has pushed the task forward to a certain depth, the human behind it can still keep up. When the work is shallow, AI's output often already looks good enough, and the differences between a decent answer and a strong one do not show up immediately. But once a project goes deeper, once the tradeoffs start multiplying, the gap between truly understanding something and merely using it becomes very obvious.
That is because many decisions may look small on the surface (a parameter, an implementation path, a design choice), yet each of them can lead to very different consequences. If you do not know why a choice makes sense, or what it sacrifices, then the decision quietly slips out of your hands and back into the model's defaults. AI can continue from there, of course. But at that point you are no longer really using a tool. You are following an answer you do not fully understand.
That is the unsettling part. It is not that AI fails to produce results. It is that you cannot reliably tell whether those results are right. It gives you a direction, and you can only accept it. If it drifts, you may not know how to pull it back. Because you do not know the territory well enough, you do not know where to look, which details matter, or which small deviation will later grow into a real problem. The larger the project becomes, the more obvious that loss of control feels.
Looking stronger is not the same as becoming stronger
AI has given many people, for the first time, the feeling that "I can do this too." That feeling is not fake. Someone who never wrote code before really can use AI to build a product prototype. Someone unfamiliar with a technology really can complete a respectable first attempt with its help. On the surface, a person's range of action seems to expand all at once.
But there are two very different meanings hidden inside that apparent increase in power. One is that you now have a stronger external executor, so you can sketch things that used to be out of reach. The other is that you already have a solid understanding of the problem itself, and AI becomes an amplifier in your hands. The first kind of power is easy to obtain. The second is the one that actually holds up. Both can look like "higher efficiency," but they are not the same thing at all.
Someone who does not understand the details can still use AI to make something that looks polished. But as the work advances, that person becomes more and more dependent on the model's default choices. Someone who really understands the underlying ideas faces the opposite situation. AI does not think for them, but it compresses a great deal of trial and error, implementation work, and organization. Their ability is not replaced. It is amplified. Two people may use the same tool, yet walk away with entirely different degrees of freedom.
The deeper the project goes, the more important the work behind the prompt becomes
Over the past months I have worked on a number of projects, and one fact keeps becoming clearer: the larger the project, the more precise the guidance needs to be. Many people translate this into "you need clever prompts." But I have gradually become less convinced that prompting is an isolated skill. More often, a prompt reveals how well you understand the problem in the first place.
In areas I already know well, the difference is extremely obvious. I know where the real difficulty lies. I know which constraints cannot be touched, which ones can be relaxed, which suggestions sound reasonable but will cause trouble in this particular project. Because of that, I can give very concrete instructions, and I can tell almost immediately whether AI's proposal is worth pushing further. In those moments, AI really is powerful. It feels almost like it is sprinting along the path I had already mapped out.
But the moment I move into a domain I do not know nearly as well, the whole situation reverses. The question I ask may itself be blurry. The context I provide may miss the crucial pieces. So AI naturally follows the most common path available to it. Its answer may not be bad. In fact, it is often better than an average person's casual guess. But it may still be wrong for my actual goal. The problem is that when I lack understanding of the domain, I often cannot even recognize that mismatch in time. Only after the project has moved forward for a while do the frictions begin to surface.
That is why I increasingly distrust the idea that "as long as you know how to ask questions, you can skip learning." Asking well matters, of course. But good questions do not appear out of nowhere. How far you can ask depends on how much you already understand. How deeply you can use AI depends on whether you can draw boundaries for it. Many so-called high-quality prompts look like language skill on the surface, but underneath they are professional judgment.
Why AI so often gives an average answer first
There is nothing mysterious about this. Today's large models are trained on vast amounts of human data. In their default state, they are more likely to produce answers that are common, stable, and acceptable across many situations. For everyday tasks, that is precisely part of their value. They quickly fill the entry-level gap so that people do not get completely stuck for lack of basics.
But if your goal is not merely to get something working, but to make it fit, make it deep, and make it distinct, then the default answer is not enough. Average answers naturally prefer well-traveled paths. They lean toward expressions and solutions that have already been validated many times. They rarely volunteer the kind of bold, situational judgment that a specific context may require. They also do not decide for you whether a less common path is worth attempting. That part still belongs to the human being.
And that is where the difference appears again. Someone who does not understand the underlying principles is more likely to remain a receiver of default answers. They can use AI to produce something complete, smooth, and seemingly problem-free, but they are also more likely to produce work that resembles everyone else's. Someone who really understands mathematics, computer science, or more broadly the substance of their own craft, will actively rewrite the defaults in many small places: where to optimize, where to compromise, where to stay conservative, and where it is worth taking a risk. Only then does AI start to feel like a truly sharp tool.
So in the age of AI, do we still need to study seriously?
After circling around the issue, everything comes back to learning. Once AI has entered the picture, do people still need to study fundamentals, especially things like mathematics and computer science that look slow and demanding? My answer is yes. And that need has not become smaller just because the tools became stronger. What has changed is the center of gravity of learning.
Some things that once required constant memorization no longer deserve the same amount of effort. Complicated API syntax, easy-to-forget language details, long and repetitive computational procedures: AI can now fill those gaps very quickly. That convenience is real. If tools can already take over part of the mechanical burden, then people do not need to keep pouring so much time into pure memorization.
But another category of things has become even more important. Do you have a feel for the principles? Can you explain how something works from end to end? When facing a solution, can you judge why it works, and where it might fail? None of those abilities has become less relevant in the presence of AI. If anything, they more directly determine the relationship between you and the tool. Do you treat AI as an outsourcer that hands you answers, or as an assistant that you can direct, constrain, and correct? Very often the dividing line is exactly here.
If learning in the past often meant accumulating "can I do this," then learning now feels more like building "can I read this," "can I judge this," and "do I know what to press on next." On the surface, there is less to memorize. In reality, the demand for understanding and judgment has not been reduced at all.
Even the way we learn will change with it
Take this one step further, and even the method of learning begins to change. At first I thought AI mainly helped as a kind of practice partner. When I face an unfamiliar topic, it can help spread the problem out, suggest a few directions worth following, or give me a more approachable explanation when I get stuck. All of that is genuinely useful, and I still rely on it.
But the more I use it, the more obvious its limits become as well. AI explanations are usually smooth. They are friendly to the learner and eager to make knowledge feel immediately digestible. But smoothness and shallowness often arrive together. Its answers can help someone get started quickly, yet they do not always carry a problem to a truly rigorous place. If you stop at the satisfaction of "I get it now," it becomes very easy to mistake familiarity for mastery. In reality, what you may have absorbed is only a flattened and processed version of the idea.
In the past, we learned by looking for good books and strong teachers. Neither of those has become obsolete. A general-purpose model does not automatically become a high-level teacher. What it produces depends heavily on the context you feed it and on how you continue the conversation. It can certainly act as an explainer, but only if you give it material worth standing on, rather than asking it to improvise from its default memory.
Good books do not become obsolete either, but the way we use them changes
So in the end, I have come to see AI more as a new interface for learning than as a substitute for knowledge itself. Good books still matter. Solid reference materials still matter. The difference is that we may no longer need to consume everything line by line, from beginning to end, in the old way. Often, we can turn a genuinely good book, a rigorous paper, or a strong set of course materials into context for AI, and then let it explain, compare, summarize, and answer on top of those sources.
The point of doing that is not laziness. It is a different way of organizing learning. We are still responsible for the materials themselves. We still have to judge which sources deserve trust, which explanations merely sound smooth, and which places require going back to the original text. But AI can make a process that once depended heavily on personal patience and note-organizing ability much more flexible. It may not teach you automatically, but it may help translate a good author or a good textbook into a form that matches your current stage of understanding.
If I had to summarize what AI changes about learning, I would not say "people finally do not need to learn anymore." I would say: people need to learn in a way that keeps them at the helm. Tools can take over many execution steps. That makes understanding even less negotiable, not more. You can let AI help you do many things, but what ultimately determines how far you can go is still how much you truly understand, and whether you can make your own judgments when it matters.
