The Ceiling of a Tool Depends on the Structure of the Mind Using It

About 1,054 words · roughly 4 minutes

2025-12-11

Recently, while preparing a talk on generative models, I tried using AI to help draft the lecture notes. I thought it would be easy. After all, AI can write articles, code, and reports. But the result was very different from what I expected.

I wanted the notes to have some mathematical flavor without becoming overly obscure. I wanted the logic to be clean, and the content to be both rigorous and well-paced. What AI produced often looked rich on the surface, but the problems were obvious: sometimes the mathematics was too loose, sometimes too dense; sometimes the structure was weak, sometimes the reasoning jumped too abruptly. I kept revising prompts, changing requirements, switching styles, yet the result never became truly satisfying.

And yet in other contexts, almost the opposite would happen. When I asked AI to help with research questions I already understood, or to write code in a stack I was already familiar with, it suddenly felt like the most cooperative partner possible. I could give it a sentence, a direction, and it would complete derivations, fill in structure, and often produce a working code example in just a few minutes.

Why did the same model feel so different across tasks?

After thinking about it for a long time, I arrived at one conclusion:

The upper limit of a tool is constrained by the cognitive structure of the person using it.

I

When I work inside a domain I know well, I understand where the problem boundaries are, and I know what a reasonable approach should look like. In a mathematical derivation, I can tell whether an assumption is sound, which step needs checking, and which part can be skipped. So when AI gives me something, I can guide it, correct it, and quickly align it with what I actually want.

Coding is the same. As long as I know the stack, AI feels like a partner with unusually good tacit understanding. I know what structure I want, which APIs are reasonable, and where the risks are. Under that condition, even a few simple instructions are enough for it to produce something close to the target.

But the moment I move into a stack I do not yet understand, everything changes. In areas I have not fully grasped, such as certain backend systems, I often find that I cannot even decompose the task properly. My requests to AI become vague. I cannot immediately tell whether the code it gives me is correct. When I ask it to revise the output, I often do not know where to begin. The final result becomes a messy collection of pseudocode that looks plausible but does not really work.

At first glance this seems like a limitation of AI. But in truth, it is doing something very accurate: it is reproducing my own cognitive structure in that domain. Where I am unclear, its output becomes unclear. Where I have a gap, its response breaks at the same gap.

AI rarely exceeds the structural boundary of the person asking the question.

II

This also helped me understand why AI-assisted learning often works so well, while AI-assisted teaching can be disappointing.

When you are learning, you only need to judge whether you understand. If AI gives you an explanation, you can feel whether it helps. If something is off, you can correct it as you go.

Teaching is different. Teaching requires you to build a structure that other people can understand, not just one that makes sense inside your own head. You need to know what should come first, what should come later, what can be skipped, what must be explained carefully, where the audience is likely to get stuck, and how deep the mathematics should go. All of that depends on your grasp of the whole knowledge tree.

If your own framework is still incomplete, AI cannot build that framework for you. It does not even know what shape that framework ought to take. In the end, the output can only wander around inside your unfinished mental model.

III

This pattern appears everywhere in ordinary life.

Someone who cannot draw will not suddenly create great work just because they have the most advanced digital painting software. The tool may be powerful, but it cannot replace composition, taste, or understanding of light and color.

Someone who does not understand photography will not produce better images simply by holding a more expensive camera. What determines the final photograph is observation, judgment, and sensitivity to light, not the price of the device.

Someone who cannot structure a presentation will not make a strong talk just by using a beautiful PowerPoint template. Without hierarchy, sequence, and narrative flow, the result is still only a visually polished mess.

And programming is no exception. A person who does not understand software structure will not magically write maintainable code just because they have Copilot. The model will not decompose modules for them, design architecture for them, or teach them how to abstract a problem.

In all of these cases, the tool is strong. But without structure, it can only faithfully reproduce confusion.

IV

The more I think about this, the more convinced I become that tools are not substitutes. They are amplifiers.

They amplify your understanding, but also your blind spots. They amplify your clarity, but also your chaos. They amplify your structure, but also your weaknesses.

So the real question is rarely how to write a more clever prompt, or how to chase a more advanced plugin. The deeper question is how to build a clearer and more stable cognitive structure. Once that structure matures, the tool naturally becomes a multiplier. Before that, it can only remain trapped inside your boundaries.

Closing

In the end, I realized that the limit of AI is rarely the model's own limit. More often, it is the limit of the structure I am able to provide to it.

So instead of asking AI to help me teach better, I increasingly see teaching itself as the process of building structure. When the structure becomes clear enough, the tool stops being an obstacle and becomes what it was always capable of being: an amplifier.

Contributor: Junyuan He