Writing code got cheaper. That does not make building software easier. It shifts the bottleneck to review, reasoning, and failure detection.
TL;DR
- Code generation is becoming cheap. Critical review is not.
- Students and junior developers should learn to inspect generated code, not just produce it.
- The useful skill is not prompting alone. It is knowing how to test, constrain, and repair output.
- Case-based teaching works better than pretending AI is just another autocomplete feature.
- AI should be treated as raw material. The developer still has to make it safe, correct, and maintainable.
Code got cheaper
The obvious mistake is to keep teaching software development as if typing code were the hard part.
It is not anymore. AI can produce code fast. It can also produce a lot of code that looks plausible and still misses the actual problem.
That changes what matters.
If code is easier to generate, then the scarce skill is not writing more of it. The scarce skill is knowing what to trust, what to reject, and what to fix.
The bottleneck moved
When code was expensive to write, the bottleneck was implementation speed. Now the bottleneck is much more often review and reasoning:
- Is the generated code solving the real problem?
- Did the model invent an API, shortcut, or assumption?
- Are the tests actually checking the right thing?
- Is the architecture getting more complicated than it needs to be?
- Does the result meet production standards, or just demo standards?
That is why the old “learn to code” advice is incomplete.
You still need to know how code works. But you also need to know when the code is wrong, even when it looks polished.
Teach case-based review
My default rule: critical review is easier to teach through cases than through slogans.
Give students and junior developers examples like these:
- generated code that compiles but fails a hidden requirement,
- a hallucinated API that looks real enough to pass a quick glance,
- tests that all pass but do not cover the failure mode,
- an overengineered solution to a simple problem,
- a clean-looking implementation that hides a security problem.
That teaches something concrete.
It shows that AI output is not the finish line. It is the first draft.
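The first and third cases can be shown in a few lines. This is a minimal, hypothetical sketch: `average` stands in for generated code that looks correct and passes a shallow test, but crashes on an input the requirements actually cover.

```python
# Hypothetical generated draft: plausible at a glance, passes the obvious test.
def average(scores):
    return sum(scores) / len(scores)  # hidden failure mode: ZeroDivisionError on []

# Reviewed version: the hidden requirement (empty input) is handled explicitly.
def average_reviewed(scores):
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

# This shallow test passes for both versions and never exercises the gap:
assert average([80, 90, 100]) == 90
# Only a reviewer who asked "what about an empty list?" writes this one:
assert average_reviewed([]) == 0.0
```

The point of the exercise is not the fix itself. It is that a green test suite proved nothing until someone asked which inputs the tests were missing.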
What students should practice
If you want better AI judgment, train these habits:
- Read the code before you run it. Make students explain what the code is doing, why it is doing it that way, and why it might fail.
- Write the test before trusting the output. If the output cannot be checked, it is not ready.
- Constrain the request. Prompting is not magic. Clear requirements produce better results than vague ones.
- Simplify bad output. Students should learn to cut AI code down, not just accept it.
- Check production readiness. Working code is not the same as maintainable code.
This is the practical part that schools often skip. They focus on syntax and ignore the review muscle.
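The "simplify bad output" habit also benefits from a before-and-after. This is a deliberately exaggerated, hypothetical example: an assistant wraps a one-line decision in a strategy class and a factory, and the review cuts it down to what the problem needs.

```python
# Overengineered draft (hypothetical): a strategy class and a factory
# just to pick the larger of two numbers.
class MaxStrategy:
    def apply(self, a, b):
        return a if a >= b else b

def make_strategy(name="max"):
    return {"max": MaxStrategy()}[name]

# Reviewed version: same behavior, no ceremony.
def larger(a, b):
    return max(a, b)

# Both agree, but only one of them will be pleasant to maintain:
assert make_strategy().apply(3, 7) == larger(3, 7) == 7
```

The exercise works because students must first prove the two versions are equivalent, then argue why the shorter one is the production-ready one.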
AI is raw material
The right mental model is not “AI writes the code for me.”
The right mental model is “AI gives me raw material that I still have to shape.”
That means the developer is still responsible for:
- correctness,
- security,
- maintainability,
- and fit with the rest of the system.
If you skip that responsibility, the output will eventually bite back.
What this means for education
Software education does not need to ban AI. It needs to stop pretending that typing code is the main learning outcome.
From my perspective, the curriculum should shift toward:
- code review,
- debugging generated output,
- architectural judgment,
- testing,
- specification writing,
- and failure analysis.
That is not a soft skill layer on top of programming. That is the job now.
Final rule
Writing code got cheaper.
If you want developers who can survive that shift, teach them to inspect the output, reject the bad parts, and turn machine-generated code into something safe and useful.