A Heresy About AI
I want to state a small heresy.
Most schools, companies, and individuals have not truly understood AI. Put more formally: their organizational inertia runs too deep, and they have not yet built AI-native standards, workflows, or modes of evaluation. Put more bluntly: they are simply following the crowd. Everyone says they need AI, so once the FOMO kicks in, they too announce that they must have this thing.
This cognitive dissonance is obvious enough when a team claims it wants to hire “people who know how to use AI” while banning AI in interviews, even moralizing the ban into a question of integrity. Hiring, in general, is supposed to find people who can solve problems. Management, likewise, should first be management by outcomes: staying compliant, following the team’s workflow, avoiding production incidents, and delivering reliably. How exactly a problem gets solved is a matter of the worker’s own agency, and at bottom a matter of tool choice: Google or ChatGPT.
Yet in practice, interviewers leave the genuinely scarce signals, real project and work experience, unexplored, and instead keep grinding candidates through standardized questions that AI can annihilate in seconds, as if the goal were to outcompete the tool on efficiency. Then, with charming seriousness, they add that they also want to “empower” the business with AI. Is this not absurd?
Many people still understand AI only at the level of a “code generator,” without grasping how it is reshaping the organization of knowledge and the allocation of human attention. As for the familiar claim that one “does not feel comfortable relying on it,” that is often just instinctive fear in the face of an unfamiliar mode of collaboration. Worse still, some have begun using AI to screen resumes, an almost textbook misuse; Amazon already ran that experiment on everyone’s behalf, scrapping its internal hiring model in 2018 after it learned to penalize resumes from women. So in many vertical scenarios, the respectable term for all this is exploration. The blunt term is AI theater.