[This article applies to the bullshit being spewed regarding "AI", but the same concept can be applied to almost everything being pushed by the establishment (the supposed need for new "data centers" is really high on that list). . . SC]
By Sandeep Manudhane - February 13, 2026
WARNING!!
Massive AI hype being built in a sudden burst (and most of it is fake) . . .
1) A scary article: I was surprised to read a long article on Twitter (X) claiming it's just 6-12 months before a Covid-like event changes this world. It claims this will be the "AI event", where most white-collar jobs worldwide would vanish, because AI is that good now. That article got 100 million-plus views. Clearly, people are spooked (naturally). So the psy-op has worked. (And I saw other similarly dark articles too.)
2) Suddenly many influencers are pushing the same narrative, and it turns out, as the media has reported, that many are being paid heavy sums by AI firms to push this story (that the AI singularity is arriving). But if AI were truly "revolutionary", would it need an influencer push? No. This should be a clear signal that it's hyped.
3) A correction in IT and SaaS (Software as a Service) stock prices is suddenly creating a doom scenario about these companies dying any moment now, with second- and third-order effects on the entire economy. Stock investors who haven't studied the technicals of AI are automatically assuming it's all over, dead, gone, finished. WRONG. NO.
4) What is the truth, and what's most likely to happen? Based on years of observing AI trends, reading about and learning AI technology, and working with AI at various levels, my take is as follows. I urge you to read this and preserve your sanity. Please don't panic; nothing catastrophic is happening anytime soon.
A) IPO pressure: AI firms are going crazy pushing their God-narrative, as many giant IPOs are lined up soon. They need the public to buy their paid subscriptions or the story goes kaput. So they are creating false hype. It's shameful, anti-social and deeply hurtful. (Almost all AI firms released doom scenarios just before their next funding rounds; investors who haven't learnt the technology fall for it; pure FOMO. This playbook is so repetitive it's comical.)
B) OpenAI is spooked: Sam Altman has lost the lead he temporarily managed to build over Google and others, and now his loss-making enterprise isn't the darling of any investor any more. He's terrified.
C) Elon Musk’s Grok does not have anywhere near the traction in the consumer space needed to make it a profit-making entity. The same goes for many other capex-heavy AI firms. But GPU/TPU-hungry AI operations need more capex each day, not less. It’s a dead-end for most, except the cash-rich Googles of the world.
D) Enterprise AI is patchy, lagging, slow, choppy: Anyone who has ever built a company, run a large department, or consulted for a business enterprise knows how random, undefined, tacit, and unstructured most real-world work actually is. No way is AI ever going to replace the humans doing those very complex things on a daily basis. No way. Not tomorrow, not in 10 years. NO. (And I am not even getting into the needs of ‘regulated’ industries.)
E) Consumer AI is cool, but has limits: The more that regular humans (of all ages) use AI, the more its artificiality becomes apparent to anyone. The novelty cannot sustain the commercial numbers needed to make AI (foundation models) profitable. OpenAI and Perplexity would never have given free tiers to most Indians otherwise. They desperately need folks to stick to this opium.
F) LLMs aren’t solved, hallucinations aren’t zero: The structure of any LLM is such that it will ALWAYS hallucinate, no matter how much fine-tuning humans do. In most sensitive business operations, you cannot allow LLMs to control the core data at all. Can you run an airline on a generative-AI (LLM-based) system that’s 98% accurate? Can you run a precision-manufacturing operation at 97% accuracy? Can you run a financial-services firm at 95% accuracy? NO. NEVER. So deterministic, old-fashioned ERP software isn’t going anywhere. Nowhere at all. LLMs will be good as a top layer on those ERPs to glean insights, nothing more. (No one can ‘train away’ hallucinations in a probabilistic LLM by using larger datasets. That’s like claiming you can build a die that lands on the face you want, every time.)
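To see why "98% accurate" is not good enough for core operations, here is a back-of-the-envelope sketch (my illustration, not from the article): if each step in a multi-step workflow succeeds independently with probability p, the chance that all n steps come out correct is p^n, and that number collapses fast.

```python
# Why a "98% accurate" system cannot run a multi-step core operation:
# if each step succeeds with probability p, the chance that ALL n
# steps succeed is p ** n, which shrinks rapidly as n grows.

def chance_all_correct(p: float, n: int) -> float:
    """Probability that n independent steps are all correct."""
    return p ** n

if __name__ == "__main__":
    for p in (0.98, 0.97, 0.95):
        for n in (10, 50, 100):
            print(f"per-step accuracy {p:.0%}, {n:3d} steps -> "
                  f"{chance_all_correct(p, n):.1%} chance of a fully correct run")
```

Under these assumptions, a 98%-accurate system running a 50-step process delivers a fully correct result only about a third of the time, which is why deterministic software still matters wherever every step must be right.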
G) Agents aren’t magical, humans aren’t going anywhere . . .
[SNIP]