Shumer compares the current moment to the early days of the pandemic: signals of massive disruption are visible to insiders, yet widely dismissed by the broader population. His central warning is blunt — society is underestimating both the speed and magnitude of the AI transition. The result is a new and distinctly modern form of unease: AI anxiety.
“We’re not making predictions,” he writes. “We’re telling you what already occurred in our own jobs.”
That framing — AI disruption as present reality rather than future risk — is precisely what has fueled the post’s viral spread.
Something Big Is Happening, source: Matt Shumer
But how much of this is consensus versus insider anxiety? To understand the stakes, it’s critical to look at how other industry leaders are responding.
On the core claim — that AI is already transforming knowledge work — many leaders agree with Shumer’s assessment.
NVIDIA CEO Jensen Huang has repeatedly argued that AI will function as a universal productivity engine, describing it as “the most powerful technology force of our time.” His view aligns closely with Shumer’s firsthand accounts of engineers delegating large portions of their workflow to autonomous systems.
Similarly, Microsoft CEO Satya Nadella has framed AI copilots as foundational infrastructure rather than optional tools. He has said AI will reshape “every software category,” embedding automation directly into daily work rather than existing as a separate interface.
From this vantage point, Shumer’s examples — describing software being built end-to-end from natural language prompts — are not outliers but early indicators of a structural shift.
Even Google DeepMind CEO Demis Hassabis has acknowledged the acceleration, noting that progress toward advanced general systems is moving faster than many expected just a few years ago.
Among builders, the productivity shock is not controversial. It is observable.
Where debate intensifies is around timelines.
Shumer argues that AI progress is compounding exponentially — moving from simple tasks to complex autonomous execution within a compressed timeframe.
OpenAI CEO Sam Altman has echoed this trajectory, warning that society is “not ready” for the economic changes advanced AI could bring. He has suggested that entire job categories could evolve — or disappear — faster than labor markets can retrain.
Anthropic CEO Dario Amodei has expressed similar urgency, predicting that highly capable AI systems could emerge within this decade, with transformative economic consequences.
But not everyone agrees that timelines are compressing.
Meta Chief AI Scientist Yann LeCun has been one of the most prominent skeptics of near-term “intelligence explosion” narratives. He argues that current systems, while powerful, still lack the core reasoning frameworks, real-world modeling, and autonomous agency required for true general intelligence.
In his view, today’s AI is impressive pattern recognition — not independent cognition — and predictions of imminent human-level systems are overstated.
This divide is crucial: insiders agree acceleration is real, but disagree on how close we are to runaway capability.
One of Shumer’s most consequential claims is that AI is now contributing to its own development — optimizing code, training processes, and evaluation systems.
This recursive improvement loop is often cited as the gateway to rapid intelligence scaling.
Former Google CEO Eric Schmidt has supported this concern, warning that once systems meaningfully assist in designing successor systems, progress could accelerate beyond traditional forecasting models.
However, AI researcher Andrew Ng has pushed back on fears of runaway self-improvement, arguing that human engineering bottlenecks — data, compute infrastructure, alignment testing — still heavily constrain development cycles.
The disagreement is less about possibility and more about immediacy. Self-improving systems are plausible; the question is how quickly they become dominant drivers of progress.
Where Shumer’s post hits hardest — and resonates most widely — is employment disruption.
He argues that entry-level cognitive roles are particularly exposed, as AI systems absorb routine analytical and production tasks.
This concern is widely shared.
Mustafa Suleyman, CEO of Microsoft AI, has warned that AI will be “hugely destabilizing” to white-collar employment, especially in administrative, research, and support functions.
Goldman Sachs research has similarly projected that hundreds of millions of jobs globally could be affected by generative AI automation.
But some leaders see augmentation, not replacement, as the dominant pattern.
IBM CEO Arvind Krishna has said AI will automate tasks rather than eliminate roles wholesale — freeing workers to focus on higher-value responsibilities.
Accenture and McKinsey analysts have echoed this more tempered view: AI will restructure work more than it eradicates it, at least in the medium term.
In other words, disruption is inevitable — but its severity remains contested.
Shumer’s post is not purely alarmist. He emphasizes that AI dramatically lowers the barrier to creation — enabling individuals to build software, produce media, and launch businesses without traditional technical expertise.
This perspective is strongly supported across the venture ecosystem.
Investor Marc Andreessen has framed AI as a “force multiplier for human ambition,” arguing it will unleash entrepreneurial capacity at a scale previously limited to well-funded institutions.
Reid Hoffman, LinkedIn co-founder, has similarly described AI as a co-pilot for human ingenuity — amplifying rather than replacing creative and strategic thinking.
The optimistic framing: AI compresses the distance between idea and execution.
The pessimistic framing: it compresses the distance between employment and redundancy.
Both can be true simultaneously.
The viral spread of “Something Big Is Happening” signals a broader shift: AI is no longer a niche technology conversation. It is becoming a mainstream economic, political, and social issue.
When builders warn of exponential productivity…
When researchers debate intelligence timelines…
When CEOs model workforce disruption…
…it becomes clear that AI is not just another software cycle.
It is infrastructure — economic infrastructure — reshaping how value is created and distributed.
Shumer’s post resonates because it captures the insider mood: urgency, acceleration, and a sense that public awareness is lagging reality.
AI anxiety has arrived, Source: Naval on X
But on one point there is near-universal agreement:
AI is not slowing down, and the average person is now beginning to feel anxious about what it will mean for their work and their life.