The future of work and where humans fit in

It began the way modern panics often begin: not with a factory fire or a bank run, but with a document that looked like it had slipped out of time.

Citrini Research published a long “macro memo” dated June 30th, 2028, with “February 22nd, 2026” visibly crossed out—an intentional stage prop that tells you, immediately, how you’re meant to read it: as if the future has already happened. The authors insist, right up front, that this is a scenario, not a prediction—an attempt to model an under-discussed left‑tail risk, not to declare destiny.

Then the memo speaks in the language markets understand: a fictional data print, a fictional selloff, a fictional drawdown that sounds uncomfortably specific. In that imagined 2028, unemployment “prints” 10.2%, and the S&P 500 is down 38% from its October 2026 highs. The memo’s narrator describes an economy that still looks fine in the headline numbers—booming productivity, rising output—but hollow in the place that matters: the place where paychecks turn into purchases, and purchases turn into profits.

To name that hollowness, Citrini coins a phrase—“Ghost GDP”—for output that shows up on spreadsheets and in national accounts but doesn’t circulate through ordinary life. Machines don’t take vacations; they don’t replace a broken dishwasher; they don’t browse for shoes because they had a hard week. In the memo’s telling, the velocity of money—how often a dollar changes hands—“flatlines” because the economy has found a way to produce without paying people, and the old bargain—work, earn, spend, repeat—starts to fracture.

The engine of the story is a loop that feels plausible precisely because it is so simple. AI gets better; companies need fewer workers; layoffs rise; spending falls; demand weakens; firms protect margins by investing even more in AI; AI gets better. The memo calls it a feedback loop with “no natural brake.”

And then, for a moment in February 2026, traders treated that loop like a live wire.

On February 23, 2026, U.S. stocks dropped sharply: the Dow fell 821.91 points (‑1.66%), the S&P 500 fell 1.04%, and the Nasdaq fell 1.13%. Reuters described a broad “risk reassessment” driven by a mix of anxieties—tariff uncertainty and geopolitics, yes, but also persistent fears about AI disruption. A strategist quoted in the piece summed up the mood with brutal honesty: “sell first, assess later.”

It would be wrong to say a single Substack post caused a global market move. Markets rarely have one cause. But it’s also wrong to pretend the memo didn’t matter. The Guardian—explicitly calling the scenario “completely speculative”—reported that it rattled investors, and that several companies named in the memo (including Uber, American Express, Mastercard and DoorDash) fell in the days the story spread. In the Guardian’s account, the memo didn’t just wave at “AI” in the abstract; it walked through a chain reaction from software disruption to job losses to private credit and mortgages, and the specificity made it feel tradable.

There’s another reason it landed: it told investors that the danger isn’t merely that AI makes some firms stronger, but that it can make the whole system we price cashflows on—employment, wages, consumer demand—behave strangely. Reuters Breakingviews noted that concern widened as the Citrini post circulated, raising the question of whether mass white‑collar displacement could crater spending and profits across consumer sectors. At the same time, Breakingviews pushed back on the fatalism, reminding readers of the “lump of labour fallacy” and the historical pattern: technology shocks often lower prices, raise real incomes, and create new work—though policy can be slow, and the transition can be painful.

So what does the memo say about the future of work?

It says that the modern economy has been built on a quiet assumption: human intelligence is scarce, and that scarcity is worth paying for. But if machine intelligence becomes abundant—cheap, tireless, replicable—then the premium on many kinds of white‑collar labor compresses. In Citrini’s scenario, it starts in software: tools that can plan and execute tasks (not just autocomplete) undermine the pricing power of SaaS and the white‑collar roles that implement, configure, and maintain those systems. Laid‑off workers don’t smoothly “move up” into managing the AIs, because the AIs improve at exactly the tasks humans would redeploy into. Many people slide down the wage ladder into lower‑paid, less stable work; spending weakens; the loop accelerates.

But beneath the finance jargon is a more human claim: if intelligence becomes cheap, the economy stops rewarding “being smart” in the way it did for the last half‑century, and millions of people discover that what they sold for a living—analysis, synthesis, coordination, documentation, persuasion over email—no longer clears at yesterday’s price.

That brings us to the question that matters more than markets: what is the human place in that world?

If you strip away the fear and look at constraints, a pattern appears. The work most likely to remain meaningfully human is not defined by whether a machine can do it in principle, but by whether society can actually live with the machine doing it end‑to‑end.

Some work stays human because it is physical. The world is lumpy. It leaks. It breaks in ways that don’t resemble training data. Homes, hospitals, construction sites, and field repairs are full of edge cases—tight spaces, messy wiring, human bodies, unpredictable weather, and the kind of improvisation that is easy to underestimate until you watch someone competent do it. Even very good software doesn’t magically become a competent plumber. Robotics may bite into parts of this domain, but deployment is slower, more regulated, and more expensive than copying code.

Some work stays human because it is relational. Care is not only service delivery; it is presence, trust, dignity, reassurance. People will accept AI copilots in medicine, education, therapy, and coaching—but many will still want a human being in the room, especially when fear, pain, shame, or grief enters the picture. We are not only customers of outcomes; we are social creatures who want to be seen by other social creatures.

Some work stays human because it is accountable. Even in an AI-saturated world, someone must sign their name. Courts, regulators, safety-critical industries, clinical decisions, executive leadership—these domains don’t just ask “what is optimal?” They ask “who is responsible if this goes wrong?” And responsibility is still a human-shaped institution: liability, legitimacy, consent, governance.

Some work stays human because it is meaning-making. AI can generate infinite options, but abundance creates a new scarcity: taste, judgment, story, cultural context, the ability to say this matters, not that. In a world of endless content, curation and direction become more valuable, not less—especially when attached to trust and reputation.

So the future of work is not simply “humans do the creative stuff.” A lot of “creative stuff” becomes cheap, too. The future is more like this: humans are increasingly paid for what machines struggle to socially replace—embodiment, relationships, accountability, and the authority to decide what counts.

And survival, in practical terms, becomes less about finding the one safe job and more about building a life that is hard to knock over.

That starts with learning to treat AI as leverage rather than competition: becoming the person who can orchestrate tools, verify outputs, and deliver results faster and more reliably—not the person whose entire job is producing first drafts. It continues by moving closer to domains where humans remain essential: the physical world, regulated systems, high-trust work, high-accountability work. It includes a brutally unromantic element, too: lowering personal fragility. If work becomes more volatile, high fixed costs and high debt become a kind of quiet enemy. Flexibility—financial and psychological—turns into a real advantage.

Finally, it requires something the Citrini memo gestures at in its own way: the recognition that “the human place” is not guaranteed by technology or markets. It’s built by institutions. Even critics of the memo, like Claudia Sahm (quoted in Business Insider), argue that a true labor-market shock would provoke powerful fiscal and monetary responses—meaning the trajectory is political as much as technological.

In other words: you survive the future not only by picking the right skills, but by helping shape the frameworks that decide how prosperity is distributed when intelligence is abundant.

Citrini’s memo is frightening because it pictures a society that fails to do that—an economy that can produce more while ordinary people receive less, until the math breaks and the story breaks and the social contract breaks. Whether that future arrives is unknowable. What is knowable is why a fictional memo could shake real markets: it made a previously abstract fear concrete, and it reminded everyone that the economy is not a machine that runs on productivity alone. It runs on people—on incomes, confidence, and the everyday act of believing tomorrow will still make sense.
