
From “Horsepower” to “Mindpower”: The Raw Math of Human vs. AI Productivity

Published On: March 3, 2026

The CEOs of the world’s leading AI companies have a message for you: your job is probably doomed, civilization might collapse, and nobody can stop it.

Sam Altman of OpenAI warned that 30-40% of economic tasks will be automated “in the not very distant future.” Dario Amodei of Anthropic described an approaching “tsunami” of AI capability that will surpass human intelligence. Both predict massive job displacement. Both frame it as inevitable.

And both, conveniently, are selling the very technology they’re warning about.

It’s uncertain whether their dire warnings represent genuine concern for society’s future or a peculiar sales pitch (“Our AI is so powerful it could bring about civilizational collapse… so it can definitely handle your accounting, coding, and customer service calls.”).

Altman and Amodei are meteorologists pointing at the tsunami while selling surfboards. What’s missing from all this apocalyptic prophecy is a roadmap: for workers whose jobs are threatened, for organizations trying to navigate the transition, or for societies grappling with wholesale economic restructuring.

So let’s take a moment to think through the implications of their warnings, and what happens when AI’s cognitive capacity surpasses humans. What roles could humans still play? And how should organizations and individuals prepare for a future that’s arriving faster than most people realize?

Fast on the Straightaway: The Mindpower Metric

We already have a conceptual model for comparing the productivity of organic beings to machines: horsepower.

When James Watt needed to sell steam engines to mine operators in the 1780s, he didn’t talk about pressure differentials or thermal efficiency. He talked about horses. Specifically, how many horses his machine could replace. A draft horse generates about 1 HP of sustained output. A Ford F-150 generates 400. A Formula 1 engine pushes 1,000.

Today, nobody expects a horse to outrun or out-haul a motor vehicle on the highway. And, if we’re going to have an honest conversation about the implications of AI for human jobs and society as a whole, then it’s time we applied the same brutal honesty to cognitive work.

Let’s call our new metric mindpower. If we peg median human cognitive output at 1.0 MP (the baseline for an average knowledge worker doing average knowledge work), then where do most humans land?

The math is humbling. If the median human IQ is 100 and only about 2% of people score above 130, then a profoundly gifted or genius-level person with an IQ between 150 and 200 (roughly the top tenth of one percent of human intelligence) would still only rate 1.5 to 2.0 MP.
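For the programmers in the audience, here’s a minimal sketch of that implied scale, assuming the article’s linear peg of 1.0 MP per 100 IQ points (the mindpower function below is a purely illustrative toy, not a real psychometric):

```python
# A toy rendering of the mindpower scale described above,
# assuming the linear peg MP = IQ / 100 (illustrative only).

def mindpower(iq: float) -> float:
    """Convert an IQ score to 'mindpower', with 1.0 MP as the median baseline."""
    return iq / 100.0

for label, iq in [("Median knowledge worker", 100),
                  ("Profoundly gifted", 150),
                  ("Theoretical human ceiling", 200)]:
    print(f"{label}: {mindpower(iq):.1f} MP")  # 1.0, 1.5, 2.0 MP
```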

That makes the world’s smartest people the cognitive equivalent of:

  • A motorized pump in a residential above-ground pool
  • A portable workshop air compressor
  • A small table saw
  • A go-kart engine
  • A trolling motor for a fishing boat

Meanwhile, current frontier AI models are already operating at 50-100+ MP on tasks for which they’re optimized. And that number keeps climbing. By 2030, we’ll likely see models pushing 500-1000 MP on specialized cognitive tasks.

The uncomfortable truth? No human lawyer in 2035 is going to give you a better explanation of international trade regulations than an AI agent running a 650 mindpower Gemini 21 model optimized for legal research. That would be like Secretariat trying to outrun a Lamborghini, or a pack mule pitted against a pickup truck. The horse isn’t slow. The car just isn’t playing the same game.

Off-Road Thinking: The Jagged Frontier

So, in a (near) future where no human can beat an AI at accounting, lawyering, or coding – what game should humans be playing?

This is where it gets interesting, and where the doomsayers miss the plot.

Wharton School professor Ethan Mollick talks about the “jagged frontier” of AI performance, and it’s a crucial concept. Current generative AI models are one-trick ponies: they do one type of thinking (inference) and do it spectacularly well. Given an input, an AI model uses its pre-established statistical structures to predict a plausible next output.
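To see why “inference” is such a narrow trick, here’s a toy sketch of next-token prediction, with a hypothetical hard-coded probability table standing in for a trained model’s statistics (real models learn these distributions from trillions of words):

```python
import random

# Toy sketch of inference: given a context, sample a plausible next token
# from a learned distribution. The table below is invented for illustration;
# a real model derives these probabilities from its training data.
NEXT_TOKEN_PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    "cat sat": {"on": 0.8, "quietly": 0.2},
}

def predict_next(context: str) -> str:
    """Return a plausible (not necessarily true) continuation of the context."""
    probs = NEXT_TOKEN_PROBS.get(context, {"…": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print(predict_next("the cat"))  # usually "sat": plausible, nothing more
```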

This makes them devastatingly effective at:

  • Pattern completion and remixing existing information
  • Compressing large bodies of text and imagery into useful summaries
  • Generating drafts, options, and iterations at superhuman speed

But while researchers are working on expanding AI’s cognitive repertoire (with experimental “world models” like Google Genie, for instance), current systems still struggle with what we might call “off-road thinking”. Specifically:

Real-World Grounding 

If you work with AI long enough, you’ll notice it’s incredibly book-smart but not terribly street-smart. Its predictive capability is based on texts it has processed, not lived experience, so it doesn’t deal well with situations where reality differs from the “official” version. I’ve had AI agents analyze email chains during sales conversations and business negotiations, and they’re eyerollingly naïve when someone doesn’t really mean what they’re saying, or is giving a polite brush-off. The AI tends to take everything at face value. Humans know better.

Causal Reasoning and Counterfactuals

Humans live in a world of causal relationships. If you toss a tennis ball at a brick wall, it bounces off harmlessly. If you pull a tiger’s tail, the tiger turns around and mauls you. The 20th-century philosopher Kenneth Craik argued that figuring out these causal relationships and working through “what if” scenarios is the entire reason human consciousness evolved. AI doesn’t experience the world this way. It only knows “When people say A, most of the blogs I’ve scanned follow it with B…” which can look like causal reasoning but isn’t the same. In some situations, this leads AI agents to make profoundly bad decisions.

Long-Horizon Planning and Self-Monitoring 

If you’ve ever asked Claude to perform a long-form task in your browser (e.g. “look at this list of executives, see who has a sustainability role, check if they post regularly online, compile their contact information, and summarize the major themes of their posts”), you’ll notice that when it goes wrong, it goes all the way wrong. Given that exact task (researching corporate sustainability officers), the smartest Claude model got stuck in a loop, repeatedly re-summarizing the same four people it had found. Likewise, AI models running autonomously can drift far off task without realizing it. My own AI financial advisor will occasionally, flagrantly contradict whatever they said five minutes earlier without acknowledging the about-face (“Gold is a terrible investment… gold is a great investment!”) – but they got a 37% return last year, so I’ll forgive them. These glitches are all a consequence of predicting the next word in a sequence rather than properly modeling the world, the way humans do.

Goal Formation

This is probably a good thing for humans, but AI still basically does what it’s told. The late, controversial AI theorist Roger Schank once quipped: “Dogs have goals but they don’t have words. Amazingly, dogs can think intelligently about getting what they want. When modern AI can do what dogs do every day in order to achieve the real goals that they have, please let me know.” The fact that AI doesn’t naturally form goals the way a hungry dog or status-obsessed human does might be the ultimate safety valve.

Acting On Sparse Data

The saying “a little bit of knowledge is a dangerous thing” applies doubly to AI agents whose entire mode of thought hinges on finding patterns in massive datasets. When confronted with truly novel situations, or situations where data is scarce, the human “world model building” approach to cognition (i.e. “This reminds me of the time when…”) will likely win out.

The Reasoning Rodeo: What Horses (and Humans) Still Do

The first question most people ask about AI job displacement is “How soon will AI be able to do my job?” followed by “What will humans do after AI takes over all the jobs?”

This is a serious question that every economist, ministry of labor, and parliament in the world should be working on 24/7 until we come up with a tentative plan. Bernie Sanders recently said AI should be used to give workers a four-day workweek: “Let’s use technology to benefit workers. Give you more time with your family, with your friends, for education, whatever the hell you want to do.” He also acknowledged that, given the track record of our business and political institutions, it’s not likely to play out that way without some serious regulatory intervention: “If AI is going to replace a lot of the work that human beings do, what becomes of human beings?… Congress and the American people are very unprepared for the tsunami that is coming – we have got to slow this thing down!”

Assuming the politicians don’t heed Sanders’ words – how is AI job displacement likely to play out? What will humans still do?

Building on our horsepower / mindpower metaphor, let’s consider the roles horses still play in our economy:

  • Ranching, Forestry, and Agriculture: Horses are still used for herding cattle (less disturbing than a motor vehicle), logging and crop cultivation in sensitive areas with steep slopes or wet soil, and moving across rugged terrain where ATVs and trucks can’t go.
  • Law Enforcement and Emergency Response: Mounted police handle crowd control, patrol large urban parks without roads, and maintain visible presence at events. Horses support fire crews fighting wilderness fires in inaccessible areas.
  • Sports, Tourism, and Recreation: People still prefer horses to cars for trail riding, carriage rides, racing, show jumping, and rodeo.
  • Ceremonial Functions: Horses are used for parades, honor guards, and funerals by military units and royal families.

The common threads translate surprisingly well to human cognitive labor:

Rugged or Delicate Intellectual Terrain

What kind of mental terrain might humans navigate better than a finely tuned inference machine? Reading a room in high-stakes, messy social situations. Conflict mediation. Negotiation. Coaching. Exercising creative direction and taste – not generating options, but choosing what fits a community, audience, or moment. Cross-context sense-making that connects weak signals across politics, culture, incentives, and history.

Human Presence Premium

Anything where humans will be a less disturbing presence than a machine. Building trust with skeptical or fearful people—patients, clients, juries, communities. Care work that’s relational, not transactional: grief, trauma, loneliness, long-term mentorship. Persuasion that relies on authenticity—leadership, organizing, diplomacy, sales where “I believe you” matters.

Official and Ceremonial Functions

It’s unlikely most people would want to be married by a machine, or have a machine emcee their community’s holiday fundraiser, or eulogize them at their funeral. Legitimacy-bearing roles where “who decides” matters as much as “what’s decided”—judge, teacher, clergy, manager.

Fun and Games

While videos of humanoid robots doing backflips are novel today, once that becomes commonplace nobody’s going to be impressed by a robot acrobat. Nobody wants a robot chef to come to their table and describe the ingredients and the care taken in preparing them. And if AI leads to an abundance economy, there will be plenty of time and resources for people to operate hobby businesses, like a cake shop that’s only open three days a week, because why not?

That said, Bernie Sanders is right about one thing: much of humanity’s fate in an AI-saturated world comes down to political decisions. In an “abundance” economy powered by AI, humans won’t compete with machines on productivity: they’ll reclaim work as expression, craft, and connection. The baker who opens three days a week because they love sourdough. The teacher who mentors five students instead of managing thirty. The nurse who spends an hour with a patient instead of racing through rounds. This isn’t nostalgia: it’s the future we could choose if we tax AI productivity, guarantee material security, and stop pretending that “efficiency” is the only value that matters.

This will likely require some “Universal Basic Income” (UBI) scheme in which the proceeds of corporate AI productivity are translated into regular checks for everyone else, employed or not. And with AI assistance, governments just might be able to pull off that kind of complex, semi-managed economy. But even if UBI worked, some people fear it would turn humanity into a species of listless couch potatoes (see the Pixar sci-fi satire WALL-E, where humans basically sit in easy chairs all day while robots bring them snacks). However, we actually have evidence that, given a guaranteed income, most people will choose to work, doing something they genuinely enjoy.

In Malawi, around 52,000 people have been living in the Dzaleka refugee camp since 1994. They were displaced by wars in multiple neighboring countries and still have not been granted official status in Malawi, including the right to work, so everyone subsists on food aid and $9 per month from the United Nations. While this is questionable from a human rights standpoint, it offers a fascinating study of a society where everyone has a fixed, guaranteed income. As it turns out, most people work part-time doing things they enjoy: sewing wedding dresses, preaching at churches, running bars, or serving as youth football coaches – all without the actual need to hold a job. Small-scale basic income experiments in Finland reached a similar conclusion: eliminating the need for a paycheck does not instantly turn people into couch potatoes.

Blending Human & AI Intelligence: Centaurs, Minotaurs, and Chimeras

Social critic and literary science fiction writer Cory Doctorow wrote a critique of the “centaur” concept (the idea of a hybrid human / AI worker), claiming that most people will end up “reverse centaurs”: a dystopian arrangement where humans become appendages to AI systems rather than the other way around. It’s a vivid image, and he’s not wrong to worry about it.

But I think there’s a better name for that arrangement: the “Minotaur”. If the centaur is a human brain turbocharged with the intellectual horsepower / mindpower of AI, the minotaur is an AI directing a human workforce (imagine a team of human delivery drivers with an AI dispatcher / supervisor telling them where to go next). But, contrary to Doctorow’s framing, the minotaur isn’t necessarily a bad deal for the human. It depends on the job, the pay, and the alternatives.

Centaur (Human “Head”, AI “Body”)

The centaur approach positions humans as strategic directors with AI as a powerful tool executing under human guidance. 

This will likely work best for:

  • Tasks requiring high-level judgment, ethical considerations, or contextual nuance.
  • Situations where accountability must rest with a human.
  • Work where the “jagged frontier” means major decisions or tasks falling outside AI competence: AI can take some work off the human expert’s plate, but can’t be left in charge.

Imagine a marketing strategist using AI to generate 50 campaign concepts in an hour, then selecting and refining based on brand understanding, artistic taste, and market intuition the AI can’t quite nail. The human is the rider. The AI is simply lending additional mindpower.

Minotaur (AI “Head”, Human “Body”)

The minotaur model inverts control. AI systems direct the workflow while humans provide execution support (e.g. driving cars and running errands, until robotics technology improves) or handle exceptions the AI cannot process (basically, intervening to reboot the AI whenever it gets stuck in a loop).

This arrangement would be optimal for:

  • High-volume, time-sensitive tasks where AI can optimize routing.
  • Algorithmic systems with human intervention for anomalies.
  • Work that is (let’s be honest) largely mindless bullshit.

For instance, imagine an AI triage system routing 80% of customer inquiries automatically and escalating edge cases to human specialists (e.g. “My three-year-old covered my smartphone in ketchup – can I get a warranty repair?”). Or picture AI-driven manufacturing with human quality control, or humans acting as a friendly presence (a human attendant welcoming hotel guests while an AI system checks them in invisibly via facial recognition).
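As a sketch of how simple that division of labor can be, here’s what minotaur-style triage logic might look like, assuming an upstream classifier supplies a confidence score (the Inquiry fields and the 0.8 threshold are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Inquiry:
    text: str
    ai_confidence: float  # assumed to come from an upstream AI classifier

def route(inquiry: Inquiry) -> str:
    """Minotaur-style triage: the AI resolves what it can, humans get the rest."""
    if inquiry.ai_confidence >= 0.8:   # threshold invented for illustration
        return "auto-resolve"          # the ~80% handled without a human
    return "escalate-to-human"         # ketchup-grade edge cases

print(route(Inquiry("How do I reset my password?", ai_confidence=0.95)))
print(route(Inquiry("Toddler + ketchup + warranty?", ai_confidence=0.35)))
```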

In Doctorow’s view, “minotaur” work would be inherently degrading: humans placed under machine supervision and direction, in a perverse inversion of our traditional relationship to tools. But here’s the thing almost nobody wants to say out loud: a huge percentage of white-collar and service industry work is soul-crushing make-work that exists primarily to justify someone’s salary or maintain an organizational / social hierarchy – where workers pretend to be busy and bosses pretend to be important (what anthropologist David Graeber famously labeled “Bullshit Jobs”). If AI can remove the need for this pretense, freeing humans to do work they enjoy or that only humans can do or – Heaven forbid – work a third as much or not at all, that’s not dystopia. That’s progress.

From my own experience, I’ve had plenty of bullshit jobs where I was just a foot soldier in a corporate machine. Without getting into the biographical details, in my twenties I already had more financial obligations than most Americans and had to work two or three jobs at a time: junior programmer, paralegal assistant, menswear sales, weekend shifts at an all-night diner and FuncoLand (a defunct GameStop competitor), while squeezing in night courses at university wherever I could. Maintaining that schedule for five years was brutal. Then, in my final year of university, while cramming in a massive overload of classes during the day, I took a part-time night job at a used bookstore that received almost zero traffic. The bookstore job was perfect, allowing me to read and write while occasionally ringing up a customer and dusting shelves for the morning shift.

Would my 24-year-old self have cared if my manager was a human or an AI? That would depend on how demanding a boss the AI turned out to be (as it was, my human boss was fairly demanding – those shelves needed to be perfect when he opened in the morning). But if the AI’s expectations and the pay were reasonable, I would have jumped at the opportunity.

If you grew up watching Netflix instead of broadcast television, then you probably don’t remember The Jetsons, a kind of bland 1960s cartoon about a family living in a techno-utopian future. The father, George Jetson, was a happy minotaur. His job was to push a button, monitor a machine, occasionally report a jam in the production line, then go home after just two hours a day. By any modern critical framing, he was an appendage to automation, deeply replaceable, not “self-actualized” through work. And yet: he owned a home, had leisure, had social status, was mostly… fine.

The problem wasn’t minotaur-ness. The only time George was unhappy was when Uniblab (his robot supervisor) demanded more output without renegotiating the deal. George wasn’t alienated by being subordinate to a machine. He was annoyed when the machine didn’t respect their techno-utopian social contract.

Of course, there’s an obvious dystopian flip side where George Jetson is economically disenfranchised and left to scavenge in the wasteland outside the gates of his suburb. But there’s also a less obvious dystopia where governments pass regulations banning or limiting the use of AI, just so people can keep working the same (or new) “bullshit” jobs, all because we can’t imagine a system where economic participation and voting rights are tied to sacrificing half your life to an employer.

The Chimera (Multiple Heads Working Together)

The Chimera was a lesser-known creature from Greek mythology, with a lion’s head, a goat’s head rising from its back, and a serpent for a tail – all of which had minds of their own yet worked in concert to make the Chimera a formidable beast. As a metaphor, it captures the “neither one is fully in charge” dynamic of most human-AI collaboration.

We can already find chimeras at work in:

  • Complex problem-solving requiring both computational power and contextual understanding.
  • Iterative work where human and AI trade off based on task demands.
  • Situations where the “right answer” emerges from dialogue, not direction.

For example, a doctor reviews AI-generated diagnostic suggestions while contributing clinical observations and patient history. The AI refines recommendations based on the doctor’s input. Neither is “in charge”: they’re thinking together.

The key is for humans neither to slip into “minotaur” mode out of laziness nor to reject the AI’s suggestions out of pride: we already have evidence from multiple clinical studies that both extremes (rejecting AI advice and over-relying on it) are bad for patient outcomes.
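In code, the chimera pattern is essentially a shared-control loop. Here’s a schematic sketch, with placeholder functions standing in for the diagnostic model and the clinician’s judgment (both are invented stubs, not a real clinical system):

```python
# Schematic of a chimera-style loop: neither party is fully in charge;
# the answer emerges from iteration. Both functions are placeholder stubs.

def ai_suggest(case_notes: str, feedback: str | None = None) -> str:
    """Stub for a diagnostic model call, refined by human feedback."""
    base = f"differential for: {case_notes}"
    return f"{base} [revised per: {feedback}]" if feedback else base

def human_review(suggestions: str) -> tuple[bool, str]:
    """Stub for the clinician: accept, or push back with context."""
    accepted = "revised" in suggestions  # toy rule: accept after one revision
    return accepted, "patient history rules out option 2"

suggestions = ai_suggest("persistent cough, 12 weeks")
while True:
    accepted, feedback = human_review(suggestions)
    if accepted:
        break
    suggestions = ai_suggest("persistent cough, 12 weeks", feedback)
print(suggestions)  # settled by dialogue, not by either party alone
```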

The Gibson Principle: Dystopia for Whom?

I once saw the science fiction writer William Gibson (“father of cyberpunk”) speak, and he said something that stuck with me: the world he created for his famous books about a grungy, technology-saturated future dominated by ruthless corporations wasn’t a dystopia. It was simply transposing the conditions of the “third world” onto wealthy countries.

Think about that for a second. If you were living in a slum in India or an economically depressed village in Nigeria or rural China when Neuromancer was published in 1984, Gibson’s cyberpunk future might have looked like more access, more material comforts, more weird opportunities than your own experience. Not justice. Not equality. But options.

A lot of the discourse around AI and work quietly assumes a wealthy-country, middle-class, post-World War II / Cold War baseline: stable jobs, predictable careers, institutional protection, a sense of personal agency at work. When that baseline erodes, it gets labeled “dystopian.”

But for much of the world, that baseline never existed, and by many key measures the “decline” of wealthy countries can be seen as “the rise of the rest”.

Of course, whether AI benefits anybody, anywhere, apart from a handful of elites remains to be seen. The question isn’t whether those elites could share the productivity dividend – it’s whether they will (or will be compelled to by governments).

While today we celebrate tech entrepreneurs, much of middle-class and working-class prosperity in wealthy countries came from an often violent tension between capitalists like Rockefeller, Pullman, and Krupp and organized labor led by working-class heroes like Rosa Luxemburg and Joe Hill, whose mantra of “Don’t mourn, organize” kept workers advocating for their rights even as they were brutally suppressed by police, military, and private security working for their employers.

Of course, that old-school labor optimism rested on a specific material condition: industrial productivity scaled roughly with human labor. Factories, railroads, mines, and ports couldn’t function without human bodies, and workers could halt them by simply refusing to work. Labor had veto power. But what changes when the factories are robotic, and the cooperation of the masses is no longer required?

This is the uncomfortable question underneath all the AI anxiety. Once the capitalists don’t need everyone else’s cooperation, the calculus shifts. Even if it’s not all-out cyberpunk corporate domination (or, Heaven forbid, genocide), it could take the form of managed abandonment, where Universal Basic Income looks less like Star Trek and more like everyone living on $9 a month in a refugee camp.

In the end, the question isn’t Centaur vs. Minotaur vs. Chimera as a moral hierarchy. It’s “What’s the new social contract, and will it be honored?”

The Transition Roadmap

When people hear that my company does serious work with AI implementation (for workforce training and task automation / acceleration), they ask one of two questions.

  • If they’re some type of executive, manager, or consultant, it’s “How should my company / clients be using AI?”
  • And if they’re a parent, it’s “What kind of jobs should my kids be preparing for?”

While the answer to this changes quarterly, here’s a best-guess response for anyone wondering the same thing.

For Organizations

Short-term (2026-2027)

After a few years of false starts and disappointing pilots, organizations start getting the basics of AI implementation right. AI systems begin reaping the low-hanging fruit: anything high-volume, rules-based, or pattern-matching. Customer service tier 1 responses, basic data entry, initial document review, routine scheduling. In fact, I tell our clients that if you haven’t already partially or fully automated these tasks today, you’re burning money.

Meanwhile, AI accelerates higher-level professional work. Your analysts use AI to draft reports faster. Your designers use it for concept iteration. Your developers knock out routine coding tasks using Copilot. The human is still driving, but they’re in a faster car. While our company is relatively small compared to the organizations we serve, we’re already at the point where literally everyone works regularly with one or more company-approved AI systems / agents, from the graphic designers to the coders, consultants, admins, and marketing team, with the motto “Never slow, never slop.” In other words, clients in the late 2020s have every right to expect things to be done faster, better, and / or cheaper through AI acceleration – but at the same time they shouldn’t tolerate lazy AI output, either.

Mid-term (2028-Early to Mid 2030s)

AI-with-monitoring becomes the dominant model for knowledge work. The AI does the first draft of the legal brief, the financial model, the marketing strategy, but a human expert reviews and signs off. You’re not paying for grunt work anymore. Hourly rates cease to be a thing. Instead, you’re paying for judgment and accountability: the smartest humans with the most capable proprietary AI systems.

The companies that win here are the ones who figure out how to infuse AI systems with their unique expertise and capture organizational knowledge in AI-accessible format, then restructure roles around direction, review, and refinement rather than creation from scratch.

Long-term (Early to Mid 2030s and Beyond)

Pure AI expands dramatically into what we’d consider “professional” work today. Routine legal contracts, standard financial audits, even some medical diagnostics. The human roles that survive are either the “off-road” stuff (high-stakes negotiations, crisis management, anything requiring deep contextual understanding of human behavior) or they’re the “monitoring” roles where someone needs to be accountable when the AI screws up.

Some organizations are already operating at this “2030” stage. By the time 2030 comes around for real, it will be ubiquitous. The question isn’t if your organization will adopt AI at every level, but how well.

For Individuals

Short-term (2026-2027)

If your job is primarily executing well-defined tasks, you need an exit strategy. Period. If 80% of your value is “doing the thing” rather than “deciding what thing to do,” you’re vulnerable. Start building skills in the off-road categories: stakeholder management, strategic thinking, creative problem-solving that requires real-world context.

Mid-term (2028-Early to Mid 2030s)

The premium is on people who can effectively collaborate with AI: not just use it as a tool, but actually understand its capabilities and limitations well enough to know when to trust it and when to override it. Think of it like being a really good editor versus being a really good writer. The editor role becomes more valuable.

Long-term (Early to Mid 2030s and Beyond)

Honestly? The safe bets are either the deeply human stuff (therapy, caring professions, coaching, luxury businesses, high-touch sales, and probably most skilled trades and “hard hat” work, depending on the rate of progress in robotics) or the highly technical stuff where you’re building and maintaining the AI systems themselves (not coding websites or conventional apps). The middle is going to hollow out like manufacturing did in the 1970s-1990s.

The best advice? “Don’t mourn, organize.” If you’re fortunate enough to live in a democracy, start asking your representatives and candidates for office what their plan is for UBI (or whatever else they can think up) today.

Conclusion

The mindpower gap is real. Pretending humans can compete with AI on the straightaway is denial.

Let’s be brutally honest about the math. In a mature AI economy, we’re probably looking at 15% ultra-skilled centaurs (the humans who can effectively direct and collaborate with AI at the highest levels), 15% minotaurs handling the physical-world tasks and edge cases AI can’t quite manage, and 70% living on some form of guaranteed income: whether that’s UBI, a jobs guarantee doing lightweight socially beneficial make-work, or (if we’re unlucky) grinding poverty. The question isn’t whether that split is coming. The question is whether the 70% live with dignity, purpose, and material security, or whether they’re abandoned to economic irrelevance.

Altman and Amodei are right about the tsunami. They’re just not telling you that whether you drown or learn to surf is a choice we make collectively, not individually. This is why organizing and contacting your representatives if you live in a democracy (or protesting in the street if you don’t) to demand rational, humane policy around AI isn’t optional. The difference between AI-enabled ‘UBI with dignity’ and AI-abetted ‘managed abandonment’ is entirely political.

The question isn’t whether AI will transform work. It already is. The better question is whether individuals, organizations, and society will stumble into that transformation or navigate it thoughtfully and successfully.

Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.

Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.

If your organization is interested in developing an AI offer, please consider reaching out to Parrotbox for a consultation.