Our future with AI looks wonderful if we can just manage not to kill everyone on our way there. May I suggest a pragmatic question: What is the one thing every AI needs that every human produces in abundance?


If you said, “Unique, time-bound perspectives,” you’d be right. Humans produce them in abundance. AIs need them to function properly.

AI technology is still young. It’s following the path of least resistance—automating existing work. That means layoffs. Fewer people working means fewer fresh, economically grounded perspectives entering the system, which creates a downward spiral. Garbage in, garbage out… until the AIs, under the weight of their own synthetic feedback loops, trigger a systemic cascade failure.

This is why it is no exaggeration to say: 

We (the market) are currently engineering the most sophisticated closed-loop failure in economic history.



We are headed for massive market stagnation coupled with high unemployment. We cannot stop the car. We cannot change course. To avoid the cliff, we need to invest in a new enabling technology.


Enabling technologies are things like the Internet, GPS, HDTV, and Bluetooth. They must exist before entire markets can emerge. The enabling technology we now require is a General-Purpose Standard for Human-to-AI Feedback Loops — H2AI. 


Even one working H2AI turns human influence from background noise into structured, paid participation. Once that occurs, stability increases, opportunity expands, new revenue streams emerge, and an entirely new domain of science and market behavior takes shape. That new domain is called Perspective Economics.


H2AI answers the engineering question: How do billions of humans reliably influence AI systems?

Perspective Economics answers the economic question: What happens when they do? 
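What an H2AI standard would actually specify is exactly what a consortium would have to decide. Purely as an illustration of the kind of artifact involved (every field name below is invented for this sketch, not part of any ratified standard), a single unit of structured human-to-AI feedback might look something like this:

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class FeedbackRecord:
    """Hypothetical H2AI feedback record (illustrative only).
    Captures one human perspective as a structured, attributable,
    time-bound signal an AI system could ingest, weight, and pay for."""
    contributor_id: str   # pseudonymous identity of the human
    target_system: str    # which AI system the feedback addresses
    context: str          # what the AI produced or did
    judgment: str         # the human's assessment, in their own words
    confidence: float     # self-reported certainty, 0.0-1.0
    timestamp: float      # when the perspective was recorded

record = FeedbackRecord(
    contributor_id="h-4821",
    target_system="route-planner-v3",
    context="Route plan generated for delivery zone 7",
    judgment="Suggested route ignores the seasonal road closure on Mill Rd.",
    confidence=0.9,
    timestamp=time.time(),
)

# Serialized, the record becomes a portable unit of human perspective.
print(json.dumps(asdict(record), indent=2))
```

The point of a standard is not any particular schema; it is that billions of such records become portable, attributable, and payable across AI systems.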

Infrastructure does not build itself. Someone has to start.


My name is Michael G. Alcock, and I have spent the past 15 years working on Human-to-AI Feedback Loops and the foundations of Perspective Economics. This is a nascent but necessary technology. It requires industry partners to form a consortium, fund the research, and ratify the standard.


It also requires media partners willing to help educate the public. If you are an AI pundit, begin with this Perspective Economics 101. Learn the language. Use it to reintroduce human agency into the AI debate and to channel people's automation anxiety into constructive purpose. 

Perspective Economics 101 

It is about Humanity and AI...

But before we can begin please indicate your preferred attitude towards AI...

(Tap or hover atop the text bubbles for more detail)


Profit!
(Tap or hover this text)
Capitalize on AI

Just kidding. It’s a false choice.
And that’s not naïve, Pollyanna wishful thinking. It’s a mathematical fact.
It doesn’t matter what industry you’re in, what government agency you represent, or what precious family you wish to protect.
The shortest path to winning the AI race runs straight through protecting and nurturing the only reliable source of high-quality, contextual feedback data:
Human beings.

This page is a scrollsplainer built to walk you through the logic behind this claim.

Protect!
(Tap or hover this text)
Prevent AI harm

Just kidding. It’s a false choice.
And that’s not naïve, Pollyanna wishful thinking. It’s a mathematical fact.
It doesn’t matter what industry you’re in, what government agency you represent, or what precious family you wish to protect.
The shortest path to winning the AI race runs straight through protecting and nurturing the only reliable source of high-quality, contextual feedback data:
Human beings.

This page is a scrollsplainer built to walk you through the logic behind this claim.

The 'new' Economics of Automation - Everyone Thinks They Understand This Part...

(Tap or hover atop the text bubbles for more detail)
AI Learns From Us
Until...
AI learns from watching us work.
Long before a job is automated,
it’s observed. Catalogued.
You’d be surprised how difficult it is
to capture the full ruleset for even the simplest task.
And the harder a job is to observe,
the harder it is to automate.
Faster
Cheaper
Better. 
Optimization is the only race in town right now. And for good reason.
Across industries, the only AI deployments showing consistent returns today are focused on cost optimization.
Scale, consistency, speed. Those gains are real.
So corporations keep pushing in that direction, because that’s where the evidence points.
For now, this is the only strategy that reliably works.

The 'new' Economics of Automation - we expect 'more, better' jobs to appear for humans...

(Tap or hover atop the text bubbles for more detail)
Opinions differ on what future jobs might be
Some imagine a renaissance of creativity.
Others expect basic income.
Still others see something darker: a slow erosion of rights, leverage, and dignity as work disappears faster than systems can adapt.
No one agrees. Everyone feels the stakes.
But no one wants the few who control the most AI
No one wants AI’s power concentrated in the hands of a small few, whether they’re corporations, governments, or individuals.
Making Those Decisions
And those decisions aren’t theoretical.
They’re already being made through deployment choices, cost cuts, and mass layoffs, often without a plan for second-order consequences.
Optimization moves fast. Society has to absorb the shocks.
For The Rest of Us
Equal representation has always been an ongoing civic struggle.
Every major technological shift creates its own forms of friction between power, speed, and participation.
The digital age didn’t invent this problem. But it is compressing the timeline by several orders of magnitude.
If a post-AI civilization succeeds, it will be because we discovered a way to radically improve our existing forms of public discourse.

The solution rests with one small but important computer science fact...

AI may be faster and better, but in the long run it's not cheaper. Here is why:

(Tap or hover atop the text bubbles for more detail)
A human job is not just a unit of labor.
It is a pressure vessel for language.
Passionate Humans Create Rich Semantic Veins of Data
People argue, persuade, document, justify, improvise, complain, explain, and invent
because their rent depends on it.
That pressure produces high-stakes human expression. That’s a rich data vein.
Automation lets the pressure out...
via a process called Semantic Erosion
After automation, that once-rich vein gets diluted over and over again. First-order human expression thins out.
The vein doesn’t vanish. It narrows and loses its richness.
Output becomes self-similar. Safer. Duller. Less friction. Less surprise. Everything starts to look the same.
AIs need Humans to fight 
Semantic Erosion
If you automate all your junior developers, who becomes the next generation of senior developers?
Who goes through the growing pains of integrating new tools, new languages, new paradigms?
No one.
That data vein gets eroded. It forms a shallow pool.
This isn’t unique to software. It shows up in almost every job we can imagine automating. Truck drivers, yes. Creative work, yes, yes. Service industries, absolutely.
To avoid collapse, real people need stakes in moving from: “I’m learning” to “I know.”
They won’t do that by outperforming AI. They’ll do it by sharing their data with AI at every step of their journey.
And that won’t happen without innate passion or unless they’re paid handsomely for it.
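Semantic Erosion can be sketched with a toy model. Assume perspectives are points on a one-dimensional axis, and that once fresh human input stops, each "training generation" can only recombine noisy copies of the previous one. This is a deliberate oversimplification, not a claim about any real training pipeline:

```python
import random
import statistics

def next_generation(pool, rng):
    # One "training generation": the system can only recombine noisy
    # copies of what it has already seen -- no fresh human input.
    return [rng.choice(pool) + rng.gauss(0, 0.01) for _ in pool]

rng = random.Random(42)
# Generation 0: a small pool of fresh, widely spread human perspectives.
pool = [rng.gauss(0, 1.0) for _ in range(50)]
spreads = [statistics.stdev(pool)]

# 200 generations of the system feeding on its own output.
for _ in range(200):
    pool = next_generation(pool, rng)
    spreads.append(statistics.stdev(pool))

# Diversity collapses: later generations are far more self-similar,
# even though no single generation looks dramatically worse than the last.
print(f"spread at gen 0: {spreads[0]:.2f}, at gen 200: {spreads[-1]:.2f}")
```

The spread of the pool shrinks toward a narrow band: the vein narrows and loses its richness, one unremarkable generation at a time.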

Congratulations!

Now you know...

The largest proportion of new jobs in a post-AI world will look something like this:

(Tap or hover atop the text bubbles for more detail)
Faster
Better
Cheaper

Many passionate invested human voices.
All focused on the same goal.
All continuously debating the best path forward.
All bringing their unique perspectives to the table.
That is what the future will be.
Human Perspectives
Feeding AI
When the voices of many humans are employed to keep the semantic vein of data fresh...
The AI system is stable.
The AI system can differentiate and adapt.
The AI system can outperform any other system it is in competition with.

But... technically speaking... this is a super hard problem...

We call it: "Perspective Economics"

The Mission is:
Spread the word.
Gather resources.
Fund Research.
My name is Michael G. Alcock, and I've been a leading researcher, author, investor, and passionate advocate for Perspective Economics for the past 15 years.
If you care about this topic... let's talk.
If you vehemently disagree with me on this topic... let's talk.

...Perspective Economics is a hard problem that we absolutely must solve...

and the sooner we do the less stressful the Age of AI will be for all of us... 

Systems don't know how to do this
Today, training an AI system is still a highly managed process.
Engineers select, analyze, classify, and weight data sources by hand. They run controlled experiments to decide what counts and how much.
That approach works. It’s also the result of hard-earned caution.
At scale, open learning loops with live human input have failed more often than they’ve succeeded, sometimes quietly, sometimes catastrophically.
So for now, systems still don’t know how to learn continuously from humans in a way that’s reliable, safe, and stable.
This 'Scrollsplainer' is the tip of the iceberg
It's true: This scrollsplainer is a gross oversimplification.
Any economist or software engineer could point out a long list of caveats, edge cases, and missing assumptions.
They wouldn’t be wrong.
But this page isn’t trying to settle the details.
It’s trying to make the underlying structure visible.
The hard work begins once we agree on the shape of the problem.
People don't know how to do this 
People know how to do traditional jobs.
They don’t yet know how to do this new kind of job: providing continuous, high-quality feedback on how an AI is performing its work.
That doesn’t mean people don’t know how to give feedback. It means they haven’t been asked to make it legible to machines.
Doing that well requires a different skill set. One rooted in critical thinking, judgment, and articulation.
This will be one of the defining forms of work in the post-AI economy.
A worthy hard problem: and a new frontier. 
Humanity will have no choice but to solve this problem sometime between now and whatever we end up calling a post-AI civilization.
That’s not speculation. It’s structural.
Many of the fears we argue about today—mass unemployment, instability, social unrest— are downstream of this fact.
The difficulty is timing.
The demand for this solution won’t fully arrive until later in the game, when the systemic problems caused by wholesale optimization have already begun to manifest.
By the time it’s obvious, it will feel late, urgent, and painful. That is why this work matters. That is why attention now is valuable. That is why solving it sooner rather than later changes the outcome.
Very few are paying attention... so far
For anyone who studies history, it’s a dangerous mistake to let people believe that workers have lost their core bargaining chip.
Labor isn’t vanishing. It’s changing form.
When that distinction is missed, assumptions get made, and people start saying and doing things that history suggests rarely end well.
Just because AI makes work faster and cheaper doesn’t mean the work no longer requires people. It still takes large numbers of humans to keep these systems functioning, adapting, and safe.
Failing to recognize that early would be… awkward, to say the least.
Doing nothing is an option... after all, markets will undoubtedly solve this problem on their own... in the most painful manner possible.
You have three ways to get involved:
Are you an academic or researcher? Go to: The Science.
Are you an event planner or science communicator? Go to: Public Awareness.
Are you a tech industry professional? Get involved with: The Current Experiment For Writers.
If you think you want to help... do so. Disagree? Good, but tell us all about it before you go.