Who Is the Future For?
Last year a founder walked me through a robotics system built for work that damages people. She had video from a facility floor: the same motion, performed thousands of times a shift, by people who had learned to pace themselves against damage they knew was accumulating. The injury data was specific: musculoskeletal damage, repetitive strain, careers ended by cumulative toll. The facilities couldn't hire enough workers for these roles. The people they hired left within months because the work was physically punishing. The technology was genuinely good. It would also end the jobs of the people doing the work.
Again and again, the slide that got the most attention from customer boards was the one showing headcount reduction.
An earlier version of this essay told that story as an indictment of the system. The board couldn't hear the human case. But that isn't quite right. Injury reduction is a reduction in liability cost, and any board can price it. Turnover is a training expense. Demographic shortfall is an operational risk with a timeline attached. Every human argument in that pitch has a dollar translation, and the board found it. The system heard these things fine.
Then it discounted.
The quarterly cycle is the starting rhythm. A cost saving this quarter gets priced at full weight. The competitive advantage of deploying early, building process knowledge while rivals wait, gets acknowledged but discounted: real, but uncertain, and five years out. The demographic cliff arriving in ten or fifteen years shows up in strategy decks and almost never in purchase orders.
The founder's design constraint (eliminate the work that damages people) wasn't necessarily about a longer timeline. It was about what you choose to see. The board and the founder were looking at the same technology. The board saw cost. The founder saw consequence. Both were doing math. The gap between them was not a wall but a gradient: near-term consequences vivid, medium-term consequences fading, long-term consequences nearly indistinguishable from one another regardless of their actual weight.
It is difficult to get a man to understand something when his salary depends upon his not understanding it.
— Upton Sinclair
The system that funds, builds, and deploys technology is not broken. It is short-term.
A public company reports quarterly, but nobody in the room is thinking only about this quarter. The VP making the purchasing decision is thinking about the trajectory of their division over three to five years. The board is weighing capital allocation against a strategic plan that runs further than that. People are not short-term in any simple way. What they are is gradient-bound: the further out a consequence sits, the less weight it carries in the decisions being made today. Not zero weight. Less weight. And past a certain distance, the discounting flattens: a serious problem in ten years and a catastrophic one in twenty look roughly the same from here.
This is a more precise problem than the one usually described. The standard critique says the system can only hear money and is deaf to human outcomes. But money can represent almost anything given enough time. Injury reduction becomes lower insurance premiums over five years. Workforce stability compounds into institutional knowledge over a decade. The system can price all of this in principle. The gradient doesn't just make longer-horizon signals quieter. Further out, it makes them indistinguishable: all faint, all deferrable, regardless of actual magnitude. Sometimes a proxy metric cuts through: "reduce emissions" gave climate change a handle that pulled generational consequences into quarterly decisions. This territory has no equivalent proxy. "Net human consequence" is real but unmeasurable.
Much of the current enthusiasm for AI is justified. The technology is producing genuine capabilities: new ways to work, to analyze, to build things that were not previously buildable. That is real and worth being excited about. The question is not whether the technology is good. It is what happens on the parts of the gradient where nobody is looking.
We build tools, and the tools rebuild us, and the rebuilt us builds different tools. This has been happening since before writing weakened memory and the clock restructured time. What is new is the speed. The iterations between technology and culture that once took generations now take years. Social media appears to have reshaped adolescent social development within a single cohort; the research is still arguing about magnitude, but not about direction. AI is doing something faster and broader: restructuring how professionals think, write, analyze, and make decisions. Changing what counts as expertise. Changing what it means to know something. These shifts are happening inside the working lives of the people experiencing them, which means the consequences are accumulating before anyone has the distance to read them.
Sherry Turkle was writing about what screens do to empathy in 2011. Tristan Harris was inside Google flagging attention-capture design before the phrase entered common use. They were not wrong. They were describing consequences that were real but discounted, because those consequences were competing with nearer, louder signals. Some parents acted early. Some schools acted. The aggregate system was last, when the consequences had moved close enough on the gradient to outweigh the things already commanding attention. By then, a generation of adolescents had already grown up inside the experiment.
To see what is in front of one's nose needs a constant struggle.
— George Orwell
The argument between technology optimists and technology doomers is the wrong argument. Both assume technology has a trajectory. The optimist says the trajectory is upward: technology as primary lever of human advancement, acceleration as moral imperative. The a16z manifesto is the clearest expression. The doomer says the trajectory is dangerous: technology outrunning governance, the responsible move being to slow down. Both treat technology as a river with a direction (nourishing or flooding, but either way flowing on its own).
Technology does not have a trajectory. It has funders, builders, and deployers making choices inside systems that render the aggregate consequence of those choices invisible to everyone inside them. The inevitability framing removes accountability from decision-makers by telling them they are not making decisions.
But among those who do acknowledge choice, there is a further sorting. It applies to everyone in the decision chain: funders, builders, deployers, buyers. Some read first-order: what does the technology do, and who will buy it. Capability, market, margin. This is the language of most purchasing decisions and most investment committees. It is how capital gets deployed at speed and it is not wrong. Some read second-order: if this capability exists at scale, what becomes possible that wasn't before? What closes, what opens, who moves where? A few read third-order: this technology, combined with the demographic shift, the regulatory change, the reshoring imperative, recomposes the architecture of a sector.
The depth of reading varies across every part of the system. Some founders build around structural recomposition; others build a feature and call it a platform. Some investors genuinely hold the third-order view; some use the language (paradigm shifts, market creation) while the investment committee memo reduces to addressable market and unit economics. First-order analysis wearing third-order language is not an investor problem. It is a human one.
There is a cohort right now building AI theses around US technological supremacy. The geopolitical case is real: who leads in AI will shape what gets built and on whose terms. But when that lens becomes the only lens, the questions it doesn't ask go unasked. What the technology does to the people who live with it, how it reshapes work and knowledge and professional development: these don't disappear because the race is on. They just stop being visible inside a framing that has room for one question.
Applied to AI, the distinction is concrete. First-order: this model automates this task, here is the productivity gain, here is the market. Second-order: if AI handles analysis and first-draft thinking across entire professions, what happens to how expertise develops? Third-order: at minimum, the institutional structures (universities, professional training, credentialing) were built on assumptions about human cognition that are now in play. The institutions are the visible part. They are not the deepest part.
The difference between these readings is not analytical sophistication. It is attention. How long you are willing to sit with the complexity of what a technology actually changes, how many threads you can hold, before you reach for a number or a narrative that closes the question down.
The first principle is that you must not fool yourself — and you are the easiest person to fool.
— Richard Feynman
I might be doing exactly this.
The time-horizon advantage of early-stage venture capital is partly structural: a ten-year fund sits further along the gradient than most corporate decision-making reaches. It can underwrite consequences that are real but too distant for a purchasing cycle to price. But the deeper advantage is attentional. You see more because you have been looking longer. The demographic curve, the injury data, the facilities that will close without the technology: these become visible not through better models but through accumulated contact with the people and the work. And once you see the second and third-order effects, you carry them. They don't leave when you close the spreadsheet.
This is not a claim to moral clarity. A ten-year fund has its own gradient. LPs ask about distributions. Markups matter for the next raise. The pressure to show near-term progress pulls at the same long-horizon discipline the fund was built to hold. The structural advantage is real, but it is not a virtue. It is a longer leash on the same set of forces.
What you see at that distance depends on what you value. The gradient is the same for everyone. What rises above the noise is not. My bias is toward human agency, technology that is more about extending what people can do than replacing them, long-term human thriving. These are also the biases of someone who chose to make a living deploying technology. Others will weight differently, and the gradient will show them different things.
But reading further along the gradient reveals demand the near-term view misses. At the deep tech frontier, where a technology does something not previously possible, those longer-horizon signals are where massive structural need is forming, need that the near-term weighting understates. I want the outcomes. I also need the returns. Most of the time these are not in competition: they are the same signal read at different points on the gradient. Sometimes they genuinely diverge, and when they do, the fund's obligations are not ambiguous. What makes the alignment claim honest is not that it always holds, but that at the deep tech frontier it holds more often than the near-term view would predict.
· · ·
The gradient does not stop at ten years. Some consequences play out on timescales that no existing capital structure can hold. AI is the hardest case, because it operates on both ends simultaneously. The reshaping of how people think, work, and develop professional judgment is happening now, inside current careers. The deeper structural consequences (what happens to institutions, expertise, and the meaning of skill when an entire generation learns to think with AI from the start) are generational, and no fund has a generational horizon. The louder the near-term signal (and AI's near-term signal is the loudest in a generation), the harder it is to hear either: the disruption already underway or the slower recomposition underneath it.
Everyone making these decisions will live inside their consequences, as will their children, and everyone else outside the room. Most are not weighing the non-commercial ones even for themselves. The people with the least voice carry the least weight. This has always been so.
Technology will continue to be built. Within that, every funder, builder, and deployer is making choices about what gets built and for whom. The discipline of asking "who is this for" will not produce the clarity of "reduce emissions." There is no single metric.
But the people making these decisions are not just asking questions. They are building architecture: fund structures, milestone frameworks, deployment choices. None of this reaches the generational horizon. For that, different instruments are required. Regulators and standard-setters are building architecture too. So are the communities who adopt, resist, or demand a say in what gets deployed.
Each of these is a structural decision, and the question of who the technology is for either shapes it or doesn't. The future is being constructed by people who mostly are not asking. Each act of asking is small, and it is consequential. It is available to anyone willing to hold the question longer than is comfortable. That is not much. But the aggregate of these choices, across all of the people making them, is how the direction actually gets set. It has been set before. It can be set again.
Many thanks to Kent Jenkins for his review of an earlier version of this piece.