The Risks of Silicon Valley’s Grand Visions: AI, Longtermism, and Space Ambitions

The future is perpetually receding. Or, as the Polish philosopher Zygmunt Bauman reminds us, “The future is not a destination. It is a horizon, and horizons are not to be reached.” A poetic warning in which Silicon Valley’s most pious optimists seem uninterested. For them, the future is a corporate roadmap with guaranteed returns. In More Everything Forever, Adam Becker surveys the grand ambitions of our technological elite—their messianic faith in AI, their imperialist yearnings for space, their moral gymnastics in the name of humanity’s unborn trillions—with the sharp eye of someone watching a high-stakes magic trick where the rabbit never actually appears.

The book’s dramatis personae are the usual titans of futurist fantasy—Sam Altman, Eliezer Yudkowsky, Jeff Bezos, Elon Musk, Marc Andreessen—each convinced that his particular vision of progress will not only carry civilization forward but preserve it from destruction. Their rhetoric toggles between utopian rhapsody and apocalyptic alarmism, each prophecy serving the same purpose: to justify their power in the present.

Becker begins with artificial intelligence, the great cosmic dice roll of the coming century. Altman, CEO of OpenAI, believes AGI (artificial general intelligence) is inevitable and will usher in a world of limitless prosperity, where AI-driven abundance eliminates poverty and drudgery. Such an achievement would require a radical redistribution of wealth—not through government but through corporate shareholding, where the public owns fractional slices of an AI-powered economy. It’s an innovation-age feudalism, where the lords of Silicon Valley manage the estates and the peasantry receives dividends instead of wages.

Yudkowsky, in contrast, is gripped by terror. He sees AGI not as salvation but as an extinction-level event, an intelligence too vast and alien to be controlled. His prescription? A global halt to advanced AI research, enforced if necessary by nuclear deterrence. Between Altman’s giddy accelerationism and Yudkowsky’s doomsday cultism, one wonders whether AI is a technology or a theological schism. Perhaps the real question is not whether AI will destroy us or save us, but why its most vocal prophets seem incapable of imagining a future that is neither utopian rapture nor total annihilation.

These are familiar fictions. Since the Enlightenment, the West has told itself stories of progress as an inevitability, a moral force. The Industrial Revolution promised universal prosperity; it delivered empire, extraction, and class warfare. The space age was meant to democratize the stars; it became a Cold War arms race with a more poetic PR strategy. The neoliberal digital revolution was heralded as an engine of equality; it gave us monopolistic platform capitalism. Utopian predictions of whatever age have a strange quality in common: they tend to serve those making them.

Perhaps the most explicitly moralized version of this logic comes in the form of longtermism, the philosophical movement that has entranced much of Silicon Valley’s leadership. Popularized by William MacAskill, longtermism holds that we should prioritize the welfare of future generations—not just our great-grandchildren, but the potential trillions of humans (or post-humans) who might exist if civilization expands beyond Earth. In his book What We Owe the Future, MacAskill makes a compelling case for taking our long-term impact seriously. The central premise is not absurd; many of our greatest follies stem from thinking too short-term.

But Becker rightly points out that longtermism, in the hands of the tech elite, can slide from ethical consideration into a justification for present-day power. If the moral weight of the future dwarfs all current suffering, then almost any action in its name can be rationalized—including monopolizing AI research, consolidating wealth, and steamrolling regulation. The logic, carried to extremes, risks treating the living as expendable in service of the hypothetical.

Nowhere is this future-obsession more literal than in the tech industry’s spacefaring ambitions. Bezos dreams of a trillion-person interplanetary civilization, Musk of self-sustaining Martian colonies. They frame it in existential terms: expand or perish.

The idea that Earth might remain our home—requiring care, repair, and redistribution—barely registers. In Post-Growth, Tim Jackson calls this obsession with endless expansion a “story we have told ourselves for so long that we barely recognize it as a story.” Becker, like Jackson, notes that the myth of infinite progress is ultimately a deferral of reckoning. It allows those at the helm to avoid engaging with the limits of the planet we already have.

This is the quiet devastation at the heart of More Everything Forever: the realization that these grand techno-utopian visions, rather than democratizing the future, consolidate control over it. 

The rhetoric of inevitability—AGI will come, we will colonize space, growth will continue—serves a specific function: it forecloses alternative paths. The billionaire futurists do not seek to debate the future but to own it. Their dreams of a perfectly optimized, AI-managed, intergalactic society do not invite participation; they demand submission.

And yet, as Becker reminds us, the physical universe is unsentimental. Physics has its own story to tell. Spoiler: it does not include infinite growth. Energy is finite. The stars will burn out. The Singularity is a fantasy not because intelligence won’t advance but because history does not move in straight lines.

Tim Jackson warns in Post-Growth that the real crisis of capitalism is its addiction to acceleration: we are so conditioned to believe that progress means “more” that we struggle to imagine prosperity in any other form.

But some of us manage to. Becker meticulously dissects the grandiose visions of his male anti-heroes, yet some of the sharpest critiques of AI, longtermism, and techno-accelerationism have come from women: Kate Crawford, who deconstructs the myth of AI as a neutral force; Joy Buolamwini, who reveals how these systems reinforce bias; and Shannon Vallor, who argues that technological progress without ethical guardrails is meaningless.

Even beyond AI, the loudest futurists tend to be men. Meanwhile, many female researchers, engineers, and philosophers focus not on hypothetical space colonies but on fixing the problems we already have—algorithmic bias, environmental collapse, surveillance capitalism. The real stakes, they know, are here and now. AI is not an abstract existential gamble but a force already entrenching inequality, consolidating power, and reshaping society in ways that demand scrutiny.

These voices are the braver prophets, calling us to tell a better story of the future, not one of extraction and conquest, but one of responsibility and repair. Stories of sustainability rather than expansion, of governance rather than control, of futures that belong to the many rather than the few. Intelligence—whether human or artificial—means nothing without wisdom. The question is not whether AI will surpass us, or whether we will colonize the stars. It is whether we will let the men who dream of such futures write our history for us.
