
TL;DR:
Software has eaten the world, as prophesied.
Software engineers have gained enormous power outside our own discipline—probably more so than any other class.
That is because software engineers are the ones most able to exercise meta-rationality. This is the critical capacity in current circumstances.
Software engineers are making civilization-level decisions. We aren’t trained for that, and often act unwisely or unjustly.
Nobility—the deliberately wise and just use of power—is the missing ingredient.
Software engineers are now exceptionally well-placed to take up the responsibility of nobility.
And so are you—regardless of occupation or position in society.
This is a post in my nobility arc. In the series, it follows and particularly relates to “Ofermōd,” so you might want to read that post first. More posts about nobility are coming!
Software has eaten the world
In 2011, in “Software is eating the world,” Marc Andreessen prophesied:
We are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy. More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.
And so it came to pass.
The shift took another decade. It has now mostly gone to completion. Just about every business either automated or had its lunch eaten by others that did.1
Software has become a blanket of stasis
Our present moment runs much too fast, but simultaneously it has ground to a halt. This is disorienting and difficult to understand. It seems that superficial changes in culture have accelerated past our ability to keep up, while fundamental structures have sclerosed into immobility.
Beneath constant churn, the backbone of our culture hasn’t moved in two decades. Everyone feels the disappointment of stasis underlying the anxious vertigo of incessant rug-pulls.
I suspect software is at least partly to blame for cultural stasis. It has robbed agency from many occupations. Algorithmic decisions, A/B tests of everything everywhere all the time, now form the cultural backbone. Each day, we get a new superhero movie series, a new bombing campaign, and a new offensive epithet. And yet, all those excitements seem the same as they did last year, and the year before.2 They are the outputs of automated processes that optimize the same metrics they did in 2005, so they get the same answers.
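To make the mechanism concrete, here is a minimal sketch of the kind of optimization loop I mean. Everything in it (names, metrics, numbers) is a hypothetical illustration, not any real platform's code:

```python
import random

def click_through_rate(variant, audience_size=10_000):
    """Simulate measuring a variant's engagement metric on an audience."""
    clicks = sum(random.random() < variant["appeal"] for _ in range(audience_size))
    return clicks / audience_size

def ab_test(variants):
    """Ship whichever variant maximizes the fixed metric."""
    return max(variants, key=click_through_rate)

# Hypothetical contenders: the metric hasn't changed since 2005, so the
# same kind of thing wins every time.
contenders = [
    {"name": "superhero sequel #9", "appeal": 0.032},
    {"name": "something genuinely new", "appeal": 0.027},
]
print(ab_test(contenders)["name"])  # the familiar option wins, again
```

So long as "better" means "higher click-through," the loop can only ever converge on more of the same.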
How are we going to break out of that?
Software engineers are eating the world now
Obligatory glaring example: Elon.
One broader measure: eight of the ten most valuable companies in the world were founded by software engineers.3
Software companies control the flow of meaning through online social networks. That gives them enormous power to shape the future of our culture, society, and selves. They also control much of the flow of goods (Amazon) and services (Facebook and Google, via advertising and influencing), giving them great economic power.
Increasingly over the past few years, the tech industry has deliberately exercised these powers toward preferred political, cultural, psychological, and economic ends. The “tech right” claims credit for getting Trump elected, and for gaining Musk unprecedented authority within the federal government.
But nobody elected them! We don’t want to be ruled by software engineers!
I agree, on the whole. However, power and political authority have never perfectly coincided—and that is a good thing! Total control by political authority is called “totalitarianism,” and it is bad. In a democracy, there should be multiple sources and forms and locations of power. As there have always been.
Those check and balance each other; and the balance of power shifts periodically. A few years ago, unelected virtue experts were the dominant political power in America. So some power shifting to software companies and engineers is not inherently contrary to democracy.
Breaking out of stasis requires agency: the capability and willingness to act in non-routine ways. Someone has to do something different. Who?
Administrators do not have agency. A faithful administrator simply applies a rational system’s rules as given. It is not an administrator’s role to judge what should be done, only to ensure that it is done.
Management school mostly does not teach you how to create new systems, or transform existing ones.4 Most managers’ job is to optimize their operations in the short run.
Virtue experts (activists, “thought leaders,” op-ed pundits, political theorists, moralizers, religious leaders) may have agency. They act, sometimes powerfully, by applying verbal pressure to administrators.
However, breaking out of stasis also requires vision. Movement is impossible without a sense of direction.
Virtue experts have no positive vision. Their agenda is to universalize local morality. That may be a genuine improvement, but at best brings everyone to the experts’ own way of being.5 This is inadequate and uninspiring. And, they act mainly to obstruct actions they consider harmful, rather than promoting transformative improvements. That contributes to stasis.
Churls see the stasis and decadence brought about by administrators and virtue experts, and want to destroy the causes. They have recently demonstrated powerful agency. They resist rule by virtue-seeking bureaucracy, and so may seem the virtue experts’ direct opposites. But they are merely reactive mirror-images. They, too, lack any positive vision. Appetite for destruction is no basis for a system of government.
So. I agree that we software engineers are the wrong people for the job; but when no one else is able to do it, we may be the least wrong.
We’d better get less wrong, asap.
What is “eating the world”? Do we want more of that?
Andreessen’s article left the meaning of “eating the world” vague.6 In retrospect, software has transformed the way most people do most things. These cultural, social, and psychological changes are what we might now call “eating the world.”
Is that a good thing, or a bad thing? It has an aggressive connotation, which suggests it may be bad. The process did include considerable “creative destruction,” obsoleting whole industries and occupations. Some of what it created has also had large negative externalities. The world would probably be better if Facebook and Google suddenly blinked out of existence.
On the whole, I think software’s effects have been for the better, despite causing serious harms as well. And, if my “stasis blanket” suggestion is right, the eating is mostly finished.
If “eating the world” means “dramatically transforming many seemingly-unrelated domains of activity,” do we want that again?
Do we accept stasis?
Would we prefer slow, careful, incremental, local improvements, giving us a chance to nip risks and harms in the bud?
Is there a way to move fast without breaking so many things?
Who decides, and how?
What—and who—could drive a next wave of transformational change?
The reason software engineers can eat the world
Not because software. Because meta-rationality.
Software happened to be what drove major transformations over the past couple decades. Software engineers happened to be the people most directly involved in creating it. But now some software engineers are exerting power far outside that domain.
What has given us the power to do so?
Will we retain that power if software becomes decreasingly transformative?7
What if it becomes increasingly transformative?
Practical rationality’s necessity, and its limits
Effective use of power, in the modern world, requires practical, systematic rationality. “Practical” here contrasts with the rationality of science and mathematics, which mostly don’t provide routes to power.8 Software engineers are paid to produce systems that are immediately useful, which gives us leverage.
Effective use of power requires understanding the systematic operation of modern social institutions, such as corporations and governments. Other disciplines, like management, specialize in that. However, much institutional work, including much management and administration, is just humans carrying out more-or-less formal procedures. Much of that has been replaced with “enterprise software,” which is much of what “eating the world” meant over the past twenty years.
This is one source of power. Software engineers are better equipped to understand some institutional processes than any other group of people: by analogy with computation.
Formal procedures only apply to formal problems, though. We don’t live in a formal world. We live in a nebulous world. People and their interactions are nebulous, for example. That means that the analogy between social processes and computation can be misleading.9
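A toy example may make the analogy vivid. Here is a hypothetical fragment of “enterprise software”: a formal institutional procedure rendered directly as code. All names and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ExpenseClaim:
    amount: float
    has_receipt: bool
    category: str

APPROVAL_LIMIT = 500.00  # hypothetical policy threshold

def approve(claim: ExpenseClaim) -> bool:
    """A formal procedure: crisp inputs, a crisp rule, a crisp output."""
    return claim.has_receipt and claim.amount <= APPROVAL_LIMIT

print(approve(ExpenseClaim(amount=120.0, has_receipt=True, category="travel")))

# The nebulosity lives outside the function: was the dinner really a
# business meeting? Does "team morale" count as travel? The formal rule
# cannot see those questions; that is where the analogy breaks down.
```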
In modernity, power operated by enforcing conformity to institutional systems. Postmodernity is the condition in which that no longer works. The world has become sufficiently volatile, uncertain, complex, and ambiguous—sufficiently nebulous—that institutions cannot adapt. They are now brittle in the face of chaotic social and cultural upsurges.
Meta-rationality works effectively with nebulosity
Meta-rationality bridges the gap between the nebulous actual world and the formal world of abstract computations. Meta-rationality acts on rational systems from outside and above. It orients to the nebulous contexts within which systems operate, and the nebulous purposes they serve. (This is the topic of Part Four of my book-in-progress Meta-rationality.)
As rigid modern institutions break down, meta-rationality increasingly enables effective action. (Power!) It’s the essential postmodern capability: the creation and transformation of adaptive systems serving diverse purposes.
The two disciplines most often engaged in practical meta-rationality are management and software engineering. In both cases, opportunities to exercise meta-rationality become available mainly at the highest levels.
Established institutions mainly still follow a modern, top-down management control paradigm, in which only senior executives have power to create and transform. Entrepreneurship, as an exception, is inherently meta-rational. You create a new institution, and success usually depends on novel insights about nebulous market or operational processes. This is a route to power. Most managers aren’t entrepreneurs, however.
Software engineering is necessarily meta-rational:10
We design, build, and use software as rational systems with nebulous purposes in nebulous contexts that include other rational systems. Success demands a meta-rational overview understanding of the context and purpose, as well as detailed rational understanding of software technology.
In a successful software project, someone has to do that. The ability is uncommon among software engineers, but far more of us are capable of it than managers are.11 And combining the understanding of meta-rationality from software engineering with entrepreneurial leadership may now be the best route to power. It was for Elon Musk, and for the founders of eight of the ten most valuable companies in the world!
Sorcery: fluid competence
Eating the world requires understanding the world, across a wide variety of domains. Not necessarily in depth, but at the level of sorcery. That grants “fluid competence”:
Fluid competence is the willingness and ability to address a situation’s purposes using any field or system of knowledge, skill, practice, method, mode of reasoning, form of understanding, or way of being.
Fluid competence is general competence: being potentially competent enough in any domain to provide “what the situation needs.” It doesn’t mean you are expert at everything; no one can be. It’s based on a realistic assessment of what you can do now, and of what it would take for you to act well enough, outside your areas of expertise, to address the current situation.
At advanced levels, software engineering requires fluid competence, and also gives outstanding opportunities to learn and exercise it. The process is inherently exploratory, because that is the only way to address nebulosity. It can’t be planned in detail; you must improvise and innovate. And it has a much faster experimental cycle than science, conventional engineering, or management.
As one writer puts it:
This teaches people agency—you can just build stuff and get tangible results, over and over again. And crucially, those results can be successful regardless of whether they’re validated by authority figures.
Musk is a remarkable example of fluid competence. He learned enough of an extraordinary number of disparate disciplines to apply them successfully. He did most of that independently, not in a university or other formal educational system.
His company SpaceX’s success depends partly on deep, informal inquiry into rocket science and engineering; and partly on a rapid experimental cycle. Where NASA and its conventional contractors attempt to build rockets that work the first time, but are fantastically expensive, SpaceX builds cheap ones that blow up on the first several launches. They learn more from each explosion than could ever be calculated rationally from physical theory and simulation.
Resilient systems

Software systems are often relatively long-lived. They last many years; sometimes decades. To achieve such durability, we constantly adapt, extend, alter, and even fundamentally transform them. They must mutate to meet rapidly-changing contexts and purposes. This is how postmodernity is.
This is what modernity couldn’t accommodate. Modern durability relied on oaken rigidity: systems holding a fixed form against winds of change. Software durability is flexible resilience, like a willow bending in the wind, unharmed.
Modern management deployed authoritarian control to solidify institutional structures. In postmodernity, management often fails by going to the opposite extreme: making whatever drastic changes are necessary to meet quarterly financial goals.
Clueful software engineers typically wage permanent war against managers: pushing for a longer-term view, unwilling to sacrifice resilience for short-term gains, caring for the health of systems years into the future.
Caring for the long term manifests nobility. It is wise and just.
Decisions with civilizational consequences should consider a still much longer time-frame! But thinking even five years ahead puts many software engineers far beyond almost all managers.
One reason universities used to teach every student classical texts by Dead White Males was to encourage a centuries-long view. That was opposed and successfully dismantled by the left (as discussed in “Ofermōd”). This isn’t an intrinsically right or left issue, though. We could equally well teach the long view in a left-friendly style: “Seven Generations.”
The end of the world
AI is a wildcard. It may flop, or become a modestly useful tool, or impel transformations on the same scale as the internet, or speed us into an inconceivable utopia, dystopia, or total extinction. I don’t think we can estimate how likely any of those are.
“AGI,” “artificial general intelligence,” is the vague term for “the Really Big Deal AI Thing that will change everything‼️” AGI may grant enormous power to some entities—or not. If so, software companies and the engineers who lead them may take that power—or it may be others. As I wrote in “Fear power, not intelligence”:
What we should fear is not [artificial] intelligence as such, but sudden massive shifts of power to agents who may be hostile or callously indifferent. Technological acceleration can do that; but a new sort of AI is neither necessary nor sufficient to cause acceleration. Powerful new technologies are dangerous whether they are wielded by humans or AIs, and whether they were developed with or without AI.
I’ve been involved in AI research, on and off, for fifty years, and have known many key decision makers personally. Far too many were crazy, evil, and/or stupid. Somehow, the field attracts such people. That was true for decades, but it didn’t matter because AI was going nowhere. Now it may determine the fate of the world.
Most heads of leading AI laboratories have repeatedly stated that:
1. We are building AGI, and expect to have achieved it within a few years.
2. AGI is quite likely to cause human extinction.
3. You should give us lots of money and favorable legislation so we can build AGI faster.
It is reasonable to disagree with any of these three claims. You may believe that AGI is impossible, or a long way off; or that it definitely won’t cause human extinction; or that the development effort should be forcibly terminated. However, you can only assert all three claims simultaneously if you are crazy, evil, and/or stupid.
Current decision-makers in AI have repeatedly, dramatically demonstrated ignobility. Their actions have been unwise and unjust.
My post “Ofermōd” hinges on Tolkien’s discussion of the nature of nobility. He considers a decision in A.D. 991 by the warlord Byrhtnoth that caused the death of thousands. Byrhtnoth was motivated, Tolkien says, by lust for glory.
Tolkien says that choosing to go to your own glorious death is consistent with a particular, valuable conception of nobility. Risking the lives of others in pursuit of your glory is never noble, by any standard. Your subordinates may owe you loyalty even unto death, but you bear a reciprocal responsibility of care for them.
The common cynical interpretation of the ignobility of AI leaders is that they are motivated by lust for money and power. I suspect this is wrong. That’s from having known many famous AI guys well, and from my experience of my own motivations.
AI guys are motivated more by lust for glory. We want to be the one who does the most significant thing in history. And, if that means a Russian roulette chance of human extinction—well, whatever, we’re each going to die of something or other soon enough anyway, so it doesn’t much matter, does it?
Tolkien’s understanding of Byrhtnoth’s ignobility formed the moral center of The Lord of the Rings. A tale for our time too…
What better world?
“Eating the world” involves making civilization-level decisions that software engineers weren’t educated for and don’t know how to do well.
The tech industry does understand that it is the main place where power and positive visions can come together.12 Many within the industry accept that responsibility. Some are now major players on the national or global stage. This does not seem to me to be going well—even though they generally mean well.
As a class, software engineers are unusually well-intentioned, I think. We want to make things better. That is the engineering ethos! Startups often say they want to change the world. They say they want to do good while doing well. Most of them mean it. Yet our ambitions are too often trivial when concrete (“Uber, but for lawn care”) and vague when potentially consequential (“empowering the disadvantaged everywhere”).
What is “a better world”? How do software engineers typically think about this?
By analogy to software development practice, our idea of “better” tends to be “more rational, more systematic.”
That is often good, but not always! And we tend to cash it out as “more complicated and newer.” That is generally not good. Even in software development, “easier, more reliable, more useful, more enjoyable” is better.
I think making our culture and society easier, more reliable, more useful, and more enjoyable would be an excellent start on a vision. I think most people would agree!
Material improvements in life are good and important. Many software projects aim for those, even if only because they are often profitable as well.
However, most of us understand that society and culture need improvement too, and vaguely hope to contribute toward that.
Most software engineers understand that they are not trained in big questions of purpose. Not our expertise! So, we commonly outsource that to virtue experts.
This seems sensible, superficially, but often doesn’t go well. “Virtue,” as I’m using the word here, means making sure that you are behaving morally, within your local circumstances, according to experts’ standards. Implicitly, their endpoint would enforce that as a theocracy. That is not an inspiring vision for broad positive transformation.13
Recognizing the limits of virtue, some software engineers adopt a churlish anti-virtue stance in reaction.
This is understandable emotionally, but a self-refuting confusion conceptually. We can disagree about exactly what counts as moral, but virtue is basically good. Negating even a moderately mistaken conception of virtue yields vice, not nobility.
The distinction I’ve made between virtue and nobility seems critical for finding a vision of “a better world” that can guide choices that have civilization-sized consequences.
A call to nobility—but which nobility?
Nobility is the wise and just use of power.
A vision of greater purpose is a key to nobility’s wisdom aspect.
Two essays by Andreessen sketch his vision: “It’s Time to Build” (2020) and “The Techno-Optimist Manifesto” (2023). I find these inspiring overall. They focus on creating material abundance. I agree that is much more important than our culture currently admits. My vision would emphasize meaning, culture, and society more. However, in a section of the Manifesto titled “The Meaning of Life,” he writes:
Material abundance from markets and technology opens the space for religion, for politics, and for choices of how to live, socially and individually.
We believe technology is liberatory. Liberatory of human potential. Liberatory of the human soul, the human spirit. Expanding what it can mean to be free, to be fulfilled, to be alive.
We believe technology opens the space of what it can mean to be human.

The Manifesto’s following section, “The Enemy,” identifies bureaucrats and virtue experts as obstacles to Andreessen’s vision. I don’t consider them enemies, but agree that they have gained excessive power, and exercise it for stagnation. In “Priests and Kings” and “Keep priests in check,” I suggested that it is a function of nobility to counter that obstructionism with propulsive forward energy.
Andreessen’s Manifesto is a call to nobility:
We believe in greatness.
We believe in ambition, aggression, persistence, relentlessness – strength.
We believe in merit and achievement.
We believe in bravery, in courage.
We believe in pride, confidence, and self respect – when earned.
Viking lords would lift their mead horns and cheer!
This is a particular, attractive, but limited and potentially misleading conception of nobility. It echoes the pagan “master morality” that Nietzsche cheered. (Andreessen’s use of Nietzsche’s unusual term “ressentiment” makes this nearly explicit.) Nietzsche also recognized it as inadequate (as I explained in “You should be a God-Emperor”).
Tolkien pointed out both its value, and its callousness and risk of fatal overconfidence. It’s interesting that many in the “tech right” are explicitly inspired by The Lord of the Rings. (Silicon Valley’s leading defense companies Anduril and Palantir are named for Aragorn’s sword and the elves’ magical surveillance technology.) The novel is an extended argument for the value of nobility—but with warnings against misunderstanding it.
Nevertheless, Andreessen’s essays attempt to reclaim, rethink, rebuild nobility. I’ll cheer for that, even if I disagree on details.
In ignorance of nobility
“Wise and just” may not describe recent uses of power by software engineers who are exerting it at scale. Software engineers are not trained in wisdom or justice, and that is a problem.
For example, it now appears that Musk’s DOGE effort has failed at its stated mission, and has wreaked enormous damage—including to the science and engineering research efforts that his own career was based on. What happened?
I would guess ofermōd: overconfidence and lust for glory. Those are the common failure modes of nobility, according to Tolkien. Musk wanted to be the man who fixed America. And his experience with sorcery, his ability to learn new disciplines quickly while on the job, made him think he could pull that off in a few months.
Stupid, crazy, or evil? Or unwise, unjust, and ignoble: in ignorance of the nature of nobility.
Unfortunately, no one is trained in nobility any longer—the central point of “Ofermōd.”
That does not excuse ignobility. But it does represent an opportunity.
You can take up the challenge of nobility
Everyone can be noble, because everyone has some power: some ability to influence their environment. All of us are noble at times; we choose to act with wisdom and justice. All of us can choose to act more nobly more often.
The more power you have, the more nobility matters.
Insofar as software engineers are eating the world, it matters that we exercise power wisely and justly. Fortunately, we are also exceptionally prepared to rectify our ignorance of nobility. In “Ofermōd,” I sketched reasons we need to rethink the meaning of nobility to address the passing of modernity. I suggested that any functioning reconstitution must incorporate meta-rationality.
The essence of meta-rationality is actually caring for the concrete situation, including all its context, complexity, and nebulosity, with its purposes, participants, and paraphernalia.14
Software engineers may be the largest group capable of that. In response to destructive postmodern nihilism, some of us acknowledge nebulosity, but insist:
Wait, rational systems are actually good and useful, so long as they aren’t too rigid! We often make them work quite well! Formal systems are blind to many things, but we can take a broader view. We can see around them, and supplement them with non-formal supports. We can look to see how they relate to contexts and purposes, and adjust systems and their surroundings so they work together better!
When we are willing and able to exert power beyond our domains of expertise, let us apply this same attitude to larger-scale society and culture.
At the end of “Ofermōd,” I listed characteristics of the people most likely to do this. I had software engineers particularly in mind.
But we are not the only ones!
You have more power than you think.
How will you choose to use it?
What can “nobility” look like in your own sphere of influence? What if you expand that sphere? How can you expand that sphere, nobly?
How do you locate purpose, beyond the personal and local?
A “better” world may be more materially abundant and more virtuous. (This is obvious.) What else would you like to see in a better world?
Terms like “glory” and “magnificence” are scary… What comes with the courage to imagine them as vital aspects of “a better world”?
Of your world, because you create circumstances?
1. The Final Frontiers are government, education, housing, and healthcare. So far, they have mostly resisted successfully, to everyone else’s cost.
2. Experienced software engineers will recognize an analogy. The gusher of “new software technologies” is impossible to keep track of, and young people demand to use whatever’s the latest, hippest one. But those do the same things, the same ways, as forgotten ones from ten years ago; and from twenty years ago.
3. Amazon, Apple, Facebook, Google, Microsoft, and Tesla were founded by software engineers. Broadcom and Nvidia were founded by electrical engineers who specialized in computer architecture, and presumably coded extensively early in their careers.
4. Many management schools have an entrepreneurship and/or “transformational management” track, which do at least claim to teach those skills. Most managers weren’t trained in those tracks, and may not get the opportunity to apply what they learned, even if they want to and the curriculum was accurate.
5. Or, perhaps more accurately, to the experts’ idealized concept of how they wish they were.
6. As a venture capitalist, he was mainly arguing that software companies would be exceptionally profitable, which many investors doubted at the time.
7. Silicon Valley seemed to have run out of ideas in 2022. Blockchain and virtual reality had been heavily hyped, but fizzled. “AI” arrived just as that became apparent. It may become huge. If not, software profits may revert to normal margins. Then the industry could become relatively insignificant, and software engineers will no longer be able to eat the world.
8. There are exceptions. A week before I published this, one mathematician was elected President of Romania, and another was elected Pope. I am cheering for both of them!
9. Failure to recognize this is rationalism.
10. I’ve used the term “software engineering” loosely, to match Andreessen’s title. The engineering, in a narrow sense, is usually somewhat meta-rational; but I am pointing more toward the overall process, which has to be still more so. “Software development” might be more accurate, but “software developer” tends to mean a relatively junior programmer, of lesser capability than an “engineer.” “Software architecting” may come closest, but is still more specific than I want.
11. I feel pretty sure? I don’t have any numbers to back this up.
12. To acknowledge an obvious point, passed over in the “reason software engineers can eat the world” section above: successful software companies generate unprecedented surpluses of money, competence, and reputation. Their leaders can, and do, leverage those for acquiring and exercising power beyond the company.
13. Some virtue experts may disagree? On the grounds that they do have a positive vision; or that they wouldn’t create a theocracy even if they had the power to; or that their theocratic rule would be good, actually. Maybe this deserves a follow-up post?
14. Quoted from the Introduction to Part Four of Meta-Rationality, a book-in-progress.
Aren't there better examples? Elon Musk is in a class by himself, but might be better described as a tech industry executive with some technical expertise. (How much is debated.) He isn't a software engineer and didn't gain power by writing code. He has minions who are software engineers.
To drill down a bit, I think one of the keys to SpaceX's success was the ability to attract rocket engineers to work long hours on something new and exciting that made sense to them from a technical perspective. They were attracted by the promise of being able to execute faster than was typical at NASA contractors. Funding of course matters too.
Google had this attraction at the beginning, too, for software engineers. And there was a time when Tesla was pretty exciting for car guys.
The ability to attract *many* talented nerds, along with funding, to work together on a common cause, is more about technical leadership than about writing code. Lone coders don't accomplish all that much by themselves, and many companies have management that doesn't listen to non-managers very much.
It's also possible to attract unpaid nerds to work on inspiring projects, or to pay people to work on uninspiring projects, but it's harder to get good people without having both an exciting project and funding to pay them. (A strong possibility of getting rich certainly helps too.)
I think the tech elite has so far failed to provide a positive future for society, because recent technological progress has had both potent positive and negative effects on our lives. The effort to use technology to improve the physical world has largely been a positive force. I count Amazon, Tesla, and SpaceX as net positives, and Uber, Airbnb, and Doordash as mixed but still valuable. The effort on the virtual world has had such strong positive and negative effects, stretching our minds so much, that it has probably left us worse off. The promise that we will be better off if we are all connected, if information is free, has not turned out to be true. We are overwhelmed by too many connections and have turned passive. We are lost, with too much information swirling around. I think AI has the potential to make things much worse.
It is commonly lamented that most modern science fiction is dystopian. Tech people commonly accuse the public of being insufficiently optimistic about technology. But I think the dystopian vision is just a natural extrapolation of current trends: the forces that are tearing us apart are stronger than the forces that are making the world better. I think the tech leaders, Andreessen in particular, have failed to even acknowledge the problem, and are therefore definitely not noble.