Meta-rational software development: Readings
Resources for learning the keystone topic missing from the CS curriculum.
This page collects insightful discussions of meta-rationality in software development.
These were among the most useful sources I found in my background research for writing the software sections of Part Four of my meta-rationality book. They include blog posts, books, and social media threads.
Meta-rationality is probably more widely understood in software development than in any other field.1 Software development creates a novel formal system—a program—for nebulous purposes, operating in a nebulous context. Reasoning about relationships between formalisms and the nebulosity of reality requires meta-rationality.
Success depends critically on good-enough meta-rational choices in organizational interaction, architectural design, user experience, and scope limitation.
That includes project success, organizational success, and personal success. Demonstrating meta-rational competence is the essential prerequisite for advancing to the “Staff+ level” as a software engineer.
I’ve placed the readings in four categories: the nebulosity of program code; the nebulosity of you, the developer; the nebulosity of contexts and purposes; and the nebulosity of everything else.
1. Working with the nebulosity of software itself
Although programs are perfectly formal, their complexity makes it impossible to relate to them as such. In practice, we have to treat large codebases as nebulous; and therefore somewhat unpredictable, incomprehensible, and uncontrollable.
Effective programming in-the-large demands meta-rational methods, similar to the ones we must use in working with the inherent nebulosity of every other domain.
2. Developing meta-rationality as a software developer
Meta-rationality is not part of the undergraduate CS curriculum. It is rarely taught even in graduate school, and then almost always only informally, through apprenticeship. In practice, you learn it on the job: by doing it badly and learning from your mistakes; by taking advice from mentors; through informal discussions with peers; by reading posts on blogs and forums—and by reflecting on all these while in the midst of project chaos.
Developing meta-rationality can result in profound personal transformation. It is not so much that you learn new skills. It is that, by expanding your scope of responsibility, you adopt an unfamiliar stance, and thereby become a different sort of thing.
3. The nebulosity of software purposes
Reasoning about purposes is a central meta-rational task, in all domains. Inadequate understanding of purposes is probably the most frequent reason software development projects fail. “Requirements analysis,” which attempts to formalize nebulous purposes rationally, often proves inadequate.
A meta-rational approach must also take into account the positive and negative impacts of software on its users and other people it affects. This often raises issues of ethics, power, society, and culture.
4. Ontology: formalizing patterns in a nebulous world
Great software gives users the sense that they are perceiving and acting on the same real-world objects that they understand as the topics of their tasks. Then the program becomes transparent; the user doesn’t need to fight the computer, and can just do what they want without it getting in the way.
Providing this illusion requires understanding the task domain in the same terms the users will, and organizing the program around that ontology. This is difficult, and requires meta-rationality, because the domain is nebulous, and the users’ understanding of it is also nebulous.
This is a part-paid post. The first two sections are free. Those cover the nebulosity of code itself, and how to develop your own meta-rationality in the software domain. The two sections after the paywall are about the nebulosity of software purposes, and about discovering, specifying, and implementing software ontologies.
Please consider becoming a paying subscriber—to support and encourage my work, and for full access to all posts!
I will probably add to this document as I continue writing about meta-rationality in software development. I expect I’ll find additional helpful resources as I do the necessary background reading.
Suggestions would be welcome. Please leave a comment!
I have edited some quotes for concision, clarity, or typo correction. There are web links to the original sources if you need the exact language.
Working with the nebulosity of software itself
Software seems like something we should be able to reason about, yet the reality is that it’s often too complex. Since we don’t know how it works, we measure it and experiment on it as if we are trying to discover properties of the natural world.
—Adrienne Porter Felt, on Twitter.
Although programs are perfectly formal, their complexity makes it impossible to relate to them as such. In practice, we have to treat large codebases as nebulous; and therefore somewhat unpredictable, incomprehensible, and uncontrollable.
Effective programming in-the-large demands meta-rational methods, similar to the ones we must use in working with the inherent nebulosity of every other domain.
Programming Sucks
Peter Welch, “Programming Sucks”
It’s now late January 2025. It’s probable that I have had health insurance since switching insurers at the end of 2024. My new insurer did accept payment, and has sent me many emails informing me of the manifold benefits of their product. However, I don’t have an ID card. If I show up at a doctor’s office, my spleen and liver will probably get taken for payment, and used as spare parts.
The insurer’s telephone representative did apologize. “It takes a little while” for their membership department to get the necessary information from the payment department. Therefore, he can’t actually check whether I have insurance. He thinks it’s likely. Although… if I do, my card should have arrived a month ago. Since membership doesn’t know for sure whether I’m insured, he can’t send another one “yet.”
The login page of their members’ web site has a box for a password, but not one for a username or email address—so I can’t check there myself. Customer service can’t say why. I can: their SSO implementation is broken. It’s been broken in interestingly different ways on different days over the past few weeks. I know about these things.
If you don’t know what “SSO” is… I strongly recommend that you stand firm in blissful ignorance. Civilians have no idea what the software everything in the world runs on looks like on the inside. If they did, they would be prepping like Ebola had gone airborne.
Peter Welch’s “Programming Sucks”—a classic rant, hilarious and insightful—explains.
The truth is everything is breaking all the time, everywhere, for everyone. Right now someone who works for Facebook is getting tens of thousands of error messages and frantically trying to find the problem before the whole charade collapses. There’s a team at a Google office that hasn’t slept in three days. Somewhere there’s a database programmer surrounded by empty Mountain Dew bottles whose husband thinks she’s dead.
And if these people stop, the world burns. Most people don’t even know what sysadmins do, but trust me, if they all took a lunch break at the same time they wouldn’t make it to the deli before you ran out of bullets protecting your canned goods from roving bands of mutants.
A Codebase is an Organism
Kevin Simler, “A Codebase is an Organism”
Here’s what no one tells you when you graduate with a degree in computer science and take up a job in software engineering:
The computer is a machine, but a codebase is an organism.
Unlike a computer, which always does exactly what it’s told, code can’t really be bossed around. Instead, the most you can do is try to steward the codebase, nurture it as it grows over a period of months and years.
Recall my overall maxim: “The essence of meta-rationality is actually caring for the concrete situation, including all its context, complexity, and nebulosity, with its purposes, participants, and paraphernalia.” I too use metaphors of nurturing, stewarding, or gardening to explain this.
Simler’s post covers several heuristics for caring for a codebase. Selected topics and quotes:
Software rot:
In a healthy piece of code, entropic decay is typically staved off by dozens of tiny interventions — bug fixes, test fixes, small refactors, migrating off a deprecated API, that sort of thing. These are the standard maintenance operations that all developers undertake on behalf of code that they care about. It’s when the interventions stop happening, or don’t happen often enough, that code rot sets in.
Uncontrolled code growth:
Faced with the necessity of growth but also its dangers, the seasoned engineer therefore seeks a balance between nurture and discipline. She knows she can’t be too permissive; coddled code won’t learn its boundaries. But she also can’t be too tyrannical. Code needs some freedom to grow at the optimal rate.
In this way, building software isn’t at all like assembling a car. In terms of managing growth, it’s more like raising a child or tending a garden.
Panoramic, long-term view:
You realize that a lot of software engineering principles are corollaries or specific applications of the same general idea — namely that:
Successful management of a codebase consists in defending its long-term health against the dangers of decay and opportunistic growth.
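One of Simler’s “tiny interventions” above, migrating off a deprecated API, can be as small as a one-line change. A minimal Python sketch (the deprecation is real as of Python 3.12; the surrounding function is hypothetical):

```python
from datetime import datetime, timezone

def order_timestamp():
    # Before (rotting): datetime.utcnow() is deprecated as of Python 3.12,
    # and returns a naive datetime with no timezone attached.
    #     return datetime.utcnow()
    # After: an explicit, timezone-aware timestamp.
    return datetime.now(timezone.utc)

print(order_timestamp().isoformat())
```

Dozens of such small, unglamorous changes, made as they come due, are what keep a codebase healthy.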
Signs Your Software is Rotting (and what to do about it)
Matt Belcher, “Signs Your Software is Rotting”
Software “rots” when it no longer does what it originally did. You still think or hope it does that—because it hasn’t changed. Or at any rate you didn’t change it, or not in a way that ought to have broken it.
Often the reasons it doesn’t work anymore are difficult to locate and understand, because they are not in the code itself. Something your code interacts with has changed: other code, a network configuration, or patterns of user activity.
“Signs Your Software is Rotting” is a short, easy introduction to the topic. It goes into a little more detail than Simler’s introduction. In addition to seven signs of rot, Belcher suggests six ways to prevent or ameliorate it.
Effective Mental Models for Code and Systems
Cindy Sridharan, “Effective Mental Models for Code and Systems”
This is about writing understandable code. Code that is correct but difficult to understand won’t stay correct for long. Overall, the key is to put yourself in the shoes of different sorts of programmers who may need to modify or extend your code later. Think about how they will think about it—and how they may misunderstand it.
Code is a social construct. It comes into existence as an attempt to create an imprint of an ephemeral mental model of the group of engineers involved in its original design and implementation. Code is an artifact of a team’s possibly incomplete, possibly flawed and possibly ambiguous understanding of a problem and as such is possibly an embodiment of all of these shortcomings.
Modification of code comes with the risk of subtle invalidation or inadvertent distortion of the initial assumptions under which it was written. Rinse and repeat, and after a certain amount of time one is left with a codebase that is a patchwork of various mental models overlaid on top of each other that no one engineer fully understands or can reason about accurately.
Empathy for the future reader requires the current implementors invest the time upfront to map out the whys and the wherefores of circumstances which influenced the implementation, in addition to having a certain amount of foresight into possible future limitations of the current implementation (which in turn requires them being aware of the pros and cons of the tradeoffs being currently made).
Big Ball of Mud
Brian Foote and Joseph Yoder, “Big Ball Of Mud”
“The de facto standard software architecture”:
A BIG BALL OF MUD is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.
Still, this approach endures and thrives. Why is this architecture so popular? Is it as bad as it seems, or might it serve as a way-station on the road to more enduring, elegant artifacts? What forces drive good programmers to build ugly systems? Can we avoid this? Should we? How can we make such systems better?
This is a great discussion! Partly because it’s descriptive before it’s normative. Everyone agrees that big balls of mud are “bad,” which results in our averting our eyes, and therefore failing to understand their dynamics.
A big ball of mud often evolves out of what was supposed to be a throwaway prototype that got rushed into production as a stopgap, and then there was never time for Version 2. Or, it evolves from an originally well-structured architecture that gradually had bits glued on that didn’t fit anywhere.
This may be the right thing!
Money spent on a quick-and-dirty project that allows an immediate entry into the market may be better spent than money spent on an elaborate, speculative architectural fishing expedition.
Premature architecture can be more dangerous than none at all, as unproved architectural hypotheses turn into straitjackets that discourage evolution and experimentation.
An immature architecture can be an advantage in a growing system because data and functionality can migrate to their natural places in the system unencumbered by artificial architectural constraints.
Unstructured architecture may reflect poor domain understanding. (See the section of this page on ontology, below.) “The software is ugly because the problem is ugly, or at least not well understood.” That may be inevitable initially. But then, as the system is put to use, its defects illuminate the domain. Then, ideally, you can impose a sane structure.
Alternatively, the architecture may have been accurate for the system’s original use; but as the context changes, its purposes change, and then the original architecture may not suit the new functionality required.
Requirements are inevitably moving targets, so you can’t simply plan. You have to plan to be able to adapt. If you can’t fully anticipate what is going to happen, you must be prepared to be nimble.
You can do that well, or badly:
The architect may attempt to cover him or herself by specifying a more complicated, and more general solution, secure in the knowledge that others will bear the burden of wasted effort solving a problem that no one has ever actually had. Other times, not only is the anticipated problem never encountered, its solution introduces complexity in a part of the system that turns out to need to evolve in another direction.
Disposable programming is, in fact, a state-of-the-art strategy. Developers must not be afraid to get a little mud on their shoes as they explore new territory for the first time. Architectural insight is not the product of master plans, but of hard won experience.
Technical Research and Preparation
Keavy McMinn, “Technical Research and Preparation”
This is the second in a series of three great short posts. The first, “Where to Start,” explains that step one in your project should be getting a preliminary sense of what the software needs to do. I discuss it in the section on software purposes, later.
In this second post, McMinn advocates constructing one or more “spikes” as the second step. Spikes are bits of throwaway code that explore one possible way a piece of the new system could work internally. Spikes are not prototypes; prototyping comes later. A prototype is a sketch complete enough that you can show it to non-engineers so they can get a sense of how the software might appear to users. A spike is for engineers to understand trade-offs between alternative technical approaches to constructing a particular bit of the guts. It may have no user interface at all, nor any meaningful outputs.
Partly I do spikes for my own knowledge, to validate or invalidate each approach, and also for confidence in how to approach a problem. I have learned that they can also be highly valuable tools in persuading others of the viability of a project.
On the downside, approaching the start of technical research through spikes can also feel scary and intimidating. This can partly be because the work ahead may be ambiguous and the direction feel amorphous, at this stage. I believe acknowledging and embracing that as part of the job is helpful: get comfortable with the discomfort of not knowing a lot.
Willingness to confront nebulosity—“ambiguity and amorphousness”—is essential in meta-rationality. And beyond willingness, eventually: feeling comfortable with not knowing, not understanding, not being in control, and working with raw reality as-it-is anyway. Relishing that, even!
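For concreteness, here is a hypothetical sketch of what a spike might look like: a throwaway script that asks one narrow technical question (can SQLite’s built-in full-text search handle our expected corpus?) and produces nothing but a timing. Everything in it is illustrative, not McMinn’s code.

```python
# Hypothetical spike: is SQLite's built-in FTS5 full-text search fast
# enough for our document-search feature, or do we need a dedicated
# search service? Throwaway code: no error handling, no tests, no UI.
# (Assumes your Python's sqlite3 module was built with FTS5, as most are.)
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")

# Synthetic corpus, roughly the scale we expect in production.
conn.executemany(
    "INSERT INTO docs VALUES (?)",
    ((f"note {i} about widgets, gadgets, and invoice {i % 997}",)
     for i in range(100_000)),
)

start = time.perf_counter()
hits = conn.execute(
    "SELECT rowid FROM docs WHERE docs MATCH 'widgets' LIMIT 10"
).fetchall()
print(f"{len(hits)} hits in {(time.perf_counter() - start) * 1000:.1f} ms")
```

If the answer is “fast enough,” the spike has done its job and gets deleted; only the knowledge is kept.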
Shape Up
Ryan Singer, Shape Up
This short book describes the meta-rational approach to development used by one small, highly respected, somewhat eccentric software company: 37signals. It’s free on the web, or you can buy a paperback.
The book is meta-rational with respect to its own meta-rational methods: it explains how 37signals came up with those methods, and why they work—for that company, at least.
Shape Up takes an explicit “toolbox” approach. That is characteristic of meta-rationality: “There is no method; there are only methods.” This contrasts with rationalist software methodologies. Those cast themselves as universally valid procedures for turning any set of sufficiently-formal software specifications (“requirements”) into code. (The “Rational Unified Process” is my favorite villain here.)
You can think of this as two books in one. First, it’s a book of basic truths. [I call these “meta-rational maxims.”] I want it to give you better language to describe and deal with the risks, uncertainties, and challenges that come up whenever you do product development. Second, the book outlines the specific processes we’re using to make meaningful progress on our products at our current scale.
One main insight is to design projects at an intermediate level of abstraction. That’s what the title refers to: “shaping” projects. That includes discovering, choosing, and sketching a rough definition of a Problem—together with inventing, selecting, and exploring a general approach to producing a Solution. (In the case of software, a Solution is shippable code.) The necessity for shaping a Problem and Solution simultaneously is a common theme of meta-rationality across domains.
We shape the work before giving it to a programming team. A small senior group defines the key elements of a solution before we consider a project ready to bet on. Projects are defined at the right level of abstraction: concrete enough that the implementation team knows what to do, yet abstract enough that they have room to work out the interesting details themselves.
The Shape Up approach partially resembles the “Agile” approach (discussed later in this collection of readings). Separating this preliminary planning phase from the implementation phase is one way it differs. Agile criticizes excessive planning, which is the dysfunctional essence of management rationalism. However, taken naively, Agile may over-correct by under-planning, which can also become dysfunctional.
We then give full responsibility to a small integrated team of designers and programmers. They define their own tasks, make adjustments to the scope, and work together to build vertical slices of the product one at a time. This is completely different from other methodologies, where managers chop up the work and programmers act like ticket-takers.
We reduce risk by solving open questions before we commit the project to a time box. We don’t give a project to a team that still has rabbit holes or tangled interdependencies.
Building experimental “spikes,” as in McMinn’s post discussed above, might be one way to do this.
We reduce risk by integrating design and programming early. Instead of building lots of disconnected parts and hoping they’ll fit together in the 11th hour, we build one meaningful piece of the work end-to-end early on, and then repeat. The team sequences the work from the most unknown to the least worrisome pieces, and learns what works and what doesn’t by integrating as soon as possible.
This is one way to acknowledge and work with nebulosity throughout the process. A bad alternative is to formalize it out of existence in an initial planning phase. Then it reemerges catastrophically, in “final” user testing, when the elegant but misconceived product first confronts reality. That’s a common software project failure pattern.
How programs learn
Stewart Brand, How Buildings Learn: What Happens After They’re Built
Many writers on software architecture have been inspired by Stewart Brand’s brilliant book. Software systems, like buildings, get continually reconfigured to better fit the needs of their users/occupants.
This tweet thread, by Geoffrey Litt, is a fine explanation of some aspects of the analogy.
There’s a saying that writing software is more like tending a garden than constructing a building—things constantly change.
But the more I learn about how buildings evolve, I think this process is actually a perfect analogy for designing software!
(While we’re here… The BBC turned Brand’s book into a fascinating six-part TV series. Recommended!)
Systems that defy understanding
Nelson Elhage, “Systems that defy detailed understanding”
Elhage discusses several common classes of software systems that are best treated as inherently incomprehensible. For each, he recommends “What works instead.”
One example is “Big Balls of Mud,” which you can’t fully understand because they are gigantic and there’s no coherent design. Elhage suggests that, in this case, “what works instead” is increased observability, so you can see what went wrong when it did. Detailed logging is one common method for this.
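As a minimal sketch of what “increased observability” can look like in practice (the names and log fields here are hypothetical, not Elhage’s): emit one structured log line per request, carrying enough context to reconstruct afterwards what the system actually did.

```python
# Observability over understanding: one structured log line per request,
# with enough context to reconstruct what happened after the fact.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")

def process(order):
    # Stand-in for the real entry point into the ball of mud.
    return {"order_id": order["id"], "shipped": True}

def handle_order(order):
    request_id = str(uuid.uuid4())  # stitches one request across modules
    start = time.perf_counter()
    status = "error"
    try:
        result = process(order)
        status = "ok"
        return result
    except Exception:
        log.exception("order failed")  # full traceback, not just a message
        raise
    finally:
        log.info(json.dumps({
            "event": "handle_order",
            "request_id": request_id,
            "order_id": order.get("id"),
            "status": status,
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
        }))

handle_order({"id": 42})
```

When something breaks next Tuesday, you search the logs for a request ID instead of trying to comprehend the whole system.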
Developing meta-rational competence
Meta-rationality is not part of the undergraduate CS curriculum. It is rarely taught even in graduate school, and then almost always only informally, through apprenticeship. In practice, you learn it on the job: by doing it badly and learning from your mistakes; by taking advice from mentors; through informal discussions with peers; by reading posts on blogs and forums—and by reflecting on all these while in the midst of project chaos.
Developing meta-rationality can result in profound personal transformation. It is not so much that you learn new skills. It is that, by expanding your scope of responsibility, you adopt an unfamiliar stance, and thereby become a different sort of thing.
Computers can be understood
Nelson Elhage, “Computers can be understood”
This is a complement to Elhage’s “Systems that defy understanding,” which I discussed above. With rare exceptions, computer systems can be understood—if you have the relevant skills, and if you devote the necessary effort.
In two senses, this might seem obvious. First, computers are deterministic formal systems, which might be thought to imply that they are understandable by definition. However, as we saw earlier, the enormous complexity of realistic software codebases usually makes full understanding impossible, or at least impractical.
Second, professional programmers surely should understand programs? (Electrical engineers should understand electricity; professional chefs should understand cooking.) But many don’t. At the junior level, it’s common for developers to cut-and-paste plausible-looking bits of code they found on the internet and hope that they’ll work. When code doesn’t work, they may not be able to figure out why—because they don’t know how to understand what’s going on when their program runs.
At this point, progress comes from insisting that every unexpected program behavior can be located and understood—with sufficient skill and effort.
There is no magic. There is no layer beyond which we leave the realm of logic and executing instructions and encounter unknowable demons making arbitrary and capricious decisions. Most behaviors in one layer are comprehensible in terms of the concepts of the next layer, and all behaviors can be understood by digging down through enough layers.
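In that spirit, a toy illustration of digging down a layer (my example, not Elhage’s): when two superficially similar lines behave differently, Python’s standard dis module shows the bytecode where the difference lives.

```python
# "There is no magic": when a += b mutates a shared list but a = a + b
# does not, the explanation lives one layer down, in the bytecode.
import dis

def rebind(a, b):
    a = a + b   # builds a new list; rebinds the local name only
    return a

def mutate(a, b):
    a += b      # extends the existing list in place
    return a

dis.dis(rebind)  # one kind of add instruction...
dis.dis(mutate)  # ...and a different, in-place one
# (Exact opcode names vary by CPython version: BINARY_ADD vs INPLACE_ADD
# before 3.11, BINARY_OP with different operands from 3.11 on.)

shared = [1, 2]
rebind(shared, [3]); print(shared)  # [1, 2]    (unchanged)
mutate(shared, [3]); print(shared)  # [1, 2, 3] (mutated in place)
```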
I love debugging. This is somewhat unusual. Junior developers love getting anything to work. Senior developers find that too easy, and usually love architecting—crafting a high-level vision for structuring a large system. I find that too easy…
The trickiest bugs are often those that span multiple layers, or involve leaky abstraction boundaries between layers. These bugs are often impossible to understand at a single layer of the abstraction stack, and sometimes require the ability to view a behavior from multiple levels of abstractions at once to fully understand. These bugs practically require finding someone on your team who is comfortable moving between multiple layers of the stack, in order to track down; on the flip side, to the engineers with the habit of so moving around anyways, these bugs often represent an engaging challenge and form the basis of their favorite war stories.
The post offers much useful advice about how to develop the advanced programming capabilities needed to locate “impossible” bugs, to speed up already-optimized code by a couple orders of magnitude, and to make an API implementation secure against malware attacks. These require not just specific techniques, but deep understanding of how and why complex systems work. An attitude of boundless curiosity, of “I can understand this fully, after sufficient exploration” is key.
In terms of the rationality J-curve, this is about moving from “basic rationality” (routine programming using standard methods) to “advanced rationality” (in which no standard method is applicable). It is not so much about meta-rationality.
However, meta-rationality is required to decide whether or not understanding a bug is worth the effort. Blunter approaches may be better: get someone else to fix it; remove that section of the program altogether and accept the loss of functionality; replace the whole module with an alternative (newly written, or off-the-shelf); detect the buggy behavior and signal or log an error; document the problem and expect users to cope somehow.
These are anathema for software craftspeople, but sometimes expedient.
Death of a Craftsman
Einar W. Høst, “Death of a Craftsman”
Outstanding. This is (in my terms) about the move from advanced rationality (the “craftsman” stage) to meta-rationality.
A software craftsman understands that large codebases are mysterious and organic, and there are no guaranteed methods. Therefore, software development is a craft, not an engineering discipline. However, the craftsman still implicitly believes that the code is where all the action is, and the only thing worth paying attention to.
Høst explains why this is wrong, and why domain modeling, and attention to the rest of the organization and its structure and dynamics, are critical. Those are meta-rational considerations.
20 Things I’ve Learned in my 20 Years as a Software Engineer
Justin Etheredge, “20 Things I’ve Learned in my 20 Years as a Software Engineer”
This is solid, easily-understood advice, much of it meta-rational. Some quotes:
The hardest part of software is building the right thing.
Designing software is mostly a listening activity, and we often have to be part software engineer, part psychic, and part anthropologist.
The best software engineers think like designers. Great software engineers think deeply about the user experience of their code.
The best code is no code.
Software is a means to an end. The primary job of any software engineer is delivering value. Very few software developers understand this; even fewer internalize it.
Every system eventually sucks. Get over it.
Nobody asks “why” enough.
People don’t really want innovation.
10x engineering
Ben S. Kuhn, “10x (engineer, context) pairs”
Salvatore Sanfilippo, “The mythical 10x programmer”
These two short blog posts are both excellent. They overlap in content, but the presentations are different, so I’d recommend reading both.
A “10x software engineer” is someone who is ten times as productive as an average engineer. A few years ago, there was an acrimonious internet brouhaha over whether 10x engineers exist. The sides were mostly talking past each other, due to having different ideas of what “ten times as productive” means.
It may well be true that no one can write the same code ten times as fast as an average engineer. However, some engineers are ten times as effective because, under suitable conditions, they write different code, which is ten times as valuable. Their unusual capability lies in figuring out the best thing to write.
That capability depends on a context that allows them to decide what to write. This is a matter of office politics, modulated by their social skills, rather than technical ability. It also depends on their caring for the organizational goals, so that what they want to write is ten times as valuable. Technical wizards often want to write something ten times as technically interesting, rather than something ten times as useful.
Kuhn’s post includes a bullet list of ten meta-rational tasks required, in sequence, for developing software that is actually useful. All of what is ordinarily considered “software engineering” is just step five. A 10x developer needs either to do all of them, or else find a work context in which some get done by competent, aligned collaborators.
Sanfilippo’s post discusses nine bullet-point factors that lead to exceptional programmer productivity. Many of them could be summarized as “understanding what not to implement.” The difference between a 1x programmer and a 10x programmer is that the 1x programmer spends 90% of their time doing things they didn’t realize weren’t useful.
The Agile Manifesto
The Manifesto comprises two very short pages—the four values and the twelve principles—presenting meta-rational maxims. I believe that, properly understood, they are mainly accurate, and enormously valuable. Contemplating their applicability in concrete situations may be the royal road to software meta-rationality.
Unfortunately, the maxims are so pithy, so compressed, that they can be easily misunderstood. Indeed, they almost always are. Consequently, “Agile” has come to mean almost exactly the opposite of what the Manifesto recommends. If you loathe this misinterpreted “Agile”—try going back to the source.
I’ve discussed the Manifesto, and the ways it has been distorted, in “Meta-rationality rationalized.”
Thriving on the Technical Leadership Path
Keavy McMinn, “Thriving on the Technical Leadership Path”
The author uses “the technical leadership path” to mean leading while remaining primarily an engineer, not a manager. That entails doing extensive meta-rational work.
Near the end of this blog post, McMinn provides a wonderful bullet point list of meta-rational activities in software development. “So what does the more strategic work of a very senior engineer look like? It might include…”
The Staff Engineer’s Path
Tanya Reilly, The Staff Engineer’s Path: A Guide for Individual Contributors Navigating Growth and Change
This is a book-length exploration of the same issues as McMinn’s blog post. A “Staff+ Engineer” is one who is given the freedom and responsibility to make meta-rational decisions. The book talks about “big-picture thinking,” and context and complexity and messes and purposes.
It’s a solid, practical, meat-and-potatoes career guide if you are moving into this role.
Crafting a Research Agenda
Barath Raghavan, “Crafting a Research Agenda”
Raghavan has offered an explicitly meta-rational, graduate-level course in how to do PhD-level computer science research. I wish something like his “Crafting a Research Agenda” had been available when I was a graduate student!
The syllabus includes links to numerous readings on the topic. Some are by me (which is somewhat embarrassing). Skip those if you’re already bored with my stuff, and check out the others!
You don’t need to be a PhD student for Raghavan’s recommendations to be relevant. Much of the content will be valuable for anyone who has been working in tech for long enough that meta-rational considerations have become relevant.
So long as your job gives some scope for decision-making, questions about how to choose directions arise—and the standard computer science curriculum offers little guidance. Many of the syllabus readings cover topics similar to ones I discuss on this page, but in more depth and more seriously. I recommend them particularly if you want to go further than the lighter-weight blog posts I’ve linked and summarized here.
The third column of the syllabus poses questions that you might apply to your own technical work.
The course description:
In this course, we will aim to understand how to formulate a research agenda in Computer Science, examine trends that exist within sub-areas of CS, and identify fruitful and problematic research directions. We will examine barriers to scientific research advancement and how to avoid them when possible. We will practice thinking meta-systematically (“thinking outside of the box”) when approaching the selection of research problems and developing research solutions. We will also study prior paradigm-shifting papers in CS research and guidance from eminent Computer Scientists who have articulated what makes for good CS research and strong CS research communities.
The course will explore the hybrid nature of Computer Science – its existence at the intersection of mathematics, engineering, psychology, philosophy, statistics, and other disciplines – and how to evaluate progress in CS research as we identify fruitful avenues of future study. Our focus will not be on any one area of CS; we will examine research agendas in a variety of areas.
The course is intended for CS PhD students in their second year or later who are learning to articulate their own research objectives and plans.
Software’s nebulous purposes
Much of the essence of building a program is in fact the debugging of the specification. —Fred Brooks
Reasoning about purposes is a central meta-rational task, in all domains. Inadequate understanding of purposes is probably the most frequent reason software development projects fail. “Requirements analysis,” which attempts to formalize nebulous purposes rationally, often proves inadequate.
There are lots of books about software “requirements” and what you are supposed to do with them. All the ones I’ve looked through were terrible: tedious, and mostly outright wrong. You can learn more, faster, from blog posts and forum discussions. This section suggests several good ones.
Successful software development inevitably involves decisions that have consequences for many people. A meta-rational approach takes into account positive and negative impacts on users and other people its products affect. Writing good software is an ethical, political, social, and cultural activity, just as much as it is a technical one.
Technical people would mostly rather avoid making moral decisions. We mostly don’t feel qualified to make them. We mostly aren’t qualified to make them! Unfortunately, usually neither is anyone else in our organization. When forced to consider ethical questions, we usually reach either for fashionable or familiar folkways, or for “rational” ethical systems. Neither of these is a good guide to moral decision-making.
Meta-rational ethical understanding is difficult and unattractive for the same reasons meta-rationality overall is. However, it is more often accurate than alternatives.