Software purpose nebulosity
Why rational “software requirements analysis” fails, and how to understand purposes meta-rationally
Software development projects are authorized, used, and paid for by people who have agendas: purposes.
Software’s value depends on how well it serves those purposes.
Purposes are nebulous; this creates trouble for rationality, and demands a meta-rational approach.
As we saw in Part Three, eliminating consideration of contexts and purposes is a prerequisite for rationality. Before you can apply rationality, someone has to turn purposes into a Problem: a formal specification of a formal Solution. Then you Solve the Problem while un-seeing the purposes that led to it.
But formulating Problems is an inherently meta-rational task, for which rational methods are ill-suited. In this section, we’ll see how analyzing software purposes rationally can fail, and how a meta-rational approach may work better.
For simplicity, we’ll concentrate on “enterprise software,” used to share information and coordinate work in a business, often across several departments with their own professional specialisms. Examples include inventory management systems (how many eggplants do we have?), customer relationship management systems (how many eggplants has Zack bought in the past year?), and employee relationship management systems (what are we paying Sasha as Senior Eggplant Inspector?). Similar but not identical issues of purpose arise when developing other sorts of software (video games, robot vacuum controllers).
To design and build software that supports collaborative work, you need to understand what that work is. How do warehouse workers check eggplants in and out of cold storage? What else do salespeople need to know about Zack? What are all the laws, regulatory requirements, and corporate policies involved in Senior Eggplant Inspector compensation?
Unfortunately, people mostly can't or won't explain their work to you accurately. Much less can they tell you what software should do to help them. There are multiple good reasons for this, some of which we'll cover in this section.
If the purpose of the software you build is to support work whose details you can’t know, you cannot formulate an accurate Problem. Any attempt will include many guesses that will turn out to be wrong when people try to use your program.
Instead, the purposes of a software system must come into focus gradually, throughout its construction, and even after it has long been in regular use. The rational framework of Problem and Solution is misleading. Inasmuch as the terms apply at all, it would be more accurate to say that you co-construct the Solution and Problem, which illuminate each other.
This implies continually reorienting to real-world purposes throughout the process. Neglecting to do so, by misunderstanding purposes, or substituting representations of purposes for real-world contact with them, or choosing to ignore them in favor of solving technical Problems, or losing track of them altogether: these are common reasons for software project failures.
Putting the same thing a different way, whoever is responsible for the major technical decisions, i.e. the software architect or architects, must be involved in understanding purposes from the beginning. Likewise, the people who best understand purposes—business leaders, domain experts, and prospective software users—must remain involved in the design process until, and even after, the system is declared complete and delivered.
This section has two subsections. The first is on the rational approach to determining software purposes, and why it works poorly—quite often dooming a project to failure even before programming begins. The second is on an alternative, meta-rational approach, and how it works better.
This free post is a draft chunk from my meta-rationality book. If you are newly arrived, check the outline to get oriented. (You might want to begin at the beginning, instead of here!) If you run into unfamiliar terms, consult the Glossary.
For Substack, I have split a long book section into several posts. This one covers the rational approach, with meta-rational explanations for why it doesn’t work. Subsequent posts explain an alternative meta-rational approach.
As throughout this chapter, the aim here is to help you better understand meta-rationality—not to better understand software development. I’ll paint the technical issues broad-brush, simplifying many issues and omitting others.
Analyzing requirements rationally
Software purposes are called requirements in the field. Nailing down what software has to do is requirements analysis. That is a specific rational profession, with its own specialist knowledge, career path, certifications, conferences, and so forth. Requirements analysis culminates in a requirements document. That is a rational Problem: a formal specification for what would count as a Solution.
The requirements document serves as a contract between the people paying for the Solution and the people producing it. Both sides love this! Or anyway, they do at first, and in theory. Once they’ve agreed to the contract, the buyers know what they are going to get before committing to buy, and the producers know what they have to do for the engineering project to count as a success. Unfortunately… reality is not a party to the contract.
The approach works well for parking garages. Informally, the requirements are just “has room for 150 cars, fits into this particular plot of land, matches the visual style of the adjacent garage, and meets the local construction code.” That might turn into a hundred pages of contract details, but there’s little risk that some ambiguity in the specification means you’ll get an ornate Victorian treehouse instead of a parking garage.
It’s universally acknowledged that software requirements analysis is difficult and often goes wrong. Somehow, the people who say they want the software seem unable to explain what they want. As a result, the requirements document is vague and incomplete in retrospect, despite often running for several hundred pages and including numerous diagrams and tables. It’s common for software projects to deliver something that more-or-less matches what the customers said they wanted, but is nothing like what they imagined—expensive and useless, like an ornate Victorian treehouse.
What are the requirements for a supermarket inventory system? It’s the job of professional requirements analysts to find out, using the rational methods mandated as standards for their profession.1 Those boil down to:
Identify the stakeholders, which means classes of people whose opinions about what the software should do need to be taken seriously because they have political clout.2
Find one or more stakeholder representatives from each class. Those are people who can be made willing to talk to you; ideally ones with extra clout.
In an interview, ask them each what they want. (This is called requirements elicitation.)
Write down what they said in an approved pseudo-technical semi-formal format.
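To make the last step concrete, here is a hedged sketch of what a single entry in such a document often looks like, rendered as a data structure. The identifiers, field names, and "shall" wording are hypothetical, not taken from any particular standard:

```python
# Hypothetical sketch of one entry in a semi-formal requirements document.
# The ID scheme, field names, and "shall" phrasing are illustrative only.
requirement = {
    "id": "REQ-INV-042",
    "statement": "The system shall record the quantity on hand of each "
                 "product at each warehouse location.",
    "source": "Interview with the Chief Operating Officer",
    "priority": "High",
    "verification": "Demonstration",
}
```

The format looks rigorous. Notice, though, that nothing in it says what "quantity on hand" means when a crate has been unloaded but not yet shelved, or when half the eggplants in it have gone to mush.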
The rational standards for requirements analysis mandate the form of the process and its output, but do little to ensure that its contents are accurate, or even meaningful. They are rationality theater: bearing the appearance of rationality, without its substance.
You should not be surprised to hear that a main difficulty is that software requirements are nebulous. The rationalist approach assumes, explicitly or implicitly, that requirements are definite, objective, independently-existing entities. Often the task is described as “requirements gathering,” as if they are solid, discrete objects like walnuts that you could wander around and pick up and collect in a basket.
Stakeholders often can’t say clearly what they want. But, worse, it’s commonly understood in the field that what stakeholders want is also not what they need. Ideally, then, the job is to find out what they actually do need. However, it’s acknowledged that standard requirements analysis methods can’t do this. Further, this again tacitly assumes that “what they need” is itself well-defined, and exists objectively and independently from stakeholder opinions.
The usual problem is not that stakeholders are stupid or fail to communicate, it’s that requirements are inherently nebulous. They aren’t definite, pre-existing entities about which an analysis can determine Truths. This is true of purposes in general. That’s partly why rationality has to exclude consideration of purposes! And why meta-rationality (which does consider purposes) is necessary here.
In fact, stakeholders don’t have “software requirements.” They have work they want to get done. Software provides cognitive support for specific business activities. Stakeholders mostly don’t know or care what it has to do to help them. That’s not their job or expertise. What they want is just for their activities to go well: efficiently and without breakdowns.
Creating software that can do that requires understanding the work it supports. That’s not treated as part of the usual requirements analysis process, however. It is key to the meta-rational alternative.
Stakeholders can’t “know what they want”
Apparently, stakeholders do want something, even though they can’t say what it is. This seems puzzling. Do they have dark unconscious Freudian desires for software, which they can’t access?
Rationalism usually assumes everyone knows what their desires are. “I don’t know what I want” is an everyday experience, though, and usually not irrational. You don’t know what you want until you get it, or at least can see specific choices. That’s true of snacks, and of software.
Is that lack of knowledge? Or is it that “what you want” is nebulous and contextual? Desires, and purposes, manifest as aspects of interactions. We often can’t say what they are, or will be, outside that situation.
This is one reason stakeholders can’t explain “requirements” accurately when you interview them. They aren’t doing their work then! Concrete activity is largely driven by visible specifics of a situation. (We covered this in “Meaningful perception” in Part Two.) If you are not presently in a work situation, you can’t say what you would do in it, because you can’t perceive it. Requirements elicitation interviews demand that stakeholders envision work processes out of context, in a conference room somewhere, which is difficult or impossible to do accurately. (It would be better to watch them working… which is not part of standard requirements analysis practice.)
Stakeholders may have confused, vague, or mistaken ideas about how they do their own work, how others in the organization do theirs, or how particular types of work fit into the organizational context.
Software purposes are nebulous partly because the activities they support are nebulous. Business operations, and jobs within them, are not perfectly defined. What you did today at work cannot be described in perfectly precise detail; nor could its purpose be.
Even if software arrived today that perfectly supported today’s work, what you will do tomorrow will be a bit different, requiring some on-the-spot improvisation. Improvisation is nebulous and indescribable.
Purposes change as the business changes. By the time software is built, what the business needs may be significantly different. (What has to change in the inventory software when a supermarket installs self-checkout machines?) Well-architected software must anticipate a nebulous range of unknowable future uses. Later sections of this chapter explain how that can be done.
Much business work consists of making reasonable judgement calls that can’t be entirely explicit. (“Usually we don’t pay invoices until they’re a couple months overdue, but we’ll make an exception for Patel’s farm because they’re our only eggplant supplier and they sounded serious about cutting us off last time we let it slide.”) Decisions purely on the basis of explicit rules can be made by computers, and should be, and usually are. In contrast, one can’t explain implicit judgement-making to requirements analysts.
Much collaborative work involves expertise from several domains, provided by different people, none of whom can fully understand how goals are jointly accomplished. Each person does their work, prepares a summary document, and sends it off to the next. They may have only vague and inaccurate senses of what gets done with it after that. Consequently, many purposes don’t live in anyone’s head; they are distributed properties of jointly improvised activity.
Experts often have outright mistaken high-level concepts about what they do themselves. If they examined the details of their own work closely, they'd see that it's different. Since the broad concepts do not determine the course of the actual activity, misunderstandings can persist. An example we saw in Part Three was that most scientists believe myths about The Scientific Method that don't describe their own work at all. It's common in business for there to be "the official story" about how something is done, frequently invoked; and then there's how it is actually done. Sometimes everyone knows the score; commonly, though, no one has an explicit knowing-that model of it, although everyone implicitly knows-how to do their part.
Stakeholder representatives are often executives who have limited knowledge of what their underlings do.
Stakeholders may fail to mention critically important requirements because, from their perspective, they are super obvious and go without saying: “fruit has an expiration date.” You need to have a detailed enough understanding of their work to notice that gap, and then get them to fill it in: “…so we track that by…”
Due to all these sorts of problems, stakeholder representatives may think and say they want things which, when they get them, turn out not to be what they needed. Basing requirements on their explanations may produce a software system that would be ideal for the business as they described it, but lackluster or outright unusable in reality.
I personally have benefited enormously from this difficulty. In the 1990s, I founded, grew, and sold a small software company whose product analyzed and kept track of a particular sort of data generated in pharmaceutical research. Several much larger companies also made software for this purpose; my company outcompeted them by satisfying researchers’ needs, whereas theirs didn’t. Why?
The other companies performed extensive requirements analysis. The biggest spent a million dollars just on travel expenses for the requirements analysts, flying them around the world to talk to the scientists who needed this software assistance. The scientists explained what they did and what they needed, and my competitors gave them exactly that. Then the scientists said “uh, sorry, this doesn’t do what we need, after all” and bought my product instead.
The scientists had two radically different, incompatible ways of thinking and talking about their work. One was in terms of the products of a chemical process; it was an inaccurate simplification, although valid most of the time. The other was an accurate explanation of the process itself. The scientists gave the requirements analysts the first explanation, probably assuming correctly that they were talking to non-scientists who would find the second explanation too hard to understand.
Unlike the requirements analysts, I had worked closely with chemists doing this sort of work for a couple of years. Although not a chemist myself, I had enough scientific education to understand the issues pretty well. I could recognize that their two mental models were non-equivalent. And then I could envision and build software that could serve their purpose. My total development costs were less than the big company's cost for requirements analysts' travel.
I succeeded by getting the ontology right. The software’s primary representation was of chemical reactions (chemists’ more difficult, accurate mental model), whereas the alternative programs represented classes of molecules (the simplified, sometimes-inaccurate model).
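Here is a hedged, drastically simplified sketch of the difference between the two ontologies. The names and fields are hypothetical, and real chemical informatics systems are far more elaborate; the point is only what the primary representation is:

```python
from dataclasses import dataclass, field

@dataclass
class CompoundClass:
    """The simplified, sometimes-inaccurate model: a named class of molecules."""
    name: str                                         # e.g. "substituted pyridines"
    members: list[str] = field(default_factory=list)  # molecule identifiers

@dataclass
class Reaction:
    """The harder, accurate model: the chemical process itself."""
    reactants: list[str]   # what goes in
    products: list[str]    # what comes out
    conditions: str        # solvent, temperature, catalyst, ...
```

Roughly speaking, a program whose primary records are CompoundClass objects can only describe what the process is supposed to produce; one whose primary records are Reaction objects can describe the process the scientists were actually doing and needed to track.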
Getting the ontology right is key to making great software. It’s important enough to warrant a substantial discussion, in an upcoming section. Two case studies there draw on my pharmaceutical computing experience. Getting the ontology right is important in formulating Problems in general as well; a chapter later in this Part discusses that as an issue across all domains, not just in software.
Requirements analysis refuses to confront nebulosity
Recall from “Rationalism’s responses to trouble” in Part One that:
When it encounters ontological nebulosity, rationalism typically misinterprets it as either linguistic vagueness or epistemological uncertainty.
Those are about human cognition, so it may seem that rationality could overcome them. We can sharpen our language and gather more data, and then maybe eventually, or at least in principle, rationalism could deliver on its guarantee. Nebulosity is about the world, so it can’t be fixed, and rationalism tries to ignore it.
So requirements analysts may misinterpret the ontological nebulosity of purposes as a linguistic problem of vagueness, ambiguity, or imprecise definitions in stakeholder explanations of their requirements. Or they may misunderstand nebulous purposes as an epistemological problem of incomplete knowledge of what the true requirements are, on the part of the stakeholders and/or the analysts.
Interviewing a supermarket Chief Operating Officer:
So, what should the new inventory system do?
It needs to be reliable!
Great, we’ll make sure that’s highlighted as the most important requirement. So, could you say what else it should do?
What kind of question is that? It needs to keep track of how much of what we’ve got where. That’s what an inventory system is. It doesn’t need to do anything else.
This could be interpreted as extreme vagueness. Requirements analysis methods then recommend eliciting details through asking more specific questions. That can make interviewees hostile, or apologetically unable to answer. A stakeholder may say “this bit needs to work better,” but can’t describe more than general revulsion at frequent, inscrutable difficulties.
Uh, yes, so how reliable does it need to be?
Look, it just needs to work! We need to know how many eggplants we’ve got! The system we have now gets it wrong half the time.
Maybe this is better thought of not as vagueness, but as an epistemological problem. The COO simply doesn’t know what the software should do. Then, requirements methodology supposes, someone in the organization must. It’s common, though, that so far as interviews can determine, no one in the organization does.
Maybe everyone agrees that “we want a more reliable inventory system”; but specifically what does that mean? They probably don’t know. They just have a vague sense that the current one is causing expensive problems and everyone complains about it and the IT people can’t seem to fix it.
This suggests that requirements gathering has to be supplemented with analysis: the initially obscured requirements can be discovered by being more rational about them than the stakeholders. The recommended practice, though, is to be more formal, by rewriting what stakeholders said in pseudo-technical language. Plus, if you are hardcore,3 drawing diagrams in specified forms, such as requirements drawn in the Unified Modeling Language, part of the Rational Unified Process.
This work may clarify ambiguities or turn up contradictions, but generally there’s not much actual analysis involved. The semi-formalization is rationalist lipstick. (More about that in a moment!)
Finding out what’s gone wrong with an “unreliable” inventory system might be a substantial project, using several sorts of expertise that requirements analysts lack. It might be buggy software, or software misconfiguration, or a networking problem,4 or the software does not get used as intended because users misunderstand it, or it was never intended to do some things it’s now being used for—although it can be mostly made to work that way. Locating the trouble is not part of requirements analyst’s job, and they don’t have the training for it.
It’s a grungy meta-rational task; and if gets done, it may well be a software architect that does it.
Requirements analysis forces premature formalization
The semi-formality of the requirements document rarely reduces nebulosity effectively. What counts as meeting the requirements does not get nailed down, due to the inherent limits of definition.5 It’s common for the development team to deliver something that could reasonably be counted as meeting all the requirements, but that is barely usable, or not usable at all. It meets the letter, more or less, but not the spirit—or rather, misses the vague underlying sense of what was needed.
True theories may be wrong. The problem may not be that the requirements document’s rational representations are factually false. It may accurately reflect what stakeholders say they want, but still be misleading and useless.
Stepping back, the meta-rationalist take is that rationalism’s failure here is an example of premature formalization: creating a formal Problem before you have an adequate informal understanding of the context and purposes. Formalization is an inherently meta-rational activity; we’ll have a full chapter about that later in this Part.
Before attempting formalization, we should understand why it can be valuable, and when.
Rationality works only when reality and the formal statements correspond well enough. Making statements formal may sometimes help you recognize errors, but does not automatically make them any more likely to be correct. Formalization can force clarity, eliminate ambiguity, and reveal non-obvious contradictions between statements. It does not do any of those things on its own, however; it is a tool that is effective only when used skillfully.
We saw in “Are eggplants fruits?” that formal logic eliminates important syntactic ambiguities, but does not necessarily help with semantic ones—which cannot, in fact, be entirely eliminated:
At exactly what point does an aging eggplant cease to be an eggplant, and turn into “mush,” a different sort of thing?
If the requirements document specifies, in semi-formal language, that the inventory system must accurately track the quantity of fruit in the warehouse, it remains unclear whether eggplants count at all; and if they do, exactly what should or should not count as an eggplant. You need to understand whether that matters for the business, whether it belongs in the requirements document, and what the implications are—before translating into requirements-ese.
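When such an ambiguity finally does get resolved, it is often by burying an arbitrary cutoff somewhere in the code. A hypothetical sketch:

```python
from datetime import date, timedelta

# Hypothetical sketch: formalization didn't answer the question of when an
# eggplant stops being an eggplant; it just turned it into a constant that
# somebody, at some point, had to pick.
SHELF_LIFE = timedelta(days=10)   # why ten days? nobody remembers

def counts_as_eggplant(received: date, today: date) -> bool:
    """Should this item still appear in the eggplant inventory?"""
    return today - received <= SHELF_LIFE
```

The cutoff may be fine, or it may quietly contradict how the produce manager actually decides what to throw out. The requirements document will not tell you which.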
Particular formalisms support specialized formal reasoning methods, which are likely to work in particular sorts of situations. We should ask: does the semi-formality of the requirements document format enable such methods?
In practice, no. The users of the document—both stakeholders and engineers—could reason about the requirements better if they were explained in plain language. The semi-formalism is often a barrier to understanding. It is not an affordance for a specialized, extra-reliable inference method.
Formal specifications are justified, or even an absolute must, in some software projects. That’s when the stakes are very high: for example in cryptography, spacecraft engine control, and core parts of widely-used software infrastructure such as databases, compilers, and low-level network code. Bugs in such cases can cause billions of dollars of damage in a single incident. Small, critical programs can and should be proven correct mathematically.
But “correct” relative to what specification? Semi-formal “requirements analysis-ese” won’t cut it. In such cases, we need specifications expressed in mathematical logic, which is the most formal stuff we’ve got.
Requirements analysis has difficulty prioritizing
Enterprise software systems generally do many different things, because they support many different sorts of work. Building them is expensive, and usually it’s not financially feasible to make them do everything stakeholders want. Requirements must be subject to cost/benefit analysis. Success depends as much on not building unimportant or outright wrong things as on building the necessary ones.
Purposes are nebulous, but so is software system construction—as upcoming sections will explain. Consequently, it is difficult to estimate accurately what it will cost to make software do particular things. Even software architects and engineering managers often get this quite wrong.
Stakeholder representatives generally can’t do this at all, but often vaguely imagine they can. So they may assert requirements that are effectively impossible, or way out of budget; and they may neglect to mention software capabilities that would make their lives significantly easier, but which they wrongly suppose would be too difficult to provide.
They also may have only a vague sense of how valuable particular features would be in their work, so both parts of the cost/benefit ratio are nebulous. In “eliciting requirements” from them, you have asked them to fantasize about imaginary future software, which is difficult even for engineers. They may demand unrealistically long lists of features, with little sense of which they actually need and which “might be nice.”
Or, they may have highly specific ideas about some things they want, which may be unrealistically difficult to provide. The standard process doesn't allow for "How about this other thing, wouldn't that be just as good, or better even? It would be much easier to build." Only the tech people could suggest such alternatives, and they are not meant to be involved in the process at this stage.
Different stakeholder classes may prioritize different requirements, or even have actively conflicting ideas about what the system should do. Getting agreement on what’s important for the project overall can be politically difficult. Software requirements analysis may become a pawn in a broad interdepartmental power struggle.
In standard practice, it falls to requirements analysts to negotiate priorities; or they may have full authority to decide. They do not, however, have the ability. They don’t have enough technical background to understand cost implications. In practice, handwaving estimates are used, which may be off by a factor of ten or even a hundred.
Let’s eavesdrop on a team of requirements analysts sharing notes in a conference room at the head office of the supermarket chain they’re embedded in:
What did you get from the Chief Financial Officer?
He explained, uh, let’s see, eleven summary reports he wants to see. Some of them every morning. Total sales across all the stores, in categories like “produce” and “household.” Stuff like that.
OK, he’s important, and that sounds important.
Yeah, and easy. It’s just putting a few numbers on the screen. Fancy it up a bit with some colored boxes and stuff. I could probably code that up myself, even!
Right, so that’s eleven items in the requirements document that are top priority.
There’s a couple problems here. First, the CFO may have neglected to mention that these eleven were the examples that happened to come to mind. He needs different summary information for different purposes, some of which come up only irregularly. There’s way more than eleven that he’s asked his subordinates to prepare during the past year. If there’s an unusual event tomorrow, he might need sales data categorized an entirely new way.
If the development team delivers a system that can generate just the eleven reports he listed, he may be disappointed—and vengeful. What he needed was a way of specifying types of reports, not any specific list. That’s what he does now, by telling the Comptroller to get them made up, but it didn’t occur to him that he’d do the same with software.
Another problem is that if there’s a large enough quantity of data, required report types can have major implications for database architecture. Depending on how the data is arranged, different summary totals may be very quick and cheap to calculate, or very slow and expensive. Getting this right requires deep database configuration expertise and perhaps person-years of full-time work. The requirements analyst was a little over-optimistic in imagining he could code it up himself.
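A hedged, drastically simplified sketch of why the arrangement matters; the data and names are hypothetical:

```python
from collections import defaultdict

# Hypothetical sketch. With millions of sales rows, recomputing a daily
# category total by scanning every row is slow and expensive. Maintaining a
# running summary keyed by (day, category) makes the report nearly free,
# but only if the system was arranged to maintain it in the first place.
sales = [
    # (day, store, category, amount)
    ("2025-03-01", "store-12", "produce", 4.99),
    ("2025-03-01", "store-12", "household", 12.50),
    ("2025-03-01", "store-07", "produce", 2.49),
]

def total_by_scanning(day, category):
    """Slow path: touch every row each time the report is requested."""
    return sum(amt for d, _, cat, amt in sales if d == day and cat == category)

# Fast path: a summary updated as each sale is recorded.
summary = defaultdict(float)
for d, _, cat, amt in sales:
    summary[(d, cat)] += amt

assert total_by_scanning("2025-03-01", "produce") == summary[("2025-03-01", "produce")]
```

The real version of this choice involves indexes, pre-aggregated tables, and how data is distributed across machines, which is why it can take person-years rather than "putting a few numbers on the screen."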
Right, and Jacques, what have you got?
Well, you saw the transcript of my interview with the Chief Operating Officer. That was extremely helpful! “Reliable!” [Laughter all round] Oh, also, this weird woman from the warehouse said she wants to use a barcode scanner to log stuff arriving at the loading dock.
Oh. Um. Anyone here got an idea how a barcode scanner works?
I dunno, I think there’s a laser?
Then what?
I dunno.
That sounds difficult. It’s probably AI or something. We don’t have the budget for that.
Well, nobody told us to talk to truck guys, either. She just collared me when I was coming out of the men’s room.
OK, good, it’s out of scope for us.
“Nice to have?”
No, that’s just going to cause complications. Let’s drop it.
In fact—I happen to know this, from writing software for tracking research chemicals—barcode scanners are extremely easy to support. In the simplest case, they pretend to be keyboards, so when you pull the trigger they’ll fill in the active field in a form, just as if you were typing it. In that case, they require zero code to support. More complicated uses are still easy.
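A hedged sketch of what "zero code" means here. A scanner in keyboard-wedge mode types the decoded barcode and presses Enter, so any program that reads a line of input already supports it; the lookup table below is hypothetical:

```python
# Hypothetical sketch of logging crates at the loading dock with a
# keyboard-wedge barcode scanner: the scanner "types" the code and hits
# Enter, so ordinary line input is all the integration required.
CRATE_CONTENTS = {
    "0012345678905": ("eggplants, crate of 48", 48),   # hypothetical barcode
}

inventory: dict[str, int] = {}

while True:
    code = input("Scan next crate (blank line to stop): ").strip()
    if not code:
        break
    description, quantity = CRATE_CONTENTS.get(code, ("unknown item", 0))
    inventory[code] = inventory.get(code, 0) + quantity
    print(f"Received {description}: +{quantity}")
```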
And, I expect that using a barcode scanner to log receipt of crates of eggplants as the truck is unloaded would significantly increase the reliability of inventory records, while speeding the job and thereby cutting costs.
Accurate cost/benefit analysis is impossible up front, even for the technical team. A better process includes system architects and engineering managers from the start, rather than their getting a completed requirements document as their project entry point. Then the team should continually reevaluate priorities, based on increasingly accurate cost estimates as construction proceeds and architectural decisions (which affect difficulty and therefore costs) are made.
This implies continuing revision of requirements. Some will have to be dropped as too expensive; and some, previously considered lower priority, may get implemented because they turn out to be surprisingly inexpensive.
What is the lesson here for meta-rationality broadly?
Problems don’t fall from the sky, ordered by the Cosmic Plan. They are not pre-existent objective entities. They are constructed through a process of social negotiation. Figuring out the right Problem specification and Solving it are interdependent, interwoven, ongoing processes.
After decades of bitter experience, it’s widely accepted in the field that hardcore rationalism in requirements analysis—insisting on a full Problem specification before Solving begins—doesn’t work. Most organizations adopt somewhat more flexible, exploratory, incremental, meta-rational approaches, along lines discussed in the following subsection, “Understanding software purposes meta-rationally.”
However, the tug toward the rationalist approach remains, due to its apparent offer of certainty, understanding, and control. (“A contract is a contract!”) As a result, far more rationalism than meta-rationalism still typically operates in requirements analysis.
Why stakeholders attempt architectural design
A common problem in requirements analysis is that when you ask stakeholders what software should do, they tell you instead how it should work. They start working out a database architecture or user interface design. They may produce quite elaborate proposals. They may insist that these get included in the requirements document. If they have enough clout, you may not have the authority to say no.
But the requirements document is not supposed to specify how the software works, just what it needs to do. It’s a Problem, not the sketch of a Solution. Also, of course, although a stakeholder representative from the Finance Department may be a sophisticated user of software, and perhaps also an amateur programmer, or even have taken numerous undergraduate courses in the field, they are not qualified to make architectural decisions for enterprise software.
And, they must know this. So why are they attempting to make critical decisions outside their professional expertise?
There are programs I use daily and am expert in, which I could not describe accurately or in detail without having my computer in front of me. Excel is an example. With the app open, I can see what it does, and as I’m using it, I can see what to do next. I do much of that without thought, as “muscle memory” for where commands live, and sequences of actions that accomplish common tasks. When I’m not using it, I have only a vague and incomplete memory for what it does.
Asking stakeholders what software should do is asking them to imagine using a future program whose nature is currently entirely undefined. It is difficult even for expert software developers (such as me) to imagine using a program we're already highly familiar with. It is far more difficult to imagine using a ghostly non-existent program, whose shape and functions are not yet specified. Envisioning that is, in fact, the unique and critical capability of skilled software architects.
An intelligent interviewee implicitly recognizes the difficulty, and comes up with a sensible solution. If they can work out how, at a high level, the software would have to work, they can envision using it, and then see how it could support their work. Fixing the free-floating architecture in imagination starts to make sense of the ludicrously vague, open-ended question “what do you want it to do?”
In fact, stakeholders who try to force architectural decisions grasp, better than requirements analysts do, one point about how software design should work: they implicitly recognize the inseparability of Problem formulation (requirements analysis) from Solving it (technical issues, especially architecture).
That is a central point in the next subsection, on understanding software purposes meta-rationally, so this is the perfect point to segue!
I would love to hear what you think!
A main reason for posting drafts here is to get suggestions for improvement from interested readers.
I would welcome comments, at any level of detail. What is unclear? What is missing? What seems wrong, or dubious? How could I make it more relevant? More practical? More interesting? More fun?
For example, ISO/IEC/IEEE 29148, the requirements analysis standard promulgated jointly by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE). There are several others, and some companies create and require their own.
More politely than “opinions”: concerns, needs, interests; purposes. I’m being a bit snarky here, reflecting my view that professional requirements analysis practice is more about corporate politics and rationality theater than finding out what will help people do their work.
Different standards for requirements analysis mandate varying levels of formality.
As in “Getcha boots on!”
I discussed the general impossibility of full definition in “Are eggplants fruits?” in Part One.