Oblivious software management rationalism
... and the value of rationality in software development; with meta-rational antidotes
Software development may be the domain in which rationality is most important.
Computers are material embodiments of mathematical logic; they are the most formal physical objects in existence.
Software development may be the domain in which meta-rationality is most important.
Rationality, unaided, ignores context and purpose. The contexts and purposes for software systems are exceptionally nebulous, but also critical for success, so meta-rationality is necessary to guide rationality’s use in development.
Because software is formal, the temptation to proceed with rationality unaided is exceptionally strong. Giving in to that temptation is rationalism: overestimating the power of rationality, and therefore failing to apply meta-rationality as well. That often causes failure for a whole software development project.
Participants in software development projects typically have backgrounds either in software engineering or in business. (I’ve done both.) We bring to bear two sorts of rationality, technical and managerial. We are prone to two corresponding sorts of rationalism, each of which can cause project failure.
This free post is a draft section of my meta-rationality book. If you are newly arrived, check the outline to get oriented. (You might want to begin at the beginning, instead of here!) If you run into unfamiliar terms, consult the Glossary.
In the book, it’s titled “Rationality, rationalism, and meta-rationality in software development,” but that’s too long and too boring. “Oblivious software management rationalism” is more interesting, and still reasonably accurate as a summary.
The software architect’s role, ideally, is to supply missing meta-rationality. Architecting mediates between the business’s purposes and what is technically feasible. It mediates between a panoramic view of the project, in terms of the software’s business use, and a zoomed-in view of the intricate assemblage of technical details.
This section is about how software development projects can fail due to insufficient meta-rationality. It discusses, first and mainly, failures of management, particularly regarding architecting. Second, it briefly discusses engineering rationalism. The bulk of this chapter repeatedly contrasts rationalist and meta-rational approaches to software engineering, so that second subsection is only an overview.
Rationalism and meta-rationality in software management
Surveys find that software development projects typically cost about twice what was budgeted and are almost always delivered late; about half fail completely. This should be unacceptable to everyone involved. It is widely considered unacceptable within the software industry, but it remains common.
From the point of view of senior executives, this is a management problem, by definition. Whenever anything goes significantly wrong in a business, it’s a management problem: it was some manager’s responsibility to ensure that thing went right, so they were doing their job wrong.
As an executive who learned rational management in business school, you demand predictability and control. You develop and document a general strategy, derived from clear overall institutional objectives. You approve projects on the basis of a quantitative risk and benefit model. You institute a standardized planning process and hold mid-level managers accountable for producing and executing project plans. Individual employees must have well-defined responsibilities. You have an org chart with specific qualifications for each job. You don’t allow hiring someone into a position unless HR determines they are qualified to do the work.
None of that works for software development management.
Management theory and practice originally developed for assembly-line manufacturing in the early to mid-20th century. That is still its paradigm, and business schools teach techniques derived from it as foundational. Manufacturing management can, and should, be done almost entirely rationally. A factory can be made to conform to a simple formal model, by standardizing inputs and shielding internal processes from external nebulosity. The inputs are materials (7-gauge Type 301 stainless steel wire) and generic labor. The differences between assembly line workers can be ignored, or quantified in a simple model (“widgets per hour”). The assembly line runs nine to five inside a big box with security guards to keep random people out, and with monitoring systems inside to ensure the workers and machines are working when they’re supposed to.
The purpose of a factory is simple and easily formalized: to mass-produce uniform products (eggbeaters and potato mashers), whose design is simple, fixed, and extremely well understood, at minimal cost. Basic mathematical optimization techniques are sufficient to guide decisions. If the FDA declares that eggs are now scientifically proven to be good and potatoes bad, or vice versa, and demand shifts, you can calculate how many workers to transfer from putting handles on eggbeaters to putting handles on potato mashers.
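To make the contrast concrete, here is a toy version of that reallocation calculation (a sketch only; the production rates and demand figures are invented for illustration):

```typescript
// Toy factory reallocation: all numbers are hypothetical.
const handlesPerWorkerPerDay = 200;       // the simple "widgets per hour" model of labor
const eggbeaterDemandDropPerDay = 3_000;  // fewer eggbeaters needed per day
const masherDemandRisePerDay = 2_400;     // more potato mashers needed per day

const workersFreed = Math.ceil(eggbeaterDemandDropPerDay / handlesPerWorkerPerDay);  // 15
const workersNeeded = Math.ceil(masherDemandRisePerDay / handlesPerWorkerPerDay);    // 12

console.log(`Move ${Math.min(workersFreed, workersNeeded)} workers to potato mashers.`); // 12
```

That the whole decision fits in a dozen lines is the point: the factory has been engineered so that its management problems stay this simple.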
Software development is the opposite in every respect. You produce a single, unique, unimaginably complex abstract object, a software system—not millions of identical, simple, physical ones.1 It will be significantly different from anything else that exists (or else it would be cheaper to buy that instead of making something new). Building it is partly a matter of novel discovery, not optimizing an already well-understood process.
Every piece of software depends on numerous others, supplied by external parties. These are the closest analog to the material inputs to a factory. Unlike wire, they are not standardized. They constantly change out from under you, as their makers release new versions that fix bugs and add functionality. In the worst case, a software product you rely on may be discontinued altogether with no warning. It is impossible to entirely shield against this unpredictable external nebulosity.
The purpose of the software cannot be fully known when development begins, or even until it is completed, and sometimes not until several years later. This is surprising! It’s the biggest source of nebulosity in software development, so it’s the topic of a large upcoming section. In short, the people who will use the software don’t know what is technically possible, and don’t know what they want. When they get the program, they may find it doesn’t do important things they didn’t think to ask for. They may also find new uses for it, perhaps years later, that the developers didn’t expect or design for. Those may become more important than the original theory of its purpose.
Software developers are notoriously not interchangeable, nor easily modeled. Two people who HR has determined “know JavaScript” may have entirely different abilities, both quantitatively and qualitatively—and nebulously, in ways that may be impossible to describe. The most productive programmers are often eccentric, obnoxious, opinionated, and can get a higher-paying job tomorrow at a competitor if you tell them to do something they don’t like. They work best in close-knit, self-organized teams, in which individual capabilities and responsibilities are opaque to outsiders; so a management decision to transfer seventeen “workers” from project A to project B is likely to throw both into dysfunction for months.
Software development management confronts nebulosity at every turn. The work is completely different from manufacturing, so the management methods have to be completely different. Rational ones reliably fail. This is a meta-rational observation: a particular rational system—conventional management theory—does not fit this context and purpose. Not recognizing this mismatch is a common, critical failure to apply needed meta-rationality in management.
A tempting alternative model is construction management. Your firm is hired to build a parking garage. You send people out to get the geometry of the site. You hold talks with counterparties at the company to get details of what they need. You give that to an architect, who draws up blueprints for the finished structure. You work out a detailed construction plan as a Gantt chart that lists all the necessary tasks. It specifies which team or subcontractor will perform each task, with a schedule of when each must be completed. During construction, you monitor the work closely to make sure the schedule doesn’t slip, and reassign resources optimally to get back on track if it does.
This doesn’t work for software development either. Often you can’t know how it will be deployed (the analog of “the geometry of the site”). The people who pay for software have only the vaguest idea what they need, so creating a detailed architectural plan before building it reliably fails. Even if you knew exactly what was needed, you can’t know what technical hurdles software construction will face until you get to them. Often the engineers have to innovate, which is rare in parking garage construction. Innovation takes an unpredictable amount of time, blowing up your schedule.
So, the term “software architect” is misleading: a building architect’s job is done before construction begins, and their blueprints remain mostly fixed throughout construction. Trying to permanently fix software architecture at the start of a project is called “waterfall development,” which is widely discredited. It doesn’t work; the architecture must be revised throughout the process, ideally through continuing involvement of the original architect.
This is a domain so nebulous that plans rarely reflect reality. Insisting on following one anyway results in mostly unproductive work building wrong things. Software development mostly cannot proceed rationally; it has to be done meta-rationally. Otherwise, if a project finishes at all—they frequently never do—the new software will be much more complicated than it should be, making it difficult to use, unreliable, and unreasonably expensive to improve, or even to keep running.
Nevertheless, detailed planning and progress monitoring—major methods of rationality—are constant temptations for software development managers, because how else can you get control? And how can you afford to give up control if you are held responsible for success?
Meta-rational management must begin by acknowledging that nebulosity is unavoidable. You must relax the rationalist demand for certainty, understanding, and control. That demand is actively counterproductive: a project is more likely to succeed with less of all three. Development management must be more like improvisational theater or dance.
You cannot get certainty, but you can get some predictability by making trade-offs. For example, you can structure the work to make it likely that you’ll deliver something minimally usable by a given date, so long as you relax requirements for exactly what it must do. Alternatively, you can make it likely that you’ll eventually deliver what was asked for, so long as you don’t expect to know when.
No one can comprehend all the details of a complex software system, but you can get enough impressionistic, textural understanding to make sensible decisions about what to work on next and how.
Trying to stick to a plan is disastrous, but plans are helpful so long as you understand they need frequent revision. They are there to be improvised around, not as an instrument of control.2 You can’t control development, much less developers, and they resent and resist monitoring. You can inspire and influence them, and check in at appropriate intervals to find out what support they need.
Invisible
Software architecting is often invisible, denigrated, neglected, overcontrolled, or rationalized. Here we revisit themes of the “invisibility of meta-rationality” chapter to watch them play out in this domain.
Recall that from rationalism’s point of view, the only two ways of being are (1) rationality and (2) the undifferentiated mass of everything else. Implicitly, it regards all non-rational thinking and acting as uniformly inferior and uninteresting.
For those who have not yet discovered the role of meta-rationality, its operation is invisible, and its results incomprehensible. When there is a danger that they may become visible, they must be ignored, reconstructed rationally, or even prohibited as unacceptable anomalies.
Meta-rationality characteristically crosses disciplinary boundaries, connects levels of description, and conjures with nebulosity and pattern. There aren’t college degrees or training certification programs for those. If you try to explain them to business or technical people, most will get increasingly annoyed. In practice, we’ll see later, meta-rational work looks to them like “meddling in everyone’s business, asking questions that make no sense, and issuing sweeping pronouncements without anything to back them up.”
“Why must software architecting be meta-rational?” listed several fields that software architecting must bridge. There are established rational methods and university curricula for each. As with all rational disciplines, they work quite well so long as practitioners avoid wading into marshes of nebulosity. Businesses pay high salaries for professionals in each domain to do their work according to the standards of their professions.
There are no professional standards for software architecting. There are no generally recognized methods. University courses are rare and—as far as I can tell—lack substance. There are some books. I skimmed several in preparation for writing this chapter, and was not impressed. Mainly they describe the work in extremely abstract, general terms. When occasionally they get specific, the methods they suggest are not credible.3 In practice, you learn software architecting informally through apprenticeship, from discussion at trade conferences and specialized social media sites such as Hacker News and Stack Overflow, and by doing it badly and learning from your mistakes.
As a meta-rational task, software architecting operates above and around systems: the software system itself, but also all the business systems in its context, such as financial systems, operational systems such as warehouse inventory-taking, and sales systems. The business specialisms view software from the outside, and see a more-or-less defective, expensive, opaque box. Developers view software’s uninteresting purpose and incomprehensible context only by peering out from within, or by preference not at all.
Only the architect has the perspective from above and around. You develop a good-enough big-picture understanding, but not expertise, in all relevant areas. You need a contextual feel for aspects of each rational domain that might be relevant to the architectural decisions.
Connecting levels of description, meta-rationality locates key hard constraints at the detail level and ties them together with this impressionistic big picture. The architect must connect the real world—of refrigerated trucks, paper-mail invoices, and sociopathic executive politics—with the formal world of data structures, software modules, and asynchronous microservice interface protocols. It may turn out that seemingly minor details of the format of printed invoices, which everyone in the Accounts Payable department takes for granted and would never think to mention, have crucial implications for inventory software architecture.
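As a made-up illustration of how such a detail propagates upward (the field names, the lot-number convention, and the parsing rule below are all hypothetical, not drawn from any real system):

```typescript
// Hypothetical: Accounts Payable habitually records supplier lot numbers in a
// free-text "remarks" line on printed invoices. Nobody thinks to mention it.
// If the inventory system must reconcile invoices against stock, lot identity
// gets forced into its data model -- a detail with architectural consequences.
interface InvoiceLine {
  sku: string;
  quantity: number;
  unitPriceCents: number;
  remarks?: string;              // e.g. "LOT 2024-117, refrigerated"
}

interface StockRecord {
  sku: string;
  lotNumber: string;             // exists only because of the paper invoice convention
  quantity: number;
  receivedAt: Date;
}

// The informal paper convention becomes a formal parsing concern:
function extractLotNumber(line: InvoiceLine): string | undefined {
  const match = line.remarks?.match(/LOT\s+([\w-]+)/i);
  return match?.[1];
}
```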
Denigrated
Some organizations recognize the importance of software architecting and support and reward the role. Most do not. Many of the people an architect interacts with resent them and wish they would vanish.
Everyone rational who is involved in software projects agrees that the first step must be to specify precisely what the eventual product must do. You can’t build something if you don’t know what it is. Executives on the business side typically see this as the marketing department’s job, which technical people should stay out of. So the marketing department (or some other non-technical group) performs “requirements analysis” and delivers a “requirements document” to technical management.
This doesn’t work at all, as we’ll see in the next major section, “Software purpose nebulosity.” One reason is that non-technical people don’t know what is technically feasible, so they may ask for the impossible, or may fail to ask for useful functionality that they wrongly think would be difficult to produce. They are unable to envision what software might do for them.
This means that someone has to re-do the requirements work. The architect, bridging business and technical expertise, is ideally situated. Unfortunately, to executives on the business side, the architect seems to be an unqualified intruder from the technology department, second-guessing work they’ve already had done by their own experts. The architect claims to need to discover the purpose of the software; but “we’ve already told you that, our analysts have gathered all the requirements and put them in a document we gave you in the professionally recognized standard format.” So the architect must apologetically say “well, the technical people need a bit more detail, you see,” rather than “yeah, that document is nonsensical and utterly useless.”
As architect, to find out what the business actually needs, you may need to pester dozens of people all over the company, asking super stupid questions (because initially you know zero about their work). Often you may hear “Look, I’ve got a job to do, I can’t spend hours teaching you basic accounting concepts!”
Once you start to figure things out, your questions shift from clueless to apparently insane. Aspects of work left implicit in professional practice must be made explicit, and probing them may make professionals uncomfortable. It may seem that you are criticizing them for not understanding their own work. Aspects that seem trivial to them—because they get routinely papered over with circumrationality—may be critical for you (because they must be formalized for the software).
The architect’s function isn’t understood, so the work appears pointless, wasteful, and annoying. An architect who is a skillful communicator can try to explain what they are doing and why, but there are limits to how effective that can be. This is a failure of management culture, and it’s usually not feasible for the architect to correct it.
On the technical side, it seems the architect is telling engineers how to do their job, and making the most important technical decisions on the basis of hand-waving abstract arguments about business stuff and some sort of supposed creative aesthetic intuition or something. Engineers believe technical decisions should be made on the basis of rational, technical considerations about software reliability, capabilities, and quantitative performance. Software architects usually are not expert on all those details. They can’t be; even if they have a software engineering background, which not all do, they have to spend most of their time sweet-talking accountants into explaining internal transfer pricing models, not staying up to date on the latest JavaScript framework.
All these hostile forces may be right to denigrate software architecting. It is difficult, and usually done badly. If it were easy, software architects wouldn’t make billion-dollar mistakes (as they do). Software projects often fail; bad architecture may be the most important reason.
Bad architecture is common because skilled architecting requires meta-rational skills that aren’t taught and that most people don’t realize exist. There is no formal qualification for the job, but also many of those who do it aren’t functionally qualified either. When the work is not supported or rewarded by upper management, and when it involves mostly talking to reluctant or even hostile strangers, those who could do it well may sensibly avoid it. Management usually doesn’t have the meta-rational understanding needed to detect whether someone is doing it well or not.
The role can attract people who aren’t good at anything except politics, and may not be capable even of rationality. Rational people may be right to dismiss those as “bullshit artists”—a term regularly applied to software architects by engineers.4
Neglected
Without recognition and support, system architecting may simply not get done. Default options replace decisions that should be made meta-rationally. If no one else does the work, the technical head of the project may just use the architecture they always use, so they don’t have to learn anything new. Or, they may use an architecture they’ve never used before, because they’d enjoy the challenge of learning how it works. Or, they may choose a trendy new one because there’s growing demand for anyone who can make it work, and they can get themselves paid to learn how on this job before switching to a better-paid position next year. They may never wonder if the architecture suits the requirements of this project.
For small or routine systems, that may be fine. Off-the-shelf architectures try to be quite general. In principle, any architecture can meet any set of requirements. Selection using generic technical criteria may suffice. For complex systems, or for unusual or extreme requirements, that can doom the project from the outset.
Clueful technical project leaders recognize the cost of neglect. If no one else does the architecting job, they may reluctantly take responsibility, for the sake of the overall success of their team; even knowing their careers may suffer. You can read rueful discussions of this sacrifice, and ways to minimize its cost, on their blogs.5
Accurate architecting has substantial short-term cost and substantial long-term benefit. It may be hard to convince management to put resources into it. They want a product ASAP. “Why are you wasting time on this vague abstract fake work, instead of doing what you are supposed to, which is writing X lines of code per day?”
To be fair, it actually is possible to over-architect. A good-enough architecture is good enough. A bad architecture, thrown together as fast as possible, may be the best architecture if time-to-market is the most important factor. (Engineers hate to hear this, but it’s just true; software doesn’t exist for the sake of elegant engineering, it exists to meet people’s needs, which may include “now!”.) Putting excess time and resources into architecting can produce elaborate, elegant complexity that just gets in the way of building the thing.
Overcontrolled
Some organizations recognize that architectural decisions are critical to software project success, and that they are often done badly. In that case, it seems leaving it to supposed “architects” is unacceptably risky. Those people have no specific credentials or training, and no one else understands what they are doing as they do it, so how are we to know whether they will do a good or bad job?
To senior executives, this may appear to be a normal management problem. You must get control by imposing rationality on chaos. You eliminate these seat-of-their-pants cowboys and hold everyone else accountable for performing their expertise in accordance with recognized standards. This rationalism often causes project failure.
Nevertheless, there’s such demand that the multi-billion-dollar company Rational Software, Inc. developed the Rational Unified Process, a management framework for software development, particularly requirements analysis and architecting. The Process’s essence is to create formal representations of the work to be done, supported by expensive specialized meta-software, jargon, report formats, diagram types, handbooks, training courses, expert consultants, and other rationalist paraphernalia.
Reading first-person accounts, it seems all this is rationality theater. In the best case, it wastes technical leaders’ time producing unhelpful formal representations which they then quietly ignore. Alternatively, taking it seriously can bog a project down so badly in formal work-describing rituals that no software ever gets written.
Meta-rationality rationalized
Because software development is so necessarily meta-rational, and the industry is full of unusually smart people, valuable meta-rational insights are common. However, these usually get appropriated by rationalists and distorted into rote rationality. As mentioned in “Rational reconstruction” earlier, we’ll look at two examples: Agile, a software development management approach, discussed here; and domain-driven design, discussed in the upcoming section on software ontology.
Agile is a nebulous, amorphous collection of meta-rational insights into how software development teams can organize work effectively. It is not a specific method, nor an overall structured management methodology. That would be rational, and, if rigidly applied, rationalist. Agile is the negation of the Rational Unified Process.
The Agile Manifesto begins with four summary maxims:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Can you see how each of these is meta-rational? It may help to recall my definition:
The essence of meta-rationality is actually caring for the concrete situation, including all its context, complexity, and nebulosity, with its purposes, participants, and paraphernalia.
The 2001 Manifesto offered twelve further maxims. One of them summarizes meta-rationality itself:
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Two address the nebulosity of business requirements (software purposes):
Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
Business people and developers must work together daily throughout the project.
Another is the antidote to rationalist managerial overcontrol:
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Elsewhere in the Manifesto:
We plan, but recognize the limits of planning in a turbulent environment.
Agile came at first as a fresh breeze for developers stuck in stifling rationalist development organizations. In more nimble companies, clueful technical managers saw the wisdom in the approach, adopted it, and got outstanding results.
Others saw that and tried to emulate them. Agile became a management fad, which all but the stodgiest development organizations had to implement. Or had to say they had implemented, anyway.
Results were disappointing, on the whole. That was because the meta-rational aspects of Agile—which are, in fact, the whole thing—were overlooked, neglected and gradually abandoned, or deliberately discarded.
Agile can work only if everyone involved understands and agrees with its meta-rational nature. In the absence of a broader meta-rational culture, that is infeasible for most participants. Managers are unwilling to give up predictability and control; technical people are unwilling to improvise in a volatile, nebulous context, rather than methodically Solving a fixed Problem.
Rational technical managers were told by higher-up executives, who bought into a hype wave, that they had to Do Agile. They had no idea how, so an industry of “Agile consultants” rationalized the maxims into specific processes and methodologies which promised success. As rationality theater, increasingly resembling the Rational Unified Process, those provided lackluster results. In most cases, they were discarded after a few years.
“Agile” now often refers instead to Scrum, usually in a debased form. That is a project management methodology with a set of “ceremonies.” The central one is a daily meeting in which every team member reports their progress relative to a detailed project plan. This micromanagement is the exact opposite of the Agile principles of minimizing bureaucracy, trusting individuals and teams to organize their own work, and flexible improvisation around a general plan sketch.
An alternative failure mode is to misunderstand Agile as anti-rational. Proponents may oppose planning on principle, and reject the use of any rational methods for organizing work. This is also a failure of meta-rationality, which means determining whether and how best to use rational methods. In software development projects too large to fit into a single person’s head, the meta-rational answer to “whether” is “definitely, yes!”
From the Manifesto:
The Agile movement is not anti-methodology; in fact, many of us want to restore credibility to the word methodology. We want to restore a balance.
The broader lesson from the relative failure of Agile is that meta-rationality itself must be trained. A collection of specific meta-rational insights, such as the Manifesto, is useful only for already meta-rational people. It can only be misunderstood and misused by everyone else.
Rationality and rationalism in software engineering
Program code is perfectly formal. Reading and writing it is the most formal economically valuable activity.6
Code is also about things; it refers, it represents. Code has formal syntax, but also semantics. It is meaningful. At minimum, it is a description of a computational process: the running of the program itself. Most programs also have data structures that represent things outside the computer, “in the real world,” such as crates of eggplants in a warehouse.
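For instance, here is a minimal sketch of the kind of representational data structure meant here (the type and field names are invented for illustration):

```typescript
// A formal object that stands for something outside the computer:
// a physical crate of eggplants in a physical warehouse.
interface EggplantCrate {
  crateId: string;           // matches the barcode stuck on the real crate
  warehouseBay: string;      // a location you could walk to, e.g. "B-14"
  countOfEggplants: number;  // someone counted; the count can drift from reality
  receivedAt: Date;
}

const crate: EggplantCrate = {
  crateId: "CRT-00481",
  warehouseBay: "B-14",
  countOfEggplants: 112,
  receivedAt: new Date("2024-03-05"),
};
```

The syntax is perfectly formal; the meaning depends on the record staying causally connected to the actual crate, which no amount of formality can guarantee.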
So, software development must be rational: working with formal systems that represent and are causally connected with physical things.
Routine programming requires only basic rationality: applying known formal methods to known-solvable problems. Most programming “in the small,” meaning reading, writing, debugging, and maintaining chunks of code of up to maybe a thousand lines, is basic rationality. Basic rationality is also necessary for systematic testing: making sure all possible execution paths have been tested.
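As a small illustration of programming in the small and path coverage (a toy example using Node’s built-in assert module; the function itself is invented):

```typescript
import assert from "node:assert";

// A routine function: classify an order into a shipping tier.
function shippingTier(weightKg: number, express: boolean): string {
  if (weightKg <= 0) throw new Error("weight must be positive");
  if (express) return weightKg > 20 ? "express-freight" : "express-parcel";
  return weightKg > 20 ? "freight" : "parcel";
}

// Systematic testing: one test per execution path through the function.
assert.throws(() => shippingTier(0, false));                    // invalid-weight path
assert.strictEqual(shippingTier(30, true), "express-freight");
assert.strictEqual(shippingTier(5, true), "express-parcel");
assert.strictEqual(shippingTier(30, false), "freight");
assert.strictEqual(shippingTier(5, false), "parcel");
```

Known methods, a known-solvable problem, and a finite set of paths to check: that is basic rationality at work.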
More difficult development challenges, in which it’s not clear what method will work, or whether any method will work, count as advanced rationality. Skilled programmers live for that! We want to exercise our rational expertise by Solving difficult formal Problems. Our interest is in the technical challenges of how the software does whatever it does. Ideally, we’d like free rein to write software that chews its cud in some fascinatingly complicated and unusual way, with no distracting interaction with anything outside itself.
To the extent that rationalism gets the best of us, we are uninterested in the context or purpose of the software. That especially includes what it does from the user’s point of view. Users are nebulous nuisances, who, ideally, would be abolished.
We take the Problem as a given, and are uninterested in where it came from or why. We want the requirements analysts over in the marketing department to give us a fully-detailed formal specification for what a program must do. Then we will write it rationally, and then we will be finished. This is what most engineers and technical managers would like to do, and keep trying to do, but it doesn’t work.
It is not realistic for programming “in the large,” working with greater software complexity than anyone can fully understand. You can build a parking garage rationally because it’s a simple physical object: a crude concrete box. You cannot build a software system rationally because it is too complicated.
Writing software is mostly routine; what’s difficult is figuring out what software to write. That edges into software architecting, which is at minimum adventure rationality, and may require full meta-rationality. We work in messy situations where it is not yet clear what needs to be done, and somehow construct a Problem and a general plan for Solving it. This involves nebulous meta-rational observations, understandings, and judgments, many of which can’t be justified rationally, except at the end with “the system works, and that was how we did it.”
Software architecting requires standing meta-rationally outside and above a system that doesn’t even fully exist yet. Initially you envision its ghostly future form, and then improvise its construction with experiments. Many of them fail, for nebulous reasons, but they gradually bring the imagined system into focus. Whereas a less skilled programmer treats failures as breakdowns, more experienced ones consider them anomalies, which may offer new opportunities or sources of information.
A naive engineer might frame architecting as “using the best software architecture”—and many have fierce opinions about which that is. A less naive one might frame it as “choosing the best architecture for the requirements we were given.” Then they’d ask “How many simultaneous users does it have to support? How many petabytes of video will it need to store?” Those are questions about context and purpose; but they are narrowly technical and quantitative. Technical people may prefer to ignore wider, more qualitative questions which seem frustratingly nebulous, but which can have critical architectural implications. Most of this chapter is about those.
Engineers may dismiss all nebulosity and meta-rational considerations as stupid people-stuff.
Yeah, I know insane politics always gets in the way of doing the right thing technically. I hate that people-stuff, so I just ignore it. Fortunately, it’s not my job. Just let me get on with writing the program!
That may be a realistic personal strategy, but it has drawbacks:
“The right thing technically” is never separate from a detailed understanding of how the software will get used. (As we’ll see.) If the system you work on was architected badly, your project may be doomed, or progress may be needlessly, frustratingly difficult. Some engineers have a “so long as they pay me” attitude, but most care about their project’s success.
You are absolving yourself of engagement with real-world purposes, so you will always be someone else’s puppet. They may have bad ideas about what is worth doing, so—regardless of technical success—your project may have little value, or even negative value. Some software makes the world worse. Some other programs just waste people’s time.
Architecting is also mostly not about politics or “people skills,” although those usually play some part. Software never does everything everyone wants, and it does often fall to the architect to facilitate a negotiation between executives with conflicting priorities (and who might even be personal enemies). Also, understanding how people think and feel and act helps make software easy to use, which can be critical to success.
I would love to hear what you think!
A main reason for posting drafts here is to get suggestions for improvement from interested readers.
I would welcome comments, at any level of detail. What is unclear? What is missing? What seems wrong, or dubious? How could I make it more relevant? More practical? More interesting? More fun?
Users may run millions of copies of the software, but producing them is not considered part of the development process, and the cost of doing so is tiny in comparison.
See “What are plans for?” by Philip E. Agre and David Chapman, Robotics and Autonomous Systems, Volume 6, Issues 1–2, June 1990, pp. 17–34.
Of course, this chapter also explains software architecting in extremely abstract, general terms. It is not trying to teach you how to do it, though. The books aren’t altogether worthless; if you do this sort of work, you should probably read some. They might serve as checklists of factors you need to take into account, or articulate points you may have overlooked.
Hacker News contributor “Geminidog” makes this point forcefully in a series of comments starting at https://news.ycombinator.com/item?id=25553288. These form a wonderful case study in dogmatic ideological rationalism.
See for example Ryan Harter’s “Getting credit for invisible work at the Staff+ level” on the LeadDev web site, and Tanya Reilly’s “Being Glue” on her No Idea blog, discussed earlier.
Notably, software is much more formal than mathematics, once you are past the grade school level. Mathematics involves some formal equations, but much of it is done in vague human language. Mathematicians believe they could, in principle, make their vague proofs fully formal, but they almost never do so. When they try, they usually find minor errors. How mathematicians are able to produce reasonably accurate proofs without formal reasoning is mysterious; we’ll come back to that in the chapter on meta-rationality in mathematics in Part Five.