I have been fascinated by the concept of meta-rationality ever since I came across it a year or two ago. It has been an eye opener in many ways.
Some points that occurred to me:
- It is true that in a typical organization, it is the upper level people who are expected to do most of the meta-rational work. So I guess therein lies the key to making the concept more visible and acknowledged and even appreciated: saying that what managers do all day is known as meta-rational work immediately raises its profile.
- I suppose then the way to learn meta-rational skills is to do what business schools do, namely case studies?
- Could there be something like an MRQ (meta-rationality quotient) to parallel IQ? Though I have no idea how one would measure it.
Thanks for the comment—glad this has been interesting for you!
> what managers do all day is known as meta-rational work
Well, most of what most managers do isn't meta-rational. It's reasonable ("people stuff") or it's one of the forms of rationality taught in business school: financial optimization, quantitative market analysis, organizational process and structure, and so on.
It's true that in most organizations that do value meta-rationality, it is left to relatively senior management and to senior professionals/technical people. And it's true that some business schools do teach some meta-rational material, typically in courses on entrepreneurship, innovation management, and organizational change management. All those things are necessarily meta-rational: you are reasoning about, and acting on, rational systems (businesses, products, and services), taking purpose and context into account, and dealing with a VUCA (nebulous: https://en.wikipedia.org/wiki/VUCA) business environment.
> I suppose then the way to learn meta-rational skills is to do what business schools do, namely case studies?
That does seem to be a major component, and I'm planning to include quite a few. The penicillin one was the first.
> Could there be something like an MRQ (meta-rationality quotient) to parallel IQ?
Sort of, maybe? You could look at Bill Torbert's theory of stages of executive development. There are quantitative tests/assessment methods for that.
IQ is often thought to be innate (although that is controversial). Meta-rationality is very definitely learned, not innate. (Although possibly innate predispositions or capacities may help.)
> Meta-rationality extends reasoning to encompass nebulosity, which rationality can’t cope with. Nebulosity is uncomfortable because it makes complete certainty, understanding, and control impossible. Rationality is comforting because you can un-see nebulosity and work instead in an imaginary world that excludes it.
I suppose this also means being comfortable with the possibility (or fact) that the line between nebulosity and patterns is also sometimes nebulous.
> the line between nebulosity and patterns is also sometimes nebulous
Yes, that's right! In fact, I sometimes say that in *all* of my writing, I have only one point to make: nebulosity and pattern are intrinsically inseparable, although not the same; and things go better if you don't try to either separate them (dualism) or insist that they are identical (monism).
As a sort of exercise, it could be interesting to consider each of my writing projects and see how that theme of non-identity and non-separability of nebulosity and pattern is central to each.
I don't think it is, despite the author agreeing right there lol. It's just a really specific pattern that he's trying to describe. I'd say the issue is that you need a bunch of prerequisite understanding to be able to notice the specific distinction but that doesn't make it antimemetic, just advanced.
Well then what's the difference between calling something an antimeme and calling it complicated or advanced? I thought it meant something that's more deliberately or inherently obtuse or hard to grasp than a typical complicated thing. I looked online and the definitions aren't that coherent imo (is the term antimeme an antimeme?)
Oh, antimemes are such an interesting idea. I read about them somewhere but what made me grasp the full implications of taking the idea seriously is "There Is No Antimemetics Division".
An antimeme is an idea with self-censoring properties; an idea which, by its intrinsic nature, discourages or prevents people from spreading it.
Antimemes are real. Think of any piece of information which you wouldn't share with anybody, like passwords, taboos and dirty secrets. Or any piece of information which would be difficult to share even if you tried: complex equations, very boring passages of text, large blocks of random numbers, and dreams...
But anomalous antimemes are another matter entirely. How do you contain something you can't record or remember? How do you fight a war against an enemy with effortless, perfect camouflage, when you can never even know that you're at war?
Welcome to the Antimemetics Division.
No, this is not your first day.
*Chef's kiss*
So yeah, I think part of the difficulty in explaining meta-rationality is that it's an antimeme. That adds to the challenge but I think it's cool :)
Honestly, I don't think it is one, because it's got a name. You can't spread the definition around, but spreading the name around and pointing back here works.
- Rational Reconstruction: While I've used this approach before, I've found it limits deeper understanding, closing off avenues rather than opening them. It just reinforces the rationalist narrative.
- I've always loved roguelikes and think they are a great way to learn meta-rationality.
- Meta-rational work has often led to success in my career, but articulating its value in settings like interviews is a real challenge.
- In what hobbies, careers, or skills is meta-rationality particularly beneficial?
Thought provoking write-up as always, David! Here's a bit of feedback that I'd categorize as "would like to see more of"...
In this section, three key skills jump out at me: intuition, creativity, and reflection. The first two seem to be in tension with your description of meta-rationality. On the one hand you say,
'[Creative intuition and genius] are *thought-stopping cliches*, which function to prevent inquiry into what specifically was done and how and why. The [sic?] success is regarded as inherently inexplicable, bordering on mystical. That’s a way to avoid having to understand the role, importance, and details of a non-rational activity. Failure to understand it makes future breakthroughs less likely.'
But on the other hand you acknowledge,
'Meta-rationality is valuable in all rational work, but like medicine, it is more craft than science. Crafts are not formally rational, but they rely on explicable reasoning in part. *A craft might be “intuitive” and “creative,” but this is not anti-rational or especially mysterious.* We don’t have a scientific theory of what goes on in a theatrical costume designer’s brain, but no one thinks it’s alien or magical.'
I agree that referring to 'genius' isn't of much use in understanding, but I think investigating intuition and creativity can overcome their tendency to serve as thought-stopping cliches. We can reflect on what seems to be happening when intuition and creativity seem to be in play in meta-rationality. I think that is what Schon is doing in part in 'The Reflective Practitioner'.
For example, a lot of reflection has been done on methods of enhancing intuition and creativity, eg, in design research and its popularized offshoot 'design thinking'. In fact, that's what I spent three years at IBM trying to get unreflectively rationalist IBM product managers to embrace. I'm hoping that future book installments will explore the intuitive, creative, and reflective aspects of craft (especially the craft of software architecture).
Whoops, sorry! Yes, more-or-less. This is due to there being three different versions now, the one on metarationality.com, the one on Substack, and the source version (a markdown file on my laptops). They get out of sync.
Also, I'm writing Part Four before Part Three, so links back to earlier parts of the text are links forward to future work.
I have tried looking for it on your meaningness.com website, but do you have a schematic overview comparing reasonableness vs. rationality vs. meta-rationality, similar to this one?
Thanks for the great write-up as always!
I’ve been thinking a lot about why metarationality and glue work don’t get more credit.
My hypothesis is that it is hard to scale and replicate, which is the easiest (only?) way that rational systems know how to deliver value. Metarationality is most present in a lot of early stage work, and the connection to delivery is not direct enough for people to grasp unless they seriously take time. If you only do metarationality work, nothing will get produced. If you only do rational work, you very likely will produce garbage, but at least you’re producing something.
And then separately, here’s a physical analog for glue work: polymer additives to solutions usually do all the interesting work that gives complex fluids their properties, oftentimes providing the structure that produces complex viscoelastic behavior. One could argue that they are the most important thing, qualitatively! The glue, so to speak! However, they are still viewed ultimately as a cost center, and when trying to scale, people spend all their time trying to figure out how to get by with as few polymer additives as possible.
Perhaps there are similar parallels to why companies cut R&D.
Anyway, thanks again for the great article. This is a hard topic to write about. On one hand I am really excited and want you to get to the meat and case studies faster! But on another hand there is a lot of prerequisite material that you have to cover, and continuously remind people, or else it leaves their limited context window!
Thank you for an insightful comment!
This seems right, and well-put:
> Metarationality is most present in a lot of early stage work, and the connection to delivery is not direct enough for people to grasp unless they seriously take time. If you only do metarationality work, nothing will get produced. If you only do rational work, you very likely will produce garbage, but at least you’re producing something.
I doubt this is an example of “meta-rationality”, but it is an example of when I was a little bit creative at my corporate job.
In the 1990s, I worked at an up-and-coming scientific/engineering corporation that was introducing PC technology into the test and measurement field. I was the manager of their branch in Tokyo. PC technology was still pretty primitive, so one of their products had an interface that could not display anything but Roman characters – nothing Far Eastern. We were trying to sell it into the Japanese market.
Of course the language display problem on PCs was solved decades ago, but at the time it was a big deal, and hampering our product sales in Japan. During our week-long sales conference at the US home office, I kept hanging around the technical manager of this product, visiting his office, sitting with him at lunch, etc. Always hinting at what we needed.
Finally, by the end of the week, exasperated, he pulls me aside, and says, “Okay, whaddaya want? You want us to make the entire product, including the editor, Japanese-language compatible?”
I knew this was a no-go, so I said, “No no – just make the interface capable of displaying Far Eastern characters. No need to touch the (C language) editor itself.”
So he agreed to that limited objective. And soon we got a product much more attractive in the Japanese market.
This wasn’t meta-rationality. It was just a sales guy convincing a tech guy to devote a slender sliver of resources to improving the product for a particular market.
Of course if I hadn’t done it, nobody would have. At the time I was rather proud of what I had accomplished, I’d improved the company’s bottom line and eventually my own remuneration.
There were a couple other instances where I played a substantial role in pushing the company into greater globalization efforts, which again, paid off handsomely in the future, even if I was no longer there to reap the rewards.
In any case, meta-rationality seems like something way beyond this.
Sometimes it seems like you’re trying to capture lightning in a bottle. Every time I read you about meta-rationality, I feel like I’m being pushed further and further back into philosophical reflection.
Meta-rationality will make me more effective – but meta-rationality makes me ask, why do I want to BE more effective? Effective for what? What are the goals, which goals are worth pursuing?
Anyway, keep writing!
Great story, thank you!
Meta-rationality doesn't need to be fancy and a big deal. It is common and usually unremarkable.
Especially in software, "what do we need to build" is a meta-rational question. It is about purpose (selling in Japan) and context (customers need Japanese characters). The right answer to "what do we need to build" is always somewhat nebulous (just Japanese, or CJK more generally?). And the question is about how to apply software engineering rationality (or what to apply it to).
I'll be going into that quite specifically in the next several upcoming meta-rationality posts!
Have you been reading Ben Recht's series on Paul Meehl? He just wrote a piece on the non-duality of the context of discovery / context of justification (not how he puts it) that's super relevant! https://open.substack.com/pub/argmin/p/see-what-we-want-to-see
Thanks, yes, that's great! It was a weird coincidence that we both posted about this slightly obscure topic at almost the same time. I posted a Note about his post here: https://substack.com/@meaningness/note/c-55348683?utm_source=notes-share-action&r=1cnfx
I subscribe to and read all of Ben Recht's series on Paul Meehl.
Thanks for sharing.
I know your article is more about meta rationality so I apologise for possibly digressing from your main point
You wrote
> Every large company is a giant mess, held together by circumrational “glue people” who compensate with reasonableness for the disconnects between the many internal rational systems.
You use glue people. Which is interesting
Because the only other context where I see it used is in google parlance they (or specifically Eric Schmidt) also use the term “glue people” but in a negative way
https://surfingcomplexity.blog/2021/08/28/contempt-for-the-glue-people/ is an example
I guess both uses are referring to different things?
Initially I thought my question may be irrelevant to this. But now I feel maybe it is
This article talks about making meta-rationality visible.
Even if Google’s glue people are definitionally different from yours, is it ever possible to do meta-rational stuff that’s not useful?
If it’s possible,
then wouldn’t the first priority be to make sure you’re doing actually useful meta-rational work?
Then there’s also the problem of
doing actually useful meta-rational work that’s visible, but not deemed actually useful by the gatekeepers and management.
Thanks for this comment, it may help clarify several points for readers!
My phrase was "circumrational glue people." Circumrationality is quite different from meta-rationality. Meta-rationality may be useful in figuring out what sort of glue is required to repair gaps in rationality, but it's not generally glue-like.
The study of managers given meta-rationality training quoted one of them as saying:
> “You really start understanding all of the waste and all of the redundancy and all of the people who are employed as what I call intervention resources. The process doesn’t work, so you have to bone it up by putting people in to intervene in the process to hold it together. So it is like glue. So I would look around [the company], and I would see all these walking glue sticks, and it was just absolutely depressing and frustrating at the same time.”
There's a great post about glue work, and why you probably shouldn't do it, here: https://noidea.dog/glue
And there's an insightful thread about why managers often rightly have contempt for glue people: https://x.com/eshear/status/1514021511337754625
> is it ever possible to do meta-rational stuff that's not useful?
Yes! It's easy to do it badly, and even if you are good at it, it's easy to get it wrong, or to fail at it.
Oh my gosh! That glue piece... it's awesome!
Even as a vendor in a big MNC, I also feel like I am doing too much under-appreciated glue work.
I need to be more ruthless as a businessperson
Apple does have a few flavors of glue people (the main one is program managers aka EPMs), but tries to get more use out of them by giving them other formal responsibilities - generally communicating with specific vendors, or being in charge of telling the related exec whether or not everyone else seems like they're on schedule. As for how you tell when one is effective, I've never been able to figure this out.
I assume a reason you'd have had more in the 90s is that someone has to know who the useful people on the other teams are, and it's not always you.
> The study of managers given meta-rationality training
do you have links to this study/paper?
Yes, it's linked in the footnote. Two versions, actually: https://meaningness.substack.com/p/making-meta-rationality-available#footnote-2-144221829
Now that I see this
> As a personal strategy, I recommend mainly avoiding meta-rational work if you aren’t in a context that recognizes and values it. You may be tempted to do it anyway, because you can see that it would make your team, project, or organization more effective. However, if it’s not valued you won’t be able to get meta-rational insights adopted, and you may get actively punished for suggesting them.
> There are exceptions. If quietly doing some meta-rational work makes you better at deploying rationality, so you can Solve some difficult Problem, you will get rewarded for the rationality. Here is where rational reconstruction comes in! If you have to justify your work to people who see rational Problem Solving as the only worthwhile form of thinking and acting, you have to present your work as if you did it rationally. How far your fiction must diverge from reality, and how feasible it is to do this at all, varies from case to case.
I guess it partially answers my question.
One comment I have for this chapter is that I think it could be shorter,
or at least broken up into smaller articles.
For example, rational reconstruction could be its own chapter,
and you could just cite it in this one.
Thanks, I think you are right that this piece was too long for a Substack post. It's somewhat unnatural to break up what is actually a book into chunks that fit into a newsletter / blog, so it's hard to get right, but this was more than 6000 words, which is too much. I'm not sure what is optimal, but it's probably in the 2000-4000 word range, with maybe 5000 as a hard limit?
I've read 10k+ word blog posts. As long as the ideas flow in an understandable manner I think it's fine 👍🏼
As a reader you shift your way of thinking to "I am going to read several dozen pages of information"
And then it's fine.
Knowing upfront how long the article is, is very useful though.
I think it's okay to make longer posts and not worry too much about what's optimal. Just be aware that we might skim.
> I'm not sure what is optimal,
me neither
> but it's probably in the 2000-4000 word range, with maybe 5000 as a hard limit?
maybe as a general rule have a hard limit somewhere
This piece felt like you had momentum, with all of it seeming precious and hard to remove.
So I only suggested moving out one small chunk to its own article and referencing it here.
Oddly, when I started working at Google I once talked to a guy who described himself as a sort of glue person, helping teams interact. (Probably not in those terms - it's been a long time.) I think he was a manager of some sort, but I assume he had some technical background too.
Someday I should tell you the story of how I tried to get Quora to revise how they ranked posts using an explanation that involved metarationality. I failed spectacularly and likely would have done better with rational reconstruction.
The thing about having an emotional shock when opening up to meta-rationality doesn't resonate with me at all.
Maybe it's because I've already had plenty of "crises of faith" in beliefs that I held dear, people I saw as authorities-to-look-up-to and entire ontological/conceptual frameworks that at some point seemed pretty obvious and solid.
Or maybe it's because I already was pre-adapted to meta-rationality. The more I read of the first two chapters of the Eggplant, the more I thought "I thought there was no literature aiming more or less directly at the thing I want to deconfuse myself about but this guy seems to get it".
That's great! I'm glad it has gone easily for you! Definitely how difficult it is varies from person to person.
> It’s especially required to leave out all your meta-rational and circumrational work: how you came up with your hypothesis, how you worked out the experimental design, and all the many false starts, experimental and theoretical breakdowns, and your repairs in response.
Leaving this out is very dangerous when you're in a field that (pretends to) care about statistical significance, because you can publish noise as if it's true. Thus p-hacking, publication bias and the replication crisis.
Also, along with Agile, the other famous case of this in programming is "design patterns"; using way too many of them badly used to be associated with enterprise Java programmers, though maybe not so much anymore.
Rereading the footnotes
> It is also taught explicitly in business schools, and in some graduate-level social science classes. In many fields, eccentric individual professors may include it. We’ll look into these examples in Part Five
I'm starting a part-time master's in August.
There's a module I want to take called digital transformation. I still have some freedom in deciding the electives ahead of time.
Which I believe should be somewhat related to organisational change
If you can kindly provide the names of the concepts under which meta-rationality is explicitly taught, I'll be most grateful.
The names might help me decide which electives to take
Thank you
Yes, you could look for courses on entrepreneurship, innovation management, and organizational change management. There's also buzz-phrases like "transformational leadership," "design thinking," and "VUCA management." All these tend to get routinized and dumbed down/watered down, but if you find the early, strong texts for each of those trendy terms, there's generally substantial meta-rational insight.
Frankly I find the systemized (but not yet watered down) versions of these highly attractive.
But as I push for more systems and more certainty in the literature, I end up getting the watered down version
Thanks for the tip on digging for the early versions
Reminds me of a method for doing research in legal circles called Shepardizing.
I guess I ought to apply that more broadly
I have been fascinated by the concept of meta-rationality ever since I came across it a year or two ago. It has been an eye opener in many ways.
Some points that occurred to me:
- It is true that in a typical organization, it is the upper level people who are expected to do most of the meta-rational work. So I guess therein lies the key to making the concept more visible and acknowledged and even appreciated: saying that what managers do all day is known as meta-rational work immediately raises its profile.
- I suppose then the way to learn meta-rational skills is to do what business schools do, namely case studies?
- Could there be something like an MRQ (meta-rationality quotient) to parallel IQ? Though I have no idea how one would measure it.
Thanks for the comment—glad this has been interesting for you!
> what managers do all day is known as meta-rational work
Well, most of what most managers do isn't meta-rational. It's reasonable ("people stuff") or it's one of the forms of rationality taught in business school: financial optimization, quantitative market analysis, organizational process and structure, and so on.
It's true that in most organizations that do value meta-rationality, it is left to relatively senior management and to senior professionals/technical people. And it's true that some business schools do teach some meta-rational material, typically in courses on entrepreneurship, innovation management, and organizational change management. All those things are necessarily meta-rational: you are reasoning about, and acting on, rational systems (businesses, products, and services), taking purpose and context into account, and dealing with a VUCA (nebulous: https://en.wikipedia.org/wiki/VUCA) business environment.
> I suppose then the way to learn meta-rational skills is to do what business schools do, namely case studies?
That does seem to be a major component, and I'm planning to include quite a few. The penicillin one was the first.
> Could there be something like an MRQ (meta-rationality quotient) to parallel IQ?
Sort of, maybe? You could look at Bill Torbert's theory of stages of executive development. There are quantitative tests/assessment methods for that.
IQ is often thought to be innate (although that is controversial). Meta-rationality is very definitely learned, not innate. (Although possibly innate predispositions or capacities may help.)
> Meta-rationality extends reasoning to encompass nebulosity, which rationality can’t cope with. Nebulosity is uncomfortable because it makes complete certainty, understanding, and control impossible. Rationality is comforting because you can un-see nebulosity and work instead in an imaginary world that excludes it.
I suppose this also means being comfortable with the possibility (or fact) that the line between nebulosity and patterns is also sometimes nebulous
> the line between nebulosity and patterns is also sometimes nebulous
Yes, that's right! In fact, I sometimes say that in *all* of my writing, I have only one point to make: nebulosity and pattern are intrinsically inseparable, although not the same; and things go better if you don't try to either separate them (dualism) or insist that they are identical (monism).
As a sort of exercise, it could be interesting to consider each of my writing projects and see how that theme of non-identity and non-separability of nebulosity and pattern is central to each.
So meta-rationality is an antimeme. 🤯
I don't think it is, despite the author agreeing right there lol. It's just a really specific pattern that he's trying to describe. I'd say the issue is that you need a bunch of prerequisite understanding to be able to notice the specific distinction but that doesn't make it antimemetic, just advanced.
Something advanced *is* antimemetic in nature.
It's harder for the idea to spread because, as you said, you need a certain amount of other knowledge to grasp it.
And until you have that knowledge the idea is essentially invisible.
A complex math equation is an antimeme for the general population.
But it may be a viral meme among mathematicians.
The people among whom the idea has to spread determine whether it's an antimeme or not.
Well then what's the difference between calling something an antimeme and calling it complicated or advanced? I thought it meant something that's more deliberately or inherently obtuse or hard to grasp than a typical complicated thing. I looked online and the definitions aren't that coherent imo (is the term antimeme an antimeme?)
Sorry for the late reply!
All complicated things are slightly antimemetic in nature. But there are degrees.
Imagine a piece of information you'd forget the moment you heard/saw/read it.
That's the ultimate antimeme!
If such antimemes existed, we wouldn't know, of course.
So in our day to day lives we deal with the slightly antimemetic kind.
I agree though! It's hard to get a concrete definition. If you want to read more about them "There is No Antimemetics Division" is such a great read!
Took me a minute to figure out what that meant! Yes, that seems accurate.
Oh, antimemes are such an interesting idea. I read about them somewhere but what made me grasp the full implications of taking the idea seriously is "There Is No Antimemetics Division".
https://qntm.org/scp
It's so good. This is the synopsis:
An antimeme is an idea with self-censoring properties; an idea which, by its intrinsic nature, discourages or prevents people from spreading it.
Antimemes are real. Think of any piece of information which you wouldn't share with anybody, like passwords, taboos and dirty secrets. Or any piece of information which would be difficult to share even if you tried: complex equations, very boring passages of text, large blocks of random numbers, and dreams...
But anomalous antimemes are another matter entirely. How do you contain something you can't record or remember? How do you fight a war against an enemy with effortless, perfect camouflage, when you can never even know that you're at war?
Welcome to the Antimemetics Division.
No, this is not your first day.
*Chef's kiss*
So yeah, I think part of the difficulty in explaining meta-rationality is that it's an antimeme. That adds to the challenge but I think it's cool :)
Honestly, I don't think it is one, because it's got a name. You can't spread the definition around, but spreading the name around and pointing back here works.
All antimemes have names. Doesn't make them more sharable though!
What an excellent read.
Thank you.
Some notes:
- Rational Reconstruction: While I've used this approach before, I've found it limits deeper understanding, closing off avenues rather than opening them. It just reinforces the rationalist narrative.
- I've always loved roguelikes and think they are a great way to learn meta-rationality.
- Meta-rational work has often led to success in my career, but articulating its value in settings like interviews is a real challenge.
- In what hobbies, careers, or skills is meta-rationality particularly beneficial?
"Optimize scaling only only when the actual thing is big enough to need it." fyi :)
Thought provoking write-up as always, David! Here's a bit of feedback that I'd categorize as "would like to see more of"...
In this section, three key skills jump out at me: intuition, creativity, and reflection. The first two seem to be in tension with your description of meta-rationality. On the one hand you say,
'[Creative intuition and genius] are *thought-stopping cliches*, which function to prevent inquiry into what specifically was done and how and why. The [sic?] success is regarded as inherently inexplicable, bordering on mystical. That’s a way to avoid having to understand the role, importance, and details of a non-rational activity. Failure to understand it makes future breakthroughs less likely.'
But on the other hand you acknowledge,
'Meta-rationality is valuable in all rational work, but like medicine, it is more craft than science. Crafts are not formally rational, but they rely on explicable reasoning in part. *A craft might be “intuitive” and “creative,” but this is not anti-rational or especially mysterious.* We don’t have a scientific theory of what goes on in a theatrical costume designer’s brain, but no one thinks it’s alien or magical.'
I agree that referring to 'genius' isn't of much use in understanding, but I think investigating intuition and creativity can overcome their tendency to serve as thought-stopping cliches. We can reflect on what seems to be happening when intuition and creativity seem to be in play in meta-rationality. I think that is what Schon is doing in part in 'The Reflective Practitioner'.
For example, a lot of reflection has been done on methods of enhancing intuition and creativity, eg, in design research and its popularized offshoot 'design thinking'. In fact, that's what I spent three years at IBM trying to get unreflectively rationalist IBM product managers to embrace. I'm hoping that future book installments will explore the intuitive, creative, and reflective aspects of craft (especially the craft of software architecture).
Quick 'typo' comment: The link ( https://metarationality.com/rote-rationality ) to the suggested reading, “Rote rationality and unreflected meta-rational choices”, is broken for me. Are you referring to this section: https://metarationality.com/rationality#geek ?
Whoops, sorry! Yes, more-or-less. This is due to there being three different versions now, the one on metarationality.com, the one on Substack, and the source version (a markdown file on my laptops). They get out of sync.
Also, I'm writing Part Four before Part Three, so links back to earlier parts of the text are links forward to future work.
I have tried looking for it in your meaningness.com website but do you have a schematic overview comparing reasonableness vs rationality vs meta rationality similar to this schematic overview?
https://meaningness.com/monism-dualism-schematic-overview
Yes, it appears repeatedly in the book, most recently in this section: https://meaningness.substack.com/i/143533037/the-meta-rational-development-of-penicillin