How not to predict the future of AI
I have a podcast! New AI stuff! Notes and how to get them!
Hello! In this News&Notes issue:
I have a podcast!
My new short post about predicting the future of AI, on a different site
How not to predict the future of AI, here in this post
A review of my book Better without AI, with extrapolation
A selection of short Notes I’ve posted recently
This was meant to just be a roundup of news, but somehow it sprouted an additional short, substantive essay about AI risk prediction in the middle.
I did a poll back in February about how often you’d like to get informal update posts. Opinions were divided. Provisionally, I intend to post one monthly.
New: a podcast!
Often I have ideas that seem best communicated informally, by me explaining them to you, or in conversation with someone else. Now I’ve created a podcast for that!
You should have received the first episode yesterday—in an email or in your Substack app. It’s “Learning Kindness Skills,” a conversation with my spouse.
If you didn’t see the email, check your spam/junk folder and Gmail’s Promotions tab, and move it to your main inbox.
In the app, it seems to have gotten buried; I don’t understand why. You can click on the link “Learning Kindness Skills” to see it.
You can subscribe to future episodes in your podcast app of choice, if you’d rather. Go to the first episode’s web cover page in your browser. You should see a right-hand sidebar with various buttons under “LISTEN ON”; clicking one subscribes you in that app. The “RSS Feed” option should work if the others aren’t applicable.
If you have trouble, let me know and I will try to debug.
Opinions about AI are based on feels alone
Many people express forceful opinions about how rapidly AI research will progress, and whether AI’s effects will be good or bad. Often these take the form of specific numerical probabilities or timelines.
Opinions are extremely divergent:
“AI will almost certainly kill every human being within five years!”
“It’s physically impossible for AI to pose any threat ever, because it can never do what humans do!”
“AI will usher in utopia, so we must put all human effort into accelerating it, damn the torpedoes!”
These opinions come both from people who know zero about AI, and shouldn’t have any opinion at all; and also from experts in the field, who you’d think would have converged on a rough consensus based on evidence.
When challenged, though, these strong opinions turn out to be based on nothing, or nearly nothing. There’s almost no meaningful evidence or reasoning involved.
It seems to be pure feels! I find this extremely weird, and in need of explanation. Where do these feels come from, and why do they seem so compelling?
I have almost no opinions about future AI. I don’t think it’s possible to make meaningful predictions, because we don’t know how current AI systems work, or why.
In a new post on the Better without AI web site, I discuss a recent experiment that explored these issues. The Forecasting Research Institute convened two groups of experts: ones who think an AI catastrophe in this century is essentially impossible, and ones who think it’s likely enough for grave concern. The experiment had them engage in extensive conversation to try to figure out why they disagree; and to see whether that would adjust their beliefs toward a middle ground.
It didn’t. The experiment found both groups were almost perfectly immune to evidence and arguments against their beliefs. The experimenters also tested hypotheses about why. They were able to rule out three rational explanations, leaving “personality factors” (feels?) as the most likely remaining explanation.
What is your opinion about the future of AI? How confident are you in it? Why?
How not to predict the future
After writing my post, I discovered Molly Hickman’s excellent “How Not To Predict The Future,” written by a Forecasting Research Institute staff member who participated in the experiment. Her subtitle is “Good forecasting thrives on a delicate balance of math, expertise, and… vibes.”
She explains that AI risk estimates using Bayesian probability are all over the map, depending on your initial framing of the problem. In particular, she contrasts Carlsmith’s calculations with alternatives that yield very different numbers.

My takeaway is that there are some situations in which Bayesian reasoning is a valuable tool, and others in which it is worse than useless, because it gives you excessive confidence in the value of your opinion. AI risk is one of the latter.
That point was also made by Landau-Taylor in “Probability Is Not A Substitute For Reasoning.” I recommended his post in a recent Substack Note. That resulted in an unexpectedly rancorous exchange, in its comment thread, with Scott Alexander, a writer I admire and respect greatly, who took personal offense at my Note in some way I don’t understand. I’m sad and baffled.

Figuring out whether a particular reasoning method is useful in a particular situation is a typical meta-rational task, which requires a different sort of reasoning than formal rationality. That may be what Hickman meant by “vibes”! However, “vibes” suggests it’s emotional or aesthetic—like “feels.” And that it’s vague; maybe even pretty arbitrary. The point of my work on meta-rationality is that it can be much more specific and precise than that.
In this case, we can explain in detail why Bayes is unhelpful. (Maybe I should write a longer piece about that than my brief discussion here?)
At the most abstract level, the problem is that Bayes is an epistemological tool, but ontology is prior to epistemology. When the ontology is in flux, it’s premature to apply rational epistemology. You don’t yet have a coherent idea of what you are asking. The first chapter of Better without AI explains the many different conceptions of Scary AI; which one you choose might radically affect your risk estimates.
More concretely, you estimate the probability of an event using Bayesian methods by assigning probabilities to a series of enabling conditions. Hickman explains that which conditions you consider significant—an ontological choice—radically changes your estimate. Using her own best probability estimates for various enabling conditions yielded a 16.1% chance of existential catastrophe in Carlsmith’s ontology, but only 1.7% in a different one that she apparently finds equally plausible.
The problem is not one of estimating individual probabilities, it’s figuring out what probabilities you need to estimate. “Shut up and multiply” doesn’t work if you don’t know which things you should be multiplying.
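To make that concrete, here’s a minimal sketch in Python. The condition names and probabilities are invented for illustration—they are not Hickman’s or Carlsmith’s actual figures—but the structure is the familiar one: pick a list of enabling conditions, estimate each, and multiply.

```python
# Illustrative only: the conditions and numbers below are made up,
# not Hickman's or Carlsmith's actual estimates.
from math import prod

# Ontology A: one way of decomposing "AI catastrophe this century"
# into enabling conditions.
ontology_a = {
    "powerful agentic AI is built": 0.8,
    "it is deployed widely": 0.7,
    "its goals are misaligned": 0.5,
    "misalignment goes uncorrected": 0.4,
    "it gains decisive power": 0.3,
}

# Ontology B: a different, equally plausible-sounding decomposition
# of the same event.
ontology_b = {
    "AI exceeds human capability across the board": 0.4,
    "it acquires stable long-term goals": 0.3,
    "those goals conflict with human survival": 0.3,
    "no human or AI countermeasure works": 0.2,
}

# "Shut up and multiply": treat each condition as (roughly) conditional
# on all the previous ones, and take the product.
p_a = prod(ontology_a.values())
p_b = prod(ontology_b.values())

print(f"Ontology A: {p_a:.1%} chance of catastrophe")  # ~3.4%
print(f"Ontology B: {p_b:.1%} chance of catastrophe")  # ~0.7%
```

Every individual number in both lists could seem reasonable on its own, yet the bottom lines differ severalfold. The disagreement isn’t about estimating any single probability; it comes from the choice of which conditions to list in the first place—the ontological choice that the Bayesian machinery simply takes as given.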
Hickman recommends “going with your gut” after finding contradictory Bayesian results, and making up some number that “feels right.” I don’t think that is good advice at all! You may fool yourself into thinking you’ve done rational work that you have not in fact done, as Landau-Taylor explains.
Or, you may come to treat “feels” as the next-best substitute for rationality when rationality doesn’t work. “Emotions are the only alternative to rationality” is a foundational conceptual error in rationalism. If you act on your feels, things are likely to go badly.
What should we do? First, we should acknowledge that we have no idea about the magnitude of catastrophic risk from AI. Second, we should acknowledge that there is some risk, which justifies taking some measures to ameliorate it. Fortunately, we can protect against some catastrophic AI risks with pragmatic measures that have negative cost, because they also protect against large non-AI risks. Getting serious about computer security is one example. I suggest several more in the third chapter of Better without AI.
AI, The Idiot Ant Queen
Collin Lysford wrote a great review of my book Better without AI, titled “AI, The Idiot Ant Queen.”
AI research has always tried to make minds, not just intelligence; and debates about possible AI futures mostly try to reason about hypothetical minds-like-ours-but-bigger. That’s not what we’ve gotten so far, and probably not what we are going to get in the future. Better without AI points out risks and opportunities that are mostly currently overlooked for that reason.
Collin’s essay starts with a solid, general review of the book. Then he uses my discussion of distributed intelligence as a jumping-off point for his own ideas about how AI may work in the future. He makes an analogy between colonial organisms, such as an ant hill, and both human and machine intelligence. We are intelligent mainly due to being parts of a society and culture. Current AI systems are powerful and unpredictable mainly due to their interactions with each other—and with human society and culture.

The biggest risk, Collin suggests, is that increasing reliance on AI will degrade our ability to participate in culture and society, and to continually re-make them to better fit changing circumstances.
It’s still a book!
I feel dumb saying this, because I’ve said it a million times; but each time, some more people do take it in:
Better without AI is an actual book, not just a web site. It’s available on Kindle and as a paperback. You may find one of those easier to read than the web version!
A selection of Notes
I’ve been using Substack Notes to express 100–500 word ideas: too long for a tweet, too short for a formal post. I’ve had fun with these, and some readers have too. There are interesting comment threads on many of them!
It’s easy to miss Notes: you don’t get email notifications for them unless you ask for that, and they can get buried in the Substack app feed.
You can see all my notes on the Notes tab of my Substack home page, on the web or in the app.
Supposedly you can get email notification for Notes by clicking on your avatar in the Substack web version, and turning on Settings > Notifications > New Notes. They’re batched, so you don’t get an email for each one, but a summary list periodically. This hasn’t worked for me, but try it and let me know if it works for you!
Here are some you may have missed. These are not all the ones I’ve posted since the last time I listed them: I’ve omitted some that are less interesting, and two that are probably NSFW. If you are reading this in Gmail, it may cut the list off partway through, in which case you can see them all on the web.
Wow, that Notes blowup with Scott Alexander was super interesting. I do feel there is a resemblance between rationalism and fundamentalist religion. Hence why quite a few rationalists come from fundamentalist backgrounds, I think. Same mindset, different system.
That said, this claim was very interesting: "[rationalism is] exactly about how to integrate Kegan 5 meta-reasoning with Kegan 4 modelability." This marks the first time I ever saw a rationalist mention Kegan, so it's news to me, but I think I see what he means, there might be something to it (aren't superforecasters basically doing that?). What do you think?
> What is your opinion about the future of AI? How confident are you in it? Why?
I think it'll go interesting places by adding new modalities (in terms of sight sound smell etc). I don't know if it will become more "useful" or "generalized".
There are many philosophical issues with the AI doom people. For instance, there seem to be a lot of unstated assumptions where in their scenario there is some one entity called an "AI" with a single will, that is capable of doing things after it gets the intention to do them, that it never suffers permanent negative consequences via failing to do anything, and that it's somehow able to acquire physical resources despite not having any money. If you've ever had ADHD or been poor I think you can appreciate the problems there.
It also seems like a lot of it relies on the Berkeley people's old AI theories, which assume something called an "AI" would be created using expert systems, even though LLMs don't behave anything like that.
> He is a writer I admire and respect greatly, and who took personal offense at my Note in some way I don’t understand.
That's sad. I mean, from being online I've long been vaguely aware of the Berkeley people (and have never gone and read any of their haters or anything); I thought it was good for them that they clearly weren't actually rationalists because they were empiricists, but that they ironically didn't seem to have much self-awareness and were being psychologically trapped by calling themselves that, and maybe should've noticed we already invented logical positivism and it didn't work the first time.
Their ability to write 30,000 word essays constantly certainly gets them around, but I think they're not able to win over that many normal-er people. I'd already ignored them on the reasonable pre-rational principle that they were clearly in some kind of math-themed sex cult, which is like the official hobby of people from Berkeley, and you shouldn't join cults unless you really want to.
Mr ACX is so popular he's grown beyond that, but it's always seemed like his main life principle is that he's friends with all kinds of weird and sometimes bad people because they're nice to him in his comment section, and you're being mean if you don't also want to be friends with them.