9 Comments

Wow, that Notes blowup with Scott Alexander was super interesting. I do feel there is a resemblance between rationalism and fundamentalist religion. That may be why quite a few rationalists come from fundamentalist backgrounds, I think. Same mindset, different system.

That said, this claim was very interesting: "[rationalism is] exactly about how to integrate Kegan 5 meta-reasoning with Kegan 4 modelability." This marks the first time I ever saw a rationalist mention Kegan, so it's news to me, but I think I see what he means; there might be something to it (aren't superforecasters basically doing that?). What do you think?

Author · Apr 3 · edited Apr 3

Hi Carlos, thanks for the comment!

> This marks the first time I ever saw a rationalist mention Kegan

Some parts of the Berkeley subculture reference him often. Their understanding of his work is ... quite different from mine.

> "[rationalism is] exactly about how to integrate Kegan 5 meta-reasoning with Kegan 4 modelability."

I forget, who said this, where? I vaguely remember it but can't locate it. It doesn't sound right, but maybe it makes sense in context. It would depend in part on what they think "rationalism" means.

I don't understand what went wrong with Scott, but differing understandings of what "rationalism" means were certainly a central part of it.

> aren't superforecasters basically doing that?

I don't know much about what superforecasters do, or how. My impression is that it's basically "choose a reference class with a known base rate, decide what features of the specific instance differ from the typical member of the reference class and assign magnitudes to those differences (with reference to other similar differences if possible)."

Since there's no definite method for choosing the reference class or deciding what differences are relevant, yes, these are meta-rational operations.
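To make that description concrete, here is a minimal Python sketch of the base-rate-plus-adjustment idea. The question, the base rate, and the feature adjustments are all invented purely for illustration; choosing the reference class and deciding which differences matter (the meta-rational part) happens before any of this code runs.

```python
def adjusted_forecast(base_rate, adjustments):
    """Start from a reference-class base rate, then shift it by the estimated
    effect of each way this case differs from a typical member of the class."""
    p = base_rate
    for feature, delta in adjustments.items():
        p += delta  # each delta is a judgment call, ideally anchored to similar past differences
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability


# Hypothetical question: "will this startup still exist in five years?"
# The base rate and deltas below are made up for the example.
base_rate = 0.50  # survival rate of the chosen reference class of startups
adjustments = {
    "experienced founders": +0.10,
    "crowded market": -0.15,
}
print(adjusted_forecast(base_rate, adjustments))  # 0.45
```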


Oh, Scott Alexander said that in that notes thread. This was the context:

> You don't have to do this [view rationalists as an adversary]! Your beliefs are probably closer to the center of the AI alignment cluster than mine at this point. You could cooperate with any of a dozen organizations working on the same problems you're concerned about! And if you could just stop taking potshots at it, you could bask in probabilistic forecasting being a sort of triumph of your ideas - it's exactly about how to integrate Kegan 5 meta-reasoning with Kegan 4 modelability.

Maybe I misread; he was talking about "probabilistic forecasting", but to be fair, LessWrong rationalism is largely about that, as I understand it.

Author · Apr 3 · edited Apr 3

Oh, thanks, I don't know how I missed that when I looked there before my first reply to you!

> You don't have to do this [view rationalists as an adversary]!

Well, I don't, as I've explained many times. He thinks "rationalism" means his tribe, and I keep saying no, it doesn't, and somehow that doesn't penetrate. I don't know what to do about this.

My best guess is that the Berkeley tribe feel generally attacked from all corners, and he's assimilating me to the general vague sense of attack, and therefore attributing many specific statements to me that I didn't say at all, along with generalized hostility that I don't feel at all.


It might be worthwhile writing a post where you specify what you think about rationalism, and also address whether you really thought AI X-risk was completely implausible until recently. As an olive branch. A surprising showing from Scott at any rate, but he does have this side to him; he did write Radicalizing the Romanceless.

Author · Apr 5 · edited Apr 5

Thanks! I do appreciate the suggestion.

> specify what you think about rationalism

Well, I've done this many, many times, both informally on Twitter and formally in the meta-rationality book and in other writing. It doesn't seem to help. Unless I can do it differently in a way that *does* help, it doesn't seem worth repeating.

The LW people think they own the word "rationalism," and they just don't. No one outside the tech industry has ever heard of them, or encountered their non-standard use of the word.

For anyone outside, saying "I don't mean this other thing you've never heard of when I say 'rationalism'" is OK as an occasional aside, but repeating it every time I mention one of my main topics would be cumbersome and off-putting.

> whether you really thought AI X-risk was completely implausible until recently

That's interesting... I honestly don't remember. I certainly did think there were serious risks, which is part of why I left the field (as I relate in the book). X-risk wasn't high on my list, and probably still isn't? I'm not sure how to think about this, because there's no meaningful way to quantify it.

I definitely was taken by surprise by GPT-3.5-generation systems. I underestimated that research direction. I doubt GPT-7 will kill everyone, but I don't think anyone can rule it out.

Apr 1 · edited Apr 1 · Liked by David Chapman

> What is your opinion about the future of AI? How confident are you in it? Why?

I think it'll go interesting places by adding new modalities (sight, sound, smell, etc.). I don't know if it will become more "useful" or "generalized".

There are many philosophical issues with the AI doom people. For instance, there seem to be a lot of unstated assumptions: that in their scenario there is some one entity called an "AI" with a single will, that it is capable of doing things as soon as it forms the intention to do them, that it never suffers permanent negative consequences from failing at anything, and that it's somehow able to acquire physical resources despite not having any money. If you've ever had ADHD or been poor, I think you can appreciate the problems there.

It also seems like a lot of it relies on the Berkeley people's old AI theories, which assume something called an "AI" would be created using expert systems, even though LLMs don't behave anything like that.

> He is a writer I admire and respect greatly, and who took personal offense at my Note in some way I don’t understand.

That's sad. I mean, from being online I've long been vaguely aware of the Berkeley people (and have never gone and read any of their haters or anything); I thought it was good for them that they clearly weren't actually rationalists because they were empiricists, but that they ironically didn't seem to have much self-awareness and were being psychologically trapped by calling themselves that, and maybe should've noticed we already invented logical positivism and it didn't work the first time.

Their ability to constantly write 30,000-word essays certainly gets them around, but I think they're not able to win over that many normal-er people. I'd already ignored them on the reasonable pre-rational principle that they were clearly in some kind of math-themed sex cult, which is like the official hobby of people from Berkeley, and you shouldn't join cults unless you really want to.

Mr ACX is so popular he's grown beyond that, but it's always seemed like his main life principle is that he's friends with all kinds of weird and sometimes bad people because they're nice to him in his comment section, and you're being mean if you don't also want to be friends with them.

Mar 31 · Liked by David Chapman

I was reminded of a term Kahneman talked about (https://youtu.be/sW5sMgGo7dw): adversarial collaboration, where people can debate and no one changes their minds because of their values… Sowell's book on constrained and unconstrained visions describes the notions behind our AI future, and either way no one knows. Thanks.

Author

Yes, the Forecasting Research Institute's study was explicitly described as an adversarial collaboration!
