Discussion about this post

Carlos

Wow, that Notes blowup with Scott Alexander was super interesting. I do feel there's a resemblance between rationalism and fundamentalist religion, which may be why quite a few rationalists come from fundamentalist backgrounds: same mindset, different system.

That said, this claim was very interesting: "[rationalism is] exactly about how to integrate Kegan 5 meta-reasoning with Kegan 4 modelability." This is the first time I've ever seen a rationalist mention Kegan, so it's news to me, but I think I see what he means; there might be something to it (aren't superforecasters basically doing that?). What do you think?

Alex S

> What is your opinion about the future of AI? How confident are you in it? Why?

I think it'll go interesting places by adding new modalities (sight, sound, smell, etc.). I don't know whether it will become more "useful" or "generalized".

There are many philosophical issues with the AI-doom people. For instance, their scenario seems to rest on a lot of unstated assumptions: that there is some single entity called an "AI" with a single will, that it is capable of doing things as soon as it forms the intention to do them, that it never suffers permanent negative consequences from failing at anything, and that it's somehow able to acquire physical resources despite not having any money. If you've ever had ADHD or been poor, I think you can appreciate the problems there.

It also seems like a lot of it relies on the Berkeley people's old AI theories, which assume something called an "AI" would be created using expert systems, even though LLMs don't behave anything like that.

> He is a writer I admire and respect greatly, and who took personal offense at my Note in some way I don’t understand.

That's sad. I mean, from being online I've long been vaguely aware of the Berkeley people (and have never gone and read any of their haters or anything). I thought it was good for them that they clearly weren't actually rationalists but empiricists; ironically, though, they didn't seem to have much self-awareness and were psychologically trapping themselves by calling themselves that. They maybe should've noticed we already invented logical positivism, and it didn't work the first time.

Their ability to constantly write 30,000-word essays certainly gets them around, but I don't think they're able to win over that many normal-er people. I'd already ignored them on the reasonable pre-rational principle that they were clearly in some kind of math-themed sex cult, which is like the official hobby of people from Berkeley, and you shouldn't join cults unless you really want to.

Mr ACX is so popular that he's grown beyond that, but it's always seemed like his main life principle is that he's friends with all kinds of weird and sometimes bad people because they're nice to him in his comment section, and that you're being mean if you don't also want to be friends with them.
