Re: philosophy, I would find something exploring the boundary of what you consider to be in-category for philosophy useful. Some central examples, some boundary examples, some things others often think are philosophy but aren't, some things that people think aren't philosophy but are...my current conclusion is that you're using the word in a way I really don't understand but that I'd probably agree with you if I understood what cluster you were gesturing at.
Thanks! FWIW, my guess is that I’m using it in an ordinary way that doesn’t diverge from yours or from typical usage. When/if I write something more, I’d be interested to hear whether that’s accurate!
There are some common ideas that are unambiguously wrong, like “philosophy is the field that studies anything not studied in another academic department.” Counterexample: bicycle maintenance is not studied in another academic department and is not philosophy. This overly-broad, wrong idea about what philosophy includes is one reason people want to say I do it.
RE philosophy being bad: Nishijima Roshi once described enlightenment as the experience of "dropping away of all philosophical problems". When I look at my own practice, it is usually obvious to me when I'm doing philosophy and when I'm not. Noticing that and letting it go is useful. Defining what philosophy *is* may be intellectually interesting but more useful is learning to recognize experientially what doing philosophy feels like. (And then not doing it.) Typically, some kind of confused stance is involved. Broadly it feels like taking your thoughts (typically expressed in discursive, representational language) as more "real" than experience or actions or "what's right there in front of you".
I find this to be sufficient in most situations. Not sure if pinning down a definition will add much more to it. Trying to pin down definitions is one of the fatal flaws of philosophy, after all!
Still, if you ever do finish that essay, I'd love to read it! You've built up too much anticipation over the years. ;)
Thank you! This, so much.
This might be one of my favorite posts of yours. Specifically the parts about pursuing a 70% failure rate, and the harms of philosophy.
Resonates a lot, especially the notion of putting a lot of effort into writing on important things, and it constantly not coming together in the right ways (or on time, etc.). 😮💨
One of my friends coincidentally happened to post a similar piece today on how philosophy can do a lot of damage: https://drmonzo.substack.com/p/the-danger-of-philosophy-on-goodness
Thank you, I liked Dr Monzo's post a lot! I started writing a new-from-scratch version of "why philosophy is bad" yesterday, before I had looked at his post. There's even more similarity than you might think!
"You can see all my Notes here, but you’d need to remember to visit that URL regularly, and who would do that?"
I actually have your notes on my Bookmarks Toolbar, but possibly not many people would do that.
Wow. Flattered!
There are degrees of success in (for example) a DARPA project …
- You got a paper about it published in a top ranking journal (entry level success)
- Some commercial organisation thinks it was interesting enough that they’d spend some of their own money investigating it further
- Some commercial organisation has, at least, a prototype
- Some commercial organisation has an actual product
As you go down the list, the probability goes down and the amount of $ spent increases alarmingly.
DARPA would be giving you displeased looks if you didn’t even write a paper
They’d probably think they were doing well if 10% of things they funded got commercial interest (while I do have some actual numbers for how many $ someone needs to spend for DARPA to believe the interest is real, I will avoid giving financial details here; I will leave folks to just imagine it)
The rare, low-probability “and now everyone has it in their mobile phone” (or wherever) outcome is the kind of gain that justifies funding stuff in the first place
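To make the shape of that funnel concrete, here is a toy expected-value sketch in Python. The only number taken from the text above is the guessed 10% rate of commercial interest; every other probability and dollar figure is invented purely for illustration, since the real financials are deliberately withheld:

```python
# Toy model of a research funding portfolio. Only the 10% rate of
# commercial interest comes from the comment above; all the other
# probabilities and payoffs are made up purely for illustration.

stages = [
    # (stage, probability a funded project reaches it, payoff in $M)
    ("paper published",     0.90,      0.0),  # expected baseline, no payoff
    ("commercial interest", 0.10,      1.0),  # hypothetical follow-on value
    ("prototype",           0.02,     10.0),  # hypothetical
    ("actual product",      0.005,   100.0),  # hypothetical
    ("in every phone",      0.0005, 10000.0), # the rare blockbuster
]

def expected_value(stages):
    """Sum of probability-weighted payoffs across all stages."""
    return sum(p * payoff for _name, p, payoff in stages)

print(f"Expected value per funded project: ${expected_value(stages):.2f}M")
```

Even with these made-up numbers, the final low-probability stage contributes $5M of the roughly $5.8M total: the blockbuster tail dominates, which is exactly the point about the rare gain justifying the funding.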
As I try to think of a recent micro-scale failure…
Getting a large language model to pretend it’s a Linux system (that part works) and then hallucinate the Linux system booting another Linux system in a hypervisor … does not work.
By this, I don’t mean “it’s impossible”; I mean “I couldn’t make it work”. In this area, things have a tendency to suddenly become possible with the next LLM release, or just better prompting.
So after a quick low cost feasibility study, the paper involving an AI hallucinating an Inception-style stack of hypervisors is not going to be written.
I will leave to readers’ imagination what far-out shenanigans were going to happen in that paper.
“Dear AI. Please write me a provably correct operating system and prove its correctness in an automated theorem prover. Thanks.” Not just yet.
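For anyone curious about the setup, the first-level trick is usually done with a prompt along these lines (a hypothetical reconstruction for illustration; the actual prompt used isn't given above, and the qemu invocation is just one guess at how the nested boot might be requested):

```
I want you to act as a Linux terminal. I will type commands, and you
will reply with exactly what the terminal would display, inside a
single code block, with no explanations.

My first commands:

uname -a
qemu-system-x86_64 -m 2048 -hda nested.img
```

The plain shell simulation tends to work; the failure mode described above appears at the nested step, where the model has to keep the inner machine's state separate from the outer one.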
I read your note on philosophy, and was hoping you'd write more, even if it's not complete or polished.
Thanks! I wrote six thousand words last week, which was a quarter of what an outline said I was supposed to do, and backed out of the plan.
However, I'm now thinking it should be possible to write a much shorter thing that states the main points without any support, and maybe that's worthwhile. The lack of support means it would be confusing (and probably upsetting) for many readers; but many others would understand, which might be helpful.
For what it's worth, I have read enough philosophy that I should be capable of understanding your main points, but I am not a philosopher, and don't take analytic philosophy seriously enough to get upset. Actually, that touches upon the gist of why I'm interested. I recently followed a research topic down a rabbit hole into a particular domain of philosophy (where I might have stumbled across a few of those amateur philosophers you were describing) and came back thinking most of what I'd just read was personal intuition hiding behind philosophical jargon.
I actually enjoy your streams of consciousness, especially the ones you post with Charlie.
If you don't want to stream of consciousness here, I wonder if a compromise is to actually have two substacks. One that's your "scratchpad" where you dump stuff every week. And another one that's more "official".
Also, for the super long streams of consciousness, I sometimes use an LLM to summarize and triage them for me.
While it might be churlish to reject your previous endorsement, if you find it rational to vote against the worse choice, why should a previous endorsement override that? There is, after all, a lot at stake.
This is a sensible response, thanks! I meant the bit about Zvi as a joke (mostly). I don't think it would be useful for me to issue a genuine endorsement, but do intend to vote for the overall-least-bad candidates, on the basis of policy preferences rather than tribal identification.
Re: Higgs boson … reading (if I recall correctly) Steven Weinberg, I got the impression he had painted himself into a corner philosophically.
If all you wanted was a theory that gave approximately the right predictions at energy scales you have a practical reason to care about, you have that. Fine.
But if you say you want to know what the universe is *really like* … pushing to higher energy levels is not necessarily going to help you, because there is always the nagging doubt that at some higher energy scale beyond your current experiments, you might see behaviour that indicates no, the universe is not how you thought it was.
For all we know, the 26-dimensional spacetime of bosonic string theory is real, but the missing dimensions are squished too small for us to observe them.
(An irrelevant aside: when a large language model started using Calabi-Yau manifolds as a metaphor, I went and looked it up, and I was like … oh, I kind of assumed those extra dimensions formed a torus, but, yeah, of course, that’s not the only possible topology if you have more than 2 extra dimensions.)