We need a balanced, enterprising optimism—an ambitious and clear vision for what constitutes the good in technology, coupled with an awareness of its dangers.
Thanks for writing this, Brendon! It's a great idea. I tried to articulate something similar with respect to reproductive tech earlier this year: https://www.psychologytoday.com/us/blog/sex-and-civilization/202305/techno-traditionalism
I suppose there are different versions of this idea -- for tech in general, for AI, genetics, etc. Either way, we need precisely this kind of ethical reflection on how tech can be used to promote human flourishing, rather than simply hoping that technology will solve our problems merely by being brought into existence.
Exciting vision, and a spot-on critique of the “rationalism” movement.
This is an interesting article, and I applaud the attempts to create a new paradigm on technology and human progress. I am trying to do much the same thing, but I am coming at it from a different perspective.
I believe that the study of history is far more useful than philosophy as a foundation for doing so. I am not convinced that we will be able to push philosophy far beyond what previous philosophers have already done. It is far too easy in philosophy for flights of fancy to take over and delude a person into thinking that their personal opinion is an objective view of reality.
I think that we can, however, make great strides from the study of history, because it is by definition grounded in reality. It essentially focuses on demonstrated results rather than philosophizing.
I call my perspective the "Progress-based perspective":
https://frompovertytoprogress.substack.com/p/a-manifesto-for-the-progress-based
Some fantastic ideas. Unsurprising that this has come under Tyler Cowen's gaze.
I'm curious why there is no David Deutsch reading on your list? (I did see Popper.) I think some of his ideas, particularly regarding what AI is and isn't, fallibilism, and his unpacking of optimism (among many, many other insights), would add a lot to your project.
Best wishes.
This deeply misrepresents AI safety (which you've inaccurately renamed existential pessimism). Indeed, the majority of the original AI safety folks were incredible optimists. Many were techno-utopians, and indeed still are.
Equating e/acc with EA is deeply misleading. The former is an incoherent, wild community with very little value add to the world. EA has its issues, but at least has a 15 year track record and billions of dollars deployed in a sensible way to do good.
You maliciously misrepresent the Pascalian situation of countering AI existential risks. No, the risk does not imply that anything goes. I'd guess that in 95%+ of the written material on this, no one makes the claim you do; I can't remember ever reading a single instance of it.
There is no such thing as a chief theorist in EA. Will and other thought leaders aren't trying to purify moral goodness, at least not in the sense you seem to give that phrase.
And so on. You have failed to pass an ideological Turing test for these concepts and worldviews. Given it would have been trivially easy to get someone with more knowledge than you to edit your draft, it does seem either lazy, disrespectful or malicious.
I highly appreciate this attempt to navigate towards a 'third way'! If you haven't already, definitely talk to Daniel Schmachtenberger, he's on a very similar mission: https://www.youtube.com/watch?v=8XCXvzQdcug
One critique: This essay sometimes borders on enlightened centrism, forcing itself to say every side in the debate is equally right and wrong.
One example: "EA also says that quantity of suffering matters, as if multiplying pain amplifies its moral “weight.” But if reason says proximity shouldn't matter, why should amount matter more than kind? Moreover, how should we rank forms of suffering?"
More suffering is very obviously more bad. Different kinds are hard to compare. Since we don't have infinite resources, we're always in triage and have to make trade-offs. As for the last question, EA attempts to tackle it systematically, while most philosophies refuse even to engage with the question of how to trade off between different kinds of suffering. But putting your head in the sand doesn't help anyone, and it leads to more suffering.
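To make the triage point concrete, here's a minimal sketch of the kind of systematic comparison I mean: rank options by expected suffering averted per dollar and fund the best first. All the interventions, costs, and effect sizes below are invented for illustration, not real cost-effectiveness figures.

```python
# Toy triage under a fixed budget: rank hypothetical interventions by
# expected suffering-units averted per dollar, then fund the best first.
# All numbers are invented for illustration, not real cost-effectiveness data.

budget = 100_000.0  # dollars available

# (name, cost per unit, suffering-units averted per unit, units available)
interventions = [
    ("bednets",   5.0,   0.4,  10_000),
    ("deworming", 1.0,   0.05, 50_000),
    ("surgery",   500.0, 30.0, 200),
]

# Sort by cost-effectiveness, best first.
ranked = sorted(interventions, key=lambda x: x[2] / x[1], reverse=True)

total_averted = 0.0
for name, cost, averted, available in ranked:
    units = min(available, int(budget // cost))  # fund what we can afford
    budget -= units * cost
    total_averted += units * averted
    print(f"{name}: fund {units} units at {averted / cost:.3f} averted/$")

print(f"Total suffering-units averted: {total_averted:,.0f}")
```

The particular numbers don't matter; the point is that refusing to rank at all is itself an allocation, just an unexamined one.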
Sometimes an argument is simply strong; other times it's simply wrong. Not every issue has two equally valid sides. More broadly, if truth-seeking is what you care about, the existential-risk and e/acc crowds are not remotely equal. One is, for all its flaws, still the most truth-seeking practical movement I know of; the other openly advocates for 'unrestricted memetic warfare', bad faith be damned.
Thanks Brendan, this was a brilliant read. I couldn't agree more with the idea that thinking with philosophy is what's needed if we are to find new ways to flourish inside what is happening now.
That's the project I'm engaged in, in my own small way, over at New World Same Humans.
This quote in particular resonated: 'Rather than locking in a dominant monoculture, technology should recognize, amplify, and facilitate cooperation and adaptation across intellectual and other forms of diversity. Related, visions for the future should not only appeal to one group but aim to address universal humanity in its variety and complexity.'
I recently published a piece that echoed those ideas: https://www.newworldsamehumans.xyz/p/creatures-and-machines
Cosmos sounds like a fascinating project; will be watching!
Resonating with some comments above, I think one philosophical circle to include would be critical rationalism: the Deutsch/Popperian understanding of epistemology, its philosophy of optimism and problem-solving (and of how to manage risk), and much of the morality around it all.
Interested to learn more. Curious how a contemporary philosophical approach might respond to the ways in which (in Harari’s terms) AI represents “non-conscious intelligence” and whether this can provoke a reappraisal of the ways philosophy has often conflated intelligence and consciousness. If intelligence isn’t what makes us human, what might we turn our attention to? Consciousness and intelligence aren’t the same thing, and perhaps it is the former that is special about the life we live, rather than the latter.
Also, this is more of a question I'm curious about for the longtermist camp: what about net present value calculations for utils? Is there really no value difference between a good produced now and a good produced one million years into the future? Do they engage with this question, and is their answer related to the way “infinity” functions to obviate the NPV?
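For what it's worth, here is the standard discounting arithmetic the question gestures at, as a minimal sketch (the utility stream and rates below are invented for illustration): under any positive discount rate, a util a million years out is worth essentially nothing today, which is roughly why longtermists typically argue for a zero rate of pure time preference.

```python
# Net present value of a stream of utils under exponential discounting:
# NPV = sum over t of u_t / (1 + r)^t. Values and rates are invented.

def npv(utils_by_year, rate):
    """Discount each year's utils back to the present."""
    # (1 + r)^(-t) underflows gracefully to 0.0 for huge t, unlike (1 + r)^t.
    return sum(u * (1 + rate) ** (-t) for t, u in utils_by_year)

# 100 utils now, 100 utils in a decade, 100 utils in a million years.
stream = [(0, 100.0), (10, 100.0), (1_000_000, 100.0)]

# With any positive rate, the far-future good contributes essentially nothing.
print(npv(stream, rate=0.01))  # ~190.5: the year-1,000,000 term is ~0

# With a zero rate of pure time preference, all three goods count equally.
print(npv(stream, rate=0.0))   # 300.0
```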
I've followed a similar path and will be following along intensely. Like you, I think AI presents a "forcing function" for philosophy to begin to answer the hard questions that technology confronts us with: What is technology actually for? What is the right relationship between nature, humanity, and technology? What, if anything, is sacred about the human? My sense is that these questions need to go much deeper than optimism vs. pessimism.
Love the belief in and focus on people. Excited to see how this progresses.