Bravo! Excellent work. This is especially important after the release of Tristan Harris's TED talk about the future of AI. He thinks decentralized AI leads to chaos.
"AI is our new epistemic infrastructure. It could take that project to new heights, or subvert it entirely. In the downside case, AI becomes an "autocomplete for life"—suggesting not just our next word, but our next action, job, relationship, purpose. Each small delegation of choice seems harmless, even natural. But together, these micro-abdications compound–choice by choice, day by day–gradually diminishing our capacity for autonomous thought."
Well said!
This echoes something that I recently wrote: "[W]e used to hide the humans inside the machine. Now we hide the machine behind the humans.
This strange new world resembles Theseus's famous paradox: the ship looks unchanged, but all its internal mechanisms have been quietly replaced. Our economy preserves the appearance of human work while silently replacing its cognitive components, leaving us to wonder who (or, more accurately, what) is truly at the helm."
More here: The Inverse Mechanical Turk: Meat Puppets, Silicon Strings (https://www.whitenoise.email/p/the-inverse-mechanical-turk-meat)
An interesting essay. I found your eulogy to markets surprising though? Markets are only ‘efficient’ when they consist of autonomous & independent agents traversing the whole system state over time. This happens for a while, but you surely realise that even natural systems will tend towards consolidation, path dependence, power laws & local optima that can still create global fragility? In natural systems this fragility is resolved by frequent collapses (forest fires, sandpiles etc.), but in human-constructed systems we use our big brains to shore up the sandpile until we have a huge, inter-connected, low-variation mess that leaves us stranded facing huge collapse. Believe me, I’m British so I know how this feels!

I also find it odd that you reference Polanyi without comparing his ideas around recursion & symmetry in indigenous societies. These early societies didn’t ‘leap’ as much as modern societies, but nor did they keep collapsing. There is a trade-off between stability & emergence?

I would like to have seen you explore the notion that self-organising human systems do *not*, as you suggest, always conduct parallel experiments at local edges - instead they often leap? Peter Thiel is quite open about this - he’s not interested in 10, 50/50 bets. He wants an 80/20 funnel & he’ll do everything in his power to take the shortcut. Surely that’s also the essence of the human brain? It doesn’t explore every decision with a 50/50 experiment? It leaps, irrationally, with a decision that is good enough.

Anyway, a very good essay. My comments are not intended to undermine - only to invite further exploration.
Well shared! I invite further exploration along these lines as well. A starting point: "fragility is resolved by frequent collapses" - yes, though is it collapses all the way down? I reason that it is not. The quick way to explain: while a pathway in an indirect network might collapse, triggering other paths to collapse in turn, that same path is structured to permit repair along its course. In a grounded system, the network "funnels" - that is, reconfigures - due to intrinsic, dynamic interrelatedness. Collapse is not an absolute.
This is not to say do nothing, but rather to focus on the structurally sound subject of real (grounded) generativity, and to move beyond the either/or framing of centralization versus decentralization.
Modular, networked systems can regenerate, as you say. Other pathways can be explored as the system reorganises. But where there is monoculture (in markets, crops or ideas), systems become very inter-connected & synchronised, increasing the chance of huge systemic cascades & collapses. I can’t quite remember exactly what this essay said, but I think he argues for edge innovation, which I agree with. But the risk with AI is that we get a monoculture that reduces variety & funnels our culture into a very narrow state space. The whole point about a brain, rather than a machine, is that it’s capable of reorganising (through networks of neurons) to address new system states. AI can’t do that. It can if combined with human brains, but if it becomes a priesthood or a set text then we really are doomed.
Very insightful thread
Nice piece. I am using a few insights here in crafting a discussion about decentralized technology in my next book.
One quibble: it's a bit confused to say that AI agents can "soak up" tacit knowledge by participating at a local level. The whole point of tacit knowledge is that it's not quantifiable and can't be, as Polanyi put it, "articulated." This means it can't be converted into data and input into a computer. How, then, do the agents "soak it up"?
I think the point that's missing here is that computation itself is bounded with respect to human intelligence. The tacit component is distinctively human, insofar as computational systems must use downstream quantifiable data as input.
We need to preserve this distinction. In fact, one of the great virtues of decentralized systems is not that the computers will get in on the tacit knowledge game, but that we are freed to provide such knowledge. It enters as acquired judgement.
What the decentralized AI community badly needs is a proper theory of how to handle the tacit and the quantifiable. We may, for instance, be counting things that shouldn't be counted, or neglecting to count things that should. But suggesting that AI agents will soak up tacit knowledge misses not just Polanyi's point but a broader and important one.
At any rate I enjoyed the piece.
The issue with decentralization is that ultimately it's unclear that there is anything human that flourishes - though I generally support the idea of empowering humans, if only so that humans can have more influence on the tsunami.
But everything indicated about humans is likely true also of AI agents, and scaled up, this results in a world where humanity is extremely diminished, if not existentially threatened. Indeed, many tools already exist to capture the "tacit knowledge" that humans have, such as monitoring.
If AI is truly to aid in human flourishing, these realities have to be faced and confronted. I'm planning to write a lot more myself as well, but I think one can easily be too glib about this - though I'm glad that you at least have noted that accelerationists are profoundly anti-human.
Hi Shon Pan, I think the point is that "tacit knowledge" can't be captured. That's precisely why it's tacit. The worry is that what can be captured is only that which can be "counted" or rendered as data, and that's not all humans know and can do.
Monitoring helps capture information; if tacit knowledge is strictly defined as "something that cannot be captured," then it may not exist. It obviously can be captured, at least by human brains within a certain context, and computational functionalism seems triumphant at the moment.
The argument for the benefits of decentralised knowledge creation, protection, and self-reinforcement at the edges is meaningful. I’d be concerned about the increasing capabilities of AI in this part: ‘Similarly, decentralized AI systems can create interfaces between different forms of local knowledge without requiring all knowledge to be explicit. And if AI agents participate in institutions like markets, they can soak in our tacit knowledge–while potentially sharing their own’. The reason is that tacit, inexplicable, sometimes intuition-based knowledge could be the only thing left between autonomous human systems and an all-powerful AI. I’m all for being optimistic about it in the long term, as long as we are in charge of the conscientious development of AI, yet I'm definitely not sure our optimism should extend that far. The legal framework in the age of AI is something I think about a lot too: https://open.substack.com/pub/troubadour/p/on-laws-and-technology
I absolutely love the thoughtful provocation of your thinking and read and listen to you with relish (I live in NZ - when can I do an online course, run by the Institute, that grows my critical thinking?). My question is around manipulated knowledge. How do we handle the manipulation of knowledge when it is no longer possible to tell fact from fiction, in populations that are increasingly divided and, to a degree, time-poor or lacking the curiosity to challenge? How does decentralised knowledge, which increasingly reinforces individual bias, contribute to a governable society?