We don't even need to look to the past to know what it is to be "transformed from free agents to passive subjects." The progression of technology is inextricably linked to the power structures that set forth the rules of price, value, and social status, collectively defining the boundaries of any individual's freedom. So long as society has been of a technological nature, free agency remains elusive. Perhaps such a freedom is merely the intermediate goal on a path to something else, which might reveal itself through a latticework of ideas that are collectively telling us we are all part of something big, mysterious and impossible to explain.
Take a look at some of the trends so far. The impact of mobility technology on spatial dynamics has rendered land a scarce commodity, increasing its value while simultaneously reducing the cost of mobility. Similarly, as labor time is compressed, leisure time becomes a luxury, with its price escalating inversely to the decreasing cost of labor. The allure of artificially illuminated screens further commodifies daylight leisure. In response to these shifts, we consciously organize ourselves into hierarchical class structures, reinforcing these power and price dynamics while elevating those who embody intellectual or economic mastery over them. This pattern reflects our persistent tendency towards self-imposed servitude in the pursuit of survival.
One of the greatest threats to human autonomy, and therefore to flourishing, is the failure of our own capacity to see through the fog of complexity and to disillusion ourselves of those constructed features of our lives that bind us to conditions of unfreedom. Sometimes, the binding conditions are not even cultural. For example, our cognitive predisposition to visual stimuli is so pronounced that we rarely allocate enough expensive "free" time to develop skills for looking at the dark interior of ourselves, a practice that so many ancient traditions associate with a fundamental disruption of values, or enlightenment. But in today's society, where are the images of wisdom we can all aspire towards?
Even before a new philosophy can steer technological governance towards human flourishing, we will need to carve out new contexts (perhaps even so new that they break time-space frames) within which it becomes possible to develop an innate sense of what it is to be human again.
"So long as society has been of a technological nature, free agency remains elusive. Perhaps such a freedom is merely the intermediate goal on a path to something else, which might reveal itself through a latticework of ideas that are collectively telling us we are all part of something big, mysterious and impossible to explain."
Epistemic humility seems prerequisite to acting with due skepticism (the original, formal sense of the philosophical practice, not the defensive knee-jerk reaction to anomalous stimuli) and thus to freedom from hijack by viral memes. And the careful study of history suggests that "freedom" as an ideological goal-state has frequently, if not always, undermined the long-term agency of those whom it possesses as an ultimate concern. How can we exist but in an elaborate web of interdependencies? And how can extrication from the apparent bindings of relationality be anything but the script whereby we become unwitting agents of the destruction of the systems upon which others rely, and thus a "villain" in the colloquial sense: a cell that will not listen to the regulatory commands of the body and thus identifies itself as a foreign and invasive entity? If we lean into a definition of selfhood whereby selves emerge as ephemeral patterns of interbeing, the strategy for "freedom" transforms profoundly into optimization for service of the bigger, and ultimately only-ever-partially-knowable, self-that-includes-other. The alternative is to become malignant, however noble one might see power-seeking as an affirmation and fulfillment of the toil and sacrifices of our ancestors.
I really like the link between progression of technology and the power structures you have described. Whilst there is still some freedom, don’t we have the agency to decide whether we make those rules and values and social status important to us? We can decide to make them irrelevant if we wish by accepting we may not receive those things described but in return will receive something else of our own creating. Something I think you are alluding to?
From my perspective, the modern era has been defined, and mass communication controlled, by a relative few (until recently) who, based on their actions, appear more interested in power and control than in any idea of the flourishing of others.
AI could be what enables other ideas to scale and makes current structures irrelevant (and therefore no longer powerful). But only if access to the tools remains affordable for the individual, and/or we come together and build those projects with like-minded people who desire the flourishing of others. Power and control can no longer remain the loudest voice in the room.
Will AI be another nail in the coffin of freedom? I think it comes down to individual choice. So how do we enable people to choose at scale?
How *can* we choose if we allow ourselves to be possessed by the ideology that insists time = money, money = leverage, leverage = agency, and agency = happiness?
(we have always lived in the maze.)
I am excited to share the results of my first collective intelligence simulation, exploring the solution space for the Israel-Palestine Conflict:
https://youtu.be/pXEj3pW0q_M
You can create your own war game simulations at http://simulatedsingularity.com/
I decided to focus on creating a war game simulation, inspired by the AGI Genesis from my story Simulated Singularity. I built a CustomGPT agent and embedded the ethical framework from the story, a game theory framework, and a prompt-engineered default list of ethical quandaries.
The first war game I wanted to simulate was a resolution to the Israel-Palestine conflict. This video is based on the conversation I had with the agent. The goal is to show how language-based simulation allows us not only to imagine possible futures, but to steer toward certain outcomes through a forcing mechanism applied to the simulation.
Note: This video is not meant to convey my own political opinions, only to explore a research direction for ethical-framework-based alignment of language-based simulations. Let me know which other conflicts you would be interested in seeing the solution space for next.
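The agent itself runs on a language model, but the core of the forcing-mechanism idea can be sketched in plain code: hard constraints define an admissible region of outcomes, and the simulation may only settle on outcomes inside it, with an "ethical framework" scoring function choosing among them. This is a toy numerical illustration under assumed names (`ethical_score`, `forcing_mechanism`, the utility/harm triples), not the actual CustomGPT setup:

```python
import random

# Toy outcome space for a two-party negotiation: each outcome is a
# (side_a_utility, side_b_utility, harm) triple in [0, 1].
# The names and scoring are illustrative only -- the real agent
# reasons over outcomes in natural language, not numbers.
random.seed(0)
outcomes = [(random.random(), random.random(), random.random())
            for _ in range(1000)]

def ethical_score(outcome):
    """Illustrative 'ethical framework': reward joint welfare, punish harm."""
    a, b, harm = outcome
    return a + b - 2 * harm

def forcing_mechanism(candidates, min_each=0.4, max_harm=0.3):
    """Discard outcomes that violate hard constraints, regardless of score.
    This is the 'forcing' step: the simulation is only allowed to
    converge on outcomes inside this admissible region."""
    return [(a, b, h) for (a, b, h) in candidates
            if a >= min_each and b >= min_each and h <= max_harm]

admissible = forcing_mechanism(outcomes)
best = max(admissible, key=ethical_score)
print(f"{len(admissible)} of {len(outcomes)} outcomes admissible")
print("best admissible outcome:", tuple(round(x, 2) for x in best))
```

In the language-based version, the constraint check and the scoring are both performed by the model against the embedded frameworks rather than by arithmetic, but the structure of the loop is the same.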
Would the soft despotism of global AI systems be different from the soft despotism of global human governance? Does the nature of the boot matter, whether it is biological or artificial?
Global human governance is impossible without managing information scaling in some way, which is why global supply chains, global telecommunications infrastructure, and digital society with its recommendation algorithms all arose together. So seeing the two as distinct options is unfeasible: you can't be a global despot and offer local communities any kind of convenience without machine-assisted resource allocation.
It seems like AI has a “local” reach that governance doesn’t quite get to. AI can impact us all; governance can set a frame but doesn’t necessarily reach each one of us. If you agree with that, then AI’s effect is more pervasive, and tangible. Curious to hear back from you!