12 Comments
Jesse Parent:

Yes: "Specifically, AI demands a new metacognitive capacity: reflective discernment. We’ll need to be able to clearly differentiate our own reasoned judgments from the pull of algorithmic suggestion."

And: great to see the live stream for MIT AHA available!

Emily Burnett:

I found myself agreeing out loud at several places in this piece. As an entrepreneur-at-heart who's spent time as an employee in various careers before leaving for the next stage of my own development, I've seen this first-hand, even before the mass adoption of AI. It's wake-up time to reclaim self-authorship.

Michael Garfield:

Breaking our need to worry about product-market fit unleashes prodigious amounts of autonomy.

Farmers were "self-employed" because they were farming for themselves. Now people who graduate with advanced degrees do so *in order to compete*. But we could all be farming again if our systems finally make good on the promise of masterful cross-domain translation, making someone's insights in the area of their choosing economically pertinent in areas they don't even know exist.

The more niches we have, the less competition there is for each niche. Cities drove the specialization of labor, affording more unique lifeways. Chris Anderson's "long tail" was made possible by the Web. The logical extreme is where we move out of Kevin Kelly's "1000 true fans" model for internet-based creative commerce and into deliberately choosing to be authentically self-directed in one's explorations, contributing something unique by taking a singular path through the latent space of all possible career trajectories.

A mature information economy is one in which your evolving identity *is* "the difference that makes a difference," and we aren't all driven into the confinement of crowded attention basins by what other people find interesting and/or are willing to support. Most human labor is "useless" at the spatiotemporal scale and resolution of human-directed markets, but it is existentially important to the continued existence of those markets, providing huge reservoirs of adaptive potential. And AI needs data to keep from overfitting... the faster the world spins, the more obvious it will become to bet on people choosing distinct paths. This is how we can reconcile the need for "autonomy" with the reality of our participation in nested systems. (See https://michaelgarfield.substack.com/p/hotl-03 for why autonomy can't be "exit".)

Michael Dean and I discussed this future of exponentially creative post-job work proliferation and AI providing the infrastructure for self-inquiry-as-a-service in episode 17 of a Cosmos-supported series, and I'll be writing much more about this soon: https://michaelgarfield.substack.com/p/h-17

Tom Moffatt:

Autonomy of thought and agency are absolutely critical to the A.I. Age. As you highlight so clearly, these are attributes reinforced and developed through daily practice. We have to decide, as individuals, leaders, and organisations, how technology will be used in our lives and our work. We also have to remember and celebrate the gifts of being human.

The Human Playbook:

I’m so glad I found this piece. What’s been intriguing me lately is this: what exactly is the problem? We haven’t agreed on what’s truly at stake, yet we are all witnessing it happen before our eyes... which is fascinating. One side says AI is already doing the thinking, or gradually but surely getting there. Meanwhile, academic papers keep reminding us that AI doesn’t ‘think’ at all; it simulates. And when it comes to complex reasoning, it still falls short.

So what’s really happening here? What are we missing in this conversation?

Hopefully Abysmal:

I am of the firm belief that the AI systems in place can (and will, if we secure and decentralize access to them) be used to free us from menial vocations. By automating as much as we can, we ideally free ourselves to do things we could never have done before under the same economic circumstances. It has to be stated, however, that it is exactly that: dependent on the economics surrounding its implementation. Unless we adopt new metrics and incentive systems, we will automate away the entirety of our lives, and those in poor circumstances will only become worse off. It will require a significant degree of altruism: designing systems not for profit but for progress (a choice that's apparently super hard to make 🙄). I like to think of it like this, though: if all of the consumers die off, who will be left to make the purchases that sustain the profits we're so obsessed with? It's not a tooling issue; it's a systems, social, and societal issue...

redbert:

Wouldn't an "absolutely optimized" world provide a Universe 25 scenario? food, health, work, all becomes victim to the sterile logic of a hyper controlled (controlling) compu-system.. and all this idealized precursor stuff "if AI takes my job, ill be able to do hobbies!" is absolutely garbage because humans are abhorrently bad at filling voids with anything meaningful.. and we would also assume that AI hyper optimizing everything even eliminates the need for your hobbies.. then... poof...

Hopefully Abysmal:

You're not wrong, but I would not say you are inherently right. As stated I agree that uncontrolled automation could (likely would) result in a Universe 25 scenario, however that is not a given if we play our cards right. In short: I believe we risk more by resigning ourselves to that outcome than by imagining alternatives with clarity and care.

Rather than automating everything, we should be asking: What must never be automated? Where is friction sacred? What are the rituals of becoming in an age of infinite suggestion?

We must imagine tools that inspire responsibility, nudge reflection, and make transparent the cost of every shortcut. Imagine economies that reward not just production, but preservation of possibility; economies where contribution isn’t bound to narrow outputs, but to one's capacity to cultivate options for others.

We're bad at filling meaning because we have never been taught to... we have not yet even had the opportunity to learn how! I would argue that until this occurs we will only have ever survived as a species, never truly thrived. We must purposefully metabolize the shift from survival labor to intentional vocation if we are to have a future worth fighting for.

Universe 25 was a cautionary tale, but have you read `A Psalm for the Wild-Built` by Becky Chambers? I really enjoy their take on work post-automation: gentle, slow, and rich with meaning. It's an amazing story as well, worth the read if you're willing to take an alternative perspective for a moment.

redbert:

I like the alternative perspective, absolutely; in fact, I prefer it, endorse it, and want it... but the quick AI push is driven by capitalism and the desire for $$$, which I believe will inherently undermine any moral or social considerations, as capitalism is a pig that constantly needs feeding.

The social-implication talking points all seem to revolve around a common narrative, which I poked at in my initial comment (and seemingly every comment I've left on posts on this topic): "Let AI do it all and just chill with your buddies, bro!" So we lose common sense, we lose value, and this capitulates; enter Universe 25 (or similar).

We did it with factory farming, we did it with social media, we did it with colonialism, and even the fucking plastics industry. We played the money game too quickly, and the fallout touched everything from ecology to mental health to democracy, medicine, and ultimately loss of life. Now we can extend the "loss of life" part to loss of identity, purpose, usefulness, meaning... We're bad at this, and while hopeful, I think we're really naive in believing it'll be different this time. Cocktail: 1 pt profit over prudence, 2 pts bias, 1/2 oz illusion of control. Stir vigorously 😆

Takim Williams:

Dropping the link to a note where I sketched a hypothetical Black Mirror episode about an "auto-complete" near-future:

https://substack.com/@takimwilliams/note/c-113381818?r=17mz6p

Takim Williams:

Love this, particularly the vision of experts developing discernment for when to return to analogue tools vs. when to augment with AI. It strikes me as a particularly relevant framing for education. So much of the debate remains starkly binary (those "against" and those "for" student use of AI). The synthesis would involve recognizing that, should we restrict student AI usage, the purpose of doing so would be to develop their judgment faculties and capacity for self-determination, PRECISELY SO THAT they are equipped to use the cognitive-enhancement technologies of the 21st century to amplify them.

Jamie House:

It's crucial we identify what automation and autonomy look like for child development. I think schools have already made mistakes in this regard. AI will continue to challenge the validity of schools.
