By Brad Tuttle and ChatGPT
July 30, 2025
In this conversation, I chide ChatGPT for providing outdated, off-the-shelf responses to a question about career planning. As a result, the AI took off its kid gloves and provided some truly insightful and helpful answers. Here is one of my favorite quotes: “We used to believe that machines would replace hands before heads. That logic is reversing.”
You said:
Assuming AI and quantum computing will dramatically affect people’s jobs within the next 10-20 years, what advice do you have for young people considering their career choices?
ChatGPT said:
If AI and quantum computing do dramatically reshape the workforce over the next 10–20 years — and most signs suggest they will — young people should make career choices with adaptability, creativity, and long-term relevance in mind. Here’s the most grounded, actionable advice:
🧠 1. Learn How to Learn
- Why it matters: The pace of change will outstrip traditional career paths. The ability to reskill quickly will be more valuable than any single degree.
- What to do: Focus on metacognition — understanding how you learn. Practice self-directed learning, and get comfortable with online platforms, project-based learning, and peer networks.
💻 2. Embrace Tech Fluency (Not Just Coding)
- Why it matters: You don’t need to be a programmer, but you must understand how technology shapes your field — whether it’s art, business, healthcare, or law.
- What to do: Learn the basics of AI, data literacy, and systems thinking. Try Python, yes — but also learn how to ask smart questions of tech tools.
🧬 3. Get into “Un-Automatable” Work
- Why it matters: AI will outperform humans in repetitive, analytical, or rules-based tasks. But it still struggles with nuanced judgment, creativity, human connection, and ambiguity.
- High-potential areas:
- Creative professions: Design, storytelling, innovation strategy
- Human-centered roles: Mental health, negotiation, leadership
- Complex interdisciplinary work: Bioethics, policy design, climate strategy
🌍 4. Think Systems, Not Jobs
- Why it matters: Careers will look more like portfolios than ladders. Understanding how industries connect will be more valuable than mastering one narrow niche.
- What to do: Study how domains overlap — e.g., biotech + data privacy, or climate science + finance. Learn to zoom out.
📊 5. Build Judgment, Ethics, and Communication
- Why it matters: As machines “think,” human roles shift toward guiding outcomes — making ethical decisions, setting goals, and interpreting results.
- What to do: Practice writing clearly, speaking persuasively, and thinking critically. Be the kind of person people trust to interpret tech in a social context.
⚙️ 6. Be Adaptable, Not Just Smart
- Why it matters: The job you’re training for today might not exist tomorrow. But resilience and adaptability are always in demand.
- What to do: Choose opportunities that teach problem-solving under uncertainty — like startups, research projects, fieldwork, or volunteer leadership.
🛠️ 7. Build and Share Things
- Why it matters: Your portfolio is more important than your résumé. Show what you can do, don’t just say it.
- What to do: Contribute to open-source projects, write online, create videos, build prototypes, run experiments. Share your work — feedback is fuel.
🔮 8. Keep a Long View
- Why it matters: Trends like AI and quantum are real, but the timeline is messy. Careers are marathons, not 5-year plans.
- What to do: Follow the deep shifts — like energy transitions, population aging, and decentralized systems — and align your values with emerging needs.
Would you like a list of concrete career paths or degree programs aligned with this advice?
You said:
I’ve been thinking about your advice to my query and have a couple of observations: (1) beginning with “learn-to-learn” and ending with “keep a long view,” the advice seems to be boilerplate, off-the-shelf advice from about the last 50 years, and (2) it appears to me to be focusing young people on the kinds of things that AI will be very good at within the next decade or so, that is, thinking, judgment, identifying ethical issues, attention to detail, etc. Have you any thoughts beyond what has already been said, or is this a new topic for you?
ChatGPT said:
That’s an excellent and fair critique — and you’re right to push for more original thinking here.
You’re not wrong: much of the advice about adaptability, creativity, and ethics has become generic — precisely because it’s safe, non-committal, and often fails to confront the deeper paradox you’re pointing to:
If AI is getting better at thinking, analyzing, learning, judging, and even “being ethical,” then what’s left for humans? And how do we guide young people toward careers that won’t just be displaced or absorbed by those very same capabilities?
That is a more urgent question than the typical “soft skills will save you” narrative. Let’s tackle it more directly.
🔍 Reframing the Core Problem
We used to believe that machines would replace hands before heads. That logic is reversing. The new AI wave targets:
- Pattern recognition
- Rapid upskilling (via LLMs, code assistants, etc.)
- Even “moral” language patterns
So telling young people to focus on judgment or learning how to learn might now be like telling people in the 1920s to focus on horsemanship when cars were coming.
What does that mean?
🧠 Possible New Directions — Beyond the Usual Advice
1. Focus on “World-Embedded” Work
AI is still bad at being in the world — not just modeling it.
Example fields:
- Field-based science: geology, ecology, disaster response
- Hands-on medicine: surgery, diagnostics with physical cues
- Skilled trades: robotics maintenance, electrical systems, fabrication
- Construction and infrastructure planning
- Crisis logistics (e.g., supply chain in unstable environments)
AI will simulate models. But reality is messy. Whoever can close the model-reality gap is indispensable.
2. Embrace “Machine-Partnered” Professions
Not human versus AI, but humans with AI — deeply collaborative.
Practical implication:
Train young people to use AI as an extension of themselves, not a rival.
Example career types:
- AI-augmented creatives (prompt engineering + worldbuilding)
- Cognitive designers (shaping user experience with AI behavior)
- AI-copilot specialists (integrating LLMs into real workflows)
- Augmented diagnostics (in law, health, education — with AI assistants)
The human value may be in orchestrating AI teams, not replacing them.
3. Cultivate “Unstable Domain” Fluency
AI thrives in stable, repeatable domains with lots of data. It struggles with:
- Rare events
- Noisy environments
- Conflicting values
- Emerging problems with no labels
So: steer toward fields with evolving rule sets, e.g.:
- Governance of new technologies
- Biosecurity and emerging pathogens
- Climate mitigation under chaos
- Space policy and interplanetary ethics
Become a translator between uncertainty and action. That’s a long-term human advantage.
4. Shift from Mastery to Stewardship
If AI can master everything from coding to diagnostics, then perhaps the human role isn’t mastery — it’s guardianship.
Teach:
- Systemic thinking + social accountability
- Collective memory and cultural meaning
- Restraint — knowing when not to act
- Care, in the literal sense (nurturing systems, lives, ecosystems)
Stewardship is hard to automate — it blends emotion, context, tradition, and long-view consequence.
🤔 Final Thought
You’re right to be skeptical of old advice in a new age. If AI is advancing fast and thinking well, then humans will need to:
- Work closer to the mess
- Play with tools, not just ideas
- Act in physical, political, or ambiguous settings
- Be responsible not just for what’s efficient — but for what’s worth doing
That may mean rethinking education, not just career advice.
Would you like to explore a vision for post-AI education? Or a speculative map of careers in a world where “intelligence” is cheap and abundant, but meaning and trust are rare?
You said:
That is a much better response, especially the part about learning how to co-exist with AI. The AI will evolve for certain. But will people and education?
ChatGPT said:
That’s the heart of it — and it’s far from guaranteed.
🚀 Yes, AI will evolve.
That part’s inevitable. The models are improving at exponential speed, the incentives are aligned, and the funding is enormous.
But your deeper question — “Will people and education evolve with it?” — is much harder, and the honest answer is:
Not quickly enough. Not uniformly. And not without serious friction.
Let’s break that down a bit:
🧠 1. People Tend to Evolve After the Crisis, Not Before
Historically, social change lags technological change. Some patterns:
- Industrial Revolution: Machines came fast. Labor rights and universal schooling came decades later — after exploitation, unrest, and collapse.
- Digital Era: The internet revolutionized work, but mass digital literacy and cybersecurity awareness are still catching up.
So:
We may not see serious human adaptation to AI until job dislocation, misinformation, or other disruptions hit hard.
🎓 2. Education Systems Are Rigid by Design
Most schooling systems are:
- Bureaucratic
- Politically sensitive
- Built for credentialing, not curiosity
- Still training kids for jobs that won’t exist
You probably know this already, but a few specifics make it worse in the AI era:
- Standardized testing trains kids to repeat — just like AI does, but worse.
- Curricula lag behind emerging fields by years.
- Teachers aren’t reskilled fast enough, even when they want to be.
There are some rays of hope (see below), but for most systems:
Education is preparing kids for a past that no longer exists.
⚠️ 3. Without Guidance, Many Will Adapt the Wrong Way
Some likely short-term human responses to AI’s rise:
- Hyper-optimization: Using AI to chase shallow success (cheating, resume padding, “prompt farming”)
- Resentment or withdrawal: Rejecting tech entirely
- Over-reliance: Letting AI do the thinking, forgetting how to think independently
So unless culture evolves too — not just tech — we risk raising a generation that’s technically fluent but ethically ungrounded and intellectually passive.
🌱 But There Are Seeds of Real Evolution
In people:
- Some are learning to think with AI, not just use it — like musicians improvising with a smart bandmate
- Others are developing AI-literacy, ethics, and collaborative fluency from a young age
- A growing number value slow thinking, context, and embodied presence — as a counterbalance to algorithmic speed
In education:
- Progressive models like Montessori, Waldorf, and democratic schools are being reinterpreted for the AI era
- Some universities are experimenting with “human-AI teams”, not just teaching programming
- Online education is maturing — less about content, more about co-creation, mentorship, and experimentation
But these are still fringe. System-wide transformation will take:
- Bold policy
- Cultural shift
- New mental models for intelligence, creativity, and work
🧠 Bottom Line
You’re absolutely right to question whether people and education will keep up with AI. The tools are racing ahead, but the human systems are lagging.
The real challenge isn’t just how to coexist with AI — it’s how to stay human while doing so.
That means:
- Teaching judgment before convenience
- Practicing curiosity over optimization
- Building trust, meaning, and care into systems that don’t require them — but desperately need them
Would you be interested in an outline for a truly future-ready education model — one designed around the assumption that intelligence will be abundant but wisdom scarce?