What Does It Mean to Think in an AI-Dominant World?
Why the greatest educational responsibility today is not teaching AI tools — but preserving human judgment.
For the past year, my schedule has been filled with workshops, symposia, conference sessions, and meetings, all driven by one topic: artificial intelligence.
This past Tuesday, I attended another symposium focused on the impact of AI on the future of career readiness in education. The first keynote speaker made a mind-boggling statement that stuck with me: we are not creating artificial intelligence; we are creating a new intelligence. Heads around the table nodded with equal parts excitement and impatience.
That one declaration set the tone. The conversation quickly shifted into something closer to evangelism. “AI will change the way we work.” “AI is here to stay.” “AI isn’t optional anymore.” I heard these lines over and over again, almost like a chorus.
But I couldn’t help but tilt my head, squint my eyes, and think to myself, ‘What do you mean, new intelligence?’
By the end of the session, the entire room was equally amazed and bewildered by the vision of an AI-enhanced world said to be arriving very soon. That single session and its bold opening claim were enough to start me on a path of inquiry and curiosity. And yet, as we walked out, I couldn’t shake a single uncomfortable question:
Are we being duped by the promise of gaining efficiency at the expense of human judgment?
That question returned to me when I came across a striking essay in The Atlantic titled “The People Outsourcing Their Thinking to AI.” The piece profiles individuals who have begun delegating everything to AI tools: writing emails, parenting decisions, even trivial daily choices like grocery shopping or judging whether a fruit is ripe. More than a cautionary tale, the article offers a window into a broader psychological and social phenomenon. It suggests that for many, AI is no longer just a tool. It is fast becoming a reflex. As I reflected on this article and my own experiences in higher education, I realized: we are not just integrating a new technology. We are renegotiating our relationship with thinking itself.
The Temptation of Easy Answers
A couple of years ago, a faculty colleague described a pattern she had begun noticing in student writing shortly after generative AI tools became widely available. She didn’t raise the usual concerns about plagiarism or outright academic dishonesty. Instead, she sensed something more subtle. “It’s not that the writing is wrong,” she said. “It’s just thinner. Less curious. Less… wrestled with.” What struck her most was not error but the absence of “character” in her students’ writing. There were fewer intellectual detours, fewer surprising connections, and not enough evidence of the “productive struggle” that typically signals real engagement.
This observation mirrors a broader pattern described in The Atlantic, in which users increasingly consulted AI for trivial or personal decisions. One individual, a 44-year-old marketer named Tim Metz, told the interviewer that he used AI for up to eight hours each day. In those hours, he would ask for advice on parenting, relationships, and even whether a fruit was ripe. On one occasion, he uploaded a photo of a large tree near his home, asked the AI if it appeared dangerous, and then avoided his own house that night at the AI’s suggestion. The tree didn’t fall, but the behavior revealed a more profound psychological shift: a reflexive outsourcing of judgment.
His experience echoes what some experts call the “Google Maps–ification of the mind”: the notion that we no longer need to remember or reason about directions because GPS will do it for us. This cognitive offloading may not just ease our mental load; it may slowly reshape our default mode of thought. The risk is not simply that AI gives us answers. The risk is that we lose the muscle of questioning, settling for bland, unremarkable outcomes for the sake of getting a result quickly.
Enhanced or Illusory Intelligence?
Continuing with my quest for a different perspective on AI, I read the Forbes article “Outsourcing Our Minds—How Generative AI Can Rewire The Way We Think” and found a compelling statement: it is crucial to use AI as a complement to our human thinking, not a replacement for it. This resonated with me because I have seen firsthand over the last two years that, when used skillfully and sparingly, AI can challenge your ideas and offer different perspectives. However, if used carelessly and ubiquitously, it can displace effort, reinforce your assumptions, and lead to cognitive atrophy.
Consider the story of another educator, quoted in The Atlantic article. One evening, after his AirPods had fallen between the seats on a train, his first instinct was to ask the AI for a solution rather than thinking through the problem himself. “It was the first time I realized I was defaulting to AI for thinking that I could just do myself,” he said. That moment of recognition spurred him to take a month-long break from AI to reset his brain. “It was like thinking for myself for the first time in a long time,” he told the reporter.
What emerges is a new kind of human behavior: the reflexive delegation of cognitive tasks, even those well within our everyday competence. The threshold for what we consider to require thought is shifting downward. This doesn’t always result in catastrophic errors. But it subtly erodes the confidence, initiative, and curiosity that underpin critical thinking.
How AI Quietly Dulls Thinking
The mechanisms through which AI may dull thinking are slowly becoming more visible. A growing number of experts (neuroscientists, educators, ethicists) describe a variety of ways that habitual outsourcing can reshape mindsets and habits of thought:
Uncritical acceptance. The fluency and confidence of AI-generated text create an illusion of correctness. When answers sound polished, users are less likely to question them.
Narrowed perspectives. Because AI’s outputs are shaped by its training data and algorithmic biases, over-reliance may limit exposure to unconventional or minority viewpoints.
Loss of productive struggle. The mental labor of organizing, evaluating, and revising, crucial to deep learning, becomes optional.
Erosion of metacognition. When AI feels like a reliable oracle, users stop monitoring their own thinking and instead trust the tool.
Reduced reflection. With answers delivered instantly, there is less opportunity for pause, doubt, or revision.
Worse still, when AI becomes a go-to interface for daily life (commuting, parenting, shopping, personal dilemmas), those patterns of mental passivity can seep into professional and academic domains. For educators like me, this is deeply concerning. We aim to cultivate agency, curiosity, judgment, and the capacity to hold ambiguity. But as AI becomes ever more ambient, we may find ourselves nurturing compliance and passivity instead.
The Flattening of Choice and Identity
Recent academic work makes explicit what many anecdotal accounts only hint at. A 2025 preprint titled The Basic B*** Effect examined how LLM-based agents, when used to make everyday choices, tend to reduce both interpersonal distinctiveness and intrapersonal diversity. In other words, the more we let AI pick, plan, and decide, the more our choices converge on generic, popular options, reshaping who we are in the process. Our unique preferences shrink, and idiosyncratic taste fades.
If this trend accelerates, the risk is bigger than diminished critical thinking. It becomes a homogenization of thought, identity, and creativity that amounts to a silent erasure of individuality under the guise of convenience.
A Glimmer of a Different Path
This bleak picture, though, is not the whole story. Ironically, the same research that warns of cognitive dulling also points toward a possible future in which AI deepens, rather than depletes, human reasoning. A recent paper titled A Beautiful Mind: Principles and Strategies for AI‑Augmented Human Reasoning asserts a critical need to invest in human reasoning and proposes a paradigm for using AI as an extension of human thought rather than a stand-in.
According to the paper, if used intentionally, AI can:
elevate exploration by surfacing hypotheses and alternative frameworks;
expand access by offering high-quality personalized examples or scaffolding;
reinforce metacognition by prompting users to evaluate, revise, and reflect;
preserve uniqueness by helping individuals clarify rather than replace their own voice.
These potentials map closely onto what educators value: agency, judgment, creativity, and nuance.
I saw one such example last semester in my Education course. I designed an assignment that invited students to analyze a current trend of their choice in public education, from teacher shortages to the expansion of charter schools, using AI as one of several tools. But before consulting AI, students were required to submit a short “pre-reflection” stating their initial thoughts, assumptions, and questions. After producing AI-assisted drafts, they also submitted a “thinking journal” comparing their own ideas with the AI’s suggestions, noting where the AI added insight, where it fell short, and where it introduced unintended bias.
The result was not disengagement or dependence. On the contrary, the students’ final papers were richer, more reflective, and more deeply grounded in evidence and personal reasoning. A handful even commented that the exercise helped them realize how much of their original thinking had been unexamined until the AI forced them to confront it. That classroom felt like a laboratory for what I believe thoughtful AI integration can look like.
For Educators, Leaders, and Everyday Thinkers
If we want AI to sharpen rather than soften our thinking, mere adoption is insufficient. We must cultivate habits, structures, and cultures that preserve mental discipline.
Here are five practices that matter:
Generate before you ask. Write your own thoughts, draft your questions, sketch your ideas before turning to AI.
Compare outputs. Treat every AI answer as a draft. Ask for multiple versions. Evaluate the differences.
Debate the tool. Demand from AI what you demand of yourself — questioning assumptions, probing limitations, checking context.
Use AI to expand perspectives, not shrink them. Prompt for counterarguments, alternative framings, and interdisciplinary lenses.
Reflect on your process. Develop a “thinking journal” or habitual pause to ask: What did I learn? What choices did I make? What was the AI’s role, and did it help or hinder my thinking?
We must help learners, colleagues, and ourselves become not just AI-literate but AI-wise.
Returning to the Conference Room
As I reflect on that symposium, the energetic push for AI indoctrination, the growing pressure on nearly every institution to modernize, and the urgency that hovered in the room, I choose to look in both directions before crossing the road. The presenter’s passion was not misplaced, but it overlooked the actual costs of a wholesale embrace of AI. He was seeing a changing landscape, the global competition for AI dominance, and the need for graduates who could navigate it.
But I wish the conversation had also included another truth: the future belongs not only to those who use AI, but to those who know when not to.
It will belong to people who can pause long enough to question a fluent answer, who are courageous enough to disagree with a confident machine, and who are disciplined enough to retain the capacity for slow thinking even when fast thinking is possible. It will belong to those who understand that identity and creativity live not in our ability to produce quickly, but in our capacity to question, to struggle, to choose deliberately.
AI may help us move through tasks. But only humans can move through meaning. The responsibility for real thinking, for judgment, curiosity, discernment, still falls on us. In a culture addicted to instant answers, let us not surrender the quiet luxury of uncertainty. Let us instead choose to pause, reflect, think for ourselves, and preserve our human judgment.
If this resonated, subscribe to Deep Thinker Lab for weekly tools that help you think, decide, and live more deliberately.