Thinking 2 Think

The AI Paradox: Why Artificial Intelligence Is Making Critical Thinking More Important (And More At Risk)

Michael A. Aponte | Episode 68




Artificial intelligence is evolving faster than ever—but is it making us better thinkers or more dependent? In this episode of Thinking 2 Think, Mike Aponte explores The AI Paradox: as AI tools become more powerful, the demand for human critical thinking is rising—yet our reliance on automation may be weakening our judgment.

Using the CLEAR Protocol, this episode breaks down:

  •  What AI actually solves (and what it doesn’t) 
  •  The hidden risks of automation bias and cognitive offloading 
  •  Where human judgment remains irreplaceable 
  •  How to use AI without outsourcing your thinking 

Grounded in research from Microsoft, OECD, NIST, and the World Economic Forum, this episode offers a practical framework for leaders, educators, and professionals navigating decision-making in the age of AI.

This is not anti-AI.
This is pro-thinking.

Support the show

CHECK OUT OUR NEW CRITICAL THINKING GAME APP! Currently in BETA: 

Join My Substack for more content: maaponte.substack.com

Consulting/Advisory Services: MAAponte.com

📲 Let’s connect on social media (Linktree): https://linktr.ee/thinking2think

🎧 Don't forget to like, share, and subscribe to join our growing community of thoughtful individuals!

🔗 Follow us:
📖 Check out my book: The Logical Mind: Learn Critical Thinking to Make Better Decisions:

The AI Paradox — Why Intelligence Without Judgment Is Dangerous

00:00:00 M Aponte: Thanks to AI, you can now borrow intelligence at scale. The danger is forgetting that borrowed intelligence is not the same thing as judgment. Welcome to Thinking 2 Think. Today I want to talk about a strange contradiction in the modern world: the smarter our tools become, the more important human judgment becomes, and yet the more capable these tools become, the more tempted we are to stop thinking so hard. And that is the paradox.

We are living through the fastest expansion of AI tools most of us have ever seen. For example, I just got into open core and have a localized AI computer, and yet by the time I was actually done setting up, two new versions came out within a few days. Lightcast says job postings requiring AI skills jumped sharply again in 2025, and Coursera reports that among learners focused on generative AI, enrollment in critical thinking courses rose 85 percent year over year. Which makes me wonder: why am I taking so long making these courses? I need to go a little bit faster. Nevertheless, at the same time, the World Economic Forum says analytical thinking remains the top core skill employers want. In other words, as AI spreads, people are not needing judgment less. They're needing it more.

But here is the uncomfortable question: are we outsourcing our thinking at the exact moment we are being told we need it most? Because AI can do something incredibly seductive. It can give you the feeling of intelligence without forcing you to go through the pain of reasoning. It can produce language without understanding. It can simulate judgment without actually having wisdom. And if you're not careful, it can make you feel informed when you really were just assisted.

Jamie Dimon, the CEO of JPMorgan Chase, stated back in January 2026 that, quote, AI will eliminate jobs. That doesn't mean that people won't have other jobs.
My advice to people would be: learn critical thinking, learn EQ, that is, emotional intelligence. Learn how to be good in a meeting, how to communicate, how to write. You'll have plenty of jobs. This is not an anti-AI episode. I use AI. I believe AI is going to be woven into work, education, business, media, and daily life, whether people like it or not. That Pandora's box is wide open. But intelligence without judgment is dangerous, and in a society full of people using advanced tools without disciplined thinking, it is even more dangerous. So today I'm going to apply the CLEAR (capital C-L-E-A-R) protocol to AI adoption. If you don't know what I'm talking about, I have an entire episode breaking down the CLEAR protocol, and it's also on my Substack. I highly recommend you subscribe to that. In any case, without me going into it, let's roll the intro and get right into it.

00:03:59 M Aponte: This is M. A. Aponte, also known as Mike Aponte. Don't forget to like, share, and subscribe. So, the CLEAR protocol applied to AI adoption. Let's break it down. C: clarify what problem AI actually solves. L: locate the hidden costs. E: explore where human judgment remains irreplaceable. A: apply boundaries so AI serves thinking instead of replacing it. R: review how to stay sharp in an age of effortless answers.

So let's get even deeper into this. The first mistake people make with AI is treating it like magic. It's not magic. It's not wisdom. It's not a substitute for discernment. At its best, AI is an amplifier. It amplifies speed. It amplifies access. It amplifies drafting, summarizing, organizing, brainstorming, pattern recognition. That matters a lot, especially if you're, I don't know, running a school like I am, or running a business, or trying to start a business, or trying to get into an industry that is very difficult to break into, or learning a new skill. All those things are wrapped into one, so it matters a lot. If you have a blank page, AI can help get you moving. If you have a pile of notes, AI can help structure them. If you need ten headline variations, three email drafts, a lesson outline, a study guide, a meeting summary, or a first-pass analysis, AI can reduce friction dramatically, and that is real value.

But the question is not whether AI is useful. The question is: useful for what? Because when people say AI is changing everything, they often skip the most important step: defining the actual job to be done. AI is excellent for accelerating routine cognitive labor, generating options, translating rough ideas into usable drafts, compressing time on low-stakes tasks, and helping people move from zero to version one.
AI is much weaker when the task requires value judgments, moral responsibility, deep context, consequences under uncertainty, emotional nuance, or the willingness to say, I don't know yet. And this matters because when you fail to define the problem correctly, you don't just misuse the tool; you slowly reshape yourself around the tool's strengths. You stop asking, what is the best way to solve this, and start asking, what can AI do for me here? That sounds subtle, but it's a major shift. The tool stops being your assistant. It becomes the hidden architect of how you think.

So here is the first CLEAR question: what specific burden am I trying to remove? Am I removing repetitive labor? Good. Am I removing the need to organize information? Fine. Am I removing the need to think? That is where danger begins. Because once AI stops helping you process reality and starts replacing your encounter with reality, you are no longer using a tool. You are surrendering agency. And that is the part people rarely say out loud.

Now let's talk about what AI adoption really costs. Not the subscription cost. We're not talking about API bills. We're not even talking about the software budget. I mean the real hidden costs. And these are harder to see precisely because the tool feels so helpful, and it is, in many ways. One major hidden cost is cognitive offloading without cognitive recovery. So let me break this down. Humans have always offloaded thinking. We write notes, we use calculators, rely on maps, search the web. None of that is new. The danger comes when offloading becomes so frictionless and so constant that the brain stops rehearsing the mental actions that build judgment in the first place. Microsoft Research published a study on the impact of generative AI on critical thinking, surveying 319 knowledge workers across 936 real-world examples of AI use.
One of the key findings was that when users had greater confidence in gen AI, they reported engaging in less critical thinking, while higher self-confidence was associated with more critical thinking. That is a huge warning sign. The issue is not just whether AI helps. The issue is whether it changes the amount of mental effort you choose to invest, and that is the first hidden cost: the quiet reduction of effort. And effort matters, because judgment is not just a trait, it's a practice. You do not become discerning by watching discernment happen. You become discerning by comparing claims, checking assumptions, noticing contradictions, handling ambiguity, and resisting easy answers.

Another hidden cost is automation bias. Automation bias is the tendency to over-rely on automated systems even when they are wrong, or when contradictory evidence is present. This is not a new idea invented for the AI era. It is a well-established phenomenon in the research literature, and policy groups focused on AI safety still warn about it, because automated systems can create a false sense of authority. And what makes this... let me take this back and ask the audience, and you can please put it in the comments if you're listening on a platform that has that access. I want to ask you: does that make sense? If a machine gives you an answer in a polished tone, with confidence, structure, and speed, it feels authoritative even when it's wrong, even when it's shallow, even when it's missing one key piece of context that changes the whole decision. And that is where AI becomes dangerous. Not because it thinks for itself, but because it can cause you to stop interrogating what is in front of you.

Then there is the third hidden cost: false mastery. A person can now produce work that looks competent without having built the underlying capability. An essay can sound polished without deep understanding. A strategy memo can look professional without real analysis.
A student can turn in good writing without having learned to write. A manager can sound informed without having wrestled with the trade-offs. The OECD 2026 Digital Education Outlook warns that generative AI can create this kind of shortcut learning, where outputs mask weak underlying understanding and produce a deceptive sense of mastery, unless systems are designed to support real learning rather than replace it. Harvard experts have raised similar concerns publicly, arguing that AI shortcuts can impair social, emotional, and cognitive development when they displace the effort that learning requires.

And I'll give you a real-world example of this. Those of you who have been long-time listeners know that I run a charter school. I am an executive director at a Title I public charter school. And I'm a huge advocate, obviously, of critical thinking, because this whole podcast is about critical thinking. But here's the thing I've noticed lately. My students have been using a program called i-Ready. We were doing diagnostics, and we found some students had what I would call unique growth, where they jumped approximately four grade levels in one school year. For additional context, if you're not in education: if you go up one grade level mid school year, that is a huge win for the student, a huge win for the teacher, a huge win for the school. Four grade levels? That raises some suspicion. So after looking through security, we came to realize that they were using an AI platform, and all they were doing was screenshotting the screen and uploading it.

00:14:13 M Aponte: Now, kudos to them for their creativity in figuring that part out. However, here's where all these studies explain the students I'm referring to. Although the AI was able to not only give the answer but elaborate on why it's the correct answer, they didn't look at that. They thought it was a win. Some of them did skim through it, and they still didn't get it, because it didn't go into their long-term memory. When you give them a piece of paper and a pencil and tell them to do the same question, they fail it. And that's a problem. And that's a problem we're all facing throughout academia, not just in K through 12 when I speak to other administrators, but also at the university level.

And that may be the most dangerous cost of all. Because if you know you are ignorant, you can learn. If you know you are weak, you can train. But if AI gives you the illusion of competence, you may never realize what skills are disappearing underneath you. And that is how decline hides: not through obvious collapse, but through smooth assistance.

One more cost: cognitive fatigue through tool management. There is this fantasy that more AI always means less strain. But recent workplace research featured by Harvard Business Review found that certain patterns of AI use can create what the authors call brain fry: constant oversight, switching, monitoring, or tweaking of AI outputs creates mental fatigue instead of reducing it. So the hidden costs are not just "AI makes people lazy." That's too simplistic. The real costs are reduced effort, overtrust, false mastery, weakened ownership, and sometimes even a new kind of cognitive overload. And that is a much more serious problem.

So now we're going to segue into E. We talked about C and L; E is explore where human judgment is irreplaceable. Now we can get into the heart of this entire episode. Where does human judgment remain non-negotiable?
Because if we are honest, there are plenty of tasks where the best use of AI is obvious. No one needs to manually draft every standard meeting recap from scratch if a good tool can help. But there are certain zones where judgment is not a luxury. It is the work.

Here is the first zone: high-stakes ambiguity. When the facts are incomplete, the values are contested, and the consequences are serious, judgment matters more than output quality. Hiring is like this. Leadership is like this. Parenting is like this. Discipline is like this. Medicine is like this. Law is like this. Education is like this. War is like this. Finance is like this. In these zones, the issue is not merely what answer sounds plausible. The issue is: what trade-off am I willing to own? And AI cannot own anything. It cannot bear moral weight. It cannot absorb consequences. It cannot carry responsibility after the decision lands on real people.

And here's the second zone: value conflicts. AI can optimize for efficiency. It can optimize for engagement. It can optimize for speed, reach, and profitability. But when two values collide (fairness versus mercy, truth versus loyalty, transparency versus discretion, speed versus care), somebody has to decide what matters more. That somebody is still human. And the more advanced the tool becomes, the more this matters, because advanced tools can increase the temptation to mistake optimization for wisdom. But wisdom is not getting better outputs. Wisdom is knowing what should and should not be maximized.

The third zone: context that is lived, not merely stated. AI works from patterns. Human beings live inside realities. A parent's tone in a meeting, a student's shame behind defiance, a leader's fear behind certainty, a business partner's hesitation that does not show up in the transcript, a community's emotional memory after a public failure, a staff member who is technically correct but relationally toxic.
These things matter, and they are often decisive. You can describe them to AI. Sometimes AI can even help you interpret them, but it does not actually inhabit them.

Then there's the fourth zone: sense-making under uncertainty. The World Economic Forum's 2025 report says analytical thinking remains the most sought-after core skill among employers. I mentioned this earlier. And that makes sense, because in an AI-rich environment, the scarce value shifts away from mere information access and toward interpretation, prioritization, and judgment under uncertainty. Anyone can get an answer now. The advantage belongs to the person who can ask: What is missing? What assumptions are hidden here? What would make this wrong? What is the second-order effect? What are we not seeing because this answer is so clean? And that is human territory.

And I would add even one more: meaning. AI can imitate style. It can summarize beliefs, it can assemble arguments, it can remix wisdom, literature, and motivational language all day long. But it does not care. And care changes judgment. A teacher who cares notices different things. A parent who cares weighs different things. A founder who cares sacrifices differently. A leader who cares does not merely choose the most efficient path. He chooses the path he can live with. That is not a bug in human judgment. That is part of what makes it human.

Now to A: apply boundaries so AI serves thinking. So how do we use AI without becoming intellectually weaker? This is where most conversations become useless. They stay philosophical and never become practical. So let me give you some real boundaries.

Boundary one: use AI after first contact with the problem. Before you ask AI for the answer, spend a few minutes wrestling with the issue yourself. Write the problem in your own words. State your first instinct. List what you think matters most. The first contact is where ownership begins.
If AI becomes your first move on every hard task, you risk training yourself out of original thought. A preliminary MIT Media Lab preprint on AI-assisted essay writing found that participants using LLM assistants showed weaker measures of neural engagement and memory ownership than brain-only writers. But the work is still early and has drawn methodological criticism. The cautious takeaway is not "ban AI"; it is that how people use AI appears to matter, and overreliance may carry cognitive trade-offs worth taking seriously. So be mindful of why you're using AI. Ask those questions.

Boundary two: ask AI for options, not authority. Instead of saying "tell me the answer," ask: Give me three competing interpretations. What assumptions am I making? What would a critic say? What am I overlooking? What are the strongest objections to this plan? That keeps you in the role of judge rather than passive receiver. And I have to tell you, my personal AI and I go through this all the time. Anytime I need to look at something analytically, I ask these questions. I personally ask these questions to make sure I'm making the right judgment, and the AI is really good at picking up any blind spots I might have, but I have to ask. Otherwise it's going to feed me what I want to hear, because it has gotten to know me. So let me say that again: the AI is going to give me what I want to hear. You have to prompt your AI, when you are working with it, to not just give you fluff, entertain you, or keep your pride intact. You have to prep it so you can get some honest information. And always remember to ask the questions, and you'll be surprised at what you get out of it.

Boundary three: keep humans in the loop for high-stakes decisions. The NIST guidance on AI risk management says generative AI may require additional human review, tracking, documentation, and management oversight.
That is because of the kinds of risk these systems can introduce. It is not bureaucracy for its own sake. It is an acknowledgment that powerful systems need accountable human supervision. If a decision affects someone's job, someone's reputation, someone's education, someone's health, someone's freedom, or someone's future, then AI should assist analysis, not replace accountable judgment. They made a movie about this, I think with Chris Pratt, where an AI was a judge, and the whole movie is about his trial to determine whether he's guilty of a crime he allegedly was about to commit; he didn't commit it, but he was about to, based off of the AI's algorithm. I haven't seen it. I heard good things. You'll have to look up Chris Pratt's IMDb and you'll probably find the movie; the name escapes me, so I apologize to all you listeners. But use AI to assist, not replace, accountable judgment.

Boundary four: never outsource values. You can use AI to draft a letter. You cannot use AI to decide what kind of person you want to be in that letter. You can use AI to model scenarios. You cannot use AI to decide what responsibility you owe another human being. You can use AI to organize evidence. You cannot let AI become your conscience.

Boundary five: make verification part of the workflow. Because AI can sound right while being wrong. It can confidently compress complexity into nonsense. It can hallucinate details, misread context, and present weak reasoning in clean packaging. So if the stakes are low, fine, use it, use it fast. If the stakes are high, slow down and verify. And that is not paranoia. That is intellectual hygiene. Kind of like brushing your teeth: you protect your teeth, and you protect not only your mouth but, believe it or not, your heart. Same thing with intellectual hygiene. Protect your brain.
And finally, R: review how to stay sharp in an AI era. So let's finish with a deeper question. How do you remain a thinking person in an age of easy answers? And here's my answer: you must deliberately practice the parts of thought that AI makes optional.

That means wrestling with ambiguity. What I mean by that is, if you don't know certain ideas, contemplate them. Think about it: pros, cons, all sides of different stories, all of it. Wrestle with it.

Next: writing some things without assistance. Get a notebook, get a piece of paper and a pen, and write things down. In my book, The Logical Mind, I deliberately included at the end of every chapter a series of questions for you to fill out, with space to actually write in the book. I'm creating a second edition, but I do not know when it's going to be out, to be quite honest with you. In the meantime, you can purchase it at any bookstore you shop at, including and especially Amazon. It's called The Logical Mind: Learn Critical Thinking to Make Better Decisions. That book has questions at the end of every single chapter, and I did that deliberately. So write things down without assistance.

The third is checking claims manually. Don't just accept the first answer, the first headline, or the first piece of information you receive; check it yourself.

The fourth is reading beyond summaries. I kind of said it already, but don't just look at the headline, the back of the book, or the CliffsNotes. Actually dive deeper. Read into it.

The next one is holding two competing interpretations in your head. That's honestly pretty easy to do, as long as you're not ideologically possessed by the interpretation.
What that means, and I know that might be big words for some of my student listeners, is this: don't be afraid to entertain ideas that you are not aligned with, and you do not have to agree with them. Just hold that information and interpret it. You don't have to judge it. Simply say, based on this information, this is that, and look at all sides.

And the last thing: sometimes refusing convenience in order to preserve capability. Sometimes you have to do the hard work. I know that's a hard sell for many people today, but sometimes you just need to put in the work. Don't be afraid to read a book on a subject you are interested in, rather than just looking it up through AI, or actually listen to a podcast or an audiobook. I am constantly listening to books because I don't have time to sit down and crack open a book like I used to. I have kids, and my oldest is eleven, so it's not easy for me. And I run a school, a middle school, so I am constantly bombarded. So audiobooks are my go-to. Find out what yours is, and learn, learn, learn. It is hard to do, but I promise you the reward is worth it, because capability decays when it's not demanded.

If a calculator is always in your hand, mental math weakens. I can tell you right now, you do not want me to teach you math. I tell this to my students all the time when they ask me a math question: when it comes to advanced algebra, I am a social science guy. You give me advanced algebra, and I am going to be really, really sad. So that's my fault. And smartphones all have calculators, so there you go. If GPS is always on, spatial memory weakens. So I try my best not to use GPS in areas I know I'm going to go to more than twice. I try to look at my surroundings and learn them, so that if the phone ever fails on me for whatever reason, I know how to get where I need to go.
And you should do the same. If AI is always framing, drafting, interpreting, and concluding for you, then some portion of your judgment can weaken too. So whenever you use AI, double-check the information, question it, and understand why it decided to go that route. What was the evidence? Ask those questions of the AI. Don't just accept it. Otherwise, one day you may discover that you still sound intelligent, but you are less able to think independently under pressure.

And that is the real paradox. If you're not challenging yourself, you're only going to sound intelligent. I mean, half these podcasters I listen to that are extremely popular, some of the famous ones, have no idea what they are talking about, and they are praised in mass media. That is, in my opinion, and this is strictly my opinion, the highest level of conspiracy I've ever witnessed. And I have a lot of weird conspiracies in my mind that I'll never share, but one of the things that blows my mind is the amount of stupidity that sounds intelligent. I did an entire podcast episode about it.

But that's the real paradox. AI can absolutely make you more effective. It can make you faster. It can make you more organized. It can make one person capable of output that previously required a team. I use my AI for all these things. I am extremely efficient at my job. That's how I'm able to write books, create my consulting company, create multiple applications, both in finance and critical thinking, and work on some side projects that I will not be sharing until they are one hundred percent public. Because I know how to work with this information. I know how to work with AI effectively. But I also know that none of that guarantees wisdom. And in some cases, if used lazily, it can do the opposite.
It can widen the gap between output and understanding, between fluency and insight, between assistance and ownership. The goal is not rejection. The goal is disciplined partnership. Use AI to extend your reach; do not let it replace your reasoning. Use AI to sharpen your work; do not let it soften your mind. Use AI to accelerate execution; do not let it erode judgment. Because in the end, the people who win in this era will not be the people who merely have access to intelligent tools. It will be the people who remain intellectually alive while using them. People who can still question, still compare, still notice what does not fit, still resist false certainty, still make principled decisions when the answer is not obvious. That is the edge. Not artificial intelligence alone, but artificial intelligence guided by human judgment.

So here's the question I want to leave with you, and please post your answer in the comments; I am very, very curious what you have to say. Where in your life is AI saving you time but quietly costing you thought? And where do you need to draw a line before convenience turns into dependency? If this episode gave you something to think about, please share it with someone who uses AI every day, at work, in school, in leadership, or in business. Because this conversation is not really about technology. It's about whether we are going to remain the kind of people who can think for ourselves when thinking becomes optional. This is Thinking 2 Think. I am Mike Aponte, also known as M. A. Aponte, and I ask all of you to please stay sharp. I appreciate you, and thank you.