Thinking 2 Think

Your Brain is Being Rewired While You Scroll

Michael Antonio Aponte Episode 53


We dive deep into how digital algorithms shape our thinking and behavior through subtle reward systems rather than direct commands, exploring Michael Aponte's concept of "digitally optimized obedience" and its far-reaching implications for individual autonomy and society. Drawing on Aponte's research and on findings from the Meadows Mental Health Policy Institute and Harvard Medical School, we examine how technology is fundamentally reshaping our sense of morality and acceptable speech through invisible algorithmic nudges.

• Digitally optimized obedience works through rewards and incentives, not direct commands or fear
• Algorithms create feedback loops that train users to behave in ways that generate engagement 
• Content amplification functions as implicit moral approval while shadow-banning marks ideas as unacceptable
• Echo chambers and filter bubbles create the illusion of information while narrowing our perspectives
• Algorithms deliberately escalate content toward more extreme versions to maintain engagement
• Digital platforms are known to target children's developing brains despite awareness of the potential harm
• Self-censorship emerges as users internalize algorithmic preferences to gain social rewards
• Reclaiming autonomy requires conscious awareness of how algorithms shape our choices

Take a moment to consider how deeply algorithms are influencing your thoughts and behaviors. What does genuine freedom of choice look like in our digitally optimized world? Please like, comment, share, and subscribe to Thinking2Think for more explorations into the forces shaping our minds.


Support the show

🎧 Don't forget to like, share, and subscribe to join our growing community of thoughtful individuals!

🔗 Follow us:
📖 Check out my book: The Logical Mind: Learn Critical Thinking to Make Better Decisions:


📲 Let’s connect on social media!

  • https://x.com/Thinking_2Think

Lyra Morgan:

Welcome to the Thinking2Think podcast. Today we're diving deep into something really timely: how our digital lives are, well, subtly shaping how we think and act. And a huge thank you to Michael Aponte for letting us explore his groundbreaking work.

Dr. Elias Quinn:

Absolutely. "The Algorithm Made Me Do It." It's fascinating stuff.

Lyra Morgan:

Yeah, we're going to explore how technology, especially social media algorithms, might be fundamentally rewiring our understanding of obedience, even our sense of morality.

Dr. Elias Quinn:

That's the plan. Our mission today is really to unpack Michael Aponte's theory, which he calls digitally optimized obedience, and then see how it connects with, well, other research on digital influence that's out there. We've got insights from Aponte, obviously, some of his notes from July 16, 2025, plus research from the Meadows Mental Health Policy Institute, which did a piece on how social media changes our brains, and some insights from Harvard Medical School on screen time and the brain. Lots to dig into.

Lyra Morgan:

Okay, let's unpack this. So, obedience. Most people probably think of, like, the Milgram experiment.

Dr. Elias Quinn:

Yeah, the classic ones.

Lyra Morgan:

Direct commands, someone in charge telling you what to do, maybe a threat involved. Very clear-cut. But Michael Aponte, he's talking about something different, this digitally optimized obedience. How does that work? How is it different?

Dr. Elias Quinn:

Well, the key difference that Aponte points out is that it's a new kind of compliance. It's not driven by fear, not like Milgram, where people feared punishment. Instead, it's actually incentivized. The algorithms themselves, their very design, encourage obedience. You're not being forced, exactly; you're being trained. Trained, not forced. So it's about rewards. How does that reward system actually, like, show up day to day? Can you give an example of how we might be getting rewarded into complying without realizing it?

Lyra Morgan:

Sure. Think about likes, shares, your post starting to trend. Each one is like a little digital pat on the back. Your brain learns: okay, this kind of post, this way of saying things, it gets me that reward.

Dr. Elias Quinn:

Ah, the feedback loop. Exactly, it's this continuous cycle. The algorithm rewards certain behaviors, and that subtly shapes what you post, maybe even what you think is okay to post. It makes it, well, profitable in a social sense to obey and kind of uncomfortable not to. You're not getting a direct order; you're being nudged, guided, and over time you internalize what the algorithm wants. It's obedience by design, not by force.

Lyra Morgan:

I see the difference. It's much smoother, almost invisible, compared to Milgram. But calling it obedience, that's a strong word. It implies a level of control. Is there a risk we're overstating how much control these algorithms really have? I mean, people still have free will, right? They can choose.

Dr. Elias Quinn:

That's a really important question, and Aponte does stress that, yes, we do have agency, but the systems are so subtle, exercising that agency becomes really, really difficult. It's not like a direct command you can just say no to. It's more like a constant stream of tiny nudges, often happening below our conscious radar, rewarding us with visibility, connection, approval. It feels good, so it's seamless. The influence isn't one big decision point, but this ongoing flow of incentives that just kind of reshape our default behaviors.

Lyra Morgan:

You mentioned this goes beyond just what content we consume. How do these algorithms actually start to influence our deeper beliefs, or even what we feel we can say online?

Dr. Elias Quinn:

Oh, it goes much deeper than just your feed. Algorithms don't just show you stuff. They actively shape what you come to believe and, critically, what you feel safe saying out loud or online. How so? Well, Aponte argues, and research backs this up, that algorithms optimize for engagement, not truth. Not truth, not nuance: engagement, meaning strong emotions, outrage, strong agreement, things that get a quick reaction. That content gets amplified, often at the expense of, you know, complex or balanced views, and this creates what Aponte calls algorithmic morality.

Dr. Elias Quinn:

Yeah, that's Aponte's term. Basically, whatever trends, whatever the algorithm boosts, implicitly gets tagged as true or valuable or acceptable. And the flip side: content that gets shadow-banned, you know, quietly hidden or demoted so nobody sees it, that's implicitly marked as shameful, unacceptable. The Meadows Mental Health Policy Institute talks about this too, how platforms tailor everything to your interests and behavior.

Lyra Morgan:

They curate a reality the algorithm thinks you want. So the algorithm is kind of setting the terms of what's good or bad online, and that naturally leads us into, well, filter bubbles and echo chambers. We hear those terms a lot.

Dr. Elias Quinn:

Precisely. You think you're getting infinite information, right? That's the problem.

Lyra Morgan:

Yeah, the whole internet at your fingertips.

Dr. Elias Quinn:

But what you actually get is a highly personalized slice. Often it's ideologically very narrow. Your feed becomes, as Aponte says, a mirror reflecting your own views back at you.

Lyra Morgan:

Instead of a window into different perspectives.

Dr. Elias Quinn:

Exactly. The more you click on stuff you agree with, the more of that stuff you see. It makes you feel super informed, like you know what's going on, but you're actually becoming more isolated from different viewpoints, and that just digs the algorithmic influence in deeper.

Lyra Morgan:

Okay, so we internalize these rules. Our feed becomes this mirror. What's the cost? Aponte talks about this leading to digital self-censorship. We start policing ourselves.

Dr. Elias Quinn:

That's right. You learn the unspoken rules pretty quickly. What kind of posts get rewards? How should you phrase things? Which topics are safe? Which ones might get you pushback or, worse, get you hidden by the algorithm?

Lyra Morgan:

So you adjust your behavior.

Dr. Elias Quinn:

You do, and it's not even in response to a person disagreeing with you. Sometimes it's in response to this invisible system. Over time, that genuinely reshapes what you think is worth saying. You adjust your speech, maybe even your thoughts, to fit in. You internalize the algorithm's logic. Wow, it makes you think about, you know, when you search for something random online, like shoes.

Lyra Morgan:

Oh yeah. And then you see ads for those shoes everywhere for weeks.

Dr. Elias Quinn:

Right, it's kind of annoying for shoes, but imagine that same relentless push applied to your political views or your social beliefs or your self-image. That's the mechanism. That's what we're talking about.

Lyra Morgan:

Okay. When you put it like that, the consequences seem huge. What are the real costs here for us as individuals and maybe for society overall?

Dr. Elias Quinn:

The costs are, yeah, profound. For individuals, think about it. You end up performing your identity for this invisible algorithmic audience. Your beliefs might get shaped not by deep conviction but by what gets likes. So it stifles your real self, suppresses honest doubt, because doubt doesn't trend well. You post what aligns, what gets traction. Maybe you even delete things later if they don't perform well or attract the wrong kind of attention. It's exhausting, and it's not authentic.

Lyra Morgan:

And for society if we're all doing this.

Dr. Elias Quinn:

Well, think about it. If our collective beliefs, what we talk about as a society, are guided by algorithms optimized just for engagement, what happens?

Lyra Morgan:

Fragmented realities, everyone in their own bubble.

Dr. Elias Quinn:

Exactly. Fragmented realities, less critical thinking overall, and it really damages our ability to have nuanced public conversations, because dissent, different views, they just get deranked. Conformity gets rewarded. And connecting this to the bigger picture, the Meadows Institute research points out something really stark: algorithms don't just connect hobbyists. They can identify extreme interests and connect people with shared radical views.

Lyra Morgan:

Like terrorist networks.

Dr. Elias Quinn:

Their research found examples, yes. Facebook's own Suggested Friends feature was apparently used for recruitment by extremist groups in some cases. It learns what you're into and finds others like you, for better or worse.

Lyra Morgan:

That's incredibly disturbing, and it's not just connecting people who are already extreme, is it? You mentioned the algorithms might actually push people towards extremism?

Dr. Elias Quinn:

That's a crucial part of it. The Meadows Institute highlights this escalation effect. It's about intensity. You start looking for, say, jogging tips. The algorithm notices you're interested in fitness, so it pushes you towards maybe marathon training, then maybe Ironman competitions. It keeps escalating to hold your attention.

Lyra Morgan:

Or healthy recipes leading to.

Dr. Elias Quinn:

To potentially pro-anorexia content, or someone looking for dating advice getting funneled into misogynistic pickup-artist stuff. Wow. This escalation, Aponte and others argue, isn't a glitch, it's a feature. It's how the algorithms keep you hooked: find what stimulates your brain and then just keep pushing you down that path, often towards more extreme versions.

Lyra Morgan:

And this must be even worse during times of uncertainty, right? Like a pandemic. Our brains are trying to find patterns, make sense of chaos.

Dr. Elias Quinn:

Absolutely. When people are stressed, looking for answers, algorithms can really exploit that. They can lead you down these rabbit holes of disinformation, conspiracy theories.

Lyra Morgan:

Because it feels like you're finding answers.

Dr. Elias Quinn:

Yes, and because often belonging to the group sharing those theories feels safer than being kicked out for disagreeing. People might accept a conspiracy rather than risk social isolation, especially online. Your brain wants answers, and it wants to belong. Algorithms can hijack both needs.

Lyra Morgan:

Okay, this is heavy stuff for adults, but what about kids? If grown-ups with fully formed brains struggle? What are the unique dangers for children and teenagers whose brains are still developing?

Dr. Elias Quinn:

This is where it gets particularly alarming. Leaked documents, which both the Meadows Institute and Harvard Medical School research touch upon, show platforms like Instagram knew they were targeting kids.

Lyra Morgan:

Deliberately.

Dr. Elias Quinn:

Yes, spending huge advertising budgets to reach teens, even while internally acknowledging the psychological harm their algorithms were causing: things like increased anxiety, depression, body image issues, especially for teenage girls. It wasn't an accident. It seems to have been a known consequence of their design choices.

Lyra Morgan:

So a kid's natural curiosity online that can lead them down dangerous paths really quickly.

Dr. Elias Quinn:

Incredibly quickly. A few clicks, a few hours watching videos, and a child looking for healthy eating tips could end up seeing content promoting dangerously low calorie counts or extreme exercise. Teenage boys looking for dating advice might stumble into that misogynistic content we mentioned. And the key difference from, say, video games?

Lyra Morgan:

Which are mostly fantasy.

Dr. Elias Quinn:

Right. Kids generally know games aren't real life, but social media actively tries to modify real-world behavior: how long you spend on the app, what you buy, how you interact with people both online and off. Harvard Medical School research points out that the developing brain is constantly building connections. Digital media use actively shapes that process, and often it's what they call impoverished stimulation compared to real-world interactions and experiences.

Lyra Morgan:

So, faced with all this, the subtle obedience, the echo chambers, the risks, especially for kids, what's the way forward? What does Michael Aponte suggest we actually do? We can't just unplug entirely right?

Dr. Elias Quinn:

No, and he doesn't suggest that. His core solution is deceptively simple: cultivate awareness.

Lyra Morgan:

Awareness, meaning what exactly?

Dr. Elias Quinn:

Meaning consciously noticing those subtle nudges from the algorithms, actively questioning why something is trending, pausing before you hit post or share and asking yourself: why am I doing this? What's my real intention here? It's about bringing mindfulness to your digital life.

Lyra Morgan:

So it's about taking back control, reclaiming our autonomy in this environment that's constantly trying to guide us.

Dr. Elias Quinn:

Exactly that. Aponte really emphasizes that algorithms aren't magic. They're tools, tools designed by people with specific goals, usually engagement and profit. By understanding that, by being aware, you can start to reclaim your power: to choose, to speak your own mind authentically, to question the narratives you're being fed, to think independently, instead of just letting the algorithm guide you passively. So the central message, really, from "The Algorithm Made Me Do It" and the related research is that this technology is subtly rewiring us. It's incentivizing conformity, suppressing dissent.

Lyra Morgan:

Wow, that's a really powerful thought to end on. A lot to process there. As you go about your day, maybe take a moment to think about that: how deeply are the systems you use every day shaping your thoughts? What does genuine freedom of choice even look like in a world so optimized by algorithms? What does it mean for you to truly choose? If this deep dive got you thinking, sparked some new insights, please do like, comment, share this with someone who might find it interesting, and, of course, subscribe to the Thinking2Think podcast for more explorations into the forces shaping our world.
