On Leadership: From broken gates to AI learning

Building independent thinkers and adaptive leaders

When I was a kid, much of my learning happened at school, but many of the lessons that shaped my ability to think for myself took place on our farm. One moment, especially, has stayed with me. It was a bitterly cold afternoon in junior high when I went out to feed my horse and found the gate lying in the dirt, its hinges torn loose. Copper was gone.

My parents were not home, but on the farm chores never paused just because something went wrong. Figuring things out alone was part of growing up. I pushed aside the surge of panic, followed Copper’s tracks to the front pasture, coaxed him back and fed him. Then, with my hands stinging in the November wind, I worked out how to brace the damaged gate well enough to hold until an adult could fix it properly.

There was no parent to consult, no Google to search, no AI to ask for help. It was just me, a broken gate and a living, expensive animal depending on my ability to think clearly and find a way forward.

Today, by contrast, phones and search engines are ever-present, and children rarely sit alone with a difficult problem for long. With AI available at every moment, many fear kids may be losing the ability to think for themselves, not to mention their resilience in the face of challenges. A recent Harvard Gazette article, “Is AI dulling our minds?” points to an MIT Media Lab study that warns “excessive reliance on AI-driven solutions” could lead to “cognitive atrophy” and weakened critical thinking skills.

The Harvard Gazette article taps several faculty across different disciplines for their insights on critical thinking in the age of AI. Tina Grotzer, a principal research scientist at the Harvard Graduate School of Education, argues that students often trust AI too much because they do not understand “how it works in a computational/Bayesian sense.” More importantly, she reminds us that human minds are “better than Bayesian.” Our somatic markers, intuitive leaps and ability to detect exceptions surpass what algorithms can currently offer. She notes that AI cannot reason analogically, even though analogical reasoning is one of our most powerful cognitive tools. Her goal is not to remove AI from learning but to help students understand when to rely on their own minds and when to let technology assist. The question is no longer whether AI shapes thinking, but how it does so.

Harvard Kennedy School faculty member Dan Levy, co-author of “Teaching Effectively with ChatGPT,” reinforces this idea. AI is neither inherently helpful nor harmful for learning; the outcome depends entirely on the learner’s engagement. “No learning occurs unless the brain is actively engaged,” he writes. If students use AI to do the work for them, learning collapses. But if AI takes care of routine tasks so they can focus on deeper thinking, it can accelerate understanding. The danger is confusing a finished product with meaningful insight.

Christopher Dede of Harvard’s Graduate School of Education offers a memorable metaphor. He urges learners to treat AI as “the owl on your shoulder,” an adviser rather than a replacement thinker. Generative AI can process huge amounts of data, but it lacks the contextual wisdom that comes from human experience. If students use AI only to accomplish familiar tasks more quickly, they risk achieving faster mediocrity rather than genuine improvement.

Senior lecturer Fawwaz Habbal of the Harvard John A. Paulson School of Engineering and Applied Sciences pushes this distinction even further. Machines “calculate,” he explains, but “only humans can solve human problems.” AI operates through statistical adjustments and relies entirely on human-created data. Critical thinking, ethics and moral judgment remain uniquely human capacities.

AI is also reshaping workplaces. A Newsweek article, “Most Workers Say AI Managers Would Be Better in Key Ways,” cites a Resume Now survey showing that 66% of workers believe AI-led management could make workplaces fairer. Yet experts caution that “bias can be baked into AI systems,” meaning algorithmic leadership can easily reproduce the same injustices it aims to eliminate. Fairness requires transparency, oversight and human accountability.

McKinsey research adds a final insight. Their Leading Off newsletter explains in a piece titled “Reorient your thinking to create an AI-enabled mindset” that as AI takes on more technical work, human qualities such as empathy, creativity and humility will become increasingly essential, and leaders will need to develop an AI-enabled mindset that combines technological strength with human adaptability and long-term vision.

Learning to think for oneself has always involved moments of discomfort. Cold afternoons, broken gates and problems no one else can fix are the experiences that shape independent thinkers. I learned a great deal, and still do, by having to figure things out on my own and by moving through the process rather than around it. That includes learning how to use AI. What I have come to understand is that AI can either weaken or strengthen our ability to learn and solve problems. Whether it becomes a crutch or a source of wisdom (“the owl”) depends entirely on how we choose to use it.

Suzanna de Baca

Suzanna de Baca is a columnist for Business Record, CEO of Story Board Advisors and former CEO of BPC. Story Board Advisors provides strategic guidance and coaching for CEOs, boards of directors and family businesses. You can reach Suzanna at sdebaca@storyboardadvisors.com and follow her writing on leadership at: https://suzannadebacacoach.substack.com.