
What AI Is Really Asking of Business Leaders: From Talent to Trust to Transformation

  • Writer: Lynda Koster
  • Apr 19
  • 8 min read

Updated: Apr 21


[Image: Exterior view of the MIT Media Lab at dusk, overlaid with the event title “Imagination in Action: Pioneering the New Era with AI” and the logos of MIT CSAIL, MIT Connection Science, and Forbes.]

A Summit That Sparked More Than Ideas

Last week, I had the privilege of attending the Imagination in Action AI Summit—an immersive experience that brought together AI visionaries, business leaders, technologists, entrepreneurs, global policy makers, academics, and researchers, all of whom fueled a powerful exchange of ideas and possibilities.


Led by John Werner and hosted at the MIT Media Lab, the event had an energy that was electric—cross-disciplinary, multigenerational, and deeply human. Conversations with leaders like Johnny Ho, Cofounder and Chief Strategy Officer at Perplexity, added depth to that energy, bringing a sharp perspective on innovation and real-world application. What emerged was not merely an open discussion about AI’s trajectory, but a reframing of what it means to lead in this new era.


For those of us responsible for guiding our organizations through profound transformation—whether as CEOs, CMOs, CTOs, CHROs, compliance leaders, or those leading across other critical functions—the biggest takeaway was clear:


We're in it. This isn’t theoretical. The conversations, examples, and cross-functional insights shared at the Summit reflected a reality many of us are already navigating—AI is showing up in decision-making, workflows, strategies, and expectations. And while no one has the full roadmap, what’s emerging are real opportunities, real risks, and the need for real leadership.

And for those still early in the journey, that reality can feel overwhelming. The pace is fast, the landscape is noisy, and the stakes are high. But clarity doesn't come from having it all figured out—it comes from creating space to learn, asking the right questions, and moving forward with the right people around the table. Even early steps—like building a shared language, identifying a few organizing principles, or structuring time to experiment—can help make progress feel possible.


A heartfelt thank you to John Werner, the MIT Media Lab, and all the sponsors, hosts, and participants who created such a generous, energizing space for open exchange and forward-looking dialogue—and a special thank you to Bruce Hecht for the invitation, thoughtful conversations, and continued inspiration.


Coming out of the Summit, a few themes rose to the surface—both in the sessions and in the conversations that happened around them.



Navigating the AI Maturity Curve: A Call to Every Leader

AI maturity doesn’t follow a simple yes-or-no checklist—it depends on the organization, its goals, and where it’s starting from. Some organizations are just beginning to explore where AI might add value, while others are already integrating it into their products, workflows, and decision-making. Many are still figuring out where to begin—and what meaningful adoption really looks like. What emerged from the Summit, and aligns with what we believe at Growthential, is that maturity isn’t about how much AI you use—it’s about how intentionally and cohesively you align it to your organization’s unique strategy, talent, and operational DNA.




The Pace of AI Adoption: Strategic Considerations

I was genuinely excited to see Juan Enriquez on the list of speakers—his work has long influenced my thinking, and I’ve referenced him in previous posts. As expected, he delivered a thought-provoking talk that stayed with me.

 

In his session, “Where Are We on the AI Journey and the Anthropology of AI,” Enriquez highlighted not only how rapidly AI capabilities are advancing, but also how quickly these advancements are being adopted compared to past innovations. Unlike earlier technological shifts, AI’s progress and uptake are accelerating in parallel—reshaping industries, business models, and expectations at an unprecedented pace.


[Chart: “Language and image recognition capabilities of AI systems have improved rapidly”—AI performance in handwriting recognition, speech recognition, image recognition, reading comprehension, and language understanding, 2000–2020, with most capabilities surpassing human benchmarks by the late 2010s. Source: Kiela et al. (2021), Dynabench; data via OurWorldInData.org.]

He emphasized that what makes this moment different is the compounding nature of AI’s development. This isn’t a linear curve—it’s exponential. That message came through powerfully in a few session slides, including one that showed the performance of large language models on medical question-answering benchmarks. As highlighted in research published on arXiv (Singhal et al., “Large Language Models Encode Clinical Knowledge”), one model achieved a 17-percentage-point performance gain within a single month—December 2022 (see below). That kind of leap in such a short timeframe is staggering. It reinforces Enriquez’s point: this is not about steady evolution—it’s about compounding acceleration. For business leaders, the takeaway is clear—the pace of change demands proactive engagement, strategic clarity, and a readiness to adapt quickly.


[Chart: from “Large Language Models Encode Clinical Knowledge” (Singhal et al.)—medical question-answering accuracy (USMLE benchmark) of large language models, December 2020 to December 2022, rising from 33.3% with GPT-Neo to 67.6% with Flan-PaLM, including a 17-point gain in the single month between PubMedGPT and Flan-PaLM.]
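To make that pace concrete, here is a back-of-the-envelope sketch in Python built only from the data points visible in the chart above; the PubMedGPT date and score are approximate readings of the chart, and the rest is as labeled.

```python
# Back-of-the-envelope look at the MedQA (USMLE) accuracy jumps shown above.
# Values are read off the chart; the PubMedGPT point is an approximation.
results = [
    ("GPT-Neo",    0, 33.3),  # Dec 2020
    ("PubMedGPT", 23, 50.3),  # ~Nov 2022 (approximate reading of the chart)
    ("Flan-PaLM", 24, 67.6),  # Dec 2022
]

# Average pace implied by the endpoints: (67.6 - 33.3) / 24 months ~ 1.4 pts/month.
linear_rate = (results[-1][2] - results[0][2]) / (results[-1][1] - results[0][1])

for (prev, t0, a0), (curr, t1, a1) in zip(results, results[1:]):
    months, gain = t1 - t0, a1 - a0
    print(f"{prev} -> {curr}: +{gain:.1f} pts in {months} month(s), "
          f"{gain / months:.1f} pts/month (a steady pace would be {linear_rate:.1f})")
```

That final one-month step runs at roughly twelve times the two-year average pace, which is exactly the compounding, rather than steady, improvement Enriquez was describing.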

He also posed a set of deeper questions that still linger—ones that go beyond the quarterly roadmap: What can’t AI do? What happens when different countries pursue AGI? Can we control it? Should we?


These may not have immediate answers, but they do matter—because the choices leaders make today are shaping how AI shows up in our organizations and society tomorrow.

And perhaps the most lasting takeaway from his session was this:

“AI is a vocabulary we need to master.”

Not just technically—but strategically, ethically, and organizationally.



The Work of AI Leadership

AI is an Organizational Capability, Not a Department

A key takeaway from the Forbes panel, with Chief Content Officer Randall Lane, was the reminder that AI shouldn’t be isolated in its own department. As he pointed out, just like the internet couldn’t be managed by a standalone “internet team,” AI needs to be part of everything—from leadership conversations to product development to how teams work together every day.


That insight hit home for me. I saw firsthand how isolating internet strategy in a single team slowed progress, created bottlenecks, and delayed adoption. From my experience, it also created friction—especially among teams that felt sidelined or left out of the process.


When we first started building our approach with our team, we pulled from what we learned during the early waves of internet and martech adoption—how siloed innovation can hurt the organization, stall momentum, or even derail well-intentioned initiatives. That’s why we’ve been focused on creating cross-functional efforts that bring strategy, product, and operations together to test, refine, and scale what works.



Earning Trust at Scale

The New Frontier of Brand Governance

As marketing teams explore GenAI’s potential—from content generation to customer insights—one theme keeps surfacing: trust isn’t just an ethical imperative; it’s a brand imperative.

The challenge isn’t only what GenAI can produce—it’s whether what it produces consistently reflects your brand, your standards, and your values. That’s where the conversation around Human in the Loop (HITL) systems becomes especially relevant to marketers.

At the Summit, Angel An, VP of GenAI Strategy and Product Deployment at Morgan Stanley, emphasized the continued importance of HITL—particularly in managed services and highly regulated industries, where stakes are higher and oversight is non-negotiable. For marketing leaders, this signals an important consideration: how to design systems that not only generate brand-safe, customer-aligned content at scale—but also support deeper personalization, real-time insights, and trust-driven engagement, without compromising compliance or brand integrity.


That’s also where brand governance, AI governance, and data governance start to converge. As Boaz Fachler noted—echoing a belief I strongly share—these aren’t separate conversations. They are interdependent systems that must be designed together, with clear alignment and shared intent.
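As a purely illustrative sketch, here is what designing those systems together might look like in miniature: automated brand and data checks and a human-in-the-loop approval gate living in one pipeline. Every rule, name, and role below is invented for the example rather than drawn from any platform discussed at the Summit.

```python
from dataclasses import dataclass, field

# Hypothetical example: brand, AI, and data governance rules designed as one
# system, with a named person making the final call before anything ships.

BANNED_CLAIMS = ("guaranteed returns", "risk-free")  # brand/compliance rules
PII_MARKERS = ("ssn:", "account #")                  # data-governance rules

@dataclass
class Draft:
    text: str
    flags: list = field(default_factory=list)

def automated_review(draft: Draft) -> Draft:
    """First gate: machine-speed checks against the shared rule set."""
    lowered = draft.text.lower()
    draft.flags += [c for c in BANNED_CLAIMS if c in lowered]
    draft.flags += [m for m in PII_MARKERS if m in lowered]
    return draft

def human_review(draft: Draft, approver: str) -> bool:
    """Second gate: a person owns the publish decision (HITL)."""
    if draft.flags:
        print(f"{approver}: rejected, flags = {draft.flags}")
        return False
    print(f"{approver}: approved")
    return True

draft = automated_review(Draft("Enjoy guaranteed returns on every trade!"))
human_review(draft, approver="brand-editor")
```

In a real deployment the human gate would feed a review queue rather than a print statement, but the design point stands: one pipeline, shared rules, and a person accountable for what ships.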



The Architecture of AI Readiness

What CTOs Are Navigating as Pressure Mounts Across the Business

There was sobering clarity in the Cyber Resilience session, where one of the panelists said:

“We can get attacked at machine speed, so we must protect ourselves at machine speed.”

AI’s double-edged nature—enabler of progress and weapon for cyberthreats—demands new models for speed, safety, and systemic design.


As I was listening to this panel, I thought about how today’s CTOs are responsible for far more than cybersecurity. They’re now fielding a surge of demand from every corner of the business.


Marketing wants faster, brand-aligned content and personalization engines (to start). Product wants embedded intelligence. HR is exploring AI for hiring, engagement, and performance—but those efforts need support, not ownership, from tech. Legal and compliance teams need access, transparency, traceability, and the tooling to build their own governance capabilities. And the C-suite wants it all to happen faster—with fewer risks.


And these are just some of the stakeholders.


In this landscape, CTOs aren’t just shipping infrastructure. They’re designing the connective tissue—aligning people, platforms, and policy—that makes AI usable, safe, and scalable across functions.

And while the CTO may not be the final decision-maker for every AI use case, they are the system architect, ensuring that as AI becomes embedded across the business, it stays resilient, trusted, and in sync with enterprise priorities.


Because in this next era, speed matters—but speed without governance is chaos.



The Human Layer of AI Adoption

Trust, Talent, and the Psychology of Adoption

Talent isn’t waiting to adopt AI—employees are already using it, often quietly, when it isn’t supported internally. The shift isn’t about introducing new tools, but about curating environments where experimentation is safe and structured. That includes building learning pathways—not just to understand how AI works, but to build the judgment, context, and confidence to apply it responsibly. Capability building isn’t a side effort—it’s the foundation for long-term, cross-functional adoption.


As Amy Edmondson, a Harvard Business School professor, urged, we need to ensure psychological safety, invite clarity, and shrink the distance between failure and learning—we need to drive fear out of the process.


I was also struck by Joan Larovere, Associate Chief Medical Officer at Boston Children's Hospital, who shared that we often over-index on iteration and tools—when the real work is clarifying the problem to be solved. She also noted—something that’s long been true—that those building or launching the tools don’t always fully understand the problem they’re meant to address. In the context of AI, where speed can sometimes outpace reflection, that reminder felt especially timely. Her point that job security remains a real concern—and that organizational culture plays a huge role in adoption—couldn’t have been more relevant.


This is where the CHRO plays a critical role. Because beneath the surface of AI adoption is a very real undercurrent: job security anxiety. It’s not always spoken openly, but it’s there—surfacing in town halls, team chats, and even executive coaching sessions. And it’s being amplified by headlines that swing between utopia and apocalypse.


In some of my speaking engagements and conversations with colleagues, I often surface this tension—employees aren’t just asking how to use AI; they’re wondering what it means for their roles, their value, and their future.


And in my view, we need to lean into this.

It’s a real and human response to a time of profound change. We’re all processing it—often at the same time, and with no universal blueprint to follow. That doesn’t mean we freeze up. It means we draft our own blueprints and shape the path as we walk it—with learning, intention, and shared momentum.

Ignoring that anxiety doesn’t make it go away. But naming it—and designing around it—can transform it into momentum.



Three Universal Themes That Transcend Functions

Across sessions and side conversations, three recurring themes stood out—cutting across industries, roles, and levels of AI maturity.


  1. Integration matters. While pilots can be valuable for learning, AI can’t remain siloed. The leaders making the most meaningful progress are connecting it to real business needs across departments—moving beyond experiments that never scale.


  2. Trust has to be earned. Whether it’s customer-facing applications, internal tooling, or data infrastructure, AI systems must be designed with transparency and accountability built in. That doesn’t mean perfection—it means being clear, responsible, and deliberate in how we build and deploy.


  3. People are central to whether AI succeeds or stalls. Adoption depends on whether employees feel informed, included, and equipped—not overwhelmed or sidelined. It’s about investing in the capability and confidence of the people who power the business.


These aren’t abstract themes—they’re showing up in the decisions we, as leaders, are making right now. And while there’s still a lot to figure out, the most grounded path forward is one that’s integrated, transparent, and human-centered.



Final Thought: Leadership is the Lever

The conversations at Imagination in Action made something unmistakably clear: AI isn’t asking us to change what it means to lead—but it is asking us to lead through a very different kind of moment.


This is a time of exploration, not certainty. A time when many are building while still learning, testing while still aligning—while the ground continues to shift beneath us. But even in the midst of that, what matters most is how we create space to move forward—with clarity, with care, and with others at the table. It’s in that kind of space—anchored, thoughtful, and forward-facing—that the most meaningful progress can take shape.


Because the future won’t be led by those who had it all figured out—but by those willing to learn, adapt, and shape it together.

 

~Lynda

Cofounder & Managing Partner at Growthential


 
 

Subscribe to Growthential Insights

A Resource for Business and Marketing Leaders Navigating Today's Dynamic Business Environment
