Companies large and small are scrambling to implement AI in hopes of boosting productivity, while many are also stripping out the very leadership backbone needed to guide that change: managers. That's a dangerous contradiction. AI adoption won't fail because of the platform a company chooses. It will fail if the people employees trust most, their managers, aren't equipped to understand artificial intelligence, or if those roles disappear altogether.

In today's climate of employee disengagement, burnout, and change fatigue, employees are resistant to yet another transformation. Thirty-one percent admit they're actively working against their company's AI initiatives. No platform, no matter how powerful, can overcome that level of pushback without leaders stepping in to bridge the gap.

Enter the middle manager. Whether you call them people leaders or frontline supervisors, they are the best (and often only) people positioned to help employees understand the "why" and the "what's in it for me." Yet only 34% of managers feel prepared to support AI adoption.

It's clear that managers have the promise and power to help employees navigate change, but context is key. Our research at Zeno Group, Middle Managers at Risk: Companies Overlook the Communications Imperative, shows nearly three-quarters of middle managers (73%) believe that being able to explain the "why" behind company decisions is essential to being a successful manager. However, when it comes to AI, nearly three-quarters of executives claim their AI approach is strategic, yet fewer than half of employees agree. That disconnect underscores the need for trusted messengers. Managers, valued for their communication and empathy, are best positioned to close the gap. With the right support, they can help employees move from resistance to resilience.

Here are five ways managers can turn anxious employees into AI champions.

1. Communicate the Company's AI Vision

Managers can't communicate what they don't understand. Only 22% of employees say their company has communicated a clear AI plan. That leaves many managers guessing or giving up. When they're given the trust and tools to lead, managers can be powerful catalysts for change. Sitting at the intersection of strategy and execution, they're the ones who turn lofty vision into daily action, earn credibility with employees, and translate ambitious AI transformations into something real and usable on the ground. Give them training, FAQs, and talking points that tie AI implementation back to company goals. Create forums where managers can ask questions. When they're included early, they become credible messengers. Left in the dark, they add to the skepticism.

2. Acknowledge Change Fatigue and Keep Dialogue Open

More change is coming, but the workforce is exhausted. Even back in 2022, the average employee experienced 10 planned enterprise changes, compared with just two in 2016, and employees' ability to cope fell sharply over that period, from 74% to 43%. Add shifting return-to-office rules and fears of job loss, and resistance is natural. Managers can ease that resistance by acknowledging the environment we're in, sharing their own experiences, and inviting honest dialogue. Use team meetings to bust myths, answer questions, and show where AI supports (not replaces) human contributions. Concerns voiced aren't threats; they're opportunities to build trust.

3. Answer the "What's in It for Me?"

If employees can't see the personal benefit, AI feels like a mandate. Show how AI can save time, automate repetitive tasks, and free up space for creativity and growth. Managers are closest to the employees and the work, so they are best positioned to share examples of how AI can genuinely improve day-to-day tasks and experiences.

4. Walk the Talk

Employees won't embrace tools their managers don't understand or use. The old "show and tell" approach can spark curiosity and normalize AI use in the workplace. Encourage managers to experiment with AI in their own workflows and share the results, including how it enhanced or sped up a project. Then invite employees to do the same. Consider adding an "AI spotlight" segment to team meetings and recognizing team members who are using AI.

5. Measure Readiness and Seek Feedback

Research shows 75% of employees report low confidence in using AI, and 40% struggle to understand how it applies to their roles. Managers can help by finding out where their teams feel uncertain. They can gather insights through quick pulse surveys, one-on-ones, or informal conversations, and then advocate for the right training, mentoring, and reskilling programs. Confidence grows when people feel capable, heard, and backed by their leaders.

The Bottom Line

AI isn't the future of work. It's here now. And its success will hinge less on code and more on conversations: the ongoing conversations that managers have with their teams. Don't sideline managers. Equip them to be the heroes of your organization's AI adoption journey, turning anxiety into confidence and momentum.
At some point, you've likely welcomed a recent college graduate into your business. They're smart, well-educated, and full of potential, but on day one they have little understanding of your company's unique processes, culture, or goals. Large language models (LLMs) are much the same. They carry vast general knowledge yet lack the specific context that makes them immediately valuable to your organization. Just as new hires are shown the ropes during onboarding, LLMs need structured access to your business's data, tools, and workflows to become truly useful. That's where the Model Context Protocol (MCP) comes in.

MCP enables communication among AI applications, AI agents, and the enterprise applications and data sources they work with. The protocol has quickly moved from an emerging standard to a strategic enabler, and the conversation we're having with our clients has shifted from technical architecture to practical application. MCP is not just another integration layer. It's a way to unlock latent value across your organization by connecting AI agents with the systems, data, and workflows that drive outcomes. The real opportunity lies in how you apply MCP.

Start with what and why

Let's be honest: there's no shortage of MCP primers out there. Most of them walk you through the architecture: hosts, clients, servers. That's fine, but it's not where the real value is. The real question isn't "How does MCP work?" It's "What can I do with it?" and "Why does it matter to my business?" When we talk about MCP, I try to steer the conversation away from the architecture and toward the outcomes. What problem are you solving? Why is MCP the right tool to achieve your goals?

A Midwest health system we worked with wanted to personalize treatment for patients with hypertension, using the vast troves of data stored in its electronic health records (EHR). The strategic hurdle wasn't just accessing the data; it was granting access securely, at scale, and in a way that respected compliance boundaries across thousands of patient encounters. With MCP, we were able to connect AI agents to a rich EHR data model that included vitals, medications, comorbidities, lab results, and even nuanced metrics like ejection fraction readings. MCP serves as the structured conduit, enabling the AI to interact with nearly 800 patient features per encounter without compromising privacy or requiring custom integrations. The predictive accuracy has enabled clinicians to tailor treatment plans with greater precision, according to our client. Patients have gained an estimated 100 additional days of life, and the system saw $2,000 in annual healthcare savings for 20% of its hypertension population.

Similarly, a national convenience store chain used MCP to connect AI systems with real-time data on customer movement, promotional engagement, and inventory shrinkage. No retraining models. No custom APIs. Just a scalable model for improving store performance. MCP isn't just a bridge between systems. More vitally, it connects strategic intent with measurable outcomes.
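For readers who want to see what that "structured conduit" looks like in practice, here is a minimal sketch of an MCP server built with the official Python SDK's FastMCP helper. The server name, the tool, and the returned fields are hypothetical placeholders loosely inspired by the EHR example above; they are not the actual client integration, and a real deployment would sit behind the organization's existing access controls.

```python
# Minimal MCP server sketch (official Python SDK, FastMCP helper).
# "ehr-features" and fetch_encounter_features are illustrative stand-ins,
# not the integration described in this article.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ehr-features")

@mcp.tool()
def fetch_encounter_features(patient_id: str) -> dict:
    """Return a read-only bundle of features for one patient encounter.

    A real server would query the EHR behind the organization's access
    controls; this stub returns placeholder values for illustration only.
    """
    return {
        "patient_id": patient_id,
        "vitals": {"systolic_bp": 128, "diastolic_bp": 82},
        "medications": ["lisinopril 10 mg"],
        "labs": {"ejection_fraction": 0.58},
    }

if __name__ == "__main__":
    # Defaults to the stdio transport; an MCP-capable host connects as the client.
    mcp.run()
```

Any MCP-capable host can discover and call a tool like this through the same protocol, which is what lets the pattern scale across systems without bespoke, one-off APIs.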
Guardrails for autonomy and accountability

As we move toward agentic AI (models acting like digital employees), autonomy without structure is risky. You wouldn't let a new hire run wild with sensitive data or make decisions without oversight, and the same goes for AI. One major challenge is idempotency: the ability to perform the same operation repeatedly with consistent results. Most LLMs aren't idempotent. Ask one to write an email five times, and you'll get five different versions. That's fine for creativity, but not for processing payments or executing compliance workflows. MCP introduces guardrails that standardize how agents interact with internal systems, ensuring repeatable, reliable outcomes even when the AI's internal reasoning varies. That's critical in regulated industries like healthcare, finance, and government.

We saw this play out with a Middle East government statistics agency. It had decades of data buried in legacy systems. MCP enabled a voice-powered AI interface that could query massive datasets in Arabic and English. What used to take weeks now takes seconds, and, more importantly, decision-makers now have timely, contextual intelligence at their fingertips.

Strategic implementation: build once, scale everywhere

Here's the thing: MCP isn't about building one-off solutions. It's about creating frameworks that can be reused across departments and use cases. To apply MCP effectively, organizations must think in the following terms:

Reliability and repeatability: MCP enforces structured communication, making AI agents more predictable and trustworthy.

Scalability and ecosystem growth: With a unified API layer, MCP simplifies deployment and integration, accelerating innovation.

Safety and control: MCP ensures AI agents operate within defined boundaries, protecting sensitive data and maintaining enterprise integrity.

We worked with a global healthcare technology provider that wanted to simplify complex medical terminology for patients. Instead of building a narrow solution, we used MCP to create a reusable framework that could be extended across departments. AI agents can securely access structured medical data and terminology libraries, apply consistent translation logic, and tailor outputs for patients, clinicians, and administrative staff. That same protocol-driven infrastructure was later adapted for internal training, multilingual documentation, and voice-assisted navigation of clinical systems. MCP made it possible to replicate success without reinventing the wheel. That's what strategic implementation looks like: turning isolated wins into enterprise-wide transformation.

The road ahead

MCP is more than a protocol; it's a strategic enabler. It gives AI agents the structure they need to interact with enterprise data and tools. That means businesses can unlock new efficiencies, reduce development cycles, and build a thriving ecosystem of interoperable AI solutions. The full potential is still unfolding, but for companies serious about AI, working with partners that understand how to apply MCP can be foundational. With the right guardrails in place, AI can be creative and compliant, autonomous and accountable. Just what you'd expect from any employee helping move your business forward.

Juan Orlandini is CTO, North America, of Insight Enterprises.
We're in the midst of an extraordinary wave of AI-fueled innovation, and no industry will remain untouched. It's still early days in what promises to be a new technology supercycle. But for impact organizations such as nonprofits and government agencies, which typically lag in tech adoption, this moment represents a priceless window of opportunity. Unfortunately, the impact sector is still playing it safe, with digital strategies that prioritize incremental modifications over decisive, daring action and technical innovation. These organizations are led by some of the smartest, most dedicated people I know, and they understand the trends. So why are they stuck in their approach to digital?

Impact organizations can't afford to ignore AI

Before we jump into the reasons nonprofits and government agencies are playing it safe, let's consider the stakes, and why time is of the essence when it comes to adopting AI.

First, AI is an impact multiplier. Leaning into the technology isn't about adopting new tech for the sake of keeping current. It's about radically amplifying your team's capacity to focus on your core mission rather than rote administrative tasks. Of course, AI isn't a panacea, and there are serious ethical considerations that should be taken into account along the way. The best technology decisions are always values-aligned. But that doesn't mean sidestepping the technology altogether.

Second, the moment to act is now. Over the coming years, the gap between organizations that figure out how to effectively adopt AI and those that don't will widen exponentially. And in these early, chaotic days of technological innovation, AI tools and models are more affordable and accessible than ever before, creating a unique opportunity for even resource-strapped organizations to explore their potential. But realizing that potential requires thoughtful investment, even when entry costs seem low.

Finally, all organizations, from corporate giants to small nonprofits, are still figuring out how to adopt AI. In this rare moment of digital parity, you have the chance to position your organization at the front of the curve.

Five common mistakes that prevent digital risk-taking

There are several reasons nonprofits and government agencies fail to take calculated risks in their digital strategies. These mistakes aren't unique to the current moment. They are perennial stumbling blocks that hinder digital innovation for many well-intentioned organizations and agencies. Here are five common mistakes that prevent digital risk-taking, and how to solve them.

1. Underfunding digital investments

Nonprofits and government agencies are fundamentally resource constrained. Budgets in the public sector are never going to rival those available in the private sector. But this is also about resource allocation. Digital is often still viewed as just a communications tool or overhead rather than a core investment that is fundamental to program delivery and organizational success. Digital projects are often treated as a one-time line item rather than an ongoing investment that needs to be refined and improved over time. Further, investments are often made in technology alone, not in the people and processes that will ultimately make that technology successful.

Solution: Advocate for increased and sustained digital budgets by aligning digital strategies with organizational goals and measuring ROI over time.

2. Decision by committee

The prevalence of committee-driven decision-making and the pursuit of consensus often lead to watered-down strategies and missed opportunities. In a fast-paced digital environment, this approach also slows down decision-making. The result is a strategy that is already outdated by the time it is implemented.

Solution: Streamline decision-making processes for digital initiatives. Implement agile methodologies and empower digital teams with greater autonomy.

3. Thinking you know your audience better than you do

Ask any public servant or nonprofit staffer, and they'll tell you that what motivates them is helping people. But when it comes to knowing their audiences, many teams rely too heavily on what they think they know about key stakeholders. Worse, some organizations prioritize internal perspectives over the needs and preferences of their target audiences. This misalignment can lead to digital initiatives that fail to resonate or drive meaningful engagement.

Solution: Take the time to conduct direct user research and test products with the people they are designed to serve.

4. Fear of failure

The impact sector's current funding models and budget structures create an environment where failure is taboo. This risk-averse culture stifles innovation and prevents organizations from learning through experimentation, a crucial element of digital success.

Solution: Focus funding proposals on outcomes, not activities, to allow flexibility in approach, and create a culture of innovation that embraces calculated risks and learns from failures.

5. Analysis paralysis

When confronted with thorny problems like AI adoption, many organizations hang back because they are waiting for just the right moment or a critical mass of decisive information before making a move. In the wildly fast-moving world of AI, this mindset doesn't work. Learning by doing is the best course of action. Experimentation and prototyping are the name of the game.

Solution: Empower your team to experiment with AI tools. Provide flexible guidelines that ensure data security and values alignment without stifling creativity.

AI is here to stay, and it's indelibly reshaping the digital landscape. For risk-averse impact organizations, avoiding it is the riskiest strategy of all.

Elisabeth Bradley is CEO of Forum One.