Every so often, a technical dispute reveals something much bigger. The recent blowup between the U.S. Department of Defense and Anthropic is one of those moments: not because it's about a $200 million contract, but because it makes visible a new kind of enterprise risk, one that most CEOs, CTOs, and CIOs are still treating as a procurement detail.

In a recent piece, "The Pentagon wants to rewrite the rules of AI," I focused on the political meaning of a government attempting to force an AI company to relax its own guardrails. For enterprise leaders, the most important takeaway is more practical: If your AI capabilities depend on a single provider's terms, policies, and enforcement mechanisms, your strategy is now downstream of someone else's conflict.

According to reporting, the Pentagon wanted the ability to use Anthropic's models for "all lawful purposes," while Anthropic insisted on explicit carve-outs, particularly around mass surveillance and fully autonomous weapons. When Anthropic wouldn't budge, the dispute escalated into threats of blacklisting and "supply chain risk" designation, with public pressure at the highest political levels. The Associated Press describes the demand for broader access and the potential consequences in detail, including the Pentagon's willingness to treat compliance as nonnegotiable for participation in its internal AI network, GenAI.mil.

Then came the second act: OpenAI stepped in with its own Pentagon agreement, presenting it as compatible with strong safety principles while debate continued over what the contract language actually prevents, especially regarding the use of publicly available data at scale.

You may not be selling to the Pentagon or to governments that are making democracy progressively look like a pipe dream. But you are almost certainly building on vendors whose models are shaped by policies, politics, contracts, and reputational risk.
And if you're deploying those models as is, or building agentic systems tightly coupled to one provider's tooling and assumptions, you're making a strategic bet you probably haven't priced in. This is what the Pentagon-Anthropic fight should teach every enterprise.

Your AI vendor is not just a supplier. It's a governance regime.

For the past two years, many companies have treated large language model (LLM) procurement like cloud procurement: Choose a provider, negotiate price, sign terms, integrate application programming interfaces (APIs), ship pilots. But LLM providers are not selling neutral infrastructure. They're selling models with built-in constraints, policies that can change, and enforcement mechanisms that can tighten overnight. Even when the models are accessed through APIs, the practical reality is that your capability is partly controlled elsewhere: through usage policies, refusal behaviors, rate limits, logging, retention choices, safety layers, and contractual wording.

That's why this dispute matters. Anthropic's stance wasn't simply ethical positioning. It was product governance. The Pentagon's stance wasn't simply buyer pressure. It was demanding control of governance. Enterprise leaders should recognize the parallel immediately: Your company's AI behavior is partly determined by a vendor's definition of acceptable use, and that definition may collide with your own business requirements, your regulatory environment, your geography, or your risk appetite. In a sense, you are outsourcing part of your decision architecture. And when governance becomes the battleground, it's not a technical issue anymore. It's strategic.

Out-of-the-box AI is rented intelligence. Strategy requires owned capability.

I've written before that most current AI deployments are essentially rented intelligence: powerful, convenient, but ultimately generic.
That was the core of my argument in "This is the next big thing in corporate AI" and in "Why world models will become a platform capability, not a corporate superpower." When everyone can rent similar capabilities from OpenAI, Anthropic, Google, xAI, or others, the differentiator becomes what you build above the model: your workflows, your feedback loops, your integration with operational reality.

The Pentagon dispute highlights a hard truth: When you depend on as-shipped AI behavior, your operational continuity depends on someone else's red lines, and those lines can be challenged by customers, governments, courts, or internal politics. If you're a CIO or CTO, this is the moment to stop thinking of LLM selection as the AI strategy, and start treating it as a replaceable component in a larger system. Because the real strategic question is not "Which model do we choose?" It is: Do we have the technical and organizational ability to switch models quickly, without rewriting our business logic, retraining our workforce, or rebuilding our agent systems?

Agentic systems multiply lock-in and amplify the blast radius.

You really believed that by saying "we are developing an agentic system," you were, somehow, more sophisticated? Simple use cases such as summarization, drafting, and search augmentation are relatively portable. Agentic systems are not. The moment you build agents that call tools, trigger workflows, access internal systems, and make chained decisions, you start encoding business logic in places that are surprisingly hard to migrate: prompts, function-call schemas, tool-selection patterns, model-specific safety behavior, vendor-specific orchestration frameworks, and even quirks of how a particular model handles ambiguity. That is why the Pentagon-Anthropic fight should feel like a corporate risk scenario, not a Washington drama.
A sudden policy shift, contract dispute, or reputational shock can force you to change providers fast, and if your agents are tightly coupled to one stack, your business doesn't switch. It stalls.

I made a related point, though from a different angle, in "Why your company (and every company) needs an AI-first approach." AI-first should not mean "deploy more AI." It should mean building systems where artificial intelligence is structurally embedded, but is also governed, testable, observable, and resilient under change. Resilience is the missing word in most enterprise AI plans.

The lesson isn't "ethics first." It's "architecture first."

You don't need to take a public moral stance like Anthropic (or maybe you do, but that's not the topic of this article). You do need to design as if your vendor relationship will be volatile . . . because it will be. Volatility can come from many directions: A provider changes its safety posture. A regulator introduces new constraints. A customer demands contractual carve-outs. A government pressures suppliers. A vendor shifts pricing, retention, or availability. A model is withdrawn, restricted, or re-tiered. A geopolitical event changes what "acceptable use" means.

The organizations that will navigate this era best are those that treat LLMs as interchangeable engines and build capabilities that are model-agnostic. That means investing in a layer above the model that belongs to you: evaluation, routing, policy, observability, and integration with your operational truth.

If you need a mental frame, think of what NIST is doing with the AI Risk Management Framework: a structured way to map, measure, and manage AI risk across contexts and use cases, rather than assuming the technology is inherently safe because a vendor says so. The Pentagon itself (ironically, given this dispute) has formal language around responsible AI principles and implementation, emphasizing governance, testing, and life cycle discipline.
Companies should read those documents not as government ethics, but as a reminder that the control plane matters as much as the model.

Build AI capabilities that reflect your business, not your provider.

The endgame is not model independence as an abstract principle. The endgame is strategy dependence: AI systems that are deeply shaped by your supply chain, your operating model, your risk posture, your customer obligations, and your competitive context, no matter how complex those are. That is the part most companies are still avoiding, because it is harder than buying a model. It requires building institutional competence: the ability to evaluate models, to swap them, to tune behavior through your own governance layers, to instrument outputs, to manage tool access, and to treat agents as production systems rather than demos.

In "What are the 2 categories of AI use and why do they matter?," I tried to describe the divide between organizations that use AI and those that build with AI. The Pentagon-Anthropic conflict is a perfect illustration of why that divide is becoming existential. If you only use, you inherit someone else's constraints. If you build, you can adapt.

The companies that keep treating AI as a cost-cutting plug-in will almost certainly underinvest in the architecture that makes switching possible. Efficiency narratives feel safe, but they often lock you into the shallowest version of the technology.

The Pentagon didn't want ethics getting in the way. Anthropic didn't want to yield control. OpenAI negotiated a different set of terms. That triangle is not a one-off story. It's a preview of how contested, politicized, and strategically consequential AI supply will become. Your company's job is not to pick the right provider. Your job is to ensure that, when the inevitable conflict arrives, your business is not trapped inside someone else's argument.
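To make the "layer above the model that belongs to you" concrete, here is a minimal sketch of that idea in Python. Everything in it is hypothetical: the class names, the adapter functions, and the fake vendor calls are illustrations, not any real SDK. The point is only the shape of the design: business logic talks to an interface you own, each vendor sits behind a thin replaceable adapter, and routing and observability live in your code, so switching providers is a configuration change rather than a rewrite.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple


@dataclass
class Completion:
    text: str
    provider: str


class ModelGateway:
    """Owned control plane: routing and observability live here,
    not inside any single vendor's SDK. (Illustrative sketch.)"""

    def __init__(self) -> None:
        self._adapters: Dict[str, Callable[[str], str]] = {}
        self._active: Optional[str] = None
        self.audit_log: List[Tuple[str, str]] = []  # (provider, prompt)

    def register(self, name: str, adapter: Callable[[str], str]) -> None:
        self._adapters[name] = adapter
        if self._active is None:
            self._active = name  # first registered adapter becomes default

    def switch(self, name: str) -> None:
        # Swapping vendors is a one-line change at the gateway,
        # not a rewrite of the business logic calling it.
        if name not in self._adapters:
            raise KeyError(f"no adapter registered for {name!r}")
        self._active = name

    def complete(self, prompt: str) -> Completion:
        assert self._active is not None, "no adapter registered"
        text = self._adapters[self._active](prompt)
        self.audit_log.append((self._active, prompt))  # observability hook
        return Completion(text=text, provider=self._active)


# Stand-ins for real vendor API calls (e.g. an OpenAI or Anthropic client).
def vendor_a(prompt: str) -> str:
    return f"[vendor-a] {prompt}"


def vendor_b(prompt: str) -> str:
    return f"[vendor-b] {prompt}"


gateway = ModelGateway()
gateway.register("vendor-a", vendor_a)
gateway.register("vendor-b", vendor_b)

first = gateway.complete("Summarize Q3 risks.")
gateway.switch("vendor-b")  # e.g. forced by a policy or contract change
second = gateway.complete("Summarize Q3 risks.")
```

In a production system, the same seam is where evaluation harnesses, policy checks, and cost routing would attach; the sketch only shows why owning that seam keeps a vendor dispute from becoming your outage.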
Dieticians are warning that GLP-1 use can lead to extreme malnutrition, manifesting in diseases like scurvy, amid findings that the vast majority of studies fail to consider patients' eating habits. While GLP-1s like Ozempic and Wegovy have surged in popularity in recent years, and are now available through injections and in pill form, leading dieticians in Australia have discovered that existing research hasn't considered what patients are eating, and how much.

Nutritional Deficiencies

While the drugs work by suppressing appetite, eating too little or making poor dietary choices can lead to further issues. "A reduction in body weight does not automatically mean the person is well-nourished or healthy," Professor Clare Collins told the Australian Financial Review (AFR). "Nutrition plays a critical role in health, and right now it's largely missing from the evidence." She added that only two trials had recorded or published what GLP-1 users were eating.

The current data shows that many patients using weight-loss medication are functionally malnourished, which can lead to severe vitamin deficiencies. A 2025 study of adults with type 2 diabetes found that more than 20 percent of participants had nutritional deficiencies after 12 months of GLP-1 use. And a study examining patients before joint surgery found that 38 percent of GLP-1 users suffered from malnutrition, versus 8 percent for patients not using GLP-1s.

Last year, British pop artist Robbie Williams told The Mirror he had developed a "17th century pirate disease" after taking "something like Ozempic." He was referring to scurvy, a rare but serious vitamin C deficiency. In the worst cases, the illness can lead to death. "I'd stopped eating, and I wasn't getting nutrients," he said. It's exactly the kind of health emergency the dieticians are working to combat.
The Proposed Solution

"Let's not wait for every GP (general practitioner) to see a case of scurvy, let's get on the front foot and link these GP chronic management plans to a dietician referral," said Collins. GLP-1 use has also been tied to thiamine deficiency, which can cause neurological and cardiovascular disease.

Magriet Raxworthy, CEO at Dieticians Australia, said it's essential that GLP-1 users receive nutritional guidance while taking the drug. "Without personalized medical nutrition therapy provided by a dietitian, people may struggle to meet their nutritional needs and can be placed at risk of significant muscle loss, bone density loss, micronutrient deficiencies, and disordered eating behaviors," she said, according to the AFR.

In this case, it's clear: medication alone does not deliver sustainable health outcomes. Some GLP-1 providers do offer nutrition assistance, but the issue hasn't yet been centralized in a way that effectively prevents serious deficiencies that can accompany the medication.

Ava Levinson

This article originally appeared on Fast Company's sister website, Inc.com.
We've grown to despise meeting culture, and I understand why. Think about the last few meetings you've attended. How many of them felt clear, succinct, like a truly effective use of your time? I've sat through more meetings than I can count, many of them with half the participants multitasking, cameras on but minds elsewhere. As a certified facilitator who has designed everything from executive offsites to weekly team stand-ups, I've learned that most meetings fail not because people don't care, but because leaders treat meetings as a necessary evil instead of the expensive, high-stakes collaboration moments they actually are.

"But what can we do about it?" you might lament. "Bad meetings are a part of getting work done." While it's true that meetings are a critical part of doing business, they don't have to be bad. Here are four of the most common mistakes I see people make when it comes to meetings, and simple fixes you can implement today to start making the most of your meeting time.

Mistake 1: You don't start with the end in mind

You may think you know what a meeting is for: the title of your meeting explains the purpose, or your agenda lays out what you hope to cover. But really, the most important planning step is having a clear vision of the intended outcome of the meeting. Think about what you want people to walk away from each meeting with. Are they coming away with information? Are they supposed to finish having made a decision? Is the goal to simply introduce a topic and tease out which smaller group should convene for more specific next steps? Are they supposed to have a deeper understanding of their peers' priorities? When people know where the conversation is supposed to lead, they can both prepare and participate more effectively. Plus, this makes it easy to close the loop with action items related to your objective (another element of successful meetings).
Action item: As you're kicking off each agenda item in a meeting, state, out loud, the outcome you're striving for.

Mistake 2: You're not timeboxing your agenda

We've all been in meetings where every agenda item seems to take way too long. You tune out, check some emails, and tune back in only to realize that the topic still isn't wrapped up and the third person is now piggybacking on what the first person said without adding any new or necessary information. Unsurprisingly, by the end of the meeting, you've only gotten through two of the six agenda items, leaving the group with a few non-ideal options: schedule an additional meeting, move those points to next week (which further adds to the backlog of agenda topics), or attempt to cover those items asynchronously.

Instead, use timeboxing for every item on your agenda. Your intended outcomes should guide your timeboxing. Exploring a controversial decision that will impact the whole organization? Build in more time for discussion. Running through updates that don't require much input? Keep those timeboxes tight. And no need to get ridiculous here: If you have three administrative topics at the beginning, you can batch them into a five-minute admin section instead of putting one minute next to each.

When you hit that time mark (most video conferencing systems now have built-in timers you can use), you don't have to stop immediately. Instead, do a check-in to see whether you need to continue. I often use a quick thumbs poll: thumbs up means people want more time on the topic, thumbs down means they're ready to move on, thumbs sideways means they're neutral. If most people are ready to move forward, capture the action item and keep going. If you're getting mostly thumbs up, set a new timebox and check in again when it expires. And if people are slow to respond or give you sideways thumbs? They've probably checked out.
Action item: Add timeboxes to every agenda item in your next meeting, and actually check in when you hit them.

Mistake 3: You're not being exclusive enough

Leaders often invite a core group of required attendees to a meeting, then tack on everyone else as optional, just in case they might find value in some small portion of the discussion, or to avoid anyone feeling left out. You think you're being inclusive, but what you're actually doing is cluttering people's calendars with unnecessary events they feel pressured to attend. Sure, the last five "optional" meetings didn't yield anything useful for them, but maybe this one will be different, right?

Do everyone a favor: Stop inviting optional attendees. And if you're marked as optional on a meeting that consistently provides no value, stop going. There are better ways to stay transparent without wasting anyone's time. Use an AI notetaker to generate a summary and action items that non-attendees can review quickly. Have someone post key takeaways afterward, especially decisions or information that affects people outside the room. Or invite specific people for specific portions of the meeting when their input is actually needed.

Action item: Audit your upcoming meetings and remove all optional attendees, either making them required or taking them off the invite entirely.

Mistake 4: You don't do a meeting audit often enough

Finally, with the above implemented, it's important to keep yourself honest and regularly assess whether the meetings on your calendar are a valuable use of your time. A simple question I like to ask myself as I consider my upcoming meetings is: If this meeting were taken off the calendar, what would the meeting attendees miss out on? How would it hinder their ability to do their day-to-day roles and responsibilities? The answer can make it clear which meetings can be removed or restructured.
I also think it's valuable for meeting facilitators to do a quick gut check at the end of each meeting, asking yourself: Did we make any decisions? Do people know what to do next? Did everyone participate in some way? Did everyone walk away with some benefit? If your meetings aren't reaching their intended outcomes, or you don't know what those intended outcomes are, it might be time to revisit the cadence, attendees, and style of the meeting (or consider whether it should be a meeting at all).

Action item: Schedule 30 minutes this week to audit all your recurring meetings using the questions above, and cancel or restructure at least one.