
2026-03-12 09:00:00| Fast Company

I think the strongest indicator of how normal using AI has become is the language we use as shorthand for it. It’s now extremely common for someone to say they asked “chat” for some piece of information. We all know what they mean. But if you needed data on how popular AI portals are now, OpenAI provided it recently when the company revealed that ChatGPT has 900 million users, up from 800 million in the fall. Even if Gemini, Copilot, and Claude weren’t also rising (they are), that would be enough for the media (not to mention brands and marketing/PR agencies) to really understand how fast AI is growing as a discovery channel. Whether or not it’s a source of traffic doesn’t matter; it’s a meaningful layer between publishers and audiences.

That’s obviously the reason there’s been so much interest in the infant field of GEO (generative engine optimization) lately, and why I’ve written about it more than once in the past few months. But the focus on how to get AI search engines to notice and reference content doesn’t mean there shouldn’t be some kind of reckoning with how the content got there in the first place, and what value exchange, if any, that should trigger.

Surveys, such as one done by OnMessage last fall, consistently show the public believes content providers should be compensated when their content is scraped by AI engines. The AI industry tends to have a different view, often suggesting that “publicly available” data (i.e., stuff on the internet) is fair game. It’s more nuanced than that, of course, but the central issue is one of leverage: The AI companies have it, and publishers by and large don’t.

The push for a better bargain

A new industry coalition is looking to rebalance those scales. In late February, a group of U.K. media companies, including the BBC, the Financial Times, and The Guardian, announced they were forming SPUR, which stands for Standards for Publisher Usage Rights. In an open letter, the leaders of those companies articulated the group’s purpose: “to establish shared technical standards and responsible licensing frameworks that ensure AI developers can access high quality, reliable journalism in legitimate, responsible and convenient ways.” In other words, SPUR is meant to help lead the publishing industry toward a better bargain between AI companies and the media.

Currently, publishers have a hodgepodge of solutions: You could pursue a licensing deal with one of the big AI companies, an option available only to publishers above a certain size. You could sue the AI companies, an expensive proposition.
Or you could try to defend your content through a combination of paywalls, bot-blocking protocols, and nascent technologies aimed at getting AI crawlers to pay for access.

The spirit of SPUR is that there’s power in numbers. Although it’s beginning with a handful of U.K. publishers, the group is actively working to recruit media worldwide into the coalition. By taking collective action, which the news media is traditionally allergic to, the coalition stands a better chance of establishing some kind of framework for how AI services will pay for access to content.

It stands an even better chance with allies. Last year, Cloudflare stepped into this fight, advocating on the side of publishers. And it brought to the battlefield technical clout: A significant portion of internet traffic goes through Cloudflare’s network, so it has an outsize say in what the rules are, and which ones get enforced. As part of its push against unauthorized AI scraping, it introduced Pay Per Crawl, a new way to charge bots for access to content. Cloudflare’s solution is actually one of several on the market, and although SPUR doesn’t intend to play favorites, Pay Per Crawl is exactly the kind of technical barrier the group was created to encourage.

The fact is, unauthorized AI crawling is rampant. TollBit, which publishes quarterly reports about bot activity, recently highlighted the problem of third parties leveraging virtual, “headless” browsers (essentially bots accessing sites as if they were humans and then scraping them) on an industrial scale to crawl vast amounts of data, the equivalent of a fishing trawler. For the longest time, the only technical weapon digital publishers had was the robots exclusion protocol (robots.txt), but it’s an honor system that can easily be ignored or bypassed. The main focus of SPUR, sources tell me, is to help publishers build more defenses.
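The honor-system nature of robots.txt is easy to see in code. A minimal sketch using Python’s standard-library parser (the bot name and URL here are hypothetical examples, not real crawlers): the protocol can tell a crawler it is not welcome, but nothing in it enforces that answer.

```python
# Minimal sketch of why robots.txt is only advisory. "ExampleAIBot"
# and example.com are hypothetical illustrations.
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
# A robots.txt that bars one AI crawler from the entire site.
rules.parse([
    "User-agent: ExampleAIBot",
    "Disallow: /",
])

# The protocol's answer: this bot may not fetch the page.
allowed = rules.can_fetch("ExampleAIBot", "https://example.com/article")
print(allowed)  # False

# But compliance is voluntary: a non-compliant crawler simply never
# consults can_fetch() and requests the page anyway. Nothing on the
# server side is changed by this file alone.
```

This is why the newer defenses the article describes, paywalls, bot-blocking at the network layer, and pay-per-crawl schemes, operate at enforcement points the crawler cannot simply ignore.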
By making it more difficult and cost-prohibitive for AI crawlers to access content, it will encourage the people who operate them to make deals.

Then come the agents

The biggest wild card here is agents. AI services access content largely for three purposes: for training data, for search crawling, and in response to user requests. It’s the last category that is proving very contentious and the impetus behind a war of words between Perplexity and Cloudflare last summer. User agents have traditionally been given a pass from blocking since they effectively act as human proxies, not mass-scraping tools. Importantly, though, they don’t behave as humans (for example, they don’t look at ads), so many sites (and especially publishers) believe they should be entitled to block them.

Some believe this aspect of AI crawling should be regulated, and certainly it’s part of the ongoing lawsuits between the media and the AI industry. But those approaches drag on; SPUR is acting now. You can picture this quickly leading to an arms race, and when the players were individual publishers versus the AI industry, that was very asymmetric warfare. But a large, worldwide industry coalition, backed by technical allies like Cloudflare, might actually have a chance to push back.

So now the hard work begins of herding the cats of the media industry. And the clock is ticking: User behavior is shifting rapidly, and asking “chat” what’s happening in the world means more agents are replacing human traffic to news websites. SPUR may give publishers a chance to shape that system, but it is taking form with or without them. Once those rules harden, changing them will be much harder.


Category: E-Commerce

 


2026-03-12 08:30:00| Fast Company

Every so often, a technical dispute reveals something much bigger. The recent blowup between the U.S. Department of Defense and Anthropic is one of those moments: not because it’s about a $200 million contract, but because it makes visible a new kind of enterprise risk, one that most CEOs, CTOs, and CIOs are still treating as a procurement detail.

In a recent piece, “The Pentagon wants to rewrite the rules of AI,” I focused on the political meaning of a government attempting to force an AI company to relax its own guardrails. For enterprise leaders, the most important takeaway is more practical: If your AI capabilities depend on a single provider’s terms, policies, and enforcement mechanisms, your strategy is now downstream of someone else’s conflict.

According to reporting, the Pentagon wanted the ability to use Anthropic’s models for “all lawful purposes,” while Anthropic insisted on explicit carve-outs, particularly around mass surveillance and fully autonomous weapons. When Anthropic wouldn’t budge, the dispute escalated into threats of blacklisting and supply chain risk designation, with public pressure at the highest political levels. The Associated Press describes the demand for broader access and the potential consequences in detail, including the Pentagon’s willingness to treat compliance as nonnegotiable for participation in its internal AI network, GenAI.mil.

Then came the second act: OpenAI stepped in with its own Pentagon agreement, presenting it as compatible with strong safety principles while debate continued over what the contract language actually prevents, especially regarding the use of publicly available data at scale.

You may not be selling to the Pentagon or to governments that are making democracy progressively look like a pipe dream. But you are almost certainly building on vendors whose models are shaped by policies, politics, contracts, and reputational risk.
And if you’re deploying those models as is, or building agentic systems tightly coupled to one provider’s tooling and assumptions, you’re making a strategic bet you probably haven’t priced in. This is what the Pentagon-Anthropic fight should teach every enterprise.

Your AI vendor is not just a supplier. It’s a governance regime.

For the past two years, many companies have treated large language model (LLM) procurement like cloud procurement: Choose a provider, negotiate price, sign terms, integrate application programming interfaces (APIs), ship pilots.

But LLM providers are not selling neutral infrastructure. They’re selling models with built-in constraints, policies that can change, and enforcement mechanisms that can tighten overnight. Even when the models are accessed through APIs, the practical reality is that your capability is partly controlled elsewhere: through usage policies, refusal behaviors, rate limits, logging, retention choices, safety layers, and contractual wording.

That’s why this dispute matters. Anthropic’s stance wasn’t simply ethical positioning. It was product governance. The Pentagon’s stance wasn’t simply buyer pressure. It was demanding control of governance.

Enterprise leaders should recognize the parallel immediately: Your company’s AI behavior is partly determined by a vendor’s definition of acceptable use, and that definition may collide with your own business requirements, your regulatory environment, your geography, or your risk appetite.

In a sense, you are outsourcing part of your decision architecture. And when governance becomes the battleground, it’s not a technical issue anymore. It’s strategic.

Out-of-the-box AI is rented intelligence. Strategy requires owned capability.

I’ve written before that most current AI deployments are essentially rented intelligence: powerful, convenient, but ultimately generic.
That was the core of my argument in “This is the next big thing in corporate AI” and in “Why world models will become a platform capability, not a corporate superpower.” When everyone can rent similar capabilities from OpenAI, Anthropic, Google, xAI, or others, the differentiator becomes what you build above the model: your workflows, your feedback loops, your integration with operational reality.

The Pentagon dispute highlights a hard truth: When you depend on as-shipped AI behavior, your operational continuity depends on someone else’s red lines, and those lines can be challenged by customers, governments, courts, or internal politics.

If you’re a CIO or CTO, this is the moment to stop thinking of LLM selection as the AI strategy, and start treating it as a replaceable component in a larger system. Because the real strategic question is not “Which model do we choose?” It is: Do we have the technical and organizational ability to switch models quickly, without rewriting our business logic, retraining our workforce, or rebuilding our agent systems?

Agentic systems multiply lock-in and amplify the blast radius.

Did you really believe that saying “we are developing an agentic system” somehow made you more sophisticated? Simple use cases such as summarization, drafting, and search augmentation are relatively portable. Agentic systems are not.

The moment you build agents that call tools, trigger workflows, access internal systems, and make chained decisions, you start encoding business logic in places that are surprisingly hard to migrate: prompts, function-call schemas, tool-selection patterns, model-specific safety behavior, vendor-specific orchestration frameworks, and even quirks of how a particular model handles ambiguity. That is why the Pentagon-Anthropic fight should feel like a corporate risk scenario, not a Washington drama.
A sudden policy shift, contract dispute, or reputational shock can force you to change providers fast, and if your agents are tightly coupled to one stack, your business doesn’t switch. It stalls.

I made a related point, though from a different angle, in “Why your company (and every company) needs an AI-first approach.” AI-first should not mean “deploy more AI.” It should mean building systems where artificial intelligence is structurally embedded, but is also governed, testable, observable, and resilient under change.

Resilience is the missing word in most enterprise AI plans.

The lesson isn’t “ethics first.” It’s “architecture first.” You don’t need to take a public moral stance like Anthropic (or maybe you do, but that’s not the topic of this article). You do need to design as if your vendor relationship will be volatile . . . because it will be. Volatility can come from many directions: A provider changes its safety posture. A regulator introduces new constraints. A customer demands contractual carve-outs. A government pressures suppliers. A vendor shifts pricing, retention, or availability. A model is withdrawn, restricted, or re-tiered. A geopolitical event changes what “acceptable use” means.

The organizations that will navigate this era best are those that treat LLMs as interchangeable engines and build capabilities that are model-agnostic. That means investing in a layer above the model that belongs to you: evaluation, routing, policy, observability, and integration with your operational truth.

If you need a mental frame, think of what NIST is doing with the AI Risk Management Framework: a structured way to map, measure, and manage AI risk across contexts and use cases, rather than assuming the technology is inherently safe because a vendor says so. The Pentagon itself (ironically, given this dispute) has formal language around responsible AI principles and implementation, emphasizing governance, testing, and life cycle discipline.
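The idea of a routing layer that belongs to you rather than to any vendor can be sketched in a few lines. This is a hypothetical illustration, not a real SDK: the provider names and the `ModelRouter` class are invented for the example, and the lambdas stand in for real vendor API calls.

```python
# A minimal sketch of a model-agnostic layer: business code talks to a
# neutral interface, so swapping vendors is a configuration change,
# not a rewrite. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict

# Each provider adapter is just a function from prompt -> completion.
Provider = Callable[[str], str]

@dataclass
class ModelRouter:
    providers: Dict[str, Provider]
    active: str

    def complete(self, prompt: str) -> str:
        # Policy checks, logging, and evaluation hooks would live
        # here, owned by you rather than by any single vendor.
        return self.providers[self.active](prompt)

    def switch(self, name: str) -> None:
        # Migrating after a policy shock touches config, not callers.
        if name not in self.providers:
            raise KeyError(f"unknown provider: {name}")
        self.active = name

# Stub adapters stand in for real vendor SDK calls.
router = ModelRouter(
    providers={
        "vendor_a": lambda p: f"[vendor_a] {p}",
        "vendor_b": lambda p: f"[vendor_b] {p}",
    },
    active="vendor_a",
)
print(router.complete("summarize Q3 report"))  # served by vendor_a
router.switch("vendor_b")   # vendor conflict forces a migration
print(router.complete("summarize Q3 report"))  # same call, new vendor
```

The point of the sketch is where the seam sits: callers depend only on `complete()`, so the blast radius of a forced provider change is confined to the adapter table.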
Companies should read those documents not as government ethics, but as a reminder that the control plane matters as much as the model.

Build AI capabilities that reflect your business, not your provider.

The endgame is not model independence as an abstract principle. The endgame is strategy dependence: AI systems that are deeply shaped by your supply chain, your operating model, your risk posture, your customer obligations, and your competitive context, no matter how complex those are.

That is the part most companies are still avoiding, because it is harder than buying a model. It requires building institutional competence: the ability to evaluate models, to swap them, to tune behavior through your own governance layers, to instrument outputs, to manage tool access, and to treat agents as production systems rather than demos.

In “What are the 2 categories of AI use and why do they matter?,” I tried to describe the divide between organizations that use AI and those that build with AI. The Pentagon-Anthropic conflict is a perfect illustration of why that divide is becoming existential. If you only use, you inherit someone else’s constraints. If you build, you can adapt.

The companies that keep treating AI as a cost-cutting plug-in will almost certainly underinvest in the architecture that makes switching possible. Efficiency narratives feel safe, but they often lock you into the shallowest version of the technology.

The Pentagon didn’t want ethics getting in the way. Anthropic didn’t want to yield control. OpenAI negotiated a different set of terms. That triangle is not a one-off story. It’s a preview of how contested, politicized, and strategically consequential AI supply will become.

Your company’s job is not to pick the right provider. Your job is to ensure that, when the inevitable conflict arrives, your business is not trapped inside someone else’s argument.



 

2026-03-12 08:00:00| Fast Company

Dieticians are warning that GLP-1 use can lead to extreme malnutrition, manifesting in diseases like scurvy, amid findings that the vast majority of studies fail to consider patients’ eating habits. While GLP-1s like Ozempic and Wegovy have surged in popularity in recent years, and are now available through injections and in pill form, leading dieticians in Australia have discovered that existing research hasn’t considered what patients are eating, and how much.

Nutritional Deficiencies

While the drugs work by suppressing appetite, eating too little or making poor dietary choices can lead to further issues. “A reduction in body weight does not automatically mean the person is well-nourished or healthy,” Professor Clare Collins told the Australian Financial Review (AFR). “Nutrition plays a critical role in health, and right now it’s largely missing from the evidence.” She added that only two trials had recorded or published what GLP-1 users were eating.

The current data shows that many patients using weight-loss medication are functionally malnourished, which can lead to severe vitamin deficiencies. A 2025 study of adults with type 2 diabetes found that more than 20 percent of participants had nutritional deficiencies after 12 months of GLP-1 use. And a study examining patients before joint surgery found that 38 percent of GLP-1 users suffered from malnutrition, versus 8 percent for patients not using GLP-1s.

Last year, British pop artist Robbie Williams told The Mirror he had developed “a 17th century pirate disease” after taking “something like Ozempic.” He was referring to scurvy, a rare but serious vitamin C deficiency. In the worst cases, the illness can lead to death. “I’d stopped eating, and I wasn’t getting nutrients,” he said. It’s exactly the kind of health emergency the dieticians are working to combat.
The Proposed Solution

“Let’s not wait for every GP (general practitioner) to see a case of scurvy; let’s get on the front foot and link these GP chronic management plans to a dietician referral,” said Collins. GLP-1 use has also been tied to thiamine deficiency, which can cause neurological and cardiovascular disease.

Magriet Raxworthy, CEO at Dieticians Australia, said it’s essential that GLP-1 users receive nutritional guidance while taking the drug. “Without personalized medical nutrition therapy provided by a dietitian, people may struggle to meet their nutritional needs and can be placed at risk of significant muscle loss, bone density loss, micronutrient deficiencies, and disordered eating behaviors,” she said, according to the AFR. In this case, it’s clear: medication alone does not deliver sustainable health outcomes. Some GLP-1 providers do offer nutrition assistance, but the issue hasn’t yet been centralized in a way that effectively prevents serious deficiencies that can accompany the medication.

Ava Levinson

This article originally appeared on Fast Company’s sister website, Inc.com. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy.



 
