We've been here before. At so many pivotal moments in our adoption of digital technology, people and businesses mistake a company's walled garden for the broader, more powerful network underneath. In the 1990s, many people genuinely believed AOL was the internet. When I left Facebook in 2013, hundreds of people asked how I would function without the web. Over and over, packaged products such as operating systems, app stores, and streaming services eclipse quieter, less expensive, bottom-up alternatives like Linux or torrents. We forget they exist. Today we're making the same mistake with large language models.

To many of us, AI now means choosing among a handful of commercial LLMs such as ChatGPT, Claude, Gemini, or Grok, and perhaps even choosing the one that matches our cultural or political sensibilities. But these systems share important structural limitations: they are centralized, expensive, energy-intensive operations that depend on massive data centers, rare chips, and proprietary data stores. Because they're trained on roughly the same public internet, they also tend to generate the same generalized, flattened results. Companies using them wholesale often end up replacing their own expertise with recombinations of whatever is already out there.

This is how AI will do to businesses what social media did to publications, and what the early web did to retailers who went online without a strategy. Using the same generic tools as everyone else produces the same generic results. Worse, outsourcing core knowledge processes to a black-box service replaces the long-term development of internal capacity, especially junior employees learning through real practice, with cheaper but future-eroding automation.

The limits of centralized AI

Commercial language models are optimized for generality and scale. That scale is impressive, but it creates real constraints for organizations. Centralized LLMs require:

- Large volumes of training data scraped from the open web
- Expensive server infrastructure and power consumption
- Constant external connectivity
- Business models built around subscriptions, token fees, or upselling

For many companies, these models become another outsourced dependency. Every time a commercial LLM updates itself, which can happen weekly, your workflows change underneath you. Your proprietary data may be exposed to third-party APIs. And your differentiation erodes, because the model's knowledge is drawn from the same public corpus available to your competitors.
Meanwhile, the narrative surrounding AI has encouraged businesses to believe that this centralized path is the only viable one: that achieving meaningful AI capability requires enormous data centers, billion-dollar training runs, and participation in a global race toward Artificial General Intelligence. But none of this is a requirement for using AI productively.

A practical alternative already exists

You do not need frontier-scale models to benefit from AI. A growing ecosystem of open-source, locally deployable language models provides organizations with far more autonomy, privacy, and control. A $100 Raspberry Pi, or any modest home or office server, can run a compact open-source model using tools like Ollama or GPT4All. These models don't learn on the fly the way people do, but they can produce high-quality responses while remaining completely contained within your own environment.

More importantly, they can be paired with a private knowledge base using retrieval systems. That means the model can reference your own research library, internal documentation, or curated public resources like Wikipedia, without training on the entire internet and without sending your data to an external provider. (A minimal sketch of such a setup appears at the end of this section.) These systems build on your own data instead of extracting it, strengthen your institutional memory instead of commoditizing it, and run at a fraction of the cost.

This approach allows an organization to create an AI system aligned with its actual priorities, values, and domain expertise. It becomes a private assistant rather than a generalized product shaped by the incentives of a trillion-dollar platform.

And the alternative doesn't have to be a solitary effort. Neighborhoods, campuses, or company departments can form a mesh network: a set of devices connected directly through Wi-Fi or cables rather than through the public internet. One node can host a local model; others can contribute or withhold their own data stores. Instead of a single company owning the infrastructure and the knowledge, you get something closer to a community data commons or a digital library system.

Projects like the High Desert Institute's LoreKeepers Guild are already experimenting with this approach. Their Librarian initiative envisions local libraries acting as the data hubs for mesh-networked AI systems, resilient enough to function even during connectivity disruptions. But their deeper innovation is architectural. These systems give organizations access to powerful language capabilities without subscription costs, lock-in, data extraction, or exposure of proprietary information. Local or community models enable organizations to:

- Curate their own data
- Maintain complete privacy by keeping computation on-site
- Reduce latency to near zero
- Preserve and strengthen internal expertise
- Avoid recurring token or API costs

And they do so using energy and computing resources that are orders of magnitude lower than those required by frontier-scale models.
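To make this concrete, here is a minimal sketch, in Python, of the kind of local setup described above. It assumes an Ollama instance running on its default local port with a compact model already pulled; the model name, the knowledge_base folder, and the example question are placeholders, and the keyword-overlap retrieval is a toy stand-in for a proper embedding index.

```python
# Minimal "ask my own documents" sketch: pull a few relevant passages from a
# private folder, then ask a locally served model (via Ollama's default HTTP
# endpoint) to answer using only that context. Nothing leaves the machine.
import json
import pathlib
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"   # Ollama's default local endpoint
MODEL = "llama3.2"                                    # any compact model pulled locally
DOCS_DIR = pathlib.Path("knowledge_base")             # your private notes, manuals, reports

def retrieve(question: str, k: int = 3) -> list[str]:
    """Crude keyword-overlap retrieval; a real system would use embeddings."""
    terms = set(question.lower().split())
    scored = []
    for path in DOCS_DIR.glob("*.txt"):
        for chunk in path.read_text(errors="ignore").split("\n\n"):
            overlap = len(terms & set(chunk.lower().split()))
            if overlap:
                scored.append((overlap, chunk.strip()))
    return [chunk for _, chunk in sorted(scored, reverse=True)[:k]]

def ask(question: str) -> str:
    context = "\n---\n".join(retrieve(question)) or "No matching documents."
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    body = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("What did our last field report say about battery failures?"))
```

Swapping the keyword matcher for a small vector store is the obvious upgrade, but even this crude version keeps every query and every document on your own hardware.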
Why decentralized AI matters now

The more institutions adopt localized or mesh-based AI, the less they are compelled to fund the centralized companies racing toward AGI. Those companies have made an effective argument: that sophisticated AI is only possible through their services. But much of what organizations pay for is not their own productivity; it is the construction of massive server farms, the procurement of rare chips, and long-term bets on energy-intensive infrastructure. By contrast, in-house or community-run systems can be deployed once and maintained indefinitely.

A week of setup can eliminate a decade of subscription payments. A small rural library has already demonstrated the feasibility of operating a self-hosted LLM node; a Fortune 500 company should have no trouble doing the same.

Still, history suggests that most organizations will choose the convenient option rather than the autonomous one. Few people accessed the early internet directly; they chose AOL. Today, many will continue to choose centralized AI services, even when they offer the least control. But what social media companies did to businesses that mistook them for the internet will be mild compared to what happens when companies mistake these proprietary interfaces for AI itself.

Decentralized AI already exists. The question now is whether we'll choose to use it.
The way consumers search is changing faster than the industry expected. This holiday season, many shoppers are looking for gifts inside AI platforms rather than on retailer sites or in traditional search. They are asking natural questions like:

- "Find me a cruelty-free skincare gift for sensitive skin under $100."
- "What are good gift ideas for a three-year-old that are safe and durable?"
- "What are the safest, nontoxic treats for my Golden Retriever?"

This shift is already measurable. Adobe Digital Insights reports a 4,700% year-over-year increase in retail visits driven by AI assistants between July 2024 and July 2025. At the same time, click-through rates from SEO have dropped 34% as users bypass the search results page entirely. eMarketer reports that 47% of brands have no idea whether they appear in AI-driven discovery at all.

The platforms know this shift is accelerating. Google's recent decision to add conversational shopping and AI-mode ads just weeks before the holidays shows how quickly consumer behavior is moving. Brands must adjust too. Despite the complexity behind AI systems, three simple signals determine which products get recommended: trust, relevance, and extractability. These signals are the backbone of how AI decides what to surface, and they matter as much as packaging, price, or placement.

1. Trust: The model's instinct about which information is dependable
AI systems develop a sense of which sources to believe during training. Domains with consistent verification signals gain more weight because the model has learned that they usually publish accurate information. This is why leading retailers, including Ulta, Sephora, Target, Amazon, and Bloomingdale's, rely on independent verification partners for the claims displayed on their digital shelves. Verified domains act as trust anchors. When a model must choose, it selects the product backed by clearer and more reliable sources. Trust often determines whether you are included in the answer at all.

2. Relevance: How well your product matches the shopper's question
AI assistants answer based on meaning, not keywords. When a shopper asks for an eczema-safe moisturizer or gluten-free protein bars, the system retrieves products whose attributes clearly map to those concepts. Relevance depends on using consistent claims across every channel you sell in; inconsistency is heavily penalized. When multiple sources concur, the repeated confirmation strongly reinforces that your product is the right choice. Missing or inconsistent attributes keep your product out of the candidate pool.

3. Extractability: How easy it is for AI to read and use your product data
Even accurate information gets ignored if it's hard for AI to parse. Clean structure, consistent formatting, and machine readability significantly increase the likelihood that your product will be selected. Brands improve extractability by adding structured markup for details like ingredients, materials, and benefits so retrieval systems can interpret them without ambiguity. Clear structure anchors the attention of the large language model, giving your product an advantage. Extractability is often the deciding factor when competing products meet the same need.
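To make the extractability point concrete, here is a small Python sketch that emits schema.org Product markup as JSON-LD, the format the playbook below recommends. The product, brand, price, and attribute values are invented for illustration, and which properties any given AI retrieval system actually reads is an open question this example cannot settle.

```python
# Emit schema.org Product JSON-LD for a hypothetical product detail page.
# Product, Brand, Offer, and PropertyValue are standard schema.org types;
# the data itself is illustrative.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Calm Barrier Face Serum",                # hypothetical product
    "brand": {"@type": "Brand", "name": "ExampleSkin"},
    "description": "Fragrance-free serum formulated for sensitive, eczema-prone skin.",
    "offers": {
        "@type": "Offer",
        "price": "68.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    # Attribute-level claims a retrieval system can read without guessing.
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "fragranceFree", "value": True},
        {"@type": "PropertyValue", "name": "crueltyFree", "value": True},
        {"@type": "PropertyValue", "name": "dermatologistTested", "value": True},
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the PDP.
print(json.dumps(product, indent=2))
```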
AI RECOMMENDATIONS SHAPE BEHAVIOR

Algorithms do more than respond to consumers. They influence them. We see this in language, where content moderation has led millions of people to adopt new vocabulary. The same pattern is emerging in commerce. If AI consistently recommends a certain moisturizer, probiotic, or baby product, shoppers begin to trust those recommendations and carry those preferences into stores. Optimizing for trust, relevance, and extractability goes beyond improving digital performance. It shapes real-world buying behavior.

A PRACTICAL PLAYBOOK FOR THE HOLIDAY WINDOW

Even with peak season here, brands can still make meaningful progress with these five steps (a short sketch of a cross-channel consistency check follows the list):

1. Structure your data for machine and human audiences
Fix blocked pages or missing product schemas, and use standard formats like JSON-LD that AI can parse reliably. Keep consumer-facing PDPs simple while storing deeper technical details, ingredients, and safety information in underlying schemas. Clean up formatting and refresh retailer feeds weekly, since AI systems prioritize recency.
Example: A candle brand can keep the PDP simple for shoppers while storing allergen, VOC, and material data in structured markup that AI can read.

2. Align product claims everywhere you sell
Match titles, claims, and benefits across DTC sites, retailer PDPs, and marketplaces. Remove conflicting or outdated language that can weaken trust.
Example: If one PDP says "cruelty-free" and another says "not tested on animals," unify the phrasing so AI sees one consistent claim.

3. Map your data to real shopper intent
Identify the attributes consumers care about most in your category. Encode those attributes in machine-readable fields, and add supporting evidence where possible.
Example: For baby toys, encode safety standards like ASTM or CPSC in your structured data so AI can confirm the claim.

4. Build machine-readable authority with credible certifications and verification signals
Encode ingredients, materials, certifications, and testing outcomes in structured fields so AI can verify your claims without guessing. Keep claim language consistent across channels to strengthen authority. Use references to third-party standards, testing, or retailer badges. AI gives more weight to claims it can trace back to trusted sources.
Example: A sensitive-skin serum should encode fragrance-free, eczema-safe, and dermatologist-testing details, plus any third-party certifications, directly in schema.

5. Use a tool that monitors, optimizes, and implements the work end-to-end
Choose a tool that goes beyond generic visibility tracking, looks at each SKU individually, and helps you implement structured data improvements. Prioritize systems that strengthen your authority signals product by product, not just surface-level optimizations. Look for tools that measure real outcomes, like increased visibility in AI or higher conversion, so you can measure ROI.
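As a rough illustration of steps 2 and 4, the Python sketch below normalizes a few synonymous claim phrasings and flags SKUs whose claims differ across channels. The channel names, feed structure, and synonym table are assumptions made for this example, not any particular retailer's format.

```python
# Flag products whose claims are phrased differently (or missing) across
# channels, after folding known synonyms into one canonical claim.
CANONICAL = {
    "not tested on animals": "cruelty-free",
    "no animal testing": "cruelty-free",
    "fragrance free": "fragrance-free",
}

feeds = {  # channel -> {sku: set of claim strings}; illustrative data
    "dtc_site":   {"SER-001": {"cruelty-free", "fragrance-free"}},
    "retailer_a": {"SER-001": {"not tested on animals", "fragrance free"}},
    "retailer_b": {"SER-001": {"cruelty-free"}},
}

def normalize(claims: set[str]) -> set[str]:
    return {CANONICAL.get(c.lower().strip(), c.lower().strip()) for c in claims}

def report(sku: str) -> None:
    per_channel = {ch: normalize(skus.get(sku, set())) for ch, skus in feeds.items()}
    union = set().union(*per_channel.values())
    for channel, claims in per_channel.items():
        missing = union - claims
        if missing:
            print(f"{sku} on {channel} is missing or rephrasing: {sorted(missing)}")

report("SER-001")
# Here only retailer_b is flagged: it omits the fragrance-free claim entirely.
```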
Consumer discovery is changing faster than most brands are prepared for. But there is still time. By reinforcing trust, relevance, and extractability now, brands can stay visible in AI-driven search this season and build a long-term foundation for every channel where AI shapes consumer decisions.

Kimberly Shenk is cofounder and CEO of Novi.
European Union regulators on Friday fined Elon Musk's social media platform X 120 million euros ($140 million) for breaches of the bloc's digital regulations that they said could leave users exposed to scams and manipulation.

The European Commission issued its decision following an investigation it opened two years ago into X under the 27-nation bloc's Digital Services Act, also known as the DSA. It's the first time that the EU has issued a so-called non-compliance decision since rolling out the DSA. The sweeping rulebook requires platforms to take more responsibility for protecting European users and cleaning up harmful or illegal content and products on their sites, under threat of hefty fines.

The Commission said it was punishing X, previously known as Twitter, because of three different breaches of the DSA's transparency requirements. The decision could rile President Donald Trump, whose administration has lashed out at digital regulations, complaining that Brussels was targeting U.S. tech companies and vowing to retaliate. The company did not immediately respond to an email request for comment. EU regulators had already outlined their accusations in mid-2024, when they released the preliminary findings of their investigation into X.

Regulators said X's blue checkmarks broke the rules because of "deceptive design practices" and could expose users to scams and manipulation. Before Musk acquired X, when it was still known as Twitter, the checkmarks mirrored verification badges common on social media and were largely reserved for celebrities, politicians, and other influential accounts. After he bought it in 2022, the site started issuing the badges to anyone willing to pay $8 per month for one. That means X does not meaningfully verify who's behind an account, "making it difficult for users to judge the authenticity of accounts and content they engage with," the Commission said in its announcement.

X also fell short of the transparency requirements for its ad database, regulators said. Platforms in the EU are required to provide a database of all the digital advertisements they have carried, with details such as who paid for them and the intended audience, to help researchers detect scams, fake ads, and coordinated influence campaigns. But X's database, the Commission said, is undermined by design features and access barriers such as "excessive delays in processing." Regulators also said X puts up "unnecessary barriers" for researchers trying to access public data, which stymies research into systemic risks that European users face.

"Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU. The DSA protects users," Henna Virkkunen, the EU's executive vice-president for tech sovereignty, security and democracy, said in a prepared statement.

Kelvin Chan, AP Business Writer
FIFA has invited more teams than ever for a World Cup priced largely for fans in the 1%. The process of figuring out which teams in the expanded 48-nation field will play where begins with Friday's draw at the Kennedy Center for the Performing Arts.

Cape Verde, Curaçao, Jordan and Uzbekistan will appear in soccer's premier event for the first time when next year's tournament is played from June 11 to July 19 at 16 sites in the United States, Mexico and Canada.

"I'm quite optimistic because to qualify you need to beat the other teams of your confederations, and that's a sign of quality," former Arsenal manager Arsene Wenger said Thursday as red carpets were installed at the Kennedy Center. "The teams are not there by coincidence."

President Donald Trump of the U.S. and Claudia Sheinbaum of Mexico are expected, along with Canadian Prime Minister Mark Carney. Instead of soccer gear, the Kennedy Center gift shop was still filled with socks of Shakespeare, Beethoven and Verdi along with shelves of red and white holiday nutcrackers.

The world's top 11-ranked teams have all qualified, with No. 12 Italy among 22 nations competing in playoffs for the final six berths, to be decided March 31. Led by captain Lionel Messi, who turns 39 during the tournament, Argentina seeks to become the first nation to win consecutive World Cups since Brazil in 1958 and 1962. Messi will look to extend his record of 26 games played and enters with 13 career goals, three shy of Miroslav Klose's record.

Games will be played at 11 NFL stadiums along with three in Mexico and two in Canada, where construction is underway to add 17,000 temporary seats to BMO Field, raising capacity to around 45,000. Attendance will top the record 3.59 million set in 1994.

"We basically set the new tone in terms of attendance, in terms of surrounding the tournament with a lot of entertainment and glamor," said Alan Rothenberg, head organizer of the 1994 World Cup in the U.S. "We did a lot of things that kind of broke the ice with respect to how you present the tournament as something other than just a soccer tournament."

FIFA announced initial ticket prices of $60-$6,730, saying they would be dynamic, up from $25-$475 for the 1994 tournament in the United States. It has refused to release a complete list of prices, as it had for every other World Cup since at least 1990. The governing body also is selling parking passes for up to $175 for a single match, a semifinal in Arlington, Texas. FIFA spokesman Bryan Swanson did not respond to a request for FIFA President Gianni Infantino to discuss ticket prices.

Sixty-four nations will participate in the draw, 30% of FIFA's members, but just 42 countries are assured of spots. Among the playoff teams, Albania, Kosovo, New Caledonia and Suriname are trying to reach the World Cup for the first time. With the expansion, the top two teams in each of 12 groups advance along with the eight best third-place teams. Some nations could reach the new round of 32 with three points.

"I think we're going to be in pretty good shape," said former U.S. midfielder Tab Ramos, who during his playing days mapped out permutations for advancement. "We have a good team, so I'm not worried as much as I've been in the past about this draw."

Opta Analyst's computer projects the U.S. has a 0.9% chance of winning. The Americans haven't reached the semifinals since the first World Cup in 1930.
Spain tops the forecast at 17%, followed by France (14.1%), England (11.8%), Argentina (8.7%), Germany (7.1%), Portugal (6.6%), Brazil (5.6%) and the Netherlands (5.2%).

In a new twist, FIFA said the top four teams in the rankings (Spain, Argentina, France and England) will avoid each other until the semifinals if they finish first in their first-round groups. Specific sites for most matchups and kickoff times won't be announced until Saturday. In 1994, there were just seven night games. A team's group play sites will be restricted to an Eastern, Central or Western regional.

The 1994 World Cup draw in Las Vegas was apolitical, featuring performances by Stevie Wonder, Barry Manilow, James Brown and Vanessa Williams plus comedian Robin Williams, who called the draw screen "the world's largest keno board" and yelled "Bingo!" when Greece was selected. This draw figures to be more akin to the ceremony for the 2018 tournament in Moscow, opened by Russian President Vladimir Putin. Trump, who has campaigned for a Nobel Peace Prize, is expected to be awarded FIFA's own peace prize, which Infantino established after traveling to several events with Trump.

But the main event is the pulling of balls from bowls to create groups. Retired stars Tom Brady of the NFL, Shaquille O'Neal of the NBA and Wayne Gretzky of the NHL, along with three-time AL MVP Aaron Judge, will assist in a ceremony to be run by former England captain Rio Ferdinand.

"There is the angst and the looks of sheer terror and disappointment and/or joy and elation from the coaches and from the staffs," said former U.S. defender Alexi Lalas, now Fox's lead soccer analyst. "It really gets kind of real for people."

AP soccer: https://apnews.com/hub/soccer

Ronald Blum, AP Sports Writer
So far, Nvidia has provided the vast majority of the processors used to train and operate large AI models like the ones that underpin ChatGPT. Tech companies and AI labs don't like to rely too much on a single chip vendor, especially as their need for computing capacity increases, so they're looking for ways to diversify. And so players like AMD and Huawei, as well as hyperscalers like Google and Amazon AWS, which just released its latest Trainium3 chip, are hurrying to improve their own flavors of AI accelerators, the processors designed to speed up specific types of computing tasks.

Could the competition eventually reduce Nvidia, AI's dominant player, to just another AI chip vendor, one of many options, potentially shaking up the industry's technological foundations? Or is the rising tide of demand for AI chips big enough to lift all boats? Those are the trillion-dollar questions.

Google sent a minor shockwave across the industry when it casually mentioned that it had trained its impressive new Gemini 3 Pro model entirely on its own Tensor Processing Units (TPUs), another flavor of AI accelerator chip, rather than Nvidia graphics processing units (GPUs). Industry observers immediately wondered whether the AI industry's broad dependence on Nvidia chips was justified. After all, they're very expensive: A big part of the billions now being spent to build out AI computing capacity (data centers) is going to Nvidia chips.

And Google TPUs are looking more like an Nvidia alternative. The company can rent out TPUs in its own data centers, and it's reportedly considering selling the chips outright to other AI companies, including Meta and Anthropic. A (paywalled) report from The Information in November said Google is in talks to sell or lease its TPUs so they can run in any company's data center. A Reuters report says Meta is in talks to spend billions on Google's TPUs starting in 2027, and may begin paying to run AI workloads on TPUs within Google data centers even sooner. Anthropic announced in October that it would use up to a million TPUs within Google data centers to develop its Claude models.

Selling the TPUs outright would, technically, put Google in direct competition with Nvidia. But that doesn't mean Google is gunning hard to steal Nvidia's chip business. Google, after all, is a major buyer of Nvidia chips. Google may see selling TPUs to certain customers as an extension of selling access to TPUs running in its cloud. This makes sense if those customers are looking to do the types of AI processing that TPUs are especially good at, says IDC analyst Brandon Hoff.

While Nvidia's GPUs are workhorses capable of a wide range of work, most of the big-tech platform companies have designed their own accelerators that are purpose-built for their most crucial types of computing. Microsoft developed chips that are optimized for its Azure cloud services. Amazon's Trainium chips are especially good at e-commerce-related tasks like product suggestion and delivery logistics. Google's TPUs are good at serving targeted ads across its platforms and networks.

That's something Google shares with Meta. "They both do ads, and so it makes sense that Meta wants to take a look at using Google's TPUs," Hoff says. And it's not just Meta. Most big tech companies use a variety of accelerators because they use machine learning and AI for a wide variety of tasks. "Apple got some TPUs, got some of the AWS chips, of course got some GPUs, and they've been playing with what works good for different workloads," he adds.
Nvidia's big advantage has been that its chips are very powerful; they're the reason that training large language models became possible. They're also great generalists, good for a wide variety of AI workloads. On top of that, they're flexible, which is to say they can plug into different platforms. For example, if a company wants to run its AI models on a mix of cloud services, it's likely to develop those models to run on Nvidia chips because all the clouds use them.

"Nvidia's flexibility advantage is a real thing; it's not an accident that the fungibility of GPUs across workloads was focused on as a justification for increased capital expenditures by both Microsoft and Meta," analyst Ben Thompson wrote in a recent newsletter. "TPUs are more specialized at the hardware level, and more difficult to program for at the software level; to that end, to the extent that customers care about flexibility, then Nvidia remains the obvious choice."

However, vendor lock-in remains a big concern, especially as big tech companies and AI labs are sinking hundreds of billions of dollars into new data center capacity for AI. AI companies would prefer instead to use a mix of AI chips from different vendors. Anthropic, for one, is explicit about this: "Anthropic's unique compute strategy focuses on a diversified approach that efficiently uses three chip platforms: Google's TPUs, Amazon's Trainium, and NVIDIA's GPUs," the company said in an October blog post. Amazon's AWS says its Trainium3 chip is roughly four times faster than the Trainium2 chip it announced a year ago, and 40% more efficient.

Because of the performance of Nvidia chips, many AI companies have standardized on CUDA, the Nvidia software layer that lets developers control how the GPUs work together to support their AI applications. Most of the engineers, developers, and researchers who work with large AI models know CUDA, which can create another form of skills-based organizational lock-in. But now it may make sense for organizations to build whole new alternative software stacks to accommodate different kinds of chips, Thompson says: "That they did not do so for a long time is a function of it simply not being worth the time and trouble; when capital expenditure plans reach the hundreds of billions of dollars, however, what is worth the time and trouble changes."

IDC projects that the high demand for AI computing power isn't likely to abate very soon. "We see that cloud service providers are growing quickly, but their spending will slow down," Hoff says. Beyond that, a second wave of demand may come from sovereign funds, such as Saudi Arabia, which is building the Humain AI hub, a large AI infrastructure complex that it will fund and control. Another wave of demand could come from large multinational corporations that want to build similar sovereign AI infrastructure, Hoff explains. "There's a lot of stuff in 2027 and 2028 that'll keep driving demand."

There are plenty of "chipmaker challenges Nvidia" stories out there, but the deeper one delves into the economic complexities and competitive dynamics of the AI chip market, the more the drama drains away. As AI finds more applications in both business and consumer tech, AI models will be asked to do more and more kinds of work, and each one will demand various mixtures of generalist or specialized chips. So while there is growing competitive pressure on Nvidia, there are still plenty of good reasons for players like Google and Amazon to collaborate with Nvidia.
"In the next two years, there is more demand than supply, so almost none of that matters," says Moor Insights & Strategy chief analyst Patrick Moorhead. Moorhead believes that five years from now, Nvidia GPUs will still retain their 70% market share.