Picture a data center on the edge of a desert plateau. Inside, row after row of servers glow and buzz, moving air through vast cooling towers, consuming more electricity than the surrounding towns combined. This is not science fiction. It is the reality of the vast AI compute clusters, often described as AI supercomputers for their sheer scale, that train today's most advanced models. Strictly speaking, these are not supercomputers in the classical sense. Traditional supercomputers are highly specialized machines designed for scientific simulations such as climate modeling, nuclear physics, or astrophysics, tuned to run parallelized code across millions of cores. What drives AI, by contrast, are massive clusters of GPUs or custom accelerators (Nvidia H100s, Google TPUs, etc.) connected through high-bandwidth interconnects, optimized for the matrix multiplications at the heart of deep learning. They are not solving equations for weather forecasts: they are churning through trillions of tokens to predict the next word. Still, the nickname sticks, because their performance, energy demands, and costs are comparable to, or beyond, those of the world's fastest scientific machines. And the implications are just as profound. A recent study of 500 AI compute systems worldwide found that their performance is doubling every nine months, while both cost and power requirements double every year. At this pace, the frontier of artificial intelligence is not simply about better algorithms or smarter architectures. It is about who can afford, power, and cool these gigantic machines, and who cannot.

The exponential moat

When performance doubles every nine months but cost doubles every 12, you create an exponential moat: each leap forward pushes the next frontier further out of reach for all but a handful of players. This is not the familiar story of open-source vs. closed-source models: it is more fundamental.
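To see what those two doubling rates compound to, a minimal sketch of the implied growth multipliers (the three-year horizon is my own illustrative choice, not a figure from the study):

```python
# Back-of-the-envelope compounding for the study's reported rates:
# performance doubles every 9 months, cost and power every 12.
def doublings(months: float, doubling_period: float) -> float:
    """Growth multiplier after `months`, given a doubling period in months."""
    return 2 ** (months / doubling_period)

horizon = 36  # three years, chosen for illustration
perf = doublings(horizon, 9)    # 2^(36/9)  = 16x performance
cost = doublings(horizon, 12)   # 2^(36/12) = 8x cost and power

print(f"After {horizon} months: performance x{perf:.0f}, cost x{cost:.0f}")
```

The point of the arithmetic: staying at the frontier for just three years means absorbing roughly an eightfold increase in cost and power, which only a handful of organizations can do.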
If you cannot access the compute substrate (the hardware, electricity, cooling, and fabs required to train the next generation), you are not even in the race. Universities cannot keep up. Small startups cannot keep up. Even many governments cannot keep up. The study shows a stark concentration of capability: the most powerful AI clusters are concentrated in a few corporations, effectively privatizing access to the cutting edge of machine intelligence. Once compute becomes the bottleneck, the invisible hand of the market does not produce diversity. It produces monopoly.

Centralization vs. democratization

The rhetoric around AI often emphasizes democratization: tools made available to everyone, small actors empowered, creativity unleashed. But in practice, the power to shape AI's trajectory is shifting toward the owners of massive compute farms. They decide which models are feasible, which experiments get run, which approaches receive billions of tokens of training. This is not just a matter of money. It is about infrastructure as governance. When only three or four firms control the largest AI clusters, they effectively control the boundaries of the possible. If your idea requires training a trillion-parameter model from scratch, and you are not inside one of those firms, your idea remains just that: an idea.

Geopolitics of compute

Governments are beginning to notice. At the 2025 Paris AI Action Summit, nations pledged billions to upgrade national AI infrastructure. France, Germany, and the U.K. are each moving to expand sovereign compute capacity. The United States has launched large-scale initiatives to accelerate domestic chip production, and China, as always, is playing its own game, pouring resources into massive wind and solar buildouts to guarantee not only chips, but the cheap electricity to feed them. Europe, as usual, is caught in the middle.
Its regulatory frameworks may be more advanced, but its ability to deploy AI at scale depends on whether it can secure energy and compute on competitive terms. Without that, AI sovereignty is rhetoric, not reality. And yet, there is a darker irony here. Even as governments race to assert sovereignty, the real winners of the AI arms race may be corporations, not nations. Control over compute is concentrating so quickly in the private sector that we are edging closer to a scenario long depicted in science fiction: corporations wielding more power than states, not only in markets but in shaping the very trajectory of human knowledge. The balance of authority between governments and companies is shifting, and this time, it is not fiction.

Environmental reckoning

There is also a physical cost. Training one frontier model can require as much electricity as a small city uses in a year. Cooling towers demand enormous volumes of water, and while much of it is returned to the cycle, siting matters: in water-scarce regions, the strain can be significant. The carbon footprint is similarly uneven. A model trained on grids dominated by coal or gas produces orders of magnitude more emissions than one trained on grids powered by renewables. In this sense, the AI sustainability debate is really an energy debate. Models are not green or dirty by themselves. They are as green or as dirty as the electrons that feed them.

What efficiency cannot buy

Efficiency alone will not solve this problem. Each generation of chips gets faster, each architecture more optimized, but aggregate demand continues to rise faster than the gains. Every watt saved at the micro level is consumed by the macro expansion of ambition. If anything, efficiency makes the arms race worse, because it lowers the cost per experiment and encourages even more experiments. The result is a treadmill: more compute, more power, more cost, more centralization.
What to demand

If we want to avoid a future in which AI's destiny is set by the boardrooms of three companies and the ministries of two superpowers, we need to treat compute as a public concern. That means demanding:

- Transparency about who owns and operates the largest clusters.
- Auditability of usage: what models are being trained, and for what purposes.
- Shared infrastructure, funded publicly or through consortia, so that researchers and smaller firms can experiment without asking permission from trillion-dollar corporations.
- Energy accountability, requiring operators to disclose not just aggregate consumption but sources, emissions, and water footprints in real time.

The debate should not stop at which model is safest or which dataset is fair. It should extend to who controls the machines that make the models possible in the first place.

The machines behind the machines

The next control point in AI isn't software: it's hardware. The massive compute clusters that train the models are now the real arbiters of progress. They decide what's possible, what's practical, and who gets to play. If history teaches anything, it is that when power centralizes at this scale, accountability rarely follows. Without deliberate interventions, we risk an AI ecosystem where innovation is bottlenecked, oversight is optional, and the costs, from financial and environmental to human, are hidden until it is too late. The arms race of AI supercomputers is already underway. The only question is whether society chooses to watch passively as the future of intelligence is privatized, or whether we recognize that the machines behind the machines deserve just as much scrutiny as the algorithms they enable.
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on Donald Trump's recent AI-generated videos, which he (or his staff) posts on Truth Social. I also look at world models, the successor to large language models, and at OpenAI's new Sora 2 model, which is also the anchor for a new social app. Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

Trump's dangerous mix of low-grade and high-grade deepfakes

AI-generated videos are becoming a routine part of politics. That's probably not a good thing. In the past it's been fairly easy to distinguish between the real and the generated, but as video-generation models have improved, it's gotten harder. We're in a period when both high-quality AI videos and AI slop are common. The Trump administration appears to be using both types. Last weekend, Trump posted an AI video to Truth Social that looked a lot like a real Fox News segment describing a new healthcare program about to be rolled out to Americans. The video featured the president's daughter-in-law Lara Trump describing medbeds, or futuristic health pods (think the health pod in Prometheus) that can do anything from curing cancer to growing back lost limbs. The beds would be offered in new hospitals across the country, and people would use a medbed card to access them. Except magical medbeds are a fantasy, a myth spun up in years of QAnon blather on sites like 4chan. And there's no new hospital system and no membership card. For most rational people, the unlikelihood of such a thing suddenly coming into existence was the first tip-off that the video was AI-generated.
Somebody, maybe Trump, maybe a staffer, deleted the strange post a few hours later, and the White House offered no explanation. A few days later, Trump posted another, lower-quality AI-generated video. This one depicted Senate minority leader Chuck Schumer (D-NY) and House minority leader Hakeem Jeffries (D-NY) standing at a podium talking about the looming government shutdown. The AI had Schumer insulting his own party and using profanity. The AI dropped a cartoon sombrero and a mustache on Jeffries, in a nod to a Republican lie that Democrats will let the government shut down if they can't give healthcare benefits to undocumented immigrants (who are, by law, ineligible for such benefits). The Schumer video is obviously AI-generated, meant to troll Democrats during the shutdown impasse. The medbeds video is more troubling because the AI is high-quality and shows an intent to mislead. When one politician routinely uses both slop and high-realism AI to make their points, will their constituents or supporters always know the difference? As the AI improves, that becomes almost completely up to the creator of the video. For Trump, a politician with authoritarian tendencies and a reliance on propaganda, that could be a dangerous mix. AI-based video may wind up becoming the most powerful propaganda tool ever invented. After all, seeing is believing, especially when the viewer wants to believe.

World models are likely the future of AI

The AI models we've been talking about for the past three years are fundamentally language models: really big mathematical probability engines that are good at predicting the most likely word in a sequence. But two things have happened since the appearance of ChatGPT in late 2022: We've come to understand that a model that reasons primarily on words is limited in its real-life applications.
And within the AI industry, a consensus has formed that the AI labs' main trick of radically scaling up training data and computing power to make models smarter isn't achieving the big performance gains it once did. None of this should be too surprising. A machine (or a human) can only learn so much about how the world works by reading articles and books. We humans don't learn like that. We have a unique ability to quickly build a mental world model that organizes diverse modes of information, gathered using our senses, about our environment and ourselves. Researchers are working hard to develop synthetic world models, but it's a hard problem. While language models form a vast, many-dimensional vector space (a map, if you will) representing all the possible combinations of words in various contexts, a world model must form a much bigger space to represent the virtually endless combinations of visual, aural, and motion information. While language models try to guess the next word in a sequence, a world model must reason in real time about how a certain action might affect the real world. "The models are learning how to reason about physical reality," says Naeem Talukdar, CEO of the video-generation company Moonvalley. A world model inside a robot might be asked to imagine a world in which the robotic arm moves 10 degrees to the right, and then judge whether such a move is productive to the larger task at hand. It's this kind of reasoning that may allow robots to iteratively learn to complete tasks that they've not been explicitly trained to do. "The bigger these models get, and the more modalities that they learn on, just like humans, they start to be able to reason on things that they haven't seen yet before," Talukdar says. For instance, in the past robots have struggled with the deceptively simple command: clean up the dinner table and put the dishes in the dishwasher.
Without the ability to reason in real time about the physical world, the robot wouldn't be able to experimentally move from one micro-task to the next. World models are also being used to help self-driving cars train for real-world driving, and to iteratively manage unexpected or untrained-for events that occur on the roadway in real time. Additionally, augmented reality glasses such as Meta's Orion may eventually rely on world models to organize all the data collected from the various sensors in the device, such as motion and orientation sensors, depth and tracking sensors, microphones, and light sensors.

OpenAI's Sora 2 video generation model comes with a social network

Would you like to open an app and spend your free time watching AI-generated videos depicting real people, including yourself and your friends, doing silly things? OpenAI and its CEO Sam Altman think you will. The company just announced a new version of its impressive Sora video generation model, Sora 2, and a social networking app (iOS only) to go with it. OpenAI calls Sora 2 the "GPT-3.5 moment for video" (GPT-3.5 was when the output of OpenAI's language models became more coherent and relevant). The major breakthrough is native audio-video generation: creating synchronized dialogue, sound effects, and soundscapes that naturally match the visuals. The company says that Sora 2's understanding of physics is vastly improved, and that it can now accurately model complex phenomena like buoyancy in water, and intricate movements such as gymnastics routines. One person on X demonstrated how Sora 2 can now accurately simulate water being poured into a glass with realistic reflections, ripples, and refraction. The model generates 10-second clips (20 seconds for Pro users) with better consistency across multi-shot sequences, OpenAI says. The model is good at producing realistic, cinematic, and anime styles.
Importantly, a standout Cameos feature lets users insert themselves (or their friends) into videos after a onetime recording. When you set up the Sora app, you're asked to record a live video of yourself repeating random numbers or phrases and turning your head in certain poses. All this gives the app a way to authenticate that it's really you, so that only you can use your own image, and to ensure that anybody else trying to use it in a video must get your permission. Like TikTok, the Sora iOS app (invite-only for now) features an algorithmic feed, content remixing, and social sharing. OpenAI emphasizes "long-term user satisfaction" over engagement metrics, and the company says it has no plans to insert advertising into the feed, as Meta plans to do with its Vibes AI video app.

More AI coverage from Fast Company:

- AI's monstrous energy use is becoming a serious PR problem
- ChatGPT can now spend your money for you
- Shift the AI conversation from cost cutting to revenue
- AI is making your website irrelevant

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
Pepsi has a new challenge: keeping products like Gatorade and Cheetos vivid and colorful without the artificial dyes that U.S. consumers are increasingly rejecting. PepsiCo, which also makes Doritos, Cap'n Crunch cereal, Funyuns and Mountain Dew, announced in April that it would accelerate a planned shift to using natural colors in its foods and beverages. Around 40% of its U.S. products now contain synthetic dyes, according to the company. But just as it took decades for artificial colors to seep into PepsiCo's products, removing them is likely to be a multi-year process. The company said it's still finding new ingredients, testing consumers' responses and waiting for the U.S. Food and Drug Administration to approve natural alternatives. PepsiCo hasn't committed to meeting the Trump administration's goal of phasing out petroleum-based synthetic dyes by the end of 2026. "We're not going to launch a product that the consumer's not going to enjoy," said Chris Coleman, PepsiCo's senior director for food research and development in North America. "We need to make sure the product is right." Coleman said it can take two or three years to shift a product from an artificial color to a natural one. PepsiCo has to identify a natural ingredient that will have a stable shelf life and not change a product's flavor. Then it must ensure the availability of a safe and adequate supply. The company tests prototypes with trained experts and panels of consumers, then makes sure the new formula won't snag its manufacturing process. It also has to design new packaging.

Experimenting with spices to color Cheetos

Tostitos and Lay's will be the first PepsiCo brands to make the shift, with naturally dyed tortilla and potato chips expected on store shelves later this year and naturally dyed dips due to be on sale early next year.
Most of the chips, dips and salsas in the two lines already are naturally colored, but there were some exceptions. The reddish-brown tint of Tostitos Salsa Verde, for example, came from four synthetic colors: Yellow 5, Yellow 6, Red 40 and Blue 1. Coleman said the company is switching to carob powder, which gives the chips a similar color, but needed to tweak the recipe to ensure the addition of the cocoa alternative wouldn't affect the taste. In its Frito-Lay food labs and test kitchens in Plano, Texas, PepsiCo is experimenting with ingredients like paprika and turmeric to mimic the bright reds and oranges in products like Flamin' Hot Cheetos, Coleman said. The company is looking at purple sweet potatoes and various types of carrots to color drinks like Mountain Dew and Cherry 7Up, according to Damien Browne, the vice president of research and development for PepsiCo's beverage division based in Valhalla, New York. Getting the hue right is critical, since many consumers know products like Gatorade by their color and not necessarily their name, Browne said. "We eat with our eyes," he said. "If you look at a plate of food, it's generally the different kinds of colors that will tell you what you would like or not."

Consumer demand goes from a whisper to a roar

When the Pepsi-Cola Company was founded in 1902, the absence of artificial dyes was a point of pride. The company marketed Pepsi as "The Original Pure Food Drink" to differentiate the cola from rivals that used lead, arsenic and other toxins as food colorants before the U.S. banned them in 1906. But synthetic dyes eventually won over food companies. They were vibrant, consistent and cheaper than natural colors. They are also rigorously tested by the FDA. Still, PepsiCo said it started seeing a small segment of shoppers asking for products without artificial colors or flavors more than two decades ago. In 2002, it launched its Simply line of chips, which offer natural versions of products like Doritos.
A dye-free organic Gatorade came out in 2016. "We're looking for those little signals that will become humongous in the future," Amanda Grzeda, PepsiCo's senior director of global sensory and consumer experience, said of the company's close attention to consumer preferences. Grzeda said the whisper PepsiCo detected in the early 2000s has become a roar, fueled by social media and growing consumer interest in ingredients. More than half of the consumers PepsiCo spoke to for a recent internal study said they were trying to reduce their consumption of artificial dyes, Grzeda said.

Synthetic and natural colors are in FDA's hands

Some states, including West Virginia and Arizona, have banned artificial dyes in school lunches. But Browne said he thinks consumers are driving the push to overhaul processed foods. "Consumers are definitely leading, and I think what we need to do is have the regulators catching up, allowing us to approve new natural ingredients to be able to meet their demand," he said. The U.S. Food and Drug Administration has said it's expediting approval of natural additives after calling on companies to halt their use of synthetic dyes. In May, the FDA approved three new natural color additives, including a blue color derived from algae. In July, the agency approved gardenia blue, which is derived from a flowering evergreen. The FDA banned one petroleum-based dye, Red 3, in January because it was shown to cause cancer in lab rats. And in September, the agency proposed a ban on Orange B, a synthetic color that hasn't been used in decades. Six synthetic dyes remain FDA-approved and widely used, despite mixed studies that show they may cause neurobehavioral problems in some children. Red 40, for example, is used in 25,965 food and beverage items on U.S.
store shelves, according to the market research firm NIQ. But even if decades of research has shown that synthetic colors are safe, PepsiCo has to weigh public perceptions, Grzeda said. "We could just blindly follow the science, but it probably would put us at odds with what our consumers believe and perceive in the world," she said.

Passing taste and texture tests

PepsiCo also has to balance the needs of consumers who don't want their favorite snacks and drinks to change or get more expensive because of the costs of natural dyes. NIQ data shows that unit sales of products advertised as free of artificial colors fell sharply in 2023 as prices rose. Susan Mazur-Stommen, a small business owner in Hinton, West Virginia, picked up some Simply brand Cheetos Puffs recently at a convenience store because they were the only variety available. She found the texture to be much different from regular Cheetos Puffs, she said, and their pallid color made them less appetizing. Mazur-Stommen said she agrees with the move away from petroleum-based dyes, but it's not a critical issue for her. "What I am looking for is the original formulation," she said. Ultimately, PepsiCo does not want customers to have to choose between natural colors and familiar flavors and textures, Grzeda said. "That's where it requires the deep science and ingredients and magic," she said.

Durbin reported from Detroit. Dee-Ann Durbin and Ted Shaffrey, Associated Press