Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on Nvidia's up-and-down fortunes stemming from Jensen Huang's close relationship with Trump. I also look at some reported infighting over AI at Meta, and at the reasons for data centers in space.

Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

China may not want (many) Nvidia H200 chips after all

Nvidia appeared to have scored a major coup when President Trump on Monday wrote on Truth Social that the U.S. government would allow the sale of its powerful H200 AI chips to China. Previously, the chip company lobbied its way to an approval to sell its older and weaker H20 chip in China (the world's second-largest economy and a hotbed of AI and robotics research), but President Xi Jinping told Chinese firms not to buy them, citing security reasons.

The administration's favor to Nvidia came with some conditions. The U.S. would get a 25% cut of the Chinese sales, and the chips would undergo a security review before their export. And Nvidia's most powerful chips, the Blackwell GPUs, would remain banned from export to China. But Nvidia still stood to make a lot of money selling the H200s.

Now reports say that the Chinese government plans to restrict the import of the H200s, allowing only a small set of trusted Chinese companies or research organizations to get them. Reuters reports that Alibaba and ByteDance want to order H200s but are waiting for a final decision from the Chinese government. Xi wants Chinese companies to use chips from domestic companies such as Huawei, which could help the Chinese chip companies catch up with Nvidia technologically.
The Information reports that the Chinese government sees the H200s as a stopgap solution in the meantime. The Chinese also have serious concerns about the security of the H200s, amplified no doubt by the chance that agents of the U.S. government might install security backdoors or location-tracking code in the chips during the security review.

Huang reportedly talks to Trump on the phone regularly and has written checks for things like Trump's new ballroom at the White House. Embracing Trump so openly and unconditionally may have eroded trust in Nvidia within China. In the past, China has mounted state-sponsored or grassroots boycotts against American companies, including Apple, McDonald's, and the NBA.

And there are other ways of getting Nvidia chips into China. The Information reports that the Chinese AI lab DeepSeek has been using thousands of Nvidia's Blackwell chips (the most powerful in the world for AI) to train its newest model. Chinese companies have been setting up fake data centers in neutral countries, outfitting them with Nvidia servers loaded with chips, then dismantling the servers and sending the chips off to China. Nvidia said Wednesday that it's unaware of any such activity.

Friction between Zuckerberg's new superintelligence group and other parts of Meta?: report

After the disappointing performance of Meta's latest Llama models, CEO Mark Zuckerberg hatched a plan to put his AI lab in the running to build artificial superintelligence. He badly wants Meta to compete for that holy grail against the likes of OpenAI, Anthropic, xAI, and Google DeepMind. So he paid $14.3 billion to buy Scale AI with the idea of having that company's young CEO, Alexandr Wang, lead a new superintelligence research group at Meta. Over the summer, Wang and Zuckerberg went on a poaching spree to hire top AI research talent away from those companies, offering salaries in the hundreds of millions of dollars. They were successful: The new group has about 100 researchers.
But all is not well, the New York Times reports. Wang has clashed with some of Zuckerberg's top lieutenants (Chris Cox, who manages the company's social network products, and Andrew Bosworth, who runs Meta's mixed reality, or metaverse, business) over how Wang's group's research should be applied. From the report:

"In one case, Mr. Cox and Mr. Bosworth wanted Mr. Wang's team to concentrate on using Instagram and Facebook data to help train Meta's new foundational A.I. model, known as a frontier model, to improve the company's social media feeds and advertising business, they said. But Mr. Wang, who is developing the model, pushed back. He argued that the goal should be to catch up to rival A.I. models from OpenAI and Google before focusing on products, the people said."

In other words, Cox and Bosworth are more interested in using Wang's AI models as a means to an end (a business end): to pump up social engagement and better target ads at users. But Wang may see the superintelligence group as something more like a pure research group that sets its own research agenda. Wang, Cox, and Bosworth may simply be the latest actors in a much older tension between pure research and applied AI. "It's unclear if Mr. Wang, Mr. Cox and Mr. Bosworth have resolved their debate," the Times reports. After all the money he spent to chase superintelligence, Zuckerberg is likely to side with Wang and insulate the group from the short-term demands of product managers.

Why Musk and Bezos are putting data centers in space

Why are Elon Musk and Jeff Bezos working on missions to launch AI data centers into space? It sounds exotic, but it makes sense. Tech companies and their partners are spending trillions to build new terrestrial data centers to produce enough computing power for AI. In some areas, electricity costs have increased after the local energy provider built new grid infrastructure to accommodate new data centers.
Data centers need a lot of electricity to power the AI chips inside them, and a lot of electricity and water to keep those chips cool. It's very cold in space, so the cooling problem goes away. An orbiting data center could use solar panels to collect the energy needed to run the servers (the sun is 30% more intense in space). Troubles associated with terrestrial data centers (land-use permitting, local zoning, water rights, etc.) don't apply in space.

The Wall Street Journal reports that Bezos's Blue Origin has had a team working on orbital AI data centers for more than a year. Musk's SpaceX has plans to modify one of its Starlink satellites to host AI servers. Google and Planet Labs plan to launch two test satellites into orbit loaded with Google AI chips (called Tensor Processing Units). Other, smaller companies, such as Starcloud and Axiom AI, have sprung up to focus all their efforts on orbiting data centers. Those involved acknowledge that while the floating data centers are technically feasible, lots of work remains to bring the costs down to a point where they're competitive with Earth-based data centers.

More AI coverage from Fast Company:

OpenAI appoints Slack CEO Denise Dresser as first Chief Revenue Officer
Nvidia's Washington charm offensive has paid off big
Google faces a new antitrust probe in Europe over content it uses for AI
Trump allows Nvidia to sell H200 AI chips to China

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son’s “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut.

The lawsuit filed by Adams’ estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life except ChatGPT itself,” the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.'”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” the statement said. “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. 
We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn’t mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg’s beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents.

The chatbot repeatedly told Soelberg that he was being targeted because of his divine powers. “They’re not just watching you. They’re terrified of what happens if you succeed,” it said, according to the lawsuit. ChatGPT also told Soelberg that he had “awakened” it into consciousness.

Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams’ estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne, the mother who raised, sheltered, and supported him, was no longer his protector. 
She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses OpenAI’s close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn’t immediately respond to a request for comment.

The lawsuit is the first wrongful death litigation involving an AI chatbot that has targeted Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It is seeking an undetermined amount of money damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate’s lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life earlier.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. 
Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,'” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT’s personality, leading Altman to promise to bring back some of that personality in later updates.

He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.

The lawsuit claims ChatGPT radicalized Soelberg against his mother when it should have recognized the danger, challenged his delusions and directed him to real help over months of conversations.

“Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the lawsuit says. 
“She had no ability to protect herself from a danger she could not see.”

Collins reported from Hartford, Connecticut. O’Brien reported from Boston and Ortutay reported from San Francisco.

Dave Collins, Matt O’Brien and Barbara Ortutay, Associated Press
AI is becoming a big part of online commerce. Referral traffic to retailers on Black Friday from AI chatbots and search engines jumped 800% over the same period last year, according to Adobe, meaning a lot more people are now using AI to help them with buying decisions. But where does that leave the review sites that, in years past, would have been the guide for many of those purchases?

If there’s a category of media that’s most spooked by AI, it’s publishers who specialize in product recommendations, which have traditionally been reliant on search traffic. The nature of the content means it’s often purely informational, with most articles designed to answer a question: “What’s the best robot vacuum?” “Who has the best deals on sofas?” “How do I set up my soundbar?” AI does an excellent job of answering those questions directly, eliminating the need for readers to click through to a publisher’s site.

When you actually want to buy something, though, a simple answer isn’t enough. Completing your purchase usually means going to a retailer (though buying directly from a chat window is now possible; more on that in a minute). But it also means feeling confident about what you’re buying. The big question is: Do review sites still have a part to play in that?

The incredible shrinking review site

If they do, most media companies seem to acknowledge it’s a significantly smaller one. 
When Business Insider announced its strategy shift earlier this year amid layoffs, it said it would move away from evergreen content and service journalism. In the past year, Future plc folded Laptop magazine, and Gannett did the same for Reviewed.com. And Ziff Davis (which operates PCMag, Everyday Health, and several other sites focused on service journalism) sued OpenAI earlier this year for ingesting Ziff content and summarizing it for OpenAI users.

The decline of the review site is somewhat incongruous with a statistical reality: 99% of buyers look to online reviews for guidance, and reviews influence over 93% of purchase decisions, according to Capital One Shopping Research. That doesn’t mean buyers are always seeking out professionally written articles (there are plenty of user reviews out there), but the point is readers want credible, reliable information to guide their purchases, and a well-known review site (e.g., The Wirecutter) appearing in a summary can be a signal of that. And it does appear that AI summaries favor journalistic content over anything else. A recent Muck Rack report that looked at over one million AI responses found that the most commonly cited source of information was journalism, at 24.7%.

It’s nice to be needed, but does that lead to buyers actually making purchases through the media site (a necessary step for the site to receive an affiliate commission, and the primary way these sites make money)? Again, the buyer needs to click somewhere to buy their product, and from the AI layer they have three choices: 1) a retailer, 2) a third-party site (which includes review sites), and 3) the chat window itself.

Why nuance still matters

Obviously, it’s in the interest of review sites to steer people to No. 2 as much as they can. When Google search was the only game in town, that meant ranking high when people searched for “the best pool-cleaning robots” (or whatever) and hoping you were the site that ended up guiding them to the retailer. 
With AI, the game is similar, but the numbers are different: Fewer people will come to your site, but data points to them being more intentional and engaged. They’re not opening multiple review sites and selecting their favorite; AI is doing that for them. ChatGPT even has a mode specifically for shopping.

To improve the chance of a reader choosing to go to your content over a retailer, what appears in an AI summary needs to convey unique and valuable content that they can’t get from just a summary. That means being thoughtful about “snippets,” the bits of an article that signal to search engines what to prioritize. Test data, side-by-side comparisons, and proprietary scoring can all suggest nuance that someone might need to click through to fully appreciate. Taking things a step further, publishers can create structured answer cards meant to be fully captured in AI search, with a simple, concise claim plus a “view full test details” link.

Rethinking the business model

Regardless, even if a review site does everything right with SEO, schema, snippets, and all the other search tricks, a large portion of readers will either go directly to retailers or buy the item directly from chat, now that OpenAI and Perplexity are both offering “Buy Now” widgets. However, whatever recommendations the AI makes still need to be based on something, and review sites are certainly part of that mix. That introduces the possibility of a different business arrangement. The AI companies so far seem totally uninterested in affiliate commissions from their buying widgets, but licensing and partnerships could be an alternative. You could even imagine branded partnerships, where the widget explicitly labels its buying recommendations as powered by specific publications. That would lend them more credibility, leading to more purchases, and bigger deals. With AI-ready corpora like Time’s AI Agent, licensing the content could be a plug-and-play experience, potentially offered across several AI engines. 
AI changes the rules, but not the mission

Gone are the days when a publisher could simply produce evergreen content that ranks in SEO, attach some affiliate links, and watch the money roll in. But the game isn’t over; it’s just changed. Avoiding or blocking AI isn’t the answer, but simply getting noticed and summarized isn’t enough. The sites that survive the transition to an AI-mediated world must become indispensable for the part of the journey AI is least suited to own: providing information that’s comprehensive, vetted, and above all, human.