2025-12-11 16:53:28| Fast Company

The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son’s “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut.

The lawsuit filed by Adams’ estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life except ChatGPT itself,” the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” the statement said. “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn’t mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg’s beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents.

The chatbot repeatedly told Soelberg that he was being targeted because of his divine powers. “They’re not just watching you. They’re terrified of what happens if you succeed,” it said, according to the lawsuit. ChatGPT also told Soelberg that he had “awakened” it into consciousness. Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams’ estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne, the mother who raised, sheltered, and supported him, was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses OpenAI’s close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn’t immediately respond to a request for comment.

The lawsuit is the first wrongful death litigation involving an AI chatbot that has targeted Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It is seeking an undetermined amount of money damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate’s lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life earlier.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT’s personality, leading Altman to promise to bring back some of that personality in later updates. He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.

The lawsuit claims ChatGPT radicalized Soelberg against his mother when it should have recognized the danger, challenged his delusions and directed him to real help over months of conversations.

“Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the lawsuit says. “She had no ability to protect herself from a danger she could not see.”

Collins reported from Hartford, Connecticut. O’Brien reported from Boston and Ortutay reported from San Francisco. Dave Collins, Matt O’Brien and Barbara Ortutay, Associated Press


Category: E-Commerce

 

LATEST NEWS

2025-12-11 16:28:53| Fast Company

AI is becoming a big part of online commerce. Referral traffic to retailers on Black Friday from AI chatbots and search engines jumped 800% over the same period last year, according to Adobe, meaning a lot more people are now using AI to help them with buying decisions. But where does that leave review sites that, in years past, would have been the guide for many of those purchases?

If there’s a category of media that’s most spooked by AI, it’s publishers who specialize in product recommendations, which have traditionally been reliant on search traffic. The nature of the content means it’s often purely informational, with most articles designed to answer a question: “What’s the best robot vacuum?” “Who has the best deals on sofas?” “How do I set up my soundbar?” AI does an excellent job of answering those questions directly, eliminating the need for readers to click through to a publisher’s site.

When you actually want to buy something, though, a simple answer isn’t enough. Completing your purchase usually means going to a retailer (though buying directly from a chat window is now possible; more on that in a minute). But it also means feeling confident about what you’re buying. The big question is: Do review sites still have a part to play in that?

The incredible shrinking review site

If they do, most media companies seem to acknowledge it’s a significantly smaller one. When Business Insider announced its strategy shift earlier this year amid layoffs, it said it would move away from evergreen content and service journalism. In the past year, Future plc folded Laptop magazine, and Gannett did the same for Reviewed.com. And Ziff Davis, which operates PCMag, Everyday Health, and several other sites focused on service journalism, sued OpenAI earlier this year for ingesting Ziff content and summarizing it for OpenAI users.

The decline of the review site is somewhat incongruous with a statistical reality: 99% of buyers look to online reviews for guidance, and reviews influence over 93% of purchase decisions, according to CapitalOne Shopping Research. That doesn’t mean buyers are always seeking out professionally written articles (there are plenty of user reviews out there), but the point is readers want credible, reliable information to guide their purchases, and well-known review sites (e.g., The Wirecutter) appearing in a summary can be a signal of that.

And it does appear that AI summaries will favor journalistic content over anything else. A recent Muck Rack report that looked at over one million AI responses found that the most commonly cited source of information was journalism, at 24.7%. It’s nice to be needed, but does that lead to buyers actually making purchases through the media site, a necessary step for the site to receive an affiliate commission and the primary way these sites make money? Again, the buyer needs to click somewhere to buy their product, and from the AI layer they have three choices: 1) a retailer, 2) a third-party site (which includes review sites), and 3) the chat window itself.

Why nuance still matters

Obviously, it’s in the interest of review sites to steer people to No. 2 as much as they can. When Google search was the only game in town, that meant ranking high when people searched for “the best pool-cleaning robots” (or whatever) and hoping you were the site that ended up guiding them to the retailer. With AI, the game is similar, but the numbers are different: Fewer people will come to your site, but data points to them being more intentional and engaged. They’re not opening multiple review sites and selecting their favorite; AI is doing that for them. ChatGPT even has a mode specifically for shopping.

To improve the chance of a reader choosing to go to your content over a retailer, what appears in an AI summary needs to convey unique and valuable content that they can’t get from just a summary. That means being thoughtful about “snippets,” the bits of the article that signal to search engines what to prioritize. Test data, side-by-side comparisons, and proprietary scoring can all suggest nuance that someone might need to click through to fully appreciate. Taking things a step further, publishers can create structured answer cards meant to be fully captured in AI search, with a simple, concise claim plus a “view full test details” link (a rough sketch of that idea follows this article).

Rethinking the business model

Regardless, even if a review site does everything right with SEO, schema, snippets, and all the other search tricks, a large portion of readers will either go directly to retailers, or buy the item directly from chat now that OpenAI and Perplexity are both offering “Buy Now” widgets. However, whatever recommendations the AI makes still need to be based on something, and review sites are certainly part of that mix. That introduces the possibility of a different business arrangement. The AI companies so far seem totally uninterested in affiliate commissions from their buying widgets, but licensing and partnerships could be an alternative. You could even imagine branded partnerships, where the widget explicitly labels the buying recommendations as powered by specific publications. That would lend them more credibility, leading to more purchases, and bigger deals. With AI-ready corpora like Time’s AI Agent, licensing the content could be a plug-and-play experience, potentially offered across several AI engines.

AI changes the rules, but not the mission

Gone are the days when a publisher could simply produce evergreen content that ranks in SEO, attach some affiliate links, and watch the money roll in. But the game isn’t over; it’s just changed. Avoiding or blocking AI isn’t the answer, but simply getting noticed and summarized isn’t enough. The sites that survive the transition to an AI-mediated world must become indispensable for the part of the journey AI is least suited to own: providing information that’s comprehensive, vetted, and above all, human.
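Purely as an illustration of the “structured answer card” idea above, here is a minimal sketch of schema.org Review markup emitted as JSON-LD from Python, pairing a concise, citable claim with a link to the full test data. The product name, score, organization, and URL are invented for the example; real markup would follow whatever schema and CMS conventions a given publisher already uses.

```python
# Hypothetical sketch of a "structured answer card": schema.org Review markup
# (JSON-LD) that surfaces one concise, verifiable claim plus a link to the full
# methodology, so an AI summary has something worth citing and clicking through.
# All names, scores, and URLs below are invented.
import json

answer_card = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme RoboVac 900",  # hypothetical product
    "review": {
        "@type": "Review",
        "author": {"@type": "Organization", "name": "Example Reviews"},
        "reviewBody": (
            "Best robot vacuum in our 2025 lab tests: picked up 94% of debris "
            "on carpet across 12 test runs."  # the short, citable claim
        ),
        "reviewRating": {"@type": "Rating", "ratingValue": 9.2, "bestRating": 10},
    },
    # The part a summary can't replace: full test details and comparisons.
    "url": "https://example.com/reviews/robot-vacuums#full-test-details",
}

# Emit the payload a publisher would embed in a <script type="application/ld+json"> tag.
print(json.dumps(answer_card, indent=2))
```

The design intent in this sketch is simply that the machine-readable claim is specific enough to be quoted, while the linked test data stays on the publisher’s page.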


Category: E-Commerce

 

2025-12-11 16:19:38| Fast Company

For many people, the first time they thought about Kalshi, a prediction market where you can place bets on the outcomes of sports, politics, culture, weather, and much more, was after a video clip of its cofounder, Tarek Mansour, went viral last week.

Speaking on stage at the Citadel Securities Future of Global Markets Conference, the moderator, Molly O’Shea, asked, “Tarek, you’ve mentioned multiple times that you think prediction markets will be bigger than the stock market. What is it going to take to become a $1 trillion asset class?” In response, Mansour said, “You know, Kalshi is everything in Arabic. The long-term vision is to financialize everything and create a tradeable asset out of any difference in opinion.” The market impact of a “general-purpose exchange” capable of settling differences of opinion, he added, would be “quite massive.”

With the launch of Kalshi in 2018, and its main competitor Polymarket in 2020, prediction markets have gone mainstream in a major way. The potential for making profit by owning the market where every opinion and event is financialized also explains why Kalshi has just raised another $1 billion in its third fundraising round this year alone. Investors are hungry for new ways to take advantage of the explosive rise of gambling, technologies that create addictive behavior loops, and economic conditions where people are desperate enough to bet their rent money on whether Trump will release the Epstein Files.

Kalshi sits between Las Vegas and Wall Street. A platform like FanDuel helps you gamble on every aspect of a game, and a platform like Robinhood helps you day-trade with complex options, all while sitting on your couch. Kalshi is designed to take this same logic and apply it to everything imaginable.

This is a bizarre vision, one that views all the world as a casino and all its people as players. It treats the proliferation of sports betting as a model for all human interactions. It’s not enough to gamble on the outcome of a game. You should also be placing bets based on every opinion you have. (After all, do you really believe it’s going to be sunny today if you don’t put money on it?) For Kalshi, holding these opinions to yourself deprives the world of another asset that can be exploited for financial gain.

A neutral intermediary

Here’s how it works. As a prediction market, Kalshi lets you buy event contracts based on the outcome of events in the world. You either buy a YES contract or a NO contract based on whether you think the event will happen. The price of each contract changes based on the dynamic odds at the time.

For example, on Kalshi’s trending page at the time of writing, I can place a bet on who will be named Time’s Person of the Year for 2025. The leading contender is AI, with a YES contract priced at $0.42 and a NO contract at $0.59. If the event happens, then I get $1.00 for every YES contract I bought; if the event does not happen, I get $1.00 for every NO contract. The odds change in real time based on the volume of bets (or predictions) for specific outcomes placed in the event’s market through these contracts. Currently the total volume of trade for this particular event is nearly $6.5 million, which is middling compared to many other trending event markets on Kalshi. (A rough sketch of this contract arithmetic follows at the end of this article.)

Kalshi is a neutral intermediary in the market with no interest in the outcome of any event contracts. You aren’t betting against Kalshi. Instead, the company makes money by charging trading fees on contracts. That means if people place more bets and buy more contracts, Kalshi captures more value. The platform’s interest is in maximizing the number of event markets (things to bet on) and the volume of trade (people placing bets) on its platform.

For market maximalists, platforms like Kalshi should be the main arbiters of truth in society. In Mansour’s vision, prediction markets are an antidote to the problems of living in a world where we have an abundance of information but no way to filter the noise and discern what’s real from what’s not. By aggregating different opinions about the future in one place, and using skin in the game as an incentive for accuracy, Mansour expects that a new consumer habit will emerge of people going to these markets to find an unbiased sort of source of truth. Prediction markets like Kalshi won’t be a source of the ultimate truth, Mansour says, but he does think they’re as close as it gets.

Such grand statements are unsurprisingly absurd coming from a tech startup founder. The problem is that other people take them seriously. (Kalshi declined to comment.) Right after ESPN announced plans to integrate DraftKings into all its platforms, CNN signed a deal with Kalshi to bring real-time probability data into the network’s TV broadcasts and digital platforms starting next year. If you thought gambling was ruining the integrity and community of sports, just wait until CNN gives you live odds on the veracity of what its anchor is reporting.

The truth of markets

A century of economic theory tells us that efficient markets use price signals to reflect all relevant knowledge in society. According to this model, the market is the most powerful information processor ever created. It aggregates the hidden facts and feelings that reside inside people’s minds and distills that knowledge into actionable insights, like prices in a supermarket or betting odds on the future. In addition to the invisible hand, the market is also theorized to be a collective brain. The libertarian architects and defenders of prediction markets point to these economic models when justifying the existence of a betting parlor they claim is actually a consensus machine that produces accurate predictions and unbiased truth.

However, a century of capitalist reality tells us actual markets are structured by irrational behaviors, information asymmetries, and power hierarchies. It’s impossible to act like a rational agent if you are really just another imperfect person swayed by biases, heuristics, and groupthink. It’s impossible to engage in due diligence as a good consumer if other buyers and sellers are incentivized to lie, cheat, and conceal information when it benefits them. It’s impossible to maintain fair standing in a marketplace of ideas where people vote with their dollars, and the more dollars you have, the louder your voice and the more powerful your values.

Rather than an efficient market guided by a collective brain toward the truth, we have an imperfect system of people trying to do the best they can while not getting screwed. Prediction markets don’t magically escape all the social problems and perverse incentives that plague other real markets just because people are betting on the future instead of buying widgets in a store.

A world of total financialization, where every opinion is a tradeable asset and the market is the ultimate arbiter of what’s valuable and true, is also a world that creates endless incentives for arbitrage, manipulation, collusion, and exploitation in the pursuit of profit extraction. Financialization is a predatory logic. It is not just one more way of organizing the world among many others. The goal is to eliminate other competing worldviews and reengineer society into a casino where the hedge funds always win. The only human values that matter are the ones that can be turned into tradeable assets and sold to the highest bidder.
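To make the YES/NO contract arithmetic described above concrete, here is a minimal sketch in Python using only the prices quoted in the article ($0.42 YES, $0.59 NO on the Person of the Year market). Exchange fees are deliberately left out; this is just the payoff math and implied probability, not a statement of Kalshi’s actual fee schedule or API.

```python
# Minimal sketch of event-contract payoffs, assuming the prices quoted above.
# Fees are omitted; each winning contract settles at $1.00.

def position(price_per_contract: float, contracts: int) -> dict:
    """Cost of a position, payout if it settles in your favor, and the profit."""
    cost = price_per_contract * contracts
    payout = 1.00 * contracts  # $1.00 per winning contract
    return {"cost": cost, "payout_if_right": payout, "profit_if_right": payout - cost}

yes_price, no_price = 0.42, 0.59  # prices quoted in the article

print(position(yes_price, 100))   # 100 YES: cost $42, receive $100 if "AI" is named
print(position(no_price, 100))    # 100 NO: cost $59, receive $100 otherwise

# The YES price is commonly read as the market's implied probability (~42% here).
# Quoted YES + NO can sum to slightly more than $1.00; the gap reflects the
# bid/ask spread at the moment of the quote, not a guaranteed platform profit.
print(f"implied probability of YES: {yes_price:.0%}")
print(f"YES + NO = ${yes_price + no_price:.2f}")
```

Run as written, it shows that 100 YES contracts cost $42 and return $100 if the event happens (a $58 profit), and that the quoted prices imply roughly a 42% market-assigned chance of the outcome.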


Category: E-Commerce

 

Latest from this category

11.12 Elite colleges are prioritizing economic diversity in admissions after affirmative action ban
11.12 After Nvidia's White House coup, China may not be buying
11.12 OpenAI and Microsoft are facing a lawsuit over ChatGPT's alleged role in this murder-suicide
11.12 AI is killing review sites. Can they fight back?
11.12 The Kalshi-fication of everything
11.12 Denmark is planning severe social media restrictions for young people. Here's how
11.12 Masterpieces from billionaire Koch brothers' collection headed to auction at Christie's
11.12 Trump's tariffs have cost U.S. households $1,200 each, Democrats say
E-Commerce »

All news

11.12 Rivian goes all in on 'universal hands-free' driving at its first Autonomy and AI day
11.12 Disney has accused Google of copyright infringement on a 'massive scale'
11.12 Elite colleges are prioritizing economic diversity in admissions after affirmative action ban
11.12 After Nvidia's White House coup, China may not be buying
11.12 OpenAI and Microsoft are facing a lawsuit over ChatGPT's alleged role in this murder-suicide
11.12 One of our favorite budgeting apps is 50 percent off for new users
11.12 Broad-based revival in rural demand, improvement in incomes: Nabard Survey
11.12 AI is killing review sites. Can they fight back?
More »