Hello again, and welcome back to Fast Company's Plugged In. A February 9 blog post about AI, titled "Something Big Is Happening," rocketed around the web this week in a way that reminded me of the golden age of the blogosphere. Everyone seemed to be talking about it, though as was often true back in the day, its virality was fueled by a powerful cocktail of adoration and scorn. Reactions ranged from "Send this to everyone you care about" to "I don't buy this at all." The author, Matt Shumer (who shared his post on X the following day), is the CEO of a startup called OthersideAI. He explained he was addressing it to "my family, my friends, the people I care about who keep asking me 'so what's the deal with AI?' and getting an answer that doesn't do justice to what's actually happening." According to Shumer, the deal with AI is that the newest models, specifically OpenAI's GPT-5.3 Codex and Anthropic's Claude Opus 4.6, are radical improvements on anything that came before them. And that AI is suddenly so competent at writing code that the whole business of software engineering has entered a new era. And that AI will soon be better than humans at the core work of an array of other professions: law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. By the end of the post, with a breathlessness that reminded me of the Y2K bug doomsayers of 1999, Shumer is advising readers to build up savings, minimize debt, and maybe encourage their kids to become AI wizards rather than focus on college in the expectation it will lead to a solid career. He implies that anyone who doesn't get ahead of AI in the next six months may be headed for irrelevance. The piece, which Shumer told New York's Benjamin Hart he wrote with copious assistance from AI, is not without its points.
Some people who are blasé about AI at the moment will surely be taken aback by its impact on work and life in the years to come, which is why I heartily endorse Shumer's recommendation that everyone get to know the technology better by devoting an hour a day to messing around with it. Many smart folks in Silicon Valley share Shumer's awe at AI's recent ginormous leap forward in coding skills, which I wrote about last week. Wondering what will happen if it's replicated in other fields is an entirely reasonable mental exercise. In the end, though, Shumer would have had a far better case if he'd been 70% less over the top. (I should note that the last time he was in the news, it was for making claims involving the benchmark performance of an AI model he was involved with that turned out not to be true.) His post suffers from a flaw common in the conversation about AI: It's so awestruck by the technology that it refuses to acknowledge the serious limitations it still has. For instance, Shumer suggests that hallucination (AI stringing together sequences of words that sound factual but aren't) is a solved problem. He writes that a couple of years ago, ChatGPT "confidently said things that were nonsense" and that "in AI time, that is ancient history." It's true that the latest models don't hallucinate with anything like the abandon of their predecessors. But they still make stuff up. And unlike earlier models, their hallucinations tend to be plausible-sounding rather than manifestly ridiculous, which is a step in the wrong direction. The same day I read Shumer's piece, I chatted with Claude Opus 4.6 about newspaper comics, a topic I often use to assess AI since I know enough about it to judge responses on the fly, and it was terrible about associating cartoonists with the strips they actually worked on. The more we talked, the less accurate it got.
At least it excelled at acknowledging its errors: When I pointed one out, it told me, "So basically I had fragments of real information scrambled together and presented with false confidence. Not great." After botching another of my comics-related queries, Claude said, "I'm actually getting into shaky territory here and mixing up some details," and asked me to help steer it in the right direction. That's an intriguing glimmer of self-awareness about its own tendency to fantasize, and progress of a sort. But until AI stops confabulating, describing it as being "smarter than most PhDs," as Shumer does, is silly. (I continue to believe that human capability is not a great benchmark for AI, which is already better than we are at some things and may remain permanently behind in others.) Shumer also gets ahead of himself in his assumptions about where AI might be in the short-term future when it comes to being competently able to replace human thought and labor. Writing about the kind of complex work tasks he recommends throwing AI's way as an experiment, he says, "If it even kind of works today, you can be almost certain that in six months it'll do it near perfectly." That seems extraordinarily unlikely, given that all kinds of generative AI have been stuck in the kind-of-works era for years now. A decent rule of thumb: Don't believe AI will be able to do something well until it actually does. Ultimately, the takeaway from Shumer's post I'll remember most isn't anything he wrote. In the spirit of AI experimentation, I fed his piece to ChatGPT, Gemini, and Claude with the prompt "Give me an analysis/critique of this essay. Tell me whether it's overly cautious, not cautious enough, what your own take is on the subjects discussed, etc." I was prepared for them all to respond with something facile and shallow, more a bullet-point summary than anything. ChatGPT and Gemini lived up to those expectations.
However, Claude's swing at the topic, which it popped out in seconds, and which you can read in its entirety here, startled me. It took issue with some of the same things about the piece that bothered me, but also raised other weaknesses I hadn't considered. I felt smarter after reading it, which still isn't what I expect from AI prose. A few nuggets:

"Coding is a uniquely favorable domain for AI: the outputs are verifiable, the feedback loops are tight, the training data is abundant, and the environment is fully digital. Lawyering, medicine, and management involve ambiguity, institutional context, human relationships, regulatory accountability, and adversarial dynamics that are qualitatively different. He acknowledges this briefly and then waves it away."

"Yes, many people tried GPT-3.5 and wrote off the whole field. That's a real problem. But plenty of serious, current users, including people paying for the best models, still encounter significant limitations: hallucination (reduced but not eliminated), brittleness on novel problems, inability to maintain coherent context over truly long projects, poor calibration about its own uncertainty, and difficulty with tasks requiring real-world grounding. Shumer treats all skepticism as outdated, which is intellectually dishonest."

"He also doesn't seriously engage with the possibility that the economic incentives might not play out as smoothly as 'AI can do the job, therefore the job disappears.' Adoption friction is real. Liability concerns are real. Organizational inertia is real. The history of technology suggests that even transformative tools take much longer to restructure industries than the pure capability timeline would suggest. Electricity took decades to reshape manufacturing even after it was clearly superior to steam."

"I think the underlying signal Shumer is pointing at is real. AI capabilities are improving faster than most people outside the industry appreciate, and the gap between public perception and current reality is genuinely large. He's right that most people should be engaging with these tools more seriously than they are. But I think the essay is doing a thing that's very common in Silicon Valley: mistaking the potential trajectory for the inevitable trajectory, and compressing timelines based on vibes and extrapolation rather than grappling seriously with the many real-world frictions that slow adoption. The piece reads like it was written in the immediate emotional aftermath of being impressed by a new model release, and those moments tend to produce overconfidence about pace."

To recap: On the same day that I found Claude Opus 4.6 writing something about Shumer's piece that was not only coherent but insightful, it also devolved into a hallucinogenic fit. That's just how AI is these days: amazing and terrible at the same time. Somehow, that reality is tough for many observers to accept. But any analysis that ignores it is at risk of badly misjudging what will come next.

You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you, or if you're reading it on fastcompany.com, you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I'm also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.

More top tech stories from Fast Company

Developers are still weighing the pros and cons of AI coding agents
The tools continue to struggle when they need to account for large amounts of context in complex projects.

AI expert predicted AI would end humanity in 2027. Now he's changing his timeline
The former OpenAI employee has rescheduled the end of the world.
Discord is asking for your ID. The backlash is about more than privacy
Critics say mandatory age verification reflects a deeper shift toward routine identity checks and digital surveillance.

A Palantir cofounder is backing a group attacking Alex Bores over his work with . . . Palantir
Current and former employees tell Fast Company the ad campaign is driven by opposition to the Democratic hopeful's support for AI regulation.

Facebook's new profile animation feature is Boomerang for the AI era
The feature is part of a wider push toward AI content in Meta apps.

MrBeast's business empire stretches far beyond viral YouTube videos
Banking apps, snack foods, streaming hits, and data tools are all part of Jimmy Donaldson's growing $5 billion portfolio under Beast Industries.
Once the king of the chicken sandwich, Popeyes faces a lot of competition for the crown these days. Ascendant fried chicken hotspot Raising Cane's exploded in growth last year, knocking off KFC to become the third most popular fast-food chicken chain in the U.S. behind Chick-fil-A and Popeyes. Meanwhile, upstarts like Dave's Hot Chicken and Hangry Joe's Hot Chicken & Wings are growing fast and eyeing a similar trajectory. Popeyes once inspired feverish hordes and all-day lines for its top-selling chicken sandwich, but it's been a rocky ride as of late. Popeyes' parent company Restaurant Brands International (RBI) just reported its quarterly earnings, and in the last quarter, the chicken chain's U.S. sales were down nearly 5%, its fourth consecutive quarterly slide. Other fast-food brands under RBI's umbrella saw sales tick up during the same time period. Beyond Popeyes Louisiana Kitchen, RBI also owns Burger King, Tim Hortons, and Firehouse Subs. With almost 20,000 locations, Burger King is RBI's biggest chain, dwarfing the 5,000 Popeyes locations around the globe. "We've had weaker performance than we'd like over the last few quarters, and that's why you saw us make the change in leadership," RBI CEO Josh Kobza said on the company's earnings call. He noted the company's decision to bring former Burger King COO Peter Perdue in as Popeyes' U.S. and Canada president. Popeyes also plans to triage its lowest-performing locations with targeted support, coaching visits, and "experience rallies" for Popeyes restaurant general managers across the U.S. Kobza said that Popeyes plans to double down on operations and narrow the focus back to chicken on the marketing and product side. "We know Popeyes is capable of much more and we're taking decisive action to put the brand back on the right path while supporting our franchisees to deliver stronger results at the restaurant level," Kobza said.
Reviving Popeyes

In January, almost 20 Popeyes locations in Georgia and Florida closed their doors after one of the chicken chain's major operators declared bankruptcy. While Popeyes says that the majority of the 100-plus locations operated by franchisee Sailormen Inc. were profitable, borrowing rates, high inflation, and dwindling foot traffic contributed to the closures. Popeyes insists that the closures don't reflect the broader brand, which is owned by quick-service restaurant conglomerate RBI. Perdue reportedly reassured other franchisees that Sailormen's bankruptcy "does not reflect the healthy unit economics that you are experiencing in your restaurants." For Popeyes, the problem clearly isn't chicken. Persistent inflation continues to take a toll on the restaurant industry, but Americans are still opting for poultry on the go at Popeyes' competitors like Raising Cane's and Dave's Hot Chicken. Traffic is down at fast-food joints broadly too, but chicken restaurants lapped their lagging peers last year. For Popeyes, the problem is Popeyes, something the company seems well aware of right now. "Our performance this year reinforces a clear reality," Kobza said in the earnings report, noting the intense level of competition in the quick-service chicken game. "At its core, the chicken business is a service business and winning requires consistent speed, accuracy and reliability in every restaurant every day."
Advertising in generative AI systems has become a fault line. Last month, OpenAI announced that it would start running ads in ChatGPT. Speaking at the World Economic Forum in Davos, OpenAI's chief financial officer defended the introduction of ads inside ChatGPT, arguing that it is a way to democratize access to artificial intelligence, and that this decision is aligned with its mission: "AGI for the benefit of humanity, not for the benefit of humanity who can pay." Within days, Anthropic fired back in a Super Bowl commercial, ridiculing the idea that ads belong inside systems people trust for advice, therapy, and decision-making. In one way, this is a spat about how each company is marketing itself. In another, this debate echoes the debates about the early internet, but with far higher stakes.

The big question

The underlying question is not whether advertising generates revenue. It clearly does. It is whether advertising is the only viable way to fund AI at scale, and whether, if adopted, it will quietly dictate what these systems optimize for. History offers a cautionary answer. The last several decades of online advertising have proven that when profit is decoupled from user value, incentives drift toward harvesting data and maximizing engagement, the variables that can be most easily measured and monetized. That trade-off shaped everything in the internet economy. As advertising scaled, so did the incentives it created. Attention became a scarce resource. Personal information became currency.

What Google taught us

Google's founders themselves acknowledged this risk at the dawn of the modern web. In their 1998 Stanford paper, Sergey Brin and Larry Page warned that ad-funded search engines create inherent conflicts of interest, writing that such systems are "biased towards the advertisers and away from the needs of the consumers," and that advertising incentives can encourage lower-quality results.
Despite this warning, the system optimized for what could be measured, targeted, and monetized at the expense of privacy, transparency, and long-term trust. These outcomes were not inevitable. They flowed from early design choices about how advertising worked, data moved, and influence was disclosed.

A pivotal moment

Artificial intelligence now finds itself at a similar pivotal moment, but under far greater economic pressure and with far higher stakes. It is worth noting that artificial intelligence is not cheap to run: OpenAI projected that it will burn through $115 billion by 2029. Like internet users, AI users are unwilling to pay for access, and advertising has historically allowed the internet, and businesses depending on it, to scale beyond paying users. If advertising is going to fund AI, personal data cannot be the fuel that powers it. If conversations on an AI platform leak into targeting data, users will stop trusting it and will start viewing it as a surveillance tool. Furthermore, once personal data becomes currency, the system inevitably optimizes for extraction. That does not mean future advertisers on these AI platforms would have to operate in the dark. Brands will still need to know that their spending delivers results, and that their messages reach users aligned with their values. It's justifiable that brands need outcome measurement and contextual assurance.

The real problem

The irony in Anthropic's critique is instructive. A Super Bowl commercial is itself a testament to advertising's enduring power as a form of communication and cultural signaling. Advertising is not the problem. Invisible incentives are. The way to satisfy both consumer trust and business growth is to build the advertising ecosystem on open, inspectable systems so that influence can be seen, measured, and governed without requiring the collection or exploitation of personal data. Standards such as the Ad Context Protocol set out to do exactly this.
This is the window in which profit can still be aligned with value. At stake is the difference between advertising as manipulation and advertising as sustainable and enduring market infrastructure. The ad-funded internet failed users not because it was free, but because its incentives were invisible. AI has the chance to do better. The choice is ours to make.
AI is upending business, our personal lives, and much more in between, including the operation of the U.S. government. In total, The Washington Post reported 2,987 uses of AI across the executive branch last year, hundreds of which are described as "high impact." Some agencies have embraced the technology wholeheartedly. NASA has gone from 18 reported AI applications in 2024 to 420 in 2025; the Department of Health and Human Services, overseen by Robert F. Kennedy Jr., now reports 398 uses, up from 255 a year ago. The Department of Energy has seen a fourfold increase in AI usage, with a similar jump at the Commerce Department. Agencies were effectively given the green light in April 2025, when the White House announced it was eliminating barriers to AI adoption across the federal government. They appear to have taken that invitation seriously. Those numbers may raise eyebrows, or trigger concern among observers worried about bias, hallucinations, and lingering memories of the chaotic AI-enabled government overhaul associated with the quasi-official Department of Government Efficiency during Elon Musk's brief orbit near the center of power. "It's not clear using AI for most government tasks is necessary, or preferable to conventional software," cautions Chris Schmitz, a researcher at the Hertie School in Berlin. "The digital infrastructure of the U.S. government, like that of many others, is a deeply suboptimal, dated, path-dependent patchwork of legacy systems, and using AI for quick wins is frequently more of a Band-Aid than a sustainable modernization." Others who have worked at the center of government digital innovation argue that alarmism may be misplaced. In fact, they say, experimenting with AI can be a form of smart governance, if done carefully. "It's become apparent that we never really properly moved government into the internet era," says Jennifer Pahlka, cofounder and chair of the board at the Recoding America Fund and former U.S.
deputy chief technology officer under the Obama administration. "There have been real problems that have come out of that where government is just not meeting the needs of people in the way that it should." Pahlka believes that experimentation with AI in government is probably somewhat appropriate given how early we are in the generative AI era. Testing is necessary to understand where, and where not, the technology can improve operations. "What you want, though, is ways of experimenting with this that gives you very clear and effective feedback loops, such that you are catching problems before it's rolled out to large numbers of people or to have a large impact," she says. Still, it is far from certain that AI systems will produce outcomes that serve all Americans equally. Denice Ross, executive fellow in applied technology policy at the University of California, Berkeley, warns that rigorous evaluation is essential. "The way government would find out if a tool is doing what it's supposed to for the American people is by collecting and analyzing data about how it performs, and the outcomes for different populations," says Ross, who served as chief data scientist in the White House from 2023 to 2024. The core issue, she says, is whether a given system is actually helping the people it's meant to serve, or whether "some people [are] being left behind or harmed." The only way to know is to look closely at the data. That might mean discovering, for example, that a tool works fine for digitally fluent users but falls short for people without high-speed internet or for older Americans. Public participation is also critical. Getting the conditions for legitimate government AI use right is hard, and "this work by and large has not been done," the Hertie School's Schmitz argues, noting that there has been no real democratic negotiation of the legal basis for automated decision-making or build-out of oversight structures, for example.
There are also reasons to be cautious about rushed or poorly structured AI deployments, including reported plans at the Department of Transportation to experiment with tools like Google Gemini. Philip Wallach, a senior fellow at the American Enterprise Institute, argues that while the government should be exploring how rapid advances in AI can serve the public, it must do so without sacrificing democratic accountability. The priority, he suggests, should be preserving accountable human judgment in government decision-making before momentum and political expediency crowd it out. Looking at the government's overall AI strategy, Pahlka says she sees some grounds for cautious optimism. From what she can tell, many of the early efforts appear focused on applying AI to bureaucratic bottlenecks and process slowdowns where it could meaningfully boost productivity. If that focus holds, she suggests, the payoff could be pretty useful. Still, she believes more care and attention to detail is needed, something the Trump White House has not always demonstrated. "What I'm not sure I see is a questioning of the processes themselves," she says, explaining that, in her view, thoughtful AI adoption requires asking whether a process should exist in its current form at all, not simply whether AI can accelerate one step within it. That distinction matters because poorly implemented AI can have real consequences. Government's track record with large-scale technology deployments is uneven, and layering AI onto flawed systems could cause undue harm. "We have consistently rolled out technology in government in ways that have harmed people because we do not have test-and-learn frameworks as the fundamental way of approaching these problems," Pahlka says. If done right, however, the opportunity is significant. AI could help government function more effectively, and more equitably, for everyone.
When Minnesota Timberwolves star Anthony Edwards steps onto the NBA All-Star court in Los Angeles with the league's best players, there will be cameras following his every move. But it won't just be NBC clocking the action. Edwards's own Three-Fifths Media will be there for his ongoing unscripted show, Year Six. It's the second season chronicling the daily grind of his NBA exploits, building on last year's Year Five. Three-Fifths Media started in 2019, with Justin Holland, Edwards's business partner and manager. They signed a production deal with Wheelhouse in 2024 to collaborate on projects like Year Six. So far, Three-Fifths has produced Serious Business, an unscripted show on Prime Video that challenges celebrities and athletes in their own domains; Year Five, and now Year Six; and the inaugural Believe That Awards, which aired in October on YouTube and had 167 million views across platforms in its first 48 hours. On the side, Edwards also produced a hip-hop album featuring heavyweights Pusha T, Quavo, and Wale. The 24-year-old Edwards is methodically building his own content and entertainment business, clearly influenced by the success some of his on-court heroes have had over the past decade, like Kevin Durant with Boardroom and LeBron James with Fulwell Entertainment (formerly the SpringHill Co.). Of course, there is no guaranteed blueprint; witness SpringHill's financial struggles, despite strong productions, that led to its merger with Fulwell last year. The two common threads among Three-Fifths Media's projects are that they shine a spotlight on a real and (largely) unfiltered Anthony Edwards, and that they are at least partly owned by the NBA star. Holland says that's not only at the core of their content, but the overall business strategy. "We've leaned into being authentic in every room we walk into, and prioritize ownership over exposure," says Holland, who has been working with Edwards since 2016.
"Not just looking for deals because of dollar amounts or because they're cute, but also really leaning into brands that we really can take ownership in, allow us to keep that authenticity, and also look for opportunities where we can actually own our IP." Just like Edwards's on-court career, it's been an impressive start, and shows potential to help redefine athlete-owned media.

Believe That

Okay, picture this: A remake of the 2001 film Training Day, starring Timothée Chalamet as Ethan Hawke's character opposite NBA star Anthony Edwards in Denzel Washington's spot. It sounds crazy, obviously, but Chalamet and Edwards actually talked about it in October when Edwards awarded the actor his White Boy of the Year honor as part of the satirical Believe That Awards show. The show didn't feature a red carpet, nor was it drenched in celebrity, though Chalamet and Candace Parker made Zoom appearances. It was shot in Edwards's actual basement, and had the feel of a Saturday night hangout with him and his friends. That ability to seamlessly jump from highly produced work like Year Five to more street-level, vlogger-style content is perhaps Edwards's biggest media strength. "You have guys that impact culture, and then you have guys that create," says Holland. "Ant's one of those guys that creates culture. So everything that we do, we're intentional about not trying to follow the standard, and aim to actually be innovative in our creative process." There's a reason the vibe of hanging with Edwards and his friends permeates so much of his work (his best friend, Nick Maddox, stars in many of his Adidas spots): It's because that's what's really happening. "It is actually pretty easy when you have a guy like Anthony and our crew," says Holland. "We keep everything really tailored to our core group and just want to make sure that we continue to build from there."
Brand consistent

Holland says that early in his career, as a young up-and-coming NBA star, brands would try to fit Edwards into their box, or the version of him they wanted. The work they've done with partners like Adidas, Sprite, Bose, and Prada represents those that have not only steered away from the old hold-the-product-and-smile approach, but encouraged Edwards to take ownership of the creative. Most modern athletes will talk about authentic connection with both brands and fans, but tend to serve up only the most curated and choreographed version of it. What makes Edwards's work most unique is how it makes fans feel a part of that inner circle, whether in a social post or a big-time sneaker ad. "We try to stay away from just brand endorsements and we really like to be in business with people that really understand who we are and then actually want to collaborate with us," says Holland. That translates to having Maddox starring in Adidas ads, or Edwards's brother's music featured in a Bose campaign. It also brings Edwards's natural affinity for trash talk to his brand work. Brands typically shy away from controversy, but Adidas has embraced Edwards's approach wholeheartedly. They turned heads last year, launching his first signature shoe with ads that called out other pro shoe models and social media trolls by name. In a spot called Top Dog for his AE2 shoe, he beats video game caricatures of his biggest rivals, Luka Dončić, Victor Wembanyama, and Shai Gilgeous-Alexander, among others. Holland says getting brand partners to embrace Edwards's authentic self was tougher at first, but the results speak for themselves. "We talk to our partners about our overall picture, looking at it from a wide lens of how we want to operate," he says. "Now those conversations are a lot easier. They see how we move and how the public actually reacts to the authenticity, and how it resonates, because it just makes all the work that much more relatable."