Walgreens on Thursday named retail veteran Mike Motz as its chief executive officer after the U.S. pharmacy chain was taken private by Sycamore Partners, marking a fresh chapter for a company that has been struggling to keep pace with rivals.

The management shake-up came on the heels of Sycamore’s roughly $10 billion acquisition of Walgreens Boots Alliance, completed earlier in the day.

Motz, previously the CEO of office supplies chain Staples’ U.S. retail business, will succeed Tim Wentworth and steer the ailing pharmacy operator as it shifts its focus back to core retail and pharmacy operations. He had also served as president of Canadian pharmacy chain Shoppers Drug Mart.

Walgreens has struggled to expand beyond its pharmacy business and diversify into broader healthcare services, even as budget-conscious consumers increasingly turned to lower-cost options from Amazon and Walmart for their prescriptions and toiletries.

Wentworth, who took the helm in 2023, spearheaded a turnaround at the company through cost cuts, including the removal of multiple mid-level executives and store closures. Wentworth will continue to serve as a director, Walgreens said.

The company also appointed John Lederer, a former director of Walgreens Boots Alliance and a senior adviser to Sycamore, as executive chairman.

Mrinalika Roy and Mariam Sunny, Reuters
Category:
E-Commerce
Online age checks are on the rise in the U.S. and elsewhere, asking people for IDs or face scans to prove they are over 18 or 21, or even 13. To proponents, they’re a tool to keep children away from adult websites and other material that might be harmful to them. But opponents see a worrisome trend toward a less secure, less private and less free internet, where people can be denied access not just to pornography but to news, health information and the ability to speak openly and anonymously.

“I think that many of these laws come from a place of good intentions,” said Jennifer Huddleston, a senior technology policy fellow at the Cato Institute, a libertarian think tank. “Certainly we all want to protect young people from harmful content before they’re ready to see it.”

More than 20 states have passed some kind of age verification law, though many face legal challenges. While no such law exists at the federal level in the United States, the Supreme Court recently allowed a Mississippi age check law for social media to stand. In June, the court upheld a Texas law aimed at preventing minors from watching pornography online, ruling that adults don’t have a First Amendment right to access obscene speech without first proving their age.

Elsewhere, the United Kingdom now requires users visiting websites that allow pornography to verify their age. Beyond adult sites, platforms like Reddit, X, Telegram and Bluesky have also committed to age checks. France and several other European Union countries are testing a government-sponsored verification app. And Australia has banned children under 16 from accessing social media. “Platforms now have a social responsibility to ensure the safety of our kids is a priority for them,” Australian Prime Minister Anthony Albanese told reporters in November. The platforms have a year to work out how they could implement the ban before penalties are enforced.

To critics, though, age check laws raise significant privacy and speech concerns, not only for young people themselves but for all users of the internet, Huddleston said. “Because the only way to make sure that we are age verifying anyone under the age of 18 is to also age verify everyone over the age of 18. And that could have significant impacts on the speech and privacy rights of adults.”

The state laws are a hodgepodge of requirements, but they generally fall into two camps. On one side are laws, as seen in Louisiana and Texas, that require websites where more than 33% of the content is adult material to verify users’ ages or face fines. Then there are laws, enacted in states such as Wyoming and South Dakota, that seek to regulate sites carrying any material considered obscene or otherwise harmful to minors. What’s considered harmful to minors can be subjective, and this is where experts believe such laws run afoul of the First Amendment. It means people may be required to verify their ages to access anything from Netflix to a neighborhood blog.

In places like Australia and the U.K., “there is already a split happening between the internet that people who are willing to identify themselves or go through age verification can see and the rest of the internet. And that’s historically a very dangerous place for us to end up,” said Jason Kelley, activism director at the nonprofit digital rights group Electronic Frontier Foundation.
“What’s behind the gates is determined by a hundred different decision-makers,” Kelley said, from politicians to tech platforms to judges to individuals who have sued because they believe that a piece of content is dangerous.

While many companies are complying, verifying users’ ages can prove a burden, especially for smaller platforms. On Friday, Bluesky said it will no longer be available in Mississippi because of its age verification requirements. While the social platform already does age verification in the U.K., it said Mississippi’s approach would fundamentally change how users access Bluesky. That’s because it requires every user to undergo an age check, not just those who want to access adult content. It would also require Bluesky to identify and track users who are children. “We think this law creates challenges that go beyond its child safety goals, and creates significant barriers that limit free speech and disproportionately harm smaller platforms,” the company said in a blog post.

Some websites and social media companies, such as Instagram’s parent company Meta, have argued that age verification should be done by app store owners, such as Apple and Google, and not by individual platforms. This would mean that app stores would need to verify their users’ ages before allowing them to download apps. Unsurprisingly, Apple and Google disagree. Billed as simple by its backers, including Meta, this proposal fails to cover desktop computers or other devices that are commonly shared within families. It also could be ineffective against pre-installed apps, Google said in a blog post.

Nonetheless, a growing number of tech companies are implementing verification systems to comply with regulations or ward off criticism that they are not protecting children. This includes Google, which recently started testing a new age-verification system for YouTube that relies on AI to differentiate between adults and minors based on their watch histories. Instagram is testing a similar AI system to determine if kids are lying about their ages. Roblox, which was sued by the Louisiana attorney general on claims it doesn’t do enough to protect children from predators, requires users who want to access certain games rated for those over 17 to submit a photo ID and undergo a face scan for verification. Roblox has also recently begun requiring age verification for teens who want to chat more freely on the platform.

Face scans that promise to estimate a person’s age may address some of the concerns around IDs, but they can be unreliable. Can AI accurately tell, for instance, if someone is 17.5 or just turned 18? “Sometimes it’s less accurate for women or it’s less accurate for certain racial or ethnic groups or for certain physical characteristics that then may mean that those people have to go through additional privacy-invasive screenings to prove that they are of a certain age,” Huddleston said.

While IDs are a common way of verifying someone’s age, the method raises security concerns: What happens if companies don’t delete the uploaded files, for instance? Case in point: the recent data breaches at Tea, an app for women to anonymously warn each other about the men they date, speak to some of these concerns. The app requires women who sign up to upload an ID or undergo a face scan to prove that they are women. Tea wasn’t supposed to keep the files, but it did, and it stored them in a way that allowed hackers to access not only the images but also users’ private messages.

Barbara Ortutay, AP technology writer
Category:
E-Commerce
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

Could generative AI be just a minor revolution?

On a recent episode of the TBPN podcast, Jordi Hays asked his cohost John Coogan whether his life would really be that much worse if he couldn’t access generative AI tools like ChatGPT and Claude. Would AI’s absence be as disruptive, he asked, as the sudden disappearance of smartphones, or TVs, or electricity? Coogan conceded it wouldn’t.

The real-life impact of AI varies; someone going through a hard time might find relief in the counsel of a chatbot. But it’s reasonable to judge the technology in societal and historical terms, because that’s how its biggest cheerleaders and hype-spreaders frame it. So the question becomes: What is the real progress of the AI revolution? Based on the two years and nine months since ChatGPT’s debut, should we believe generative AI will change the world on the scale of the industrial revolution, the internet, or the mobile revolution?

As Financial Times politics and culture columnist Janan Ganesh notes in a recent op-ed, those predicting that generative AI will usher in abundance and well-being are the ones working closest to the technology and presumably understand it best. But they are also the ones with the most to gain from overselling it, and the most reluctant to admit they’ve dedicated their careers to something with only modest impact.

To be sure, generative AI is an amazing technology. Anyone who has used ChatGPT’s Deep Research, Anthropic’s Computer Use feature, or Google’s Veo 3 video generator can see that. It’s also the fastest-adopted technology in modern history: Gen AI apps reached 39.4% adoption among U.S. adults in just two years, compared with the four years it took for smartphones to hit 35% adoption following the iPhone’s 2007 launch.

Yet adoption hasn’t translated into willingness to pay. Only about 3% of users subscribe to premium tiers, according to Menlo Ventures’ State of Consumer AI report. (Mobile computing, by contrast, always required buying a handset and a cellular plan.) Globally, AI apps are bringing in only about $12 billion in annual revenue from 1.8 billion users. Two of generative AI’s biggest players, OpenAI and Anthropic, remain far from profitability. Meanwhile, Nvidia, which sells the chips that power AI, made $130 billion last fiscal year.

In the late summer of 2025, generative AI’s honeymoon period may be coming to an end. Consumers and businesses are less interested in being dazzled and more focused on actually being helped by it. The attention has shifted to whether AI can actually overhaul aging business practices. And AI is indeed making some tasks, like coding, more efficient, saving time and sometimes payroll. But few CTOs are claiming generative AI is transforming their business, at least not yet.

Large enterprises are pouring money into AI projects, but many stall. An MIT report last week shook investors by finding that 95% of enterprise AI projects fail to substantially improve efficiency or profits. The research shows that the models aren’t the problem; the challenge is integrating them into company data, workflows, and infrastructure. In other words, it’s an application problem, one that’s lingered since 2023. AI companies seem to recognize this.
Covering them, I hear less talk of artificial general intelligence (AGI) and superintelligence, and less emphasis on monolithic models that can do everything. In reality, most AI workloads are handled by teams of specialized models. (OpenAI even described GPT-5 not as a model but as a system of models.)

Generative AI is technically complex and hard to grasp in detail. But that shouldn’t mean its impact can only be judged by those inside AI labs or big tech companies. What really matters is whether it measurably boosts productivity in business, and whether it leaves societies healthier, better educated, freer, more prosperous, more creative, and less bored at work.

Anthropic’s settlement with authors could set a precedent in future copyright cases

Anthropic is poised to settle a lawsuit brought by a group of authors who alleged the company trained its Claude models on their copyrighted books. A court filing Tuesday shows the parties have agreed on preliminary terms. Judge William Alsup gave them until September 5 to finalize the details and submit the proposed settlement.

In June, Alsup ruled that Anthropic’s use of digitized books qualified as fair use under the Copyright Act, but that the company had obtained the works unlawfully from shadow libraries (including the notorious LibGen site). In late July, he certified that the class could include any author whose copyrighted book Anthropic downloaded from such libraries, meaning the company could have faced damages of $150,000 per book across potentially thousands of titles. Instead, Anthropic opted to settle.

Like its peers, Anthropic relies on vast amounts of online text to pretrain its large language models (LLMs). These models process data for weeks or months to build an understanding of language and context, forming a basic knowledge of how the world works. Content owners continue to sue AI companies over this practice, with several major cases still ongoing. In his June ruling, Alsup addressed the broader issue at the heart of these lawsuits, writing that AI companies use copyrighted content in a transformative way, even when an LLM is just memorizing text, and therefore in a manner protected by fair use. That reasoning could become the defining legacy of Bartz v. Anthropic.

More AI coverage from Fast Company:
Want to disguise your AI writing? Start with Wikipedia’s new list
How large language models can reconstruct forbidden knowledge
Elon Musk has only one chance of forcing Apple to promote Grok
Runway’s AI can edit reality. Hollywood is paying attention

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
Category:
E-Commerce