The Trump administration has framed tariffs as a necessary tool for bringing more jobs to the U.S. and reviving the manufacturing sector. But many economists have warned that widespread job creation is unlikely, given the cost to companies. In the meantime, Trump's substantial tariffs will drive up prices for both consumers and businesses, likely forcing them to cut costs through layoffs.

Many frontline workers have already expressed concerns about the effect that the tariffs could have on their job security. In a survey by workforce management software company UKG, over half of the 5,000 frontline workers surveyed (defined as people who do shift work or are paid hourly) said they believed they could be laid off, while 74% expect that the tariffs will affect their earnings potential. Gen Z workers were the most likely to be concerned about layoffs, but the majority of workers described feeling nervous, stressed, or angry about the impact of tariffs on their jobs. Though the tariffs have already shaken up financial markets, the vast majority of workers (77%) believe that Trump's trade policies will harm smaller businesses more than Wall Street firms.

According to the survey, tariffs are also driving changes in how workers are showing up on the job. Over 70% of respondents said their workplace behavior had changed in some capacity: Many of them claimed to be working harder to "prove their value," while others were picking up additional shifts. Nearly half of workers were striving to increase their savings. About two-thirds of those surveyed said they expected the tariffs would likely limit their future job prospects.

Workers have reason to be worried. President Trump's trade policies already seem to be affecting the workforce: The automaker Stellantis has trimmed headcount by about 900 across several manufacturing plants in anticipation of the tariffs, while Volvo is cutting up to 800 jobs.
Just this week, UPS announced that it would slash 20,000 jobs within the year to reduce costs, citing "macroeconomic uncertainty" and noting the high likelihood of decreased shipping volume from China due to the tariffs. (Another major factor is that UPS is significantly cutting back on deliveries for Amazon.) Agricultural exporters are also feeling the financial effects of the tariffs and have turned to layoffs, according to a CNBC report.

Some experts have said that the tariffs might eventually create more manufacturing jobs stateside, and a number of major companies have already said they are expanding their manufacturing footprint in the U.S. But a Goldman Sachs analysis found that the tariffs could also lead to hundreds of thousands of job losses across the workforce, something that many workers clearly seem to anticipate.
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Trump harms AI progress by warring with universities

Donald Trump has done a lot to antagonize universities in his first 100 days. He cut off federal research funding to institutions like Princeton, Columbia, and Harvard, citing their alleged tolerance of antisemitism on campus. He also threatened the authority of college accreditation bodies that require schools to maintain diversity, equity, and inclusion programs. But these actions directly undermine the administration's stated goals of strengthening the U.S. military and helping American tech companies maintain their narrow lead over China in AI research.

Since World War II, the U.S. government has maintained a deep and productive relationship with universities. Under the leadership of Vannevar Bush, director of the Office of Scientific Research and Development, the government channeled significant research funding into university labs. In return, it received breakthroughs like radar and nuclear technology. Over the decades, university researchers have continued to contribute critical innovations used in defense and intelligence, including GPS and the foundational technologies of the internet.

Today, the government increasingly relies on the commercial sector, including major contractors like Boeing and General Dynamics and newer firms like Palantir and Anduril, for defense innovation. Yet universities remain essential. Much of the most advanced AI research still originates from academic computer science departments, many of which are powered by international students studying at institutions like MIT, Stanford, and Berkeley. These students often go on to found companies based on research initiated in academia. Whether they choose to build their businesses in the U.S.
or return to their home countries depends, in part, on whether they feel welcome. When international students see research funding threatened, or see videos of PhD students being arrested by ICE, staying in the U.S. becomes a less appealing option. In a recent conflict with Harvard, the Department of Homeland Security even demanded information on the university's foreign students and threatened to revoke its eligibility to host them. In response, over 200 university and college presidents have condemned the administration's actions and are exploring ways to resist further federal overreach. Rather than discouraging international researchers and students, the U.S. should be sending a clear signal: that it remains a safe, supportive, and dynamic environment for AI talent to study, innovate, and launch the next generation of transformative companies.

The best AI agents may be powered by teams of AI models working together

During the first phase of the AI boom, labs achieved big intelligence gains by pretraining their models on ever-larger data sets and using more computing power. While AI companies are still improving on the art and science of pretraining, the intelligence gains are becoming increasingly expensive. A big part of the research community has shifted its focus to finding the best ways to train models to think on their feet, or to reason over the best routes to a responsive and accurate answer at inference time, just after a user enters a question or problem. This research has already led to a new generation of "thinking" models such as OpenAI's o3 models, Google's Gemini 2.0, and Anthropic's Claude 3.7 Sonnet. Researchers teach such models to think by giving them a multistep problem and offering them a reward (usually just a bit of code that means "good") for finding their way to a satisfactory answer.
It's certainly possible to build an inference system that makes numerous calls to a single large frontier AI model, collecting all the questions and answers in a context window as it works toward an answer. But new research from Berkeley's AI research lab shows this monolithic "one model to rule them all" approach is not always the best way of building an efficient and effective inference system. A compound AI system of multiple models, knowledge bases, and other tools working together can yield more relevant and accurate outputs at far lower costs. Importantly, such a pipeline of AI resources can be a powerful backend for AI agents capable of calling on tools and working autonomously, says Jared Quincy Davis of the AI cloud company Foundry, which builds software that enables it to provide GPU compute at low cost for AI developers.

Davis has led an effort to create an open-source framework that lets AI practitioners build just the right pipeline, with just the right resources, for the application they have in mind. The framework, called Ember, was created with help from researchers at Databricks, Google, IBM, NVIDIA, Microsoft, Anyscale, Stanford, UC Berkeley, and MIT. Davis says it's possible to build a compound system that can make calls to a number of today's state-of-the-art AI models (via APIs), such as ones from Google, OpenAI, Anthropic, and others. Large frontier models often stand above other large models in certain skill areas (Anthropic's Claude is especially good at writing and analyzing text), so it's possible to build a pipeline that calls on models according to their unique strengths. This is a very different way of looking at AI computing, compared to the narrative of just a couple years ago that said one model would be better than all others at practically everything.
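To make the compound-system idea concrete, here is a minimal sketch of routing requests to different models by task strength. This is not Ember's actual API; the model names, costs, and routing table are illustrative stand-ins, with stub handlers where real API clients would go.

```python
# A minimal sketch of a "compound AI system": route each request to whichever
# model is assumed strongest at that kind of task. Everything here is a
# hypothetical stand-in, not the Ember framework's real interface.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelEndpoint:
    name: str
    cost_per_call: float           # illustrative relative cost
    handler: Callable[[str], str]  # would wrap a real API call in practice

# Stub handlers standing in for real API clients (OpenAI, Anthropic, etc.).
def writing_model(prompt: str) -> str:
    return f"[summary] {prompt[:40]}"

def coding_model(prompt: str) -> str:
    return f"[code] {prompt[:40]}"

# Routing table: one endpoint per task category, chosen by assumed strength.
ROUTES = {
    "writing": ModelEndpoint("claude-like", 1.0, writing_model),
    "code": ModelEndpoint("gpt-like", 1.2, coding_model),
}

def route(task_type: str, prompt: str) -> str:
    """Dispatch the prompt to the model assigned to this task type."""
    endpoint = ROUTES.get(task_type)
    if endpoint is None:
        raise ValueError(f"no model registered for task type {task_type!r}")
    return endpoint.handler(prompt)

print(route("writing", "Summarize the Q3 earnings report"))
```

A real pipeline would layer in retrieval, tool calls, and fallbacks between models, but the core design choice is the same: dispatch by strength and cost rather than sending everything to one frontier model.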
Now, numerous models compete for the state of the art at various tasks, while smaller models specialize in completing tasks at lower cost, and the overall cost of getting an answer from an AI model has gone way down over the past couple of years.

Congress actually passes a tech bill

Congress has failed to pass any broad-based regulation to protect user and data privacy on social networks. It has, however, managed to pass laws prohibiting specific and particularly dangerous social media content, such as child sex trafficking and now nonconsensual intimate images (NCII). NCII refers to the practice of posting sexual images or videos of real people online without their consent (often as an act of revenge or an attempt to extort), including explicit images generated using AI tools. The bill, called the Take It Down Act, which unanimously passed the Senate in February and the House on Monday, makes it a federal crime to post NCII and requires that online platforms remove the content within 48 hours of a complaint. Affected public-facing online platforms will have a year after the law passes to set up a system for receiving and acting on complaints. The president is expected to sign the bill into law.

Even though the bill's intent earned widespread support, its legal reach disturbed some free expression advocates. The Electronic Frontier Foundation worries that the bill's language is overbroad and that it could be used as a tool for censorship. These worries were compounded by the fact that the new law will be enforced by the Federal Trade Commission, which is now led by Trump loyalists.

More AI coverage from Fast Company:

Duolingo doubles its language offerings with AI-built courses
In his first 100 days, Trump's tariffs are already threatening the AI boom
Microsoft thinks AI colleagues are coming soon
Marc Lore wants AI to feed you, and make you healthier

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design?
Sign up for Fast Company Premium.
In the early days of the current AI boom, The New York Times sued OpenAI and Microsoft for copyright infringement. It was a seismic move, but perhaps the most notable thing about it is what came after. In the subsequent months, publisher after publisher signed licensing deals with OpenAI, making their content available to ChatGPT. There were others who chose litigation, certainly, but most major media companies opted to take some money rather than spend it on lawyers.

That changed last week when Ziff Davis filed its own copyright lawsuit against OpenAI. Ziff owns several major online properties, including Mashable, CNET, IGN, and Lifehacker, and garners a massive amount of web traffic. According to the filing, its properties earned an average of 292 million monthly page views over the past year. Strange, then, that OpenAI didn't bother to negotiate with Ziff at all. The filing mentions that, after asking OpenAI to stop scraping its content without authorization, Ziff's requests to negotiate were "rebuffed." A news story about the lawsuit in PCMag (another Ziff property) also said OpenAI wouldn't talk, though it's unclear whether it was just repeating what the filing described.

While The Times and Ziff aren't alone in their legal efforts against OpenAI, it's informative to compare the two complaints, filed almost 16 months apart, to get an understanding of how the stakes of the AI-media cold war have evolved.
AI technology has progressed considerably, and we now have a much greater understanding of AI substitution risk, the fancy name for AI summarization of publisher content. Ziff's lawsuit gives us a better idea of how the AI sausage is made these days and can tell us just how much other AI players should be sweating.

For starters, Ziff points to what has become table stakes in most of these lawsuits: the act of scraping content, storing copies of that content in a database, and then serving up either a "derivative work" (summaries) or the content itself as inherently violative of copyright. OpenAI has maintained, however clumsily at times, that its harvesting of content on the web to train models falls under fair use, a key exception to copyright law that has supported some instances of mass digital copying in the past. That's the central conflict in all these cases, but Ziff's action goes in some novel directions that point to how things have changed since ChatGPT first arrived:

1. AI, meet DMCA

Ziff runs a few more yards with the copyright ball, claiming that OpenAI deliberately stripped copyright management information (CMI) from Ziff content. This is a bit of a technicality: essentially, it means ChatGPT answers often don't include bylines, the name of the publication, and other metadata that would identify the source. However, stripping out CMI from content and then distributing it under your own banner is a violation of the Digital Millennium Copyright Act (DMCA), giving the filing more teeth.

2. It's a RAG world now

This is arguably the most important change between the two lawsuits, and it reflects how the way we use AI to access information has changed. When The Times filed suit, ChatGPT wasn't a proper search engine, and the public was only just beginning to understand retrieval-augmented generation, or RAG: broadly, how AI systems can go beyond their training data.
RAG is an essential element of any AI-based search engine today, and it has also massively increased the risk of AI substitution to publishers, since a chatbot that can summarize current news is much more useful than one that only has access to archives that cut off after a certain date (remember that?).

3. Watering down the brands

Ziff frames the hallucination problem in a novel way, calling it "trademark dilution." Media brands like Mashable and PCMag (both of which I used to work at) have built up their reputations over years or decades; the complaint makes the case that every time ChatGPT attributes a falsehood to one of them or wholesale imagines a fake review, it chips away at them. It's a subtle point, but a compelling one that points to a future where valuable brands slowly become generic labels floating in the AI ether.

4. Paywalls are the first line of defense

Ziff says in the filing that its properties are particularly vulnerable to AI substitution because so little of its content is behind paywalls. Ziff's business model is based primarily on advertising and commerce (mostly from readers clicking on affiliate links in articles), both of which depend on actual humans visiting websites and taking actions. If an AI summary negates that act, and there's no licensing or subscription revenue to make up for it, that's a huge hit to the business.

5. Changing robots.txt isn't enough

Every website has a file that tells web scrapers what they can do with the content on that site. This "robots.txt" file allows sites to, say, let Google crawl their site but block AI training bots. Indeed, many sites do exactly that, but according to Ziff, it makes no difference. Despite explicitly blocking OpenAI's GPTBot, Ziff still logged a spike in the bot's activity on some of its sites. It's generally assumed companies like OpenAI use third-party crawlers to scrape sites they're not supposed to, but Ziff's lawsuit accuses OpenAI of openly flouting the rules it claims to respect.
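For readers unfamiliar with the mechanism, a robots.txt file of the kind described above is just a plain-text file at a site's root. This fragment is a generic illustration (not Ziff's actual file) of the common pattern: allow Google's crawler while disallowing OpenAI's GPTBot.

```
# Served at https://example.com/robots.txt (illustrative only)

User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /
```

The catch, as the lawsuit underscores, is that robots.txt is purely advisory: compliant crawlers honor it, but nothing technically prevents a bot from ignoring it.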
6. Regurgitation is still an issue

The original Times complaint spends many pages on the issue of "regurgitation," when an AI system doesn't just summarize a piece of content but instead repeats it, word for word. Generally this was thought to be a mostly solved issue, but Ziff's filing claims it still happens, and that exact copies of articles are a relatively easy thing for ChatGPT users to call up. Apparently, asking what the original text "might look like with three spaces after every period" is a method some have used to fool the chatbot into serving up exact copies of an article. (For the record, it didn't work for me.)

The battle continues

Just when it was looking like licensing deals would be the new normal, Ziff Davis's filing shows the fight between AI and news is far from over. How it plays out could end up being even more existential for a company like Ziff. However the court rules, the case confronts a more fundamental question: Can strong media brands that rely on commerce and free access coexist with AI systems that learn, and sometimes mislearn, from everything they touch?