Stellantis, the maker of Chrysler, Dodge, Jeep, and Ram, has issued a "do not drive" warning for certain older vehicles, telling drivers not to use them until their defective air bags are replaced, according to a notice from the National Highway Traffic Safety Administration (NHTSA).
This stop-drive directive covers 225,000 U.S. vehicles from model years 2003 to 2016 that contain the "defective, deadly" Takata airbag inflators, and is part of a larger, ongoing recall. More than 67 million Takata air bags have been recalled in tens of millions of vehicles across the U.S.
“Over time, the chemical propellant inside certain Takata inflators can degrade, particularly in hot and humid conditions, increasing the risk of rupture during airbag deployment and the potential for metal fragments to enter the vehicle cabin,” Frank Matyok, a spokesperson for Stellantis, tells Fast Company.
Such explosions have caused injuries and deaths, according to the NHTSA, which has confirmed that 28 people in the U.S. have died as a result of defective airbags exploding, and that at least 400 more have been injured. Older vehicles pose a higher risk, as their inflators are more likely to rupture.
Meanwhile, a separate group of defective Takata air bags, involving non-azide driver inflators, was recalled in late 2019.
Which vehicles are being recalled?
Stellantis tells Fast Company the affected vehicles are the following:
2003–2016 Dodge Ram pickup trucks and Dodge Sprinter vans
2004–2009 Dodge Durango SUVs
2005–2012 Dodge Dakota pickup trucks
2005–2008 Dodge Magnum station wagons
2006–2015 Dodge Charger sedans
2007–2009 Chrysler Aspen SUVs
2007–2008 Chrysler Crossfire coupes
2008–2014 Dodge Challenger coupes
2005–2015 Chrysler 300 sedans
2007–2016 Jeep Wrangler SUVs
What should I do if I own one of the recalled vehicles?
A spokesperson for Stellantis tells Fast Company it will fix the vehicles free of charge, and that it began notifying affected customers earlier this week, on February 9.
Drivers can also find out if their vehicles are affected by this recall by contacting Stellantis’ customer service hotline toll-free at 833-585-0144, or by entering their 17-digit vehicle identification number (VIN) at the NHTSA.gov website.
For most of modern management history, wasting time has been treated as a vice. This sensibility can be traced back to Frederick Taylor's doctrine of scientific management, which recast work as an engineering problem and workers as components in a machine to be optimized, standardized, and controlled. In reducing human effort to measurable outputs and time-motion efficiencies, Taylorism marked the beginning of the end for seeing people as thinking agents, turning them instead into productivity units not unlike laboratory rats, rewarded or punished according to how efficiently they ran the maze.
Since then, we have come a long way. The post-war rise of the knowledge worker, and later the age of talent that took shape from the 1960s onwards, marked a decisive break with the logic of the factory floor. Work was no longer merely a job to be endured, but a career to be developed. Organizations began to concern themselves with engagement, motivation, wellbeing, and work-life balance, not out of benevolence alone but because value increasingly resided in people's minds rather than their muscles. Human capital came to mean employability, shaped by intelligence, drive, expertise, and a new, if imperfect, meritocracy that coexisted with vocational careers. The growth of the creative class reinforced this shift: machines would handle the boring, repetitive tasks, freeing humans from the assembly line to think, design, and imagine.
The latest iteration of this story is, of course, AI. What makes it different is not merely that it automates standardized and repetitive work, but that it increasingly encroaches on intellectual, creative, and cognitive tasks once thought to be distinctly human. Writing, analyzing, summarizing, designing, even ideating are now faster, cheaper, and more scalable when performed by machines.
The irony is hard to miss. Just as work had evolved away from crude measures of output, we find ourselves drifting back towards a Taylorist logic, where value is once again assessed in terms of raw productivity: how much, how fast, how cheaply. Only this time, the benchmark is no longer the stopwatch but the algorithm. Worse still, the machines are not merely competing with us on these terms; they are learning from us how the work is done, refining it, and then doing it better. In the process, the very qualities that once distinguished human work risk being reduced to inputs in someone else's optimization function.
This is widely framed as progress. It may turn out to be a costly misunderstanding.
Engineering inefficiencies
Deep thinking is inefficient by design. It is slow, cognitively demanding, and frequently unproductive in the short term. Experimentation is worse. Most experiments fail, and even the successful ones rarely succeed on schedule; and if you know in advance whether an experiment will work, then it's not truly an experiment. Intrinsic curiosity is even more unruly, leading people into intellectual detours with no obvious payoff. None of this lends itself to neat metrics or reassuring dashboards. From a narrow productivity perspective, it looks like waste.
Those inefficiencies are not limited to how humans think. They also define how humans relate to one another at work.
Acting human, and especially acting humane, is inefficient by design. Greeting your barista and asking how they are doing slows the line, even as the system is optimized to maximize how many lattes can be poured per hour and you are encouraged to streamline your order through an app. Asking colleagues how they are doing at the start of a meeting consumes time that could otherwise be spent racing through the agenda. Showing genuine interest in others, listening without an immediate instrumental purpose, or helping someone become better at their job often sits well outside your formal goals, your key performance indicators, or your objectives and key results.
From a narrow productivity perspective, this too looks like waste.
Friction in the system
Efficiency, however, is indifferent to relationships. It privileges throughput over connection, output over meaning, and speed over understanding. Optimized systems have little tolerance for small talk, empathy, or curiosity because these behaviors resist standardization and cannot be cleanly measured or scaled. In a perfectly efficient organization, no one asks how anyone else is doing unless the answer can be converted into performance. Help is offered only when it aligns with incentives. Time spent listening, reflecting, or caring is treated as friction in the system.
The problem is surprisingly common, namely that when organizations optimize for the system, they often end up sub-optimizing the subsystems within it. This is a familiar lesson from systems theory, but one that is easily forgotten. In the age of AI, the system increasingly appears to be designed around what machines do best, while humans are quietly downgraded to a supporting subsystem expected to adapt accordingly. We hear a great deal about augmentation, but in practice augmentation often means asking people to work in ways that better suit the technology rather than elevating the human contribution.
Talent, however, will not be elevated if human output continues to be judged by the same raw, quantitative metrics that define machine performance: speed, repetition, and operational efficiency. If you are simply running faster in the same direction, you will only get lost quicker (and maybe even lose the capacity to realize that you are lost). These apparent efficiency measures reward behavior that machines naturally excel at and penalize the very qualities that distinguish human work. They focus obsessively on output while ignoring input: the role of joy, curiosity, learning, skill development, and thoughtful deployment of expertise. In doing so, organizations risk building systems that are optimized for AI, but progressively impoverished of the human capabilities they claim to value most.
Inefficiency and new value
This is why efficiency so often feels dehumanizing. It removes the informal, relational, and moral dimensions of work that make organizations more than collections of tasks. Humans do not learn, trust, or collaborate best when they behave like streamlined processes. We improve through interactions that appear inefficient on paper but are foundational in practice. In this sense, the inefficiencies of acting human are not a failure of management but a feature of humanity. They are the social and psychological infrastructure that allows thinking, learning, and cooperation to occur at all, and the necessary counterweight to systems designed to optimize everything except what makes work worth doing.
Incidentally, inefficiency also plays a central role in the creation of new value, both in discovering better ways of doing existing things and in discovering entirely new things to do. Many important advances in science and business did not arise from tighter optimization or marginal efficiency gains, but from allowing room for exploration, deviation from plan, and attention to unexpected outcomes.
In science, this is often the product of curiosity-driven research rather than narrowly goal-directed problem solving. Alexander Fleming's observation in 1928 that a mold contaminant inhibited bacterial growth on a culture plate did not, by itself, produce a usable antibiotic, but it did reveal a phenomenon that later became penicillin once developed by others. Similarly, early work that eventually led to technologies such as CRISPR gene editing emerged from basic research into bacterial immune systems, conducted without any immediate application in mind. These discoveries were not accidents in the casual sense, but they did depend on researchers having the freedom and attentiveness to notice anomalies rather than discard them as inefficiencies.
The role of anomalies
Business innovation shows a comparable pattern. The adhesive behind Post-it Notes was not the outcome 3M originally sought, but its unusual properties were documented rather than rejected, and only later matched to a practical use. This kind of outcome depends less on speed or optimization than on organizational tolerance for ideas that lack an immediate commercial rationale. Systems optimized exclusively for efficiency tend to filter such anomalies out before their value becomes apparent.
Even in exploration and trade, progress has often followed from imperfect information and miscalculation rather than from optimal planning. European expansion into the Americas, for example, was driven in part by navigational errors and incorrect assumptions about geography. While hardly an argument in favor of error, it is a reminder that historical change frequently arises from deviations rather than from flawlessly executed plans.
The broader point is not that inefficiency guarantees innovation, but that innovation is unlikely without it. Systems designed to maximize efficiency excel at refining what is already known. They are far less effective at generating what is new. Allowing space for uncertainty, exploration, and apparent waste is not an indulgence, but a necessary condition for discovering value that cannot be specified in advance.
This distinction is captured neatly in the work of Dean Keith Simonton, who has argued that innovation follows a two-step process: random variation followed by rational selection. New ideas arise from error, experimentation, and departures from established rules, and only later are refined and selected for value. AI is exceptionally strong at the second step. It can evaluate options, optimize choices, and select efficiently among existing alternatives. What it cannot meaningfully do is generate the kind of genuine variation and rule breaking from which truly novel ideas emerge. That responsibility remains human. The risk in an AI-saturated environment is that organizations double down on selection while starving variation, becoming ever more efficient at refining yesterday's ideas.
Reheating ideas
If, in the name of efficiency, creativity itself is outsourced to AI, the result is not randomness but prefabrication: synthetic re-combinations of existing ideas, smoothed and averaged across prior human output. This often resembles creativity without delivering it, more akin to reheating ideas than inventing new ones. The food analogy is instructive. Cooking a proper meal is inefficient and time-consuming, while a frozen meal is faster and perfectly adequate. But no one serves a microwaved lasagna to an important guest and mistakes it for craft. The extra effort is the point.
The same logic applies to thinking and work. Deep thinking is inefficient, but it converts familiarity into understanding. Stepping outside established processes may slow things down, but it is often how better methods are discovered. Time spent feeding curiosity rarely pays off immediately, but it expands skills, connections, and optionality. Even social inefficiencies, such as investing time in relationships that do not yield immediate returns, build trust and create opportunities that efficiency metrics fail to capture.
In this sense, inefficiency is not the opposite of effectiveness but a different path to it. Systems optimized solely for speed and output may function smoothly in the short term, but they do so by eroding the very conditions that allow learning, adaptation, and originality to emerge.
Think about how many emails you receive each day, and how many of those include the phrase "please find attached" in the body. One X user has made a plea to retire the phrase, a relic left over from a time when business communication relied on typewritten letters posted in envelopes, which actually included attached documents to be found.
The post quickly went viral, gaining nearly 15 million views since it was posted earlier this week.
While the user doesn't elaborate on why exactly they personally take issue with the phrase, or what to say instead, the post had the desired effect, with many weighing in with their own takes on modern email etiquette.
Some agreed that the phrase is stuffy and outdated.
"Please find attached adds zero information, sounds robotic, and does not respect the reader's time," one wrote. "'Here's the file' does the job better than a sentence that adds zero information," another added.
It's true that these days email attachments are instantly accessible, clearly marked, and don't require a physical search. While young workers have no qualms about including memes, emojis, slang, and abbreviations in their emails, and despite nearly one in four employees now using AI to help write emails, "please find attached" has somehow slipped through the net.
Others staunchly defended the use of the tried-and-tested phrase.
"But if I don't type those magic words, how will Outlook know to warn me when I inevitably forget to actually attach the file?" one wrote.
"Baby, no," another added. "The people are stupid."
Many of us are trapped in a terminal cycle of reaching out and circling back, with dozens of corporate buzzwords and phrases that some argue make smart people sound less intelligent. But if you're in the market for some more creative ways to signal there's a PDF attached that needs attention, the replies to the X post are a goldmine.
"Behold, the attachment," one X user suggested as an alternative.
For a sinister edge, "There are attachments in this email with us right now," another put forth, or "Watch out for the attachment below."
Feeling pumped about the PDF attached? "Get a load of this MF attachment" is another option.
Or, alternatively, feeling deflated? "Find attached, if you even care" works here.
And if you'd rather the receiver didn't open the attachment, you could simply put: "Please don't find attached," one wrote. "It'll only be more work for us both."
China moved on Thursday to curb a fierce price war among automakers that has caused massive losses for the industry, after passenger car sales dropped nearly 20% in January from the year before, the steepest decline in almost two years.
The State Administration for Market Regulation released guidelines for manufacturers, dealers, and parts suppliers aimed at preventing a race-to-the-bottom price war.
They ban automakers from setting prices below the cost of production to squeeze out competitors or monopolize the market. Violators "may face significant legal risks," the regulator warned.
The rules also target deceptive pricing strategies and price fixing between parts suppliers and auto manufacturers.
Passenger car sales in China fell 19.5% in January from a year earlier, according to the China Association of Automobile Manufacturers. That was the biggest percentage drop since February 2024.
About 1.4 million passenger cars were sold in January, down from 2.2 million in December, CAAM said.
Weakening demand reflects a reluctance of cash-strapped buyers to splash out on big purchases. Sales also have suffered from a cut in tax exemptions for EV purchases, coupled with uncertainties over whether trade-in subsidies for EV purchases will continue after some regions phased them out, auto analysts said.
The aggressive price war in China's auto sector has caused an estimated loss of 471 billion yuan ($68 billion) in output value across the whole industry over the past three years, Li Yanwei, a member of the China Automobile Dealers Association, wrote recently.
Analysts expect domestic demand to dip this year. S&P has forecast sales of light vehicles, including passenger cars, in China will fall up to 3% in 2026.
However, Chinese automakers are gaining ground in global markets. China’s exports of passenger cars jumped 49% year-on-year to 589,000 in January.
"We don't foresee a loss in momentum for the Chinese auto industry this year," said Claire Yuan, director of corporate ratings for China autos at S&P Global Ratings.
Chinese automakers such as BYD, the country's largest and the one that overtook Tesla as the world's top electric vehicle maker, are targeting markets in Europe and Latin America as they confront intense competition in both prices and lineups at home due to oversupply.
Analysts at Citi expect China's car exports could jump 19% this year, driven by exports of electric vehicles and plug-in hybrids.
BYD is targeting around 1.3 million overseas car sales in 2026, up from 1.05 million last year. Other major Chinese automakers have also set ambitious sales targets with a focus on exports.
Last month, Canada agreed to cut its hefty 100% tariff on China-made EV imports in a move welcomed by Chinese carmakers. China also recently reached a deal with the European Union that could allow more of its EVs to enter the European market.
Earlier this week, the European Commission accepted a request by the German auto group Volkswagen to exempt one of its China-built EV models, sold under the CUPRA brand, from import tariffs as long as those vehicles are sold at or above an agreed minimum import price, in the first such exemption.
China's commerce ministry said Thursday that it welcomed the move and that it hopes to see more such exemptions.
Chan Ho-Him, AP business writer
Adam Mosseri, the head of Meta's Instagram, testified Wednesday during a landmark social media trial in Los Angeles that he disagrees with the idea that people can be clinically addicted to social media platforms.

The question of addiction is a key pillar of the case, in which plaintiffs seek to hold social media companies responsible for harms to children who use their platforms. Meta Platforms and Google's YouTube are the two remaining defendants in the case; TikTok and Snap have settled.

At the core of the Los Angeles case is a 20-year-old identified only by the initials "KGM," whose lawsuit could determine how thousands of similar lawsuits against social media companies play out. She and two other plaintiffs have been selected for bellwether trials, essentially test cases for both sides to see how their arguments play out before a jury.

Mosseri, who has headed Instagram since 2018, said it's important to differentiate between clinical addiction and what he called problematic use. The plaintiff's lawyer, however, presented quotes directly from Mosseri in a podcast interview a few years ago in which he used the term addiction in relation to social media use, but Mosseri clarified that he was probably using the term "too casually," as people tend to do.

Mosseri said he was not claiming to be a medical expert when questioned about his qualifications to comment on the legitimacy of social media addiction, but said someone "very close" to him has experienced serious clinical addiction, which is why he said he was "being careful with my words."

He said he and his colleagues use the term "problematic use" to refer to "someone spending more time on Instagram than they feel good about, and that definitely happens."

It's "not good for the company, over the long run, to make decisions that profit for us but are poor for people's well-being," Mosseri said.

Mosseri and the plaintiff's lawyer, Mark Lanier, engaged in a lengthy back-and-forth about cosmetic filters on Instagram that changed people's appearance in a way that seemed to promote plastic surgery.

"We are trying to be as safe as possible but also censor as little as possible," Mosseri said.

In the courtroom, bereaved parents of children who have had social media struggles seemed visibly upset during a discussion about body dysmorphia and cosmetic filters. Meta shut down all third-party augmented reality filters in January 2025. After the displays of emotion, the judge reminded members of the public on Wednesday not to make any indication of agreement or disagreement with testimony, saying that it would be "improper to indicate some position."

During cross-examination, Mosseri and Meta lawyer Phyllis Jones tried to counter the idea, raised in Lanier's questioning, that the company looks to profit off of teens specifically. Mosseri said Instagram makes "less money from teens than from any other demographic on the app," noting that teens don't tend to click on ads and many don't have disposable income to spend on products from the ads they receive.

During his opportunity to question Mosseri a second time, Lanier was quick to point to research showing that people who join social media platforms at a young age are more likely to stay on the platforms longer, which he said makes teen users prime for meaningful long-term profit.

"Often people try to frame things as you either prioritize safety or you prioritize revenue," Mosseri said. "It's really hard to imagine any instance where prioritizing safety isn't good for revenue."

Meta CEO Mark Zuckerberg is expected to take the stand next week.

In recent years, Instagram has added a slew of features and tools it says have made the platform safer for young people. But these do not always work. A report last year, for instance, found that teen accounts researchers created were recommended age-inappropriate sexual content, including "graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity."

In addition, Instagram also recommended a "range of self-harm, self-injury, and body image content" on teen accounts that the report says "would be reasonably likely to result in adverse impacts for young people, including teenagers experiencing poor mental health, or self-harm and suicidal ideation and behaviors." Meta called the report "misleading, dangerously speculative" and said it misrepresents its efforts on teen safety.

Meta is also facing a separate trial in New Mexico that began this week.
By Kaitlyn Huamani and Barbara Ortutay, AP Technology Writers
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
Is AI slop code here to stay?
A few months ago I wrote about the dark side of vibe coding tools: they often generate code that introduces bugs or security vulnerabilities that surface later. They can solve an immediate problem while making a codebase harder to maintain over time. It's true that more developers are using AI coding assistants, and using them more frequently and for more tasks. But many seem to be weighing the time saved today against the cleanup they may face tomorrow.
When human engineers build projects with lots of moving parts and dependencies, they have to hold a vast amount of information in their heads and then find the simplest, most elegant way to execute their plan. AI models face a similar challenge. Developers have told me candidly that AI coding tools, including Claude Code and Codex, still struggle when they need to account for large amounts of context in complex projects. The models can lose track of key details, misinterpret the meaning or implications of project data, or make planning mistakes that lead to inconsistencies in the code: all things that an experienced software engineer would catch.
The most advanced AI coding tools are only now beginning to add testing and validation features that can proactively surface problematic code. When I asked OpenAI CEO Sam Altman during a recent press call whether Codex is improving at testing and validating generated code, he became visibly excited. Altman said OpenAI likes the idea of deploying agents to work behind developers, validating code and sniffing out potential problems.
Indeed, Codex can run tests on code it generates or modifies, executing test suites in a sandboxed environment and iterating until the code passes or meets acceptance criteria defined by the developer. Anthropic, meanwhile, has built its own testing and validation routines into Claude Code. Some developers say Claude is stronger at higher-level planning and understanding intent, while Codex is better at following specific instructions and matching an existing codebase.
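The generate-test-iterate loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not how Codex or Claude Code are actually implemented: in a real setup, `generate` would be a model API call and `run_tests` would execute a test suite in a sandbox, but the control flow is the same.

```python
def generate_until_green(generate, run_tests, max_attempts=3):
    """Iterate: generate a candidate, run tests, feed failures back.

    `generate` takes feedback text (empty on the first attempt) and
    returns candidate code; `run_tests` takes the candidate and returns
    (passed, output). Both are supplied by the caller -- in practice a
    model API call and a sandboxed test runner.
    """
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(feedback)
        passed, output = run_tests(candidate)
        if passed:
            # Candidate meets the acceptance criteria; stop iterating.
            return candidate, attempt
        # Test failures become context for the next generation attempt.
        feedback = output
    return None, max_attempts


# Toy demo: the "model" only produces a correct add() once it sees feedback.
def fake_generate(feedback):
    if feedback:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"  # buggy first attempt


def fake_run_tests(candidate):
    namespace = {}
    exec(candidate, namespace)
    ok = namespace["add"](2, 3) == 5
    return ok, "" if ok else "FAILED: add(2, 3) != 5"


code, attempts = generate_until_green(fake_generate, fake_run_tests)
```

In this toy run the loop succeeds on the second attempt, because the first attempt's failure output steers the second generation. The interesting design question, as the developers quoted above suggest, is what goes into `feedback` and how the acceptance criteria are defined.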
The real question may be what developers should expect from these AI coding tools. Should they be held to the standard of a junior engineer whose work may contain errors and requires careful review? Or should the bar be higher? Perhaps the goal should be not only to avoid generating slop code but also to act as a kind of internal auditor, catching and fixing bad code written by humans.
Altman likes that idea. But judging by comments from another OpenAI executive, Greg Brockman, it's not clear the company believes that standard is fully attainable. Brockman, OpenAI's president, suggests in a recently posted set of AI coding guidelines that AI slop code isn't something to eliminate so much as a reality to manage. "Managing AI generated code at scale is an emerging problem, and will require new processes and conventions to keep code quality high," Brockman wrote on X.
SaaS stocks still smarting from last week's SaaSpocalypse
Last week, shares of several major software companies tumbled amid growing anxiety about AI. The share prices of ServiceNow, Oracle, Salesforce, AppLovin, Workday, Intuit, CrowdStrike, FactSet Research, and Thomson Reuters fell so sharply that Wall Street types began to refer to the event as the SaaSpocalypse. The stocks fell sharply on two pieces of news. First, late in the day on Friday, January 30, Anthropic announced a slate of new AI plugins for its Cowork AI tool aimed at information workers, including capabilities for legal, product management, marketing, and other functions. Then, on February 4, the company unveiled its most powerful model yet, Claude Opus 4.6, which now powers the Claude chatbot, Claude Code, and Cowork.
For investors, Anthropic's releases raised a scary question: How will old-school SaaS companies survive when their products are already being challenged by AI-native tools?
Although software shares rebounded somewhat later in the week, as analysts circulated reassurances that many of these companies are integrating new AI capabilities into their products, the unease lingers. In fact, many of the stocks mentioned above have yet to recover to their late-January levels. (Some SaaS players, like ServiceNow, are now even using Anthropic's models to power their AI features.)
But it's a sign of the times, and investors will continue to watch carefully for signs that enterprises are moving on from traditional SaaS solutions to newer AI apps or autonomous agents.
China is flexing its video models
This week, some new entrants in the race for best model are very hard to miss. X is awash with posts showcasing video generated by new Chinese video generation models: Seedance 2.0 from ByteDance and Kling 3.0 from Kuaishou. The video is impressive. Many of the clips are difficult to distinguish from traditionally shot footage, and both tools make it easier to edit and steer the look and feel of a scene. AI-generated video is getting scary-good; its main limitation is that the generated videos are still pretty short.
Sample videos from Kling 3.0, which range from 3 seconds to 15 seconds, feature smooth scene transitions and a variety of camera angles. The characters and objects look consistent from scene to scene, a quality that video models have struggled with. The improvements are owed in part to the model's ability to glean the creator's intent from prompts, which can include reference images and videos. Kling also includes native audio generation, meaning it can generate speech, sound effects, ambient audio, lip-sync, and multi-character dialogue in a number of languages, dialects, and accents.
ByteDance's Seedance 2.0, like Kling 3.0, generates video with multiple scenes and multiple camera angles, even from a single prompt. One video cut from a shot inside a Learjet in flight to a shot from outside the aircraft. The motion looks smooth and realistic, with good character consistency across frames and scenes, and the model can handle complex high-motion scenes like fights, dances, and action sequences. Seedance can be prompted with text, images, reference videos, and audio. And like Kling, Seedance can generate synchronized audio, including voices, sound effects, and lip-sync, in multiple languages.
More AI coverage from Fast Company:
We’re entering the era of AI unless proven otherwise
A Palantir cofounder is backing a group attacking Alex Bores over his work with . . . Palantir
Why a Korean film exec is betting big on AI
Mozilla’s new AI strategy marks a return to its rebel alliance roots
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
Russia has attempted to fully block WhatsApp in the country, the company said, the latest move in an ongoing government effort to tighten control over the internet.

A WhatsApp spokesperson said late Wednesday that the Russian authorities’ action was intended to “drive users to a state-owned surveillance app,” a reference to Russia’s own state-supported MAX messaging app, which critics see as a surveillance tool.

“Trying to isolate over 100 million people from private and secure communication is a backwards step and can only lead to less safety for people in Russia,” the WhatsApp spokesperson said. “We continue to do everything we can to keep people connected.”

Russia’s government has already blocked major social media platforms like Twitter, Facebook, and Instagram, and has ramped up other online restrictions since Russia’s full-scale invasion of Ukraine in 2022.

Kremlin spokesman Dmitry Peskov said WhatsApp owner Meta Platforms should comply with Russian law to see the app unblocked, according to the state Tass news agency.

Earlier this week, Russian communications watchdog Roskomnadzor said it will introduce new restrictions on the Telegram messaging app after accusing it of refusing to abide by the law. The move triggered widespread criticism from military bloggers, who warned that Telegram is widely used by Russian troops fighting in Ukraine and that throttling it would derail military communications.

Despite the announcement, Telegram has largely been working normally. Some experts say it is a more difficult target than WhatsApp, and some Russian experts said that blocking WhatsApp would free up technological resources and allow authorities to focus fully on Telegram, their priority target.

Authorities had previously restricted access to WhatsApp before moving to fully ban it Wednesday.

Under President Vladimir Putin, authorities have engaged in deliberate and multipronged efforts to rein in the internet.
They have adopted restrictive laws, banned websites and platforms that don’t comply, and focused on improving technology to monitor and manipulate online traffic.

Russian authorities have throttled YouTube and methodically ramped up restrictions against popular messaging platforms, blocking Signal and Viber and banning online calls on WhatsApp and Telegram. In December, they imposed restrictions on Apple’s video calling service FaceTime.

While it’s still possible to circumvent some of the restrictions by using virtual private network services, many of those are routinely blocked, too.

At the same time, authorities have actively promoted the “national” messaging app MAX, which critics say could be used for surveillance. The platform, touted by developers and officials as a one-stop shop for messaging, online government services, payments, and more, openly declares that it will share user data with authorities upon request. Experts also say it doesn’t use end-to-end encryption.
Associated Press
Daniel Kokotajlo predicted the end of the world would happen in April 2027. In AI 2027, a document outlining the impending impacts of AI published in April 2025, the former OpenAI employee and several peers announced that by April 2027, unchecked AI development would lead to superintelligence and, consequently, destroy humanity.
The authors, however, are walking back their predictions. Kokotajlo now forecasts that superintelligence will land in 2034, but he doesn’t know whether, or when, AI will destroy humanity.
In AI 2027, Kokotajlo argued that superintelligence will emerge through fully autonomous coding, enabling AI systems to drive their own development. The release of ChatGPT in 2022 accelerated predictions around artificial general intelligence, with some forecasting its arrival within years rather than decades.
These predictions attracted widespread attention. Notably, U.S. Vice President JD Vance reportedly read AI 2027 and later urged Pope Leo XIV, who has underscored AI as a main challenge facing humanity, to provide international leadership to avoid the outcomes listed in the document. On the other hand, people like Gary Marcus, professor emeritus of psychology and neural science at New York University, dismissed AI 2027 as a work of fiction, even calling various predictions “pure science fiction mumbo jumbo.”
As researchers and the public alike begin to reckon with how jagged AI performance is, AGI timelines are starting to stretch again, according to Malcolm Murray, an AI risk management expert and one of the authors of the International AI Safety Report. “For a scenario like AI 2027 to happen, [AI] would need a lot of more practical skills that are useful in real-world complexities,” Murray said.
Still, developing AI models that can train themselves remains a steady goal for leading AI companies. OpenAI CEO Sam Altman has set internal goals for a “true automated AI researcher” by March 2028.
However, he’s not entirely confident in the company’s ability to develop superintelligence. “We may totally fail at this goal,” he admitted on X, “but given the extraordinary potential impacts we think it is in the public interest to be transparent about this.”
And so, superintelligence may still be possible, but when it arrives and what it will be capable of remains far murkier than AI 2027 once suggested.
Leila Sheridan
This article originally appeared on Fast Company’s sister publication, Inc.
Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.
For most of modern finance, one number has quietly dictated who gets ahead and who gets left out: the credit score. It was a breakthrough when it arrived in the 1950s, becoming an elegant shortcut for a complex decision. But shortcuts age. And in a world driven by data, digital behavior, and real-time signals, the score is increasingly misaligned with how people actually live and manage money.
We’re now at a turning point. A foundational system, long considered untouchable, is finally being reconstructed using AI, specifically advanced machine learning models built for risk prediction, to extract more intelligence from existing data. These are rigorously tested, well-governed systems that help lenders see risk with greater nuance and clarity. And the results are reshaping core economics for lenders.
THE CREDIT SCORE WASN’T BUILT FOR MODERN CONSUMERS
Legacy credit scores rely on a narrow slice of information, updated at a pace that reflects the black-and-white television era. A single late payment can overshadow years of financial discipline. Data updates lag behind real behavior. And lenders are forced to make million-dollar decisions using a tool that can’t see volatility, nuance, or context.
A single, generic credit score is a compromise by design. National credit scores are designed to work reasonably well across thousands of institutions, but not optimally for any specific one. That becomes clear when you compare regional differences. A lender in an agricultural region may see very different income seasonality and cash-flow patterns than a lender in a major metro area, differences that a universal score was never designed to capture. Financial institutions need models built around their actual membership that can adjust to different financial histories and behaviors.
That rigidity has created the gap we’re now seeing across the economy. Consumers feel squeezed, lenders feel exposed, and businesses struggle to grow in a risk environment that looks nothing like the one their scoring tools were built for.
Modern machine-learning models give lenders something the score never could: a panoramic view instead of a narrow window.
HOW AI CHANGES THE GAME
The data in credit files has long been there. What’s changed is the modeling: modern machine learning systems that can finally make full use of those signals. These models can evaluate thousands of factors inside bureau files, not just the static inputs but the patterns behind them:
How payment behavior changes over time
Which fluctuations are warning signs versus temporary noise
How multiple variables interact in ways a traditional score can’t measure
This lets lenders differentiate between someone who is truly risky and someone who is momentarily out of rhythm. The impact is profound: more approvals without more losses, stronger compliance without more overhead, and decisions that align with how people actually manage their finances today.
For leadership teams, this also means making intentional choices about who to serve and how to allocate capital. Tailored models let institutions focus their resources on the customers they actually want to reach, rather than relying on a one-size-fits-all score.
AI FIXES SOMETHING WE DON’T TALK ABOUT ENOUGH
There’s widespread concern about AI bias, and rightly so. When algorithms aren’t trained on a representative set of data, or aren’t monitored after deployment, the results can be biased. In lending, these models aren’t deployed on faith; they’re validated, back-tested, and monitored over time, with clear documentation of the factors driving each decision. Modern explainability techniques, now well established in credit risk, can give regulators and consumers a clearer view into how and why decisions are made.
Business leaders should also consider that there is bias embedded in manual underwriting. Human decisions, especially in high-volume, time-pressured environments, vary from reviewer to reviewer, case to case, hour to hour.
Machine learning models that use representative data, are regularly monitored, and make explainable, transparent decisions give humans a dependable baseline, allowing them to focus on exceptions, tough cases, and strategy.
THE NEW ADVANTAGE FOR BUSINESS LEADERS
The next era of lending will be defined by companies that operationalize AI with discipline, building in strong governance, clear guardrails, and transparency. Those who do will see higher approval rates, lower losses, faster decisions with fewer manual bottlenecks, and fairer outcomes that reflect real behavior, not outdated shortcuts.
For the first time in 70 years, we’re able to bring real, impactful change to one of the most influential drivers in the economy.
THE FUTURE ISN’T A SCORE, IT’S UNDERSTANDING
If the last century of lending was defined by a single, blunt number, the next century will be defined by intelligence. By the ability to interpret risk with nuance, adapt to fast-moving economic signals, and extend opportunity to people who have long been underestimated by the system.
AI won’t make lending flawless. But it gives us the clearest path we’ve ever had toward a credit ecosystem that is more accurate, more resilient, and far fairer than the one we inherited.
And for leaders focused on growth, innovation, and long-term competitiveness, that shift is transformational.
Sean Kamkar is CTO of Zest AI.
Perusing the grocery aisle in the Westside Market on 23rd Street in Manhattan, you might not even notice the screens. They look just like paper price labels and, alongside a bar code, use a handwriting-style font we’ve come to associate with a certain merchant folksiness. They’re not particularly bright or showy. The only clues that they’re not ordinary sticky shelf labels are a barely distinguishable light bulb and, on some, a small QR code.
These are electronic shelf labels, chip-enabled screens that some stores are now using to display product prices. Unlike their paper predecessors, the prices aren’t printed in ink but rendered in pixels, and they can change instantaneously, at any time. The labels also come with additional features. An LED light can switch on to flag something, perhaps a product that needs restocking, explains Vusion, the company that made the labels Westside Market is now using. The QR codes are designed to help customers find more information about a product, or to integrate with a personalized shopping list someone might have.
Of course, these labels aren’t just labels but endpoints of a much larger effort to digitize every way we now interface with products. “You have a network in the store. You send the information that you want to transmit to the labels, and there you go,” says Finn Wikander, the chief product officer at Pricer, another company that’s manufacturing ESLs with the hope of making them a fixture of 21st-century shopping.
Unsurprisingly, electronic shelf labels have become a flashpoint for consumer anxiety. The companies selling the devices, and the stores buying them, say the technology isn’t about screwing people over but about making their businesses easier to run. Automating price changes eliminates hours spent replacing labels. It also makes it simpler to respond to new tariffs or to account for rising inflation.
But in a world spooked by dynamic pricing, electronic shelf labels can look to some like a goblin of digitization, a symptom of late-stage Silicon Valley campaigns to streamline and optimize seemingly every element of commerce. Even members of Congress have raised suspicions about the technology, arguing that it enables price gouging and discrimination, particularly as it becomes more common in the United States.
“Historically, when we thought about brick and mortar stores, prices were relatively stable,” Vicki Morwitz, a Columbia Business School professor who focuses on marketing and consumer behavior, tells Fast Company. “These electronic shelf tags break that assumption which makes pricing feel less stable. Even if average prices aren’t necessarily going up, that shelf instability can become a psychological flash point.”
Screenified everything
A handful of companies sell this technology as part of broader enterprise software packages. There’s Pricer, a Swedish firm, and Vusion, headquartered in France. Solum operates out of South Korea, and Opticon, known for barcode scanners, is also in the mix. Electronic shelf labels can also be bought, ahem, off the shelf and integrated into a store’s Bluetooth network, no enterprise startup required.
The pitch for these devices is exactly what unsettles so many shoppers: Electronic shelf labels make it much easier for stores to change prices dynamically and more frequently. The companies that manufacture and deploy these tools say there are legitimate reasons to do so. For example, a store might raise prices if suppliers increase costs, or cut them quickly when a product is nearing its expiration date.
ESLs also allow chains to keep prices consistent across locations and respond more quickly to competitors (especially valuable at a time when shoppers are already carrying smartphones to compare prices between stores). “Most consumers today are used to either doing their own scanning or use ChatGPT or Gemini to find the best offer or use price comparison sites,” says Pricer’s Wikander.
Then there’s labor. Employees might spend hours replacing labels for a price surge or sale. “The idea is to liberate people from very tedious tasks in a store. Changing prices could be one. Launching promotions could be one,” argues Loïc Oumier, a marketing executive with Vusion. There are also regulatory considerations: France, for instance, passed a law mandating that prices at checkout match the prices advertised on aisles, which pushed stores in that country to adopt the technology, says Wikander.
They are now rolling out more broadly in the United States, especially at large chains. Vusion says its labels are in use at Fresh Market, Mattress Firm, and Leon’s in Canada. Walmart, which declined to comment for this story, announced in 2024 that it would begin installing electronic shelf labels, with plans to bring Vusion’s technology to more than 2,000 stores by the end of 2026. Tests or deployments have appeared at Whole Foods, Schnucks, and even smaller retailers like Westside Market.
The reception can be frosty. While there are some scenarios, like Uber rides and airline tickets, where consumers have come to accept rapidly changing costs, the practice often feels jarring. That tension was evident in 2024, when Wendy’s faced backlash after announcing plans to install digital menu boards and later promised it wouldn’t introduce surge pricing for burgers. Shoppers also worry about price gouging, where retailers spike prices during emergencies. “Exploiting consumers when they have no real alternatives or limited alternatives,” says Columbia’s Morwitz. “The problem is consumers may feel exploited long before an economist would say they are.”
There is also the understandable anxiety that the technology is designed to cut jobs. Some workers, as reported in The Nation, say the labels do not simplify their work but replace one kind of labor with another form of algorithmic babysitting. Unlike paper tags, screens can break, and computer programs fall victim to bugs and internet outages. Employees at one chain store operated by Kroger, which has also deployed the tech, have apparently complained that the labels heat up stores. (Kroger did not respond to Fast Company’s request for comment.)
Concerns reach D.C.
Lawmakers have taken notice. Democratic Senators Elizabeth Warren of Massachusetts and Bob Casey of Pennsylvania wrote to Kroger after the company announced it would introduce the technology, amid accusations that it was using facial recognition to show different customers different prices. In a letter of response obtained by Fast Company, Kroger defended the rollout, saying ESLs helped it manage the 1.3 billion price changes it implements each year and freed up associates to assist customers. Paula Walsh, Kroger’s director of retail operations, denied in the letter that the company was using facial recognition or collecting personal information from customers through the tags.
“Kroger dodged my questions but confirmed my key concerns: It’s using electronic shelf labels to change grocery prices in real time and collect data that could be used to jack up grocery prices for Americans,” Warren tells Fast Company. “I’ll keep pushing to make sure consumers aren’t being exploited while they work hard to put food on the table.”
Wikander, for his part, dismisses the idea that retailers would use the technology that way. “Just because you have the possibility of screwing your customers doesn’t mean that retailers will do that,” he says. “I don’t think retailers would typically do it, because the consumers are smarter than that.” Wikander says the investment typically pays for itself in around a year or two and that, while the upfront cost is big, the labels last for many years.
Indeed, for all the eeriness surrounding the labels, research shows that it might not be much of a change, price-wise, for either consumers or businesses. Ioannis Stamatopoulos, a business professor at the University of Texas at Austin, says there is little evidence that digital shelf labels lead to significant price swings. He points to a 2025 study involving an American grocery store that found no evidence of the practice, and another involving an international grocery store that showed prices tended to decline, particularly for items with short shelf lives.
Much of his research, at least, suggests that the labels are most effective at curbing food waste, since they make it easier for stores to offer sales on products like bananas and strawberries when they’re about to go bad.
For now, the future of grocery shopping may look almost exactly like the past, except the price tag is oh-so-faintly glowing.