When one of the founders of modern AI walks away from one of the world's most powerful tech companies to start something new, the industry should pay attention.
Yann LeCun's departure from Meta after more than a decade shaping its AI research is not just another leadership change. It highlights a deep intellectual rift about the future of artificial intelligence: whether we should continue scaling large language models (LLMs) or pursue systems that understand the world, not merely echo it.
Who Yann LeCun is, and why it matters
LeCun is a French American computer scientist widely acknowledged as one of the "godfathers of AI." Alongside Geoffrey Hinton and Yoshua Bengio, he received the Association for Computing Machinery's 2018 A.M. Turing Award for foundational work in deep learning.
He joined Meta (then Facebook) in 2013 to build its AI research organization, eventually known as FAIR (Facebook AI Research, later Fundamental AI Research), a lab that helped develop foundational tools such as PyTorch and contributed to early versions of Llama.
Over the years, LeCun became a global figure in AI research, frequently arguing that current generative models, powerful as they are, do not constitute true intelligence.
What led him to leave Meta
LeCun's decision to depart, confirmed in late 2025, was shaped by both strategic and philosophical differences with Meta's evolving AI focus.
In 2025, Meta reorganized its AI efforts under Meta Superintelligence Labs, a division emphasizing rapid product development and aggressive scaling of generative systems. This reorganization consolidated research, product, infrastructure, and LLM initiatives under leadership distinct from LeCun's traditional domain.
Within this new structure, LeCun reported not to a pure research leader, but to a product and commercialization-oriented chain of command, a sign of shifting priorities.
But more important than that, there's a deep philosophical divergence: LeCun has been increasingly vocal that LLMs, the backbone of generative AI, including Meta's Llama models, are limited. They predict text patterns, but they do not reason or understand the physical world in a meaningful way. Contemporary LLMs excel at surface-level mimicry, but lack robust causal reasoning, planning, and grounding in sensory experience.
As he has said and written, LeCun believes LLMs are useful, but they are not a path to human-level intelligence.
This tension was compounded by strategic reorganizations inside Meta, including workforce changes, budget reallocations, and a cultural shift toward short-term product cycles at the expense of long-term exploratory research.
The big idea behind his new company
LeCun's new venture is centered on alternative AI architectures that prioritize grounded understanding over language mimicry.
While details remain scarce, some elements have emerged:
The company will develop AI systems capable of real-world perception and reasoning, not merely text prediction.
It will focus on world models, AI that understands environments through vision, causal interaction, and simulation rather than only statistical patterns in text.
LeCun has suggested the goal is systems that understand the physical world, have persistent memory, can reason, and can plan complex actions.
In LeCun's own framing, this is not a minor variation on today's AI: it's a fundamentally different learning paradigm that could unlock genuine machine reasoning.
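To make the distinction concrete, here is a toy sketch in Python, with made-up numbers and nothing resembling LeCun's actual architecture. It shows what "learning a world model" means in the simplest possible case: fitting a model of an environment's dynamics from interaction data, then using it to predict the outcome of a planned sequence of actions, rather than predicting the next token of text.

```python
# Toy illustration of a "world model": learn environment dynamics from
# interaction data, then predict outcomes of actions without executing them.
import numpy as np

rng = np.random.default_rng(0)

# Simulated environment: a point moving in 1D.
# State = [position, velocity]; action = acceleration applied at each step.
def step(state, action, dt=0.1):
    pos, vel = state
    vel = vel + action * dt
    pos = pos + vel * dt
    return np.array([pos, vel])

# Collect interaction data (state, action, next state).
states, actions, next_states = [], [], []
s = np.array([0.0, 0.0])
for _ in range(500):
    a = rng.uniform(-1, 1)
    s_next = step(s, a)
    states.append(s)
    actions.append([a])
    next_states.append(s_next)
    s = s_next

X = np.hstack([np.array(states), np.array(actions)])  # inputs: state + action
Y = np.array(next_states)                              # targets: next state

# Fit a linear dynamics model by least squares: Y ~ X @ W.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Use the learned model to predict the result of a planned action sequence,
# which is the essence of prediction-based planning.
s = np.array([0.0, 0.0])
for a in [1.0, 1.0, -0.5]:
    s = np.hstack([s, a]) @ W
print("predicted state after plan:", s)
```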
Although LeCun and other insiders have not released official fundraising figures, multiple reports indicate that LeCun is in early talks with investors and that the venture is attracting attention precisely because of his reputation and vision.
Why this matters for the future of AI
LeCun's break with Meta points to a larger debate unfolding across the AI industry.
LLMs versus world models: LLMs have dominated public attention and corporate strategy because they are powerful, commercially viable, and increasingly useful. But there is growing recognition, echoed by researchers like LeCun, that understanding, planning, and physical reasoning will require architectures that go beyond text.
Commercial urgency versus foundational science: Big Tech companies are understandably focused on shipping products and capturing market share. But foundational research, the kind that may not pay off for years, requires a different timeline and incentive structure. LeCun's exit underscores how those timelines can diverge.
A new wave of AI innovation: If LeCun's new company succeeds in advancing world models at scale, it could reshape the AI landscape. We may see AI systems that not only generate text but also predict outcomes, make decisions in complex environments, and reason about cause and effect.
This would have profound implications across industries, from robotics and autonomous systems to scientific research, climate modeling, and strategic decision-making.
What it means for Meta and the industry
Meta's AI strategy increasingly looks short-term, shallow, and opportunistic, shaped less by a coherent research vision than by Mark Zuckerberg's highly personalistic leadership style. Just as the metaverse pivot burned tens of billions of dollars chasing a narrative before the technology or market was ready, Meta's current AI push prioritizes speed, positioning, and headlines over deep, patient inquiry.
In contrast, organizations like OpenAI, Google DeepMind, and Anthropic, whatever their flaws, remain anchored in long-horizon research agendas that treat foundational understanding as a prerequisite for durable advantage. Meta's approach reflects a familiar pattern: abrupt strategic swings driven by executive conviction rather than epistemic rigor, where ambition substitutes for insight and scale is mistaken for progress. Yann LeCun's departure is less an anomaly than a predictable consequence of that model.
But LeCun's departure is also a reminder that the AI field is not monolithic. Different visions of intelligence, whether generative language, embodied reasoning, or something in between, are competing for dominance.
Corporations chasing short-term gains will always have a place in the ecosystem. But visionary research, the kind that might enable true understanding, may increasingly find its home in independent ventures, academic partnerships, and hybrid collaborations.
A turning point in AI
LeCun's decision to leave Meta and pursue his own vision is more than a career move. It is a signal: that the current generative AI paradigm, brilliant though it is, will not be the final word in artificial intelligence.
For leaders in business and technology, the question is no longer whether AI will transform industries; it's how AI will evolve next. LeCun's new line of research is not unique: other companies are pursuing the same idea. And this idea might not just shape the future of AI research; it could define it.
TikTok's U.S. operations are now managed by a new American joint venture, ending a long-standing debate over whether the app would be permanently banned in the United States. The good news for TikTok users is that this deal guarantees that the app will continue to operate within America's borders.
But there's some bad news, too.
Successive U.S. administrations, under both Biden and Trump, argued that TikTok posed a national security threat to America and its citizens, partly because of the data the app collected about them. While all social media apps collect data about their users, officials argued that TikTok's data collection was a danger (while, say, Facebook's was not) because the world's most popular short-form video app was owned by ByteDance, a Chinese company.
The irony is that TikTok will actually collect more data about its users now than it did under ByteDance ownership. The company's new, mostly American owners (Larry Ellison's Oracle, private equity firm Silver Lake, and the Emirati investment company MGX) made this clear in a recent update to TikTok's privacy policy and its terms of service.
If this new data collection unnerves you, there are some things you can do to mitigate it.
How to stop TikTok's new U.S. owners from getting your precise location
When TikTok's U.S. operations were still owned by ByteDance, the app did not collect GPS location data from the phones of users in the United States. TikTok's new U.S. owners have now changed that policy, stating: "if you choose to enable location services for the TikTok app within your device settings, we collect approximate or precise location information from your device."
While allowing TikTok (or any social media app) to access your location can mean you see more relevant content from events or creators in your area, there's no reason the app needs to know your precise GPS location, which reveals where in the world you are down to a few feet.
Thankfully, you can block TikTok's access to your GPS location data by using the settings on your phone.
On iPhone:
Open the Settings app.
Tap Apps.
Tap TikTok.
Tap Location.
Set location access to Never.
On Android:
Find the TikTok app on your home screen and tap and hold on its icon.
Tap the App information menu item from the pop-up.
Tap Permissions.
Tap Location.
Tap Don't Allow.
How to limit new targeted advertising
When TikTok's U.S. operations were owned by ByteDance, the company's terms of service informed users that it analyzed their content to provide tailored advertising to them. This was not surprising: TikTok's main way of generating revenue is showing ads in the app.
But in the updated terms of service posted by TikTok's U.S. owners, it now appears that TikTok will use the data it collects about you, as well as the data its third-party partners have on you, to target you with relevant ads both on and off the platform. As the new terms of service state: "You agree that we can customize ads and other sponsored content from creators, advertisers, and partners, that you see on and off the Platform based on, among other points, information we receive from third parties."
Unfortunately, as of this writing, TikTok's new U.S. owners don't seem to offer a way for U.S. users to disable personalized ads (users in some regions may see the option under Settings and privacy > Ads in the TikTok app).
Still, if you have an iPhone, you can at least stop TikTok from tracking your activity across apps and websites using iOS's App Tracking Transparency feature, which allows users to quickly block an app from tracking what they do on their iPhone outside of the app.
Open the Settings app on your iPhone.
Tap Privacy and Security.
Tap Tracking.
In the list of apps that appears, make sure the toggle next to TikTok is set to off (white).
Currently, Android does not offer a feature like Apple's App Tracking Transparency.
TikTok's U.S. owners track your AI interactions
Like most social media apps, TikTok has been slowly adding more AI features. (One, called AI Self, lets users upload a picture of themselves and have TikTok turn it into an AI avatar).
As Wired previously noted, TikTok's new U.S. owners have inserted a new section into the privacy policy informing users that TikTok may collect and store "any data surrounding your AI interactions, including prompts, questions, files, and other types of information that you submit to our AI-powered interfaces," as well as the responses they generate.
That means anything you upload to TikTok's AI features, or any prompts you write, could be retained by the company. Unfortunately, there's no internal TikTok app setting, nor any iPhone or Android setting, that lets you get around this AI data collection.
That means TikTok's U.S. users have only one choice if they don't want the app's new owners to collect AI data about them: Don't use TikTok's AI features.
EVs hit a new milestone: In December, buyers in Europe registered more electric cars than gas cars for the first time.
EV registrations hit 217,898 in the EU last month, up 50% year over year from 2024. Sales of gas cars, on the other hand, dropped nearly 20%, to 216,492. The same trend played out in the larger region, including the UK and other non-EU countries like Iceland.
Car buyers have more electric options in Europe than in the U.S., from tiny urban EVs like the $10,000 Fiat Topolino to Chinese cars like the BYD Dolphin.
“We’re actually seeing this trend globally, although the U.S. is a different story: as the availability and quality of EVs goes up, sales have been going up as well,” says Ilaria Mazzocco, who studies electric vehicle markets at the Center for Strategic & International Studies. “There’s a story that some of the major OEMs have been pushing that there’s no demand for EVs. But when you look at the numbers…it turns out there’s a lot of latent demand.”
Some automakers are doing better than others. Tesla's market share dropped around 38% last year in Europe as buyers reacted to Elon Musk's politics. BYD tripled its market share over the same period.
EVs made up 17.4% of car sales in the EU last year, around twice the rate in the U.S. That’s still well behind Norway (not part of the EU), where a staggering 96% of all registrations were fully battery-electric in 2025. Hybrid cars are still more popular than pure electric vehicles in the EU, with 34.5% of market share. Diesel cars, which used to dominate in Europe, now only have around 9% of market share.
It’s not clear exactly what will happen next as the EU may weaken its EV policy. The bloc had targeted an end to new fossil-fueled cars by 2035; in a proposal in December, it suggested cutting vehicle emissions by 90% instead, leaving more room for hybrid cars. Some of the growth also will depend on how willing European countries are to continue letting cheap Chinese EVs on the market. Still, steep growth in EVs is likely to continue.
Generative artificial intelligence technology is rapidly reshaping education in unprecedented ways. With its potential benefits and risks, K-12 schools are actively trying to adapt teaching and learning.
But as schools seek to navigate the age of generative AI, there's a challenge: They are operating in a policy vacuum. While a number of states offer guidance on AI, only a couple of states require local schools to form specific policies, even as teachers, students, and school leaders continue to use generative AI in countless new ways. As one policymaker noted in a survey, "You have policy and what's actually happening in the classrooms; those are two very different things."
As part of my labs research on AI and education policy, I conducted a survey in late 2025 with members of the National Association of State Boards of Education, the only nonprofit dedicated solely to helping state boards advance equity and excellence in public education. The survey of the associations members reflects how education policy is typically formed through dynamic interactions across national, state, and local levels, rather than being dictated by a single source.
But even in the absence of hard-and-fast rules and guardrails on how AI can be used in schools, education policymakers identified a number of ethical concerns raised by the technology's spread, including student safety, data privacy, and negative impacts on student learning.
They also expressed concerns over industry influence, and worries that schools will later be charged by technology providers for large language model-based tools that are currently free. Others report that administrators in their state are very concerned about deepfakes: "What happens when a student deepfakes my voice and sends it out to cancel school or report a bomb threat?"
At the same time, policymakers said teaching students to use AI technology to their benefit remains a priority.
Local actions dominate
Although chatbots have been widely available for more than three years, the survey revealed that states are in the early stages of addressing generative AI, with most yet to implement official policies. While many states are providing guidance or tool kits, or are starting to write state-level policies, local decisions dominate the landscape, with each school district primarily responsible for shaping its own plans.
When asked whether their state has implemented any generative AI policies, respondents said there was a high degree of local influence regardless of whether a state issued guidance or not. "We are a local control state, so some school districts have banned [generative AI]," wrote one respondent. "Our [state] department of education has an AI tool kit, but policies are all local," wrote another. One shared that their state has a basic requirement that districts adopt a local policy about AI.
Like other education policies, generative AI adoption occurs within the existing state education governance structures, with authority and accountability balanced between state and local levels. As with previous waves of technology in K-12 schools, local decision-making plays a critical role.
Yet there is generally a lack of evidence about how AI will affect learners and teachers, and it will take years for a clearer picture to emerge. That lag adds to the challenges of formulating policies.
States as a lighthouse
However, state policy can provide vital guidance by prioritizing ethics, equity, and safety, and by being adaptable to changing needs. A coherent state policy can also answer key questions, such as acceptable student use of AI, and ensure more consistent standards of practice. Without such direction, districts are left to their own devices to identify appropriate, effective uses and to construct guardrails.
As it stands, AI usage and policy development are uneven, depending on how well resourced a school is. Data from a RAND-led panel of educators showed that teachers and principals in higher-poverty schools were about half as likely to report that AI guidance was provided. The poorest schools are also less likely to use AI tools.
When asked about foundational generative AI policies in education, policymakers focused on privacy, safety, and equity. One respondent, for example, said school districts should have the same access to funding and training, including for administrators.
And rather than having the technology imposed on schools and families, many argued for grounding the discussion in human values and broad participation. As one policymaker noted, "What is the role that families play in all this? This is something that is constantly missing from the conversation and something to uplift. As we know, parents are our kids' first teachers."
Introducing new technology
According to a Feb. 24, 2025, Gallup poll, 60% of teachers report using some AI for their work in a range of ways. Our survey also found there is "shadow use" of AI, as one policymaker put it, where employees implement generative AI without explicit school or district IT or security approval.
Some states, such as Indiana, offer schools the opportunity to apply for a one-time competitive grant to fund a pilot of an AI-powered platform of their choosing, as long as the product vendors are approved by the state. Grant proposals that focus on supporting students or professional development for educators receive priority.
In other states, schools opt in to pilot tests that are funded by nonprofits. For example, an eighth grade language arts teacher in California participated in a pilot where she used AI-powered tools to generate feedback on her students' writing. "Teaching 150 kids a day and providing meaningful feedback for every student is not possible; I would try anything to lessen grading and give me back my time to spend with kids. This is why I became a teacher: to spend time with the kids," she said. The teacher also noted the tools showed bias when analyzing the work of her students who are learning English, which gave her the opportunity to discuss algorithmic bias in these tools.
One initiative from the Netherlands offers a different approach than finding ways to implement products developed by technology companies. Instead, schools take the lead with questions or challenges they are facing and turn to industry to develop solutions informed by research.
Core principles
One theme that emerged from survey respondents is the need to emphasize ethical principles in providing guidance on how to use AI technology in teaching and learning. This could begin with ensuring that students and teachers learn about the limitations and opportunities of generative AI, when and how to leverage these tools effectively, how to critically evaluate their output, and how to disclose their use ethically.
Often, policymakers struggle to know where to begin in formulating policies. Analyzing tensions and decision-making in organizational context, or what my colleagues and I called "dilemma analysis" in a recent report, is an approach schools, districts, and states can take to navigate the myriad ethical and societal impacts of generative AI.
Despite the confusion around AI and a fragmented policy landscape, policymakers said they recognize it is incumbent upon each school, district, and state to engage their communities and families to co-create a path forward.
As one policymaker put it: "Knowing the horse has already left the barn [and that AI use] is already prevalent among students and faculty . . . [on] AI-human collaboration versus an outright ban, where on the spectrum do you want to be?"
Janice Mak is an assistant director and clinical assistant professor at Arizona State University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
"Snow Will Fall Too Fast for Plows," "ICE STORM APOCALYPSE," and "Another Big Storm May Be Coming …" were all headlines posted on YouTube this past weekend as the biggest snowstorm in years hit New York City.
These videos, each with tens or hundreds of thousands of views, are part of an increasingly popular genre of "weather influencers," as Americans turn more and more to social media for news and weather updates.
People pay more attention to influencers on YouTube, Instagram, and TikTok than to journalists or mainstream media, a study by the Reuters Institute and the University of Oxford found in 2024. In the U.S., social media is how 20% of adults get their news or weather updates, according to the Pew Research Center.
It's no surprise, then, that a number of online weather accounts have cropped up to cover the increasing number of extreme weather events in the U.S.
While some of these influencers have no science background, many of the most popular ones are accredited meteorologists. One of the most viewed digital meteorologists, or weather influencers, is Ryan Hall, who calls himself "The Internet's Weather Man" on his social media platforms. His YouTube channel, Ryan Hall, Y'all, has more than 3 million subscribers.
Max Velocity is another. He's a degreed meteorologist, according to his YouTube bio, who has 1.66 million followers. Reed Timmer, an extreme meteorologist and storm chaser, also posts to 1.46 million subscribers on YouTube. "While most prefer to avoid the bad news that comes with bad weather, I charge towards it," Timmer writes in the description section on his channel.
The rising popularity of weather influencers stems not just from mistrust of mainstream media, with trust lingering at an all-time low, but also from an appetite for real-time updates delivered in an engaging way to a social-first generation.
YouTube accounts like Hall's will often livestream during extreme weather events, with his comments section hosting a flurry of activity. There's even merch.
Of course, influencers are not required to uphold the same reporting standards as network weathercasters. There's also the incentive, in terms of likes and engagement, to sensationalize events with clickbait titles and exaggerated claims, or sometimes even misinformation, as witnessed during the L.A. wildfires last year.
Still, as meteorologists navigate the new media landscape, the American Meteorological Society now offers a certification program in digital meteorology for those meteorologists who meet established criteria for scientific competence and effective communication skills in their weather presentations on all forms of digital media.
While we wait to see whether another winter storm will hit the Northeast this weekend, rest assured, the weather influencers will be tracking the latest updates.
You know the ancient proverb: Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime.
For leaders, first-generation AI tools are like giving employees fish. Agentic AI, on the other hand, teaches them how to fish. That is truly empowering, and that empowerment lifts the entire organization. According to recent findings from McKinsey, nearly eight in ten companies report using gen AI, yet about the same number report no bottom-line impact. Agentic AI can help organizations achieve meaningful results.
AI agents are highly capable assistants with the ability to execute tasks independently. Equipped with artificial intelligence that simulates human reasoning, they can recognize problems, remember past interactions, and proactively take steps to get things done, whether that means knocking out tedious manual tasks or helping to generate innovative solutions. For CEOs juggling numerous responsibilities, agentic AI can be a powerful ally in simplifying decision-making and scaling impact. That's why I believe it belongs on every CEO's roadmap for 2026.
As CEO of a SaaS company grounded in automation, I've made it a priority to incorporate agentic AI into our everyday workflows. Here are three ways you can put it to work in your organization.
1. Take the effort out of scheduling
Start with one of the most basic functions of any organization, and one that can easily become a time and energy vacuum: scheduling. It is perfect fodder for AI agents, and they go well beyond your typical AI-powered scheduling tool.
For starters, they're adaptable. AI agents can monitor incoming data and requests, proactively adjust schedules, and notify the relevant parties when issues arise. Let's say your team has a standing brainstorming session every Wednesday and a new client reaches out to request an intro meeting at the same time. Your agent can automatically respond with alternative time slots. On the other hand, if a client needs to connect on a time-sensitive issue, your agent can elevate the request to a human employee to decide whether rescheduling makes sense.
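As a rough illustration of that decision logic, here is a minimal Python sketch. The meeting data, the time-sensitivity flag, and the helper function are hypothetical stand-ins, not any particular scheduling product's API.

```python
# Minimal sketch: auto-propose alternatives for routine conflicts,
# escalate time-sensitive conflicts to a human.
from dataclasses import dataclass

@dataclass
class MeetingRequest:
    requester: str
    requested_slot: str      # e.g. "Wed 10:00"
    time_sensitive: bool

PROTECTED_SLOTS = {"Wed 10:00": "weekly brainstorming session"}
OPEN_SLOTS = ["Wed 14:00", "Thu 09:00", "Thu 15:00"]

def handle_request(req: MeetingRequest) -> str:
    conflict = PROTECTED_SLOTS.get(req.requested_slot)
    if conflict is None:
        return f"Booked {req.requester} at {req.requested_slot}."
    if req.time_sensitive:
        # Escalate: a human decides whether the standing meeting moves.
        return (f"Escalated to a human: {req.requester} needs {req.requested_slot}, "
                f"which conflicts with the {conflict}.")
    # Routine conflict: respond automatically with alternatives.
    return f"Replied to {req.requester} with alternative slots: " + ", ".join(OPEN_SLOTS)

print(handle_request(MeetingRequest("new client", "Wed 10:00", time_sensitive=False)))
print(handle_request(MeetingRequest("key client", "Wed 10:00", time_sensitive=True)))
```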
You can also personalize AI agents based on your unique needs and priorities, including past interactions. If, for example, your agent learns that you religiously protect time for deep-focus work first thing in the morning, it won't keep proposing meetings then.
By delegating scheduling tasks, organizations, from the CEO to interns, free up time for higher-level priorities and more meaningful work. You can build your own agent, or get started with a ready-to-use scheduling assistant that offers agentic capabilities, like Reclaim.ai.
2. Facilitate idea generation and innovation
When we talk about AI and creativity, the conversation often stirs anxiety about artificial intelligence replacing human creativity. But agentic AI can help spark ideas for engagement, leadership development, and strategic initiatives. The goal is to cultivate the conditions in which these initiatives can thrive, not to replace the actual brainstorming or strategic thinking.
For example, you can create an ideation-focused AI agent and train it on relevant organizational context: performance data, KPIs, meeting notes, employee engagement data, culture touch points, and more. Your agent can continuously gather new information and update its internal knowledge.
When the time comes for a brainstorming or strategy session (which the agent can also proactively prompt), it can draw on this working organizational memory plus any other resources it can access, and tap generative AI tools like ChatGPT or Gemini to generate themes, propose topics, and help guide the discussion. Meanwhile, leaders remain focused on evaluating ideas, decision-making, and execution.
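For a sense of the mechanics, here is a simplified Python sketch of that workflow. The context store and the call_llm() helper are placeholders for whatever knowledge base and generative model an organization actually uses.

```python
# Sketch: assemble the agent's working organizational memory into a prompt
# and ask a generative model for brainstorming themes; humans evaluate them.
def call_llm(prompt: str) -> str:
    """Stand-in for a call to a generative model (e.g. via an API client)."""
    return "1. ...\n2. ...\n3. ..."

org_context = {
    "kpis": "Q4 retention down 3%; NPS flat",
    "engagement": "Survey: employees want more cross-team projects",
    "meeting_notes": "Leadership wants two new engagement initiatives for Q1",
}

def propose_session_topics(context: dict) -> str:
    summary = "\n".join(f"- {k}: {v}" for k, v in context.items())
    prompt = (
        "Given this organizational context:\n"
        f"{summary}\n"
        "Propose three themes for a leadership brainstorming session."
    )
    return call_llm(prompt)

print(propose_session_topics(org_context))
```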
3. Error-free progress updates and year-end recaps
While generative AI can be incredibly powerful, the issue remains that it is largely reactive, not proactive. When it comes to tracking performance, team KPIs, and organizational progress, manual check-ins are still required. As I've written before, manual tasks are subject to human error. Calendar alerts go unnoticed. Things slip through the cracks. Minor problems become big issues.
One solution is to design an AI agent that can autonomously monitor your organization's performance. Continuous, real-time oversight helps ensure processes run smoothly and that issues are flagged as soon as they arise. For example, if your company sells workout gear and sees a post-New Year's surge in fitness resolutions (and demand for a specific product), an agent can track sales patterns and alert the team to inventory shortages. An AI agent can also independently generate reports, including year-end recaps that are critical for continued growth.
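A stripped-down Python sketch of that monitoring behavior, with made-up numbers, might look like the following; a real agent would pull live sales and inventory data rather than hard-coded lists.

```python
# Sketch: compare recent sales against a baseline and against remaining
# inventory, and raise an alert only when human attention is actually needed.
daily_sales = [40, 42, 38, 41, 95, 110, 120]   # units/day; post-New Year's surge
inventory_on_hand = 500                         # units left in the warehouse

baseline = sum(daily_sales[:4]) / 4             # pre-surge average
recent = sum(daily_sales[-3:]) / 3              # last three days
days_of_stock = inventory_on_hand / recent

if recent > 1.5 * baseline and days_of_stock < 7:
    print(f"ALERT: demand up {recent / baseline:.1f}x, "
          f"only {days_of_stock:.1f} days of stock left -- flag for reorder.")
else:
    print("All normal; no human attention needed.")
```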
Rather than waiting to be prompted by a human, agents can do the work on their own and elevate only the issues that require human judgment.
Agents have the potential to create real value for organizations. Importantly, leaders have to rethink workflows so AI agents are meaningfully integrated, fully liberating employees from rote, manual tasks and freeing them to focus on more consequential, inspiring work like strategy and critical thinking. I've found this leaves employees more energized, and the benefits continue to compound.
TikTok agreed to settle a landmark social media addiction lawsuit just before the trial kicked off, the plaintiffs' attorneys confirmed.
The social video platform was one of three companies, along with Meta's Instagram and Google's YouTube, facing claims that their platforms deliberately addict and harm children. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.
Details of the settlement with TikTok were not disclosed, and the company did not immediately respond to a request for comment.
At the core of the case is a 19-year-old identified only by the initials KGM, whose case could determine how thousands of other similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials, essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.
A lawyer for the plaintiff said in a statement Tuesday that TikTok remains a defendant in the other personal injury cases, and that the trial will proceed as scheduled against Meta and YouTube.
Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms. The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday.
KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.
"Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue," the lawsuit says.
Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the trial, which will last six to eight weeks. Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.
"Plaintiffs are not merely the collateral damage of Defendants' products," the lawsuit says. "They are the direct victims of the intentional product design choices made by each Defendant. They are the intended targets of the harmful features that pushed them into self-destructive feedback loops."
The tech companies dispute the claims that their products deliberately harm children, citing a bevy of safeguards they have added over the years and arguing that they are not liable for content posted on their sites by third parties.
"Recently, a number of lawsuits have attempted to place the blame for teen mental health struggles squarely on social media companies," Meta said in a recent blog post. "But this oversimplifies a serious issue. Clinicians and researchers find that mental health is a deeply complex and multifaceted issue, and trends regarding teens’ well-being aren’t clear-cut or universal. Narrowing the challenges faced by teens to a single factor ignores the scientific research and the many stressors impacting young people today, like academic pressure, school safety, socio-economic challenges and substance abuse.”
A Meta spokesperson said in a statement Monday that the company strongly disagrees with the allegations outlined in the lawsuit and that it's confident the evidence will show "our longstanding commitment to supporting young people."
José Castañeda, a Google spokesperson, said Monday that the allegations against YouTube are "simply not true." In a statement, he said, "Providing young people with a safer, healthier experience has always been core to our work."
TikTok did not immediately respond to a request for comment Monday.
The case will be the first in a slew of cases beginning this year that seek to hold social media companies responsible for harming children’s mental well-being. A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.
In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most of the attorneys general filed their lawsuits in federal court, but some sued in their respective state courts.
TikTok also faces similar lawsuits in more than a dozen states.
By Kaitlyn Huamani and Barbara Ortutay, AP technology writers
The Trump administration has not shied away from sharing AI-generated imagery online, embracing cartoonlike visuals and memes and promoting them on official White House channels.
But an edited and realistic image of civil rights attorney Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what is real and what is fake.
Homeland Security Secretary Kristi Noem's account posted the original image from Levy Armstrong's arrest before the official White House account posted an altered image that showed her crying. The doctored picture is part of a deluge of AI-edited imagery that has been shared across the political spectrum since the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis.
However, the White House's use of artificial intelligence has troubled misinformation experts, who fear the spread of AI-generated or edited images erodes public perception of the truth and sows distrust.
In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with deputy communications director Kaelan Dorr writing on X that the memes will continue. White House Deputy Press Secretary Abigail Jackson also shared a post mocking the criticism.
David Rand, a professor of information science at Cornell University, says calling the altered image a meme "certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons. This presumably aims to shield them from criticism for posting manipulated media." He said the purpose of sharing the altered arrest image seems much more ambiguous than the cartoonish images the administration has shared in the past.
Memes have always carried layered messages that are funny or informative to people who understand them, but indecipherable to outsiders. AI-enhanced or edited imagery is just the latest tool the White House uses to engage the segment of Trumps base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing firm.
"People who are terminally online will see it and instantly recognize it as a meme," he said. "Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it."
All the better if it prompts a fierce reaction, which helps it go viral, said Henry, who generally praised the work of the White House's social media team.
The creation and dissemination of altered images, especially when they are shared by credible sources, crystallizes an idea of what's happening, instead of showing what is actually happening, said Michael A. Spikes, a professor at Northwestern University and news media literacy researcher.
"The government should be a place where you can trust the information, where you can say it's accurate, because they have a responsibility to do so," he said. "By sharing this kind of content, and creating this kind of content, it is eroding the trust, even though I'm always kind of skeptical of the term trust, but the trust we should have in our federal government to give us accurate, verified information. It's a real loss, and it really worries me a lot."
Spikes said he already sees the institutional crises around distrust in news organizations and higher education, and feels this behavior from official channels inflames those issues.
Ramesh Srinivasan, a professor at UCLA and the host of the Utopias podcast, said many people are now questioning where they can turn for trustable information. "AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence," he said.
Srinivasan said he feels the White House and other officials sharing AI-generated content not only invites everyday people to continue to post similar content but also grants permission to others who are in positions of credibility and power, like policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to algorithmically privilege extreme and conspiratorial content, which AI generation tools can create with ease, "we've got a big, big set of challenges on our hands."
An influx of AI-generated videos related to Immigration and Customs Enforcement action, protests, and interactions with citizens has already been proliferating on social media. After Renee Good was shot by an ICE officer while she was in her car, several AI-generated videos began circulating of women driving away from ICE officers who told them to stop. There are also many fabricated videos circulating of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces.
Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said the bulk of these videos are likely coming from accounts that are "engagement farming," or looking to capitalize on clicks by generating content with popular keywords and search terms like ICE. But he also said the videos are getting views from people who oppose ICE and DHS and could be watching them as fan fiction, or engaging in wishful thinking, hoping that they're seeing real pushback against the organizations and their officers.
Still, Carrasco also believes that most viewers can't tell if what they're watching is fake, and questions whether they would know "what's real or not when it actually matters, like when the stakes are a lot higher."
Even when there are blatant signs of AI generation, like street signs with gibberish on them or other obvious errors, only in the best-case scenario would a viewer be savvy enough or be paying enough attention to register the use of AI.
This issue is, of course, not limited to news surrounding immigration enforcement and protests. Fabricated and misrepresented images following the capture of deposed Venezuelan leader Nicolás Maduro exploded online earlier this month. Experts, including Carrasco, think the spread of AI-generated political content will only become more commonplace.
Carrasco believes that the widespread implementation of a watermarking system that embeds information about the origin of a piece of media into its metadata could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn't think it will become extensively adopted for at least another year.
"It's going to be an issue forever now," he said. "I don't think people understand how bad this is."
Kaitlyn Huamani, AP technology writer
Associated Press writers Jonathan J. Cooper and Barbara Ortutay contributed to this report.
Sales reps, business owners and recruiters are documenting their cold calls online and cashing in on the viral content.
Cold calling has existed as long as the telephone: It's a sales technique in which a representative for a company calls an individual unsolicited and attempts to hook them with a sales pitch in the first 30 seconds or less. Some say cold calling is dead in 2026, as people pick up the phone less and less due to the increase in spam and AI bots on the other end of the line.
But these days, if your phone incessantly buzzes with endless sales calls, answer at your own risk: You might end up going viral on TikTok.
Across the social media platform, sales reps create compilations of clips from the week's calls, complete with the headset, standing desk, and lively salesperson energy. Some test out different openers live on camera, with varying results. Others document blunt interactions or unconventional strategies for viral content.
One content creator and life insurance salesman who has gained notoriety online, Juliano Massarelli, refers to himself as "The Wolf of Insurance." The 18-year-old's most popular cold call to date has over 16.3 million views. In between hang-ups, he bounces around his bedroom, embodying Jordan Belfort-esque energy.
In another video, with 5 million views, Massarelli is shirtless and flexing in front of the camera before hitting the call button. After the woman hangs up less than a minute in, rather than be disheartened, he hits play on the music and dances at his desk. Massarelli's boundless enthusiasm is so infectious, some have since parodied his content.
"Unfortunately this is the sales attitude you need," one person commented.
This is at a time when a lot has been made of Gen Z's general aversion to phones. Almost a third report having phone anxiety at work, according to recent research from Trinity College London. Still, for those who work in sales, mastering the art of the cold call is non-negotiable, even in the year 2026. Other sales reps online choose to lean into the funny side of the incessant rejection that comes with the job.
"Would it absolutely ruin your day if I told you this was a cold call?" one sales rep opens with in a viral video. "Yes," the person responds. "OK, then I just won't tell you it's a cold call," he swiftly replies.
"Quick question: Do you wanna hear what I'm selling, or no?" he tries in another video. The answer this time, surprisingly, is yes.
It's an undisputed fact: Most people don't enjoy getting unexpected phone calls, and many simply won't give cold callers the time of day. Still, over 50% of B2B leads still originate from cold calling in 2025, according to a recent report from lead generation company Martal.AI. Almost half of B2B buyers prefer to be contacted via phone first, and 82% accept meetings from cold outreach, the report found.
And even when cold callers get rejected, posting the clips online creates a feedback loop: They promote their products, businesses, or just themselves to an audience of millions, while simultaneously cashing in on the viral content.
Here, every no really is one step closer to a yes.
When Stephen Smith started NOCD 11 years ago, he wanted to build an app for people like himself, one of the nearly 3 million Americans with obsessive-compulsive disorder (OCD), to track their symptoms and time their therapy exercises.
Since 2018, NOCD (pronounced “No-CD”) has provided virtual appointments with therapists specializing in OCD-focused exposure and response prevention (ERP) therapy. With more than 140 million people able to access NOCD through their insurance, the company currently provides at least 1 million therapy sessions annually.
Now, NOCD, last valued at nearly $270 million in 2024, according to PitchBook, is making an acquisition and forming a parent brand that will position it as the largest telehealth provider of specialty therapy.
The company on Tuesday announced its acquisition of Rebound Health, a self-guided mental health platform focused on post-traumatic stress disorder (PTSD) and trauma. The deal, the financial terms of which weren't disclosed, took place in November 2025.
Starting today, both companies will operate under Noto, a parent brand that takes its name from the AI-powered software platform that has fueled NOCD’s growth.
Rebound’s specialized focus on PTSD and trauma care will be integrated into Noto’s platform, which Smith says will help both companies reach more patients and expedite the process of getting them support.
A common pairing
In 2022, Smith and his team noticed there was a significant subset of NOCD users who suffered from both OCD and PTSD. Those individuals, he says, benefit from a treatment called prolonged exposure (PE) therapy, which asks patients to confront memories in order to process them.
Like ERP, PE is an exposure-based method of treatment, so NOCD trained about 100 of its 1,000 therapists to specialize in PE therapy.
By 2025, Smith felt confident that PE had proven effective and useful for NOCD's patient base. "We saw that that segment [of therapists] was delivering best-in-class outcomes," he says. Given the results, he felt a need to scale the treatment as quickly as possible.
That’s where Rebound Health comes in. Founded in 2023, the company focuses on PTSD and trauma, and has primarily supported patients when a therapist is not immediately available to them due to timing or cost.
Under Noto, it will launch Rebound Therapy, a live therapy offering that will be available in the next two months. Noto is in the process of working with payers to enroll patients in Rebound Therapy, and the service should be available to most Rebound and NOCD users as a covered benefit sometime this year.
Revving a growth engine
With the Rebound acquisition expanding NOCD’s scope, Smith wanted to create a parent brand that highlights the technology that has helped the company grow to the point that it logged its first quarter of positive cash flow last year.
"Noto is essentially the engine of the NOCD vehicle," he says. Through awareness campaigns that target both consumers and providers, Noto is able to identify patients who have OCD but may have been misdiagnosed and miscoded in a medical system, then enroll them in a NOCD care plan.
Noto’s data also showed that offering PE therapy to NOCD patients experiencing PTSD resulted in long-term improvement in their mental health.
When looking for the right business to help scale NOCD’s PE capabilities, Smith says Rebound stood out because CEO Raeva Kumar has personal experience with PTSD, and her cofounder and chief product officer, Erin Berenz, has clinical experience treating the disorder. Smith says the two saw the acquisition as an opportunity to get live therapy to their users.
“With Noto, we’re able to easily work with payers, enroll hard-to-engage members, and disseminate gold-standard trauma therapy,” Kumar said in a press release about the acquisition.
"[Rebound] realized that they could offer scale therapy in a much shorter amount of time on the Noto infrastructure, because we already had it all built," Smith says, suggesting Noto may add more therapy areas as it grows. "We built this incredible foundation serving a complex, hidden, but treatable population. In the future, we realize there could be more."