The Walt Disney Company has agreed to pay a $10 million civil penalty as part of a settlement to resolve allegations it violated child privacy laws, the Justice Department said on Tuesday.
A federal court order in the case involving Disney Worldwide Services Inc and Disney Entertainment Operations LLC also bars Disney from operating on YouTube in a manner that violates the Children's Online Privacy Protection Act, the department said.
The order requires Disney to create a program that will ensure it properly complies with the privacy law on YouTube in the future, it added.
The law requires websites, apps, and other online services aimed at children under 13 to notify parents about what personal information they collect, and to obtain verifiable parental consent before collecting such information.
“The Justice Department is firmly devoted to ensuring parents have a say in how their children's information is collected and used,” Assistant Attorney General Brett Shumate of the Justice Department's Civil Division said in a statement.
Disney could not immediately be reached for comment.
The order finalizes a settlement reached in September in a case referred to the DOJ by the Federal Trade Commission.
Ryan Patrick Jones, Doina Chiacu, and Dawn Chmielewski, Reuters
The U.S. Federal Reserve agreed to cut interest rates at its December meeting only after a deeply nuanced debate about the risks facing the U.S. economy right now, according to minutes of the latest two-day session.
Even some of those who supported the rate cut acknowledged “the decision was finely balanced or that they could have supported keeping the target range unchanged,” given the different risks facing the U.S. economy, according to the minutes released on Tuesday.
In economic projections released after the December 9-10 meeting, six officials outright opposed a cut and two of that group dissented as voting members of the Federal Open Market Committee.
“Most participants” ultimately supported a cut, with “some” arguing that it was an appropriate forward-looking strategy “that would help stabilize the labor market” after a recent slowdown in job creation.
Others, however, “expressed concern that progress towards the committee’s 2% inflation objective had stalled.”
“Some participants suggested that, under their economic outlooks, it would likely be appropriate to keep the target range unchanged for some time after a lowering of the range at this meeting,” the minutes said of a debate that saw officials dissent both in favor of tighter and looser monetary policy, an unusual outcome for the central bank that has now happened at two consecutive meetings.
The quarter-point rate cut approved in December lowered the Fed’s benchmark overnight interest rate to a range of 3.5% to 3.75%, the third consecutive move by the central bank as officials agreed that a slowdown in monthly job creation and rising unemployment warranted slightly less restrictive monetary policy.
But as rates fell, and approached a neutral level that neither discourages nor encourages investment and spending, opinion at the Fed became more divided about just how much more to cut. New projections issued after the December meeting show only one rate cut expected next year, while language in the new policy statement indicated the Fed would likely remain on hold until new data show that either inflation is again falling or unemployment is rising more than anticipated.
The lack of official data during the 43-day government shutdown, a gap in information still not fully filled, continued to shape the outlook and policymakers’ views about how to manage risk.
Some of those either opposed to or skeptical of the most recent cut “suggested that the arrival of a considerable amount of labor market and inflation data over the coming intermeeting period would be helpful in making judgments about whether a rate reduction was warranted.”
The data catch-up continues, with jobs and consumer price information for December coming on January 9 and January 13, putting the releases back on their normal schedule.
The Fed next meets on January 27-28, with investors currently expecting the central bank to leave its benchmark rate unchanged.
Howard Schneider, Reuters
The White House cannot let funding for the Consumer Financial Protection Bureau lapse, a federal district court judge ruled on Tuesday, only days before money at the bureau would likely have run out, leaving the consumer finance agency unable to pay its employees.
Judge Amy Berman Jackson ruled that the CFPB should continue to get its funds from the Federal Reserve, despite the Fed operating at a loss, and that the White House’s new legal argument about how the CFPB gets its funds is not valid.
At the heart of this case is whether Russell Vought, President Donald Trump’s budget director and the acting director of the CFPB, can effectively shut down the agency and lay off all of the bureau’s employees. The CFPB has been largely inoperable since President Trump was sworn into office nearly a year ago. Its employees are mostly forbidden from doing any work, and most of the bureau’s activity this year has gone toward unwinding the work it did under President Biden and even under Trump’s first term.
Vought himself has made clear in public comments that he intends to effectively shut down the CFPB. The White House earlier this year issued a reduction in force for the CFPB, which would have furloughed or laid off much of the bureau’s staff.
The National Treasury Employees Union, which represents the workers at the CFPB, has been mostly successful in court in stopping the mass layoffs and furloughs. The union sued Vought earlier this year and won a preliminary injunction halting the layoffs while its case works through the legal process.
In recent weeks, the White House has used a new line of argument to potentially get around the court’s injunction: that the Federal Reserve has no combined earnings at the moment to fund the CFPB’s operations. The CFPB gets its funding from the Fed through quarterly payments.
The Federal Reserve has been operating at a paper loss since 2022 as a result of its efforts to combat inflation, the first time in the Fed’s history it has operated at a loss. The Fed holds bonds on its balance sheet from a period of low interest rates during the COVID-19 pandemic, but currently has to pay out higher interest rates to banks that hold their deposits at the central bank. The Fed has been recording a deferred asset on its balance sheet, which it expects will be paid down in the next few years as the low-interest bonds mature off its balance sheet.
Because of this loss on paper, the White House has argued there are no combined earnings for the CFPB to draw on. The CFPB has operated since 2011, including under President Trump’s first term, drawing on the Fed’s operating budget.
White House lawyers sent a notice to the court in early November arguing, on the basis of the combined-earnings theory, that the CFPB would run out of appropriations in early 2026 and that the bureau does not expect to get any additional appropriations from Congress.
This combined-earnings legal argument is not entirely new. It has circulated in conservative legal circles since the Federal Reserve began operating at a loss. The Office of Legal Counsel, which acts as the government’s legal adviser, adopted the theory in a memo on November 7. However, the idea has never been tested in court.
In her opinion, Berman said the OLC and Vought were using this legal theory to get around the court’s injunction instead of allowing the case to be decided on the merits. A trial on whether the CFPB employees’ union can sue Vought over the layoffs is currently scheduled for February 2026.
“It appears that defendants’ new understanding of combined earnings is an unsupported and transparent attempt to starve the CFPB of funding and yet another attempt to achieve the very end the Court’s injunction was put in place to prevent,” Berman wrote in her opinion.
“We’re very pleased that the court made clear what should have been obvious: Vought can’t justify abandoning the agency’s obligations or violating a court order by manufacturing a lack of funding,” said Jennifer Bennett of Gupta Wessler LLP, who is representing the CFPB employees in the case.
A White House spokeswoman did not immediately respond to a request for comment on Berman’s opinion.
Ken Sweet, AP business writer
In a year defined by companies pouring shocking sums of money into AI, one more deal squeaked in just before 2026.
Meta just made a play on Manus, the buzzy Singapore-based company with Chinese roots that turned heads earlier this year when it showed its AI agents executing complex tasks, like hunting for real estate and sorting through resumes.
The deal is sure to turn heads too. Manus and its parent company Butterfly Effect are now based in Singapore but were founded in China, a country with a fraught relationship with the U.S. tech industry, and maintain operations there. Facebook’s parent company will reportedly pay more than $2 billion to acquire the startup, which it hopes will bolster its own lagging AI capabilities.
In a crowded field of soaring chipmakers, nimble startups laser-focused on AI, and ancient tech giants like Microsoft making themselves freshly relevant with big AI bets, Meta is far from leading the pack, a fact the company seems well aware of.
The acquisition will bring the startup’s agentic AI tech on board, allowing Meta to potentially integrate it into its vast suite of products, including Facebook, Instagram, WhatsApp, and Meta’s AI chatbot. The Manus deal follows Meta’s $14.3 billion investment in AI training data startup Scale AI earlier this year.
“Joining Meta allows us to build on a stronger, more sustainable foundation without changing how Manus works or how decisions are made,” Manus CEO Xiao Hong said in a blog post announcing the news.
Meta’s (latest) course correction
Meta’s AI spending spree is only accelerating. After renaming itself Meta and declaring itself all in on the metaverse less than five years ago, the company abandoned that course and its massive investments to play catch-up on AI. Mark Zuckerberg declared last month that Meta plans to invest a mind-boggling $600 billion into U.S. AI tech and infrastructure by 2028.
Meta Chief AI Officer Alexandr Wang, formerly of Scale AI, welcomed the Manus team into the fold Tuesday in a post on X. “Excited to announce that @ManusAI has joined Meta to help us build amazing AI products!” Wang wrote, adding that the Meta Superintelligence Labs team will be hiring in Singapore. “The Manus team in Singapore are world class at exploring the capability overhang of today’s models to scaffold powerful agents.”
Manus is no DeepSeek, but the company is still notable as a prominent Asian AI company coming under the wing of an American tech giant. In April, Manus raised $75 million in a round of funding led by San Francisco venture firm Benchmark. The startup is also backed by Asian investors, including Chinese tech conglomerate Tencent and HongShan Capital Group, previously the China-focused wing of American venture capital firm Sequoia, which frequently invests in Chinese startups.
Meta told Fast Company that it plans to wind down Manus business operations that continue in China. That process will include relocating remaining Manus employees and severing any Chinese business entanglements. The company also emphasized that Manus employees joining Meta won’t have access to first-party user data from Meta’s existing products.
“Meta’s acquisition of Manus AI will enable us to provide the most advanced technology to our users with safeguards in place to eliminate areas of potential risk,” a Meta spokesperson told Fast Company. “There will be no continuing Chinese ownership interests in Manus AI following the transaction, and Manus AI will discontinue its services and operations in China.”
In artificial intelligence, 2025 marked a decisive shift. Systems once confined to research labs and prototypes began to appear as everyday tools. At the center of this transition was the rise of AI agents: AI systems that can use other software tools and act on their own.
While researchers have studied AI for more than 60 years, and the term “agent” has long been part of the field’s vocabulary, 2025 was the year the concept became concrete for developers and consumers alike.
AI agents moved from theory to infrastructure, reshaping how people interact with large language models, the systems that power chatbots like ChatGPT.
In 2025, the definition of “AI agent” shifted from the academic framing of systems that perceive, reason, and act to AI company Anthropic’s description: large language models that are capable of using software tools and taking autonomous action. While large language models have long excelled at text-based responses, the recent change is their expanding capacity to act: using tools, calling APIs, coordinating with other systems, and completing tasks independently.
This shift did not happen overnight. A key inflection point came in late 2024, when Anthropic released the Model Context Protocol. The protocol allowed developers to connect large language models to external tools in a standardized way, effectively giving models the ability to act beyond generating text. With that, the stage was set for 2025 to become the year of AI agents.
AI agents are a whole new ballgame compared with generative AI.
The milestones that defined 2025
The momentum accelerated quickly. In January, the release of the Chinese model DeepSeek-R1 as an open-weight model disrupted assumptions about who could build high-performing large language models, briefly rattling markets and intensifying global competition. An open-weight model is an AI model whose trained parameters, known as weights, are publicly available. Throughout 2025, major U.S. labs such as OpenAI, Anthropic, Google, and xAI released larger, high-performance models, while Chinese tech companies, including Alibaba, Tencent, and DeepSeek, expanded the open-model ecosystem to the point where Chinese models have been downloaded more often than American models.
Another turning point came in April, when Google introduced its Agent2Agent protocol. While Anthropic’s Model Context Protocol focused on how agents use tools, Agent2Agent addressed how agents communicate with each other. Crucially, the two protocols were designed to work together. Later in the year, both Anthropic and Google donated their protocols to the open-source software nonprofit Linux Foundation, cementing them as open standards rather than proprietary experiments.
These developments quickly found their way into consumer products. By mid-2025, agentic browsers began to appear. Tools such as Perplexity’s Comet, the Browser Company’s Dia, OpenAI’s ChatGPT Atlas, Copilot in Microsoft’s Edge, ASI X Inc.’s Fellou, MainFunc.ai’s Genspark, Opera’s Opera Neon, and others reframed the browser as an active participant rather than a passive interface. For example, rather than just helping you search for vacation details, such a browser plays a part in booking the vacation.
At the same time, workflow builders like n8n and Google’s Antigravity lowered the technical barrier for creating custom agent systems beyond what has already happened with coding agents like Cursor and GitHub Copilot.
New power, new risks
As agents became more capable, their risks became harder to ignore. In November, Anthropic disclosed how its Claude Code agent had been misused to automate parts of a cyberattack. The incident illustrated a broader concern: By automating repetitive, technical work, AI agents can also lower the barrier for malicious activity.
This tension defined much of 2025. AI agents expanded what individuals and organizations could do, but they also amplified existing vulnerabilities. Systems that were once isolated text generators became interconnected, tool-using actors operating with little human oversight.
The business community is gearing up for multiagent systems.
What to watch for in 2026
Looking ahead, several open questions are likely to shape the next phase of AI agents.
One is benchmarks. Traditional benchmarks, which are like a structured exam with a series of questions and standardized scoring, work well for single models, but agents are composite systems made up of models, tools, memory, and decision logic. Researchers increasingly want to evaluate not just outcomes but processes. This would be like asking students to show their work, not just provide an answer.
Progress here will be critical for improving reliability and trust, and ensuring that an AI agent will perform the task at hand. One method is establishing clear definitions around AI agents and AI workflows. Organizations will need to map out exactly where AI will integrate into workflows or introduce new ones.
Another development to watch is governance. In late 2025, the Linux Foundation announced the creation of the Agentic AI Foundation, signaling an effort to establish shared standards and best practices. If successful, it could play a role like the World Wide Web Consortium in shaping an open, interoperable agent ecosystem.
There is also a growing debate over model size. While large, general-purpose models dominate headlines, smaller and more specialized models are often better suited to specific tasks. As agents become configurable consumer and business tools, whether through browsers or workflow management software, the power to choose the right model increasingly shifts to users rather than labs or corporations.
The challenges ahead
Despite the optimism, significant socio-technical challenges remain. Expanding data center infrastructure strains energy grids and affects local communities. In workplaces, agents raise concerns about automation, job displacement, and surveillance.
From a security perspective, connecting models to tools and stacking agents together multiplies risks that are already unresolved in standalone large language models. Specifically, AI practitioners are addressing the dangers of indirect prompt injections, where prompts are hidden in open web spaces that are readable by AI agents and result in harmful or unintended actions.
Regulation is another unresolved issue. Compared with Europe and China, the United States has relatively limited oversight of algorithmic systems. As AI agents become embedded across digital life, questions about access, accountability, and limits remain largely unanswered.
Meeting these challenges will require more than technical breakthroughs. It demands rigorous engineering practices, careful design and clear documentation of how systems work and fail. Only by treating AI agents as socio-technical systems rather than mere software components, I believe, can we build an AI ecosystem that is both innovative and safe.
Thomas Şerban von Davier is an affiliated faculty member at the Carnegie Mellon Institute for Strategy and Technology at Carnegie Mellon University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Packages of grass-fed ground beef are being recalled over possible E. coli contamination. The affected packages were distributed in at least six states, according to the U.S. Department of Agriculture (USDA).
What ground meat is recalled?
The USDA made the announcement in a Dec. 27 recall notice, explaining that the recall includes 2,855 pounds of raw ground beef produced on Dec. 16 from Mountain West Food Group, LLC. The at-risk product is the company’s 16-oz. (1-lb.) vacuum-sealed packages of FORWARD FARMS GRASS-FED GROUND BEEF. The packages have a use or freeze by date of 01/13/26. The product also has the establishment number EST 2083 printed on the side of the package.
The meat was shipped for sale to six states: California, Colorado, Idaho, Montana, Pennsylvania, and Washington. The USDA’s Food Safety and Inspection Service (FSIS) advises customers to check their freezers for the at-risk products and either throw them away immediately or return them to the place of purchase. There have been no confirmed reports of illnesses due to the contaminated product.
Per the announcement, the possible contamination was flagged during routine FSIS testing, which discovered the presence of E. coli O26. “Most people infected with STEC O26 develop diarrhea (often bloody) and vomiting,” the release said.
How do you treat E. coli?
According to Mayo Clinic, rest and fluids are the most common treatments for an E. coli infection. Those with the illness should avoid anti-diarrheal medications, which can slow digestion and healing. Likewise, antibiotics aren’t recommended. Mayo Clinic also notes that E. coli can sometimes lead to a life-threatening form of kidney failure, which requires hospitalization, IV fluids, blood transfusions, and kidney dialysis.
Another recent E. coli outbreak, affecting raw milk cheese, led to at least three infections. A Dec. 30 Food Safety News report found that there were at least 33 multi-state food outbreaks, involving E. coli, Salmonella, and other bacteria, in 2025. “For every confirmed patient in an E. coli outbreak there are likely more than 26 who go undetected,” it noted.
The ground beef recall notice directed consumers with further questions to contact Mountain West Food Group, LLC’s CEO, Jeremy Anderson, at 208-679-3765 or info@mountainwestfoodgroup.com. Fast Company reached out to Mountain West Food Group, LLC for additional comment on the recall but did not hear back by the time of publication.
When Bianca Jones, a 33-year-old special education teacher in Memphis, Tennessee, decided a couple of years ago that she wanted to buy a house, she started digging into her Experian credit report. She was shocked by what she found.
Her student debt had been double-counted, making it look as though she owed a quarter of a million dollars and putting home ownership out of reach. Jones disputed the items with Experian, one of the major credit reporting agencies, multiple times in writing and over the phone, but got nowhere.
“They kept saying it’s been verified, it’s been verified. They never investigated. They never tried to remove it,” Jones said in an interview.
Eventually, Jones complained to the Consumer Financial Protection Bureau, a federal watchdog created by Congress in 2010 to protect consumers in their financial dealings. The complaint helped her lawyers show a judge the lengths she had gone to in order to mitigate damage to her credit, according to her attorneys, legal papers and a copy of the complaint. That paper trail helped Jones successfully sue Experian to correct her record.
Jones closed on a house purchase in the Memphis suburb of Millington for $300,000 in January.
“If I didn’t have this agency to go to, I don’t think I’d be in the house right now,” said Jones. “It actually changed my life.”
Experian and the CFPB did not respond to a request for comment on Jones’ case.
AGENCY FACING SHUTDOWN
In interviews, consumers who had fallen on hard times or known difficulty, lawyers who work with the poor and credit counselors told Reuters the CFPB had been a lifeline for people facing hardship and they feared that, without it, many consumers would be left unprotected from financial predators.
Conceived by Senator Elizabeth Warren to police the type of lending that fueled the 2008 financial crisis, the CFPB has long been a target of conservatives and industry. Congress created the agency as part of post-crash reforms in 2010 as the sole federal body primarily charged with protecting consumers’ rights in the financial marketplace.
The CFPB now faces extinction under President Donald Trump’s second administration, which says the agency is a political weapon for Democrats and a burden on free enterprise.
Speaking to reporters at the White House in February, Trump said it was “very important to get rid of the agency,” claiming, without spelling out evidence, that Warren had “used that as her little personal agency to go around and destroy people.”
In an interview, Warren dismissed the criticism as a sign the CFPB was doing its job. “This is not about vendettas. This is about enforcing the law as it is written, so that billionaires and billionaire corporations don’t cheat American families. I think that’s a pretty good thing,” she said.
White House Budget Director Russell Vought, a staunch CFPB critic and the agency’s acting head, told “The Charlie Kirk Show” podcast in October he plans to shutter the CFPB. The administration is fighting in court to fire up to 90% of its workers, while planning to move pending investigations and litigation to the Justice Department.
The agency says it is due to run out of money in early 2026 and Vought says he cannot legally seek more until the Federal Reserve returns to what the administration deems “profitability,” a position experts dispute. Congressional Republicans also slashed the CFPB’s maximum allowable funding in July.
Together, the administration, congressional Republicans and industry-backed lawsuits have undone a decade’s worth of CFPB rules on matters ranging from medical debt and student loans to credit card late fees, overdraft charges and mortgage lending.
The agency has also dropped or paused its probes and enforcement actions, and stopped supervising the consumer finance industries, leading to a string of resignations.
The CFPB and the White House did not respond to requests for comment.
Warren said that as a law professor studying bankruptcy she saw that consumer protections were weak and fragmented, and that America needed a single federal agency dedicated to protecting consumers from unfair, deceptive and abusive practices.
“I was stunned by the number of people in financial trouble who had lost a job or got sick but who had also been cheated by one or more of their creditors,” she told Reuters. “For no agency was consumer protection a first priority, it was somewhere between fifth and tenth, which meant there was just no cop on the beat. If the CFPB is not there, people have nowhere to turn when they get cheated.”
CRITICS COMPLAIN OF OVERREACH
Republicans said the agency was redundant, with federal bank watchdogs, like the Office of the Comptroller of the Currency and Federal Deposit Insurance Corporation, and state regulators already looking out for consumers, and that its funding and leadership structure were unconstitutional. Like other banking regulators, the CFPB’s funding is not set annually by Congress and does not come directly from taxpayers. Rather, the agency draws on the Federal Reserve and its director was until recently protected from removal at will by the president.
Republicans accused the CFPB’s first director Richard Cordray, a Democrat, of using those powers to crush small banks and businesses via overzealous enforcement and complex regulations, and of overstepping the agency’s legal authority by trying to regulate companies Congress had exempted from its oversight, such as auto dealerships.
Conservative and industry groups tried several times to curb its powers or extinguish it altogether via the courts. In 2020 the Supreme Court handed the president the power to fire the director, which he has since used. Critics on the political right accused former director Rohit Chopra, a Democrat, of exceeding his authority, flouting the federal rule-making process, and harming consumers with an ill-conceived crackdown on financial firm fees.
Thomas Hoenig, who served as vice chair of the FDIC from 2012 to 2018, said he was skeptical of some of the CFPB’s work under prior administrations, but that it still served an important purpose.
“If you take them out of the picture altogether, you’re going to get more abuse, not less,” he said. “I’m disappointed to see the CFPB just go away.”
“VERY IMPORTANT FOR ME”
For some, though, the agency has been a lifeline. Millions of Americans like Jones who are struggling with credit reporting errors, predatory lenders, debt collectors, fraud, discrimination or other challenges are now filing complaints every year with the agency, which prompts companies to fix the issues, sometimes by paying the complainants, or to explain themselves.
When companies repeatedly break the rules, the CFPB punishes them and tries to make their customers whole. To date, it has returned $21 billion to consumers, according to CFPB data.
Morgan Smith, a 31-year-old single mother and social services worker in Issaquah, Washington, turned to those resources when she realized she had been a victim of identity theft.
After her wallet and ID were stolen from her car, she learned that someone had opened up a string of accounts in her name, she said: a rental car that ended up in a crash, an unpaid storage unit and a hotel room at an amusement park. Reuters was unable to confirm Smith’s account independently.
“I went straight to the CFPB and I was navigated there to their consumer education tab, where I was able to find out how to deal with fraud and scams. It gave me all the information I needed to know my rights,” she said.
“That was very important for me to have this resource.”
Without the CFPB, borrowers would once again rely on a hodgepodge of federal, state, and local agencies that lack the CFPB’s resources, expertise and legal powers, say consumer groups.
“Prior to the CFPB coming around, we’d have to say, ‘write your attorney general, write to the FTC,’ whoever it was, and it became this sort of letter-writing campaign,” said Sam Hohman, who runs the Nebraska nonprofit Credit Advisors Foundation, which helps people get out of debt and offers consumer education services.
As a result, people like Virginia resident Michael Johnson, 49, may have fewer options in the future when they fall into trouble.
After a kidney transplant and leg amputation several years ago left Johnson unable to work, he racked up credit card debt paying for basic expenses, he said. This summer he received court summonses from creditors seeking to collect on that old debt, according to court records.
“I got in over my head unintentionally,” Johnson said in an interview.
Using a CFPB database of credit card terms and conditions, Johnson learned that his creditors were required to use arbitration rather than sue in court, which could cost more than the underlying debts. Johnson represented himself in court and says so far one creditor has dropped its complaint while the other is considering its options.
“It adds credibility to your defense that you understand your rights,” Johnson said. “Life happens to everybody.”
Douglas Gillison, Reuters
A veteran jazz ensemble announced on Monday it was canceling its New Year’s Eve performances at the Kennedy Center, the latest group to withdraw from the Washington arts institution after it was renamed to include U.S. President Donald Trump.
“Jazz was born from struggle and from a relentless insistence on freedom: freedom of thought, of expression, and of the full human voice. Some of us have been making this music for many decades, and that history still shapes us,” the Cookers jazz ensemble said in a statement.
The Kennedy Center had promoted two New Year’s Eve performances by the Cookers as an “all-star jazz septet that will ignite the Terrace Theater stage with fire and soul.”
Richard Grenell, a longtime ally of the U.S. president whom Trump named as the center’s president, said on Monday that such boycotts are a “form of derangement syndrome” and that the cancellations are coming from artists booked by the institution’s previous leadership. He has previously termed cancellations a “political stunt.”
The withdrawal adds to a growing list of cancellations since the name change was announced this month by the Center’s board, which the Republican president filled with allies during a broad takeover earlier this year.
A Christmas Eve jazz concert was canceled last week, with the host of the show, musician Chuck Redd, attributing it to the name change. The New York Times reported that Doug Varone and Dancers, a New York dance company, has pulled out of two April performances. Democrats have called the decision by the board of the Kennedy Center to add Trump’s name to the institution illegal, while John F. Kennedy’s family denounced the move as undermining the slain president’s legacy.
The board voted to rename the arts venue The Donald J. Trump and The John F. Kennedy Memorial Center for the Performing Arts, or Trump Kennedy Center for short.
Trump has been eager to put his stamp on Washington and his name on buildings in his second term. His critics say he has compromised institutions by installing loyalists and making funding threats. Trump says he is tackling what he calls those institutions’ liberal bias.
Kanishka Singh, Reuters
Artificial intelligence is reshaping the global workforce and rapidly expanding the expectations placed on today’s learners. The World Economic Forum predicts that technological advancements like AI, alongside economic and demographic factors, will lead to a net increase of 78 million global jobs this decade. Educational institutions now face a pivotal moment. They must evolve how students learn, how instructors teach, and how technology supports each step of that journey.
For decades, the education sector adopted new technologies cautiously. However, the profound impact of AI on the workforce has accelerated interest and experimentation. Our latest research at Cengage Group shows that both positive perceptions of AI and classroom usage are rising. While this enthusiasm is a promising step toward ensuring learners are prepared for an AI-forward future, it’s critical that institutions approach AI responsibly.
With new AI tools launching at unprecedented speeds, it can be difficult to determine which will truly enhance learning outcomes. In some cases, rapid launches have created more friction for educators and confusion for students. To ensure responsible deployment, the conversation must shift from racing to market and instead toward measured, purposeful development aligned with how learning actually occurs.
WELL-INTENTIONED, BUT MISSING THE MARK
Many big tech companies have rushed to develop AI-based educational tools. But while tech innovators have made strides in exploring AI to enhance the educator and student experience, the critical reality is that education is an incredibly complex ecosystem. Education is simply not fit for plug-and-play solutions.
Google’s recent homework help feature is one example. Designed to give students an AI overview of what appeared on the screen, including assessment answers, the tool inadvertently made it harder for instructors to validate work and accurately gauge understanding. Instead of reducing friction, it increased workload for both educators and students, ultimately leading to a pause in deployment.
A similar challenge emerged this past summer with OpenAI’s Study Mode. While designed to guide students and ask questions rather than provide answers, it is just one click away from ChatGPT, where answers are readily available. Without a deep understanding of teaching fundamentals, and how and when real learning happens, technological developments can lead to unintended consequences that disrupt rather than improve learning.
These examples highlight an important truth. Innovation alone is not enough. Educational impact requires domain expertise, intentional design, and clear boundaries that promote understanding rather than shortcuts.
BALANCE MEANINGFUL INNOVATION AND REINFORCE LEARNING
To deliver educational support that blends innovation with learning outcomes, AI product development must balance the needs of both educators and students. Faculty are increasingly being asked to do more with less. AI should lighten that load, not add to it. For example, AI can surface classroom trends, flag areas where students are struggling, and help educators personalize instruction.
Students, meanwhile, need support tools that build understanding, not ones that just provide answers. Success in student deployment lies in cultivating curiosity and critical thinking. For example, AI can provide study support outside of classroom hours, deliver personalized feedback, and encourage further exploration to strengthen learning.
This balanced approach requires maintaining human oversight. Collaboration with institutions and faculty ensures AI experiences align with course objectives and reinforce, rather than disrupt, proven teaching practices.
THE PATH FORWARD: PRIORITIZE PEDAGOGY
As AI continues to evolve, pedagogy must be at the core of all innovation, ensuring academic integrity and quality content that builds trust and drives meaningful student outcomes. Through controlled, confined subject knowledge and consistent training to ensure accuracy and academic integrity, AI tools can prioritize pedagogy and remain narrowly focused on driving specific student learning outcomes.
AI should act as a supporting coach who helps break down problems, prompts curiosity, and encourages persistent learning so students can confidently reach the correct answer on their own. This purpose-built approach to AI complements the human teacher and enhances instruction by confirming student understanding and pinpointing knowledge gaps to support educators in delivering more personalized learning.
The key to unlocking AI’s potential in education goes beyond speed to market, and lies in thoughtful development rooted in intentional and responsible design. With pedagogy at the core, AI becomes more than a tool. It becomes a partner in improving learning outcomes for students and reducing the educator’s load.
Darren Person is EVP and chief digital officer of Cengage Group.
When internet services platform Cloudflare suffered an outage in November, it took a big chunk of the online world down with it.
Major platforms like ChatGPT, X, and Canva became unreachable. So did digital services offered by countless banks, retailers, and many other businesses. During the six-hour meltdown, as many as 2.4 billion users could have felt the impact.
Software outages like this have always been and always will be part of online life. But today our systems are more interconnected than ever, so a single failure can ripple outward. AI only amplifies that risk.
Yet, too many companies still lack protection against such disasters. In an era when outages are inevitable, they’re effectively operating without a safety net.
The fundamental missing ingredient is something simple but easily overlooked: resilience testing.
In a nutshell, resilience testing is all about pressure-testing your software before issues happen. It ensures that systems keep working, or quickly bounce back, when things go wrong.
Think of resilience testing as a small safety step to prevent big problems. The annual median cost of a high-impact IT outage is about $76 million. Businesses can also suffer reputational damage, lose customers, and get hit with regulatory penalties. Cloudflare is only one recent example. In the past year alone, AWS, Microsoft 365, and Starlink all went down, to name just a few.
So why aren’t more businesses stress-testing their software for inevitable failure? Here’s why, and what companies can do about it.
MOST COMPANIES DON’T BOTHER WITH RESILIENCE TESTING
As high as the stakes are, businesses have reasons to avoid software resilience testing. The process is technical, and it can get messy.
Modern resilience testing, also called chaos engineering, was put in the spotlight 15 years ago by Netflix software developers. Realizing that the only way to test for resilience is to simulate problems in the wild or in production, they created a suite of tools that replicated network crashes, cloud services meltdowns, and other real-world failures.
Netflix might have been able to roll with the punches, but few other companies have the expertise or the stomach to compromise their systems like this. It’s the equivalent of starting a controlled fire to ensure you have the resources to put it out.
Resilience testing requires the technical acumen to know which failures to simulate and how to respond to them. Putting these drills into action also entails risk, like triggering your home’s fire sprinkler system, which could ruin the furniture. Most importantly, developers need to know what to do when tests reveal weaknesses.
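In code, such a drill can be as small as wrapping a dependency so it randomly fails or stalls, then checking that the caller degrades gracefully instead of crashing. The sketch below is purely illustrative: `fetch_balance`, its failure rate, and the retry-then-fallback logic are invented for this example, not taken from any real chaos engineering tool.

```python
import random
import time

def flaky(failure_rate=0.3, max_delay=0.0):
    """Decorator that injects random failures (and optional latency) into a
    call, mimicking the kinds of faults a chaos drill simulates."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("injected fault")
            if max_delay:
                time.sleep(random.uniform(0, max_delay))  # injected latency
            return fn(*args, **kwargs)
        return inner
    return wrap

# A hypothetical downstream call that the drill targets.
@flaky(failure_rate=0.5)
def fetch_balance(account_id):
    return {"account": account_id, "balance": 100}

def fetch_balance_resilient(account_id, retries=3):
    """The behavior under test: retry on failure, then fall back to a
    degraded answer rather than propagating the error to the user."""
    for _ in range(retries):
        try:
            return fetch_balance(account_id)
        except ConnectionError:
            continue
    return {"account": account_id, "balance": None, "stale": True}
```

Whatever the injected fault does, the caller always gets an answer; the drill passes only if the fallback path actually works.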
Because the threshold for resilience testing is so high, it isn’t integrated into most companies’ software development processes. There’s rarely a dedicated team, and often no one except maybe the CTO is clearly in charge. As a result, resilience testing becomes a bottleneck, so companies don’t bother with it.
A BETTER WAY FORWARD: HELP FROM AI
The good news: It no longer has to be this way. For companies that want to adopt resilience testing, new platforms and tools, powered by AI, are making the process safer and easier.
Specialized resilience testing agents now enable companies to automate and optimize testing, without needing dedicated experts or teams.
First, the AI agent identifies likely edge cases: unusual or unexpected scenarios that could compromise reliability. It examines system behavior in production, how services interact, and where similar systems have previously failed.
For example, the agent might highlight a scenario where a service slows, rather than fails outright. Another edge case: A code deployment updates only half the company’s servers, leading to inconsistent user experiences.
The agent then generates and prioritizes the test cases most likely to reveal resilience issues, explaining why each one matters. It can also set up and run those tests.
After problems are identified, the AI agent suggests targeted fixes, making the software more resilient. With the heavy lifting completed, developers can review and apply those insights.
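As a back-of-the-envelope illustration of the prioritization step, imagine scoring each candidate edge case by how likely it is to occur and how many users it would touch. Everything here (the case names, the weights, the likelihood-times-impact heuristic) is invented for illustration; real agents draw on far richer signals from production behavior.

```python
from dataclasses import dataclass

@dataclass
class EdgeCase:
    name: str
    likelihood: float  # 0..1, how often this failure mode tends to occur
    impact: float      # 0..1, rough share of users affected if it does

    @property
    def priority(self):
        # Simple heuristic: expected harm = likelihood x impact.
        return self.likelihood * self.impact

cases = [
    EdgeCase("dependency responds slowly instead of failing", 0.6, 0.8),
    EdgeCase("deploy updates only half the server fleet", 0.2, 0.9),
    EdgeCase("cache node drops out of the pool", 0.4, 0.3),
]

# Test the highest-expected-harm scenarios first.
ranked = sorted(cases, key=lambda c: c.priority, reverse=True)
for c in ranked:
    print(f"{c.priority:.2f}  {c.name}")
```

Under these made-up weights, the slow-dependency case outranks the more dramatic but rarer partial-deploy case, which is exactly the kind of non-obvious ordering a prioritization step is meant to surface.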
WHY RESILIENCE TESTING NEEDS TO SHIFT LEFT
Having the right tools is one thing, but effective resilience testing requires more than just software.
Creating a culture of resilience is part of the solution. Software teams need to include testing in their routine. Ultimately, the only way to strengthen yourself against failures is to practice for them. If you never run those drills, you never know how bad things can get until it’s too late.
Developers should also remember that resilience testing isn’t just about full-scale, five-alarm outages. It’s also about small, partial failures that create a poor user experience for customers, without necessarily taking the whole system down.
Let’s say a platform like Cloudflare has an issue affecting a major bank’s consumer app, leaving millions unable to check their balances. Resilience testing should anticipate this problem and provide a viable workaround.
But the best way to encourage a culture of resilience is to shift left: moving resilience testing into the preproduction phase of software development, before code ever goes live.
Shifting left helps teams catch weaknesses long before customers feel them. That’s crucial with today’s complex, interconnected software systems, where seemingly minor issues can rapidly spiral into major outages. Rather than scramble to diagnose problems during live incidents, developers can uncover and fix them in a safe environment.
Shifting left can save money and stress, too. Fixing resilience issues in production is costly and disruptive, often pulling team members away from other vital tasks. By taking a proactive approach, developers and business leaders can be more confident in the product they deliver to customers.
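Concretely, shifting left can mean running the failure drill as an ordinary automated test in CI. The sketch below is hypothetical (the service, the timeout budget, and the fallback message are all invented): it injects both an outright outage and a slow stall, and checks that the user still gets a degraded answer within budget.

```python
import time

FALLBACK = "balance temporarily unavailable"

def get_balance(fetch, timeout=0.2):
    """Return the live balance, or a fallback message when the dependency
    fails or exceeds the user-facing latency budget."""
    start = time.monotonic()
    try:
        value = fetch()
    except ConnectionError:
        return FALLBACK
    if time.monotonic() - start > timeout:
        return FALLBACK  # answered, but too slowly: treat as degraded
    return value

# Drill 1: dependency fails outright.
def broken_fetch():
    raise ConnectionError("injected outage")

# Drill 2: dependency answers, but only after a long stall.
def slow_fetch():
    time.sleep(0.5)
    return 42

assert get_balance(broken_fetch) == FALLBACK
assert get_balance(slow_fetch) == FALLBACK
assert get_balance(lambda: 42) == 42  # healthy path still works
```

Because the drill is just a test, it runs on every commit, and a regression in the fallback path blocks the deploy instead of surfacing as a live incident.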
Ultimately, resilience testing isn’t rocket science. Companies that run fire drills for their software and embrace a culture of resilience testing will find themselves in a stronger position when the next disruption strikes. And in an increasingly interconnected world, where AI tools and features depend on more underlying services than ever, it’s safe to say that might be sooner rather than later.
Jyoti Bansal is CEO of Harness.