
2026-02-05 22:30:00| Fast Company

Have you seen larger-than-life depictions of your friends lately? They might have been sucked into the latest social trend: creating AI-generated caricatures. The trend itself is simple. Users input a common prompt: “Create a caricature of me and my job based on everything you know about me,” upload a photo of themselves, and, voilà! ChatGPT (or any AI image platform) spits out an over-the-top, cartoon-style image of you, your job, and anything else it’s learned about you. This ability is predicated on a robust ChatGPT (or other AI) chat history. Those who don’t have a close, personal relationship with the AI might need to give additional information to get a more accurate depiction. But notably, that’s yet another instance of potential AI privacy concerns. It’s not the first AI image trend. Other social media challenges have had users posting themselves as AI-generated cartoons, Renaissance paintings, or fantasy characters. AI’s image capabilities have gone in a few different directions. Some, like this trend or the meme-ification of Sora, are seemingly harmless fun. However, Sora has started to see issues with bad-faith individuals creating AI deepfakes (see also: Grok porn). Meanwhile, even as the trend continues to rise, more than 13,000 ChatGPT users reported issues on Thursday, according to outage-tracking website Downdetector.com.

Category: E-Commerce
 

2026-02-05 21:48:50| Fast Company

Tech workers have been worried for years about the AI tidal wave coming for their jobs, but their bosses are starting to worry too. Stocks plunged this week as fears escalated that AI advancements will take a bite out of business for many software and services companies. The market losses are tied to updates to Anthropic’s AI-powered workplace productivity suite, Claude Cowork, which threatens to replace some software tools ubiquitous in the professional world. Companies with business in research and legal software like Thomson Reuters and LegalZoom dropped dramatically on the Anthropic news, with a wide swath of software stocks following suit. Intuit, PayPal, and Equifax all dropped by over 10%, with enterprise software companies like Atlassian and Salesforce deepening their own losses, which started well before the latest AI news. The S&P North American software index also slid further this week, worsening a recent losing streak punctuated by a 15% decline in January, the index’s worst month in nearly two decades. Unlike Claude Code, a coding tool designed for developers, Anthropic built Claude Cowork as a powerful, general-purpose AI agent for non-coders. Available to Anthropic’s $100-per-month premium subscribers, Claude Cowork can knock out easier tasks like searching, collecting, and organizing files, but it’s also capable of taking on much bigger challenges like making slide decks, producing reports, and pulling and synthesizing information from other business software tools, like Zendesk and Microsoft Teams. Claude’s ability to execute complex tasks with dedicated software sub-agents prompted plenty of nervous jokes about humans being replaced by C-suites full of AI. And that was before a new Anthropic update introduced powerful new plugins designed to automate tasks across domains like finance, legal, sales, data, marketing, and customer support.
The market is still digesting those new agentic AI capabilities, which could pose an existential threat to the software-as-a-service companies that undergird big chunks of the economy.

Fears of a zero-sum software game grow

Anthropic co-founder and CEO Dario Amodei has made his own ominous predictions about AI displacing human workers. Last year, Amodei predicted that AI could vaporize half of entry-level white-collar roles, sending unemployment as high as 20% within five years. He pointed to losses in industries like tech, law, consulting, and finance, specifically. “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” Amodei told Axios. “I don’t think this is on people’s radar.” Not everyone deeply invested in AI agrees. Nvidia CEO Jensen Huang swatted away worries that AI would eat the traditional software industry after the stock bloodbath that began on Tuesday. “There’s this notion that the tool in the software industry is in decline, and will be replaced by AI,” Huang said, emphasizing that relying on existing software tools makes more sense than reinventing the wheel. “It is the most illogical thing in the world, and time will prove itself.”


2026-02-05 21:00:00| Fast Company

The federal agency that enforces anti-discrimination laws in the workplace made an unexpected disclosure this week: Nike was under investigation for its approach to diversity, equity, and inclusion, due to claims that the company had discriminated against white employees and job applicants. The investigation suggests that Nike’s diversity goals and other DEI initiatives led the company to hire non-white workers to meet quotas or award them more opportunities for career advancement, thereby discriminating against white workers. It is notable as the first major legal undertaking by Andrea Lucas, whom President Trump installed as the chair of the Equal Employment Opportunity Commission (EEOC) last year. But it also indicates that Lucas is serious about targeting corporate employers over alleged discrimination against white workers, which she has clearly signaled is a priority for the agency under the Trump administration. “It is designed to instill fear into the hearts of large companies,” says Chai Feldblum, a former EEOC commissioner and a member of EEO Leaders, a group of former senior officials who worked at the EEOC and Department of Labor under multiple administrations. “If they’re afraid, then small companies will be afraid. And the point is to chill any form of equity and diversity efforts, even legal ones.”

An unusual investigation

The investigation into Nike is unusual for a few reasons: It is, of course, the first inquiry into what the agency has called DEI-related discrimination. But it is also rare that the EEOC’s investigations into employers become public before they have concluded, since the process is supposed to be confidential. An EEOC investigation typically either ends in a dismissal or, if the agency finds reasonable cause and concludes there was discrimination, results in a conciliation process that allows an employer to resolve the issue in private, with both parties coming to an agreement.
If conciliation fails, the agency would then decide whether or not to bring a lawsuit, which is considered a last resort and happens infrequently. The EEOC does often use subpoenas to force employers to comply with its requests for information. According to Feldblum, subpoenas can be a useful tool for the agency to extract information from a company that might be stonewalling or only offering partial responses to its inquiries. In the case of Nike, however, the EEOC went to court to enforce the subpoena, thrusting the investigation into the public record. “What is unusual about this is the publicity,” Feldblum says. “Which is what chair Lucas wants. She’s doing that by suing on a subpoena. I think it’s a question whether EEOC is following its normal process for enforcing subpoenas.” Nike seemed to suggest as much in a statement to Fast Company. “This feels like a surprising and unusual escalation,” a company spokesperson said. “We have had extensive, good-faith participation in an EEOC inquiry into our personnel practices, programs, and decisions and have had ongoing efforts to provide information and engage constructively with the agency. We have shared thousands of pages of information and detailed written responses to the EEOC’s inquiry and are in the process of providing additional information.” The statement continued: “We are committed to fair and lawful employment practices and follow all applicable laws, including those that prohibit discrimination . . . We will continue our attempt to cooperate with the EEOC and will respond to the petition.”

A possible new precedent

Feldblum argues the EEOC’s approach to this investigation could set a precedent of taking companies to court over what the agency perceives to be insufficient cooperation with its requests for information.
The press release put out by the EEOC makes evident that the agency had requested extensive details about Nike’s employment decisions, including its criteria for layoffs, the use of demographic data and how it was tied to executive compensation, and specifics about 16 programs that offered mentoring, leadership, or career development opportunities to underrepresented employees. Unlike many of the cases the EEOC investigates, this one was not initiated by a complaint from a worker alleging discrimination; Lucas herself brought the charge against Nike in 2024. But it’s not clear exactly what prompted the investigation. The EEOC claims to be looking into systemic allegations of DEI-related intentional race discrimination at Nike that have targeted white workers. By Lucas’s own admission, per a statement in the EEOC release, this investigation seems to have been prompted by Nike’s public disclosures about its DEI programs. (When Lucas sent letters to 20 law firms last year requesting details on their DEI practices, a move that drew widespread criticism, she had relied on public statements.) “You sign a commissioner charge under penalty of perjury,” Feldblum says. “You need to have at least some evidence of discrimination to sign that charge. Now if you believe that simply having a [diversity] goal is reasonable evidence of discrimination, then you’ll go ahead and sign that.”

The future of DEI

Like many companies at the time, Nike set ambitious DEI goals after the murder of George Floyd sparked a racial reckoning across corporate America. (The company has also grappled with broader culture issues over the years, including allegations of sexual harassment and gender discrimination.) In 2021, Nike tied executive compensation to DEI commitments that were intended to increase the share of women in leadership and boost representation of racial and ethnic minorities to 35% across its workforce.
In the time since, however, Nike has cycled through five chief diversity officers; the company also declined to put out a corporate sustainability report last year, which typically documents its progress on DEI, though Nike claimed it had not wavered from its diversity commitments. Depending on how the EEOC investigation unfolds, Nike could face significant repercussions. The court will likely uphold the subpoena, according to Feldblum, which means Nike will likely have to produce reams of additional information. If the EEOC decides to make an example of Nike, the investigation could ultimately result in a lawsuit, which would have far-reaching consequences for other employers and potentially set a precedent for subsequent investigations. “I think we all (employers, employees, the general public) have got to assume there will be a continued onslaught of attacks on DEI,” Feldblum says, urging companies to “review, not retreat” from their diversity programs and position on DEI. “The EEOC is trying to stop employers from doing anything to increase diversity and equity, and they are stretching their own procedures, as well as the law . . . And that is a very sad day for an agency entrusted with enforcing employment civil rights laws.”


2026-02-05 20:44:20| Fast Company

Anthropic is out with a new model called Claude Opus 4.6, an upgrade to its top-of-the-line Opus 4.5 model that launched in November. The new release could add new capabilities to Anthropic’s Claude Code coding assistant, which is facing growing competitive pressure from OpenAI’s Codex. Anthropic says Opus 4.6 improves on its predecessor’s coding skills, planning, and, perhaps most importantly, its ability to reason more clearly when handling large amounts of information. When Opus 4.6 powers Claude Code, the coding agent can comprehend larger codebases and make more thoughtful decisions about how and where to add new code, the company says.

More long-term memory

AI labs have been racing to build models with longer context windows, meaning the amount of information a model can consider for a given task. But models have often struggled to use that information effectively in their outputs, a limitation Anthropic acknowledges. “Previously, we would see things like, maybe the model gets lost in the middle, or it might forget details,” Opus product manager Dianne Penn tells Fast Company. “I wouldn’t say Opus 4.6 is perfect (humans or other past models aren’t perfect), but we think that the quality improvement is pretty significant.” Opus’s longer memory also allows it to work on complicated tasks for extended periods, enabling Claude Code users to assemble teams of agents that collaborate on tasks. Anthropic also says the tool offers improved code review and debugging capabilities, helping it catch its own mistakes. Opus 4.6 arrives as the use of AI coding tools continues to surge, and as competition between Anthropic and OpenAI for software developers intensifies. OpenAI’s Codex coding tool recently launched as a standalone app, powered by the GPT-5.2 model, and has received largely enthusiastic reviews from developers.
A model for everyday work tasks

Beyond coding, the new Anthropic model is designed to improve performance on everyday work tasks such as running financial analyses, conducting research, and creating or using documents, spreadsheets, and presentations. Opus 4.6 will also power Anthropic’s general-purpose work tool, CoWork, enabling it to multitask with minimal human supervision. Anthropic says Opus 4.6 achieved top scores across several industry benchmark tests, reaching the highest results so far on multiple evaluations. These include Humanity’s Last Exam, a complex multidisciplinary reasoning test; Terminal-Bench 2.0, an agentic coding evaluation; and GDPval-AA, which measures performance on economically valuable knowledge-work tasks in finance, legal, and other domains. Anthropic also says Opus 4.6 outperforms all other models on OpenAI’s BrowseComp, which measures a model’s ability to locate difficult-to-find information online. Anthropic says the Opus 4.6 model is available to developers using Claude Code for the same price per million tokens as Opus 4.5. The new model is now the default for Claude Code Pro subscribers, and is available as an option for all other subscribers.


2026-02-05 20:35:00| Fast Company

Visa announced a new platform on Thursday designed to stimulate small businesses through a variety of tools and network opportunities, in advance of major sporting events this year. The program, Visa & Main, is built around helping address what Visa calls the most pressing challenges that entrepreneurs face: access to capital, reaching customers, and adopting modern business tools. That starts with a $100 million partnership with small business lender Lendistry, with Visa saying it would continue to provide additional grants and financial support programs as part of Visa & Main. Additionally, Visa & Main connects Visa’s small business members with its corporate sponsors, identifying opportunities through major events like Super Bowl LX and this summer’s FIFA World Cup. It launched the Square Stops Here hop-on, hop-off bus tour in San Francisco during Super Bowl week, designed to support and spotlight local businesses. The company is using its platform to help direct potential customers to its small business members, also hosting workshops for entrepreneurs to help them convert a short-term gain into long-term sustainability.

“Heartbeat of local communities”

Visa & Main also intends to make it easier for small businesses to adopt AI in the workplace, noting that small businesses have adopted the new technology at a rate less than half that of bigger businesses. The program attempts to close that gap by making tools such as expense management and fraud protection easier to access. “Small businesses are the heartbeat of local communities and represent nearly half of our country’s economic activity,” Kim Lawrence, Visa’s North America regional president, said in a release.
“With Visa & Main, we’re connecting Visa’s products and in-house knowledge with the expertise of our clients and partners to provide small businesses with flexible financing opportunities and customer acquisition and technology support.” The move expands Visa’s investment into small businesses, an area where credit card companies have long competed for market share. For example, Small Business Saturday was launched 15 years ago as a marketing push from American Express, and it has since become an annual Thanksgiving weekend event. Visa held a launch event in Atlanta on January 21 for Visa & Main, where members worked directly with the Visa team to get hands-on experience with the product.


2026-02-05 19:30:00| Fast Company

With community opposition growing, data center backers are going on a full-scale public relations blitz. Around Christmas in Virginia, which boasts the highest concentration of data centers in the country, one advertisement seemed to air nonstop. “Virginia’s data centers are investing billions in clean energy,” a voiceover intoned over sweeping shots of shiny solar panels. “Creating good-paying jobs” (cue men in yellow safety vests and hard hats) “and building a better energy future.” The ad was sponsored by Virginia Connects, an industry-affiliated group that spent at least $700,000 on digital marketing in the state in fiscal year 2024. The spot emphasized that data centers are paying their own energy costs, framing this as a buffer that might help lower residential bills, and portrayed the facilities as engines of local job creation. The reality is murkier. Although industry groups claim that each new data center creates dozens to hundreds of high-wage, high-skill jobs, some researchers say data centers generate far fewer jobs than other industries, such as manufacturing and warehousing. Greg LeRoy, the founder of the research and advocacy group Good Jobs First, said that in his first major study of data center jobs nine years ago, he found that developers pocketed well over a million dollars in state subsidies for every permanent job they created. With the rise of hyperscalers, LeRoy said, that number is “still very much in the ballpark.” Other experts echo that finding. A 2025 brief from University of Michigan researchers put it bluntly: “Data centers do not bring high-paying tech jobs to local communities.” A recent analysis from Food & Water Watch, a nonprofit tracking corporate overreach, found that in Virginia, the investment required to create a permanent data center job was nearly 100 times higher than what was required to create comparable jobs in other industries. “Data centers are the extreme of hyper-capital intensity in manufacturing,” LeRoy said.
“Once they’re built, the number of people monitoring them is really small.” Contractors may be called in if something breaks, and equipment is replaced every few years. But “that’s not permanent labor,” he said. Jon Hukill, a spokesperson for the Data Center Coalition, the industry lobbying group that established Virginia Connects in 2024, said that the industry is committed to paying its full cost of service for the energy it uses and is trying to meet this moment in a way that supports both data center development and an affordable, reliable electricity grid for all customers. Nationally, Hukill said, the industry supported 4.7 million jobs and contributed $162 billion in federal, state, and local taxes in 2023. Dozens of community groups across the country have mobilized against data center buildout, citing fears that the facilities will drain water supplies, overwhelm electric grids, and pollute the air around them. According to Data Center Watch, a project run by AI security company 10a Labs, nearly 200 community groups are currently active and blocked or delayed 20 data center projects representing $98 billion of potential investment between April and June 2025 alone. The backlash has exposed a growing image problem for the AI industry. “Too often, we’re portrayed as energy-hungry, water-intensive, and environmentally damaging,” data center marketer Steve Lim recently wrote. That narrative, he argued, “misrepresents our role in society” and potentially hinders “our ability to grow.” In response, the industry is stepping up its messaging. Some developers, like Starwood Digital Ventures in Delaware, are turning to Facebook ads to appeal to residents. Its ads make the case that data center development might help keep property taxes low, bring jobs to Delaware, and protect the integrity of nearby wetlands. According to reporting from Spotlight Delaware, the company has also boasted that it will create three times as many jobs as it initially told local officials.
Nationally, Meta has spent months running TV spots showcasing data center work as a viable replacement for lost industrial and farming jobs. One advertisement spotlights the small city of Altoona, Iowa. “I grew up in Altoona, and I wanted my kids to be able to do the same,” a voice narrates over softly lit scenes of small-town Americana: a Route 66 diner, a farm, and a water tower. “So, when work started to slow down, we looked for new opportunities and we welcomed Meta, which opened a data center in our town. Now, we’re bringing jobs here for us, and for our next generation.” The advertisement ends with a promise superimposed over images of a football game: Meta is investing $600 billion in American infrastructure and jobs. In reality, Altoona’s data center is a hulking, windowless warehouse complex that broke ground in 2013, long before the current data center boom. Altoona is not quite the beleaguered farm town Meta’s advertisements portray, but a suburb of 19,000, roughly 16 minutes from downtown Des Moines, the most populous city in Iowa. Meta says it has supported 400+ operational jobs in Altoona. In comparison, the local casino employs nearly 1,000 residents, according to the local economic development agency. Ultimately, those details may not matter much to the ad’s intended audience. As Politico reported, the advertisement may have been targeted at policymakers on the coasts more than the residents of towns like Altoona. Meta has spent at least $5 million airing the spot in places like Sacramento and Washington, D.C. The community backlash has also made data centers a political flashpoint. In Virginia, Abigail Spanberger won November’s gubernatorial election in part on promises to regulate the industry and make developers pay their fair share of the electricity they use. State lawmakers also considered 30 bills attempting to regulate data centers.
In response to concerns about rising electricity prices, Virginia regulators approved a new rate structure for AI data centers and other large electricity users. The changes, which will take effect in 2027, are designed to protect household customers from costs associated with data center expansion. These developments may only encourage companies to spend more on image-building. In Virginia’s Data Center Alley, the ads show no sign of stopping. Elena Schlossberg, an anti-data-center activist based in Prince William County, says her mailbox has been flooded with fliers from Virginia Connects for the past eight months. The promises of lower electric bills, good jobs, and climate responsibility, she said, remind her of cigarette ads she saw decades ago touting the health benefits of smoking. But Schlossberg isn’t sure the marketing is going to work. One recent poll showed that 73 percent of Virginians blame data centers for their rising electricity costs. “There’s no putting the toothpaste back in the tube,” she said. “People already know we’re still covering their costs. People know that.” This article originally appeared in Grist. Grist is a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future. Learn more at Grist.org.


2026-02-05 19:06:10| Fast Company

In special education in the U.S., funding is scarce and personnel shortages are pervasive, leaving many school districts struggling to hire qualified and willing practitioners. Amid these long-standing challenges, there is rising interest in using artificial intelligence tools to help close some of the gaps that districts currently face and lower labor costs. Over 7 million children receive federally funded entitlements under the Individuals with Disabilities Education Act, which guarantees students access to instruction tailored to their unique physical and psychological needs, as well as legal processes that allow families to negotiate support. Special education involves a range of professionals, including rehabilitation specialists, speech-language pathologists, and classroom teaching assistants. But these specialists are in short supply, despite the proven need for their services. As an associate professor in special education who works with AI, I see its potential and its pitfalls. While AI systems may be able to reduce administrative burdens, deliver expert guidance, and help overwhelmed professionals manage their caseloads, they can also present ethical challenges ranging from machine bias to broader issues of trust in automated systems. They also risk amplifying existing problems with how special ed services are delivered. Yet some in the field are opting to test out AI tools, rather than waiting for a perfect solution.

A faster IEP, but how individualized?

AI is already shaping special education planning, personnel preparation, and assessment. One example is the individualized education program, or IEP, the primary instrument for guiding which services a child receives. An IEP draws on a range of assessments and other data to describe a child’s strengths, determine their needs, and set measurable goals. Every part of this process depends on trained professionals.
But persistent workforce shortages mean districts often struggle to complete assessments, update plans, and integrate input from parents. Most districts develop IEPs using software that requires practitioners to choose from a generalized set of rote responses or options, leading to a level of standardization that can fail to meet a child’s true individual needs. Preliminary research has shown that large language models such as ChatGPT can be adept at generating key special education documents such as IEPs by drawing on multiple data sources, including information from students and families. Chatbots that can quickly craft IEPs could potentially help special education practitioners better meet the needs of individual children and their families. Some professional organizations in special education have even encouraged educators to use AI for documents such as lesson plans.

Training and diagnosing disabilities

There is also potential for AI systems to help support professional training and development. My own work on personnel development combines several AI applications with virtual reality to enable practitioners to rehearse instructional routines before working directly with children. Here, AI can function as a practical extension of existing training models, offering repeated practice and structured support in ways that are difficult to sustain with limited personnel. Some districts have begun using AI for assessments, which can involve a range of academic, cognitive, and medical evaluations. AI applications that pair automatic speech recognition and language processing are now being employed in computer-mediated oral reading assessments to score tests of student reading ability. Practitioners often struggle to make sense of the volume of data that schools collect. AI-driven machine learning tools can also help here, by identifying patterns that may not be immediately visible to educators for evaluation or instructional decision-making.
Such support may be especially useful in diagnosing disabilities such as autism or learning disabilities, where masking, variable presentation, and incomplete histories can make interpretation difficult. My ongoing research shows that current AI can make predictions based on data likely to be available in some districts.

Privacy and trust concerns

There are serious ethical (and practical) questions about these AI-supported interventions, ranging from risks to students’ privacy to machine bias and deeper issues tied to family trust. Some hinge on the question of whether or not AI systems can deliver services that truly comply with existing law. The Individuals with Disabilities Education Act requires nondiscriminatory methods of evaluating disabilities to avoid inappropriately identifying students for services or neglecting to serve those who qualify. And the Family Educational Rights and Privacy Act explicitly protects students’ data privacy and the rights of parents to access and hold their children’s data. What happens if an AI system uses biased data or methods to generate a recommendation for a child? What if a child’s data is misused or leaked by an AI system? Using AI systems to perform some of the functions described above puts families in a position where they are expected to put their faith not only in their school district and its special education personnel, but also in commercial AI systems, the inner workings of which are largely inscrutable. These ethical qualms are hardly unique to special ed; many have been raised in other fields and addressed by early adopters. For example, while automatic speech recognition, or ASR, systems have struggled to accurately assess accented English, many vendors now train their systems to accommodate specific ethnic and regional accents.
But ongoing research suggests that some ASR systems are limited in their capacity to accommodate speech differences associated with disabilities, account for classroom noise, and distinguish between different voices. While these issues may be addressed through technical improvement in the future, they are consequential at present.

Embedded bias

At first glance, machine learning models might appear to improve on traditional clinical decision-making. Yet AI models must be trained on existing data, meaning their decisions may continue to reflect long-standing biases in how disabilities have been identified. Indeed, research has shown that AI systems are routinely hobbled by biases within both training data and system design. AI models can also introduce new biases, either by missing subtle information revealed during in-person evaluations or by overrepresenting characteristics of groups included in the training data. Such concerns, defenders might argue, are addressed by safeguards already embedded in federal law. Families have considerable latitude in what they agree to, and can opt for alternatives, provided they are aware they can direct the IEP process. By a similar token, using AI tools to build IEPs or lessons may seem like an obvious improvement over underdeveloped or perfunctory plans. Yet true individualization would require feeding protected data into large language models, which could violate privacy regulations. And while AI applications can readily produce better-looking IEPs and other paperwork, this does not necessarily result in improved services.

Filling the gap

Indeed, it is not yet clear whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law.
The Supreme Court in 2017 rejected the notion that the Individuals with Disabilities Education Act merely entitles students to trivial, de minimis progress, which weakens one of the primary rationales for pursuing AI: that it can meet a minimum standard of care and practice. And since AI really has not been empirically evaluated at scale, it has not been proved that it adequately meets even the low bar of simply improving on the flawed status quo. But this does not change the reality of limited resources. For better or worse, AI is already being used to fill the gap between what the law requires and what the system actually provides. Seth King is an associate professor of special education at the University of Iowa. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Category: E-Commerce
 

2026-02-05 18:44:07| Fast Company

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow, and freezing temperatures, it left more than a million people without power, mostly in the Southeast. Scrambling to meet higher-than-average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels. Energy Secretary Chris Wright agreed and took another step, too. He authorized PJM and ERCOT, the company that manages the Texas power grid, as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators. The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these facilities power themselves and do not send power back to the grid. But Wright explained that their industrial diesel generators could generate 35 gigawatts of power, or enough electricity to power many millions of homes. We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through, and recover from winter storms.

Data centers use enormous quantities of energy

Before Wright's order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies. This is a pressing question, because data centers' power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM's. And data centers are expected to need even more power. Estimates vary widely, but the Lawrence Berkeley National Lab anticipates that the share of electricity production in the U.S.
used by data centers could spike from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects peak load growth of 32 gigawatts by 2030, enough power to supply 30 million new homes, but with nearly all of it going to new data centers. PJM's job is to coordinate that energy and figure out how much the public, or others, should pay to supply it. The race to build new data centers and find the electricity to power them has sparked enormous public backlash over how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water, and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution. Local ordinances, regulations created by state utility commissions, and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time. But since the 1970s, utilities have encouraged demand response programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation. Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries, and electric vehicles, these distributed energy resources can be dispatched as virtual power plants.
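As a rough sanity check on the scale of these capacity figures, the article's own pairing of 32 gigawatts with 30 million homes implies an average draw of roughly 1 kilowatt per home. The sketch below uses that inferred conversion factor (an assumption derived from the article's numbers, not an official figure):

```python
# Rough sanity check of the grid-capacity figures cited above.
# Assumption: ~1.07 kW average per home, inferred from the article's
# pairing of 32 GW of peak load growth with 30 million new homes.

def homes_supported(gigawatts: float, kw_per_home: float = 1.07) -> float:
    """Estimate how many homes a given capacity could supply on average."""
    return gigawatts * 1e6 / kw_per_home  # 1 GW = 1,000,000 kW

# PJM's projected 32 GW of peak load growth ~ 30 million homes
print(round(homes_supported(32) / 1e6, 1))  # ~29.9 million

# The 35 GW of backup diesel generation cited by Secretary Wright
print(round(homes_supported(35) / 1e6, 1))  # ~32.7 million
```

The same factor puts the 35 gigawatts of diesel backup capacity in the "many millions of homes" range the article describes.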
A different approach

The terms of data center agreements with local governments and utilities often aren't available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use. In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts, and airline reservation systems. Yet data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response. In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide data center demand response by targeting machine learning workloads, shifting non-urgent compute tasks away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers' power use off the grid during power shortages.

Flexibility for the future

One study has found that if data centers would commit to using power flexibly, an additional 100 gigawatts of capacity, the amount that would power around 70 million households, could be added to the grid without adding new generation and transmission. In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their generation needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility, as grid operators can tap into batteries, shift thermostats, or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.
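The workload shifting described above, deferring non-urgent compute away from hours when the grid is strained, can be sketched in a few lines. This is a simplified toy illustration, not Google's or any utility's actual system; the stress forecast, threshold, and task names are all hypothetical:

```python
# Toy sketch of data center demand response: run urgent tasks always,
# but defer non-urgent compute during hours of high grid stress.
# All values here are hypothetical, for illustration only.

GRID_STRESS = {17: 0.95, 18: 0.98, 19: 0.92}  # forecast stress by hour (0-1 scale)
THRESHOLD = 0.9  # above this, defer anything that can wait

def schedule(tasks, hour):
    """Split (name, urgent) task pairs into (run_now, deferred) for an hour."""
    stressed = GRID_STRESS.get(hour, 0.5) > THRESHOLD
    run_now, deferred = [], []
    for name, urgent in tasks:
        if urgent or not stressed:
            run_now.append(name)
        else:
            deferred.append(name)
    return run_now, deferred

tasks = [("serve-user-queries", True), ("retrain-model", False), ("reindex-logs", False)]
print(schedule(tasks, 18))  # peak hour: only the urgent task runs now
print(schedule(tasks, 3))   # off-peak: everything runs
```

The design choice mirrors the article's point: latency-sensitive work (the medical records and airline reservations case) is never deferred, while batch jobs like model training absorb the flexibility.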
Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms. Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn't enough electricity for their homes but because their power lines were down. The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that never materialize. Onsite diesel generators are an emergency solution that lets large users such as data centers reduce strain on the grid. Yet this is not a long-term answer to winter storms. Instead, if data centers, utilities, regulators, and grid operators are willing to also consider offsite distributed energy to meet electricity demand, their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold. Nikki Luke is an assistant professor of human geography at the University of Tennessee. Conor Harrison is an associate professor of economic geography at the University of South Carolina. This article is republished from The Conversation under a Creative Commons license. Read the original article.


2026-02-05 18:05:00| Fast Company

In 2024, the clean energy sector saw a job boom: The industry added nearly 100,000 new jobs throughout that year, meaning clean energy jobs grew more than three times faster than the rest of the workforce. Last year was a different story, however. It was a year of losses for the clean energy industry, in terms of projects, investments, and employment. Existing factories closed, like Natron Energy's sodium-ion battery facilities in Michigan and California. Planned facilities were canceled, including a $3.2 billion Stellantis battery factory in Illinois. And multiple kinds of projects were scrapped, blocked, or downsized, from EV plants to wind farms. In total, the turbulent year meant that 38,000 jobs, a mix of current and future positions, were erased from the clean energy industry, according to a new analysis by E2, a nonpartisan organization that tracks U.S. clean energy projects.

A net loss of clean energy jobs

The vast majority of those 38,000 lost jobs were in manufacturing (though some may have been counted in multiple categories, like energy generation or maintenance). For comparison, by the end of 2024, there were about 577,000 manufacturing jobs in the clean energy industry. These job losses are especially significant because they're happening amid a general decline in manufacturing employment. In 2024, clean energy manufacturing had been a "bright spot," says Michael Timberlake, E2 director of research and publications, helping bring back U.S. production. "When those projects are canceled, we're not just losing jobs on paper; we're losing a pathway that had been driving a new manufacturing resurgence," he says. "And the investment doesn't disappear. It moves to other countries and U.S.
competitors that are aggressively building clean energy supply chains and hiring the workers we can't afford to lose." Even amid cancellations, some new clean energy projects and jobs were announced in 2025, like a $42 million Anthro Energy battery factory in Louisville, Kentucky, which will create 110 jobs. But the number of jobs eliminated outweighs those potential additions. Just 22,905 jobs were announced in 2025, meaning a net loss of more than 15,000 expected clean energy positions. "No previous year tracked by E2 saw job losses on this scale, underscoring how quickly employment gains can evaporate when projects are abandoned," the analysis reads. New clean energy investments were also overshadowed by cancellations. Companies canceled, closed, or downsized $34.8 billion in clean energy projects, nearly three times the $12.3 billion in new investment announced throughout the year.

Republican-held districts hit harder

Though the entire country was affected by these losses, Republican-held districts felt the impact a bit more than others. Republican districts lost $19.9 billion in investments that would have brought 24,500 jobs to those regions, compared to $10.6 billion and 12,600 jobs lost in Democratic-held districts. That makes sense, because the Inflation Reduction Act (IRA) signed by then-President Joe Biden in 2022, which spurred clean energy jobs and projects, benefited many Republican-led districts, even though not a single Republican voted for the legislation and House Republicans in fact voted 42 times to repeal it. Nearly 200,000 of the 334,000 clean energy jobs that the IRA created in its first two years were in congressional districts represented by Republican House members.

Still, clean energy is growing

Despite attacks on clean energy by the current Trump administration, the sector is still growing in the United States. In 2025, nearly all of the new power added to the country's grid came from solar, wind, and batteries.
Even the U.S. Energy Information Administration has said that all net new generating capacity the country adds in 2026 will come from renewables. And clean energy experts say the industry will continue to grow, even as the president tries to prop up coal, oil, and gas, because electricity generated from renewables is cheaper than fossil fuels, and the projects are often faster to build than fossil fuel power plants. Still, the economic losses that the clean energy sector saw in 2025 are devastating, and may not be fully recovered. And if clean energy job growth is at risk, that affects the entire economy. Clean energy jobs are present in every single state, and, as the World Resources Institute put it in November, movement toward clean energy will create opportunity for millions of Americans. E2's data also doesn't capture the "tens of thousands of additional jobs and projects" that likely would have been announced had the country's policy and market certainty continued, Timberlake says. "Likely hundreds of projects that would have been announced, and hundreds more that could've been announced this year, cannot be recovered," he adds, "and will instead benefit workers and communities in other countries."
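The E2 report's headline figures are internally consistent, and a quick check (a sketch using only the numbers quoted in this article) bears that out:

```python
# Quick check of the E2 figures cited in the article above.
jobs_erased = 38_000       # current and future positions lost in 2025
jobs_announced = 22_905    # new positions announced in 2025
net_loss = jobs_erased - jobs_announced
print(net_loss)            # 15095 -> "a net loss of more than 15,000"

cancelled_billions = 34.8  # projects canceled, closed, or downsized
announced_billions = 12.3  # new investment announced
ratio = cancelled_billions / announced_billions
print(round(ratio, 2))     # 2.83 -> "nearly three times"
```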


2026-02-05 18:00:00| Fast Company

Bob Iger doesn't understand generative AI. He thinks it is good for the quarterly bottom line. He believes a corporation can control it and that lawyers and agreements can bind it. He is clueless. Generative AI is here to kill Hollywood, including the company he's now leaving to Josh D'Amaro, the new heir to Disney's throne. This became painfully clear to me during Disney's recent first-quarter earnings call. Taking a victory lap for his modernization efforts, Iger briefly laid out the road map for the company's partnership with OpenAI, announced in December 2025. Under the agreement, Disney would invest $1 billion in the artificial intelligence company and let it tap Disney's IP crown jewels so Sora users can make clips of Donald Trump wearing an Iron Man suit battling Jafar dressed as an Iranian ayatollah. Here's Iger's plan as stated: Step one, flood Disney+ with Sora 2-generated vertical videos capped at 30 seconds. Iger views this as a positive step that will jump-start the platform's ability to compete with the dopamine-loop short-form content of TikTok and YouTube. There is no step two. At least not yet. For the last 15 years, Iger has been on a quest to find the silver bullet that keeps Disney relevant deep into the 21st century. He bought Pixar, Marvel, Star Wars, and Fox. Now, as he leaves Cinderella's castle behind, he clearly views this Sora partnership as the final move that allows him to leave the company future-proofed. During the call, Iger all but carved this philosophy in stone for D'Amaro. "I believe that in a world that changes as much as it does, in some form or another, trying to preserve the status quo is a mistake, and I'm certain that my successor will not do that," Iger said. "They'll be handed, I think, a good hand in terms of the strength of the company, [and a] number of opportunities to grow." But to say curated AI slop provides a number of opportunities to grow is an Epcot-sized ball of naiveté.
Iger's intention to evolve Disney is correct; stagnation is indeed death, as any Harvard Business School freshman will recite. But his strategy fails to understand the nature of the beast he has invited into the Magic Kingdom. Iger is talking about generative AI like a new distribution channel or a camera lens: a tool that can be kept in a walled garden to serve a corporate master. But AI is not a tool; it is a solvent. It dissolves the barriers between creator and consumer, between professional and amateur, and ultimately, between value and noise.

A new plan for Disney

D'Amaro is walking into a wall of noise that is going to get increasingly harder to break through as generative content continues to take over our feeds. Disney's saving grace could be that D'Amaro, a man who built his career overseeing the company's theme parks and experiences, likely understands the value of true physical, human-driven innovation. Expanding those experiences, as Iger said on the call, will be Disney's focus in the years to come. It makes perfect sense. Disney's Experiences segment outperformed the Entertainment segment in Q1 2026 by a factor of almost three. While entertainment revenue reached $11.61 billion, high content production and marketing costs for major releases caused its operating income to plunge 35% to $1.1 billion. In contrast, the Experiences segment posted record revenue of $10.01 billion with an operating income of $3.31 billion, accounting for roughly 71% of Disney's total segment operating profit for the quarter. It's telling that the physical experience, and its human factor, beat the heap of refried film and TV franchise releases. D'Amaro has the opportunity to set a strategy that could make Disney thrive. He has the track record to do it. D'Amaro's experience isn't limited to running a theme park. He secured the throne partly because he championed Disney's $1.5 billion investment in Epic Games and Fortnite. He seemingly understands the digital generation.
Now the question is, will he see the Sora deal for what it is? Disney's agreement with OpenAI is a three-year deal, with a one-year exclusivity clause, after which Disney is free to close deals with, say, Kuaishou Technology, the Chinese maker of Kling. In corporate time, three years is a blink. But for generative AI, where time is measured in dog years, it is an epoch. By the time this contract expires, the havoc AI will have wreaked on the entertainment industry won't be something you can negotiate away. This is a pivotal moment that D'Amaro needs to address now, even if it goes against the stock market algorithms and the vision of a Wall Street-revered old man now sailing into the sunset on his gilded version of the Black Pearl.

Iger's AI strategy

Iger outlined three pillars for his AI strategy on the call:

Creativity (assisting the process)
Productivity (efficiency; read: cost-cutting)
Connectivity (a "more intimate relationship" with the consumer)

His vision is a Disney+ where you don't just watch Frozen; you generate a 30-second clip of Olaf dancing in your living room. Exciting. The financial sector, predictable as ever, applauded at the mere thought of Disney embracing AI. When the Sora deal was announced, analysts like Citi Research media analyst Jason Bazinet called it a masterful move: a strategic defense, and a way to monetize IP that would otherwise be scraped for free. Bazinet believes the agreement codifies what specific IP can be used (animated characters) and what form the output can take (i.e., short-form video), which will both protect actors in Hollywood and prevent cannibalization of Disney's long-form film and TV output. Outside the boardroom, things aren't so La La Land. The unions that work in the "Creativity" pillar view Iger's AI strategy as a betrayal, framing it as a Trojan horse that normalizes the technology that is intended to replace them.
The Writers Guild of America said that "[the partnership] seems to endorse the platform's appropriation of their work while diminishing the value of their creations for the benefit of a tech corporation." Iger's idea of "Productivity" is just corporate-speak for employing fewer humans. "Jobs are going to be lost," as filmmaker Tyler Perry said after the news. Perry saw the writing on the wall a long time ago, halting an $800 million studio expansion after seeing the first version of Sora. If you can generate a location, you don't need to build it. If you can generate a performance, you don't need to film it. Disney has already been cutting jobs in its film, television, and finance departments, though none yet related to its AI initiatives, which are aimed mainly at post-production. And as for Connectivity, consumers are already well served, thank you very much. Anyone who has surfed YouTube, TikTok, Discord, Instagram, X, or Reddit knows they are overflowing with AI-generated videos. There are not enough Avengers, Baby Yodas, and Mickey Mice in the world to win this war of content. And the more time that passes, the less chance Disney has of winning that war with the same tools the enemy is using. Disney is adopting Sora to fight a battle in its own walled garden, limited to its famous-but-limited IP. By definition, it can't compete against the entire planet creating universes of infinitely expanding generated content.

Event horizon

Iger seems to believe that by partnering with OpenAI, Disney has bought safety. Somehow, he thinks this buys Disney control over the beast. But OpenAI does not control generative AI. Altman is a chump compared to the combined power of the companies cooking generative AI video technology in China. Generative AI is, right now, an all-powerful being that doesn't care about corporate deals. Iger's remarks remind me of that viral 1999 Newsnight interview with David Bowie, where he laughed at the interviewer who thought the internet was just a tool.
"No, Bob," Bowie would have told Iger today, "AI is not a tool. It's an alien lifeform." Experts warned me of this moment in 2023. Tom Graham, CEO of Metaphysic, a firm dedicated to protecting actors and regular people against AI clones, told me that we were approaching an event horizon beyond which reality would evaporate. Gil Perry, CEO of AI avatar firm D-ID, predicted that within one or two years, we wouldn't be able to distinguish truth from lies. Emad Mostaque, co-founder of Stability AI, told me that within a decade, we'd create anything in real time with visual perfection. They were all correct, but far too conservative. We didn't need a decade. We barely needed three years. Which, in itself, is a testament to the true power of AI and its ability to change reality and content as we know it. Today, in early 2026, we have crossed that horizon. The uncanny valley, which allowed us to instinctively distinguish fake AI from real, is permanently closed. Models like Sora 2 and Google's Veo 3 more often than not produce video indistinguishable from reality in short clips. But the real threat to Disney isn't the partner they paid $1 billion to; it's the technology they didn't buy. Open-source platforms like Wan 2.6, made by Chinese company Alibaba, are already running on consumer hardware, offering multi-shot storytelling and character consistency that rivals the closed systems of Silicon Valley. The technology is wild, uncensored, and free. It doesn't care about Disney's copyright. It doesn't care about walled gardens. It is creating a Big Bang of content in which a teenager in a basement can generate a film that looks as expensive as a Marvel blockbuster.

The dilution of magic

And this is where Iger's gamble truly falls apart. He assumes that in this world of infinite, picture-perfect content, Disney's IP will remain king. And why would it? Disney has spent the last decade systematically exhausting its brand equity. We are drowning in the umpteenth Star Wars spinoff and the 50th Marvel phase.
The brand fatigue is palpable. Why would people, except the hard-core fanboys, choose to consume frozen-TV-dinner clips of the same old stuff again and again? How can accelerating this IP's exhaustion, by letting users churn out AI-slopped versions of these characters, help Disney? Iger thinks adding curated user-generated noise to Disney+ is a value-add, failing to see it for what it is: the final commoditization of its former magic. Why would current and future generations care about a sanitized, 30-second Mickey Mouse clip on Disney+ when they can go to an open platform and generate their own universe, tailored specifically to their own desires, with characters that feel just as real but are completely new?

Change course or sink

If there's anything I can be sure of, it's that the history of the internet, from YouTube to TikTok, teaches us one thing: The audience craves the new, the raw, and the personal. They are moving away from the polished, corporate monoliths. By integrating Sora 2, Iger isn't saving Disney; he is training his audience to accept synthetic media, accelerating the very shift that renders legacy studios obsolete. Bob Iger is right that you have to change or die. But by betting that he can ride the tiger of generative AI without being eaten, he may have just opened the cage door for good. Perhaps D'Amaro, the man of the physical Disney, can save the House of Mouse from the digital trap Iger has set for him. If the future of content is infinite, cheap, and synthetic, the only true luxury left is the human touch. D'Amaro has the chance to zag where the rest of the industry is zigging. He can double down on the one thing AI cannot simulate: the spark of human genius that birthed this company in the first place. Instead of competing with teenagers in garages on AI speed, hire them to do what Walt Disney himself did: Invent new mythologies. Create your own technologies.
Craft truly new, bold stories born from the messiness of the human spirit, not the probability curves of a model trained on the past. Reclaim the experience not just as a theme park ride, but as the act of witnessing something undeniably, beautifully human. That is the only magic trick left that an algorithm can't replicate.

