
2026-02-05 23:30:00| Fast Company

The distance between a world-changing innovation and its funding often comes down to four minutes, the average time a human reviewer tends to spend on an initial grant application. In those four minutes, reviewers must assess alignment, eligibility, innovation potential, and team capacity, all while maintaining consistency across thousands of applications. It's an impossible ask that leads to an impossible choice: either slow down and review fewer ideas or speed up and risk missing transformative ones. At MIT Solve, we've spent a year exploring a third option: teaching AI to handle the repetitive parts of review so humans can invest real time where judgment matters most.

WHY AI, AND WHY NOW

In 2025, Solve received nearly 3,000 applications to our Global Challenges. Even a cursory four-minute review per application would add up to 25 full working days. Like many mission-driven organizations, we don't want to trade rigor for speed. We want both. That led us to a core question many funders are now asking: How can AI help us evaluate more opportunities, more fairly and more efficiently, without compromising judgment or values? To answer this question, we partnered with researchers from Harvard Business School, the University of Washington, and ESSEC Business School to study how AI could support early-stage grant review, one of the most time-intensive and high-volume stages of the funding lifecycle.

WHAT WE TESTED AND WHAT WE LEARNED

The research team developed an AI system (based on GPT-4o mini) to support application screening and tested it across reviewers with varying levels of experience. The goal was to understand where AI adds value and where it doesn't. Three insights stood out:

1. AI performs best on objective criteria. The system reliably assessed baseline eligibility and alignment with funding priorities, identifying whether applications met requirements or fit clearly defined geographic or programmatic focus areas.

2. AI is more helpful to less experienced reviewers. Less experienced reviewers made more consistent decisions when supported by AI insights, while experienced reviewers used AI selectively as a secondary input.

3. The biggest gain was standardization at scale. AI made judgments more consistent across reviewers, regardless of their experience, creating a stronger foundation for the second level of review and human decision-making.

HOW THIS TRANSLATES INTO REAL-WORLD IMPACT

At Solve, the first stage of our review process focuses on filtering out incomplete, ineligible, or weak-fit applications, freeing human reviewers to spend more time on the most promising ideas. We designed our AI tool with humans firmly in the loop, focused on the repetitive, pattern-based nature of initial screening that makes it uniquely suited for AI augmentation. The tool:

- Screens out applications with no realistic path forward.
- Supports reviewers with a passing probability score, a clear recommendation (Pass, Fail, or Review), and a transparent explanation.

When the 2025 application cycle closed with 2,901 submissions, the system categorized them as follows: 43% Pass, 16% Fail, and 41% Review. That meant our team could focus deeply on just 41% of the applications, cutting total screening time down to ten days, while maintaining confidence in the quality of the results.

THE BIGGER TAKEAWAY FOR PHILANTHROPY

Every hour saved during the early stages of evaluation is an hour redirected toward the higher-value work that humans excel at: engaging more deeply with innovators and getting bold, under-resourced ideas one step closer to funding. Our early results show strong alignment between AI-supported screening and human judgment. More importantly, they demonstrate that it's possible to design AI systems that respect nuance, preserve accountability, and scale decision-making responsibly. The philanthropic sector processes millions of applications annually, with acceptance rates often below 5%. If we're going to reject 95% of ideas, we owe applicants, especially those historically excluded from funding, a genuine review. Dividing responsibility, with humans making decisions and AI eliminating rote review, makes it that much more possible at scale. It's a practical step toward the thoroughness our missions demand.

Hala Hanna is the executive director and Pooja Wagh is the director of operations and impact at MIT Solve.

Category: E-Commerce
 

2026-02-05 23:00:00| Fast Company

Planned layoffs have now reached their highest rate since the Great Recession in 2009. The data comes from Challenger, Gray & Christmas' new layoffs report, which revealed that U.S.-based employers announced 108,435 job cuts in January, marking the highest total to start a year since 2009. Also notable: in the same month, just 5,306 planned hires were announced, the lowest total on record for January. According to the data, that means layoffs are up a staggering 118% from the same period a year ago, and 205% from December 2025.

"Generally, we see a high number of job cuts in the first quarter, but this is a high total for January," Andy Challenger, workplace expert and chief revenue officer for the firm, said in the report. "It means most of these plans were set at the end of 2025, signaling employers are less-than-optimistic about the outlook for 2026."

The hardest-hit sectors for layoffs are the transportation, technology, and health care industries. According to a Reuters report, 31,243 planned cuts came from United Parcel Service (UPS), which plans to close 24 facilities in 2026 as part of a major restructuring effort. On the tech side, of 22,291 tech job cuts, most came from Amazon, as the company announced plans to lay off 16,000 corporate employees.

"Some of you might ask if this is the beginning of a new rhythm, where we announce broad reductions every few months," wrote Beth Galetti, senior vice president of people experience and technology at Amazon, in an announcement last week. "That's not our plan. But just as we always have, every team will continue to evaluate the ownership, speed, and capacity to invent for customers, and make adjustments as appropriate."

Meanwhile, the health care sector has been battered by federal funding cuts, with 17,107 job cuts announced in January, the sector's largest monthly total since April 2020. "Healthcare providers and hospital systems are grappling with inflation and high labor costs," Challenger said. "Lower reimbursements from Medicaid and Medicare are also hitting hospital systems. These pressures are leading to job cuts, as well as other cutting measures, such as cuts to some pay and benefits. It's very difficult for leaders of these companies to tighten budgets while not sacrificing patient care."

Additionally, the Labor Department reported that job openings are down to the lowest rate since September 2020, as vacancies fell to 6.5 million in December. Of course, many have been quick to blame AI for a surging number of layoffs. But some experts say the trend has more to do with current economic conditions, and that AI may be serving as a mere scapegoat.

In a post on Bluesky responding to the new data, CNBC journalist Carl Quintanilla shared a quote attributed to market researcher Renaissance Macro Research (RENMAC), referencing the Challenger report and explaining the real reasons behind the downslide: "…While there is quite a bit of attention on AI driving layoffs, most of the reasons cited in this data set are about closing, economic conditions, restructuring, and loss of contract. AI is a comparatively small factor behind the January jump in layoff news."

That aligns with data from entities like the Brookings Institution and Yale University, which found that sectors (including ones especially susceptible to AI) haven't seen drastic changes in the amount of available jobs since ChatGPT debuted in 2022. Still, other experts continue to believe that AI's toll on the job market will be crushing. "We are at the beginning of a multi-decade development that will have a major impact on the labor market," Gad Levanon, chief economist at the Burning Glass Institute, a workforce research firm, told CNBC last year. "There's probably much more in the tank."


2026-02-05 22:30:00| Fast Company

Have you seen larger-than-life depictions of your friends lately? They might have been sucked into the latest social trend: creating AI-generated caricatures. The trend itself is simple. Users input a common prompt, "Create a caricature of me and my job based on everything you know about me," upload a photo of themselves, and, voilà! ChatGPT (or any AI image platform) spits out an over-the-top, cartoon-style image of you, your job, and anything else it's learned about you. This ability is predicated on a robust ChatGPT (or other AI) chat history. Those who don't have a close, personal relationship with the AI might need to give additional information to get a more accurate depiction. But notably, that's yet another instance of potential AI privacy concerns. It's not the first AI image trend. Other social media challenges have had users posting themselves as AI-generated cartoons, Renaissance paintings, or fantasy characters. AI's image capabilities have gone in a few different directions. Some of them, like this trend or the meme-ification of Sora, are seemingly harmless fun. However, Sora has started to see issues with bad-faith individuals being able to create AI deepfakes (see also: Grok porn). Meanwhile, even as the trend continues to rise, more than 13,000 ChatGPT users reported issues on Thursday, according to outage-tracking website Downdetector.com.


2026-02-05 21:48:50| Fast Company

Tech workers have been worried for years about the AI tidal wave coming for their jobs, but their bosses are starting to worry too. Stocks plunged this week as fears escalated that AI advancements will take a bite out of business for many software and services companies. The market losses are tied to updates to Anthropic's AI-powered workplace productivity suite, Claude Cowork, which threatens to replace some software tools ubiquitous in the professional world. Companies with business in research and legal software like Thomson Reuters and LegalZoom dropped dramatically on the Anthropic news, with a wide swath of software stocks following suit. Intuit, PayPal, and Equifax all dropped by over 10%, with enterprise software companies like Atlassian and Salesforce deepening their own losses, which started well before the latest AI news. The S&P North American software index also slid further this week, worsening a recent losing streak punctuated by a 15% decline in January, the index's worst month in nearly two decades.

Unlike Claude Code, a coding tool designed for developers, Anthropic built Claude Cowork as a powerful, general-purpose AI agent for non-coders. Available to Anthropic's $100-per-month premium subscribers, Claude Cowork can knock out easier tasks like searching, collecting, and organizing files, but it's also capable of taking on much bigger challenges like making slide decks, producing reports, and pulling and synthesizing information from other business software tools, like Zendesk and Microsoft Teams. Claude's ability to execute complex tasks with dedicated software sub-agents prompted plenty of nervous jokes about humans being replaced by C-suites full of AI. And that was before a new Anthropic update introduced powerful new plugins designed to automate tasks across domains like finance, legal, sales, data, marketing, and customer support. The market is still digesting those new agentic AI capabilities, which could pose an existential threat to the software-as-a-service companies that undergird big chunks of the economy.

Fears of a zero-sum software game grow

Anthropic co-founder and CEO Dario Amodei has made his own ominous predictions about AI displacing human workers. Last year, Amodei predicted that AI could vaporize half of entry-level white-collar roles, sending unemployment as high as 20% within five years. He pointed to losses in industries like tech, law, consulting, and finance, specifically. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei told Axios. "I don't think this is on people's radar." Not everyone deeply invested in AI agrees. Nvidia CEO Jensen Huang swatted away worries that AI would eat the traditional software industry after the stock bloodbath that began on Tuesday. "There's this notion that the tool in the software industry is in decline, and will be replaced by AI," Huang said, emphasizing that relying on existing software tools makes more sense than reinventing the wheel. "It is the most illogical thing in the world, and time will prove itself."


2026-02-05 21:00:00| Fast Company

The federal agency that enforces anti-discrimination laws in the workplace made an unexpected disclosure this week: Nike was under investigation for its approach to diversity, equity, and inclusion, due to claims that the company had discriminated against white employees and job applicants. The investigation suggests that Nike's diversity goals and other DEI initiatives led the company to hire non-white workers to meet quotas or award them more opportunities for career advancement, thereby discriminating against white workers. It is notable as the first major legal undertaking by Andrea Lucas, whom President Trump installed as the chair of the Equal Employment Opportunity Commission (EEOC) last year. But it also indicates that Lucas is serious about targeting corporate employers over alleged discrimination against white workers, which she has clearly signaled is a priority for the agency under the Trump administration. "It is designed to instill fear into the hearts of large companies," says Chai Feldblum, a former EEOC commissioner and a member of EEO Leaders, a group of former senior officials who worked at the EEOC and Department of Labor under multiple administrations. "If they're afraid, then small companies will be afraid. And the point is to chill any form of equity and diversity efforts, even legal ones."

An unusual investigation

The investigation into Nike is unusual for a few reasons: It is, of course, the first inquiry into what the agency has called DEI-related discrimination. But it is also rare for the EEOC's investigations into employers to become public before they have concluded, since the process is supposed to be confidential. An EEOC investigation typically either ends in a dismissal or, if the agency finds reasonable cause and concludes there was discrimination, results in a conciliation process that allows an employer to resolve the issue in private, with both parties coming to an agreement. If conciliation fails, the agency would then decide whether or not to bring a lawsuit, which is considered a last resort and happens infrequently.

The EEOC does often use subpoenas to force employers to comply with its requests for information. According to Feldblum, subpoenas can be a useful tool for the agency to extract information from a company that might be stonewalling or only offering partial responses to its inquiries. In the case of Nike, however, the EEOC went to court to enforce the subpoena, thrusting the investigation into the public record. "What is unusual about this is the publicity," Feldblum says. "Which is what chair Lucas wants. She's doing that by suing on a subpoena. I think it's a question whether EEOC is following its normal process for enforcing subpoenas." Nike seemed to suggest as much in a statement to Fast Company. "This feels like a surprising and unusual escalation," a company spokesperson said. "We have had extensive, good-faith participation in an EEOC inquiry into our personnel practices, programs, and decisions and have had ongoing efforts to provide information and engage constructively with the agency. We have shared thousands of pages of information and detailed written responses to the EEOC's inquiry and are in the process of providing additional information." The statement continued: "We are committed to fair and lawful employment practices and follow all applicable laws, including those that prohibit discrimination . . . We will continue our attempt to cooperate with the EEOC and will respond to the petition."

A possible new precedent

Feldblum argues the EEOC's approach to this investigation could set a precedent of taking companies to court over what the agency perceives to be insufficient cooperation with its requests for information.
The press release put out by the EEOC makes evident that the agency had requested extensive details about Nike's employment decisions, including its criteria for layoffs, the use of demographic data and how it was tied to executive compensation, and specifics about 16 programs that offered mentoring, leadership, or career development opportunities to underrepresented employees. Unlike many of the cases the EEOC investigates, this one was not initiated by a complaint from a worker alleging discrimination; Lucas herself brought the charge against Nike in 2024. But it's not clear exactly what prompted the investigation. The EEOC claims to be looking into systemic allegations of DEI-related intentional race discrimination at Nike that have targeted white workers. By Lucas's own admission, per a statement in the EEOC release, this investigation seems to have been prompted by Nike's public disclosures about its DEI programs. (When Lucas sent letters to 20 law firms last year requesting details on their DEI practices, a move that drew widespread criticism, she had relied on public statements.) "You sign a commissioner charge under penalty of perjury," Feldblum says. "You need to have at least some evidence of discrimination to sign that charge. Now if you believe that simply having a [diversity] goal is reasonable evidence of discrimination, then you'll go ahead and sign that."

The future of DEI

Like many companies at the time, Nike set ambitious DEI goals after the murder of George Floyd sparked a racial reckoning across corporate America. (The company has also grappled with broader culture issues over the years, including allegations of sexual harassment and gender discrimination.) In 2021, Nike tied executive compensation to DEI commitments that were intended to increase the share of women in leadership and boost representation of racial and ethnic minorities to 35% across its workforce. In the time since, however, Nike has cycled through five chief diversity officers; the company also declined to put out a corporate sustainability report last year, which typically documents its progress on DEI, though Nike claimed it had not wavered from its diversity commitments.

Depending on how the EEOC investigation unfolds, Nike could face significant repercussions. The court will likely uphold the subpoena, according to Feldblum, which means Nike will likely have to produce reams of additional information. If the EEOC decides to make an example of Nike, the investigation could ultimately result in a lawsuit, which would have far-reaching consequences for other employers and potentially set a precedent for subsequent investigations. "I think we all, employers, employees, the general public, have got to assume there will be a continued onslaught of attacks on DEI," Feldblum says, urging companies to "review, not retreat" from their diversity programs and position on DEI. "The EEOC is trying to stop employers from doing anything to increase diversity and equity, and they are stretching their own procedures, as well as the law . . . And that is a very sad day for an agency entrusted with enforcing employment civil rights laws."


2026-02-05 20:44:20| Fast Company

Anthropic is out with a new model called Claude Opus 4.6, an upgrade to its top-of-the-line Opus 4.5 model that launched in November. The new release could add new capabilities to Anthropic's Claude Code coding assistant, which is facing growing competitive pressure from OpenAI's Codex. Anthropic says Opus 4.6 improves on its predecessor's coding skills, planning, and, perhaps most importantly, its ability to reason more clearly when handling large amounts of information. When Opus 4.6 powers Claude Code, the coding agent can comprehend larger codebases and make more thoughtful decisions about how and where to add new code, the company says.

More long-term memory

AI labs have been racing to build models with longer context windows, meaning the amount of information a model can consider for a given task. But models have often struggled to use that information effectively in their outputs, a limitation Anthropic acknowledges. "Previously, we would see things like, maybe the model gets lost in the middle, or it might forget details," Opus product manager Dianne Penn tells Fast Company. "I wouldn't say Opus 4.6 is perfect, humans or other past models aren't perfect, but we think that the quality improvement is pretty significant." Opus's longer memory also allows it to work on complicated tasks for extended periods, enabling Claude Code users to assemble teams of agents that collaborate on tasks. Anthropic also says the tool offers improved code review and debugging capabilities, helping it catch its own mistakes. Opus 4.6 arrives as the use of AI coding tools continues to surge, and as competition between Anthropic and OpenAI for software developers intensifies. OpenAI's Codex coding tool recently launched as a standalone app, powered by the GPT-5.2 model, and has received largely enthusiastic reviews from developers.

A model for everyday work tasks

Beyond coding, the new Anthropic model is designed to improve performance on everyday work tasks such as running financial analyses, conducting research, and creating or using documents, spreadsheets, and presentations. Opus 4.6 will also power Anthropic's general-purpose work tool, CoWork, enabling it to multitask with minimal human supervision. Anthropic says Opus 4.6 achieved top scores across several industry benchmark tests, reaching the highest results so far on multiple evaluations. These include Humanity's Last Exam, a complex multidisciplinary reasoning test; Terminal-Bench 2.0, an agentic coding evaluation; and GDPval-AA, which measures performance on economically valuable knowledge-work tasks in finance, legal, and other domains. Anthropic also says Opus 4.6 outperforms all other models on OpenAI's BrowseComp, which measures a model's ability to locate difficult-to-find information online. Anthropic says the Opus 4.6 model is available to developers using Claude Code at the same price per million tokens as Opus 4.5. The new model is now the default for Claude Code Pro subscribers, and is available as an option for all other subscribers.


2026-02-05 20:35:00| Fast Company

Visa on Thursday announced a new platform designed to stimulate small businesses through a variety of tools and network opportunities, in advance of major sporting events this year. The program, Visa & Main, is built around helping address what Visa calls the most pressing challenges that entrepreneurs face: access to capital, reaching customers, and adopting modern business tools. That starts with a $100 million partnership with small business lender Lendistry, with Visa saying it would continue to provide additional grants and financial support programs as part of Visa & Main. Additionally, Visa & Main connects Visa's small business members with its corporate sponsors, identifying opportunities through major events like Super Bowl LX and this summer's FIFA World Cup. It launched the Square Stops Here hop-on, hop-off bus tour in San Francisco during Super Bowl week, designed to support and spotlight local businesses. The company is using its platform to help direct potential customers to its small business members, also hosting workshops for entrepreneurs to help them convert a short-term gain into long-term sustainability.

"Heartbeat of local communities"

Visa & Main also intends to make it easier for small businesses to adopt AI in the workplace, noting that small businesses have adopted the new technology at a rate less than half that of bigger businesses. The program attempts to close that gap by making tools such as expense management and fraud protection easier to access. "Small businesses are the heartbeat of local communities and represent nearly half of our country's economic activity," Kim Lawrence, Visa's North America regional president, said in a release. "With Visa & Main, we're connecting Visa's products and in-house knowledge with the expertise of our clients and partners to provide small businesses with flexible financing opportunities and customer acquisition and technology support." The move expands Visa's investment into small businesses, an area where credit card companies have long competed for market share. For example, Small Business Saturday was launched 15 years ago as a marketing push from American Express, and it has since become an annual Thanksgiving weekend event. Visa held a launch event in Atlanta on January 21 for Visa & Main, where members worked directly with the Visa team to get hands-on experience with the product.


2026-02-05 19:30:00| Fast Company

With community opposition growing, data center backers are going on a full-scale public relations blitz. Around Christmas in Virginia, which boasts the highest concentration of data centers in the country, one advertisement seemed to air nonstop. "Virginia's data centers are investing billions in clean energy," a voiceover intoned over sweeping shots of shiny solar panels. "Creating good-paying jobs" (cue men in yellow safety vests and hard hats) "and building a better energy future."

The ad was sponsored by Virginia Connects, an industry-affiliated group that spent at least $700,000 on digital marketing in the state in fiscal year 2024. The spot emphasized that data centers are paying their own energy costs, framing this as a buffer that might help lower residential bills, and portrayed the facilities as engines of local job creation. The reality is murkier. Although industry groups claim that each new data center creates dozens to hundreds of high-wage, high-skill jobs, some researchers say data centers generate far fewer jobs than other industries, such as manufacturing and warehousing. Greg LeRoy, the founder of the research and advocacy group Good Jobs First, said that in his first major study of data center jobs nine years ago, he found that developers pocketed well over a million dollars in state subsidies for every permanent job they created. With the rise of hyperscalers, LeRoy said, that number is "still very much in the ballpark." Other experts echo that finding. A 2025 brief from University of Michigan researchers put it bluntly: "Data centers do not bring high-paying tech jobs to local communities." A recent analysis from Food & Water Watch, a nonprofit tracking corporate overreach, found that in Virginia, the investment required to create a permanent data center job was nearly 100 times higher than what was required to create comparable jobs in other industries. "Data centers are the extreme of hyper-capital intensity in manufacturing," LeRoy said. "Once they're built, the number of people monitoring them is really small." Contractors may be called in if something breaks, and equipment is replaced every few years. But that's not permanent labor, he said.

Jon Hukill, a spokesperson for the Data Center Coalition, the industry lobbying group that established Virginia Connects in 2024, said that the industry is committed to paying its full cost of service for the energy it uses and is trying to "meet this moment in a way that supports both data center development and an affordable, reliable electricity grid for all customers." Nationally, Hukill said, the industry supported 4.7 million jobs and contributed $162 billion in federal, state, and local taxes in 2023. Dozens of community groups across the country have mobilized against data center buildout, citing fears that the facilities will drain water supplies, overwhelm electric grids, and pollute the air around them. According to Data Center Watch, a project run by AI security company 10a Labs, nearly 200 community groups are currently active and blocked or delayed 20 data center projects representing $98 billion of potential investment between April and June 2025 alone.

The backlash has exposed a growing image problem for the AI industry. "Too often, we're portrayed as energy-hungry, water-intensive, and environmentally damaging," data center marketer Steve Lim recently wrote. That narrative, he argued, "misrepresents our role in society and potentially hinders our ability to grow." In response, the industry is stepping up its messaging. Some developers, like Starwood Digital Ventures in Delaware, are turning to Facebook ads to appeal to residents. Its ads make the case that data center development might help keep property taxes low, bring jobs to Delaware, and protect the integrity of nearby wetlands. According to reporting from Spotlight Delaware, the company has also boasted that it will create three times as many jobs as it initially told local officials.
Nationally, Meta has spent months running TV spots showcasing data center work as a viable replacement for lost industrial and farming jobs. One advertisement spotlights the small city of Altoona, Iowa. "I grew up in Altoona, and I wanted my kids to be able to do the same," a voice narrates over softly lit scenes of small-town Americana: a Route 66 diner, a farm, and a water tower. "So, when work started to slow down, we looked for new opportunities and we welcomed Meta, which opened a data center in our town. Now, we're bringing jobs here for us, and for our next generation." The advertisement ends with a promise superimposed over images of a football game: Meta is investing $600 billion in American infrastructure and jobs.

In reality, Altoona's data center is a hulking, windowless warehouse complex that broke ground in 2013, long before the current data center boom. Altoona is not quite the beleaguered farm town Meta's advertisements portray, but a suburb of 19,000, roughly 16 minutes from downtown Des Moines, the most populous city in Iowa. Meta says it has supported 400+ operational jobs in Altoona. In comparison, the local casino employs nearly 1,000 residents, according to the local economic development agency. Ultimately, those details may not matter much to the ads' intended audience. As Politico reported, the advertisement may have been targeted at policymakers on the coasts more than the residents of towns like Altoona. Meta has spent at least $5 million airing the spot in places like Sacramento and Washington, D.C.

The community backlash has also made data centers a political flashpoint. In Virginia, Abigail Spanberger won November's gubernatorial election in part on promises to regulate the industry and make developers pay their fair share of the electricity they use. State lawmakers also considered 30 bills attempting to regulate data centers. In response to concerns about rising electricity prices, Virginia regulators approved a new rate structure for AI data centers and other large electricity users. The changes, which will take effect in 2027, are designed to protect household customers from costs associated with data center expansion.

These developments may only encourage companies to spend more on image-building. In Virginia's Data Center Alley, the ads show no sign of stopping. Elena Schlossberg, an anti-data-center activist based in Prince William County, says her mailbox has been flooded with fliers from Virginia Connects for the past eight months. The promises of lower electric bills, good jobs, and climate responsibility, she said, remind her of cigarette ads she saw decades ago touting the health benefits of smoking. But Schlossberg isn't sure the marketing is going to work. One recent poll showed that 73 percent of Virginians blame data centers for their rising electricity costs. "There's no putting the toothpaste back in the tube," she said. "People already know we're still covering their costs. People know that."

This article originally appeared in Grist. Grist is a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future. Learn more at Grist.org.

Category: E-Commerce
 

2026-02-05 19:06:10| Fast Company

In special education in the U.S., funding is scarce and personnel shortages are pervasive, leaving many school districts struggling to hire qualified and willing practitioners. Amid these long-standing challenges, there is rising interest in using artificial intelligence tools to help close some of the gaps that districts currently face and lower labor costs.

Over 7 million children receive federally funded entitlements under the Individuals with Disabilities Education Act, which guarantees students access to instruction tailored to their unique physical and psychological needs, as well as legal processes that allow families to negotiate support. Special education involves a range of professionals, including rehabilitation specialists, speech-language pathologists, and classroom teaching assistants. But these specialists are in short supply, despite the proven need for their services.

As an associate professor in special education who works with AI, I see its potential and its pitfalls. While AI systems may be able to reduce administrative burdens, deliver expert guidance, and help overwhelmed professionals manage their caseloads, they can also present ethical challenges ranging from machine bias to broader issues of trust in automated systems. They also risk amplifying existing problems with how special ed services are delivered. Yet some in the field are opting to test out AI tools rather than waiting for a perfect solution.

A faster IEP, but how individualized?

AI is already shaping special education planning, personnel preparation, and assessment. One example is the individualized education program, or IEP, the primary instrument for guiding which services a child receives. An IEP draws on a range of assessments and other data to describe a child's strengths, determine their needs, and set measurable goals. Every part of this process depends on trained professionals.
But persistent workforce shortages mean districts often struggle to complete assessments, update plans, and integrate input from parents. Most districts develop IEPs using software that requires practitioners to choose from a generalized set of rote responses or options, leading to a level of standardization that can fail to meet a child's true individual needs.

Preliminary research has shown that large language models such as ChatGPT can be adept at generating key special education documents such as IEPs by drawing on multiple data sources, including information from students and families. Chatbots that can quickly craft IEPs could potentially help special education practitioners better meet the needs of individual children and their families. Some professional organizations in special education have even encouraged educators to use AI for documents such as lesson plans.

Training and diagnosing disabilities

There is also potential for AI systems to help support professional training and development. My own work on personnel development combines several AI applications with virtual reality to enable practitioners to rehearse instructional routines before working directly with children. Here, AI can function as a practical extension of existing training models, offering repeated practice and structured support in ways that are difficult to sustain with limited personnel.

Some districts have begun using AI for assessments, which can involve a range of academic, cognitive, and medical evaluations. AI applications that pair automatic speech recognition and language processing are now being employed in computer-mediated oral reading assessments to score tests of student reading ability. Practitioners often struggle to make sense of the volume of data that schools collect. AI-driven machine learning tools can also help here, by identifying patterns that may not be immediately visible to educators for evaluation or instructional decision-making.
Such support may be especially useful in diagnosing disabilities such as autism or learning disabilities, where masking, variable presentation, and incomplete histories can make interpretation difficult. My ongoing research shows that current AI can make predictions based on data likely to be available in some districts.

Privacy and trust concerns

There are serious ethical and practical questions about these AI-supported interventions, ranging from risks to students' privacy to machine bias and deeper issues tied to family trust. Some hinge on the question of whether AI systems can deliver services that truly comply with existing law. The Individuals with Disabilities Education Act requires nondiscriminatory methods of evaluating disabilities to avoid inappropriately identifying students for services or neglecting to serve those who qualify. And the Family Educational Rights and Privacy Act explicitly protects students' data privacy and the rights of parents to access and hold their children's data. What happens if an AI system uses biased data or methods to generate a recommendation for a child? What if a child's data is misused or leaked by an AI system?

Using AI systems to perform some of the functions described above puts families in a position where they are expected to put their faith not only in their school district and its special education personnel, but also in commercial AI systems, the inner workings of which are largely inscrutable.

These ethical qualms are hardly unique to special ed; many have been raised in other fields and addressed by early adopters. For example, while automatic speech recognition, or ASR, systems have struggled to accurately assess accented English, many vendors now train their systems to accommodate specific ethnic and regional accents.
But ongoing research suggests that some ASR systems are limited in their capacity to accommodate speech differences associated with disabilities, account for classroom noise, and distinguish between different voices. While these issues may be addressed through technical improvements in the future, they are consequential at present.

Embedded bias

At first glance, machine learning models might appear to improve on traditional clinical decision-making. Yet AI models must be trained on existing data, meaning their decisions may continue to reflect long-standing biases in how disabilities have been identified. Indeed, research has shown that AI systems are routinely hobbled by biases within both training data and system design. AI models can also introduce new biases, either by missing subtle information revealed during in-person evaluations or by overrepresenting characteristics of groups included in the training data.

Such concerns, defenders might argue, are addressed by safeguards already embedded in federal law. Families have considerable latitude in what they agree to, and can opt for alternatives, provided they are aware they can direct the IEP process. By a similar token, using AI tools to build IEPs or lessons may seem like an obvious improvement over underdeveloped or perfunctory plans. Yet true individualization would require feeding protected data into large language models, which could violate privacy regulations. And while AI applications can readily produce better-looking IEPs and other paperwork, this does not necessarily result in improved services.

Filling the gap

Indeed, it is not yet clear whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law.
The Supreme Court in 2017 rejected the notion that the Individuals with Disabilities Education Act merely entitles students to trivial, de minimis progress, which weakens one of the primary rationales for pursuing AI: that it can meet a minimum standard of care and practice. And since AI has not really been empirically evaluated at scale, it has not been proved that it adequately meets even the low bar of simply improving on the flawed status quo.

But this does not change the reality of limited resources. For better or worse, AI is already being used to fill the gap between what the law requires and what the system actually provides.

Seth King is an associate professor of special education at the University of Iowa. This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

2026-02-05 18:44:07| Fast Company

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow, and freezing temperatures, it left more than a million people without power, mostly in the Southeast. Scrambling to meet higher-than-average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels.

Energy Secretary Chris Wright agreed, and took another step, too. He authorized PJM and ERCOT, the company that manages the Texas power grid, as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators. The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these facilities power themselves and do not send power back to the grid. But Wright explained that their industrial diesel generators could generate 35 gigawatts of power, or enough electricity to power many millions of homes.

We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through, and recover from winter storms.

Data centers use enormous quantities of energy

Before Wright's order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies. This is a pressing question, because data centers' power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM's. And data centers are expected to need even more power. Estimates vary widely, but the Lawrence Berkeley National Lab anticipates that the share of electricity production in the U.S.
used by data centers could spike from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects peak load growth of 32 gigawatts by 2030, enough power to supply 30 million new homes, but nearly all of it going to new data centers. PJM's job is to coordinate that energy, and to figure out how much the public, or others, should pay to supply it.

The race to build new data centers and find the electricity to power them has sparked enormous public backlash over how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water, and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution. Local ordinances, regulations created by state utility commissions, and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time. But since the 1970s, utilities have encouraged demand response programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation. Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries, and electric vehicles, these distributed energy resources can be dispatched as virtual power plants.
A different approach

The terms of data center agreements with local governments and utilities often aren't available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use. In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts, and airline reservation systems. Yet data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response.

In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide data center demand response by targeting machine learning workloads, shifting non-urgent compute tasks away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers' power use off the grid during power shortages.

Flexibility for the future

One study has found that if data centers committed to using power flexibly, an additional 100 gigawatts of capacity, the amount that would power around 70 million households, could be added to the grid without building new generation and transmission. In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their generation needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility, as grid operators can tap into batteries, shift thermostats, or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.
Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms. Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn't enough electricity for their homes but because their power lines were down.

The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that would not actually exist.

Onsite diesel generators are an emergency solution that lets large users such as data centers reduce strain on the grid. But they are not a long-term answer to winter storms. Instead, if data centers, utilities, regulators, and grid operators are willing to also consider offsite distributed energy to meet electricity demand, then their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold.

Nikki Luke is an assistant professor of human geography at the University of Tennessee. Conor Harrison is an associate professor of economic geography at the University of South Carolina. This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

