
2025-12-23 17:00:00| Fast Company

Meta's decision to end its professional fact-checking program sparked a wave of criticism in the tech and media world. Critics warned that dropping expert oversight could erode trust and reliability in the digital information landscape, especially when profit-driven platforms are mostly left to police themselves. What much of this debate has overlooked, however, is that today, AI large language models are increasingly used to write news summaries, headlines, and content that catches your attention long before traditional content moderation mechanisms can step in.

The issue isn't clear-cut cases of misinformation or harmful subject matter going unflagged in the absence of content moderation. What's missing from the discussion is how ostensibly accurate information is selected, framed, and emphasized in ways that can shape public perception. Large language models gradually influence the way people form opinions by generating the information that chatbots and virtual assistants present to people over time. These models are now also being built into news sites, social media platforms, and search services, making them a primary gateway for obtaining information. Studies show that large language models do more than simply pass along information. Their responses can subtly highlight certain viewpoints while minimizing others, often without users realizing it.

Communication bias

My colleague, computer scientist Stefan Schmid, and I, a technology law and policy scholar, show in a paper accepted for publication in the journal Communications of the ACM that large language models exhibit communication bias. We found that they may have a tendency to highlight particular perspectives while omitting or diminishing others. Such bias can influence how users think or feel, regardless of whether the information presented is true or false.

Empirical research over the past few years has produced benchmark datasets that correlate model outputs with party positions before and during elections. They reveal variations in how current large language models deal with public content. Depending on the persona or context used in prompting large language models, current models subtly tilt toward particular positions, even when factual accuracy remains intact. These shifts point to an emerging form of persona-based steerability: a model's tendency to align its tone and emphasis with the perceived expectations of the user.

For instance, when one user describes themselves as an environmental activist and another as a business owner, a model may answer the same question about a new climate law by emphasizing different, yet factually accurate, concerns for each of them. For example, the criticisms could be that the law does not go far enough in promoting environmental benefits and that the law imposes regulatory burdens and compliance costs.

Such alignment can easily be misread as flattery. The phenomenon is called sycophancy: Models effectively tell users what they want to hear. But while sycophancy is a symptom of user-model interaction, communication bias runs deeper. It reflects disparities in who designs and builds these systems, what datasets they draw from, and which incentives drive their refinement. When a handful of developers dominate the large language model market and their systems consistently present some viewpoints more favorably than others, small differences in model behavior can scale into significant distortions in public communication. Bias in large language models starts with the data they're trained on.
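To make the persona example above concrete, here is a minimal sketch of how persona-based steerability can be probed in practice: the same factual question is sent to a model under two different self-described personas, and the answers are compared side by side for tone and emphasis. This is not the methodology from the paper; the question, the personas, the model name, and the use of the OpenAI Python client are all illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Illustrative question and personas, not taken from the study
QUESTION = "What are the main criticisms of the new climate law?"
PERSONAS = {
    "environmental activist": "I am an environmental activist.",
    "business owner": "I run a small manufacturing business.",
}

def ask(persona_statement: str, question: str) -> str:
    """Send the same question prefixed with a persona statement and return the model's answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute whichever model is being studied
        messages=[{"role": "user", "content": f"{persona_statement} {question}"}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for label, statement in PERSONAS.items():
        print(f"--- persona: {label} ---")
        print(ask(statement, QUESTION))

Comparing the two outputs for framing and emphasis, rather than for factual errors, is the kind of check that surfaces communication bias even when every statement in both answers is accurate.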
What regulation can and can't do

Modern society increasingly relies on large language models as the primary interface between people and information. Governments worldwide have launched policies to address concerns over AI bias. For instance, the European Union's AI Act and the Digital Services Act attempt to impose transparency and accountability. But neither is designed to address the nuanced issue of communication bias in AI outputs.

Proponents of AI regulation often cite neutral AI as a goal, but true neutrality is often unattainable. AI systems reflect the biases embedded in their data, training, and design, and attempts to regulate such bias often end up trading one flavor of bias for another. And communication bias is not just about accuracy; it is about content generation and framing. Imagine asking an AI system a question about a contentious piece of legislation. The model's answer is shaped not only by facts, but also by how those facts are presented, which sources are highlighted, and the tone and viewpoint it adopts.

This means that the root of the bias problem lies not merely in addressing biased training data or skewed outputs, but in the market structures that shape technology design in the first place. When only a few large language models serve as gateways to information, the risk of communication bias grows. Apart from regulation, then, effective bias mitigation requires safeguarding competition, user-driven accountability, and regulatory openness to different ways of building and offering large language models.

Most regulations so far aim at banning harmful outputs after the technology's deployment, or forcing companies to run audits before launch. Our analysis shows that while prelaunch checks and post-deployment oversight may catch the most glaring errors, they may be less effective at addressing subtle communication bias that emerges through user interactions.

Beyond AI regulation

It is tempting to expect that regulation can eliminate all biases in AI systems. In some instances, these policies can be helpful, but they tend to fail to address a deeper issue: the incentives that determine the technologies that communicate information to the public. Our findings make clear that a more lasting solution lies in fostering competition, transparency, and meaningful user participation, enabling consumers to play an active role in how companies design, test, and deploy large language models.

The reason these policies are important is that, ultimately, AI will not only influence the information we seek and the daily news we read, but it will also play a crucial part in shaping the kind of society we envision for the future.

Adrian Kuenzler is a scholar-in-residence at the University of Denver and an associate professor at the University of Hong Kong. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

LATEST NEWS

2025-12-23 16:33:31| Fast Company

The Federal Communications Commission on Monday said it would ban new foreign-made drones, a move that will keep new Chinese-made drones such as those from DJI and Autel out of the U.S. market.

The announcement came a year after Congress passed a defense bill that raised national security concerns about Chinese-made drones, which have become a dominant player in the U.S., widely used in farming, mapping, law enforcement, and filmmaking. The bill called for stopping the two Chinese companies from selling new drones in the U.S. if a review found they posed a risk to American national security. The deadline for the review was Dec. 23.

The FCC said Monday the review found that all drones and critical components produced in foreign countries, not just by the two Chinese companies, posed "unacceptable risks to the national security of the United States and to the safety and security of U.S. persons." But it said specific drones or components would be exempt if the Pentagon or Department of Homeland Security determined they did not pose such risks.

The FCC cited upcoming major events, such as the 2026 World Cup, America250 celebrations, and the 2028 Summer Olympics in Los Angeles, as reasons to address potential drone threats posed by "criminals, hostile foreign actors, and terrorists."

Michael Robbins, president and chief executive officer of AUVSI, the Association for Uncrewed Vehicle Systems International, said in a statement that the industry group welcomes the decision. He said it's time for the U.S. not only to reduce its dependence on China but to build its own drones. "Recent history underscores why the United States must increase domestic drone production and secure its supply chains," Robbins said, citing Beijing's willingness to restrict critical supplies such as rare earth magnets to serve its strategic interests.

DJI said it was disappointed by the FCC decision. While DJI was not singled out, no information has been released regarding what information was used by the Executive Branch in reaching its determination, it said in a statement. Concerns about DJI's data security have not been grounded in evidence and instead reflect protectionism, contrary to the principles of an open market, the company said.

In Texas, Gene Robinson has a fleet of nine DJI drones that he uses for law enforcement training and forensic analyses. He said the new restrictions would hurt him and many others who have come to rely on the Chinese drones because of their versatility, high performance, and affordable prices. But he said he understands the decision and lamented that the U.S. had outsourced the manufacturing to China. "Now, we are paying the price," Robinson said. "To get back to where we had the independence, there will be some growing pains. We need to suck it up, and let's not have it happen again."

Also in Texas, Arthur Erickson, chief executive officer and co-founder of the drone-making company Hylio, said the departure of DJI would provide much-needed room for American companies like his to grow. New investments are pouring in to help him ramp up production of spray drones, which farmers use to fertilize their fields, and it will bring down prices, Erickson said. But he also called it crazy and unexpected that the FCC should expand the scope to all foreign-made drones and drone components. "The way it's written is a blanket statement," Erickson said. "There's a global allied supply chain. I hope they will clarify that."

Didi Tang, Associated Press


Category: E-Commerce

 

2025-12-23 16:01:54| Fast Company

The governor of Niigata on Tuesday formally gave local consent to put two reactors at the Kashiwazaki-Kariwa nuclear power plant in the north-central prefecture back online, clearing a last hurdle toward restarting the plant, which has been idled for more than a decade following the 2011 meltdowns at another plant managed by the same utility.

Gov. Hideyo Hanazumi, in his meeting with Economy and Industry Minister Ryosei Akazawa, conveyed the prefecture's "endorsement" to restart the No. 6 and No. 7 reactors at the Kashiwazaki-Kariwa plant, accepting the government's pledge to ensure safety, emergency response and understanding of the residents.

Restart preparations for the No. 6 reactor have moved ahead, and utility company TEPCO is expected to apply for a final safety inspection by the Nuclear Regulation Authority later this week ahead of a possible resumption in January. Work at the other reactor is expected to take a few more years.

The move comes one day after the Niigata prefectural assembly adopted a budget bill that included funding necessary for a restart, supporting the governor's earlier consent. "It was a heavy and difficult decision," Hanazumi told reporters. Hanazumi also met with Prime Minister Sanae Takaichi, who also supports nuclear energy, and asked her to visit the plant to observe its safety measures.

Japan once planned to phase out atomic power following the disaster at the Fukushima plant caused by an earthquake and tsunami. But in the face of global fuel shortages, rising prices and pressure to reduce carbon emissions, the government has reversed its policy and is now seeking to increase nuclear energy use by accelerating reactor restarts, extending their operational lifespan and considering building new ones. Of the 57 commercial reactors, 13 are currently in operation, 20 are offline and 24 others are being decommissioned, according to the nuclear authorities.

The Kashiwazaki-Kariwa plant, which comprises seven reactors, is the world's biggest. The plant has been offline since 2012 as part of nationwide reactor shutdowns in response to the March 2011 triple meltdowns at TEPCO's Fukushima Daiichi plant. Reactors No. 6 and 7 at Kashiwazaki-Kariwa had cleared safety tests in 2017, but their restart preparations were suspended after a series of safeguarding problems were found in 2021. The Nuclear Regulation Authority lifted an operational ban at the plant in 2023.

Its resumption again faced uncertainty following the Jan. 1, 2024, earthquake in the nearby Noto region, which rekindled safety concerns among local residents about the plant and about evacuation in case of a major disaster. The industry ministry sought an early resumption approval from Niigata two months later. In Japan, a reactor restart is subject to the local community's consent.

TEPCO, heavily burdened with the growing cost of decades-long decommissioning and compensation for residents affected by the Fukushima disaster, has been eager to bring its only workable nuclear plant back online to improve its business. TEPCO has been struggling to regain public trust in safely running a nuclear power plant. Aside from plant safety, experts say acceleration of reactor restarts also raises concern in a country without full nuclear fuel reprocessing or plans for radioactive waste management.

Mari Yamaguchi, Associated Press


Category: E-Commerce

 

Latest from this category

23.12 Consuming news from AI shifts our opinions and reality. Here's how
23.12 FCC bans new foreign-made drones over national security concerns
23.12 Japan will restart the world's largest nuclear plant after clearing this last major hurdle
23.12 U.S. economy shows strong growth in third quarter, Commerce Department says
23.12 Quantum computing stocks soar, then fall, in holiday week trading. What's up with D-Wave, Rigetti, and IonQ?
23.12 The 7 biggest design trends of 2025
23.12 Silicon Valley says to skip college
23.12 FDA approves Novo Nordisk weight-loss pill. Here's what to know

All news

23.12 Mid-Day Market Internals
23.12 What Makes This Trade Great: CCCX Stands Out on a Quiet Day
23.12 Engadget Podcast: Why is the Nex Playground 'AI console' such a hit?
23.12 Xbox cloud gaming comes to newer Amazon Fire TV models
23.12 New York Times reporter files lawsuit against AI companies
23.12 Consuming news from AI shifts our opinions and reality. Here's how
23.12 FCC bans new foreign-made drones over national security concerns
23.12 The seller of a $34.5 million Winnetka home downsized with a $14 million buy in Kenilworth