2025-10-02 16:43:30| Fast Company

Napheesa Collier is more than just a WNBA star who is critical of her league and its leadership. The Minnesota Lynx player is a vice president of the players' union, which means she will be sitting across from WNBA Commissioner Cathy Engelbert at the negotiating table ahead of an Oct. 31 deadline to reach a new collective bargaining agreement. If that doesn't cause enough tension, Collier is also a co-founder of Unrivaled, a three-on-three women's basketball league that plays in the winter and features WNBA stars. That could give her additional leverage to press the WNBA as talks unfold. Here's a look at some of the implications of Collier's headline-grabbing comments.

Player negotiations with the WNBA are already tense. Could they get worse?

As an executive on the negotiating team, Collier will have a loud voice in the room when in-person negotiations between the two sides continue. She was at the face-to-face meeting at All-Star weekend in July that included dozens of players. There have been meetings since, but players haven't really been able to attend because they've still been in season.

"We're working hard to make sure that we are putting ourselves in the best position to negotiate for what we think is fair," said Collier, who has torn ligaments in her left ankle. "We have a lot of meetings internally to make sure we're on the same page and we're all lockstep for this. Just making sure we're super aligned."

There also is the trust factor. During her comments at an end-of-season media session this week, Collier revealed conversations with the commissioner in February that were meant to remain private. That could undermine the trust that is often needed to carry out negotiations. For all the faults that Collier cited in her prepared comments, Engelbert has delivered on many of her promises since coming to the league in 2019.
She will have added six expansion teams by 2030 and secured a major new media rights deal for the next decade that will bring in more than $2.2 billion. Engelbert also had the league pay for a full charter flight program this season, which the players hope will be added to the new CBA to address concerns ranging from safety to travel time. The commissioner has said all along that the league is hoping for a transformational agreement that includes significantly increased player salaries and benefits. There's little reason for Collier's remarks to detract from that goal.

How are other players responding to Collier's comments?

Players across the league backed Collier either on social media or at Game 5 of the WNBA semifinal series between Las Vegas and Indiana that the Aces won in overtime. WNBA MVP A'ja Wilson said she was appreciative of Collier and the union for standing up for the players.

"I'm grateful to have those type of people to be able to continue to speak up for us," Wilson said after the Aces advanced to the WNBA Finals. "I'm going to ride with Phee always. Obviously, she's a business girlie and she has her own stuff going on, but moving forward, we've gotta continue to stand on business as we talk about this CBA negotiation."

Other players, including Rookie of the Year Paige Bueckers, backed Collier on social media; Bueckers called her "Queen Phee" in an Instagram Story with the song "Pink Pony Club" playing in the background.

What do the negotiations mean for free agency?

Nearly every player not on a rookie contract will be a free agent this offseason, hoping to cash in on a potential giant leap in the league's salary structure. Free agency usually has taken place in January, with players meeting with teams and able to sign in February. Players have been able to work out and get treatment for injuries at their former team's facility in the offseason before becoming free agents.
In a worst-case scenario where owners decided to lock out the players or the players decided to go on strike, those courtesies would go away.

Could Collier's Unrivaled league give players more leverage?

The three-on-three league will start its second season in January and has already expanded to 54 players and added two new teams. The domestic league, made up entirely of WNBA players, now gives players another option to earn money, which would lessen the impact of a lockout or strike. Last season, players in the league had an average salary of more than $220,000, which was close to the maximum base salary in the WNBA. Unrivaled will add Bueckers to an already loaded roster that includes Collier, Breanna Stewart, and Angel Reese. It also has set itself up for the future by offering NIL deals to many of the top college players.

Doug Feinberg, AP basketball writer


Category: E-Commerce

 


2025-10-02 16:22:40| Fast Company

Picture a data center on the edge of a desert plateau. Inside, row after row of servers glow and buzz, moving air through vast cooling towers, consuming more electricity than the surrounding towns combined. This is not science fiction. It is the reality of the vast AI compute clusters, often described as AI supercomputers for their sheer scale, that train today's most advanced models.

Strictly speaking, these are not supercomputers in the classical sense. Traditional supercomputers are highly specialized machines designed for scientific simulations such as climate modeling, nuclear physics, or astrophysics, tuned for parallelized code across millions of cores. What drives AI, by contrast, are massive clusters of GPUs or custom accelerators (Nvidia H100s, Google TPUs, etc.) connected through high-bandwidth interconnects, optimized for the matrix multiplications at the heart of deep learning. They are not solving equations for weather forecasts: they are churning through trillions of tokens to predict the next word. Still, the nickname sticks, because their performance, energy demands, and costs are comparable to, or beyond, the world's fastest scientific machines. And the implications are just as profound.

A recent study of 500 AI compute systems worldwide found that their performance is doubling every nine months, while both cost and power requirements double every year. At this pace, the frontier of artificial intelligence is not simply about better algorithms or smarter architectures. It is about who can afford, power, and cool these gigantic machines, and who cannot.

The exponential moat

When performance doubles every nine months but cost doubles every 12, you create an exponential moat: each leap forward pushes the next frontier further out of reach for all but a handful of players. This is not the familiar story of open-source vs. closed-source models: it is more fundamental.
If you cannot access the compute substrate (the hardware, electricity, cooling, and fabs required to train the next generation), you are not even in the race. Universities cannot keep up. Small startups cannot keep up. Even many governments cannot keep up.

The study shows a stark concentration of capability: the most powerful AI clusters are held by a handful of corporations, effectively privatizing access to the cutting edge of machine intelligence. Once compute becomes the bottleneck, the invisible hand of the market does not produce diversity. It produces monopoly.

Centralization vs. democratization

The rhetoric around AI often emphasizes democratization: tools made available to everyone, small actors empowered, creativity unleashed. But in practice, the power to shape AI's trajectory is shifting toward the owners of massive compute farms. They decide which models are feasible, which experiments get run, which approaches receive billions of tokens of training.

This is not just a matter of money. It is about infrastructure as governance. When only three or four firms control the largest AI clusters, they effectively control the boundaries of the possible. If your idea requires training a trillion-parameter model from scratch, and you are not inside one of those firms, your idea remains just that: an idea.

Geopolitics of compute

Governments are beginning to notice. At the 2025 Paris AI Action Summit, nations pledged billions to upgrade national AI infrastructure. France, Germany, and the U.K. are each moving to expand sovereign compute capacity. The United States has launched large-scale initiatives to accelerate domestic chip production, and China, as always, is playing its own game, pouring resources into massive wind and solar buildouts to guarantee not only chips, but the cheap electricity to feed them.

Europe, as usual, is caught in the middle.
Its regulatory frameworks may be more advanced, but its ability to deploy AI at scale depends on whether it can secure energy and compute on competitive terms. Without that, AI sovereignty is rhetoric, not reality.

And yet, there is a darker irony here. Even as governments race to assert sovereignty, the real winners of the AI arms race may be corporations, not nations. Control over compute is concentrating so quickly in the private sector that we are edging closer to a scenario long depicted in science fiction: corporations wielding more power than states, not only in markets but in shaping the very trajectory of human knowledge. The balance of authority between governments and companies is shifting, and this time, it is not fiction.

Environmental reckoning

There is also a physical cost. Training one frontier model can require as much electricity as a small city uses in a year. Cooling towers demand enormous volumes of water, and while much of it is returned to the cycle, siting matters: in water-scarce regions, the strain can be significant. The carbon footprint is similarly uneven. A model trained on grids dominated by coal or gas produces orders of magnitude more emissions than one trained on grids powered by renewables. In this sense, the AI sustainability debate is really an energy debate. Models are not green or dirty by themselves. They are as green or as dirty as the electrons that feed them.

What efficiency cannot buy

Efficiency alone will not solve this problem. Each generation of chips gets faster, each architecture more optimized, but aggregate demand continues to rise faster than the gains. Every watt saved at the micro level is consumed by the macro expansion of ambition. If anything, efficiency makes the arms race worse, because it lowers the cost per experiment and encourages even more experiments. The result is a treadmill: more compute, more power, more cost, more centralization.
What to demand

If we want to avoid a future in which AI's destiny is set by the boardrooms of three companies and the ministries of two superpowers, we need to treat compute as a public concern. That means demanding:

Transparency about who owns and operates the largest clusters.
Auditability of usage: what models are being trained, and for what purposes.
Shared infrastructure, funded publicly or through consortia, so that researchers and smaller firms can experiment without asking permission from trillion-dollar corporations.
Energy accountability, requiring operators to disclose not just aggregate consumption but sources, emissions, and water footprints in real time.

The debate should not stop at which model is safest or which dataset is fair. It should extend to who controls the machines that make the models possible in the first place.

The machines behind the machines

The next control point in AI isn't software: it's hardware. The massive compute clusters that train the models are now the real arbiters of progress. They decide what's possible, what's practical, and who gets to play. If history teaches anything, it is that when power centralizes at this scale, accountability rarely follows. Without deliberate intervention, we risk an AI ecosystem where innovation is bottlenecked, oversight is optional, and the costs, financial, environmental, and human, are hidden until it is too late. The arms race of AI supercomputers is already underway. The only question is whether society chooses to watch passively as the future of intelligence is privatized, or whether we recognize that the machines behind the machines deserve just as much scrutiny as the algorithms they enable.
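The exponential moat described above is a matter of simple compounding. A minimal sketch of the arithmetic, using the study's quoted doubling times (nine months for performance, 12 months for cost); the six-year horizon is an illustrative assumption, not a figure from the study:

```python
# Compound the study's doubling times to see how fast the entry price escalates.
PERF_DOUBLING_MONTHS = 9    # performance doubles every nine months (per the study)
COST_DOUBLING_MONTHS = 12   # cost and power double every year (per the study)

def growth_factor(months: float, doubling_months: float) -> float:
    """Multiplicative growth after `months`, given a doubling period."""
    return 2 ** (months / doubling_months)

horizon = 72  # six years, chosen for illustration
perf = growth_factor(horizon, PERF_DOUBLING_MONTHS)  # 2**8 = 256x
cost = growth_factor(horizon, COST_DOUBLING_MONTHS)  # 2**6 = 64x

print(f"After {horizon} months: performance x{perf:.0f}, cost x{cost:.0f}")
print(f"Performance per dollar improves only x{perf / cost:.0f}")
```

Performance per dollar improves modestly, but the absolute price of a frontier-scale cluster grows 64-fold over the same window, which is the moat: the efficiency gains never catch up with the escalating ticket to entry.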



 

2025-10-02 16:20:47| Fast Company

Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on Donald Trump's recent AI-generated videos, which he (or his staff) posts on Truth Social. I also look at world models, the successor to large language models, and at OpenAI's new Sora 2 model, which is also the anchor for a new social app.

Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

Trump's dangerous mix of low-grade and high-grade deepfakes

AI-generated videos are becoming a routine part of politics. That's probably not a good thing. In the past it has been fairly easy to distinguish between the real and the generated, but as video-generation models have improved, it's gotten harder. We're in a period when both high-quality AI videos and AI slop are common.

The Trump administration appears to be using both types. Last weekend, Trump posted an AI video to Truth Social that looked a lot like a real Fox News segment describing a new healthcare program about to be rolled out to Americans. The video featured the president's daughter-in-law Lara Trump describing "medbeds," or futuristic health pods (think the health pod in Prometheus) that can do anything from curing cancer to growing back lost limbs. The beds would be offered in new hospitals across the country, and people would use a "medbed card" to access them.

Except magical medbeds are a fantasy, a myth spun up over years of QAnon blather on sites like 4chan. And there's no new hospital system and no membership card. For most rational people, the unlikelihood of such a thing suddenly coming into existence was the first tip-off that the video was AI generated.
Somebody (maybe Trump, maybe a staffer) deleted the strange post a few hours later, and the White House offered no explanation.

A few days later, Trump posted another, lower-quality AI-generated video. This one depicted Senate Minority Leader Chuck Schumer (D-NY) and House Minority Leader Hakeem Jeffries (D-NY) standing at a podium talking about the looming government shutdown. The AI had Schumer insulting his own party and using profanity. The AI dropped a cartoon sombrero and a mustache on Jeffries, in a nod to a Republican lie that Democrats will let the government shut down if they can't give healthcare benefits to undocumented immigrants (who are, by law, ineligible for such benefits).

The Schumer video is obviously AI-generated, meant to troll Democrats during the shutdown impasse. The medbeds video is more troubling because the AI is high-quality and shows an intent to mislead. When one politician routinely uses both slop and high-realism AI to make their points, will their constituents or supporters always know the difference? As the AI improves, that becomes almost completely up to the creator of the video. For Trump, a politician with authoritarian tendencies and a reliance on propaganda, that could be a dangerous mix.

AI-based video may wind up becoming the most powerful propaganda tool ever invented. After all, seeing is believing, especially when the viewer wants to believe.

World models are likely the future of AI

The AI models we've been talking about for the past three years are fundamentally language models: really big mathematical probability engines that are good at predicting the most likely word in a sequence. But two things have happened since the appearance of ChatGPT in late 2022: We've come to understand that a model that reasons primarily on words is limited in its real-life applications.
And within the AI industry, a consensus has formed that the AI labs' main trick of radically scaling up training data and computing power to make models smarter isn't achieving the big performance gains it once did.

None of this should be too surprising. A machine (or a human) can only learn so much about how the world works by reading articles and books. We humans don't learn like that. We have a unique ability to quickly build a mental world model that organizes the diverse modes of information our senses gather about our environment and ourselves. Researchers are working hard to develop synthetic world models, but it's a hard problem. While language models form a vast, many-dimensional vector space (a map, if you will) representing all the possible combinations of words in various contexts, a world model must form a much bigger space to represent the virtually endless combinations of visual, aural, and motion information. And while language models try to guess the next word in a sequence, a world model must reason in real time about how a certain action might affect the real world.

"The models are learning how to reason about physical reality," says Naeem Talukdar, CEO of the video-generation company Moonvalley. A world model inside a robot might be asked to imagine a world in which the robotic arm moves 10 degrees to the right, and then judge whether such a move is productive to the larger task at hand. It's this kind of reasoning that may allow robots to iteratively learn to complete tasks they've not been explicitly trained to do. "The bigger these models get, and the more modalities that they learn on, just like humans, they start to be able to reason on things that they haven't seen yet before," Talukdar says. For instance, in the past robots have struggled with the deceptively simple command: clean up the dinner table and put the dishes in the dishwasher.
Without the ability to reason in real time about the physical world, the robot wouldn't be able to experimentally move from one micro-task to the next.

World models are also being used to help self-driving cars train for real-world driving, and to manage unexpected or untrained-for events that occur on the roadway in real time. Additionally, augmented reality glasses such as Meta's Orion may eventually rely on world models to organize all the data collected from the various sensors in the device, such as motion and orientation sensors, depth and tracking sensors, microphones, and light sensors.

OpenAI's Sora 2 video generation model comes with a social network

Would you like to open an app and spend your free time watching AI-generated videos depicting real people (including yourself and your friends) doing silly things? OpenAI and its CEO Sam Altman think you will. The company just announced a new version of its impressive Sora video generation model, Sora 2, and a social networking app (iOS only) to go with it.

OpenAI calls Sora 2 the "GPT-3.5 moment for video" (GPT-3.5 was when the output of OpenAI's language models became more coherent and relevant). The major breakthrough is native audio-video generation: creating synchronized dialogue, sound effects, and soundscapes that naturally match the visuals. The company says that Sora 2's understanding of physics is vastly improved, and that it can now accurately model complex phenomena like buoyancy in water and intricate movements such as gymnastics routines. One person on X demonstrated how Sora 2 can now accurately simulate water being poured into a glass with realistic reflections, ripples, and refraction. The model generates 10-second clips (20 seconds for Pro users) with better consistency across multi-shot sequences, OpenAI says. The model is good at producing realistic, cinematic, and anime styles.
Importantly, a standout Cameos feature lets users insert themselves (or their friends) into videos after a onetime recording. When you set up the Sora app, you're asked to record a live video of yourself repeating random numbers or phrases and turning your head in certain poses. All this gives the app a way to authenticate that it's really you, so that only you can use your own image, and to ensure that anybody else trying to use it in a video must get your permission.

Like TikTok, the Sora iOS app (invite-only for now) features an algorithmic feed, content remixing, and social sharing. OpenAI emphasizes "long-term user satisfaction" over engagement metrics, and the company says it has no plans to insert advertising into the feed, as Meta plans to do with its Vibes AI video app.

More AI coverage from Fast Company:

AI's monstrous energy use is becoming a serious PR problem
ChatGPT can now spend your money for you
Shift the AI conversation from cost cutting to revenue
AI is making your website irrelevant

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.



 
