2025-11-25 19:00:00| Fast Company

The Trump administration is hunting for ways to block states from regulating artificial intelligence. In response, dozens of state attorneys general have now sent a letter pressing congressional leadership not to approve language that would preempt their governments' freedom to propose their own legislation on the technology. "Broad preemption of state protections is particularly ill-advised because constantly evolving emerging technologies, like AI, require agile regulatory responses that can protect our citizens," they write in a Tuesday memo. "This regulatory innovation is best left to the 50 states so we can all learn from what works and what does not. New applications for AI are regularly being found for healthcare, hiring, housing markets, customer service, law enforcement and public safety, transportation, banking, education, and social media." The endeavor, which represents 36 states in total, comes as Congress weighs language, packed into a new defense funding authorization bill, that would prevent states from enforcing their own rules about the technology. A previous measure, which failed, would have established a 10-year moratorium on states writing their own rules. A draft executive order leaked last week would, similarly, push the federal government to punish states for enacting or enforcing these rules. "If there were real cases to be brought up, they would have brought [them] already," Alex Bores, the lawmaker who authored New York's passed, but not-yet-signed, AI legislation, the RAISE Act, told Fast Company last week. "The only reason you need an executive order to tell people to look for cases is when you just want to harass states into submission." "Every state should be able to enact and enforce its own AI regulations to protect its residents," New York Attorney General Letitia James, the lead author of the letter, said in a statement.
"Certain AI chatbots have been shown to harm our children's mental health and AI-generated deepfakes are making it easier for people to fall victim to scams. State governments are the best equipped to address the dangers associated with AI." The letter comes after state lawmakers wrote to their federal peers urging them not to strip states of their ability to regulate artificial intelligence. Thus far, the federal government has not passed major legislation on model transparency, AI cybersecurity and safety, or energy use. For state officials, the concern is that states will be banned from taking their own action on these fronts. Arati Prabhakar, a top tech adviser under the Biden administration, recently called this effort "ludicrous," since Congress has yet to establish any regulatory regime for AI. The attorneys general emphasized the importance of defending children from inappropriate relationships with chatbots, including discussions of self-harm, and defending against deepfake-enabled scams. "A moratorium would put us behind by tying states' hands and failing to keep up with the technology," they write, arguing that preemption prevents states from remaining agile in responding to an emerging technology.


Category: E-Commerce

 


2025-11-25 18:55:19| Fast Company

In a packed room at a library in downtown Boston, Rep. Ayanna Pressley posed a blunt question: Why are Black women, who have some of the highest labor force participation rates in the country, now seeing their unemployment rise faster than most other groups? The replies Monday from policymakers, academics, business owners, and community organizers laid out how economic headwinds facing Black women may indicate a troubling shift for the economy at large. The unemployment rate for Black women increased from 6.7% to 7.5% between August and September this year, the most recent month for which data is available because of the federal government shutdown. That compares with a rise from 3.2% to 3.4% for white women over the same period. And it extended a yearlong trend of Black women's unemployment rate increasing at a time of broad economic uncertainty. Many roundtable attendees view those numbers as both an affront and a warning about the uneven pressures on Black women. "Everyone is missing out when we're pushed out of the workforce," said Pressley, a progressive Democrat from Massachusetts. "That is something that I worry about now, that you have all these women with specific expertise and specializations that we're being deprived of." And when Black women do have work, she said, they tend to be woefully underemployed. Black women had the highest labor force participation rate of any female demographic in 2024, according to the Bureau of Labor Statistics, yet their unemployment rate remains higher than that of other demographics of women. Historically, their unemployment rate has trended slightly above the national average, widening during periods of slowed economic growth or recession. Black Americans are overrepresented in industries like retail, health and social services, and government administration, according to a 2024 Bureau of Labor Statistics survey.
"Black women are at the center of the Venn diagram that is our society," said Anna Gifty Opoku-Agyeman, a PhD candidate in public policy and economics at the Harvard Kennedy School. She pointed to April as the month when Black women's unemployment began to diverge more sharply from other groups'. A policy agenda that ignores the causes, she said, could harm the broader economy. Roundtable participants cited many long-standing structural inequities but attributed most of the latest divergence to recent federal actions. They blamed the Trump administration's downsizing of the Minority Business Development Agency and the cancellation of some federal contracts with nonprofits and small businesses, saying those actions disproportionately impacted Black women. Others said tariff policies and mass federal layoffs also contributed to the strain. Participants repeatedly cited the administration's opposition to diversity, equity, and inclusion initiatives as creating a more hostile environment for Black women seeking employment, customers, or government contracts. There is no concrete data on how many Black federal workers were laid off, fired, or otherwise dismissed as part of President Donald Trump's sweeping cuts to the federal government. The attendees discussed a wide range of potential solutions to the unemployment rate for Black women, including using state budgets to bolster business development for Black women, expanding microloans to different communities, increasing government resources for contracting, requiring greater transparency in corporate hiring practices, and encouraging state and federal officials to enforce anti-discrimination policies. "I feel like I was just at church," said Ruthzee Louijeune, the Boston City Council president, as the meeting wrapped up. She encouraged attendees to keep up their efforts, and she defended DEI policies as essential to a healthy workforce and political system.
Without broad-based efforts, the Democrat said, the country's business and political leadership would be abnormal and weakened. "Any space that does not look like our country and like our cities is not normal," she said, "and not the city or country we are trying to build." By Matt Brown, Associated Press



 

2025-11-25 18:00:00| Fast Company

In recent weeks, OpenAI has faced seven lawsuits alleging that ChatGPT contributed to suicides or mental health breakdowns. In a recent conversation at the Innovation@Brown Showcase, Brown University's Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, an early investor in mental health AI startups, discussed the controversial relationship between AI and mental health. Pavlick and Darabi weigh the pros and cons of applying AI to emotional well-being, from chatbot therapy to AI friends and romantic partners. This is an abridged transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today's top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode. A recent study showed that mental health support is one of the major uses of ChatGPT, which makes a lot of people uneasy. Ellie, I want to start with you. The new institute that you direct is known as ARIA, which stands for AI Research Institute on Interaction for AI Assistance. It's a consortium of experts from a bunch of universities backed by $20 million in National Science Foundation funding. So what is the goal of ARIA? What are you hoping it delivers? Why is it here? Pavlick: Mental health is something that is very, I would say I don't even know if it's polarizing. I think many people's first reaction is negative, the concept of AI mental health. So as you can tell from the name, we didn't actually start as a group that was trying to work on mental health. We were a group of researchers who were interested in the biggest, hardest problems with current AI technologies. What are the hardest things that people are trying to apply AI to that we don't think the current technology is quite up for?
And mental health came up, and actually was originally taken off our list of things that we wanted to work on because it is so scary to think about how big the risks are if you get it wrong. And then we came back to it exactly because of this. We basically realized that this is happening, people are already using it. There are companies, startups, some of them probably doing a great job, some of them not. The truth is we actually have a hard time even being able to differentiate those right now. And then there are a ton of people just going to chatbots and using them as therapists. And so we're like, the worst thing that could happen is we don't actually have good scientific leadership around this. How do we decide what this technology can and can't do? How do we evaluate these kinds of things? How do we build it safely in a way that we can trust? There are questions like this. There's a demand for answers, and the reality is most of them we just can't answer right now. They depend on an understanding of the AI that we don't yet have. An understanding of humans and mental health that we don't yet have. A level of discourse that society isn't up for. We don't have the vocabulary, we don't have the terms. There's just a lot that we can't do yet to make this happen the right way. So that's what ARIA is trying to provide: this public sector, academic kind of voice to help lead this discussion. That's right. You're not waiting for this data to come out, or for the final word from academia or this consortium. You're already investing in companies that do this. I know you're an early-stage investor in Slingshot AI, which delivers mental health support via the app Ash. Is Ash the kind of service that Ellie and her group should be wary about? What were you thinking about when you decided to make this investment? Darabi: Well, actually I'm not hearing that Ellie's wary. I think she's being really pragmatic and realistic.
In broad brushstrokes, zooming out and talking about the sobering facts and the scale of this problem: one billion out of eight billion people struggle with some sort of mental health issue. Fewer than 50% of people seek out treatment, and the people who do often find the cost prohibitive. That recent study you cited is probably the one from the Harvard Business Review, which came out in March of this year and studied use cases of ChatGPT. Their analysis showed that numbers one, four, and seven of the top 10 use cases for foundational models broadly are therapy or mental health related. I mean, we're talking about something that touches half of the planet. If you're looking at investing with an ethical lens, there's no greater TAM [total addressable market] than people who have a mental health disorder of some sort. We've known the Slingshot AI team, which is building the largest foundational model for psychology, for over a decade. We've followed their careers. We think exceptionally highly of the advisory board and panel they put together. But what really led us down the rabbit hole of caring deeply enough about mental health and AI to frankly start a fund dedicated to it, and we did that in December of last year, was going back to the fact that AI therapy is so stigmatized, and people hear it and immediately jump to the wrong conclusions. They jump to the hyperbolic examples of suicide. And yes, it's terrible. There have been incidents of deep codependence upon ChatGPT or otherwise whereby young people in particular are susceptible to very scary things, and yet those salacious headlines don't represent the vast number of folks who we think will be well served by these technologies. You said this phrase: we kind of "stumbled on [these] uses" for ChatGPT. It's not what it was created for, and yet people love it for that.
Darabi: It makes me think about 20 years ago, when everybody was freaking out about the fact that kids were on video games all day, and now because of that we have Khan Academy and Duolingo. Fearmongering is good, actually, because it creates a precedent for the guardrails that I think are absolutely necessary for us to safeguard our children from anything that could be disastrous. But at the same time, if we run in fear, we're just repeating history, and it's probably time to just embrace the snowball, which will become an avalanche in mere seconds. AI is going to be omnipresent. Everything that we see and touch will be in some way supercharged by AI. So if we're not understanding it to our deepest capabilities, then we're actually doing ourselves a great disservice. Pavlick: To this point of, yeah, people are drawn to AI for this particular use case. So on our team in ARIA, we have a lot of computer scientists who build AI systems, but actually a lot of our teams do developmental psychology, core cognitive science, neuroscience. There are questions to ask about why. The whys and the hows. What are people getting out of this? What need is it filling? I think this is a really important question to be asking soon. I think you're completely right. Fearmongering has a positive role to play. You don't want to get too caught up on it, and you can point historically to examples where people freaked out and it turned out okay. There are also cases like social media, where maybe people didn't freak out enough, and I would not say it turned out okay. People can agree to disagree, and there are pluses and minuses, but the point is that these are now questions we are actually in a position to start asking. You can't do things perfectly, but you can run studies. You can say, "What is the process that's happening? What is it like when someone's talking to a chatbot? Is it similar to talking to a human? What is missing there? Is this going to be okay long-term?
What about young people who are doing this in core developmental stages? What about somebody who’s in a state of acute psychological distress as opposed to as a general maintenance thing? What about somebody who’s struggling with substance abuse?” These are all different questions, they’re going to have different answers. Again, I feel very strongly that the one LLM that just is one interface for everything is, I think a lot is unknown, but I would bet that that’s not going to be the final thing that we’re going to want.

