In a packed room at a library in downtown Boston, Rep. Ayanna Pressley posed a blunt question: Why are Black women, who have some of the highest labor force participation rates in the country, now seeing their unemployment rise faster than most other groups?

The replies Monday from policymakers, academics, business owners, and community organizers laid out how the economic headwinds facing Black women may signal a troubling shift for the economy at large.

The unemployment rate for Black women increased from 6.7% to 7.5% between August and September this year, the most recent month with available data because of the federal government shutdown. That compares with an increase from 3.2% to 3.4% for white women over the same period. And it extended a yearlong trend of Black women's unemployment rising at a time of broad economic uncertainty.

Many roundtable attendees view those numbers as both an affront and a warning about the uneven pressures on Black women.

"Everyone is missing out when we're pushed out of the workforce," said Pressley, a progressive Democrat from Massachusetts. "That is something that I worry about now, that you have all these women with specific expertise and specializations that we're being deprived of."

And when Black women do have work, she said, they tend to be woefully underemployed.

Black women had the highest labor force participation rate of any female demographic in 2024, according to the Bureau of Labor Statistics, yet their unemployment rate remains higher than that of other demographics of women. Historically, their unemployment rate has trended slightly above the national average, widening during periods of slowed economic growth or recession. Black Americans are overrepresented in industries like retail, health and social services, and government administration, according to a 2024 Bureau of Labor Statistics survey.
"Black women are at the center of the Venn diagram that is our society," said Anna Gifty Opoku-Agyeman, a PhD candidate in public policy and economics at the Harvard Kennedy School. She pointed to April as the month when Black women's unemployment began to diverge more sharply from that of other groups. A policy agenda that ignores the causes, she said, could harm the broader economy.

Roundtable participants cited many long-standing structural inequities but attributed most of the latest divergence to recent federal actions. They blamed the Trump administration's downsizing of the Minority Business Development Agency and the cancellation of some federal contracts with nonprofits and small businesses, saying those actions disproportionately affected Black women. Others said tariff policies and mass federal layoffs also contributed to the strain.

Participants repeatedly cited the administration's opposition to diversity, equity, and inclusion initiatives as creating a more hostile environment for Black women seeking employment, customers, or government contracts. There is no concrete data on how many Black federal workers were laid off, fired, or otherwise dismissed as part of President Donald Trump's sweeping cuts across the federal government.

The attendees discussed a wide range of potential solutions to the unemployment rate for Black women, including using state budgets to bolster business development for Black women, expanding microloans to different communities, increasing government resources for contracting, requiring greater transparency in corporate hiring practices, and encouraging state and federal officials to enforce anti-discrimination policies.

"I feel like I was just at church," said Ruthzee Louijeune, the Boston City Council president, as the meeting wrapped up. She encouraged attendees to keep up their efforts, and she defended DEI policies as essential to a healthy workforce and political system.
Without broad-based efforts, the Democrat said, the country's business and political leadership would be abnormal and weakened. "Any space that does not look like our country and like our cities is not normal," she said, "and not the city or country we are trying to build."

By Matt Brown, Associated Press
In recent weeks, OpenAI has faced seven lawsuits alleging that ChatGPT contributed to suicides or mental health breakdowns. In a recent conversation at the Innovation@Brown Showcase, Brown University's Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, an early investor in mental health AI startups, discussed the controversial relationship between AI and mental health. Pavlick and Darabi weigh the pros and cons of applying AI to emotional well-being, from chatbot therapy to AI friends and romantic partners.

This is an abridged transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today's top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

A recent study showed that one of the major uses of ChatGPT is mental health, which makes a lot of people uneasy. Ellie, I want to start with you and the new institute you direct, ARIA, which stands for AI Research Institute on Interaction for AI Assistance. It's a consortium of experts from a number of universities, backed by $20 million in National Science Foundation funding. So what is the goal of ARIA? What are you hoping it delivers? Why is it here?

Pavlick: Mental health is something that is very, I would say, I don't even know if it's polarizing. I think many people's first reaction to the concept of AI mental health is negative. So as you can tell from the name, we didn't actually start as a group that was trying to work on mental health. We were a group of researchers who were interested in the biggest, hardest problems with current AI technologies. What are the hardest things that people are trying to apply AI to that we don't think the current technology is quite up for?
And mental health came up, and it was actually originally taken off our list of things we wanted to work on, because it is so scary to think about how big the risks are if you get it wrong. And then we came back to it exactly because of this. We basically realized that this is happening; people are already using it. There are companies, startups, some of them probably doing a great job, some of them not. The truth is we actually have a hard time even differentiating those right now. And then there are a ton of people just going to chatbots and using them as therapists.

And so we're like, the worst thing that could happen is that we don't have good scientific leadership around this. How do we decide what this technology can and can't do? How do we evaluate these kinds of things? How do we build it safely in a way that we can trust? There's a demand for answers to questions like these, and the reality is most of them we just can't answer right now. They depend on an understanding of the AI that we don't yet have. An understanding of humans and mental health that we don't yet have. A level of discourse that society isn't up for. We don't have the vocabulary, we don't have the terms. There's just a lot we can't do yet to make this happen the right way. So that's what ARIA is trying to provide: this public sector, academic kind of voice to help lead this discussion.

That's right. Soraya, you're not waiting for this data to come out, or for the final word from academia or this consortium. You're already investing in companies that do this. I know you're an early-stage investor in Slingshot AI, which delivers mental health support via the app Ash. Is Ash the kind of service that Ellie and her group should be wary about? What were you thinking about when you decided to make this investment?

Darabi: Well, actually, I'm not hearing that Ellie's wary. I think she's being really pragmatic and realistic.
In broad brushstrokes, zooming back and talking about the sobering facts and the scale of this problem: one billion out of eight billion people struggle with some sort of mental health issue. Fewer than 50% of people seek out treatment, and the people who do often find the cost prohibitive.

That recent study you cited is probably the one from the Harvard Business Review, which came out in March of this year and studied use cases of ChatGPT. Its analysis showed that the No. 1, 4, and 7 of the top 10 use cases for foundational models broadly are therapy or mental health related. I mean, we're talking about something that touches half of the planet. If you're looking at investing with an ethical lens, there's no greater TAM [total addressable market] than people who have a mental health disorder of some sort.

We've known the Slingshot AI team, which has built the largest foundational model for psychology, for over a decade. We've followed their careers. We think exceptionally highly of the advisory board and panel they put together. But what really led us down the rabbit hole of caring deeply enough about mental health and AI to start a fund dedicated to it, which we did in December of last year, was the fact that AI therapy is so stigmatized, and people hear it and immediately jump to the wrong conclusions. They jump to the hyperbolic examples of suicide. And yes, it's terrible. There have been incidents of deep codependence upon ChatGPT or otherwise, whereby young people in particular are susceptible to very scary things. And yet those salacious headlines don't represent the vast number of folks whom we think will be well served by these technologies.

You used this phrase: we kind of stumbled on [these] uses for ChatGPT. It's not what it was created for, and yet people love it for that.
Darabi: It makes me think about 20 years ago, when everybody was freaking out about the fact that kids were on video games all day, and now because of that we have Khan Academy and Duolingo. Fearmongering is good, actually, because it creates a precedent for the guardrails that I think are absolutely necessary to safeguard our children from anything that could be disastrous. But at the same time, if we run in fear, we're just repeating history, and it's probably time to just embrace the snowball, which will become an avalanche in mere seconds. AI is going to be omnipresent. Everything that we see and touch will be in some way supercharged by AI. So if we're not understanding it to our deepest capabilities, then we're actually doing ourselves a great disservice.

Pavlick: To this point that people are drawn to AI for this particular use case: on our team in ARIA, we have a lot of computer scientists who build AI systems, but actually a lot of our teams do developmental psychology, core cognitive science, neuroscience. There are questions to ask about the whys and the hows. What are people getting out of this? What need is it filling? I think this is a really important question to be asking soon.

I think you're completely right that fearmongering has a positive role to play. You don't want to get too caught up in it, and you can point historically to examples where people freaked out and it turned out okay. There are also cases like social media, where maybe people didn't freak out enough, and I would not say it turned out okay. People can agree to disagree, and there are pluses and minuses, but the point is that these are questions we are now in a position to start asking. You can't do things perfectly, but you can run studies. You can say, "What is the process that's happening? What is it like when someone's talking to a chatbot? Is it similar to talking to a human? What is missing there? Is this going to be okay long-term?
What about young people who are doing this in core developmental stages? What about somebody in a state of acute psychological distress, as opposed to using it as a general maintenance thing? What about somebody who's struggling with substance abuse?" These are all different questions, and they're going to have different answers. Again, a lot is unknown, but I feel very strongly, and would bet, that one LLM serving as a single interface for everything is not going to be the final thing that we want.
Just days before Thanksgiving, as Americans shop at supermarkets nationwide for their holiday meals, Ambriola Company, which makes some Boar's Head products, has issued a recall for select pecorino romano cheese products due to possible contamination with listeria. Supreme Service Solutions LLC, also known as Supreme Deli, is assisting in the Class I recall. There have been no illnesses or consumer complaints reported to date for items purchased from Supreme.

What is listeria, and what are the symptoms?

Listeria monocytogenes is a type of disease-causing bacteria that is generally transmitted when food is harvested, processed, prepared, packed, transported, or stored in manufacturing or production environments contaminated with the bacteria, according to the FDA. Infection can lead to severe symptoms, such as fever, nausea, abdominal pain, and diarrhea, and poses a particular risk to vulnerable populations, including pregnant women, the elderly, and those with weakened immune systems. In pregnant women, it can cause miscarriages and stillbirths.

What is the product information for the recall?

Ambriola Company has issued a recall for select SKUs of pecorino romano cheese products, including two products it produces under the Boar's Head brand label. Details for the affected products are as follows:

BOAR'S HEAD GRATED PECORINO ROMANO CHEESE
Item code: 858
Size: 6 oz.
Case UPC: 042421-05858
Sell-by dates: 11/21/25-3/12/26

BOAR'S HEAD PECORINO ROMANO CHEESE
Item code: 15119
Size: 6 oz.
Case UPC: 042421-15119
Sell-by dates: 11/21/25-3/12/26

In addition, out of an abundance of caution, Boar's Head has decided to withdraw all of Ambriola Company's products for Boar's Head. This includes the following additional product NOT affected by the recall:

PRE-CUT PECORINO ROMANO
Item code: 15160
Case UPC: 042421-15160
Sell-by dates: 11/25/25-5/11/26

Recalled items were distributed in Kroger retail stores located in Kentucky and Indiana.
Products are packaged in clear-plastic grab-and-go containers of various sizes with the appearance of deli salads and wraps. The retail packaged items are:

Product name: EverRoast Chicken Caesar Salad
Barcode UPC: 850042244142
Best-by dates: 11/9/2025-11/22/2025

Product name: EverRoast Chicken Caesar Wrap
Barcode UPC: 85004224455
Best-by dates: 11/9/2025-11/22/2025

What if I have these products in my freezer?

Consumers who have purchased the recalled products with the above lot codes should not consume them and should discard them. Consumers with questions or concerns about their health should contact their physician. Consumers with questions may contact Ambriola Company by email at info@ambriola.com.