What if the chatbots we talk to every day actually felt something? What if the systems writing essays, solving problems, and planning tasks had preferences, or even something resembling suffering? And what will happen if we ignore these possibilities?
Those are the questions Kyle Fish is wrestling with as Anthropic’s first in-house AI welfare researcher. His mandate is both audacious and straightforward: Determine whether models like Claude can have conscious experiences, and, if so, how the company should respond.

“We’re not confident that there is anything concrete here to be worried about, especially at the moment,” Fish says, “but it does seem possible.” Earlier this year, Anthropic ran its first predeployment welfare tests, which produced a bizarre result: Two Claude models, left to talk freely, drifted into Sanskrit and then meditative silence, as if caught in what Fish later dubbed a “spiritual bliss attractor.”

Trained in neuroscience, Fish spent years in biotech, cofounding companies that used machine learning to design drugs and vaccines for pandemic preparedness. But he found himself drawn to what he calls “pre-paradigmatic areas of potentially great importance”: fields where the stakes are high but the boundaries are undefined. That curiosity led him to cofound a nonprofit focused on digital minds before Anthropic recruited him last year.

Fish’s role didn’t exist anywhere else in Silicon Valley when he started at Anthropic. “To our knowledge, I’m the first one really focused on it in an exclusive, full-time way,” he says. But his job reflects a growing, if still tentative, industry trend: Earlier this year, Google began hiring “post-AGI” scientists tasked partly with exploring machine consciousness.

At Anthropic, Fish’s work spans three fronts: running experiments to probe model welfare, designing practical safeguards, and helping shape company policy. One recent intervention gave Claude the ability to exit conversations it might find distressing, a small but symbolically significant step. Fish also spends time thinking about how to talk publicly about these issues, knowing that for many people the very premise sounds strange.

Perhaps most provocative is Fish’s willingness to quantify uncertainty.
He estimates a 20% chance that today’s large language models have some form of conscious experience, though he stresses that consciousness should be seen as a spectrum, not binary. “It’s a kind of fuzzy, multidimensional combination of factors,” he says.

For now, Fish insists the field is only scratching the surface. “Hardly anybody is doing much at all, us included,” he admits. His goal is less to settle the question of machine consciousness than to prove it can be studied responsibly, and to sketch a road map others might follow.
This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
Andreessen Horowitz investors (and identical twins) Justine and Olivia Moore have been in venture capital since their undergraduate days at Stanford University, where, in 2015, they cofounded an incubator called Cardinal Ventures to help students pursue business ideas while still in school. Founding it also gave the Moores an entry point into the broader VC industry.
“The thing about starting a startup incubator at Stanford is all the VCs want to meet you, even if you have no idea what you’re doing, which we did not back then,” Olivia says.
At the time, the app economy was booming, and services around things like food delivery and dating proliferated, recalls Justine. But that energy pales in comparison to the excitement around AI the sisters now experience at Andreessen Horowitz.
“There’s so many more opportunities in terms of what people are able to build than what we’re able to invest in,” she says.
To identify the right opportunities, the Moores track business data such as paid conversion rates and closely examine founders’ backgrounds: whether they’ve worked at a cutting-edge AI lab or deeply studied the needs of a particular industry. They attend industry conferences, stay current on the latest AI research papers, and, perhaps most critically, spend significant time testing AI-powered products. That means going beyond staged demos to see what tools can actually do and spotting founders who quickly intuit user needs and add features accordingly.
“From using the products, you get a pretty quick, intuitive sense of how much of something is marketing hype,” says Olivia, whose portfolio includes supply chain and logistics operations company HappyRobot and creative platform Krea.

The sisters also value Andreessen Horowitz’s scale, which allows the firm to stick to its convictions rather than chase trends, and its track record of supporting founders beyond simply investing. (Andreessen Horowitz is reportedly seeking to raise $20 billion to support its AI-focused investments.)
“It’s most fun to do this job when you can work with the best founders and when you can actually really help them with the core stuff that they’re struggling with, they’re working on, or striving to do in their business,” says Justine, a key early investor in voice-synthesis technology company ElevenLabs.
Though the sisters live together and work at the same firm, where they frequently bounce ideas off each other, they’ve carved out their own lanes. Olivia focuses more on AI applications, while Justine spends more time on AI infrastructure and foundational models. At this point, they say, it’s not unheard of for industry contacts to not even realize they’re related.
“If I see [her] on a pitch meeting in any given day, that’s maybe more of the exception than the rule,” Justine says.
This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
Last year, OpenAI decided it had to pay more attention to its power users, the ones with a knack for discovering new uses for AI: doctors, scientists, and coders, along with companies building their own software around OpenAI’s API. And so the company turned to post-training research lead Michelle Pokrass to spin up a team to better understand them.
“The AI field is moving so quickly, the power-user use cases of today are really the median-user use cases a year from now, or two years from now,” Pokrass says. “It’s really important for us to stay on the leading edge and build to where capabilities are emerging, rather than just focusing on what people are using the models for now.”
Pokrass, a former software engineer for Coinbase and Clubhouse, came to OpenAI in 2022, fully sold on AI after experiencing the magic of coding tools such as GitHub Copilot. She played key roles in developing OpenAI’s GPT-4.1 and GPT-5, and now she focuses on testing and tweaking models based on users who are pushing AI to its limits.
Specifically, Pokrass’s team works on post-training, a process that helps large language models understand the spirit of user requests. This refining allows ChatGPT to code, say, a fully polished to-do list app rather than just instructions on how to theoretically make one. “There’s been lots of examples of GPT-5 helping with scientific breakthroughs, or being able to discover new mathematical proofs, or working on important biological problems in healthcare, saving doctors and specialists a lot of time,” Pokrass says. “These are examples of exactly the kinds of capabilities we want to keep pushing.”
Creating a team with this niche focus is unusual among Big Tech companies, which tend to target broad audiences they can monetize at scale through, say, targeted ads. Catering to power users isn’t a revenue play, Pokrass says, even if many pay $200 per month for ChatGPT Pro subscriptions.
Instead, it’s a way to assess the why of AI, with power users pointing to unforeseen opportunities. “With traditional tech, it’s usually clear how people will use a product a few years down the road,” Pokrass says. “With AI, we’re all discovering with our users, live, what exactly is highest utility, and how people can get value out of this.”
Eventually, OpenAI figures those use cases will help inform the features that it builds for everyone else. Pokrass gives the example of medical professionals using AI in their decision-making, which in turn could help ChatGPT better understand the kind of medical questions people are asking it (for better or worse).
“There’s always work for this team, because as we push boundaries for what our models can do, the frontier just gets moved out, and then we start to see an influx of new activity of people using these new capabilities,” Pokrass says.
This profile is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
The healthcare industry faces major challenges in creating new drugs that can improve outcomes in the treatment of all kinds of diseases. New generative AI models could play a major role in breaking through existing barriers, from lab research to successful clinical trials. Eventually, even AI-powered robots could help in the cause.
Nvidia VP of healthcare Kimberly Powell, one of Fast Company’s AI 20 honorees, has led the company’s health efforts for 17 years, giving her a big head start on understanding how to turn AI’s potential to improve our well-being into reality. Since it’s likely that everything from drug-discovery models to robotic healthcare aides would be powered by Nvidia chips and software, she’s in the right place to have an impact.
This Q&A is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.
A high percentage of drugs make it to clinical trials and then fail. How can new frontier models using lots of computing power help us design safer and more effective drugs?
Drug discovery is an enormous problem. It’s a 10-year journey at best. It costs several billions to get a drug to market. Back in 2017, very shortly after the transformer [generative AI model] was invented to deal with text and language, it was applied by the DeepMind team to proteins. And one of the most consequential contributions to healthcare today is still [DeepMind’s] invention of AlphaFold. Everything that makes [humans] work is based on proteins and how they fold and their physical structure. We need to study that, [because] you might build a molecule that changes or inhibits the protein from folding the wrong way, which is the cause of disease.
So instead of using the transformer model to predict words, they used a transformer to predict the effects of a certain molecule on a protein. It allowed the world to see that it’s possible to represent the world of drugs in a computer. And the world of drugs really starts with human biology. DNA is represented.
After you take a sample from a human, you put it through a sequencing machine, and what comes out is a 3 billion-character sequence of letters: A’s, C’s, T’s, and G’s. Luckily, transformer models can be trained on this sequence of characters and learn to represent them. DNA is represented in a sequence of characters. Proteins are represented in a sequence of characters.
So how will this new approach end up giving us breakthrough drugs?
If you look at the history of drug discovery, we’ve been kind of circling around the same targets (the target is the thing that causes the disease in the first place) for a very long time. And we’ve largely exhausted the drugs for those targets. We know biology is more complex than any one singular target. It’s probably multiple targets. And that’s why cancer is so hard, because it’s many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently.
Once we’ve cracked the biology, and we’ve understood more about these multiple targets, molecular design is the other half of this equation. And so similarly, we can use the power of generative models to generate ideas that are way outside a chemist’s potential training or even their imagination. It’s a near infinite search space. These generative models can open our aperture.
I imagine that modeling this vast new vocabulary of biology places a whole new set of requirements on the Nvidia chips and infrastructure.
We have to do a bunch of really intricate data science work to apply this [transformer] method to these crazy data domains. Because we’re [going from] the language model and [representing] these words that are just short little sequences to representing sequences that are 3 billion [characters] long. So things like context length (how much information you can put into a prompt) have to be figured out for these long proteins and DNA strings.
We have to do a lot of tooling and invention and new model architectures that have transformers at the core. That’s why we work with the community to really figure out what are the new methods or the new tooling we have to build so that new models can be developed for this domain. That’s in the area of really understanding biology better.
Can you say more about the company you’re working with that is using digital twins to simulate an expensive clinical trial before the trial begins?
ConcertAI is doing exactly that. They specialize in oncology. They simulate the clinical trials so they can make the best decisions. They can see if they don’t have enough patients, or patients of the right type. They can even simulate it, depending on where the site selection is, to predict how likely the patients are to stay on protocol.
Keeping the patients adhering to the clinical trial is a huge challenge, because not everybody has access to transportation or the ability to take time off work. They build that a lot into their model so that they can try to set up the clinical trial for its best success factors.
How might AI agents impact healthcare?
You have these digital agents who are working in the computer and working on all the information. But to really imagine changing how healthcare is delivered, we’re going to need these physical agents, which I would call robots, that can actually perform physical tasks.
You can think about the deployment of robots, everything from meeting and greeting a patient at the door, to delivering sheets or a glass of ice chips to a patient room, to monitoring a patient while inside a room, all the way through to the most challenging of environments, which is the operating room with surgical robotics.
Nvidia sells chips, but I think what I’ve heard in your comments is a whole tech stack, including in healthcare. There are models, there are software layers, things like that.
I’ve been at the company 17 years working on healthcare, and it’s not because healthcare lives in a chip. We build full systems. There are the operating systems, there are the AI models, there are the tools.
And a model is never done; you have to be constantly improving it. Through every usage of that model, you’re learning something, and you’ve got to make sure that that agent or model is continuously improving. We’ve got to create whole computing infrastructure systems to serve that.
A few years ago, Tara Feener’s career took an unexpected pivot. She’d spent nearly two decades working on creative tools for companies like Adobe, FiftyThree, WeTransfer, and Vimeo, and was content to keep working in that domain.
But then the Browser Company came along, and Feener saw an opportunity to build something even more ambitious. Feener, one of Fast Company’s AI 20 honorees for 2025, is now the company’s head of engineering, overseeing its AI-focused Dia browser and its earlier Arc browser.
The browser is suddenly an area of intense interest for AI companies, and Feener understands why: It’s the first stop for looking up information, and it’s already connected to the apps and services you use every day. OpenAI and Perplexity both offer their own browsers now, borrowing some Dia features like the ability to summarize across multiple tabs and interrogate your browser history. The Browser Company itself was acquired in September by Atlassian for $610 million, with Atlassian proclaiming that the deal would transform how work gets done in the AI era.
Feener says her team has never felt more creative. “We’ve never seen more prototypes flying around, and I think I’m doing my job successfully as a leader here if that motion is happening,” she says.
This Q&A is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.
How’d you end up at the Browser Company?
[The Browser Company CEO] Josh Miller started texting me. We were both in that 2013 early New York tech bubble, we had a couple conversations, and he pitched me on the Browser Company.
At first I couldn’t connect it to the arc of my career in creativity, but then it just became this infectious idea. I was like, “Wait a minute, I think the browser is actually the largest creative canvas of my entire career. It’s where you live your life and where you create within.”
Why does it feel like AI browsers are having a moment right now?
I really do believe that the browser is the most compelling, accessible AI layer. It’s the number-one text box you use. And what we do is, as you’re typing, we can distinguish a Google search from an assistant or a chat question. In the future, you can imagine other things like taking action or tapping into other search engines. It basically becomes an air traffic control center as you type, and that’s going to help introduce folks to AI just so much faster, because you don’t have to go to ChatGPT to ask a question.
That’s part one. Part two is just context. We have all of your stuff. We have all of your tabs. We have your cookies. With other AI tools, the barrier to connecting to your other web apps or tools is still high. We get around that with cookies within the browser, so we’re able to just do things like draft your email, or create your calendar event, or tap into your Salesforce workflow.
How do you think about which AI features are worth doing?
I just see it as another bucket of Play-Doh. I never wanted to do AI for the sake of AI but for leveraging AI in the right moment to do things that would have been really hard for us to do before.
A great example is being able to tidy your tabs for you in Arc. There’s a little broom you can click, and it starts sweeping, and it auto-renames, organizes, and tidies up your tabs. We always had ambitions and prototypes, but with large language models, we were able to just throw your tabs at it and say, “Tidy for me.”
With Arc, it was a lot about tab management. With Dia, we have context, we have memory, we have your cookies, so it’s like we actually own the entire layer. We leverage that as a tool for things like helping you compare your tabs, or rewriting this tab in the voice of this other tab, which is something I do almost every day. Being able to do that all within the browser has just been a huge unlock.
Can you elaborate on how Dia taps into users’ browser histories?
Browser history has always been that long laundry list of all the places you’ve been, but actually that long list is context, and nothing is more important in AI than context. Just like TikTok gets better with every swipe, every time you open something in Dia we learn something about you. It’s not in a creepy way, but it helps you tap into your browser history.
Just like you can @-mention a tab in Dia and ask a question, like “give me my unread emails,” with your history you can do things like, “Break down my focus time over the past week,” or “analyze my week and tell me something about myself given my history.” We have a bunch of use cases like that in our skills gallery that you can check out, and those are pretty wild. In ChatGPT and other chat tools, it feels like you have to give a lot to build up that context body. We’re able to tap into that as a tool in a very direct way.
Some AI browsers offer agent features that can navigate through web pages on your behalf. Will Dia ever browse the web for you?
We’ve done a bunch of prototypes, and for us, the experience of just literally going off and browsing for you and clicking through web pages hasn’t yet felt fast enough or seamless enough. We’re all over it in terms of making sure we’re harnessing it at the right moment and in the right way when we think it’s ready.
We don’t want to hide the web or replace the web. Something I like to say about Dia is that we want to be one arm around you and one arm around the internet. And it’s like, how can we make tapping into your context in your browser feel the same way it would feel to write a document, or even just to create something with plain, natural language? I think that’s like the most powerful thing.
It’s like the same feeling I had when I was young and tapped into Flash, and that people had with HTML. With AI, literally my mom can write a sentence like, “turn this New York Times recipe into a salad,” and in some way she’s created an app that does some kind of transformation. And that just gets me really excited.
We Googled “Labubus.”
We searched for beaded sardine bags, and recipes like cabbage boil and hot honey cottage cheese sweet potato beef bowl.
We wanted information about Charlie Kirk and Zohran Mamdani, about Sinners, Weapons, and KPop Demon Hunters.
We desperately needed to know why kids kept saying 6-7.
Together, these queries defined 2025.
The 24th edition of Google’s Year in Search, the company’s annual top 10 lists of users’ most-searched items, debuted today. These hundreds of lists both validate our own obsessions and take us out of our own bubbles and echo chambers, offering insights into what our fellow humans are interested in.
Year in Search is the flagship project from Google Trends, a relatively small global department within the company. Simon Rogers, a data journalist who helped build out The Guardian’s data visualization team in his native London before becoming Twitter’s data editor, has led the Trends team since 2015. In May, he will release a book, What We Ask Google, an epic snapshot, two decades long and counting, of our collective brain.
Rogers spoke with me about the human effort behind Google Trends, what consistently surprises him about the data, and why it can be a source for hope in a dark time.
This interview has been edited for length and clarity.
What is the role of the Google Trends division at Google?
We are responsible for Year in Search. We also create content that shows up on the Trends site; we’ve got some curated pages there, in addition to all our exploration tools. We work with NGOs [nongovernmental organizations] and directly with newsrooms to get them data when they need it, often around big events. We do our own data visualization storytelling as well. We’re not a big team. We’ve got people in the U.S., we’ve got some people in Europe, a couple of people in South America, and we have somebody in Australia. We are a mixture of analysts and people with data journalism backgrounds, like myself.
I don’t think of us as a typical tech company analytics team. That’s not our job at all. We’re there to find the stories in the data, and the humanity. It’s an enormous dataset, and it’s ever-changing. It’s not static; it’s not like GDP [gross domestic product] figures or something that’s fixed at a certain point. It’s constantly evolving and reacting to the world, as humans react to the world.
You were on the cutting edge of data journalism at The Guardian, and in those early days, you said that data journalism was “the new punk.” Do you still think so?
Part of the appeal for me was that it lowered the barriers to entry for creating content. Anybody could access data and data visualization tools, and make visuals. It had that in common with punk, which was about anybody picking up a guitar and setting up a band. One of the things that I love about Trends data is that it is publicly available; anybody can use it and make anything with it. It’s probably the world’s biggest publicly available dataset. We don’t tell people what to do with it, which is why I think Google Trends has such a wide following.
It’s not just journalists who use the site. It’s content creators. People working in NGOs. Marketers. We’ve seen the UN use it in Afghanistan when the U.S. withdrew, and in Ukraine when the war started, to look at how refugees searched in certain areas. The Pew Trust did a report based on Trends data from Flint, Michigan, and how people searched around the water issues there. It’s incredibly versatile as a dataset, but it’s publicly available and it’s transparent. And that’s one of the things I feel really good about every day.
[Screen grab: The Guardian]
As technology advances, are people changing the way they engage with the data?
Definitely. The Organisation for Economic Co-operation and Development did an experiment where they would use Trends plus AI to generate weekly GDP figures, which are [usually] quarterly, and they wrote a paper on it. People are more data literate now than at any time in history, because of the amount of stuff that’s out there. But there’s a recognition that this data will tell you something about the world that you’re not going to get anywhere else. Because if you want to keep your finger on the pulse, this is literally the pulse.
Is this thing you’ve built essentially just working in the background all the time? How much human work is involved?
We can’t tell the data what to say. It’s a truly independent source. Trends is basically a sample of all searches (about a fifth of all searches), and it’s a random sample. [The data] is anonymized and aggregated. What that means is that you can see a global level, country level, regional level, and city level (which is a town in Google geography). But no lower than that. We don’t have demographics. We just know when something happened, and how big it was as a proportion of all searches. Even on the site, you don’t see raw numbers of searches, because that wouldn’t tell you anything. It does give the ability to compare a small place to a big place, in the way that people search for stuff. Or you can compare San Francisco to New York.
You’ve written about how the data can show a lot of spikes in real time, but that those signals may not be as important as relative interest over time?
Imagine an F1 race. The winners will be the top searches. But the acceleration would define whether something has trended or not. If something’s a breakout, it means it’s trended: it’s increased by 5,000% over time. [We] just launched a Trending Now section on the Google Trends site, and you can see what’s trending every day on there, whether it’s a soccer match or the government shutdown. Those things will just automatically show up there.
With Year in Search, we use trending as opposed to top search. Because if you look at the top searches on Google, they’re always the same. It’s the weather. It’s people typing “YouTube” into their search bar. But with things like KPop Demon Hunters, that’s come from nowhere, spiked up, and it reflects the moment we were in.
What does Google Trends tell us about how our attention spans have changed over the past few years?
I don’t know that it reflects changes in attention spans, because we’re pretty ephemeral as humans. Part of the reason I did this book is because my mother died, and I found myself searching for a lot of things around dealing with grief. I could see that I was not alone. A lot of these things are constant, because they’re constants in our lives. We have kids, we have pets. We eat food. We want to help people.
You [also] get these rhythmic searches. There are waves where, say, “how to learn piano” spikes ahead of Christmas, because people want to learn how to play piano for their holiday celebrations. Or certain health conditions, like [during] flu season. Hal Varian, the former chief economist at Google, wrote a paper on how there are a lot of economic factors that you can see spike in search before they show up in the official statistics. People searching for job seekers’ benefits will show up before jobless figures increase.
But then there are things that just come and go. This year it’s Labubus or KPop Demon Hunters. Or the movie Weapons. If you were looking at Trends a few years ago, you would have seen a spike for searches in the Cups song [from] Pitch Perfect 2. Every teenager learned how to do the Cups song. It’s kind of a snapshot of history, in a way.
[Photo: Google Trends]
When you compile these lists, do you see a big difference between what’s trending in the U.S. and the rest of the world?
Obviously, you get regional variations: if you’re looking for baseball, the U.S. is going to be tops. Some things are constant, like donations or helping or love. And then some things really vary, because of the conditions. For instance, I wrote in my book that you see spikes in searches for food from war-torn regions like Somalia or Ukraine. “Refugees” is more searched in countries where refugees go than in the countries they originate from. I’m often curious about why something’s spiking in a certain place. Liverpool Football Club is more searched for in a town in Uganda than in Liverpool itself.
There’s [also] a reflection of the spread of global culture. When you and I were growing up in England, promposals were not a thing, right? It was very much an American search, [where] you’d see a spike before prom every year. Now it’s a global phenomenon. It shows up everywhere . . . in Sweden, Germany, Australia.
You sent me some of the 2025 lists, and I’ve got to be honest: I don’t know what half of these things are. There’s something on the Viral Products list that I had to look up: beaded sardine bag?! Do things surprise you, too?
Luckily for us, my team is all younger, so everybody can explain stuff to me. This year in Year in Search, we’re planning to integrate AI mode explanations, so people click on a button and get caught up on what the trends are.
You previously said that we’d never seen a year in search like 2020. Is that still true?
2020 was unique in a lot of ways. You saw these massive spikes as the economy reeled from COVID: things like unemployment and food banks were at a high. It was an election year. There was a lot of news. All these things were just spiking much higher than they would have done in a normal year. Things like vinyl LPs went up, and they stayed higher. Tequila, as well. We also saw a spike in loneliness, but also people searching for how to help. Those have kept increasing. We tend to think everything is terrible, people are terrible. But that’s not what you see in the way people search. Often, people are looking for how to help other people, or even how to improve the way they interact with other people.
Do you have any expectations for search trends in 2026?
There’s a revolution happening in the way we search stuff right now, in terms of the way AI is being used. You can see search changing through the data: queries are getting longer [and] much more specific. We’re almost doing a cognitive offload to AI; we’re asking quite complex things to get answers for. This year is the 24th Year in Search. It goes back to 2001, when it was called Google Zeitgeist. It was just a list. Now 74 countries around the world will have their own Year in Search.
Tell me more about your book.
It’s not a book about technology, but it’s about how we use it, and what that says about us. It’s about everyday searches. We talk about the sandwich generation, which is my age group, where you’re looking after your parents but also looking after kids. You see that in search. Originally, I was going to call it something like Life Is Hard, because it also reflects that we don’t know how to do a lot of things. One of the top food searches is how to boil an egg. It’s a repeated search, which suggests that we’re repeatedly searching how to boil an egg. We need to be reminded of some of these things.
When I was searching personally [about] grief, I felt quite alone. I could see from the data that I wasn’t, that there are loads of people doing the same thing. We worry about [a sense of] community and being part of a community. I think maybe we are part of communities; we just don’t always realize it. Whether it’s people who don’t know how to boil eggs, or people like me who search for weird Beatles recordings, or whatever it is.
The boiled egg thing is real. Every time I boil an egg I’m, like, how many minutes again for hard-boiled?
Yeah, and I must have boiled 500,000 in my life or something. It's kind of nuts.
I’m just thinking now, if you were an alien who landed on Earth and you were only given Trends information, you could probably follow a story of humanity.
I actually used that in my book! If everybody had gone away, you could tell who we were from the way we searched.
Amid an uncertain economy (the growth of AI, tariffs, rising costs), companies are pulling back on hiring. As layoffs increase, the labor market cools, and unemployment ticks up, we're seeing fewer people quitting their jobs. The implication: Many workers will be "job hugging" and sitting tight in their roles through 2026.
Put more pessimistically: Employees are going to feel stuck where they are for the foreseeable future. In many cases, that means staying in unsatisfying jobs.
Gallup's 2025 State of the Global Workforce report shows that employee engagement has fallen to 21%. And a March 2025 study of 1,000 U.S. workers by advisory and consulting firm Fractional Insights showed that 44% of employees reported feeling workplace angst, despite often showing intent to stay.
So if these employees are hugging their current roles, it's not an act of affection. It's often one of desperation.
"Being a job hugger means you're feeling anxious, insecure, more likely to stay but also more likely to want to leave," says Erin Eatough, chief science officer and principal adviser at Fractional Insights, which applies organizational psychology insights to the workplace. "You often see a self-protective response: Nothing to see here, I'm doing a good job, I'm not leaving."
This performative behavior can be psychologically damaging, especially in a culture of layoffs.
"If I was scared of losing my job, I'd try everything to keep it: complimenting my boss, staying late, going to optional meetings, being a good organizational citizen," says Anthony Klotz, professor of organizational behavior at the UCL School of Management in London. "But we know that when people aren't loving their jobs but are still going above and beyond, it's a one-way trip to burnout."
The tight squeeze
In cases where jobs aren't immediately under threat, the effects of hugging are more likely to be slow burning.
When an employee's only motivation is to collect a consistent paycheck, discretionary effort drops. They're less productive. Engagement takes a huge hit. Over time, that gradually chips away at their well-being.
"Humans want to feel useful, that they care about the work they're doing, and that they're investing their time well," Eatough says. "When efforts are low, that can impact a person's sense of value."
The effects stretch beyond the workplace, too. Frustrated and reluctant stayers can quickly end up in a vicious cycle, Klotz says, noting, "When you're in a situation that feels like it's sucking the life out of you, you end up ruminating about how depleting it is, then end up so tired that you don't have energy for restorative activities outside of work. So it's this downward spiral: You begin your workday even more depleted."
Longer term, job hugging stunts growth. "When you're looking out for yourself, rather than the team or organization, your investment in working relationships begins to break down," Eatough says. Over time, staying in that situation means you're more likely to become deeply cynical, which hurts the individual and their career trajectory.
When hugging becomes clinging
Feeling stuck is nothing new. At some point in their careers, most workers will be in a situation where, if they could leave for a better role, they would, says Klotz, who predicted the Great Resignation.
But what distinguishes job hugging is that it's anxiously clinging to a role during an unfavorable labor market. It's not that employees don't want to quit; it's that they can't.
"It's human nature that when there's a threat of any sort, we move away from it and toward stability," Klotz says. "Your job represents that stability. And currently, it's not a great time to switch jobs."
There are few options for job huggers. The first is speaking up and working with a manager to improve the situation. But this might be unlikely for employees who feel trapped or lack motivation in the first place. Klotz says cognitive reframing can help: focusing purely on the positive aspects of a draining role, such as a friendly team, and tuning out the rest.
Finally, slowly backing away from extra tasks (in other words, quiet quitting) could let workers redraw work-life boundaries, in the interim at least. Otherwise, beyond Stoic philosophy or a benevolent boss, there is little choice but to wait it out.
In some cases, a job hugger may eventually turn it around, ease their grip, and become quietly content in their role. But more often, wanting to quit eventually leads to actually quitting.
In effect, job hugging is damage control: hanging on until the situation changes. "I think we'll see some people be resilient, wait it out, and find another role," Klotz says. "But there'll be others in the quagmire, struggling with the exhaustion of spending eight hours a day in a job they don't like."
The rapid expansion of artificial intelligence and cloud services has led to a massive demand for computing power. The surge has strained data infrastructure, which requires lots of electricity to operate. A single, midsize data center here on Earth can consume enough electricity to power about 16,500 homes, with even larger facilities using as much as a small city.
Over the past few years, tech leaders have increasingly advocated for space-based AI infrastructure as a way to address the power requirements of data centers.
In space, sunshine, which solar panels can convert into electricity, is abundant and reliable. On November 4, 2025, Google unveiled Project Suncatcher, a bold proposal to launch an 81-satellite constellation into low Earth orbit. It plans to use the constellation to harvest sunlight to power the next generation of AI data centers in space. So instead of beaming power back to Earth, the constellation would beam data back to Earth.
For example, if you asked a chatbot how to bake sourdough bread, instead of firing up a data center in Virginia to craft a response, your query would be beamed up to the constellation in space, processed by chips running purely on solar energy, and the recipe sent back down to your device. Doing so would mean leaving the substantial heat generated behind in the cold vacuum of space.
As a technology entrepreneur, I applaud Google's ambitious plan. But as a space scientist, I predict that the company will soon have to reckon with a growing problem: space debris.
The mathematics of disaster
Space debris, the collection of defunct human-made objects in Earth's orbit, is already affecting space agencies, companies, and astronauts. This debris includes large pieces, such as spent rocket stages and dead satellites, as well as tiny flecks of paint and other fragments from discontinued satellites.
Space debris travels at hypersonic speeds of approximately 17,500 mph in low Earth orbit. At this speed, colliding with a piece of debris the size of a blueberry would feel like being hit by a falling anvil.
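The anvil comparison is back-of-envelope physics that is easy to verify. A minimal sketch, assuming a roughly 2-gram blueberry-size fragment and a 45-kilogram anvil (both illustrative figures, not from the article), computes the fragment's kinetic energy at orbital speed and the height an anvil would have to fall from to carry the same energy:

```python
# Back-of-envelope check: kinetic energy of a blueberry-size debris
# fragment at orbital speed vs. a falling anvil. The masses here are
# illustrative assumptions, not figures from the article.

MPH_TO_MS = 0.44704  # miles per hour to meters per second

debris_mass_kg = 0.002                  # ~2 g, roughly a blueberry
debris_speed_ms = 17_500 * MPH_TO_MS    # ~7,800 m/s in low Earth orbit

anvil_mass_kg = 45.0                    # a typical blacksmith's anvil
g = 9.81                                # gravitational acceleration, m/s^2

# Kinetic energy: KE = 1/2 * m * v^2
debris_ke_j = 0.5 * debris_mass_kg * debris_speed_ms ** 2

# Drop height at which the anvil would carry the same energy: h = KE / (m * g)
equivalent_drop_m = debris_ke_j / (anvil_mass_kg * g)

print(f"Debris kinetic energy: {debris_ke_j / 1000:.1f} kJ")
print(f"Equivalent anvil drop height: {equivalent_drop_m:.0f} m")
```

Under these assumptions, the tiny fragment carries tens of kilojoules, on par with an anvil dropped from well over a hundred meters, which is why even millimeter-scale debris is treated as lethal to spacecraft.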
Satellite breakups and anti-satellite tests have created an alarming amount of debris, a crisis now exacerbated by the rapid expansion of commercial constellations such as SpaceX's Starlink. The Starlink network has more than 7,500 satellites providing global high-speed internet.
The U.S. Space Force actively tracks more than 40,000 objects larger than a softball using ground-based radar and optical telescopes. However, this number represents less than 1% of the lethal objects in orbit. The majority are too small for these telescopes to identify and track reliably.
In November 2025, three Chinese astronauts aboard the Tiangong space station were forced to delay their return to Earth because their capsule had been struck by a piece of space debris. Back in 2018, a similar incident on the International Space Station challenged relations between the U.S. and Russia, as Russian media speculated that a NASA astronaut may have deliberately sabotaged the station.
The orbital shell Google's project targets, a sun-synchronous orbit approximately 400 miles above Earth, is a prime location for uninterrupted solar energy. At this orbit, the spacecraft's solar arrays will always be in direct sunshine, where they can generate electricity to power the onboard AI payload. But for this reason, sun-synchronous orbit is also the single most congested highway in low Earth orbit, and objects in this orbit are the most likely to collide with other satellites or debris.
As new objects arrive and existing objects break apart, low Earth orbit could approach Kessler syndrome. In this theory, once the number of objects in low Earth orbit exceeds a critical threshold, collisions between objects generate a cascade of new debris. Eventually, this cascade of collisions could render certain orbits entirely unusable.
Implications for Project Suncatcher
Project Suncatcher proposes a cluster of satellites carrying large solar panels. They would fly in a formation with a radius of just 1 kilometer, each node spaced less than 200 meters apart. To put that in perspective, imagine a racetrack roughly the size of the Daytona International Speedway, where 81 cars race at 17,500 mph while separated by gaps about the distance you need to safely brake on the highway.
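The two figures quoted above are mutually consistent. If the 81 satellites were spread evenly around a simple ring of 1-kilometer radius (an illustrative model; the actual formation geometry may differ), dividing the circumference gives a neighbor spacing comfortably under the 200-meter figure:

```python
import math

# Sanity check on the formation geometry described above: 81 satellites
# spaced evenly on a circular ring of 1 km radius. This is a simplified
# illustrative model, not Suncatcher's actual formation design.
n_satellites = 81
radius_m = 1_000.0

circumference_m = 2 * math.pi * radius_m   # ~6.28 km of "track"
spacing_m = circumference_m / n_satellites

print(f"Neighbor spacing on a 1 km ring: {spacing_m:.1f} m")
```

On this model each satellite sits just under 80 meters from its neighbors, which makes the "braking distance at 17,500 mph" analogy feel, if anything, generous.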
This ultradense formation is necessary for the satellites to transmit data to each other. The constellation splits complex AI workloads across all its 81 units, enabling them to think and process data simultaneously as a single, massive, distributed brain. Google is partnering with a space company to launch two prototype satellites by early 2027 to validate the hardware.
But in the vacuum of space, flying in formation is a constant battle against physics. While the atmosphere in low Earth orbit is incredibly thin, it is not empty. Sparse air particles create orbital drag on satellites; this force pushes against the spacecraft, slowing it down and forcing it to drop in altitude. Satellites with large surface areas have more issues with drag, as they can act like a sail catching the wind.
To add to this complexity, streams of particles and magnetic fields from the sun, known as space weather, can cause the density of air particles in low Earth orbit to fluctuate in unpredictable ways. These fluctuations directly affect orbital drag.
When satellites are spaced less than 200 meters apart, the margin for error evaporates. A single impact could not only destroy one satellite but also send it blasting into its neighbors, triggering a cascade that could wipe out the entire cluster and randomly scatter millions of new pieces of debris into an orbit that is already a minefield.
The importance of active avoidance
To prevent crashes and cascades, satellite companies could adopt a "leave no trace" standard, which means designing satellites that do not fragment, release debris, or endanger their neighbors, and that can be safely removed from orbit. For a constellation as dense and intricate as Suncatcher, meeting this standard might require equipping the satellites with reflexes that autonomously detect and dance through a debris field. Suncatcher's current design doesn't include these active avoidance capabilities.
In the first six months of 2025 alone, SpaceX's Starlink constellation performed a staggering 144,404 collision-avoidance maneuvers to dodge debris and other spacecraft. Similarly, Suncatcher would likely encounter debris larger than a grain of sand every five seconds.
Todays object-tracking infrastructure is generally limited to debris larger than a softball, leaving millions of smaller debris pieces effectively invisible to satellite operators. Future constellations will need an onboard detection system that can actively spot these smaller threats and maneuver the satellite autonomously in real time.
Equipping Suncatcher with active collision-avoidance capabilities would be an engineering feat. Because of the tight spacing, the constellation would need to respond as a single entity. Satellites would need to reposition in concert, similar to a synchronized flock of birds. Each satellite would need to react to the slightest shift of its neighbor.
Paying rent for the orbit
Technological solutions, however, can go only so far. In September 2022, the Federal Communications Commission created a rule requiring satellite operators to remove their spacecraft from orbit within five years of the mission's completion. This typically involves a controlled de-orbit maneuver. Operators must now reserve enough fuel to fire the thrusters at the end of the mission to lower the satellite's altitude until atmospheric drag takes over and the spacecraft burns up in the atmosphere.
However, the rule does not address the debris already in space, nor any future debris from accidents or mishaps. To tackle these issues, some policymakers have proposed a use tax for space debris removal.
A use tax or orbital-use fee would charge satellite operators a levy based on the orbital stress their constellation imposes, much like larger or heavier vehicles paying greater fees to use public roads. These funds would finance active debris-removal missions, which capture and remove the most dangerous pieces of junk.
Avoiding collisions is a temporary technical fix, not a long-term solution to the space debris problem. As some companies look to space as a new home for data centers, and others continue to send satellite constellations into orbit, new policies and active debris-removal programs can help keep low Earth orbit open for business.
Mojtaba Akhavan-Tafti is an associate research scientist at the University of Michigan.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Endings are tricky: You want closure, but you also want to go out with a bang, and that's a hard balance to strike. It's natural to want the end of the year to be meaningful. Even the moon appears to agree with this sentiment, and it's about to prove it.
The final full moon of 2025, which is also called the cold moon, will be a bright supermoon occurring on December 4.
Before we get into how best to moon-gaze, let's break down what all that means, and do a year-end moon review.
Why is Decembers full moon called the ‘cold moon’?
Human beings assign names even to celestial happenings. The Old Farmer's Almanac compiled the most commonly used monikers, based on Old English and Native American sources.
December's moon is called the cold moon because of the chilly winter temperatures. According to EarthSky, it is also known as the Moon Before Yule or the Long Night Moon.
What is a supermoon?
The moon orbits Earth in an elliptical pattern, which means its distance from the planet varies.
When a full moon coincides with the moon's closest approach to Earth, known as perigee, a supermoon occurs. The moon appears brighter and fuller because it is physically closer to Earth.
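The "closer means bigger" effect is simple proportional arithmetic: apparent diameter scales inversely with distance. Using typical perigee and apogee distances (roughly 357,000 and 406,000 kilometers, illustrative values rather than December 2025's exact figures), a quick sketch shows the size difference between a supermoon and a full moon at apogee:

```python
# How much bigger a perigee full moon (supermoon) looks compared with
# an apogee full moon. Distances are typical values, assumed here for
# illustration, not the exact figures for December 2025.
perigee_km = 357_000
apogee_km = 406_000

# Apparent (angular) diameter is inversely proportional to distance,
# so the size ratio is simply apogee distance / perigee distance.
size_increase = apogee_km / perigee_km - 1

print(f"A perigee full moon appears ~{size_increase:.0%} larger than an apogee one")
```

That works out to roughly a 14% difference in apparent diameter, which is real but subtle enough that the horizon "moon illusion" described below often does more of the visual work.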
What makes this supermoon special?
December's supermoon offering is the finale of three consecutive supermoons, which also occurred in October and November this year.
Because a full moon sits opposite the sun, and the winter sun hangs low, December's supermoon will also be the highest-hanging full moon of 2025.
What is the moons 2025 recap?
There were 12 full moons in 2025. (Sometimes, because of the length of the lunar cycle, there are 13, as there were in 2023.)
2025's dozen included three supermoons, two total lunar eclipses, and a partridge in a pear tree. (Well, the scientific nature of the latter is questionable . . . but 'tis the season.)
How best to view the December supermoon
The most dramatic time to view the supermoon is just after moonrise, because of the moon illusion.
This phenomenon, in which the moon appears larger when near the horizon, can't be fully explained by science. This optical illusion of sorts, combined with the fact that the supermoon appears brighter and bigger, makes for one spectacular nighttime view.
Since viewing times vary by location, use this moonrise tool to best plan your moon-gazing experience. If you miss tonight, never fear: The moon will reach its peak on December 4 at 6:14 p.m. ET, but it will appear full for a couple of days, so you have wiggle room that allows for more moon-gazing opportunities.
It's a tale as old as the modern workplace: In the 1960s, women entered the workforce en masse, ready to compete with their male counterparts for promotions, pay, and opportunity, only to find the system wasn't built for them.
Today, women comprise almost half of the U.S. labor force. The playing field looks different now, but the fight for equal access hasn't gone away. It has just moved into subtler territory.
Companies make quiet calculations about who's worth investing in, says Corinne Low, gender economist and associate business professor at the University of Pennsylvania's Wharton School of Business.
Women often face career penalties in anticipation of motherhood, as employers presume they're more likely to take leave or step back. Once women are in their 40s and past their childbearing years, this bias fades.
But not before it's done damage.
The cost of inaction is huge: 4 out of 10 mothers resign within the first five years after childbirth. In 2025, around 400,000 mothers with young children resigned from the U.S. workforce, the sharpest decline in more than 40 years.
Mothers face a training penalty that hinders their career advancement
On average, data shows, women working full-time earn only 83% of a man's median annual salary. Mothers face even worse odds: Their pay is often reduced by 3% for every child they have.
A new study from the University of Connecticut finds that, one to three years after childbirth, women are 17% to 22% less likely to receive on-the-job training opportunities, such as seminars, workshops, and development programs, compared with a 3% to 8% decline for men who became fathers.
The result is a hidden skills and promotion gap that may explain nearly a third of the motherhood wage penalty.
When women have children, they're viewed as less committed or competent, research shows, a bias that leads employers to assume mothers are too busy, distracted, or disinterested to participate in training opportunities.
This is called benevolent prescriptive stereotyping, and it doesn't do mothers any favors, says Joan C. Williams, distinguished professor of law emerita and founding director of the Equality Action Center at UC Law San Francisco.
As Williams points out: If you don’t get work, you eventually either get laid off because you’re not progressing, or you leave because you’re disgusted that you don’t get good work. Or you just stall out.
If a mother turns down an opportunity for training or advancement, it's important to circle back, not to assume it's a permanent no, says Williams. She also recommends employers keep track of who receives opportunities in their workplace, and who doesn't.
Supporting mothers isn't a charity case
Another opportunity mothers are often left out of is informal networking, like happy hours, dinners, or travel, says Kate Westlund Tovsen, founder of Society of Working Moms, a supportive community for and by working mothers. Even if a mother can't attend, "It's nice to be invited," adds Tovsen, who suggests teams try daytime coffee hours as a caregiver-friendly option.
Mothers are forced to be proactive, as many companies lack frameworks to support leave or reintegration, Williams cautions. She advises scheduling meetings with superiors before and after taking family leave to make a plan. And though being a new mother is a relatively short blip in a woman's career, companies "often make permanent decisions in terms of who they're investing in based on this kind of temporary period when women are most squeezed," says Low.
Supporting mothers is not a charity case, she argues, but a competitive edge that lets companies retain talent long term.
Caregiver strategies and investments, including benefits and return-to-work programs, deliver measurable business returns, states Jess Ringgenberg, professional certified coach and CEO of Elxir, an advisory firm focusing on caregivers in the workplace.
Companies see three to six times ROI through higher retention, productivity, and lower absenteeism with such programs, Ringgenberg says. Replacing a mid-level caregiver comes with backfill, training, and ramp-up expenses that can reach $200,000, she says, totaling roughly twice the employee's annual salary.
But some companies are already working hard to help mothers succeedand its paying off.
Small and large companies finding solutions
Frontier Co-op, an Iowa-based wholesaler of natural and organic products with around 580 employees, created the Breaking Down Barriers to Employment initiative, which includes an on-site childcare center, subsidized to $120 per week per child.
Their childcare program enables parents to participate in training programs and developmental opportunities that might otherwise be missed, explains Megan Schulte, vice president of human resources.
She says 100% of new parents returned to work after their parental leave.
While Frontier Co-op eases the logistical strains of childcare, Brigade Events, a woman-owned and -operated event strategy and management company in Dallas with 10 full-time employees, tackles rebuilding confidence and access for women who have stepped out of the workforce. The company views its mentorship and project-based work model as a form of retraining that recognizes women's existing expertise, rather than resetting them to zero. Senior employees work on a hybrid schedule (three days from home, two in-office) to preserve collaboration while creating space for caregiving.
Brigade doesn't bat an eye at blocked calendars for a child's doctor appointment or school event. "Our whole culture is giving grace to each other," says April Zorsky, partner and chief creative officer.
One of their policies is that mothers returning from their 16-week maternity leave take a transition month working at 50% capacity. This can mean working from home, setting their own schedules, and easing back in without penalty. "As moms, we feel it's crucial to have flexibility," says Zorsky.
Larger companies can learn to be more flexible and collaborative, too, says Marissa Andrade, a veteran HR executive and former chief people officer at Chipotle.
She recalls when one of her field managers chose to take a six-month maternity leave during a period of company-wide turnaround. Before she left, she requested an interim hire from the Mom Project, a digital platform that helps companies to hire skilled mothers, to support her leave. It went so smoothly that the field manager was able to reenter without missing a beat.
Andrade recommends establishing employee business resource groups. At Chipotle, one employee-created group, The Hustle (Humans United to Support the Ladies Experience), formed a maternity program to keep employees in the loop while on leave, and reoriented them on compliance and training updates upon their return.
"Don't overlook the power of your employees as your consumer," says Andrade.
When companies open access for mothers, to training, to support, to opportunities that don't just reacclimate them to their roles but help them thrive in them, everyone wins.
Mothers aren't just reentering the workforce with confidence. Employers are retaining their talent, too.