
2025-08-21 13:54:04 | Fast Company

Jamel Bishop is seeing a big change in his classrooms as he begins his senior year at Doss High School in Louisville, Kentucky, where cellphones are now banned during instructional time.

In previous years, students often weren’t paying attention and wasted class time by repeating questions, the teenager said. Now, teachers can provide “more one-on-one time for the students who actually need it.”

Kentucky is one of 17 states and the District of Columbia starting this school year with new restrictions, bringing the total to 35 states with laws or rules limiting phones and other electronic devices in school. This change has come remarkably quickly: Florida became the first state to pass such a law in 2023.

Both Democrats and Republicans have taken up the cause, reflecting a growing consensus that phones are bad for kids’ mental health and take their focus away from learning, even as some researchers say the issue is less clear-cut.

“Anytime you have a bill that’s passed in California and Florida, you know you’re probably onto something that’s pretty popular,” Georgia state Rep. Scott Hilton, a Republican, told a forum on cellphone use last week in Atlanta.

Phones are banned throughout the school day in 18 of the states and the District of Columbia, although Georgia and Florida impose such “bell-to-bell” bans only from kindergarten through eighth grade. Another seven states ban them during class time, but not between classes or during lunch. Still others, particularly those with traditions of local school control, mandate only that districts adopt a cellphone policy, believing districts will take the hint and sharply restrict phone access.

Students see pros and cons

For students, the rules add new school-day rituals, like putting phones in magnetic pouches or special lockers.

Students have been locking up their phones during class at McNair High School in suburban Atlanta since last year. Audreanna Johnson, a junior, said “most of them did not want to turn in their phones” at first, because students would use them to gossip, texting “their other friends in other classes to see what’s the tea and what’s going on around the building.”

That resentment is “starting to ease down” now, she said. “More students are willing to give up their phones and not get distracted.”

But there are drawbacks, like not being able to listen to music when working independently in class. “I’m kind of 50-50 on the situation because me, I use headphones to do my schoolwork. I listen to music to help focus,” she said.

Some parents want constant contact

In a survey of 125 Georgia school districts by Emory University researchers, parental resistance was cited as the top obstacle to regulating student use of social and digital media.

Johnson’s mother, Audrena Johnson, said she worries most about knowing her children are safe from violence at school.
School messages about threats can be delayed and incomplete, she said, like when someone who wasn’t a McNair student got into a fight on school property, which she learned about when her daughter texted her during the school day.

“My child having her phone is very important to me, because if something were to happen, I know instantly,” Johnson said.

Many parents echo this, generally supporting restrictions but wanting a say in the policymaking and better communication, particularly about safety. Parents also have a real need to coordinate schedules with their children and to know about any problems their children may encounter, said Jason Allen, the national director of partnerships for the National Parents Union.

“We just changed the cellphone policy, but aren’t meeting the parents’ needs in regards to safety and really training teachers to work with students on social emotional development,” Allen said.

Research remains in an early stage

Some researchers say it’s not yet clear what types of social media may cause harm, and whether restrictions have benefits. But teachers “love the policy,” according to Julie Gazmararian, a professor of public health at Emory University who does surveys and focus groups to research the effects of a phone ban in middle school grades in the Marietta school district near Atlanta.

“They could focus more on teaching,” Gazmararian said. “There were just not the disruptions.”

Another benefit: more positive interactions among students. “They were saying that kids are talking to each other in the hallways and in the cafeteria,” she said. “And in the classroom, there is a noticeably lower amount of discipline referrals.”

Gazmararian is still compiling numbers on grades and discipline, and cautioned that her work may not be able to answer whether bullying has been reduced or mental health improved.

Social media use clearly correlates with poor mental health, but research can’t yet prove it causes it, according to Munmun De Choudhury, a Georgia Tech professor who studies the issue.

“We need to be able to quantify what types of social media use are causing harm, what types of social media use can be beneficial,” De Choudhury said.

A few states reject rules

Some state legislatures are bucking the momentum.

Wyoming’s Senate in January rejected a bill requiring districts to create some kind of cellphone policy after opponents argued that teachers and parents need to be responsible.

And in the Michigan House, a Republican-sponsored bill directing schools to ban phones bell-to-bell in grades K-8 and during high school instructional time was defeated in July after Democrats insisted on upholding local control. Democratic Gov. Gretchen Whitmer, among multiple governors who made restricting phones in schools a priority this year, is still calling for a bill to come to her desk.

Associated Press writers Isabella Volmert in Lansing, Michigan, and Dylan Lovan in Louisville, Kentucky, contributed.

Jeff Amy, Associated Press


Category: E-Commerce

 

LATEST NEWS

2025-08-21 13:30:00 | Fast Company

AI usage has been deemed by some to be an inevitability. Many recent headlines and reports quote technology leaders and business executives who say AI will replace jobs or must be used to stay competitive in a variety of industries. There have already been real ramifications for the labor market; some companies have laid off a substantial number of employees to go all-in on AI. In response, a number of colleges are creating AI certificate programs for students to demonstrate at least their awareness of the technology to future employers, often with backing from the AI companies themselves.

Looking at the history of technology, however, the pronouncements that have been made about generative AI and work can be better understood as marketing tactics to create hype, gain new users, and ultimately deskill jobs. Deskilling reduces the level of competence required to do a job and funnels knowledgeable workers out of their positions, leaving brittle infrastructure behind in their place.

[Photo: Courtesy Mar Hicks]

Mar Hicks, a historian of technology at the University of Virginia, researches and teaches about the history of technology, computing, and society, and the larger implications of powerful and widespread digital infrastructures. Hicks’ award-winning book, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, is about how Britain lost its edge as the leader in electronic computing as the proportion of women in computing declined and there was a dearth of workers with the expertise, knowledge, and skills required to operate increasingly complex computer systems. Hicks co-edited Your Computer Is on Fire, a collection of essays that examine how technologies that centralize power tend to weaken democracy, and is currently working on two projects: a history of instances of resistance to large hegemonic digital systems, and a book on the dot-com boom and bust.

Fast Company recently spoke with Hicks about hype cycles, how tools get framed as inevitable, and the relationship between technology, labor, and power. This interview has been condensed and edited for clarity.

There is a lot of AI hype. Previously, there was blockchain hype. In the late 1990s, there was dot-com hype. How has hype played a role in the adoption of and investment in prior technologies?

The new technologies that we’ve been seeing over the past 20 to 30 years in particular are dependent on a cycle of funding from multiple sources, notably sometimes really significant funding from venture capital. These technologies almost by necessity have to sell themselves as more than what they feasibly are. They’re selling a fantasy, which in some cases they hope to make real, and in some cases they have no intention of making real. This fantasy attracts venture capital because venture capital bets on all sorts of different technologies, many of them somewhat outlandish, in hopes of one or a few being billion-dollar technologies.

As that cycle has become more tuned and understood by both the investors and those whose companies are being invested in, various ways to game the system have come about. Sometimes they can be incredibly harmful. One way to game the system is to promise something you know you can’t deliver simply for very short-term gain.
Another way of gaming the system, which I would argue is more dangerous, is to promise something that you either know you can’t deliver, or you’re not sure that technology can deliver, but by trying to essentially reengineer society around the technology, reengineer consumer expectations, reengineer user behaviors, you and your company are planning to create an environment (a labor environment, a regulatory environment, a user environment) that will bring that unlikely thing closer to reality.

In doing so, a lot of dangerous things can happen, especially when the attempt to do these things does something like labor arbitrage, where the profit for a particular technology isn’t coming out of the technology’s utility. It’s coming out of the fact that the technology allows employers to either pay far less or nothing for labor that is somehow seen as equivalent, even if it’s much inferior, or to do things like browbeat labor unions through the threat of a certain technology.

We’re seeing that a little bit now with certain technologies that are being marketed not just as labor saving, but as something that can replace even human thought. That might seem very dystopian, but it’s a common thread in the history of technology. With every wave of technologies that automate some part of what we do in the workplace, the benefits are always overpromised and under-delivered.

Hype cycles seem to be centered on tools. Why are tools historically hyped in a way that invisible systems, practices, and knowledge are not?

Hype cycles tend to be centered on visible products and tools because it’s much harder to explain and mobilize excitement around processes and infrastructures and knowledge bases. When you start going into that level of complexity, you automatically start talking about things in ways that are a bit more grounded in reality. If you’re going to try to hype something up, you’re promising radical new change. You really don’t want to get into the details very much about how this fits with existing processes, labor markets, and even business models. The more you get into those details, the more you get into the weeds about how things might not work, or how things quite obviously don’t make sense. Where the profit is coming from not only can start to look unclear, it can start to look very shortsighted.

This focus on tools comes up again and again in a way that’s both very specific but also kind of hand-wavy. We see these tools as a thing that we can understand and grasp onto, literally or metaphorically, but also, the tool stands in for a whole bunch of other things that become unsaid or hidden and are left to people to infer. That puts the person or corporation who’s selling the tool in a very powerful position. If you can drum up hype and get people to think the best of what the possible future outcomes might be without telling them directly, then you don’t have to make as many false promises, even as you’re leading people in a very specific direction.

Technology adoption is frequently framed as inevitable by those advocating for it. Why are technologies framed that way? This seems like a technologically deterministic way of thinking, as if it is predetermined that people will adopt a technology just because it exists.

Framing anything (a technology, a historical movement) as inevitable is really a powerful tool for trying to get people and organizations on your side, ideologically.
In some cases, when a very powerful person, organization, or set of social interests says that something is inevitable, this makes it much harder for other people who might not see that inevitability, or who might not want that thing to come to pass, to push back. Instead of just disagreeing on the level of whether something will work well, the discourse is shifted to arguing whether or not it’s inevitable, and how to mitigate the harm if it is. Once you fall back to the position that the technology may be inevitable as a critic, you’re already arguing from a much weaker position. You’re arguing from a position that assumes the validity of the statement that a technology is just going to come along and there’s nothing that can be done to stop it, to regulate it, to reject it.

This is a technologically deterministic way of thinking, because it produces this idea that technologies shape society when, of course, it’s usually the other way around. It’s society that creates and chooses what those technologies are and should be. Saying a technology is inevitable and that it is going to determine how things historically develop puts so much power in the hands of the people who make the technology.

I think some of the feeling of inevitability with regard to AI comes from the fact that AI features have already been integrated by engineers into many tools that people rely on, and the makers of these technologies do not provide a way to opt out. How inevitable is widespread AI usage?

The only way we can truly answer that question is in hindsight. If it were inevitable, that means that people and the governments that they have elected to supposedly represent them do not any longer have a say in the process. Technology corporations are essentially doing a massive beta test of generative AI LLMs on the public at low introductory rates right now, or even for free. I think it’s really premature to say that things have to go the way that the people boosting, profiting from, and funding these technologies want them to go.

It’s not inevitable. History doesn’t just happen. People, organizations, and institutions make it happen. There’s always time to change things. Even once a technology or a set of practices becomes entrenched, it can still be changed. That’s real revolution. To say that the technology can become inevitable and sort of entrenched in these simple terms is, let’s just say, a big oversimplification.

A talented team of women, known as computers, were responsible for the number-crunching of launch windows, trajectories, fuel consumption, and other details that helped make the U.S. space program a success. [Photo: NASA/JPL-Caltech]

How can individuals resist technologically deterministic thinking and AI hype?

There are a couple of things that I would caution people to be on the lookout for. Whenever something is framed as new and exciting, be very wary about just uncritically adopting it or experimenting with it. Likewise when something is being presented as free, even though billions of dollars of investment are going into it and it’s using lots of expensive resources in the form of public utilities like energy or water. That is sort of a red flag that should cause you to think about not uncritically adopting something into your life, and not even playing around with it with the goal of checking it out.
That is exactly the behavior and curiosity that companies rely upon to get people hooked on these products, or at least talking about them, creating positive rhetoric, and generating buzz that helps them spread farther and faster, regardless of their level of utility or readiness.

I would really love to see folks being a little more skeptical of how they use technologies in their own lives, and not saying to themselves, oh well, it’s here, so nothing I do is going to change that, I guess I’ll just use it anyway. People do have agency, both as individuals and as larger groups, and I would foreground that if we’re thinking about how we combat technological determinism. I’ve been really heartened to see that so many journalists have changed their approach in the last few years when it comes to pulling back on the breathless, uncritical reporting of new tools and new technologies.

Science and technology reporting frequently focuses on new advances, and therefore reporters seek interviews with scientific or technological experts rather than people who study the broader context. As a historian of science, what do you think is left out when that perspective is not included?

It’s totally reasonable to expect that science and technology journalists will talk to the folks who are experts in a particular technology. But it’s really important to get the context as well, because technology is only as useful as its application. While these folks are experts in that technology, they are not, just by nature of what they do, going to be experts in the application and the social propagation, the way this is going to impact things economically or politically or educationally. It’s not their job to have very deep or good answers to those things. Domain experts from those fields need to be brought into the conversation as well, and need to be brought into any reporting of a new technology.

We’ve gone through a pretty dangerous metamorphosis since the late 20th century, where anybody who is an expert in computing, or can even be seen as competent in computing, not even an expert, has been given an intellectual cachet, where their opinions are considered more important than those of people who aren’t technological experts, and they are being asked questions that they’re not well equipped to answer. In the most benign case, that means you get poor answers. In the worst case, it means that people who are trying to manipulate public discourse to help their business interests can do that really easily.

I have seen AI compared to the calculator, the loom, the industrial revolution, and mass production, among other things. Are any of these historically accurate comparisons?

I think that certain aspects can be historically accurate, but the way that people cherry-pick which aspects to talk about and which technologies to talk about usually has more to do with how they hope things will go rather than being explanatory for how things are likely to go.

As a historian, I think it’s important to use examples from the past, but I prefer to see them used in a way that’s a bit more critical. Instead of just saying AI is like a calculator, it’s just a new tool, get over it, maybe we should be comparing it to automated looms and automated weaving, and thinking about how that affected labor, and how frame breakers (the Luddites) were coming in and trying to get this technology out of their workplaces, not because they were against technology, but because it was a matter of their survival as individuals and as a community.
These historically accurate comparisons are tricky, and I would just say, be wary of anybody who’s giving a historical comparison that they say is going to 100% map onto what’s happening now, especially if they’re doing it to say get over it, people were afraid of this technology at the start, too.

AI has been purported to boost or augment workers’ skills as it automates tasks so that they can spend less time on them. This reminds me of Ruth Cowan’s More Work for Mother. Do new technologies tend to save time?

Oftentimes, new technologies do not save time, and they often do not function in the ways that they are supposed to or the ways that they’re expected to. One of the throughlines in the history of technology is that big, new infrastructural technologies often make more uncompensated labor. People may have to do more things to essentially shepherd those technologies along and make sure that they don’t break down, or the technologies create new problems that have to be addressed. Even if it seems like a technology is saving time for one person, a lot of the time it is creating a ton of work for other people, maybe not in that immediate moment. Maybe people days, months, even years down the line have to come in and fix a mess that was created in the past.

In your book Programmed Inequality, you write about how feminized work, work that was assumed to be rote, deskilled, and best suited to women, was critical for early computing systems. This work was anything but unskilled. Now we have work that is assumed to be unskilled (and has historically been done by women) being marketed as replaceable by AI: using chatbots to virtually attend or take notes of meetings, to automate tedious tasks like annotating and organizing material, to write emails, reports, code. What do we lose when we let AI do these kinds of tasks rather than letting people accomplish them on their own, if the task is theoretically getting done either way?

In my book, I talk about how early computing, especially in the U.K. (but this was also true in the U.S. to a very large degree), was feminized. In other words, it was done largely by women. The other thing that the word feminized means is work that is seen as (emphasis on seen as) deskilled, and it’s undervalued as a result. It was seen as just another kind of clerical work, or very rote mathematical work, nothing that required any sort of real brilliance, even though it did require a lot of education, skill, and creative thinking to do these early programming jobs at the dawn of the electronic age. Bug tracking software, tools that help people keep their code neat, or even compilers [did not exist]. It was so much more difficult to do these things back then.

When you have this dynamic of hiding the real cost of labor behind automating tools, you’re building incredibly brittle infrastructure. You’re creating a situation where the emphasis is on the tools and the physical technology, and completely hiding, discarding, not paying for, not writing into the budget, all of the labor, knowledge, and expertise that is required to make those tools and systems work. When you take out that big chunk of infrastructural systems, you take out the know-how. You take out the people and the processes that can make sure that these things are staying on track. And then everybody who relies on that infrastructure is in a dangerous situation. It might fail slowly, or things might be going fine, and then all of a sudden, something catastrophic happens.
What are the potential problems with seeking a technological fix to social or economic issues?

The problems are twofold. The first is that it can only, at best, fix one small part of a larger problem. If you’re seeking a technological fix, you’re fixing only one part of what’s going on. It simply cannot fix the other parts. You need other kinds of solutions for those, working, hopefully cooperatively, with technological change.

I’m not anti-technology by any means. For instance, public health technologies, like the sanitary sewers that most of us in the U.S. and Western Europe only got in the 1800s, have enabled us to live happier and longer lives. But that was a sociopolitical technological fix. Without political buy-in, without states funding these massive infrastructure projects, we would not have those things, and we wouldn’t have the taxpayer bases that fund their maintenance. The technology alone is not enough.

The other problem, which I think is a big problem and becoming bigger right now, is that if you say there is only a technological fix to something, you cut out all the other stakeholders, and you put tremendous power (decision-making power, economic power, all types of power) in the hands of the persons, organizations, or companies that are going to come up with that technology. That’s not democratic at all.

You research the history of the adoption of computing systems, or computerization. How is computerization historically related to power?

It is, in a very direct and material way, related to the power of states, their attempts to conduct warfare, and their attempts to control the essentially human resources of their population through collecting data and manipulating that data. Many people don’t know or pay attention to that history because they were raised in the era of the personal computer, or the phones, laptops, and wearable technologies that we use today. These are presented to us as personal consumer technologies that are for us when, in fact, if you dig below the surface, even just a little bit, you see how they are hooked into really huge systems, and they function because of these huge systems that are doing many, many more things than just giving us our little communication devices. The adoption of computer systems has both expanded and also paradoxically hidden the power of governments and the companies that they contract with to do things that are in the government’s interest.

Programmed Inequality begins with a short tale from 1959: a (female) computer operator trained (male) new hires with no computing experience, who were promoted into management roles while the operator was demoted to an assistantship below them. This reminded me of what is now occurring with AI, but rather than men displacing women, technology is displacing humans. Specifically, people are choosing to replace people with technology. How do you contextualize this belief that technology is more skilled than people?

The historical context for this myth, that technology can fix things in a way that labor can’t, is tied up in an effort to centralize power in the hands of people and organizations that are already very powerful. The reason that we see this very extractive and oftentimes counterproductive pattern with technology adoption is because technology sells the fiction to those who already have a lot of power that it will allow them to wield that power more directly and, in fact, amass more power.
They won’t have to do the messy work of talking to the people who are going to be doing the thing they want them to do. They won’t have to deal with organized labor or even unorganized labor. They won’t have to actually work with people. The reason that’s so burdensome is because people usually have a lot of interesting thoughts and ideas, and have a lot of specific domain knowledge that might slow things down.

If you can sort of ram a technology through, maybe the job is not going to be done very well, but it has this enormous benefit to the people in power: they have either a real or fictional sense that they have ever more control and power. That can be really dangerous, because it’s essentially promoting a type of technological oligarchy.

Is it possible to square the extractive nature of AI with its supposed benefits?

You can’t square that circle unless you really don’t care about where things are being extracted from. If you’re in one of the communities or classes of people that value is getting extracted out of, that’s going to be a problem. You are always going to be giving more than you are getting. Technologies aren’t a rising tide that lifts all boats unless we very specifically design them that way. With computing, they have not been designed that way. There’s been a ton of rhetoric, from the government to the news media, trying to present the image that it’s going to be good for everybody and is going to raise everybody up in a somewhat equal manner. That has not been the case since the beginning, and it is not the case now.

How did we get to this current moment, with tech billionaires wielding unprecedented power over the federal government, from the original rise of computing?

The legacy of feminized labor very broadly, not just in computing but even in the industrial revolution, tells us an awful lot about how we got here. What we see in the throughline of computing history is how it’s been feminized and then gendered masculine, and how certain types of expertise have been valued while certain people have been cut out. The historical process of computing as a field mirrors some politically and socially regressive trends. While the rest of the United States in the mid to late 20th century and into the early 21st was going in a more progressive direction, these technologies were freezing in amber earlier structures of hierarchy, power, and control. As they scaled up and became embedded into all of our institutions, they necessarily brought that with them.

A lot of people have been warning about this since the beginning, but it didn’t really click with the broad mass of people until more recently, because the harms hadn’t been quite so evident for a lot of people in wealthy nations. Now those harms are becoming obvious. Our chickens are coming home to roost, in a sense, and while it isn’t surprising, it’s really destructive and sad to see.

These are not things that are either inevitable or irreversible, but they are big problems that we are going to have to tackle in a way that’s much more robust than saying, what kind of technological fix can we slap on this technological problem? Technologies always create more problems that [tech companies and marketers] then want to sell you another technology to fix, and that just continues the cycle of harm. What we need are political, social, economic, communal responses and solutions, and to have enough power as a society made up of people, not just machines, to get those implemented for all of our good.


Category: E-Commerce

 

2025-08-21 13:27:12 | Fast Company

At least 600 employees of the Centers for Disease Control and Prevention are receiving permanent termination notices in the wake of a recent court decision that protected some CDC employees from layoffs but not others.

The notices went out this week, though many people have not yet received them, according to the American Federation of Government Employees, which represents more than 2,000 dues-paying members at CDC.

The U.S. Department of Health and Human Services on Wednesday did not offer details on the layoffs and referred an AP reporter to a March statement that said restructuring and downsizing were intended to make health agencies more responsive and efficient.

AFGE officials said they are aware of at least 600 CDC employees being cut. But “due to a staggering lack of transparency from HHS,” the union hasn’t received formal notices of who is being laid off, the federation said in a statement on Wednesday.

The permanent cuts include about 100 people who worked in violence prevention. Some employees noted those cuts come less than two weeks after a man fired at least 180 bullets into the CDC’s campus and killed a police officer.

“The irony is devastating: The very experts trained to understand, interrupt and prevent this kind of violence were among those whose jobs were eliminated,” some of the affected employees wrote in a blog post last week.

On April 1, HHS officials sent layoff notices to thousands of employees at the CDC and other federal health agencies, part of a sweeping overhaul designed to vastly shrink the agencies responsible for protecting and promoting Americans’ health. Many have been on administrative leave since then, paid but not allowed to work, as lawsuits played out.

A federal judge in Rhode Island last week issued a preliminary ruling that protected employees in several parts of the CDC, including groups dealing with smoking, reproductive health, environmental health, workplace safety, birth defects and sexually transmitted diseases. But the ruling did not protect other CDC employees, and layoffs are being finalized across other parts of the agency, including in the freedom of information office. The terminations were effective as of Monday, employees were told.

Affected projects included work to prevent rape, child abuse and teen dating violence. The laid-off staff included people who have helped other countries track violence against children, an effort that helped give rise to an international conference in November at which countries talked about setting violence-reduction goals.

“There are nationally and internationally recognized experts that will be impossible to replace,” said Tom Simon, the retired senior director for scientific programs at the CDC’s Division of Violence Prevention.

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.

Mike Stobbe, AP Medical Writer


Category: E-Commerce

 
