
2025-07-29 06:00:00| Fast Company

Dearest reader, I hope this article finds you in good health. I deeply desire also that if you use generative AI to boost your productivity at work, that you, for all that is good and holy, review everything it produces, lest it hallucinate data or quotes or address your boss by the wrong name, and you fall on your face and embarrass yourself. Sincerely, Your unchecked AI results

AI is taking the workforce by storm and stealth, as the rules for how to use it are still being written and employees are left to experiment. Many employees are under pressure to adopt AI: Some companies, such as Shopify and Duolingo, are requiring employees to use AI, while others are ratcheting up productivity expectations so high that some workers may be using it just to meet demands. This creates an environment ripe for mistakes: we've seen Grok spew hate speech on X, and more recently an AI agent deleted an entire database of executive contacts belonging to investor Jason Lemkin. Funnily enough, no one wants to share their own AI-induced flub, but everyone has a story to tell about someone else's. These AI nightmares range from embarrassing to potentially fireable offenses, but together they reveal why there always needs to be a human in the loop.

The Email You Obviously Didn't Write

Failing to review AI-generated content seems to be the most common mistake workers are making, and it's producing errors big and small. On the small side, one worker in tech sales who asked to remain anonymous tells Fast Company her colleague used to ask ChatGPT to write natural-sounding sales emails, then contacted clients with Dickensian messages that began, "I hope this email finds you in good health."

The Slackbot Gone Awry

Similarly, Clemens Rychlik, COO at marketing firm Hello Operator, says a colleague let ChatGPT draft Slack replies largely unchecked, and addressed him as Clarence instead of Clemens. When Clemens replied in good fun, calling his colleague the wrong name too, their reaction was, of course, guilt and shame, and the responses after that were definitely human.

The Inappropriate Business Recommendation

On the larger side, some people are using AI to generate information for clients without checking the results, which compromises the quality of their work. Alex Smereczniak is the CEO of the startup Franzy, a marketplace for buying and selling franchise opportunities. His company uses a specially trained LLM on the back end to help customers find franchises, but Smereczniak says their clients often don't know this. So when one client asked to see opportunities for wellness-focused franchises, and the account manager recommended she open a Dave's Hot Chicken, she was less than pleased. Smereczniak says the employee came clean and told the customer he had used AI to generate her matches.

"We took a closer look and realized the model had been weighting certain factors like profitability and brand growth too heavily, without enough context on the prospect's personal values," says Smereczniak. "We quickly updated the model's training data and reweighted a few inputs to better reflect those lifestyle preferences." When the Franzy team fired up the AI again, the model made better recommendations, and the customer was happy with the new matches.

"At a startup, things are moving a million miles a minute," Smereczniak says. "I think, in general, it's good for us all to remind ourselves when we are doing something client-facing or externally. It's okay to slow down and double-check, and triple-check, the AI."
The Hallucinated Source

Some companies have used AI mistakes to improve their work processes, which was the case at Michelle's employer, a PR firm. (Michelle is a pseudonym, as she's not technically allowed to embarrass her employer in writing.)

Michelle tells Fast Company that a colleague used Claude, Anthropic's AI assistant, to generate a ghostwritten report on behalf of a client. Unfortunately, Claude hallucinated, citing imaginary sources and quoting imaginary experts. "The quote in this piece was attributed to a made-up employee from one of the top three largest asset management firms in the world," she says. But the report was published anyway. Michelle's company found out by way of an angry email from the asset management firm.

"We were obviously all mortified," Michelle says. "This had never happened before. We thought it was a process that would take place super easily and streamline the content creation process. But unfortunately, this snafu took place instead."

Ultimately, the company saved face all around by simply owning up to the error, and it successfully retained the account. The PR firm told the client and the asset management firm exactly how the error occurred and assured them it wouldn't happen again thanks to new protocols.

Despite the flub, the firm didn't ban the use of AI for content creation (they want to be on the leading edge of tech), nor did it solely blame the employee (who kept their job), but it did install a series of serious checks in its workflow, and now all AI-generated content must be reviewed by at least four employees.

"It's a mistake that could have happened to anyone," Michelle says. "AI is a powerful accelerator, but without human oversight, it can push you right off a cliff."

The AI-Powered Job Application

AI use isn't just happening on the job; sometimes it's happening during the job interview itself. Reality Defender, a company that makes deepfake detection software, asks its job candidates to complete take-home projects as part of the interview process. Ironically, it's not uncommon for those take-home tests to be completed with AI assistance.

As far as Reality Defender is concerned, "everyone assumes, and rightfully so, that AI is being used in either the conception or full-on completion for a lot of tasks these days," a rep for the company tells Fast Company. But it's one thing to use AI to augment your work by polishing a résumé or punching up a cover letter, and another to have it simply do the work for you. Reality Defender wants candidates to be transparent. "Be very upfront about your usage of AI," they said. "In fact, we encourage that discretion and disclosure and see that as a positive, not a negative. If you are out there saying, 'Hey, I did this with artificial intelligence, and it's gotten me to here, but I am perfectly capable of doing this without artificial intelligence, albeit in a different way,' you are not only an honest person, but it shows your level of proficiency with artificial intelligence."

"Personally, I don't think it's necessarily bad to use [AI] to some extent, but at the very, very least, you want to check what's being written and reviewed before we share it," says Rychlik at Hello Operator. "More broadly, I ask everyone to pause regularly on this, because if your first instinct is always 'ask GPT,' you risk worsening your critical thinking capabilities."

Rychlik is tapping into a common sentiment we noticed. On the whole, companies are trying to use mistakes as a learning opportunity to ask for transparency and improve processes.
We're in an age of AI experimentation, and smart companies understand mistakes are the cost of experimentation. In this experimental stage, organizations and employees using AI at work look tech-savvy rather than careless, and we're just finding out where the boundaries are. For now, many workers seem to have adopted a policy of asking for forgiveness rather than permission.


Category: E-Commerce

 

LATEST NEWS

2025-07-29 04:16:00| Fast Company

This article is republished with permission from Wonder Tools, a newsletter that helps you discover the most useful sites and apps. Subscribe here.

Four new AI tools caught my attention recently for solving specific problems well. They are free to try and quick to learn, and they point toward where AI is heading.

1. Lovart: Create a brand kit or marketing campaign with an AI design agent

Lovart's conversational interface allows you to generate posters, social posts, branding kits, storyboards, even packaging. Unlike other image generation tools, you can generate dozens of images from a single prompt, then iterate on the results in a chat dialogue. You can also edit the images. I used an eraser to remove stray text in a promo poster.

Pricing: Free (limited use), or $15 to $26/month billed annually for additional usage and pro models.

2. Little Language Lessons: Brush up on French, Spanish, or other languages

Polish your linguistic skills in three different ways using Google's Little Language Lessons. Unlike Duolingo, Babbel, and other subscription language-learning systems, this is completely free. It's just for micro-learning (picking up some words, phrases, and grammar), not for developing full fluency.

Tiny Lessons: Pick from a long list of languages and type in a scenario, like hosting a meeting or going to a concert. Learn related words and phrases.

Slang Hang: Catch up on popular new chitchat by watching a conversation thread between native speakers. While listening, you'll see the translation.

Word Cam: Snap a picture to get translations of objects in the image, along with related phrases.

Tip: Use this app on a mobile device; it will be handier for capturing images than your computer's webcam.

3. Gemini Scheduled Actions: Set up simple AI automations

Scheduled actions are an emerging format where AI assistants send you personalized updates. You design the task and choose its frequency. ChatGPT Tasks, Perplexity Tasks, and Gemini's Scheduled Actions are three I've been testing. Get notified when a task is completed by email, push notification, or within the app. Here are a few examples.

Generate a summary of headlines on your niche topic. I get positive news memos to counter the weight of news negativity. Ask for one-sentence takeaways, source links, specific subtopics, or whatever else interests you.

Get weather-related wardrobe suggestions. Create morning weather updates with outfit ideas based on a list of wardrobe items you provide for personalized guidance.

Plan a creative spark moment. Get a daily (or weekly) prompt for a creative activity: writing, drawing, journaling, cooking, or whatever you love.

Catch up on your favorite teams, shows, or bands. Request updates on your favorite artists or athletes. Unlike services like Google Alerts, these AI actions let you use natural language to detail your personal interests.

Explore new restaurants to try. Ask for a weekly summary of new nearby eateries, cafés, or dessert spots, with whatever criteria matter to you most.

4. MyLens: Create an infographic from a link, YouTube video, or text

Creating infographics can be complicated and time-consuming. I've been experimenting with MyLens to convert raw material into visuals.

How it works: Paste in text or upload a PDF, image, or CSV/Excel file. Or add a link to a site, article, or YouTube video.

What you can make: Generate timelines, flowcharts, tables, or quadrant diagrams. Or upload data to create line, bar, or doughnut charts. Watch MyLens's one-minute demo video to see it in action.

Pricing: Free to create three non-editable, public infographics (stories) a day, or $9/month billed annually for 300 monthly editable creations.

Alternatives: I've covered Napkin.ai, Venngage, and apps for creating timelines.

This article is republished with permission from Wonder Tools, a newsletter that helps you discover the most useful sites and apps. Subscribe here.


Category: E-Commerce

 

2025-07-29 00:00:00| Fast Company

When we started Equal Research Day on June 10, 2022, the anniversary of women finally being included in U.S. clinical research in 1993, we intended it to be a celebration of progress and a call for more inclusive science. We wanted to mark how far we'd come and how much opportunity still lay ahead. We never imagined that just three years later, we'd be fighting to keep that progress from being undone.

The Trump administration's ongoing federal actions targeting women, diversity, and equity, such as budget cuts affecting critical research funding and the sporadic erasure of critical data and education, have already caused massive damage and hindered progress for health parity in only five short months. We're just beginning to wrap our minds around the lost progress and bleak future that we're facing if there is no change of course. And we don't have time, let alone four years, to wait on continuing health parity work, for women and for all marginalized groups harmed by the administration's actions.

If it feels like we are going back in time, it's because we are. As founders building the future of women's health, we can't stay quiet. We are witnessing the erasure of women, again.

Medicine's long history of leaving women behind

While women weren't required to be included in clinical research until 1993, the National Institutes of Health (NIH) didn't require researchers to account for sex as a biological variable until 2016. While some progress has been made, even in 2024 we were far from closing the research gap, particularly for marginalized and underrepresented groups.

Because women have been left out of research for so long, many of the drugs, diagnostics, and standards of care we rely on today were never tested on women's bodies. As a result, women are diagnosed, on average, four years later than men across hundreds of diseases. Women are more likely to die in surgery if their surgeon is a man, and women are twice as likely to die after a heart attack, compared to men. We're more likely to be misdiagnosed, to experience severe medication side effects, and to be told our symptoms are all in our heads. Behind already, we're taking massive steps backwards in closing the gender health gap and reaching health equity.

In 2025, history is repeating itself

This year alone, the NIH slashed $2.6 billion in contracts, plus an additional $9.5 billion for research grants, a devastating blow to women's health research. The Women's Health Initiative (WHI), a decades-long study of 160,000 women critical for better understanding chronic disease, hormone therapy, and more, was abruptly defunded in April (an apparent reversal of the cut was later confirmed in May), leaving the WHI in limbo for weeks. The Centers for Disease Control (CDC) fired 18% of its staff, including entire teams dedicated to maternal health, contraceptive guidance, and drug-resistant sexually transmitted infection (STI) tracking. And the National Science Foundation (NSF) canceled over 1,400 grants, especially those tied to gender, equity, or health disparities.
Federal agencies were given directives to reject funding for any research grants that include "banned words" such as "women," "trans," or "diversity" at the NIH, and, for the NSF, an even longer list, including:

- "Female" and "women," but not "male" or "men"
- "Male dominated"
- "Gender"
- "Equity"
- "Diversity"
- "Minority"
- "Underrepresented"
- "Antiracist"
- "Trauma"
- "Biases"
- "Disability"
- "Inclusion"
- "Victims"
- "Racially"

This is targeted, strategic, and deeply dangerous, not only for women, but for all underserved and under-researched groups that need the funding and research the most.

Data and education are disappearing, too

As if defunding wasn't enough, the federal government scrubbed over 8,000 public health web pages. These included critical health guidance on contraception, LGBTQ+ health, STIs, and maternal outcomes. Some of the pages were hastily scrubbed and restored while missing key facts, essentially erasing certain groups. The CDC removed or changed key datasets and web pages on the LGBTQ+ community and other underrepresented, marginalized groups. The CDC also pulled fact sheets on HIV prevention, HIV diagnosis, and transmission, and then republished some of the information, leaving out transgender people. The FDA also took down an entire website dedicated to minority health and health equity.

This kind of censorship isn't just alarming; it's life-threatening. If we can't see the data, we can't measure the problem. And if we can't measure the problem, we can't fix it.

This is more than a research crisis. It's a public health emergency, and it will hit the most vulnerable communities the hardest. The U.S. has the highest maternal mortality rate of any wealthy nation. Erasing programs like PRAMS, which monitors postpartum complications, means entire states are now totally unequipped to track what happens to postpartum women. Shuttering research labs and programs on STIs, HIV, and sexual health will hinder progress for women's sexual health and disease prevention, particularly for women and LGBTQ communities. Finally, widespread government directives to cut research funding for anyone who focuses on gender threaten to undo all the progress we've made since 1993, and this, in turn, hinders what we can change moving forward.

We know that when women are under-researched, we pay the highest price. Women already spend 25% more of their lives in worse health than men. And 64% of common medical interventions are less effective or less accessible for women, compared to only 10% for men. For every woman diagnosed with a women's health issue, approximately four are not diagnosed. (There are 97 similar statistics published in our book, 100 Effed Facts About The Gender Health Gap.) This will only get worse with the current federal actions.

What can be done

While some companies and researchers are stepping in to fill the void, in reality no private innovation can replace the scale, accountability, and public good of federally funded research. As founders of a women's health company, we believe more than anyone in the power of private, high-growth solutions for the world's most pressing problems. We are doing our part at Evvy. But even we don't see the path through without government investment. Alone, we simply can't approach the scope and magnitude of what the government can do to help the more than 50% of the population who deserve better. Startups can pilot new tools, but they can't collect longitudinal data on maternal mortality across all 50 states.
Academic labs can push science forward, but they can't maintain national health surveillance systems. The erosion of public health infrastructure means we're losing the connective tissue that links discovery to care. And without it, even the best innovations risk being isolated solutions in a broken system.

This isn't just about research; it's about rights. It's about refusing to let an entire half of the population be sidelined under the excuse of cost cutting. We need to fund the science that sees us, protect the data that tells our stories, and build a healthcare system where women's bodies are studied, understood, and prioritized. We can fight for funding, for research, for truth. And, most importantly, we can fight to make sure women are never again an afterthought in the story of medicine.

To help, join the Equal Research Day campaign to demand equal research funding for women, or donate to nonprofits funding critical research like Women's Health Access Matters and the Foundation for Women's Health.

Priyanka Jain is CEO and cofounder of Evvy. Laine Bruzek and Pita Navarro are cofounders of Evvy.


Category: E-Commerce

 
