
2026-02-13 13:41:00| Fast Company

From AI tools to self-driving cars, new technologies regularly tout themselves as being autonomous. Yet their companies often have to recruit us humans for help in unexpected ways. The most recent example comes courtesy of Waymo's self-driving cars. The Alphabet-owned company has been hiring DoorDash drivers to close vehicle doors after a passenger leaves them open, CNBC reports. Yes, Waymo's whole thing is driverless cars, but it needs another type of driver to show up and fix the simplest things. The arguably embarrassing predicament came to light when an Atlanta-based DoorDash driver shared Waymo's request on Reddit. It reportedly offered the gig worker $11.25 to close a Waymo door less than a mile away. The driver was guaranteed $6.25, with the remaining $5 sent after verified completion. The request also showed a deadline to complete the task and clearly stated that it didn't require pickup or delivery. Waymo and DoorDash have confirmed to CNBC that the companies are running a pilot program in Atlanta to get cars quickly on their way. Waymo claims that automated door closures are coming to future vehicles. Fast Company has reached out to Waymo and DoorDash for more information, including when Waymo will roll out automated door closures. We will update this post if we hear back. Atlanta is one of the limited cities where Waymo vehicles operate without a safety driver. Notably, though, riders in Atlanta call the cars through Uber, which operates Uber Eats, a DoorDash competitor. However, Waymo and DoorDash announced in October that they would be testing autonomous delivery services in Phoenix. Waymo has also used Honk, an independent roadside assistance company, to shut doors. The Washington Post reports that users have received $24 a door in Los Angeles.

Fleet response workers on call from abroad

This isn't Waymo's first time relying on humans for support.
Earlier this month, Waymo's chief safety officer, Mauricio Peña, told senators during a hearing that the company has some fleet response workers stationed abroad. These individuals, based in countries such as the Philippines, can provide suggestions when a vehicle is in an unusual circumstance. Senator Ed Markey of Massachusetts called this "unacceptable," Business Insider reports. He continued: "Having people overseas influencing American vehicles is a safety issue."


Category: E-Commerce

 


2026-02-13 13:25:06| Fast Company

When I spoke at the Arabian Business Awards a few years ago, I showed a slide describing research that shows meetings literally make people dumber: a study published in Transcripts of the Royal Society of London found that meetings cause you to lose IQ points (during the meeting). A bunch of people in the audience took photos of that slide. The same was true when I presented a slide describing research published in the Journal of Business Research showing that not only do 90 percent of employees feel meetings are unproductive, but when the number of meetings is reduced by 40 percent, employee productivity increases by 70 percent. A bunch of people took photos of that slide, too. Both findings seem easy to remember, if only because the research confirms what most people feel about meetings: Most of the time, the only person who thinks a meeting is important is the person who called the meeting. But what if you really wanted to remember that meetings tend to make participants dumber and tend to negatively impact overall productivity? Or, more broadly, have a better shot of remembering things you really want to remember? Don't take photos. In a study published in the Journal of Experimental Psychology: Applied, researchers evaluated the effectiveness of a variety of memory-boosting strategies: taking photos, typing notes, and writing notes by hand. As you can probably guess, people who wrote notes by hand scored the highest on subsequent recall and comprehension tests, even when people who took photos or typed verbatim notes were allowed to review those items before they took the tests. Or maybe you couldn't guess that: The researchers also found that learners were not cognizant of the advantages of longhand note-taking, but misjudged all three techniques to be equally effective. So why does taking notes by hand work so well? According to the researchers: Which makes sense. Taking a photo requires no mental participation at all.
You don't have to consider, synthesize, decide how you'll capture the information in shorthand, etc. Typing notes verbatim (for example, transcribing a lecture or meeting recording) is more of a process than a thought exercise. The focus is on accuracy, not retention. (I can type fast enough to capture everything someone says in real time, but that doesn't mean I remember any of it without reviewing what I've typed.) Maybe that's why Richard Branson carries a notebook everywhere he goes. (Literally: I've seen him with one at least 10 times.) Summarizing, putting concepts or ideas in your own words, deciding not just what to write but how to write it: all those things engage different parts of your brain, and therefore improve your retention and recall. Especially if you don't stop there. According to a study published in Psychological Science, people who study before bed, then sleep, and then do a quick review the next morning can not only spend less time studying, they also increase their long-term retention by 50 percent. Try it. At night, take a quick look at notes you've written during the day. Take a few moments to remember not only what, but why: why you'll use what you jotted down. When you'll use it. Why it will make a difference in your professional or personal life. Then do a quick review the next morning. Unless you're a compulsive note-taker, both exercises will take only a minute or two. After all, if it was important enough to write down, it's important enough to remember and, more to the point, to do something with. Because knowledge is useful only if you do something to make it useful. This article originally appeared on Fast Company's sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy.


Category: E-Commerce

 

2026-02-13 12:30:00| Fast Company

Hello again, and welcome back to Fast Company's Plugged In. A February 9 blog post about AI, titled "Something Big Is Happening," rocketed around the web this week in a way that reminded me of the golden age of the blogosphere. Everyone seemed to be talking about it, though as was often true back in the day, its virality was fueled by a powerful cocktail of adoration and scorn. Reactions ranged from "Send this to everyone you care about" to "I don't buy this at all." The author, Matt Shumer (who shared his post on X the following day), is the CEO of a startup called OthersideAI. He explained he was addressing it to "my family, my friends, the people I care about who keep asking me 'so what's the deal with AI?' and getting an answer that doesn't do justice to what's actually happening." According to Shumer, the deal with AI is that the newest models, specifically OpenAI's GPT-5.3 Codex and Anthropic's Claude Opus 4.6, are radical improvements on anything that came before them. And that AI is suddenly so competent at writing code that the whole business of software engineering has entered a new era. And that AI will soon be better than humans at the core work of an array of other professions: law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. By the end of the post, with a breathlessness that reminded me of the Y2K bug doomsayers of 1999, Shumer is advising readers to build up savings, minimize debt, and maybe encourage their kids to become AI wizards rather than focus on college in the expectation it will lead to a solid career. He implies that anyone who doesn't get ahead of AI in the next six months may be headed for irrelevance. The piece, which Shumer told New York's Benjamin Hart he wrote with copious assistance from AI, is not without its points.
Some people who are blasé about AI at the moment will surely be taken aback by its impact on work and life in the years to come, which is why I heartily endorse Shumer's recommendation that everyone get to know the technology better by devoting an hour a day to messing around with it. Many smart folks in Silicon Valley share Shumer's awe at AI's recent ginormous leap forward in coding skills, which I wrote about last week. Wondering what will happen if it's replicated in other fields is an entirely reasonable mental exercise. In the end, though, Shumer would have had a far better case if he'd been 70% less over the top. (I should note that the last time he was in the news, it was for making claims involving the benchmark performance of an AI model he was involved with that turned out not to be true.) His post suffers from a flaw common in the conversation about AI: It's so awestruck by the technology that it refuses to acknowledge the serious limitations it still has. For instance, Shumer suggests that hallucination (AI stringing together sequences of words that sound factual but aren't) is a solved problem. He writes that a couple of years ago, ChatGPT confidently said things that were nonsense, and that in AI time, that is ancient history. It's true that the latest models don't hallucinate with anything like the abandon of their predecessors. But they still make stuff up. And unlike earlier models, their hallucinations tend to be plausible-sounding rather than manifestly ridiculous, which is a step in the wrong direction. The same day I read Shumer's piece, I chatted with Claude Opus 4.6 about newspaper comics, a topic I often use to assess AI since I know enough about it to judge responses on the fly, and it was terrible about associating cartoonists with the strips they actually worked on. The more we talked, the less accurate it got.
At least it excelled at acknowledging its errors: When I pointed one out, it told me, "So basically I had fragments of real information scrambled together and presented with false confidence." Not great. After botching another of my comics-related queries, Claude said, "I'm actually getting into shaky territory here and mixing up some details," and asked me to help steer it in the right direction. That's an intriguing glimmer of self-awareness about its own tendency to fantasize, and progress of a sort. But until AI stops confabulating, describing it as being smarter than most PhDs, as Shumer does, is silly. (I continue to believe that human capability is not a great benchmark for AI, which is already better than we are at some things and may remain permanently behind in others.) Shumer also gets ahead of himself in his assumptions about where AI might be in the short-term future when it comes to being competently able to replace human thought and labor. Writing about the kind of complex work tasks he recommends throwing AI's way as an experiment, he says, "If it even kind of works today, you can be almost certain that in six months it'll do it near perfectly." That seems extraordinarily unlikely, given that all kinds of generative AI have been stuck in the kind-of-works era for years now. A decent rule of thumb: Don't believe AI will be able to do something well until it actually does. Ultimately, the takeaway from Shumer's post I'll remember most isn't anything he wrote. In the spirit of AI experimentation, I fed his piece to ChatGPT, Gemini, and Claude with the prompt "Give me an analysis/critique of this essay. Tell me whether it's overly cautious, not cautious enough, what your own take is on the subjects discussed, etc." I was prepared for them all to respond with something facile but shallow, more a bullet-point summary than anything. ChatGPT and Gemini lived up to those expectations.
However, Claude's swing at the topic, which it popped out in seconds and which you can read in its entirety here, startled me. It took issue with some of the same things about the piece that bothered me, but also raised other weaknesses I hadn't considered. I felt smarter after reading it, which still isn't what I expect from AI prose. A few nuggets: "Coding is a uniquely favorable domain for AI: the outputs are verifiable, the feedback loops are tight, the training data is abundant, and the environment is fully digital. Lawyering, medicine, and management involve ambiguity, institutional context, human relationships, regulatory accountability, and adversarial dynamics that are qualitatively different. He acknowledges this briefly and then waves it away." . . . "Yes, many people tried GPT-3.5 and wrote off the whole field. That's a real problem. But plenty of serious, current users, including people paying for the best models, still encounter significant limitations: hallucination (reduced but not eliminated), brittleness on novel problems, inability to maintain coherent context over truly long projects, poor calibration about its own uncertainty, and difficulty with tasks requiring real-world grounding. Shumer treats all skepticism as outdated, which is intellectually dishonest." . . . "He also doesn't seriously engage with the possibility that the economic incentives might not play out as smoothly as 'AI can do the job, therefore the job disappears.' Adoption friction is real. Liability concerns are real. Organizational inertia is real. The history of technology suggests that even transformative tools take much longer to restructure industries than the pure capability timeline would suggest. Electricity took decades to reshape manufacturing even after it was clearly superior to steam." . . . "I think the underlying signal Shumer is pointing at is real. AI capabilities are improving faster than most people outside the industry appreciate, and the gap between public perception and current reality is genuinely large. He's right that most people should be engaging with these tools more seriously than they are. But I think the essay is doing a thing that's very common in Silicon Valley: mistaking the potential trajectory for the inevitable trajectory, and compressing timelines based on vibes and extrapolation rather than grappling seriously with the many real-world frictions that slow adoption. The piece reads like it was written in the immediate emotional aftermath of being impressed by a new model release, and those moments tend to produce overconfidence about pace."

To recap: On the same day that I found Claude Opus 4.6 writing something about Shumer's piece that was not only coherent but insightful, it also devolved into a hallucinogenic fit. That's just how AI is these days: amazing and terrible at the same time. Somehow, that reality is tough for many observers to accept. But any analysis that ignores it is at risk of badly misjudging what will come next.

You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you, or if you're reading it on fastcompany.com, you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I'm also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.

More top tech stories from Fast Company

Developers are still weighing the pros and cons of AI coding agents
The tools continue to struggle when they need to account for large amounts of context in complex projects. Read More

AI expert predicted AI would end humanity in 2027; now he's changing his timeline
The former OpenAI employee has rescheduled the end of the world. Read More

Discord is asking for your ID. The backlash is about more than privacy
Critics say mandatory age verification reflects a deeper shift toward routine identity checks and digital surveillance. Read More

A Palantir cofounder is backing a group attacking Alex Bores over his work with . . . Palantir
Current and former employees tell Fast Company the ad campaign is driven by opposition to the Democratic hopeful's support for AI regulation. Read More

Facebook's new profile animation feature is Boomerang for the AI era
The feature is part of a wider push toward AI content in Meta apps. Read More

MrBeast's business empire stretches far beyond viral YouTube videos
Banking apps, snack foods, streaming hits, and data tools are all part of Jimmy Donaldson's growing $5 billion portfolio under Beast Industries. Read More


Category: E-Commerce

 
