2025-11-24 11:00:00| Fast Company

As I said in previous articles, executives like to say they're integrating AI. But most still treat artificial intelligence as a feature, not a foundation. They bolt it onto existing systems without realizing that each automation hides a layer of invisible human work, and a growing set of unseen risks. AI may be transforming productivity, but it's also changing the very nature of labor, accountability, and even trust inside organizations. The future of work won't just be about humans and machines collaborating: It will be about managing the invisible partnerships that emerge when machines start working alongside us . . . and sometimes, behind our backs.

The illusion of automation

Every wave of technological change begins with the same illusion: once we automate, the work will disappear. History, however, tells a different story. The introduction of enterprise resource planning (ERP) systems promised end-to-end efficiency, only to create years of shadow work fixing data mismatches and debugging integrations. AI is repeating that pattern at a higher cognitive level.

When an AI drafts a report, someone still has to verify its claims (please, do not forget this!), check for bias, and rewrite the parts that don't sound right. When an agent summarizes a meeting, someone has to decide what actually matters. Automation doesn't erase labor; it just moves it upstream, from execution to supervision.

The paradox is clear: The smarter the system, the more attention it requires to ensure it behaves as expected. A new McKinsey report calls this the age of "superagency," where people spend less time performing tasks and more time overseeing the intelligent systems that do.

The rise of the hidden workforce

A recent analysis found that more than half of workers already use AI tools secretly, often without their manager's knowledge. Similarly, another investigation warned that employees are quietly sharing sensitive data with consumer-grade chatbots, exposing companies to compliance and privacy risks. This is the new silent workforce: algorithms doing part of the job, unseen and unacknowledged. For employees, the temptation is obvious: AI offers instant answers. For companies, the consequences are dangerous.

If those silent partners are consumer-grade models, employees might be sending confidential data to unknown servers, processed in data centers located in countries with different privacy laws. That's why, as I warned in a previous article about BYOAI, organizations must ensure that any questions employees ask, and any prompts they use, are directed to properly licensed, enterprise-grade systems.

The problem isn't that employees use AI. It's that they do it outside of data governance.

When intelligence goes underground

Unapproved AI use creates more than data risk: it fractures collective learning. When employees each rely on their own AI assistant, corporate knowledge becomes fragmented. The company stops learning as an organization because insights are trapped in personal chat histories.

The result is a paradoxical kind of inefficiency: everyone gets smarter individually, but the institution gets dumber.

Organizations need to treat AI access as shared infrastructure, not a personal tool. That means providing sanctioned, well-audited systems where employees can ask questions safely without leaking intellectual property or violating compliance. The right AI model, as Microsoft knows extremely well, is not just the most powerful one: It's the one that keeps your data where it belongs.

The hidden human labor of 'intelligent' workflows

Even when AI use is authorized, it introduces a layer of invisible human effort that companies rarely measure. Every AI-assisted workflow hides three forms of manual oversight:

Verification work: humans checking whether outputs are correct and compliant
Correction work: editing, reframing, or sanitizing content before publication
Interpretive work: deciding what the AI's suggestions actually mean

These tasks aren't logged, but they consume time and mental energy. They are the reason that productivity statistics often lag behind automation hype. AI makes us faster, but it also makes us busier: constantly curating, correcting, and interpreting the machines that supposedly work for us.

The ethics of invisible labor

Invisible labor has always existed: in care work, cleaning, or customer service. AI extends it into cognitive and emotional domains. Behind every smart workflow is a human ensuring that the output makes sense, aligns with brand tone, and doesn't violate company values.

If we ignore that labor, we risk creating a new inequality: those who design and sell AI systems are celebrated, while those who quietly fix their errors remain invisible. Productivity metrics improve, but the real cost, the human vigilance keeping AI credible, goes unrecognized. Even executives experimenting with AI digital clones admit they don't fully trust their virtual doubles. Trust, as it turns out, remains stubbornly human.

Managing the silent partnership

When AI becomes embedded in everyday workflows, leadership must evolve from managing people to managing collaboration between people and systems. That requires new governance principles:

Authorized intelligence only: Employees must use licensed, enterprise-grade AI systems. No exceptions. Every query sent to a public model is a potential data leak.
Data residency clarity: Know where your data lives and where it's processed. The cloud is not a place; it's a jurisdiction.
Transparency by design: Any AI-assisted output should be traceable. If an AI helped generate a report, label it clearly. Transparency breeds trust.
Feedback as governance: Employees must be able to report errors, hallucinations, and ethical concerns. The real safeguard against model drift isn't a compliance checklist; it's a vigilant workforce.

AI without governance isn't innovation. It's negligence.

The cognitive supervision era

We are witnessing the emergence of a new human skill: cognitive supervision, or the ability to guide, critique, and interpret machine reasoning without doing the work manually. It's becoming the corporate equivalent of teaching someone how to manage a team they don't fully understand.

Training employees in this skill is urgent. It requires awareness of bias, logic, and the limits of automation. It's not prompt engineering; it's critical thinking. And it's what separates organizations that collaborate with AI from those that merely consume it.

The best companies understand this already. They are investing in education, not just tools, and treating AI literacy as strategic infrastructure. A recent profile of Viven's AI-employee clones revealed that the real question is not whether AI can replicate workers, but whether organizations can govern the replicas they create.

What executives must do now

If you lead a company, assume that AI is already part of your workflows, whether you approved it or not. The task ahead is not to prevent its use but to domesticate it responsibly.

Audit your AI exposure: Map where your people are already using AI tools.
Provide safe alternatives: If you don't, they'll use whatever works, secure or not.
Recognize hidden labor: Build metrics that reward verification, correction, and interpretation.
Make transparency cultural: No AI-generated output should hide its origin.

Done right, AI can become a trusted colleague, one that accelerates learning and amplifies creativity. Done poorly, it becomes a silent, unaccountable partner with access to your data and none of your ethics.
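The advice to build metrics around verification, correction, and interpretation can be made concrete. A minimal sketch in Python, assuming entirely hypothetical names (`OversightLog` and its per-task minute fields are illustrative, not from any real tool), of how a team might log the hidden oversight time behind each AI-assisted task:

```python
from dataclasses import dataclass


@dataclass
class OversightLog:
    """Minutes of hidden human oversight behind one AI-assisted task.

    The three fields mirror the three forms of manual oversight the
    article describes; the class itself is a hypothetical sketch.
    """
    verification_min: float = 0.0    # checking outputs for correctness and compliance
    correction_min: float = 0.0      # editing or sanitizing content before publication
    interpretation_min: float = 0.0  # deciding what the AI's suggestions actually mean

    @property
    def total_hidden_minutes(self) -> float:
        """Total unlogged human effort spent supervising the machine."""
        return self.verification_min + self.correction_min + self.interpretation_min


# Usage: 20 min verifying, 15 min rewriting, 5 min interpreting one report draft
log = OversightLog(verification_min=20, correction_min=15, interpretation_min=5)
print(log.total_hidden_minutes)  # 40.0
```

Even a crude tally like this makes the "invisible" labor visible enough to budget for, which is the point of the governance principle.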
A quiet revolution

AI's arrival in the workplace is not loud or cinematic: It's silent, gradual, and pervasive. It hides behind polished interfaces, automating just enough to convince us it's working on its own. But beneath that silence lies an expanding layer of human effort keeping the system ethical, explainable, and aligned.

As leaders, our job is to make that effort visible, measurable, and safe. The most dangerous AI is not the one that replaces people: it's the one that quietly depends on them, without permission, oversight, or acknowledgment. When AI becomes your silent partner, make sure it's one you actually know, trust, and license properly. Otherwise, you may discover too late that the partnership was never yours at all.


Category: E-Commerce

 


2025-11-24 10:30:00| Fast Company

As I uploaded a 1940s photo of my grandpa Max and hit a few buttons in Google's Veo 3 video generator, I saw a familiar family photo transform from black and white to color. Then, my grandpa stepped out of the photo and walked confidently toward the camera, his army uniform perfectly pressed as his arms swung at the sides of his lanky frame. This is the kind of thing AI lets you do now: virtually bring back the dead.

As a hilarious Saturday Night Live sketch this weekend highlighted, though, just because we can reanimate our departed loved ones doesn't necessarily mean we should.

Grilling the dog

The sketch, which The Atlantic has already called SNL's Black Mirror moment, features Ashley Padilla as an aging grandmother in a nursing home. Her family members, played by Sarah Sherman and Marcello Hernández, visit her on Thanksgiving and use an AI photo app to bring her old family photos to life as short videos.

At first, things go well. Padilla's character marvels over a black-and-white image of her father waving as he stands in front of a spinning Ferris wheel. But then, things go hilariously, predictably wrong. A photo of family members at a barbecue turns into a horror scene when the fictional AI app has Padilla's father (played by host Glen Powell) roast the family dog, which happens to have no head. As other photos come to life, Padilla's father pays a bowling buddy to perform a lewd act, and in a baby photo, her mother's torso splits from her body and floats around the frame as a nuclear bomb explodes in the background.

The sketch is hilarious because it's so relatable. Anyone who has played with AI video generators knows that they can make delightfully wonky assumptions about the laws of physics, often with spectacular results. In my testing of the AI video generator RunwayML, for example, I asked the model to create a video of a playful kitten at sunset. Things start out cute enough, until the kitten splits in two, its front half attempting to exit stage right as its back half continues adorably cavorting around.

Show me the movements

Video generators make these errors because of the way they're trained. Whereas a text-based AI model can learn by reading essentially every book, website, and other piece of textual data ever published, the amount of training-ready video content is far more limited. Most AI video generators train on videos from social media platforms like YouTube. That means they're great at creating the kinds of videos that often appear on those platforms.

As I've demonstrated before, if you want people knocking over wedding cakes or having heated arguments with their roommates, video generators like Veo and Sora excel at making them. For less commonly posted scenes, though, the available training data is far more limited.

Most online videos, for example, show interesting things happening. People rarely post hour-long clips of themselves casually walking around (or, to use SNL's example, holding a baby or grilling a hot dog) on YouTube or Instagram. Those videos would be so terminally boring that no person would want to watch them. Yet copious amounts of video of exactly these kinds of boring, everyday activities are what AI companies need to properly train their video generators.

This has created a fascinating market for such clips. Companies like Waffle Video are popping up to serve the need, paying creators to film themselves doing things like chopping vegetables or writing specific words on pieces of paper for AI training. Until AI companies can get their hands on more videos of these kinds of mundane actions, though, AI video generators will struggle to mimic them.

Ironically, video generators are currently great at showing fanciful, dramatic actions. Ask them to make the kinds of everyday scenes you might find in an old black-and-white family photo, though, and you get Fido on the barbie.

Reanimate grandma?

All of that brings us to the question: Should you use today's AI tools to reanimate your dead loved ones? My best advice: wait a bit.

AI video tech is advancing incredibly quickly. The first tools that added movement to family photos, like Deep Nostalgia from MyHeritage, which launched in 2021, used machine learning to perform their wizardry. The tech felt revolutionary at the time. Today, it looks primitive compared to full-motion scenes like the one of my Veo-animated grandpa. And even with those advances, Veo and its ilk are still in their avocado chair moment. Image generators have improved tremendously as their creators have gotten better at training them. Video generators will see similarly vast improvements, especially as AI companies invest millions in buying bespoke training data of everyday movements.

Personally, I brought a photo of my grandpa to life because I thought the real Grandpa Max would find it amusing. I've resisted reanimating photos of more recently departed loved ones, though, for many of the reasons implicit in SNL's sketch.

Family photos are intimate things. It's nice to see your late loved one smile and wave at you. Seeing them split in two or explode in a nuclear fireball, though, would be disturbing, and something you couldn't unsee once you've conjured it up from the depths of Sora or Veo's silicon brain. Until AI models can be trusted to avoid these kinds of disturbing, random visual detours, we shouldn't trust them with our most prized memories. A splitting kitten is amusing. A splitting grandma, less so.



 

2025-11-24 10:00:00| Fast Company

When entrepreneurs list their principal reasons for launching a company, they often cite being their own boss, flexibility in setting their working hours, and turning a commercial concept into reality. Now, new data identifies another incentive that may convince future entrepreneurs to take the plunge.

According to a recent analysis by the Federal Reserve Bank of Minneapolis, the average self-employed person earns significantly more income over their career than people who work for someone else. However, the report's findings also note the widely varying levels of income among small business owners, and the length of time usually required before stronger earnings start flowing in. Those details may lead some less enterprising prospective entrepreneurs to stick with punching a clock after all.

The analysis by the Minneapolis Fed differs from most research on small business owners, which often relies heavily on survey responses. The shifting makeup of participants in those inquiries often produces widely contrasting results, creating what the Minneapolis Fed authors likened to the parable of the blind men and an elephant: each poll was essentially touching only one part of the body, leading researchers to draw different and incomplete conclusions.

To establish a more complete picture of the nation's entrepreneurs, the Minneapolis Fed used U.S. tax and Social Security Administration data from 2000 to 2015. That allowed it to determine the income those small business owners collectively generated for themselves, and to identify why they stuck it out with companies that were often slow to reach profitability. And that wasn't due to setting their own hours.

"(W)e find that self-employed individuals have significantly higher income and steeper income growth profiles than paid-employed peers with similar characteristics," the report said, while also refuting frequent survey results suggesting that many entrepreneurs stay in business for the perks of not having to answer to a boss. "Contrary to earlier studies based on surveys plagued by underrepresentation in the right tail of the income distribution, we find that non-pecuniary benefits of self-employment are not substantial when considering the source of most business income," it said.

What that means, in non-economist-speak, is that many entrepreneurs earn up to 70% more than people working for other employers over their careers, with their income increasing considerably faster than paid workers'. That winds up vastly outweighing the advantages surveys often identify, such as founders setting their own work schedules or getting to ask employees to fetch their coffee.

The study found that during the 15-year period, a 25-year-old entrepreneur earned on average about $27,000 per year in 2012 dollars, while an employee of the same age made $29,000. About five years later, that income disparity had typically reversed, and it then continued growing in small-business owners' favor. "By age 55, our estimate is an average (entrepreneur) income of $134,000 in 2012 dollars, much higher than the estimate of $79,000 for the paid employed," the study said. It added that the gap was probably even larger before government agencies adjusted small-business income declarations by 14% to 46% to account for presumed underreporting. "These differences in profiles for the self- and paid-employed would be even more striking if we were to (re)adjust reported incomes to account for business income underreporting."

Not every small-business owner winds up earning as much as people working for salaries, however, or as much as their more successful peers.
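A quick back-of-the-envelope check (mine, not the study's) shows how the averages quoted above produce the roughly 70% premium:

```python
# Average annual incomes quoted from the Minneapolis Fed study, in 2012 dollars.
entrepreneur_25, employee_25 = 27_000, 29_000   # at age 25
entrepreneur_55, employee_55 = 134_000, 79_000  # at age 55

early_gap = entrepreneur_25 - employee_25         # entrepreneurs start $2,000 behind
late_premium = entrepreneur_55 / employee_55 - 1  # fraction by which they end ahead

print(early_gap)                  # -2000
print(round(late_premium * 100))  # 70 -> the "up to 70% more" figure
```

The crossover the study describes sits between those two endpoints: entrepreneurs trail slightly at 25, catch up around age 30, and pull far ahead by 55.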
The study said about 80% of the total income of the entrepreneurs it identified was generated by people earning $100,000 annually or more. That means a lot of small-business owners fared less well than the more affluent minority at the top. As a result, the authors said in wonky terms, a minority of self-employed people made even less than people working for someone else. "IRS data shows that many of the primarily self-employed earned less over the sample years than paid-employed peers with similar characteristics, but in the aggregate this subgroup has a much lower share of the total income than those that earned more than their peers," it noted.

The Minneapolis Fed noted some other interesting observations in its findings. One was that many entrepreneurs continued working salaried jobs, or had other income coming in, as they supported their still-unprofitable new ventures. Those supporting funds improved the cohort's overall positive income figures, even during the early lean years. "In other words, when starting a new business, owners rely on other sources of labor earnings, through either paid employment or other business enterprises," it said. "Thus, even though most businesses have losses, few owners have negative individual incomes."

Another significant detail was the authors' use of official data to create a more precise collective financial portrait of entrepreneurs, contrasting with the results of many surveys that may simplify the motives and activities of limited samples of small-company owners. "(T)he literature on entrepreneurship has an array of narratives, describing the typical business owner in many possible ways: as a gig worker seeking flexible arrangements, a misfit avoiding unemployment spells, an inventor seeking venture capital, a tax dodger misreporting income," it said, before noting its own use of official income statistics collected from millions of entrepreneurs. "This data provides new insights into the central questions of the entrepreneurship literature and will hopefully prove useful for researchers interested in calibrating models of self-employment and business formation."

Bruce Crumley

This article originally appeared on Fast Company's sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.

