It used to be that artificial intelligence would leave behind helpful clues that an image it produced was not, in fact, real. Previous generations of the technology might give a person an extra finger or even an additional limb. Teeth could look odd and out of place, and skin could render overly blushed, like something out of Pixar. Multiple dimensions could befuddle the models, which struggled to represent the physical world in a sensical way: ask for an image of salmon swimming in a river, and AI might show you a medium-rare salmon steak floating along a rushing current. Sure, we were in the uncanny valley. But at least we knew we were there.

That's no longer the case. While there are still some analog ways to detect that the content we see was created with the help of AI, the implicit visual tip-offs are, increasingly, disappearing. The limited release of Sora 2, OpenAI's latest video-generation model, has only hastened this development, experts at multiple AI detection companies tell Fast Company, meaning we may soon be entirely dependent on digital and other technical tools to wade through AI slop. That has ramifications not only for everyday internet users but also for any institution with an interest in protecting its likeness or identity from theft and misappropriation.

"Even [for] analysts like me who saw the evolution of this industry, it's really hard, especially on images," Francesco Cavalli, cofounder of one of those firms, Sensity AI, tells Fast Company. "The shapes, the colors, and the humans are perfect. So without the help of a tool now, it's almost impossible for the average internet user to understand whether an image or a video or a piece of audio is AI-generated or not."

Visual clues are fading

The good news is that, at least for now, there are still some telltale visual signs that content was generated via artificial intelligence, and researchers are hunting for more. While extra fingers appear less common, AI image-generation models can still struggle to produce sensible text, explains Sofia Rubinson, a senior editor at Reality Check, a publication run by the information-reliability company NewsGuard. Remember that surveillance video of bunnies jumping on a trampoline that turned out to be AI-produced? You might just have to consider whether rabbits actually do that, Rubinson says. "We really want to encourage people to think a little bit more critically about what they're seeing online as these visuals are going away," she adds.

Rubinson says it's possible to look for whether a portion of a video has been blurred out, which might suggest that a Sora 2 watermark used to be there. We can also check who shared it. Toggling to an account's page sometimes reveals a trove of similar videos, an almost-certain giveaway that you're being served AI slop. On the flip side, usernames won't necessarily help us discern who really produced content: as Fast Company previously reported, it's somewhat easy, though not always possible, to grab a Sora 2 username associated with a famous person, despite OpenAI's rules on using other people's likenesses.

Ultimately, we may need to become fluent in a model's individual style and tendencies, argues Siwei Lyu, a professor at the State University of New York at Buffalo who studies deepfakes. For instance, Sora 2-generated speech can sound a little too fast. (Some have dubbed this an "AI accent.") Still, Lyu warns that these indications are subtle and can often be missed when viewing casually.
And the technology will improve, which means it's unlikely such hints will be around forever. Indeed, researchers say the visible residue of AI's involvement in a piece of content already seems to be fading. "The tips that we used to give in terms of visual inconsistencies are disappearing, model after model," says Emmanuelle Saliba, a former journalist who now leads investigations at GetReal Labs, a cybersecurity firm working on detecting and studying AI-generated and manipulated content. While incoherent physical movement used to indicate AI's use in the creation of an image, Sora 2 has improved significantly at mimicking the real world, she says.

At Reality Defender, also a deepfake-detection firm, every one of the company's researchers, half of whom have doctorates, has now been fooled by content produced by newer generations of AI. "Since the launch of Sora, every single one of them has mislabeled a deepfake as real or vice versa," Ben Colman, cofounder and CEO of Reality Defender, tells Fast Company. "If people who've been working on this for 5 to 25 years cannot differentiate real from fake, how can average users or those using manual detection?"

Labels won't save us, either. While companies have touted watermarking as a way to identify AI-generated content, simple workarounds appear to foil these tools. For instance, videos from OpenAI's Sora come with a visual watermark, but online tools can remove them. OpenAI, like other companies, has committed to the C2PA standard created by the Coalition for Content Provenance and Authenticity. That specification is supposed to encode the provenance, or source, of a piece of content into its metadata. Yet the watermark can be removed by screenshotting an image created by OpenAI technology. Even dragging and dropping that image, in some cases, can remove the watermark, Fast Company's tests with the tool show. OpenAI concedes this flaw, but a spokesperson said they weren't able to reproduce the drag-and-drop issue. When Fast Company posed questions about this vulnerability to Adobe, which operates the C2PA verification tool, the company said the issue was on OpenAI's end.

Updating methodologies

Of course, the companies Fast Company spoke to are interested in selling various products designed to save us from the deepfake deluge. Some envision that AI-content detection might go the way of virus scanning and become integrated into myriad online and workplace tools. Others suggest that their platforms will be necessary because the rise of tools like Sora 2 will make video-call-based verification obsolete. Some executives believe their products will play a role in protecting brands from embarrassing AI-generated content. In response to the release of the Sora app, a few of these firms do say they're seeing growing interest.

Still, like humans, even these companies need to update their methodologies when new models are released. "Even if the human cannot spot anything from the tech point of view, there's always something to investigate," Sensity's Cavalli says. This often requires a mixed-methods approach, one that takes into account a range of factors, including a file's metadata and discrepancies in background noise. Sensity's detection models are also retrained and refined when new models come online, Cavalli adds. But even this isn't always perfect. Lyu from SUNY Buffalo says that while the detection systems his team has developed still work on videos produced with Sora 2, they show lower accuracy compared to their performance on earlier generative AI models.
And that's after some fine-tuning. Hany Farid, a UC Berkeley professor who cofounded GetReal Labs and serves as its chief science officer, says the company's forensic and data techniques have seen better, but not perfect, generalization to the latest models. In the case of Sora 2, some of the company's video techniques have remained effective, while others have required fine-tuning, he says, adding that the audio detection models still work robustly. That's a change from earlier eras of generative AI, when forensic techniques had to be continuously updated to apply to the latest models. "For our digital-forensic techniques, this required understanding specific artifacts introduced by the AI models and then building techniques to detect these artifacts," Farid says. "For our more data-based techniques, this required generating content from the latest model and retraining our models."

Whether these deepfake-detection methods will continue to hold up is unclear. In the meantime, it seems that we're increasingly heading toward a world flooded by AI, but still building its seawalls.
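The fragility of metadata-based provenance described above is easy to demonstrate in miniature. The sketch below is an illustration of the general weakness, not of C2PA itself (Content Credentials are cryptographically signed manifests embedded in the file, not a simple text tag): it uses Python's Pillow library to show that a screenshot-style re-render preserves every pixel of an image but none of its embedded metadata. The file names and the "ai_provenance" key are invented for the example.

```python
# Illustrative sketch: why screenshot-style re-rendering defeats
# metadata-based provenance. Anything stored in the file container
# (rather than the pixels) is silently discarded when the image is
# rebuilt from raw pixel data.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Create an image carrying a provenance-style text chunk.
#    ("ai_provenance" is a made-up key for illustration.)
original = Image.new("RGB", (64, 64), color="salmon")
meta = PngInfo()
meta.add_text("ai_provenance", "generated-by: example-model-v2")
original.save("tagged.png", pnginfo=meta)

print(Image.open("tagged.png").info)
# {'ai_provenance': 'generated-by: example-model-v2'}

# 2. Simulate a screenshot: copy only the pixels into a fresh buffer
#    and save it without passing any metadata along.
tagged = Image.open("tagged.png")
screenshot = Image.new("RGB", tagged.size)
screenshot.paste(tagged)
screenshot.save("screenshot.png")

print(Image.open("screenshot.png").info)
# {} -- pixels identical, provenance gone
```

The same logic explains the screenshot loophole reported above: provenance travels in the file, so any process that reconstructs the image from displayed pixels drops it without leaving a trace.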
"Those AI tools are being trained on our trade secrets." "We'll lose all of our customers if they find out our teams use AI." "Our employees will no longer be able to think critically because of the brain rot caused by overreliance on AI."

These are not irrational fears. As AI continues to dominate the headlines, questions about data privacy and security, intellectual property, and work quality are legitimate and important. So, what do we do now? The temptation to just say "no" is strong. It feels straightforward and safe. However, this safe route is actually the riskiest of all. An outright ban on AI is a losing strategy that creates more problems than it solves. It fosters secrecy, increases security risks, and puts you at a massive competitive disadvantage.

I'm the founder of two tech agencies and a big proponent of AI. As a business, we also deal with customer data, often from industries like government, healthcare, and education. However, I believe there's a much better way to address the threats posed by AI. In this article, I'll share the dangers of flat-out AI bans and what companies can do instead.

Curiosity crisis

You can ban tools, but you can't ban curiosity, especially among developers and product managers who are paid to be innovative. They're not living in a vacuum. They've heard they can do this and that in a fraction of the time, and they simply want to try it out. Additionally, employees may feel that not using AI daily puts them at a disadvantage compared to their peers at other companies where AI is allowed.

When you forbid the use of AI tools, you don't stop it. Multiple studies confirm that you simply drive it underground. A Cisco survey revealed that 60% of respondents (including security and privacy professionals from various countries) entered information about internal company processes into genAI tools; 46% entered employee names or information, and 31% entered customer names or information. This creates shadow IT: the unsanctioned use of technology within an organization. In recent years, a new term has also emerged: Bring Your Own AI, or BYOAI. Another study, by Anagram, paints an even more shocking picture: 58% of surveyed employees across the U.S. admit to pasting sensitive data into large language models. Moreover, 40% were willing to knowingly violate company policy to complete a task faster. I guess the forbidden fruit is indeed the sweetest.

As a consequence, you have zero visibility. You don't know what tools are being used, what data is being input, or what risks are being taken. The irony is real: the problem you wanted to control is now completely out of your control.

Security paradox

The primary reason for a ban is to protect sensitive data. However, a ban makes a leak more likely, not less. Employees may create personal accounts and use free or cheaper plans, which often default to using your data for model training. These plans lack robust security features, audit logs, and data processing agreements (DPAs). Enterprise plans, by contrast, such as ChatGPT Business or Enterprise, often come with assurances that your data will not be used for training purposes. They may offer SSO, data encryption, access controls, and administrative oversight.

Your sensitive data, which you feared would be leaked through an official channel, gets leaked through dozens of untraceable personal accounts. You could have secured it with an enterprise plan, but instead, you pushed it into the wild.

Driving in the slow lane

While you're debating, your competitors are executing.
With only 19% of C-level executives reporting a more-than-5% increase in revenue attributed to AI, it may be too early to talk about ROI. Then again, the reason for the relatively small returns may lie not in AI itself, but in the way we use it. More and more businesses are now considering applying AI to central business operations rather than peripheral processes. AI use cases in a business setting are manifold.

I know firsthand that the productivity gains from AI are not marginal. At Redwerk, we do not shy away from AI-assisted development, and we're teaching our clients how to set up such workflows. We don't view AI as a threat stealing developers' lunch; we view it as a tool that allows us to do more in-depth work faster. One practical use case for developers is generating boilerplate code or documenting APIs in seconds rather than hours. With AI, our product managers can analyze user feedback at scale, brainstorm feature ideas, and conduct market research far more quickly. At QAwerk, we use a range of AI testing tools to generate test cases, identify obscure edge cases, and even perform initial security vulnerability scans.

AI is here to stay

It's not a fad, and it's not going anywhere. More and more apps are adding AI features to keep pace with the competition. AI will continue changing the anatomy of work. Workplace productivity tools like Slack, Zoom, and Asana (all now enhanced with AI) have become ingrained in the daily operations of tech-forward businesses. Major cloud and database providers now offer agentic AI for enterprises. Investors are pouring billions into OpenAI, despite the company operating at a loss, because they recognize that AI is the future. All these facts clearly signal one thing: AI is here to stay.

How to adopt AI responsibly

Throughout my entrepreneurial journey, I've learned that being proactive is a more effective strategy than being reactive. And that pertains to everything, including AI. When questions like "Are you using AI?" are asked to support employees rather than reprimand them, you're on the right track. You don't need to overcomplicate things; just start.

Step 1: Guide, don't forbid

Create a simple Acceptable Use Policy (AUP) with dos and don'ts. Please, no 40-page PDFs that no one has ever touched (besides the person who created them). Clearly define what is and is not acceptable. For example: AI tools are approved for brainstorming, learning, and working with nonsensitive code. Do not input any client data, PII, or company IP into public AI models.

Step 2: Equip your team

Invest in a secure, enterprise-level AI tool. The cost is minimal compared to the productivity gains and the risk of unmanaged use. Before you do that, survey the team for their preferences. They probably have a ton of prompts that may work better in Gemini than in Claude or ChatGPT, or vice versa. You need to gather all the major use cases and research which tool can address them best. This gives your team a secure, approved sandbox to work in.

Step 3: Educate and empower

Did you know that millennials are even bigger advocates for AI than Gen Zers? 62% of millennials self-report high expertise with AI. In many organizations, millennials occupy managerial positions, and they can become true champions of change. So their enthusiasm should be nurtured rather than stifled. Run workshops. Share best practices for prompt engineering. Create a dedicated Slack/Teams channel for people to share cool use cases and discoveries.
Turn it into a positive, collaborative exploration.

Step 4: Listen and iterate

Don't let the policy be a stone tablet. Let your team explore, gather their feedback, and then formulate more detailed policies grounded in their practical, real-world experience. You'll learn what actually works and where the real risks lie.

Final thoughts

Things are moving extremely fast in the AI space. So fast, in fact, that it's challenging to keep up even without any bans. If you can't avoid the inevitable, embrace it. Yes, data privacy and security are no joke, but banning AI is not how you safeguard them. Let your team experiment and innovate within guardrails you both find reasonable and agree on. Allow industry-compliant tools, provide training, and use them to your advantage.
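One way to make the "don'ts" from Step 1 more than words on a page is a lightweight pre-flight check on outgoing prompts. The sketch below is a minimal, hypothetical illustration, not a substitute for a proper DLP or secrets-scanning product; the patterns, function name, and example prompt are all invented for the demonstration.

```python
# Minimal sketch of an Acceptable Use Policy guardrail: scan a prompt
# for obviously sensitive strings before it leaves the company.
# Hypothetical patterns only; a real deployment would use a proper
# DLP/secrets scanner with patterns curated by the security team.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API key (generic)": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the policy violations found in the prompt, if any."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this ticket from jane.doe@example.com about sk-abcdef1234567890ABCDEF"
    violations = check_prompt(prompt)
    if violations:
        print("Blocked before sending:", ", ".join(violations))
    else:
        print("Prompt cleared for the approved AI tool.")
```

In practice, a check like this would sit in a proxy or an editor plugin in front of the approved tool, so the policy enforces itself instead of relying on everyone remembering a PDF.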
What if the women leaders who were long overlooked are the ones we can't afford to ignore today?

The proverbial career ladder has long been the dominant metaphor for success. For many, it works: a clear, linear climb, one predictable rung at a time. For others, it doesn't, because the ladder was never built to hold the weight of multiple roles and ambitions. Women, in particular, have mastered a multi-hyphenate model of leadership out of necessity: mother and manager, founder and caregiver, mentor and innovator. What looked nonlinear was simply a different kind of training ground, one that creates resilience, adaptability, and perspective.

Today's multi-hyphenates are entrepreneurs-executives-authors, CEOs-board members-storytellers, and founders-volunteers-mentors. They've pivoted across industries, re-entered the workforce after pauses, and taken lateral moves to gain new skills or flexibility. Those shifts and gaps aren't liabilities; they're evidence of courage, perspective, and the kind of agility that varied lived experiences produce.

Lattice, not ladder

Instead of advancing only upward, these women have built careers in multiple directions. A lattice (or jungle gym) career is about growing wider, deeper, and smarter, not just higher. For generations, women's professional ambition has been constrained and conditional: don't pause, don't deviate, don't improvise. Today, women are rejecting those outdated rules and designing careers on their own terms.

A multi-hyphenate career isn't about abandoning ambition; it's about redefining it. Success is measured not just by titles or tenure, but by influence, impact, and the ability to bring others along. In fact, when women come together, through mentorship, collaboration, and shared experience, they create a multiplier effect that accelerates learning, leadership, and impact across organizations. That's not to say the traditional ladder is irrelevant. For many leaders, it remains a powerful and valid route to the top. It just can't be the only one.

What the modern workplace needs

Even in corporate roles, this era of constant disruption is testing every leader's ability to make high-stakes decisions and rally teams through uncertainty and upheaval. Employee expectations are shifting: new generations demand empathy, flexibility, and cultural fluency. These are existential challenges that require resilience and the ability to hold multiple perspectives at once. In that context, the women leaders who have crossed sectors, scaled startups, and taken career pauses are uniquely positioned for what the modern workplace needs right now. No matter where they sit, multi-hyphenates carry the very skills once dismissed as "soft" but now recognized as indispensable: empathy, emotional intelligence, adaptability, and the ability to build trust across divides. Innovation thrives at intersections, and these women leaders know how to bridge industries, cultures, and generations.

Good for the bottom line

The business case is undeniable. Companies with women executives outperform competitors by 30%, and women founders deliver higher ROI when funded. From our vantage points, one of us leading Chief, the world's largest network for women executives, and the other serving as CEO and cofounder of a leading AI-driven executive search firm, the fastest-growing in the U.S., we both hear directly from thousands of women navigating this reality. Together, they represent not just a large share of today's workforce, but the very talent pipeline companies will depend on for the C-suite of tomorrow.
The pattern is unmistakable: nonlinear careers are producing leaders uniquely equipped for today's complexity. Yet too many companies still favor neatly sequenced résumés over pivots, pauses, or plurality. And they overlook a critical truth: leadership today is rarely developed in isolation. Women, in particular, are adept at building support networks that foster growth and create opportunities for many in their orbit. This collective strength can amplify influence far beyond what one individual could achieve alone. After all, leadership is a team sport.

Systemic change

Unlocking this potential requires change across the system. Boards should prize crisis navigation and cross-functionality. Recruiters must weigh adaptability and emotional intelligence alongside tenure. HR leaders should create returnships and project-based roles. And women themselves must stop apologizing for nonlinear journeys and claim their value.

We're not looking to replace the corporate ladder. For some, it still works, and that's fine. But clinging to it as the only credible path is a mistake. Nonlinear, multi-hyphenate careers, once dismissed as messy, flawed, or unfocused, are proving to be a highly effective model for leadership. Women have been beta-testing this blueprint for decades. It works. And it's time for companies to embrace it.