Despite billions of dollars of AI investment, Google's Gemini has always struggled with image generation. The company's Flash 2.5 model has long felt like a sidenote in comparison to far better generators from the likes of OpenAI, Midjourney, and Ideogram.

That all changed last week with the release of Google's new Nano Banana image AI. The wonkily named new system is live for most Gemini users, and its capabilities are insane.

To be clear, Nano Banana still sucks at generating new AI images. But it excels at something far more powerful, and potentially sinister: editing existing images to add elements that were never there, in a way that's so seamless and convincing that even experts like myself can't detect the changes.

That makes Nano Banana (and its inevitable copycats) both an invaluable creative tool and an existential threat to the trustworthiness of photos, both new and historical. In short, with tools like this in the world, you can never trust a photo you see online again.

Come fly with me

As soon as Google released Nano Banana, I started putting it through its paces. Lots of examples online, mine included, focus on cutesy and fun uses of Nano Banana's powerful image-editing capabilities. In my early testing, I placed my dog, Lance, into a Parisian street scene filled with piles of bananas, and showed how I would look wearing a Tilley Airflo hat. (Answer: very good.)

[Image: Thomas Smith]

Immediately, though, I saw the system's potential for generating misinformation. To demonstrate this on a basic level, I tried editing my standard professional headshot to place myself into a variety of scenes around the world.

[Image: Thomas Smith]

Here's Nano Banana's rendering of me on a beach in Maui.

[Image: Thomas Smith]

If you've visited Wailea Beach, you'll recognize the highly realistic form of the West Maui Mountains in soft focus in the background.

I also placed myself atop Mount Everest. My parka looks convincing; the fact that I'm still wearing my Travis Matthew polo, less so.

[Image: Thomas Smith]

200's a crowd

These personal examples are fun. I'm sure I could post the Maui beach photo on social media and immediately expect a flurry of comments from friends asking how I enjoyed my trip.

But I was after something bigger. I wanted to see how Nano Banana would do at producing misinformation with potential for real-life impact.

During last year's presidential election here in America, accusations of AI fakery flew between both candidates. In an especially infamous example, now-President Donald Trump accused Kamala Harris's campaign of using AI to fake the size of a crowd during a campaign rally.

All reputable accounts of the event support the fact that photos of the Harris rally were real. But I wondered if Nano Banana could create a fake visual of a much smaller crowd, using the real rally photo as input. Here's the result:

[Image: Thomas Smith]

The edited version looks extremely realistic, in part because it keeps specific details from the actual photo, like the people in the foreground holding Harris-Walz signs and phones. But the fake image gives the appearance that only around 200 people attended the event, densely concentrated in a small space far from the plane, just as Trump's campaign claimed.

If Nano Banana had existed at the time of the controversy, I could easily see an AI-doctored photo like this circulating on social media as proof that the original crowd was smaller than Harris claimed.
Before, creating a carefully altered version of a real image with tools like Photoshop would have taken a skilled editor days: too long for the result to have much chance of making it into the news cycle and altering narratives. Now, with powerful AI editors, a bad actor wishing to spread misinformation could convincingly alter photos in seconds, with no budget or editing skills needed.

Fly me to the moon

Having tested an example from the present day, I decided to turn my attention to a historical event that has yielded countless conspiracy theories: the 1969 moon landing.

Conspiracists often claim that the moon landing was staged in a studio. Again, there's no actual evidence to support this. But I wondered if tools like Nano Banana could fake some.

To find out, I handed Nano Banana a real NASA photo of astronaut Buzz Aldrin on the moon.

[Image: NASA]

I then asked it to pretend the photo had been faked, and to show it being created in a period-appropriate photo studio.

[Image: NASA/Thomas Smith]

The resulting image is impressive in its imagined detail. A group of men (it was NASA in the 1960s; of course they're all men!) in period-accurate clothing stand around a soundstage with a fake sky backdrop, fake lunar regolith on the floor, and a prop moon lander. In the center of the regolith stands an actor in a space suit, his stance perfectly matching Aldrin's slight forward lean in the actual photo. Various flats and other theatrical equipment are unceremoniously stacked to the sides of the room.

As a real-life professional photographer, I can vouch for the fact that the technical details in Nano Banana's image are spot-on. A giant key light above the astronaut actor stands in for the bright, atmosphere-free lighting of the lunar surface, while various lighting instruments provide shadows perfectly matching the lunar lander's shadow in the real image. A photographer crouches on the floor, capturing the imagined astronaut actor from an angle that would indeed match the angle in the real-life photograph. Even the unique lighting on the slightly crumpled American flag, with a small circular shadow in the middle of the flag, matches the real image.

In short, if you were going to fake the moon landing, Nano Banana's imagined soundstage would be a pretty reasonable photographic setup to use. If you posted this AI photo on social media with a caption like "REVEALED! Deep in NASA's archive, we found a photo that PROVES the moon landing was staged. The Federal Government doesn't want you to see this COVER UP," I'm certain that a critical mass of people would believe it.

But why stop there? After using Nano Banana to fake the moon landing, I figured I'd go even further back in history. I gave the system the Wright brothers' iconic 1903 photo of their first flight at Kitty Hawk, and asked it to imagine that it, too, had been staged.

[Image: John T. Daniels]

Sure enough, Nano Banana added a period-accurate wheeled stand to the plane.

[Image: John T. Daniels/Thomas Smith]

Presumably, the plane could have been photographed on this wheeled stand, which could then be masked out in the darkroom to yield the iconic image we've all seen reprinted in textbooks for the last century.

Believe nothing

In many ways, Nano Banana is nothing new. People have been doctoring photos for almost as long as they've been taking them. An iconic photo of Abraham Lincoln from 1860 is actually a composite of Lincoln's head and the politician John Calhoun's much more swole body, and other examples of historical photographic manipulation abound.
Still, the ease and speed with which Nano Banana can alter photos is new. Before, creating a convincing fake took skill and time. Now, it takes a cleverly written prompt and a few seconds.

To its credit, Google is well aware of these risks, and is taking important steps to defend against them. Each image created by Nano Banana comes with an (easy to remove) visible watermark in the lower right corner, as well as a (harder to remove) SynthID digital watermark invisibly embedded directly into the image's pixels. This digital watermark travels with the image, and can be read with special software; a hypothetical sketch of such a check appears at the end of this article.

If a fake Nano Banana image started making the rounds online, Google could presumably scan for its embedded SynthID and quickly confirm that it was a fake. It could likely even trace its provenance to the Gemini user who created it. Google scientists have told me that the SynthID can survive common tactics that people use to obscure the origin of an image. Cropping a photo, or even taking a screenshot of it, won't remove the embedded SynthID.

Google also has a robust and nuanced set of policies governing the use of Nano Banana. Creating fake images with the intent to deceive people would likely get a user banned, while creating them for artistic or research purposes, as I've done for this article, is generally allowed.

Still, once a groundbreaking new AI technology rolls out from one provider, others quickly copy it. Not all image-generation companies will be as careful about provenance and security as Google. The (rhinestone-studded, occasionally surfing) cat is out of the bag; now that tools like Nano Banana exist, we need to assume that every image we see online could have been created with one.

Nano Banana and its ilk are so good that even photographic experts like myself won't be able to reliably spot their fakes. As users, we therefore need to be consistently skeptical of visuals. Instead of trusting our eyes as we browse the Internet, our only recourse is to turn to reputation, provenance, and good old-fashioned media literacy to protect ourselves from fakes.

Now, if you'll excuse me, Burning Man is just ending, and I should really get back to the festivities.

[Image: Thomas Smith]
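As promised above, here is what a watermark-based provenance check might look like in practice. Google has not published a public SynthID detector for images, so the `synthid_detector` module and its `scan` call below are assumptions invented for illustration, not a real API; treat this as a minimal sketch of the workflow, not Google's implementation.

```python
# Hypothetical provenance check. Only the PIL usage is real; the
# synthid_detector package does not exist publicly.
from PIL import Image

import synthid_detector  # hypothetical package, assumed for illustration


def check_provenance(path: str) -> str:
    """Return a rough provenance verdict for an image file."""
    image = Image.open(path).convert("RGB")

    # A real detector would analyze the pixel data for the embedded
    # watermark signal, which reportedly survives cropping and screenshots.
    result = synthid_detector.scan(image)  # hypothetical call

    if result.watermark_present:
        return "Likely AI-generated or AI-edited (SynthID found)"
    # Absence of a watermark proves nothing: the image may predate
    # watermarking, or come from a generator that doesn't embed one.
    return "No watermark found (inconclusive)"


if __name__ == "__main__":
    print(check_provenance("rally_photo.jpg"))
```

Note the asymmetry in the two outcomes: a detected watermark is strong evidence of AI involvement, but a clean scan tells you almost nothing, which is why provenance tools complement, rather than replace, media literacy.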
You've probably encountered images in your social media feeds that look like a cross between photographs and computer-generated graphics. Some are fantastical (think Shrimp Jesus) and some are believable at a quick glance (remember the little girl clutching a puppy in a boat during a flood?).

These are examples of AI slop: low- to mid-quality content (video, images, audio, text, or a mix) created with AI tools, often with little regard for accuracy. It's fast, easy, and inexpensive to make this content. AI slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.

AI slop has been increasing over the past few years. As the term "slop" indicates, that's generally not good for people using the internet.

AI slop's many forms

The Guardian published an analysis in July 2025 examining how AI slop is taking over YouTube's fastest-growing channels. The journalists found that 9 out of the top 100 fastest-growing channels feature AI-generated content like zombie football and cat soap operas.

Listening to Spotify? Be skeptical of that new band, The Velvet Sundown, which appeared on the streaming service with a creative backstory and derivative tracks like the song "Let It Burn." It's AI-generated.

In many cases, people submit AI slop that's just good enough to attract and keep users' attention, allowing the submitter to profit from platforms that monetize streaming and view-based content.

The ease of generating content with AI enables people to submit low-quality articles to publications. Clarkesworld, an online science fiction magazine that accepts user submissions and pays contributors, stopped taking new submissions in 2024 because of the flood of AI-generated writing it was getting.

These aren't the only places where this happens; even Wikipedia is dealing with AI-generated low-quality content that strains its entire community moderation system. If the organization is not successful in removing it, a key information resource people depend on is at risk.

[Video: Last Week Tonight with John Oliver delves into AI slop.]

Harms of AI slop

AI-driven slop is making its way upstream into people's media diets as well. During Hurricane Helene, opponents of President Joe Biden cited AI-generated images of a displaced child clutching a puppy as evidence of the administration's purported mishandling of the disaster response. Even when it's apparent that content is AI-generated, it can still be used to spread misinformation by fooling some people who briefly glance at it.

AI slop also harms artists by causing job and financial losses and crowding out content made by real creators. The algorithms that drive social media consumption often don't distinguish this lower-quality AI-generated content from original work, and it displaces entire classes of creators who previously made their livelihood from online content.

Wherever it's enabled, you can flag content that's harmful or problematic, and on some platforms you can add community notes to provide context. For harmful content, you can try to report it.

Along with forcing us to be on guard for deepfakes and inauthentic social media accounts, AI is now leading to piles of dreck degrading our media environment. At least there's a catchy name for it.

Adam Nemeroff is an assistant provost for innovations in learning, teaching, and technology at Quinnipiac University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
A few years ago, we had a bottleneck within our organization at Super.com, the membership program focused on saving, earning, and credit building. Every new idea depended on our engineers, and our internal requests were piling up faster than we could clear them. We were adding new people to our company every week, but our engineering team was underwater. Every new feature, every minor internal tool, every process tweak depended on our developers. We were hiring as fast as we could, but it felt like shoveling sand against the tide.

So we tried something different. We started teaching nontechnical employees (designers, product managers, operations leads, corporate support teams) how to build their own tools and automations. At first it felt radical. Then it felt obvious.

Fast forward: we grew to over $200M in annual revenue with only about 200 people. We stopped the hiring frenzy, but the business was growing faster than ever. Along the way, we documented everything in an internal playbook for our team. Soon our LinkedIn inboxes and emails were full of messages from other leaders trying to do the same thing. Turns out the problems we had been solving weren't unique.

Here are the five lessons that have resonated most with other leaders about empowering nontechnical teams to build their own solutions, and why they matter to any leader.

1. Create Spaces Where Technical and Nontechnical Minds Meet

Inside our company we set up two AI guilds: one for technical implementation (e.g., building tools) and one focused on adoption and use cases (e.g., using tools). They met monthly, included people from every department, and shared concrete experiments. A product manager might present how she used an AI tool to understand the codebase without tapping an engineer on the shoulder. An operations lead might show how he used a simple script to automate dispute management (a sketch of what such a script might look like appears at the end of this article).

The takeaway: don't keep AI or automation knowledge locked in engineering. Build cross-functional forums that normalize sharing wins, questions, and learnings. Those conversations surface use cases you'd never see from the top down.

2. Invest in Easy-to-Use Tools

You can't empower nonengineers if the only tools they have require a CS degree. We invested in low-code environments like Superblocks, Zapier, Amplitude, and Glean Agents, and we made tools that are typically only used by developers, such as Cursor (an AI IDE) and Coder (remote environments), accessible.

Our developer operations team took on the challenge of making onboarding as simple as possible. They stripped out every unnecessary step and automated the rest, until getting set up took less than 10 minutes. We learned quickly that if a tool required more than 10 minutes of training, adoption would stall. Most nontechnical teammates could follow the instructions on their own, but for anyone who preferred extra help, our IT team sat down with them one-on-one.

3. Set Guardrails That Empower

We published a clear internal AI policy that spelled out approved use cases (like automation scripts, bug-fix prototypes, and research tools), quality standards (human oversight required for anything customer-facing or that becomes part of a routine process), and security guidelines (no sensitive data in prompts without review; see the sketch below for what such a check might look like). Engineers didn't police these policies; they coached. Any piece of code went through review, whether it came from an engineer or not.
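As an illustration of the security guideline above, here is a minimal sketch of a pre-submission check for sensitive data in prompts. The pattern names and rules are illustrative assumptions, not Super.com's actual policy tooling:

```python
import re

# Illustrative patterns only; a real policy would cover far more categories
# of sensitive data (PII, credentials, internal documents, and so on).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "credential": re.compile(r"(?i)\b(api[_-]?key|secret|token|password)\b\s*[:=]"),
}


def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


# A check like this could run before a prompt is sent to any external AI
# tool, routing flagged prompts to human review instead of blocking outright.
hits = flag_sensitive("Customer card 4111 1111 1111 1111 disputed a charge")
if hits:
    print("Needs review before sending:", ", ".join(hits))
```

The design choice matters as much as the code: flagging for review, rather than silently blocking, keeps the guardrail from becoming a gatekeeper.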
That consistent review process was the point: nontechnical team members could submit pull requests, and instead of dismissing them, engineers gave feedback the same way they would with peers. Coaching meant guiding contributors through fixes and best practices, not shutting the door. Guardrails, not gatekeepers: that's what makes experimentation sustainable.

4. Celebrate Small Wins Publicly

When someone outside engineering built something that moved the needle (like an operations process automation or AI triage of customer-reported issues), we made sure everyone heard about it. These wins were shared in weekly business reviews and company-wide meetings. That visibility did more than motivate others to try; it changed our culture. During objectives and key results planning, we'd prompt each team to consider: Could AI help me hit my goals faster? Could I build this myself instead of waiting? Sharing wins turns isolated hacks created by individuals into company-wide capabilities.

5. Rethink the Role of Your Engineers

When nonengineers have access to the right tools, software engineers become even more valuable. At our company, 93% of developers use AI tools daily. Engineers still own the hard, high-impact work: major features, architecture, deep debugging. But now they spend less time answering basic questions or making tiny fixes for other teams. The result: your best technical talent gets to focus on ambitious projects, while everyone else can handle the smaller, more routine tasks themselves.

In a world where AI and low-code tools are everywhere, the companies that win won't just have great engineers. They'll have a culture that empowers everyone to build.
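Finally, the dispute-management script mentioned in lesson 1. Here is a minimal sketch of the kind of rule-based triage an operations lead might build with an AI assistant's help; the endpoint, field names, and routing rules are hypothetical, invented for illustration, and are not Super.com's actual systems:

```python
import json
from urllib.request import urlopen

# Hypothetical internal endpoint standing in for whatever system holds
# open disputes; replace with your own data source.
DISPUTES_URL = "https://internal.example.com/api/disputes?status=open"


def triage(dispute: dict) -> str:
    """Route a dispute using simple, human-reviewable rules."""
    if dispute["amount_usd"] < 25:
        return "auto-refund"      # small amounts aren't worth manual review
    if dispute["customer_tenure_days"] > 365:
        return "priority-queue"   # long-standing customers get fast-tracked
    return "manual-review"        # everything else goes to a human


def main() -> None:
    with urlopen(DISPUTES_URL) as response:  # fetch the open disputes
        disputes = json.load(response)
    for dispute in disputes:
        print(dispute["id"], "->", triage(dispute))


if __name__ == "__main__":
    main()
```

The point isn't the specific rules; it's that the whole script is short and legible enough for an engineer to review in minutes, which is exactly what makes nonengineer contributions sustainable.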