As AI oozes into daily life, some people are building walls to keep it out, for a host of compelling reasons. There’s the anxiety about a technology that requires an immense amount of energy to train and contributes to runaway carbon emissions. There are the myriad privacy concerns: At one point, some ChatGPT conversations were openly available on Google, and for months OpenAI was obligated to retain user chat history amid a lawsuit with The New York Times. There’s the latent ickiness of its manufacturing process, given that the task of sorting and labeling training data has been outsourced and underappreciated. Lest we forget, there’s also the risk of an AI oopsie, including all those accidental acts of plagiarism and hallucinated citations. Relying on these platforms seems to inch toward NPC status, and that’s, to put it lightly, a bad vibe.

Then there’s the matter of our own dignity. Without our consent, the internet was mined and our collective online lives were transformed into the inputs for a gargantuan machine. Then the companies that did it told us to pay them for the output: a talking information bank spring-loaded with accrued human knowledge but devoid of human specificity. The social media age warped our self-perception, and now the AI era stands to subsume it.

Amanda Hanna-McLeer is working on a documentary about young people who eschew digital platforms. She says her greatest fear of the technology is cognitive offloading through, say, apps like Google Maps, which, she argues, have the effect of eroding our sense of place. “People don’t know how to get to work on their own,” she says. “That’s knowledge deferred and eventually lost.” As we give ourselves over to large language models, we’ll relinquish even more of our intelligence.

Exposure avoidance

The movement to avoid AI might be a necessary form of cognitive self-preservation. Indeed, these models threaten to neuter our neurons (or at least how we currently use them) at a rapid pace.
A recent study from the Massachusetts Institute of Technology found that active users of LLM tech consistently underperformed at neural, linguistic, and behavioral levels. People are taking steps to avoid exposure. There’s the return of dumbphones, high school Luddite clubs, even a TextEdit renaissance. A friend who is single reports that antipathy toward AI is now a common feature on dating app profiles: not using the tech is a green flag. A small group of people proclaim to avoid using the technology entirely.

But as people unplug from AI, we risk whittling the overwhelming challenge of the tech industry’s influence on how we think down to a question of consumer choice. Companies are even building a market niche targeted toward the people who hate the tech. Even less effective might be cultural signifiers, or showy (perhaps unintentional) declarations of individual purity from AI. We know the false promise of abstinence-only approaches. “There’s real value in prioritizing logging off, and cutting down on individual consumption, but it won’t be enough to trigger structural change,” Hanna-McLeer tells me.

Of course, the concern that new technologies will make us stupid isn’t new. Similar objections arrived, and persist, with social media, television, radio, even writing itself. Socrates worried that the written tradition might degrade our intelligence and recall: Trust in writing, “produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom,” Plato recorded his mentor arguing.

But the biggest challenge is that, at least at the current rate, most people will not be able to opt out of AI. For many, the decision to use or not use the technology will be made by their bosses or the companies they buy stuff from or the platforms that provide them with basic services.
Going offline is already a luxury. As with other harmful things, consumers will know the downsides of deputizing LLMs but will use them all the same. Some people will use them because they are genuinely, extremely useful, and even entertaining. I hope the applications I’ve found for these tools take the best of the technology while skirting some of its risks: I try to use the service like a digital bloodhound, deploying the LLMs to automatically flag updates and content that interest me, before I review whatever they find myself. A few argue that eventually AI will liberate us from screens, that other digital toxin.

Misaligned with the business model (and the threat)

A consumer-choice model for dealing with AI’s most noxious consequences is misaligned with the business model, and with the threat. Many integrations of artificial intelligence won’t be immediately legible to non- or everyday users: LLM companies are highly interested in enterprise and business-to-business sectors, and they’re even selling their tools to the government. There’s already a movement to make AI not just a consumer product, but one laced into our digital and physical infrastructure. The technology is most noticeable in app form, but it’s already embedded in our search engines: Google, once a link indexer, has already transformed into a tool for answering questions with AI. OpenAI, meanwhile, has built a search engine from its chatbot. Apple wants to integrate AI directly into our phones, rendering the large language models an outgrowth of our operating systems.

The movement to curb AI’s abuses cannot survive merely on the hope that people will simply choose not to use the technology. Not eating meat, avoiding products laden with conflict minerals, and flipping off the light switch to save energy certainly do something, but not enough. AI asceticism alone does not meet the moment.
The reason to do it anyway is the logic of the Sabbath: We need to remember what it’s like to occupy, and live in, our own brains.
Category:
E-Commerce
Amazon is well aware that you’re spending hours agonizing over the reviews for seven different near-identical toaster ovens before you actually make a decision. Now it has an AI feature for that, and we have to admit, it’s pretty helpful. “Help me decide” is a new AI shopping function that rolled out on October 23 to millions of U.S. customers on the Amazon shopping app and mobile browser. It uses large language models and AI tools from the Amazon Web Services (AWS) suite of offerings to analyze your shopping history, purchase details, and preferences, and then match those insights with product details and customer reviews to recommend products that you might be most interested in.

Designed to cut down on shopper indecision and usher users straight to the checkout cart, the feature is a smart move for Amazon, and it might make holiday shopping a bit less tortuous for customers. As the world’s most popular online retail site continues to roll out new AI features, it’s serving as a proving ground for how AI is radically reshaping online shopping as we know it.

How to use Amazon’s new “Help me decide” feature

To try out “Help me decide,” you can either navigate to the “Keep shopping for” tab on the Amazon homepage, or just click on a bunch of related products until you see a black pop-up with a sparkle icon. From there, the tool will select the product it deems the best of your recently viewed items, based on customer reviews, your personal product criteria, and prices and return rates. Its selection includes an AI-generated summary of why you should commit to its choice, highlighting the most relevant product features and including one standout review of the item. At the bottom of the screen, you can also toggle to two other suggestions: a budget pick, on the lower end of the price spectrum, and an upgrade pick, if you’re inclined to get spendy.
“Help Me Decide saves you time by using AI to provide product recommendations tailored to your needs after you’ve been browsing several similar items, giving you confidence in your purchase decision,” Daniel Lloyd, vice president of Personalization at Amazon, said in a press release.

I gave the tool a try after spending the past several days window-shopping for cat trees that are definitely outside my budget. True to its description, “Help me decide” picked a tree in the middle of the price range (still $99.98), describing it as “the ultimate choice for your furry friend’s indoor adventure.” The summary went on to describe the tree’s impressive 70-inch height, spacious hammock, and removable top perch that ensures easy cleaning. Despite the flowery language used in the AI summaries, I found the tool generally helpful and easy to use.

How AI is changing online shopping

The “Help me decide” add-on is the latest in a growing bevy of AI shopping features from Amazon. These include the company’s AI shopping assistant, Rufus; an Interests feature that tracks personalized shopping categories; and AI-generated review highlights that give top notes on customer reactions to products. Over the past several months, brands including Ralph Lauren and Pinterest have invested in their own AI tools to drive online shopping. Walmart and Sam’s Club have partnered with OpenAI to allow customers to shop from within the chatbot. And the AI-powered app Daydream is purpose-built to help users find the perfect outfits.

In a recent Adobe Analytics study on holiday shopping behaviors, the company shared that 2024 was the first time it noticed a measurable surge in AI traffic to U.S. retail sites before the holidays. Now it’s expecting a major escalation of that trend, estimating that holiday AI traffic to retail sites will rise by 520% in 2025.
AI is quietly rewiring the way we shop, both in subtle ways, like improving product recommendations, and in more direct ways, like AI chatbots that can literally shop on behalf of a user. It won’t be long until every part of the online shopping experience is guided, at least in some way, by a dedicated AI model.
The kinds of videos that do well on YouTube Shorts are depressingly predictable: cute cats, heated arguments, crazy stunts, and plenty of good old-fashioned shots of people suffering low-key injuries. The issue is that the real world produces only so many epic fails. And of the small number that do happen, even fewer are caught on video. Think of all the airplane passenger arguments and dropped wedding cakes that have gone untaped and unposted!

Enter Sora. OpenAI’s new video generator is hyperrealistic, and was clearly trained on billions of hours of short-form, vertical video. That makes it incredibly good at generating the kinds of short, grabby videos that pull in our attention and manipulate our emotions. How do I know? I used Sora to create an entirely fake YouTube channel, populated with AI-generated versions of the kinds of videos I see on YouTube Shorts and TikTok all the time. It took me about 30 minutes to build and it cost nothing. In less than a week, I have 21,400 views and counting. Let’s dig in.

Slop by the bucketful

Getting access to OpenAI’s Sora social network is hard. The platform launched as an invite-only app, and despite this hurdle quickly ballooned to more than 5 million active users. It’s growing even faster than ChatGPT. Once you’re into Sora, though, using Sora 2 (the actual video generation model behind it) is extremely easy. You just type in the concept for a video, and Sora 2 writes the script, generates about 11 seconds of very realistic vertical video, and even adds synchronized audio.

The app struggles with beautiful, cinematic footage. In my early testing, Google’s rival Veo 3.1 (which the tech behemoth launched to compete with Sora 2) is much better at that. But where Sora 2 succeeds is in generating emotionally charged, short-form vertical videos. The model was likely built to drive the Sora social video network, and it shows.
I decided to test the appeal of Sora 2’s videos by moving them over to a traditional short-form video platform so they could compete in the real world against actual grabby, vertical clips. To that end, I opened up Sora 2 and started typing in ideas for emotionally heated videos at random. I quickly found that Sora 2 can work with either very detailed or very vague ideas.

For one video, I used ChatGPT to write a detailed script for a complex scenario: a woman making a phone call in order to reconnect with her estranged mother. Sora 2’s video nailed the task. From the subtle jump cuts to the swelling music (again, entirely AI-generated), it’s 11 seconds of surprisingly powerful micro-cinema. For other videos, I went much simpler, letting Sora 2 run with my basic prompt. The text “two roommates have an argument, cellphone video” yielded one such clip. Entering “A man mistakenly knocks over a giant, beautiful wedding cake and people are shocked, realistic cellphone video” produced another gem, which is my favorite Sora video so far.

In total, I created eight videos. Each one took about 60 seconds to generate. Using Sora 2 within the Sora app is currently free. Basically, the system generates AI slop by the bucketful. Your job is simply to give the model direction and scoop up its output.

Cat fail arbitrage

You can post your AI slop directly to Sora itself. But I wasn’t content to stop there. Instead, I wanted to see how these videos would do in the real world. So I went over to YouTube and started uploading them to the platform’s YouTube Shorts section, basically YouTube’s clone of TikTok. Rather than starting a channel entirely from scratch, I used a neglected one where I had previously posted videos of my dog, Lance. It had no traffic to speak of, and only a handful of videos, mostly uploaded to share with friends and family members.
The channel felt like an ideal blank slate; it wasn’t entirely new (I was worried that YouTube might flag and delete a fully new channel that started posting AI content right out of the gate) but it hadn’t been developed at all. I could thus test what would happen if an existing YouTuber suddenly started posting nothing but Sora’s delightful slop.

I uploaded each of my new videos. Crucially, I didn’t want to deceive anyone, so I left Sora’s prominent watermarks in place. I also fully disclosed that the videos are AI-generated, using YouTube’s Altered Content flag. It doesn’t seem to have mattered. As I write this about a week later, my videos have already received 21,400 views. Poor little Lance’s best video had gotten only 2,600 views in the three years since I posted it. My top video from Sora (the one of the wedding cake falling) is at 12,000 views and counting.

Containment is impossible

AI-generated videos wouldn’t be so much of a threat to the traditional social media landscape if they stayed put. You could go to Sora for AI-generated fails, and TikTok or YouTube Shorts for the authentic ones. My experiment proves that this containment is unrealistic. It’s shockingly easy to move videos from Sora to other vertical video platforms. And despite disclosures and watermarks, users seem to engage with the AI videos just as much as they would with real ones.

The Sora social network is also a pared-down experience when it comes to running the Sora 2 model. In its new API, OpenAI provides developers with direct access to Sora 2, including customizable video lengths and aspect ratios. Videos generated through the API cost $0.10 per second. They have no distinguishable watermarks. It took me only about 20 minutes to code up an integration in Python, and I was creating fully automated AI slop for about $1 per video, at scale. All that’s to say: YouTube, TikTok, and Instagram are about to be inundated with an unstoppable deluge of this stuff.
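To put numbers on the economics described above, here is a minimal Python sketch. The per-second price ($0.10) and the roughly 11-second clip length come from the article; the batch structure, the example prompts reused from earlier in the piece, and the commented-out API call shape are illustrative assumptions, not OpenAI’s documented interface.

```python
# Rough cost model for batch-generating clips through the Sora 2 API.
# Figures from the article: ~11-second clips at $0.10 per second of video.
# The commented-out API call is a hypothetical sketch of where generation
# would happen, not a confirmed client signature.

SECONDS_PER_CLIP = 11      # typical Sora 2 clip length, per the article
PRICE_PER_SECOND = 0.10    # USD per second of generated video, per the article

def estimated_cost(num_clips: int, seconds: int = SECONDS_PER_CLIP) -> float:
    """Estimated spend in USD for a batch of clips."""
    return round(num_clips * seconds * PRICE_PER_SECOND, 2)

prompts = [
    "two roommates have an argument, cellphone video",
    "a man knocks over a giant wedding cake, realistic cellphone video",
]

# for prompt in prompts:
#     video = client.videos.create(model="sora-2", prompt=prompt)  # assumed call shape

print(f"{len(prompts)} clips, estimated cost: ${estimated_cost(len(prompts)):.2f}")
```

At these rates one 11-second clip runs about $1.10, which lines up with the article’s “about $1 per video”; the only real bottleneck to generating slop at scale is your budget.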
YouTube tacitly admitted as much when it introduced its Altered Content flag over a year ago. At the time, AI video was so janky and unusable that YouTubers were confused as to why anyone would need to disclose AI content’s origins. Now we know. For consumers, the message is clear: From here on out, trust nothing that you see on vertical video apps. That amazing bottle flip or delightfully juicy neighbor-fight clip may well have emerged not from real life, but from the endless slop bucket of Sora 2.