How many apps do you use to chat with other people? I don't mean tweeting out into the ether. I mean actually interacting with a fellow human in a one-to-one way. For most people, the number is one or two. And it's probably fewer than five. In the U.S., you likely keep in touch with friends and family through Apple or Google Messages, and touch base with colleagues on Slack or Teams. (For Europeans, that's texting via WhatsApp.) If you're a particularly rabid Swiftie, you might also have a raucous group chat through X or Instagram DMs; if you're a gamer, you keep up with people on Discord. But for all the time we hopscotch across apps like TikTok, YouTube, Snapchat, and Instagram to consume content, the number of platforms we actually use to interact with others remains minimal.

So what's the point of Spotify's new direct messaging feature, unveiled this week and rolling out to select markets? The feature allows users to click a "Share" button while listening to a song, podcast, or audiobook and instantly start a conversation with a friend, seeded with a link to whatever they're listening to.

Why nobody wants a do-it-all app

In the press release announcing the feature, the music streaming giant promises that "Messages are for the conversations you're already having about music, podcasts and audiobooks with your friends and family." But the flaw is right there in the pitch: if you're already having those conversations on WhatsApp or iMessage, why would you uproot them into a side channel inside Spotify? At best it duplicates what's happening elsewhere; at worst it fractures the conversation into yet another notification stream. (Fortunately, users can turn the function off.) Besides the fact that friends who insist on foisting their musical tastes on you are often the most infuriating, the feature looks destined to join the long list of rarely used, half-baked add-ons app makers tack onto their products. (Fast Company has reached out to Spotify for comment.)
Spotify's move highlights how wayward app makers' decision-making has become, and how easily feature bloat clutters our smartphones. On paper, adding DMs to Spotify makes sense, just as adding AI autosuggestions does for LinkedIn, or layering short-form video onto YouTube does for Google's video platform. The goal is to become an everything app (think of a more modest version of China's WeChat) that secures a permanent slot on your home screen. The problem: that never actually happens. Attempts to provide everything to everybody end up providing nothing to anyone, and the app inevitably flops. And flop, this one will. With the exception of YouTube Shorts, which simply extended YouTube's core video offering with another format, most attempts to ride the zeitgeist are shallow grabs at relevance. The best features, in the best apps, are developed with deep thought about how to benefit users.

About that Q2 earnings report

Spotify is just the latest entrant in the copycat race, a space where Big Tech borrows what's proven popular in smaller apps. Meta has been one of the most aggressive, and its results hardly inspire confidence. Crowbarring one app's feature into another rarely works. Take Spotify: its user loop involves searching, pressing play, listening, maybe saving, maybe sharing. Messaging apps, meanwhile, follow a loop of open thread, type, send, await reply. Those loops are fundamentally different. Smush them together and you create friction in both. Do I really want to share that I'm listening to the same song for the billionth time?

There's also the matter of network effects. Messaging is winner-take-most because your friends, not you, determine the platform. That's why people tolerate green bubbles versus blue, or hop between WhatsApp and iMessage depending on the conversation. A new chat feature inside a vertical app fragments your attention and, worse, your conversation. The song you send in Spotify DMs is now marooned from the rest of your life in WhatsApp or iMessage.
It creates context silos for no good reason. If an app genuinely wants to be an everything app, it needs a foundation deeper than FOMO, which seems to be Spotify's rationale here. WeChat's everything-ness is anchored in identity, payments, and ubiquity, all under unique market conditions. Silicon Valley's obsession with replicating it is misguided. Every app except WeChat that tries to become an everything app ends up as an anything app: a junk drawer of half-ideas. Spotify's new feature feels like more clutter, more cruft, dumped onto an already junk-filled smartphone notification screen.

A cynic might say the new messaging feature amounts to a ploy to distract from Spotify's latest earnings miss: a swing to a $100 million second-quarter loss despite topping forecasts on subscribers and revenue. The mismatch (users growing, but profits shrinking) underscores why the company keeps reaching for engagement gimmicks. If it can't squeeze more money out of music, maybe, the thinking goes, it can invent new ways to keep you glued to the app. But yet another inbox on our phones won't make Spotify more essential. It's just more noise.
Category:
E-Commerce
Have you ever read an article or social post and thought, "This is terrible! I bet it was written by AI!"? Most people know bad AI writing when they see it. But unless you're a closeted copy editor, it's surprisingly hard to put your finger on exactly why AI writing sucks. Now, Wikipedia's editor team has just released what amounts to a master class in the clichés, strange tropes, obsequious tones of voice, and other assorted oddities of AI-generated prose. It's a list called "Signs of AI Writing," and it's a fantastic resource for people who want to get better at spotting AI writing, or who want to disguise their own.

Add your own slop

As one of the internet's most trusted sources of information, Wikipedia is uniquely exposed to the risks of LLM-generated content. Large language models love to pontificate on random topics, even when they have very little actual knowledge. Wikipedia covers many of these random topics, from the ash content of Morbier cheese to the gory details of Justin Bieber's love life. Wikipedia famously crowdsources its information through a network of volunteer contributors and editors. This combination of crowdsourced data and highly specific, niche topics is a recipe for the misuse of AI.

There's also an increasingly potent financial incentive for people to pollute Wikipedia with AI slop. As search engines like Google laser in on EEAT (a tortured acronym that describes the authoritativeness of a brand), having a Wikipedia page is becoming more valuable to brands as a metric for their legitimacy. You're not supposed to create or edit your own Wikipedia page, but many brands do. And one of the easiest ways to hide this off-label tinkering is to drown one's nefarious edits in a sea of seemingly unrelated updates and contributions to esoteric Wikipedia pages. AI can spin these up at scale.

Everything is fascinating

Because of the risks that AI-generated content poses to the site, Wikipedia's editors have gotten incredibly good at recognizing AI writing.
Their "Signs of AI Writing" document distills this knowledge into an easy-to-follow guide. Wikipedia's list is useful and unique largely because it's so specific. Many other rubrics for recognizing AI writing offer broad, generic advice or focus on detection hacks that are easy to bypass. Researchers recently realized, for example, that LLMs tend to overuse the em dash (a wonderful and remarkably versatile punctuation mark that I happen to absolutely love). As I recently discussed with Slate, for a brief moment, the presence of an em dash in an article was a good way to detect AI writing. Quickly, though, AI content generators caught on and started to avoid the punctuation mark.

Simple hacks for detecting AI writing have a limited shelf life. The arms race of AI content creation and AI content detection means these methods are rendered useless as soon as they're made public. Wikipedia's guidelines go much deeper. Rather than focusing on quick detection hacks, they dig into the more fundamental patterns present in bad AI content: the writing conventions and literary tropes that LLMs consistently overuse.

Wikipedia's editors point out, for example, that LLMs place undue emphasis on symbolism and importance. Everything LLMs write "stands as a symbol" of something, or carries enhanced significance. Natural locations are always "captivating," all animals are "majestic," and everything is "diverse" and "fascinating."

Wikipedia's editors also note that LLMs tend to overuse transition words and phrases like "in summary" or "overall." Often these show up as negative parallelisms. For example, LLMs love to summarize things they've already written with tropes like "It's not only . . . but also . . ." A restaurant might be described as "not only a great place for Italian food, but also a shining example of local entrepreneurship." Every concluding paragraph starts with "In conclusion" or "In summary."
The editors also point out that AI writing often overuses the Rule of Three, a handy literary trick that capitalizes on the fact that human brains love groups of three things. A person might be "creative, smart and funny" according to ChatGPT, or a company could be "innovative, rule-breaking and impactful."

Good writing gone bad

Interestingly, Wikipedia's editors acknowledge that many of these conventions would be considered good writing if they came from a human. It's not that LLMs are inherently bad at writing; it's just that they write in predictable ways that make their output feel formulaic and robotic. The editors also note that LLMs' polished writing style and tendency to follow conventions often serve to obscure their lack of actual knowledge about a topic. By following conventions like the Rule of Three, LLMs make their superficial explanations appear more comprehensive. As readers, we often mistake good form for good content: if an LLM writes with perfect grammar and its content flows beautifully, we might not realize that it's not actually saying anything useful or substantive.

Beyond these stylistic issues, Wikipedia's list goes into extreme detail about the technical specifics of AI writing: the ways LLMs consistently format text, use headings, handle punctuation (like curly quotation marks), and sprinkle their content with bolded words and emojis.

Spot it (or make it)

The guidelines are useful for anyone who edits Wikipedia. But they're also relevant for anyone who wants to get better at recognizing AI writing, or who wants to create their own AI content that doesn't sound machine-generated. If you're reading an article or social media post that feels a bit off and you're curious whether it might be AI-written, Wikipedia's guidelines provide a fantastic checklist for validating your suspicions. Compare the suspect writing with Wikipedia's list. Do you see the Rule of Three appear a bit too consistently? Are there too many transition words? Does it sound too effusive?
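That kind of checklist comparison can even be roughly automated. Here is a minimal, hypothetical sketch in Python: the phrase lists below are a tiny illustrative sample of the tells described above (transition phrases, negative parallelisms, effusive language), not Wikipedia's actual list, and real detection would need far more nuance.

```python
import re

# Small illustrative samples of the patterns discussed above.
# These lists are placeholders, not Wikipedia's full guidelines.
TRANSITIONS = ["in conclusion", "in summary", "overall", "moreover", "furthermore"]
PARALLELISMS = [r"not only\b.*\bbut also", r"it'?s not just\b.*\bit'?s"]
EFFUSIVE = ["captivating", "majestic", "fascinating", "stands as a testament"]

def ai_writing_signals(text: str) -> dict:
    """Count rough occurrences of each category of tell in `text`."""
    lowered = text.lower()
    return {
        "transitions": sum(lowered.count(p) for p in TRANSITIONS),
        "parallelisms": sum(len(re.findall(p, lowered)) for p in PARALLELISMS),
        "effusive": sum(lowered.count(w) for w in EFFUSIVE),
    }

sample = ("The park is not only captivating but also majestic. "
          "In conclusion, it stands as a testament to nature.")
print(ai_writing_signals(sample))
# → {'transitions': 1, 'parallelisms': 1, 'effusive': 3}
```

High counts in a short passage don't prove machine authorship, of course; they're just one more data point for the human judgment the editors describe.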
Although the editors stress that humans are perfectly capable of producing bland and formulaic writing without an AI's help, spotting these patterns in a piece of writing can lend credence to the idea that it was written by a machine. And if you use LLMs to create content for your business (or even for personal emails or social posts), Wikipedia's list can help you tweak it so it's genuinely readable and doesn't sound quite so robotic. As a human editor, you can manually scan the output of ChatGPT, Claude, or Gemini for the patterns Wikipedia identifies, and inject your own human touch when the chatbots start sounding a bit too AI.

There's an easier approach, too. I've found that pasting Wikipedia's entire "Signs of AI Writing" list into a chatbot as part of your prompt yields noticeably better writing than LLMs produce alone. Spinning up a social post for your band's first mall gig, or generating the landing-page copy for your crochet business's Square page? Prompt ChatGPT or Claude as you normally would, but tell the chatbot to avoid the items on this list. Then, paste in the full contents of Wikipedia's Signs page. Your LLM-generated writing will feel markedly better, with very little effort. Make sure to use your powers for good!

With its specificity, focus on stylistic rather than technical patterns, and attention to the subtle details of AI writing (see, Rule of Three!), Wikipedia's list is a fantastic tool for anyone who wants to spot lazy AI writing, or make their own AI content feel a bit less lazy and generic.
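The prompting workflow described above can be sketched as a small helper that wraps your normal request with the avoid-these-signs instruction. Everything here is a placeholder of my own devising: the instruction wording is illustrative, and `SIGNS_OF_AI_WRITING` stands in for the full text of Wikipedia's page, which you would paste in yourself.

```python
# Hypothetical sketch of the "paste the list into your prompt" workflow.
# SIGNS_OF_AI_WRITING is a placeholder; paste the real page text there.
SIGNS_OF_AI_WRITING = "...full text of Wikipedia's Signs of AI Writing page..."

def build_prompt(task: str) -> str:
    """Wrap a normal writing request with the avoid-these-signs instruction."""
    return (
        f"{task}\n\n"
        "Avoid every pattern described in the following list of "
        "signs of AI writing:\n\n"
        f"{SIGNS_OF_AI_WRITING}"
    )

prompt = build_prompt("Write a short social post announcing my band's first mall gig.")
```

You would then send `prompt` to whichever chatbot you normally use; the point is simply that the list rides along with every request.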
It's not just the tech industry that is being battered by mass layoffs this year. Grocery store giant The Kroger Co. (NYSE: KR) is cutting nearly 1,000 jobs from its corporate workforce. Here's why, and how the company's stock is reacting.

What's happened?

Yesterday, Kroger interim CEO Ron Sargent said that the grocery chain would lay off hundreds of corporate workers, according to a memo seen by Fast Company. The layoffs will total fewer than 1,000 employees. Kroger currently employs around 409,000 workers, the majority of whom work in its 2,700 grocery stores, which include Kroger, Food4Less, CityMarket, and more. In the memo, Sargent revealed that, "In the past few months, we have all looked for ways to simplify the organization, shift resources closer to our customers, and focus on work that creates the most value." However, the job cuts will not affect employees in the company's stores, distribution centers, or manufacturing facilities. The memo went on to say that the savings from the corporate layoffs would be reinvested in the company and used to help fund new locations, create store-level jobs, and offer price reductions to customers. The layoffs and memo were reported earlier by Reuters. A Kroger spokesperson confirmed the job cuts when contacted by Fast Company.

Layoffs follow store closures and failed merger

The newly announced layoffs mark another low point for Kroger over the past 12 months. Within that time frame, the company has incurred significant setbacks. The most dramatic of those is the failed merger between Kroger and Albertsons, which was valued at $25 billion. The merger would have seen the two grocery store chain giants join ranks, creating a new supermarket juggernaut. This would have allowed the newly formed company to compete against grocery offerings from archrivals Walmart and Amazon. However, in December, a federal judge blocked the merger on antitrust grounds.
The merger was later abandoned entirely and has led to legal proceedings between the companies. Kroger has also announced this year the closure of 60 of its stores, which are expected to shutter by the end of 2026. Kroger said that the closures would provide a modest financial benefit.

Kroger investors shrug off the job cuts

Despite the devastating effect these layoffs will have on the impacted workers, the news seems to have had no impact on Kroger's stock price. As of this writing, in premarket trading, KR shares are trading flat at $69 apiece. That's just one cent higher than their closing price of $68.99 yesterday. Year-to-date, Kroger shares are up about 12.8%. And over the last 12 months, KR shares have climbed more than 30%.

This story has been updated with Kroger's response to our inquiry.
Most enterprises treat AI implementation as a procurement problem. They evaluate vendors, negotiate contracts, and deploy solutions. But this transactional approach misses a fundamental truth: successful AI implementation isn't just about buying technology; it's about orchestrating an ecosystem. The companies winning with AI understand that implementation requires a web of relationships extending far beyond traditional vendor partnerships. They are building networks that include universities, regulatory bodies, ethicists, suppliers, and even customers. They recognize that in an environment in which AI capabilities evolve monthly, isolated implementation is a recipe for obsolescence. This article draws on insights from our forthcoming book, Reimagining Government (Faisal Hoque, Erik Nelson, Tom Davenport, Paul Scade, et al.), identifying the key components you will need to bring together to successfully orchestrate a comprehensive AI partner ecosystem.

The Expanding Universe of AI Partners

When enterprise leaders think about AI partnerships, they typically start and stop with technology vendors. This narrow view blinds them to the full spectrum of relationships that determine success or failure in AI implementation.

Academic institutions offer capabilities that money alone can't buy. Universities are where breakthrough AI research happens, often years before commercial availability. Building relationships with labs, research centers, and individual academics can provide access to cutting-edge research, specialized expertise, and talent pipelines that vendors can't replicate.

Government agencies are partners, not just regulators. Forward-thinking companies will work with agencies to shape AI standards, participate in regulatory sandboxes where they can test implementations and receive guidance, and collaborate on public-private initiatives that define industry practices.

Ethics and oversight partners are becoming essential as AI stakes rise.
Third-party ethicists provide a layer of credibility that internal roles can't match. Audit firms specializing in AI bias detection offer independent validation. Compliance specialists navigate the emerging patchwork of AI regulations. These partners don't just reduce risk; they become competitive differentiators when customers demand proof of responsible AI use.

Consultants and implementers bridge the gap between AI potential and operational reality. They build custom tools that integrate AI into existing workflows, train teams on new capabilities, and manage the organizational change that AI demands. The best ones will transfer knowledge while implementing systems, building internal capabilities that will endure after they leave.

Supply chain partners determine whether AI creates value or chaos. When your AI-optimized inventory system hands off to a supplier's manual processes, many of the benefits evaporate. Enterprises should look to coordinate AI decisions across their supply networks, encouraging shared model adoption and ensuring that AI-to-AI handoffs work seamlessly.

Customers are perhaps the most overlooked partners in AI implementation. They're not just users but cocreators, providing the feedback that shapes AI development, the data that improves models, and the trust that makes implementation possible.

Strategic Imperatives for Partnership Design

Building an AI ecosystem that creates value involves more than just accumulating partners. Relationships and networks need to be designed to amplify capability while maintaining flexibility. Enterprises should focus on:

Interoperability by design. Using proprietary models can lead to the creation of silos within enterprise networks. Selecting open-weight models helps ensure transparency and compatibility among partners.

Alignment across the value chain.
A pharmaceutical company implementing AI for drug discovery must ensure that contract research organizations, clinical trial partners, and regulatory consultants all work with compatible systems and standards. This doesn't mean that all partners must use identical tools, but it does mean establishing common data formats, shared evaluation metrics, and aligned security protocols.

Risk distribution. AI failures can cascade through networks. Smart partnership agreements distribute both opportunity and liability, ensuring that no single partner bears catastrophic risk while maintaining incentives for responsible development. This includes technical risks (system failures), ethical risks (bias, privacy violations), and business risks (market rejection, regulatory penalties).

Translation layers. When government agencies partner with commercial vendors, they often use specialized contractors who serve as a critical middle layer, translating generally applicable technology to meet agency-specific requirements. This middle layer adapts cloud-native solutions for secure environments, restructures Silicon Valley business models for public-sector procurement cycles, and bridges cultural gaps between tech innovation and public service. Private enterprises can adopt this model as well, using specialized partners to translate general-purpose AI products for their specific industry needs. These translation partners can package the technical adaptation skills, business-model alignment know-how, and cultural bridging that turn raw AI capability into operational value.

Critical Partnership Challenges

Three challenges consistently derail AI partnership ecosystems.

The IP question can become extremely complex in multiparty AI development. When your data trains a vendor's model that's customized by a consultant and integrated by a systems implementer, who owns what?
Imagine that a financial services firm discovers its AI vendor is using patterns learned from its fraud detection system to improve products sold to competitors. This might be permissible under the vendor's standard contract, so it is important to think ahead and draw explicit boundaries between vendor improvements and innovations rooted in the client's operations and data.

Lock-in risks extend beyond technology to psychology. Technical lock-in is a familiar problem: specific vendor systems can become so deeply integrated that switching becomes prohibitively expensive or onerous. But psychological lock-in is just as dangerous. Teams can become comfortable with familiar interfaces, develop relationships with vendor personnel, and resist exploring alternatives even when superior options emerge.

Coordination complexity multiplies with each partner. When an AI system requires inputs from five partners, processes from three more, and delivers outputs to 10 others, coordination becomes a full-time job. Version mismatches, update conflicts, and finger-pointing when problems arise can paralyze initiatives.

Building Your Partnership Strategy

Creating an effective AI ecosystem requires a systematic approach, not a sequence of ad hoc relationships.

Map your ecosystem needs across every dimension. Where are your technology gaps? Which expertise is missing? What ethical oversight do you need? How will implementation happen? Don't just list vendors; map the full spectrum of partnerships required for successful AI implementation. Include the nonobvious: the anthropologist who understands how your customers actually behave, the regulator who'll evaluate your system, the supplier whose cooperation determines success.

Design for flexibility. AI capabilities change monthly. Build partnerships that can evolve with them, with regular review cycles, clear performance metrics, and graceful exit provisions.
Avoid agreements that lock you into specific technologies or approaches. The perfect partner for today's needs may be obsolete tomorrow.

Create governance structures that acknowledge the complexity of AI partnership networks. Establish steering committees with senior representation from key partners. Define escalation paths before problems arise. Create shared metrics that reflect interconnected outcomes; when success requires five partners working together, individual KPIs create dysfunction.

Plan exits from day one. As we emphasize in our recent book Transcend, knowing how partnerships end is as important as knowing how they begin. Define termination triggers, data ownership post-partnership, and transition procedures. The best partnerships are those either party can leave without destroying value.

The AI revolution will not be won by technological advances alone. The strength of an enterprise's ecosystem will play a key role in separating the winners from the losers. Companies that can see past traditional vendor relationships to orchestrate comprehensive partnership networks will transform AI from an implementation challenge into a sustainable competitive advantage.
AI tools are disrupting creative work of all kinds, and Runway AI is a pioneer in the space, making major waves in Hollywood through partnerships with the likes of Disney and Netflix. Runway's cofounder and CEO Cristóbal Valenzuela dissects the company's breakneck growth, the risks and responsibilities of AI tool makers, and how AI is redefining both business expectations and our notion of creativity.

This is an abridged transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today's top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

You released your Gen-4 model not long ago. You had your Aleph video editing tool come out.

Correct.

And there are these other tools out there too now: Google's Veo 3, which I see folks using. Of course, there's OpenAI's Sora, Midjourney. What's the difference between all these? I mean, are you all utilizing similar engines, or are all these things popping up now because the compute has reached a certain place?

It's a combination of things. I mean, we've been working on this for almost seven, eight years, so there's a lot that we've learned after being alone and building this. I would say these days it's becoming more evident to many that the models are getting pretty good at tackling and doing a lot of different things, and so that becomes interesting for obvious business reasons. All models are different. I think all models are trained for different reasons. We tend to focus on professionals and folks who want to make great videos. This amazing model we've released only recently, just a couple of weeks ago, allows you to modify and create video using an existing video. That was never possible before.
And so those kinds of breakthroughs are just allowing, I guess, way more people to do much more interesting things.

I saw you in another video use voice prompts to create a video scene. Your tools generate camera angles and change objects. They extend a scene outward, filling in what isn't there. In one video, we see a cityscape, and then street lamps come on, and the windows of office and apartment buildings start blinking, and the lights are switching on and off in this very choreographed sequence. Can you explain how that was created?

It took us less than an hour to make that video, and you start with a scene, an initial video, and then you ask Runway for things you want to change in that video. And so, if it's daylight, we can ask the model, "Just show me a night version of that same scene." And so what the model will do is it will understand what's in that scene, and it will turn down the light metaphorically, but also literally: we'll just turn day into night, while maintaining pretty much the consistency for everything else. You might turn on the lights of the streets. And you can be much more specific. You can say, "Only turn on the lights on the left," or "Only turn on the streetlights while keeping everything else dark." You can be like, "Now start turning the lights one by one, starting from the one on the left to the one on the right."

So in a way, it's editing reality. Maybe you can think about it like that. You have an existing piece of content and you're working through that content with AI, asking it to modify it in whatever way you want, which is really fun, to be honest. It's something I think we've never had the chance of doing ever before, and so it's really fun to play with.

I've played with Runway a little. It's awesome, but I can't write a single natural language prompt and get a full film yet. I mean, there is craft and discipline to getting these tools to work at their potential.
I mean, are we going to get to the point where all you need to create a film is the idea for it? The vision and the production itself is all automatic?

I think a great concept in what you mentioned was "tools." This is a tool. It's a very powerful tool, and this tool allows you to do things that you couldn't do before. Knowing how to use the tool will always be important, and the tool is not going to do the work on its own if you don't know how to wield it, how to use it in interesting ways. And so I guess the answer to the question of will we ever get to a point where you can just prompt something and get exactly what you want? I guess the answer is kind of-ish. It depends on how well you know how to use the tool. I think about what tools people are using today to make films, like a camera. Can a camera help you win an Oscar? Of course. If you have a camera, will you win an Oscar? No. What makes a great filmmaker is, well, knowing where to point the camera, knowing how the camera works and functions and how you can tell a story with a camera. And I think that's no different from how we think about AI tools and Runway specifically, which is: it can help you go very far. You can do amazing things with it. You just have to learn how to think with it and work with it. And if you know how, then you'll get far.

You mentioned work that you do with studios in Hollywood. I know you've partnered with Netflix and Disney and AMC Networks and whatever. How are they using Runway's AI today? Because AI can be a little bit of a dirty secret in Hollywood. People are using it, but they don't always want to admit it.

Yeah, I think it's a tool; that's the answer. And so the best studios and the best folks in Hollywood have realized that, and they're using it in their workflows, combining it with other things they know pretty well. The thing is that there are no rules. You can start inventing them right now.
I mean, Aleph is a couple weeks old, and so people are figuring out things and ways of using the technology that we never thought possible, and that's what I enjoy the most. It's a general-purpose technology. It can be used in ways that are diverse and creative and unique, and if you're creative enough, you're going to uncover those things. At some point in the future, there may be a whole different medium in the way you do it.

Right now, I can imagine they take ideas and they create essentially a prototype of a film to show to get ideas through. Is that part of how it's used?

You can think about it, broadly speaking, in two stages. There's preproduction and postproduction. Preproduction is, well, writing the script and doing art direction and selecting characters and casting and location scouting and just preparing to make the stuff. And so there are many use cases for Runway in there. Of course, the obvious ones are storyboarding and helping you with writing the script and helping you with casting characters and seeing how they're going to behave and what they're going to do. And then in post, once you film or record something, there's a lot of visual effects and things that you need to apply and change in the videos themselves. And so let's take the example we were speaking about before, turning day into night. Let's say you've recorded something and it happens that someone changed the script later, and the shot that you recorded had to happen at night. Well, the way you would do it before was that you had to go back and shoot again and spend more time and fly the actors in again and do the whole thing. Or now you can go into Runway and just ask the model to turn that scene into night, and it will do it for you.

So it's less of them coming to Runway and typing, "Get me a multi-award-winning film now, fast and cheap," and more about, well, I have this problem, it's very expensive to solve. I have a tool now that can help me do it faster and better. Can I use it?
Will it make my movie? No, but it will help you very much in getting there faster and cheaper.