2025-08-25 08:30:00 | Fast Company

Although AI is changing the media, how much it's changing journalism is unclear. Most editorial policies forbid using AI to help write stories, and journalists typically don't want the help anyway. But when consulting with editorial teams, I often point out that, even if you never publish a single word of AI-generated text, it still has a lot to offer as a research assistant.

Well, that assertion might be a bit more questionable now that the Columbia Journalism Review has gone and published its own study about how AI tools performed in that role for some specific journalistic use cases. The result, according to CJR: AI can be a surprisingly uninformed researcher, and it might not even be a great summarizer, at least in some cases.

Let me stress: CJR tested AI models in journalistic use cases, not generic ones. For summarization in particular, the tools (including ChatGPT, Claude, Perplexity, and Gemini) were asked to summarize transcripts and minutes from local government meetings, not articles or PowerPoints. So some of the results may go against intuition, but I also think that makes them much more useful: for artificial intelligence to be the force for workplace transformation it's often hyped to be, it needs to give helpful output in workplace-specific use cases.

The CJR report reveals some interesting things about those use cases and how journalists approach AI in general. But mostly it shows how badly we need more of this: systematic testing of AI that goes beyond the ad hoc experimentation that has too long been the default for many organizations. If the study shows nothing else, it's that you don't need to be an engineer or a product designer to judge how well AI can help in your job.

Putting AI to the newsroom test

To test AI's summarization abilities, the evaluators (who included academics, journalists, and research assistants) wrote multiple prompts to elicit short and long summaries from each tool, then ran them several times. A weakness of the report is that it doesn't reveal the outputs, so we can't see for ourselves how well the tools did. But it does say it quantified factual errors to evaluate accuracy, comparing the AI summaries with human-written ones. Without seeing the outputs, though, it's hard to know how to improve the prompts to get better results.

The study says it got good results for short (200-word) summaries but saw inaccuracies and missed facts in longer ones. One surprising outcome was that the simplest prompt, "Give me a short summary of this document," produced the most consistently good results, but only for short summaries.
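To make that kind of test concrete, here is a minimal sketch of a vague-versus-precise prompt comparison. It assumes the OpenAI Python SDK and an API key in the environment; the model name, file name, and prompt wording are illustrative, not the study's actual materials.

```python
# Minimal sketch: comparing a vague summarization prompt with a precise one
# on a local-government meeting transcript. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment; the model
# name and file path are illustrative, not taken from the CJR study.
from openai import OpenAI

client = OpenAI()
transcript = open("city_council_minutes.txt").read()  # hypothetical source file

prompts = {
    "vague": "Give me a short summary of this document.",
    "precise": "Write a 200-word summary of this meeting transcript. "
               "Include every vote taken and who proposed each motion.",
}

for label, instruction in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a careful news researcher."},
            {"role": "user", "content": f"{instruction}\n\n{transcript}"},
        ],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```

Running both prompts against the same transcript, several times each, is enough to start judging consistency and factual coverage for yourself.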
The study also looked at research tools, specifically for science reporting. I love the specificity here: the CJR researchers were very particular about the use case, giving the tool a paper and then asking it to perform a literature review (finding related papers, citing them, and extracting the overall consensus). They also chose their targets deliberately, evaluating AI-powered research services like Consensus and Semantic Scholar instead of the usual general chatbots.

On this, the results were arguably even worse. The tools typically would find and cite papers that were completely different from what a human picked for a manually created literature review, and even different from the other tools' picks. And when the researchers ran the same prompts a few days later, the results would change again.

Getting closer to the metal

I think the study is instructive beyond the straightforward takeaways, such as using AI only for short summaries and thinking twice before using AI research apps for literature review.

Prompt engineering matters: I get that the three different prompts for summaries were probably designed to simulate casual use, the kind of natural-language text a busy journalist might dash off. And maybe AI should ultimately produce good results when you do that. But for out-of-the-box tools (which is what they used), I would recommend more thoughtful prompting. This doesn't have to be a big exercise. Simply going over your prompt to make vague language ("short summary") more precise ("200-word summary") would help. The researchers did ask for more detail in two of the three prompts, but the study criticizes the longer summaries for not being comprehensive when the language in the prompts doesn't specifically mention comprehensiveness. Asking the AI to check its own work sometimes helps, too.

The app layer struggles: Reading the part about the various research apps not producing good results had me nodding along. I don't want to read too much into this, since the study was narrowly focused on research apps with a very specific use case, but I'm currently living through something similar while experimenting with AI content platforms for my plans at The Media Copilot. When you use a third-party tool, you're an extra step removed from the foundation model, and you miss the flexibility of being "closer to the metal."

I think this points to a fundamental misunderstanding of the so-called "app layer." Most AI apps will put a veil over system prompts and model pickers in the name of simplification, but it isn't the UX win that many think it is. Yes, more controls might confuse AI newbies, but power users want them, and it turns out the gap between the two groups might not be very large. I think this same misunderstanding is what stymied the GPT-5 launch. Removing the model picker (where you could choose between GPT-4o, o4-mini, o3, etc.) seemed like a smart, simplifying idea, but it turned out ChatGPT users were more sophisticated than anyone had thought. The average ChatGPT Plus subscriber might not have understood what every model does, but they knew which ones worked for them.

Iterate, iterate, iterate: The study's results are helpful, but they're also incomplete. Testing outputs from models is only the beginning of the process of building an AI workflow. Once you've got them, you iterate: adjust your prompts, refine your model choice, and try again. And again. Producing consistent results that save time isn't something you'll get right on the first try. Once you've found the right combination of prompting, model, and context, then you'll have something repeatable.
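As a sketch of what that iteration loop can look like in practice (again assuming the OpenAI SDK, with a placeholder model and document): run the same prompt several times and lay the outputs side by side, so inconsistencies jump out before you commit to a workflow.

```python
# Minimal sketch of the "run it several times" discipline: send the same
# prompt repeatedly and collect the outputs for side-by-side human review.
# Model name and file path are illustrative assumptions, as above.
from openai import OpenAI

client = OpenAI()

def run_trials(prompt: str, document: str, n: int = 5) -> list[str]:
    """Collect n outputs for the same prompt so consistency can be reviewed."""
    outputs = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": f"{prompt}\n\n{document}"}],
        )
        outputs.append(response.choices[0].message.content)
    return outputs

doc = open("city_council_minutes.txt").read()  # hypothetical source file
for i, summary in enumerate(run_trials("Write a 200-word summary.", doc), 1):
    print(f"=== Trial {i} ===\n{summary}\n")
```

If the five outputs disagree on facts, that is a signal to tighten the prompt or change models before anyone relies on the result.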
Coming halfway

Where does this leave newsrooms? This might sound self-serving, since I train editorial teams for a living, but after reading this report I'm more convinced than ever that, despite predictions that apps and software design will abstract away prompting, AI literacy still matters. Getting the most out of these tools means equipping journalists with the skills they need to craft effective prompts, evaluate results, and iterate when necessary.

Also, the CJR study is an excellent template for testing tools internally. Get a team together (they don't need to be technical), craft prompts methodically, and then evaluate them. But then iterate. Keep experimenting. Find what consistently gets good results: not just quality outputs, but a process that actually saves time. Just doing "vibe checks" won't get you very far.

Because there is one more thing the study is off-target about. When a journalist considers how to complete a task, the choice usually isn't between a machine output and a human one. It's the machine output or nothing at all. Some might say that's lowering the bar, but it's also putting a bar in more places. And with some training, experimentation, and iteration, raising it inch by inch.



2025-08-25 08:00:00 | Fast Company

Find focus and fatigue-proof your routine with these six standout productivity books of 2025.

99% Perspiration: A New Working History of the American Way of Life
By Adam Chandler
An enlightening and entertaining interrogation of the myth of American self-reliance and the idea of hard work as destiny. Listen to our Book Bite summary, read by author Adam Chandler, in the Next Big Idea App, or view on Amazon.

The Ambition Trap: How to Stop Chasing and Start Living
By Amina AlTai
Drawing on her work with Fortune 500 leaders, Olympic gold medalists, start-up founders, and former girlbosses, AlTai guides you through the process of reconciling your ambition, starting with healing the core wounds and insecurities currently driving you. Listen to our Book Bite summary, read by author Amina AlTai, in the Next Big Idea App, or view on Amazon.

The Brain at Rest: How the Art and Science of Doing Nothing Can Improve Your Life
By Joseph Jebelli
The definitive, science-backed guide to achieving contentment, creativity, and success by letting your brain decompress. Listen to our Book Bite summary, read by author Joseph Jebelli, in the Next Big Idea App, or view on Amazon.

Sleep Groove: Why Your Body's Clock Is So Messed Up and What to Do About It
By Olivia Walch
This myth-busting guide to sleep is the perfect introduction to how circadian science can demystify your nights and help reset your days. Listen to our Book Bite summary, read by author Olivia Walch, in the Next Big Idea App, or view on Amazon.

No New Things: A Radically Simple 30-Day Guide to Saving Money, the Planet, and Your Sanity
By Ashlee Piper
From an award-winning sustainability expert, a witty, no-nonsense guide to regaining control over your time, consumerist impulses, and financial and mental wellness. Listen to our Book Bite summary, read by author Ashlee Piper, in the Next Big Idea App, or view on Amazon.

Sound Affects: How Sound Shapes Our Lives, Our Wellbeing, and Our Planet
By Julian Treasure
A lively, engaging narrative that takes readers on an epic journey spanning disciplines, continents, and centuries, spotlighting sound's incredible impact on our bodies, feelings, thinking, and behavior. Listen to our Book Bite summary, read by author Julian Treasure, in the Next Big Idea App, or view on Amazon.

This article originally appeared in Next Big Idea Club magazine and is reprinted with permission.



2025-08-25 08:00:00 | Fast Company

AI is transforming work, but it's not just the tools that matter. It's how teams use them, and who they become in the process, that sets them apart.

While many organizations scramble to integrate AI into every corner of the business, the best teams are asking better questions. They're not just moving faster; they're working with greater intention. They protect what's human: trust, creativity, and long-term thinking. And they reshape how they collaborate, communicate, and grow, turning disruption into a durable advantage.

For Eric, the SVP of product at a global advertising technology company, the mission was clear: lead AI integration across all business units. But not everyone was on board. Peer teams were skeptical, overloaded, and unsure whether AI would help or hinder their work. Instead of forcing adoption, Eric built a cross-functional AI Champions Circle. Their goal wasn't to become experts overnight. It was to explore, experiment, and learn together. They surfaced use cases, skill gaps, and unexpected opportunities to showcase the company's strengths. One early win, a prototype that automated client reporting, showed how AI could streamline customer delivery, create more space for higher-value work, and give skeptics a reason to lean in. "We didn't need people to become prompt engineers," Eric said. "We needed them to think more boldly and trust each other enough to try."

Through our work advising dozens of companies facing similar dynamics (Kathryn as an executive coach and keynote speaker, Jenny as an executive advisor and Learning & Development expert), we have seen a clear pattern: high-performing teams in this new era follow three simple but powerful habits.

1. Build Skills That AI Can't Replace

AI can assist and accelerate work, but it can't replace sound judgment, real curiosity, or ethical discernment. These are distinctly human strengths, and according to a 2025 report by the World Economic Forum, they rank among the top emerging skills for the future of work.

What separates high-performing teams is their ability to use AI as a starting point, not a crutch. They ask sharper questions. They challenge assumptions. They make connections across functions and domains. These are not just soft skills; they're power skills that generate meaningful insight and influence.

After launching the AI Champions Circle, Eric noticed something unexpected. The group wasn't just surfacing use cases; it was improving how the company framed problems. "The questions got better," he told us. "It wasn't just, 'What can this tool do?' It became, 'What are we trying to solve, and how could this help?'"

Eric began encouraging teams across the business to build that same muscle. They shifted from asking what AI could do to asking where their thinking mattered most. Team members started exploring: Where does this work require human judgment? Where are we relying too heavily on automation? What are the tradeoffs we need to weigh?

That shift gave people permission to think more critically, not just execute faster. The result wasn't just better output; it was better judgment. Teams grew more confident in evaluating options, analyzing risks, and owning what only they could uniquely contribute.

2. Focus on Outcomes, Not Optics

When AI speeds up how teams generate content, analyze data, or respond to requests, it's easy to confuse motion with impact. Leaders may see faster email replies, more polished updates, or nonstop activity on project trackers, but that doesn't mean the team is aligned or productive.
High-performing teams focus less on the illusion of productivity and more on strategic clarity, a key element of organizational health. According to McKinsey's Organizational Health Index, companies that consistently communicate direction and hold teams accountable outperform their peers. Google's team effectiveness research reinforces this: structure and clarity is a key condition for team success.

One leader we worked with, Cheryl, the head of customer experience at a fast-growing SaaS company, noticed her team was delivering with greater speed but less depth. They were generating AI-powered customer sentiment reports, support email templates, and team dashboards, but rarely stepping back to assess whether those outputs were actually solving the right problems. "Everything looked efficient on paper," she told us, "but we weren't moving the needle on what mattered."

To refocus, Cheryl launched a lightweight Slack ritual called Output vs. Outcome Fridays. Every Friday morning, team members posted one thing they worked on and a sentence about how (or whether) it advanced a core goal. It gave the team a way to pause and reconnect their effort with their purpose. Over time, this mitigated performative work and reinvested that time into customer-centric improvements.

Pro tip: Start the feedback cycle. Now that your team has a rhythm for internal reflection, add an external feedback loop by connecting with your customer, whether that's an external client or an internal partner like sales, finance, or HR. Have each team member identify one stakeholder to engage for real-time input. Start with a quick check-in of two or three questions, and add quarterly conversations to deepen the learning. This helps tie your outcomes to the experience of the people you serve and ensures your efforts create meaningful impact.

3. Leave Room for Strategic Play

AI can tempt teams to over-optimize for efficiency, often at the expense of creativity, judgment, and long-term thinking. But without space to think and try, teams become reactive, not strategic.

We've seen the most effective teams treat curiosity like a business advantage. They embed space to explore, like dedicated days of the week to experiment with new tech, or sprint retrospectives where team members share quick experiments and insights. They carve out time to try new tools, test ideas in low-stakes ways, and share what they're learning. That kind of structured experimentation isn't a distraction; it's a discipline. And it often surfaces key insights long before any formal pilot begins. McKinsey's research shows that cultures rooted in learning and innovation are more adaptive and resilient, especially during periods of disruption and uncertainty.

Cheryl applied the same thinking to how her team explored AI. She introduced a recurring segment in her team's sprint retrospectives called AI Experiments. Each week, a different team member shared one thing they tested: a new prompt, a time-saving tool, or even a failed experiment. The goal wasn't to be right; it was to get curious. This created a low-stakes, high-learning environment. People started volunteering ideas, sharing small wins, and building on each other's discoveries. The team didn't just become more efficient. They became more creative, collaborative, and resourceful, increasing their collective confidence.

Pro tip: Create a shared AI experiments tracker. Start a lightweight hub (in Notion, Google Docs, or a dedicated Slack channel) where team members can log quick notes on what they're testing.
Keep it simple and informal: no slides, no pressure. Suggested fields:

- Tool or prompt used
- What worked
- What didn't
- What we'd try next

The goal is to normalize small bets, share learning in real time, and build momentum across the team.
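If it helps to picture the log, here is a minimal sketch of one entry using exactly those fields. The class name and example values are hypothetical, and a spreadsheet row or Slack message template would serve just as well.

```python
# Minimal sketch of a shared "AI experiments" log entry, using the fields
# suggested above. A dataclass keeps entries consistent; the names and
# example values here are illustrative, not a prescribed tool.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIExperiment:
    tool_or_prompt: str
    what_worked: str
    what_didnt: str
    next_try: str
    logged_on: date = field(default_factory=date.today)

entry = AIExperiment(
    tool_or_prompt="'200-word summary' prompt on meeting minutes",
    what_worked="Short summaries were accurate and consistent",
    what_didnt="Longer summaries dropped facts",
    next_try="Ask the model to check its output against the source",
)
print(entry)
```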
The Real AI Advantage Is Human

AI is only as powerful as the people who use it with intention. The most effective teams aren't winning because they've mastered the latest tools; they stand out because they've adopted the right habits, redefining how they think, decide, and learn together. They prioritize judgment over automation, experimentation over perfection, and shared purpose over performative productivity.

What gives teams a lasting edge isn't access to better technology. It's the courage to slow down, ask better questions, and lead with what only humans can bring: discernment, trust, and adaptability. In the age of AI, your competitive advantage isn't artificial. It's deeply human.
