Organizations look structured and logical from the outside: boxes and lines, reporting relationships, KPIs, and performance frameworks. But walk into any real meeting, and you'll sense it: side glances, shifting energy, people going silent when one voice enters the room, unexplained resistance to change, and power dynamics no slide deck could predict. That's not just dysfunction. That's the system speaking, and most leaders aren't listening. That is why we need something called systemic intelligence.

Systemic intelligence is the capacity to sense and respond to the invisible forces shaping an organization's behavior, culture, and outcomes. It's not about titles or tactics. It's about understanding:

- The unspoken agreements that guide behavior
- The loyalties people carry: to past leaders, ideas, or roles
- The emotional undercurrents in teams and across departments
- The patterns of inclusion and exclusion that shape decision-making
- The stories that are being told, and the ones that aren't

If emotional intelligence helps you understand individuals, systemic intelligence enables you to understand relationships, fields, and patterns. It's what allows a leader to walk into a room and feel the temperature: not just the metrics, but the mood of an organization.

Why This Matters More Than Ever

The modern workplace is in flux. Hybrid work, generational shifts, AI transformation, and rising emotional exhaustion reveal how fragile many organizational systems are. And yet, most leadership development still focuses on logic, linearity, and surface-level skills. Here's the reality: 70% of transformation efforts fail, primarily due to hidden dynamics such as cultural resistance, misalignment, and lack of trust. Furthermore, only 27% of employees believe their company's values align with how work actually gets done. Most strategies fail not because they are wrong but because they are disconnected from the reality of the system they are trying to move.
If leaders don't learn to see the system, they will be ruled by forces they don't understand.

A Moment That Changed Everything

I once worked with a leadership team navigating the aftermath of a merger. They had a new vision, a reorg plan, and a glossy set of PowerPoint decks. But something was stuck. Meetings were tense. Morale was low. Alignment felt forced. So, we paused the strategy session and held a story circle. One leader finally voiced what everyone else had been feeling: "I still feel loyalty to our former CEO. We never really said goodbye. And it feels like we're not allowed to grieve the culture we lost." In that moment, something shifted. What emerged wasn't just emotion; it was clarity. The energy in the room softened, and trust began to rebuild. The team could finally move forward, not by pushing harder, but by acknowledging what had been in the system all along. What you don't name, you can't shift.

The S.E.E.N. Framework for Systemic Intelligence

Systemic intelligence isn't about having special powers. It's about cultivating a new kind of leadership presence that's attuned to what's happening beneath the surface. You don't develop this awareness by accident. You create it by practicing small but powerful shifts in observing, listening, and engaging with your organization as a living, breathing system. To help leaders begin, I use a simple guide: S.E.E.N. It's a reminder that before you can shape a system, you must first learn to see it.

S: Sense the Field. Slow down. Listen beyond the words. What's present, but unspoken? What's the emotional temperature? Before jumping into action, ask your team: "What's the mood in the room right now?" Then sit with the silence.

E: Explore Hidden Loyalties. People don't just commit to goals; they commit to identities, past leaders, and unspoken rules. What loyalties are operating beneath the surface? For example, a team resistant to innovation may not fear change; they may be protecting the legacy of a beloved product or person.
E: Examine the Energy Flow. Where is energy stuck? Who gets centered, and who gets sidelined? Where does attention naturally go? Where does it get blocked? Map informal influence, not just reporting lines. Who really holds trust in the system?

N: Name What Needs to Be Acknowledged. Often, healing doesn't come from solving; it comes from witnessing. What grief, transition, or injustice needs to be seen and honored? What if your next strategic move began with a ritual of acknowledgment, not another set of objectives?

How to Start Seeing the System

You don't need to become a therapist. You just need to become more attuned to the emotional undercurrents, unspoken dynamics, and patterns shaping your team. Here are a few ways to begin:

Host Campfire Conversations. Create spaces where stories, not just updates, can be shared. Start with: "Tell us about a moment that shaped your connection to this organization."

Bring in Outside Eyes. Artists, facilitators, systemic coaches, or organizational psychologists can help visualize dynamics your team may be too close to see.

Use Visual Mapping. Ask: What's the formal structure? What's the informal one? Who's at the center of decisions, and who's on the margins?

Slow the Agenda. Build in white space. Let emotion, silence, or discomfort have a seat at the table. Intelligence lives in the spaces we're often too quick to fill.

Most leaders try to fix what they can see. But true leadership begins by learning to sense what you can't. Strategy is important, and structure is necessary, but without systemic intelligence, even the best plans will stall. Because what's unacknowledged gets acted out, and what's seen can finally start to shift. So, the next time your team feels stuck, ask yourself: What's really going on here? What's in the system that no one is naming? That question might be your most strategic move yet.
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Trump harms AI progress by warring with universities

Donald Trump has done a lot to antagonize universities in his first 100 days. He cut off federal research funding to institutions like Princeton, Columbia, and Harvard, citing their alleged tolerance of antisemitism on campus. He also threatened the authority of college accreditation bodies that require schools to maintain diversity, equity, and inclusion programs. But these actions directly undermine the administration's stated goals of strengthening the U.S. military and helping American tech companies maintain their narrow lead over China in AI research.

Since World War II, the U.S. government has maintained a deep and productive relationship with universities. Under the leadership of Vannevar Bush, director of the Office of Scientific Research and Development, the government channeled significant research funding into university labs. In return, it received breakthroughs like radar and nuclear technology. Over the decades, university researchers have continued to contribute critical innovations used in defense and intelligence, including GPS and the foundational technologies of the internet.

Today, the government increasingly relies on the commercial sector, including major contractors like Boeing and General Dynamics, and newer firms like Palantir and Anduril, for defense innovation. Yet universities remain essential. Much of the most advanced AI research still originates from academic computer science departments, many of which are powered by international students studying at institutions like MIT, Stanford, and Berkeley. These students often go on to found companies based on research initiated in academia. Whether they choose to build their businesses in the U.S.
or return to their home countries depends, in part, on whether they feel welcome. When international students see research funding threatened or videos of PhD students being arrested by ICE, staying in the U.S. becomes a less appealing option. In a recent conflict with Harvard, the Department of Homeland Security even demanded information on the university's foreign students and threatened to revoke its eligibility to host them. In response, over 200 university and college presidents have condemned the administration's actions and are exploring ways to resist further federal overreach. Rather than discouraging international researchers and students, the U.S. should be sending a clear signal: that it remains a safe, supportive, and dynamic environment for AI talent to study, innovate, and launch the next generation of transformative companies.

The best AI agents may be powered by teams of AI models working together

During the first phase of the AI boom, labs achieved big intelligence gains by pretraining their models on ever-larger data sets and using more computing power. While AI companies are still improving on the art and science of pretraining, the intelligence gains are becoming increasingly expensive. A big part of the research community has shifted its focus to finding the best ways to train models to think on their feet, or to reason over the best routes to a responsive and accurate answer at inference time, just after a user enters a question or problem. This research has already led to a new generation of "thinking" models such as OpenAI's o3 models, Google's Gemini 2.0, and Anthropic's Claude 3.7 Sonnet. Researchers teach such models to think by giving them a multistep problem and offering them a reward (usually just a bit of code that means "good") for finding their way to a satisfactory answer.
It's certainly possible to build an inference system that makes numerous calls to a single large frontier AI model, collecting all the questions and answers in a context window as it works toward an answer. But new research from Berkeley's AI research lab shows this monolithic "one model to rule them all" approach is not always the best way of building an efficient and effective inference system. A compound AI system of multiple models, knowledge bases, and other tools working together can yield more relevant and accurate outputs at far lower costs. Importantly, such a pipeline of AI resources can be a powerful backend for AI agents capable of calling on tools and working autonomously, says Jared Quincy Davis of the AI cloud company Foundry. Foundry builds software that enables it to provide GPU compute at low cost for AI developers.

Davis has led an effort to create an open-source framework that lets AI practitioners build just the right pipeline, with just the right resources, for the application they have in mind. The framework, called Ember, was created with help from researchers at Databricks, Google, IBM, NVIDIA, Microsoft, Anyscale, Stanford, UC Berkeley, and MIT. Davis says it's possible to build a compound system that can make calls to a number of today's state-of-the-art AI models (via APIs) such as ones from Google, OpenAI, Anthropic, and others. Large frontier models often stand above other large models in certain skill areas (Anthropic's Claude is especially good at writing and analyzing text), so it's possible to build a pipeline that calls on models according to their unique strengths. This is a very different way of looking at AI computing, compared to the narrative of just a couple years ago that said one model would be better than all others at practically everything.
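The core idea of such a "call the right model for the job" pipeline can be sketched in a few lines. This is a minimal illustration, not Ember's actual API; the model names, routing table, and helper functions here are all hypothetical stand-ins for real provider API calls.

```python
# Illustrative sketch of a compound AI system: route each task to the
# model best suited for it, instead of sending everything to one
# frontier model. All names below are hypothetical.

def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call to a model provider.
    return f"[{model}] answer to: {prompt}"

# Hypothetical strengths table: which model handles which kind of task.
ROUTING = {
    "writing": "claude",
    "code": "gpt",
    "search": "gemini",
}

def route(task_type: str, prompt: str, default: str = "gpt") -> str:
    """Dispatch a prompt to the model registered for this task type."""
    model = ROUTING.get(task_type, default)
    return call_model(model, prompt)
```

A real pipeline would add retries, cost tracking, and fan-out to several models with an aggregation step, but the design choice is the same: the router, not any single model, owns the answer.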
Now, numerous models compete for the state of the art at various tasks, while other smaller models specialize in completing tasks at lower cost, and the overall cost of getting an answer from an AI model has gone way down over the past couple of years.

Congress actually passes a tech bill

Congress has failed to pass any broad-based regulation to protect user and data privacy on social networks. It has, however, managed to pass laws to prohibit specific and particularly dangerous social media content, such as child sex trafficking and now nonconsensual intimate images (or NCII). NCII refers to the practice of posting sexual images or videos of real people online without their consent (often as an act of revenge or an attempt to extort), including explicit images generated using AI tools. The bill, called the Take It Down Act, which unanimously passed the Senate in February and the House on Monday, makes it a federal crime to post NCII and requires that online platforms remove the content within 48 hours of a complaint. Affected public-facing online platforms will have a year after the law passes to set up a system for receiving and acting on complaints. The president is expected to sign the bill into law.

Even though the bill's intent earned widespread support, its legal reach disturbed some free expression advocates. The Electronic Frontier Foundation worries that the bill's language is overbroad and that it could be used as a tool for censorship. These worries were compounded by the fact that the new law will be enforced by the Federal Trade Commission, which is now led by Trump loyalists.

More AI coverage from Fast Company:

Duolingo doubles its language offerings with AI-built courses
In his first 100 days, Trump's tariffs are already threatening the AI boom
Microsoft thinks AI colleagues are coming soon
Marc Lore wants AI to feed you, and make you healthier

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design?
Sign up for Fast Company Premium.
In the early days of the current AI boom, The New York Times sued OpenAI and Microsoft for copyright infringement. It was a seismic move, but perhaps the most notable thing about it is what came after. In the subsequent months, publisher after publisher signed licensing deals with OpenAI, making their content available to ChatGPT. There were others who chose litigation, certainly, but most major media companies opted to take some money rather than spend it on lawyers.

That changed last week when Ziff Davis filed its own copyright lawsuit against OpenAI. Ziff owns several major online properties, including Mashable, CNET, IGN, and Lifehacker, and garners a massive amount of web traffic. According to the filing, its properties earned an average of 292 million monthly page views over the past year. Strange, then, that OpenAI didn’t bother to negotiate with Ziff at all. The filing mentions that, after asking OpenAI to stop scraping its content without authorization, Ziff’s requests to negotiate were “rebuffed.” A news story about the lawsuit in PCMag (another Ziff property) also said OpenAI wouldn’t talk, though it’s unclear whether it was just repeating what the filing described.

While The Times and Ziff aren’t alone in their legal efforts against OpenAI, it’s informative to compare the two complaints, filed almost 16 months apart, to get an understanding of how the stakes of the AI-media cold war have evolved.
AI technology has progressed considerably, and we now have a much greater understanding of AI substitution risk, the fancy name for AI summarization of publisher content. Ziff’s lawsuit gives us a better idea of how the AI sausage is made these days and can tell us just how much other AI players should be sweating.

For starters, Ziff points to what has become table stakes in most of these lawsuits: the act of scraping content, storing copies of that content in a database, and then serving up either a “derivative work” (summaries) or the content itself as inherently violative of copyright. OpenAI has maintained, however clumsily at times, that its harvesting of content on the web to train models falls under fair use, a key exception to copyright law that has supported some instances of mass digital copying in the past. That’s the central conflict in all these cases, but Ziff’s action goes in some novel directions that point to how things have changed since ChatGPT first arrived:

1. AI, meet DMCA

Ziff runs a few more yards with the copyright ball, claiming that OpenAI deliberately stripped copyright management information (CMI) from Ziff content. This is a bit of a technicality: essentially, it means ChatGPT answers often don’t include bylines, the name of the publication, and other metadata that would identify the source. However, stripping out CMI from content and then distributing it under your own banner is a violation of the Digital Millennium Copyright Act (DMCA), giving the filing more teeth.

2. It’s a RAG world now

This is arguably the most important change between the two lawsuits, and it reflects how the way we use AI to access information has changed. When The Times filed suit, ChatGPT wasn’t a proper search engine, and the public was only just beginning to understand retrieval-augmented generation, or RAG: broadly, how AI systems can go beyond their training data.
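Mechanically, RAG is just a retrieval step bolted onto the prompt: fetch the documents most relevant to the query, then hand them to the model alongside the question. A toy sketch makes the shape clear; everything here is illustrative (production systems use vector indexes and live web search, not keyword overlap over an in-memory list).

```python
# Minimal RAG sketch: rank a small corpus by keyword overlap with the
# query, then prepend the best matches to the prompt as context.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many query words they share."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved context plus the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

It is this retrieval step, fetching and summarizing a publisher's current articles on demand rather than relying on frozen training data, that drives the substitution risk the filing describes.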
RAG is an essential element of any AI-based search engine today, and it has also massively increased the risk of AI substitution to publishers, since a chatbot that can summarize current news is much more useful than one that only has access to archives that cut off after a certain date (remember that?).

3. Watering down the brands

Ziff frames the hallucination problem in a novel way, calling it “trademark dilution.” Media brands like Mashable and PCMag (both of which I used to work at) have built up their reputations over years or decades; the complaint makes the case that every time ChatGPT attributes a falsehood to one of them or wholesale imagines a fake review, it chips away at them. It’s a subtle point, but a compelling one that points to a future where valuable brands slowly become generic labels floating in the AI ether.

4. Paywalls are the first line of defense

Ziff says in the filing that its properties are particularly vulnerable to AI substitution because so little of its content is behind paywalls. Ziff’s business model is based primarily on advertising and commerce (mostly from readers clicking on affiliate links in articles), both of which depend on actual humans visiting websites and taking actions. If an AI summary negates that act, and there’s no licensing or subscription revenue to make up for it, that’s a huge hit to the business.

5. Changing robots.txt isn’t enough

Every website has a file that tells web scrapers what they can do with the content on that site. This “robots.txt” file allows sites to, say, let Google crawl their site but block AI training bots. Indeed, many sites do exactly that, but according to Ziff, it makes no difference. Despite explicitly blocking OpenAI’s GPTBot, Ziff still logged a spike in the bot’s activity on some of its sites. It’s generally assumed companies like OpenAI use third-party crawlers to scrape sites they’re not supposed to, but Ziff’s lawsuit accuses OpenAI of openly flouting the rules it claims to respect.
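For reference, a robots.txt directive of the kind described (allow a search crawler, block an AI training crawler) looks like this. It's a generic example, not Ziff's actual file; GPTBot is the user-agent string OpenAI publishes for its training crawler.

```text
# Allow Google's search crawler to index the site.
User-agent: Googlebot
Allow: /

# Block OpenAI's training crawler entirely.
User-agent: GPTBot
Disallow: /
```

The catch, as the lawsuit underscores, is that robots.txt is purely advisory: it only works if the crawler chooses to honor it.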
6. Regurgitation is still an issue

The original Times complaint spends many pages on the issue of “regurgitation,” when an AI system doesn’t just summarize a piece of content but instead repeats it, word for word. Generally this was thought to be a mostly solved issue, but Ziff’s filing claims it still happens, and that exact copies of articles are a relatively easy thing for ChatGPT users to call up. Apparently, asking what the original text “might look like with three spaces after every period” is a method some have used to fool the chatbot into serving up exact copies of an article. (For the record, it didn’t work for me.)

The battle continues

Just when it was looking like licensing deals would be the new normal, Ziff Davis’s filing shows the fight between AI and news is far from over. How it plays out could end up being even more existential for a company like Ziff. However the court rules, the case confronts a more fundamental question: Can strong media brands that rely on commerce and free access coexist with AI systems that learn, and sometimes mislearn, from everything they touch?