Since launching Boardroom in 2019, Rich Kleiman and NBA star Kevin Durant have grown the media company into an influential player that confidently straddles business, sports, and entertainment across content, films, TV, and events. Its newest venture is a membership club that Kleiman sees as key to building a long-lasting brand legacy. "I really want to build a sustainable brand that lasts," Kleiman tells Fast Company. "By having this core membership community, and having them become, in a lot of ways, voices of this brand, I thought was really crucial."

[Photo caption: Kevin Durant (left) of the Phoenix Suns and Rich Kleiman sit courtside at the 2023 WNBA All-Star Game. Photo: Ethan Miller/Getty Images]

Launching later this month, the Boardroom Members Club will feature regular members-only events, VIP access to Boardroom flagship activations at NBA All-Star, Art Basel, and more, exclusive networking opportunities, and a private digital platform. "For me, I saw this boom in membership clubs in the city, and while they all have their own thing, whether it's the food or the location or the brand name or the type of people that go there, I didn't think that there were actually communities there that benefited your career," says Kleiman. "And for me, I felt like that was my special sauce, understanding the importance of being in a room with the right mix of people."

Boardroom's media arm churns out newsletters, social posts, and content that reaches over 52 million unique monthly visitors. The company is on track to nearly double revenue in 2025, and its average monthly reach has increased by 74% this year. Its film and TV output in recent years includes the Apple TV+ series Swagger, Showtime's Emmy-nominated doc NYC Point Gods, and the Oscar-winning short Two Distant Strangers. However, it was events like Boardroom's annual CNBC x Boardroom Game Plan Summit that showed Kleiman the potential in combining quality content with the community of people who gather around it.
Like an IRL LinkedIn for cool people. "I thought that was really exciting, and I wanted to create a version of it that was exclusive to members," he says. "I wanted that to feel a bit exclusive, because those conferences can be overwhelming for people that are trying to get information and trying to connect." Members Club events will have the same vibe and feel of the brand's bigger events, but with more intimate programming. "The big names are still in the room, but make them truly accessible, and they understand that, like, they're there now to integrate with this community," says Kleiman. "And [those big names] want that too. It's really infectious for anyone at any level, to be around that type of hunger and that type of curiosity and excitement. Seeing our consumers and knowing they're part of our brand and in our comments and at our parties, but they wanted more, and I wanted to give them more." The combination of connecting not only with Boardroom content but also with fellow fans and members who can impact their own careers and businesses is where Kleiman sees the most long-term potential. "For me, the real excitement is creating something that I can point to potentially decades from now and say, 'That was us, we built that.'"
Category:
E-Commerce
You've no doubt heard of Microsoft's Copilot. But can you define exactly what it is? If you find that question a challenge to answer concisely, you're not alone. In the scramble to convert increasingly ubiquitous generative AI capabilities into recognizably branded tools, Copilot has succeeded, sometimes to the detriment of its own brand.

Consider the most recent evidence of the Copilot brand's cultural profile: The Better Business Bureau's National Advertising Division (NAD) recently asked Microsoft to adjust its Copilot branding and advertising to avoid confusing or misleading consumers. Part of the problem, it seems, is the way Microsoft has stretched the Copilot brand into a kind of catchall for all things AI. (Microsoft Copilot itself says there are multiple Copilots, each tailored to specific tools and platforms, listing about a half-dozen key examples.)

In part, the NAD argued that Microsoft's claims that Copilot boosts productivity and ROI were backed only by a study that actually measured a perception of productivity. It also suggested Microsoft is using the Copilot name across so many AI products and features that it's not always clear which advertised capabilities apply in which use cases. A description of Copilot working seamlessly across all your data, for example, might mislead a user who wasn't clear on which Copilot-branded tool it referred to. Based on the context of the claims and the universal use of the product description as Copilot, the NAD's recommendation concluded, consumers would not necessarily understand the differences.
A Microsoft statement to Fast Company said that while it disagreed with the NAD's critical conclusions, it is happy to make small adjustments to help customers better understand the differences between the chat and in-app experiences, or to clarify when a study reflects consumer perception. It's true that the Copilot name has (to the annoyance of some users) long replicated itself across multiple Microsoft product lines as an all-purpose signifier of AI integration. As The Verge has argued previously, this seems to be partly the fallout from efforts to attract more business customers to pay for Copilot capabilities, and that plays out in occasionally confusing ways. For example, Microsoft 365 Copilot Chat (formerly Bing Chat Enterprise) is actually distinct from Business Chat for Microsoft 365 Copilot (formerly Business Chat, a component of Teams).

This gets at a deeper issue. The irony may turn out to be that there are just too many Copilots (and not just at Microsoft), and the term is being run into the ground. In fact, it risks becoming less a signifier than a near-generic cliché (see: the sparkle emoji of generative AI branding). The first significant use of the term copilot as an AI-associated name was actually GitHub Copilot, a 2021 release that allowed users to work with AI in a form of pair programming, according to a TechRepublic mini history of the term. But Microsoft (which owns GitHub) followed soon after and swiftly came to dominate its use. Windows keyboards now even include a Copilot key, featuring the Copilot logo, a ribbony splotch of colorful gradients, to invoke the Copilot in Windows experience. In practice, Microsoft's version of the Copilot brand (Security Copilot, Copilot Studio, Copilot in Word) is sometimes a product and sometimes a feature. All of which has arguably diluted the impact of the Copilot brand. Salesforce, Moody's, and Appian, among other companies, now use copilot in AI-related product names.
More generally, in the AI context, copilot has come to serve essentially the function that assistant used to: a humanizing nickname for a variety of tech products that are being sold as not just a digital tool but a kind of trusted peer. Of course, one of the challenges facing such products at the moment is living up to outsized promises and hopes, like huge overnight productivity boosts. Oftentimes, these tools feel less like a copilot and more like a temperamental trainee. That's not to say that generative AI won't deserve the promotion from assistant to something more partner-like. But copilot, as an increasingly common and all-purpose term, can sometimes sound like title inflation. And that's not a brand meaning Microsoft had in mind.
Here's a troubling reality check: We are currently evaluating artificial intelligence the same way we'd judge a sports car. We act like an AI model is good if it is fast and powerful. But what we really need to assess is whether it makes for a trusted and capable business partner.

The way we approach assessment matters. As AI models begin to play a part in everything from hiring decisions to medical diagnoses, our narrow focus on benchmarks and accuracy rates is creating blind spots that could undermine the very outcomes we're trying to achieve. In the long term, it is effectiveness, not efficiency, that matters.

Think about it: When you hire someone for your team, do you look only at their test scores and the speed at which they work? Of course not. You consider how they collaborate, whether they share your values, whether they can admit when they don't know something, and how they'll impact your organization's culture, all the things that are critical to strategic success. Yet when it comes to the technology that is increasingly making decisions alongside us, we're still stuck on the digital equivalent of standardized test scores.

The Benchmark Trap

Walk into any tech company today, and you'll hear executives boasting about their latest performance metrics: "Our model achieved 94.7% accuracy!" or "We reduced token usage by 20%!" These numbers sound impressive, but they tell us almost nothing about whether these systems will actually serve human needs effectively. Despite significant tech advances, evaluation frameworks remain stubbornly focused on performance metrics while largely ignoring ethical, social, and human-centric factors. It's like judging a restaurant solely on how fast it serves food while ignoring whether the meals are nutritious, safe, or actually taste good. This measurement myopia is leading us astray.
Many recent studies have found high levels of bias against specific demographic groups when AI models are asked to make decisions about individuals in tasks such as hiring, salary recommendations, loan approvals, and sentencing. These outcomes are not just theoretical. For instance, facial recognition systems deployed in law enforcement contexts continue to show higher error rates when identifying people of color. Yet these systems often pass traditional performance tests with flying colors. The disconnect is stark: We're celebrating technical achievements while people's lives are being negatively impacted by our measurement blind spots.

Real-World Lessons

IBM's Watson for Oncology was once pitched as a revolutionary breakthrough that would transform cancer care. Measured by traditional metrics, the AI model appeared highly impressive, processing vast amounts of medical data rapidly and generating treatment recommendations with clinical sophistication. However, as Scientific American reported, reality fell far short of this promise. When major cancer centers implemented Watson, significant problems emerged. The system's recommendations often didn't align with best practices, in part because Watson was trained primarily on a limited number of cases from a single institution rather than a comprehensive database of real-world patient outcomes. The disconnect wasn't in Watson's computational capabilities; according to traditional performance metrics, it functioned as designed. The gap was in its human-centered evaluation: Did it improve patient outcomes? Did it augment physician expertise effectively? When measured against these standards, Watson struggled to prove its value, leading many healthcare institutions to abandon the system.

Prioritizing Dignity

Microsoft's Seeing AI is an example of what happens when companies measure success through a human-centered lens from the beginning.
As Time magazine reported, the Seeing AI app emerged from Microsoft's commitment to accessibility innovation, using computer vision to narrate the visual world for blind and low-vision users. What sets Seeing AI apart isn't just its technical capabilities but how the development team prioritized human dignity and independence over pure performance metrics. Microsoft worked closely with the blind community throughout the design and testing phases, measuring success not by accuracy percentages alone but by how effectively the app enhanced users' ability to navigate their world independently. This approach created technology that genuinely empowers users, providing real-time audio descriptions that help with everything from selecting groceries to navigating unfamiliar spaces. The lesson: When we start with human outcomes as our primary success metric, we build systems that don't just work; they make life meaningfully better.

Five Critical Dimensions of Success

Smart leaders are moving beyond traditional metrics to evaluate systems across five critical dimensions:

1. Human-AI Collaboration. Rather than measuring performance in isolation, assess how well humans and technology work together. Recent research in the Journal of the American College of Surgeons showed that AI-generated postoperative reports were only half as likely to contain significant discrepancies as those written by surgeons alone. The key insight: A careful division of labor between humans and machines can improve outcomes while leaving humans free to spend more time on what they do best.

2. Ethical Impact and Fairness. Incorporate bias audits and fairness scores as mandatory evaluation metrics. This means continuously assessing whether systems treat all populations equitably and impact human freedom, autonomy, and dignity positively.

3. Stability and Self-Awareness.
A Nature Scientific Reports study found performance degradation over time in 91% of the models it tested once they were exposed to real-world data. Instead of just measuring a model's out-of-the-box accuracy, track performance over time and assess the model's ability to identify performance dips and escalate to human oversight when its confidence drops.

4. Value Alignment. As the World Economic Forum's 2024 white paper emphasizes, AI models must operate in accordance with core human values if they are to serve humanity effectively. This requires embedding ethical considerations throughout the technology lifecycle.

5. Long-Term Societal Impact. Move beyond narrow optimization goals to assess alignment with long-term societal benefits. Consider how technology affects authentic human connections, preserves meaningful work, and serves the broader community good.

The Leadership Imperative: Detach and Devote

To transform how your organization measures AI success, embrace the "Detach and Devote" paradigm we describe in our book TRANSCEND.

Detach from:
- Narrow efficiency metrics that ignore human impact
- The assumption that replacing human labor is inherently beneficial
- Approaches that treat humans as obstacles to optimization

Devote to:
- Supporting genuine human connection and collaboration
- Preserving meaningful human choice and agency
- Serving human needs rather than reshaping humans to serve technological needs

The Path Forward

Forward-thinking leaders implement comprehensive evaluation approaches by starting with the desired human outcomes, then establishing continuous human input loops and measuring results against the goals of human stakeholders. The companies that get this right won't just build better systems; they'll build more trusted, more valuable, and ultimately more successful businesses. They'll create technology that doesn't just process data faster but that genuinely enhances human potential and serves societal needs. The stakes couldn't be higher.
As these AI models become more prevalent in critical decisions around hiring, healthcare, criminal justice, and financial services, our measurement approaches will determine whether these models serve humanity well or perpetuate existing inequalities. In the end, the most important test of all is whether using AI for a task makes human lives genuinely better. The question isn't whether your technology is fast enough but whether it's human enough. That is the only metric that ultimately matters.