
2025-11-20 07:00:00| Fast Company

Business leaders are scrambling to understand the fast-moving world of artificial intelligence. But if companies are struggling to keep up, can today's business schools really prepare students for a new landscape that's unfolding in real time out in the real world?

Stanford University thinks it might have the answer. At its Graduate School of Business, a new student-led initiative aims to arm students for a future that AI is upending in ways that are still unfolding. The program, called AI@GSB, includes hands-on workshops with new AI tools and a speaker series with industry experts. The school also introduced new courses around AI, including one called AI for Human Flourishing, which aims to shift the focus from what AI can do to what it should do.

But Sarah Soule, a longtime organizational behavior professor who became dean of the business school this year, told Fast Company that preparing students for this brand-new work environment is easy to say, harder to do. "Especially given how quickly AI is changing every function of every organization," she says.

So the school hopes to lean on its network of well-connected alumni, as well as its location in Silicon Valley, the heart of the AI boom, to lead business schools not just into a future where AI knowledge will be necessary, but in the present, where it already is.

"It would not be easy for me as the new dean to just come in and mandate that everybody begin teaching AI in whatever their subject matter is," Soule said, explaining that that approach likely would fail. In a conversation with Fast Company, the dean shared more about what she hopes will work, and how she plans to train the next generation of leaders for an AI-powered world. This interview has been lightly edited for clarity.

Many business schools are adding AI courses. But it sounds like you're thinking of AI as less of an add-on, and more like a core part of the school's DNA going forward. How do you make that distinction?

I think it has to be [a core part]. Developing a very holistic leadership model, alongside all the offerings in AI, is going to allow us, I hope, to think about the questions of ethics and responsibility, and the importance of human beings and human connection, especially in an AI-powered organization.

AI is going to change the future of work completely. So having those two parallel themes at the same time is going to be critical.

What does ethical, responsible AI mean to you?

HR comes to mind right away. I'm thinking about privacy concerns: What do we need to be worried about? If we're outsourcing scans of résumés and so on to algorithms and agents, do we need to worry about privacy?

I also think about: What does the world look like if a lot of entry-level jobs begin to disappear? How do we think responsibly about reskilling individuals for work that will enable AI?

I don't think we have the answers to these questions, but I'm really glad that we as a business school are going to be, and have been, asking these questions.

The new AI initiative is student-led. But what is the school doing to train faculty to better understand how they can, or should, teach about AI, or use AI in their classes? Implementing this has been a mixed bag for a lot of universities.

We have a teaching and learning hub here that has very talented staff [members] who are pedagogical experts and who are offering different kinds of sessions on AI. So that's of course been helpful.
But one of the most gratifying things to see is how faculty are talking to one another about their research, to see them really jazzed about how they're using AI in the classroom, and sharing speakers that they're going to bring in, and thinking about new case studies to write together. It's really fun to see the buzz amongst the faculty as they navigate this.

Many, if not most, of our faculty are using AI in their research. I think because they're becoming so comfortable with AI, they're genuinely excited about teaching AI now, either teaching content about AI, or bringing AI into the pedagogy.

I'll give you an example. In one particular class, the faculty member essentially created a GPT to search all of the management journals and to help answer common managerial questions and dilemmas. So it's an evidence-based management tool that the students can use. They could say, "What's the optimal way to set up a high-functioning team?" And it will search through the journals and give an evidence-based answer.

One of Stanford GSB's most popular courses is Interpersonal Dynamics, known as the Touchy Feely class. Do you think teaching skills like emotional intelligence as an aspect of leadership becomes even more important in an AI-dominated world?

Absolutely. Touchy Feely is an iconic class. Even though it's an elective, nearly every student takes it; it transforms people's lives, and they love this course. It focuses on an important facet of leadership: self-awareness.

But that's only one piece. We also have courses that get students to think about a second facet of leadership, which is perspective-taking: the ability to ask very good questions, and to listen really well to others to understand where they're coming from. So, self-awareness and perspective-taking are part of the leadership model.

The third thing: We have a wonderful set of classes on communications, not just about executive presence and executive communications, but classes that focus on nonverbal communication and written communication.

The last two facets of our leadership model are critical and analytical decision making, having the judgment and wisdom to make the kinds of decisions that leaders always have to make, and contextual awareness, to think about the system in which they're embedded. Not just to understand it, but to navigate it, and to have the will to try to change it if it needs to be changed.

All of those dimensions of leadership are going to be more and more important in the coming years with AI. So many of the rote tasks and analysis will be done pretty well, maybe better than humans, by AI. But we are going to need people who can lead others, and lead them well, and lead them in a principled and purposeful fashion.
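The "evidence-based management" GPT the dean describes is, at its core, retrieval over a journal corpus. Below is a minimal Python sketch of that retrieval idea only; the two-entry in-memory corpus and naive keyword-overlap scoring are hypothetical stand-ins, since the interview does not describe how the actual course tool is built.

```python
# Toy illustration of retrieval-backed "evidence-based management":
# rank a (hypothetical, in-memory) corpus of journal findings by word
# overlap with a manager's question and return the best-supported finding.

CORPUS = {
    "Hackman (2002), Leading Teams":
        "high-functioning teams need a compelling direction, enabling "
        "structure, and a supportive organizational context",
    "Edmondson (1999), Administrative Science Quarterly":
        "psychological safety predicts team learning behavior and performance",
}

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank corpus entries by simple word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(q_words & set(item[1].split())),
        reverse=True,
    )
    return scored[:k]

if __name__ == "__main__":
    for source, finding in retrieve(
        "What is the optimal way to set up a high-functioning team?"
    ):
        print(f"{source}: {finding}")
```

A production version would presumably swap the keyword overlap for embeddings and hand the retrieved passages to a language model to draft the answer, which is the role the custom GPT plays in the class.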


Category: E-Commerce

 


2025-11-20 01:29:00| Fast Company

As companies adopt AI, the conversation is shifting from the promise of productivity to concerns about AI's impact on wellbeing. Business leaders can't ignore the warning signs.

The mental health crisis isn't new, but AI is changing how we must address it. More than 1 billion people experience mental health conditions. Burnout is rising. And more people are turning to AI for support without the expertise of trained therapists. What starts as empathy on demand could accelerate loneliness. What's more, Stanford research found that these tools could introduce biases and failures that could result in dangerous consequences.

With the right leadership, AI can usher in a human renaissance: simplifying complex challenges, freeing up capacity, and sparking creativity. But optimism alone isn't a strategy. That's why responsible AI adoption is a business imperative, especially for companies building the technology. That work is not easy, but it's necessary.

UNCLEAR EXPECTATIONS

We've seen what happens when powerful platforms are built without the right guardrails: Algorithms can fuel outrage, deepen disconnection, and undermine trust. If we deploy AI without grounding it in values, ethics, and governance, designing the future without prioritizing wellbeing, we risk losing the trust and energy of the very people who would lead the renaissance.

I've seen this dynamic up close. In conversations with business and HR leaders, and through my work on the board of Project Healthy Minds, the signals are clear: People are struggling with unclear expectations around AI use, job insecurity, loneliness, uncertainty, and exhaustion.

In a recent conversation with Phil Schermer, founder and CEO of Project Healthy Minds, he told me, "There's a reason why professional sports teams and hedge funds alike are investing in mental health programs for their teams that enable them to operate at the highest level. Companies that invest in improving the mental health of their workforce see higher levels of productivity, innovation, and retention of high performers."

5 WAYS TO BUILD AN AI-FIRST WORKPLACE THAT PROTECTS WELLBEING

Wellbeing should be at the core of the AI enablement strategy. Here are five ways to incorporate it.

1. Set clear expectations

Employees need to understand how to work with AI and that their leaders have their back. That means prioritizing governance and encouraging experimentation within safe, ethical guardrails. Good governance builds trust, and trust is the foundation of any successful transformation.

Investing in learning and growth sends a powerful message to employees: You belong in the future we're building if you're willing to adapt. We prioritize skill building through ServiceNow University so every employee feels confident working with AI day-to-day.

In a conversation with Open Machine CEO and AI advisor Allie K. Miller, she told me that we need to redefine success in jobs by an employee's output, value, and quality as they work with AI agents. This means looking at things like business impact and creativity, not just processes or tasks completed.

2. Model healthy AI behavior

AI implementation is a cultural shift. If we want employees to trust the technology, they need to see leaders and managers do the same. That modeling starts with curiosity. Employees don't need to be AI experts from day one, but they need to show a willingness to learn. Set norms around when, why, and how often teams engage with AI tools. Ask questions, share experiments, and celebrate use cases where AI saved time or sparked creativity.
AI shouldn't be opt-in for teams; it should be part of how we work, learn, and grow. When leaders use AI thoughtfully, employees are more likely to follow suit.

3. Pulse-check employee sentiment consistently

To design meaningful wellbeing programs, leaders must ground analysis in data, continuously improve, and build for scale. That starts by surveying employees to track sentiment, trust, and AI-related fatigue in real time. Then comes the harder part: acting on the data to show employees they're seen and supported. Leaders should ask:

Are we tailoring wellbeing strategies to the unique needs of teams, regions, and roles?
Are we embedding empathy into our platforms, workflows, and automated tasks?
Are our AI tools safe, unbiased, and aligned to our values?
Are we making mental health a routine part of manager check-ins?

According to Schermer, "The organizations making the biggest strides are the ones treating wellbeing data like commercial data: measured frequently, acted on quickly, and tied directly to outcomes." (A minimal sketch of that measurement loop appears below.)

4. Focus on connection, keeping people at the center

AI should not replace professional mental healthcare or real-world connections. We must resist the urge to scale empathy through bots alone. The unique human ability to notice distress, empathize, and escalate is largely irreplaceable. That's why leaders should advocate for human-first escalation ladders and align their policies to the World Health Organization's guidance on AI for health. Some researchers are exploring traffic light systems to flag when AI tools for mental health might cross ethical or personal boundaries.

AI adoption is a human shift, so people leaders need to take responsibility for AI transformation. That's why my chief people officer role at ServiceNow evolved to include chief AI enablement officer. Today's leadership imperatives include reducing the stigma around mental health, building confidence in AI systems, creating space for open human connection, and encouraging dialogue about digital anxiety, loneliness, or job insecurity.

5. Champion cross-sector collaboration

We need collaboration across industries and leadership roles, from tech to healthcare, from HR professionals to policymakers, to create systems of care alongside AI. The most effective strategies come from collective action. That's why leaders should partner with coalitions to scale access to care, expand AI literacy, and advocate for mental health in the workforce. These partnerships can help us shape a better future for our people.

THE BOTTOM LINE: AI MUST BE BUILT TO WORK FOR PEOPLE

The future of work should be defined by trust, transparency, and humanity. This is our moment to lead with empathy, design with purpose, and build AI that works for people, not just productivity.

Jacqui Canney is chief people and AI enablement officer at ServiceNow.
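Schermer's advice to treat wellbeing data like commercial data, measured frequently and acted on quickly, implies a simple measurement loop. Here is a minimal Python sketch under loudly stated assumptions: the team names, the 1-5 pulse scores, and the traffic-light thresholds are all hypothetical, and a real program would use validated survey instruments rather than two cutoffs.

```python
# Hypothetical weekly pulse data: team -> this week's 1-5 wellbeing scores.
from statistics import mean

pulse = {
    "platform": [4, 4, 5, 3],
    "support":  [2, 3, 2, 2],
    "sales":    [3, 4, 3, 3],
}

def flag(scores: list[int]) -> str:
    """Map average sentiment to a traffic-light follow-up action."""
    avg = mean(scores)
    if avg < 2.5:
        return "red: schedule manager check-ins this week"
    if avg < 3.5:
        return "yellow: review workload and AI-tool friction"
    return "green: no action needed"

for team, scores in pulse.items():
    print(f"{team}: {flag(scores)}")  # e.g. "support: red: schedule ..."
```

The point of the sketch is the cadence, not the math: measure every cycle, and wire each flag to a concrete follow-up so the data actually gets acted on.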


Category: E-Commerce

 

2025-11-20 01:14:00| Fast Company

Most of the software that truly moves the world doesn't demand our attention: It quietly removes friction and gets out of the way. You only notice it when it's broken. That's not a bug in the business model; it's a feature. In fact, "unnoticed but indispensable" is the highest customer-satisfaction score you can get. Consider these categories that already figured this out.

The log-in that isn't a task anymore

Password managers, once you build the habit, fade into the background. They fill the box before you even remember there was a box. Single sign-on (SSO) systems go a step further and make logging in to everything feel like one action instead of 17 small, annoying ones. And passkeys get rid of passwords entirely. The pattern is consistent: Tools that turn a chore into a non-event ultimately win.

It's tempting to treat authentication like a "moment": a page, a button, a ritual. The better approach is to treat it like plumbing. You notice good plumbing by its absence. Otherwise, you just enjoy the hot shower.

Invisible infrastructure already won the internet

Some technologies graduate from "choice" to "ambient." Transport layer security (TLS) and HTTPS used to be optional. Now they're table stakes, largely thanks to Let's Encrypt making it approachable. Your browser nudges everyone toward secure defaults and the ecosystem complies. We don't "do" TLS; we benefit from it.

This wasn't always so seamless. In Windows' early days, you literally had to install a Winsock stack just to speak TCP/IP. Today, the network stack is simply present, like oxygen. Progress in software often looks like this: The thing we once had to fiddle with becomes the thing we don't think about anymore.

AI's next act: not a chat box

Chatbots are neat, but they aren't the end state of AI. They're a first draft, like when we used to watch early web pages load images line by line. The real value emerges when intelligent assistance is in the room where work already happens, and it becomes part of the workflow. In a CRM, the note writes itself while you talk and is already tagged correctly when you hang up. In design tools, the spec is updated everywhere when you change a component once. In code review, a suggestion appears inline with a one-click fix, not in a separate AI tab that hijacks your focus.

This is the same story as passwords, SSO, and HTTPS: The win comes from disappearing the steps, not adding a new surface area for attention. (The funny thing is, most of the work of making AI invisible is just plain old engineering. Yes, there's lots of AI engineering to make the bots work at all. But plugging them into things in a way that works, that's the part we're really behind on.)

BORING ON PURPOSE IS A STRATEGY

At my company we talk about being boring in a specific way: Security and connectivity should feel like electricity. You flip the switch, the lights come on, and nobody argues about the generator or the continent-wide high-voltage distribution network. Being invisible is not the same as being trivial; it's the reward for sweating details users never see. Here are five design principles for making software people won't notice:

1. Make the default the decision. Someone once told me the golden rule of user interface design: If there's a popup with two options, imagine one of them is "work" and the other one is "don't work." Then make "work" the default and delete the popup. Most users will never visit settings. If the secure, performant, accessible path is the default, adoption happens for free.

2. Budget for latency like it's a feature. Under ~100ms, interactions feel instantaneous. Over ~1s, they feel like work. Invisible software feels fast because it never gives the user time to switch contexts. Cache, prefetch, and defer like your product's life depends on it. Because it does!

3. Automate the paperwork, keep the signatures. Autofill, SSO, and passkeys are all versions of the same idea: The system should carry the burden. Let humans make approvals and set intent; let machines do the form filling and compliance trail.

4. Progressive disclosure beats feature sprawl. Hide power tools until they're needed. The user who needs advanced controls will find them; the one who doesn't should never meet them. UIs that start simple and get deep on demand feel "light" and earn trust.

5. Fail quietly, recover loudly. When background systems hiccup, self-heal first. If you must involve the user, say exactly what to do in one step and show you've already done the other three. Invisible products don't turn every exception into a ticket. (A minimal sketch of this pattern follows the list.)
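Principle 5 is the most mechanical of the five, so here is a minimal Python sketch of it. The names (`self_healing`, `flaky_sync`) and the retry parameters are hypothetical illustrations, not anything Tailscale ships; the point is the shape: quiet retries with backoff first, then a single actionable instruction that also shows what was already tried.

```python
import random
import time

def self_healing(operation, attempts=3, base_delay=0.2):
    """Fail quietly, recover loudly: retry in the background first;
    only surface one clear, actionable message if recovery fails."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            # Self-heal quietly: exponential backoff plus a little jitter.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
    # Recovery failed: one step for the user, and proof we already tried.
    raise RuntimeError(
        f"Sync is paused. Check your network connection and press Retry. "
        f"(Already retried {attempts} times in the background.)"
    )

if __name__ == "__main__":
    calls = {"n": 0}

    def flaky_sync():
        # Hypothetical operation that succeeds on the third try.
        calls["n"] += 1
        if calls["n"] < 3:
            raise ConnectionError("transient network blip")
        return "synced"

    print(self_healing(flaky_sync))  # two quiet retries, then "synced"
```

Notice what the user sees in the happy path: nothing. The two transient failures are absorbed in the background, which is exactly the "invisible until broken" behavior the article is arguing for.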
THE BUSINESS CASE FOR BEING FORGETTABLE

"Unobtrusive" can sound like "unmonetizable," but it's the opposite. Products that vanish into the workflow produce fewer support tickets, shorter onboarding, and more expansion inside organizations. They spread by word-of-mouth because they don't create new habits; they remove old pain. You don't need a big campaign to sell relief.

The tricky part is cultural, not technical. Teams must be okay shipping value that isn't screenshot-worthy. That means investing in the edges: reliability, identity, zero-touch setup, and instant rollback, so customers never have to learn those words.

A SIMPLE TEST

If turning your product off causes immediate, confused swearing from the people who didn't even know they depended on it, congratulations: you've built something great. Now make it a little faster and a little quieter, and do that every quarter. Because the best compliment your software will ever get is silence.

Avery Pennarun is CEO and cofounder at Tailscale.


Category: E-Commerce

 
