Corporate leaders today are stuck between a rock and a hard place. Nobody can see events playing out in the streets in Minnesota and elsewhere and not be moved in some way. At the same time, they have a fiduciary responsibility to act in the best interests of their stakeholders, regardless of their personal feelings.

I know this dilemma because I experienced it myself. In 2004, I was managing Ukraine's leading news organization during the Orange Revolution, the third in a series of nonviolent uprisings known as the color revolutions that overwhelmed autocrats in Serbia and then the Georgian Republic before arriving in Kyiv. As I explained in my book, Cascades, these things follow a specific pattern of contagion, adoption, and defection driven by networks. Eventually, the nonlinear nature of network cascades overwhelms regimes and compels institutions to act. Now that pattern is unfolding right here and, for corporate leaders, it is no longer something you can afford to ignore.

1. Contagion: How Movements Learn, Adapt, and Spread

2004 was an election year in Ukraine, so politics was in the air. We all saw the campaigns get underway, with ads hitting the air and rallies being held. But from my vantage point inside a news operation, I also began to hear about a youth group, called Pora, that was organizing students and activists against the regime.

The true origins started even earlier, in a Belgrade café in 1998. It was there that a small group of five activists met and established the youth group Otpor. Their efforts got a boost from a little-known academic named Gene Sharp, who had developed nonviolent methods of overthrowing authoritarian regimes and established the Albert Einstein Institution to support activists around the world.

The Otpor activists would lead the overthrow of Serbian strongman Slobodan Milošević. Shortly after, West Wing star Martin Sheen would narrate a hit documentary about the events, and activists from other Eastern European countries began reaching out to learn how the Serbians applied Sharp's methods. In 2003, President Eduard Shevardnadze was brought down in Georgia's Rose Revolution. In the spring of 2004, the Ukrainian Pora activists traveled to Serbia to receive training and lay the ground for the events I witnessed in the Orange Revolution.

We can see a similar process unfolding in Minnesota and beyond. When federal agents began to descend on the community, activist networks first established in the aftermath of the killing of George Floyd were activated. They began to organize to protect their communities from ICE and CBP patrols, learning and honing their methods as they went. Now, as other communities begin to prepare for ICE and CBP activity, activists around the country are watching and learning. Ordinary Americans are attending training, online and in person, that transmits what has been learned in Minnesota: how to organize, dispatch activists, and engage with federal officers on the ground.

2. Adoption: When Participation Becomes the Default

We are a product of our environments. Decades of studies indicate that we tend to conform to the opinions and behaviors of those around us, and this effect extends out to three degrees of relationships. So not only do our friends' friends influence us deeply, but their friends too, people we don't even know, affect what we think and do. Yet the inverse is also true. The people around us are usually doing pretty ordinary things, like going to work, taking the kids to soccer practice, and cooking dinner.
Most people who are not actively opposing agents of the state have little idea how to go about doing so. We are, for the most part, trapped in mundane, ordinary lives and resist changing our habits significantly. Yet that can change quickly. In a highly influential 1978 paper about resistance thresholds, the sociologist Mark Granovetter showed how even small clusters of individuals with low barriers to adoption can influence those with greater resistance. Once these come on board, they begin to influence others as well. It is a pattern we see over and over again: small groups, loosely connected, but united by a shared purpose are what drive transformational change through network cascades.
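To make that mechanism concrete, here is a minimal sketch of a Granovetter-style threshold model in Python. Everything in it is hypothetical and for illustration only: each person has a threshold, the share of participants they need to see before joining themselves, and a handful of zero-threshold instigators can be enough to tip a mostly reluctant crowd.

```python
import random

def simulate_cascade(thresholds):
    """Run a Granovetter-style threshold cascade to a fixed point.

    Each agent joins once the participating share of the whole
    population reaches their personal threshold (0.0 = instigator).
    """
    n = len(thresholds)
    joined = [t == 0.0 for t in thresholds]  # instigators start it off
    while True:
        share = sum(joined) / n
        newcomers = [i for i, t in enumerate(thresholds)
                     if not joined[i] and t <= share]
        if not newcomers:          # nobody else tips: cascade is over
            return share
        for i in newcomers:
            joined[i] = True

random.seed(1)
# Hypothetical crowd: 5 zero-threshold instigators seeded among
# 95 people whose reluctance varies widely.
crowd = [0.0] * 5 + [random.uniform(0.02, 0.9) for _ in range(95)]
print(f"Final participation: {simulate_cascade(crowd):.0%}")
```

Granovetter's deeper point shows up if you tinker with the numbers: shuffle the threshold distribution slightly, holding average reluctance constant, and the same crowd can flip between a fizzle and a full cascade.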
We can see those same patterns unfolding in America today. Ordinary people, appalled by the actions of ICE and CBP patrols, have joined activists in opposing the raids. As they do, they tell their friends and neighbors, some of whom begin to join in. In turn, their actions influence others who are slightly more reticent and, as they join, momentum builds even more.

I experienced this directly during the Orange Revolution. In the spring of 2004, I was aware of the demonstrations but not participating. As a foreigner, I wasn't sure it was my place. But then my wife's friends started going and invited my wife. Once she joined in, I began going too, and others came with me. The numbers became overwhelming and the regime fell.

3. Defection: When Silence Stops Being Safe

At this point, many readers will begin to notice a problem. Didn't other movements, such as #Occupy and Black Lives Matter, follow these very same patterns and fail to achieve their objectives? The answer, of course, is an unqualified yes. The presence of a network cascade is necessary, but not sufficient, to bring change about. For that, you need institutions.

Martin Luther King Jr. didn't just organize marches and boycotts. He used the power of mobilization to influence politicians like Lyndon Johnson. In much the same way, in Poland the Solidarity activists didn't just organize strikes. They actively engaged the Catholic Church. Early on during the color revolutions, activists learned that international institutions could be powerful allies and were able to successfully leverage that support.

This is, perhaps, the most striking vulnerability for the present administration. Early on, it targeted institutions, such as law firms and universities, but went about it in a very ham-handed way, and key targets successfully fought back. Others, such as Senators Thom Tillis and Bill Cassidy, have voiced opposition to ICE and CBP tactics. Chris Madel, a Republican candidate for Minnesota governor, ended his campaign in protest. Yet corporate leaders, despite widely reported misgivings, have been largely sitting it out, even as former CEOs like Reid Hoffman, Bill George, and Robert Rubin have urged them to weigh in.

Good corporate stewardship, however, requires more than just operating a business and managing a balance sheet. It requires being effective leaders of your corporate community.

Getting Ahead of What Comes Next

I remember attending a group dinner in Kyiv in late 2007 and sitting across from an executive from Sony Ericsson, who confidently told me that the iPhone launch earlier that year hadn't yet affected his company's sales. Yet the same pattern of contagion, adoption, and defection would soon kick in, and Sony Ericsson would lose relevance and ultimately be absorbed, as the smartphone cascade reshaped the entire industry.

Once a cascade begins, it takes on a life of its own. Corporate leaders in America today face a similar dilemma. Their first responsibility is to their stakeholders, whatever their own personal feelings. Yet among those millions taking to the streets are employees, customers, shareholders, and their family members. Hoping you can stay on the fence is dangerously naive. It is only a matter of time before someone in your corporate community is affected by ICE and CBP violence: an arrest, getting roughed up or pepper-sprayed, or worse.

The time to act is now. If Renee Good or Alex Pretti had been one of your people, or one of their children, what would you want to have in place for them and their families? What legal, medical, or psychological support are they and their coworkers going to need? You need to start preparing for that eventuality now.

In much the same way, you need to begin to audit your partners and suppliers. Make sure the people you do business with share your values and those of your stakeholders. If they are supporting or engaging in activities that could harm your corporate community, don't wait for an incident. Cut ties.

Most of all, you need to be explicit about your values and make sure you are living up to them. That doesn't mean taking a political position, but it does mean being clear about where you stand. As someone who has had to rise to the challenge of running a business during a revolution, I can tell you from experience that someday you will want to look back on these times, reflect on what you said and did, and be proud of it.
Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it? These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag at-risk students, optimize course scheduling, or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature, and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we've been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences. As these technologies become better at producing knowledge work (designing classes, writing papers, suggesting experiments, and summarizing difficult texts), they don't just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship upon which these institutions are built, and on which they depend.

Consider three kinds of AI systems and their respective impacts on university life.

Nonautonomous AI

AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising, and institutional risk assessment. These are considered nonautonomous systems because they automate tasks, but a person is in the loop and using these systems as tools.

These technologies can pose a risk to students' privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are risk scores generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed? These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards, and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalized feedback tools, and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified. Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners, and on-demand explainers.
Faculty use them to generate rubrics, draft lectures, and design syllabuses. Researchers use them to summarize papers, comment on drafts, design experiments, and generate code.

This is where the cheating conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you're interacting with a human and when you're interacting with an automated agent. That can be alienating and distracting for those who interact with them. A student reviewing material for a test should be able to tell whether they are talking with their teaching assistant or with a robot. A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than complete transparency in such cases will be alienating to everyone involved and will shift the focus of academic interactions from learning to the means, or the technology, of learning. University of Pittsburgh researchers have shown that these dynamics produce feelings of uncertainty, anxiety, and distrust in students. These are problematic outcomes.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages, or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility, not only for students but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that's not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft, and learning to spot one's own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a "researcher in a box," an agentic AI system that can perform studies on its own, is becoming increasingly realistic. Agentic tools are anticipated to free up time for work that draws on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense, while more of the day-to-day labor of instruction is handed off to systems optimized for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation, and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work.
If autonomous agents absorb more of the routine responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions, and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry that is pushing AI into universities, it may seem as if this type of work is inefficient and that students will be better off letting a machine handle it. But it is the very nature of that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually through doing the work of drafting, revising, failing, trying again, grappling with confusion, and reworking weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research, and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated? One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgment and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimizing it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities, and communities are formed in the process. In this version, the university serves as nothing less than an ecosystem that reliably forms human expertise and judgment.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars, and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

Nir Eisikovits is a professor of philosophy and the director of the Applied Ethics Center at UMass Boston. Jacob Burley is a junior research fellow at the Applied Ethics Center at UMass Boston. This article is republished from The Conversation under a Creative Commons license. Read the original article.
I've been using ChatGPT and other AI tools recently for quite a few things. A few examples: working on strategy and operations for my latest business venture, Life Story Magic; planning how to get the most value out of the Epic ski pass I bought for the year, while balancing everything else; and putting together a stretching and DIY physical therapy plan to get my shoulders feeling better during gym workouts.

Along the way, I've done what I think a lot of AI power users eventually wind up doing: I've gone into the personalization settings and told the chatbot to be neutral, direct, and just-the-facts. I don't want a chatbot that tells me "That is a brilliant idea!" every time I explore a tweak to my business strategy. They're not all brilliant, I assure you. And I don't want a lecture about how, if I truly have shoulder issues, I should see a real physical therapist. I'm an adult. I'm not outsourcing my judgment to a robot.
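For readers who reach these models through the API rather than the settings page, the same just-the-facts setup can be approximated with a system prompt. Here is a minimal sketch, assuming the official openai Python client; the model name and the instruction wording are illustrative assumptions, not a recommendation.

```python
# Minimal sketch: a "neutral, direct, just-the-facts" persona set
# via a system prompt, using the official OpenAI Python SDK.
# The model name and instruction text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUST_THE_FACTS = (
    "Be neutral, direct, and concise. Do not praise my ideas, "
    "do not pad answers with pleasantries, and answer only the "
    "question asked."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whatever model you use
    messages=[
        {"role": "system", "content": JUST_THE_FACTS},
        {"role": "user", "content": "List three risks of my pricing tweak."},
    ],
)
print(response.choices[0].message.content)
```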
Stop. I didn't ask you that

The result of all this is that I've developed an alpha relationship with AI. I tell it what to do. If it goes on too long, if it assumes I agree with its suggestions, or if it starts padding its answers with unnecessary niceties, I shut it down: "Stop. I didn't ask you that." "No. Wrong. Listen to what I'm saying before replying." "All I need from you are the following three things. Nothing else."

As ChatGPT itself repeatedly reminds me, it has no feelings. Here, I even asked it to confirm while writing this article: "I don't have feelings, and I can't be offended. You can be blunt, curt, or even rude to a chatbot and nothing is harmed. The awkwardness you're describing is entirely on the human side of the interaction."

All good, right? Until I caught myself dealing with customer service.

$800 worth of Warby Parker

Recently, I was returning most of a large Warby Parker order, probably close to $600 out of the $800 that I'd spent on glasses, spread across multiple orders placed on different days last month.

I always try to remember that customer service workers are real people, often working on the opposite schedule so they can be available during American waking hours, dealing with one unhappy customer after another all day long. I keep that image in mind so I remember that whatever small problem I'm having probably isn't a big deal. I guess I'm trying to be a decent human. I also avoid the remote possibility of becoming the star of some viral customer-service-gone-wrong video.

11 minutes of learning

But this call dragged on: 11 minutes in all. Writing that now, it doesn't seem super long, but at the time it felt like an eternity for something that should have been simple. There was a noticeable delay on the line, and not the best connection, and the customer service rep interrupted me several times, assuming that he understood what I was asking and launching into long, off-topic explanations before I could finish.

Reflexively, I started talking to him the same way I talk to ChatGPT: "Stop. I didn't ask you that." "No. Listen to what I'm saying before replying." "All I need from you are the following three things."

Entire life stories

To be fair, I caught myself pretty quickly. Also, I probably overcompensated for the rest of the call. In real life, it's almost a cliché among people who know me that I talk with everyone and often walk away knowing their entire life story, simply because I find almost everyone interesting. My wife, sitting next to me as I read this part aloud to her: "Mmmm-hmmm."

But in that moment, I had slipped into the mode I use with machines: efficient, blunt, and completely unconcerned with the other side's experience.

Machines are not human; humans are

I've stripped empathy out of my interactions with AI on purpose. I think that makes sense. I want speed and clarity, not emotional intelligence. Also, I'm uneasy with the idea of blurring the lines between humans and machines. But without thinking, I carried that same way of communicating into a conversation with a real, live, fellow human being.

When you train yourself to communicate efficiently with something artificial, something that never needs patience, kindness, or to be treated with dignity, it's easy to forget that most of the world still does. And frankly, so do you.

Bill Murphy Jr.

This article originally appeared on Fast Company's sister site, Inc.com. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.