I've been using Comet, Perplexity's AI-powered browser, for the past week. Using it to navigate the internet is very similar to any other browser experience, with one major enhancement: the Comet Assistant. It's a feature that can accomplish web-based tasks independently of you, and I'm quickly becoming convinced it's the future. I wrote an extensive review of Comet for The Media Copilot newsletter, but here I'd like to explore the broader implications, not just of Comet but of the whole idea of an AI-powered web browser, because soon we'll be swimming in them. OpenAI is reportedly about to release its own take on the idea, and Chrome surely won't be far behind, given Google's deep push into AI.

Introducing a browsing assistant isn't just a convenience. It has the potential to fundamentally redefine our relationship with the web. AI browsers like Comet represent the first wave in a sea change, shifting the internet from something we actively navigate to something we delegate tasks to, increasingly trusting AI to act on our behalf. That will present new challenges around privacy and ethics, but also create more opportunities, especially for the media.

A new browser dawns

Those old enough to remember web browsers before they had cookies (which let websites remember you were logged in) or omniboxes (which hard-wired search into the experience) understand how significant those changes were. After using Comet, I would argue the addition of an AI companion transcends them all. For the first time, you're surfing the web with a partner.

The Comet Assistant is like having your own personal intern for whatever you're doing online, ready to take on any menial or low-priority task so you don't have to. For example, I order most of my groceries online every week. Rather than spinning up a list myself, I only need to open a tab, navigate to the store's site, and tell Comet to do it. I can instruct it to use past orders and my standing shopping list as a guide, give it a rough idea of the meals I want to make, and it'll fill up the cart on its own. Or I could tell it to find the nearest Apple Store with open Genius Bar appointments on Saturday morning and book a repair for a broken iPhone screen. You get the idea.

Once you start using Comet like this, it becomes kind of addictive as you search for its limits. Book a flight? Plan a vacation? Clean up my RSS reader (it really needs it)? To be clear, the execution often isn't perfect, so you still need to check its work before taking that final step. In fact, for most use cases it will require this even when the command is explicit (e.g., "Buy it"), which should ease some of the apprehension about outsourcing things you'd previously done by hand. But I believe this outsourcing is inevitable. In practice, Comet functions as an agent, and while its abilities are still nascent, they're already useful enough to benefit a large number of people.
Browser assistants will likely be most people's first experience with agents, and most will judge them by how effectively they perform tasks with minimal guidance. That will depend not just on the quality of the tool and the AI models powering it, but also on how much it knows about the user. Privacy concerns are elevated with agents: think about the grocery example, then extrapolate to medical or financial information. Can I trust my AI provider to safeguard that information from marketers, hackers, and other users of the same AI? Perplexity has the distinction of not training foundation models, so at least the concern about leaks into training data is moot. But the level of access a browser agent has, essentially looking over your shoulder at everything you do online, creates a very large target. Nonetheless, the potential for convenience is so great that I believe many people will use them anyway, and won't see the leap to agents as much greater than the access they already grant major tech platforms like Apple or Google.

Providing informational fuel for agents

This has big implications for the media. If you think about the things we do online (shopping, banking, interacting with healthcare providers), all of them are informed by context, often in the form of research that we do ourselves. We're already offloading some of that to AI, but the introduction of a personal browser agent means that can happen even closer to the task. So if I ask the AI to fill my shopping cart with low-fat ingredients for chicken enchiladas, it's going to need to get that information from somewhere.

This opens up a new landscape for information providers: the contextual searches needed to support agent activity. Whereas humans can only find, read, and process so much data to get the best information for what they're doing, AI theoretically has no limits. In other words, the surface area of AI searches will expand massively, and so will the competition for it. The field of "AIEO," the AI version of SEO, is about to get very hot.

The spike in agent activity will also, one hopes, lead to better standards for how bots identify themselves. As I wrote recently, AI companies have essentially given themselves permission to ignore bot restrictions on sites when those bots are acting on behalf of users (as opposed to training or search indexing); a minimal sketch of how those restrictions work today follows below. That's a major area of concern for content creators who want to control how AI ingests and adapts their content, and if bot activity suddenly becomes much bigger, so does the issue.

Information workers, and journalists in particular, will be able to unlock a lot of potential with browser agents. Think about how many of the software platforms you use professionally are browser-based. In a typical newsroom, reporters and editors use information and context across all kinds of systems, from a communications platform like Slack to project-management software like Asana to a CMS like WordPress. Automations can ease some of the tedium, but many newsrooms don't have the resources for the technical upkeep. With a browser agent, workers can automate their own tasks on the fly. Certainly, the data privacy concerns are even higher in a professional environment, but so are the rewards. An AI informed not just by internet data and the context of your task, but by the goals and knowledge base of your workplace, and with mastery over your browser-based software, could effectively give everyone on the team their own assistant.
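For readers curious about the mechanics of those bot restrictions: they mostly rest on the voluntary robots.txt convention, which a crawler is free to consult or to ignore. Here is a minimal sketch of how a well-behaved crawler checks a site's rules, using Python's standard library. The site URL and bot name are hypothetical placeholders, not any particular company's crawler.

from urllib import robotparser

# robots.txt is an honor system: the site publishes rules, and a
# well-behaved crawler checks them before fetching each page.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical site
rp.read()

page = "https://example.com/some-article"

# A crawler that identifies itself honestly respects the answer:
if rp.can_fetch("ExampleTrainingBot", page):  # hypothetical bot name
    print("Allowed to fetch", page)
else:
    print("Site has opted out; a polite crawler stops here.")

# The controversy described above: a user-triggered agent may skip
# this check entirely, treating the request as the user's rather
# than the bot's, which leaves the site no effective way to opt out.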
And this isn't some distant, hypothetical scenario; you can do it right now. Comet is here, and though the Assistant sometimes stumbles through tasks like a newborn calf, it can perform research, operate software, and accomplish tasks on behalf of the user. That rewrites the rules of online interaction. While the amplified privacy concerns demand clearer boundaries and stricter accountability, AI browsers represent a step change in how we use the internet: We're no longer alone out there.
The advent of generative AI has elicited waves of frustration and worry across academia for all the reasons one might expect: Early studies are showing that artificial intelligence tools can dilute critical thinking and undermine problem-solving skills. And there are many reports that students are using chatbots to cheat on assignments. But how do students feel about AI? And how is it affecting their relationships with peers, instructors, and their coursework?

I am part of a group of University of Pittsburgh researchers with a shared interest in AI and undergraduate education. While there is a growing body of research exploring how generative AI is affecting higher education, there is one group that we worry is underrepresented in this literature, yet perhaps uniquely qualified to talk about the issue: our students. Our team ran a series of focus groups with 95 students across our campuses in the spring of 2025 and found that whether students and faculty are actively using AI or not, it is having significant interpersonal, emotional effects on learning and trust in the classroom. While AI products such as ChatGPT, Gemini, or Claude are, of course, affecting how students learn, their emergence is also changing students' relationships with their professors and with one another.

"It's not going to judge you"

Most of our focus group participants had used AI in the academic setting: when faced with a time crunch, when they perceive something to be busy work, or when they are stuck and worry that they can't complete a task on their own. We found that most students don't start a project using AI, but many are willing to turn to it at some point.

Many students described positive experiences using AI to help them study, answer questions, or give them feedback on papers. Some even described using AI instead of a professor, tutor, or teaching assistant. Others found a chatbot less intimidating than attending office hours where professors might be demeaning. In the words of one interviewee: "With ChatGPT you can ask as many questions as you want and it's not going to judge you."

But by using it, you may be judged. While some were excited about using AI, many students voiced mild feelings of guilt or shame about their AI use due to environmental or ethical concerns, or simply the fear of coming across as lazy. Some even expressed a feeling of helplessness, or a sense of inevitability regarding AI in their futures.

Anxiety, distrust, and avoidance

While many students expressed a sense that faculty members are, as one participant put it, "very anti-ChatGPT," they also lamented the fact that the rules around acceptable AI use were not sufficiently clear. As one urban planning major put it: "I feel uncertain of what the expectations are," with her peer chiming in, "We're not on the same page with students and teachers or even individually. No one really is."

Students also described feelings of distrust and frustration toward peers they saw as overly reliant on AI. Some talked about asking classmates for help, only to find that they "just used ChatGPT" and hadn't learned the material. Others pointed to group projects, where AI use was described as a "giant red flag" that made them think less of their peers.

These experiences feel unfair and uncomfortable for students. They can report their classmates for academic integrity violations, entering yet another zone in which distrust mounts, or they can try to work with them, sometimes with resentment.
"It ends up being more work for me," a political science major said, "because it's not only me doing my work by myself, it's me double checking yours."

Distrust was a marker we observed in both student-to-teacher and student-to-student relationships. Learners shared fears of being left behind if other students in their classes used chatbots to get better grades. This resulted in emotional distance and wariness among students. Indeed, our findings reflect other reports indicating that the mere possibility a student might have used a generative AI tool is now undercutting trust across the classroom. Students are as anxious about baseless accusations of AI use as they are about being caught using it.

Students described feeling anxious, confused, and distrustful, and sometimes even avoiding peers or learning interactions. As educators, this worries us. We know that academic engagement, a key marker of student success, comes not only from studying the course material, but also from positive engagement with classmates and instructors alike.

AI is affecting relationships

Indeed, research has shown that faculty-student relationships are an important indicator of student success. Peer-to-peer relationships are essential too. If students are sidestepping important mentoring relationships with professors or meaningful learning experiences with peers due to discomfort over ambiguous or shifting norms around the use of AI technology, institutions of higher education could imagine alternative pathways for connection. Residential campuses could double down on in-person courses and connections; faculty could be incentivized to encourage students to visit during office hours. Faculty-led research, mentoring, and campus events where faculty and students mix informally could also make a difference.

We hope our research can also flip the script and disrupt tropes about students who use AI as cheaters. Instead, it tells a more complex story of students being thrust into a reality they didn't ask for, with few clear guidelines and little control. As generative AI continues to pervade everyday life, and institutions of higher education continue to search for solutions, our focus groups reflect the importance of listening to students and considering novel ways to help them feel more comfortable connecting with peers and faculty. Understanding these evolving interpersonal dynamics matters because how we relate to technology is increasingly affecting how we relate to one another. Given our experiences in dialogue with them, it is clear that students are more than ready to talk about this issue and its impact on their futures.

Acknowledgment: Thank you to the full team from the University of Pittsburgh Oakland, Greensburg, Bradford, and Johnstown campuses, including Annette Vee, Patrick Manning, Jessica FitzPatrick, Jessica Ghilani, Catherine Kula, Patty Wharton-Michael, Jialei Jiang, Sean DiLeonardi, Birney Young, Mark DiMauro, Jeff Aziz, and Gayle Rogers.

Elise Silva is the director of policy research at the Institute for Cyber Law, Policy, and Security at the University of Pittsburgh.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
In my writing and rhetoric courses, students have plenty of opinions on whether AI is intelligent: how well it can assess, analyze, evaluate, and communicate information. When I ask whether artificial intelligence can think, however, I often look upon a sea of blank faces. What is thinking, and how is it the same as or different from intelligence? We might treat the two as more or less synonymous, but philosophers have marked nuances for millennia. Greek philosophers may not have known about 21st-century technology, but their ideas about intellect and thinking can help us understand what's at stake with AI today.

The divided line

Although the English words "intellect" and "thinking" do not have direct counterparts in ancient Greek, looking at ancient texts offers useful comparisons. In the Republic, for example, Plato uses the analogy of a divided line separating higher and lower forms of understanding.

Plato, who taught in the fourth century BCE, argued that each person has an intuitive capacity to recognize the truth. He called this the highest form of understanding: noesis. Noesis enables apprehension beyond reason, belief, or sensory perception. It's one form of knowing something, but in Plato's view, it's also a property of the soul.

Lower down, but still above his dividing line, is dianoia, or reason, which relies on argumentation. Below the line, his lower forms of understanding are pistis, or belief, and eikasia, or imagination. Pistis is belief influenced by experience and sensory perception: input that someone can critically examine and reason about. Plato defines eikasia, meanwhile, as baseless opinion rooted in false perception.

In Plato's hierarchy of mental capacities, direct, intuitive understanding is at the top, and moment-to-moment physical input is toward the bottom. The top of the hierarchy leads to true and absolute knowledge, while the bottom lends itself to false impressions and beliefs. But intuition, according to Plato, is part of the soul, embodied in human form. Perceiving reality transcends the body, but still needs one.

So, while Plato does not differentiate between intelligence and thinking, I would argue that his distinctions can help us think about AI. Without being embodied, AI may not "think" or "understand" the way humans do. Eikasia, the lowest form of comprehension, based on false perceptions, may be similar to AI's frequent "hallucinations," when it makes up information that seems plausible but is actually inaccurate.

Embodied thinking

Aristotle, Plato's student, sheds more light on intelligence and thinking. In On the Soul, Aristotle distinguishes active from passive intellect. Active intellect, which he called nous, is immaterial. It makes meaning from experience but transcends bodily perception. Passive intellect is bodily, receiving sensory impressions without reasoning. We could say that these active and passive processes, taken together, constitute thinking.

Today, the word "intelligence" holds a logical quality that AI's calculations may conceivably replicate. Aristotle, however, like Plato, suggests that to think requires an embodied form and goes beyond reason alone.

Aristotle's views on rhetoric also show that deliberation and judgment require a body, feeling, and experience. We might think of rhetoric as persuasion, but it is actually more about observation: observing and evaluating how evidence, emotion, and character shape people's thinking and decisions. Facts matter, but emotions and people move us, and it seems questionable whether AI uses rhetoric in this way.
Finally, Aristotle's concept of phronesis sheds further light on AI's capacity to think. In Nicomachean Ethics, he defines phronesis as practical wisdom or prudence. Phronesis involves lived experience that determines not only right thought, but also how to apply those thoughts to good ends, or virtuous actions. AI may analyze large datasets to reach its conclusions, but phronesis goes beyond information to consult wisdom and moral insight.

"Thinking" robots?

Today, AI takes on many physical forms, from autonomous vehicles and humanoid robots to healthcare robots and drones. If intelligence requires embodiment, as Plato and Aristotle believed, it might seem as if AI is coming closer than ever to human thinking.

AI does take on many forms, but they're still not bodies; they run on code, algorithms, and datasets. Intuitive understanding, emotion, integrity, and practical wisdom seem to require an embodied, perishable form that is moved by experience.

Perhaps we should ask AI itself, so I did. When I prompted ChatGPT with the simple question "Can you think?", it responded: "I can process information, analyze patterns, and generate responses based on data. So, in some ways, it might seem like I'm thinking. But it's not thinking like a human does. I don't have consciousness, emotions, intentions, or awareness. Everything I do is based on patterns learned from huge amounts of text and a set of rules and probabilities. I can simulate certain aspects of thinking, but I don't truly think or understand in the human sense."

That response aligns with Plato's and Aristotle's views. For now.

Ryan Leack is an assistant professor of writing at USC Dornsife College of Letters, Arts and Sciences.

This article is republished from The Conversation under a Creative Commons license. Read the original article.