
2025-09-12 06:00:00| Fast Company

When incoming freshman Matt Cooper first set his sights on a coveted sousaphone position in the L row of The Ohio State University Marching Band, he prepared for auditions like anyone else would: practicing, playing, asking for help. Except the help came not from a coach, but from ChatGPT.

For many college students like Cooper, artificial intelligence has become a part of daily life. This widespread everyday adoption marks a stark contrast from even a couple of years ago: When OpenAI first introduced its chatbot to the public in 2022, the idea of AI in school settings ignited a heated debate over whether the technology belonged in the classroom at all. Just three years later, its adoption has spread rapidly. A recent nationwide study by Grammarly found that 87% of higher ed students use AI for school, and 90% use it in daily life, spending 10 hours on average each week with AI. (Another study, by the Digital Education Council, had similar insights, finding that 86% of students around the world use AI for their studies.)

Yet colleges still have a patchwork of standards for what constitutes acceptable AI use and what's verboten. Across majors and universities in the U.S., Grammarly also discovered that while 78% of students say their schools have an AI policy, 32% say the policy is to not use AI. Nearly 46% of students said they worried about getting in trouble for using AI. For instance, using AI to break down complex topics covered in class might be generally accepted, but using ChatGPT to edit an essay might raise some eyebrows.

Meanwhile, as students engage with the real world and consider their career options, they feel like they're going to be left behind if they don't develop AI expertise, especially as they complete internships, where they're told as much to their faces. AI literacy has been called the most in-demand skill for workers in 2025.
That's creating mixed emotions among college students, who are caught between trying to follow two different sets of rules simultaneously. To understand just how much AI has transformed young people's lives, Fast Company reached out to undergraduates nationwide to find out how they're navigating these conflicting mandates. What we found is that as the new technology continues to evolve, it's carving a spot into the lives of college students whether adults (or the students themselves) like it or not.

In this Premium story, you'll learn:

- The creative ways Gen Z students are incorporating AI into their lives to become AI fluent, even if they can't use it in their studies
- Why AI's popularity as a coding assistant is starting to change how colleges think about AI in the classroom
- How current and recent students are striking a balance between "old school" and "new school" ways of learning

An everyday companion

As Ohio State's Cooper practiced all summer for auditions, he found new ways to fold technology into his life. "AI has actually helped a fairly decent amount with it, in ways that people wouldn't normally expect," he says. From generating sheet music to helping him memorize major scales and read key signatures, ChatGPT became Cooper's trusted virtual coach. "In a matter of 20 seconds, it can come up with a full sheet of music to practice on any difficulty," he says. (On top of that, the chatbot does it all for free.)

When Caitlin Conway, a senior at Loyola Marymount University in Los Angeles, returned to school after a full-time internship in marketing, she found university life to be a bit of a reverse culture shock after being out in the workforce. But she's found easy-to-use chatbots like ChatGPT useful for adding more structure to her days. "I found that you have so much time that sometimes you don't really know what to do with it," Conway says. "I use ChatGPT to make a schedule."
"Like: I want to have this amount of time to do studying, to do my homework, and do a yoga class, and it'll come up with an easy schedule for me to follow."

Maliha Mahmud, a rising senior in business and advertising at the University of Florida, uses AI to streamline daily tasks outside of class. She'll ask ChatGPT to craft a series of recipes with leftover ingredients in her fridge (as opposed to relying on instant ramen like generations of college kids before her). For school, Mahmud relies on AI as a sort of private instructor, willing to answer questions at any time. "I'll tell AI to break a concept down to me as if they're talking to a middle schooler to understand it more," she says.

Many students also mentioned Google's NotebookLM, an AI tool that analyzes sources you upload rather than searching the web for answers. Students can upload their notes, required readings, and journals to the platform, and ask NotebookLM to make custom audio summaries with human-like voices.

Still, the value of AI was oftentimes taught outside the classroom, in the workforce, with many students saying they were not only allowed but encouraged to use AI during their internships. At her first internship, at a tech PR company, New York University senior Anyka Chakravarty says she felt that "to be a successful person, you need to become AI fluent, so there's a tension there as well."

Mahmud echoes Chakravarty's experience. "During my internship, it was encouraged to be utilizing AI," she says. "At first I thought it was a replacement, or that it was not letting us critically think. [But] it has been such a time saver." Mahmud used Microsoft's Copilot to automatically transcribe meetings, take notes, and send them to participants, tasks an intern would have done manually in the past.

All this is a far cry from how college students have been conditioned to think about AI as potential grounds for expulsion.
A checkered past (and present)

Today's college generation was raised on plagiarism anxiety. Their pre-GPT world involved rechecking citations and resorting to online plagiarism checkers.

"I was just like, I don't want to touch this, because I don't want to be ever accused of plagiarism. It definitely could be seen as very taboo," says Grant Dutro, a recent economics and communications graduate from Wheaton College in Illinois.

Although more than half of students now use AI routinely, it wasn't always welcomed with open arms, particularly by students who started college without it. Most students interviewed expressed an initial hesitation toward AI, because of that all-too-well-known fear of getting flagged for plagiarism. For decades, students were told that they could face severe repercussions for turning to the internet to download pre-written essays, copying material from books or blogs, and more. As technology advanced, so did the opportunities to plagiarize, and with them services like Turnitin, which flags copy-pasted and uncited sources in essays.

Although colleges have begun putting guidelines in place, the policies are oftentimes prohibitive, unclear, or left to individual instructors. For many teachers, the AI policies in their classrooms are not universal, which is confusing for students and may even lead them to inadvertently get in trouble. Where policy is set on an instructor-by-instructor basis, students taking the same course with different professors can have vastly different experiences with AI, at least in the classroom.

"It's morally incomprehensible to me that a large institution would not put front and center defining what their policies are, making sure they are consistent within departments," says Jenny Maxwell, Grammarly's head of education.
"Because of the institution not being clear on their policy, their own students are being harmed because of that lack of communication," Maxwell added.

While AI use in school is only gradually being destigmatized, in the workplace it already has been. Some students who recently completed internships said that they were not only allowed to use AI on the job but encouraged to do so. (Sure enough, experts recommend recent grads upskill themselves in AI literacy, while one in three managers say they'll refuse to hire candidates with no AI skills.)

A new way to learn

The conflicting messages of "AI gets you in trouble" and "AI is the future" complicate the technology's presence in college students' lives, be it in class, on an internship, or in the dorm. But for many, it's simply shifting what learning looks like. For instance, the framework for evaluating students' success might have depended on essays in the past; today, it might be more suitable to judge both the essay and the process of writing with technology, Grammarly's Maxwell says.

Many students say that the standards used to measure their learning are changing already. Claire Shaw, a former engineering student at the University of Toronto who graduated in 2024, explained that when she began college, she learned the basics of coding at the same time that AI piqued the interest of her instructors. She learned the old-school way while being encouraged by some of her teachers to play with the new technology. Still, Shaw did not start using AI for school until her fourth year. Now, she believes a balance between old school and new school can exist. "You're allowed to use AI tools, so the standard for those kinds of coding assignments was elevated," Shaw says. It points to a big shift: In academia, where AI felt (and in many cases still feels) taboo, it's also being embraced, even in class.
Now that AI is an expected tool, the difficulty of coding assignments has risen, she says, leading to more advanced projects at an earlier stage in a student's career. And while this might be exciting, and great prep for the future, Shaw still highlights the need to understand fundamentals, skills you learn on your own without AI's help, before jumping in headfirst. "There are certain moments where we still need to test the raw skills of somebody by setting up environments that don't have AI tool access," she explained, referencing in-person examinations with no AI tools available. Think of it as learning to drive stick while automatic cars exist: combining AI with traditional teaching methods may create a more holistic education.

Similarly, for humanities majors, some instructors are taking notes out of the old-school playbook to measure these raw skills, like debating, communication, and critical thinking. "We've turned to doing a lot more interactive stuff, like doing discussion circles, or handwriting pieces of writing," says NYU's Chakravarty, who's also a mentor in the school's writing center.

College students know AI isn't going anywhere. And even as everyone (students, teachers, schools, first bosses) continues to stumble through adoption, some aspects of the college experience may never become obsolete. "My professors brought out blue books again," says Chakravarty. "Which I hadn't had since, like, my first semester."


Category: E-Commerce

 


2025-09-12 00:00:00| Fast Company

At some point, you've likely welcomed a recent college graduate into your business. They're smart, well-educated, and full of potential, but on day one they have little understanding of your company's unique processes, culture, or goals. Large language models (LLMs) are much the same. They carry vast general knowledge yet lack the specific context that makes them immediately valuable to your organization. Just as new hires go through the onboarding ropes, LLMs need structured access to your business's data, tools, and workflows to become truly useful.

That's where Model Context Protocol (MCP) comes in. MCP enables communication between AI applications, AI agents, and data sources. The protocol has quickly moved from an emerging standard to a strategic enabler, and the conversation we're having with our clients has shifted from technical architecture to practical application. MCP is not just another integration layer. It's a way to unlock latent value across your organization by connecting AI agents with the systems, data, and workflows that drive outcomes. The real opportunity lies in how you apply MCP.

Start with what and why

Let's be honest: there's no shortage of MCP primers out there. Most of them walk you through the architecture: hosts, clients, servers. That's fine, but it's not where the real value is. The real question isn't "How does MCP work?" It's "What can I do with it?" and "Why does it matter to my business?" When we talk about MCP, I try to steer the conversation away from the architecture and toward the outcomes. What problem are you solving? Why is MCP the right tool to achieve your goals?

A Midwest health system we worked with wanted to personalize treatment for patients with hypertension, using the vast troves of data stored in their electronic health records (EHR). The strategic hurdle wasn't just accessing the data; it was granting access securely, at scale, and in a way that respected compliance boundaries across thousands of patient encounters.
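To make the hosts/clients/servers picture concrete, here is a minimal, illustrative sketch of the pattern MCP standardizes: a server exposes named tools behind one structured request format, and any client can invoke them without a custom integration. This is plain Python, not the official MCP SDK, and the tool name and EHR fields are hypothetical.

```python
import json
from typing import Any, Callable


class ToolServer:
    """Registers named tools and serves structured requests, MCP-style."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}

    def tool(self, name: str):
        """Decorator that registers a function as a callable tool."""
        def register(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            return fn
        return register

    def handle(self, request_json: str) -> str:
        """One structured entry point: every client call flows through here."""
        req = json.loads(request_json)
        fn = self._tools[req["tool"]]
        result = fn(**req.get("arguments", {}))
        return json.dumps({"tool": req["tool"], "result": result})


server = ToolServer()


@server.tool("get_patient_vitals")
def get_patient_vitals(patient_id: str) -> dict:
    # Stand-in for an EHR lookup; a real server would query the record system.
    return {"patient_id": patient_id, "systolic": 128, "diastolic": 82}


# A client invokes the tool through the protocol layer, not a custom API.
response = server.handle(json.dumps(
    {"tool": "get_patient_vitals", "arguments": {"patient_id": "p-001"}}))
```

The point of the shape, not the specifics: once tools sit behind a single structured protocol, adding a new data source means registering another tool, not writing another integration.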
With MCP, we were able to connect AI agents to a rich EHR data model that included vitals, medications, comorbidities, lab results, and even nuanced metrics like ejection fraction readings. MCP serves as the structured conduit, enabling the AI to interact with nearly 800 patient features per encounter without compromising privacy or requiring custom integrations. The predictive accuracy has enabled clinicians to tailor treatment plans with greater precision, according to our client. Patients have gained an estimated 100 additional days of life, and the system saw $2,000 in annual healthcare savings for 20% of its hypertension population.

Similarly, a national convenience store chain used MCP to connect AI systems with real-time data on customer movement, promotional engagement, and inventory shrinkage. No retraining models. No custom APIs. Just a scalable model for improving store performance. MCP isn't just a bridge between systems. More vitally, it connects strategic intent with measurable outcomes.

Guardrails for autonomy and accountability

As we move toward agentic AI, models acting like digital employees, autonomy without structure is risky. You wouldn't let a new hire run wild with sensitive data or make decisions without oversight, and the same goes for AI. One major challenge is idempotency, the ability to perform the same operation repeatedly with consistent results. Most LLMs aren't idempotent. Ask one to write an email five times, and you'll get five different versions. That's fine for creativity, but not for processing payments or executing compliance workflows. MCP introduces guardrails to standardize how agents interact with internal systems, ensuring repeatable, reliable outcomes even if the AI's internal reasoning varies. That's critical in regulated industries like healthcare, finance, and government.

We saw this play out with a Middle East government statistics agency. They had decades of data buried in legacy systems.
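One common way to build the idempotency guardrail described above is to key each side-effecting operation on a caller-supplied operation ID, so a retried request returns the stored outcome instead of executing again. This is a generic sketch of that pattern, not MCP-specific code; the payment scenario and names are hypothetical.

```python
charges: list[float] = []          # simulated external side effects
_completed: dict[str, dict] = {}   # operation ID -> recorded outcome


def execute_payment(op_id: str, amount: float) -> dict:
    """Execute at most once per op_id; replays return the same result."""
    if op_id in _completed:        # replay: same outcome, no second charge
        return _completed[op_id]
    charges.append(amount)         # the side effect happens exactly once
    outcome = {"op_id": op_id, "status": "paid", "amount": amount}
    _completed[op_id] = outcome
    return outcome


first = execute_payment("op-42", 19.99)
retry = execute_payment("op-42", 19.99)  # agent retries; guardrail absorbs it
```

However many times an agent's variable reasoning re-issues the call, the system of record sees one payment, which is exactly the repeatability the protocol layer is meant to enforce.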
MCP enabled a voice-powered AI interface that could query massive datasets in Arabic and English. What used to take weeks now takes seconds, and more importantly, decision-makers now have timely, contextual intelligence at their fingertips.

Strategic implementation: build once, scale everywhere

Here's the thing: MCP isn't about building one-off solutions. It's about creating frameworks that can be reused across departments and use cases. To apply MCP effectively, organizations must think in the following terms:

- Reliability and repeatability: MCP enforces structured communication, making AI agents more predictable and trustworthy.
- Scalability and ecosystem growth: With a unified API layer, MCP simplifies deployment and integration, accelerating innovation.
- Safety and control: MCP ensures AI agents operate within defined boundaries, protecting sensitive data and maintaining enterprise integrity.

We worked with a global healthcare technology provider that wanted to simplify complex medical terminology for patients. Instead of building a narrow solution, we used MCP to create a reusable framework that could be extended across departments. AI agents can securely access structured medical data and terminology libraries, apply consistent translation logic, and tailor outputs for patients, clinicians, and administrative staff. That same protocol-driven infrastructure was later adapted for internal training, multilingual documentation, and voice-assisted navigation of clinical systems. MCP made it possible to replicate success without reinventing the wheel. That's what strategic implementation looks like: turning isolated wins into enterprise-wide transformation.

The road ahead

MCP is more than a protocol; it's a strategic enabler. It gives AI agents the structure they need to interact with enterprise data and tools. This means businesses can unlock new efficiencies, reduce development cycles, and build a thriving ecosystem of interoperable AI solutions.
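The "safety and control" idea of agents operating within defined boundaries can be sketched as a policy layer that checks every tool call against a per-role allowlist before anything touches sensitive data. This is an illustrative simplification, not part of MCP itself, and the role and tool names are hypothetical.

```python
# Per-role allowlists: each agent role may only call the tools listed here.
ALLOWED_TOOLS: dict[str, set[str]] = {
    "clinician": {"read_vitals", "read_labs", "write_note"},
    "billing_agent": {"read_invoices"},
}


def call_tool(role: str, tool: str) -> str:
    """Reject any call that falls outside the role's defined boundary."""
    if tool not in ALLOWED_TOOLS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return f"{tool}: ok"           # stand-in for the real tool dispatch


allowed = call_tool("clinician", "read_labs")
try:
    call_tool("billing_agent", "read_labs")  # out of scope: blocked
    blocked = False
except PermissionError:
    blocked = True
```

Because the check sits in the protocol layer rather than in each agent, the boundary holds no matter how the model's reasoning wanders.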
The full potential is still unfolding, but for companies serious about AI, working with partners that understand how to apply MCP can be foundational. With the right guardrails in place, AI can be creative and compliant, autonomous and accountable. Just what you'd expect from any employee helping move your business forward.

Juan Orlandini is CTO, North America, of Insight Enterprises.



 

2025-09-11 23:21:00| Fast Company

We're in the midst of an extraordinary wave of AI-fueled innovation, and no industry will remain untouched. It's still early days in what promises to be a new technology supercycle. But for impact organizations such as nonprofits and government agencies, which typically lag in tech adoption, this moment represents a priceless window of opportunity.

Unfortunately, the impact sector is still playing it safe with digital strategies that prioritize incremental modifications over decisive, daring action and technical innovation. These organizations are led by some of the smartest, most dedicated people I know, and they understand the trends. So why are they stuck in their approach to digital?

Impact organizations can't afford to ignore AI

Before we jump into the reasons nonprofits and government agencies are playing it safe, let's consider the stakes, and why time is of the essence when it comes to adopting AI.

First, AI is an impact multiplier. Leaning into the technology isn't about adopting new tech for the sake of keeping current. It's about radically amplifying your team's capacity to focus on your core mission rather than rote administrative tasks. Of course, AI isn't a panacea, and there are serious ethical considerations that should be taken into account along the way. The best technology decisions are always values-aligned. But that doesn't mean sidestepping the technology altogether.

Second, the moment to act is now. Over the coming years, the gap between organizations that figure out how to effectively adopt AI and those that don't will widen exponentially. And in these early, chaotic days of technological innovation, AI tools and models are more affordable and accessible than ever before, creating a unique opportunity for even resource-strapped organizations to explore their potential. But realizing that potential requires thoughtful investment, even when entry costs seem low.
Finally, all organizations, from corporate giants to small nonprofits, are still figuring out how to adopt AI. In this rare moment of digital parity, you have the chance to position your organization at the front of the curve.

Five common mistakes that prevent digital risk-taking

There are several reasons nonprofits and government agencies fail to take calculated risks in their digital strategies. These mistakes aren't unique to the current moment; they are perennial stumbling blocks that hinder digital innovation for many well-intentioned organizations and agencies. Here are five common mistakes that prevent digital risk-taking, and how to solve them.

1. Underfunding digital investments

Nonprofits and government agencies are fundamentally resource constrained; budgets in the public sector are never going to rival those available in the private sector. But it's also about resource allocation. Digital is often still viewed as just a communications tool or overhead rather than a core investment that is fundamental to program delivery and organizational success. Digital projects are often treated as a one-time line item rather than an ongoing investment that needs to be refined and improved over time. Further, investments are often made just in technology, but not in the people and processes that will ultimately make that technology successful.

Solution: Advocate for increased and sustained digital budgets by aligning digital strategies with organizational goals and measuring ROI over time.

2. Decision by committee

The prevalence of committee-driven decision-making and the pursuit of consensus often lead to watered-down strategies and missed opportunities. In a fast-paced digital environment, this approach can also slow down decision-making. The result is a strategy that is already outdated by the time it is implemented.

Solution: Streamline decision-making processes for digital initiatives. Implement agile methodologies and empower digital teams with greater autonomy.
3. Thinking you know your audience better than you do

Ask any public servant or nonprofit staffer, and they'll tell you that what motivates them is helping people. But when it comes to knowing their audiences, many teams rely too heavily on what they think they know about key stakeholders. Worse, some organizations prioritize internal perspectives over the needs and preferences of their target audiences. This misalignment can lead to digital initiatives that fail to resonate or drive meaningful engagement.

Solution: Take the time to conduct direct user research and test products with the people they are designed to serve.

4. Fear of failure

The impact sector's current funding models and budget structures create an environment where failure is taboo. This risk-averse culture stifles innovation and prevents organizations from learning through experimentation, a crucial element of digital success.

Solution: Focus funding proposals on outcomes, not activities, to allow flexibility in approach, and create a culture of innovation that embraces calculated risks and learns from failures.

5. Analysis paralysis

When confronted with thorny problems like AI adoption, many organizations hang back because they are waiting for the just-right moment or a critical mass of decisive information before making a move. In the wildly fast-moving world of AI, this mindset doesn't work. Learning by doing is the best course of action; experimentation and prototyping are the name of the game.

Solution: Empower your team to experiment with AI tools. Provide flexible guidelines that ensure data security and values alignment without stifling creativity.

AI is here to stay, and it's indelibly reshaping the digital landscape. For risk-averse impact organizations, avoiding it is the riskiest strategy of all.

Elisabeth Bradley is CEO of Forum One.



 
