2025-11-02 17:15:00 | Fast Company

For decades now, tech companies have been promising us a future straight out of Star Trek. Instead of being confined to phones and computers, our digital lives would extend to a network of screens all around us, from connected TVs and smart fridges to kitchen countertop displays and car dashboards. The tech companies called this “ambient computing” or “ubiquitous computing” and extolled how it would get technology out of the way so we could focus on the real world.

Here’s what we’ve got instead: Samsung’s smart refrigerators, which range from $1,899 to $3,499, have started showing advertisements on their screens. Amazon’s Echo Show smart displays now have ads that you can’t turn off, even if you’re paying $20 per month for the upgraded Alexa+ assistant. Amazon also shows “Sponsored Screensavers” on its Fire TV devices if you leave them alone for a few minutes. Tesla recently pushed a promotion for Disney’s Tron: Ares to its car dashboards.

They got the ambient part right, in that we’ve now surrounded ourselves with screens we don’t control. But instead of blending into the background, the screens are now doing the opposite, distracting us with ads in hopes of padding their makers’ bottom lines.

Promises made

Ambient computing got its start in a more idealistic setting, in the late 1980s at Xerox Palo Alto Research Center. Mark Weiser, then the head of PARC’s computer science lab (and later its chief technology officer), used the term “ubiquitous computing” to describe how an array of screens in various sizes (tabs, pads, and boards) would all work in tandem to help people accomplish everyday tasks. “Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods,” he wrote.

Tech companies started dusting off the idea a couple of decades later, as lightweight processors, low-cost displays, and widespread internet connectivity made ambient computing more feasible. In 2013, for instance, Microsoft opened an “Envisioning Center” to test its ambient computing ideas, including head-to-toe touchscreens for kitchens and common areas. Cisco demoed a “Second Screen 2.0” concept, with screens that could blend into the surrounding walls and provide personalized information as needed. Samsung had an even bolder vision, releasing a “Display Centric World” concept video full of rollable, foldable, and transparent displays. “Technology begins with a love for you,” the video declared, before showing how Samsung’s screens would someday wrap around coffee cups, unfurl from nightstands, light up inside car windows, and cover classroom walls.

The term “ambient computing” took hold a few years later. In 2017, the tech columnist Walt Mossberg used the term to describe technology that got out of your way, and pretty soon both Google and Amazon were running with it. “The technology just fades into the background when you don’t need it,” Rick Osterloh, Google’s SVP of devices and services, declared during a 2019 keynote. He continued to describe Google’s constellation of connected phones, watches, speakers, and smart displays as “ambient computing” in the years that followed, and in 2022 called it the company’s “north star.” Dave Limp, Amazon’s former senior vice president of devices and services, favored the similar term “ambient intelligence,” describing how cloud computing would power a network of smart gadgets from Echo speakers to Fire TV streaming players.
An Amazon Developer blog post from 2021 declared that “ambient is the future” and would “make life easier and better without getting in your way.” Once the stuff of imagination, ambient computing had arrived in earnest, but there was a problem: The utopian ideal was at odds with how these companies make money.

Cheap screens you can’t control

It’s not enough to merely sell the device, be it a smart speaker, connected TV, or fridge with a built-in screen. Instead, tech companies expect these devices to generate revenue over time through ads or subscriptions. In some cases they sell these products at aggressively low prices in hopes of recouping the investment later.

Meanwhile, the software that runs on these inexpensive screens provides far less control than a computer or even a phone. These are increasingly dumb terminals with software controlled through the cloud, which means you have little recourse when that software turns against you. While you can swap out the search engine on your computer for one that doesn’t fill the screen with ads, no such alternative exists when your smart display starts cycling through banner ads or using voice responses to upsell shopping items.

The hardware isn’t exactly simple to replace, either. You might be comfortable tossing out a single smart display or speaker, but what if you’ve filled your home with them and built an entire smart home system around them? And what happens when your TVs, fridges, and car dashboards become digital billboards as well?

With all this in mind, those flashy Samsung and Microsoft concept videos from the early 2010s take on a different flavor. These companies sold us on a digital utopia powered by pervasive screens and connected software without ever explaining how they’d pay for it. Now that we’ve surrounded ourselves with the technology to make it possible, the bill is finally coming due.


Category: E-Commerce

 

LATEST NEWS

2025-11-02 11:00:00 | Fast Company

Last year, travel group AAA estimated about 80 million Americans traveled over the Thanksgiving holidays. It was the busiest Thanksgiving ever at airports across the country, and some reports are saying those records could be shattered this year. A lot of that traveling will be done by young adults making their way home from school or new cities to see family and reconnect with old friends.

That last part is the crux of Facebook’s first brand campaign in four years. In a new ad called “Home For The Holidays,” we see people making their way back home and various get-togethers being organized on Facebook. Created by agency Droga5 and set to Bob Dylan and Johnny Cash’s “Girl From the North Country,” the spot expertly conjures the comfort and emotional security that only the warm embrace of old friends and familiar surroundings can provide.

The goal here is “to reintroduce Facebook to a new generation of users and remind people what made Facebook magic in the first place,” according to the campaign press release. It’s just the start of the brand’s efforts in the coming months to reach younger audiences, including upcoming partnerships with Sports Illustrated and 10 American universities tied to college sports.

Facebook’s global marketing director Briana de Veer says that one in four young adults (ages 18 to 29) in the U.S. and Canada uses Facebook Marketplace. Hundreds of thousands of young adults in the U.S. and Canada create Facebook Dating profiles every month, and young adult matches are up 10% year over year. “We see young adults using Facebook to help them navigate life stages,” says de Veer. “They move into their first apartment and turn to Marketplace to help furnish it on a tight budget, or use Facebook Dating to find love, or join Facebook Groups to meet people in a new city, for example.”

Sounds great. Except compared to Facebook’s reality in culture, the new ad is as much a fantasy as hooking up with your high school crush on that next trip home. This may be Facebook’s first brand campaign in four years, but it’s picked up exactly where it left off in serving up an image of a brand that neither reflects nor defends who it actually is in the real world. Because in the real-life version of this spot, these old friends would likely be in the bar screaming at each other over political hot takes, healthcare facts, and anti-immigrant tirades.

Look, we all know advertising is about aspiration. For brands, it’s about projecting the roles they want to play in our lives. For us, it’s about seeing an image we might want to identify with. But marketers need to balance between that manufactured ideal and the reality of how they exist in the world. There’s aspiration and then there’s delusion, and it’s a brand’s job to know the difference.

The bad stuff

It’s hard to ignore the obvious dichotomy between Facebook’s ads and its real-life decisions. In January, Facebook founder and CEO Mark Zuckerberg announced a gaggle of changes to the company’s content moderation, including cutting its fact-checking program, which was originally established to fight the spread of misinformation across its social media apps. “It’s time to get back to our roots around free expression,” Zuckerberg said in a video announcing the changes. He also acknowledged there would be more “bad stuff” on the platforms as a result of the decision. “The reality is that this is a trade-off,” he said. “It means that we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
Nicole Gill, founder and executive director of the digital watchdog organization Accountable Tech, told The New York Times that this was “reopening the floodgates to the exact same surge of hate, disinformation and conspiracy theories that caused Jan. 6, and that continue to spur real-world violence.” A former Meta employee told Platformer, “I really think this is a precursor for genocide [...] We’ve seen it happen. Real people’s lives are actually going to be endangered.” Amnesty International said these changes posed “a grave threat to vulnerable communities globally” and drastically increased the risk that the company will yet again contribute to mass violence and gross human rights abuses, just like it did in Myanmar in 2017.

That’s not all, though. As Meta plows full steam ahead on building AI superintelligence, it’s leaving a path of unconsidered consequences in its wake. In August, Reuters reported that an internal Meta memo revealed that the company’s rules for AI chatbots had permitted “sensual” chats with children. Not quite the warm ’n’ fuzzy vibes the brand is going for.

I asked de Veer about how the company thinks about balancing the parts of the brand they want to reflect back into the world with a campaign like this, and the obvious challenges that remain. “We continue to invest in keeping people safe on our platforms and removing harmful content that goes against our policies,” she says. “That is critical foundational work that makes it possible for people to see and experience the core value of the brand, which is the focus of this campaign.”

Back to the Future

Back at the end of 2020, I called Facebook the Worst Brand of the Year, based on the Grand Canyon-size gap between the company it was projecting itself to be, and the one defined by its actual, real-world actions. Back then, I called Facebook out for how it portrayed itself as a warm and fuzzy marketplace of ideas while knowingly facilitating the spread of health misinformation and political falsehoods. Sound familiar?

In 2021, the last time Facebook launched a brand campaign, that ol’ familiar feeling was back again. This time it was a spot called “The Tiger & The Buffalo,” which somehow hoped that dropping some friends inside a 1908 Henri Rousseau painting would distract us from revelations in The Wall Street Journal’s Facebook Files, the testimony of whistleblower Frances Haugen, and a study on how climate change denial was spreading unchecked on Facebook. The more things change, the more they stay exactly the same at Facebook, it seems.

I actually feel bad for ad agency Droga5, which has crafted some truly impressive ads for the brand over the years, including two of the very best to come out of COVID: “Never Lost” and “Survive,” about a beloved NYC restaurant called Coogan’s.

Not only is the Facebook algorithm still fine-tuned to feed you the angriest, most controversial content it can, it’s also pulling back on the efforts to combat disinformation and vitriol that are known to incite violence. With its new campaign, it’s offering yet another distraction from its problematic role in culture.

The strategy here is to remind people why Facebook ever mattered in the first place. It’s to harken back to the halcyon days between 2006 and 2010, when it was actually a tool to primarily connect with people. Two decades later, Facebook is all that and a whole lot more, plus, you know, rage-baiting. Instead of living in the past, the brand needs to celebrate its best while also actively working to solve its worst. It’s definitely not a chair.
Perhaps the closest the brand has come to doing just that was in an ad called “Here Together.” It acknowledged what Zuckerberg recently called the “bad stuff,” and defined its role in regulating it, saying that from now on, “Facebook will do more to keep you safe and protect your privacy,” so we can all get back to “what made Facebook great in the first place.” That was in 2018, when all the people in “Home For The Holidays” were still in high school. It’s time this brand grew up, too.


Category: E-Commerce

 

2025-11-02 09:30:00 | Fast Company

The latest generation of artificial intelligence models is sharper and smoother, producing polished text with fewer errors and hallucinations. As a philosophy professor, I have a growing fear: When a polished essay no longer shows that a student did the thinking, the grade above it becomes hollow, and so does the diploma.

The problem doesn’t stop in the classroom. In fields such as law, medicine, and journalism, trust depends on knowing that human judgment guided the work. A patient, for instance, expects a doctor’s prescription to reflect an expert’s thought and training. AI products can now be used to support people’s decisions. But even when AI’s role in doing that type of work is small, you can’t be sure whether the professional drove the process or merely wrote a few prompts to do the job.

What dissolves in this situation is accountability: the sense that institutions and individuals can answer for what they certify. And this comes at a time when public trust in civic institutions is already fraying.

I see education as the proving ground for a new challenge: learning to work with AI while preserving the integrity and visibility of human thinking. Crack the problem here, and a blueprint could emerge for other fields where trust depends on knowing that decisions still come from people. In my own classes, we’re testing an authorship protocol to ensure student writing stays connected to their thinking, even with AI in the loop.

When learning breaks down

The core exchange between teacher and student is under strain. A recent MIT study found that students using large language models to help with essays felt less ownership of their work and did worse on key writing-related measures. Students still want to learn, but many feel defeated. They may ask: Why think through it myself when AI can just tell me? Teachers worry their feedback no longer lands. As one Columbia University sophomore told The New Yorker after turning in her AI-assisted essay: “If they don’t like it, it wasn’t me who wrote it, you know?”

Universities are scrambling. Some instructors are trying to make assignments AI-proof, switching to personal reflections or requiring students to include their prompts and process. Over the past two years, I’ve tried versions of these in my own classes, even asking students to invent new formats. But AI can mimic almost any task or style. Understandably, others now call for a return to what are being dubbed “medieval standards”: in-class test-taking with blue books and oral exams. Yet those mostly reward speed under pressure, not reflection. And if students use AI outside class for assignments, teachers will simply lower the bar for quality, much as they did when smartphones and social media began to erode sustained reading and attention.

Many institutions resort to sweeping bans or hand the problem to ed-tech firms, whose detectors log every keystroke and replay drafts like movies. Teachers sift through forensic timelines; students feel surveilled. Too useful to ban, AI slips underground like contraband.

The challenge isn’t that AI makes strong arguments available; books and peers do that, too. What’s different is that AI seeps into the environment, constantly whispering suggestions into the student’s ear. Whether the student merely echoes these or works them into their own reasoning is crucial, but teachers cannot assess that after the fact. A strong paper may hide dependence, while a weak one may reflect real struggle.
Meanwhile, other signatures of a student’s reasoning (awkward phrasings that improve over the course of a paper, the quality of citations, the general fluency of the writing) are obscured by AI as well.

Restoring the link between process and product

Though many would happily skip the effort of thinking for themselves, it’s what makes learning durable and prepares students to become responsible professionals and leaders. Even if handing control to AI were desirable, it can’t be held accountable, and its makers don’t want that role. The only option as I see it is to protect the link between a student’s reasoning and the work that builds it.

Imagine a classroom platform where teachers set the rules for each assignment, choosing how AI can be used. A philosophy essay might run in AI-free mode: students write in a window that disables copy-paste and external AI calls but still lets them save drafts. A coding project might allow AI assistance but pause before submission to ask the student brief questions about how their code works. When the work is sent to the teacher, the system issues a secure receipt (a digital tag, like a sealed exam envelope) confirming that it was produced under those specified conditions.

This isn’t detection: no algorithm scanning for AI markers. And it isn’t surveillance: no keystroke logging or draft spying. The assignment’s AI terms are built into the submission process. Work that doesn’t meet those conditions simply won’t go through, like when a platform rejects an unsupported file type.

In my lab at Temple University, we’re piloting this approach by using the authorship protocol I’ve developed. In the main authorship check mode, an AI assistant poses brief, conversational questions that draw students back into their thinking: “Could you restate your main point more clearly?” or “Is there a better example that shows the same idea?” Their short, in-the-moment responses and edits allow the system to measure how well their reasoning and final draft align. The prompts adapt in real time to each student’s writing, with the intent of making the cost of cheating higher than the effort of thinking.

The goal isn’t to grade or replace teachers but to reconnect the work students turn in with the reasoning that produced it. For teachers, this restores confidence that their feedback lands on a student’s actual reasoning. For students, it builds metacognitive awareness, helping them see when they’re genuinely thinking and when they’re merely offloading. I believe teachers and researchers should be able to design their own authorship checks, each issuing a secure tag that certifies the work passed through their chosen process, one that institutions can then decide to trust and adopt.

How humans and intelligent machines interact

There are related efforts underway outside education. In publishing, certification efforts already experiment with “human-written” stamps. Yet without reliable verification, such labels collapse into marketing claims. What needs to be verified isn’t keystrokes but how people engage with their work. That shifts the question to cognitive authorship: not whether or how much AI was used, but how its integration affects ownership and reflection. As one doctor recently observed, learning how to deploy AI in the medical field will require a science of its own. The same holds for any field that depends on human judgment. I see this protocol acting as an interaction layer, with verification tags that travel with the work wherever it goes, like email moving between providers.
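As a rough sketch of the sealed-receipt idea (and only a sketch: the field names, functions, and HMAC-based signing below are illustrative assumptions, not the actual protocol), a submission system could bind a piece of work to its declared conditions like this, in Python:

import hashlib
import hmac
import json
import time

SIGNING_KEY = b"institution-held-secret"  # hypothetical: held by the school, not the student

def issue_receipt(text: str, policy: dict, key: bytes = SIGNING_KEY) -> dict:
    # Seal a submission together with the conditions it was produced under.
    receipt = {
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
        "policy": policy,  # e.g. {"mode": "ai-free", "copy_paste": False}
        "issued_at": int(time.time()),
    }
    body = json.dumps(receipt, sort_keys=True).encode()
    receipt["tag"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(text: str, receipt: dict, key: bytes = SIGNING_KEY) -> bool:
    # A recipient checks that the tag is genuine and the work is unchanged.
    claimed = {k: v for k, v in receipt.items() if k != "tag"}
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(receipt["tag"], expected)
            and claimed["sha256"] == hashlib.sha256(text.encode()).hexdigest())

essay = "My final draft..."
receipt = issue_receipt(essay, {"mode": "ai-free", "copy_paste": False})
assert verify_receipt(essay, receipt)            # the tag travels with the work
assert not verify_receipt(essay + "!", receipt)  # any edit breaks the seal

Because the tag covers both the text’s hash and the declared policy, changing either one breaks the seal, which is what would let such a receipt travel with the work and be checked by whoever receives it.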
Such an interaction layer would complement technical standards for verifying digital identity and content provenance that already exist. The key difference is that existing protocols certify the artifact, not the human judgment behind it.

Without giving professions control over how AI is used and ensuring the place of human judgment in AI-assisted work, AI technology risks dissolving the trust on which professions and civic institutions depend. AI is not just a tool; it is a cognitive environment reshaping how we think. To inhabit this environment on our own terms, we must build open systems that keep human judgment at the center.

Eli Alshanetsky is an assistant professor of philosophy at Temple University. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

Latest from this category

02.11 Tech giants promised ambient computing. We got digital billboards instead.
02.11 Facebook’s new holiday ad pines for a social platform that’s long gone
02.11 This AI authorship protocol aims to keep humans connected to thinking
02.11 The discovery that linked signature size to narcissism
02.11 Stop choosing between being a friend or a leader. The best executives do both
01.11 Forget skincare and tequila: Novak Djokovic just joined the celebrity popcorn boom
01.11 This week in business: Netflix shakes up Wall Street, Amazon trims down, and shoppers gear up
01.11 Stephen Sondheim’s creative secret weapon had nothing to do with Broadway musicals

All news

03.11 Asian stocks trade mixed, gold dips on China move
03.11 World awaits landmark US Supreme Court decision on Trump's tariffs
03.11 A 10-year SIP in mid-cap MFs can deliver alpha returns
03.11 Traders cautiously optimistic as seasonal trends hint at November gains
03.11 Which stocks could outperform amid improving market sentiment?
03.11 FPIs pour Rs 10,708 crore into domestic primary market in October
03.11 Nifty may take a breather after rally, but bullish bias intact
03.11 Will AI mean the end of call centres?