
2025-11-13 10:00:00| Fast Company

OpenAI watchers have spotted something curious over the last week. References to GPT-5.1 keep showing up in OpenAI's codebase, and a cloaked model codenamed Polaris Alpha, widely believed to have come from OpenAI, appeared without warning on OpenRouter, a platform that AI nerds use to test new systems. Nothing is official yet. But all of this suggests that OpenAI is quietly preparing to release a new version of its GPT-5 model. Industry sources point to a potential release date as early as November 24.

If GPT-5.1 is for real, what new capabilities will the model have? As a former OpenAI beta tester, and someone who burns through millions of GPT-5 tokens every month, here's what I'm expecting.

A larger context window (but still not large enough)

An AI model's context window is the amount of data (measured in tokens, which are basically bits of words) that it can process at one time. As the name implies, a larger context window means that a model can consider more context and external information when processing a given request. This usually results in better output.

I recently spoke to an artist, for example, who hands Google's Gemini a 300-page document every time he chats with it. The document includes excerpts from his personal journal, full copies of screenplays he's written, and much else. This insanely large amount of context lets the model provide him much better, more tailored responses than it would if he simply interacted with it like the average user.

This works largely because Gemini has a 1 million token context window. GPT-5's, in comparison, is relatively puny at just 196,000 tokens in ChatGPT (expanded to 400,000 tokens when used by developers through the company's API). That smaller context window puts GPT-5 and ChatGPT at a major disadvantage. If you want to use the model to edit a book or improve a large codebase, for example, you'll quickly run out of tokens.
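To get a feel for those numbers, here is a minimal sketch in Python of a context-window check. It uses the common rough heuristic of about four characters per token rather than a real tokenizer (OpenAI's tiktoken library would give exact counts), so the figures are estimates; the window sizes are the ones cited above.

```python
# Rough context-window check using the ~4 characters-per-token rule of
# thumb for English text. A real tokenizer would give exact counts.

CONTEXT_WINDOWS = {
    "GPT-5 (ChatGPT)": 196_000,
    "GPT-5 (API)": 400_000,
    "Gemini": 1_000_000,
}

def estimate_tokens(text: str) -> int:
    """Approximate token count: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits(text: str, window: int) -> bool:
    """True if the estimated token count fits within the window."""
    return estimate_tokens(text) <= window

# A 300-page document at roughly 3,000 characters per page:
doc = "x" * (300 * 3_000)  # ~900,000 characters, ~225,000 tokens
for model, window in CONTEXT_WINDOWS.items():
    print(f"{model}: {'fits' if fits(doc, window) else 'too large'}")
```

On this estimate, a 300-page document overflows GPT-5's ChatGPT window but fits comfortably in Gemini's, which is consistent with the artist's workflow described above.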
When OpenAI releases GPT-5.1, sources indicate that it will come with a 256,000 token context window when used via the ChatGPT interface, and perhaps double that in the API. That's better than today's GPT-5, to be sure. But it still falls far short of Gemini, especially as Google prepares to make its own upgrades. OpenAI could make a surprise last-minute upgrade to 1 million tokens. But if it keeps the 256,000 token context window, expect plenty of grumbling from the developer community about why the window still isn't big enough.

Even fewer hallucinations

OpenAI's GPT-5 model falls short in many ways. But one thing it's very good at is providing accurate, largely hallucination-free responses. I often use OpenAI's models to perform research. With earlier models like GPT-4o, I found that I had to carefully fact-check everything the model produced to ensure it wasn't imagining some new software tool that doesn't actually exist, or lying to me about myriad other small, crucial things.

With GPT-5, I find I have to do that far less. The model isn't perfect. But OpenAI has largely solved the problem of wild hallucinations. According to the company's own data, GPT-5 hallucinates only 26% of the time when solving a complex benchmark problem, versus 75% of the time with older models. In normal usage, that translates to a far lower hallucination rate on simpler, everyday queries that aren't designed to trip the model up.

With GPT-5.1, expect OpenAI to double down on its new, lower-hallucination direction. The updated model is likely to do an even better job of avoiding errors. There's a cost, though. Models that hallucinate less tend to take fewer risks, and can thus seem less creative than unconstrained, hallucination-laden ones. OpenAI will likely try to carefully walk the line between accuracy and creativity with GPT-5.1. But there's no guarantee it will succeed.
Better, more creative writing

In a similar vein, when OpenAI released its GPT-5 model, users quickly noticed that it produced boring, lifeless prose. At the time, I predicted that OpenAI had essentially given the model an emotional lobotomy, killing its emotional intelligence in order to curb a worrying trend of the model sending users down psychotic spirals.

Turns out, I was right. In a post on X last month, Sam Altman admitted that "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues." But Altman also said in the post, "now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

That process began with the rollout of new, more emotionally intelligent personalities in the existing GPT-5 model. But it's likely to continue and intensify with GPT-5.1. I expect the new model to have the overall intelligence and accuracy of GPT-5, but with a personality to match the emotionally deep GPT-4o. This will likely be paired with much more robust safeguards to ensure that 5.1 avoids conversations that might hurt someone who is having a mental health crisis. Hopefully, with GPT-5.1 the company can protect those vulnerable users without bricking the bot's brain for everyone else.

Naughty bits

If you're squeamish about NSFW stuff, maybe cover your ears for this part. In the same X post, Altman subtly dropped a sentence that sent the internet into a tizzy: "As we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults." The idea of America's leading AI company churning out reams of computer-generated erotica has already sparked feverish commentary from such varied sources as politicians, Christian leaders, tech reporters, and (judging from the number of upvotes) much of Reddit.
For its part, though, OpenAI seems quite committed to moving ahead with this promise. In a calculus that surely makes sense in the strange techno-libertarian circles of the AI world, the issue is intimately tied to personal freedom and autonomy. In a recent article about the future of artificial intelligence, OpenAI reiterated that "We believe that adults should be able to use AI on their own terms, within broad bounds defined by society," placing full access to AI on par with electricity, clean water, or food. All that's to say that with the release of GPT-5.1 (or perhaps slightly after the release, so the inevitable media frenzy doesn't overshadow the new model's less interesting aspects), the guardrails around ChatGPT's naughty bits are almost certainly coming off.

Deeper thought

In addition to killing GPT-5's emotional intelligence, OpenAI made another misstep when releasing GPT-5. The company tried to unify all queries within a single model, letting ChatGPT itself choose whether to use a simpler, lower-effort version of GPT-5, or a slower, more thoughtful one. The idea was noble: there's little reason to use an incredibly powerful, slow, resource-intensive LLM to answer a query like, "Is tahini still good after one month in the fridge?" But in practice, the feature was a failure. ChatGPT was no good at determining how much effort was needed to field a given query, which meant that people asking complex questions were often routed to a cheap, crappy model that gave awful results.

OpenAI fixed the issue in ChatGPT with a user interface kludge. But with GPT-5.1, early indications point to OpenAI once again bifurcating its model into Instant and Thinking versions. The former will likely respond to simple queries far faster than GPT-5, while the latter will take longer, chew through more tokens, and yield better results on complex tasks. Crucially, it seems like the user will once again be able to explicitly choose between the two models.
That should yield faster results when a query is genuinely simple, and a better ability to solve complicated problems. OpenAI has hinted that its future models will be capable of making very small discoveries in fields like science and medicine next year, with systems that can make more significant discoveries coming as soon as 2028. GPT-5.1 will likely be a first step down that path.

An attempt to course correct

Until OpenAI formally releases GPT-5.1 in one of its signature, wonky livestreams, all of this remains speculative. But given my history with OpenAI, going back to the halcyon days of GPT-3, these are some changes I'm expecting when the 5.1 model does go live. Overall, GPT-5.1 seems like an attempt to correct many of the glaring problems with GPT-5, while also doubling down on OpenAI's more freedom-oriented, accuracy-focused approach. The new model will likely be able to think, (ahem) flirt, write, and communicate better than its predecessors. Whether it will do those things better than a growing stable of competing models from Google, Anthropic, and myriad Chinese AI labs, though, is anyone's guess.


Category: E-Commerce

 


2025-11-13 09:30:00| Fast Company

If you’re in the business of publishing content on the internet, it’s been difficult to know how to deal with AI. Obviously, you can’t ignore it; large language models (LLMs) and AI search engines are here, and they ingest your content and summarize it for their users, killing valuable traffic to your site. Plenty of data supports this.

Creating a content strategy that accounts for this changing reality is complex to begin with. You need to decide what content to expose to AI systems, what to block from them, and how both of those activities can serve your business. That would be hard even if there were clear rules that everyone’s operating under. But that is far from a given in the AI world. A topic I’ve revisited more than once is how tech and media view some aspects of the ecosystem differently (most notably, user agents), leading to new industry alliances, myriad lawsuits, and several angry blog posts. But even accounting for that, a pair of recent reports suggest the two sides are even further apart than you might think.

Common Crawl and the copyright clash

Common Crawl is a vast trove of internet data that many AI systems use for training. It was a fundamental part of GPT-3.5, the model that powered ChatGPT when it was released to the world back in 2022, and many other LLMs are also based on it.
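For publishers deciding what to expose and what to block, the bluntest available control is robots.txt. GPTBot and CCBot are the user-agent tokens that OpenAI and Common Crawl, respectively, document for their crawlers; note, though, that robots.txt is a request, not an enforcement mechanism, and as the reporting below shows, compliance is very much in question. A minimal sketch:

```
# robots.txt — sketch blocking known training crawlers while leaving
# ordinary indexing alone. Compliance by crawlers is voluntary.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Common Crawl's crawler
User-agent: CCBot
Disallow: /

# Everyone else
User-agent: *
Allow: /
```

Server-side enforcement (blocking these user agents, or requiring login, at the web server) is the stricter counterpart when a polite request is not enough.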
Over the past three years, however, the issue of copyright and training data has become a major source of controversy, and several publishers have requested that Common Crawl delete their content from its archive to prevent AI models from training on it. A report from The Atlantic suggests that Common Crawl hasn’t complied, keeping the content in the archive while making it invisible to its online search tool, meaning any spot checks would come up empty. Common Crawl’s executive director, Rich Skrenta, told the publication that it complies with removal requests, but he also clearly supports the point of view that anything online should be fair game for training LLMs, saying, “You shouldn’t have put your content on the internet if you didn’t want it to be on the internet.”

Separately, Columbia Journalism Review (CJR) looked at how the new AI-powered browsers, Perplexity Comet and ChatGPT Atlas, handle requests to access paywalled content. The report notes that, when asked to retrieve a subscriber-only article from MIT Technology Review, both browsers complied, even though the web-based chatbots from those companies would refuse to get the article on account of it being paywalled.

The details of both cases are important, but both underscore just how far apart the perspectives of the media and the tech industry are. The tech side will always tilt toward more access: if information is digital and findable on the internet, AI systems will always default to obtaining it by any means necessary. Publishers, meanwhile, assert that their content still belongs to them regardless of where and how it’s published, and that they should retain control of who can access it and what they can do with it.

The mental divide between AI and media

There’s more happening here than just two debaters arguing past each other, though.
The case of Common Crawl exposes a contradiction in a key talking point on the tech side of things: that any particular piece of content or source in an LLM’s training data isn’t that relevant, and they could easily do without it. But it’s hard to reconcile that with Common Crawl’s apparent actions, risking costly lawsuits by not deleting data from publications that request it, a group that includes The New York Times, Reuters, and The Washington Post. When it comes to training data, some sources are clearly more valuable than others.

The browsers that circumvent paywalls reveal another incorrect assumption from the AI side: that because certain behaviors are allowed on an individual basis, they should be allowed at scale. The most common argument that relies on this logic is when people say that when AI “learns” from all the information it ingests, it’s just doing what humans do. But a change in scale can also create a category shift.

Think about how paywalls typically work: Many are deliberately porous, allowing a limited number of free articles per day, week, or month. Once those are exhausted, there’s the old trick of the incognito window. Also, some paywalls, as noted in the CJR article, work by loading all the text on the page, then pulling down a curtain so the reader can’t see it. Sometimes, if you click the “Stop loading” button fast enough, you can expose the text before that curtain comes down. One level up from there is to use your browser’s simple developer tools to disable and delete the paywall elements on an article page. Savvy internet users have known about all of these for years, but they’re a small percentage of all users; I’d wager less than 5%. But guess who knows about all these tricks, and probably many more on top of them? AI. Browser agents like those in Comet and Atlas are effectively the most savvy internet users possible, and they grant these powers to anyone simply requesting information.
Now, what was once a niche activity is applied at scale, and paywalls become invisible to anyone using an AI browser. One defense here might be server-side paywalls, which grant access to the text only after the reader logs in. Regardless, what the browser does with the data after the AI ingests it is yet another access question. OpenAI says it won’t train on any pages that Atlas’s agent may access, and indeed this is how user agents are supposed to work, though the company does say it will retain the pages for the individual user’s memory. That sounds benign enough, but considering how Common Crawl has behaved, should we be taking any AI company at its word?

Turning conflict into strategy

So what’s the takeaway for the media, besides investing in server-side paywalls? The good news is your content is more valuable than you’ve been told. If it wasn’t, there wouldn’t be so much effort to find it, ingest it, and claim it to be “free.” But the bad news is that maintaining control over that content is going to be much harder than you probably thought. Understanding and managing how AI uses your content for training, summaries, or agents is a complicated business, requiring more than just techniques and code. You need to take into account the mindset of those on the other side.

Turning all this into real strategy means deciding when to fight access, when to allow it, and when to demand compensation. Considering what a moving target AI is, that will never be easy, but if the AI companies’ aggressive, constant, and comprehensive push for more access has shown anything, it’s that they deeply value the media industry’s content. It’s nice to be needed, but success will depend on turning that need into leverage.



 

2025-11-13 09:30:00| Fast Company

As I write this, my 6-and-a-half-month-old daughter is sitting on my lap in my home office, where she spends an hour or two each day. Despite all the toys I've laid out for her, the thing she typically reaches for is my keyboard, occasionally leading to the odd typo. I've been a freelance journalist for about 12 years, but never has this work-from-home, choose-your-own-schedule arrangement been so valuable. Last year I was able to be with my wife at almost every doctor's appointment, ultrasound, and blood test before we became parents in April. Since our daughter was born, I have enjoyed the flexibility not only to make it to every pediatrician appointment and give my wife a helping hand during the day, but also to be a part of important milestone moments.

I couldn't imagine having to walk out the front door each morning, only to return a couple of hours before bedtime in the evening, but of course that is the reality for most working parents. That is perhaps why solopreneurship is so popular among those with kids, especially women, and particularly those stepping away from extremely demanding careers to start or grow their families. Studies in Australia and Canada have found that many workers make the transition into parenthood and self-employment at the same time, and research even suggests that self-employed mothers outperform those without children.

Being more present at home and work

When her first child was born, Fernanda Chouza went in the opposite direction, taking on a more challenging role at a fast-growing AI startup in San Francisco. Over time, Chouza says, she earned the respect and leeway to take time off to care for her kids, but then she got laid off in 2022, when her kids were 2 and 4 years old. "As I looked at hyper-growth companies, I realized I would need to put in, like, two years of elbow grease to get to the point where I can take a week off for my kids," she says. "The idea of starting from scratch was too hard."
Instead, Chouza started a one-woman marketing agency called the Launch Shop, offering fractional product marketing expertise to software companies launching new products. Previously, Chouza says, she spent many hours at work feeling guilty for not being home with her kids, and many hours at home worrying about whether she was dropping the ball at work. "Now I have full flexibility. I don't have to be constantly apologizing for stuff, and I only show up when I'm at the top of my game," she says. "When I'm off, I'm fully off; I don't have anxiety on the weekends, I don't have anxiety at night, and I can be a lot more mentally present with my kids."

Though she doesn't enjoy the same kind of equity-payout potential, Chouza says her salary is about 50% higher than her previous earnings, while providing significantly more time off. Previously, she said, she could take two or three weeks off a year but was expected to be responsive on email and Slack during that time. Thus far this year, Chouza has taken a week or more off from work on eight separate occasions, for reasons ranging from her kids' eye infection to a two-week trip to visit their grandparents abroad. "In corporate, I would have had to grovel and apologize for any time off," she says. "It felt like I was being penalized for being a mom and they think of me as a liability, like 'We're always making so many accommodations for Fern.'"

A side door to new career opportunities

Perhaps one of the most unexpected benefits is the kind of clients Chouza has worked with as a solopreneur. She says most companies are hesitant to hire executives in the current market but still need short-term support, making a contractor with corporate experience a viable option. "By being fractional I'm actually punching so far above my weight," she says. "I would have never had this exposure if I was just trying to go through the front door, but I'm coming in through this side door and getting these amazing logos on my résumé and this amazing experience."
That is perhaps one of the most surprising benefits for those who step away from the workforce to start an independent venture while raising a family. Though many choose solopreneurship for the flexibility, they often discover that it can also offer a bridge, or even a ladder, back into the traditional workforce. "You can think of it as not necessarily 'I'm going to build a startup that's going to pay me a lot of money,' but 'I'm going to write a story for myself that professionally fills those years,'" explains Kyle Jensen, the director of entrepreneurship programs, and associate dean and professor in the practice of entrepreneurship at the Yale School of Management. "I created something new, I operated it, I ran it, and through all of this I developed all sorts of executive acumen and business sense, and maybe some software skills."

Professional benefits aside, Jensen also says part of what makes solopreneurship so appealing to parents is the ability to trade some of the financial rewards for time. "With this manner of entrepreneurship, you can treat your human capital as a luxury good, and you can choose different distributions of time that allow you to enjoy things that are important but not necessarily prioritized in our society, like parenting," he says, adding, "The only person who's going to remember that you worked extra hours are your children."

