A new study from Cornell University goes against the grain of popular thought, arguing that left-handed people aren’t necessarily more creative than their right-handed counterparts after all.

It’s research that hits close to home for this writer. From an early age, I’ve worn my left-handedness as a badge of pride. As a kid, I always felt different from the other students in class, because I had to use a left-handed desk. Back then, I also had to use special scissors in home economics, bat on the “wrong” side of the plate at softball . . . the list goes on.

But despite the minor inconveniences, it was a label I readily embraced because I was told I was “special” (only 10% of the population is left-handed) and, perhaps most of all, because I knew I was in good company. Who wouldn’t want to be a member of a club that includes Michelangelo, Leonardo da Vinci, Aristotle, one of the Beatles, Bill Gates, Nikola Tesla, Marie Curie, Babe Ruth, Bart Simpson, Oprah, and Jerry Seinfeld? In fact, five out of the last eight presidents have been left-handed: Gerald Ford, Ronald Reagan, George H.W. Bush, Bill Clinton, and Barack Obama. (President Trump is a rightie.)

To this day, I still make a mental note of who is and is not a lefty. Picasso and John Lennon aren’t, but Paul McCartney is. So is my best friend, Gaby, my editor, Connie, and my boss, Christopher. It’s a secret club we lefties share, believing there is something just a little special, a little more creative about us. That’s why the new research from Cornell stopped me in my tracks.

The science of creativity

In “Handedness and Creativity: Facts and Fictions,” published in the Psychonomic Bulletin & Review, researchers argue that while there’s a plausible link between creativity and handedness based on theories that look at the neural basis of creativity, they found no evidence that left- or mixed-handed individuals are more creative than right-handers.
In fact, they even found right-handers scored statistically higher on one standard test of divergent thinking (the alternate-uses test). "The data do not support any advantage in creative thinking for lefties," said the study's senior author, Daniel Casasanto, associate professor of psychology at Cornell.

And while the Cornell researchers acknowledge that left- and mixed-handers may be overrepresented in art and music, they argue that southpaws are underrepresented in other creative professions, like architecture. When determining which professions constitute creative fields, researchers drew on data from nearly 12,000 individuals in more than 770 professions, which were ranked by the creativity each requires. By combining originality and inductive reasoning, they concluded that physicists and mathematicians rank alongside fine artists as having the most creative jobs. Using this criterion, and considering the full range of professions, the researchers found that left-handers are underrepresented in the fields that require the most creativity.

"The focus on these two creative professions where lefties are overrepresented, art and music, is a really common and tempting statistical error that humans make all the time," Casasanto said. "People generalized that there are all these left-handed artists and musicians, so lefties must be more creative. But if you do an unbiased survey of lots of professions, then this apparent lefty superiority disappears."

Casasanto did agree, however, that there are scientific reasons to believe that left-handed people would have an edge in creativity when it comes to "divergent thinking," the ability to explore many possible solutions to a problem in a short time and make unexpected connections, which is supported more by the brain's right hemisphere. But again, the study revealed that handedness makes little difference in the three most common laboratory tests of its link to divergent thinking; if anything, righties have a small advantage on some tests.
Finally, researchers conducted their meta-analysis by crunching the data from nearly 1,000 relevant scientific papers published since 1900. Most were weeded out because they did not report data in a standardized way, or included only righties (the norm in studies seeking homogeneous samples), leaving just 17 studies reporting nearly 50 effect sizes. This narrower evidence base may be why the new study reached a different conclusion from both popular belief and prior scientific literature.
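The pooling step a meta-analysis like this relies on can be sketched in a few lines: each study contributes an effect size weighted by the inverse of its variance, so precise studies count for more. This is a generic fixed-effect sketch with made-up numbers, not the study's actual data or methodology:

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic estimate and its variance.

    Each study's effect size is weighted by 1/variance, so larger,
    more precise studies dominate the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    total_w = sum(weights)
    estimate = sum(w * e for w, e in zip(weights, effects)) / total_w
    return estimate, 1.0 / total_w

# Illustrative numbers only (not the study's data): three small
# effects scattered around zero, as a null result would look.
est, var = pooled_effect([0.10, -0.05, 0.02], [0.04, 0.09, 0.02])
```

A pooled estimate near zero with a small variance is what "no evidence of a lefty advantage" looks like numerically.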
If you've ever been a patient waiting (days, sometimes more than a week) for treatment approval, or a clinician stuck chasing it, you know what prior authorization feels like. Patients sit in limbo, anxiety growing as care stalls. Nurses and physicians trade hours of patient time for phone calls, faxes, and glitchy portals. Everyone waits, some in pain, while the people on both sides of the system lose faith in it a little more each day.

This isn't a minor inconvenience. According to the American Medical Association's (AMA) 2024 Prior Authorization Physician Survey, 93% of physicians report that prior authorization delays access to care, and 94% say it negatively affects patient outcomes. Physicians handle an average of nearly 40 requests each week, spending 13 hours of their time on the process. Nearly 9 out of 10 share that it's a contributing factor to burnout.

We've been down this road for many years. In 2018, health insurer and provider groups signed a consensus statement promising significant improvement of prior authorization. In 2023, the AMA reported that two major insurers pledged to reduce the number of services needing prior authorization. The promises added up, so why hasn't the burden eased?

Last month, AHIP (a national trade association representing the health insurance industry) and major payers rolled out six reforms standardizing electronic submissions, speeding decisions, improving transparency, and preserving continuity of care when members switch plans. More than 50 leading health plans, encompassing 257 million Americans, signed on to this reform, with all commitments to be delivered by 2027. That's important. But for someone awaiting chemotherapy or a nurse on the ward, 2027 feels like forever away.

No need to wait

Real reform doesn't need to wait.
Here's what's already happening inside health plans that have embraced agent-based AI systems: technology is being put in place that's designed not just to speed up forms but to fundamentally change how prior authorization and other core operations get done. These systems don't replace people. They work alongside nurses, case managers, and administrators, handling the repetitive, document-heavy work so humans can focus on clinical decisions and patient care.

Tangible results

This AI is just beginning to be adopted by plans, but the transformation is already measurable:

Turnaround times are slashed by more than 50%.

76% of authorizations are handled automatically.

Eighteen minutes are saved per prior authorization request, which for an average-sized health plan unlocks tens of thousands of hours each month, enabling clinical care teams to shift from administrative work to a greater focus on patients.

AI can handle the tedious parts of the process (sorting through PDFs, faxes, and clinical notes) in seconds instead of hours, while keeping nurses and physicians involved for the decisions that require human judgment.

Paradigm shift

This isn't just a tweak to the old process; it's a shift that allows entire operations teams to work differently. Patients get quicker answers and fewer anxious phone calls. Providers get back more time to spend helping patients. For the people working behind the scenes, it means moving past the repetitive paperwork grind (hours spent sorting through forms, faxes, and files) and focusing on work that actually supports better care.

AHIP's own language makes the case: these reforms are meant to provide faster access to evidence-based care, simplify workflows for providers, and preserve care continuity when people switch insurers. That should be the minimum standard, not a distant promise.
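The scale of those savings follows from simple arithmetic. The sketch below uses a hypothetical monthly request volume, since the text only says an average-sized plan unlocks "tens of thousands of hours" each month:

```python
# Time-savings arithmetic from the figures above.
# The request volume is an assumed illustration, not a cited number.
MINUTES_SAVED_PER_REQUEST = 18          # from the stated metric
monthly_requests = 150_000              # hypothetical average-sized plan

hours_unlocked = monthly_requests * MINUTES_SAVED_PER_REQUEST / 60
print(f"{hours_unlocked:,.0f} clinical hours unlocked per month")
```

At 150,000 requests a month, 18 minutes per request works out to 45,000 hours, squarely in the "tens of thousands" range the text describes.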
Health plans need to act now, scaling existing AI deployments and embracing process redesign, so they can deliver three concrete outcomes today:

Patients can start the treatments they need sooner, without the constant back-and-forth or long waits for approval.

Doctors and nurses get back valuable time, so they can spend more of their day with patients instead of buried in forms and phone calls.

People can keep their care moving, even if they switch insurance plans mid-treatment, without having to start the approval process all over again.

Within reach

Health plans don't need to wait for 2027. The technology exists today to deliver meaningful prior authorization reform. And let's be honest: another round of press releases won't change outcomes. But scaling the results and impact that AI is delivering to prior authorization today, that's reform in motion. And it's within reach.

For patients trapped in limbo and clinicians stretched thinner every day, reform can't arrive soon enough. The question isn't whether we can make prior authorization faster, simpler, and less needed. It's when health plans will act. For the sake of all those involved in our health system, we must choose urgency.
Over the past five years, advances in AI models' data processing and reasoning capabilities have driven enterprise and industrial developers to pursue larger models and more ambitious benchmarks. Now, with agentic AI emerging as the successor to generative AI, demand for smarter, more nuanced agents is growing. Yet too often, "smart" AI is measured by model size or the volume of its training data.

Data analytics and artificial intelligence company Databricks argues that today's AI arms race misses a crucial point: In production, what matters most is not what a model knows, but how it performs when stakeholders rely on it. Jonathan Frankle, chief AI scientist at Databricks, emphasizes that real-world trust and return on investment come from how AI models behave in production, not from how much information they contain. Unlike traditional software, AI models generate probabilistic outputs rather than deterministic ones. "The only thing you can measure about an AI system is how it behaves. You can't look inside it. There's no equivalent to source code," Frankle tells Fast Company.

He contends that while public benchmarks are useful for gauging general capability, enterprises often over-index on them. What matters far more, he says, is rigorous evaluation on business-specific data to measure quality, refine outputs, and guide reinforcement learning strategies. "Today, people often deploy agents by writing a prompt, trying a couple of inputs, checking their vibes, and deploying. We would never do that in software, and we shouldn't do it in AI, either," he says.

Frankle explains that for AI agents, evaluations replace many traditional engineering artifacts: the discussion, the design document, the unit tests, and the integration tests. There's no equivalent to a code review because there's no code behind an agent, and prompts aren't code. That, he argues, is precisely why evaluations matter and should be the foundation of responsible AI deployment.
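Frankle's contrast between "checking vibes" and systematic evaluation can be illustrated with a minimal harness: run the agent over a fixed set of business-specific cases and score each output with a judge function. The agent, cases, and judge below are hypothetical stand-ins, not a Databricks API:

```python
# Minimal evaluation-harness sketch: fixed cases plus a judge,
# instead of eyeballing a couple of ad hoc prompts.
def run_agent(prompt: str) -> str:
    # Placeholder for a real model call.
    return "SELECT count(*) FROM orders"

def judge(output: str, expected_keyword: str) -> bool:
    # A deliberately simple judge; real ones might be model-based.
    return expected_keyword.lower() in output.lower()

eval_cases = [
    {"prompt": "How many orders do we have?", "expect": "count"},
    {"prompt": "Fetch rows from the orders table", "expect": "select"},
]

scores = [judge(run_agent(c["prompt"]), c["expect"]) for c in eval_cases]
pass_rate = sum(scores) / len(scores)
```

The point is not the toy judge but the shape of the loop: a versioned set of cases and a repeatable score, filling the role unit tests play in conventional software.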
The shift from focusing on belief to emphasizing behavior is the foundation of two major innovations by Databricks this year: Test-Time Adaptive Optimization (TAO) and Agent Bricks. Together, these technologies seek to make behavioral evaluation the first step in enterprise AI, rather than an afterthought.

AI behavior matters more than raw knowledge

Traditional AI evaluation often relies on benchmark scores and labeled datasets derived from academic exercises. While those metrics have value, they rarely reflect the contextual, domain-specific decisions businesses face. In production, agents may need to generate structured query language (SQL) in a company's proprietary dialect, accurately interpret regulatory documents, or extract highly specific fields from messy, unstructured data.

Naveen Rao, vice president of AI at Databricks, says these are fundamentally behavioral challenges, requiring iterative feedback, domain-aware scoring, and continuous tuning, not simply more baseline knowledge. "Generic knowledge might be useful to consumers, but not necessarily to enterprises. Enterprises need differentiation; they must leverage their assets to compete effectively," he tells Fast Company. "Interaction and feedback are critical to understanding what is important to a user group and when to present it. What's more, there are certain ways information needs to be formatted depending on the context. All of this requires bespoke tuning, either in the form of context engineering or actually modifying the weights of the neural network."

In either case, he says, a robust reinforcement learning harness is essential, paired with a user interface to capture feedback effectively. That is the promise of TAO, the Databricks research team's model fine-tuning method: improving performance using inputs enterprises already generate, and scaling quality through compute power rather than costly data labeling and annotation.
While most companies treat evaluation as an afterthought at the end of the pipeline, Databricks makes it central to the process. TAO uses test-time compute to generate multiple responses, scores them with automated or custom judges, and feeds those scores into reinforcement learning updates to fine-tune the base model. The result is a tuned model that delivers the same inference cost as the original, with heavy compute applied only once during tuning, not on every query.

"The hard part is getting AI models to do well at your specific task, using the knowledge and data you have, within your cost and speed envelope. That's the shift from general intelligence to data intelligence," Frankle says. "TAO can help tune inexpensive, open-source models to be surprisingly powerful using a type of data we've found to be common in the enterprise."

According to a Databricks blog, TAO improved open-source Llama variants, with tuned models scoring significantly higher on enterprise benchmarks such as FinanceBench, DB Enterprise Arena, and BIRD-SQL. The company claims the method brought Llama models within range of proprietary systems like GPT-4o and o3-mini on tasks such as document Q&A and SQL generation, while keeping inference costs low. In a broader multitask run using 175,000 prompts, TAO boosted Llama 3.3 70B performance by about 2.4 points and Llama 3.1 70B by roughly 4.0 points, narrowing the gap with contemporary large models.

To complement its model fine-tuning technique, Databricks has introduced Agent Bricks, an agentic AI-powered feature within its Data Intelligence Platform. It enables enterprises to customize AI agents with their own data, adjust neural network weights, and build custom judges to enforce domain-specific rules. The product aims to automate much of agent development: Teams define an agent's purpose and connect data sources, and Agent Bricks generates evaluation datasets, creates judges, and tests optimization methods.
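The TAO loop as described (sample multiple responses at tuning time, score them with judges, use the scores as a reward signal) can be sketched schematically. The toy model and length-based judge are illustrative assumptions; in the real method the rewards drive reinforcement-learning weight updates, not the simple selection shown here:

```python
import random

def sample_responses(model, prompt, n=4):
    # Test-time compute: draw several candidate answers per prompt.
    return [model(prompt) for _ in range(n)]

def judge_score(response: str) -> float:
    # Stand-in judge; real judges might be models or rule-based checkers.
    return float(len(response) < 40)  # toy criterion: prefer concise answers

def tao_step(model, prompt):
    candidates = sample_responses(model, prompt)
    rewards = [judge_score(r) for r in candidates]
    # TAO would feed these rewards into an RL update of the base model;
    # here we just surface the top-scoring candidate for illustration.
    best = max(zip(rewards, candidates))[1]
    return best, rewards

def toy_model(prompt):
    return random.choice([
        "short answer",
        "a much longer and more rambling answer than needed",
    ])

best, rewards = tao_step(toy_model, "Summarize the invoice")
```

Because the judging and updates happen once during tuning, the heavy compute cost is not paid again at inference, which is the property the article highlights.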
Customers can choose to optimize for maximum quality or lower cost, enabling faster iteration with human oversight and fewer manual tweaks. Databricks' latest research techniques, including TAO and Agent Learning from Human Feedback (ALHF), power Agent Bricks. "Some use cases call for proprietary models, and when that's the case, it connects them securely to your enterprise data and applies techniques like retrieval and structured output to maximize quality. But in many scenarios, a fine-tuned open model may outperform at a lower cost," Rao says.

He adds that Agent Bricks is designed so domain experts, regardless of coding ability, can actively shape and improve AI agents. Subject matter experts can review agent responses with simple thumbs-up or thumbs-down feedback, while technical users can analyze results in depth and provide detailed guidance. This ensures that AI agents reflect enterprise goals, domain knowledge, and evolving expectations, Rao says, noting that early customers saw rapid gains. AstraZeneca processed more than 400,000 clinical trial documents and extracted structured data in less than an hour with Agent Bricks. Likewise, the feature enabled Flo Health to double its medical-accuracy metric compared with commercial large language models while maintaining strict privacy and safety. "Their approach blends Flo's specialized health expertise and data with Agent Bricks, which leverages synthetic data and tailored evaluation to deliver reliable, cost-effective AI health support at scale, uniquely positioning us to advance women's health," Rao explains.

From benchmarks to business data

The shift toward behavior-first evaluation is pragmatic but not a cure-all. Skeptics warn that automated evaluations and tuning can just as easily reinforce bias, lock in flawed outputs, or allow performance to drift unnoticed. "In some domains we truly have automatic verification that we can trust, like theorem proving in formal systems. In other domains, human judgment is still crucial," says Phillip Isola, associate professor and principal investigator at MIT's Computer Science & Artificial Intelligence Laboratory. "If we use an AI as the critic for self-improvement, and if the AI is wrong, the system could go off the rails."

Isola points out that while self-improving AI systems are generating excitement, they also carry heightened safety and security risks. "They are less constrained, lacking direct supervision, and can develop strategies that might be unexpected and have negative side effects," he says, also warning that companies may game benchmarks by overfitting to them. "The key is to keep updating evaluations every year so we're always testing models on new problems they haven't already memorized."

Databricks acknowledges the risks. Frankle stresses the difference between bypassing human labeling and bypassing human oversight, noting that TAO is simply a fine-tuning technique fed by data enterprises already have. In sensitive applications, he says, safeguards remain essential and no agent should be deployed without rigorous performance evaluation.

Other experts note that greater efficiency doesn't automatically improve AI model alignment, and there's currently no clear way to measure alignment. "For a well-defined task where an agent takes action, you could add human feedback, but for a more creative or open-ended task, is it clear how to improve alignment? Mechanistic interpretability isn't strong enough yet," says Matt Zeiler, CEO of Clarifai. Zeiler argues that the industry's reliance on a mix of general and specific benchmarks needs to evolve. While these tests condense many complex factors into a few simple numbers, models with similar scores don't always feel equally good in use.
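The thumbs-up/thumbs-down review loop described for Agent Bricks amounts to aggregating binary expert votes into a quality signal. The sketch below is a generic illustration of that idea; the data layout, IDs, and threshold are hypothetical, not Agent Bricks internals:

```python
from collections import Counter

# Hypothetical stream of (response_id, vote) pairs from domain experts.
feedback = [
    ("resp-1", "up"), ("resp-1", "up"),
    ("resp-2", "down"), ("resp-2", "up"), ("resp-2", "down"),
]

votes_up = Counter()
totals = Counter()
for resp_id, vote in feedback:
    totals[resp_id] += 1
    votes_up[resp_id] += vote == "up"

# Flag responses whose approval rate falls below a review threshold,
# so they can be routed to deeper technical analysis.
needs_review = [r for r in totals if votes_up[r] / totals[r] < 0.5]
```

Even this crude aggregation shows how non-coders' judgments can become a structured signal that technical users then investigate in depth.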
"That feeling isn't captured in today's benchmarks, but either we'll figure out how to measure it, or we'll just accept it as a subjective aspect of human preference; some people will simply like some models more than others," he says.

If the results from Databricks hold, enterprises may rethink their AI strategy, prioritizing feedback loops, evaluation pipelines, and governance over sheer model size or massive labeled datasets, and treating AI as a system that evolves with use rather than a one-time product. "We believe the future of AI lies not in bigger models, but in adaptive, agentic systems that learn and reason over enterprise data," Rao says. "This is where infrastructure and intelligence blur: You need orchestration, data connectivity, evaluation, and optimization working together."