2025 was a banner year for cryptocurrencies on many fronts. Global regulation eased. Stablecoins powered $46 trillion in annual transactions. And major shifts in U.S. government policy spurred wider adoption. But with that expansion came a notable bump in crypto fraud. A new report from Chainalysis, a blockchain data platform based in New York City, estimates that $17 billion in crypto was stolen last year through fraud and scams. Impersonation scams, in which criminals pretend to be trusted entities or use fake tokens or websites to trick victims into sending them crypto, were up a jaw-dropping 1,400% year over year.

And while it’s much too early to gather any conclusive data for 2026, the year got off to an inauspicious start. Earlier this month, the FBI warned about the use of Bitcoin ATMs, saying the devices are a magnet for scammers looking to convince people to send money (their entire life savings, in some cases) overseas. And just this week, the fintech firm Betterment confirmed that hackers had broken into its systems earlier this month and used the data to send a fraudulent crypto note to users, which funneled money to a wallet controlled by the attacker. Meanwhile, former New York Mayor Eric Adams launched a new crypto token on Monday that he said would combat antisemitism and promote blockchain education. It quickly lost 81% of its value, prompting accusations of a “rug pull” across the crypto community.

Chainalysis warned in its report that this could be just the beginning of another year of new highs. “As we move into 2026, we expect further convergence of scam methodologies as scammers adopt multiple tactics and technologies simultaneously,” it wrote. Early projections by Chainalysis indicate scammers in 2025 received at least $14 billion on-chain, meaning in transactions recorded directly on the blockchain (as opposed to speedier and cheaper but riskier off-chain transactions).
That’s a big jump from the $9.9 billion initial estimate Chainalysis published at the same point last year; after recalculations, the 2024 number ultimately settled at $12 billion. The 2025 total is projected to come in above $17 billion as more bogus wallet addresses are uncovered in the coming months. That would make last year’s rise in crypto scam losses the biggest since 2020 to 2021, when they doubled. The intervening years had been fairly flat, hovering between $12 billion and $13 billion.

Not only were scams happening more frequently last year; the people perpetrating them were also pocketing more each time. The average scam payment in 2025 was $2,764, a 253% increase over 2024’s $782. “The 2025 data reveal the extent to which cryptocurrency-enabled scams are becoming more sophisticated, organized, and efficient,” Chainalysis wrote. “There are no silver bullets to tackling such entrenched, industrial-scale scamming activity, and to be effective, a multipronged response is required.”

Impersonation scams were the biggest driver of losses. Not only was the number of these cons significantly higher, but the average amount people paid to the groups behind them was up 600%. Crime syndicates in East and Southeast Asia drove many of these, the report says, with forced-labor compounds in Cambodia, Myanmar, and other regions coercing trafficking victims into operating the scams. The most prolific of these was a phishing scam that targeted users of the E-ZPass toll collection system with a fake “outstanding toll.”

Artificial intelligence is becoming a weapon for crypto scammers as well. The technology’s ability to leverage large language models and deepfakes makes the schemes more realistic. As a result, scams that used AI vendors to create on-chain links averaged a haul of $3.2 million, compared with $719,000 for those without.

While fraud was on the rise last year, law enforcement notched some victories. Police in the U.K. recovered 61,000 stolen Bitcoin.
And Do Kwon, the developer of the TerraUSD and Luna cryptocurrencies, a Stanford graduate known to some as the cryptocurrency king, pleaded guilty in August to fraud charges stemming from the collapse of Terraform Labs, the Singapore-based firm he cofounded in 2018. Customers lost $40 billion in that fraud, a figure that exceeded the total losses of Sam Bankman-Fried’s FTX. Kwon was sentenced to 15 years in prison.
Category:
E-Commerce
Should I take this project? Say yes to the new job offer? Stick with this plan or walk away? Every choice we make can feel huge. And every path has its own set of risks and rewards. There are always more questions for every life-changing decision. Sometimes the pros-and-cons lists feel more like busywork than progress. You check off the boxes, stare at the lists, and still end up confused, stuck in the same mental loop. That’s why I rely on the rule of 3 framework to make tough decisions. I hope it helps you clarify your life-changing choices.

How it works

Whenever you’re stuck, force yourself to create three paths: B, C, and D. Why not A? A is usually the default for most people. The thing you’re already doing. The path of least resistance. It doesn’t need your help. What you need are alternatives.

Then comes the second step, and this is where most people stop too soon. For each path, think through:

First-order effects
Second-order outcomes
And third-order consequences

And then, and this matters, choose the path with the most meaningful but least life-changing consequences.

Why the two-option path doesn’t work

When you only have two options, your brain keeps going back and forth. Right vs. wrong. Safe vs. risky. Smart vs. stupid. You stop being logical. There’s a term for it: binary bias, or black-and-white thinking. We do it all the time. Two choices feel easier. But they are not. They’re restrictive and create a lot of unnecessary pressure. “Most decisions are not binary, and there are usually better answers waiting to be found if you do the analysis and involve the right people,” says Jamie Dimon, the CEO of JPMorgan Chase.

Three options open things up. Adding a third option reduces your emotional load and improves perceived control. You feel less trapped. And more capable. For example, say you are thinking about changing jobs. This is how it usually goes:

Option 1: Quit and leap.
Option 2: Stay and suffer.

Now try the rule of 3.
Path B: Quit and take a new role in a similar field.
Path C: Stay for six months and skill up aggressively.
Path D: Go part-time or freelance while testing something new.

Of course, none of these options is perfect. That’s why the next stage of the process is even more important: the consequences.

First-, second-, and third-order effects

This simply means you keep asking, “and then what?” First-order effects are immediate: What happens right away when you make the decision? Second-order effects come next: What does that lead to? Third-order effects are longer-term: Who do you become if this path continues?

Now apply the effects to the job-changing example.

Path B: Quit and take a similar role.
First-order: New environment. Relief. You may stop dreading Mondays.
Second-order: You become more confident. Now you know you’re employable. You can actually change jobs.
Third-order: You might stay on the same path longer than you want.

Path C: Stay and upgrade your skills.
First-order: You may feel frustrated for a while. You will need a lot of discipline for this path.
Second-order: You will gain leverage that opens up your options.
Third-order: You redefine yourself from stuck to building a career. You may become indispensable to your employer.

The mistake most people make

Most people pursue the best outcome. That’s a trap. The future is uncertain. You’re probably guessing what could work. Everyone is. Once you are done with the effects, choose the path with the least life-altering effects. The one that teaches you something. Keeps doors open. And doesn’t make your life dramatically worse if you’re wrong.

It’s my risk-psychology approach. People regret irreversible decisions more than bad ones. We hate closing doors we didn’t mean to close. That’s why picking the path that means a lot to you but won’t burn bridges matters. Make better decisions with less panic. This framework works when you are emotionally attached to the decision you are about to make.
When you’re stressed, your brain throws logic out the window. The rule of 3 gets you back on the rational path. It takes you from reacting to life to responding to it. It helps you answer the most important question: Which future can I live with?

You can use this rule anywhere. Money decisions. Relationship decisions. Creative decisions. A big purchase. Even small ones. Do I say yes to this commitment? What are the effects, and what are my options? And what path can I live with and still function? Force the three paths. Pursue the consequences in places most people ignore. Then opt for the choice that makes life better without disrupting your entire life.

Use it to pick a path with tolerable unknowns

The rule of 3 doesn’t remove uncertainty. Nothing does. You’re never picking certainty. You’re picking a path with tolerable unknowns. Good decisions come from better processes. The rule of 3 takes away the emotional attachment that drains the life out of you. Most of our hard decisions become unbearable because we want a perfect choice. The one that proves we are smart and avoids regret. So you panic. Or overthink. Some people let time decide for them. Which is still a decision, by the way.

I use the rule of 3 to pick a direction, adjust where necessary, and keep moving. I want forward motion without self-destruction. You don’t need to outsmart the future. Just stop putting so much pressure on yourself. Most choices don’t need courage. They need structure. Three paths. Three levels of consequences. It makes overthinking your options almost impossible.
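For readers who like to see the mechanics spelled out, the whole framework, three paths, their ordered effects, and the "least life-altering" tiebreak, can be sketched as a toy script. Everything here is a hypothetical illustration: the paths reuse the job-change example above, but the numeric scores and the scoring rule are my own invention, not something the rule of 3 prescribes.

```python
# A toy sketch of the rule of 3. The irreversibility and learning
# scores below are made-up illustrations; in real life you would
# judge these qualitatively, not numerically.

def pick_path(paths):
    """Prefer the least irreversible path; break ties toward
    the one that teaches you the most."""
    return min(paths, key=lambda p: (p["irreversibility"], -p["learning"]))

paths = [
    {
        "name": "B: quit and take a similar role",
        "effects": ["relief", "confidence", "same track long-term"],
        "irreversibility": 2,  # hard to undo once you've quit
        "learning": 1,
    },
    {
        "name": "C: stay six months and skill up",
        "effects": ["short-term frustration", "leverage", "new identity"],
        "irreversibility": 0,  # keeps every door open
        "learning": 3,
    },
    {
        "name": "D: go part-time and test something new",
        "effects": ["income dip", "real-world signal", "optionality"],
        "irreversibility": 1,
        "learning": 2,
    },
]

best = pick_path(paths)
print(best["name"])  # prints "C: stay six months and skill up"
```

Under this toy scoring, path C wins because it is the least irreversible, which is exactly the bias the framework argues for: keep doors open unless a riskier path is clearly worth it.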
Category:
E-Commerce
AI is no longer just a cascade of algorithms trained on massive amounts of data. It has become a physical and infrastructural phenomenon, one whose future will be determined not by breakthroughs on benchmarks, but by the hard realities of power, geography, regulation, and the very nature of intelligence. Businesses that fail to see this will be blindsided.

Data centers were once the sterile backrooms of the internet: important, but invisible. Today, they are the beating heart of generative AI, the physical engines that make large language models (LLMs) possible. But what if these engines, and the models they power, are hitting limitations that can’t be solved with more capital, more data centers, or more powerful chips?

In 2025 and into 2026, communities around the U.S. have been pushing back against new data center construction. In Springfield, Ohio; Loudoun County, Virginia; and elsewhere, local residents and officials have balked at the idea of massive facilities drawing enormous amounts of electricity, disrupting neighborhoods, and straining already stretched electrical grids. These conflicts are not isolated. They are a signal, a structural friction point in the expansion of the AI economy.

At the same time, utilities are warning of a looming collision between AI’s energy appetite and the cost of power infrastructure. Several states are considering higher utility rates for data-intensive operations, arguing that the massive energy consumption of AI data centers is reshaping the economics of electricity distribution, often at the expense of everyday consumers.

This friction between local resistance to data centers, the energy grid’s physical limits, and the political pressures on utilities is more than a planning dispute. It reveals a deeper truth: AI’s most serious constraint is not algorithmic ingenuity, but physical reality.
When reality intrudes on the AI dream

For years, the dominant narrative in technology has been that more data and bigger models equal better intelligence. The logic has been seductive: scale up the training data, scale up compute power, and intelligence will emerge. But this logic assumes that three things are true:

Data can always be collected and processed at scale.
Data centers can be built wherever they are needed.
Language-based models can serve as proxies for understanding the world.

The first assumption is faltering. The second is meeting political and physical resistance. The third, that language alone can model reality, is quietly unraveling.

Large language models are trained on massive corpora of human text. But that text is not a transparent reflection of reality: It is a distillation of perceptions, biases, omissions, and misinterpretations filtered through the human use of language. Some of it is useful. Much of it is partial, anecdotal, or flat-out wrong. As these models grow, their training data becomes the lens through which they interpret the world. But that lens is inherently flawed.

This matters because language is not reality: It is a representation of individual and collective narratives. A language model learns the distribution of language, not the causal structure of events, not the physics of the world, not the sensory richness of lived experience. This limitation will come home to roost as AI is pushed into domains where contextual understanding of the world, not just text patterns, is essential for performance, safety, and real-world utility.

A structural crisis in the making

We are approaching a strange paradox: The very success of language-based AI is leading to its structural obsolescence. As organizations invest billions in generative AI infrastructure, they are doing so on the assumption that bigger models, more parameters, and larger datasets will continue to yield better results.
But that assumption is at odds with three emerging limits:

Energy and location constraints: As data centers face community resistance and grid limits, the expansion of AI compute capacity will slow, especially in regions without surplus power and strong planning systems.

Regulatory friction: States and countries will increasingly regulate electricity usage, data center emissions, and land use, placing new costs and hurdles on AI infrastructure.

Cognitive limitations of LLMs: Models that are trained only on text are hitting a ceiling on true understanding. The next real breakthroughs in AI will require models that learn from richer, multimodal interactions with real environments, sensory data, and structured causal feedback, not just text corpora. Language alone will not unlock deeper machine understanding.

This is not a speculative concern. We see it in the inconsistencies of today’s LLMs: confident in their errors, anchored in old data, and unable to reason about the physical or causal aspects of reality. These are not bugs: they are structural constraints.

Why this matters for business strategy

CEOs and leaders who continue to equate AI leadership with bigger models and more data center capacity are making a fundamental strategic error. The future of AI will not be defined by how much computing power you have, but by how well you integrate intelligence with the physical world. Industries like robotics, autonomous vehicles, medical diagnosis, climate modeling, and industrial automation demand models that can reason about causality, sense environments, and learn from experience, not just from language patterns. The winners in these domains will be those who invest in hybrid systems that combine language with perception, embodiment, and grounded interaction.

Conclusion: reality bites back

The narrative that AI is an infinite frontier has been convenient for investors, journalists, and technologists alike.
But like all powerful narratives, it eventually encounters the hard wall of reality. Data centers are running into political and energy limits. Language-only models are showing their boundaries. And the assumption that scale solves all problems is shaking at its foundations. The next chapter of AI will not be about who builds the biggest model. It will be about who understands the world in all its physical, causal, and embodied complexity, and builds systems that are grounded in reality. Innovation in AI will increasingly be measured not by the size of the data center or the number of parameters, but by how well machines perceive, interact with, and reason about the actual world.
Category:
E-Commerce