Computers that think

Reasoning models stepped into the spotlight, introducing a new computational heartbeat at inference time and creating seismic shifts in how we think about AI’s power and potential. At first glance, it might look like just another milestone, but these models are subverting everything we believed about scaling, transforming the way AI evolves and the speed at which it can disrupt industries.

Despite the bull market in all things AI, the ride forward is likely to be as volatile as it is exhilarating. Once you let a system scale without hitting a wall, you’re rewriting the rules of not just technology but also society and the economy.

From Static Models to Dynamic Reasoning

For decades, AI models were frozen at training time—once their parameters were set, they were etched in silicon. Traditional AI poured enormous compute resources into training, leaving only minimal overhead for real-time thinking. Reasoning models invert that approach, offloading more of the heavy lifting to test time and gaining a flexible, adaptive edge. This shift is more than a cool feature; it’s a fundamental rewrite of how AI solves problems:

  • Enhanced Problem-Solving: By computing “on the fly,” these models handle multifaceted, multi-step challenges that would overwhelm their static ancestors.
  • Extended Scalability: No longer fixated on parameter counts, AI can now improve by harnessing inference-time optimization.
  • Dynamic Resource Utilization: They allocate computational muscle proportionate to a task’s complexity, offering an optimization sweet spot that static models can’t reach.
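
The "dynamic resource utilization" idea can be sketched in a few lines. This is a deliberately toy illustration, not a real model API: `estimate_complexity` and the refinement loop are hypothetical stand-ins showing how a system might scale its inference budget with task difficulty.

```python
# Toy sketch of dynamic test-time compute allocation: harder prompts earn
# more refinement iterations. All names here are illustrative assumptions.

def estimate_complexity(prompt: str) -> int:
    """Crude proxy for difficulty: count markers of multi-step structure."""
    markers = ("then", "and", "after", "?")
    return 1 + sum(prompt.lower().count(m) for m in markers)

def solve(prompt: str, base_budget: int = 2, max_budget: int = 16) -> tuple[str, int]:
    """Scale refinement passes with estimated complexity, capped at max_budget."""
    budget = min(max_budget, base_budget * estimate_complexity(prompt))
    answer = ""
    for step in range(budget):
        answer = f"draft-{step + 1}"  # placeholder for one reasoning pass
    return answer, budget

_, easy_budget = solve("What is 2 + 2?")
_, hard_budget = solve("Plan the route, then book hotels, and after that draft an itinerary?")
assert hard_budget > easy_budget  # more compute flows to the harder task
```

A static model spends the same (near-zero) inference budget on both prompts; the point of the sketch is that the budget itself becomes a tunable dial.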

In September 2024, OpenAI unveiled its o1 model, introducing a “chain of thought” reasoning approach. This innovation broke complex problems into manageable steps, boosting performance across tasks like legal research, data analytics, and creative content generation. By December, OpenAI had announced the o3 model, taking on coding, mathematical proofs, and scientific challenges with unprecedented agility. Not to be outdone, Google responded with Gemini 2.0 Flash Thinking Experimental, showcasing runtime reasoning techniques for deeper analytical processing and even early-stage design thinking.
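
The essence of the chain-of-thought approach—spell out intermediate steps before committing to an answer—can be illustrated with a toy solver. The "model" below is a hand-written stand-in, and the number-parsing is deliberately simplistic; it only shows the shape of step-by-step decomposition.

```python
# Toy chain-of-thought decomposition: emit intermediate steps, then the answer,
# rather than producing the result in one opaque shot.

def chain_of_thought(question: str) -> tuple[list[str], int]:
    """Answer a 'sum of numbers' question by spelling out each addition step."""
    numbers = [int(tok) for tok in question.split() if tok.isdigit()]
    steps, running = [], 0
    for n in numbers:
        running += n
        steps.append(f"running total after adding {n}: {running}")
    return steps, running

steps, answer = chain_of_thought("Add 3 then 5 then 9")
assert answer == 17 and len(steps) == 3
```

The intermediate steps are what make the final answer auditable—and, in a real reasoning model, what the extra inference compute is spent producing.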

The New Era of Scaling Laws

We’ve long assumed AI gets better by getting bigger: throw in more data, crank up the parameters, and watch performance tick up. That’s worked—until recently. The hype around “just add more GPUs” hit a major snag when training costs ballooned from millions to billions of dollars. Meanwhile, the hardware pipeline couldn’t sprint fast enough to accommodate the colossal appetite for ever-larger training runs.

Reasoning models introduced a second, equally potent scaling dimension: test-time compute. By tapping fresh compute during inference, they transform how AI evolves, effectively unleashing dual scaling pathways. The ripple effects are profound:

  • Dual Scaling Paths: Both training and real-time inference upgrades can move the performance needle, unlocking new routes to improvement.
  • Sustained Progress: As training gains taper, test-time compute can keep pushing AI’s boundaries, offering a lifeline when bigger, more expensive training runs lose steam.
  • Compute Challenges: This approach demands advanced infrastructure and hefty operational budgets, forcing tech leaders to balance ambition against cost.
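
The "dual scaling paths" intuition can be made concrete with a toy model in which performance improves roughly with the logarithm of compute on each axis. The functional form and the coefficients below are illustrative assumptions, not fitted scaling laws.

```python
import math

# Toy dual-scaling model: performance grows with the log of both training
# compute and test-time compute. Coefficients a and b are made-up placeholders.

def toy_performance(train_flops: float, test_flops: float,
                    a: float = 0.05, b: float = 0.03) -> float:
    return a * math.log10(train_flops) + b * math.log10(test_flops)

baseline      = toy_performance(1e24, 1e12)
more_training = toy_performance(1e26, 1e12)   # 100x training compute
more_thinking = toy_performance(1e24, 1e15)   # 1000x test-time compute

# Either axis moves the needle; when training gains taper, the second remains.
assert more_training > baseline and more_thinking > baseline
```

The practical takeaway matches the bullets above: when one axis saturates (or its price balloons), the other still offers headroom.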

By harnessing inference-time compute, we’re stepping off the treadmill of raw parameter escalation and onto a fresh trajectory. It’s a textbook flywheel effect: slow to gain momentum but unstoppable once it spins. Nvidia and other hardware leaders are racing to develop specialized AI accelerators to support this shift, tailoring tools for inference-heavy workloads. Cloud providers like AWS, Azure, and Google Cloud are similarly re-architecting data centers, optimizing for the dynamic demands of these new AI models.

Raising the Bar on Problem-Solving

One of the most game-changing aspects of reasoning models is their capacity to “think longer” when tasks get trickier. Longer thinking means more compute, more iterations, and more precise answers. It’s the equivalent of an assembly-line worker stopping to figure out a better fix rather than simply doing the same repetitive motion. For complex tasks—like strategic planning or intricate language understanding—reasoning models ramp up compute on demand, scaling their mental bandwidth in real time.
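One widely used "think longer" strategy is self-consistency: sample several candidate answers and take a majority vote, spending more samples on harder tasks. The sketch below uses a hand-built noisy solver as a stand-in for a real model, so the numbers are illustrative only.

```python
import random
from collections import Counter

# Sketch of self-consistency: majority-vote over repeated samples. The
# noisy_solver is a toy stand-in for a model that is right most of the time.

def noisy_solver(true_answer: int, error_rate: float, rng: random.Random) -> int:
    """Returns the right answer with prob. 1 - error_rate, else an off-by-one."""
    return true_answer if rng.random() > error_rate else true_answer + rng.choice([-1, 1])

def self_consistency(true_answer: int, error_rate: float, samples: int,
                     seed: int = 0) -> int:
    rng = random.Random(seed)
    votes = Counter(noisy_solver(true_answer, error_rate, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

# More samples (i.e., more test-time compute) let the vote wash out the noise
# of any single attempt.
assert self_consistency(true_answer=42, error_rate=0.2, samples=101) == 42
```

This is the assembly-line worker pausing to double-check: each extra sample costs compute, but the aggregate answer is more reliable than any single pass.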

We’re also seeing a new generation of AI that merges reasoning with reward-driven training, akin to how AlphaGo mastered the game board. Instead of learning merely from labeled datasets or human feedback, these models optimize via iterative self-improvement, pursuing a reward signal to refine performance. What happened in games is now poised to happen across industries, from predictive healthcare diagnostics to automated trading desks. The potential for innovation in fields like personalized medicine, automated legal analysis, and even real-time supply chain adjustment is staggering.
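
The reward-driven loop described above can be caricatured as hill-climbing on a reward signal: perturb the current solution, keep the variant that scores higher, repeat. Everything below is a deliberately simple stand-in—real systems use far richer policies and reward models.

```python
import random

# Toy reward-driven self-improvement: keep only perturbations that raise a
# reward signal, loosely echoing how game-playing systems refine their play
# without labeled data.

def reward(solution: list[float], target: list[float]) -> float:
    """Higher is better: negative squared distance to a target behavior."""
    return -sum((s - t) ** 2 for s, t in zip(solution, target))

def self_improve(target: list[float], iterations: int = 500, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    current = [0.0] * len(target)
    for _ in range(iterations):
        candidate = [x + rng.gauss(0, 0.1) for x in current]
        if reward(candidate, target) > reward(current, target):
            current = candidate  # keep only strict improvements
    return current

target = [1.0, -2.0, 0.5]
improved = self_improve(target)
assert reward(improved, target) > reward([0.0, 0.0, 0.0], target)
```

The key property is that no labeled dataset appears anywhere: the reward signal alone steers the iteration, which is what lets such systems keep improving past what human-annotated data can teach.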

Economic and Societal Impacts

If AI was already a catalyst for upheaval, reasoning models may be the accelerant. Their compute-intensive nature and dynamic approach reverberate beyond engineering circles, transforming every sector that relies on knowledge work and decision-making.

  • Infrastructure Demands: Data centers will need to scale like never before, pumping billions into specialized chips, servers, and energy solutions. As hardware capacity becomes a strategic advantage, we may see shifts in global data-center locations, tied to lower energy costs and favorable regulations.
  • Labor Market Disruption: Some jobs will vanish, while others blossom—big shifts will require upskilling, reskilling, and workforce flexibility. Roles once considered safe in the “knowledge economy” could be at risk as reasoning models automate tasks in research, consulting, and even creative fields.
  • Agents on the Rise: The future of software may revolve around AI-driven agents coordinating in real time. This shift radically alters workflows, reducing the need for countless manual steps and entire tiers of managerial oversight.
  • Sustainability Concerns: The carbon footprint of ever-growing compute needs calls for a hard look at efficiency and renewable energy. AI leaders must tackle not just the cost of more servers but also the ecological impact of feeding these compute-hungry architectures.

The domino effect is real. Cheaper compute could make AI a commodity, while more expensive compute might slow or redirect innovation. Incumbents who fail to adopt an AI-first mindset may see their competitive advantage slip away as ambitious startups harness advanced agents to upend traditional workflows. Whether you’re an industry giant or a bootstrap founder, the choice is stark: adapt quickly or risk becoming obsolete.

A Transformative But Challenging Transition

Deploying inference-heavy AI is no casual undertaking. Hardware investments soar, top talent is scarce, and old business processes can look painfully outdated overnight. Yet the rewards are colossal. It’s essentially a land grab for tomorrow’s economy: those who move fastest to integrate agent-based AI and dynamic reasoning stand to capture entire markets before incumbents even realize they’re behind.

  • Workforce Restructuring: With agents capable of tackling complex, time-consuming tasks, entire layers of middle management or administrative roles may shrink. New positions—like AI strategists, agent overseers, or ethicists—will emerge, but the net effect is far from clear.
  • Implications for Incumbents: Legacy institutions might find that the very structures they once relied on—hierarchical management, siloed teams—are now liabilities. Agility is the name of the game, and smaller, tech-savvy rivals can leapfrog slower-moving giants.
  • Data & Privacy Concerns: As reasoning agents handle more sensitive tasks, trust and security become paramount. Businesses that proactively address data governance, compliance, and ethical frameworks will emerge as leaders in the new landscape.

Change of this magnitude is rarely a smooth ride. Think of reasoning models as the “Great Reshuffler,” distributing power and resources in entirely new ways. The business playbook is being rewritten in real time, and only those who embrace the new rules will thrive.

The Bigger Picture: A Bullish Perspective

No innovation worth its salt arrives without friction. The key is understanding that friction often sparks progress. By pushing AI beyond static, training-only paradigms, reasoning models unlock new horizons. They can adapt, iterate, and pivot in real time, taking on challenges once deemed too sprawling or complex.

Yes, there will be turbulence: skyrocketing compute costs, job market aftershocks, and ethical quandaries that make the headlines. But adversity is often the breeding ground for game-changing breakthroughs. We learn, we adjust, and the trajectory bends upward over the long haul. If we navigate this carefully—balancing compute, ethics, and opportunities for human-AI collaboration—we may emerge with a more dynamic, prosperous, and equitable future.

Conclusion

Reasoning models and their reliance on test-time compute aren’t merely adding horsepower to AI; they’re redefining how machines learn, reason, and solve problems. With them, AI steps off a linear path and into an era of near-limitless expansion, offering a glimpse at breakthroughs that once felt like science fiction. Along with these marvels come questions about cost, sustainability, and societal impact, each requiring deliberate attention.

What we’re witnessing is more than an upgrade—it’s a reinvention. In a world where agents manage entire workflows, knowledge work gets reshaped, and AI can “think longer” in real time, the transformation is profound. History suggests that every seismic shift in technology has ultimately led to greater opportunity. In short, it’s the start of a future where AI doesn’t just follow instructions—it collaborates, innovates, and helps shape the world we want to build.

Tickers: Longer-Term Outlook

Foundational AI & Hardware

  • Nvidia (NVDA):
    Why It Matters: Despite questions about whether GPU-centric compute will dominate forever, Nvidia has entrenched itself in both training and inference acceleration. It’s also diversifying into networking and software stacks (CUDA). Barring a massive pivot to non-GPU architectures, it’s likely to remain a key enabler of AI infrastructure.

  • Advanced Micro Devices (AMD):
    Why It Matters: AMD is second to Nvidia in GPUs, but it’s making strides in high-performance CPUs and FPGAs (via the Xilinx acquisition). If future AI systems demand specialized chips and AMD can optimize its software ecosystem, the company could narrow the gap.

  • Taiwan Semiconductor (TSM):
    Why It Matters: The world’s leading semiconductor fab, TSMC makes the chips for Nvidia, AMD, Apple, and others. As compute demand skyrockets, TSMC’s role as the linchpin of advanced node manufacturing becomes ever more crucial—unless a major competitor emerges or geopolitical factors intervene.

  • Intel (INTC):
    Why It Matters: Still a giant in CPUs, Intel risks obsolescence if it cannot cement a leadership position in AI accelerators and advanced fabs. Its recent push into GPUs and dedicated AI hardware shows potential—but it’s in a race against time. Success means regaining top-tier status; failure spells a slow fade.

Essential Enablers & Infrastructure

  • Arista Networks (ANET):
    Why It Matters: High-speed data-center networking is vital for reasoning models that need to shuffle massive data in real time. Arista’s focus on scalable, low-latency switching could keep it front and center—unless next-gen AI drastically reduces networking overhead or moves compute to the edge.

  • Cisco (CSCO):
    Why It Matters: A legacy networking titan trying to pivot into software and security. If AI reasoners become adept at auto-managing networks, Cisco’s traditional hardware business might contract. On the other hand, Cisco’s size and existing footprint could help it co-opt agent-based orchestration and become a services powerhouse.

  • Equinix (EQIX):
    Why It Matters: A colocation and data-center operator. As reasoning models demand unprecedented compute, data-center real estate could boom. However, if future AI architectures become hyper-efficient or distributed across edge devices, that could curb Equinix’s long-term growth trajectory.

  • NextEra Energy (NEE):
    Why It Matters: Advanced AI at scale is compute-hungry, which means high energy consumption. Renewables could see a surge in demand from data-center operators eager to lower costs and reduce carbon footprints. NextEra, as a leading clean-energy provider, could ride this wave—assuming AI systems don’t find radical ways to slash energy consumption.

Platform Players & Software Giants

  • Microsoft (MSFT):
    Why It Matters: Already deeply invested in OpenAI and integrating AI into every corner of its product suite (Azure, Office, GitHub). Microsoft has the scale and partnerships to remain a dominant AI platform—unless more nimble agent-based ecosystems bypass large-scale cloud offerings altogether.

  • Alphabet/Google (GOOGL):
    Why It Matters: With DeepMind, TensorFlow, and Cloud TPU tech, Google stands at the forefront of research and deployment. If Gemini 2.0 or a successor redefines the rules of inference, Google’s ecosystem could thrive. But a failure to commercialize and integrate effectively might let challengers erode its lead.

  • Amazon (AMZN):
    Why It Matters: AWS powers countless AI startups. Amazon’s in-house AI ambitions (logistics, retail, healthcare) give it a robust test bed for advanced reasoning. Still, it’s vulnerable if another platform or open-source movement captures developer mindshare.

  • Meta (META):
    Why It Matters: Meta’s generative AI and open-source LLM approach could upend the market, especially if it merges reasoning with social/VR platforms. Long term, if VR and AR become mainstream, agent-based experiences might define new forms of social interaction that Meta is well-positioned to control.

Data & Observability—A Shrinking Need?

  • Snowflake (SNOW):
    Why It Matters: Centralized data warehouses are crucial for training AI. But with reasoners capable of on-the-fly learning and decentralized knowledge sharing, the data warehousing model could change. Snowflake might adapt by offering real-time data “activation” or face competition from distributed AI solutions.

  • Datadog (DDOG):
    Long-Term Risk: In a future where AI can self-diagnose and self-heal, the need for external observability might dwindle. Datadog’s short-term prospects remain strong as complex systems still need monitoring—but in a world of autonomous reasoners, Datadog must pivot or risk irrelevance.

  • Splunk (SPLK, acquired by Cisco in 2024):
    Long-Term Risk: Similar story to Datadog. Logging and analytics are critical for debugging today, but tomorrow’s AI might inherently log, analyze, and correct itself. Splunk would need to shift from “human-friendly dashboards” to “AI-friendly data pipelines” to stay relevant.

Enterprise Software: Pivot or Perish

  • Salesforce (CRM):
    Why It Matters: Already embedding AI (Einstein) into CRM workflows. But as reasoners become more autonomous, the lines between CRM, marketing automation, and agent-based orchestration blur. Salesforce’s massive install base is an asset—unless customers adopt new AI-born platforms that replace traditional CRMs.

  • Oracle (ORCL) & SAP (SAP):
    Potential Risk: These enterprise software mainstays have large, entrenched customer bases. But they must integrate advanced reasoning to avoid losing relevance. If reasoners handle ERP, supply chain, and HR tasks automatically, legacy platforms must evolve or face slow attrition.

  • Atlassian (TEAM):
    Potential Risk: Known for Jira and Confluence, Atlassian thrives on manual project management. In a world of agent-based collaboration, do we still need ticketing systems as we know them? Atlassian could pivot to AI-based orchestration or be displaced if reasoners manage tasks end-to-end.

Consulting & Services

  • Accenture (ACN):
    Why It Matters: Offers AI and digital transformation consulting. If enterprise clients adopt advanced reasoning models en masse, Accenture could profit from integration services—unless specialized AI consultancies undercut them with more agile, agent-centric solutions.

  • IBM (IBM):
    Why It Matters: Once a leader with Watson, IBM is revamping its AI focus around hybrid cloud and quantum. If it successfully couples quantum breakthroughs with reasoning models, it could re-emerge as a powerhouse. Failure means it remains overshadowed by trendier tech giants.

Potential Surprises & Dark Horses

  • Palantir (PLTR):
    Why It Matters: Deep analytics and government contracts give Palantir unique data access. The company could evolve into a prime contractor for advanced reasoning solutions in national security, healthcare, and beyond—if it moves fast enough to pivot from “analysis” to “autonomous reasoners.”

  • Tesla (TSLA):
    Why It Matters: Autonomy and robotics are prime candidates for real-time reasoning. Tesla’s full-stack approach (vehicles, Dojo supercomputer, robotics) might yield breakthroughs that cross-pollinate into other industries—or it might prove too insular if broader AI ecosystems outpace it.

  • Disney (DIS):
    Out-of-Left-Field Pick: Content creation and theme-park experiences could be transformed by advanced AI storylines and interactive characters. A forward-looking Disney might leverage reasoners to create immersive worlds. A complacent Disney might miss the boat to smaller, AI-native studios.

  • NextEra Energy (NEE):
    Energy Angle: Mentioned above for renewables, NextEra also has the scale to experiment with AI-based grid management. If reasoners automatically balance supply and demand, NEE could become the model for an AI-driven utility.

In a World Where Systems Fix Themselves

In the longer term, many monitoring, observability, and even some platform services may face existential questions. If advanced AI can self-diagnose and self-correct, the role of third-party software tools—currently indispensable—may shrink or transform into “meta-monitoring” solutions. These shifts won’t happen overnight; they’ll evolve as organizations learn to trust AI systems to autonomously manage risk and resources.

Still, any company hinging on manual oversight, multi-step human approvals, or complexity-driven vendor lock-in could see its market erode. The big winners will be those that either:

  1. Become the underlying compute or data backbone (chips, clouds, green energy).
  2. Offer next-gen AI orchestration that extends beyond human dashboards.
  3. Seamlessly integrate advanced reasoners to automate entire workflows—from code fixes to supply chains.

Disclaimer: This content is for informational purposes only and does not constitute financial advice. Always do your own research or consult a professional before making investment decisions.