Ben Lorica and Evangelos Simoudis on the AI Bubble, Enterprise Adoption, China Competition, and Humanoid Robots.
Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.
Ben Lorica and Evangelos Simoudis discuss AI-bubble signals (runaway revenue multiples, circular financing), why many enterprise pilots stall, and what separates leaders (use-case matrices, cross-functional ownership, hard metrics). They also examine U.S.–China tech competition in robotics and semiconductors, and offer a pragmatic view on humanoid robots — what works now versus what’s still research-grade.
Related content:
- A video version of this conversation is available on our YouTube channel.
- Agentic AI Applications: A Field Guide
- Rodney Brooks: Why Today’s Humanoids Won’t Learn Dexterity
- Ben Lorica and Evangelos Simoudis: When AI Eats the Bottom Rung of the Career Ladder
- Ben Lorica and Evangelos Simoudis: Why China’s Engineering Culture Gives Them an AI Advantage
- Ben Lorica and Evangelos Simoudis: Beyond the Agent Hype
- Evangelos Simoudis: From Conversation-Centric to Agent-Centric AI Models
Support our work by subscribing to our newsletter 📩
Transcript
Below is a heavily edited excerpt, in Question & Answer format.
AI Market Dynamics and the Current Bubble
What are the most concerning signals about the current AI bubble?
From an investor perspective, the most alarming indicator is valuations as a multiple of revenue—we’re seeing multiples that haven’t been observed since the 1999-2000 dot-com timeframe. Equally concerning are the funding proposals themselves: many are flimsy and lack substance, yet they’re still attracting capital.
Another glaring signal is circular financing, particularly with companies like Nvidia investing in their own customers. If demand for AI infrastructure were truly as strong and organic as claimed, there wouldn’t be a need for chip manufacturers to prop up their customers’ ability to buy from them. For builders, this means capital may be available but fragile; assume the bar for real, measured impact will rise quickly.
How does the current AI bubble compare to the dot-com era?
There are notable similarities and critical differences. Like the late 1990s, we’re seeing concentrated investment in specific technology areas—back then it was internet and telecommunications infrastructure, now it’s AI. The market concentration is striking: in the S&P 500, the “Magnificent Seven” are driving a disproportionate amount of growth without a healthy distribution across the broader index.
However, today’s environment includes significantly more venture funds and corporate investment activity. A critical difference is that this isn’t AI’s first hype cycle. We have the history of previous AI winters to learn from, which should theoretically lead to more educated approaches to investment and adoption decisions. We should be examining what happened in past cycles and comparing those patterns to current developments.
In the dot-com era, the bubble was also fueled by retail investor participation in IPOs of unprofitable companies. Today, while there is some angel-investor excitement from people without deep tech or venture experience, the excess is most visible in private company valuations and in acquisitions at “insane” multiples.
What market concentration issues should practitioners be aware of?
Non-AI sectors are struggling to raise capital. This mirrors the dot-com era when investment concentrated heavily in internet-related ventures while other areas, including biotech, found it difficult to secure funding. This creates an unbalanced market where AI startups may have access to capital, but the broader ecosystem suffers.
What red flags should buyers and builders watch for?
Stress-test vendors on several dimensions:
- Unit economics without subsidies—require evidence of non-subsidized demand
- Unusual deal structures or hype-driven acquisitions
- Pilots with no path to measurable outcomes
- Vendors pushing deployments to “grow into” valuations rather than solving real problems
Insist on explicit success metrics and clear ownership on the customer side before committing resources.
Enterprise AI Adoption and Implementation
What’s driving current enterprise AI adoption, and is it sustainable?
Much of today’s enterprise AI adoption is driven by factors that don’t necessarily lead to sustainable implementation: board pressure without understanding implications, fear of missing out, and “shadow AI” where employees use consumer AI tools in their personal lives and bring them into work. Many enterprises are experimenting with AI simply to test the technology or because of excitement, rather than identifying specific problems that AI can solve.
The biggest mistake is a failure to connect AI initiatives to measurable business outcomes. Many teams jump to models before defining the problem, metrics, and data plan. They can’t answer fundamental questions like: “If we apply AI here, what specific problem are we solving, and how will we measure the improvement in performance?” This approach lacks the systematic thinking required for successful technology adoption.
What distinguishes successful AI implementations from failed experiments?
Successful enterprises treat AI adoption as a comprehensive organizational transformation, not just a technology implementation. They employ what can be called a “use case matrix”—systematically identifying areas across the entire organization where there’s a clear opportunity to show improvement.
Critically, they articulate how they’ll measure that improvement before implementation. This requires bringing together technology teams, business units, HR, and finance—breaking down organizational silos. The key differentiator isn’t having an AI Center of Excellence or a technology group claiming expertise. Success comes from treating AI adoption as an opportunity for transformation: organizational transformation, business process transformation, and ultimately culture transformation.
Looking at fundamental enterprise technologies throughout the 21st century—e-commerce, SaaS, cloud computing—each was accompanied by transformation. Companies that undertook both transformation and technology adoption were most successful. Those that just applied technology because they read about it or saw competitors doing it typically didn’t succeed.
What framework should teams use when evaluating AI use cases?
Every use case should connect to one of three end results: productivity improvement, cost reduction, or new revenue generation. Teams need to articulate clearly to their CFO or CEO: “Here’s what we’re going to do, and here’s the measurable outcome.”
Both ROI and capability expansion are valid measures of success. For example, a customer support team might go from handling 100 calls per hour to 2,000, or expand from business hours to 24/7 coverage. The critical question is: How does the customer see this improvement? Do they get a better experience? Lower costs per product? Faster service?
Track customer-facing metrics like customer satisfaction scores, time-to-resolution, and cost-per-ticket. Baseline first, then set target deltas and establish kill criteria for when to abandon an approach.
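To make this framework concrete, here is a minimal sketch of how a use-case record might be instrumented in code. The interview describes the approach only in prose; the class, field names, and thresholds below are illustrative assumptions, not a prescribed tool:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One row of a use-case matrix: a candidate AI project tied to a hard metric."""
    name: str
    outcome: str          # "productivity" | "cost" | "revenue"
    metric: str           # e.g. "cost_per_ticket_usd"
    baseline: float       # measured before any AI work starts
    target_delta: float   # improvement committed to (fraction, 0.20 = 20%)
    kill_delta: float     # if the pilot can't beat this, wind it down
    lower_is_better: bool = True

    def improvement(self, observed: float) -> float:
        """Fractional improvement of an observed value over the baseline."""
        delta = (self.baseline - observed) / self.baseline
        return delta if self.lower_is_better else -delta

    def decide(self, observed: float) -> str:
        """Scale, keep iterating, or kill, based on the pre-agreed thresholds."""
        d = self.improvement(observed)
        if d >= self.target_delta:
            return "scale"
        if d < self.kill_delta:
            return "kill"
        return "iterate"

support = UseCase(
    name="support triage assistant",
    outcome="cost",
    metric="cost_per_ticket_usd",
    baseline=12.50,
    target_delta=0.20,   # commit to a 20% reduction
    kill_delta=0.05,     # abandon if the pilot can't show at least 5%
)
print(support.decide(observed=9.40))   # -> "scale" (a ~25% improvement)
```

The point of recording a kill threshold up front is that the abandon decision is agreed before the pilot starts, so it isn’t relitigated after the fact.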
What organizational changes are required for successful AI adoption?
This isn’t just about technology implementation—it requires bringing together disparate parts of the organization. Technology teams, business units, HR, and finance need to collaborate from the outset. Barriers between these groups must break down. Stand up a cross-functional coalition to align incentives and budget, and plan for process and organizational change, not just model deployment.
For any AI project, ask concrete questions: “If I apply this, how many people am I going to let go? Or will I be able to take on more clients?” Both outcomes require organizational planning beyond the technology itself.
How should teams avoid high pilot failure rates?
While there are methodological questions about specific failure rate statistics, we shouldn’t ignore signals suggesting many experiments are failing. Run staged pilots with explicit exit and scale criteria.
Confirm upfront:
- Data availability (internal plus required external sources)
- Security and compliance requirements
- Integration points
- Who owns the success metric and has budget authority
Instrument everything. If a use case cannot be tied to productivity, cost, or revenue outcomes—or can’t be measured—don’t green-light it. Establish a path where success means automatic progression to production, and failure cleanly winds down with documented lessons learned.
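As a minimal sketch, the pre-pilot gate above might look like the following. The field names and function are assumptions made for illustration, not a standard API or the speakers’ own tooling:

```python
# A hypothetical pre-pilot readiness gate: confirm the four items above
# before any pilot is green-lit.
pilot_gate = {
    "data_available": True,         # internal plus required external sources confirmed
    "security_reviewed": True,      # security and compliance requirements cleared
    "integrations_mapped": True,    # integration points identified
    "metric_owner": "VP, Support",  # who owns the success metric and budget
    "success_metric": "cost_per_ticket_usd",
}

def may_green_light(gate: dict) -> bool:
    """A pilot starts only when every prerequisite holds and the metric has an owner."""
    prerequisites = ("data_available", "security_reviewed", "integrations_mapped")
    return (all(gate.get(p) is True for p in prerequisites)
            and bool(gate.get("metric_owner"))
            and bool(gate.get("success_metric")))

print("green-light" if may_green_light(pilot_gate) else "do not start: fix the gaps first")
```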
What’s causing the pattern of AI implementation failures?
There’s a mismatch in market dynamics: tremendous pressure on startups to sell—even when there isn’t a strong use case—because investors expect revenue growth to justify high valuations. Startups need to sell to grow into their valuations, but enterprises may not have proper use cases for what’s being sold.
Additionally, many organizations treat AI as just a technology application rather than understanding the transformational implications. They skip the critical step of connecting use cases to metrics and understanding what organizational transformations the adoption requires.
How should startups selling into the enterprise adapt their approach?
Sell into well-scoped, metrics-backed use cases and provide a measurement plan in the statement of work. Help customers with data readiness, process redesign, and change management—not just model deployment. Don’t force mismatched deployments to “grow into” a valuation; that pressure leads to failed experiments that poison future references and damage the broader market.
Global AI Competition: US-China Dynamics
How accurate is the perception that the US is far ahead of China in AI?
There’s an unbalanced perception, especially regarding technologies broadly classified as deep tech. The US is not as far ahead as many believe, particularly among those outside the tech core who aren’t closely following China’s developments.
In battery electric vehicles and battery technology, most in the US acknowledge China is way ahead. However, in areas like adaptive robotics (including humanoid robots), large language models, and large action models, there’s a perception that the US leads—but this is at best a contested claim. Dismissing China’s progress is dangerous and can lead to strategic surprises, similar to how many were caught off guard by the performance of models like DeepSeek.
What specific technology areas show more parity than commonly believed?
Several critical areas show more competition between the US and China than the general public realizes:
- Adaptive robotics and advanced manufacturing: China has made manufacturing a national priority and is advancing rapidly in intelligent robotics for factories
- Large language models and large action models: Recent developments challenge assumptions about US dominance
- Biotech: China is making significant strides in various biotechnology areas
- Quantum computing: The gap is much smaller than typically understood
DeepSeek was surprising to many, but only to those not closely following China’s AI development. Similar surprises are likely to emerge regarding Chinese humanoid robots and other technologies.
How is China navigating the semiconductor gap, and what can practitioners learn?
The US maintains a significant lead in cutting-edge semiconductors, and that gap is real—but necessity is driving invention. Even with chips several years behind, Chinese teams are making it work for training their models: they use hardware three or four years behind the state of the art and find ways to train effectively, accepting slower cycles while still fielding competitive, cost-effective systems.
This highlights a crucial lesson for practitioners everywhere: you don’t always need the absolute latest and greatest technology—the “tip of the spear”—to build valuable and effective AI applications. The ability to optimize for available resources is a powerful competitive advantage. We need to pay attention not just to who’s at the leading edge but also to what’s happening just behind that position—this often has much bigger economic impact than the cutting edge alone.
For practitioners, the lesson is universal: focus on cost-to-serve, latency, and accuracy relative to your business goal. If you can hit those targets without the newest model or chip, you’ll have better unit economics.
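A small sketch of that selection rule, with entirely invented model names, prices, and scores: pick the cheapest candidate that clears the latency and accuracy targets your business goal dictates, rather than defaulting to the newest model.

```python
# Illustrative only: candidates, costs, and scores below are made up.
candidates = [
    {"model": "frontier-xl", "cost_per_1k": 0.0300, "p95_latency_s": 2.1, "task_accuracy": 0.94},
    {"model": "mid-tier",    "cost_per_1k": 0.0040, "p95_latency_s": 1.2, "task_accuracy": 0.91},
    {"model": "small-open",  "cost_per_1k": 0.0006, "p95_latency_s": 0.6, "task_accuracy": 0.86},
]

# Targets come from the business goal, not from the leaderboard.
targets = {"p95_latency_s": 1.5, "task_accuracy": 0.90}

viable = [m for m in candidates
          if m["p95_latency_s"] <= targets["p95_latency_s"]
          and m["task_accuracy"] >= targets["task_accuracy"]]

best = min(viable, key=lambda m: m["cost_per_1k"])
print(best["model"])   # -> "mid-tier": meets both targets at a fraction of the cost
```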
What does this mean for investment and development strategies?
The question arises whether we should rely only on private sector investment as we have to date, or whether the US government and European Union need to take a more active industrial policy approach—borrowing elements from how China approaches these issues. There’s a significant difference between how the US and China, as two superpowers, are approaching deep tech areas.
For companies: plan for multi-region suppliers where possible, stay in a state of continuous activity and investment with realistic expectations, and don’t be dismissive of global competition. Don’t get caught up in hype about any particular approach or assume competitive moats are wider than they actually are.
Humanoid Robots: Reality vs. Expectations
What’s the reality check on humanoid robots and their near-term deployment?
Expectations may be unrealistic. The timeline for mass production and deployment of humanoid robots is likely much longer than marketing materials suggest. Many humanoid robots are being taught dexterity through videos of humans performing tasks, but the sense of touch is critical and not remotely close to being solved.
This is particularly important for applications like warehousing and logistics, where robots need to handle heterogeneous or delicate items in real-world environments. The nuanced feedback from physical contact is essential but missing when robots are trained primarily on visual data.
What should practitioners expect regarding humanoid robot categories?
There’s likely to be category expansion where “humanoid robots” eventually includes robots with wheels and other non-humanoid configurations—the label becoming a broad marketing category, rather than a technical specification, simply to meet inflated expectations. We’ve seen this pattern in other technology categories before.
What’s the practical approach for teams evaluating robotics today?
For near-term value, target constrained tasks and environments. Look for:
- Fixtures and repeatable pick operations
- Safety-rated cells with controlled conditions
- The simplest form factor that meets your specific need
Choose pragmatic form factors now and plan for dexterity later as sensing and control technologies mature. Validate robot capabilities in your exact workflows—not demo conditions—and model total cost of ownership under realistic duty cycles, as in the sketch below. China is widely recognized as ahead in EV batteries and has formidable manufacturing scale in robotics; in adaptive robotics, the US lead is contested, so evaluate suppliers from both markets rather than assuming a durable domestic advantage.
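As one way to make the TCO point concrete, here is a back-of-the-envelope model. Every number in it is a placeholder to be replaced with vendor quotes and duty cycles measured in your own workflows:

```python
# Hypothetical figures throughout; substitute your own quotes and measurements.
capex = 120_000            # robot + fixtures + integration, USD
years = 5
maintenance_per_year = 9_000
energy_per_hour = 0.75     # USD at the cell's measured power draw
oversight_per_hour = 6.00  # share of a technician's time attributed to the cell

hours_per_day = 16         # realistic duty cycle, not the 24/7 demo figure
utilization = 0.70         # fraction of powered hours doing useful work
days_per_year = 300

powered_hours = hours_per_day * days_per_year * years
productive_hours = powered_hours * utilization

tco = (capex
       + maintenance_per_year * years
       + (energy_per_hour + oversight_per_hour) * powered_hours)
print(f"TCO: ${tco:,.0f}  (${tco / productive_hours:.2f} per productive hour)")
```

Run under a realistic duty cycle, the per-productive-hour cost is often several times the figure implied by a vendor’s 24/7 assumptions—which is exactly why the modeling step belongs before the purchase order.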
