This is where a venture mindset conflicts with how enterprises operate. For AI implementation to work, it must balance the risk tolerance of venture capital with the steadiness of established companies. VCs are built to tolerate failure and delay near-term returns. At large organizations, AI deployment is regulated to the point where failure can have immediate operational or reputational consequences.

Scaled deployments need elements of both approaches: optimizing for speed increases iteration and discovery, while optimizing for reliability reduces failures in production.

Heavy investment without clear ROI also poses a risk of retrenchment, which we saw during the dot-com era, when capital was abundant as long as a company told a compelling story fueled by short-term acceleration. What many did not foresee were the long-term collapses that followed from short-term incentives. This does not mean history is repeating itself, but the similarities are worth addressing, or at least discussing over dinner.

Similar to the dot-com era, capital is moving faster than infrastructure. This is partly because valuations are driven by projected futures, and success is measured by the speed of adoption. Companies that optimize for press and visionary narratives, while deferring the work of building systems that can withstand regulation and cost, risk setting themselves up for stalled deployments and costly reversals.

The dot-com crash in the early 2000s was driven by an unsustainable speculative bubble, with investment flowing into internet companies with weak business models. It was fueled by investor enthusiasm, overpriced stocks, flawed spending patterns, low interest rates, large infrastructure bets, and regulatory shifts (for example, the 1996 Telecommunications Act), alongside aggressive promotion of unprofitable tech stocks. The parallel is not exact, but the pattern is familiar: capital rewards future potential before systems are ready to deliver it.

ROI needs an enterprise definition.

Before causing any worry, I'd like to be clear: AI is not doomed to repeat this history. It is, however, showing similarities that directly affect its positioning. The likelihood of a repeat depends on how ROI is framed. When ROI is defined through a VC lens (e.g., rapid scaling, high tolerance for failure), it becomes susceptible to incentives that, applied to enterprises, are frankly nonsensical and costly. Enterprises do not get ten shots to justify one win, and their chance of success relies heavily on areas that rarely attract open investment (e.g., regulation, reputational awareness, and operational constraints). For enterprises, over-indexing on "moving fast and breaking things" leads to implementation issues and lawsuits. Speed is not inherently negative; early experimentation can surface valuable insights. But speed without constraints only increases operational and reputational risk.

Evidence: task gains, system-level tradeoffs.

The pattern across studies is consistent: AI improves task-level performance but introduces new failure modes at the system level. BCG, with support from scholars at Harvard Business School, MIT Sloan School of Management, the Wharton School at the University of Pennsylvania, and the University of Warwick, found that participants who used GPT-4 for creative product innovation performed 40% better than those who completed the same task without it. In fact, 90% of the 750 BCG consultants who participated improved on ideation and content-creation tasks when using GPT-4. Yet the same consultants performed 23% worse on business problem-solving tasks.

A widely cited study on GitHub Copilot found that software developers completed a task 55.8% faster with AI assistance. However, the study measures task completion in isolation; it does not account for downstream work (e.g., code review, debugging, integration, long-term maintenance) that, in practice, can dominate development time. In other words, faster coding does not necessarily mean faster delivery. Meanwhile, more recent research from Anthropic found that while AI can speed up tasks (sometimes by 80%), the gains come with costs: one AI-assisted group scored 17% lower than those who coded without AI. What is also interesting is that the largest gap appears to be in debugging knowledge, suggesting that developers' ability to understand when and why code is incorrect or fails may be a key problem to solve as AI is integrated further into the software development process.