Navigating bubbles
Or do we simply mean capitalism?
‘Champagne pops/on Monday in case Friday flops’ (Kano)
I have some complicated views on bubbles and you probably do too. I’ve noticed that a lot of people confuse the Abundance agenda with being pro-bubble, suggesting either they haven’t read enough about the former or they haven’t experienced the downsides of the latter. A lot of the ‘bubbles-are-good’ conversation (even Byrne Hobart’s inflection-driven thesis) seems to come with a very hand-wavy treatment of who actually loses money when bubbles burst or deflate. It increasingly reminds me of that business write-off scene in Schitt’s Creek.
Look, like any good capitalist, I want my investments to go up. My mental model of capitalism is essentially a series of bubbles (or at least local pricing maxima), with varying degrees of volatility. And while a given investment may end up being long-term or short-term in duration, it still exists across this plane of (pricing) volatility. While you can make money on both types of investment, it helps to have a robust view of which category a given investment falls into: NFTs or ETFs.
All that being said, when it comes to AI, I do have some sympathy with the cohort of AI leaders who are pre-calling ‘bubble’ in order to compress the Gartner hype cycle and get on with the business of building. On the other hand, I am also generally of the view that anyone in an asset class who decries that same asset class for being in a bubble harbours some existential competition fears.
I have been spending a LOT of time diligencing bootstrapped training, consulting and implementation companies who are on the ground helping businesses adopt AI solutions. The last decade or so of enterprise SaaS has tended to focus founders on solving reasonably point-shaped problems with software but I’m pretty certain that AI adoption for companies of just about all sizes is going to be the opposite of that: a lot of Forward Deployed Empaths and customisation (on the path to agenticism).
In fact, my mental model of AI enablement/deployment is beginning to resemble ERP projects. Agentic solutions founders may scoff, but I think it’s highly likely that for every $1 of enterprise AI revenue generated, roughly another $0.50 will need to be spent on extremely unsexy categories like training, consulting and implementation (50% might seem high, but a) ERP tends to see ~35% for the same non-software spend and b) we constantly ask founders this question and that’s the current consensus).
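To make that back-of-envelope ratio concrete, here is a minimal sketch. The 50% AI figure and the ~35% ERP benchmark are the estimates from the paragraph above (founder consensus, not measured data), and the function name is mine:

```python
# Illustrative model of enablement spend (training, consulting,
# implementation) alongside enterprise software revenue.
# The ratios are the post's rough estimates, not measured data.

def enablement_spend(software_revenue: float, ratio: float = 0.50) -> float:
    """Estimated non-software enablement spend for a given amount
    of software revenue, at an assumed spend ratio."""
    return software_revenue * ratio

# For every $1m of enterprise AI software revenue, the post's
# consensus estimate implies roughly $500k of enablement spend...
ai_estimate = enablement_spend(1_000_000)

# ...versus roughly $350k at the ~35% historical ERP benchmark.
erp_benchmark = enablement_spend(1_000_000, ratio=0.35)
```

At $1m of software revenue the gap between the two assumptions is only about $150k, but at scale it is the difference between enablement being a side business and a category of its own.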
Everyone keeps talking about how the most important AI companies are the foundation models, data centres and energy providers. I’ve reached the point where I think that everyone is wrong about this: the most important AI companies in the world right now are actually the ones working on the entirely unglamorous job of enablement.
This has made me no less bullish on the long-term impact of AI but it has absolutely reinforced my view that a meaningful adoption cycle (by business) will look more like ten years rather than two. Plan your bubble investing accordingly.
Other reading
If any of this post interests you, the always excellent Bryce Elder recently touched on a lot of these points via the lens of Rightmove’s AI-investment-related profit warning; I highly recommend it. You should also read this excellent piece which Jerry Neumann wrote about the challenges of investing in AI (he and I seem to share the distinction between investing and investing-to-make-money).
Recently there has been some quite good writing (and speaking) on fixing government entropy, although I do wonder whether energy is better expended on private cities than on improving governments. Time will tell. In the meantime, I recommend Matt Clifford on making the UK rich again and John Collison on how to make Ireland stop going backwards.
Sort of related to the above, fellow Power Broker nerds will be extremely interested in this excellent long piece by Jim Waterson on the Mosesification of London which nearly happened.
