Skipping Chapters
I used to be incredibly stubborn about finishing any book I started, and honestly it probably cost me about 1% of my entire life. I think it was Patrick Collison (or possibly Tyler Cowen) who gifted me the mental model of thinking about the chapters in a book as individual dishes rather than a mandatory tasting menu. This approach doesn’t work in all the obvious places, but it has helped me navigate books that aren’t brilliantly written but are still very useful.
I am increasingly thinking about my own personal AI skills in this way too. I had a fairly exasperating experience with early OpenClaw, which led me to opt out of trying to stay anywhere near the forefront of general prosumer AI usage. When I jumped back in a couple of months later, the progress that Cowork and Rebel have delivered made me realise that this might be the only sane and practical approach to consistently incorporating AI into my life and work.
I suspect this may not just be me? Last week AI Enablement Insider released an intriguing survey drawn from ~100 interviews with the people in companies who are buying AI professional services to help with the roll-out and adoption of the actual tools. It’s good and you should read it, but that isn’t my point. One of the most common responses I hear from people who are being pushed to adopt more AI at work is some version of ‘I would love to find the time but I just have too many meetings’. Relatedly, a comment I hear quite frequently from CEOs/founders is something like ‘I wish people would just SPEND THE TIME learning AI!’, and it’s always interesting to watch the reaction when I point out that they have the unique ability to not go to meetings, whereas the rest of the company (which was hired to literally be in those meetings) would probably get fired for that.
I have never trusted gut feelings. Now, before you all lose your minds at me for saying something like that, what I mean is that yes, absolutely, gut feelings and instinct are very much a (System 1) cognitive function. But by definition they are a whole collection of biases, and I tend to think of them as a human sorting response: looking for an obvious ordering solution, with a reward sensation that gives us closure and that we interpret as a ‘correct’ assessment of a scenario. In his final book, focusing on noise in human judgement, Danny Kahneman calls this the illusion of validity. And like everything he’s ever written, it makes me wonder deeply about how much time is wasted on very bad decision-making processes. I highly recommend it, unless you haven’t read Thinking, Fast and Slow, in which case read that first.
Netflix recently announced a dedicated kids gaming app, which comes as no surprise whatsoever to folks in the business of kids content (as ever, Emily Horgan has some of the best thinking on this topic). I have a longstanding public bet (which so far nobody has taken the other side of) that AI frontier models (or whatever we end up calling them) will end up rolling out kids products/services to reduce churn as consumer AI behaviour continues to converge towards commodity usage.
As some of you know, my quality standards for coffee are unapologetically high (and on that note, I’ve been pleasantly surprised by some Beijing coffee spots), but nonetheless the history of instant coffee makes for a fascinating read.
