Almost Timely News: Terraforming the AI Use Case Desert
How to help AI take more of your job (the parts you don't want)

Almost Timely News: 🗞️ Terraforming the AI Use Case Desert (2026-03-29) :: View in Browser

The Big Plug
👉 I’ve got a new course! GEO 101 for Marketers.
👉 Just updated! The Unofficial LinkedIn Algorithm Guide, March 2026, now with new information straight from LinkedIn!

Content Authenticity Statement
95% of this week’s newsletter content was originated by me, the human. You’ll see output from Openwork and Minimax M2.7 in the opening section as demo output. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.

Watch This Newsletter On YouTube 📺
Click here for the video 📺 version of this newsletter on YouTube »
Click here for an MP3 audio 🎧 only version »

What’s On My Mind: Terraforming the AI Use Case Desert
In this week’s newsletter, let’s talk about something that truly baffles me: the AI use case desert. I recently read with interest a survey that said one of the top reasons for lack of AI adoption was a lack of use cases. Boy, did that make me scratch my head. I’m in the opposite camp; there are SO many use cases for generative AI that there’s no way I can tackle more than a small fraction of them. Deciding which use cases make the most sense to tackle is the hardest part of any client engagement because there are so many. Even putting together this newsletter every week is less a question of “what should I write about?” and more “which of the dozen major developments this week is worth demonstrating in some way?”

What’s behind this chasm? A few things: a lack of understanding about what the technology is capable of today, a lack of understanding about where AI ends and infrastructure begins, a lack of knowledge about what data you have access to, and a lack of imagination.
If these sound familiar, it’s because they’re part of literally every keynote talk and workshop I’ve delivered over the past 5 years. What sets successful folks apart from less successful folks when it comes to generative AI is critical thinking, creative thinking, and contextual thinking. So let’s dig into how to terraform the use case desert into a use case garden - using AI, of course.

Part 1: The Sexy Block
The top mental block for a lot of folks - consultants especially - is that most use cases for AI are going to be boring. Consultants - and I count myself in this group - often need to show something flashy and sexy because that’s what pays the bills. A senior leader sees “the art of the possible” in a big flashy show and wants to sign right away, but very often the real value is in the boring stuff.

This is compounded by the fact that the majority of AI use cases are optimization, not innovation. I define optimization as doing what you’ve always done, but bigger, better, faster, and cheaper - and most folks focus on the faster and cheaper. There’s a maxim in business strategy that you can’t cut your way to growth, and it holds true even in the age of AI. Yes, absolutely reduce the amount of time you spend on less valuable tasks, but if that’s all you’re doing, you’re more or less rearranging the deck chairs on the Titanic.

Value comes from optimization to clear up space and time, then innovation - doing something new, something outside your normal skill set personally and organizationally, something that adds value in a new and different way. The often silly example I use is taking something traditionally boring, like an RFP response, and turning it into a country song. I, as a non-musical person, can generate reasonably good country music from the most unusual sources. And to be clear, this takes away no paying work from human musicians, because I never even thought to do this before. I’m not taking a task from a human and giving it to a machine.
But that’s the difference between optimization and innovation - doing what you’ve always done versus doing something you’ve never done before.

Part 2: The ROI Block
The second major block, typically from more conservative or even resistant folks, is the familiar refrain asking for the ROI of something like an AI initiative. I’ve gone on long rants about this in the past on two fronts: first, people using the phrase ROI when they mean results, because ROI is fundamentally a financial calculation; second, people asking to measure ROI on something they don’t measure today, so they have no basis for comparison. Both are silly.

Here’s the reality of ROI. Because it’s a financial formula - (earned - spent) / spent - it takes some time to understand and calculate what you spent and what you earned. And in a field like generative AI, where the technologies change literally every week, by the time you complete a project, there’s a good chance that the old project methods and ways are obsolete. That in turn makes an ROI measurement less valuable, because part of the purpose of measurement is to achieve repeatable, reliable results, and you can’t do that if everything is changing.

Imagine you started a project with the equivalent of a campfire and some sticks. A quarter of the way through the meal, someone invents metalworking and now you have a cooking pot and a fire. Halfway through the meal prep, someone invents natural gas capture and metal manufacturing, and now you have a gas cooktop and cooking pots and pans. Three quarters of the way through, someone invents rooftop solar and induction ranges. By the time you’re done with a standard 2-year enterprise project, literally everything is different when it comes to AI - and if you’ve been adapting, then what you started with can’t be used as a measure of where things are now. Most of the time, when people are thinking about ROI, what they really mean is measuring change, and that’s a lot more immediate.
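To make the contrast concrete, here’s a minimal sketch in Python of the ROI formula versus a simple change measurement. The function names and example numbers are mine, purely for illustration:

```python
def roi(earned: float, spent: float) -> float:
    """ROI as a financial formula: (earned - spent) / spent."""
    return (earned - spent) / spent

def change(new: float, old: float) -> float:
    """Change measurement: (new - old) / old, in any quantifiable unit."""
    return (new - old) / old

# ROI requires money on both sides of the ledger.
print(f"ROI: {roi(earned=150_000, spent=100_000):.0%}")    # 50%

# Change works with any unit: hours, leads, NPS, and so on.
print(f"Hours per task: {change(new=6, old=10):.0%}")      # -40%
print(f"Leads generated: {change(new=130, old=100):.0%}")  # 30%
```

Note that the two formulas are structurally identical; the difference is that change accepts any before-and-after numbers you already track, so you don’t need a completed financial cycle before you can measure.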
It’s much easier to measure change. The same general formula applies - (new - old) / old - but instead of money, it can be any quantifiable unit: time, leads generated, NPS scores, anything where there are new numbers and old numbers. Especially for a fast-moving technology like AI, this is a better system of measurement. It’s not ROI, but it is reasonably objective and defensible - and because the feedback cycles can be shorter than the entire lifecycle of something (which ROI requires), you can know the impact of AI faster.

Part 3: Decomposing the Job Description
A couple of weeks ago, we talked about how to determine your job’s level of exposure to AI by decomposing a job description. The exact same process applies to use cases - and in fact, much of the prompt is reusable, including the use of the Trust Insights TRIPS Framework. But rather than judging how exposed our job is, we pivot to focus on which use cases are most useful for us. The Trust Insights TRIPS Framework is:
|