Almost Timely News: 🗞️ Cultivating an AI Mindset, Part 1 (2025-11-09)

How you think about AI leads to your AI results

The Big Plug
👉 Watch my MAICON 2025 session, From Text to Video in Seconds, a session on AI video generation!

Content Authenticity Statement
100% of this week's newsletter was generated by me, the human. You will see bountiful AI outputs in the video, especially in the implementation. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.

Watch This Newsletter On YouTube 📺
Click here for the video 📺 version of this newsletter on YouTube »
Click here for an MP3 audio 🎧 only version »

What's On My Mind: Cultivating an AI Mindset
This week, let's talk about mindset. No, this isn't some esoteric deep dive into mysticism, but a concrete talk about mindset - your mindset - as you approach the use of AI.

Lots of people and lots of studies offer conflicting information about the value we derive from AI. Some studies show no benefit; some show massive benefit. Some show no ROI; some show bountiful ROI. What are we supposed to believe? Ethan Mollick of the Wharton School likes to call this the jagged frontier - the technology is so new that it is unevenly skilled in what it can do. I extend that to us, the people using it. A skillful worker using AI can punch well above their weight, becoming that mythical 10x worker. Remember a couple of years ago when everyone was talking about the 10x marketer and the 10x coder? Today, with generative AI, that's actually achievable. An unskilled worker using AI will create more work, save neither time nor money, and generate bad results. Here's the disappointing news about this newsletter: I don't have an answer for how to uniformly turn an unskilled worker into a skilled one.
I can help people individually, but every person brings their own strengths and weaknesses to AI, and as I say ad nauseam, AI takes the good and makes it better, and takes the bad and makes it worse. A highly motivated, highly creative person will become even more productive, motivated, and creative with AI. A lazy, checked-out person will become even more lazy and checked out with AI.

So with those warnings dispensed, let's dig into what I can share: how I think about AI broadly, and the tactical lessons you can borrow for your own use. I can't promise you success if you adopt the different aspects of my AI mindset, but I can promise that it will at least broaden your thinking and hopefully help you generate better results.

Part 1: Fundamentals
To understand generative AI, you need a fundamental understanding of how the technology itself works. Most AI incorporates, in part or in whole, a predictive technology called transformers, no relation to the awesome 1980s toys. The transformer architecture is notable for many neat tricks that overcome the problems of previous predictive algorithms, but we can boil it down to something really simple: everything helps predict the next thing.

In my keynotes and workshops, I explain it like this: imagine you're texting with a friend. A normal friend responds to your previous text, and you go back and forth, texting as one does. Now imagine you have a really weird friend. That really weird friend, instead of just replying to your message, copies and pastes THE ENTIRE CHAT HISTORY into every message. They copy everything you've ever said in that chat. They're really weird. But in doing so, both you and they get to see the entire conversation history in every message. We could debate how convenient that is, but at the very least it shows you everything that's happened up to that point. That's what's happening when you prompt AI. It's not responding to the previous message.
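The weird-friend analogy maps directly to how chat-style AI systems actually work: on every turn, the client resends the entire message history, and the model predicts its reply from all of it. Here's a minimal sketch in Python; the `fake_model` function is a stand-in I've invented for illustration, not any real AI API.

```python
# Toy illustration of how chat-style AI receives a conversation.
# fake_model is a hypothetical stand-in for an LLM: it just reports
# how much context it was handed, which is the point of the demo.

def fake_model(full_context: str) -> str:
    """Pretend model: shows that it sees the whole conversation, not one message."""
    return f"(responding with {len(full_context.split())} words of context in view)"

def chat_turn(history: list, user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The model never sees "just the last message" -- the ENTIRE history
    # is concatenated and sent again, like the weird friend pasting the
    # whole chat into every text.
    full_context = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = fake_model(full_context)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "How do I make Dorset apple cake?")
reply = chat_turn(history, "Now, what about SEO?")
print(reply)  # the apple cake turn is still in the context shaping this reply
```

Notice that the second reply is generated with the apple cake exchange still in view: nothing earlier in the chat ever drops out on its own, which is exactly why old topics keep influencing new answers.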
Every word in the conversation, from both sides, is part of what it uses to respond, which is why it's so capable. When we prompt AI, it leverages all of the knowledge within that chat to predict its next response.

Knowing this fundamental fact means you know how it can go wrong. Say you're having a conversation about Dorset apple cake and how to make it. Then you switch topics and start talking, in the same chat, about SEO. Guess what happens. Has the AI forgotten about the apple cake? Nope. That piece of the conversation is still influencing the whole, and your results will be skewed. You will get different answers about SEO than you would have if the apple cake conversation had not been there.

Say a conversation goes on too long. Can you imagine if YOU had to copy and paste the entire conversation over and over again? Eventually you'd get tired. You might make mistakes. You might forget to copy parts of the conversation. The same thing happens to AI - a conversation that goes on too long exceeds its short-term memory, and it loses coherence. It forgets, or worse, mangles words and concepts. Part of the AI mindset is knowing what the tools can and can't do, understanding their limits so you work with them, not against them.

Here's another basic: all AI is based on probability, on generating the most probable responses. I talk about this at length in my book, Almost Timeless: 48 Foundation Principles of Generative AI, but here's a simple trick to use. Avoid asking AI for a single answer. Instead, ask it for several, and require them to be different in some tangible way. For example, instead of saying "give me a title for this video based on the transcript and my audience's needs" (combined with information about who your audience is), you would say something like, "give me 3 highly appealing title candidates based on the transcript and my audience's needs, plus 3 wildly incorrect titles that would annoy my audience". Why does this work?
Because it forces the AI model to generate a wider set of probabilities, a wider range of options. You're skewing the probabilities by forcing it to do something almost counterintuitive, and that helps it be more creative.

Part 2: It's All About The Salad
One of the ways I explain how AI works, conceptually, is through cooking. I do love cooking analogies, mainly because everyone eats; there are few other activities so universal. Suppose we're making a salad - say, a chicken Caesar salad. What goes into making a salad? You should probably have a recipe, or at least an idea of what to make it with. You should have ingredients - hard to eat without them. You should know techniques like chopping and mixing, and be able to make simple dressings from things like egg yolks and vinegar. You should know how to grate cheese, and in what order to introduce ingredients. You'll need a salad bowl for mixing, plates for serving, tongs or utensils for tossing, and probably some way to grill chicken if you're making a grilled chicken Caesar salad. All of that is pretty straightforward. Yet people try to dramatically overcomplicate AI. Here's the breakdown:

- The recipe is your prompt.
- The ingredients are your data.
- The kitchen techniques are your AI skills.
- The bowls, tongs, and grill are the AI tools themselves.
- The chef is you.
What should be immediately obvious from this grand analogy is that AI is far more than just prompting. Prompting is important, yes. Making a dish without a recipe, especially a dish you've never made before, is not going to go well. But equally important are things like good ingredients: no amount of skill or great recipes can overcome a bowl of rotten, moldy lettuce. This is critical to understand, because if your AI efforts aren't yielding results and you're stuck obsessing over prompts, the problem might be elsewhere - bad ingredients, no skills, bad tools, an incompetent chef. Fixating on the prompt blinds you to everything else that could be going wrong.
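To make the Part 1 trick concrete, the "ask for several, deliberately different answers" pattern is easy to turn into a reusable prompt builder. This is a minimal sketch; the function name and the exact prompt wording are my own illustrative choices, not a canonical template.

```python
# Sketch of the "several deliberately different answers" prompt trick.
# diversified_prompt is a hypothetical helper for illustration only.

def diversified_prompt(task: str, n_good: int = 3, n_bad: int = 3) -> str:
    """Build a prompt that pushes the model across a wider probability range."""
    return (
        f"{task}\n"
        f"Give me {n_good} highly appealing candidates, "
        f"plus {n_bad} wildly incorrect ones that would annoy my audience. "
        f"Each candidate must differ from the others in a tangible way."
    )

prompt = diversified_prompt(
    "Write a title for this video based on the transcript and my audience's needs."
)
print(prompt)
```

Asking for the bad candidates alongside the good ones is what widens the distribution: the model can't collapse onto its single most probable answer when the request explicitly spans both ends of the range.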