In this newsletter:
Post: How to Make AI Work for You with the Power of Prompting
ICYMI: Apple, Meta, and EV Updates
POTW: Strands by The New York Times
How to Make AI Work for You with the Power of Prompting
If you’ve followed along for the last two newsletters, hopefully you know a little bit more about the ethical use of AI along with what it may be capable of. And while it is a very remarkable tool, remember - no one fully understands how these models work internally. This is an ever-changing landscape, and what I share below may become outdated or useless over time.
That said, I’d like to dive a bit deeper on how to make AI work for you. Whether you are using a text or image version of some AI tool, it all comes down to prompting. A prompt is nothing more than the instructions you give to the AI model to help guide its output or response.
Major companies are now hiring for a new role called “Prompt Engineer” - people who use their knowledge of coding, LLMs, and AI models to craft and refine the prompts that steer these models, making them more useful and user-friendly. That same knowledge can also let engineers create completely new applications that are essentially a ‘skin’ over a chatbot. Many of the tools I shared in Weekly Wheaties #2410 are just that. Subscribers to OpenAI’s ChatGPT have access to tons of custom GPTs, including one called Prompt Optimizer.
Going forward, careful prompt language will improve your results with these chatbots and other AI tools - and research on prompting techniques backs this up. A prompt has multiple parts, so let’s break down what that could look like: context, output, and refining.
First up, context. This is where you tell the AI what persona to take up. You can do this by literally picking someone of celebrity status, or get a bit more in-depth and dictate experience and background knowledge. For example, you may say, “Pretend you are J.K. Rowling…” or, “You are an expert in [insert subject] with a background in [insert field]. You have a certification/degree in [insert cert or degree] with a concentration in [insert area].” One can imagine this gets pretty specific really quickly. But that’s what we want! The more specific the background context, the better.
Prompt types can also be combined. As in, you may say, “Pretend you are J.K. Rowling, and are also an expert in… etc.” Remember, this is generative AI, so we can make things up. One potential problem: as we give more specific prompts, we tend to pigeonhole the AI into a narrow context. However, that specific context still allows the AI to be creative - it just pulls from a particular area of the LLM’s training data. Think of it like a Venn diagram, where every piece of context adds one more overlapping circle of information.
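Putting those pieces together, the context portion of a prompt is really just a fill-in-the-blank template. Here’s a minimal sketch in Python - the function name and the persona details are my own illustrative examples, not part of any particular tool:

```python
def build_context(persona, field=None, credential=None):
    """Assemble the context/persona portion of a prompt."""
    parts = [f"You are {persona}."]
    if field:
        parts.append(f"You are an expert in {field}.")
    if credential:
        parts.append(f"You hold a {credential}.")
    return " ".join(parts)

# Combining personas, as described above:
context = build_context(
    "J.K. Rowling",
    field="young-adult fantasy writing",
    credential="degree in English literature",
)
print(context)
```

You’d prepend this context to whatever task you actually want done; the optional arguments are the “overlapping circles” from the Venn diagram analogy.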
Next is the output. This is what we are telling the AI to actually do. As a former high school teacher, I would lose my certificate if I didn’t say - these prompts should include action verbs that demand higher-order thinking skills. A popular way of framing this is Bloom’s Taxonomy, pulling terminology from the Analysis, Synthesis, and Evaluation levels. For example, prompts should include words like: design, develop, summarize, and compare and contrast. These push the AI to create something new, rather than simply regurgitate existing data.
Lastly, let’s look at refining - both the prompt and the response it produces. We can give the AI our context and desired output, then have it help refine the prompt before producing a final answer. A common, simple way to do this is to end every prompt with, “Ask me any clarifying questions.” To go deeper, tell the AI what you’re trying to do and ask how to refine the context or the prompt you came up with. You could even ask questions like, “What other information would be useful?” or “Is there something else I should ask before continuing?” Ask any of these, and I’d bet the AI gives you something to go on 99% of the time.
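That “end every prompt with a clarifier” habit is easy to automate. A tiny sketch (the function name and default wording are my own, just to illustrate the pattern):

```python
def with_clarifier(prompt,
                   clarifier="Ask me any clarifying questions before answering."):
    """Append a refinement request so the model checks in before responding."""
    return f"{prompt.rstrip()}\n\n{clarifier}"

print(with_clarifier("Summarize the main themes of my draft novel."))
```

The same wrapper works for any of the other refining questions mentioned above - just swap out the `clarifier` string.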
Once our prompt is complete, we can then refine the response. The cool thing about generative AI is it isn’t a one-and-done type of tool. We can literally give it the same prompt multiple times in a row and receive varying responses. Or if we want it to give a response based on something we already have, we can give it an example. Matt Shumer shared an easy way to do this using XML. Don’t worry, it’s pretty simple, even if you’ve never used or heard of XML.
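The XML trick boils down to wrapping your sample inside tags so the model can tell the worked example apart from the instructions. A hedged sketch of the idea - the tag names and helper function here are illustrative, not an exact reproduction of Shumer’s template:

```python
def few_shot_prompt(instruction, example_input, example_output, new_input):
    """Wrap one worked example in XML-style tags so the model can
    distinguish the sample from the actual task."""
    return (
        f"{instruction}\n\n"
        "<example>\n"
        f"  <input>{example_input}</input>\n"
        f"  <output>{example_output}</output>\n"
        "</example>\n\n"
        f"<input>{new_input}</input>"
    )

print(few_shot_prompt(
    "Rewrite the sentence in a cheerful tone.",
    "The meeting ran long.",
    "Great news: we had plenty of time to cover everything!",
    "The printer is out of ink.",
))
```

The tags carry no magic on their own; they just give the model an unambiguous boundary between “here is what good output looks like” and “here is your actual input.”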
You could also simply ask it to run the same prompt again, with a twist. My favorite follow-up after asking for something is, “Now could you provide 10 more examples, but [do this] instead.” Or modify one of the original parts of the prompt. For example, instead of telling the AI to pretend to be J.K. Rowling, we might now say, “Pretend to be Stan Lee instead.”
Another refinement is to make it go through a process. For example, ask first for ideas. Then ask for 10 sub-ideas based on one of the given ideas you like most. Then ask for a detailed outline, followed by a draft copy. Once iterated, ask for a final copy. Believe it or not, after all of that, you can still ask to “Review and provide any comments on areas that may be improved or re-written for better understanding.” Side note: you can also use this prompt with your own writing.
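That idea-to-final-copy process is really just a chain of prompts fed to the chatbot in order. Here is one way to sketch the sequence (the function and the exact wording of each step are my own; in practice you’d paste each step into your chatbot of choice and read the reply before moving on):

```python
def drafting_steps(topic):
    """Return the sequence of prompts for an idea-to-final-copy workflow."""
    return [
        f"Give me 10 ideas for a piece about {topic}.",
        "Give me 10 sub-ideas based on the idea I like most.",
        "Write a detailed outline for that sub-idea.",
        "Write a draft based on the outline.",
        "Write a final copy of the draft.",
        "Review the final copy and provide any comments on areas that "
        "may be improved or re-written for better understanding.",
    ]

for step in drafting_steps("AI prompting"):
    print(step)  # in practice, send each step to the chatbot one at a time
```

Note the last step is the same “review and comment” prompt mentioned above - the one that also works on your own writing.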
A final note on exporting audio from the various AI tools shared: homonyms can cause pronunciation trouble if you’re not careful. For example, the word r-e-a-d has two pronunciations. Instead of spelling it correctly and hoping the tool picks the right one, I will sometimes misspell a word, or use the wrong version, solely to make it sound correct - in this case, “reed” or “red”. Numbers, monetary values, acronyms, and odd-sounding words sometimes need to be spelled out as well.
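This kind of pre-processing can be scripted: swap the troublesome spellings before exporting audio. A minimal sketch, where the replacement table is just an illustration of the technique (a real script would need context to choose “red” vs. “reed”):

```python
import re

# Intentional "misspellings" that force the right pronunciation.
TTS_FIXES = {
    r"\bread\b": "red",       # past tense; use "reed" for present tense
    r"\$5\b": "five dollars", # spell out monetary values
    r"\bFAQ\b": "F A Q",      # spell out awkward acronyms
}

def prep_for_tts(text):
    """Rewrite words a text-to-speech engine tends to mispronounce."""
    for pattern, replacement in TTS_FIXES.items():
        text = re.sub(pattern, replacement, text)
    return text

print(prep_for_tts("I read the FAQ and paid $5."))
```

Run your script through a pass like this, listen to the result, and keep adding entries to the table as you catch new mispronunciations.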
While there are tons of other resources around AI prompting, I’d like to share one put together by two instructors I found: visit More Useful Things.
Have you found any prompts that have helped your workflow?
ICYMI: Apple, Tesla, and Meta Updates
Apple
Apple announced a new 13- and 15-inch MacBook Air equipped with their updated M3 chip. These laptops promise more performance, faster Wi-Fi, support for multiple monitors, along with a lightweight design and all-day battery life. They are available now starting at $1,099 and $1,299 respectively.
On the mobile front, iOS 17.4 released updates to iMessage, payment methods, alternate browser engines, Podcasts transcripts, and a major change to the App Store for the EU - allowing alternative app marketplaces and third-party downloads from outside their own app store. While some may want to see this in the US, time will tell. On a personal note, I’ve seen great examples of transcripts being used within the Podcasts app! Try it out on your favorite podcast. Perhaps the audio version of this newsletter?
Meta
Last Tuesday, the Meta networks (Facebook, Instagram, and Threads) all suffered a major outage for a few hours - a reminder that the bigger these networks become, the more susceptible they are to crashing during updates. No one outside the inner circle knows the exact cause, but consider the BlackBerry story: in order to scale, new software and technology must be deployed, and deployments can interrupt service. Maybe we’ll get a full explanation in the future, but for now, all we have is this tweet from Andy Stone.
People say it’s suspicious this happened on Super Tuesday. MAYBE they were hacked. Maybe… But if not, there are four ways of looking at this - or of arguing against any conspiracy theory.
They want ad money, why would they go down? They would be losing any potential revenue.
They aren’t even allowing political ads, so that’s a wrong take, too.
This would (and did) push people to other social media sites. That’s definitely not what they want. Especially when they’re having to use another competing platform to share with their users what the problem is. Once anyone leaves a social media site to go somewhere else, the chances of them coming back are low.
It’s also free! All of these social media sites are free (with ads). We should stop complaining when something free is ‘down’ for two hours.
EV Updates
Rivian announced two new EV models, the R2 and R3. The R2 is a 5-seater variant of the R1S due in 2026 at a starting price of $45,000; it is a little smaller but feels much the same inside. The R3 is an even smaller crossover, at a lower (as-yet-unannounced) price point and a later release date.
According to The Verge, “Waymo is now allowed to operate its self-driving robotaxis on highways in parts of Los Angeles and in the Bay Area.” This comes after multiple pedestrian-involved incidents, along with a software glitch that caused traffic jams.
POTW: Strands by The New York Times
You have heard of Wordle before, haven’t you? If not, it is a game where the user has 6 attempts to guess a 5-letter word. Alternate versions I like include Worldle and Framed. That said, The New York Times includes a couple of other fun games in their app and online, including the Crossword, Connections, Sudoku, Letter Boxed, and Tiles. They just released the beta version of a new game, Strands, which hides words in a word-search-style grid based on a daily theme.