As a new author who is shamelessly promoting my book, I am very attuned to other books, articles, and conversations about how we do (or unfortunately often don’t) talk to each other well. There is a mountain written about how COVID interrupted learning and social interactions, another about the disconnect of remote interactions, and oceans on how the anonymity of the internet and social media allow people to “forget” they are talking to other humans who should be treated with consideration and caring.
At the same time, I am realizing that there is an entire industry building up around teaching people how to talk to machines. As AI technology gallops forward, prompt engineering has become a requisite skill for people across industries. Essentially, prompt engineering is framing a question or command to an AI model in a way that elicits the best output--ideally an accurate response that addresses every aspect of the intended request.
Good prompting requires good thinking--you do need to have an idea of what output you are aiming for when you start. A key aspect of a good prompt is specificity, both in the contextual information you provide and the kind of output you desire, and examples always help. Most AIs are pretty good at iterating, so feedback on previous answers can help refine the result.
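To make the specificity point concrete, here is a minimal sketch of a vague prompt versus a specific one, with one feedback turn to show iteration. It assumes the openai Python library and an API key in the environment; the model name and prompt text are illustrative, not from the original post.

```python
# A minimal sketch: vague vs. specific prompts, plus one round of feedback.
# Assumes the openai Python library (>= 1.0) and OPENAI_API_KEY in the env;
# the model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model choice

# A vague prompt leaves the model guessing about audience, length, and format.
vague = "Tell me about prompt engineering."

# A specific prompt supplies context, the desired output, and the format
# we want back.
specific = (
    "You are writing for a newsletter about workplace communication. "
    "In 150 words or fewer, explain prompt engineering to a non-technical "
    "reader, and end with one concrete example of a well-formed prompt."
)

messages = [{"role": "user", "content": specific}]
first = client.chat.completions.create(model=MODEL, messages=messages)
draft = first.choices[0].message.content
print(draft)

# Iteration: keep the draft in the conversation and ask for a refinement.
messages.append({"role": "assistant", "content": draft})
messages.append(
    {"role": "user", "content": "Good, but cut it to 75 words and drop any jargon."}
)
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```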
My goal today is not to teach prompt engineering--I am certainly not an expert on it. And it may become a less essential skill to learn as these models continue to advance and get better at “inferring” what people want from them.
But I do find it interesting that so many people seem to focus more on learning to interact with machines than on learning how to interact with other humans. Maybe that is just because other people are “known” and we assume we already know how to talk to them. But I also worry that too many people have lost the desire to have conversations with unknown outcomes--conversations that may result in learning something new or uncovering areas of disagreement.
When we talk to AI, we are demanding something of it--requesting an output. We can tell it that what it told us was wrong, that it needs to refine or redo. There are no feelings to hurt, and no goals or intentions on the part of the program. In that sense, prompt engineering is easy.
There really isn’t prompt engineering for people. Humans are a bit more unpredictable than computers; we don’t run on algorithms. We have opinions and moods, and we even behave irrationally at times. It is very hard to get another human to give us the exact output we want--I certainly don’t have the formula for ensuring my son responds to a text! But talking with people is rewarding as well.
I wrote a few months ago about the need to understand and optimize what computers are good for, and to take advantage of what AI can do, while not forgetting the importance of how human brains work. It’s the same with conversations. Talking to machines is really talking at machines--providing instructions for them to follow. Talking to people requires work--interactions are a back and forth, and we need to listen to what other people are saying, and be attuned to what they are not saying. And the outcomes are not always what we intended going in.
I’m creating a live, 5 session cohort-based course on how to create impact with business stakeholders. Fill out this survey to get on the waitlist & be the first to hear when enrollments open.
For those who want more about how important it is to talk to people well, this interview with Jefferson Fisher is a good start.
I’ve refreshed and relaunched the Alchymyx website!
Coincidentally, a post about an analysis of AI prompts showed up in my LinkedIn feed today as well. The poster pointed out that when a customer starts a conversation with a chatbot/AI, they're typically trying to figure out how to move forward by:
• Finding out something they don’t know.
• Getting something they need.
• Fixing something that’s broken.
• Deciding between options.
• Feeling better about a situation.
The post also included a snippet of a "Conversations to be Done" table breaking down types of conversations and the customer goal each maps to.
Having read your reflections earlier today, I realized that human interactions could probably fall into similar goal-aligned buckets, and that differentiating the conversation types/goals should inform how you approach the conversation.
Very good points about the distinctions between talking to humans vs LLMs. The idea of prompt engineering people is both funny and creepy. I do question your saying people can "even" be irrational. I'd say I'm irrational 51% of the time. My wife would put it in the 70% range. And yet we get along great! 51% of the time.