4 Comments
Gretchen

Coincidentally, a post about an analysis of AI prompts showed up in my LinkedIn feed today as well. The poster pointed out that when a customer starts a conversation with a chatbot/AI, they're typically trying to figure out how to move forward by:

• Finding out something they don't know.

• Getting something they need.

• Fixing something that's broken.

• Deciding between options.

• Feeling better about a situation.

The post also included a snippet of a "Conversations to be done" table breaking down types of conversations and the customer goal each maps to.

Having read your reflections earlier today, it occurred to me that human interactions could probably fall into similar goal-aligned buckets, and that distinguishing between conversation types and goals should inform how you approach a conversation.

Alexandra Mack

I think that does cover a lot of human interactions. Though there is also the social aspect, which I realize some people are using AI for as well.

Gretchen

Absolutely. Knowing what type of conversation you need to have and understanding your goal for the interaction is just one piece of the puzzle. Collaborating with the people involved and navigating the nuances of the social aspect is the hard part!

Craig Stoltz

Very good points about the distinctions between talking to humans vs. LLMs. The idea of prompt-engineering people is both funny and creepy. I do question your saying people can "even" be irrational. I'd say I'm irrational 51% of the time. My wife would put it in the 70% range. And yet we get along great! 51% of the time.
