Discussion about this post

Gretchen:

Coincidentally, a post about an analysis of AI prompts showed up in my LinkedIn feed today as well. The poster pointed out that when a customer starts a conversation with a chatbot/AI, they're typically trying to figure out how to move forward by:

• Finding out something they don’t know.

• Getting something they need.

• Fixing something that’s broken.

• Deciding between options.

• Feeling better about a situation.

The post also included a snippet of a "Conversations to Be Done" table breaking down types of conversations and the customer goal each one maps to.

Having read your reflections earlier today, it occurred to me that human interactions could probably fall into similar goal-aligned buckets, and that distinguishing among the conversation types/goals should inform how you approach the conversation.

Craig Stoltz:

Very good points about the distinctions between talking to humans vs. LLMs. The idea of prompt engineering people is both funny and creepy. I do question your saying people can "even" be irrational. I'd say I'm irrational 51% of the time. My wife would put it in the 70% range. And yet we get along great! 51% of the time.

