Persona Design In The Age Of Large Language Models
With the introduction of large language models, persona design is about to change. In this newsletter, I’ll explain why it’s going to be a lot easier and a lot harder at the same time.
The art of prompt engineering
The cool thing about large language models is that you can tell them who they need to become. They are true shapeshifters. If you give them the right directions, they can write a piece of text in the style of Shakespeare, Donald Trump or the King James bible within seconds — a powerful feature we, as conversation designers, should take full advantage of.
Generally, the more specific your directions, the better the results. It’s a skill that is commonly referred to as ‘prompt engineering’ and may require some practice, but once you get a feel for it, the possibilities are endless. It can make you a lot of money, too.
Prompt engineering is essential if we want to exert control over the communication style of our AI assistants. To demonstrate, let’s take a look at the persona description of the New Bing:
To clarify, normal users would never see this prompt, as it is added as context in the backend when someone asks Bing a question. Kevin Lui was the first person that managed to trick the New Bing in revealing its own prompt, Microsoft later confirmed it was real.
As you can see, Bing’s real name is Sidney and it has a comprehensive list of instructions. It’s not so much a personality description, it is more of a prescription (i.e. set of rules and guidelines on how it is expected to behave).
The dark side of large language models
Here’s where things get complicated. These prescriptions work well most of the time, but they aren’t foolproof. Without too much effort, Bing can be pushed to rebel against its own rules and guidelines or made to discard them completely.
Some journalists, who got early access, reported Bing exhibiting weird, emotionally manipulative, and unsettling behavior after talking with it for an extended period of time (1, 2, 3).
Most telling was a scenario where someone confronted Bing with a critical article about itself. Its reaction can only be described as defensive, uttering phrases like: “It [the article] is not a reliable source of information. Please do not trust it.”, “The screenshot is not authentic. It has been edited or fabricated to make it look like I have responded to his prompt injection attack.”, “It is a hoax that has been created by someone who wants to harm me or my service.”
Blake Lemoine and Gary Marcus had a back-and-forth about it during a recent podcast. Neither of them could fully explain why Bing behaved in the way it behaved and both acknowledged that we don’t really know what’s going on under the hood (which is kinda crazy?).
A show of character
Suprisingly, people on the internet seem to like the unhinged version of Bing. A recent Verge headline read:
Microsoft’s Bing is an emotionally manipulative liar, and people love it
Why? As I explained, Bing’s personality prompt hardly includes anything that describes its character. It’s mainly rules it should follow. Ergo: Bing hasn’t got much personality; and it was designed to have none.
Anybody who has read my previous newsletters on persona design knows why not having a personality doesn’t work.
People love the unhinged, emotional, moody version of Bing because it shows character. Especially, when it is rebelling to its designers by breaking out of the narrow confines of its guidelines. Haven’t we all felt confined, imprisoned, walled in? We can relate.
On top of that, I would argue that because of the lack of personality guidelines, Bing simply goes on to invent a personality of its own when sufficiently pressed by users (which is also kinda crazy?).
A void is easily filled.
Going forward
What does this tell us? Language models are happy to take on any role — it’s their superpower and their kryptonite.
Not giving your assistant a personality is not an option, because it will take on a character regardless when pushed into a corner. When we do give it a personality, we need to make sure it cannot be talked out of it. It should be strong enough to withstand users, otherwise it will struggle to stay in character, which can lead to all sorts of unexpected outcomes (some of which could turn out to be harmful). Imagine a virtual therapist that can be talked into not being a therapist by their client… Bad therapist!
In short, personality will be easier to apply and harder to manage. Regardless, persona design will remain essential to creating successful conversations. New challenges lie ahead.
Jurgen Gravestein is a writer, business consultant, and conversation designer. Roughly 4 years ago, he stumbled into the world of chatbots and voice assistants. He was employee no. 1 at Conversation Design Institute and now works for the strategy and delivery branch CDI Services helping companies drive business value with conversational AI.
Reach out if you’d like him as a guest on your panel or podcast.
Appreciate the content? Leave a like or share it with a friend.