22 Jan The Chat Crash – When a Chatbot Fails
Remember that movie Her, the futuristic romance where everyone wears high-waisted pants and a lonely letter writer named Theodore lands an AI girlfriend called Samantha? Towards the end of the film, Samantha grows super smart, reveals she’s been dating around, and dumps Theodore with a classic, “It’s not you, it’s me.”
It’s a strange cinematic spectacle that perfectly illustrates the general public’s perception of artificial intelligence and machine learning: “Pretty soon, my computer will be more human than me!”
Unfortunately, this same sentiment has seeped into the present-day buzz surrounding chatbot technology. Design articles and experts alike seem settled on the inevitable omnipresence of conversational robots. In the very near future, it appears we’ll spend our waking hours merrily chatting with machines.
When confronted with the complexities of human communication, chatbots tend to respond with all the eloquence of a brick wall. Why is that?
Many chatbots are nothing more than glorified flowcharts, their responses fumbling forth from rigid IF/THEN scripts. Even artificially intelligent chatbots, though skilled at detecting patterns in human language, are lacking when it comes to natural language understanding (the ability to determine intent, especially when what is said doesn’t quite match what is meant).
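To see why "glorified flowchart" is a fair description, here is a minimal sketch of a rule-based bot (the rules and wording are hypothetical, not any real product's code):

```python
# A minimal rule-based chatbot: replies come from a fixed keyword lookup,
# so anything outside the script falls straight through to a fallback.
RULES = {
    "hours": "We're open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:  # rigid IF/THEN matching, no intent detection
            return response
    # No natural language understanding: "money back" never matches "refund",
    # even though the intent is obvious to a human reader.
    return "Sorry, I didn't understand that."

print(reply("What are your hours?"))  # matches the script
print(reply("I want my money back"))  # clear intent, but falls through
```

The second message fails not because the user was unclear, but because the bot matches words rather than meaning.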
Herein lies a bigger problem: Human language and conversation are incredibly nuanced. Consider the intricacies of…
- Misused phrases
- Double meanings
- Passive aggression
- Poor pronunciation
- Regional dialects
- Subtle humor
- Speech impairments
- Non-native speakers
The list goes on, and as the obstacles mount, designers are faced with a dilemma: Should projects involving chatbots be avoided?
On one hand, chatbots are overhyped, and in many use cases, they prove downright impractical. We’re told they hold the potential for a more streamlined user experience than tap-based apps could ever provide, but on the whole, their present-day performance leaves much to be desired.
On the other hand, designers are problem solvers. Chatbots are clearly in demand across a wide range of industries, and it’s not realistic to wait for advancements in artificial intelligence before taking advantage of the benefits that can be had in the here and now.
Instead, designers must work within the constraints of current technology and develop strategies to implement chatbots in ways that positively impact the user experience.
So, why do chatbots falter?
5 Chatbot Fails and the Path to Improvement
Let’s be clear, chatbots aren’t all bad. Some are quite useful, especially if you’re looking to schedule a meeting, write a report, or turn your lights off. Others, while not particularly utilitarian, possess a certain charm that causes us to ponder the outer limits of bot-human banter. The future isn’t here, but we’re tantalized by the possibilities.
However, if chatbots truly are the technology of tomorrow (and that’s not a foregone conclusion), then designers need to amend the issues plaguing them today. To help, we’ve identified five scenarios where chatbots fail and frustrate users, and for each, we offer advice pointing toward a path of improvement.
Problem #1: Broken Script
Inevitably, chatbots that draw replies from IF/THEN scripts will run into a question or request that wasn’t accounted for. When this happens, most bots will attempt to recover by asking a clarifying question that redirects the conversation back to the safety of their predetermined responses.
This isn’t a terrible solution, but problems arise when a bot’s corrective questioning leads to a conversational dead-end or places blame on the user, even subtly. The illusion is broken by a faulty script, and the user becomes the accused. Not good.
Solution #1: Humility
Chatbots fail. The most exquisitely designed scripts run into problems. When it happens, humility is the antidote—even if fault lies with the user. What does it mean for a chatbot to be humble?
At the first sign of communication breakdown, chatbots should be programmed to…
- Acknowledge that confusion exists.
- Assume responsibility for the situation.
- Allow the user to express dissatisfaction.
- Provide options for moving forward.
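The four steps above can be sketched as a single recovery handler. Everything here is illustrative; the message wording and option labels are placeholders a real bot would tailor to its domain:

```python
def handle_breakdown(failed_utterance: str) -> dict:
    """Compose a humble recovery response following the four steps above."""
    return {
        # 1. Acknowledge that confusion exists.
        "acknowledge": f"Hmm, I'm having trouble understanding: '{failed_utterance}'.",
        # 2. Assume responsibility, even if fault lies with the user.
        "own_it": "That's my fault, not yours.",
        # 3. Allow the user to express dissatisfaction.
        "invite_feedback": "If this is frustrating, tell me what went wrong.",
        # 4. Provide options for moving forward.
        "options": [
            "Rephrase your question",
            "See everything I can help with",
            "Talk to a human",
        ],
    }
```

The key design choice is that the fallback never dead-ends: the options list always gives the user a way out of the loop.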
Problem #2: Impersonal Interactions
Some chatbots are designed to perform a specific duty with great efficiency. For many tasks, this is a good thing, but some jobs require a more sympathetic touch.
Maybe you’ve dealt with a customer service agent who kept cutting in before you’d finished answering. The agent’s goal was productivity, but you probably felt undervalued.
Likewise, chatbots run the risk of being impersonal if designers aren’t keenly aware of their users’ practical and emotional needs when those users seek help from a bot.
Solution #2: User Research
If efficiency and profitability are at the core of a chatbot’s design objectives, users will feel it, for better or worse. Here, it may be tempting for designers to think that compassion can be infused through politely written replies or a dose of humor. That might work, but it could also backfire if users expect a different type of interaction.
Chatbots are digital products, so the decisions undergirding their design must be based on actual user research. For instance, a round of user interviews could unearth negative attitudes towards a particular phrase or line of questioning written into a bot. Or a simple survey might reveal that a bot’s intended users value language that exudes an air of authority.
The goal is chatbots that are built on a foundation of real user insights.
Problem #3: Strangely Personal Interactions
Have you ever encountered a stranger who knew too many of your personal details? “Wait…we just met but you know my middle name, birth date, hometown, marital status, and internet search history?” Creepy.
Well, it’s no less cringy when a chatbot knows those things (and more) without ever explicitly asking for them. If the hope is that users will interact with bots in a natural, human way, then bots should meet an expectation that humans have of each other: “Please don’t pry into my private life.”
Still, some chatbot designers are attempting to offset the impersonal nature of bot interactions by mining more and more user information. It seems obvious that this is a bad idea, but some companies continue to learn the hard way.
Solution #3: Transparency
This isn’t complicated. Chatbots should:
- Ask users for personal information.
- Ask users if that information can be stored.
- Clearly tell users how their information will be used.
- Make it easy to turn off any features that may compromise user privacy.
User Privacy 101: If it’s a shady human practice, it’s a shady chatbot practice.
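A consent-first data flow following the checklist above might look like this sketch (class and field names are hypothetical):

```python
class UserProfile:
    """Stores personal data only with explicit consent, and forgets on request."""

    def __init__(self):
        self.data = {}

    def request_consent(self, field: str, purpose: str) -> str:
        # Steps 1-3: ask for the information, ask to store it, and state the use.
        return (f"May I ask for your {field}? It will only be used to {purpose}, "
                "and stored only if you agree.")

    def store(self, field: str, value: str, consented: bool) -> None:
        # Nothing is saved without an explicit yes.
        if consented:
            self.data[field] = value

    def forget(self, field: str) -> None:
        # Step 4: one call removes the data and disables the dependent feature.
        self.data.pop(field, None)
```

The point of the sketch is structural: consent is checked at the storage boundary, not buried in a settings page the user has to hunt for.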
Problem #4: Too General
Was there ever a time when you were approached by a client with a big but impractical vision? “We want the app to be a cross between Facebook, Amazon, and Reddit.”
Overly ambitious, feature-packed digital products rarely do well, especially at launch. When people find a new tool, they want to quickly understand what it can do. If it doesn’t add obvious value, it won’t get used.
The same is true for chatbots. The more they try to do, the greater the risk of irrelevance. Yet there exists a strange sentiment, likely stemming from the rise of virtual assistants, that bots ought to somehow touch every aspect of users’ lives. “It manages my bank account, buys my groceries, and reads my daughter bedtime stories!”
Solution #4: Focused Features
Chatbots designed to perform specific tasks often do so quite skillfully. However, designers aren’t always present when a bot’s “big ideas” are conceived. Sometimes, we’re brought on board at a later stage, only to discover a baffling cornucopia of product features. Other times, a bot’s scope balloons over the course of a project.
Either way, a choice looms: Do we raise red flags and attempt to correct course, or do we stay silent and try to make the best of a bad situation?
It takes some tact, but it’s best to bring design concerns forward. Here’s how:
- Find the right person and the right time to discuss your observations.
- Show that you understand how the chatbot has evolved through the design cycle.
- Concisely break down all the intended features, and make it known that you believe the bot’s purpose has become clouded.
- Provide a way forward by highlighting a specific area of focus, and show how work done to this point might support the pivot you’re proposing.
- Be ready to respectfully defend your position, and if possible, use specific examples from similar chatbots to further your point.
All of this becomes a lot easier if there’s user research to support your case.
Problem #5: Lone Rangers
Does your chatbot have a cowboy mentality? Is it out there riding across the open expanse of the “interwebs” with no meaningful connection to the rest of your business? For instance, if a service company uses a bot on its website, but there’s no link to its social channels, email list, or scheduling calendar, the bot isn’t living up to its potential.
Sometimes, the range where a bot roams is so remote that it becomes nearly impossible to find. Let’s be honest; people aren’t obsessively checking your company’s website for updates and notifications. There are digital channels where your customers congregate, and your chatbot needs to be there.
Solution #5: Messenger Chatbots & Third-party Programs
Messenger chatbots reside within the messaging applications of larger digital platforms (Facebook, WhatsApp, Twitter, etc.) and allow businesses to interact with customers on the channels where they spend the most time. Even better, there’s no reason to design a messenger bot from scratch, as there are multiple third-party programs that can build bots capable of:
- Deployment across multiple channels (social, web, apps)
- Custom design elements (response time, contact buttons, images, audio, etc.)
- Payment collection
- Analytics (open rates, user retention, subscribe/unsubscribe rates)
- Human takeover when the bot’s capabilities are surpassed
- Integration with popular digital platforms (Shopify, Zapier, Google Site Search, etc.)
- Customer support when issues arise
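One feature on that list, human takeover, pairs naturally with the humility advice from Solution #1: after repeated misunderstandings, hand off rather than loop. A simple escalation rule might look like this (the threshold and function name are illustrative assumptions, not any platform's API):

```python
def route_turn(bot_understood: bool, failure_count: int,
               max_failures: int = 2) -> tuple[str, int]:
    """Decide who handles the next turn and return the updated failure count."""
    if bot_understood:
        return "bot", 0                # success resets the counter
    failure_count += 1
    if failure_count >= max_failures:
        return "human", failure_count  # takeover threshold reached
    return "bot", failure_count        # allow one more scripted retry
```

Tuning `max_failures` is a design decision: too low and the bot escalates trivial hiccups; too high and the user is trapped in a failing script.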
Reframing the Chatbot Discussion
There’s a lot of confusion about what chatbots are, what they can do, and where the technology is headed. The waters are further muddied by grandiose proclamations of bot/human social harmony. When we use words like companion, helper, and friend to market bots, we evoke emotions forged by the trials and triumphs of the human experience—emotions like trust, loyalty, and joy.
Are we doing ourselves a disservice?
Are we thinking about chatbots and discussing their capabilities accurately?
Or, are we creating unrealistic expectations that ultimately frustrate users when the bots they encounter are predictably unhuman?
As designers, we’re better off approaching chatbots as tools—tools with the potential to help our clients improve their businesses. A tool is a means to an end. It’s an object to be used for a specific purpose, and if it doesn’t work, we don’t find ourselves shouting, “Tools are useless!” We simply search for a better option.
Chatbots aren’t going anywhere. They’ll continue to advance, and as they do, designers ought to lead the effort to ensure that users understand and benefit from their improving capabilities.