How to Train Your Bot: Best Practices in Managing and Measuring Bots

Posted by
Robert LoCascio
Founder and CEO
Friday, September 1, 2017 - 09:15

In 2017, bots are finally becoming widely seen, though rarely successful, in commercial deployments. Brands, in droves, are implementing AI and chatbot technologies in customer care operations. This chatbot revival, however, has unfortunate shades of the “virtual agents” of the 1990s, which were by all accounts (and I used many of them) a massive disappointment, with failure rates over 90%.

In this new “age of bots” (the third one by my count), brands need to adopt a different mindset. There are two keys to this: first, bots are not magic, they are software, and they need the same rigorous management that software makers and administrators apply, or they will fail; and second, bots are just another kind of agent, so they need the same comprehensive approach that contact centers use for their human agents.

Bots are software

All types of software (or, in fact, any technology) that are used at scale need to be managed. Just as servers, desktop PCs, mobile devices, enterprise software licenses and so on all proliferated, then were followed by management software to keep track of them, bots also require management.

But today, this is rarely the case. Bots spring up in all sorts of places, with no long-term strategy, no system to manage and monitor them, no disaster recovery, no backup plan, no fallback to humans, little or no beta testing, no version control — none of the things that turn technology from chaos into orderly systems. For bots to scale, that needs to change.

Bots are agents

Instead of overreaching and trying to make bots do more than they are capable of, brands should treat them just like human agents in how they’re assigned responsibilities, deployed, managed, and measured in real time. This is, ironically, a foundational principle of AI deployment, yet it is rarely applied in the chatbot space. In short, bots should be managed like any other agent in the contact center.

From the consumer perspective, a bot is just another agent employed to accomplish the same goals as a human would: answering questions, fixing issues, and being generally helpful. Customer care managers should adopt the same viewpoint. By closely analyzing the use cases (the customer intents) where bots succeed or fail, managers can create workflows that assign bots the specific tasks where they’ll have a high rate of success, as well as higher CSAT scores.
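As a rough illustration of that kind of workflow, routing can be as simple as a lookup from a recognized intent to the agent, bot or human, with the best track record for it. The sketch below is a minimal example; the intent names, bot names, and success rates are hypothetical, not drawn from any particular product.

```python
# Hypothetical sketch: route each recognized customer intent to a specialist bot
# only where historical data shows the bot succeeds; otherwise keep a human agent.

# Assumed historical success rates per (intent, bot) pair -- illustrative numbers only.
BOT_SUCCESS_RATES = {
    ("order_status", "order_bot"): 0.93,
    ("password_reset", "account_bot"): 0.88,
    ("billing_dispute", "billing_bot"): 0.41,  # too low: keep with humans
}

SUCCESS_THRESHOLD = 0.80  # assign an intent to a bot only above this rate


def assign_agent(intent: str) -> str:
    """Return the bot best suited to this intent, or a human agent as the fallback."""
    candidates = [
        (rate, bot)
        for (known_intent, bot), rate in BOT_SUCCESS_RATES.items()
        if known_intent == intent and rate >= SUCCESS_THRESHOLD
    ]
    if not candidates:
        return "human_agent"
    return max(candidates)[1]


print(assign_agent("order_status"))     # -> order_bot
print(assign_agent("billing_dispute"))  # -> human_agent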

It’s better for bots to do one thing well than attempt several things poorly. Sixty percent or more of an agent’s responsibilities fall into the category of simpler, repetitive tasks that are well suited to bots: chiefly tedious, routine work such as linking to FAQ answers, answering common queries in real time, and handling standard business processes.

Specialist bots are also more measurable; the more specific and granular a bot’s “job,” the easier it is to see whether it’s any good at it. You can measure which bots are succeeding at their given tasks, which need improvement, and which need to be replaced or decommissioned. Each specialist bot should be treated like any other agent, with detailed metrics, monitoring, and assistance when needed (a manager jumping into a conversation to help) to ensure consistently high quality.
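A per-bot scorecard is one simple way to make those decisions concrete. The sketch below assumes hypothetical thresholds and metric names; it only illustrates the idea of tracking resolution rate and CSAT per specialist bot and deriving a keep/retrain/decommission recommendation from them.

```python
# Hypothetical sketch: a per-bot scorecard used to decide whether a specialist bot
# should be kept, retrained, or decommissioned. Thresholds are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BotScorecard:
    bot_id: str
    handled: int = 0
    resolved: int = 0
    escalated: int = 0
    csat_scores: List[float] = field(default_factory=list)

    def record(self, resolved: bool, escalated: bool, csat: Optional[float] = None) -> None:
        """Log the outcome of one conversation handled by this bot."""
        self.handled += 1
        self.resolved += int(resolved)
        self.escalated += int(escalated)
        if csat is not None:
            self.csat_scores.append(csat)

    def resolution_rate(self) -> float:
        return self.resolved / self.handled if self.handled else 0.0

    def avg_csat(self) -> float:
        return sum(self.csat_scores) / len(self.csat_scores) if self.csat_scores else 0.0

    def recommendation(self) -> str:
        if self.resolution_rate() < 0.5 or self.avg_csat() < 3.0:
            return "retrain or decommission"
        if self.resolution_rate() < 0.8:
            return "needs improvement"
        return "performing well"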

Embrace the human-bot “Tango”

Bots need to be measured in real time so that assistance from human agents can be enlisted as needed. Technology today allows for seamless transfer between human agents and bots if a conversation starts to head south. Though the technology powering bots is improving every day, bots still have limitations in language understanding, compassion, and capability. There is no silver-bullet bot that can handle everything, and humans are still needed. Evaluating a bot’s performance in real time lets you intervene the moment human involvement is needed.

Train — or fire — your bot when needed

If a bot isn’t meeting expectations, it needs to be further “trained” or “fired,” just as a human agent would be. Managers can track common metrics, including confusion triggers, which cause a conversation to be escalated to a human agent; the number of steps to resolution, to limit consumer frustration; and real-time sentiment analysis, such as CSAT or our own Meaningful Connection Score (MCS) technology, which can trigger a handoff to a human agent where warranted.
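Putting those signals together, a real-time handoff check can be as small as the sketch below. The threshold values and metric names are assumptions for illustration only, not defaults of any product or of MCS.

```python
# Hypothetical sketch of a real-time handoff check combining the metrics above:
# confusion triggers, steps taken so far, and a rolling sentiment score.
# Threshold values are illustrative assumptions, not product defaults.

MAX_CONFUSION_TRIGGERS = 2   # e.g. repeated "Sorry, I didn't understand" responses
MAX_STEPS_TO_RESOLUTION = 8  # cap the steps before a human steps in
MIN_SENTIMENT = -0.3         # rolling sentiment on a -1..1 scale


def should_hand_off(confusion_triggers: int, steps_taken: int, sentiment: float) -> bool:
    """Return True if the conversation should be escalated to a human agent now."""
    return (
        confusion_triggers >= MAX_CONFUSION_TRIGGERS
        or steps_taken > MAX_STEPS_TO_RESOLUTION
        or sentiment < MIN_SENTIMENT
    )


# Example: two misunderstandings trigger an immediate handoff even with mild sentiment.
print(should_hand_off(confusion_triggers=2, steps_taken=4, sentiment=-0.1))  # True
```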

Look at humans and bots together

In customer service, there is nothing more frustrating than explaining a problem, then being handed to another care advisor…only to re-explain the whole problem again. Care organizations should look at both human and bot agents on a single platform so each is aware of what the other is doing.

The inability to escalate conversations to humans or outside of the platform is a frequent source of pain for brands experimenting with bots. Consumers feel “dead-ended” when a bot can’t help and they’re pushed to a totally separate channel to start over. It’s important to put in place the human-bot “tango” of passing a conversation back and forth between bot and human, which is what makes interactions with consumers effective.
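One way to picture what a single platform buys you is a conversation object whose history travels with every handoff, so the human agent picks up exactly where the bot left off. The sketch below is a minimal, hypothetical model; the field and agent names are invented for illustration.

```python
# Hypothetical sketch: a handoff that carries the full transcript and recognized
# intent with it, so the receiving human agent never asks the consumer to start over.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Conversation:
    consumer_id: str
    intent: str
    transcript: List[str] = field(default_factory=list)
    owner: str = "faq_bot"  # illustrative bot name

    def add_turn(self, speaker: str, text: str) -> None:
        self.transcript.append(f"{speaker}: {text}")

    def hand_off(self, to_agent: str, reason: str) -> None:
        # The same conversation object (and its history) moves to the human agent.
        self.add_turn("system", f"Transferred from {self.owner} to {to_agent}: {reason}")
        self.owner = to_agent


convo = Conversation(consumer_id="c-123", intent="billing_dispute")
convo.add_turn("consumer", "I was charged twice this month.")
convo.add_turn("faq_bot", "I can help with billing questions.")
convo.hand_off(to_agent="human_agent_42", reason="billing disputes require a human")
print(convo.owner)            # human_agent_42
print(len(convo.transcript))  # the full history travels with the conversation
```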

Though brands today can make their first forays with bots and AI more confidently, as modern technologies have improved significantly, broad adoption is not possible until this experimental “let’s try a bot and see what happens” phase is over. Until brands take a more comprehensive approach, just as they do with managing software in their IT group and managing agents in their contact centers, there is a risk of another failure.


This post was originally published on MarTechAdvisor.com 
