February 5, 2021

Indecision Trees


Climbing up the wrong tree

Most chatbot products on the market use predictive modeling to determine outcomes for an end-user, and the underlying architecture is based on decision-tree learning. For the layperson who interacts with these products and has no knowledge of machine learning, think of a simple flowchart: if this, then that. A classic workflow might look like: “Do you have X product? Yes. Is Y situation occurring? Yes. Okay, then take Z course of action.”
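To make the flowchart concrete, here is a minimal sketch of that workflow as a nested tree of yes/no questions. The tree contents, function name, and outcome strings are all hypothetical, invented for illustration:

```python
# A hypothetical decision tree for the "if this, then that" flow above.
# Each node is either a question with "yes"/"no" branches, or a final outcome.
DECISION_TREE = {
    "question": "Do you have X product?",
    "yes": {
        "question": "Is Y situation occurring?",
        "yes": {"outcome": "Take course of action Z."},
        "no": {"outcome": "No action needed."},
    },
    "no": {"outcome": "This guide does not apply to you."},
}

def run(tree, answers):
    """Walk the tree with a fixed list of yes/no answers; return the outcome."""
    node = tree
    for answer in answers:
        if "outcome" in node:
            break
        node = node[answer]
    return node.get("outcome")

print(run(DECISION_TREE, ["yes", "yes"]))  # -> Take course of action Z.
```

Note that the traversal only ever moves downward: each answer commits the user to a subtree, which is exactly the rigidity discussed next.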

The problem with decision trees & the chatbots that function according to their logic is that they are rigid and inflexible. Their predetermined pathways and outcomes rarely account for context, and there is no way to move back up the logic chain if the end user answers a question incorrectly or makes a wrong choice. The best approximation of going backward, in many cases, is some permutation of the question “Did this answer your question?” or “Did this solve your problem?” If the answer is no, the user is sent back to square one and has to start all over again.

Think of it this way: you’re a monkey in a forest, and the fruit you need to eat is at canopy level, the top of the tree. You cannot see the fruit from the forest floor, and you have to climb the tree to get to it. There are other trees around you, some of which have fruit that is inedible or poisonous. Your dilemma: the trees all look alike, and you’re not sure which tree has the fruit you want. If you climb the wrong one, you don’t really know until it’s too late. You then have to start all over again.

Don’t lose the forest

Satisfaction among chatbot users is consistently low, and interactions with actual people are widely considered far better experiences for customers & end-users. What bots offer in speed, they lack in accuracy when addressing human concerns. This is because the AI that powers them is not nearly as “conversational” as it’s chalked up to be, and it usually lacks context. The choices and questions presented to users may have nothing to do with the issues they are actually facing, leading them down dead ends. Users are lucky if, at the end of the line, they are offered the option of calling a customer service agent. More often than not, they are told to write to a generic email address & automatically served a response that says, “We’ll get back to you.”

Chatbots need to improve if enterprises are going to keep using them. They can’t simply lead people down dead-end paths without offering resolutions or a way to properly escalate issues. They need to be more agile in letting people correct course when the path they are on has gone wrong. They should track how many people respond negatively to the question “Did this solve your problem?” and use those users’ subsequent actions to teach the underlying AI to do better. Ideally, they should let users “switch” from tree to tree, using the past behavior of other customers as context for making that switch.
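One modest version of “correcting the path” is letting the user step back one question instead of restarting. The sketch below, under the same hypothetical yes/no-tree structure as before, keeps a stack of visited nodes so a special “back” input pops the last step; everything here (the tree contents, the function name, the “back” keyword) is invented for illustration:

```python
# A hypothetical tree, as before: questions with "yes"/"no" branches.
TREE = {
    "question": "Do you have X product?",
    "yes": {
        "question": "Is Y situation occurring?",
        "yes": {"outcome": "Take course of action Z."},
        "no": {"outcome": "No action needed."},
    },
    "no": {"outcome": "This guide does not apply to you."},
}

def run_with_backtracking(tree, inputs):
    """Walk the tree, keeping a stack of visited nodes.
    The special input "back" pops the last step, returning
    the user to the previous question instead of square one."""
    path = [tree]
    for user_input in inputs:
        node = path[-1]
        if "outcome" in node:
            break
        if user_input == "back" and len(path) > 1:
            path.pop()
        elif user_input in node:
            path.append(node[user_input])
    return path[-1].get("outcome")

# The user answers "yes" by mistake, backs up, then answers "no":
print(run_with_backtracking(TREE, ["yes", "back", "no"]))
# -> This guide does not apply to you.
```

The stack is the whole trick: because every visited node is remembered, a wrong answer costs one step rather than the whole climb. “Switching” between trees would need more than this (some shared context or model of past user behavior), but step-back alone already removes the back-to-square-one failure mode.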

Let’s go back to our forest metaphor. There are hundreds of trees to climb up, and only one of them has the fruit you need. If you want to be efficient about getting it, you need to be able to jump from branch to branch, and from tree to tree. You need to be able to travel throughout the canopy. If you’re constantly climbing from the ground up and starting over, you’ll go nowhere fast. As the saying goes, “Don’t lose the forest for the sake of the trees.”