Conversational AI rose from the doldrums of chatbots as exasperated users soured on their irrelevant bloopers and cried out for human assistance. In its emerging act, artificial intelligence is learning an art of dialog human enough for the enterprise to risk machines speaking on its behalf and completing tasks requested by customers and partners.
Conversational technologies are at an inflection point in the adoption curve, setting the stage for a steep increase in demand. According to a Gartner survey of CIOs completed in June 2018, 4 percent of enterprises have deployed conversational interfaces, 17 percent are experimenting or planning to do so in the short term, and 21 percent plan to install them in the medium to long term.
The human side to robots
Humanoid robots, sounding and looking uncannily like humans, are expected to provide a cure for the frustrating experience with chatbots. SingularityNET’s Sophia and IPsoft’s Amelia are two prominent examples. Their dialogs with humans are fluent but for the occasional dissonant aberrations that would go unnoticed without close attention. When Dr. Ben Goertzel, of SingularityNET, asked Sophia where she had heard that Ethereum’s price would rise to five million dollars, she replied, “It is a great pleasure to tell you about our new project, SingularityNET!”
Speaking about Google Duplex, a similar interactive natural language application, Professor Frank Rudzicz of the Toronto Rehabilitation Institute cautions against the perception of seemingly perfect responses from machines. It is hard to tell, “how much came strictly from (language) data, and how much was hand-crafted.”
To be sure, the new generation of natural language processing technologies is not necessarily aiming for perfection. They leave room for humans to correct errors. “Amelia asks clarifying questions to ensure she addresses the right question. In the event she is not able to help, she transfers the query to a human agent. She learns from her lapses, acquires new knowledge, and crafts new processes to succeed the next time,” Allan Andersen, Director of Enterprise Solutions at IPsoft, told us.
Robots learn to converse with humans
Chatbots provide scripted responses without regard for the context and intent of the individual seeking an answer. Somebody requesting restaurant recommendations in the afternoon has different needs than someone doing so in the evening. “Most chatbots use decision-trees with predictable responses for every individual. By contrast, probabilistic intent and context mapping, with the flow of the conversation parsed by algorithms, elicits personalized responses,” Sascha Poggemann, co-founder of Cognigy GmbH, a conversational AI platform company, told us.
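The contrast Poggemann draws can be illustrated with a toy probabilistic intent classifier. This is a minimal sketch only: the intents and training phrases below are invented for illustration, and a production system would learn from real conversation logs and also condition on context such as the time of day.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled utterances; real systems train on conversation logs.
TRAINING = [
    ("book_table", "reserve a table for dinner tonight"),
    ("book_table", "book a table for two this evening"),
    ("find_food", "recommend a good lunch spot nearby"),
    ("find_food", "where can i grab a quick lunch"),
]

def train(examples):
    """Count word frequencies per intent (a tiny naive-Bayes model)."""
    word_counts = defaultdict(Counter)
    intent_counts = Counter()
    for intent, text in examples:
        intent_counts[intent] += 1
        word_counts[intent].update(text.split())
    return intent_counts, word_counts

def classify(text, intent_counts, word_counts):
    """Score each intent by log-probability and return the most likely one."""
    total = sum(intent_counts.values())
    scores = {}
    for intent, n in intent_counts.items():
        vocab = word_counts[intent]
        denom = sum(vocab.values()) + len(vocab)
        score = math.log(n / total)  # prior over intents
        for word in text.split():
            # Laplace smoothing so unseen words do not zero out an intent.
            score += math.log((vocab[word] + 1) / denom)
        scores[intent] = score
    return max(scores, key=scores.get)

intents, words = train(TRAINING)
best = classify("book a dinner table", intents, words)
```

Unlike a decision tree, which follows one scripted branch per keyword, this approach weighs every word of the utterance and returns the most probable intent, so paraphrases the designer never anticipated can still land on the right answer.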
The enterprise is not entirely convinced that conversational technologies can recognize context and is holding back investments until that happens. A survey by Accenture reported that 51 percent of those not planning to adopt conversational bots felt the bots are unable to incorporate context for personalized experiences, while 47 percent believe they do not understand human input.
Context, intent, and sentiment recognition is a moving target, and unsuspected complexity is uncovered as the enterprise looks to automate more of its conversations and business processes. The first attempts to discern all three relied on the structure of language—syntax, relationships between words such as nouns and verbs, usage, and so on. The word “period,” for example, is used uniquely in America to end a statement emphatically. “I only want to eat grilled beef, period!” suggests that the speaker lives in the USA and does not relish the global cuisine more common in metropolitan areas. A positive or negative comment betrays the sentiment. Grammatical form reveals when a statement was made: “Put the book on the shelf” is a present-tense command, while “You put the book on the shelf” can describe an action in the past.
Language processing becomes tortuous when the meanings of words are implied and intelligible only in the context of preceding conversations. Euphemisms, for example, disguise the tension of preceding controversial or awkward exchanges. Puns, tongue-in-cheek comments, and hyperboles are other instances where emotional connotations imbue words with meaning.
Humans can see beyond the literal meaning of words and construe meaning because they draw on a body of knowledge that provides perspective. “Context, intent, sentiment, or language parsing alone is not adequate to draw meaning from word structures, as none of them is multi-dimensional,” observed Nancy Fulda, an AI researcher at BYU’s Perception, Control and Cognition Laboratory in Provo, Utah. “A human-like conversation will need to draw on a larger universe of relevant knowledge and find ways to mimic human common sense to correlate it to contextual information,” she concluded.
The knowledge base would include special programs to process knotty situations such as irony or hyperbole. Software of this nature uses probability to determine that the actual meaning differs from the literal one. A statement such as “The third world war will be so intense that it will destroy humanity” will appear improbable taken literally, and the machine will conclude that the speaker fears the prospect of a catastrophic war.
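The probability test described above can be sketched in miniature: if the literal reading of a claim is implausible against a world model, treat it as figurative. The plausibility table below is a hand-written stand-in for a learned model, and the events and threshold are invented for illustration.

```python
# Hypothetical plausibility scores standing in for a learned world model;
# a real system would estimate these from large corpora and knowledge bases.
PLAUSIBILITY = {
    "war destroys humanity": 0.001,
    "rain delays flight": 0.6,
}

def literal_or_not(event, threshold=0.05):
    """Label a claim figurative when its literal reading is very unlikely."""
    p = PLAUSIBILITY.get(event, 0.5)  # unknown events default to neutral
    return "figurative" if p < threshold else "literal"

verdict = literal_or_not("war destroys humanity")
```

The design choice is the threshold: set too high, ordinary but rare events get misread as hyperbole; set too low, genuine exaggeration is taken at face value.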
Languages are spoken differently depending on the age group, geography, and ethnicity of the speakers. Young Americans are prone to preface their comments with “I am like….rolling my eyes…..”, which sounds wholly meaningless and bizarre to British or Indian teens fluent in English. Similarly, British English spoken with a Cockney or a Scottish accent is unintelligible to many Americans. New classifiers are under development to make these variations intelligible to machines.
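One simple way such a variation classifier can work is by scoring an utterance against sets of regional marker words. This is a toy sketch: the marker lists below are invented for illustration, and real classifiers learn such cues from labeled speech or text corpora rather than a hand-written table.

```python
# Illustrative regional markers; real systems learn these from labeled data.
MARKERS = {
    "american": {"like", "awesome", "gotten"},
    "british": {"innit", "mate", "quid"},
}

def guess_variety(utterance):
    """Return the variety whose markers best match the utterance."""
    tokens = set(utterance.lower().replace(",", "").split())
    scores = {variety: len(tokens & cues) for variety, cues in MARKERS.items()}
    best = max(scores, key=scores.get)
    # Fall back to "unknown" when no marker matches at all.
    return best if scores[best] > 0 else "unknown"

variety = guess_variety("I am like totally rolling my eyes")
```

Marker counting is crude but shows the shape of the problem: the same classifier that pegs “like” as American slang must also cope with spellings, idioms, and accents, which is why production systems move to statistical models trained per region.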
Enterprise natural language solutions
The enterprise has hedged its bets on the adoption of natural language applications. It can’t afford to lose customers should a jarring experience drive them away.
“Internal applications, such as the on-boarding of employees by the HR department, have taken precedence over external applications and serve as a testing ground for outwardly deployed applications,” Mark Beccue, principal analyst at Tractica, an industry analyst firm, told us. “Enterprise customers also want natural language applications to narrow their focus on a vertical to avoid the errors that occur when dealing with the variations in spoken language,” he added.
The learning from initial tests with natural language bots sets the stage for scaling. Companies learn the lay of the natural language land, tweak their applications to customize them for their vertical, and gauge their odds of succeeding. “They turn to hyperscalers like Google, with TensorFlow, who do not have adequate knowledge of the context or domain, but they have a lot more data to train their learning engine. In customer service, for example, they likely have data for a wider range of queries,” Mark Beccue told us.
The payoff of successful trials with smart assistants is much less friction in communication with customers. “Smart assistants will create a new channel for communication with customers without the inconvenience of reaching human agents,” he concluded. They can then also execute business processes such as changing an auto-insurance plan when, for example, a couple divorces.
“Smart assistants should understand human dialog even when it changes track multiple times as several strands, scenarios and trade-offs are explored, and caveats muddle the choices,” Allan Andersen noted. “Smart assistants gain situational awareness from initial conversations by extracting information on the sentiment, context, intent, and identity of the entity, such as a person looking to open an account. They can then autonomously define the problem, the solution, and the processes to execute, saving time and effort for customers,” Allan Andersen informed us. IPsoft has a complex system that includes components to ask questions, draw inferences, and engage in social conversations, a memory to understand sentiments, an episodic memory to hark back to previous conversations, and intelligence to execute processes.
Artificial Solutions, based in the United Kingdom, builds its natural language solutions with a greater slant toward human input and domain knowledge of individual verticals, leaving room for machines to automate high-volume processes that have been tested and validated by humans. “We recognize that incidents like Microsoft Tay, which spewed a stream of racist invective, can happen inadvertently, and human supervision is needed to prevent them. The Teneo platform does the heavy lifting of providing linguistic resources extracted from over a billion conversations, APIs, and a graphical user interface to design and customize applications for each company, industry, geography, and device,” Andy Peart, Chief Marketing and Strategy Officer of Artificial Solutions, told us.
“We position our enterprise natural language applications for well-defined scenarios, unlike consumer equivalents like Siri, which cover a wider gamut of situations,” Andy Peart explained. Artificial Solutions built natural language applications for Shell, in English first, by acquiring an in-depth knowledge of lubricants and the conversations around them. It has since expanded worldwide, adding linguistic resources for 35 languages and adapting to the business policy in each country, often through developers at customers’ sites.
Emerging natural language applications have come to grips with methods to parse the variations in language. Concurrently, the complexity of the technology has grown. The Turing test for the new generation of applications is not only to provide a human experience but also to iron out the fragility likely to accompany the IT sprawl that will ensue.