The Big 4 and Chatbot Technology

How the Big 4 are using Chatbot Technology

The term “chatbot” has been a buzzword for a few years now, and businesses across multiple industries are adopting the technology. In simple terms, chatbots are computer programs that can hold a conversation with humans. Whilst chatbots tend to be text based and good at answering set questions, the emergence of voice-controlled assistants like Alexa and Siri has ushered in a new generation of what we are calling conversational AI. These systems have moved on from simple FAQ-style platforms towards far more human-like conversation.

Conversational AI brings a huge number of consumer benefits to an increasingly digital world. The biggest is that it is available 24/7. Consumers want to be connected, and conversational AI allows that to happen. We can talk to Alexa whenever we want, search Google in a second, find almost anyone on Facebook instantly and purchase products in a single click. Nobody wants to spend time waiting on the phone, and answering a query within the hour increases the likelihood of converting a lead sevenfold.

Over 30% of customers expect to see a live chat option on a retail site, and Gartner expects that around 40% of customer interactions will be managed by a conversational AI agent in 2020. Being cost effective, accurate, fast and scalable makes conversational AI a key business focus, and the Big 4 technology firms (Google, Amazon, Facebook and Microsoft) have it right towards the top of their respective lists. This article looks at how these companies are using conversational AI technology and taking it further into the future.

The Big 4 and Conversational AI Chatbots

Siri and Alexa have become household names in America, and Xiaoice has been a digital friend to millions in China since 2014. This has created a lot of hype around conversational chatbots and their possibilities. It is worth following the developments of the Big 4 because they are starting to overcome some of the obstacles that have held conversational AI back, such as:

  • Chatbots are easily confused as they don’t have enough knowledge
  • Chatbots find it hard to interpret and understand different languages
  • Chatbots are not secure

Recent developments are finding solutions to these problems. Firstly, the vast amounts of data held by Google, Facebook, Amazon and Microsoft bring a huge amount of knowledge. If you consider that Alexa, Siri, Cortana and Facebook Messenger collect data during every interaction across all their combined users, the possibilities are almost endless. Microsoft, for one, has said Cortana was designed to get smarter with every use, and that this feeds the development of its wider AI tools.

The only way to have a truly human-like conversation is to use the data created by humans, and each of these companies is doing exactly that.

Coupled with the massive amount of data is the need for these corporations to follow customer demand. Since 2015, monthly active users of messaging apps have outnumbered those of social networks. During 2018, WhatsApp hit the 1 billion user mark; to put that into perspective, roughly one in seven people worldwide is connected via WhatsApp. In case you didn’t know, WhatsApp is owned by Facebook. Similar metrics apply to the Chinese messaging app WeChat, which has seen astronomical growth in the last few years. Messaging has become a major way people interact with their smartphones, so companies want chatbots to be part of the conversation. The Big 4 have the data, budget, customer base and technology to be major players in how the conversational AI world progresses. Their platforms will provide the foundations on which smaller businesses build, and they are worth following closely as a glimpse into the future.

Facebook and Conversational AI

Facebook arguably owns the two most popular messaging platforms in Messenger and WhatsApp. With that amount of data, it has a very clear view of how people behave and the sort of questions they ask, including the way they talk to different people or companies. Using this information, Facebook is in a prime position to invest heavily in conversational bots.

Facebook first opened Messenger up to developers in 2016. Originally, Messenger was a way for Facebook members to chat privately without every conversation becoming a public spectacle. As the chat function grew in popularity, businesses realised they needed to be where their customers are. Once Facebook made the platform available to everyone, it became possible to build a chatbot with only minimal technical know-how.

Businesses see Facebook Messenger bots as a great way of tapping into a huge customer base at a relatively small cost. For example, the Marriott International bot allows members to edit their hotel reservation instead of having to pick up the phone; given this is one of the primary reasons for calling, the cost savings are significant. Disney uses Facebook bots to engage children with film characters as movies are released. Staples sends its users shipping notifications and other updates.

A basic Facebook Messenger bot can be set up in around 10 minutes, which makes it a bit of a business no-brainer. Messenger was the third most used app in the world as of 2018, and over 2 billion messages are already exchanged with businesses every month. Across the rest of the app market, 71% of users delete an app within 90 days of downloading it, which gives Facebook a clear edge over standalone chatbot apps. Why build one on your own site when Messenger has such a huge reach?
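To give a sense of how small that lift is, here is a minimal sketch of a Messenger bot webhook in Python using Flask. The verify token, page access token, Graph API version and echo reply are placeholders chosen for illustration, not a production setup.

```python
import os
import requests
from flask import Flask, request

app = Flask(__name__)

# Placeholder credentials, normally taken from the Facebook app and page settings.
VERIFY_TOKEN = os.environ.get("VERIFY_TOKEN", "my-verify-token")
PAGE_ACCESS_TOKEN = os.environ.get("PAGE_ACCESS_TOKEN", "page-token")
SEND_API = "https://graph.facebook.com/v12.0/me/messages"  # API version is an assumption

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook calls this once to confirm we own the endpoint.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def receive():
    # Each POST carries one or more messaging events from users.
    payload = request.get_json(silent=True) or {}
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            sender = event["sender"]["id"]
            text = event.get("message", {}).get("text")
            if text:
                reply(sender, f"You said: {text}")  # trivial echo bot for illustration
    return "ok"

def reply(recipient_id, text):
    # Send a text message back via the Messenger Send API.
    requests.post(
        SEND_API,
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

if __name__ == "__main__":
    app.run(port=5000)
```

In practice the echo logic would be swapped for whatever the business needs, such as booking changes or shipping updates, but the webhook plumbing stays this small.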

For digital marketing, Messenger is proving a success. Assuming consumers have consented, a bot can send notifications when items are left in a basket or when special offers become available. It can tie up parts of the customer journey that marketing previously could not reach and had to leave to the inbox. On average, customers are 3.5 times more likely to respond to a Facebook message than to an email. One common frustration with chatbots is that they do not sound very human; the whole point of conversational AI is to create programs that convincingly imitate human speech, yet chatbots often return the wrong answer or repeat the same answer continually. Messenger is more conversational than most in its responses because of the huge amount of data available.

Microsoft and Conversational AI

Unlike Facebook, Microsoft does not have a large social network of consumers or the vast volumes of data that reveal their behaviour. When it released its assistant, Cortana, it was largely for the purpose of gathering data and finding out what people needed to know.

Microsoft is attempting to fill the gaps in conversational AI that the likes of Facebook have not covered, the main one being context. Imagine asking a chatbot to book a dentist appointment for a certain date. The following day, an important meeting is requested for the same day as that appointment. Traditionally, the chatbot is not capable of the contextual back-and-forth conversation needed to rearrange the dates properly.

In 2018, Microsoft acquired Semantic Machines. This acquisition was completed with the objective of leveraging breakthroughs in conversational intelligence from the purchased business. This technology is set to be integrated with all Microsoft products, including Cortana. Microsoft are suggesting this will give them an almost unrivalled edge in emotional and contextual conversations.

The natural language technology in today’s intelligent assistants such as Cortana leverages machine learning to understand the intent of a user’s command. Once that intent is determined, a handwritten program – a skill – is triggered that follows a predetermined set of actions. This is essentially how Facebook Messenger bots work as well: ask a question and they provide the right pre-determined response.

For example, the question, “Who won today’s football match between AC Milan and Barcelona?” prompts a sports skill that follows the rules of a pre-coded script to fill in slots for the type of sport, information requested, date and teams. “Will it rain this weekend?” prompts a weather skill and follows pre-scripted rules to get the weekend forecast. We see this use of skills in Amazon Alexa as well and although that is purely voice activated, the same applies.
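As a rough illustration of that skill model, the sketch below routes an utterance to a hand-written skill that fills pre-defined slots. The keyword matching stands in for the machine-learned intent model, and the skill names, rules and slots are invented for this example.

```python
import re

def sports_skill(utterance):
    # Hand-written rules fill pre-defined slots: sport, teams and date.
    match = re.search(r"between (.+) and (.+?)\?", utterance)
    slots = {
        "sport": "football",
        "teams": f"{match.group(1)} v {match.group(2)}" if match else "unknown teams",
        "date": "today",
    }
    return f"Looking up the {slots['sport']} score for {slots['teams']} ({slots['date']})..."

def weather_skill(utterance):
    # Another scripted skill with its own slots: location and time frame.
    slots = {"location": "current location", "when": "this weekend"}
    return f"Fetching the {slots['when']} forecast for {slots['location']}..."

# Each skill is registered with the keywords that trigger it.
SKILLS = {
    "sports": (["match", "score", "won"], sports_skill),
    "weather": (["rain", "forecast", "weather"], weather_skill),
}

def dispatch(utterance):
    # Stand-in for the intent model: pick the first skill whose keywords appear.
    lowered = utterance.lower()
    for name, (keywords, skill) in SKILLS.items():
        if any(word in lowered for word in keywords):
            return skill(utterance)
    return "Sorry, I don't have a skill for that yet."

print(dispatch("Who won today's football match between AC Milan and Barcelona?"))
print(dispatch("Will it rain this weekend?"))
```

Every new scenario needs another scripted skill and another set of trigger rules, which is exactly the limitation described next.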

Developers need to write a script for each of those scenarios and anticipate what is likely to be needed. It is impossible for humans to manually code every permutation, and this is where the Microsoft technology could have a significant competitive advantage. The system will learn its own functionality from the data it collects, a form of machine learning: it will learn how to map what people say to the set of actions needed to carry out the request.

As an example, instead of the developer telling the computer it needs to use a sports skill, they will show it how it can get sports scores depending on the context. No matter what the sport, the system will be able to work out the best possible answer. If the program learns how to get sports scores, Microsoft say it will be able to generalise that to weather or traffic reports too. This is all because it has learned a concept and not simply a skill. The idea of using concepts at Microsoft to progress conversational AI could be ground-breaking in the very near future.
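The difference from the hand-routed sketch above can be illustrated by learning the mapping from example utterances rather than writing rules. This is only a toy, using scikit-learn on a handful of invented phrases; Semantic Machines’ technology is not public, so the sketch shows the general idea of learning utterance-to-action mappings, not Microsoft’s actual approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: utterances labelled with the action they need.
utterances = [
    "who won the football match last night",
    "what was the score in the tennis final",
    "will it rain this weekend",
    "what's the forecast for tomorrow",
    "how bad is traffic on the motorway",
    "is the road into town busy right now",
]
actions = ["get_score", "get_score", "get_forecast",
           "get_forecast", "get_traffic", "get_traffic"]

# The model learns the mapping from words to actions instead of a developer
# hand-writing a trigger rule for every possible phrasing.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, actions)

print(model.predict(["who won the match"]))          # likely get_score
print(model.predict(["will it snow this weekend"]))  # likely get_forecast
```

A real system would learn from millions of interactions rather than six phrases, but the shift is the same: the routing becomes a learned concept rather than a coded skill.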

Amazon, Google and Conversational AI

These two giants are grouped together because of their mainstream foray into the voice-controlled device market.

Around 30% of US adults own a smart speaker, and Amazon Alexa dominates that market with over 60% share. With millions of devices sold, the Amazon Echo voice command device has revolutionised our behaviour in the home. Google Home takes around 24% of the market.

Whenever you ask Alexa a question or give it a command, the Echo records the audio and uploads the snippet to Amazon’s cloud servers. Those servers translate the audio into text, then figure out the best way for Alexa to answer. That info gets sent back to your Echo speaker, where Alexa translates the text back into a spoken response. All of this happens in about a second.
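That round trip can be pictured as a short pipeline. The functions below are placeholders for Amazon’s internal speech recognition, intent resolution and speech synthesis services, which are not public; the sketch only shows the shape of the flow.

```python
def speech_to_text(audio_bytes):
    # Placeholder for the cloud speech recognition step.
    return "what is the weather today"

def resolve_intent(text):
    # Placeholder for the natural-language understanding step that picks a skill.
    if "weather" in text:
        return {"intent": "GetWeather", "slots": {"when": "today"}}
    return {"intent": "Unknown", "slots": {}}

def run_skill(intent):
    # Placeholder for the skill that actually produces an answer.
    if intent["intent"] == "GetWeather":
        return "It will be sunny with a high of 21 degrees."
    return "Sorry, I don't know how to help with that."

def text_to_speech(text):
    # Placeholder: the real pipeline returns synthesised audio, not a string.
    return f"<audio>{text}</audio>"

def handle_request(audio_bytes):
    # The full round trip the article describes, normally within about a second.
    text = speech_to_text(audio_bytes)
    intent = resolve_intent(text)
    answer = run_skill(intent)
    return text_to_speech(answer)

print(handle_request(b"...raw microphone capture..."))
```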

Amazon has built Alexa from the ground up. When the devices were first released, there were several reports of incorrect or unknown responses because Alexa had not yet learnt them. Amazon asks users to enable skills on their device, and these set the context for a question: if you ask Alexa to open a music skill, for example, you can then ask questions about songs and artists. It does not yet have the intelligence to handle “skill-less” commands.
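From a developer’s point of view, a skill is essentially a set of intent handlers. Below is a minimal sketch using the Alexa Skills Kit SDK for Python (ask-sdk-core); the intent name, slot and response text are invented for illustration.

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class PlayArtistIntentHandler(AbstractRequestHandler):
    """Handles a hypothetical PlayArtistIntent inside a music skill."""

    def can_handle(self, handler_input):
        return is_intent_name("PlayArtistIntent")(handler_input)

    def handle(self, handler_input):
        # Slots arrive already parsed out of the user's utterance.
        slots = handler_input.request_envelope.request.intent.slots or {}
        artist_slot = slots.get("artist")
        artist = artist_slot.value if artist_slot and artist_slot.value else "that artist"
        return handler_input.response_builder.speak(
            f"Playing songs by {artist}."
        ).response


sb = SkillBuilder()
sb.add_request_handler(PlayArtistIntentHandler())

# Entry point when the skill's backend is hosted on AWS Lambda.
lambda_handler = sb.lambda_handler()
```

The handler only ever sees requests that match the intents declared for the skill, which is why commands outside an enabled skill go unanswered.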

Google, in contrast to Amazon, has a long history of consumer knowledge. Its search engine has been analysing what people want to know, and how they search for it, for many years. This put Google ahead of Amazon in conversational intelligence from the outset, because it had far more existing data to make use of.

The Alexa platform seems to be more focused on integrations, whereas Google sees conversational AI itself as the biggest area of potential development. A little like Microsoft, Google is looking at more contextual integration for its conversational AI. For example, instead of having to say “Hey Google” before every request, users will be able to ask for the weather, send a text message, look at the top trending Tweets and then check replies to their messages in one continuous exchange. The aim is to do this in far more natural language as well. On top of that, Google is allowing users to dictate emails with automatic punctuation and spelling.

Google wants its users to have an actual dialogue with the assistant, as if it were another human. Amazon is trying to go down the same route with Alexa by developing follow-on conversations, and the device can now correct its own mistakes: if Alexa recognises that it has misinterpreted a command, it can work that out and correct itself. Both Amazon and Google are trying to make a more natural-sounding device, and there will be a “battle of the smartest” as time goes on.

The future

Everything looks bright for conversational AI. Over the next few years, we can expect both voice and text chatbots to become far more contextual and emotional. Consumers are demanding a seamless mix of the digital and offline worlds, and conversational AI can deliver exactly that.

However, chatbots are still bumbling along, trying to find a way to generate a return on investment. Whilst some larger companies have had success, this has not been matched by smaller companies or in the mainstream. In fact, the likes of Alexa and Google Home are often seen as service devices rather than retail channels; the proportion of owners actually purchasing through their voice device remains small.

As conversational AI becomes more intelligent, it will breed better engagement from the consumer. The last two or three years have seen a certain degree of tinkering and amending as the bots seek perfection, but as the developments in this article suggest, that is all set to accelerate. Machines will start to handle the nuance and context of our language, enabling truly conversational experiences. Bring on the world of non-human conversation!