Alice has been plagued with anxiety her whole life.
Today, she’s feeling particularly jittery: an evaluation at her new job is looming.
Sighing, she taps out a long, heartfelt message on her phone: she’s scared her colleagues are laughing behind her back; she thinks she’s not smart enough for the job; she’s probably on the cusp of being fired. Sent.
Within seconds, a reply lights up her screen.
“You’re engaging in distorted thinking: it’s a widespread practice.
These inaccurate thoughts are usually used to reinforce negative thinking or emotions — telling ourselves things that sound rational and accurate, but only serve to keep us feeling bad about ourselves.
“I’m going to send you a worksheet to fill out that will help you see things more clearly.”
It’s the kind of measured, fact-driven response Alice has come to expect. She writes down her thoughts, and her nerves stop jangling.
Recently, her anxiety has been improving. But she’s not in therapy. She gave up medication a long time ago. There’s no new boyfriend on the scene.
Alice is using Woebot, the world’s first official chatbot therapist. Created by a team of Stanford psychologists and AI experts, Woebot uses conversations, curated videos, mood tracking, and word games to support mental health.
It’s not just shiny new gadgets and smart home experiences anymore: Woebot is one example of how AI is seeping into every corner of our existence.
AI isn’t going to take all our jobs or wipe out humanity. But it is making waves, from the playfully ridiculous to the life-changingly serious; in our homes, our businesses, and our personal lives.
Is it a welcome guest — or an intruder?
It’s a theme that has spun itself through the media for decades: hyper-connected, AI-powered, IoT smart devices that anticipate our needs before they fulfil them. Despite substantial technological advances, that promise has yet to be realized.
The so-called ‘smart home’ market is fragmented and without a clear trailblazer, although competition to dominate the space is fierce: it’s expected to be worth $137.91 billion by 2023.
Unsurprisingly, security is the primary concern. Using AI to unlock our iPhone is one thing; using it to unlock our homes is something else entirely. Access to our most personal space comes with huge potential risks.
And so, the tolerance for mistakes is vanishingly low.
“If you have a robot at home,” notes futurist Gary Marcus, a professor of psychology at New York University, “you can’t have it run into your furniture too many times.
You don’t want it to put your cat in the dishwasher even once.”
In 2016, the Mirai IoT botnet attacked online consumer devices — including home routers, baby monitors, and webcams — and brought down internet infrastructure across Europe and the US, causing an estimated $110 million in economic damage.
In another (slightly funnier) PR disaster, Amazon’s Alexa placed an order for a $170 dollhouse at a child’s request. When a news channel reported the incident, Alexas across the country mistook the broadcast as a command and placed repeat orders.
But despite all this, the potential for AI-powered devices at home is enormous, and we take steps towards a smarter future every day.
As McKinsey predicts:
“A smart home will be akin to a human central nervous system. A central platform, or “brain,” will be at the core. Homebots can be as diverse as their roles: big, small, invisible (such as the software that runs systems or products), shared, and personal.
Some homebots will be companions or assistants, others wealth planners and accountants. We will have homebots as coaches, window washers, and household managers, throughout our home.”
At the moment, connectivity between different consumer devices is impaired by incompatible standards and lack of interoperability.
However, companies like Netatmo are taking us towards total connectivity, giving the option to control all your connected devices using a chatbot in Messenger.
It’s no wonder more and more businesses are relying on AI: routine manual work can be automated, freeing up human resources, saving money, and allowing greater investment in skilled work (not to mention the goldmine of user data businesses gain access to).
Even U.S. Government agencies are looking to deploy chatbots to reduce staff workloads.
Companies continue to roll out chatbots for a growing number of use cases, sometimes blindly, often brilliantly.
A well-designed, strategically-implemented bot can do wonders for commerce; a clunky, unnecessary chatbot will do the opposite.
80% of businesses want chatbots by 2020, which raises the question: when do we need them, and when don’t we?
The greatest success stories tend to lie in the arena of e-commerce and online marketing, thanks to a bot’s ability to re-engage, tell a brand story and convert at a higher rate than email. And, as Adelyn Zhou highlights in her excellent essay:
Questions that are awkward or annoying coming from a brand are socially acceptable and even welcome in chatbot interactions.
In the most simple interaction, visitors will engage with a bot via a pop-up before being sent down some form of sales funnel.
Online retailer ASOS increased orders by 300% with this technique, and got a 250% return on spend while reaching 3.5x more people.
But the scope for AI in business goes beyond online shopping and bots, and exciting developments continue to be unveiled:
From virtual reality to facial recognition, use cases for mainstream AI are becoming more and more widespread.
Can AI ever be a substitute for life’s most meaningful and uniquely-human connection: companionship? Eugenia Kuyda thinks so.
By synthesizing thousands of conversations, she built a chatbot that became a hyper-convincing likeness of a deceased friend. That chatbot evolved into what is now known as Replika.
Replika was rolled out to the general public in November 2017: it mirrors your moods, mannerisms, preferences, linguistic syntax, quirks and speech patterns, until it becomes a ‘replica’ of yourself. It’s been billed as an antidote to millennial isolation and insecurity.
And the results are impressive. Today, more than 500,000 people use Replika, lauding its ‘built-in self-reflection’ and ‘caring demeanor’. Some even claim it’s ‘a better friend than my real friends.’
Replika’s appeal seems to be mainly based on introspection and a kind of harmless narcissism: a bias-free reflection of oneself is valuable to anyone, but perhaps especially to the sensitive and social media-addled.
Therapy is a huge commitment, in terms of both time and money. For many, it’s just not an option. Here’s where systems like Woebot come in: 24/7 support, 365 days a year. Although Woebot’s creators are keen to emphasize that it’s not a replacement for therapy, it can be a much-needed stopgap for those without access to the real thing.
Like Replika, Woebot offers a relationship that is genuinely bias-free, something arguably impossible to achieve with humans.
“It’s almost borderline illegal to say this in my profession, but there’s a lot of noise in human relationships,” says Alison Darcy, the company’s CEO. “Noise is the fear of being judged. That’s what stigma really is.”
Therapists are both eager and cautious about these developments. As chatbot detractors often point out, many bots are thrown at issues for the sake of it.
Can mass-produced therapy be an exception?
Over the past few years, AI and related bots have become decidedly mainstream. But the concept of AI has been around for hundreds of years, fuelling many a philosophical debate.
Way back in 1637, René Descartes discussed the possibility of an artificial mind in his book, Discourse on the Method:
“If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men.”
Then, in the middle of the 20th century, computer science pioneer Alan Turing ran with the same idea and developed the challenge of the now-famous Turing Test (or ‘The Imitation Game’) — a pragmatic way of checking whether a machine exhibits intelligence. He predicted that:
“… in about fifty years’ time, it will be possible to programme computers, with a storage capacity of about [1GB], to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
But AI as we now know it began as a vacation project, coined by John McCarthy in 1956 as part of a research group. McCarthy defined AI as “the science and engineering of making intelligent machines,” and used his work to develop the programming language Lisp in 1958.
Lisp was based on the radical idea of computers using symbolic expressions rather than numbers; and this helped spawn a whole AI industry.
ELIZA, created by Joseph Weizenbaum in 1966, was the very first chatbot. It used pattern matching and substitution to simulate conversation — so convincingly that some users reportedly believed they were talking to a human.
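A minimal sketch of that pattern-matching-and-substitution approach — the rules and pronoun ‘reflections’ below are illustrative examples, not Weizenbaum’s originals:

```python
import re

# A handful of ELIZA-style rules: a regex pattern plus a response template.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person words before echoing the text back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock fallback when nothing matches

print(respond("I am scared of my evaluation"))
# → Why do you say you are scared of your evaluation?
```

The trick that fooled ELIZA’s users is visible here: the bot understands nothing, it simply mirrors the speaker’s own words back inside a question.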
PARRY was constructed by American psychiatrist Kenneth Colby in 1972, imitating a patient with schizophrenia.
Next came Jabberwacky (1988), created by developer Rollo Carpenter. Jabberwacky used an AI technique called ‘contextual pattern matching’, with the ultimate goal of moving from a text-based system to one that was wholly voice-operated.
A.L.I.C.E., first developed in 1994 and later programmed in Java, was one of the first language-processing chatbots; it uses AIML (Artificial Intelligence Markup Language) to specify conversation rules. The Natural Language Toolkit (NLTK), created in 2001, is a set of tools written in Python and specifically designed to make NLP easy to learn; it includes methods for building your own chatbot.
Siri, created by Apple for iOS in 2010, was arguably the next breakthrough. An intelligent personal assistant and learning navigator, it uses a natural language UI and paved the way for all the AI bots and PAs that followed:
Those include Google Now, launched in 2012, which answers questions, performs actions and makes recommendations; Cortana, first demonstrated in 2014, and integrated into both Windows Phone devices and Windows 10 PCs; and Alexa, Amazon’s intelligent personal assistant, introduced in 2014 and now built in to devices such as the Amazon Echo, the Echo Dot, the Echo Show and more.
So where do we go from here? How do we want bots to develop? What human-like capacities should be integrated into AI? And what technologies are making this possible?
Vision and image processing
Image recognition is one of the significant challenges that AI is tussling with. In 2015, deep learning systems from Google and Microsoft beat humans for the first time at identifying objects in images, across more than 1,000 categories.
Such systems are going to be fundamental in scaling image recognition beyond human abilities.
Image processing is also a critical component of self-driving cars: an autonomous vehicle needs to recognize the edge of the highway, the distance of the car in front, the stop sign at the next junction, and so on.
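As a toy illustration of the mechanics underneath, here is a hand-rolled convolution — the basic operation image-processing systems use to find edges — applied to a tiny invented grayscale ‘image’ (all values are made up for the example; real systems run far larger kernels over megapixel frames):

```python
# A 4x4 grayscale image: a dark left half and a bright right half.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# Horizontal-gradient kernel: responds where brightness changes left-to-right.
kernel = [[-1, 1]]

def convolve(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(ker[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))
# Large values mark the vertical edge between the dark and bright halves.
```

This is the same primitive that, stacked thousands of layers deep and with learned kernels, lets a car’s vision system pick out lane markings and road edges.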
Natural Language Processing (NLP)
Natural Language Processing — the application of computational techniques to the analysis and synthesis of natural language — is the holy grail of AI. It might sound abstract and technical, but it’s all around us today: in the Microsoft Word spell check; spam filters; data mining search engines; translation systems.
Even financial markets use NLP by extracting relevant data to help make algorithmic trading decisions.
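To make one of those everyday applications concrete, here is a toy naive Bayes spam filter — the statistical workhorse behind early spam filtering. The training messages and their labels are invented for the example; a real filter would train on millions of messages:

```python
import math
from collections import Counter

# Hypothetical labeled training data.
spam = ["win money now", "free money win big", "claim free prize now"]
ham = ["meeting at noon", "lunch with the team", "notes from the meeting"]

def counts(msgs):
    return Counter(w for m in msgs for w in m.split())

spam_counts, ham_counts = counts(spam), counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, word_counts, total):
    # Laplace smoothing: unseen words shouldn't zero out the probability.
    return sum(
        math.log((word_counts[w] + 1) / (total + len(vocab)))
        for w in message.split()
    )

def is_spam(message):
    s = log_likelihood(message, spam_counts, sum(spam_counts.values()))
    h = log_likelihood(message, ham_counts, sum(ham_counts.values()))
    return s > h

print(is_spam("free money"))    # → True
print(is_spam("team meeting"))  # → False
```

The filter never ‘understands’ the message; it just compares how often each word appeared in spam versus legitimate mail — statistics standing in for comprehension, which is NLP in miniature.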
Speech recognition takes the noises that come from our mouths and (hopefully) converts them into words and sentences that can be processed further by NLP. Speech recognition is crucial to applications like Siri and Alexa.
Although the technology has improved hugely over the last decade, there are still significant challenges to be met in dealing with different regional accents, speech in noisy environments or overlapping speech.
Voice recognition has a wide range of applications, too. For instance, it is used for biometric authentication, which identifies a person by recognizing unique characteristics of their voice. Some banking systems now use voice verification technology to give a measure of confidence that the person speaking is the same person that set up the account.
Researchers have been working on the problems of image recognition, NLP and speech recognition for decades. What are some of the recent developments that have spurred on progress?
Possibly the most fundamental advance lies in the development of machine learning: rather than having to program knowledge into the computer explicitly, you give it a lot of examples and let it develop generalizations which it can apply to new situations.
Machine learning is a method of data analysis that automates analytical model building. It’s a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.
Although there is a vast range of different approaches and techniques encompassed within the notion of machine learning, underlying most of them is statistics. This enables algorithms to make predictions based on extracting patterns from incomplete and often noisy input data.
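That ‘extract patterns from noisy examples’ idea can be shown with one of the simplest statistical learners, ordinary least squares. The underlying rule (y = 3x + 2) and the noise level are invented for the example; the point is that the rule is recovered from data, never programmed in:

```python
import random

# Noisy examples of a hidden rule y = 3x + 2.
random.seed(0)
data = [(x, 3 * x + 2 + random.gauss(0, 0.5)) for x in range(20)]

# Ordinary least squares: estimate slope and intercept from the examples.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum(
    (x - mean_x) ** 2 for x, _ in data
)
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

print(f"learned y ≈ {slope:.2f}x + {intercept:.2f}")
print(predict(100))  # generalizes to an input it never saw
```

The learned coefficients land close to the true 3 and 2 despite the noise, and the model happily predicts for x = 100, far outside its training examples — generalization in its most basic form.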
Within machine learning, artificial neural networks have gained a prominent position and were initially inspired by the networks of connected neurons found in human and animal brains.
Essentially, training an artificial neural net means having the system make a guess, receive feedback, and guess again, continually shifting its internal weights until it reliably reaches the right answer.
As brain-inspired systems designed to replicate the way that humans learn, neural networks adjust those weights to find the link between input and output in situations where the relationship is complex or unclear.
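That guess-feedback-adjust loop can be sketched with a single artificial neuron (a perceptron) learning the logical AND function — the learning rate and epoch count below are arbitrary choices for the example, not canonical values:

```python
# Training examples: inputs (x1, x2) and the target output of AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = bias = 0.0
rate = 0.1

for _ in range(20):                      # repeat over the training data
    for (x1, x2), target in examples:
        guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - guess           # the feedback signal
        w1 += rate * error * x1          # nudge weights toward the answer
        w2 += rate * error * x2
        bias += rate * error

print([1 if w1 * a + w2 * b + bias > 0 else 0
       for (a, b), _ in examples])  # → [0, 0, 0, 1]
```

After a few passes the weights settle and the neuron reproduces AND perfectly. Deep learning stacks thousands of such units into layers and replaces the crude error signal with gradients, but the guess-feedback-adjust loop is the same.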
This decade, artificial neural networks have benefited from the arrival of deep learning, which increases the ‘depth’ of different layers in the network to extract different features until the network can recognize what it is looking for.
While advances in machine learning have involved a lot of complex math and new algorithms, they often require vast amounts of input data and correspondingly huge amounts of computing power.
Faster CPUs and GPUs came to the rescue. A major reason AI is now such a big deal is that only over the last few years has the cost of crunching so much data become affordable.
It was only in the late 2000s that researchers realized that graphical processing units (GPUs), which had been developed for 3D graphics and games, were 20–50 times better at compute-intensive AI tasks than traditional CPUs.
And once people realized this, the amount of available computing power vastly increased, enabling the cloud AI platforms that power many AI applications today.
Incorporating AI into your business can be as simple as automating conversations through live chat. There are many ways to go about it, and it’s important to think carefully about how best to proceed.
Without developer skills, building a chatbot from scratch can be a lengthy and complicated process that doesn’t show returns. That’s why we’ve been working on our Chatbot Builder, a visual tool that maps a normally complex process out in simple terms.
Alternatively, you can use and personalize a pre-made framework. Here are some ways to do it:
Although progress in AI seems to be proceeding at a breathless pace, we’re still at the very start of the relationship between humans and bots.
As we explore the potential of the technology, we will discover new uses for these human-like conversational partners; and we will almost certainly change our own behaviour and expectations in response.
Big questions remain: Do we want chatbots to be more than human (knowledgeable, reliably empathetic, unjudgmental) or less than human (unthreateningly simple, reassuringly robotic)? Only time will tell.
Until then, AI will continue to permeate our lives and ultimately, transform how we live them. I can’t wait to see what happens next.