“Voice assistant, can you make me feel better?”

John Merryman
Publication date: 14 April 2021


With voice platforms adopting a HIPAA-eligible stance, the healthcare industry is poised to embark on rapid adoption of voice services. This post shares the Mobiquity perspective on how voice experiences will manifest in an omnichannel context, with rapidly advancing AI/ML technologies in the mix.

Context is dynamic with voice, and using multiple channels creates fluidity

We are all used to talking to our healthcare provider and are conditioned to use voice as our natural mode of interaction. Imagine being able to use voice to privately interact with your healthcare provider anywhere, anytime, or when you are immobile and need maximum assistance with minimal effort. This is the potential of voice technology for the patient. Let's unpack how we'll get there over the next several years.

The foundation: millions of AI/ML workloads and billions of devices

Over the last four years, voice assistants have crept into the collective consciousness of the average person, most often through casual usage in the home with Amazon Alexa and Google Home. With voice assistants now embedded in cars, phones, tablets, and smart devices, and 4.2 billion assistants in active use in 2020, the number of voice assistants is projected to exceed the world's population by 2024.

Despite the ease with which we’ve integrated voice assistants into our lives, however, the natural language processing required for even the most mundane tasks (e.g., what’s the weather today?) is actually quite immense. (For a great summary of what is going on behind the scenes for your average consumer smart speaker, read this.) The complexity of engineering a great conversational AI helps explain the moments of exasperation most of us have had with voice assistants not understanding what we are trying to accomplish when we talk to them. But these frustrating experiences haven’t dampened consumers’ expectations that assistants will become smarter. Rather, they are a call to action for us to create the skills that deliver the benefits consumers want from voice in their lives.

What is predicted to happen next will unlock a whole new realm of voice intelligence. First, the design interfaces for voice are evolving quickly, not only to handle conversational experiences, but also to interpret data types and user sentiment. In parallel, machine learning is moving quickly to the edge, meaning machine learning models will execute on smaller, cheaper devices, independent of the cloud. Over time, this will make simple functions faster and more localized on the device rather than in the cloud, while deeper learning and its associated outputs continue to run in cloud-based machine learning.
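The edge-vs-cloud split described above can be sketched as a confidence-gated dispatcher: try a small on-device model first, and fall back to the cloud only when local confidence is low. Everything below is illustrative; `local_intent_model` and `cloud_nlu` are hypothetical stand-ins, not real platform APIs.

```python
# Hedged sketch of an edge-first NLU dispatch. The "models" here are
# rule-based placeholders standing in for a small on-device model and a
# heavier cloud NLU service.

CONFIDENCE_THRESHOLD = 0.8

def local_intent_model(utterance: str) -> tuple[str, float]:
    """Stand-in for a small on-device model: returns (intent, confidence)."""
    u = utterance.lower()
    if "weather" in u:
        return "get_weather", 0.95          # simple intent, handled locally
    if "refill" in u:
        return "refill_prescription", 0.6   # edge model unsure; defer
    return "unknown", 0.0

def cloud_nlu(utterance: str) -> str:
    """Stand-in for the heavier cloud NLU service."""
    return "refill_prescription" if "refill" in utterance.lower() else "fallback"

def resolve_intent(utterance: str) -> str:
    """Run the edge model first; escalate to the cloud on low confidence."""
    intent, confidence = local_intent_model(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent            # handled entirely on the edge device
    return cloud_nlu(utterance)  # low confidence: escalate to the cloud
```

The design choice this illustrates: latency-sensitive, simple intents resolve on the device, while ambiguous or complex utterances still benefit from cloud-scale models.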

At the edge, natural language understanding (NLU) processing and associated intelligence formerly generated in the cloud (aka gigantic server farms) will begin shifting to edge devices (mobile, tablets, handheld devices, you name it). Take the acquisition of Xnor.ai by Apple last year as a case in point, along with numerous similar advances from Microsoft, Google, and Amazon to move machine learning and AI to smaller, less network-dependent devices. Edge-based computing also decentralizes private data, which may help address patient privacy concerns.

Roll all of this together with ubiquitous mobile devices, smart speakers, and a generation of savvy users, and there is a wealth of near-term opportunity to handle patient conversations intelligently and securely across channels. Let's talk more about data privacy and patient sentiment as they relate to these voice experiences.

Omnichannel voice and healthcare experiences 

Why is voice a good tool for patients?

Speech is a natural digital entry point for patients and healthcare providers. We are already accustomed to using voice for the vast majority of our interactions as patients. For digital voice experiences, personalization is key: patients will expect a tailored experience that reflects their unique context and an interaction model that adapts to their needs.

Depending on their context, a patient may switch back and forth between a voice assistant and other channels, like IVR, SMS, web, chat, email, or mobile. Leveraging and integrating each touchpoint ensures the patient is supported whenever and wherever they are in their daily journey, with the utmost sensitivity for handling their data safely and securely. 

Context switching is likely to occur within the digital experience, and this is where AI/ML services introduce sophistication not found in your average voice assistant or chatbot. For instance, if protected health information (PHI) is identified within an interaction, the conversation can switch to a secure channel. If a patient is exhibiting frustration, sentiment analysis can route them to a human agent instead of a machine interface. This kind of experience offers truly patient-centric solutions that remove barriers and friction for people who are already taxed with managing their care.
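The routing logic described above can be sketched in a few lines. The PHI patterns and frustration-word heuristic below are illustrative placeholders, not a real PHI detector or production sentiment model, and the channel names (`secure_portal`, `human_agent`) are hypothetical.

```python
import re

# Toy router deciding how to handle a patient message. The patterns and
# word list are illustrative stand-ins for real PHI-detection and
# sentiment-analysis services.

PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like number
    re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),  # medical record number
]

FRUSTRATION_WORDS = {"frustrated", "angry", "useless", "terrible"}

def contains_phi(message: str) -> bool:
    """True if the message matches any illustrative PHI pattern."""
    return any(p.search(message) for p in PHI_PATTERNS)

def seems_frustrated(message: str) -> bool:
    """Very rough stand-in for a sentiment-analysis service."""
    tokens = set(re.findall(r"[a-z']+", message.lower()))
    return bool(tokens & FRUSTRATION_WORDS)

def route(message: str, current_channel: str = "voice") -> str:
    """Pick the next channel for this interaction."""
    if contains_phi(message):
        return "secure_portal"   # escalate PHI to an authenticated channel
    if seems_frustrated(message):
        return "human_agent"     # hand off to a person
    return current_channel       # otherwise stay where the patient is

print(route("My MRN: 12345 shows the wrong medication"))  # secure_portal
print(route("This is useless, nothing works"))            # human_agent
print(route("What time is my appointment?"))              # voice
```

Note the ordering: PHI checks run before sentiment, so a frustrated message that also contains PHI still lands on the secure channel first.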

How does voice benefit healthcare providers?

Voice offers efficiencies that reduce effort and enhance care. Hands-free and always on, voice-controlled interactions can automate routine tasks, remove information flow barriers, and directly reduce the time and effort required of care teams. When tied to the patient experience and well-managed datasets, voice interactions directly benefit care team safety, efficiency, and well-being.

Patient data manifests in medical records and disparate datasets, but is rarely captured across the entire patient journey. Through an omnichannel voice experience, novel datasets and unique patient insights will be obtained. As these datasets are established and scaled, machine learning will unlock additional value for the provider and the patient, identifying patterns that illuminate symptoms, behaviors, and outcomes.

Getting started with voice in healthcare

Mobiquity interacts every day with clients who have amazing insights into their businesses, patients, and use cases. When thinking about how to implement conversational AI in your healthcare or life science organization, we recommend you start by shortlisting key use cases. Prioritizing use cases and defining patient-centric needs can be followed by an evaluation of business requirements and technical discovery to flesh out the viability of specific ideas. Start your voice journey with a simple, well-defined use case, rationalized from patient, business, and technology perspectives.

Mobiquity has digital transformation experience and expertise in implementing voice skills for healthcare providers, Fortune 500 companies, and other businesses. Contact us today to kick-start your strategy.
