In part two of this series, John Kane discussed the benefits of IA (aka Intelligence Augmentation or Augmented Intelligence) technology, including how it can enhance employees’ ability to be more emotionally intelligent and perceptive. This third and final part of John’s series looks at how IA is affecting broader society and day-to-day life outside of the workplace.
A great deal of progress has been made on the converging roads of machine learning, behavioral science, and deep learning since Geoffrey Hinton and colleagues demonstrated in 2006 how to efficiently train deep neural network models. Since then, AI has become integral to our daily lives and is now on the cusp of large-scale adoption in the enterprise. But its success has mostly adhered to Hinton’s contrarian view of its narrow application scope while delivering significant improvements in efficiency and quality of services.
Hinton and other leading voices in augmented machine learning and deep learning now acknowledge that deep learning’s use of massive available data and high-performance computing expands that scope. The current ability to gather data on such a large scale sets the stage for deep learning to achieve reasoning capabilities, a perspective Hinton acknowledged in a recent MIT Technology Review interview.
Augmentation in machine learning and deep learning is already driving novel product offerings that organizations from startups to enterprises can deploy at scale. Some of the most prominent examples of AI currently sit behind personal voice assistants like Siri and Alexa.
Both represent types of machine learning in artificial intelligence through their use of natural language processing (NLP) to “read, hear, and understand” content produced by humans. The Google Duplex system takes that a step further by providing a voice assistant that can automatically carry out certain tasks like booking appointments. Although Google Duplex only works for limited domains, it’s now available across 48 states, according to Google’s support page.
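To make the idea of a limited-domain assistant concrete, here is a deliberately toy sketch of intent detection, one small piece of an NLP pipeline. The intent labels and phrase patterns below are invented for illustration; production systems like Siri, Alexa, or Duplex use trained neural models, not keyword rules like these.

```python
import re

# Toy intent patterns for a booking-style assistant. These labels and
# phrases are illustrative assumptions, not any real assistant's design.
INTENT_PATTERNS = {
    "book_appointment": re.compile(
        r"\b(book|schedule|reserve)\b.*\b(appointment|table|haircut)\b"
    ),
    "check_hours": re.compile(r"\b(open|close|hours)\b"),
}

def detect_intent(utterance: str) -> str:
    """Return the first matching intent label, or 'unknown'."""
    text = utterance.lower()
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            return intent
    return "unknown"

print(detect_intent("Can you book a haircut appointment for Tuesday?"))
print(detect_intent("What time do you close tonight?"))
```

Anything outside the handful of hand-written patterns falls through to `"unknown"`, which mirrors, in miniature, why a system like Duplex is constrained to a few well-defined domains.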
The combination of limited scope with broad scalability in some of these solutions shows how this type of machine learning can build unrealistic expectations in the minds of the public. Many machine learning and language technology research obstacles must still be overcome before a system like Google Duplex can broaden the application scope in which it operates.
Behavioral Machine Learning
That doesn’t mean that the intersection of machine learning and behavioral science isn’t making strides. There are already systems driven by machine learning that empower human users without the threat of replacing them.
That’s why it’s so important for these technologies to be developed through collaboration among software engineers, machine learning scientists, behavioral scientists, and human-computer interaction experts. Augmented machine learning lets automated systems shine at repeatable, consistent, and objective tasks, where they solve complex but narrowly defined problems. That frees humans to deal with out-of-scope issues that call for social or interactive communication.
Humans still bring a far superior ability to provide complex, engaging, and meaningful interactions with other people. The challenge is that humans aren’t always at peak performance when stressed or fatigued. This potential loss of consistency and objectivity is an area where behavioral machine learning can heavily influence augmented machine learning.
This intelligence augmentation (IA) approach empowers and complements people’s innate skills. The resulting human and machine collaboration can bring a tremendous amount of value to people’s personal lives and to the enterprise, with the machine learning capabilities that exist today.
IA technology has already delivered significant value in daily life through:
- Furthering autonomous and non-autonomous vehicles through object awareness
- Driver distraction and fatigue awareness solutions like Affectiva
- Conversational AI collaborations, such as the one between Porsche and HiAuto, that use deep learning software and speech recognition to pick out specific voices amid background noise for in-car voice commands
- Objective self-awareness for health monitoring and improvement with trackers like Fitbit, Apple Health, etc.
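The fatigue-awareness item above can be illustrated with a minimal rule-based sketch. Real systems such as Affectiva’s use camera-based deep learning models of the face and eyes; the signal (blink durations), window size, and threshold below are purely illustrative assumptions.

```python
from statistics import mean

# Hypothetical rule-based sketch: flag possible driver fatigue when the
# average eye-blink duration over a recent window exceeds a threshold.
# Both constants are illustrative assumptions, not clinical values.
BLINK_THRESHOLD_MS = 400  # long blinks can indicate drowsiness
WINDOW = 5                # number of recent blinks to average

def fatigue_alert(blink_durations_ms: list) -> bool:
    """Return True if the recent average blink duration suggests fatigue."""
    if len(blink_durations_ms) < WINDOW:
        return False
    recent = blink_durations_ms[-WINDOW:]
    return mean(recent) > BLINK_THRESHOLD_MS

# Short, crisp blinks early on; progressively longer blinks later.
print(fatigue_alert([150, 160, 420, 480, 510, 530, 560]))
```

A sliding-window rule like this captures the IA framing of the section: the system watches for the consistency loss humans are prone to, then hands the decision (pull over, take a break) back to the person.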
Machine learning, behavioral science, and deep learning are becoming pivotal to supporting person-to-person interactions across healthcare. This is no less true across vital services like finance, retail/ecommerce, and every industry where customer experience is foundational. It’s becoming clear that conversations now define most digital experiences, and behavioral machine learning plays a critical part.
Advances in Machine Learning and Behavioral Science
As the use of artificial intelligence (AI) grows, so does the public’s imagination about futuristic technologies previously only found in science fiction novels. This tendency to paint AI with a broad brush in terms of perception and usage has led to real and imagined ethical concerns.
These concerns can be seen in facial recognition, which can have a dark side because of its high complexity, variability, nuance, and human bias. There are fewer challenges in more narrowly focused behavioral data science applications where emotions and mood are the targets.
Solutions with specific and more targeted uses of machine learning and behavioral science are making the greatest strides in practical human and machine collaboration. These emotional and conversational AI approaches share the goal of helping humans communicate better through verbal and non-verbal signals, often based on voice analysis.
This approach has become known as Affective Computing, a term first used and explored in early MIT research by Rosalind Picard. The idea of detecting various emotions has become the latest growth sector for AI, particularly around customer experience improvement and healthcare.
Behavioral machine learning solutions in healthcare are using voice technology, audio signal analysis, and natural language processing/understanding methods focused on vocal biomarkers. This emerging area of AI is showing how the identification of vocal biomarkers can support human medical professionals in:
- Condition/disease state classification
- Remote monitoring and treatment regimens for patients
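Vocal-biomarker pipelines build on low-level acoustic features extracted from recorded speech. As a toy illustration only (real systems use far richer features such as jitter, shimmer, and MFCCs, and nothing below reflects the methods in the research discussed here), this sketch computes two classic features in pure Python:

```python
import math

def zero_crossing_rate(signal: list) -> float:
    """Fraction of consecutive sample pairs whose sign changes; a crude
    proxy for dominant frequency."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(signal) - 1)

def rms_energy(signal: list) -> float:
    """Root-mean-square amplitude; a crude proxy for vocal intensity."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

# Synthesize two toy "voices": same amplitude, different pitch.
sr = 8000  # sample rate in Hz
low = [math.sin(2 * math.pi * 110 * t / sr) for t in range(sr)]   # ~110 Hz
high = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]  # ~220 Hz

# The higher-pitched signal crosses zero about twice as often.
print(zero_crossing_rate(low), zero_crossing_rate(high))
print(rms_energy(low), rms_energy(high))
```

Features like these, computed over thousands of patient recordings, are what a classifier would consume; the clinical insight lies in which feature shifts correlate with which conditions.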
Guy Fagherazzi, PhD, noted researcher and Director of the Department of Health at the Luxembourg Institute of Health, recently discussed these advances in Digital Biomarkers, a journal from health science publisher Karger. The article lays out coming advances that will have a significant impact on human and machine collaboration, specifically around clinical support for the diagnosis and treatment of many conditions, including:
- Parkinson’s disease
- Alzheimer’s and mild cognitive impairment
- Multiple Sclerosis and Rheumatoid Arthritis
- Mental health and monitoring emotions
- Cardio metabolic and cardiovascular diseases
- COVID-19 and other conditions with respiratory symptoms
Alongside the elevated expectations for these and other behavioral data science approaches, significant scientific hurdles remain before those promises are met. The ability to balance the contrarian viewpoints of enormous potential and narrow targeting is what holds the greatest immediate successes.
That balance makes it easier to realize the potential of empowering and enhancing human behavior with human-oriented augmented intelligence solutions that deliver value in people’s personal lives and in the workplace. The exciting part is that we do not have to wait for the machine learning capabilities to achieve this: they are already here!