Google Embraces the Age of AI at I/O 2018
There was a particular segment during Google’s 2018 I/O keynote that made the hair on the back of your neck stand up, even if you had no technical understanding of artificial intelligence. It was a conversation between a hair salon receptionist and an unidentified customer. The customer was trying to find a suitable time for an appointment. She went back and forth with the receptionist, unable to get her initial time but scheduling an earlier one on the same day. There were pauses, questions were asked, and a compromise was reached. And yet only one of the parties on the phone was a human.
It was a striking moment in an otherwise pedestrian presentation by Google. Sure, Android P is going to include adaptive battery-saving features and prioritize ‘digital wellbeing’ for its users. Yes, Google Photos will make recommendations to improve the overall quality of captured photos. That’s all fine and somewhat expected. But Google’s Duplex demonstration of a machine-learning algorithm replicating the nuances of human speech, complete with “ums” and “ahs,” and seamlessly conversing with an apparently unaware receptionist hit viewers with the force of a thunderclap. It was equal parts awe-inspiring (like shooting a Tesla into space) and eerie (Westworld writ real).
Beyond a stellar set piece for a conference, the Duplex demo is a confirmation of what the industry has known for a while: AI in all of its forms is the answer to the question of “What the hell do we do with all this data we’ve amassed?” Without AI, static developer-written code is only capable of so much. Analyzing roadway conditions in real time to control a vehicle, identifying faces in low-quality images, flagging trends and patterns in petabytes of insurance industry statistics: AI has thrown the doors open to these and many other applications. It’s not a silver bullet by any means, but machine learning is changing how we approach and solve some of the most challenging problems across all industries.
At the rate AI design and development are progressing, I get the sense that this phone call will, in a few years, seem like a quaint and unremarkable blip. If this is what Google is willing to show off now, the mind boggles at what they’re working on behind closed doors.
In the few days since the demonstration, some outlets and critics have weighed in on the ethical ramifications of the phone call. Should the AI caller have opened the call with a warning declaring its inhumanity? Are humans ready to interact with bots that are difficult to distinguish from the real deal? Will robocallers roll out virtual armies of these things to prey on those who can’t tell the difference? While I’m sure some of this is knee-jerk hand-wringing from clickbait authors, there are very real concerns about how this tech will be used in the future. Are we really ready for this?
Existential questions aside, Google’s choice to feature AI front and center throughout the conference is just the latest indicator that the entire industry is shifting. The success of future applications and hardware is going to depend on their ability to work with some form of machine learning. Google’s new cross-platform machine learning SDK, ML Kit, is poised to bring more developers up to speed with AI by providing accessible APIs for face detection, text recognition, and other utilities (see the sketch below). ML Kit, along with frameworks like TensorFlow, is transforming the way applications, mobile or otherwise, are being developed. The demand for qualified AI developers, like the ones I work with, is only going to increase as time goes on. However technology evolves, we’re ready to adapt and harness it for our clients.
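To give a flavor of just how approachable ML Kit makes this, here is a minimal Kotlin sketch of on-device text recognition using the ML Kit for Firebase SDK as announced at I/O 2018. It assumes the firebase-ml-vision dependency is already configured in the app and that a Bitmap has been captured elsewhere; the function name and log tag are illustrative, not part of the SDK.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Runs ML Kit's on-device text recognizer over a bitmap and logs each
// block of text it finds. The bitmap is assumed to come from the camera
// or a photo picker elsewhere in the app.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Results arrive asynchronously; each block is a
            // paragraph-like region of detected text.
            for (block in result.textBlocks) {
                Log.d("MLKitDemo", "Found text: ${block.text}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("MLKitDemo", "Text recognition failed", e)
        }
}
```

A handful of lines, no model training, and no machine-learning expertise required: that accessibility is exactly why SDKs like this are pulling so many everyday app developers into the AI fold.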