How Does Human-Sounding AI Fit With Our Thirst for Authenticity and Trust?

Google stirred a tempest this week during its I/O conference when it demonstrated how it's weaving its artificial intelligence tools into everyday interactions like making reservations at a restaurant. It wasn't that the tools, especially an experimental one called Duplex, weren't impressive or convenient. Maybe they were a little too much so.

Critics quickly complained, even calling Duplex "horrifying." Listeners couldn't tell when an AI was almost imperceptibly handling an interaction with a human. Isn't that super creepy and a real trust buster, especially when it's turned loose to make contact with others on your behalf, but without much of your involvement?

Google, which the New York Times said was trying to strike a more "humble" tone at this year's conference, quickly backed away from the imperceptible part, saying today that it will now let Humans v. 1.0 know when they're interacting with Post-Humans v. 0.7. Thoughtful.

One of the issues here, and one surely not pondered too heavily by the big brains focused on making bigger brains in Google's AI shop, is timing. Google and Facebook have been whacked repeatedly (and occasionally expensively, in the case of Google's Ad-Pocalypse advertiser boycott). And of course, this was also the week that Democratic legislators released all 3,500 of those inflammatory Facebook ads used by Russian trolls during the 2016 elections to stir up voters on one side or another.

Not to put it gently, the Duopoly, for all its very big financial results, still has some festering issues around trust and authenticity. Rolling out very human-like and increasingly autonomous tools at such a time smacks of Tin Ear Syndrome (there's probably an AI tool to help with that too, but it's clearly not as far along in development).

Also this week, Google made much of its new acronym, JOMO, the Joy of Missing Out, as it talked about the ways it's dealing with increasing concerns about internet addiction fueled by UX design that's meant to keep you clicking. The JOMO tools are designed to get you to unhook from your digital umbilical cord more often.

And yes, the company is using AI in some ways even a crusty journalist can embrace, like spotlighting quality journalism (it called it "the best of human intelligence") and doing a better job of putting it into context (usual post-fake-news-scandal caveats inserted here).

Alarmingly, for those who've spent a bit of time with dystopian sci-fi novels over the years, researchers just proved that so-called "dolphin" attacks can be used to inaudibly trigger actions on your Amazon Alexa, Apple's Siri, or other digital assistants already sitting in millions of homes and on hundreds of millions of devices. Dolphin attacks use signals beyond the range of human hearing, much as dolphins can hear things humans can't. Chinese and U.S. researchers have shown the attacks are possible, sending commands that trigger digital assistants to open websites or make calls.

We've also seen plenty of conversation (including within the ranks of Google itself, where thousands of employees signed a petition against the company contracting with the U.S. Department of Defense) over autonomous weapons that could use AI-driven facial recognition tools to target humans.
Notables such as Elon Musk have decried our rapid embrace of AI without figuring out quite what the rules of that engagement should be.

That all of these strands are coming together now, and that a company with Google's resources (including in public relations) still rolled out Duplex at this moment, suggests we have a lot of work to do before we can really, fully embrace the AI beast.

Trust and authenticity will continue to be issues for brands, creators and tech companies for years to come. Not knowing who's talking to us, or under whose control, will only make it worse. Maybe Google should consider that before it rolls out another cool but badly timed demo.