The study entailed consumer-led research conducted with Goldsmiths, University of London, and the report provides guidelines for brands developing their own customer-facing bots in preparation for an AI-first world. Its recommendations are informed by a thought-provoking set of findings: not least that people prefer to confide in a bot on financial and medical issues, but still like to know when a bot is a bot. In other words, bots should never seem too human.
As one participant in the research commented: “I would prefer the interaction with the AI over the human for things like budgeting and managing my money. I think it would be quite intrusive if an actual person was trying to advise you on your spending, but if you could set an AI goals to manage your money better, and you could get advice and tips, that would be a really nice feature.”
Our preference for bots in these circumstances is largely down to a feeling that a bot will never judge you the way a human might. Jonathan Crane, CTO of Amelia, argues: “You are talking to a technology, providing information, and that technology isn’t judgemental. It can’t go further. It can’t misuse your information. And so we think there is going to be an increase in trust of Artificial Intelligence.”
Crucially, however, people are less forgiving of mistakes made by a bot than by a human operator. In a quantitative survey by Mindshare, 61% of respondents agreed with the statement: ‘It would be more frustrating if a chatbot couldn’t solve my problem than a human’. Field trials with Extreme Biometrics, a controlled experiment that measured changes in participants’ stress levels as they completed a task using AI (SwiftKey) versus a human, support this finding.
Chatbots are increasingly deployed where people are spending more and more of their time – messaging apps. Sixty billion messages are already sent every day via WhatsApp, Facebook Messenger and the like – three times the number of SMS messages. Over the coming months this trend will be mirrored by brands implementing chatbots on their own digital platforms, whether on their websites or embedded within apps.
While this future will unfold quickly, bots come with a health warning. One site, for example, notifies users that: “Cleverbot learns from people. Things it says may seem inappropriate. Use at your own risk and with discretion.”