
Spotify or Apple Music? Waze or Google Maps? Alexa or Siri?

Consumers choose between artificial intelligence (AI)-based systems every day. How exactly do they decide which systems to use? Considering the amount of money and effort spent on improving AI performance, one might expect competence and capability to drive users’ choices.

Instead, a recent study conducted by researchers from the Faculty of Industrial Engineering and Management at the Technion – Israel Institute of Technology shows that the “warmth” of a system plays a pivotal role in predicting consumers’ choice between AI systems.

The study, which included more than 1,600 participants and was published in the Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, offers insight into the psychology of potential users.

The researchers, Zohar Gilad, Prof. Ofra Amir, and Prof. Liat Levontin, examined how users’ choices are affected by their perceptions of an AI system’s warmth, that is, the system’s perceived intent (good or ill), and its competence, that is, the system’s perceived ability to act on those intentions.

Most of the research done to date regarding warmth perceptions of AI-based systems addressed systems with a virtual or physical presence, such as virtual agents and robots.

The current study, though, focused on “faceless” AI systems, with little or no social presence, such as recommender systems, search engines, and navigation apps.

For these types of AI systems, the researchers defined warmth in terms of the system’s primary beneficiary: whether it is designed chiefly to serve its users or to serve other interests. For example, a navigation system can prioritize collecting data about new routes (benefiting the system) over presenting the best-known route, or vice versa.

The researchers found that a system’s warmth mattered to potential users even more than its competence: participants favored a highly warm system over a highly competent one.

This preference for warmth persisted even when the highly warm system was overtly deficient in competence. For example, when asked to choose between two AI systems that recommend car insurance plans, most participants favored a system with low competence (“using an algorithm trained on data from 1,000 car insurance plans”) but high warmth (“developed to help people like them”) over a system with high competence (“using a state-of-the-art artificial neural network algorithm trained on data from 1,000,000 car insurance plans”) but low warmth (“developed to help insurance agents make better offers”). That is, consumers were willing to sacrifice competence for greater warmth.

These findings are similar to what is known of human interactions: warmth considerations are often more important than competence considerations when judging fellow humans. In other words, people use similar basic social rules to evaluate AI systems and people, even when assessing AI systems without overt human characteristics. Based on their findings, the researchers concluded that AI system designers should consider and communicate the system’s warmth to its potential users.

Hana Levi Julian is a Middle East news analyst with a degree in Mass Communication and Journalism from Southern Connecticut State University. A past columnist with The Jewish Press and senior editor at Arutz 7, Ms. Julian has written for Babble.com, Chabad.org and other media outlets, in addition to her years working in broadcast journalism.