Artificial Intelligence: The Good, the Red Flags and the Bottom Line – by Maria Schuld

Maria Schuld
Group Executive – Financial Services Group
Posted on March 20, 2018

“I will take my chances against a computer-generated decision any day of the week; it doesn’t have the biases that people have [in making decisions].”

When I saw this quote, it reminded me why artificial intelligence (AI) can be such a mixed bag for users. After all, despite its potential for problem-solving, AI is only as valuable as the data it has to work with.

Here’s a look at the good of AI, the red flags and how you can make sure this valuable tool has a positive effect on your business and customers.

The Good

Reliability. As the quote suggests, AI produces results unbiased by personal opinions, thereby providing a consistent customer experience (CX). It also mitigates errors associated with high-volume, repetitive tasks. And the results pass the important test of reliability: they are the same regardless of how many times a test is repeated.

Cost Reduction. Companies are realizing material cost reductions with early applications of AI. As an example, Cognizant’s survey of banking, healthcare and insurance companies found that 26 percent of respondents in the banking sector enjoyed 15 percent or more year-over-year (YOY) cost savings from front-office usage of AI. According to Juniper Research, chatbots could save businesses $8 billion annually by 2020.

Improved Fraud Detection. AI scoring systems use a wealth of data inputs to separate legitimate transactions from fraudulent ones in real time. AI does a better job of mitigating fraud while also reducing false declines, allowing merchants to reclaim lost sales. Javelin Advisory Services estimated that $118 billion in legitimate sales were declined in the U.S. in 2014 due to false positives.
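To make the idea concrete, a scoring system of this kind can be sketched as a model that combines transaction signals into a single risk score and applies a threshold. The features, weights and threshold below are invented for illustration; they are not FIS's (or any vendor's) actual model.

```python
# Illustrative sketch of a rule-weighted fraud score; the features,
# weights, and threshold are assumptions made for this example only.

def fraud_score(txn: dict) -> float:
    """Combine transaction signals into a 0-1 risk score."""
    score = 0.0
    if txn.get("amount", 0) > 1000:                     # unusually large purchase
        score += 0.3
    if txn.get("country") != txn.get("card_country"):   # cross-border mismatch
        score += 0.25
    if txn.get("attempts_last_hour", 0) > 3:            # rapid retries
        score += 0.3
    if txn.get("new_device", False):                    # first-time device
        score += 0.15
    return min(score, 1.0)

def decision(txn: dict, threshold: float = 0.6) -> str:
    """Approve, queue for review, or decline based on the score."""
    s = fraud_score(txn)
    if s >= threshold:
        return "decline"
    if s >= threshold / 2:
        return "review"
    return "approve"
```

The middle "review" band is where false declines are reclaimed: borderline transactions go to a human or a secondary check instead of being rejected outright.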

Lower Chargeback Losses. A growing area of concern is chargeback fraud, especially as shopping dollars migrate to e-commerce. Online merchants lost nearly $7 billion to chargebacks in 2016, and losses are projected to balloon to $31 billion by 2020. FIS is making headway against chargeback fraud by employing AI to facilitate dispute handling. By focusing on repeatable practices within chargeback processing, FIS was able to process 70-80 percent of a high-volume client's 30,000 monthly chargebacks with increased accuracy by using robots.
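The "repeatable practices with clear rules" approach can be sketched as a small rules engine that auto-resolves standard dispute types and escalates anything ambiguous to a person. The reason codes and rules below are illustrative assumptions, not FIS's actual dispute logic.

```python
# Illustrative rules-engine sketch for chargeback dispute triage;
# reason codes and conditions are invented for this example.

AUTO_RULES = {
    # A duplicate charge is clear-cut when the amounts match exactly.
    "duplicate_charge": lambda d: d.get("amount") == d.get("matched_amount"),
    # A "credit not processed" claim is clear-cut once a refund exists.
    "credit_not_processed": lambda d: d.get("refund_issued", False),
}

def triage(dispute: dict) -> str:
    """Auto-resolve disputes that match a clear rule; escalate the rest."""
    rule = AUTO_RULES.get(dispute.get("reason_code"))
    if rule is not None and rule(dispute):
        return "auto-resolve"
    return "human-review"
```

Anything outside the clear-rule set, such as a contested fraud claim, falls through to human review, which matches the article's point about reserving judgment calls for people.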

Red Flags

No Empathy Yet. One gap in CX applications of AI is its inability to detect human emotions, which can impede satisfactory resolution of customer service problems. Companies such as Affectiva – a startup spun out of the MIT Media Lab – are addressing this shortcoming by developing technology that enables voice assistants, customer service bots and other AI-powered devices to infer people's emotions from the tone of their voices. Think about how a carmaker could use AI to flag a driver's emotional state that affects safety – road rage, for example – and slow the car.

Garbage Input Means Garbage Output. Regardless of the type of model employing AI, data input inaccuracy can result in unintended consequences such as unfair treatment, dissatisfied customers or even major safety problems. Decisions based on AI can become biased depending on what data is fed into the models.

Not a Total Safeguard. Fraudsters work diligently to stay a step ahead of technologies created to thwart their efforts. AI models are more effective at flagging fraudulent transactions and automatically blocking card usage in real time, for example. However, AI can't stop clever fraudsters from obtaining information – facts about a cardholder, or even facial and voice imprints from social media profiles – that could help them get a card unblocked.

Can Be Manipulated. Remember when, two years ago, Microsoft's teen-talking Tay chatbot quickly devolved into making inappropriate remarks after Internet trolls manipulated it into saying awful things?

Unresolved Ethical Questions. AI is a truly transformational, rapidly advancing technology. Should those advancements include the ability to weigh in on the ethics of a decision? For example, if gender is part of the data set, should it be allowed as an input? How should biometrics and future technologies be used? Most believe that humans must intervene on questions of ethics, but where the line should be drawn remains fuzzy.

Protecting your Bottom Line

To protect yourself and your company's bottom line, you should:

• Stay current with AI’s capabilities. Monitor and learn about new enhancements.
• Determine where AI would have the most beneficial impact upon your business. What functions could be performed by AI to improve accuracy and save time?
• Outline clear goals when putting together the business case for a particular AI application. At FIS, we started with dispute handling of chargebacks because of their labor-intensive and time-sensitive nature. We focused on activities that do not require human judgment – repeatable practices with set tasks and clear rules.
• Know your sources of data inputs for AI decisioning. Ensure the data is accurate and valid.
• Implement human checks and balances.
• Be transparent about how you’re using AI for making business decisions, especially those with regulatory implications.
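The checklist's points about knowing your data inputs and implementing human checks can be sketched as a simple validation gate that sits in front of AI decisioning: clean records flow to the model, while anything suspect is routed to a person. The field names and checks below are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a data-validation gate in front of AI decisioning;
# field names and rules are assumptions made for this example.

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field in ("customer_id", "amount", "timestamp"):
        if record.get(field) in (None, ""):
            problems.append(f"missing field: {field}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        problems.append("negative amount")
    return problems

def gate(records):
    """Route clean records to the model; flag the rest for human review."""
    clean, for_review = [], []
    for r in records:
        (for_review if validate_record(r) else clean).append(r)
    return clean, for_review
```

The point of the gate is the "garbage input means garbage output" red flag above: bad records never reach the model, and a human sees them instead.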

What else would you add to the list above? Let's start a discussion about how to further this transformational technology and protect your business's bottom line.


Maria Schuld
Group Executive – Financial Services Group

With over 20 years of experience in the financial and payments industry, Maria is the Group Executive for debit, credit, fraud operations and business management. Previously, she was a senior management team member for Metavante before its 2009 acquisition by FIS. Other areas of expertise include implementation management, account management, and professional services management.