Can artificial intelligence operate within compliance guidelines?

G. Elaine Wood (elaine.wood@duffandphelps.com) is a Managing Director at Duff & Phelps and a former federal prosecutor focusing on compliance and risk assessment. Alan Brill (alan.brill@duffandphelps.com) is a Senior Managing Director at Kroll, a Division of Duff & Phelps, and founder of the firm’s cyber risk practice. He is also an Adjunct Professor at Texas A&M University School of Law. Elaine Wood and Alan Brill are both based in New York City.

Traditionally, computer systems follow a set of rules defined by their programming and operate according to pre-established guidelines. Put another way, what they did yesterday, they will do today, and what they do today is what they will do tomorrow. This consistency is one of the basic tenets of compliance testing. But what happens if a system’s programming is designed to evolve, so that the system itself can change the rules by which it operates? The same transaction processed today might have a different result than if it were processed yesterday, and yet another result if processed tomorrow.

That’s the nature of deep learning artificial intelligence (AI) systems. An AI system examines the characteristics of the transactions it processes and uses what it learns to change how it handles future transactions. Consider the example of an AI system built for a bank that is designed to make decisions on applications for personal loans. The bank believes that having the decisions made by a single automated system will protect against claims that individual loan officers are acting in a discriminatory way or otherwise treating loan applicants unequally.
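To make the compliance concern concrete, here is a minimal, purely hypothetical sketch (not drawn from the article or any real bank system) contrasting a fixed-rule approver, which always gives the same answer to the same applicant, with a toy "learning" scorer whose decision threshold drifts as new outcome data arrives. All names, thresholds, and the retraining rule are illustrative assumptions, not a description of any actual deep learning model.

```python
# Hypothetical illustration: why an evolving model can break the
# "same input, same output" assumption behind traditional compliance testing.

from dataclasses import dataclass


@dataclass
class Applicant:
    income: float      # annual income in dollars
    debt_ratio: float  # monthly debt payments / monthly income


def rule_based_decision(a: Applicant) -> str:
    """A fixed-rule system: identical inputs always yield identical outputs."""
    return "approve" if a.income >= 40_000 and a.debt_ratio <= 0.4 else "deny"


class LearningLoanModel:
    """A toy scorer whose approval threshold shifts as it sees new outcomes.

    It stands in for a deep learning system only in the loosest sense: the
    point is simply that its behavior tomorrow depends on data it saw today.
    """

    def __init__(self, threshold: float = 0.35):
        self.threshold = threshold

    def score(self, a: Applicant) -> float:
        # Crude score: higher income and lower debt ratio raise the score.
        return min(a.income / 100_000, 1.0) * (1.0 - a.debt_ratio)

    def decide(self, a: Applicant) -> str:
        return "approve" if self.score(a) >= self.threshold else "deny"

    def retrain(self, recent_default_rate: float) -> None:
        # When recent defaults rise, the model becomes more conservative;
        # the threshold is kept within a sane band.
        self.threshold = min(0.9, max(0.1, self.threshold + (recent_default_rate - 0.05)))


if __name__ == "__main__":
    applicant = Applicant(income=60_000, debt_ratio=0.35)
    model = LearningLoanModel()

    print("Rule-based:", rule_based_decision(applicant))  # always "approve"
    print("Day 1 model:", model.decide(applicant))        # "approve"

    model.retrain(recent_default_rate=0.30)               # new data arrives overnight
    print("Day 2 model:", model.decide(applicant))        # same applicant, now "deny"
```

Run as written, the same applicant is approved on day one and denied on day two, even though nothing about the applicant changed; only the model did. That is the testing problem the article raises: a compliance check passed yesterday says little about how the system will behave tomorrow.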
