Do ERM and AI Play Nice?

Oil and water, or peanut butter and jelly? Wait…that is the question.

Sure, it may not be as timeless and eloquent a thought as William Shakespeare’s “to be or not to be?” But it is, undoubtedly, a loaded question for risk management professionals when it comes to the use of artificial intelligence (AI), a technology that is slowly but surely creeping into every aspect of our lives, including business.

Think of all the data your institution deals with on a day-to-day basis, the various systems you use to complete transactions and the vendors you rely on to access those systems. Consider all the new regulations being released each year and the plethora of security threats that find their way to your institution. When it comes to AI, you can certainly create efficiencies and streamline the process of enterprise risk management (ERM)…but is the benefit worth the cost?

In the risk world, AI requires a certain tactical approach. You must ask yourself – when it comes to forecasting threats, preparing plans, maintaining vendor relationships and creating policies – does AI complement or detract from your credit union’s ERM program? As with any tools you may decide to explore, the only way to be sure that AI is going to play nice (or not) with your ERM standards is to do your due diligence and perform your own risk assessment.

Risk Assessment Part I: The Pros

How exactly does AI benefit risk management? It definitely opens up a new world of possibilities, and can be a source of help and relief in many ways, particularly in terms of credit union risk:

  • AI offers the best of both worlds – the speed and scale of computing paired with human-like pattern recognition.
  • It can handle large amounts of data – more than humans can possibly collect and comprehend.
  • In addition to the sheer amount of data AI can ingest, it’s also a whiz at processing algorithms and performing research in record time.
  • All the information gathering, analysis and logic-based conclusions your AI resources provide play a role in the most critical piece of your operations – helping your risk team, C-suite and board of directors make educated decisions.

Risk Assessment Part II: The Cons

While there are undoubtedly pros to using AI, it is riddled with risks and can pose many challenges for your credit union if not vetted properly:

  • Many AI tools – though not all – are open source. Even if your credit union chooses AI resources that aren’t open source, the risk of exposing data should be expected. And if data is at risk, you can be sure that exposure to cybersecurity threats is also a possibility. My best advice is to research each AI source thoroughly and apply the security procedures you already have in place within a secure testing environment. Then, monitor, monitor and monitor some more. What do the results show?
  • It’s true of almost any credit union operation, and AI is no different – there is always an opportunity for fraud. It’s not an infallible technology, and as sure as the sun rises and sets each day, fraudsters will try to take advantage.
  • AI has all the resources to learn and implement knowledge from a vast array of information databases, allowing it to work autonomously. However, it’s not a set-it-and-forget-it kind of technology. It requires clear boundaries and constant monitoring to ensure the tools behave as you want them to. Human oversight is key in utilizing AI, because it can just as easily work against your credit union.
  • The current state of AI isn’t exactly balanced when it comes to information bias. Because it pulls from already established resources and information, the quality of the data it gives back to you may not be accurate or paint a true picture of what you’re looking for. Oftentimes, demographic and other potentially discriminatory data can make their way into your algorithms and analysis, making the output less than reliable. As people-driven organizations, AI might not be the best fit for credit unions until there are set standards around fair representation.
  • Integrity and ethics are keystones of our movement. AI, on the other hand, is still working on that. Transparency is a concern when it comes to AI because it’s hard to trace every piece of information and the sources behind your tools. If you’re not sure where your intelligence is coming from, it can be hard to justify the use of AI.

Risk Assessment Part III: Due Diligence

We’ve heard the case for AI in terms of advantages and disadvantages, but what other considerations need to be made as part of your due diligence to determine if it’s a fit?

Will it work with your models? It’s true that you can input all of your vendor information, contracts and tasks into AI tools, but are they suited for your particular risk models? Your framework may require specialized data or rely on algorithms that AI tools can’t reliably reproduce. It’s those little intricacies within your ERM models that you need to factor in, because AI might not suit your credit union’s particular needs.

Who will be responsible for oversight? ERM is a juggling act with a lot of moving parts. New vendors are always being vetted, procedures need to be updated, contracts need to be renewed, new threats are emerging and creating new risks…so where is there time to also oversee the performance of your AI tools? Perhaps you have a well-staffed risk team, but maybe not. If AI is going to create more work and involve more people and resources to ensure that it’s working as you want it to, it might not be the most effective risk management tool for your institution.

Are you comfortable using ambiguous information and sources? As we previously discussed, AI can pull information from nearly any data source available. That means it could potentially use facts that were gathered unethically or without sufficient evidence, for example. Sure, you can go back through the analyses and conclusions your AI tools make, but that’s another drain on your time and resources. It may be more helpful to find risk management resources that offer greater control over data inputs and calculations.

Is it a match for your mission, values and ethics? AI is not something you should jump into without extensive testing. When you do test, are you finding that it lines up with your organization’s mission, values and ethics? Or, do you find that your risk team and decision-makers have concerns that could jeopardize the risk position of your credit union?
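The four due diligence questions above can be turned into a simple weighted scorecard to keep the evaluation consistent across tools. This is a minimal sketch only – the criteria names, weights and cutoff below are illustrative assumptions, not a regulatory standard, and your risk team would substitute its own:

```python
# Hypothetical due-diligence scorecard for evaluating an AI tool.
# Criteria, weights and the 3.5 cutoff are illustrative, not prescriptive.

CRITERIA = {
    "model_fit": 0.25,           # Will it work with your risk models?
    "oversight_capacity": 0.25,  # Who will be responsible for oversight?
    "data_transparency": 0.25,   # Are information sources traceable?
    "mission_alignment": 0.25,   # Does it match mission, values and ethics?
}

def score_tool(ratings):
    """Weighted total from 1-5 ratings, one per criterion."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# Example ratings a risk team might assign after testing one tool.
ratings = {"model_fit": 4, "oversight_capacity": 2,
           "data_transparency": 3, "mission_alignment": 4}

total = score_tool(ratings)
verdict = "proceed to testing" if total >= 3.5 else "hold for further review"
print(f"Score: {total:.2f} -> {verdict}")
```

Even a rough scorecard like this forces the team to rate each question explicitly, rather than letting one strong impression (good model fit, say) paper over a weak spot like limited oversight capacity.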

The Results Are In…

With all of the results now in, it’s fair to say that whether or not ERM and AI play nicely together is pretty subjective. Maybe, for your organization, the two are more like peanut butter and jelly than oil and water, but maybe the opposite is true. The proof lies in your own assessment, testing, models, risk appetite and an entire host of other considerations. The only way to discern the potential outcome is to do ample due diligence, because the pitfalls of AI are real.

It’s true that there is a case for incorporating it into your ERM program so you can create efficiencies, streamline data, gather informative insights and establish a better risk management process for your credit union. However – and this is a big however – it can also create additional and unnecessary risks, not the least of which include security concerns, the potential for biases and unfounded information, ethical issues, a lack of clear-cut controls and more.

Ultimately, how you choose to utilize AI – if at all – is dependent upon your credit union’s values, research process and overall needs. But remember this: be cautious, be systematic in your evaluations and assessments, and be prepared for the unknowns of this new and ever-evolving technology.


Belinda Mumma is Vizo Financial's enterprise risk management director. She has many years of experience implementing and maintaining vendor management and vendor due diligence software. During her career, she also has been responsible for policy and legal review processes; implementing, directing and maintaining enterprise risk management software; and implementing and maintaining audit and exam findings software.