

When you think about artificial intelligence (AI), does your pulse start to race? Or are you excited to embrace it, but stuck in the quagmire of often conflicting, and even alarmist, information that seems to be coming at you from every direction? With all of the buzz, and the rapid development of new AI technologies, it’s hard to know where to start. Is this still “fintech” or something new?

There’s no shortage of guidance out there. We know that you are seeing the headlines and hearing about AI at every conference you attend, and that won’t end any time soon. At Miller Nash, we’re working with our clients now to develop a long-term strategy that will align with their strategic goals, compliance and security programs, and their tolerance for risk, as they determine when and how to embrace AI.

Opportunities and Risks for Banks

AI presents some incredible opportunities for banks, but there are some serious risks too. Developing your strategy now will help you determine the parameters that work for you and your optimal timing for implementing AI tools—or adopting protective measures—that support business success, provide security, and preserve customer trust. This will also ensure that the bank’s talent development and acquisition is aligned with your projected timing for implementing AI strategies.

Here are some of the opportunities for enhanced capabilities and significant cost savings (without replacing high-touch employee service) that we are seeing:

  • Customer Service. AI has revolutionized customer interactions by providing personalized banking experiences and significantly reducing response times. For instance, chatbots and virtual assistants powered by AI can effectively handle routine inquiries, or at least serve up basic answers that support customer service agents. AI can also be used to analyze and scan customer accounts for fraud (and, if anomalies are identified, proactively reach out to customers).
  • Frontline and Internal Productivity. AI can automate routine tasks, streamline operations, and maximize capacity, allowing you to allocate resources more efficiently and to better measure and use your data for continuous improvement. In other words, AI can improve the development and performance of the key performance indicators (KPIs) that drive your success.
  • Loan Processing. AI can speed up loan processing, enhance decision-making, improve your risk assessments of applicants and borrowers, reduce fraud, and improve a borrower’s overall experience. For instance, an AI solution that analyzes creditworthiness could supplement an applicant’s credit score and provide a better glimpse into nonpayment risk. This could lead to greater financial inclusion, while also introducing a new pool of creditworthy applicants that may not otherwise meet a bank’s underwriting criteria.
  • Security and Confidentiality. We all deeply care about maintaining the trust of our customers. Data protection and privacy requirements apply to many business customers as well as to banks. AI solutions have the potential to improve security measures, broaden defense capabilities, and significantly reduce and mitigate security threats. For example, AI can strengthen data safeguards, user access management, and threat identification and response. Further, AI is designed to quickly absorb new information across vast data sets and to adapt over time by using machine learning to self-improve. This ability can help banks keep up with the ever-increasing sophistication of hackers and constantly evolving security threats.
  • Addressing Bias and Discrimination. AI can be used to identify and mitigate unintended bias and discrimination within a bank’s processes and operations. For example, AI could analyze internal data to help flag underserved communities within the bank’s footprint for further review. Also, by leveraging AI, banks can design algorithms that specifically counteract historical biases, ensuring fairer outcomes in credit scoring, loan approvals, and customer service. This proactive approach aligns with the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and with the CFPB (and other federal agencies), all of which emphasize the importance of using AI in a manner that upholds civil rights and promotes fair competition, consumer protection, and equal opportunity.
  • Improved and Streamlined Compliance and Consistency. Let’s face it, regulatory compliance is a monumental and expensive endeavor. With AI, the bank can increase productivity; better measure, review, and track compliance risks, tasks, and metrics; provide near real-time, actionable information to stakeholders; and improve risk assessments, vendor management, and the efficiency of completing and documenting required compliance tasks. In addition, AI can improve workflows and provide reporting, access, review, and contract management capabilities. AI can work exclusively within the bank’s environment or across external resources. Further, AI technologies can monitor, analyze, and track regulatory and legislative updates, so the bank can stay ahead.
  • Enhanced Employee and Team Performance. Do employees need to fear replacement by AI? We don’t think so. AI is a tool to improve productivity, transparency, capacity, and performance. It expands access and abilities to organize and absorb information for better decisions.


    AI is not a substitute for human judgment or intervention. Customers still appreciate interacting with a “real person,” and careful evaluation, review, and supervision of AI-generated content is vital.

    While it is important for the bank to plan for talent development and training, it’s equally important to include the current team in the planning process, demonstrating how AI will help them be more efficient, produce better results in less time, and become even more valuable. As a practical matter, tech-savvy employees are likely already using AI (or considering it), so the bank should have safeguards and policies in place that address acceptable versus unacceptable uses and make appropriate tools and training available to all employees.

Evaluating the Risks

As we mentioned, AI is a continuously developing innovation, and not without risks. Also, new vendors and tools are cropping up at a rapid pace. Your strategy should include a robust process for evaluating risks and should establish requirements and standards for ensuring those risks are appropriately addressed. Procedures for reviewing, onboarding, and testing new vendors and tools will be critical to ensure compliance with your requirements and standards and to ensure that risks are allocated to vendors as appropriate under the circumstances.

As you reflect on the opportunities we’ve outlined, some important additional risks to consider are:

  • Data Confidentiality and Privacy. Preservation of confidential information and compliance with data privacy requirements top the list of the bank’s most important responsibilities. If not carefully selected and properly managed, AI can expose or unlawfully disclose confidential information. Fortunately, there are many in-house (or “closed”) solutions that keep your data “within the four walls” of your bank. Be sure to carefully evaluate a vendor’s security and permitted uses of the bank’s data, as well as ownership of generated outputs and IP.
  • Data Security. While banks can use AI to improve security, bad actors also have access to AI. The use of AI by bad actors can increase the exploitation of vulnerabilities, enhance spoofing attempts, and improve the sophistication of phishing and cyber-attacks. Banks will want to ensure that security policies and protocols address these new threats. Also, the bank’s vendor management process should include evaluating the security protocols of high-risk vendors with this in mind.
  • Creation of Inequitable Processes. Remember the old adage: garbage in, garbage out. Just as AI can help address and reduce bias and discrimination, processes created with AI, if not carefully developed, can inherit the implicit biases present in their training (machine learning) data or in the design of their algorithms. This can result in inequitable processes or decisions that may create unintended consequences and even legal and regulatory risk. You will want to test your tools and train your team to ensure they understand how to evaluate and use AI-generated content properly. It’s also helpful to encourage diversity among teams developing or evaluating AI solutions to leverage a wide range of perspectives.
  • Poor or Deceptive Customer Service. AI is designed to “think and act,” or at least to mimic human behaviors and responses. AI can take into account styles, tones, and other behavioral factors, depending on the data it has access to and how it is instructed. However, it’s still more like a robot than a human and may not react to behavioral cues, tones, or subtle indicators as a trained customer service professional would. Being stuck in an AI loop that provides limited solutions and can’t “listen” to the customer can quickly lead to unhappy and frustrated customers. Also, it’s important to ensure that customers are not misled by a chatbot or deceived into believing the chatbot is a human, and disclosures may be required. The CFPB has warned that chatbots can lead to loss of customer trust, frustration, and even violations of law.
  • Improper Reliance on Output. Generative AI has known defects and glitches and can inherit and perpetuate problems in the data it uses or gathers. It is also designed to please the requester (think of it like an intelligent, over-eager intern assigned to provide an answer). If it can’t find an answer, it might “hallucinate” one or add information to embellish an answer. Proper training and practice with prompting will help reduce these types of errors, and users should carefully review outputs for accuracy. AI can also incorporate overused or trite words or phrases and may modify quotes to avoid copyright violations (so you need to double-check after requesting an exact quote). Evaluate vendor tools for how known or discovered issues are addressed, and provide guidance and training to users.
  • Increasing Regulatory Activity. There is increasing regulatory and legislative activity at both federal and state levels and multiple task forces and other initiatives have been launched. For example:
    • Utah has enacted the Artificial Intelligence Policy Act, a sweeping consumer protection law imposing disclosure requirements, covering a long list of deceptive practices, and limiting the ability of companies to “blame” AI for consumer protection violations.
    • The Colorado Artificial Intelligence Act, effective beginning February 1, 2026, targets algorithmic discrimination and imposes requirements on developers and deployers of high-risk AI systems. Violations are considered “unfair or deceptive trade practices.” High-risk systems include financial and lending systems that make or are a substantial factor in making a “consequential decision” (including an approval, denial, terms, or cost decision).

As we’ve seen with data privacy and other consumer protection laws, it’s challenging to track, navigate, and modify policies and procedures based on multiple approaches and legal standards. You’ll want to evaluate vendors on tracking and updating capabilities and assign responsibility for promptly modifying tools as the legal and regulatory landscape develops.

Taking the Plunge (Action Steps)

OK, so you’ve made it to the beach, and you’re ready to dip your toes in the AI water. We have included some action steps below to help you get started.

  • Develop an AI Strategy. Since AI is here to stay and likely to permanently change the way business is conducted, it's imperative to craft a strategic plan that integrates AI into operations and compliance, while addressing potential risks. As we noted, your team will likely use AI even if you prohibit it, so it’s best to plan for it by establishing and communicating your policies and procedures and providing appropriate support.
  • Just Get Started. It’s easy and understandable to be overwhelmed by the hype and number of developing AI tools in various roadmap stages. Overcome inertia by starting small. Some possible places to start are:
    • Choose three areas to explore, based on your internal risk assessment, business development objectives, or budget challenges.
    • Determine how the bank will approach AI. Will you start with protective measures (such as security enhancements and internal policies and procedures) or are you ready to launch an RFP for a new tool?
    • Look at your current HR development and recruitment plans. Have you incorporated development and acquisition of AI capabilities, AI training, or even capabilities to manage or develop your own tools?
  • Monitoring, Education, and Awareness. Start learning more, monitoring key developments, and exploring AI tools and capabilities. This will empower you to make informed decisions and maintain an edge over competitors that are taking a “wait-and-see” approach to AI. If not a full strategic plan, establish a strategic direction and begin to identify your wish list. Learn which technologies are fairly well developed versus those that need more time to mature.
  • Contracts Matter. It is crucial to (1) review vendor contracts for adequate protections and standards and (2) negotiate provisions that provide the bank with reasonable flexibility to abandon technology tools that do not perform or that become compliance risks. Implementations can be complex, and you’ll want to be sure that you’ve included comprehensive specs and metrics, SLAs, termination rights, data use restrictions, IP rights that you expect to have, and fair risk allocations, among other provisions. As a reminder of the importance federal banking agencies place on vendor risk management, here is a link to the recent Interagency Guidance on Third-Party Relationships, which includes a long list of items to incorporate into your vendor vetting, compliance, and contracting processes.

We’re ready to partner with you.

AI presents a significant opportunity for the financial services industry to enhance operational efficiency and customer satisfaction while meaningfully lowering costs. By taking proactive steps early to understand the risks and develop a strategy, you can leverage AI to help the bank compete as the landscape changes.

For more information and to discuss how we can assist you with your strategy, please contact us. 

We’re organizing a follow-up interactive discussion and briefing regarding developing a strategic plan for AI for your bank. If you are interested, let us know by clicking here, and you’ll be added to our invite list.

This article is provided for informational purposes only—it does not constitute legal advice and does not create an attorney-client relationship between the firm and the reader. Readers should consult legal counsel before taking action relating to the subject matter of this article.
