Potential Risks of Chatbots for Financial Institutions
Articles by: Marketing, Jun 21, 2023
The Consumer Financial Protection Bureau (CFPB) recently reviewed the advanced technologies behind chatbots and the impact they may have on privacy and compliance with applicable federal laws. In its study of large language models (LLMs) and artificial intelligence (AI), the CFPB found that financial institutions that use chatbots risk failing to meet their legal obligations and may be liable for violating federal law and financial regulations.
Growth of Chatbots
Chatbots have become an ever-present part of conducting business online. Leveraging a chatbot to respond to consumers with simulated, human-like answers allows lenders and depositories to reduce the need for dedicated customer service staff and can help expedite resolution times. With the introduction of AI tools like ChatGPT to the general public, consumers have become more accustomed to interacting with these technologies. Chatbots powered by LLMs can predict the text a customer is likely to enter while requesting a solution, reducing the time needed to interact with the chatbot.
In 2022, over 98 million customers engaged with a bank or lending institution's chatbot, a figure expected to grow to over 111 million by 2026. All of the top 10 banks in the United States currently use chatbots. Consumers interact with chatbots to check account balances, update personal information, check when payments are due, find account numbers, and add authorized users. Many financial institutions have also extended their chatbots to social media platforms, enabling direct chat via those channels.
Chatbots Targeted by Malicious Actors
Threat actors have begun to mimic the industry's chatbots, deploying their own to harvest the same customer information. Because customers are accustomed to interacting with chatbots, they are often unable to distinguish a fake chatbot that mimics authentic requests from a legitimate one. Another area of concern is financial institutions' responsibility to keep customers' personally identifiable information (PII) safe. There have been multiple instances where chat logs were compromised, and the databases behind LLMs and AI systems are common targets for hackers.
Potential Risks of Chatbots
Congress requires that financial institutions provide straight answers to their customers. When a chatbot is the only way to reach a financial institution, a customer can get stuck in a loop of unhelpful AI-generated responses with no way to contact a human directly. A chatbot may also fail to resolve customer disputes properly, eroding customer trust, or provide inaccurate information about a financial product or service, directly harming the customer. That harm may leave the lending or depository institution responsible for the chatbot's misinformation.
Using or considering chatbots? Richey May's cybersecurity experts can make sure your digital defenses are resilient. Contact us today for a penetration test or a cybersecurity maturity assessment at firstname.lastname@example.org.