Several major banks have reportedly decided to restrict their employees' use of OpenAI's ChatGPT chatbot, according to a Bloomberg report this week. The banks are apparently concerned about an array of issues related to the technology, including the possibility that employees sharing sensitive data with the chatbot could draw negative scrutiny from regulators.
Tech giant Amazon took steps to control its employees' use of the chatbot earlier this year, out of concern that workers might share sensitive company data or computer code with the AI service. Company officials had reportedly discovered ChatGPT responses that appeared to be based on confidential Amazon data.
JPMorgan Chase was among the first major banking firms to implement controls on employee use of the technology. The company said its restrictions were part of its "normal controls around third-party software." However, a Telegraph report suggested that the bank was concerned about data sharing and potential action from U.S. regulators.
JPMorgan has been a leader in AI throughout the banking industry, but has been understandably cautious about the potential regulatory impact of widespread implementation of the technology. Many observers expect banks like JPMorgan to become more eager to embrace artificial intelligence tech like ChatGPT once the regulatory environment becomes clearer.
Other banks that have moved to restrict their workers' use of ChatGPT include Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo. Some have disabled employee access to the chatbot entirely, while at least one of them, Wells Fargo, is reportedly evaluating the technology to determine a safe way for workers to use it.