1 April 2024

How responsible AI helps financial services manage risk and assure compliance

Financial services organizations are energized by the promise of generative AI to transform their businesses in unprecedented ways. They see opportunities across the board, including providing new services to underserved communities, reducing costs, and empowering their employees to increase productivity.  

As interest leads to action, however, financial institutions are rightfully concerned about the risks of AI, including potential issues relating to reliability, fairness, and data privacy. Organizations need guardrails to manage these risks and ensure security and compliance within the complex regulatory environments in which they operate.  

Most organizations are not waiting for government regulatory action before innovating with AI. That’s why Microsoft, in coordination with industry stakeholders and regulators, is addressing these key issues so that financial institution customers can embrace AI with confidence. It is one of the important ways that Microsoft Cloud for Financial Services is helping organizations to unlock business value and deepen customer relationships.  


Business value with generative AI for financial services

Financial services firms are making significant investments to leverage the power of generative AI. For many, the recent availability of Microsoft Copilot for Microsoft 365 has improved employee productivity by integrating AI into their everyday applications, such as Microsoft Word and Microsoft Teams.  

Additionally, many customers are building on Microsoft’s enterprise-grade cloud platform and its built-in privacy, security, and compliance controls as a foundation to build customized solutions that are fit-for-purpose within the financial services industry. Early use cases in financial services often address low-risk, internally focused scenarios such as optimizing costs, reducing time to value, and enhancing collaboration.  

Across verticals, we see generative AI use cases driving impact in the following ways: 

  • Banking use cases include contact center agents, financial advisors, content generation, and know-your-product, know-your-customer, know-your-counterparty, and fraud analysis. 
  • Insurance use cases include contact center agents, underwriters, claims managers, virtual agents and assistants, and know-your-product, know-your-customer, know-your-counterparty, and fraud analysis. 
  • Capital markets use cases include client engagement and customer service, market research and report summarization, pitch book generation, investment and wealth advisory, know-your-product, know-your-customer, know-your-counterparty, and fraud analysis, and accessibility and language translation. 

How Microsoft’s responsible AI aligns with global principles

Microsoft has been a leader in the responsible use of AI since 2017, when our AI, Ethics, and Effects in Engineering and Research (Aether) committee was formed to examine ethical considerations and the effects of AI on society. Those efforts were bolstered in 2019 when the Microsoft Office of Responsible AI was created to help develop responsible standards.  

This and other work led to the Microsoft Responsible AI Standard, which defines product development requirements for Microsoft technologies and sets forth a process that firms can use to build their own governance frameworks and controls to manage risk.  

The Responsible AI Standard defines a set of essential principles, which Microsoft diligently adheres to and which we encourage organizations to embrace. These principles are illustrated as follows: 

A diagram showcasing the six AI principles from Microsoft.

Concurrent to our efforts, many countries have advanced far enough in the regulation lifecycle to have issued national policies, strategies, and guidance for safe implementation of AI. Indeed, Microsoft has been working with regulators to share our perspectives and approaches in establishing guardrails in the use of generative AI.  

In 2018, we provided input to the Fairness, Ethics, Accountability and Transparency (FEAT) Principles established by the Monetary Authority of Singapore, and we’ve had similar engagements with regulatory bodies including De Nederlandsche Bank, the French Prudential Supervision and Resolution Authority, the Basel Committee on Banking Supervision, the Bank of England, and the International Association of Insurance Supervisors, among many others.1  

From these collective efforts, a consensus among governments and regulators has emerged, including a set of principles to guide responsible adoption of AI by financial services institutions. Encouragingly, these principles map closely to those defined in the Responsible AI Standard.  

Microsoft’s six principles for ensuring compliance with AI

In the context of responsible AI, Microsoft has developed solutions that help financial services firms comply with applicable law and regulations, facilitate effective oversight, address supervisory expectations, and protect customers. Compliance and assurance are essential to meeting those requirements. To support them, Microsoft has developed six essential principles:

1. Effectiveness

AI technology should be effective, reliable, and suitable for its intended use. Building on the Responsible AI Standard, we provide essential criteria for evaluating AI technologies. This includes providing effective transparency and oversight of the Microsoft Cloud, including for our generative AI services, through tools and resources such as the Responsible AI Dashboard, AI Transparency Notes, and the Responsible AI Impact Assessment Template.

2. Fairness

AI technology should not result in discrimination, societal bias, or unintended outcomes for consumers. We provide a range of tools to help prevent AI systems from exhibiting unfair or unwanted behaviors. This includes a methodology for reducing societal bias in word embeddings to help avoid gender stereotypes while maintaining potentially useful associations. We also conduct internal testing with tools that evaluate AI quality and responsible AI metrics for large language models such as Copilot for Microsoft 365. These same tools are made available to customers of Microsoft Azure OpenAI Service.
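To make the idea of a fairness metric concrete, here is a minimal, self-contained sketch of the demographic parity difference, one of the metrics that fairness toolkits such as Fairlearn report for a model's decisions. The loan-approval data, group labels, and function names below are hypothetical, for illustration only:

```python
def selection_rate(predictions):
    """Fraction of positive (e.g., loan-approval) decisions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two demographic groups.

    A value near 0 suggests the model approves applicants at similar
    rates across groups; larger values flag potential disparate impact.
    """
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(p) for p in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical approval decisions (1 = approved) and applicant groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # → 0.5 (0.75 vs. 0.25)
```

A gap this large would typically prompt further review of the model and its training data before deployment.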

3. Privacy and data security

An AI technology should be supported by strong controls to protect consumer privacy and data security. Azure OpenAI Service stores and processes data to provide AI services and is subject to all controls defined in the Microsoft Products and Services Data Protection Addendum (DPA). To help firms govern, protect, and manage data estates, Microsoft offers Purview, a family of data governance, risk, and compliance solutions. And to ensure optimal security, Microsoft AI technologies are built on the Azure security foundation, which includes information security controls that are readily integrated within a firm’s security program.  

4. Transparency

An AI technology should enable traceability and intelligibility, and be auditable. Transparency relies on traceability, communication, and intelligibility. People should be able to understand and monitor the technical behavior of AI systems, and those building and using AI systems should be forthcoming about their design, deployment, and limitations. 

Microsoft provides tools to help enable greater transparency and auditability, including tools to track and reproduce models and their version histories. Azure Machine Learning also includes methods and tools to help ensure that outcomes or outputs are identifiable and explainable to relevant audiences, such as regulators or consumers. Azure OpenAI Service enables model designers and evaluators to explain why a model makes the predictions it does by providing information relevant to traceability and intelligibility. Further, we commit to, and have experience with, audits by customers and examinations by regulators. 
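To illustrate what traceable model versioning looks like in practice, the following is a simplified, hypothetical sketch of the kind of record a model registry keeps so that a given output can be traced back to the exact model version, parameters, and training-data reference that produced it. Azure Machine Learning provides this capability natively; none of the names below come from its API:

```python
# Hypothetical, minimal model-version audit trail (illustration only).
import hashlib
import json
from datetime import datetime, timezone

class ModelRegistry:
    """Records each model version with a timestamp and a content hash,
    enabling auditors to verify which model produced which decision."""

    def __init__(self):
        self.versions = []

    def register(self, name, params, training_data_ref):
        record = {
            "name": name,
            "version": len(self.versions) + 1,
            "params": params,
            "training_data": training_data_ref,
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        # Hash the reproducibility-relevant fields for tamper evidence.
        payload = json.dumps(
            {"params": params, "data": training_data_ref}, sort_keys=True
        ).encode()
        record["fingerprint"] = hashlib.sha256(payload).hexdigest()
        self.versions.append(record)
        return record

registry = ModelRegistry()
v1 = registry.register("credit-risk", {"max_depth": 4}, "loans-q4.parquet")
v2 = registry.register("credit-risk", {"max_depth": 5}, "loans-q4.parquet")
print(v1["version"], v2["version"], v1["fingerprint"][:12])
```

Because the fingerprint covers the parameters and data reference, any change to either produces a new, distinguishable version, which is the property regulators look for when assessing traceability.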

5. Training and governance

Financial institution employees should have the necessary expertise to implement and review AI technology, and the AI technology should be subject to governance and oversight within the institution. Microsoft provides AI training programs and content to assist with education and knowledge management within an institution. This includes learning courses such as those provided by Microsoft Learn, which encompass a range of tools and resources for businesses to leverage as part of a governance program. Microsoft’s Service Trust Portal provides additional levels of assurance and documentation, and the Microsoft Compliance Program helps with assessments of the Microsoft Cloud, including AI technologies. 

6. Ethics

Use of an AI technology should align with the financial institution’s code of conduct and applicable ethics standards, which requires ongoing accountability by the firm in terms of use and oversight of the AI technology. The Responsible AI Standard helps ensure ethical use of AI, and we provide information to assist financial institutions in confirming that AI technologies are consistent with their policies and procedures. Microsoft has committed to implementing the National Institute of Standards and Technology (NIST) AI risk management framework, and we have aligned our Responsible AI Standard with the ISO 42001 AI management system standard. We remain committed to implementing relevant future international standards, including those that will emerge following the implementation of the European Union AI Act. 

Learn more about AI implementation

Microsoft AI technologies are well-positioned to assist financial institutions in implementing AI in a way that complies with applicable law and regulations, facilitates effective oversight by senior management and the board of directors, addresses supervisory expectations, and protects customers. 

To learn more about our commitment to responsible AI, please visit the responsible AI website. To connect with subject-matter experts to support your risk, audit, and compliance teams and accelerate your cloud adoption, visit the Compliance Program for Microsoft Cloud.  


1. MAS introduces new FEAT Principles to promote responsible use of AI and data analytics, Monetary Authority of Singapore.



Source: Microsoft Industry Blog