Community Blog


The Business Case & Practical Steps for Inclusive Tech & Responsible AI in Financial Services

August 02, 2024

OSFF Keynote Panel Recap by Rimma Perelmuter

I was delighted to host the FINOS DEI Special Interest Group (SIG) session at the recent Open Source in Finance Forum (OSFF) London, addressing two seemingly unrelated yet highly connected issues of the day: the business case for diverse teams and how to deploy Responsible AI.

Our DEI SIG, supported by FINOS members such as BMO, Citi, Morgan Stanley and NatWest, has an ambitious program of activities championing best practices for embedding diversity and inclusivity across financial services institutions.


This session was particularly timely as it brought together leading financial tech, business operations, digital and software engineering experts to share insights and practical steps for Addressing Bias in Generative AI.

A massive thanks to my brilliant panelists Prachi Kasodhan from Microsoft, Lee-Ann Sansom and Declan O'Gorman from NatWest Group, and Jason Smith from Publicis Groupe. They delved into the challenges and opportunities that come with integrating Gen AI into financial services business processes, focusing on the importance of building diverse teams and addressing the biases that non-diverse teams and skewed data can unintentionally drive into software applications and AI models.

The Advantage of Diverse Teams

We started by recognising the primacy of diverse thought as a competitive advantage - "people who are different to you will challenge what you said," said Lee-Ann Sansom - and the evidence bears this out: diverse teams make better decisions and are more innovative. According to a recent DEI study from Korn Ferry, diverse teams make better decisions 87% of the time, and inclusive innovators are 75% more likely to have their ideas productionized. This is aligned to NatWest's approach to fostering diversity through inclusive hiring practices, reskilling programs, and strong support for employee-led networks. Lee-Ann stressed the importance of measuring diversity to track progress and ensure continuous improvement.

We know that bias in software engineering - and the lack of diverse teams behind it - has been a significant challenge for some time, and the rise of Gen AI presents an opportunity to reframe and reset.

And this is where diverse teams can make an important difference. Declan O'Gorman highlighted the importance of using unbiased data and accessible design to ensure that we're coding and creating solutions for the benefit of all, not just the people creating them. He added practical insights on leveraging existing controls, such as model risk and validation, to ensure fairness in AI models, and stressed the need for robust frameworks to evaluate third-party AI services so that they meet the same high standards of bias mitigation.

The Business Case for Generative AI

The importance of addressing bias and building Responsible AI in the financial services sector is paramount. Prachi Kasodhan emphasized that maintaining trust and getting AI right are fundamental to employee confidence, brand integrity and compliance with ever-evolving regulations. This holistic approach not only safeguards an organization's reputation but also unlocks new revenue opportunities - creating brand credibility, efficiencies and value.

Practical Steps for ‘Conscious Adoption of AI’

Jason Smith introduced the concept of “conscious adoption” of AI, a framework that encourages organizations to carefully consider the ethical implications of launching AI.  

When considering using AI, he encouraged organizations to start by asking three simple questions: "Can I do this?", "How do I do this?" and, most importantly, "Should I do this?"

There's no right or wrong answer, but taking a systems-thinking approach and forming diverse teams encourages organizations to look at the problem through multiple lenses and identify the potential unintended consequences of AI implementations.

Our societies and communities are pivoting their skills to accommodate the next wave of technology that is coming through.

Jason further highlighted the opportunity to create a pipeline from school to workplace through apprenticeship schemes that not only teach hard skills but also emphasize diversity of thought and consciousness - essentially embedding responsible, ethical AI principles from the ground up.

Developing the Right Approach to Data and Choosing the Right Use Case for Now

Data is at the heart of AI, and developing the right approach to handling it responsibly is crucial to ensuring high standards of data quality, avoidance of bias, and integrity for AI models. 

Jason highlighted that the open source community is a great place to explore how we can address this. No one has to share proprietary data, but there are techniques and frameworks we can use collectively that will help.

For example, we can use synthetic data to fill gaps in our data sets. Equally, we can use red teams to identify potential flaws and vulnerabilities in an AI system. Both can be explored collaboratively in an open source environment without disclosing any proprietary data.
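To make the first idea concrete, here is a minimal sketch of synthetic oversampling: naively resampling under-represented groups in a tabular dataset so that training data is less skewed. The column names, data and function are hypothetical illustrations, and real synthetic-data generation would use richer techniques than simple resampling:

```python
import pandas as pd

def oversample_underrepresented(df: pd.DataFrame, group_col: str, seed: int = 42) -> pd.DataFrame:
    """Rebalance a dataset by resampling rows (with replacement) from
    under-represented groups until each group matches the largest one.
    A toy stand-in for more sophisticated synthetic-data generation."""
    target = df[group_col].value_counts().max()
    balanced = []
    for _, group in df.groupby(group_col):
        shortfall = target - len(group)
        extra = group.sample(n=shortfall, replace=True, random_state=seed) if shortfall > 0 else group.iloc[0:0]
        balanced.append(pd.concat([group, extra]))
    return pd.concat(balanced).reset_index(drop=True)

# Hypothetical example: a small loan-decision dataset skewed towards one region
df = pd.DataFrame({
    "region": ["north"] * 8 + ["south"] * 2,
    "approved": [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],
})
print(df["region"].value_counts())                                         # north: 8, south: 2
print(oversample_underrepresented(df, "region")["region"].value_counts())  # north: 8, south: 8
```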

Prachi Kasodhan shared Microsoft’s comprehensive approach, which includes policy guidelines, standards, and requirements for accountability, transparency, fairness, safety, reliability, data privacy, and inclusiveness. These pillars guide Microsoft’s responsible AI initiatives and can serve as a potential model for other organizations. 

Right now, Gen AI models are like a grumpy teenager - not yet able to make wise decisions and lacking the capability to independently oversee many of the more complex tasks required in highly regulated financial operations. A human-in-the-loop 'co-pilot' is needed to guide and oversee the handling of more complicated cases that require nuances of judgment an AI model is currently incapable of. Some use cases, therefore, may not yet be ready for Generative AI deployment.
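As an illustration of what a human-in-the-loop gate can look like in practice, the sketch below routes any low-confidence or high-impact model output to a human reviewer rather than acting on it automatically. The fields, thresholds and function names are illustrative assumptions, not anything specific discussed on the panel:

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    decision: str       # e.g. "approve" / "decline"
    confidence: float   # model's own confidence estimate, 0..1
    amount: float       # illustrative business context (e.g. transaction size)

def route(output: ModelOutput, confidence_floor: float = 0.9, amount_cap: float = 10_000) -> str:
    """Return 'auto' when the model may act alone, 'human_review' otherwise."""
    if output.confidence < confidence_floor or output.amount > amount_cap:
        return "human_review"
    return "auto"

print(route(ModelOutput("approve", 0.97, 500)))      # auto
print(route(ModelOutput("approve", 0.72, 500)))      # human_review
print(route(ModelOutput("decline", 0.95, 50_000)))   # human_review
```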

Moving Forward: Open Source Collaboration

One of the key takeaways from the panel was the power of open source collaboration as an enabler and opportunity for building on shared assets.  By sharing best practices, tools and frameworks, organizations can collectively tackle the challenge of bias in AI. Declan O'Gorman and I both emphasized the opportunity of participating in FINOS’ DEI SIG and the recently launched AI Readiness SIG to benefit from a community-driven approach to advancing diversity and ensuring that trust, inclusion and safety are at the heart of Generative AI governance frameworks.

Conclusion

Addressing bias in software engineering and AI is not just a technical challenge but a societal one. It requires a concerted effort from diverse teams, robust governance frameworks, and an organization-wide commitment to continuous improvement. By leaning into these challenges and leveraging the collaborative power of open source, we can create more inclusive and responsible AI technologies that generate value for business and society.

I encourage everyone to get involved with FINOS' initiatives and join us in making a positive impact. Thank you to our amazing panelists for sharing their practical insights, and to the FINOS community for its unwavering commitment to raising the bar towards a more equitable and inclusive tech future.

Watch the full session from OSFF on June 26th here.

Author: Rimma Perelmuter, VP Strategic Growth, FINOS