Transcript:
Colin Eberhardt: What this SIG is about, and what we're considering here, is that financial services organizations face a unique set of challenges that make onboarding AI technology, working with it, and putting it into production quite difficult. Now, that's not to say that this industry is new to AI.
We've been using AI within financial services for a great many years, but with generative AI, the landscape has changed. Generative AI is turning into a general-purpose tool. You no longer need to be a data scientist who can program in Python to leverage AI. Members of the general public are leveraging AI through ChatGPT.
Developers are leveraging AI through things like the OpenAI APIs. AI is becoming a tool for everyone. Getting back to my earlier point, in financial services you cannot just give everyone this tool freely. There are some unique challenges and unique risks that need to be addressed. However, to realize the potential of this tool, we do have to find a way to put it in the hands of everyone in a safe fashion.
So this is what the AI Readiness SIG is fundamentally focused on. It has parallels to the Open Source Readiness SIG, which tackled similar challenges that thankfully have for the most part been solved. We're tackling similar challenges around how we actually onboard this technology at the moment.
It's a members-only group. You can find some of the details on [00:01:30] GitHub in terms of the concrete artifacts that we're looking to create. We're looking to create a governance framework that manages the onboarding, the development, and the execution, the running, of AI-based solutions. So this covers the broad gamut from complicated, specialized AI solutions to simple things like getting GitHub Copilot into the hands of developers or getting ChatGPT into the hands of product owners, for example. The way we're going to do that is by taking a deeply practical, use-case-based approach.
So we're not going to run straight towards the challenges of "what is responsible AI?" or "how do we tackle bias?" We don't want to just run to the problem. We want to start with a practical use case and use that to create a framework where we look at the threats, the risks, and the controls that we need to put in place.
So we will get to the problems of bias and responsible AI, but we'll do that in a use-case-driven fashion. The SIG has been approved and it's up and running. You can register, and you can join if you're a member. We're also forming a core team. We know we've got quite a lot of work ahead of us, and that core team will probably do a lot more of the content generation.
They'll actually construct the assets. We're holding an in-person workshop in London in a couple of weeks' time where we're going to kick this off. [00:03:00] However, I know that for whatever reason a lot of people won't be able to attend that, so we will likely be holding similar meetings at OSFF and other FINOS events as well.
If you're interested, you can see the contact information on the page there. If you're a non-member, you're very much invited to join FINOS to participate in the early stages.