Community Blog

OSFF Keynote Insights: Navigating AI Adoption In Financial Services

July 30, 2024

In an engaging session at the FINOS Open Source in Finance Forum 2024, panelists explored the transformative impact of AI on financial services. Cara Delia introduced the discussion, emphasizing AI readiness as a key strategic initiative for FINOS. Emily Prince highlighted the challenge of making data more accessible to end users, while Ian Micallef discussed overcoming legacy-system hurdles and regulatory compliance.

Colin Eberhardt noted the need for new frameworks to handle AI's non-deterministic nature and its ethical concerns. The panel concluded with insights into the future of AI, stressing the importance of balancing innovation with robust regulatory practices.

Join experts from across financial services, technology, and open source to deepen collaboration and drive innovation across the industry in order to deliver better code faster at Open Source in Finance Forum.

TRANSCRIPT

Cara Delia: [00:00:00] Good morning. We'll try to get some time back on here. Again, Cara Delia. As you've all been hearing, AI readiness is a strategic initiative of FINOS, and we're excited to have an illustrious panel today. Maybe we'll just go down the line while we've got your titles up: Emily, Ian, Colin, explain how you're working with AI right now.

Emily Prince: It's really great to be here this morning. Emily Prince, I'm the group head of analytics. I think it goes without saying that, for most of us, we can imagine the role of AI having a pretty profound effect. And something I think about sometimes, certainly an area that we focus on, is just how profound that can be from an end-user perspective.

We'll talk about this, I think, a little bit later, but for me, a lot of what we're focused on is how we liberate and open up information in a way that makes it much more accessible, based on the intent of end users.

Ian Micallef: Yeah, and actually, I think that really resonates with me and my role as [00:01:00] well, where I'm here to look after our developers and make it very easy and enjoyable to deliver great software in the very heavily regulated environment that we operate in.

And that democratization of the new capabilities we're seeing come out, and making sure that we have frictionless ways for everyone to access them and develop compelling new features for our clients, is absolutely key.

Colin Eberhardt: Cool. Thank you. I'm Colin Eberhardt, CTO of Scott Logic, and I guess in very simple terms I'm interested in the power that AI has to improve our lives, both our personal lives and our professional lives, across a great many different industries and a great many different sorts of roles.

I think AI has a lot to offer almost everyone.

Cara Delia: Ian, I'm going to start with you. How do you balance the internal [00:02:00] silos and legacy architecture around models and their capabilities for AI adoption? And then I will ask a two-part question around regulators.

Ian Micallef: Yeah. It's an interesting field because leveraging AI in financial services is nothing new. So, you know, we do have existing robust approaches to doing that safely and in a regulatory-compliant way. But on the other hand, we are seeing a dramatic change in capability, particularly around the new era of SaaS-hosted, pre-trained, off-the-shelf large language models, which I think are bringing a number of new challenges, and we do have to break out of the old approaches that we have to really enable them.

And I think the other point I'd just pick up on, one thing that's obviously clear to everyone, is that the accessibility of AI has really changed. And responding to that [00:03:00] and leveraging these new capabilities does require a brand-new look at how we're approaching them.

Cara Delia: And from the other panelists, do you have any thoughts to share or add?

Emily Prince: I think maybe one thing I would add, picking up on the point Colin mentioned earlier: there's a really galvanizing aspect of AI that is very powerful. And the scale of the potential is arguably unlike anything we've ever seen, across lots of different industries and roles at the same time.

And so I think there's an acute focus on: what is the use case? What is the problem that we're solving for? And what does success look like? Buried in there is also a very keen understanding of what risks we are introducing, and whether we have suitability. So I see that coming to the fore.

We get the benefit of the democratization that is presented, making sure we have that discipline on what success looks like, but also: do we have suitability based on the use cases [00:04:00] that we're presenting? And actually, when we think about model risk, that takes you into a segregation between the more deterministic and non-deterministic aspects.

Colin Eberhardt: Yeah, and to extend on that, I think, Ian, you make a fantastic point about AI being accessible to all. I'm glad you didn't use the word democratization; that bugs me every time people say it, the democratization of technology. But, more specifically, one of the things that I think is a challenge for a lot of users of AI is that we've grown up in a world where computers are deterministic.

They tend to tell the truth. Generative AI is non-deterministic, unpredictable, creative, incredibly useful, but it doesn't always tell the truth. It's a new tool, and a very powerful one, but I think we have to form a new mental model for how we're going to engage with AI, and potentially with computers.

Cara Delia: In dealing with regulators, who do want you to understand what is deterministic, and with regulations coming out, how are you [00:05:00] working with regulators in your roles?

Ian Micallef: I think this is an excellent point at which to mention the AI Readiness SIG that Colin and I are co-chairing, and there's a very good reason why our goals are focused around regulatory compliance.

And, for precisely the points raised: we are dealing with an unprecedented step change in capability. We know that's going to continue for a little while yet; there are some very exciting new expansions of that capability in the works. And our regulatory environment of yesterday, our internal controls of yesterday, are not well equipped to cope with that.

And I think both my colleagues here mentioned the non-deterministic nature of the models, which is really key. I think you have to come up with an approach that embraces that and, in a very structured and detailed manner, breaks down the risks and the threats that you're addressing.

And the goal we have, certainly for our collaboration on this, is to come up with [00:06:00] a menu card of mitigants that everyone can leverage, because ultimately we all have the same goal: we all want access to the compelling new features and services that this technology opens up for our clients. But we all have a very important duty to perform in doing that safely.

Collectively, we are structurally important to all the world's economies. If we get things wrong, people don't get paid; they can't make their regular day-to-day transactions. It's absolutely vital that we adopt this new technology in a safe way, but equally, we are commercial organizations, so we have to do that in a very cost-effective manner.

And that really goes back to the power of foundations like FINOS and everything that Gab said earlier, which is that we are all operating under a common regulatory environment, so it just makes sense to come together to do this.

Colin Eberhardt: One little point I'd make there is that human beings are [00:07:00] inherently non-deterministic, and human beings make mistakes. Whether they're working in a call center or whether they're an equity researcher, they make mistakes.

We have regulatory frameworks that accommodate the non-determinism of human beings. Is it possible that we could have regulatory frameworks that accommodate non-deterministic computers?

Cara Delia: Speaking of frameworks: within AI Readiness, there is a framework that we're working on. Do you anticipate that regulators will be able to participate in the building of the framework with the community?

Colin Eberhardt: Yeah, so to explain in a little bit more detail: with the AI Readiness SIG, we have a goal to make it easier for financial services organizations and FINOS members to onboard, develop, and deploy AI technology into production. And to do that, we're following a deeply practical path of creating a governance framework.

So the idea being, we're going to [00:08:00] explore AI architectures, we're going to use the tried-and-tested approach of threat modeling, modeling the risks, and then look to put in place mitigations. Those mitigations won't entirely remove the risk altogether, but they will at least allow you to demonstrate a level of risk reduction.
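
To make the shape of that approach concrete, here is a minimal sketch of what one slice of a threat-to-mitigation catalogue could look like. The threat names, risk descriptions, and controls below are illustrative assumptions for this post, not the SIG's actual framework.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in a hypothetical AI threat-modeling catalogue."""
    name: str
    risk: str                                 # what could go wrong
    mitigations: list[str] = field(default_factory=list)

# Illustrative entries only; a real catalogue would be far richer.
CATALOGUE = [
    Threat(
        name="hallucination",
        risk="model presents false information as fact to a client",
        mitigations=[
            "ground responses in retrieved, citable source documents",
            "require human review before client-facing delivery",
        ],
    ),
    Threat(
        name="data leakage",
        risk="prompt or context includes personal or confidential data",
        mitigations=[
            "redact identifiers before the model call",
            "log and audit all prompts and responses",
        ],
    ),
]

# Each documented control demonstrates risk reduction, not risk removal.
for threat in CATALOGUE:
    print(f"{threat.name}: {len(threat.mitigations)} mitigations documented")
```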

So when you talk about the regulators: things like the EU AI Act, for example, it's not finalized yet, and we don't know what the future AI-specific regulations are going to be. But we do already know that we have to adhere to existing regulations, whether they relate to personal data or whether they relate to data sovereignty, for example.

So through this framework, I feel confident that we will be able to address future AI-specific regulation. But also, there is going to be an open invite to the regulators to [00:09:00] participate, and I'm hoping that we'll be able to speak a common language.

Ian Micallef: Absolutely. Couldn't have put that better.

And I think it's important to acknowledge, which I'm sure all of you already know, that our relationship with the regulators is not an adversarial one. Over the course of our careers, those of us in this room have already seen a huge amount of regulatory change.

And we are an immensely safer industry than we were 10 or 20 years ago. I think we can absolutely anticipate that the regulatory environment is going to continue to evolve, and by coming together and having a collective view on how we adequately manage our risks and how we express them, we can only help the regulators move faster, to our collective advantage.

Cara Delia: So, you can't talk about AI and leave out data. Emily, I'll ask this question of you. You mentioned earlier the question of what we are solving for. So, looking at [00:10:00] data architecture, what do you find are the opportunities and challenges with the use cases that we've mentioned today?

Emily Prince: Yeah, it does come up quite a bit. In all of the pieces of work that I've done, including with customers, this is often sitting at the forefront, quickly followed by the word trust. And what we're seeing is, yes, there are the questions you would expect in terms of quality, which comes back to your question: do you have the quality and intent required for the use case that you're trying to solve for?

But there's another layer beyond that first piece. That first dynamic is not unfamiliar to us; actually, we've been talking about it for decades. But the second dynamic, can I trust the thing that I'm seeing, is new, and it now plays out at scale. When you're surfacing content from a document or a data set in your AI system, is that something that you can stand behind as an organization? Which brings about an interesting dynamic [00:11:00] of which problem you are solving, again. So what is your tolerance in terms of the data that is being surfaced and the problem that you're trying to solve for, and what level of trust do you need in it? Meaning, do you have two people that have validated that information as being suitable for the use case that you are looking at?

That way, you as an organization can stand behind it in presenting a report, or whatever the end product of the use case is. But one of the things that I find very important, again going back to that separation of use cases, is that if you're thinking about this from the perspective of, I need a comparator, I need a highly precise comparator that is going to improve the performance of a human-performed deterministic task, then actually you probably want to be quite open in your specification in terms of the inclusion of data.

But across the board, what we see is a real emphasis on what information is being exposed into your [00:12:00] system and whether you can trust it. And this is something that we're seeing very consistently, a focus that goes beyond the training data sets that are used, to what information is presented and surfaced up through your AI-powered chat interfaces or whatever end product you are using.

I also see it as a real step change. For many years, I've seen a lot of people struggling with: how do I make more of my own proprietary content? How do I do more with it? How do I co-mingle it in a way that lets me get more out of it? And I see this as the point where that actually starts to happen at scale, with the same frameworks that we've introduced for other types of content sets being extended.

So it is a very interesting time.
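
Emily's "two people that have validated" test is essentially the four-eyes principle applied to surfaced data. As a minimal sketch, assuming an illustrative record shape rather than any real system, the gate might look like this:

```python
def four_eyes_approved(record: dict) -> bool:
    """Four-eyes gate: surface content only once two distinct reviewers,
    neither of whom authored it, have signed off."""
    approvers = set(record.get("approved_by", []))
    approvers.discard(record.get("author"))  # self-approval does not count
    return len(approvers) >= 2

# Illustrative usage: a data item destined for an AI-generated report.
item = {
    "source": "research_note_1234",            # hypothetical identifier
    "author": "analyst_a",
    "approved_by": ["analyst_b", "analyst_c"],
}
assert four_eyes_approved(item)  # safe for the organization to stand behind
```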

Cara Delia: Any additional thoughts? No? Okay. You do mention trust, and obviously there are reputational risks associated with AI. Why should we care about the data that the model was trained on? You [00:13:00] touched on it, but could you maybe go a little bit deeper into any trends that you might be seeing?

Emily Prince: Yeah, when you think about it, again, it comes back to accountability, and I think we'll all spend a lot of time on this question: do you feel like you've got accountability for the information that was used in training or otherwise, or not? It's a very simple question but an incredibly important one.

And again, it comes back to which problem we're solving for. Do you have that accountability? And particularly as we see lots of unstructured information becoming available, we're going to see increasing pressure and challenge to introduce other types of information that maybe haven't gone through the kinds of validation that we would historically expect.

So, and actually I'll weave this back into some of the regulatory points made earlier: understanding those guardrails and what we're collectively responsible for, and making sure we've got the explainability and the topology on the data, is of paramount importance. I can't emphasize it enough.
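
As a rough illustration of what explainability and "topology on the data" can mean in practice, here is a sketch in which every answer the system surfaces carries the lineage needed to explain it. The field names and record shape are assumptions for illustration, not any particular standard:

```python
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    """An answer that carries the sources it was grounded on."""
    text: str
    sources: list[str]       # identifiers of the documents relied upon
    validated_sources: bool  # did every source pass upstream validation?

answer = GroundedAnswer(
    text="Q2 revenue rose 4% year on year.",  # hypothetical output
    sources=["filing-2024-Q2", "earnings-call-0417"],
    validated_sources=True,
)

# Refuse to surface anything whose lineage cannot be explained.
assert answer.sources and answer.validated_sources
```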

Ian Micallef: No, I think that's a really key point.

And for me, [00:14:00] this actually links back to selecting the right use case. If you're trying to solve the wrong use case with this latest generation of pre-trained AI models, and you require that absolutely rigid traceability of data, you're clearly going to fail with it. It's just not the right tool for the job.

So I think it really is key that, as we gain familiarity with the capabilities of these models, we always take a step back and think about whether we're actually using the right tool for the job, not just the latest and sexiest one that we've found.

Colin Eberhardt: And I'm going to take a completely different angle on the question.

When it comes to understanding the data that was used to train these foundational models, I think it's incredibly important that we know what that data is. Partly, as you mentioned, to determine the efficacy of the model: is it going to produce the right results? But there's also the legal and ethical side as well.

For a [00:15:00] number of the commercial models, we don't know what data they were trained on. And a number of them are trained on publicly accessible works, whether that's blog posts, journalistic articles, or, let's say, code that was sat on GitHub. And they use the argument of fair use. Fair use was developed as a concept many years ago to support, for example, journalists, allowing them to summarize a book or a film without violating copyright, or researchers, who could perform statistical analysis on bodies of work without violating copyright.

Fair use wasn't developed at a point in time when AI existed, and I do see ethical issues there, knowing, for example, that some of my code that is on GitHub was used to train Copilot. I like using Copilot, but I'm still slightly uncomfortable that I was part of the process of building that model. So I think there's a massive ethical issue here, which [00:16:00] I think is still to play out.

Ian Micallef: Yeah, and actually, sorry, Oscar, I was just riffing on what Colin said there. You'll hopefully have seen the press about Citi's adoption of Copilot. That's definitely not the only AI assistant we're going to offer our developers, but we've moved very quickly to make it available to everyone. And one of the interesting challenges around intellectual property that I personally hit up against there is that there is an IP filter designed to make sure it's not verbatim suggesting code that it's been trained on, which doesn't obviate the issue that Colin mentioned.

But one of the problems I had is: what if you're actually planning to contribute code back to an open source project? What I found is that, when I was trying to work on that from within Citi, the IP filtering was preventing Copilot from making suggestions back to me, because, of course, the code in my [00:17:00] context was sourced directly from GitHub.

Obviously that's a relatively easy thing to fix, but yeah, there are new boundaries in working out how these controls can be applied effectively.

Cara Delia: Let's shift gears a little bit and talk about talent management. You work with developers daily; how would you say they are being affected by AI adoption?

In many ways, they could see it as a hype cycle, or as disruptive. Do you find that the barrier for skills is high or low?

Ian Micallef: Yeah, I'm sure this will be slightly controversial, but at least in our experience, we're seeing technologists now starting to look at how to incorporate the latest generation of models into their solutions.

We have not found that prompt engineering, for example, has been as big a barrier as we initially thought it might be. And actually, at least in our experience, people have picked up those skills, and there's a lot of resource online that helps people leverage the models.

And of course, with each new generation of models that comes out, that [00:18:00] changes again. So in terms of our technical teams' ability to adopt and use the technology, the barrier is much more about how we do that safely than about any skills gap we might have.

Emily Prince: Let me build on that one. One thing I would add is that it depends, again, not to be a broken record, but it depends on what you're solving for.

If you're solving for productivity, then I completely agree about the accessibility. But if you're really trying to use this to differentiate and build new types of products, that's where we start to run into the skills gap. And I'll break that into two parts. One of the areas that we're really building around at the moment is the domain expertise that we have in different segments.

Call that anything from exotics to swaps: lots of different segments, lots of different areas of domain specialism; you could think of model risk in its own right. And actually managing to curate that domain knowledge in a way that gets us the right level of [00:19:00] accuracy, I worry about that on a go-forward basis.

Today it might be fine. We might have all earned our stars and stripes doing the heavy lifting and hard work of an analyst all the way through. But going forward, when we haven't necessarily had our junior talent coming through and earning those stars and stripes, will we get the same level of deep domain expertise? And how do we foster that triangle of domain expertise, technical understanding of the systems that we're building, and, ultimately, critical thinking?

Colin Eberhardt: Yeah, and I'm going to respectfully disagree with Ian to a certain extent, in that I think developers take quickly to the basic task of prompt engineering, and you don't have to be a developer to take to it. Prompt engineering is how you interface with these systems: you direct it, you ask it questions, and you learn how to direct it better.

Funnily enough, in my experience, asking it to reason about what it's doing, or offering it a digital cookie, typically gives a better response, which is a bit weird. [00:20:00] But I think a bigger challenge that people are facing with engineering generative AI systems is, once again, tackling the non-determinism.
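
As a minimal sketch of the kind of nudge Colin describes, here is what a reasoning-first prompt might look like. The task and wording are illustrative assumptions, not anything shown at the panel:

```python
# Hypothetical example: asking the model to explain its reasoning before
# answering, which, as Colin notes, often improves response quality.
question = "Does this clause permit early termination of the contract?"
clause = "Either party may terminate with 30 days' written notice."

prompt = (
    "Think through the question step by step, explaining your reasoning, "
    "then give a final one-word answer (YES or NO) on the last line.\n\n"
    f"Question: {question}\n"
    f"Clause: {clause}"
)
print(prompt)  # send this to whichever model API you use
```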

Within software engineering, we are very used to, and very comfortable with, working in a world of determinism. We lean on our CI and CD systems with a hundred percent test pass; if a test fails, we don't merge the pull request. We operate in a deterministic world. Generative AI does not fit that model.

Whilst the act of prompt engineering, I agree, is something people learn relatively quickly, how you then turn that into a production system that repeatedly operates within guardrails, without prescribing the behaviour in the way that you traditionally would, that's hard.
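
One common way teams square that circle is to replace exact-match assertions with statistical evaluation over repeated runs. A minimal sketch, with a stand-in `generate` function and a toy keyword rubric in place of a real model and scorer:

```python
import random
import statistics

def generate(prompt: str) -> str:
    """Stand-in for a real, non-deterministic model call."""
    return random.choice([
        "The payment failed due to insufficient funds.",
        "Insufficient funds caused the payment to fail.",
        "The transaction did not complete.",
    ])

def score(output: str, required_keywords: list[str]) -> float:
    """Toy rubric: the fraction of required keywords present."""
    text = output.lower()
    return sum(k in text for k in required_keywords) / len(required_keywords)

def passes_guardrail(prompt: str, keywords: list[str],
                     runs: int = 20, threshold: float = 0.7) -> bool:
    """Unlike a deterministic unit test, evaluate the distribution of
    outputs and gate on an aggregate quality threshold."""
    scores = [score(generate(prompt), keywords) for _ in range(runs)]
    return statistics.mean(scores) >= threshold

# The verdict itself can vary from run to run, which is exactly the
# engineering problem Colin describes.
print(passes_guardrail("Why did the payment fail?", ["payment", "funds"]))
```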

Ian Micallef: No, I think that's completely fair.

And the parallel I would personally draw is with leveraging SaaS products in general, because you're dependent on the vendor. You'll be partnering very closely with your major [00:21:00] vendors, but they're going to keep innovating, and one of the massive benefits of going to a SaaS model is that you're generally always on the latest version.

So you don't have that same level of certainty that we had back in the day, when we used to handcraft everything ourselves. And I think this is just, as you say, Colin, another step in that direction.

Cara Delia: So, fast forward to next year, sitting on this stage, talking about AI or generative AI. What do you think the opportunities for open source will be, or what will be the trend then?

Or the big topic?

Colin Eberhardt: I'm going to plug AI Readiness again. One of the problems I have with generative AI is that there's a big gap between the capability that I know it has and what I have in my hands as an end user and as a consumer. There's a massive gap, starting with ChatGPT, which isn't really an end-user product.

It's techies and people who are tech-curious who use it. [00:22:00] The comparable device that the end user has is something like Alexa, and my god, is Alexa dumb in comparison. To me, again, maybe it's an open source opportunity; I think it's an opportunity for everyone.

There's a massive gap between the capability we have and what we're putting in front of consumers as packaged solutions. Opportunities are everywhere there.

Ian Micallef: Yeah, and again, I'm going to shamelessly plug the AI Readiness SIG. What I'm certainly seeing is that we have a whole new set of very accessible capabilities open to us.

And actually, we've got a huge backlog of small, innovative ideas that are working their way through our now somewhat outdated control processes, and that's actually why I think the work that we're sponsoring is so important: we have to unlock that. We have to be able to move at the same pace we do with all of the more familiar technologies.

So [00:23:00] what I very much hope is that next year it's more a story of breaking new boundaries, rather than just accessing the capability that we know is there.

Emily Prince: I'll build on that. I completely agree with the earlier points. Maybe one additional aspect I'll bring in is that this is not just emanating from financial services, or just from the developer community.

This is every single segment. Actually, the point was made earlier that this is for everyone; this is relevant to everyone, from a personal trainer trying to build a business onwards. It is truly across the board, every single spectrum. And actually, it's in those less developed, sorry, less regulated industries that we're seeing some of the fastest innovation actually occurring.

So there's going to be a growing tension as we uphold the regulatory standards, policies, and expectations of this community whilst we see those fast-paced developments happening around us. So I see that [00:24:00] pressure between what is possible and what is accessible growing over the next year.

Cara Delia: Thank you so much to our panelists.

I'd like to thank you for your time and for your insights.