As Head of Data and Innovation, Simon Gomez is responsible for generative artificial intelligence (GAI) at LGT Private Banking. In this interview, he talks about the benefits and limitations of GAI, and why humans (and our faith in them) remain a top priority at LGT.
Simon Gomez: All of LGT's employees were recently given access to a secure internal chatbot, based on ChatGPT. We receive very good feedback from users.
We want to achieve significant efficiency gains by simplifying our employees' work - and ChatGPT is a key part of that effort. To make this possible, we provide the GAI with access to information stored across different areas of the bank, often in places that are difficult to find. Using that pool of data, our chatbot can then help us with a wide variety of tasks, such as searching for and summarising documents and directives, producing meeting transcripts, and coming up with first drafts of texts or e-mails. GAI is our virtual co-creator and sparring partner, supporting us in our day-to-day work and helping us provide better service.
No. We have no intention of making our clients use chatbots. At a private bank like LGT, the top priority is to make sure clients receive individual support and personal advice. We only use the application for internal purposes, and only to support us in our day-to-day work.
We're very open-minded about GAI. When we introduced it last summer, we were one of the first private banks in Europe to use it proactively. But we're also convinced that in our business, the human factor and trust are two essential elements that can't be replaced by an algorithm.
There's a bit of a misconception that technology is neutral, but GAI is fed mountains of fake and unverified news as well as marketing-driven content, and that's something that unsettles clients and investors. As a private bank, on the other hand, we are experts in our field, and we have a clear investment perspective that we can also help our clients implement. GAI can't do that. But it can support us. For example, by making the quality screening more rigorous and helping us cut through the noise and filter out the nonsense.
Quality screening relies on the credibility of the sources, such as the original article or key study. There's no getting around that. GAI can assist by helping us filter out unreliable information and providing trustworthy sources during the research phase. This helps create a more solid foundation for decision-making. However, the analyst is still responsible for the final selection of sources and the assessment.
Studies in the financial sector estimate time savings of up to 40% for research and gaining a general overview of a topic. I use the tool several times a day. The GAI can complete complex tasks, such as conducting a detailed analysis of a survey, in just three minutes, saving me several hours of work.
You can never take GAI results at face value. You always have to verify them. Plus, there are some important stylistic and cultural nuances that have to be considered. For example, sometimes the texts come out sounding too American, which, depending on the situation, can mean I can't use them.
In my view, we are still in the early stages of a journey that will ultimately transform the way we work. Voice control and voice recognition will increasingly replace typing. The real game changer is that computers can now understand our language. We no longer need to learn programming languages to interact with them. In the future, we will interact with applications by voice, and GAI will be able to intervene much more deeply in the entire process chain, taking on larger value-adding tasks.
For the foreseeable future, humans will always play a key role. That's why we have a Compliance & Risk Officer in our AI team. Additionally, under European law, financial institutions are not allowed to offer automated investment advice without being able to explain it. The same applies to staffing decisions. No matter how much testing is done to optimise GAI, the ultimate responsibility still lies with the relationship manager – in other words, with people.
Our goal is to strike a balance where GAI functions as an intern and assistant that supports our people. Despite the hype around GAI and the massive investments it's attracting, we should keep in mind the lesson the Turing Test teaches us: a machine can be considered intelligent if it can converse with a human without the human realising they're speaking to a machine. We're probably close to reaching that point, but if machines do eventually pass the test, it's uncertain how well they will function, and that's a bit concerning. That's why we want to decide how and at what pace to leverage this technology, and we want to do that carefully and on our own terms. We're also talking to our 5,000 employees about this, and we're taking their questions and concerns about GAI seriously. We have no choice but to evolve together, because the technology is here, and it's already being used.