Enterprise Wide Architectures for Artificial Intelligence

The European Banking Authority (EBA) has conducted a series of meetings over the past few months to explore the state of the art of Artificial Intelligence (AI) adoption in the banking sector and to identify the best regulatory approach to validation processes. These conversations bring to the surface important aspects regarding the transparency of the algorithms, the robustness of the processes, the security of the applications and the ethics of decision-making. I am not new to discussing similar issues with regulators, given my professional risk management background validating the first internal models in the late 1990s. I was therefore pleased to join the debate and contribute with my experience and the IBM point of view.


The banking industry experienced a period of "quantitative exuberance" in the 1990s and early 2000s. Financial innovation was almost a daily routine, fostering fierce (and not always healthy) competition among investment banks and distribution networks (similarly, Fintech innovation is popping up everywhere today). Risk management departments were established and a new profession was created for highly qualified individuals (much as data science is today). The Basel I capital accord (1988) motivated banking boards to embed quantitative risk analysis (economic capital estimates) into economic decision-making and to commit significant investments to middle office and front office transformation in order to qualify for the capital savings granted by internal models (similarly, PSD2 and MiFID II are inviting financial institutions to modernise their banking infrastructure).

From an implementation point of view, financial institutions relied on two different practices:

  • a bottom-up approach centred on the specialisation of algorithms (front office);
  • a top-down approach focused on the integration of risks across all business lines (middle office).

The approaches devoted to specialised algorithms tended to locate the economic value of risk management in the development of sophisticated quantitative methods, separating the analysis of profitability from an aggregated understanding of the interdependency of risk factors. The integrated approaches, instead, relied on the idea of embedding advanced quantitative methods inside Enterprise Wide Risk Management Architectures, with the aim of guaranteeing a joint analysis of all risk factors and supporting a coherent capital allocation. The race to the algorithms (typically dominated by front offices) generated an exciting intellectual ecosystem from the point of view of mathematical research and highly specialised solutions, but it did not favour transparent and robust mechanisms capable of interacting with a business context evolving fast on interdependent markets. The financial crisis was inevitable.

Ultimately, banks adopting Cartesian, deterministic approaches could not find strategic value in their advanced yet isolated quantitative methods, which are always imperfect representations of the real world. Instead, strategic value lies in the capability to support the action of business managers holistically, enabling them to understand what can be measured through risk analysis (in approximation) and what cannot be directly assessed (uncertainty). Only an integrated risk management approach makes it possible to generate an open framework which can be used but also duly criticised, based on the awareness of non-quantifiable uncertainty, on the recognition that data quality is never optimal, and on the appraisal of the weakness of statistical hypotheses in periods of market stress.

In the aftermath of the global financial crisis it became evident that only enterprise-wide architectures, able to foster transparent interaction between decision makers, risk managers and regulators, would make it possible to manage the cost of capital more effectively.

These principles are extremely relevant today when it comes to transforming banking operations with artificial intelligence. The strategic value does not lie in individual APIs and machine learning models, often discussed within innovation centres as Fintech proofs of concept. Algorithms are nothing but mathematical optimisations based on a specialised corpus of information. Instead, strategic value lies in the ability to conceive Enterprise Wide Architectures for Artificial Intelligence, focused on a clear definition of the underlying Information Architecture (IA) across the variety of AI models. In fact, only an appropriate design of the IA in support of the AI algorithms allows transparent interaction between business management and digital decision-making, thus avoiding the creation of unethical and non-auditable black boxes. Transparency and the analysis of potential biases can only be achieved by allowing data scientists to retrace the decision-making process carried out by their AI models, highlighting which elements of the information dataset contribute to the output of the instant lending process, fraud detection mechanism or insurance pricing method. Robustness, transparency and ethics are therefore guaranteed by achieving consistent enterprise-wide architectures into which AI models can fit in a coherent and auditable way.
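To make the idea of retraceable decisions concrete, here is a minimal sketch (not from the article) of how a hypothetical instant-lending score could be returned together with the contribution of each input, so that a data scientist or auditor can reconstruct why a decision was taken. The feature names, weights and threshold are invented for illustration only.

```python
# Minimal, illustrative sketch: tracing which inputs drive the output of a
# hypothetical instant-lending score, so the decision can be retraced later.
# Feature names, weights and the threshold are assumptions, not a real model.
import numpy as np

FEATURES = ["income", "debt_ratio", "months_at_job", "prior_defaults"]
WEIGHTS = np.array([0.8, -1.2, 0.3, -1.5])   # hypothetical calibrated weights
BIAS = 0.1
APPROVAL_THRESHOLD = 0.5

def score_with_explanation(x: np.ndarray) -> dict:
    """Return the approval probability plus a per-feature contribution log."""
    contributions = WEIGHTS * x                  # linear contribution of each input
    logit = BIAS + contributions.sum()
    probability = 1.0 / (1.0 + np.exp(-logit))   # logistic link
    return {
        "probability": float(probability),
        "decision": "approve" if probability >= APPROVAL_THRESHOLD else "refer",
        # Audit trail: which elements of the information set moved the output
        "contributions": dict(zip(FEATURES, contributions.round(3).tolist())),
    }

if __name__ == "__main__":
    # Standardised applicant data (z-scores), purely illustrative
    applicant = np.array([1.2, -0.4, 0.5, 0.0])
    print(score_with_explanation(applicant))
```

Persisting such a contribution log alongside every decision is one simple way an Information Architecture can keep AI outputs auditable rather than leaving them as black boxes.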

Clearly, this is not only a regulatory and compliance issue, but also a fundamental element of business owners' ability to calibrate their mathematical models so that they are consistent with corporate strategies regarding the allocation of risks and the generation of revenues. An example is the potential use of AI for lending operations, which must comply not only with elements of digital viability but also with corporate policies aimed at guaranteeing target levels of diversification, or with business models favouring less attractive customer segments because of a specific geographical or sectorial mission of the financial institution. This is why AI cannot be successfully infused into banking operations as a set of stand-alone practices and APIs, but requires a consistent and strategic architectural design.
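As an illustration only (not from the article), the sketch below shows how an AI credit score might be combined with a corporate diversification policy, here a hypothetical cap on sector concentration, before a loan is approved. All names, limits and figures are assumptions made for the example.

```python
# Illustrative sketch: the AI credit score is only one input; the final lending
# decision also checks a corporate diversification policy (a cap on exposure
# per economic sector). All names, limits and figures are hypothetical.
from collections import defaultdict

SECTOR_CONCENTRATION_CAP = 0.25   # max share of the book in any one sector (policy, not model)

def lending_decision(score: float, sector: str, amount: float,
                     book: dict, min_score: float = 0.6) -> str:
    """Combine a model score with a portfolio-level policy constraint."""
    total = sum(book.values()) + amount
    sector_share = (book.get(sector, 0.0) + amount) / total
    if score < min_score:
        return "decline: model score below policy floor"
    if sector_share > SECTOR_CONCENTRATION_CAP:
        return "refer: sector concentration cap exceeded"
    return "approve"

if __name__ == "__main__":
    current_book = defaultdict(float, {"retail": 40.0, "construction": 55.0, "agritech": 5.0})
    # A strong score can still be referred if it would breach the diversification policy
    print(lending_decision(score=0.82, sector="construction", amount=20.0, book=current_book))
```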

In essence, AI does not exist without IA, and it is from this awareness that we must start transforming banks into digital, more transparent businesses ... augmented by artificial intelligence.

Stay tuned for the next article (05-2019) of my Fintech Journal.

All my books can be found on Amazon or on my author's page, thepsironi.com.



Richard Turrin

Helping you make sense of going Cashless | Best-selling author of "Cashless" and "Innovation Lab Excellence" | Consultant | Speaker | Top media source on China's CBDC, the digital yuan | China AI and tech

5y

As always, great article Paolo. My observation is that AI is being applied to solve individual problems across the spectrum of the bank's digital assets. In one place an AI chatbot, in another an AI-based credit card fraud detection system, all of them disconnected and without any underlying AI architecture. Their haphazard deployment is in part due to the varying age and composition of the bank systems they attach to, which are anything but a well thought out "Information Architecture." Whether this creates the opportunity for the next financial crisis remains to be seen, but clearly, as these systems advance in complexity and handle more complex tasks, there could be trouble coming without an IA.

Paolo Sironi

Global Research Leader in Banking, IBM Institute for Business Value | Bestselling author | Podcaster | Board advisor | International speaker

5y

Getting ready for Copenhagen FinTech Week Thomas Krogh Jensen

Colin Bennett

Global Head of Marketing and Client Experience | Connecting Clients with Capability | Marketing - FCIM | Digital - CITP

5y

Good read. Outside the enterprise, with open architectures promoting the aggregation of algorithms across many providers to form a final service... regulators need a hugely fresh approach. Good architecture is a strong start. Taking your organic model / theories, do we need a commonly evolved algorithmic gut, liver, kidney and pancreas to take in, digest and ‘normalise / humanise’ the incoming data nutrients? Or an equivalent to a ‘penicillin discovery’ for the data age?

Very interesting and relevant piece of work... Congrats Paolo.
