This article reflected on how financial institutions could use the adoption of the BCBS 239 standard to create value through Big Data and Data Analytics. It also revealed some of the challenges and advantages of going through an organisational data transformation. The following paragraphs focus in greater detail on some of the challenges experienced by financial organisations.
Overcoming resistance to change while enhancing collaboration among different departments:
All involved departments are focused on current BAU activities. Data transformation initiatives are sometimes seen as a burden, something that shifts the focus away from a team's principal activities. If data transformation is not understood and embraced by top management, there can be a misalignment of interests and pushback from affected departments.
On one hand, people working for a soon-to-be-transformed department may feel their roles will become obsolete and see the transformation as a threat. Senior management should clearly communicate the vision for each team and process, and the role to be played by each individual.
On the other hand, for senior management, even if the risk of redundancy may be lower than for the average worker, transformation might not be a priority. Bonuses are usually associated with BAU activities rather than with support for digital transformation programmes.
Lastly, it could be that the benefits of data transformation are not clear to everyone, and some might question its value relative to its cost. This provokes a lack of engagement and distrust among everyone involved.
Implementing new data governance principles and transforming data quality processes:
Although data is key for Financial Institutions (not only because they deal with immense amounts of it and it represents a huge source of costs, but also because they have access to valuable data and could use it to their advantage), it has never seemed to receive the attention it deserves.
Embracing data governance principles is a huge change and affects, to some degree, most areas of a Financial Institution. Agreeing and creating rules that make sense to all and at the same time comply with regulations can be difficult, and coordinating their implementation even more so.
Data governance principles should be general enough to cover all types of data and regulatory requirements, but at the same time adapted to the reality of the financial institution. Most importantly, principles should translate into specific rules, actions and templates that are easily understood and applied by end users.
As for data quality, it is important to identify the key data transformation processes and to gather quality requirements from all internal and external stakeholders. It is also vital to apply the same rules to all data processes, in order to provide the same level of assurance throughout and to simplify the testing of outputs. It is key, but also very demanding, to apply the same testing framework to different systems, with different coding languages, different processing frequencies, and different ways of outputting and transferring data.
Finally, the end-to-end testing process (from requirements through to incident resolution and the presentation of testing results) should be able to provide audit evidence and be adaptable in how testing outcomes are shown. The easier it is to evidence and output the process, the less burdensome it will be to manage regulators and internal and external auditors.
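The idea of one testing framework applied uniformly across systems can be sketched in a few lines. The example below is a minimal, hypothetical illustration (all names and rules are invented for this sketch, not a real framework): rules are defined once and applied to any dataset loaded into a common row format, and the failures are kept as a record that can serve as audit evidence.

```python
# Minimal sketch of a reusable data quality rule set (all names hypothetical).
# Rules are defined once and applied to any feed, regardless of the system
# that produced it; failures are collected as evidence for audit.

def not_null(field):
    """Rule: the field must be present and non-empty."""
    return lambda row: row.get(field) not in (None, "")

def in_range(field, low, high):
    """Rule: the numeric field must fall within [low, high]."""
    return lambda row: low <= float(row.get(field, "nan")) <= high

def run_checks(rows, rules):
    """Apply every rule to every row; return a list of failures."""
    failures = []
    for i, row in enumerate(rows):
        for name, rule in rules.items():
            if not rule(row):
                failures.append({"row": i, "rule": name})
    return failures

# Usage: the same rule set can be reused for any feed once rows are dicts.
mortgage_rows = [
    {"loan_id": "A1", "interest_rate": 0.035},
    {"loan_id": "", "interest_rate": 1.2},   # fails both rules
]
rules = {
    "loan_id_not_null": not_null("loan_id"),
    "rate_in_range": in_range("interest_rate", 0.0, 1.0),
}
print(run_checks(mortgage_rows, rules))
```

Because every feed passes through the same `run_checks` entry point, outputs have an identical shape across processes, which is exactly what simplifies comparing and evidencing results.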
Agreeing data ownership and finding the right data owners:
Since data ownership is a relatively new concept, at least in certain areas, people are often afraid to take on that responsibility. This could be because the extent of the duties and work involved is unclear, or due to a misalignment between the allocated data and the owner's knowledge and responsibilities.
How to divide ownership is also a tough question. It could be by product, by source system, by process, by regulatory report, by business area, etc. It will always depend on the amount and complexity of data in each "box" and on how departments and teams are organised and divided. There will also be grey areas and divergences in some specific cases, but ultimately the borders should be broadly agreeable.
The CDO role or team is key in this process. They should clearly set out in a Data Governance document what is expected from Data Owners or Stewards, and engage with different stakeholders to decide the correct allocation of data to each Data Owner.
Refining metadata and documentation standards:
Having a common understanding of data is difficult in complex, large-scale institutions. On one hand, knowledge is sometimes held by a few individuals but not captured in any dictionary or written down where everyone can access it. On the other hand, data descriptions might be ambiguous and therefore fail to provide complete and clear information. Data is typically documented for a specific purpose or in a certain environment, and the documentation does not anticipate the different stakeholders who might want to use or understand that data in the future.
Providing descriptions, or documenting in general, is not very exciting, and the associated value is not immediately recognised. Therefore, this sort of task should be automated as much as possible, be part of a process or initiative, and be reviewed by a fresh pair of eyes.
Easy but controlled access to data should be an ultimate goal for a Financial Institution, but it serves little purpose if not accompanied by relevant metadata (information about data) and good descriptions.
Here are two examples:
1) Without good data labelling, it becomes difficult to comply with GDPR or RDA requirements. Customer data should be labelled with GDPR personal and sensitive flags, while financial data should be labelled with the related RDA metrics. This provides an easier way to filter or restrict data;
2) Let's imagine a field named "interest" in a table of mortgage information. An end user might ask whether this is a value or a rate, whether it is fixed or variable, a spread or a reference rate, interest paid or accrued, what the decimal rounding is, what the field type is, how it was calculated, what the source is, etc.
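Both examples above can be illustrated with a simple data dictionary entry. The sketch below is purely hypothetical (the field names, labels, and structure are invented for illustration, not a real standard): a single metadata record answers the end user's questions about the "interest" field, and the same GDPR labels can drive an automated access check.

```python
# Hypothetical data dictionary entry for the "interest" field discussed above.
# All keys, labels, and values are illustrative assumptions, not a standard.

interest_metadata = {
    "field": "interest_rate",
    "table": "mortgage_positions",
    "description": "Annual contractual interest rate, fixed component only",
    "data_type": "decimal(5,4)",       # e.g. 0.0350 means 3.5%
    "unit": "fraction (not percent)",
    "rounding": "4 decimal places",
    "source_system": "loan_origination",
    "calculation": "reference_rate + spread, frozen at fixation date",
    "gdpr_labels": [],                 # no personal data in this field
    "rda_metrics": ["credit_risk_exposure"],
}

customer_metadata = {
    "field": "customer_name",
    "table": "customer_master",
    "description": "Full legal name of the borrower",
    "gdpr_labels": ["personal"],       # label drives access restrictions
    "rda_metrics": [],
}

def is_restricted(meta):
    """Label-driven check: any field carrying GDPR flags needs access control."""
    return bool(meta.get("gdpr_labels"))

print(is_restricted(interest_metadata))   # financial field: unrestricted
print(is_restricted(customer_metadata))   # personal field: restricted
```

The design point is that filtering and restriction become a mechanical lookup on labels rather than a judgment call made separately by each consumer of the data.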
Improving data lineage and traceability:
Financial institutions must comply with Risk Data Aggregation requirements, which help them manage and report their risk figures originating from different types of exposures. Both data lineage (data flow or data mapping) and traceability (how concepts are calculated, how data is joined, merged, etc.) should be in place and easily shared with regulators and external auditors.
Most probably, the majority of financial institutions still perform their risk calculations through ETL processes or Excel spreadsheets. On top of that, these calculations are performed by different teams across risk and other departments. Control over data and its quality is therefore very difficult to maintain, because it is demanding not only to implement a consistent data quality framework but also to control access to this crucial data.
While there are tools that provide data lineage by reading from all the different types and sources of data, knowledge about the end-to-end calculations, processes and systems used along the way is spread across the organisation, making it challenging to consolidate knowledge and coordinate efforts.
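At its core, lineage is a graph of which datasets were derived from which. The toy sketch below (all dataset names are hypothetical) records those edges and walks them upstream, so the full path from a risk report back to its original sources can be reconstructed and evidenced.

```python
# Toy sketch of data lineage as a graph (all dataset names hypothetical).
# Each entry records which upstream datasets a derived dataset was built from.

lineage = {
    "risk_report": ["aggregated_exposures"],
    "aggregated_exposures": ["mortgage_feed", "fx_rates"],
    "mortgage_feed": [],   # original source
    "fx_rates": [],        # original source
}

def trace_sources(dataset, graph):
    """Walk upstream recursively; return every original source dataset."""
    parents = graph.get(dataset, [])
    if not parents:
        return {dataset}
    sources = set()
    for parent in parents:
        sources |= trace_sources(parent, graph)
    return sources

print(sorted(trace_sources("risk_report", lineage)))
# → ['fx_rates', 'mortgage_feed']
```

A real lineage tool does far more (column-level mapping, transformation logic, scheduling), but the underlying model — datasets as nodes, derivations as edges — is what lets the end-to-end flow be shared with regulators and auditors.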
As for traceability, organisations face the same issue as with data descriptions: it is hard to homogenise the documentation of metric calculations and data transformations. A data transformation can vary from a simple direct mapping or sum to a complex network of groupings, joins, conditions and intermediate tables. Whoever documents it must understand not only what the code says but also what risk users want from it.

The benefits of data transformation are not immediate: it takes time for financial institutions to start profiting from it. Yet as soon as they overcome the initial pains, they will start to acknowledge the added value brought by this transformational journey.