Vasyl Soloshchuk
CEO at INSART
11 March 2021

Why It Is Worth Writing a Custom Framework for ETL Processing

Software development is a complex process that requires strong management skills and a lot of technical expertise to get things done. A lack of experience and well-defined processes on a project can cost a business a great deal of money. Software development is expensive in itself – so why would one spend a team's effort on creating custom frameworks? Let me explain why.

ETL stands for Extract, Transform, and Load – three interrelated steps of the data integration process. Fintech is a big and convoluted ecosystem, held together by data streams and integrations. That's why efficient data processing is vital to the health of companies across the Fintech industry.


Where it’s used

There are many sub-industries of Fintech that involve extensive work with big data sets: Investment, Payments, Personal Finance, Equity Financing, Consumer Banking, Insurance, etc. End-to-end integrations are the mainstay of Fintechs, which makes an efficient ETL process a real advantage for such companies.

To see the full picture of integrations in Fintech, you can use the Fintech Integration Marketplace. This portal shows the current state of integrations for companies in the Wealthtech, Paytech, and Insurance fields, and lets you monitor which types of integrations particular companies are focusing on right now. It's not difficult to see how much potential such integrations have and how beneficial an efficient ETL processing framework can be.

Problems that Fintechs encounter with ETL

Usually, the problems appear because the dataset is very large and it's difficult to process such an amount of data in a timely manner. This can introduce hard-to-trace errors into the data and sometimes cause data loss. It's also necessary to keep in mind the difference between the source and target data storage formats: the fields may differ, so it's crucial to ensure the data transformation is handled according to business rules.

Meanwhile, the challenges described above do not require building a custom framework from scratch. There are three basic steps to make your ETL process better:

  • Divide source data into packages and process the packages in parallel (see the sketch after this list).
  • Use test automation and stick to a systemic approach while testing the transformation-load results.
  • Trace the changes in the target database and have the ability to reverse the latest changes.
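
To make the first point concrete, here is a minimal Python sketch of package-based parallel processing. Everything in it – the package size, the record shape, and the placeholder transformation – is a hypothetical illustration, not the framework described in this article.

    from concurrent.futures import ProcessPoolExecutor

    PACKAGE_SIZE = 10_000  # hypothetical size; tune to your data volume

    def transform_record(record):
        # Placeholder for project-specific business rules.
        return {**record, "amount": round(record["amount"], 2)}

    def split_into_packages(records, size=PACKAGE_SIZE):
        # Divide the source dataset into fixed-size packages.
        for start in range(0, len(records), size):
            yield records[start:start + size]

    def process_package(package):
        # Transform one package; runs in a separate worker process.
        return [transform_record(r) for r in package]

    def run_parallel_transform(records):
        transformed = []
        with ProcessPoolExecutor() as executor:
            for result in executor.map(process_package, split_into_packages(records)):
                transformed.extend(result)
        return transformed

Processing packages in separate worker processes keeps memory per worker bounded and lets a failed package be retried without redoing the whole dataset.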

The solution: Custom framework

Why is it still beneficial to build one? A framework tailored to your project's needs saves a great amount of time on testing. To give you a better view of which processes the framework should accelerate, I'll provide an example.

Say you have a platform for personal investment. Every day, new data about trades and stock-price changes arrives. To have any trades executed, the platform should send the data about them to the custodians that hold the data of each particular client.

Extract

Usually, successful investment platforms integrate data from all the biggest custodians in the world to serve as many clients as they can. The list of custodians includes BNY Mellon, Schwab, Fidelity, Pershing, Folio Institutional, Shareholders Service Group, LPL Financial, and others. Each custodian provides data in its own specific format, which means you have to transform each source format into the format of the final table.
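
The custodian names above are real, but the parsers below are a hypothetical sketch of how per-custodian formats might be normalized into one internal record shape; the file formats and field names are assumptions for illustration only.

    import csv
    import json

    # Registry of per-custodian parser functions; each normalizes a raw feed
    # into the same internal record shape.
    EXTRACTORS = {}

    def extractor(custodian):
        def register(func):
            EXTRACTORS[custodian] = func
            return func
        return register

    @extractor("schwab")
    def parse_schwab(raw):
        # Assumed format: CSV with 'acct' and 'qty' columns.
        return [
            {"account": row["acct"], "quantity": float(row["qty"])}
            for row in csv.DictReader(raw.splitlines())
        ]

    @extractor("fidelity")
    def parse_fidelity(raw):
        # Assumed format: one JSON object per line.
        return [
            {"account": obj["accountId"], "quantity": obj["shares"]}
            for obj in map(json.loads, raw.splitlines())
        ]

    def extract(custodian, raw):
        return EXTRACTORS[custodian](raw)

Adding support for a new custodian then means writing one more parser function, while the rest of the pipeline stays untouched.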

Transform

For the transformation, business rules should be elicited and described in a way that avoids repetition while still covering every specific data-transformation case. To make working with such huge amounts of data possible, divide the source data into packages and process the packages in parallel, as in the sketch above. A custom framework allows you to organize the rules according to your project's needs and manage them effectively.
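
One hedged sketch of how such rules could be organized: each business rule is a small function registered with a predicate that decides which records it applies to, so a rule is written once and reused everywhere it matches. The rule names and fields here are hypothetical.

    # Each business rule is registered once with a predicate that decides
    # which records it applies to.
    RULES = []

    def rule(applies_to):
        def register(func):
            RULES.append((applies_to, func))
            return func
        return register

    @rule(lambda rec: "price" in rec)
    def normalize_price(rec):
        rec["price"] = round(float(rec["price"]), 4)
        return rec

    @rule(lambda rec: rec.get("side") in {"B", "S"})
    def expand_trade_side(rec):
        rec["side"] = "BUY" if rec["side"] == "B" else "SELL"
        return rec

    def apply_rules(record):
        for applies_to, transform in RULES:
            if applies_to(record):
                record = transform(record)
        return record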

Load

When you have the data downloaded from the sources and transformed according to your business rules, it's time to load it into the target database. This process is very important, and it's crucial to monitor it and avoid mistakes: besides data loss, such mistakes can compromise the system's security and clients' data. How can one be sure that the process goes smoothly?

Here's where automated testing comes into play. The data transformation processes are monitored via a table that displays whether a specific process failed or succeeded. Redash panels are used to gather information about particular firms – for example, the number of firms today versus yesterday – and this information helps prevent data loss or duplication. Finally, automated tests that cover all the business logic of the integration ensure data cohesiveness and security.
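
As a small illustration of the "firms today versus yesterday" check, here is a hypothetical sketch in Python with SQLite; the table name, columns, and 10% threshold are assumptions, not the actual schema behind the Redash panels.

    import sqlite3

    def count_firms(conn, day_offset):
        # Count distinct firms loaded on a given day (0 = today, -1 = yesterday).
        query = """
            SELECT COUNT(DISTINCT firm_id)
            FROM loaded_records
            WHERE load_date = DATE('now', ?)
        """
        (count,) = conn.execute(query, (f"{day_offset:+d} day",)).fetchone()
        return count

    def check_firm_counts(conn, tolerance=0.10):
        today = count_firms(conn, 0)
        yesterday = count_firms(conn, -1)
        if yesterday and abs(today - yesterday) / yesterday > tolerance:
            raise RuntimeError(
                f"Firm count moved from {yesterday} to {today}; "
                "possible data loss or duplication."
            )

A check like this can run right after each load and write its pass/fail result into the same status table that the monitoring panels read.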