Pandemic COVID19: The Biggest Economic Crisis of the Century

July 22, 2020 by Antonio Latorre

A guide to a quick and healthy reopening of the financial market.

The essential objective of this document is to illustrate for the reader the economic context that the health crisis has created in the economies of the region, the chain of events it is triggering, and what remains to be faced.

We will briefly address the macroeconomic context and the consequences it would bring to the financial system. Next, we will guide the reader through a clear, illustrated explanation of the credit risk management tools that financial institutions have at their disposal. Finally, with those tools clearly defined, we will address the strategies that the most prepared institutions in the market are implementing, so that the reader can identify potential adjustments to their own decisions.

Economic Context Generated by the Covid19 Pandemic

Global evidence has placed the current Covid19 pandemic as the trigger of the largest economic crisis recorded worldwide in the last 90 years. The closure of trade, the forced confinement of the population in their homes, the limitation of work to essential activities, and restrictions on free transit, tourism and travel have resulted in the bankruptcy of many companies (in Chile, data presented by the Superintendency of Insolvency and Re-Entrepreneurship already registers 22 bankruptcies for the period from January to April 2020) and the definitive closure of many others. Perhaps the best-known cases of solvent companies affected by this phenomenon are the regional airlines: some have gone into asset reorganization under Chapter 11 in the US, others have filed for bankruptcy directly, and others have turned to local authorities for a state rescue. Examples like these, though less publicized, abound by the hundreds.

In addition to the health measures needed to contain the pandemic, governments have tried to react with economic mitigation measures which, in one way or another, seek to provide liquidity to the poorest deciles of the population and financing to the companies most affected by the crisis, in a desperate attempt to contain the vicious economic cycle that will inevitably end in high unemployment.

Unlike some OECD countries, assistance in the region has been channeled through state guarantees granted to banks in order to facilitate access to financing. By way of illustration, Finland has proceeded to provide funds directly to companies to carry out new product development or research projects, understanding that the crisis will be a time for investment in development rather than for a prolonged commercial effort. These are not competitively awarded funds; all a company needs to do is submit the project for the State to finance 80% of it.

In economic terms, we can explain the overall phenomenon as a demand shock: as described in Figure N° 1, a contraction in aggregate demand and its well-known shift of the curve to the left.

Demand shock

This contraction in demand has led to an immediate downward price reaction (p2) in those services affected by confinement measures. Airlines, trains, intercity buses, hotels, accommodation, services in general, retail, etc. have tried to promote their products and services at reduced prices to stimulate demand. We have witnessed promotions for products that cannot be used under mobility restrictions, but whose purchase guarantees their use or enjoyment once those measures are lifted. An example is the possibility of purchasing airfare at as little as one fifth of the prices of months before the crisis, to be used within possible sanitary windows on certain future dates.

On the contrary, and as expected in any economy in crisis, demand for goods and services shifts toward the most essential items such as food, hygiene products, health, etc. This generates recognizable increases in the prices of food (flour, potatoes, bread, chicken, meat, etc.), supplies (alcohol, chlorine, disinfectants, masks, gloves, etc.) and medicines (vitamins, flu vaccines, diagnostic tests, treatments for the virus, etc.), which have reached levels that generate recognizable shortages of some of them and price increases of more than 400%, given their greater scarcity in the face of the sudden rise in demand.

While this should show up as a rightward shift of the demand curve (D1) for these items, it must be emphasized that the scale of this growth is much lower, in aggregate demand terms, than the opposite effect generated on the massive demand for the thousands of other products and services that represent much of a country's economy (fuels, transport, etc.). Therefore, these increases do not significantly offset the overall contraction of demand that causes its leftward movement in the previous chart (D2).

At the level of productive factors, labor demand (see Figure N° 2) contracts significantly (DT1 to DT2), generating immediate downward pressure on wages (S1 to S2) and a rise in unemployment (var% = (qt1-qt2)/qt2) in the face of a constant labor supply. It should be noted that, given high unemployment, future uncertainty and the fear caused by the impact of the pandemic, labor supply remains constant, preventing a price adjustment to the new demand for work. The pressure toward informal productive work increases and the supply of services in the informal economy expands greatly, with the well-known impacts on social security systems, health protection systems, tax collection, etc. "Necessity knows no bounds," says the saying, and so any health confinement measure will be resisted by the need to carry out essential trade or work activities that constitute a source of income, however minimal, in order to ensure subsistence. Fraudulent work certificates, overextended temporary permits, excessive movement of people, etc. will abound.

Labour demand

Days ago I helped my 10-year-old son with a school assignment about the culture and traditions of the Maya, their social castes that involved great differences in privileges, and the particular fact that, despite their social differences, they all regularly converged on the market to trade among themselves: a market where cocoa beans were used as currency for vegetables, animals, handicrafts, work items, textiles, etc. It is this kind of need, an essential function of the human being, that has remained intact over the years with no greater difference than the exchange rate, or the currency that represents the redemption values. Yesterday cocoa, today the $/US$/EUR. These days, under confinement, the population will look for ways to generate these exchanges, and this will create immense pressure on the health authorities.

A significant contraction in the demand for labor, such as the one we have seen, has led to a considerable decrease in the availability of employment (qt1 to qt2). This has triggered a sustained rise in the unemployment rate which, according to end-of-May 2020 figures for Latin America provided by INE, DANE and INEI, and by the US Bureau of Labor Statistics, stands at 11.2% in Chile, 21.4% in Colombia, 12.9% in Brazil, 13.1% in Peru, 10.4% in Argentina (Indec, end of April 2020), 24.7% in Mexico (estimated from the ETOE survey, which counted 12.5 million newly inactive people) and 13.3% in the US. We will see it continue to grow steadily in the coming months, at least in Latin America, where labor restrictions generate a severe lag in the recovery of employment (JP Morgan estimates that Chile today, July 2020, is close to 20% unemployment). These record figures will bring with them the worst consequence of the pandemic: a significant drop in the economic activity of countries that were expecting a rapid V-shaped recovery from 2021; instead we will see a U-shape starting gradually from 2021. It is important to note that there are no projections backed by reliable historical evidence that would reasonably allow an estimate of the turning point of the health crisis that is stifling the economic normality of the affected countries. It is now more sensible to wait for a vaccine against Covid19 which, according to the information circulating in the media, will be available no sooner than early 2021.

Only from that moment, and once industrial conditions allow the mass production and distribution of the vaccine, will the emerging countries mentioned above begin to inoculate their at-risk population, later expanding coverage to lower-risk groups.

It is sensible, then, to assume two things: (1) the 2020 base of comparison for world economies will be so low that a gradual return to normal in 2021 will mean reasonable growth against that base. IMF forecasts put estimated 2020 growth for the region at: Chile -7.5%, Peru -14%, Colombia -7.8%, Brazil -9% and the US -5.9%. This will give way to a slightly better 2021, with growth rebounds of: Chile 5.0%, Peru 6.5%, Colombia 4.0%, Brazil 3.6% and USA 4.7%. Finally, we will reach 2022 with full freedom of mobility and continue on the growth path, but at levels smaller than those seen during 2021; and (2) significant mobility restrictions will be reduced or ended from the start of inoculation, allowing the market to adjust and adapt to the new economic reality. Some argue that the herd immunity acquired by the already infected population might reduce future infection rates, which could also favor the gradual lifting of mobility restrictions. There is not yet sufficient evidence on this point, given how much is still unknown about the virus.

Formal Financial System: Quick and Secure Opening Procedure.

Traditional and next-generation risk management tools.

It is hardly necessary to spell out the causal relationship between the impediment to intermediation and the fall in profitability and solvency of financial institutions. A financial institution that cannot securely invest its clients' savings in loans to other clients is unable to perform the most essential of its functions, brokering funds; therefore the return on its assets will fall, which will naturally affect its solvency.

There are those who define the banking players (the main, but not the only, actors in the financial system) as those who "lend money and raise money," but nothing could be further from the truth than that limited definition. Essentially, the banking business rests on its ability to manage the risks and returns of its assets; that is, to consistently manage its expected returns and the risk that arises from them. A fundamental part missing from the "lending money" definition is the need to recover the money lent, i.e. to manage the risk associated with the operation, and, on the other hand, to raise third-party funds at reasonably attractive rates that keep it competitive in funding the loans to be granted. Simply put, if loans are granted to risky populations, the rates charged must offset the increased risk, and expected recoveries should allow the savers' committed investments to be repaid on time. The more efficient this intermediation process, the greater the return to the bank's shareholders and the safer the savers' funds.

Currently, banks and financial retailers have the following three essential tools for managing their credit risk (in this study we will not address other types of banking risk such as operational risk, portfolio risk, Basel indicators, etc., but only those associated with the credit-granting process):

  • Credit policies: These are rules or criteria that define the framework for the act of granting credit. They are usually knockout criteria, i.e. customers either meet a given requirement or they do not. While very easy to apply and operate, they are generally insufficient for predicting risk and are the main drivers of type I and type II errors in admission. That is, they cannot precisely avoid approving credits for those who will not repay them (type I error) or denying them to those who would repay (type II error).

    Examples of these policies are: minimum age of the applicant; non-presence of any delinquent documents in the financial system (or credit bureau); applicant's minimum income; previous negative payment behaviors at the bank; etc.

    Just as there are credit policies that have basic legal and credit risk sense, there are also policies that have historically been used as essential elements of risk control but are highly statistically inaccurate.

    An example of an appropriate risk policy is to require applicants to be at least 18 years of age, and therefore legally responsible for what they sign, since this gives the collection document (promissory note) legal validity. On the other hand, policies such as rejecting anyone with a delinquent document in the bureau report can be extremely vague and irrelevant when it comes to pinpointing risk. As a guide, in a real example from the loan portfolio of a mobile operator in Latam, it was empirically evident that the population holding up to 2 delinquent documents, each of up to US$ 120, had an even lower delinquency rate than the population with no delinquent documents at all. See Figure N° 3.

    30 days past due delinquency rate (as a function of past due documents)

    The explanations for this phenomenon are varied. In general terms, the abundance of providers such as urban toll roads, finance companies, insurers, insurance brokers, and credit cards or automatic payment agreements (PAT or PAC) tied to closed accounts, etc. has left thousands of people unaware of unpaid bills, which are easy to overlook given their low amounts. At the same time, the collection strategy of these creditors consists exclusively of publishing the delinquent document in the commercial bulletin, since collection costs relative to the value of the document exceed the amount receivable. That is, the cost-of-publication versus amount-to-recover ratio makes this the only viable channel for managing collection efforts (as opposed to more expensive ones, such as call centers). As a general rule, it is enough for the client to apply for credit at some institution for the delinquency to surface, after which it is paid and financing flows as if the delinquency had never existed.

    Whatever the merits or criticisms of this tool, it should be noted that, statistically and as a risk control instrument, it has a particularity that makes it especially obsolete and limited: it cannot predict the probability of default per se. That is, it is not an instrument that correctly measures the level of risk, nor does it allow graduating the risk appetite desired by a financial institution. In other words, if the institution wants a 10% rate of 90-day delinquency in its consumer credit portfolio, it will be virtually impossible to hit that target through knockout criteria, because a policy is not a calibrated risk measurement tool the way a scoring model is.

    As a theoretical example, suppose a financial institution's only risk control tool is a credit policy requiring clients to be at least 25 years of age, and the risk of that population in the system is 16%. Then, each time it grants credit (ceteris paribus) to such a population, its result will be that 16% of clients become delinquent. If the same institution now wants to lower its risk to 13%, it will likely be unable to reach that rate: if it sets a new credit policy requiring clients to be at least 30 years of age, with an expected delinquency for that population of, theoretically, 8%, then its result will be an 8% default rate, away from the 13% target.
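The impossibility of hitting an intermediate target with knockout criteria can be sketched numerically. A minimal Python illustration, using only the hypothetical figures from the example above:

```python
# Illustrative delinquency rates of the populations admitted by each
# minimum-age knockout policy (figures from the hypothetical example above).
policy_risk = {
    25: 0.16,  # admit clients aged 25+: expected delinquency 16%
    30: 0.08,  # admit clients aged 30+: expected delinquency  8%
}

def achievable_risks(policies):
    """A knockout policy can only deliver the discrete risk levels of the
    populations it admits; nothing in between."""
    return sorted(set(policies.values()))

target = 0.13
print(achievable_risks(policy_risk))            # [0.08, 0.16]
print(target in achievable_risks(policy_risk))  # False: 13% is unreachable
```

Whatever knockout thresholds are available, the institution can only land on one of these discrete risk levels; a scoring model, by contrast, lets it dial in any cutoff between them.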

  • Scoring Models: This tool, the most popular among retail finance institutions such as banks, is built from polynomial formulas composed of multiple predictive risk variables that together make up an algorithm whose ultimate goal is to estimate the probability of occurrence of a given target event. Typically, such financial institutions define as the objective (or target variable to predict) a given delinquency metric, for example 90 days of delinquency over an observation period of 6 to 12 months.

    Data scientists, in charge of building the models, select from a limited pool of internal variables such as age, gender, income, education, transactional behavior, etc., along with variables that can be acquired from the respective bureaus and from advanced technology companies, and apply preprocessing techniques for their correct treatment and grouping. Once preprocessing is done, the modeling phase itself begins: data scientists gradually enter variables into the polynomial and keep those that improve the model's predictive power against the defined objective (in this case, delinquency), until the maximum possible performance of the formula is reached with the available information. This predictive performance is typically measured with correlation coefficients, coefficients of determination, GINI coefficients, KS statistics, ROC curves, etc.

    Studies conducted by Big Data Scoring indicate, as a simple rule, that the data explains about 85%-90% of the value of a model, while an optimized algorithm (formula) explains only the remaining 10%-15%. In other words, no matter how skillful or sophisticated an algorithm becomes, it can only affect a very limited share of a model's predictive value compared with what the preprocessed data can explain. The value lies in the quantity and quality of the data held.

    The essence of any scoring model is to rank the target population so that the observation (customer) with the highest score, and therefore the lowest probability of default, goes to the top of the list, and the population is placed gradually down to the observation with the lowest score, and therefore the highest probability of default. These rankings, while they represent continuous population distributions, are usually grouped into deciles to facilitate risk prediction in tables referred to as Scorecards.


    By way of example, in the Scorecard shown in Figure N° 4 above, the riskiest decile is 1, then 2, and so on up to decile 10, which groups the clients with the lowest probability of default. Each decile, called a Scoreband, groups clients in bands equivalent to 10% of the population (in this case 7,933 observations each). The entire population is sorted and grouped into these Scorebands, from highest to lowest score. Each Scoreband in turn shows the percentage of bad customers (Bad-%) and good customers (Good-%) that would result from selecting customers according to the points (scores) needed to belong to that decile. In other words, Scorebands allow the risk manager to define a priori the risk appetite desired by the institution and set the cutoff point (600 points, equivalent to the start of decile 6 in this case) needed to obtain the desired default rate given the admission rate. In the example, a cutoff of 600 points yields a total risk of 10% (bad customers) while admitting 6 deciles, that is, 60% of the total population applying for credit.
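The scoreband logic just described can be sketched in a few lines of Python. This is an illustration only: the synthetic scores and bad flags below are assumptions, not the data behind Figure N° 4:

```python
import random

random.seed(1)

# Synthetic portfolio: higher score -> lower probability of default.
# The score range and bad-rate formula are assumptions for illustration.
portfolio = []
for _ in range(10_000):
    score = random.randint(300, 900)
    is_bad = random.random() < (900 - score) / 1200
    portfolio.append((score, is_bad))

# Sort safest first and split into 10 Scorebands; deciles are numbered
# 10 (lowest default probability) down to 1 (highest), as in Figure N° 4.
ranked = sorted(portfolio, key=lambda x: -x[0])
size = len(ranked) // 10

scorecard = []  # (decile, minimum score of the band, cumulative bad rate)
cum_bad = cum_n = 0
for i in range(10):
    band = ranked[i * size:(i + 1) * size]
    cum_bad += sum(flag for _, flag in band)
    cum_n += len(band)
    scorecard.append((10 - i, band[-1][0], cum_bad / cum_n))

for decile, min_score, cum_rate in scorecard:
    print(f"decile {decile}: score >= {min_score}, cumulative bad rate {cum_rate:.1%}")
```

Setting the cutoff at the minimum score of a given band admits everyone above it at the cumulative bad rate shown, which is exactly how the 600-point / 10% / 60% example is read.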

    While this tool represents the pinnacle of risk management, it must be understood that it is not free of problems and/or inefficiencies. These deficiencies are primarily set by the predictive power of the variables the financial institution holds. If the available variables are the traditional ones banks have, plus those of the local bureau, then although predictive levels will be reasonable, they will not be optimal. Thus it is common to see, in Latin American banking, admission models for new clients with a bureau hit (banked) achieving GINI levels of 0.42, or for new clients without a bureau hit (non-banked) of GINI = 0.23. In developed countries these levels are usually around GINI = 0.52 and GINI = 0.45, respectively. This is due to one reason only: the inclusion of Big Data, variables from non-traditional data sources that explain or describe phenomena and behaviors that traditional data cannot.

    The graphical representation of the performance of a state-of-the-art scoring model (which includes big data) versus a traditional model is shown in Figure N° 5, where the X axis is the level of credit admission a bank can achieve and the Y axis is the defined risk level (for example, default or 90-day delinquency).

    GINI: Traditional vs Enriched with Big Data Model

    The straight line (blue) represents a completely random distribution of credit allocation, so the performance of this model is GINI = 0, i.e. the model has no predictive power. The further a curve moves away from that blue center line, the higher the model's level of prediction, or GINI coefficient, which in practical terms means more credits can be placed at the same level of risk. That is, we reduce what we previously defined as type I and type II errors.
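The GINI coefficient mentioned here is commonly taken as 2 * AUC - 1, where AUC is the area under the ROC curve. A minimal sketch, with made-up is-bad labels and scores for illustration:

```python
def gini(labels, scores):
    """GINI = 2*AUC - 1. AUC is estimated as the probability that a randomly
    chosen good customer scores above a randomly chosen bad one (ties count
    half). O(n^2) pair counting, fine for an illustration."""
    goods = [s for s, bad in zip(scores, labels) if not bad]
    bads = [s for s, bad in zip(scores, labels) if bad]
    wins = sum((g > b) + 0.5 * (g == b) for g in goods for b in bads)
    auc = wins / (len(goods) * len(bads))
    return 2 * auc - 1

# Toy sample: labels are is_bad flags, scores as assigned by a model.
labels = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]
scores = [720, 680, 650, 400, 700, 660, 390, 610, 500, 690]
print(round(gini(labels, scores), 2))  # 0.83
```

A GINI of 0 means the ranking is no better than random allocation (the blue line), while 1 would mean perfect separation of goods from bads.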

    In the example of Figure N° 5, if the bank sets a risk level of 15% (cutoff point), its expected admission using the traditional model (green curve) would be 30% (start of decile 4); if instead it uses a state-of-the-art model that includes big data (purple curve), admission would be 42% (decile 5), increasing its intake by more than 40% at the same appetite for risk.

    These scoring models are very common for predicting the systematic risk of mass, recurrent populations, where statistical tests allow future prediction based on past behavior. For this reason, risk models for individuals and SMEs are commonly developed with these techniques, but they usually exclude large corporations and other institutions for which not enough observations are available to build (or stabilize) an adequate algorithm. That is, not enough observations (behaviors) are available for the models to reach statistical significance.

  • Business rules: Finally, the risk management of every financial institution combines policy elements and scores with risk optimization equations, commonly defined as business rules, which establish, for example, that if certain conditions converge, then the customer receives a certain answer.

    The most common business rule is the ABE (Average Balanced Exposure), in which an algorithm determines the credit amount to be granted to a given customer based on their risk and the institution's average risk exposure per client.

    By way of example, in Figure N° 6 the average credit placed by a bank among its customers is US$ 5,000 and its average default is 10%, so its average exposure is US$ 5,000 * 10% = US$ 500. This means the bank's desired average exposure per customer is US$ 500; therefore, if a customer has an expected default level of 5%, then for their exposure to equal US$ 500 the credit granted should be US$ 500 / 5% = US$ 10,000. Conversely, if their expected default is 20%, the amount to grant is US$ 500 / 20% = US$ 2,500. In this way, the bank ensures the same ABE for every customer: less funding for riskier customers and more funding for the less risky.

    Avg. Balanced Exposure
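The ABE rule of Figure N° 6 reduces to a one-line formula, amount = average exposure / expected default. A quick sketch using the figures from the example:

```python
def abe_credit_amount(avg_exposure, expected_default):
    """Amount such that amount * expected_default == avg_exposure,
    keeping the expected loss per customer constant."""
    return avg_exposure / expected_default

avg_credit, avg_default = 5_000, 0.10
avg_exposure = avg_credit * avg_default       # US$ 500 per customer

print(abe_credit_amount(avg_exposure, 0.05))  # 10000.0 for a 5% customer
print(abe_credit_amount(avg_exposure, 0.20))  # 2500.0 for a 20% customer
```

Every customer then carries the same US$ 500 of expected exposure, regardless of their individual default probability.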

    Another traditional, though less complex, example of a business rule is the determination of maximum customer indebtedness (leverage), defined according to the client's risk level and the debt they present in the financial system: each scoreband is assigned a ceiling of consolidated leverage. This leaves room only to finance the remainder of the potential leverage, or to buy portfolios from other banks.
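The leverage rule can be sketched the same way. The maximum leverage multiples per scoreband below are assumed for illustration; the text gives no concrete figures:

```python
# Hypothetical maximum consolidated leverage (total debt / monthly income)
# per scoreband; riskier bands get lower caps. Multiples are assumptions.
MAX_LEVERAGE = {1: 2.0, 2: 3.0, 3: 4.5, 4: 6.0}

def remaining_capacity(scoreband, monthly_income, current_system_debt):
    """Room left to lend (or to buy portfolio from other banks) before the
    client hits the leverage cap of their risk band."""
    cap = MAX_LEVERAGE[scoreband] * monthly_income
    return max(0.0, cap - current_system_debt)

print(remaining_capacity(3, 1_000, 3_200))  # 1300.0 still financeable
print(remaining_capacity(1, 1_000, 3_200))  # 0.0 -> no room to finance
```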

Risk management for a healthy and rapid opening of the financial market

At this point, the reader of this paper will be wondering why we explained the previous risk tools in such detail. Perhaps the best analogy is to understand that these navigational instruments constitute the control panel that the pilot of the aircraft, called the bank, has available to fly it through Storm Covid19. But is that really so?

Historically, CROs (Chief Risk Officers), the risk managers of financial institutions, have been exposed to economic crises of various kinds. In the last 30 years we recall the Asian crisis of 1997, the subprime crisis of 2008 and, regionally, Latin America has suffered the tequila effect in 1994, the tango effect in 2001, currency devaluations, etc.

Much to our regret, dear reader, it is important to note that no matter how seasoned a bank's risk team is, it may have been exposed to, at best, one or two of these crises. None of them has the magnitude of the one looming over us in 2020.

Let's get down to business. As previously indicated, the present Covid19 financial crisis has led to an upward trend in unemployment along with a sustained drop in average customer incomes. These two effects alone will have a significant impact on banks' risk models. It will immediately be reflected in the population, on average, raising its risk, defined as the population with delinquencies in the commercial bulletin of the local bureau. As an example, in Chile today (May 2020) a quarter of the country's population has a delinquency in the commercial bulletin, i.e. 4,815,695 people. This implies that, calculated against the country's economically active population (13 million), 36% would have some reported delinquency. This will continue to increase once the publication restrictions, or the negotiations with financial institutions mandated by the governments in office, have been lifted. The same holds for the rest of the countries of the region.

In order to anticipate the sustained growth of its delinquency rates, the banking industry has unequivocally decided to apply, among others, the following measures: (a) raise the cutoff point (required score) for obtaining credit; (b) tighten credit policies; (c) reduce the average amounts placed in order to reduce the ABE; and/or (d) halt placements, cut campaigns, and close branches (for health reasons among others) until there is clarity about the effects of the crisis.

Of these four measures, only one allows some control of the systemic risk the crisis has generated in the population: measure (a). To this end, banks have decided to raise the required cutoff point in order to maintain their desired level of risk.

This decision is not without cost. It will inevitably cause credit admission rates to fall dramatically. As an example, Charts N° 7 and N° 8 below represent these effects:

  • the flattening of the risk/admission curve (RA1) to the left (RA2): the population now has a higher expected delinquency than the previously modeled population, so at the same level of admission (cutoff point "a") it carries a higher risk. This movement results from maintaining cutoff point "a" over two different risk curves (RA1 and RA2); see Chart N° 7; and

    Increase in Risk, Population
  • to compensate for the greater risk, the bank moves its cutoff point so as to correct for the higher expected risk (point "a" to point "b" in Figure N° 8). This keeps risk at the same 15%, but with lower admission. In the graph, intake falls from 40%, just under 4 deciles (point "a"), to just over 20% admission, decile 2 (point "b"). In other words, admission fell by half at the cost of maintaining the 15% risk.

    Cutoff point adjustment to compensate for risk
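The move from point "a" to point "b" can be sketched as a search over the scorebands: after the shock raises each band's bad rate, admit bands from safest to riskiest while the cumulative bad rate stays within the appetite. The band bad rates below are assumed figures chosen to mirror Figure N° 8:

```python
# Bands ordered safest -> riskiest, each holding 10% of applicants.
# Pre- and post-shock band bad rates are assumptions for illustration.
pre_shock  = [0.06, 0.10, 0.16, 0.24, 0.35, 0.44, 0.52, 0.60, 0.68, 0.75]
post_shock = [0.09, 0.15, 0.24, 0.34, 0.45, 0.55, 0.63, 0.70, 0.77, 0.83]

def max_admission(band_bad_rates, risk_appetite):
    """Largest share of applicants that can be admitted while keeping
    the cumulative bad rate within the risk appetite."""
    cum_bad = 0.0
    admitted = 0
    for i, rate in enumerate(band_bad_rates, start=1):
        if (cum_bad + rate) / i > risk_appetite:
            break
        cum_bad += rate
        admitted = i
    return admitted / len(band_bad_rates)

print(max_admission(pre_shock, 0.15))   # 0.4 -> point "a": 40% admitted
print(max_admission(post_shock, 0.15))  # 0.2 -> point "b": 20% admitted
```

With these assumed rates the same 15% appetite admits 40% of applicants before the shock and only 20% after it, reproducing the halving of admission described above.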

The decision made by this risk manager is correct, since, using the tools currently available, it has contained the greater risk from the shock. The unavoidable side effect, however, is a significant and undesirable drop in admission.

Now the administrator's next challenge is: how to open the market again? How prepared is the current model to predict the new situation? If the cutoff point drops by, say, 100 points, what will the result be? These concerns are very valid and stem from the following elements, which the risk manager knows:

  • Traditional Models with limited capacity for discrimination:
    Traditional scoring models have not been trained to measure the effects of this crisis; they were trained during times of stability to project such times. This is perhaps the most worrying factor, and the manager's fear is well founded. A bank without state-of-the-art predictive risk models that include big data and macroeconomic data may find that its model cannot correctly read the economic downturn, so its reopening of the financial market will be gradual and very slow in order not to take unnecessary risks. The bank's time to market will be sacrificed for gradual risk control. Banks in this position will be the big losers compared with those holding state-of-the-art models that include big data and macroeconomic variables.

    As an example, consider Chart N° 9, which compares two risk models: one with traditional data (green, RA1) and one with big data (purple, RA2). We can clearly see that the model enriched with big data discriminates risk significantly better than the traditional model, so it can admit more clients: approximately 40% admission at point "b" (up to decile 6) against 30% admission at point "a" (up to decile 7), at the same level of risk appetite established by the bank (15%).

    GINI: Traditional vs Enriched with Big Data Model

    Obviously, cutoff points are not set by decile but on a continuous line, so the admission percentages of this exercise have been rounded to the nearest decile.

    Let us remember, however, that the economic shock has significantly increased expected delinquency, so both curves (RA1 and RA2) have flattened to the left and lost admission in order to maintain the risk level. Even so, the credit-loss effect on the RA2 model is much smaller than on the RA1 model, given the distance in predictive power between the two. As a result, the impact of the crisis on admission is lower, because the bank has cushioned the blow with a model whose predictive power is higher than today's. This is the most popular decision banks in emerging markets have started making to reopen the financing tap. If until 5 months ago the industry challenge with these models was to increase credit admission while maintaining the risk level, now the call is to maintain the risk level without significantly losing admission.

    The marginal improvements in prediction expected from a state-of-the-art enriched model over a traditional one are, depending on the segment being modeled, the following:

    • models for No-Hit customers in the bureau (non-banked customers): a 40-80% increase in the GINI coefficient;

    • models for Hit clients in the bureau (banked customers): a 12-25% increase in the GINI coefficient;

    • behavior models (current customers): a 5-12% increase in the GINI coefficient; and

    • SME models: a 25-48% increase in the GINI coefficient.
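    The GINI coefficient these figures refer to measures a model's rank-ordering power and can be derived from the AUC via GINI = 2·AUC − 1. A minimal sketch, using invented scores and default flags:

```python
def gini_coefficient(scores, defaults):
    """Compute GINI = 2*AUC - 1, where AUC is the probability that a
    defaulter receives a higher risk score than a non-defaulter
    (ties count half). Brute force O(n*m), fine for a sketch."""
    bad = [s for s, d in zip(scores, defaults) if d == 1]
    good = [s for s, d in zip(scores, defaults) if d == 0]
    wins = sum(1.0 if b > g else 0.5 if b == g else 0.0
               for b in bad for g in good)
    auc = wins / (len(bad) * len(good))
    return 2 * auc - 1

# Invented example: higher score = higher predicted risk.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
defaults = [1, 1, 0, 1, 0, 0, 0, 0]
print(round(gini_coefficient(scores, defaults), 3))  # 0.867
```

    A GINI of 0 means the model ranks no better than chance; 1 means perfect separation of good and bad customers, so the percentage uplifts above translate directly into sharper rank ordering.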

    There is no simple rule of three to determine the impact that an improvement in the GINI coefficient has on admission. However, Figure N° 10 shows the real effects of variations in GINI coefficients across a sample of more than 300 enriched models in emerging markets, and their effect on increased admission. As we can see, these enriched models bring significant gains in admission and therefore in the comparative marginal EBITDA between models.

    Delta GINI and Delta Admission Rates

    As an example, using Chart N° 9, the distance between admission under the current model (RA1) and under the big-data-enriched model (RA2) is 33% more customers. In other words, for a bank that places 200,000 credits in a normal year, that distance represents approximately 66,666 additional customers per year versus the traditional model.

    Since it is not possible to determine, much less generalize, the impact of the shock on the risk curves (RA1 and RA2), we cannot indicate the magnitude of their displacement. It is nonetheless reasonable to assume that the distance between the two curves will be maintained, this time over a smaller stock of potential loans. As a simple exercise, if the crisis caused a 25% drop in annual placements, the bank in question would fall from 200,000 to 150,000 credit cards with its traditional model (RA1), but could remain at 200,000 credit cards if it had enriched models. That is, it could maintain admission and the volume of credits placed in a year while keeping the desired level of risk. This effect would have a non-negligible impact of $38,250,000 annually, based on average loans of $900 each, after deducting the 15% of credits expected to default per the risk appetite in Chart N° 11.

    Considering an annual EBITDA of $180 per good customer and a loss of $630 per bad customer, the bank would see an average annual marginal EBITDA of $18,225,000 over the next 5 years (assuming an estimated average loan duration of 60 months). This is no small figure for a bank that would generate an average total annual EBITDA of US$ 72,900,000 at full formalization. In other words, the destruction of 25% of annual EBITDA will have been prevented.
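    These figures can be reproduced with a few lines of arithmetic. The inputs come from the text; the reading of the 5-year average as stacking annual cohorts of extra customers is our own reconstruction, so treat it as one plausible interpretation rather than the author's stated method:

```python
# Inputs from the worked example in the text.
extra_loans = 50_000      # 200,000 with RA2 vs 150,000 with RA1
default_rate = 0.15       # risk appetite
avg_loan = 900            # average loan size ($)
ebitda_good = 180         # annual EBITDA per good customer ($)
loss_bad = 630            # loss per bad customer ($)
duration_years = 5        # average duration of 60 months

# Annual placement impact: the extra loans that perform, at face value.
placement_impact = extra_loans * (1 - default_rate) * avg_loan
print(int(placement_impact))   # 38250000

# Average marginal EBITDA over 5 years, assuming cohorts accumulate
# (average stock = 50,000 * (1+2+3+4+5)/5 = 150,000 extra customers)
# while each year's bad customers are written off once.
avg_stock = extra_loans * sum(range(1, duration_years + 1)) / duration_years
marginal_ebitda = (avg_stock * (1 - default_rate) * ebitda_good
                   - extra_loans * default_rate * loss_bad)
print(int(marginal_ebitda))    # 18225000
```

    Under this reading both headline numbers in the text ($38,250,000 and $18,225,000) fall out exactly.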

  • Early Default Models: Scores developed with Big Data:
    As indicated in point (a) above, remember that traditional models, and even those enriched with big data and macroeconomic variables, are usually developed from training samples extracted under normal, stable conditions, in order to be used in similar economic times. By this very design, these models are not useful for predicting one-off events such as deep economic crises.

    Models containing macroeconomic variables (built by advanced analytics companies) can partially adjust for national macroeconomic movements, for example by over-weighting the risk of clients sensitive to those movements while maintaining the risk of clients insensitive to them.

    Examples of variables to include in such models are historical series of unemployment, investment contraction, national consumption, economic growth, exchange-rate and interest-rate movements, etc. These factors are typically included as betas that over-weight or under-weight the risk assigned to each client, or even to the model in aggregate, whenever movements exceed a defined tolerance threshold.
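    The beta mechanism just described can be sketched as follows. The multiplicative form, the parameter names, and all numbers are our own illustrative assumptions, not a specification of any particular vendor's model:

```python
def adjust_pd(base_pd, sensitivity, macro_delta, threshold, beta):
    """Over- or under-weight a client's probability of default (PD)
    when a macro indicator moves beyond a tolerance threshold.

    base_pd     : PD from the client-level score
    sensitivity : 0..1, how exposed this client is to the macro factor
    macro_delta : observed change in the indicator (e.g. unemployment)
    threshold   : tolerance band within which no adjustment is made
    beta        : weight applied to the excess movement
    """
    excess = abs(macro_delta) - threshold
    if excess <= 0:
        return base_pd                     # within tolerance: no change
    direction = 1 if macro_delta > 0 else -1
    adjusted = base_pd * (1 + direction * beta * sensitivity * excess)
    return min(max(adjusted, 0.0), 1.0)    # keep PD in [0, 1]

# Unemployment jumps 4 points against a 1-point tolerance threshold.
print(adjust_pd(0.10, sensitivity=0.8, macro_delta=4.0,
                threshold=1.0, beta=0.25))   # sensitive client: PD rises
print(adjust_pd(0.10, sensitivity=0.0, macro_delta=4.0,
                threshold=1.0, beta=0.25))   # insensitive client: unchanged
```

    The key design point matches the text: the same macro shock moves the PD of a sensitive client while leaving an insensitive client's PD untouched.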

    This is why every risk management function needs to add:

    • an instrument for diagnosing the crisis and its severity, i.e. for taking the early temperature of the nascent risk in the market. This dimension usually corresponds to models developed to measure early delinquency among customers less than 30 days in arrears; and, as a sine qua non condition, in parallel,

    • periodic calibration of the model. As with any disease, throughout the development of the economic crisis the model should be calibrated frequently, i.e. the default indicators and the variables feeding the model should be adjusted periodically, so that as those indicators vary, the model captures the best possible predictability of the situation and allows the course to be corrected in time.

Although elements i. and ii. described above may theoretically seem easy to implement, the truth is far from it. Developing a scoring model capable of predicting early default levels per client requires, among other things, elements that banking does not have in its bag of tricks today: I) Big Data: non-traditional data capable of differentiating, at an individual level, the customers most affected by risk from those who are not. The more data describing these phenomena, the greater the chances of having an adequate measuring instrument.

Some define human insanity as the mania of combining the same elements over and over while expecting results different from those already obtained. On this path there are no options: if no new descriptive variables are available, only one result is possible, the traditional one.

II) On the other hand, it is necessary to have sufficiently capable and timely computing tools (software) and modeling teams (data scientists) to accurately and effectively carry out the tasks of continuously calibrating and rebuilding these models, on a weekly, biweekly, or monthly basis, over and over again, in order to read the path of the crisis correctly.

Nowadays these processes are normally carried out by Artificial Intelligence systems (AI robots) that can, automatically and within minutes, develop models meeting these requirements: guaranteeing consistency in the analyses and quality in the pre-processing of variables, following auditable methodologies, and finally combining all available algorithms to secure the maximum predictive power mathematically possible. These are the techniques typically used by advanced analytics companies.

Once the model described in point (b) (the early default model) is available, the target customer population is ranked with this instrument, and a cross matrix is built by combining this ranking with the one produced by the bank's state-of-the-art internal model for the same population.

In this way, by combining both rankings in a double-entry matrix, you can identify comfort, caution, and risk zones. The graphical representation below (Table N° 11) shows an example of combining two customer orderings: all customers have been ranked with a bank's internal model on one axis and crossed against the ranking generated by the early default score on the other. The resulting matrix shows low-risk green zones on which both models agree, high-risk red zones, and yellow caution zones. This tool ensures that the credit-granting decision engine is correctly reading the evolution of the crisis while also meeting the risk criteria previously established by the internal models. Applying it allows the comfort zones to be reopened with greater certainty than doing so blindly or partially blind (with only one model trained on historical data).
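The double-entry matrix can be sketched in a few lines. The band granularity (5 bands per model) and the traffic-light thresholds below are invented for illustration; a real implementation would calibrate them to the bank's own risk appetite:

```python
def risk_zone(internal_band, early_band):
    """Combine the internal-score band and the early-default-score band
    (1 = lowest risk ... 5 = highest risk) into a traffic-light zone.
    Thresholds are illustrative only."""
    worst = max(internal_band, early_band)
    if worst <= 2:
        return "green"    # comfort: both models agree risk is low
    if worst <= 3:
        return "yellow"   # caution: at least one model is lukewarm
    return "red"          # risk: at least one model flags high risk

# Cross every combination of bands into the double-entry matrix.
matrix = {(i, e): risk_zone(i, e)
          for i in range(1, 6) for e in range(1, 6)}

print(matrix[(1, 1)])  # green
print(matrix[(2, 3)])  # yellow
print(matrix[(1, 5)])  # red
```

Note the design choice embedded in `max()`: a customer only lands in the green comfort zone when both models place them in a low-risk band, which is exactly the agreement condition the matrix is meant to enforce.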

Double Entry Matrix: Internal Score / Early Score (Covid19)