Financial Risks, Stability, and Globalization
Chapter 3
On Internal Ratings, Models, and the Basel Accord: Issues for Financial Institutions and Regulators in Measuring and Managing Credit Risk

Author(s):
Omotunde Johnson
Published Date:
April 2002

In publishing its first consultative paper, A New Capital Adequacy Framework, in 1999, the Basel Committee on Banking Supervision brought unprecedented focus to issues relating to the consistency and robustness of financial institutions’ internal credit risk management processes. Regulators aim to ensure that banks set aside appropriate levels of capital for counterparty risks. In determining an appropriate level of capital, banks need to determine both the level of risk and creditworthiness through their internal rating processes. The issues discussed here are those that Credit Suisse First Boston (CSFB) has faced in building its internal rating process; the majority of comments reflect our experience in structuring our credit process based on the types of risks within our credit portfolio. This paper focuses on two key issues.

  • Requirements for consistent ratings: consistency must be recognized as the key goal of the internal rating process. In particular, the use of well-validated rating models should play a key role in the development of consistency. We describe methodologies that have been used by CSFB to develop our internal rating process.

  • Validation of internal rating processes: we describe a range of qualitative and quantitative methods available to validate internal ratings and detail advantages and disadvantages of each approach.

Definition of Credit Risk

Generally, risk management practitioners have developed risk measures commensurate with the complexity of the risks they manage. In contrast, the credit industry uses a series of ordinal rating grades that were devised at the beginning of the last century!

Text-based definitions of each rating grade are no longer useful for banks wishing to ensure rating consistency or calculate the absolute risk of their credit portfolio. Figure 3.1 shows how we expect definitions of credit risk will evolve to reflect new requirements from more quantitative techniques within credit risk management.

Figure 3.1. The Future Evolution of Rating Definitions

Increasing pressure from changing industry and regulatory requirements may force the pace of progress in rating definitions. Additional requirements include clarifying the impact of economic cycles and time horizons and the need to quantify credit appetite based on ratings.

At CSFB, we see rating definitions rapidly moving beyond the categorization of quantitative probability of default or expected loss estimates, toward defining a credit rating as a categorization of migration potential. We also anticipate a further evolution of “ratings” into a categorization of a probability distribution of losses.

Such a definition of credit risk reflects the fact that a rating describes more than just the likelihood of counterparty default. A rating also contains information about the likelihood that a counterparty will either increase or decrease in credit quality. Standard & Poor’s have started to introduce some of these concepts into their quantitative research of rating transitions (Bahar and Nagpal, 1999). We believe this and other literature will greatly help the process of better defining credit risk.

This evolution of rating definitions is important because each iteration results in a more accurate and complete quantitative description of the credit risk. This allows risk managers to measure more accurately and manage the risk in their credit portfolios.
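The migration view of ratings can be illustrated with a rating transition matrix; the probabilities below are toy values, not agency statistics. Multi-year migration and default probabilities follow from matrix powers of the one-year matrix:

```python
import numpy as np

# Hypothetical one-year rating transition matrix (rows: current grade,
# columns: grade one year later). Grades: A, B, Default. Toy numbers only.
P = np.array([
    [0.90, 0.08, 0.02],   # A  -> A, B, Default
    [0.10, 0.80, 0.10],   # B  -> A, B, Default
    [0.00, 0.00, 1.00],   # Default is an absorbing state
])

# Two-year migration probabilities are given by the matrix product P @ P.
P2 = np.linalg.matrix_power(P, 2)

# Cumulative two-year default probability for an A-rated counterparty:
# direct default plus the migration path A -> B -> Default.
print(round(P2[0, 2], 4))
```

A rating defined this way carries the full migration profile, not just a point-in-time default likelihood.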

External Ratings: The Role of Rating Agencies

Recent articles discussing the current status of the Capital Accord reform process have focused on the role of the major rating agencies within the latest proposals from Basel.1 A central issue that has dominated this discussion is the relative imbalance in coverage of U.S. domiciled entities by Moody’s and Standard & Poor’s relative to other markets.

The role of the rating agencies within a reformed Capital Accord is predetermined by their being the only globally accepted benchmarks. For this reason, the credit risk management functions of many financial institutions have been built on the basis of methodologies comparable to the major external agencies. As a key objective of the reform process is to build on approaches already inherent and actively used by banks, we believe external rating methodologies must play an important role in any reform process.

This role of rating agencies is also important given the standing of public ratings within financial markets. Names that are highly rated and well researched by both major rating agencies do not receive the same amount of analysis at many banks as those that are not. However, reliance on the rating agencies to provide background analysis and ratings for externally rated names also deepens the penetration of rating agency methodologies into banks’ internal rating processes.

Use of External Rating Scales

At CSFB, we decided some time ago to wholly adopt the rating scales used by the rating agencies because it made our internal rating process more consistent with names that were publicly rated within the portfolio. Other factors that continue to promote the use of these scales are the following.

  • Publicly available data: Moody’s and Standard & Poor’s have made their internal default and recovery information available publicly. These data products are now widely used by the industry to feed models used within credit risk management processes.

  • Quantitative ratings: rating agencies have begun to develop more quantitative ratings, which provide useful benchmarks for internal rating models.

  • Market forces: the credit derivatives and asset securitization markets have requirements for public ratings.

Key Issues for Regulators

In addition to well-documented issues of rating agency coverage and potential conflict of interest, the following issues must also be resolved.

Cherry Picking

Basel has made it clear that it wants to remove the opportunity to “cherry-pick” between different external ratings. Table 3.1 demonstrates the extent to which Moody’s and Standard & Poor’s assign different ratings to individual issuers. At CSFB, we use a standard rule that analysts must take the lower of the two ratings. However, experience suggests that the perceived quality of research within a region or sector, as well as the profile of individual research analysts, would be the most important driver of rating preferences within an ideal rating process.

Table 3.1. Differences in Issuer Ratings: Moody’s Versus Standard & Poor’s
(Percent)

| By rating notch | Banks: U.S. | Banks: Other OECD | Banks: EM | Banks: Total | Corporates: U.S. | Corporates: Other OECD | Corporates: EM | Corporates: Total |
|---|---|---|---|---|---|---|---|---|
| −3 (S&P < Moody’s) | 3 | 0 | 1 | 3 | 5 | 2 | 5 | 5 |
| −2 (S&P < Moody’s) | 5 | 3 | 9 | 5 | 9 | 9.2 | 9 | 9 |
| −1 (S&P < Moody’s) | 16 | 15 | 23 | 17 | 28 | 29 | 33 | 29 |
| S&P = Moody’s | 32 | 49 | 42 | 38 | 40 | 43 | 35 | 39 |
| +1 (S&P > Moody’s) | 36 | 26 | 17 | 29 | 15 | 15 | 13 | 15 |
| +2 (S&P > Moody’s) | 8 | 7 | 5 | 7 | 2 | 2 | 3 | 2 |
| +3 (S&P > Moody’s) | 0 | 0 | 3 | 1 | 1 | 0 | 2 | 1 |
| Total | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |

| By BIS bands | Banks: U.S. | Banks: Other OECD | Banks: EM | Banks: Total | Corporates: U.S. | Corporates: Other OECD | Corporates: EM | Corporates: Total |
|---|---|---|---|---|---|---|---|---|
| −3 (S&P < Moody’s) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| −2 (S&P < Moody’s) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| −1 (S&P < Moody’s) | 11 | 8 | 18 | 12 | 16 | 17 | 18 | 17 |
| S&P = Moody’s | 70 | 82 | 72 | 73 | 76 | 73 | 72 | 75 |
| +1 (S&P > Moody’s) | 19 | 10 | 10 | 15 | 8 | 10 | 10 | 8 |
| +2 (S&P > Moody’s) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| +3 (S&P > Moody’s) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |

Data used: 2,600 issuers with both an S&P and a Moody’s rating.
Sources: Moody’s Investor Services, Rating Delivery Service; Standard & Poor’s, Electronic Rating Information Service.

Selection Criteria

Basel has already indicated that it needs to strengthen the criteria it listed in the June 1999 proposals. We agree that there is a need to set standards that sufficiently distinguish between external credit institutions. However, we see two issues that must be addressed within this process:

(1) While we believe that a global perspective is crucial to effective credit risk management, some smaller niche rating agencies are well recognized due to the strength of individual analysts or their specialist knowledge within certain markets. The conflict created by these two different perspectives must be addressed.

(2) Within a number of rating agencies, there is a different demand for non-investment-grade research than for investment-grade research. This means a rating may become “stale” if it has fallen below investment grade or is part of a sector with primarily non-investment-grade participants.2

Number of Regulatory Grades

We consider that an approach using more rating grades is preferable. If a counterparty lies at the boundary of two rating grades, a larger number of grades significantly reduces the capital impact of a one-grade rating difference.
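The boundary effect can be made concrete with a small numerical sketch; the per-bucket capital charges below are purely illustrative, not proposed Basel risk weights:

```python
# Hypothetical capital charges (percent of exposure) per regulatory rating
# bucket. The coarse scale has 4 buckets; the fine scale has 8 buckets
# spanning the same overall range. All numbers are illustrative.
coarse = [1.6, 4.0, 8.0, 12.0]
fine = [1.6, 2.8, 4.0, 6.0, 8.0, 10.0, 12.0, 12.0]

# For a counterparty sitting at a bucket boundary, a one-bucket rating
# disagreement moves capital by the step between adjacent charges.
coarse_step = coarse[2] - coarse[1]   # step in percentage points, coarse scale
fine_step = fine[4] - fine[3]         # step in percentage points, fine scale
print(coarse_step, fine_step)
```

With finer buckets the step between adjacent charges shrinks, so a borderline rating disagreement carries a smaller capital consequence.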

Internal Ratings

Elements of the Internal Rating Process

As many institutions define their internal rating processes differently, it is fundamental that minimum standards are established for the “model” approval of internal ratings. Although rating “models” form a critical part of the internal rating process, Basel should focus development of minimum standards on banks’ overall internal rating processes, not just the rating models used within these processes.

In this section, we outline basic requirements for a robust credit risk management process. All these elements must be included in the minimum requirements for use of internal ratings. The framework outlined below also includes other processes that work both to reinforce the level of rating consistency and to reduce the level of credit losses.

Figure 3.2 represents the actual structure of the Credit Risk Management function within CSFB. We specifically have not included elements of credit portfolio management and credit risk models within the scope of this paper to focus exclusively on internal rating processes and management of individual counterparty risks.

Figure 3.2. Critical Elements in the Credit Risk Management Process

Processes required to set globally consistent ratings are as follows.

Credit culture: a definition of acceptable credit risk that is known and applied throughout the organization, such that the risks taken reflect stated risk appetite.

Rating model(s): rating models should provide the foundation of the internal rating process to ensure consistency of approach to the setting of ratings.

Credit policy: outlines how the internal rating process should be applied to various transactions and facilities.

People: the presence of a robust, validated rating model is the single most important tool to reinforce rating consistency; however, experienced, high-quality people are crucial to understanding the interaction of all tools within the process and, importantly, recognizing when the process might not lead easily to the correct result.

Risk review: analysis and audit of previous decisions on the credit portfolio.

Credit forum: setting of internal rating methodology for particular portfolio segments.

Delegated authorities: ensure that the most appropriate level of credit sign-off approves transactions, or facilities, culminating in the credit forum/committee.

Role of Rating Models

Although the industry has generally adopted quantitative methods for the risk management of retail banking products, the Basel Committee on Banking Supervision (2000) listed a wide range of practices employed for the risk management of other product types. Basel classified these approaches into judgmental, constrained expert systems, and statistical modeling, as shown in Figure 3.3.

Figure 3.3. Spectrum of Internal Rating Processes

Judgmental Approach

We believe it is difficult to assign internal ratings consistently using a judgmental approach: it does not allow easy comparison across jurisdictions, and the “strength” of the rating process rests on individuals, who have no means of checking whether they are assigning ratings correctly.

A judgmental approach is also flawed in that there is no transparency to show how any individual counterparty may have been rated differently from the approach taken for the remainder of a portfolio: individual credit analysts or raters can be either too optimistic or too conservative in their credit assessments.

In comparison, within a model-constrained expert system, a credit approver can clearly see the level of discretion applied to move the final rating away from the model’s output to account for any factors not fully captured within the model itself.

Statistical Models

Basel has noted that statistical models have a more prominent role in small business lending. We believe application of rating models will become more commonplace within credit risk management across all business types. Wider adoption of rating models will be driven by banks’ requirements for consistency and validation, rather than by their use as stand-alone substitutes for the internal rating process.

Internal credit rating models should provide the foundation, or starting point, of the internal rating process. However, the process of assigning ratings should not be based solely on a model; adoption of statistical or other models as any form of substitute for a well-structured internal rating process should be avoided.

A More Balanced Approach?

It is apparent that Basel has given much thought to the linkage between use of rating models and validation of internal ratings. The use of rating models within an internal rating process should become a minimum requirement because of the need to demonstrate rigorously the quality of the internal rating process.

It is also critical that implementation of rating models within an internal rating process be structured to accommodate the changing appropriateness of rating models.

Within the industry, we see insufficient distinction of the quality of rating output across different sectors and regions. Vendors of many third-party, statistical-based models do not focus on this characteristic when selling their products.

CSFB’s own rating model is quantitatively based. Since introducing this model over five years ago, CSFB has developed a framework for using rating models that uses appropriateness to determine how the output of the model is interpreted. Appropriateness adds robustness to the internal rating process by utilizing the relative strengths of quantitative methodologies and credit experience within all credit decisions.

The Credit Rating System

Within CSFB, our credit rating model has been developed within a workflow system called the Credit Rating System, or CRS. CRS is an internal database of all information affecting ratings, as well as an internal credit rating methodology (see Figure 3.4).

Figure 3.4. Functionality of the Credit Rating System

CSFB adopted a model-based approach to internal ratings for the following reasons:

Objectivity: Although ratings are determined by either a committee or delegated authorities, a model is important to remove rater bias.

Transparency: Transparency allows greater focus on aspects of rating that are not covered within the model. This also makes it easier to assess the impact on a credit rating of risk mitigants, such as state support or collateral.

Comparability: Rating models support those processes seeking to benchmark against the ratings output of the major rating agencies; they do so, among other ways, by describing the key risk drivers that influence the rating agencies’ output.

Building Credit Rating Models

There is a wide range of approaches that can be applied to the process of modeling ratings. Some popular models seek to model many factors, while others (such as many statistical models) provide an output derived from a relatively small number of inputs. These differences add further complexity to the issue of determining minimum standards for model-based approaches to internal ratings. In Table 3.2, we briefly list some of the advantages and disadvantages of popular approaches to modeling ratings.

Table 3.2. Advantages and Disadvantages of Various Modeling Approaches

| Approach | Advantages | Disadvantages |
|---|---|---|
| Multi-discriminant analysis | Well known; Z-score model well understood; long established (Altman, 1968); objective; transparent | Focus on bankruptcy, not rating |
| Linear regression | Models internal ratings; long established (Horrigan, 1965); good predictive capabilities; objective; transparent | Nonlinear effects not included |
| Expert systems | Models complex relationships; includes qualitative factors; objective | Not statistically robust; requires wide team of experts; can be time consuming to implement |
| Neural networks | Captures complex relationships; no preconceived assumptions required | Not transparent; can be time consuming to implement |
| Merton approach | Market-based; based in financial theory; objective; timely; measures probability of default | Does not replicate internal ratings |
| Bond spread analysis | Market-based; timely; measures probability of default | Does not replicate internal ratings; other factors, such as liquidity, affect credit spreads |

CSFB’s Approach to Quantitative Ratings

The goal of the rating models within CRS is to replicate the rating methodology of the major rating agencies. However, CRS is part of an overall internal rating process that, itself, does not seek to solely replicate public ratings.

As our initial, principal aim was to employ an established quantitative approach to reproduce agency ratings, the linear regression technique was chosen first. Over time, this framework has been extended, so that the present models are hybrids incorporating elements of nonlinear and expert, rule-based approaches.

These models incorporate financial and market data, sector-specific variables, and qualitative inputs. Experience has shown that it is important to use all these types of information, to be able to distinguish credit ratings adequately. Examples of each are shown in Table 3.3.

Table 3.3. Different Types of Information Used as Input to CRS

| Information Type | Examples |
|---|---|
| Financial | Return on assets, equity; asset turnover ratios; net profit margins; coverage ratios; leverage |
| Market | Credit spreads; equity values; market volatility |
| Sector-specific | Contingent liabilities; value of reserves; asset demographics |
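The regression approach described in this section can be sketched as follows; the ratio weights, score scale, and notch mapping are all hypothetical (not CSFB's model), purely to illustrate how financial inputs are combined into a score and mapped to an agency-style notch:

```python
# Illustrative linear rating model: combines financial ratios into a score
# and maps it to a rating notch. Weights and thresholds are hypothetical.
NOTCHES = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]

def score(roa, leverage, coverage):
    """Linear score: higher means better credit quality (hypothetical weights)."""
    return 2.0 * roa - 1.5 * leverage + 0.8 * coverage

def to_notch(s, lo=-3.0, hi=5.0):
    """Map the score linearly onto the notch scale, clipping at both ends."""
    frac = (hi - min(max(s, lo), hi)) / (hi - lo)   # 0 = best, 1 = worst
    idx = round(frac * (len(NOTCHES) - 1))
    return NOTCHES[idx]

# Strong counterparty: good returns, low leverage, high interest coverage.
print(to_notch(score(roa=1.2, leverage=0.5, coverage=4.0)))
```

In practice the weights and the score-to-notch mapping would be fitted against a sample of rated counterparties rather than fixed by hand.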

Model Development

Rating model development involves the following stages (see Figure 3.5):

Figure 3.5. Process Flow Diagram for Rating Model Development

Inputs: Typically, a large set of accounting measures and ratios is derived from published research, analyst opinion, and default experience. Where possible, the relative importance of these measures is derived from rated counterparties.

Analysis: Analysis is performed on subsets of the variables and the best performing models are then selected for more detailed analysis.

Validation: Several statistical tests are performed on the model to assess its performance across different geographic regions, different sizes of firms, and investment-grade and non investment-grade firms. In addition, expert opinion is sought on the practical application of the model and on how it might fit into the overall internal rating process.

Sign-off: All models must be formally approved.

Maintenance: Rating models are periodically validated with performance measured against internally agreed benchmarks.

Model Performance

At CSFB, we have developed comprehensive tests to assess the performance of rating models and to better understand how to apply them. The goals of such tests are to

  • measure how well the model reproduces the external agency benchmarks; and

  • provide understanding of areas where further development is required.

The benefit of inclusion of industry-specific variables becomes clear through the analysis of different approaches. Figure 3.6 shows performance of two models against agency ratings. The two models are a sector-specific model for the food industry and a corporate model that uses the same variables to rate all corporate counterparties.

Figure 3.6. Comparison of Sector-Specific Food Model with Sector-Unspecific Corporate Model

(Percent)

The figure shows correspondence to external ratings for a portfolio of food companies; a “perfect” model would have 100 percent of estimated ratings with 0 notches of error. Notch differences are measured on the rating agency scale: where a company is rated A by an agency and the model estimates that rating at A-, the difference is -1 notch.
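The notch-error measure described above can be computed directly; the rating scale excerpt and the portfolio of (agency rating, model rating) pairs below are hypothetical:

```python
from collections import Counter

# Excerpt of an agency-style rating scale; index differences give notch errors.
SCALE = ["AA", "AA-", "A+", "A", "A-", "BBB+", "BBB", "BBB-"]

def notch_error(agency, model):
    """Negative means the model rates below (worse than) the agency."""
    return SCALE.index(agency) - SCALE.index(model)

# Hypothetical portfolio of (agency rating, model rating) pairs.
pairs = [("A", "A"), ("A", "A-"), ("BBB+", "BBB+"), ("A-", "BBB+"),
         ("BBB", "BBB"), ("A+", "A")]

errors = Counter(notch_error(a, m) for a, m in pairs)
exact = errors[0] / len(pairs)   # share of names with 0 notches of error
print(errors, round(exact, 2))
```

Tabulating the full error distribution, rather than a single accuracy figure, shows whether the model is systematically optimistic or conservative.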

This result, showing that the sector-specific food model is better calibrated than the sector-unspecific corporate model, illustrates the benefits of specific sector variables within rating models. Some examples of other sector models developed within the CRS are shown in Figure 3.7. The dotted lines enclose the range of notches that cover one risk bucket as proposed by the Bank for International Settlements (Basel Committee on Banking Supervision, 1999a).

Figure 3.7. Examples of Performance of Credit Rating System (CRS) Ratings Within Different Sectors

(Percent)

Tests of model performance can be graphically illustrated in other ways, for example, to show goodness of fit. An example of this is the Gini curve, which measures how similarly counterparties are distributed over the internal and benchmark rating scales (Figure 3.8).

Figure 3.8. Gini Curve for Integrated and Refining Segments of the Oil Sector

(Percent)

If the distributions of counterparties under both the external ratings and the internal model were identical, a straight line would be produced. In this example, the model’s distribution is close to that of the benchmark portfolio. However, this test tells us only about the distribution of counterparties over the rating scale, not whether the actual rating of a given counterparty is correct. Hence, different tests need to be combined in determining model performance.
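A Gini curve of this kind can be sketched by plotting the cumulative share of counterparties on one rating scale against the other; the per-grade counts below are hypothetical:

```python
import itertools

# Hypothetical counts of counterparties per rating grade (best to worst)
# on the benchmark (agency) scale and on the internal model scale.
benchmark = [5, 20, 40, 25, 10]
internal = [4, 22, 38, 26, 10]

def cumulative_share(counts):
    """Cumulative fraction of counterparties at or above each grade."""
    total = sum(counts)
    return [c / total for c in itertools.accumulate(counts)]

# Plotting one cumulative distribution against the other traces the Gini
# curve; identical distributions would lie exactly on the 45-degree line.
curve = list(zip(cumulative_share(benchmark), cumulative_share(internal)))
for x, y in curve:
    print(round(x, 2), round(y, 2))
```

As the text notes, two portfolios can trace near-identical curves while individual counterparties are rated quite differently, which is why distribution tests must be paired with name-level tests.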

Model Appropriateness

In practice, the performance of a given rating model is not uniform across sectors, regions, counterparty sizes, and levels of risk. This fact is not well understood within the credit industry, where a single model is often applied uniformly. Figure 3.9, which shows that the average error and standard deviation of error vary across a number of sectors and regions, illustrates this point. It also shows the impact of factors that are not included within rating models, such as implicit state support for electrical and gas utilities in Europe and Japan. This type of analysis allows the model output to be interpreted to determine how it should be applied within the credit assessment process.

Figure 3.9. Derivation of Model Appropriateness

(Model grading for oil and utilities sectors)

At CSFB, we have developed model appropriateness criteria to account for these differences within our internal rating process. Appropriateness is determined by

  • statistical tests on the model’s performance;

  • the quality of information in a particular jurisdiction;

  • other jurisdiction aspects; and

  • the impact of nonquantifiable information.

Severity of Loss

Importance of Loss-Given Default

The intention of the reformed Basel Capital Accord is to have a regulatory framework in which the amount of capital that an institution holds as a cushion against unforeseen credit losses is more accurately linked to the absolute amount of credit risk being taken. Both probability of default and loss in the event of default associated with an individual transaction must be assessed for a complete perspective on credit risk.

In addition, many institutions are moving to a framework where risk is measured and managed using economic capital. To calculate economic capital accurately, the credit risk estimates used should be on a basis equivalent to other risks—i.e., the estimates should be based on loss amounts.
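Putting credit risk estimates on a loss basis combines the default and severity components. A minimal sketch, using the standard expected-loss decomposition EL = PD × LGD × EAD with illustrative numbers (not figures from this chapter):

```python
# Expected loss on a single facility, using the standard decomposition
# EL = PD * LGD * EAD. All input values are illustrative.
def expected_loss(pd, lgd, ead):
    """pd: one-year default probability; lgd: loss-given-default fraction;
    ead: exposure at default, in currency units."""
    return pd * lgd * ead

# Sub-investment-grade counterparty, senior unsecured exposure (illustrative).
el = expected_loss(pd=0.012, lgd=0.55, ead=10_000_000)
print(round(el, 2))   # expected annual loss in currency units
```

Two facilities with the same probability of default but different LGDs carry materially different expected losses, which is why the chapter argues that LGD belongs in the rating itself.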

Much of the published debate so far has assumed regulators will opt to link internal ratings to appropriate probability of default buckets and use a transformation to calculate appropriate capital allocations. Although this approach would represent a vast improvement on the existing Basel Capital Accord, we believe it should be an interim step.

We expect regulators to prefer that capital be set using ratings that are defined using an expected loss metric, rather than probability of default, in isolation. Regulators have indicated their view that many banks have spent much more time focusing on rating probability of default than they have on loss-given default (LGD). A reformed Capital Accord should allow for banks’ internal processes to evolve much more robust approaches to measuring LGD. This approach would not only give incentive to all banks to put in place better risk management processes than they have today, but also would allow for the inclusion of better risk estimates within regulatory capital calculations as and when they become available.

Why LGD Is Perceived as Hard to Model

Many institutions have difficulty producing an estimate of LGD at a portfolio level. This is because evidence suggests that LGD varies considerably by counterparty, point in the economic cycle, and the specifics of the given transaction. In addition, sufficient volumes of meaningful recovery data from actual defaults are difficult to collect. In this section, we give a brief review of the publicly available research on LGD estimation in order to illuminate the issues surrounding estimation of LGD values at a portfolio level.

Variability of Average Recovery Rates

There have been several different studies analyzing the historical recovery rates of defaulted public debt. Each study has measured average recovery using a different sample of defaulted debt. One issue apparent from these studies is that different industry practitioners each use different methodologies to calculate recovery rates.

The very large variation in historically observed recovery rates, coupled with a small amount of well-documented recovery data, is one of the reasons LGD is difficult to estimate on an individual facility basis. Furthermore, the relatively low number of defaults that are documented sufficiently to allow accurate pricing of recovery values hinders statistical analysis of how recovery varies with counterparty and transaction specific factors.

Studies of LGD include the following:

  • For publicly traded debt, Hickman (1958) found an average recovery of 34 percent for debt defaulting in the period 1930-1943.

  • Altman and Kishore (1996) calculated the average recovery rate for more than 700 bond issues that defaulted between 1978 and 1995 to be 40 percent.

  • In their latest study of LGD, Standard & Poor’s (1999) found an average recovery value of 44 percent for 533 rated issuers that defaulted between 1981 and 1997.

  • Moody’s (1999) calculated a figure of 45 percent for the average recovery observed for 1,113 rated issues that defaulted between 1977 and 1998.

One major difficulty in predicting recovery rates is the large degree of variation in the level of recovery observed between individual defaults:

  • Altman and Kishore obtained a standard deviation of 26 percent on their 40 percent average recovery rate.

  • Moody’s measured a standard deviation of 27 percent for their 45 percent average recovery rate.

  • Standard & Poor’s found a similar standard deviation of 26 percent for their 44 percent recovery rate.

Variation with Seniority

One alternative to an “across the board” recovery rate would be to stratify recovery rates by seniority. Moody’s, Standard & Poor’s, and Altman and Kishore all agree that the average recovery rate increases with the seniority of the issue. Table 3.4 details average recovery rates by seniority as calculated by Moody’s. Standard & Poor’s and Altman and Kishore give similar results.

Table 3.4. Moody’s Statistics for Defaulted Bond Prices (1977–98)

| Seniority and Security | Number | Average Recovery |
|---|---|---|
| Senior secured bank loans | 98 | $70.26 |
| Equipment trust | 27 | $65.93 |
| Senior secured public debt | 118 | $55.15 |
| Senior unsecured public debt | 338 | $51.31 |
| Senior subordinated public debt | 252 | $39.05 |
| Subordinated public debt | 405 | $31.66 |
| Junior subordinated public debt | 19 | $20.39 |

Variation with Jurisdiction

Rigorous stratification of recovery rates by jurisdiction is made more difficult by the lack of accredited default data, mainly due to the absence of significant corporate debt markets outside the United States. In addition, cultural factors that, in the past, have discouraged companies from defaulting severely limit the amount of recovery data as well as default data. There is, to our knowledge, little public work analyzing the impact of jurisdiction on LGD.

Variation with Industry Classification

Although intuitively appealing, analyzing corporate recovery data by industry while controlling for seniority has not yet produced statistically significant results. When Altman and Kishore examined the variation of recovery rates by industry, however, they concluded that recovery rates observed in the chemicals and utilities industries are higher than average, suggesting that industry sector is important in estimating the level of recovery. We have yet to find another study that confirms these findings.

How LGD Is Currently Considered Within Internal Rating Processes

Because the credit industry has not traditionally decomposed credit risk into the two separate assessments required to estimate expected loss, these assessments have not been formally recorded separately. We do believe, however, that while many banks do not formally record assessments of expected LGD by using transaction ratings, they do consider LGD when setting credit exposure limits. We suspect that regulators will find that internal counterparty ratings are adjusted for reasons relating to LGD and that limits in place differ for counterparties with the same internal rating because of differing LGD assumptions.

For example, many banks’ credit analysts naturally consider the following when determining the limits to assign to counterparties.

  • Is legal documentation for this transaction weaker than “normal,” or impaired in any way?

  • Does the transaction structure give higher, or lower, seniority than is “standard” for this type of transaction?

  • Are there any specific reasons why, in the event of default, one would expect to recover more or less from this counterparty than from one of its peers in the same jurisdiction?

A Greater Use for Transactional Ratings?

All banks should formally record LGD estimates used in internal ratings. Transactional ratings should incorporate an assessment of both the counterparty’s likelihood of default and, for each particular transaction, an estimate of the loss in the event of default (see Figure 3.10). This approach would result in a more transparent rating justification and a more accurate categorization of each credit risk.

Figure 3.10. Difference in Calculation of Counterparty and Transaction Ratings

One way to implement transactional ratings would be by using the following framework, as used by CSFB.

  • All counterparties require a counterparty rating.

  • Each transaction requires a transaction rating.

  • A model automatically generates transaction ratings using the counterparty rating and the LGD estimates.

  • The system provides standard LGD values that would be applied in the majority of cases. These estimates might vary by transaction type, jurisdiction, or industry. Credit analysts have freedom to override standard LGD estimates with either higher or lower LGDs—which automatically change transactional ratings.
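The automatic generation of transaction ratings from a counterparty rating and an LGD estimate can be sketched as follows; the rating scale, standard LGD values, and notching rule are hypothetical, not CSFB's actual calibration:

```python
# Sketch of transaction ratings: a counterparty rating adjusted by a
# notching rule driven by LGD. The scale, standard LGDs, and thresholds
# below are hypothetical, for illustration only.
SCALE = ["AA", "A", "BBB", "BB", "B", "CCC"]
STANDARD_LGD = {"senior_secured": 0.30, "senior_unsecured": 0.50,
                "subordinated": 0.70}

def transaction_rating(counterparty_rating, transaction_type, lgd_override=None):
    """Analysts may override the standard LGD; the override flows straight
    through to the transaction rating."""
    lgd = lgd_override if lgd_override is not None else STANDARD_LGD[transaction_type]
    # Hypothetical rule: notch down for high LGD, up for low LGD,
    # relative to a 0.50 senior unsecured baseline.
    shift = 1 if lgd > 0.60 else (-1 if lgd < 0.40 else 0)
    idx = min(max(SCALE.index(counterparty_rating) + shift, 0), len(SCALE) - 1)
    return SCALE[idx]

# A subordinated deal with a BBB counterparty is notched down; an analyst
# override to a low LGD notches a senior unsecured deal up.
print(transaction_rating("BBB", "subordinated"))
print(transaction_rating("BBB", "senior_unsecured", lgd_override=0.25))
```

Recording the override explicitly, as in the framework above, is what makes the rating justification transparent and auditable.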

Validation of Internal Ratings

Within this section we outline a number of approaches to validating internal rating processes, along with their inherent attractions and disadvantages. We also show the output from validation of CSFB’s internal rating process. The Models Task Force of the Basel Committee concluded in their report on range of practices that, “only a third of banks claimed to do any backtesting, but provided little additional information on how this was conducted” (Basel Committee on Banking Supervision, 2000). We present CSFB’s approach both to further the debate on model validation and to show how ratings validation can be achieved in practice.

The validation process should qualitatively evaluate the rating process itself and quantitatively assess the internal ratings that the rating process generates. We believe it is difficult to validate a rating process without any quantitative rating framework that is both transparent and reproducible. We also believe it is unrealistic to rely on just one validation method. Accordingly, we have tested the validity of our rating process under all approaches discussed here.

Qualitative Validation of the Whole Process

We believe internal credit rating processes should always be qualitatively assessed relative to a set of minimum requirements. Qualitative validation involves validating the concepts underpinning the rating process against industry best practices.

Features of a best practice rating process include the following:

  • Separation of the credit officer who determines the rating from the group originating credit assets;

  • Use of guideline benchmarks to promote rating consistency;

  • Mechanisms to ensure that consistent analysis frameworks are used across the organization; and

  • Active audit of ratings process and ratings.

Absence of any of these elements within a rating process will substantially impair the ability of a bank to produce consistent and robust ratings.

Methods for Quantitative Validation of Internal Rating

Experience of Default or Losses

Validation of a credit process using default experience requires calculating actual default rates for each internal rating grade using historical data. Default rates can then be either compared to a benchmark or equated to the amount of capital required for each internal grade. Using actual loss data adds an additional layer to this analysis so that the capital requirement at each rating grade can be calculated. This type of analysis is a form of “backtesting.”
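As a minimal sketch of the backtesting step described above, the following computes realized default rates per internal grade from historical counterparty-year observations and sets them beside a benchmark. The grades, benchmark PDs, and sample history are all hypothetical.

```python
# Backtesting sketch: realized one-year default rate per internal
# grade, compared to a hypothetical benchmark PD for that grade.
from collections import defaultdict

# Hypothetical benchmark one-year PDs per internal grade.
BENCHMARK_PD = {"BB": 0.015, "B": 0.070}

def realized_default_rates(observations):
    """observations: iterable of (grade, defaulted) pairs,
    one per counterparty-year."""
    totals, defaults = defaultdict(int), defaultdict(int)
    for grade, defaulted in observations:
        totals[grade] += 1
        defaults[grade] += int(defaulted)
    return {g: defaults[g] / totals[g] for g in totals}

# Toy history: 200 BB counterparty-years with 3 defaults,
# 100 B counterparty-years with 7 defaults.
history = ([("BB", False)] * 197 + [("BB", True)] * 3
           + [("B", False)] * 93 + [("B", True)] * 7)
rates = realized_default_rates(history)
for grade in sorted(rates):
    print(f"{grade}: realized {rates[grade]:.3f} "
          f"vs benchmark {BENCHMARK_PD[grade]:.3f}")
```

In practice the comparison would also need confidence intervals around each realized rate, since the default counts per grade are small.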

Rating agencies experience very few defaults within the highest rating grades. We believe this characteristic is also inherent in many internally rated portfolios, so validation requires more than analysis of historical default information alone. The key factors influencing the rating of a “B” rated issuer or counterparty (where good default data exist) are not the same as the key drivers of a rating for an AA counterparty. This feature potentially limits the ability to use Type I/Type II testing of defaults to determine validity.3

Attraction:

  • No external benchmark of comparable ratings required.

Issues:

  • Large historical set of internal data on default and loss required across many sectors and regions.

  • Differences in the definition of default and the culture of default across jurisdictions potentially affect the outcome of the validation.

  • For some institutions, default is a rare experience and the sample set will be too sparse to enable rating validation; this is especially true for higher rating grades.
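The Type I/Type II testing mentioned above can be made concrete with a small sketch. Following the footnote's definitions, a Type I error is a default the model failed to flag, and a Type II error is a flagged counterparty that did not default; the sample data below are a toy illustration, not real portfolio results.

```python
# Sketch of Type I/Type II error rates for a default-prediction model.
# Type I  = fraction of actual defaulters the model did not flag.
# Type II = fraction of non-defaulters the model flagged.

def error_rates(flagged, defaulted):
    """flagged, defaulted: parallel lists of booleans per counterparty."""
    default_flags = [f for f, d in zip(flagged, defaulted) if d]
    survivor_flags = [f for f, d in zip(flagged, defaulted) if not d]
    type1 = sum(1 for f in default_flags if not f) / len(default_flags)
    type2 = sum(1 for f in survivor_flags if f) / len(survivor_flags)
    return type1, type2

# Toy sample: 4 defaulters (3 flagged in advance), 16 survivors (2 flagged).
flagged   = [True, True, True, False] + [True, True] + [False] * 14
defaulted = [True] * 4 + [False] * 16
t1, t2 = error_rates(flagged, defaulted)
print(f"Type I: {t1:.2f}, Type II: {t2:.3f}")  # Type I: 0.25, Type II: 0.125
```

The sparse-default problem described above shows up directly here: with only a handful of defaulters in the highest grades, the Type I rate is estimated from a tiny denominator and carries little statistical weight.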

At CSFB, we analyze the performance of our rating models in predicting default. In one test, we compared our models’ performance against that of the rating agencies for a sample of companies known to have defaulted. Since our goal is to replicate the rating agencies’ approach and methodology, we examine how closely our rating system matches their level of performance.

In Figure 3.11, we compare the ratings that our models assigned each counterparty one year prior to default to the rating agency rating one year prior to default. The diagram shows that the distribution of ratings generated by CRS is very similar to the Moody’s rating distribution for the population of defaulted names. This result shows how well our models correlate to key drivers used by the rating agencies as companies approach default.

Figure 3.11. Comparison of Moody’s and Credit Rating System (CRS) Ratings for a Population of Defaulted Companies One Year Prior to Default

Benchmarking Ratings

Comparison of model output, or internal, ratings to agency ratings can provide simple but insightful analysis. This approach to validation involves running statistical tests to compare the internal portfolio against benchmark ratings. Setting minimum pass criteria for each test can demonstrate the level of similarity with a set level of confidence.

Attractions:

  • Once an appropriate benchmark is agreed, it is relatively easy to construct and perform appropriate pass/fail tests.

  • The results are easy to communicate.

  • Historical internal ratings are not essential.

Issues:

  • Internal and benchmark rating scales must be directly comparable.

  • There must be sufficient sample overlap between the internal and benchmark portfolios to give meaningful comparisons; for example, use of a rating agency as a benchmark may not easily allow validation of Asian-based or middle market portfolios, where there are few agency ratings.

  • If rating agencies are used as benchmarks, the internal rating process will probably default to an agency rating where one is available. This creates a large selection bias: comparing these two sets of ratings gives no information about the rating process. It also means only names that are not externally rated can be used to validate the rating process.

At CSFB, our primary method for validating our rating models and internal rating process is to directly benchmark against both Moody’s and Standard & Poor’s ratings. As an example, the histogram in Figure 3.12 shows the performance of our models in predicting Moody’s Financial Strength Ratings (FSR) for banks. A high percentage of the FSRs from the rating models are the same or within one notch of the Moody’s FSRs for the universe of rated banks.
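The notch-difference comparison behind a benchmarking exercise like Figure 3.12 can be sketched as follows. The coarse rating scale, the sample pairs, and the 90% pass threshold are illustrative assumptions, not CSFB's actual test criteria.

```python
# Benchmarking sketch: for names rated by both the internal models and
# an agency, tabulate how far apart the two ratings are in notches,
# then apply a simple pass/fail criterion. Scale and data are toy.
from collections import Counter

SCALE = ["Aaa", "Aa", "A", "Baa", "Ba", "B", "Caa"]
NOTCH = {g: i for i, g in enumerate(SCALE)}

def notch_distribution(pairs):
    """pairs: iterable of (internal_grade, agency_grade) tuples.
    Returns {notch_difference: fraction_of_sample}."""
    diffs = Counter(abs(NOTCH[i] - NOTCH[a]) for i, a in pairs)
    n = sum(diffs.values())
    return {d: count / n for d, count in sorted(diffs.items())}

sample = [("A", "A"), ("A", "Baa"), ("Baa", "Baa"), ("Ba", "B"),
          ("Aa", "A"), ("B", "B"), ("Baa", "Ba"), ("A", "A")]
dist = notch_distribution(sample)
within_one = sum(p for d, p in dist.items() if d <= 1)
print(f"{within_one:.0%} of names within one notch")

# Hypothetical pass criterion: at least 90% of names within one notch.
assert within_one >= 0.90, "benchmark test failed"
```

Setting the minimum pass criterion in advance is what turns the histogram into a test: the exercise demonstrates similarity at a stated level rather than leaving the comparison to visual inspection.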

Figure 3.12. Performance of Credit Rating System (CRS) Rating Models in Replicating Bank Financial Strength Ratings (FSRs) Rated by Moody’s

(Percent)

Benchmarking Credit Migration

Earlier, we introduced the concept of a rating grade using migration potential. Because migrations include the probability of moving to every other rating grade, testing based on migration analysis is more comprehensive than testing defaults alone. To validate an institution’s internal ratings, migration rates should be compared over time against benchmark migration rates. Statistical tests can then establish whether the two sets of migration characteristics are the same with a set level of confidence.

In Table 3.5, we give, as an example of benchmark migration characteristics, the average one-year credit migrations for Moody’s rated issuers. To rigorously compare migrations, however, the analysis should be completed over several different time frames.

Table 3.5. One-Year Credit Migration Statistics
(Percent)

                                 Final Rating
Start Rating   AAA    Aa     A      Baa    Ba     B      Caa-C   Default
AAA            92.5   6.5    0.9    0.0    0.1    0.0    0.0     0.0
Aa             1.2    90.0   8.3    0.4    0.1    0.0    0.0     0.0
A              0.0    2.1    92.5   4.7    0.5    0.1    0.0     0.0
Baa            0.0    0.2    5.1    89.7   4.2    0.5    0.1     0.1
Ba             0.0    0.0    0.5    5.1    86.4   6.3    0.3     1.4
B              0.0    0.1    0.2    0.5    5.9    84.2   2.0     7.0
Caa-C          0.0    0.0    0.2    0.5    2.0    5.6    72.5    19.2

Source: Moody’s Investors Service.

Attractions:

  • Unlike benchmarking actual ratings, this method allows comparison of internal and benchmark portfolios that have no sample overlap.

  • Comparison of nonoverlapping portfolios means there is no selection bias and all external ratings can be used as benchmarks.

  • It is much easier to obtain a large sample of migration data than default data.

Issues:

  • The internal and benchmark rating scales must be comparable.

  • A history of model output, or internal, ratings is required.

The many benefits of this method make it appealing for quantitatively validating internal ratings. The approach can be varied over several time periods to measure correspondence to external ratings, over time and business cycles.
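A cohort-style estimation of a one-year migration matrix, plus a crude similarity check against a benchmark, can be sketched as below. The three-grade scale, the transition data, and the benchmark matrix are toy assumptions; a rigorous comparison would use a formal statistical test, repeated over several time frames, rather than the simple largest-cell-difference check shown here.

```python
# Sketch: estimate a one-year migration matrix from (start, end) grade
# pairs via the cohort method, then compare cell-by-cell against a
# hypothetical benchmark matrix. Illustrative data only.

GRADES = ["A", "B", "Default"]

def migration_matrix(transitions):
    """transitions: iterable of (grade_start, grade_end) pairs,
    one per counterparty-year. Returns row-normalized frequencies."""
    counts = {g: {h: 0 for h in GRADES} for g in GRADES}
    for start, end in transitions:
        counts[start][end] += 1
    matrix = {}
    for g, row in counts.items():
        total = sum(row.values())
        if total:
            matrix[g] = {h: c / total for h, c in row.items()}
    return matrix

# Toy history: 100 A-rated and 100 B-rated counterparty-years.
transitions = ([("A", "A")] * 90 + [("A", "B")] * 9 + [("A", "Default")]
               + [("B", "A")] * 5 + [("B", "B")] * 88 + [("B", "Default")] * 7)
m = migration_matrix(transitions)

benchmark = {"A": {"A": 0.92, "B": 0.07, "Default": 0.01},
             "B": {"A": 0.05, "B": 0.88, "Default": 0.07}}
max_gap = max(abs(m[g][h] - benchmark[g][h])
              for g in benchmark for h in GRADES)
print(f"largest cell difference: {max_gap:.3f}")
```

Because every counterparty-year contributes a transition, not just the defaulting ones, the sample grows far faster than in default-based backtesting, which is exactly the advantage noted above.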

At CSFB, we produce credit rating migration statistics for our model ratings by retrospectively applying them to a historical database of counterparty data. As summarized in Figure 3.13, the similarity between the model rating migrations over a five-year time frame and Moody’s rating migrations gives us high confidence that our rating models replicate Moody’s rating processes.

Figure 3.13. Comparison of Five-Year Migration Characteristics for Moody’s Ratings and CRS Model Ratings

Advantages of Validating Credit Models

Validating outputs from models, rather than a single set of internal ratings, can solve several key issues that prevent quantitative validation of internal ratings.

  • Credit models can produce an inferred retrospective history of internal ratings.

  • Rating models can be run against any set of names, not just the institution’s client portfolio, ensuring there is always sample overlap in a benchmarking exercise.

  • Rating model output for externally rated names can be sensibly validated against the actual agency ratings.

All of the quantitative validation methods described here can be used to validate internal credit models. Accordingly, credit rating processes utilizing credit models can be validated more rigorously and more easily than those processes without rating models.

A bank utilizing a credit rating model to provide the foundation, or starting point, of an internal rating process should test the model’s performance to ascertain how well it replicates credit rating best practice. When the model underpinning an internal rating process can be demonstrated to be valid, that validation effectively validates the entire credit rating process.

Conclusion

We believe that consistent internal ratings should be a fundamental goal of all banks. Without them, a bank can neither compare risks within its credit portfolio nor determine the absolute credit risk of the portfolio as a whole. The best way to ensure internal rating consistency is through the use of transparent and robust rating models. To illustrate such a process, we have described methodologies used to develop and validate rating models within CSFB.

Validation of internal ratings is a key issue facing the credit industry. If rating models provide the foundation of an internal rating process and these models are shown to be consistent, this result ensures the internal rating process itself will be consistent.

Consistent, well-validated internal ratings are not the end of the story. Once a bank has these processes in place, it is able to manage much more effectively the risks within its credit portfolio through time.

Reference

Basel Committee on Banking Supervision, 2000, Range of Practice in Banks’ Internal Ratings Systems (Basel: Bank for International Settlements).

Notes

2 Rating models have a large role to play in identifying “stale” ratings that should not be overlooked within an internal rating process.

3 In simple terms, a Type I error occurs when a model fails to predict a default that subsequently occurs, while a Type II error occurs when a model predicts a default that subsequently does not occur.
