Fifty years ago, in 1967, I completed my PhD dissertation, which involved the first multivariate model for predicting the financial health of US manufacturing firms and whether or not they were likely to file for bankruptcy. That work was followed shortly afterward (in 1968) by the publication of the model’s specifications. Despite its “old age”, the Altman Z-score is still the standard against which most other bankruptcy or default prediction models are measured and is clearly the most used by financial market practitioners and academic scholars for a variety of purposes. The objective of this paper is to reflect upon the evolution of the Altman family of bankruptcy prediction models as well as their extensions and multiple applications in financial markets and managerial decision making.
Credit scoring systems for identifying the determinants of a firm’s repayment likelihood probably go back to the days of the Crusades, when travelers needed “loans” to finance their travels. They were certainly used much later in the United States as companies and entrepreneurs helped to grow the economy, especially in its westward expansion. Primitive financial information was usually evaluated by lending institutions in the 1800s, with the primary types of information required being subjective or qualitative in nature, revolving around ownership and management variables as well as collateral (see Box 1). It was not until the early 1900s that rating agencies and some more financially oriented corporate entities (eg, the DuPont system of corporate ROE growth) introduced univariate accounting measures and industry peer group comparisons with rating designations (see Figure 1). The key aspect of these “revolutionary” techniques was that they enabled the analyst to compare an individual corporate entity’s financial performance metrics to a reference database of time series (same entity) and cross-section (industry) data. Then, and even more so today, data and databases were the key elements of meaningful diagnostics. There is no doubt that in the credit-scoring field, data is “king” and models for capturing the probability of default (PD) ultimately succeed, or not, based on whether they can be applied to databases of various sizes and relevance.
The original Altman Z-score model (Altman 1968) was based on a sample of sixty-six manufacturing companies in two groups, bankrupt and nonbankrupt firms, and a holdout sample of fifty companies. In those “primitive” days, there were no electronic databases and the researcher/analyst had to construct their own database from primary (annual report) or secondary (Moody’s and Standard & Poor’s (S&P) industrial manuals and reports) sources. To this day, instructors and researchers often ask me for my original sixty-six-firm database, mainly for instructional or reference exercises. It is not unheard of today for researchers to have access to databases of thousands, even millions, of firms (especially in countries where all firms must file their financial statements in a public database, eg, in the United Kingdom).
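For readers who wish to apply the published 1968 discriminant function, a minimal sketch in Python follows. The coefficients and zone cutoffs are those of the original model for publicly traded manufacturing firms; the function names and the sample inputs are illustrative, not drawn from the original sixty-six-firm data set.

```python
# Altman Z-score, original 1968 form for public manufacturing firms:
#   Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5
# X1 = working capital / total assets
# X2 = retained earnings / total assets
# X3 = EBIT / total assets
# X4 = market value of equity / book value of total liabilities
# X5 = sales / total assets

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales,
             total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    # Original cutoffs: Z > 2.99 "safe", Z < 1.81 "distress",
    # with the interval in between the "grey" (zone of ignorance).
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "distress"
    return "grey"

# Illustrative example (figures in, say, US$ millions):
z = altman_z(working_capital=170, retained_earnings=250, ebit=120,
             market_value_equity=600, sales=1500,
             total_assets=1000, total_liabilities=500)
print(round(z, 2), zone(z))  # a firm comfortably in the safe zone
```

Note that the market-value term (X4) is what restricts this version of the model to listed firms; the later Z′ and Z″ variants substitute book equity and drop the sales ratio, respectively, to handle private and nonmanufacturing companies.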
To illustrate the importance of databases, Moody’s purchased extensive data on 200 million firms and customer access from Bureau van Dijk Electronic Publishing (EQT) for US$3.3 billion in 2017, while S&P purchased SNL Financial’s extensive database, management structure and customer book for US$2.2 billion in 2015. As indicated in Figure 1, the three major rating agencies established a hierarchy of creditworthiness that was descriptive, but not quantified, in its depiction of the likelihood of default. The determination of these ratings was based on a combination of (1) financial statement ratio analytics, usually on a univariate, one-ratio-at-a-time basis; (2) industry health discussions; and (3) qualitative factors evaluating the firm’s management plans and capabilities, strategic directions and other, perhaps “inside”, information gleaned from interviews with senior management and experience of the team that was assigned to the rating decision.