CFP Risk EMEA Conference, 24th May 2016
Brandon Davies
- It is often stated that you cannot manage what you cannot measure.
- Whilst I have, many times in a 40-year career in banking, had to do just that, I am more than willing to accept that it is far easier to manage something if you can measure it.
- This does, however, present one major problem: you need to be sure you do indeed have an appropriate measure of whatever it is you are measuring.
- Note I use the word appropriate, not accurate. Accuracy is always better than inaccuracy, but if the measure is not appropriate you may simply be left with a very accurate wrong answer.
- The appropriateness of a measure can be gauged by how well it does the job of describing the thing to be measured, which brings us to the crux of this presentation:
- How well do our current measures do in measuring risk?
- The answer is not immediately obvious.
- Many models of risk clearly did not work well in the current series of financial crises that seem to have become a permanent feature of the financial world since the collapse of Lehman Brothers in 2008.
- Does this point to problems with the measure itself as well as with its application through the models in which it was used?
- I believe it does. Indeed, I believe our problems with the measure stem from a very early problem in developing a suite of measures, models and market products (tools) for managing risk.
- To support my argument I show below (Figure 1) a slide that is now some 30 years old.
- It was used to explain to a major bank's board committee some of the problems the treasury (it pre-dates a dedicated risk department) was having with measuring risk.
Figure 1 (slide image omitted). GPD = Generalised Pareto Distribution
- The slide focused on the problem of choosing an appropriate distribution of prices to fit to the relatively sparse historic data we then had on the assets being measured.
- It also covered different risk measures: it shows both the Mean/Variance measure of risk and Expected Shortfall (now expected to be a required measure for market risk as a result of the Fundamental Review of the Trading Book).
- Why did the slide show two measures of risk?
- At the time we were not entirely sure how we should define risk, and we felt the definition and the measure were particularly closely connected.
- Define risk one way and you need one measure; define it another way and you need a different measure. The two definitions can be described as:
  - What is the maximum loss, within a given probability, that we could suffer as a result of holding a given asset portfolio over a given time period?
  - What is the maximum loss that we could suffer as a result of holding a given asset portfolio over a given time period?
- The difference is subtle but very important:
- Should we measure risk whilst constraining the results within a certain probability of outcome? OR
- Should we measure risk as an absolute number?
- Subsequently we became very sure about how we wished to define risk and therefore how we would measure it: we opted for the first definition above and Value at Risk (VaR) as the measure.
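As a minimal sketch of the practical difference, the code below computes both numbers from the same series of daily profit-and-loss outcomes; the synthetic fat-tailed data, the currency and the 99% confidence level are illustrative assumptions, not the original analysis.

```python
import numpy as np

# Synthetic, fat-tailed daily P&L in GBP -- purely illustrative data.
rng = np.random.default_rng(42)
pnl = rng.standard_t(df=4, size=2500) * 1e5
losses = -pnl  # express as losses: positive numbers are bad

# Definition 1: maximum loss within a given probability (99% one-day VaR).
var_99 = np.percentile(losses, 99)

# Definition 2: maximum loss, full stop (the worst observed outcome).
worst = losses.max()

print(f"99% 1-day VaR      : {var_99:,.0f}")
print(f"Worst observed loss: {worst:,.0f}")
```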
- VaR is a constrained measure; as such, it looks at risk as variance measured at some percentile from the mean (average) outcome.
- Given this, the constraint on the outcome made for a very much simpler measure of risk than we would need were we to look for the most extreme outcomes.
- In many ways, however, it was a much less intellectually defensible definition and measure than the alternative.
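For reference, in its common parametric form, and under the normality assumption this presentation goes on to question, the constrained measure is simply a quantile of the loss distribution $L$:

$$\mathrm{VaR}_{\alpha} = \mu_L + \sigma_L\,\Phi^{-1}(\alpha)$$

where $\mu_L$ and $\sigma_L$ are the mean and standard deviation of losses over the chosen horizon, $\alpha$ is the confidence level (e.g. 0.99) and $\Phi^{-1}$ is the standard normal quantile function.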
- When anyone thinks of risk they usually think of some absolutely bad outcome, epitomised, say, by the risk of death: a pretty absolute measure of risk!
- So why did we settle for measuring financial risk at some point less than the worst possible outcome?
- Firstly, risk of financial loss is difficult to measure even if we are looking at a relatively simple portfolio of assets.
- As Figure 1 shows, the measurement problem we frequently encountered was uncertainty over the shape of the distribution of bad outturns that occur in the tails of distributions.
- The data is sparse and will inevitably need to be cleaned. It may fit a number of different distributions which, whilst we can agree they will have a fat tail, differ on just how fat that tail is; the answer often depends on a number of assumptions about which data to use and how to "clean" it.
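As a hedged sketch of the tail-fitting exercise Figure 1 alludes to, the peaks-over-threshold method fits a Generalised Pareto Distribution to losses above a threshold; the synthetic data, the 95% threshold and the use of scipy here are my illustrative assumptions, not the original analysis.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic fat-tailed losses standing in for sparse, cleaned historic data.
rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=5000)

u = np.percentile(losses, 95)   # the threshold choice is itself a judgement call
excesses = losses[losses > u] - u

# Fit the GPD to the exceedances; the shape parameter xi governs tail fatness,
# and different cleaning/threshold choices can move it materially.
xi, loc, beta = genpareto.fit(excesses, floc=0.0)
print(f"threshold u = {u:.2f}, shape xi = {xi:.2f}, scale beta = {beta:.2f}")
```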
- Secondly, we are really not interested in the loss from a single asset or portfolio of similar assets; we are interested in the absolute maximum of losses we could face from holding the entirety of our asset base.
- This means we are really not interested in measuring from the mean to the chosen percentile, but rather from the furthest point of the distribution back to the chosen percentile.
- Measuring risk in this way is usually represented by the Expected Shortfall measure.
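Formally, Expected Shortfall at confidence level $\alpha$ averages the losses beyond the VaR point:

$$\mathrm{ES}_{\alpha} = \mathbb{E}\left[L \mid L \ge \mathrm{VaR}_{\alpha}\right] = \frac{1}{1-\alpha}\int_{\alpha}^{1}\mathrm{VaR}_{u}\,\mathrm{d}u$$

so that, unlike VaR, it is sensitive to how bad the tail is beyond the chosen percentile.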
- Once we start looking at large and complex portfolios of assets we run into a very serious problem in looking at maximum losses.
- The losses are not additive; that is to say, the losses will depend not only on the losses of each portfolio of like assets but also on the correlation between each portfolio.
- This is called Dynamic Conditional Correlation (DCC) and is a particular property of extreme outcomes in financial markets.
- What we are saying with DCC is that extremely bad outcomes can be much more likely than would be the case if we assumed that the correlations between different asset portfolios in our overall balance sheet were stable.
- We are also saying that the correlations between individual asset portfolios change (are dynamic) and change differently depending on how the circumstances in which we find ourselves are changing (are conditional).
- This is not the same as fat tails, but in a very real sense it is a story of fatter and fatter tails, i.e. the tail risk ceases to be static and becomes dynamic.
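Engle's (2002) DCC model is the standard formalisation of this idea (this presentation uses the term somewhat more loosely); there the conditional correlation matrix evolves as

$$Q_t = (1 - a - b)\,\bar{Q} + a\,\epsilon_{t-1}\epsilon_{t-1}^{\top} + b\,Q_{t-1}, \qquad R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2}$$

where $\epsilon_t$ are standardised returns, $\bar{Q}$ is their unconditional covariance, and $a, b \ge 0$ with $a + b < 1$, so yesterday's joint shocks feed directly into today's correlations.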
- To put this in the language of statistics: we might find in our dynamic world that a 7 standard deviation event is almost inevitable given that a 5 standard deviation event has happened, whereas looking at a static distribution in similar circumstances the 7 standard deviation event is still very unlikely (a sketch follows this list).
- The challenges to those looking to use the measure are:
  - to describe the event or events that will start this dynamic correlation process, and
  - to measure how the correlations will change given a certain set of, often evolving, events.
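The following sketch is not a DCC model, but it conveys the flavour of the point: under a fat-tailed Student-t distribution (an illustrative stand-in for the "dynamic" world) a 7-unit move, given that a 5-unit move has happened, is vastly more likely than under a static normal distribution.

```python
from scipy.stats import norm, t

# Conditional tail probability P(X > 7 | X > 5) under each distribution.
# Units are in the distribution's own scale; purely illustrative.
for name, dist in [("normal", norm), ("Student-t(3)", t(df=3))]:
    p5, p7 = dist.sf(5), dist.sf(7)
    print(f"{name:>12}: P(>7 | >5) = {p7 / p5:.6f}")
```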
[Chart omitted. Source: MPI Stylus, Absolute Return Partners LLP]
- One set of events which has been connected to DCC is dramatic changes in the liquidity of markets, known as liquidity black holes.
- Recent events in international markets do seem to confirm that liquidity is an important factor in driving changes in asset portfolio correlations.
- Fat tails and dynamic conditional correlation are related concepts, as both result from relationships between the observed parameters (say, losses or asset prices) that are not normally distributed.
- Characteristically, these non-normal observations are recorded in the tails of distributions; that is, they are characteristic of extreme values.
- The measuring of these extreme values has taken on more importance across a wide range of problems in finance, notably in credit, options pricing and risk modelling.
- It has become increasingly understood that in all these areas many risk and pricing models assumed linearity of results, whereas by observation historic results were not linear.
- In credit, both Structural (Merton) and Reduced Form (Jarrow) models produced theoretical values that differed from those observed.
- In options pricing, the "smile" effect clearly showed that there were effects on options prices that did not conform directly to models based on completeness of markets and the application of arbitrage-free conditions.
- To address these problems there was a growing trend towards the use of copula mathematics.
- First developed in the 1950s, copulas could be used to look at the dynamics of the underlying asset (or assets).
- Technically, copula functions enable us to express a joint probability distribution as a function of the marginal probability distributions.
- This gets us away from the problem of using (static) correlations, which are an effective way to represent co-movements between variables if they are linked by linear relationships, but not if the co-movements are non-linear.
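This is Sklar's theorem (1959): for any joint distribution $F$ with marginals $F_1,\dots,F_n$ there exists a copula $C$ such that

$$F(x_1,\dots,x_n) = C\big(F_1(x_1),\dots,F_n(x_n)\big)$$

so the dependence structure $C$ can be modelled separately from the individual marginals, whether or not the co-movements are linear.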
- One of the most common uses of copula mathematics in finance resulted from the work of David X. Li, which is generally held to have led to the use by rating agencies of copulas to price Collateralised Debt Obligations (CDOs), including Mortgage Backed Securities (MBS).
- In the crisis that followed the collapse of Lehman Bros. on 15th September 2008 (the date of its filing for Chapter 11), these models proved to significantly underestimate the joint default probabilities of the mortgage assets within the individual MBS.
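Below is a minimal sketch of the one-factor Gaussian copula that underpinned such models; the pool size, default probability and correlation values are hypothetical, and the point is only how sharply joint-default risk grows with the correlation parameter that those models understated.

```python
import numpy as np
from scipy.stats import norm

# One-factor Gaussian copula: each name defaults when a latent correlated
# normal falls below the threshold implied by its default probability.
rng = np.random.default_rng(1)
n_names, n_sims, pd_1y = 100, 50_000, 0.02
threshold = norm.ppf(pd_1y)

for rho in (0.05, 0.30, 0.60):
    z = rng.standard_normal(n_sims)               # common (systemic) factor
    eps = rng.standard_normal((n_sims, n_names))  # idiosyncratic factors
    latent = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
    defaults = (latent < threshold).sum(axis=1)
    print(f"rho = {rho:.2f}: P(more than 5 of {n_names} names default) "
          f"= {(defaults > 5).mean():.4f}")
```

Even this toy version shows why underestimating the correlation was so dangerous: the chance of many simultaneous defaults is acutely sensitive to it.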