The post P, Q og solvenshensættelser. Hvad er det egentlig vi snakker om? (P, Q and solvency provisions: what are we actually talking about?) appeared first on Loaded Dice Analytics.

When we say that we use the P measure to make statements about the future, it means that we assign future events the probabilities we actually believe they have. The P-measure world is the one where you can assume that equities return 7% per year on average, and that interest rates will on average crawl back towards 4%, or something along those lines.

If you want to know the probability of losing your equity capital over the next year, or of getting a better return than your competitors over the next five years, then P is the measure to use. The SCR, for example, is something you compute under P.

If you have to put a value on something that has no traded price (say, a portfolio of fire insurance policies), you can look at the expected cash flow it generates. Besides the cash flow you expect to receive on average, you also have to account for the money you have had to tie up along the way to cover risks or capital requirements.

The standard approach is therefore to value an uncertain cash flow at its average value (under P) *after* the cost of risk or capital has been included in the cash flow. How these costs are then computed depends on which risk or capital measure is most relevant, and on the running cost rate.

If you are lucky enough to be valuing an instrument that can be described in terms of other instruments that do have traded prices, you can do something more precise. Using the other instruments, you can construct a hedging strategy so that the combined position has a known cash flow with a known value. Since the hedging instruments have a traded value today, you can subtract it, and what remains is the value of the original instrument.

In practice, some clever mathematics is used to do something else which, under suitable assumptions, is equivalent: the probabilities of future events are changed so that all instruments earn the risk-free rate on average. That is the Q measure. You can then value your instrument as the average value of its cash flow under Q, without reference to any hedging strategy.

So we have two alternative ways of valuing, both formulated as an expectation of future cash flows: either under Q, or under P but including the cost of capital. The peculiar thing about Solvency II provisions is that the two methods are mixed:

- Financial risk factors are treated under Q and do not contribute to the cost of capital
- Other risk factors are treated under P and do contribute to the cost of capital

This is a bit of a mess, of course, and it takes some more mathematics to make sense of that statement, but in practice it can be done. It can have some counterintuitive consequences, though.

For instance, if you have a theory that a surrender rate depends on the interest rate, you must value your provisions with a surrender rate corresponding to the interest rate under Q, which on average lies below the one you would use under P. The typical surrender rate you use when setting provisions will therefore differ from the typical rate you have observed historically.

As hinted above, valuing future capital costs can be somewhat of a guess. Solvency II therefore also fixes some assumptions you must use.

- You set aside capital corresponding to the SCR with market risks minimised (because those are treated under Q)
- Tied-up capital costs 6% p.a.

This part of the calculation is the risk margin, but in economic terms there is no reason to treat it separately from the rest of the provision, i.e. the best estimate. In principle, both should be computed as an expectation under the combined P/Q measure.
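The cost-of-capital mechanics behind the risk margin can be sketched as follows. The SCR projection, run-off horizon, and discount rates below are made-up illustrative numbers; a real calculation projects the SCR and discounts on the regulatory risk-free curve according to EIOPA's rules.

```python
# Risk margin as a 6% p.a. cost of capital on projected SCRs,
# discounted on a risk-free curve. All numbers are illustrative.
COST_OF_CAPITAL = 0.06

def risk_margin(scr_projection, risk_free_rates):
    """scr_projection[t] is the SCR held during year t,
    risk_free_rates[t] the zero rate for maturity t + 1 years."""
    return sum(
        COST_OF_CAPITAL * scr / (1 + r) ** (t + 1)
        for t, (scr, r) in enumerate(zip(scr_projection, risk_free_rates))
    )

# A run-off portfolio: SCR declines to zero over five years.
scrs = [100.0, 80.0, 60.0, 40.0, 20.0]
rates = [0.03] * 5
print(round(risk_margin(scrs, rates), 2))  # 16.81
```

Note how the whole difficulty hides in `scr_projection`: producing it requires recomputing the SCR in future scenarios, which is exactly why the risk margin is split off and simplified in practice.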

They are split up because the capital costs are extra hard to compute. It requires recalculating the SCR in future scenarios, which in turn depends on the future provisions. The idea in Solvency II is that the best estimate is easier to compute, and that the risk margin does not matter much. Hence there are many places where the best estimate is referenced instead of the total provisions, and quite a few options for simplifying the risk margin calculation.

The coming development of new provisioning models for Solvency II is bound to be messy. You can quickly end up in fights over how to interpret one formulation or another in the regulation or a guideline. Often you can find the answers by stepping aside and looking at the underlying logic. That leaves you a little better off.

The post Regarding cake, your ESG shall (a) consume it, (b) retain it appeared first on Loaded Dice Analytics.

The Solvency II regulation is pretty clear about this. Article 22(3) states that your economic scenario generator must be arbitrage free and must reproduce the observed market prices. If the observed market prices involve quotes for EUR OIS swaps (which most consider close enough to risk-free), all EUR-denominated assets should yield this interest rate on average, for the model to be arbitrage free.

There is a third requirement, however, which is that the ESG must be consistent with the regulatory risk-free rate – including volatility adjustment, matching adjustment, and the transitional measure on the risk-free rate. Put differently, all EUR-denominated assets should yield the *regulatory* risk-free rate on average, for the model to be arbitrage free.

That is kind of hard. Hence the cake metaphor.

Before diving into solution mode, a short historical digression.

When Solvency II was put into place, there was a lot of fuss about discounting rates. Naturally so, because the discounting rate of future obligations can mean life or death for an insurance company with long-dated obligations. The purist choice for a discounting rate would be to take the one observed in the market. If this were done, there would be no need for this discussion, because there would only be one risk-free rate to calibrate to. There were, however, some good reasons not to go with this approach:

- Over time, there is no good candidate in the market for a risk-free rate. There is a tendency to use a particular rate as risk free for some time, only to discover that it is not risk free.
- Insurance liabilities are often way longer than the market is willing to hedge, so there is nothing to calibrate to in the long end.
- Using the purist approach would have missed some really good opportunities for serving special interests by playing tricks with the risk-free rate.

The current situation is that you can end up discounting on several different rates for each currency in the same model: with and without volatility adjustment, and potentially with several different matching adjustments, and they are all supposed to be considered risk free.

In some cases, the contradiction can be swept under some simplifying assumptions. If the cash flow whose value you have to estimate is independent of future interest rates, it is sufficient to discount the expected cash flow on the relevant initial interest rate, and you do not really need an ESG. Everyone can live with a practical solution in spite of theoretical difficulties.

When interest rates can influence the cash flow, and you are in ESG territory, you cannot really close your eyes to the contradiction. There are certainly ways to go about this, but they differ, and so do their consequences. Questions you should ask yourself are:

- If you keep a EUR in a risk-free account to cover a future benefit payment (discounted on the regulatory risk-free rate), how does it earn a higher interest than if it covers a financial market payment (discounted on the market risk-free rate)?
- How do the spreads between the different risk-free rates develop? Is market data even a relevant guide to this?
- How do you keep the complexity down here?
- If you have an ESG vendor, do they provide answers to the above questions, or do you have to bolt that on yourself?

The answers to these questions can be quite fundamental to how you value and hedge options and guarantees. And answering them requires a mix of traditional quant skills and actuarial experience.

Drop me a line if you need help with this.

The post Allocating capital should be beautifully simple. It just needs a slight adjustment to reality appeared first on Loaded Dice Analytics.

Capital allocation (in the sense used here) is all about creating organisational ownership of the risks taken or capital requirements incurred. Income and cost are traditionally allocated to business activities in order to push the incentive to optimise out to the individual decision makers. The same can, and should, be done with capital consumption or risk. It is not uncommon for a financial institution to have capital costs of the same magnitude as operational cost, so why should it be treated any differently?

The hard part of allocating capital or risk is that the measures are often complicated and non-linear. They can rely on simulations and involve diversification between risk types, which makes them hard for the individual business users to understand. A good allocation can aid this understanding if it has these properties:

- Completeness: A capital allocation model is no use if the allocated capital does not sum up to the total capital it tries to allocate.
- Business users should be able to understand what drives their numbers without understanding the whole allocation model. This does not mean that the model cannot be complex, you just need to hide it from the users.
- The same risk taken in different parts of the organisation should have the same allocated capital. If not, it will erode buy-in for the allocation model.

Luckily, you can get all of this from a mathematical theorem. No matter how complicated your risk or capital measure is, the theorem basically only requires that when all exposures are doubled, the total risk doubles. This seems reasonable – if it does not hold, you probably have a more fundamental problem with your risk measure, that you have to solve first.

When it holds, there is a way of constructing weights for each type of exposure, so that

Risk = weight1 * Exposure1 + weight2 * Exposure2 + … + weightN * ExposureN

The weights can be kept constant for the whole business, and to get the risk allocated to any subdivision of the business, you use the exposures of that subdivision. The weights depend on all the exposures at once, so in a sense this is just a fancy rewriting of the original risk measure. But it gives you the properties you want:

- Since the exposures over business subdivisions sum up to the exposures of the whole business, the allocation is complete
- Business users only have to understand that there are weights that they must use. The complexity is hidden away in the weights.
- An exposure will contribute the same allocated capital because the weight does not depend on the business unit where it resides. There is a business-wide ‘price of risk’.
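As a hypothetical illustration of the theorem at work: for a standard-deviation risk measure, the weights are the partial derivatives of the total risk with respect to the exposures (the Euler allocation), and completeness then follows from the homogeneity property. The covariance matrix and exposures below are made-up numbers.

```python
import numpy as np

def euler_weights(exposures, cov):
    """Per-unit-of-exposure weights for a standard-deviation risk
    measure: the gradient of sqrt(x' C x) with respect to x."""
    total_risk = np.sqrt(exposures @ cov @ exposures)
    return cov @ exposures / total_risk

# Illustrative numbers: two risk types with some correlation.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
exposures = np.array([100.0, 50.0])

w = euler_weights(exposures, cov)
total = np.sqrt(exposures @ cov @ exposures)

# Completeness: the weighted exposures sum to the total risk.
print(np.isclose(w @ exposures, total))  # True
```

Any subdivision of the business is then charged `w @ subdivision_exposures`, with the same business-wide weights.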

Up to now, I did not talk about what the risk or capital measure actually was. In reality, it can be all sorts of things: formula or simulation; Value-at-Risk, Expected Shortfall, or standard deviation; internal or regulatory. Unfortunately, every application of math to the real world comes with a bit of hassle.

The typical hassle involves some element of the original risk measure, or the way it is used, that makes the allocation complex over time or gives wrong business incentives. Solving the problem is a matter of giving up completeness, simplicity, or fairness of the allocation. Which one to sacrifice depends on the concrete business. In my experience, people will most often give up completeness, as long as the mismatch is not too big.

Here are three real-world examples.

Value-at-Risk is often computed by running a range of randomised scenarios and computing the losses in each scenario to produce a loss distribution. If you run 10,000 scenarios and want a 1% Value-at-Risk, you just find the 100th-worst loss among your simulated losses.

When doing this, the weights in the risk allocation are the unit losses for each exposure in the 100th-worst scenario. This happens because tiny changes in exposures will not change the fact that this scenario is number 100.

For many purposes, this will be alright, but if you have exposures that are non-linear, or that often change direction, there are often many different ‘ways’ of producing a 1% loss. The weights differ significantly between these ways, but it is arbitrary which way of losing money produced exactly the 100th-worst scenario. It will also change from time to time, giving the decision makers in your business wildly varying weights and allocated risk.

A popular solution is to use some kind of smoothing over the scenarios close to the 100th-worst, instead of picking exactly that one. On the con side, you will ruin completeness and will have to continually check that you are not smoothing too much.
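A minimal sketch of both the raw and the smoothed variants, with made-up loss data; the exposures, the linear loss model, and the smoothing window of 100 scenarios are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_scen, n_exp = 10_000, 3
exposures = np.array([100.0, 50.0, -30.0])
# Unit losses per exposure in each scenario (illustrative).
unit_losses = rng.normal(size=(n_scen, n_exp))
losses = unit_losses @ exposures

# 1% Value-at-Risk: the 100th-worst loss out of 10,000.
order = np.argsort(losses)[::-1]      # worst scenarios first
var_scenario = order[99]
var = losses[var_scenario]

# Raw allocation weights: unit losses in the VaR scenario.
weights = unit_losses[var_scenario]
# Complete by construction: weights @ exposures equals the VaR.
print(np.isclose(weights @ exposures, var))  # True

# Smoothed weights: average unit losses over scenarios ranked
# 51st to 150th worst, instead of picking exactly the 100th.
window = order[50:150]
smoothed = unit_losses[window].mean(axis=0)
# Completeness now only holds approximately.
print(abs(smoothed @ exposures - var) / abs(var))
```

The smoothed weights are far more stable from run to run, which is exactly the trade of completeness for stability described above.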

I once worked for a major Danish life insurer on a project to allocate the Solvency Capital Requirement under the Solvency II Standard Approach. At first sight, this looks like a very benign problem if you know the Standard Approach: it is a (big) closed formula expression, taking as input various exposures to market and underwriting risks.

The issue was that some risks can go both ways. You can have both positive and negative exposure to interest rates. The solution in the Standard Approach is to use the larger of the ‘up’ and ‘down’ risks. This is completely fine if the business is consistently exposed to, say, interest rate decreases. In that case, the weight for the exposure to a government bond would be negative, because buying the bond would partially hedge the risk.

This company, however, was running a pretty tight interest hedging programme, so the direction of the interest rate exposure would change often. And every time the overall interest rate exposure changed sign, all the weights on fixed income instruments would change signs too. Very impractical to the people whose performance was to be measured this way.

There could be several solutions to this. You could choose to ignore interest rate risk entirely, overwrite the weights with zeros, or something else. Again, all these solutions give up completeness of the allocation, and would require that you control interest rate risk in another way.

Most allocation of risk relates to regulatory requirements or external reporting. The frequency of external or regulatory reporting is often quarterly or annual, while the business is conducted on a shorter timescale. Consider, as an example, a measure of market risk that is reported to authorities quarterly, but which can change daily due to portfolio changes.

In principle, this gives the business an incentive to window dress: running the business at one risk level between reporting dates, and at another risk level close to reporting dates. In reality, there are often legal and reputational reasons why a business would not want to do this.

The problem is that if you only charge business units for their risk on reporting dates, you forward the incentive to window dress to every decision-maker in the business. Do they care as much about the legal and reputational repercussions of window dressing as the whole business does?

The most straightforward solution is to charge the business units for their use of risk at a higher frequency than the external reporting. Ideally, it should be on the timescale on which the business units can meaningfully change their exposures. To do that, you must be able to measure at a higher frequency, which would be good governance anyway. And you give up completeness of the allocation.

Implementing a capital allocation tends to be a process that is different for every business, with no standardised methodologies to pick from. When you jump into it, be sure to know which corners you want to cut, and which you want to make as sharp as possible. Good luck!

The post Tired of running stochastic scenarios for your technical provisions? You can probably do with a lot fewer appeared first on Loaded Dice Analytics.

For pricing complex products, such as the Solvency II Best Estimate provisions, you have to estimate the mean net present value of cashflows under the risk neutral measure. In practice, this is done by simulating a large number of independent and identically distributed scenarios from the risk neutral measure, computing the net present value for each scenario, and taking the average.

The problem is that you potentially have to run a lot of scenarios for the average to be close enough to the mathematical mean. The error you make by running a finite number of scenarios, *n*, typically scales as *n*^{-1/2}. In other words, if you want to cut the error in half, you have to run four times as many scenarios.

As an example, I have taken the pricing of a portfolio of investment contracts with annual rebalancing to 50 % equities, and which have a range of guaranteed minimum pay-outs after 20 years. The value of the investment portfolio is 100 and the total value of the guarantees is around 16. I perform simulations of this, and as the number of scenarios increases, the error goes down, but slowly.

At 100,000 scenarios, the error (standard deviation) in the valuation of the guarantees is around 0.3 pct. Whether this is good enough is a question for another day, but running 100,000 scenarios will often be a major technical obstacle for real portfolios. How can we bring that number down significantly?

The first technique, antithetic variates, is a classic and should be in your toolbox no matter what. It is relatively easy to implement, and in almost all cases it will improve your precision substantially.

The trick is to draw the scenarios in pairs that are ‘opposite’ to each other, hence the name, antithetic. In most cases, this will lead to net present values that are also ‘opposite’ each other in the distribution of net present values, and the average of the two will be closer to the mean you seek.

In my example, equity returns in each scenario are governed by a stochastic vector, **z**, of normally distributed numbers. Its negative, –**z**, follows the same distribution as **z**, so there is no problem in using it in the next scenario. Running the scenarios in this paired way reduces the error significantly.

In this example, the error goes down by a factor of 2.4. If I were content with the error at 100,000 scenarios without antithetic variates, I can do with 18,000 scenarios with them.

The effect you get depends a lot on the problem at hand, but antithetic variates work well when you need to value something that has a monotonic dependence on the underlying randomness. It often does.
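As a sketch of the pairing, here is antithetic sampling for a simple put-style guarantee under Black-Scholes dynamics; the parameters are illustrative and not the actual portfolio from the example above.

```python
import numpy as np

# Illustrative Black-Scholes setup: value max(K - S_T, 0) via plain
# Monte Carlo versus antithetic pairs.
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 20.0
rng = np.random.default_rng(1)
n = 50_000  # number of scenario pairs

z = rng.standard_normal(n)

def discounted_payoff(z):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(K - ST, 0.0)

plain = discounted_payoff(z)                    # n single draws
paired = 0.5 * (plain + discounted_payoff(-z))  # n antithetic pairs

# Standard error per pair versus per single draw. Note a pair costs
# two payoff evaluations, so the fair benchmark is se_plain / sqrt(2);
# for a monotonic payoff the pair correlation is negative and the
# antithetic estimate still wins.
se_plain = plain.std(ddof=1) / np.sqrt(n)
se_anti = paired.std(ddof=1) / np.sqrt(n)
print(se_anti < se_plain)  # True
```

The payoff here is monotonically decreasing in **z**, which is exactly the situation where pairing **z** with –**z** pays off.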

The second technique, control variates, is about using all the information you have in an active way. In my example, I have equities that follow a Black-Scholes process, which has some well-known analytical results about the pricing of futures and options. I can then rewrite the value of the guarantee as a linear combination of these and of a remainder, which is hopefully easier to tame. Schematically, I have the following relationship if I choose constants *k*_{1} and *k*_{2}:

Guarantee = *k*_{1} * option + *k*_{2} * future + remainder

This formula should be understood in two ways. First in the sense of payoffs: in each scenario, I know the payoffs of the guarantee, the option, and the future. The remainder is just what is left. The second is in the sense of values: the value of the guarantee is a linear combination of values of the option and the future (which are known analytically), and the remainder, which I just backed out of the simulation.

Minimising the error is equivalent to minimising the variance of the remainder, which is conveniently done by choosing *k*_{1} and *k*_{2} by an ordinary least-squares regression.

In my example, it makes sense to use the 20-year equity future and the at-the-money 20-year equity future option. Compared to the naïve estimate, the speedup is a notch better than for antithetic variates, reducing the number of scenarios by a factor of 9.

As a bonus, this procedure will ensure that the option and the future are always priced according to their analytical formulas, which is central to a consistent, arbitrage-free pricing.
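The whole procedure can be sketched as follows, under illustrative Black-Scholes assumptions that are stylised stand-ins for the actual example: a put-style guarantee is regressed on an at-the-money call option and a future, whose risk-neutral values are known in closed form.

```python
import numpy as np
from math import erf, sqrt, log, exp

S0, r, sigma, T = 100.0, 0.02, 0.2, 20.0
K_call, K_put = 100.0, 90.0   # control strike and guarantee strike
rng = np.random.default_rng(2)
n = 50_000

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Closed-form risk-neutral values of the two control instruments.
d1 = (log(S0 / K_call) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
call_value = S0 * norm_cdf(d1) - K_call * exp(-r * T) * norm_cdf(d2)
future_value = S0              # value of the discounted S_T payoff

# Simulated discounted payoffs under Black-Scholes.
z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
disc = exp(-r * T)
Y = disc * np.maximum(K_put - ST, 0.0)                    # guarantee
X = np.column_stack([disc * np.maximum(ST - K_call, 0.0),  # option
                     disc * ST])                           # future

# k1, k2 from an OLS regression of the guarantee on the controls.
k, *_ = np.linalg.lstsq(X - X.mean(axis=0), Y - Y.mean(), rcond=None)
remainder = Y - X @ k

# Control-variate estimate: plain Monte Carlo mean, corrected with
# the analytically known values of the controls.
cv_estimate = Y.mean() - k @ (X.mean(axis=0)
                              - np.array([call_value, future_value]))
print(remainder.var() < Y.var())  # True: the controls soak up variance
```

The variance of the remainder, not of the raw guarantee payoff, now drives the Monte Carlo error, and the option and future are priced exactly at their analytical values by construction.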

While antithetic and control variates are the two most common variance reduction techniques, there are more to choose from, but they inevitably require your problem to have some particular feature to work well. I will again recommend reading Glasserman’s *Monte Carlo Methods in Financial Engineering*.

The example presented here is obviously fictitious, but has not been particularly tweaked to make the techniques look better. Details and code are available on request.

The post Are Solvency II technical provisions hard? Here are a few tricks to make your life easier appeared first on Loaded Dice Analytics.

In many cases, technical provisions only have to use the full machinery for the annual calculation. According to EIOPA guidelines, you can make simplifications for your quarterly technical provisions, as long as they are proportionate and follow common sense.

Not only are you allowed to make such simplifications, you probably should have something like this if you are materially exposed to market risk on your liability side. That way, you can keep an eye on it and run a hedging programme on a shorter time scale than quarterly.

The common sense that the EIOPA guidelines prescribe is:

- Technical provisions have to be rolled forward with cash in-flows and out-flows, and with new business. Also, the roll-forward assumptions cannot be way off the actual experience.
- With respect to risk factors, the simplification should connect to the sensitivity analysis that the actuarial function has to make anyway. EIOPA only applies this to life obligations, but it sounds like a good idea for non-life too.

From there, it is up to your imagination and modelling skills. Plus, you should probably clear it with your National Competent Authority.

If you are experiencing headaches with your technical provisions, chances are that you are doing stochastic simulations. Although stochastic implies drawing random numbers, make sure they are not too random. In other words, make sure you are using some of the many neat techniques for improving precision. If you get them to work well, they could improve your precision by a factor of 10, saving you roughly 99% of the simulation time, since precision scales with the square root of the number of simulations.

I will get back to the actual techniques in a later article. Until then, if you are in the business of stochastic simulation for technical provisions, make sure that you, or a colleague, has read and understood Glasserman’s Monte Carlo Methods in Financial Engineering.

I once worked with a calculation that took about six hours on a 10-machine cluster and had to be run in 5-10 configurations every quarter. Something would often go wrong with the input data, so we would diligently spend a whole day checking the input for two calculations so we could run them overnight with minimal risk of having to recalculate.

No-one knew how extremely wasteful this was, until a colleague and I tried to make a parallel implementation in the R programming language. After a week of programming, it ran in eight hours on a laptop. I recently tried to re-implement it in the highly efficient Julia programming language, and it ran in about 10 minutes on my laptop.

What I found most disturbing was the way the long calculation times made our processes rot and gave us the impression that spending weeks every quarter on this task was necessary. We could probably have done it in a day or two with the right tools at hand.

In the same way, you should not accept long calculation times and expensive hardware requirements without question.

If you want to know what questions to ask, be sure to hang around for an article on that subject!

The post Loaded Dice Analytics opens! appeared first on Loaded Dice Analytics.
