Appendix - Measuring Volatility
Derivatives-based Portfolio Solutions
DEFINITION OF VOLATILITY
Assuming that the log returns of a particular security are normally distributed (ie, follow a normal 'bell-shaped' distribution), the volatility σ of that security can be defined as the annualised standard deviation of those log returns. As the mean absolute deviation of a normal distribution is √(2/π) (≈0.8) × volatility, the volatility can be thought of as approx. 1.25× the expected percentage change (positive or negative) of the security.
σ = standard deviation of log returns × √(1/Δt), where Δt is the time between observations (in years)
CLOSE-TO-CLOSE HISTORICAL VOLATILITY IS THE MOST COMMON
Volatility is defined as the annualised standard deviation of log returns. For historical volatility the usual measure is close-to-close volatility, which is shown below.
Log return = xi = Ln( (ci + di) / ci-1 ) where di is the ordinary dividend going ex on day i and ci is the close price on day i
Volatility (not annualised) = σx = √( (1/N) Σ (xi − xaverage)² ) where xaverage is the drift (average log return)
Historical volatility calculation is an estimate from a sample
Historical volatility is calculated as the standard deviation of the log returns of a particular security's time series. If the log returns are calculated from daily data, this number has to be multiplied by the square root of 252 (the number of trading days in a calendar year) in order to annualise the volatility (as Δt = 1/252, hence √(1/Δt) = √252). As a general rule, to annualise the volatility calculation, regardless of the periodicity of the data, the standard deviation has to be multiplied by the square root of the number of days/weeks/months within a year (ie, √252, √52, √12).
σAnnualised = σx × √( number of values in a year )
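The close-to-close calculation above can be sketched in plain Python. The prices below are hypothetical, and the annualisation factor of 252 assumes daily data:

```python
import math

def close_to_close_vol(prices, periods_per_year=252):
    """Annualised close-to-close volatility: sample standard deviation of
    log returns (drift subtracted), scaled by sqrt(periods per year)."""
    log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(log_returns)
    mean = sum(log_returns) / n
    variance = sum((x - mean) ** 2 for x in log_returns) / (n - 1)
    return math.sqrt(variance) * math.sqrt(periods_per_year)

# Hypothetical daily closes
closes = [100.0, 101.0, 100.5, 102.0, 101.2, 103.0]
vol = close_to_close_vol(closes)
print(f"annualised vol: {vol:.1%}")
```

For weekly or monthly data the same function applies with `periods_per_year` set to 52 or 12.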
BEST TO ASSUME ZERO DRIFT FOR VOLATILITY CALCULATION
The calculation of standard deviation measures the deviation from the average log return (or drift). This average log return has to be estimated from the sample, which can cause problems if the return over the period sampled is very high or negative. As over the long term very high or negative returns are not realistic, the calculation of volatility can be corrupted by using the sample log return as the expected future return. For example, if an underlying rises 10% a day for ten days, the volatility of the stock is calculated as zero (as there is zero deviation from the 10% average daily return). This is why volatility calculations are normally more reliable if a zero return is assumed. In theory, the expected average value of an underlying at a future date should be the value of the forward at that date. For all normal levels of interest rates (and dividends and borrow cost), the forward return should be close to 100% for any reasonable sampling frequency (ie, daily/weekly/monthly). Hence, for simplicity it is easier to assume a zero log return, as Ln(100%) = 0.
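The 10%-a-day example above can be checked directly. The sketch below compares the drift-adjusted and zero-drift calculations on that (hypothetical, extreme) price path:

```python
import math

# A stock that rises 10% every day for ten days: every log return is identical,
# so the deviation from the sample mean is zero and drift-adjusted vol is zero.
prices = [100.0 * 1.10 ** i for i in range(11)]
rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]

mean = sum(rets) / len(rets)
vol_with_drift = math.sqrt(sum((x - mean) ** 2 for x in rets) / len(rets)) * math.sqrt(252)
vol_zero_drift = math.sqrt(sum(x ** 2 for x in rets) / len(rets)) * math.sqrt(252)

print(vol_with_drift)   # effectively zero: misleadingly implies a riskless stock
print(vol_zero_drift)   # roughly 151% annualised: a more sensible answer
```

The zero-drift figure captures the fact that 10% daily moves are anything but riskless.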
WHICH HISTORICAL VOLATILITY SHOULD I USE?
When examining how attractive the implied volatility of an option is, investors will often compare it to historical volatility. However, historical volatility needs two parameters: the length of time over which it is measured, and the frequency of measurement (eg, daily or weekly returns).
LENGTH OF TIME FOR HISTORICAL VOLATILITY
Choosing the number of days for historical volatility is not a trivial choice. Some investors believe the best number of days of historical volatility to look at is the same as the duration of the implied volatility of interest. For example, one-month implied should be compared to 21-trading-day historical volatility (and three-month implied to 63-day historical volatility, etc). While an identical-duration historical volatility is useful for arriving at a realistic minimum and maximum value over a long period of time, it is not always the best period of time for determining the fair level of long-dated implieds. This is because volatility mean-reverts over time: using historical volatility over longer periods is not likely to give the best estimate of future volatility, as it could include volatility caused by earlier events whose effect on the market has passed. Arguably a multiple of three months should be used, to ensure that there is always the same number of quarterly reporting dates in the historical volatility measure. Additionally, if there has been a recent jump in the share price that is not expected to recur, the period of time chosen should try to exclude that jump.
The best historical volatility period does not have to be the most recent
If there has been a rare event which caused a volatility spike, the best estimate of future volatility is not necessarily the current historical volatility. A better estimate could be the past historical volatility from when an event occurred that caused a similar volatility spike. For example, volatility after the credit crunch could be compared to the volatility spike after the Great Depression or during the bursting of the tech bubble.
FREQUENCY OF HISTORICAL VOLATILITY
While historical volatility can be measured monthly, quarterly or yearly, it is usually measured daily or weekly. Normally, daily volatility is preferable to weekly volatility as five times as many data points are available. However, if volatility over a long period of time is being examined between two different markets, weekly volatility could be the best measure, as it reduces the influence of different public holidays (and trading hours). If stock price returns are independent, then the daily and weekly historical volatility should on average be the same. If stock price returns are not independent, there could be a difference. Autocorrelation is the correlation between successive returns; independent returns therefore have an autocorrelation of 0%.
Trending markets imply weekly volatility is greater than daily volatility
With 100% autocorrelation, returns are perfectly correlated (ie, trending markets). With -100% autocorrelation, a positive return is followed by a negative return (mean-reverting or range-trading markets). If we assume a market is 100% daily autocorrelated with a 1% daily log return, the weekly log return is 5%. The daily volatility is therefore approx. 16% (1% × √252), while the weekly volatility of approx. 36% (5% × √52) is more than twice as large.
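The trending-market arithmetic above can be verified with a stylised example: 50 hypothetical trading days of exactly +1% each, sampled at daily and weekly frequency, with zero-drift volatility at each frequency:

```python
import math

# Perfectly trending market: +1% log return every day (100% autocorrelation).
daily_returns = [0.01] * 50
# Each 5-day week then has a 5% log return.
weekly_returns = [sum(daily_returns[i:i + 5]) for i in range(0, 50, 5)]

# Zero-drift annualised volatility at each sampling frequency
daily_vol = math.sqrt(sum(r * r for r in daily_returns) / len(daily_returns)) * math.sqrt(252)
weekly_vol = math.sqrt(sum(r * r for r in weekly_returns) / len(weekly_returns)) * math.sqrt(52)

print(f"daily:  {daily_vol:.0%}")   # ~16%
print(f"weekly: {weekly_vol:.0%}")  # ~36%, more than twice the daily figure
```

With mean-reverting (negatively autocorrelated) returns the inequality reverses: daily volatility exceeds weekly volatility.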
High market share of high frequency trading should prevent autocorrelation
Historically (decades ago), there could have been positive autocorrelation due to momentum buying, but once this became understood this effect is likely to have faded. Given the current high market share of high frequency trading (accounting for up to three-quarters of US equity trading volume), it appears unlikely that a simple trading strategy such as ‘buy if security goes up, sell if it goes down’ will provide above-average returns over a significant period of time.
Panicked markets could cause temporary negative autocorrelation
While positive autocorrelation is likely to be arbitraged out of the market, there is evidence that markets can overreact and panic at times of stress (rare statistical events can occur under the weak form of the efficient market hypothesis). During these events human traders and some automated trading systems are likely to stop trading (as the event is rare, the correct response is unknown), or potentially exaggerate the trend (as positions get 'stopped out' or to follow the momentum of the move). A strategy that is long daily variance and short weekly variance will therefore usually give relatively flat returns, but occasionally a positive return.
INTRADAY VOLATILITY IS NOT CONSTANT
For most markets, intraday volatility is greatest just after the open (as results are often announced around the open) and just before the close (performance is often based upon closing prices). Intraday volatility tends to sag in the middle of the day due to the combination of a lack of announcements and reduced volumes/liquidity owing to lunch breaks. For this reason, using an estimate of volatility more frequent than daily tends to be very noisy. Traders who wish to take into account intraday prices should instead use an advanced volatility measure.
EXPONENTIALLY WEIGHTED VOLATILITIES ARE RARELY USED
An alternative measure is the exponentially weighted moving average model, shown below. The parameter λ lies between zero (which reduces the measure to the most recent squared return, effectively one-day volatility) and one (which ignores the current return and keeps volatility constant). Values of approx. 0.9 are normally used. Exponentially weighted volatilities are rarely used, partly because they do not handle regular volatility-driving events such as earnings very well: previous earnings jumps will have least weight just before an earnings date (when future volatility is most likely to be high) and most weight just after earnings (when future volatility is most likely to be low). The measure could, however, be of some use for indices.
σi² = λ σi-1² + ( 1 − λ ) xi²
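A minimal sketch of the recursion above, applied to hypothetical returns (a quiet stretch followed by one large move), with λ = 0.9 as the text suggests:

```python
import math

def ewma_variance(log_returns, lam=0.9, initial_var=None):
    """Exponentially weighted moving average of squared log returns:
    var[i] = lam * var[i-1] + (1 - lam) * ret[i]**2
    Seeds with the first squared return unless initial_var is given."""
    var = initial_var if initial_var is not None else log_returns[0] ** 2
    for r in log_returns:
        var = lam * var + (1.0 - lam) * r * r
    return var

# Hypothetical daily returns: 20 quiet days then a 5% move
rets = [0.002] * 20 + [0.05]
annualised_vol = math.sqrt(ewma_variance(rets, lam=0.9) * 252)
print(f"{annualised_vol:.1%}")
```

Running the recursion further with quiet returns would show the spike's influence decaying by a factor of λ each day, which is the gradual fade described in the next paragraph.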
Exponentially weighted volatility avoids volatility collapse of historic volatility
Exponentially weighted volatility has the advantage over standard historical volatility that the effect of a spike in volatility gradually fades (as opposed to suddenly disappearing, causing a collapse in historical volatility). For example, if we are looking at the historical volatility over the past month and a spike in realised volatility suddenly occurs, the historical volatility will be high for a month and then collapse. Exponentially weighted volatility will rise at the same time as historical volatility and then gradually decline to lower levels (arguably in a similar way to how implied volatility spikes, then mean-reverts).
ADVANCED VOLATILITY MEASURES
Close-to-close volatility is usually used as it has the benefit of using the closing auction prices only. Should other prices be used, then they could be vulnerable to manipulation or a ‘fat fingered’ trade. However, a large number of samples need to be used to get a good estimate of historical volatility, and using a large number of closing values can obscure short-term changes in volatility. There are, however, different methods of calculating volatility using some or all of the open (O), high (H), low (L) and close (C). The methods are listed in order of their maximum efficiency (close-to-close variance divided by alternative measure variance).
EFFICIENCY AND BIAS DETERMINE BEST VOLATILITY MEASURE
There are two measures that can be used to determine the quality of a volatility estimator: efficiency and bias. Generally, for small sample sizes the Yang-Zhang measure is best overall, and for large sample sizes the standard close-to-close measure is best.
Efficiency measures the volatility of the estimate
The efficiency describes the variance, or volatility, of the estimate. It depends on the number of samples, with the efficiency advantage of the alternative estimators decreasing as the number of samples increases (as close-to-close volatility converges and becomes less volatile with more samples). The quoted efficiency is the theoretical maximum against an idealised distribution; with real empirical data a far smaller benefit is usually seen (especially for long time series). For example, while the Yang-Zhang based estimators deal with overnight jumps, if the jumps are large compared to the intraday volatility the estimate will converge with the close-to-close volatility and have an efficiency close to one.
Close-to-close volatility should use at least five samples (and ideally 20 or more)
The variance of the close-to-close volatility estimate, as a fraction of the actual variance, can be approximated by 1/(2N), where N is the number of samples.
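A quick arithmetic illustration of the 1/(2N) rule: the relative standard error of the volatility estimate is then roughly √(1/(2N)), which makes the five-sample minimum and 20-sample recommendation concrete:

```python
import math

# Relative standard error of the close-to-close volatility estimate, sqrt(1/(2N)).
# With only 5 samples the estimate is very noisy; 20 or more is far better.
for n in (5, 20, 100):
    rel_err = math.sqrt(1.0 / (2 * n))
    print(f"N={n:3d}: relative std error ~ {rel_err:.0%}")
```

So a 20% volatility estimated from five samples carries roughly a 6-point standard error, versus about 3 points with twenty samples.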
Bias depends on the type of distribution of the underlying
While efficiency (how volatile the measure is) is important, so too is bias (whether the measure is, on average, too high or too low). Bias depends on the sample size and on the type of distribution of the underlying security. Generally, the close-to-close volatility estimator is biased too high.
Variance, volatility and gamma swaps should look at standard volatility (or variance)
As the payout of variance, volatility and gamma swaps are based on close-to-close prices, the standard close-to-close volatility (or variance) should be used for comparing their price against realised. Additionally, if a trader only hedges at the close (potentially for liquidity reasons) then again the standard close-to-close volatility measure should be used.
The simplest volatility measure is the standard close-to-close volatility. We note that the standard deviation should be multiplied by √(N/(N−1)) to take into account the fact that we are sampling from the population (equivalently, take the sample standard deviation). We ignored this in the earlier definition as, for reasonably large N, √(N/(N−1)) is roughly equal to one.
Volcc = σcc = √(F/(N−1)) × √( Σ (Ln( ci/ci-1 ))² ) with zero drift, where F is the number of returns per year (eg, 252 for daily data) and N is the number of returns
PARKINSON
The first advanced volatility estimator was created by Parkinson in 1980; instead of using closing prices it uses the high and low price of the day. One drawback of this estimator is that it assumes continuous trading; hence, it underestimates the volatility, as potential movements while the market is shut are ignored.
VolParkinson = σP = √(F/N) × √( (1/(4Ln(2))) Σ (Ln( hi/li ))² )
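A sketch of the Parkinson formula above in plain Python, on hypothetical daily high/low prices:

```python
import math

def parkinson_vol(highs, lows, periods_per_year=252):
    """Parkinson (1980) range-based volatility estimator using daily
    highs and lows: sqrt(F/N) * sqrt( sum(ln(h/l)^2) / (4 ln 2) )."""
    n = len(highs)
    s = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
    return math.sqrt(periods_per_year / n) * math.sqrt(s / (4.0 * math.log(2.0)))

# Hypothetical daily high/low prices
highs = [101.5, 102.0, 101.0, 103.2, 102.5]
lows  = [ 99.5, 100.2,  99.8, 101.0, 100.9]
print(f"{parkinson_vol(highs, lows):.1%}")
```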
GARMAN-KLASS
Later in 1980 the Garman-Klass volatility estimator was created. It is an extension of Parkinson which includes opening and closing prices (if opening prices are not available, the close from the previous day can be used instead). As overnight jumps are ignored, the measure underestimates the volatility.
VolGarman-Klass = σGK = √(F/N) × √( Σ [ 1/2 (Ln( hi/li ))² − (2Ln(2)−1)(Ln( ci/oi ))² ] )
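The Garman-Klass formula can be sketched the same way, here on hypothetical OHLC bars:

```python
import math

def garman_klass_vol(opens, highs, lows, closes, periods_per_year=252):
    """Garman-Klass (1980) OHLC volatility estimator.
    Ignores overnight jumps, so it underestimates volatility for gapping stocks."""
    n = len(closes)
    s = 0.0
    for o, h, l, c in zip(opens, highs, lows, closes):
        s += 0.5 * math.log(h / l) ** 2 \
             - (2.0 * math.log(2.0) - 1.0) * math.log(c / o) ** 2
    return math.sqrt(periods_per_year / n) * math.sqrt(s)

# Hypothetical daily OHLC bars
opens  = [100.0, 100.8, 101.2, 100.5]
highs  = [101.5, 102.0, 101.9, 102.2]
lows   = [ 99.5, 100.2, 100.1, 100.3]
closes = [100.8, 101.2, 100.5, 101.9]
print(f"{garman_klass_vol(opens, highs, lows, closes):.1%}")
```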
ROGERS-SATCHELL
All of the previous advanced volatility measures assume the average return (or drift) is zero. Securities that have a drift, or non-zero mean, require a more sophisticated measure of volatility. The Rogers-Satchell volatility, created in the early 1990s, can properly measure the volatility of securities with non-zero drift. It does not, however, handle jumps; hence, it underestimates the volatility.
VolRogers-Satchell = σRS = √(F/N) × √( Σ [ Ln( hi/ci )Ln( hi/oi ) + Ln( li/ci )Ln( li/oi ) ] )
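A sketch of the Rogers-Satchell formula, on hypothetical OHLC bars with a deliberate upward drift (the case this estimator is designed to handle):

```python
import math

def rogers_satchell_vol(opens, highs, lows, closes, periods_per_year=252):
    """Rogers-Satchell volatility estimator: unbiased for non-zero drift,
    but ignores overnight jumps."""
    n = len(closes)
    s = 0.0
    for o, h, l, c in zip(opens, highs, lows, closes):
        s += math.log(h / c) * math.log(h / o) + math.log(l / c) * math.log(l / o)
    return math.sqrt(periods_per_year / n) * math.sqrt(s)

# Hypothetical trending OHLC bars (steady upward drift of ~1% a day)
opens  = [100.0, 101.0, 102.0, 103.0]
highs  = [101.6, 102.6, 103.6, 104.7]
lows   = [ 99.6, 100.6, 101.6, 102.6]
closes = [101.0, 102.0, 103.0, 104.0]
print(f"{rogers_satchell_vol(opens, highs, lows, closes):.1%}")
```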
GARMAN-KLASS YANG-ZHANG EXTENSION
Yang-Zhang modified the Garman-Klass volatility measure in order to let it handle overnight jumps. The measure still assumes zero drift; hence, it will overestimate the volatility if a security has a non-zero mean return. As the effect of drift is small, the fact that continuous prices are not available usually means it underestimates the volatility (but by a smaller amount than the previous alternative measures).
VolGKYZ = σGKYZ = √(F/N) × √( Σ [ (Ln( oi/ci-1 ))² + 1/2 (Ln( hi/li ))² − (2Ln(2)−1)(Ln( ci/oi ))² ] )
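The extension adds one overnight term to Garman-Klass, as the sketch below shows on hypothetical bars with opening gaps:

```python
import math

def gkyz_vol(prev_closes, opens, highs, lows, closes, periods_per_year=252):
    """Garman-Klass Yang-Zhang extension: Garman-Klass plus an overnight
    (previous-close-to-open) jump term; still assumes zero drift."""
    n = len(closes)
    s = 0.0
    for pc, o, h, l, c in zip(prev_closes, opens, highs, lows, closes):
        s += (math.log(o / pc) ** 2
              + 0.5 * math.log(h / l) ** 2
              - (2.0 * math.log(2.0) - 1.0) * math.log(c / o) ** 2)
    return math.sqrt(periods_per_year / n) * math.sqrt(s)

# Hypothetical bars with overnight gaps (open != previous close)
prev_closes = [100.0, 100.8, 101.2]
opens       = [100.5, 100.3, 101.8]
highs       = [101.6, 101.5, 102.6]
lows        = [ 99.8,  99.9, 100.9]
closes      = [100.8, 101.2, 101.5]
print(f"{gkyz_vol(prev_closes, opens, highs, lows, closes):.1%}")
```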
YANG-ZHANG
In 2000 Yang and Zhang created a volatility measure that handles both overnight jumps and drift. It is the sum of the overnight variance (close-to-open) and a weighted average of the Rogers-Satchell variance and the open-to-close variance. The assumption of continuous prices means the measure still tends to slightly underestimate the volatility.
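The description above can be sketched as follows. The weight k = 0.34 / (1.34 + (N+1)/(N−1)) is the minimum-variance choice from the original Yang-Zhang paper, stated here as an assumption rather than taken from this text; the bars are hypothetical:

```python
import math

def yang_zhang_vol(prev_closes, opens, highs, lows, closes, periods_per_year=252):
    """Yang-Zhang (2000) estimator: overnight variance plus a weighted
    average of the open-to-close variance and the Rogers-Satchell variance."""
    n = len(closes)
    overnight = [math.log(o / pc) for pc, o in zip(prev_closes, opens)]
    open_close = [math.log(c / o) for o, c in zip(opens, closes)]

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    var_on = sample_var(overnight)      # close-to-open (overnight) variance
    var_oc = sample_var(open_close)     # open-to-close variance
    var_rs = sum(math.log(h / c) * math.log(h / o) + math.log(l / c) * math.log(l / o)
                 for o, h, l, c in zip(opens, highs, lows, closes)) / n

    k = 0.34 / (1.34 + (n + 1) / (n - 1))   # assumed weight from the original paper
    return math.sqrt((var_on + k * var_oc + (1 - k) * var_rs) * periods_per_year)

# Hypothetical bars with overnight gaps
prev_closes = [100.0, 100.8, 101.2, 100.5]
opens       = [100.5, 100.3, 101.8, 100.9]
highs       = [101.6, 101.5, 102.6, 102.0]
lows        = [ 99.8,  99.9, 100.9, 100.2]
closes      = [100.8, 101.2, 100.5, 101.9]
print(f"{yang_zhang_vol(prev_closes, opens, highs, lows, closes):.1%}")
```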