“the conventional view serves to protect us from the painful job of thinking.” J. K. Galbraith

Volatility is a key concept in finance. It is a statistical measure of the dispersion of returns for a given security or market index, typically computed as the standard deviation or variance of those returns. In general, the higher the volatility, the riskier the security. Higher volatility also implies less ‘control’ over performance.

However, there are some shortcomings to using standard deviation to calculate volatility. The most important is that the standard deviation assumes that returns are normally distributed, with more results near the average and fewer far away from it. In reality, portfolio returns often have asymmetrical distributions and can be unusually high or low over time. In addition, volatility itself tends to change over time, challenging the assumption of an unchanging statistical distribution of returns.

Volatility of single securities or portfolios is computed based on variance. In the case of a portfolio with weights vector *w*, volatility *V* is defined as:

*V* = √(*w*′ *K* *w*)

where **K** is a covariance matrix and the expression under the square root is a quadratic form. This approach leads to two main issues. First of all, it neglects structural aspects of the portfolio in question. In fact, the securities forming a portfolio are interdependent (correlated) and this is reflected by a map (graph) of which an example is illustrated below.

The above equation does not take into account explicitly the information on the topology of the map of inter-dependencies, as it is simply a weighted sum of *all* the entries of the covariance matrix, or an *entrywise* norm of *K*. In other words, the sum is the same no matter where each entry of the matrix is situated (i.e. in which row and/or column).
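As a minimal sketch of this computation, with an illustrative two-asset covariance matrix and weight vector (values are hypothetical, not from the text):

```python
import numpy as np

# Hypothetical covariance matrix K and weight vector w for two securities
K = np.array([[0.04, 0.01],
              [0.01, 0.09]])   # annualized covariances
w = np.array([0.6, 0.4])       # portfolio weights summing to 1

# Volatility V = sqrt(w' K w): a quadratic form in the weights.
# Note that this is just a weighted sum of ALL entries of K, insensitive
# to where in the matrix each entry sits.
V = np.sqrt(w @ K @ w)
```

Permuting the securities (and the weights consistently) leaves *V* unchanged, which is precisely the structural blindness discussed above.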

In order to take structure into account one may adopt a different approach, for example one based on complexity. The complexity of a system described by a vector {**x**} of N components (measurable properties, such as stock prices), is defined as a function of *Structure* and *Entropy*.

C = *f*(**S** ○ **E**)

where **S** represents an N × N adjacency matrix, **E** is an N × N entropy matrix, ‘○’ is the Hadamard matrix product operator and *f* is a norm operator. The above equation represents a formal definition of complexity and it is not used in its computation. Instead, the adjacency and entropy matrices are determined via a proprietary algorithm. Both matrices contain only those entries which correspond to *significant interdependencies*. In other words, we establish the *effective structure* of the system and the interactions within it. Once the entropy matrix and the adjacency matrix have been obtained, one may compute the complexity of a given system as the following matrix norm:

C = ║**S** ○ **E**║

The norm we use is the *spectral norm*, computed via the Singular Value Decomposition. In particular, the maximum singular value of the above matrix product is used; for a symmetric matrix this coincides with its *spectral radius*.
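As an illustrative sketch, assuming toy **S** and **E** matrices (in the actual method both are produced by a proprietary algorithm), the complexity measure can be computed as:

```python
import numpy as np

# Toy adjacency matrix S (1 = significant interdependency) and entropy
# matrix E; these values are hypothetical, for illustration only.
S = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
E = np.array([[0.0, 0.8, 0.3],
              [0.8, 0.0, 0.5],
              [0.3, 0.5, 0.0]])

M = S * E   # Hadamard (entrywise) product S ∘ E

# Spectral norm: the largest singular value of S ∘ E
C = np.linalg.svd(M, compute_uv=False).max()
```

Because the adjacency matrix masks out insignificant entries, the position of each surviving entry now matters: rearranging the links changes the singular values and hence the complexity.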

The second key issue is that of correlation. Correlations are computed based on covariances and standard deviations. Standard deviations measure dispersion around the mean but do not account for the actual distribution of data. A novel measure of correlation, based on entropy, can be devised in order to provide a more realistic measure, called *generalized correlation*. The need to resort to an alternative measure of correlation has been dictated by the fact that linear correlations neglect many key features of data and can be misleading if data is even mildly pathological. An example is illustrated below. The linear correlation coefficient is 0.92, while the generalized one is 0.76, a full 16% less; moreover, the linear measure neglects the fact that most of the data points are arranged in two clusters. Clusters point to bifurcations, certainly not to linearity.

Linear (Pearson’s) correlation often provides *overly optimistic* values and must be applied with caution, only to data which is relatively well-behaved and has a ‘linear flavor’. The assumption of continuity is equally dangerous. In the figure below two situations are illustrated in which the correlation coefficient is 100% and a linear model fits the data perfectly. While it is already striking that two totally different situations lead to the same model (a straight line) and an identical correlation coefficient, what is more alarming is that in the case on the right-hand side the linear model is not valid between the two clusters of data points. Assuming otherwise is dangerous. Clustering is a consequence of bifurcations, which are clearly present in that case. The case on the left-hand side reflects a less intricate situation. Basically, the physics generating the two situations is quite different, and yet basic statistics suggests otherwise. If one doesn’t actually see the data, this fact will go unnoticed.

Examples in which correlations are applied with disregard for the underlying character of the data are illustrated in the two figures below. The first example (source: Wikipedia) shows two cases reporting r = 0.79 and 0.77, both of which appear excessively optimistic.

It is easy to imagine what consequences this can have when similar models are incorporated into complex risk assessment/risk rating procedures or portfolio design. Situations such as the ones illustrated above can go unnoticed if one is confronted with thousands of correlations and never inspects data visually.

In order to avoid similar issues, a new technique has been developed in which data is not treated using traditional techniques. The basic idea is to literally *transform a scatter plot into an image* and to process it using image analysis techniques in order to determine which images ‘carry information’ and which don’t. Image information content and degree of structure are used to determine the degree of correlation between variables. The procedure – a *model-free approach* – is proprietary and emulates in a simple manner what happens on the retina of an eye.

An example of the process of ‘scatter-plot pixelization’ is illustrated below. Once the image has been obtained, its information content is measured using proprietary entropy-based techniques, producing the so-called *generalized correlation*.

Once a scatter plot has been pixelized, a *generalized correlation coefficient* is computed based on entropy concepts.
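The actual pixelization and entropy measurement are proprietary; the following is a hypothetical stand-in that conveys the idea, rasterizing a scatter plot into a coarse image with `numpy.histogram2d` and measuring the Shannon entropy of the pixel intensities:

```python
import numpy as np

def pixelize_entropy(x, y, bins=16):
    """Rasterize a scatter plot into a bins x bins image and return the
    Shannon entropy of the pixel-intensity distribution (an illustrative
    stand-in for the proprietary measure described in the text)."""
    img, _, _ = np.histogram2d(x, y, bins=bins)
    p = img.ravel() / img.sum()   # pixel intensities as probabilities
    p = p[p > 0]                  # drop empty pixels (0 log 0 = 0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
noisy = pixelize_entropy(x, rng.normal(size=500))               # unstructured cloud
linear = pixelize_entropy(x, x + 0.05 * rng.normal(size=500))   # tight line
```

A tightly structured (linear) scatter occupies fewer pixels and therefore yields a lower entropy than an unstructured cloud; the idea is that this kind of image information content can then be mapped to a correlation value.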

This brings us to a new entropy- and complexity-based measure of volatility. When probability distributions are asymmetric and leptokurtic, variance may no longer be enough to measure risk or uncertainty. Unlike variance, which measures concentration only around the mean, *entropy* measures the diffuseness of the density irrespective of the location of concentration (Shannon, 1948):

*H* = −Σᵢ *p*ᵢ log *p*ᵢ
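A small illustration of this point, using two hypothetical discrete return distributions constructed to have the same mean and variance but different entropies:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon (1948) entropy H = -sum p_i log p_i of a discrete density."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]   # by convention, 0 log 0 = 0
    return -np.sum(p * np.log(p))

# Both distributions below have mean 0 and variance 1 (illustrative values).
outcomes_a = np.array([-1.0, 0.0, 1.0])
p_a = np.array([0.5, 0.0, 0.5])           # concentrated at two points
outcomes_b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p_b = np.array([0.05, 0.30, 0.30, 0.30, 0.05])   # more diffuse density

var_a = np.sum(p_a * outcomes_a**2)   # = 1.0
var_b = np.sum(p_b * outcomes_b**2)   # = 8*0.05 + 2*0.30 = 1.0
H_a = shannon_entropy(p_a)
H_b = shannon_entropy(p_b)
```

Variance cannot tell these two apart, while entropy registers that the second density is far more diffuse.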

Entropy captures the effect of diversification and is a more general uncertainty measure than variance, because it uses much more information about the probability distribution. Linear correlations capture only linear behaviour and neglect any nonlinearity. Unfortunately, there is little linear behaviour in stock markets. In fact, in some cases the difference between a linear and a generalized correlation can be significant, sometimes close to 20%. A few examples of correlations in a particular twelve-stock portfolio are illustrated in the table below.

| Stocks | Linear corr. | Generalized corr. |
|--------|--------------|-------------------|
| s2-s10 | 0.928 | 0.781 |
| s6-s2 | 0.922 | 0.758 |
| s3-s10 | 0.866 | 0.786 |
| s4-s5 | 0.844 | 0.723 |
| s10-s12 | 0.901 | 0.739 |
| s12-s6 | 0.848 | 0.676 |

The easiest way to incorporate generalized correlations into conventional volatility computations is to replace the linear correlations implicit in *V* = √(*w*′ *K* *w*) with their generalized counterparts.
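A minimal sketch of this substitution, assuming hypothetical linear and generalized correlation values for a two-stock portfolio:

```python
import numpy as np

# Rebuild the covariance matrix K = D R D from a correlation matrix R and
# the diagonal matrix D of standard deviations, then swap in generalized
# correlations. All values below are hypothetical.
sigma = np.array([0.20, 0.30])          # stock standard deviations
R_lin = np.array([[1.00, 0.92],
                  [0.92, 1.00]])        # linear (Pearson) correlations
R_gen = np.array([[1.00, 0.76],
                  [0.76, 1.00]])        # generalized correlations
w = np.array([0.5, 0.5])

D = np.diag(sigma)
V_lin = np.sqrt(w @ (D @ R_lin @ D) @ w)   # conventional volatility
V_gen = np.sqrt(w @ (D @ R_gen @ D) @ w)   # adjusted volatility
```

With the lower (arguably more realistic) generalized correlations, the adjusted volatility comes out below the conventional figure.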

A new approach to volatility may be proposed as follows. Volatility can in fact be defined conceptually as the sum of the entropies of all stocks plus complexity:

*V* = Σᵢ *E*ᵢ + Ϭ(**S** ○ **E**)

where Ϭ represents the maximum singular value operator. The first term of the above equation reflects the ‘scalar’ component of volatility, while the second accounts for the ‘structural’ part (i.e. stock inter-dependencies). This measure of volatility is called *intrinsic volatility* as it is independent of weights and therefore reflects a characteristic of a given system.
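A minimal sketch of this measure, using toy stock entropies and toy **S** and **E** matrices (not real data): the individual entropies are summed and the maximum singular value of **S** ○ **E** is added as the structural term.

```python
import numpy as np

# Hypothetical per-stock entropies E_i (the 'scalar' component)
stock_entropies = np.array([0.42, 0.55, 0.38])

# Toy adjacency and entropy matrices for the 'structural' component
S = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
E = np.array([[0.0, 0.6, 0.4],
              [0.6, 0.0, 0.0],
              [0.4, 0.0, 0.0]])

complexity = np.linalg.svd(S * E, compute_uv=False).max()  # Ϭ(S ∘ E)
V_intrinsic = stock_entropies.sum() + complexity
# Note: no weight vector appears anywhere, so V_intrinsic is a property
# of the system itself, not of any particular allocation.
```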

Structure is tremendously important because to understand a system is to understand its underlying structure. In an inter-dependent economy this is of paramount importance. Structure may be represented as a simple map (shown at the top of this blog), or as a Complexity Map. A few examples are illustrated below. The size of each node is proportional to how much complexity (and resilience) a given component contributes to the complexity of the whole. In other words, the larger nodes are the drivers of complexity.

Portfolio.

System of major stock market indices.

A system of funds.

An economy, split into market segments and corporations, represented via their corresponding Balance Sheets.

Having established the importance of structure, let us compare classical volatility with the *intrinsic volatility* introduced above.

The first example is that of a balanced portfolio based on all stocks composing the S&P 100, spanning the period January 2000 – December 2005. The red curve represents intrinsic volatility.

It is clear how, after the dotcom bubble burst (March 2000 – October 2002), i.e. in a period of turbulence, the two volatilities are out of phase approximately 40% of the time: when conventional volatility drops, intrinsic volatility increases and vice versa. Afterwards, there is fairly good agreement between the locations of the peaks.

The second example concerns the Dow. A period of over three years (1998-2000) has been analyzed. The red curve represents intrinsic volatility.

In the first half of the analyzed period, the two curves are in sync, while in the second half they are no longer similar.

In conclusion:

- Intrinsic volatility is a new measure of volatility, computed from entropy rather than from standard deviations.
- It explicitly takes into account the structure of the inter-dependencies between the components of a portfolio.
- It does not depend on weights, hence it represents an intrinsic property of a system (portfolio).
- In periods of low market turbulence it provides information similar to conventional volatility.
- In periods of high volatility it provides new information and insights, because it adopts generalized correlations, which are more relevant than linear correlations when data is highly scattered and non-stationary.
- Intrinsic volatility collapses into conventional volatility in periods of low turbulence, when data is well-behaved; in such cases generalized correlations are similar in magnitude to linear correlations.

In essence, conventional volatility is a special case of intrinsic volatility, which is a more general approach. One could draw a parallel between Newtonian mechanics and relativistic mechanics – at low speeds both theories coincide, at high speeds Newtonian mechanics is not applicable. In practice, one theory is an approximation of a more general one. Clearly, when we engineer industrial products we resort to Newtonian mechanics as it provides an excellent approximation. In finance, however, the difference between 100% and 99.9% may still mean a lot of money.

“The conventional view serves to protect us from the painful job of thinking.” J. K. Galbraith
