
Alpha and Evaluating Investment Advisors

18th July, 2013 · Robert J Frey · 7 Comments

The search for alpha is an often-heard refrain in investment circles. Unfortunately, many of those seeking alpha—or attempting to deliver alpha—have only a nebulous idea of what alpha is. First, then, let’s attempt to define exactly what this performance measure means. Then we’ll take a stab at how alpha is best used to evaluate particular investors, investment products, or investment managers.

Wikipedia’s entry is a good place to start and defines alpha as “… a risk-adjusted measure of the so-called active return on an investment… Often the return of a benchmark is subtracted in order to consider relative performance, which yields Jensen’s alpha.” When people talk about alpha, they usually mean performance relative to a benchmark, i.e., Jensen’s alpha, and that’s the approach we’ll take here.

The Mathematics of Alpha

At time t, for an investment i with return r_{i}(t) measured against a benchmark m with return r_{m}(t), the estimate of alpha \alpha_i is based on the equation:

\displaystyle{r_{i}(t)-r_{f}(t)=\alpha_i+\beta_{i,m} (r_{m}(t)-r_{f}(t))+\epsilon_{i}(t)}

The term \beta_{i,m} is called the exposure of investment i to benchmark m, and \epsilon_{i} is a mean-zero noise or error term that is idiosyncratic to i.

It is important to adjust the returns by subtracting the risk-free rate. There are technical reasons why this is necessary, but the intuitive idea is that the asset’s return over the risk-free rate gives us a picture of the additional return we receive from taking the risks associated with the investment. Failing to subtract the risk-free rate as shown causes significant distortions in \alpha and is the most common mistake made when estimating it.

Thus, the alpha we get is dependent on the benchmark we choose. It is a relative measure that can be estimated statistically but is never observed directly.
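To make the estimation concrete, here is a minimal sketch in Python of fitting Jensen’s alpha by ordinary least squares on excess returns. The return series, rates, and loadings below are simulated placeholders, not data from this post; the only point is that the intercept of the excess-return regression is the alpha estimate.

import numpy as np

# Simulated stand-ins for monthly returns; replace with actual series.
rng = np.random.default_rng(0)
n   = 120                                   # ten years of monthly observations
r_f = np.full(n, 0.02 / 12)                 # assumed constant risk-free rate
r_m = r_f + rng.normal(0.005, 0.04, n)      # benchmark return (simulated)
r_i = r_f + 0.001 + 1.2 * (r_m - r_f) + rng.normal(0, 0.02, n)  # investment return

y = r_i - r_f                               # excess return of the investment
x = r_m - r_f                               # excess return of the benchmark
X = np.column_stack([np.ones(n), x])        # the intercept column carries alpha

(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"alpha per period: {alpha_hat:.5f}   beta: {beta_hat:.3f}")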

More Complex Models for Alpha

In the case above, we modeled the returns of an investment as a linear function of a single benchmark. A natural question is how appropriate a given benchmark is as the basis for the return model of a given investment. One response is to use multiple benchmarks, or factors, to achieve a more complete model.

For such a multi-factor model the equation is

\displaystyle{r_{i}(t)-r_{f}(t)=\alpha_i+\sum_{j=1}^{k} b_{i,j} (m_{j}(t)-r_{f}(t))+\epsilon_{i}(t)}

where m_{j} is the return of factor j and b_{i,j} is the exposure of investment i to factor j.
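The estimation generalizes directly: stack the factor excess returns as columns of a design matrix, and the intercept is again the alpha estimate. A brief sketch, again with simulated placeholder factors rather than real benchmark data:

import numpy as np

rng = np.random.default_rng(1)
n, k = 120, 3
factors = rng.normal(0.004, 0.03, (n, k))             # simulated factor excess returns
true_b  = np.array([0.8, 0.3, -0.2])                   # illustrative exposures
excess  = 0.0005 + factors @ true_b + rng.normal(0, 0.015, n)  # investment excess return

X = np.column_stack([np.ones(n), factors])             # [1 | factor excess returns]
coef, *_ = np.linalg.lstsq(X, excess, rcond=None)
alpha_hat, b_hat = coef[0], coef[1:]
print(f"alpha per period: {alpha_hat:.5f}   exposures: {np.round(b_hat, 3)}")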

The Challenges of Model Development and Estimation

There are significant challenges to building a multi-factor model. Selecting the factors may be done by collecting a series of representative indices or other financial data; however, fitting such factor models must deal with the inevitable correlations that exist across benchmarks. Uncorrelated factors can be derived by projecting the returns of a collection of representative investments onto a lower-dimensional subspace; however, this means that the factors themselves are not observable and must be estimated simultaneously with the parameters.
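One way to obtain uncorrelated statistical factors is a principal-component projection of a panel of representative excess returns. The sketch below uses a simulated panel purely for illustration; with real data the same steps would yield factor series whose sample correlations are essentially zero by construction.

import numpy as np

rng = np.random.default_rng(2)
n_obs, n_assets, k = 240, 50, 3
panel = rng.normal(0, 0.05, (n_obs, n_assets))         # placeholder excess-return panel

demeaned = panel - panel.mean(axis=0)
cov = np.cov(demeaned, rowvar=False)                   # asset covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
loadings = eigvecs[:, ::-1][:, :k]                     # loadings of the top k components
factors  = demeaned @ loadings                         # uncorrelated factor return series

print(np.round(np.corrcoef(factors, rowvar=False), 3)) # off-diagonal entries are ~0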

The parameters of the model are almost certainly time-dependent. Thus, the factor model becomes:

\displaystyle{r_{i}(t)-r_{f}(t)=\alpha_{i}(t)+\sum_{j=1}^{k} b_{i,j}(t) (m_{j}(t)-r_{f}(t))+\epsilon_{i}(t)}

Financial returns are heteroskedastic. This demands that we also estimate time-dependent variances for the model data. (See “All Correlations Tend to One…”, http://wp.me/p3uMW7-2v)
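As one illustration of a time-varying variance estimate, the sketch below applies an exponentially weighted recursion; the decay of 0.94 is an assumption made for the example, and this is only one of many possible schemes.

import numpy as np

def ewma_variance(returns, lam=0.94):
    """Exponentially weighted variance; sigma2[t] uses data up to and including t."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t] ** 2
    return sigma2

rng = np.random.default_rng(3)
r = rng.normal(0, 0.01, 500)      # simulated returns
print(ewma_variance(r)[-5:])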

These technical challenges can be dealt with, but the task isn’t easy. Certainly, the naïve approach of collecting a few indices and throwing the data into an off-the-shelf regression program isn’t going to cut it. Unfortunately, it is all too often exactly what is done.

Alpha, the “Average” Investor, and Market Efficiency

Consider the case where we are using some broad, capital-weighted market index as our benchmark and the target investment is one whose potential components are the same as those that constitute the benchmark. The capital-weighted gross return of the portfolios of all such investors is, by construction, the benchmark. Therefore, the capital-weighted mean of all alphas will be zero and the capital-weighted mean of all betas will be one.

If, however, we look at the net returns, i.e., returns net of costs such as fees, then the capital-weighted mean alpha must be below zero, because net performance includes costs that are absent from the benchmark or factors. Remember, alpha is a relative measure. If the investment management industry magically experienced a universal rise in competence and allocated capital more efficiently, then the mean alpha would still be slightly below zero.

Arguing that the investment management industry is not doing its job because of that fact is like arguing that American education is failing because half of the students are below average. There certainly may be problems in American education, but any argument for or against that conclusion that is based on that sort of observation is vacuous.

Similarly, pointing out that alpha is approximately zero says nothing about whether markets are efficient or not. Market efficiency means something completely different. Efficient or not efficient, alpha has nothing to say about it.

Forecasting Alpha

Forecasting what alpha we can expect from an investment going forward introduces further complications. The obvious but naïve choice of using the “best fit” estimates as a forecast is not a good idea. Again, there are technical issues why this is so, but we’ll rely on a simple intuitive argument despite the fact that it may make a few of my mathematical colleagues cringe.

Consider the observed alpha of an investment. We can think of this observation as being roughly composed of two parts: skill (model) and luck (noise). Even under the strong assumption that our model is in some fashion complete, the forecast return of the investment should only include the skill component because the future expected value of the luck component is zero.

The luck, although unobserved, has an expected value of zero, so it may seem that the best forecast is still just the model fit. However, if we examine the cross section of alphas across investments, then the further from the average an estimate is, the greater the likely impact of luck on that realization.

A reasonable approach, then, might be to produce a forecast \alpha_{i}^{\prime} that is shrunk back towards the mean. For example, we could take a convex combination of the alpha estimated by the model fit, \hat{\alpha}_{i}, and the mean alpha, \bar{\alpha}:

\displaystyle{\alpha_{i}^{\prime}=\lambda \hat{\alpha}_{i}+(1-\lambda)\bar{\alpha}}

\displaystyle{0\leq \lambda \leq 1}

The average alpha is near zero. This means that the best forecast is one in which we have shrunk the magnitude of the estimated alpha. Given the other complications we discussed, determining the optimal shrinkage is just one more difficulty we must face.
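As a toy illustration of the shrinkage, using a made-up cross-section of fitted alphas and an arbitrarily chosen \lambda:

import numpy as np

alpha_hat = np.array([0.004, -0.002, 0.010, 0.001, -0.006])   # fitted alphas (illustrative)
lam = 0.3                                                      # shrinkage weight (assumed)
alpha_bar = alpha_hat.mean()                                   # cross-sectional mean, near zero
alpha_forecast = lam * alpha_hat + (1 - lam) * alpha_bar       # shrunk toward the mean
print(np.round(alpha_forecast, 5))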

This says nothing about the fact that things change. Managers are replaced. The competitive landscape evolves. Thus, the relevance of the data we used in our estimation may be in question.

Non-Linear Models, Neural Networks, …

You might remark at this point that a linear model has limitations and a more sophisticated approach would provide more insight. No arguments here. More sophisticated models do nothing, however, to address the concerns discussed above and often exacerbate them. For our purposes here, they change nothing.

Is Estimating Alpha Worthwhile?

So where are we so far? Alpha is a relative, not absolute, measure. There is not “an” alpha; there are a multitude of alphas dependent upon your modeling choices. Alpha is not directly observable; it can only be estimated from the data we have available to us. Those estimates are not only subject to the usual statistical uncertainties but are also affected by the fact that the behavior of investments changes over time. Even if we get all of that right, the mechanics of actually performing the estimation present formidable challenges. Alpha has definite limitations in what it tells us, and many read far too much into it. Finally, even when we have done a solid job of estimating historic alpha, there are additional issues we must face in producing a forecast, which is ultimately what we need to do if we wish to use alpha to help guide our investment decisions.

So, is it worthwhile estimating alpha? Sure, if you can be confident that you have something that is remotely usable. If your approach to alpha is to pick a single benchmark, dump data into a spreadsheet, and hit them with an off-the-shelf OLS routine, then the answer still may be “yes”, but at best that’s a highly qualified “yes”. You must understand what you have done and be certain to take that measure with more than a grain of salt. You must also be certain that those who will use this estimate to make decisions understand its limitations.

In practice, however, the answer is probably “no”, except perhaps as a side-effect of an analysis performed with different objectives.

So if alpha is not the magic number that allows us to evaluate the performance of an investment advisor or manager, then what do we do? More on that in later posts.

Posted in The Practice of Quantitative Finance

The Relevance of History

28th June, 2013 · Robert J Frey · 6 Comments

History is philosophy teaching by examples.  ~Thucydides

Several years ago, a manager pitched his algorithmic bond trading fund to a networking group of which I was a member. As someone who had been professionally involved in creating similar systems, I naturally focused my questions on aspects of methodology that my experience had shown to be critical to the development of viable strategies.

When asked how much data he used to calibrate and backtest his approach, the answer was about five years of daily data. I expressed my concern that data at that frequency over such a limited period probably weren’t enough. He gave me a somewhat pitying look and stated that bond markets were in a constant state of evolution and that going back further would be pointless because the data would not have been relevant.

Needless to say he did not receive an investment from me. The experience did, however, cause me to reflect on how my natural love of history had benefited me over the years as both an investor and investment manager, not only in understanding its importance in building models that were robust in the face of changing market conditions, but also in how it influenced my ability to assess risk and to decide what sort of risks I was prepared to take on.

The focus of this blog is quantitative finance, and I will not, therefore, attempt to undertake deep and insightful historical analyses of economic or social trends. My goals are modest. Using readily available quantitative data covering extended time periods, I will undertake a simple and straightforward analysis that leads to powerful and actionable insights into the behavior of financial markets. These are insights that are largely inaccessible to the more myopic task of calibrating a conventional financial model.

There are many examples I could have used but decided to limit myself to a single case study: market volatility over the past three decades. What does a historical perspective of that topic show us?

Characterizing Market Volatility

The variance, \sigma_r^2, of return can be expressed by the following, where R is the random variable representing return:

\displaystyle{\sigma_r^2=\mathrm{E}[R^2]-\mathrm{E }[R]^2}

If we take the maximum likelihood estimate (MLE) for the sample variance, \hat{s}_r^2, and multiply both sides by the sample size, n, we have

\displaystyle{n \hat{s}_r^2=\sum_{t=1}^n r(t)^2 -\frac{\left(\sum_{t=1}^n r(t)\right)^2}{n}}

For most return time series the term to the left of the minus sign is much larger than the term on the right. This leads to the following approximation:

\displaystyle{n \hat{s}_r^2 \simeq \sum_{t=1}^n r(t)^2}

Thus, if we plot the cumulative squared returns (CSR), then the slope of the CSR line is approximately the variance of those returns. If the return switches from one stable volatility regime to another, then this shift shows up as an abrupt change in the slope of the CSR. This is a simple but powerful exploratory data analysis (EDA) technique. With an EDA we are attempting to gain insights that give us intuition about the data. Of course, we want to make sure that our insights make sense, but at this stage we are not unduly concerned with the formalities.
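A minimal sketch of the CSR and of the approximation above, using simulated monthly log returns with a deliberate volatility shift in place of the actual S&P 500 series:

import numpy as np

rng = np.random.default_rng(4)
# Simulated monthly log returns with a regime change halfway through.
r = np.concatenate([rng.normal(0.006, 0.03, 170), rng.normal(0.002, 0.06, 171)])

csr = np.cumsum(r ** 2)                       # cumulative squared returns
n = len(r)
print("MLE sample variance:", r.var())        # divides by n
print("CSR slope (total/n):", csr[-1] / n)    # nearly identical, as the approximation claims

# Plotting csr against time shows an abrupt change of slope at the regime shift.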

Volatility of the S&P 500

To keep things as accessible as possible for this post we will use the CSR, rather than more formal regime switching techniques, to examine the volatility of the S&P 500 monthly log returns from 1985 forward, 341 observations covering about three decades. The cumulative log total returns of the S&P 500 were:

[Figure: cumulative log total returns of the S&P 500, monthly, 1985 onward]

Next, we plot the CSR line to explore changes in variance over this timeframe:

[Figure: cumulative squared returns (CSR) of the S&P 500 monthly log returns]

Looking at the CSR, it does appear that a piecewise linear model yields a reasonably good description. There are extended periods where the slope is roughly constant, and these are separated by definite kinks showing rapid transitions from one regime to another. In two cases there are abrupt, near-vertical rises with the CSR immediately returning to more usual behavior. If we manually fit a series of lines (actually a simple routine using Mathematica’s Manipulate[ ] function), then we get the plot below. The heavy black line is a channel representing the original data, and the yellow line the regimes. The dates are replaced by their integer indices to facilitate the slope estimation.

[Figure: piecewise linear fit to the CSR; the heavy black line is the data channel, the yellow line the fitted regimes]

The period represented by the i^{th} segment, \tau_i, is a distinct regime. The slope of the i^{th} line segment gives us an estimate of its constant variance. If we multiply each variance by 12 and then take the square root, then we have an annualized estimate of the standard deviation, viz., \hat{s}_{\text{annual}}(\tau_i) = \sqrt{12\times\hat{s}_{\text{monthly}}(\tau_i)^2}. A plot of the resulting volatility regimes is:

[Figure: annualized volatility regimes of the S&P 500 implied by the piecewise CSR fit]
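The annualization step is simple arithmetic. For example, if a fitted segment has a CSR slope (monthly variance) of 0.0016, a number chosen purely for illustration, then

\displaystyle{\hat{s}_{\text{annual}} = \sqrt{12 \times 0.0016} \approx 0.139}

that is, an annualized volatility of roughly 14%.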

More conventional methods use linear filters such as moving averages or exponential smoothing to produce dynamic estimates of volatility. GARCH and allied techniques apply ARMA-like models to the variance. While useful in certain contexts, all such approaches are low-pass filters that tend to obscure fine (i.e., high-frequency) structure. Much of the apparent information they seem to reveal is an artifact of the filter, not the data. For example, a spike or an abrupt step up or down is first attenuated and then corrupts subsequent estimates until it passes out of the effective range of the filter.

Interpreting the Results

The CSR is a “quick and dirty” approach that gives a quite different view. In the above graph we see spikes occurring, not surprisingly, in October 1987 and September 1998. These events are too short in duration to give anything that is statistically significant in the normal sense, but clearly they highlight immense market moves that were far outside what might have been expected from any “normal” level of volatility.

Aside from the spikes, there were mountains, significant shifts upward that lasted a few to several months. There were also extended plains, periods lasting many years, where volatility was stable. This is a dramatically different view of volatility than we would get from a 12-month rolling standard deviation, GARCH model, or any of the other techniques that many analysts view as de rigueur.

Most importantly, rather than being obsessed with attempting to predict volatility in the short run, we have stepped back and thought about how to gain a broad historical perspective of the various behaviors of the market over an extended period of time.

Behaviors we have called mountain and spike regimes tell us that volatility can rise and fall unexpectedly and even catastrophically over very short time frames. It is a gross understatement to say that maintaining hedges in the presence of such effects is problematic. It is also obvious that, regardless of any supposed level of significance achieved by conventional volatility models, those forecasts severely underestimate risk.

The presence of plains, periods of stability extending over many years, is by no means comforting. These plains regimes are most dangerous when their volatility is low. They lull us into a false sense of security.

In the financial markets many investment managers, traders, and analysts have spent their whole careers working in such a regime. Firms are only too happy to provide consumers and businesses with the credit and investment products they demand. Proprietary investments and trades once viewed as horribly risky seem to be wonderfully profitable opportunities with no statistically significant downside.

It all works fine as long as nothing goes wrong. Eventually, something does. The market enters a spike or mountain regime and many of the carefully validated, statistically significant models fail at once.

The Recent Low Volatility Plain

Consider recent events. From 2003 to late 2008, nearly six years, the S&P 500 was in an extremely low volatility plain regime. We had entered a “new normal”, an era of permanent prosperity. Recessions were a thing of the past as the business cycle had been forever tamed. Economic disruptions were either things of the past or could be easily managed. Buoyed by several years of successful predictions, we took the evidence to show that our models were without material flaws; therefore, risks were well understood and tightly controlled.

Government and the Federal Reserve had led us to an economic Shangri-La. Corporate profits were secure and growing; consumer confidence was high. Fed by easy credit and mortgage-backed securities, home ownership became accessible to a wider range of families, a social good giving more people a stake in the inexorable growth of the economy. These were safe investments on both the buy-side and the sell-side, for everyone knew that the last thing people would default on were their homes. Wall Street and Main Street prospered.

Yeah, that’s laying it on a bit thick. The whole thing was sad and ridiculous. This nonsense was sold by politicians, commentators, investment professionals, economists, and other pundits. A “new normal”? How could anyone who looked at even a little history have believed that?

Conclusion

When we step back we gain perspective. Examining market volatility and other effects over the past thirty years gives us insights different from those of models that attempt to calibrate or forecast volatility in the short term, although even there the understanding we gain from long-term history helps improve those efforts.

Think of it this way. When we build a house, we do not base the design on tomorrow’s weather forecast. We are more interested in the climate. We take into account rare extreme events, such as a hurricane (spikes), by putting in storm cellars, stocking supplies, or working out evacuation routes. We worry about more conventional storms that occur frequently during the year (mountains) and make sure the structure can withstand them without difficulty. And we consider the fact that the house has to be practical and comfortable in different seasons (plains) by being both warm in winter and cool in summer.

Weather prediction is immensely important; however, it is not enough. Without the historical perspective of climate studied over decades we would not be able to build a very good house. Without a historical perspective of finance and economics we cannot build effective portfolios or understand the risks of our trades.

If this post has gotten its message across, then your reaction should be that I haven’t done a very good job. Thirty years of data are insufficient. What do we find if we look at even longer time scales? Why look only at monthly data? What happens with high-frequency observations? The S&P 500 is an important index, but it covers only one type of market in one country. Volatility is an important measure of uncertainty, but what other measures can be examined?

Finally, the CSR was used to keep things simple. I wanted something that would be easily accessible, and all you need for the CSR is a ruler and a pencil. There are a number of powerful regime-switching models that are worth your while to study, but they are often complex and are not normally covered outside of graduate programs in statistics. You may be interested in a tutorial on hidden Markov models that I developed for students in the Quantitative Finance Program in the Department of Applied Mathematics and Statistics at Stony Brook University: http://bit.ly/1aSVo4e (Mathematica notebook) or http://bit.ly/1ctr665 (PDF).

I can also recommend the work of Hyman Minsky, a post-Keynesian economist, who asserted that long periods of stable economic growth lead to a false sense of security and increasingly risky behavior until the economic system is so fragile that a small perturbation crashes the system. He formulated his theory mainly in terms of the characteristics of non-governmental debt: http://bit.ly/19JmWY7.

Posted in General Thoughts, Uncategorized

All Correlations Tend to One…

4th June, 2013 · Robert J Frey · 18 Comments

Stress Markets and Correlation

We have all heard statements to the effect that “During market corrections all correlations tend to one.” Often it’s either accompanied by an explanation unsupported by any real critical thinking or presented as some deep, impenetrable mystery of finance. The objective of this post is to illustrate a single, simple concept that is more than adequate to explain this phenomenon. The approach used is that of a Gedanken-Experiment, which is not actually an experiment but the application of a model stripped down to its essential elements to explore a situation or idea. What follows assumes that the reader has a basic knowledge of probability and statistics and high-school-level algebra skills.

The Capital Asset Pricing Model

We will use the equity market and the Capital Asset Pricing Model (CAPM) as the basis for our Gedanken-Experiment. Despite its limitations and the fact that few practitioners believe it is anything approaching a complete description of a stock’s behavior, it will be enough in that it captures a reasonable amount of reality: individual stocks clearly move in concert with the market, and some stocks do so more or less than others. The CAPM asserts that at time t the return r_{i}(t) of stock i is expressed in terms of the risk-free rate r_{f}(t), its exposure to the market (or beta) \beta_{i}, the market return m(t), and a zero-mean noise term \epsilon_{i}(t) that is uncorrelated with the market:

\displaystyle{r_{i}(t)-r_{f}(t)=\beta_{i} (m(t)-r_{f}(t))+\epsilon_{i}(t)}

Let \sigma_m^2 be the variance of the market return and \eta_i^2 the variance of the noise term; then the variance \sigma_i^2 of the stock is:

\displaystyle{\sigma_i^2=\beta_i^2 \sigma_m^2+\eta_i^2}

The market variance is also called the systematic variance and the noise variance is also called the unsystematic or idiosyncratic variance.

The noise terms of different stocks are uncorrelated; hence, the sole source of the covariance \sigma_{i,j} between stocks i and j is their respective betas. The definition of covariance and some simple algebra suffice to show:

\displaystyle{\sigma_{i,j}=\beta_i \beta_j \sigma_m^2}

The correlation \rho_{i,j} (whose square is the familiar r-squared) is then, by definition:

\displaystyle{\rho_{i,j}=\frac{\sigma_{i,j}}{\sigma_i \sigma_j}=\frac{\beta_i \beta_j \sigma_m^2}{\sqrt{\left(\beta_i^2 \sigma_m^2+\eta_i^2\right)\left(\beta_j^2 \sigma_m^2+\eta_j^2\right)}}}

Heteroskedasticity

When we look at markets, variances are not constant but wax and wane over time. The property of having such time-varying variance is termed heteroskedasticity. In practice, both the systematic and unsystematic variances are heteroskedastic. However, if we view the variance as a measure of new information, then during periods of market stress it is obviously the market, or systematic, variance that will dominate.

The Experiment

Let us consider a simple case in which we have two stocks that, during “typical” market environments, have equal unsystematic variances, which are in turn equal to the market variance. Call that common value \sigma^2. To further simplify the exposition, we will assume both stocks have betas of one. Finally, we will introduce a stress parameter k, a multiplier on the typical market volatility (the standard deviation, i.e., the square root of the variance), so that the stressed market volatility is k\sigma. Substituting these values into the above expression and simplifying yields the correlation \rho of our stress model:

\displaystyle{\rho=\frac{k^2 \sigma^2}{k^2 \sigma^2+\sigma^2}=\frac{k^2}{k^2+1}}

During normal markets k = 1 and the correlation is 0.5. To examine what happens during stress periods we plot the correlation ρ as a function of market stress k:

[Figure: correlation \rho as a function of the market stress multiplier k]

As the graph above shows, when the market volatility increases by a factor of 2, the correlation increases from 0.5 to 0.8; when it increases by a factor of 3, the correlation is 0.9. If one wanted to estimate a value for our stress parameter k during serious market corrections, then a value of 3 is probably conservative. Clearly, the increase in market volatility, independent of other effects, is sufficient to explain the dramatic increase in correlation across the market.
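The curve is easy to reproduce. A short sketch that evaluates the stress-model correlation for a few values of k:

def stress_correlation(k):
    """Correlation between two unit-beta stocks when market volatility is scaled by k."""
    return k ** 2 / (k ** 2 + 1.0)

for k in (1, 2, 3):
    print(k, stress_correlation(k))   # 0.5, 0.8, 0.9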

Other Effects

Are there other effects that we have not considered here? Almost certainly. For example, there is some evidence for “stress” betas. In other words, not only is there an increase in market volatility but the betas which link stocks to the market also appear to increase during stress periods.

As we mentioned, there may be some concomitant increase in non-systematic volatility. However, if one thinks it through, then any effect shared across stocks generally is a systematic effect, not an idiosyncratic one. It is likely that it is the increase in systematic volatility that dominates.

Conclusion

In our simple model, conservative increases in market volatility during stress periods cause dramatic increases in market correlation. When markets correct, systematic information is absorbed in common across investments, and market heteroskedasticity is sufficient to explain the observed increase in general correlation.

Posted in Die Gedanken-Experimente

What is Keplerian Finance?

12th May, 2013 · Robert J Frey · 17 Comments

During a discussion with a well-known colleague about the state of the art in finance and economics, he observed that we were in a state similar to that of physics during the lifetime of Kepler. Johannes Kepler (1571-1630) had realized that planetary orbits were ellipses and had succeeded in describing orbital dynamics by observing that planets sweep equal areas in equal times. Lacking any coherent theory, however, he hypothesized that planetary motion was caused by some motive power from the Sun, which also explained why planets more distant from it orbited more slowly. They were more distant; hence, the Sun exercised less influence.

Kepler also modeled the distances of the planets by nesting the five Platonic solids between successive spheres. With a suitable arrangement, the five Platonic solids accurately modeled the distances of the six known planets. This was a chance correspondence, but he was convinced that he had found a basic law of the universe.

[Figure: Kepler’s Platonic-solid model of the solar system]

Source: http://en.wikipedia.org/wiki/File:Kepler-solar-system-2.png

Kepler’s accomplishments extended to mathematics. His work on the use of infinitesimals presaged the infinitesimal calculus, developed independently by Isaac Newton (1642-1727) and Gottfried Wilhelm Leibniz (1646-1716). It was not until Newton’s theory of gravitation, together with the mathematics of calculus, that a fuller understanding of Kepler’s equal-areas-in-equal-times law of planetary motion could be realized.

Kepler, however, was intensely concerned with the theological consistency of his theories. His publications include a careful reconciliation of a heliocentric universe with Scripture. He was also a practitioner of astrology.

It is easy, from our perspective, to criticize these attitudes. Kepler, however, was a genius. He succeeded in describing and predicting planetary motion more accurately and more simply than anyone had previously. He believed in an ordered universe accessible to human understanding and was committed to the principle that theory must be validated by observation.

One may argue that this is always so. Newton was superseded by Albert Einstein (1879-1955), and Einstein’s theory of relativity has yet to be reconciled with quantum mechanics. But the context here is the effectiveness of theory against the set of tasks that we have set for it. Newtonian mechanics is quite good enough for NASA to send planetary probes throughout the solar system. A comparable level of success has yet to be achieved in economics and finance.

What does all of this have to do with economics and finance? This blog is written from the perspective that these disciplines are in a Keplerian (or, if you prefer, a pre-Newtonian) state. We are committed to the scientific method. We have developed some crude theories that appear reasonable and that have been somewhat validated by actual observation or experimentation.

Unfortunately, we have difficulty knowing exactly what and how to abstract from the complexity of reality to build models which offer us viable predictions. We are constantly blindsided by the deficiencies in our understanding and methodology. These failings are not the Eureka! moments of discovery when an unexpected outcome leads to a new insight, but the fumbling missteps that come from having a not quite clear vision of what’s going on.

We lack an equivalent theory of gravitation to enlighten us. Econo-physics, despite some interesting correspondences, is often just a reformatting of existing results. The foundational homo economicus is not quite right, but not quite wrong either. Behavioral economics and finance seem to describe individual decision making with greater fidelity but materially better predictions of economic and financial events have not been forthcoming. And the mathematical tools we possess are not up to the task. Perhaps developments such as agent-based simulations and cellular automata offer hope, but they are still infants awaiting further development.

This, then, is the blog of a skeptic. Not the arrogant skeptic confident in his criticism of astrology or numerology. It is the blog of the humble skeptic who is all too aware of his dim-witted wanderings at the edges of enlightenment.

References

  • Bate, Roger R., Fundamentals of Astrodynamics (Dover Books on Aeronautical Engineering), Dover Publications, 1971.
  • http://en.wikipedia.org/wiki/Behavioral_economics, Wikipedia, “Behavioral economics”. Retrieved 2013-05-12.
  • http://en.wikipedia.org/wiki/Einstein, Wikipedia, “Albert Einstein”. Retrieved 2013-05-12.
  • http://en.wikipedia.org/wiki/Galileo_Galilei, Wikipedia, “Galileo Galilei”. Retrieved 2013-05-12.
  • http://en.wikipedia.org/wiki/Homo_economicus, Wikipedia, “Homo economicus”. Retrieved 2013-05-12.
  • http://en.wikipedia.org/wiki/Infinitesimal, Wikipedia, “Infinitesimal”. Retrieved 2013-05-12.
  • http://en.wikipedia.org/wiki/Isaac_Newton, Wikipedia, “Isaac Newton”. Retrieved 2013-05-12.
  • http://en.wikipedia.org/wiki/Johannes_Kepler, Wikipedia, “Johannes Kepler”. Retrieved 2013-05-12.
Posted in Description and Policies
