I wrote previously that the Black-Scholes options pricing model replaces the equity drift μ with r - ½σ², where r is the riskless interest rate and σ is the standard deviation of the stochastic element of the equity's price movement. This substitution eliminates the equity drift coefficient from the model and obviates the need to decide what the drift coefficient actually is.
The reasons given by Black and Scholes in the paper that introduced their model (which was also one of the very first uses of risk-neutral pricing generally) were (1) that, through continuous rebalancing, the second-order (dz²) stochastic terms in the return on a portfolio become riskless (this argument implicitly relies upon Itō's Lemma, although the paper did not explicitly refer to it), and (2) that even if those terms are not riskless due to discontinuity, they should still be subject to the riskless interest rate because they are uncorrelated with the market rate of return. Thus, they argued, second-order terms must be priced at the riskless interest rate directly, and any risk premium must apply only to first-order stochastic terms.
This can be achieved only by adapting the drift coefficient -- and not the stochastic coefficient -- in constructing a risk-neutral probability distribution: that is, by replacing the equity drift μ with r - ½σ².
And this is all roughly consistent with traditional utility theory.
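The absence of μ is easy to see in the closed-form call price itself. The following Python sketch (my own illustration, with arbitrary numbers, not part of the original derivation) prices a European call using only S, K, r, σ, and t:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, r, sigma, t):
    """Black-Scholes price of a European call. Note that the equity
    drift mu appears nowhere: only the riskless rate r and the
    volatility sigma enter the formula."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * t) * N(d2)

# An at-the-money call, one year out, 5% riskless rate, 20% volatility.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, t=1.0)
```

Nothing about the equity's expected return enters the calculation; two investors who disagree about μ but agree about σ will agree on the option price.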
Wednesday, December 12, 2012
Monday, December 10, 2012
Delta Hedge
A delta hedge is a portfolio made from two or more distinct securities that are related to the same underlying security. The delta hedge is designed to cancel out, at least instantaneously, the first-order relationship (delta) of each constituent security to the underlying security.
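As a sketch of the idea (illustrative parameters of my own choosing), the Python snippet below builds the classic delta hedge -- long one European call, short delta shares of the underlying -- and shows that a small move in the stock barely changes the hedged portfolio, while the unhedged call moves roughly in proportion to delta:

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def call_price(S, K, r, sigma, t):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return S * N(d1) - K * exp(-r * t) * N(d2)

def call_delta(S, K, r, sigma, t):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    return N(d1)

S, K, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
delta = call_delta(S, K, r, sigma, t)

def hedge_value(S_new):
    # Portfolio: long one call, short delta shares of the stock.
    return call_price(S_new, K, r, sigma, t) - delta * S_new

# Effect of a $1 move in the stock, with and without the hedge.
unhedged_change = call_price(S + 1, K, r, sigma, t) - call_price(S, K, r, sigma, t)
hedged_change = hedge_value(S + 1) - hedge_value(S)
```

The residual change in the hedged portfolio is a second-order (gamma) effect, which is exactly the piece of risk the Black-Scholes argument addresses.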
Itō's Lemma
Itō's Lemma is a theorem of stochastic calculus that holds that, within a closed integral, dz² can be replaced by dt, where dz is a stochastic variable with order of magnitude equal to the square root of dt. The Lemma is sometimes erroneously stated as "dz² equals dt," which is not generally true. Integration, with continuity, invokes the Law of Large Numbers. In the absence of continuity, the variance of the accumulated dz² terms is proportional to Δt, the length of the measurement intervals taken over the range.
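A quick Monte Carlo sketch (my own, with an arbitrary seed and a horizon of T = 1) shows what the Lemma is getting at: as the partition of [0, T] is refined, the sum of the squared increments dz² concentrates at T, the integral of dt over the range:

```python
import random

random.seed(0)
T = 1.0

def sum_dz_squared(n_steps):
    """Simulate Brownian increments on [0, T] with n_steps intervals
    and return the sum of the squared increments dz^2."""
    dt = T / n_steps
    return sum(random.gauss(0.0, dt ** 0.5) ** 2 for _ in range(n_steps))

# With a coarse partition the sum is noisy; as the partition is
# refined, sum(dz^2) concentrates at T.
coarse = sum_dz_squared(10)
fine = sum_dz_squared(100_000)
```

With only ten intervals the sum can land well away from T; with a hundred thousand it is pinned close to T, which is the Law of Large Numbers at work.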
Itō's Lemma is significant in finance because it provides the basis according to which a delta hedge is assumed to be riskless, an assumption that is essential to the Black-Scholes Equation.
Wednesday, December 5, 2012
The Lognormal Distribution and Risk-Neutral Pricing
I wrote previously that the ergodic property of the normal distribution is so useful that it often makes sense to assume a normal distribution even in the face of evidence to the contrary.
Before going to that extreme, however, it is sometimes possible to arrive at a normal distribution by looking at a function of an original variable rather than the variable itself. One example of this is the lognormal distribution, which is a distribution whose log is a normal distribution. In finance, the future price of a stock is often considered to have a lognormal distribution, which gives the rate of return on the stock a normal distribution.
In considering a lognormal distribution, we generally refer to aspects of its log. In particular, we usually define a particular lognormal distribution based upon the mean, μ, and variance, σ², of the log.
There are any number of things to be known about lognormal distributions, but the most important fact about them for my purposes is that the expected value of a lognormal distribution is exp(μ + ½σ²).
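This fact is easy to verify by simulation. The Python sketch below (arbitrary parameters of my own choosing) draws lognormal samples by exponentiating normal draws and compares the sample mean to exp(μ + ½σ²):

```python
import random
from math import exp

random.seed(42)
mu, sigma = 0.1, 0.3   # mean and standard deviation of the log

# A lognormal draw is the exponential of a normal draw.
samples = [exp(random.gauss(mu, sigma)) for _ in range(200_000)]

sample_mean = sum(samples) / len(samples)
theoretical_mean = exp(mu + 0.5 * sigma**2)
```

Note that the ½σ² term means the expected value sits above exp(μ), the exponential of the log-mean: exponentiation stretches the upper tail more than the lower.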
Getting back to finance, the future price of a stock at time t can be considered to have a lognormal distribution with log-mean lnS + μt and log-variance σ²t, where S is the price of the stock at time 0. (Here I have effectively used μ and σ² as the mean and variance of the instantaneous rate of return of the stock, so that at any given time t in the future the rate of return on the stock will have mean μt and variance σ²t.) This gives the expected value of the stock at time t as exp(lnS + μt + ½σ²t).
Under a risk-neutral pricing regime, however, the expected future price of the stock should be exp(lnS + rt), where r is the riskless interest rate. So if we want to preserve the lognormal distribution of the stock price, we somehow have to adapt our real world expectations into risk-neutral probabilities so that exp(lnS + μt + ½σ²t) equals exp(lnS + rt), or more simply so that μ + ½σ² equals r.
One method to accomplish this is simply to replace μ with r - ½σ². And this is precisely what the Black-Scholes options pricing model does, about which more later.
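A simulation sketch (again with arbitrary parameters of my own choosing) confirms that the substitution works: simulating the terminal stock price with drift r - ½σ² produces an expected value of S·exp(rt), exactly what risk-neutral pricing requires:

```python
import random
from math import exp, sqrt

random.seed(7)
S, r, sigma, t = 100.0, 0.05, 0.2, 1.0

# Risk-neutral drift: replace mu with r - 0.5 * sigma^2.
drift = r - 0.5 * sigma**2

# Terminal price S_t = S * exp(drift*t + sigma*sqrt(t)*Z), Z standard normal.
paths = [S * exp(drift * t + sigma * sqrt(t) * random.gauss(0.0, 1.0))
         for _ in range(200_000)]

mean_price = sum(paths) / len(paths)
target = S * exp(r * t)   # the riskless-growth benchmark, about 105.13
```

The ½σ² correction is doing real work here: with drift r alone, the convexity of the exponential would push the expected price above the riskless benchmark.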
Monday, October 29, 2012
The Normal Distribution and Ergodicity
The normal distribution, which is represented by the familiar bell curve, is probably the most important probability distribution in statistics. The reason for this is the Central Limit Theorem, which holds that the sum of a large number of identically distributed but independent random outcomes will tend toward having a normal distribution as the number of included results increases, regardless of their original probability distribution.
For the purposes of finance, an important aspect of the Central Limit Theorem is that normal distributions will be ergodic, which is to say that they will exhibit the fractal quality of having indistinguishable characteristics regardless of at what scale they are viewed: a variable that moves with instantaneous, normally distributed perturbations will create a time path that looks the same whether you are looking at movements over one minute, one day, one month, or several years. The reason for this is that sums of normally distributed variables are themselves normally distributed; whereas other distributions merely tend toward normality when aggregated, the normal distribution starts out that way.
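This scaling behavior can be checked directly. The sketch below (my own, with an arbitrary 1% "daily" standard deviation) aggregates twenty daily normal increments into one "monthly" increment and confirms that the variance simply scales with the horizon, so the monthly increments are normal with twenty times the daily variance:

```python
import random
import statistics

random.seed(1)
daily_sd = 0.01
daily_var = daily_sd ** 2

# Each "monthly" increment is the sum of 20 "daily" normal increments.
monthly = [sum(random.gauss(0.0, daily_sd) for _ in range(20))
           for _ in range(50_000)]

# Variance scales linearly with the horizon: about 20 * daily_var.
monthly_var = statistics.pvariance(monthly)
```

The monthly series is statistically just a rescaled copy of the daily one, which is the ergodic (self-similar) property in miniature.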
To a very broad degree in finance, measurement time sequences are arbitrary. There is no reason to think that the "proper" span of time over which to consider results is a minute, a day, a month, or a year. There are some cosmological conditions -- such as the passing of night and day and the changing of the seasons -- and some social conditions -- such as regularly scheduled weekends and holidays during which financial and economic activity is restricted -- that will create regular cycles that might be taken into account for the purpose of measuring financial results, but beyond this there is no reason to think that any span of measurement is better than any other.
The usefulness of the normal distribution's ergodic property is so great that, in my estimation, it is often worthwhile to use the distribution even when it is known not to match experience for the purpose to which it is put.
Sunday, October 28, 2012
Probability Distribution Functions and Random Variables
A probability distribution function (p.d.f.) is the collected measure of the likelihoods of each possible outcome of a random event. So a p.d.f. for the future price of a particular stock would give a likelihood of each possible future stock price, from zero on up.
Probability distribution functions have two qualities:
- they must be non-negative; and
- the sum of all probabilities under a p.d.f. must be one.
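As a minimal illustration (my own, using a fair die as the random event), a discrete p.d.f. can be written down and both qualities checked directly:

```python
# A discrete p.d.f. for a fair die: each outcome gets a likelihood.
pdf = {face: 1 / 6 for face in range(1, 7)}

# Quality 1: every likelihood is non-negative.
all_non_negative = all(p >= 0 for p in pdf.values())

# Quality 2: the likelihoods sum to one.
total = sum(pdf.values())
```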
If a random variable has a known p.d.f., two important values can be determined for it: its expected value, or mean, which is sometimes designated with the Greek letter μ; and its variance, which is designated with σ². Variance is the expected value of the square of the difference between the random variable and its expected value.
There are a lot of things of interest about mean and variance, but for my purpose, only a couple are important.
First, the square root of variance, σ, or standard deviation, can be used as a measure of confidence intervals for a random variable: for a normally distributed variable, for example, the span within about 1.96 standard deviations of the mean of the variable forms a 95% confidence interval.
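The 1.96 figure can be recovered from the standard normal c.d.f. -- here with Python's statistics.NormalDist (the example itself is mine, not from the original post):

```python
from statistics import NormalDist

Z = NormalDist()   # standard normal: mean 0, standard deviation 1

# Probability mass within 1.96 standard deviations of the mean.
coverage = Z.cdf(1.96) - Z.cdf(-1.96)   # about 0.95

# The exact multiplier for a 95% interval, via the inverse c.d.f.:
# 2.5% of the mass lies in each tail, so we ask for the 97.5% point.
z_95 = Z.inv_cdf(0.975)   # about 1.96
```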
Second, both mean and variance are additive, which is to say that if X and Y are random variables, then generally the mean of X+Y is the mean of X plus the mean of Y, and the variance of X+Y is the variance of X plus the variance of Y. (The latter isn't quite true, because if X and Y are not independent -- the outcome of X affects the p.d.f. of Y -- then the variance of X+Y is the variance of X plus the variance of Y plus twice the covariance of X and Y. The covariance of two random variables is the expected value of the product of the differences between the variables and their respective means. I'll almost always be assuming independence between variables, so covariance won't matter to me much.)
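The additivity of variance under independence is easy to check by simulation (a sketch of my own, with arbitrary variances of 1 and 4):

```python
import random
import statistics

random.seed(3)
n = 100_000

xs = [random.gauss(0.0, 1.0) for _ in range(n)]   # variance 1
ys = [random.gauss(0.0, 2.0) for _ in range(n)]   # variance 4, independent

# With independence the variances simply add: 1 + 4 = 5.
sums = [x + y for x, y in zip(xs, ys)]
var_sum = statistics.pvariance(sums)
```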
In combination, these two factors create an important effect: the expected value for a sum of identically distributed independent variables grows with the number of variables included in the sum, while confidence intervals for this sum grow with the square root of the number of variables included in the sum. This gives the Law of Large Numbers: the average of a number of outcomes of independent variables from an identical distribution will approach their expected value.
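The Law of Large Numbers can be watched in action with a fair die (another illustrative sketch of my own): the average of n rolls approaches the expected value 3.5, and the approach tightens as n grows, because the sum's expected value grows with n while its confidence interval grows only with the square root of n:

```python
import random

random.seed(5)

def running_average(n):
    """Average of n fair-die rolls; expected value is 3.5."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# The large-sample average hugs 3.5 far more tightly than the small one.
small = running_average(100)
large = running_average(1_000_000)
```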