From couples to copulas, David Li was an actuary!

You will recall David Li, the quant who devised the formula that "revolutionized" derivatives. I believe this post from Robert Oak was the last time he was discussed at EP.

Now along comes this article, recently published in the FT. While it doesn't shed any new light on the formula, per se, it is a very interesting biographical piece about young Mr. Li. It also retraces key developments in the creation and the ascendancy of quantitative analysis on Wall Street.

In the autumn of 1987, the man who would become the world’s most influential actuary landed in Canada on a flight from China. Neither Xiang Lin Li nor the handful of fellow junior academics with whom he was travelling – all from the University of Nankai – had ever been abroad before, yet they had come at the behest of the Chinese and Canadian governments to do something most unusual: study capitalism. The small band of mathematicians and statisticians would be taking business degrees at Quebec’s Laval University.

[snip]

After graduation from Laval, he enrolled at a new university, Waterloo, near Toronto. He would now be studying actuarial science. And this wasn’t the only change: the move from genteel, francophone Montreal to the more worldly and business-oriented Toronto was profound – and deliberate. According to Jie Dai, a fellow immigrant from China and a classmate at Laval, “I clearly remember [Li] mention that if you are an actuarial guy, you can earn a lot of money.”

But, of course, the really big money was on Wall Street and that is where David Li ended up in 1998.

Li had come to New York to work for a consultancy called the RiskMetrics Group, which had been spun out of JP Morgan, but he was still thinking about life, death and love. In 2000, he published a paper in the prestigious Journal of Fixed Income that gained some serious attention. In it, Li performed a most elegant trick. Borrowing from his work in actuarial science and insurance and his knowledge of the broken-heart syndrome, he attempted to solve one of Wall Street quants’ most intractable problems: default correlation.

[snip]

Li realised that his insight was groundbreaking. Speaking to The Wall Street Journal seven years later, he said: “Suddenly I thought that the problem I was trying to solve [as an actuary] was exactly the problem these guys were trying to solve. Default [on a loan] is like the death of a company.” And if he could apply the broken hearts maths to broken companies, he’d have a way of mathematically modelling the effect that one company’s default would have on the chance of default for others.

It's a really good piece of reading for a Sunday afternoon in the springtime. When I read this kind of story, I think of the movie Sliding Doors. What would the world be like today if David Li had stayed in his small Chinese village?


just astounding

Unless I live in mathematical fantasy land, I simply cannot believe his original paper was so endorsed. This makes me believe that in financial mathematics, quantitative analysis, actuarial science, and the peer-reviewed journals, colleagues are simply not doing their own homework.

Isn't this incredible: the guy who came up with the "simplification" that reduces an entire group of dependencies to a scalar (that's like saying, ok dudes, multiply by 2 and that's the answer to, say, calculating the number of beams one needs in a skyscraper) was treated as having an infallible model.

It seriously is a mess. To try to reduce such massive dependencies and interactions as global markets and individual elements into basically a glorified scalar...
well, I have a hard time believing anyone with a PhD in these mathematical areas would buy that...

So frankly, maybe the six-figure money for geeks made them all say "who cares" and create some fictional mathematics simply to get filthy rich....I honestly do not know, but from the scientific community's side, it is kind of amazing to me that they would take that paper and create an entire derivatives industry out of it, one that basically collapsed in just six years.

I just hope this sheds a little light on Academia. It is so often considered sacrosanct, and it really should not be.

AFAIU, the simplification rests upon the efficient ...

... market hypothesis. We have a market, we have prices from a market, we can impute the value of risk in the view of market participants ... and then, "because markets are efficient", that is the actual value of the risk.

The historical fact that people systematically under-estimate risks in the midst of a bubble is merely evidence that the efficient market hypothesis does not hold in reality. It does nothing to undermine the convenience of the efficient market hypothesis when engaged in the mathematical modeling of "the economy".

Indeed, the efficient market hypothesis is a good marker for the key difference between an academic field like mainstream Economics and Science ...

... since mainstream Economics is not trying to provide a cause and effect explanation of what is happening, all that is required is a convention among potential referees that "we all have to accept unanchored assumption 'X', because without it, none of us would get anything published".

this isn't quite the same thing

this is an area of mathematics involving advanced probability models with a Gaussian "distribution" model (copula), which can be "collapsed" to a scalar (constant) under some assumptions.

This is the issue: they based trillions of dollars (estimates vary widely) on these mathematical models...

which, by the mathematics itself, are simply not valid due to the real interdependencies and multi-variate correlations of assets over global markets.

Taleb, in his book The Black Swan, gives a good explanation from a layperson's view, i.e. in conceptual terms, so one doesn't have to know advanced statistics, probability theory, etc. to see what the problems are.

This is one of those 'necessary/sufficient' things.

The efficient market hypothesis is not sufficient for the assumptions required to collapse a copula to a scalar ... but in this context, it is necessary. Fail to assume the efficient market hypothesis, and there is no way to meet the assumptions.

That does not, of course, imply that the efficient market hypothesis on its own is enough to get you there ... there are additional heroic leaps of faith required.

that is not the assumption on this copula

This paper is structured finance and has nothing to do with the validity of the "behavioral" (math illiterate) economists' "argument".

The assumption is a correlation of default rates. The copula used by Li is a Gaussian distribution function. By using copula models and claiming they were valid (when they weren't), it was this modeling structure that allowed the "assessment" of mortgage default rates to be reduced. This is the fundamental issue: residential mortgage default rates do not conform to Gaussian distributions, or even uniform distributions. See Copula, and please click the link.

I think once one sees this is actually a mathematical formula, a model, it becomes clear Li's model is in the realm of advanced mathematics and has little to do with "data points" (which are assuredly misunderstood in the first place; it's really regions of convergence) or "equilibrium". This is advanced statistics and mathematical modeling, and it has zero to do with some absurd argument about the realities of equilibrium versus the mathematical models used by economists.

It is even worse that Li chose to use CDS data as the gamma in his formula, but the copula itself, assuming a simplistic uniform joint probability, is not valid.

But this is not a "markets know all that's needed to be known" issue...

The formulation of a mathematical model, especially in this case, does not even have any "data" in it initially.

the issue is the copula itself, based on a Gaussian distribution model in addition to a simple joint cross-correlation probability model.

In other words, the model itself is simply not valid. One cannot break down to a scalar through this method the probability of Bob defaulting given that Betty Joe defaulted. It simply is not a Gaussian distribution.

I wrote about this earlier, but this is frankly advanced probability distribution modeling, used in structured finance as well as actuarial science, and in this case...well, his assumptions are obviously not valid.

I'm going to stick to the math illiterate argument.

From that link:

Gaussian copula
One example of a copula often used for modelling in finance is the Gaussian copula, which is constructed from the bivariate normal distribution via Sklar's theorem.
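To make that concrete, here is a minimal sketch in Python of the Sklar's-theorem construction the quote describes. This is my own toy illustration, not code from Li's paper or from the linked article, and the dependence parameter rho is made up for the demo:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho = 0.7                                   # made-up dependence parameter
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = norm.cdf(z)                             # push each coordinate through the normal CDF
# Each column of u is Uniform(0,1), but the columns are still dependent:
# that dependence structure IS the Gaussian copula. Any marginals (say,
# exponential default times) can then be attached via their inverse CDFs.
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])  # dependence survives the transform

The point of the construction is that the marginals and the dependence are fully separated, and the dependence part is summarized by the correlation of the underlying normals.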

If markets are not efficient, the classical central limit theorem does not yield a Gaussian distribution, because the events cannot be assumed to be a series of independent and identically distributed events. And it is certainly only the classical CLT that would give a basis for using a Gaussian ... there's never even a weak test for normality for the individual distributions.
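A rough toy simulation of that point, with made-up numbers of my own (1,000 loans, each with a 5% default probability): independent defaults give the thin-tailed binomial/Gaussian picture, while a single shared factor does not:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_loans, p, rho, trials = 1000, 0.05, 0.3, 5_000
thresh = norm.ppf(p)          # a loan defaults when its latent variable < thresh

# Independent loans: the classical CLT picture, thin Gaussian-style tails.
indep = rng.binomial(n_loans, p, size=trials)

# One shared factor ("the economy"): the loans are no longer independent.
m = rng.standard_normal((trials, 1))
eps = rng.standard_normal((trials, n_loans))
latent = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * eps
corr = (latent < thresh).sum(axis=1)

for name, x in (("independent", indep), ("one factor ", corr)):
    print(name, "P(more than 100 of 1000 default) =", (x > 100).mean())
# The shared-factor pool puts real probability mass far out in the tail
# that the independent (CLT/Gaussian) picture calls essentially impossible.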

"I think once one sees this is actually a mathematical formula, a model, it becomes clear Li's model is in the realm of advanced mathematics and has little to do with "data points" (which are assuredly misunderstood in the first place; it's really regions of convergence) or "equilibrium". This is advanced statistics and mathematical modeling, and it has zero to do with some absurd argument about the realities of equilibrium versus the mathematical models used by economists."

In what sense is what Li did different in any respect from what mainstream economists normally do?

Criticizing the quality of the mathematical analysis in a mainstream economic model is beside the point, because the model is not anchored in reality ... whether the math is pedestrian or strong, its far more common flaw is that it's beside the point. Mainstream economists do not argue about the realities of equilibrium versus the models that they use; they simply make the conventional assumptions required to proceed with their modeling ... as Li did ... and then focus on the math. The quality of the math is one of the main determinants of the quality of the journal that they get published in.

so where does the CLT EVER imply historical data?

Seriously.

I have no idea what you are talking about at this point.

The CLT is for large aggregate numbers, but it assumes random variables, i.e. independent probabilities, independent events.

This has nothing to do with "market equilibrium".

"market equilibrium" is simply, in mathematics terms...
well, solving a basic algebra problem...but in no way does it assume, imply or use random probability variables. Market events are well known to not be random.

i.e.

x1 + x2 + x3 ... + xn = y

And one can take this much further, of course: one can add more dimensions, one can also add projected probabilities. But as far as I am aware, no one in regular economics, even in advanced modeling, is using copulas, because one must find a method by which one can claim that the probability of event A, where A happens solo....has a known distribution (uniform, Gaussian, etc.).

They would use that possibly in future scenario projections modeling but in "regular" macro, it's more deterministic (dependent) modeling.

But in undergraduate economics, it's all algebra and calculus, and much of that is for concepts. Many of those concepts, say the law of supply and demand, are proven, not with Gaussian distribution models, but with statistical real-world information, i.e. real data.

An example would be Keynes.

Basic stimulus Keynes:

C + I + G + X - M = GDP
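(To plug made-up illustrative numbers into the identity: C = 70, I = 15, G = 20, X = 10, M = 15 gives GDP = 70 + 15 + 20 + 10 - 15 = 100.)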

As an example, I'll just take Borjas over there. He has pretty good math. Well, I know for a fact he goes through his data, his statistics, in great detail, and publishes every assumption when he doesn't have the raw data or if he extrapolates anything.

i.e. they are much more about statistics, data analysis.

Li, on the other hand, has a PhD in actuarial science. This is a branch of mathematics notorious for paying big bucks. Default correlation was an unsolved problem simply because one could not easily model the probability of default...
firms were instead using Markov models, which are kind of an "AI" type of thing, as well as diversifying their derivatives to spread risk.

But this issue of Li is another entirely separate area.

It is mathematics, applied to finance.

I'm sorry but the point is the mathematics, simply because this is the branch of science at hand.

I hate to say this but many models ARE anchored in reality. I have no idea what you are referring to.

I would not see a "Gaussian probability model" in, say, the model for free trade, because it is not applicable. That's a different branch of mathematics, and from my readings to date I have never seen a good macroeconomic theorist, or say a labor economist or trade theorist, ever use a "Gaussian distribution", except for something like "skills" per subgroup as superior/inferior, i.e. for data already known to conform to a Gaussian distribution by historical evidence.

I honestly have no idea at this point what your point is.

Maybe you heard a blurb about DSGE modeling, which is used by the world bank and so on.

But stochastic modeling is quite different. That implies Markov models, which are transitional states and can be independent of past events; i.e., not a Gaussian copula, which is a fancy way of saying one is collapsing a joint probability model (i.e. dependent events) to a constant.
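As a minimal sketch of what "transitional states" means here (a toy two-state chain of my own invention, not any production model):

import numpy as np

# Transition matrix: rows are the current state, columns the next state.
P = np.array([[0.9, 0.1],    # healthy    -> (healthy, distressed)
              [0.4, 0.6]])   # distressed -> (healthy, distressed)

state = np.array([1.0, 0.0])  # start out healthy with certainty
for _ in range(50):
    state = state @ P         # the next state depends only on the current one
print(state)                  # settles near the stationary split (0.8, 0.2)

The next state depends only on the current state via the transition matrix; nothing here is a joint distribution collapsed to one constant.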

Robert, just to sum this all

Robert, just to sum this all up, wasn't the promise of Li's approach the notion that by distributing risk probabilities (throughout a tranched security) the overall risk would be minimized? And, accordingly...would you please summarize (in simple language) where the whole thing was flawed? (Personally, my sense is that in a perfectly closed system without any external variables perhaps his system might have worked. But reliance on Appraisals and FICOs solely as the underlying valuators...fails to consider local/regional employer health, job creation, global outsourcing, real wage impacts, etc.) So, in a nutshell, can we distill Wall Street's main flaw into a simple set of factors?

in "English"

Tranching is more just "tacking on some bad" with some good, again with the idea that defaults are not dependent (wrong). So it's like you buy a piece of cheese and part of the cheese is the "best part", but you get those dried-out crappy parts along with the "best parts". Ya gotta buy the whole cheese. But every cheese can get mold in it...although mold usually hits the "bad part" first.
Well, in tranching they assume your cheese would never become moldy all at once...that no new "super strain" of mold would hit your whole cheese at once, in say...30 minutes.

Therefore, since your cheese mold is assumed to start at the bad part....you could eat the good part before the mold overtook your whole cheese.

Mold was also random with "tranches", i.e. cheese regions on your wheel of cheese, i.e. areas where one could expect "more or less" mold to grow on the cheese in that particular area.

Think Gouda or Port Salut or wheels of Brie.

Where the games begin is evaluating those risks in each tranche. They had been bundling these things up previously, but this new game allowed them, instead of using complex models to evaluate the worth of each tranche (i.e. the part of the cheese where the risk of mold would occur), to just use this Li model, crank out some numbers on the invalid assumptions below, and say: there! All safe. (Ha ha.)
In the past, the tranches of bundled securities were mixed: one part was the cheese, another part was aluminum, impervious to mold, another part was say paper, which can get moldy but very, very slowly. So in the past they bundled up a bunch of different things to spread risk, and they were sold that way, as bundled securities with tranches.

With Li's formula they bundled the entire security as "sections of the cheese". With Copulas, all cheese was good, bundling only cheese with cheese to manage risk was good, even breaking up bad cheese and selling it together to manage the risk of mold was good.

i.e. a rat's dream.

The cheese in this case was MBS, or assets of all one class, i.e. residential properties.

What Li said was one could safely estimate the chance that if one mortgage defaulted, another one would.

He did that by making two assumptions (that are flawed in my view).

First was the entire concept that defaults are "random" or uniformly distributed. Take that meaningless number, the IQ score. The scores themselves are distributed around an average, with "tails" out on each end: a bell curve. So the chance that Bob is born with an IQ of 50 while Betty has an IQ of 150 is set by the limits of human "intelligence" measurement and by the way they want the "distribution" represented, to ensure 100 is the "average". (IQ has its own BS, but that's for another day!)

So Li assumed the chance of one person not being able to pay their mortgage had a simple relationship to someone else's chance, i.e. a more "randomized" or "normal" (evenly spaced, with the most in the middle) distribution akin to an IQ bell curve. Problem is, that just ain't the way defaults work, especially historically, and especially when companies are busy giving mortgages to people with phony Social Security numbers, no legal right to be in this country, no income or not enough income, absurd predatory terms from lenders, prices so over-inflated no one could really afford the asset in the first place, and bad credit scores.

So for example, Li said the chance someone in Riverside, CA couldn't pay their mortgage would affect someone in Ohio not being able to pay their mortgage by increasing that Ohio person's chance of defaulting by say 1.5%. And that number would stay constant, that 1.5%...until someone else defaulted.

So, things like "all of Riverside" homeowners defaulting weren't in the model...even though it's clear Riverside, CA is a "ground zero" of foreclosures, due to these other dependencies as well as through time....

In Li's math world, events are all constant and linear. Kind of like a traffic light, only ignoring the increasing line of cars waiting for it to turn. The light stays green, red, yellow at constant intervals. The backup of cars along the road....does not.

But in the real world of massive mortgage defaults, things happening are absolutely not constant and linear. Time ain't on one's side, and the relationship of default is not linear either. If all of your neighbors default, you are seriously SOL, because not only is your property now worth nothing, the banks just lost big money and all of the stores and employers closed around you, because now the majority are flat broke. Time ain't on your side, ain't constant, ain't steady, just waiting around for Betty in Ohio to default anymore.

i.e. that chance that Betty will default in Ohio: well, firstly, it's kind of meaningless to you, because everybody around you is now homeless, without a job, and the whole area looks like a war zone. But also, the chance that even your neighbor will default is still 1.5% by Li's model, even though it's clear the default rate in your neighborhood is really 98.9%!

In other words, Li claimed that "all defaults are created equal" from their origins: affected by "external forces", but from the start "separate" from each other directly, each affected only by these large "external forces", which also were not so "regional".

(The uniform distribution that is required for the Gaussian copula to be valid.)

So for each "region" (say Ohio and riverside) one could break it down more "piecemeal" and give a default risk number.

In other words, you have a bunch of cheeses where the good parts were taken out. You put them all together. But then you say at time of creation, none of the cheese has mold. Therefore the probability that all of the cheeses, all of the parts of the cheese will go bad, will be a constant, linear value and it doesn't matter anymore that you just packaged up a bunch of cheeses that are the first to mold because you claim that it's still random and uniform on how the mold grows on each piece of bad cheese.

Not so, says anyone who knows cheese. If you put bad cheese together it can get moldy quick; it will blow up, explode in a disgusting, gruesome smell all over your refrigerator, even though each piece of bad cheese (when it's with good cheese) doesn't have that property...you will have a nasty mess on your hands, because it spreads, and it's not independent or uniform, or constant over time. It is not even linear, never mind constant: i.e., Li's assumption flaw.
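If you want the cheese story as a toy computation (my own sketch, using a standard one-factor setup, not Li's actual formula): simulate a 100-loan pool and ask how often losses eat past a senior tranche's 20% attachment point, once with independent defaults and once with a shared "mold" factor. All the numbers are made up for illustration:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, p, attach, trials = 100, 0.05, 0.20, 50_000
thresh = norm.ppf(p)   # a loan defaults when its latent variable < thresh

def senior_hit_rate(rho):
    # Fraction of runs where pool losses eat past the 20% attachment point.
    m = rng.standard_normal((trials, 1))    # the shared "mold" factor
    eps = rng.standard_normal((trials, n))  # each loan's own luck
    defaults = (np.sqrt(rho) * m + np.sqrt(1.0 - rho) * eps < thresh).sum(axis=1)
    return (defaults / n > attach).mean()

print("independent defaults:", senior_hit_rate(0.0))  # mold stays in one corner
print("clustered defaults  :", senior_hit_rate(0.6))  # the whole cheese goes at once

With independence, the senior slice essentially never gets hit; with a strong shared factor, it gets hit in a meaningful fraction of runs. Same loans, same 5% default rate each; only the dependence changed.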

Second Li flaw I perceive:

Because of the above assumptions, he then claimed that CDS and "market data" was a valid metric to determine increased chances of default. Call it an "indicator" simply because credit default swaps are valued daily and would somehow follow the markets.

Now this is where you get the comment about "equilibrium" as if there is a section of the world that has decided to ban the = symbol as an evil doer.

But it is not the = sign, or how "market data does not reflect equilibrium" (no shit, Sherlock; of course real data does not reflect equilibrium: firstly, there are always unknowns, knowledge is not constant, information is not uniform). The notion of instantaneous adjustment is pure illusion, and anyone who deals with any time-based system analysis will tell you that. There is ALWAYS a lag, man!

I digress, back to the cheese analogy and the flaw I really see in using CDS data as a risk curve creator:

To me, the proposal of using CDS data was really nuts. I mean obviously nuts, and why no one flagged this guy's paper is beyond me. CDSes are not a 1:1 ratio to the underlying asset; one can have unlimited CDSes associated with one mortgage. There is no upper or lower bound on CDSes. CDSes also do not have any historical data pattern to prove they model default data or correlate to it. Jesus, man, they haven't been around long enough to claim that they, as an asset class by themselves, are modeled accurately!

It's like me going out to the highway at midnight, counting cars for an hour, and then claiming that is the constant daily traffic flow, 24/7, every day of the year for that highway.

That's the best I can do (at past midnight).

In Math:

Li's joint default probability copula:

Pr[T_A < 1, T_B < 1] = Phi_2( Phi^-1(F_A(1)), Phi^-1(F_B(1)), γ )

where Phi_2 is the bivariate normal distribution function, Phi^-1 is the inverse of the standard normal distribution function, F_A(1) and F_B(1) are the one-year default probabilities of A and B, and the correlation parameter, γ, is CDS data based.
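For anyone who wants to plug in numbers, a minimal sketch of that formula in Python with SciPy. The default probabilities and gamma here are made-up illustrative values, not calibrated to any CDS data:

from scipy.stats import norm, multivariate_normal

p_A = 0.05    # hypothetical one-year default probability of A, i.e. F_A(1)
p_B = 0.05    # hypothetical one-year default probability of B, i.e. F_B(1)
gamma = 0.3   # illustrative correlation parameter (CDS-derived, in Li's setup)

z_A, z_B = norm.ppf(p_A), norm.ppf(p_B)   # map the marginals to normal quantiles
joint = multivariate_normal.cdf(
    [z_A, z_B], mean=[0.0, 0.0], cov=[[1.0, gamma], [gamma, 1.0]])
print(joint)   # Pr[both A and B default within the year] under the copula
# At gamma = 0 this collapses to p_A * p_B = 0.0025; the entire dependence
# between the two defaults is carried by the single scalar gamma.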

This is what is invalidated directly unless one ...

... assumes efficient markets:

"He did that by making two assumptions (that are flawed in my view).

First was the entire concept that defaults are "random" or uniformly distributed."

If you assume efficient markets, then you assume away systemic interactions between events, like defaults, that are not captured in the prices established in markets for financial assets affected by the events.

If you do not assume efficient markets, there is no way to get to independent and identical distributions, because there's no scientific basis for the assumption. It's only when, as in mainstream economics, it is normal to invoke commonly used assumptions despite the fact that they are contradicted by reality, that someone could possibly treat the distribution of the returns on a pool of mortgages as a Gaussian.

Obviously there is nothing there in terms of
   (Observation->assumption)

But mainstream economics does not work in terms of (Observation->assumption). It works in terms of:
   (Requirements for tractable solution -> assumption)

So as you describe it, there is nothing about what Li did that is in any way outside of the mainstream practices of economics. The fact that it does not result in a model that applies in the real world is par for the course ... that is the downside of modeling as mainstream economists do, as opposed to modeling as natural and social scientists do ... there is nothing in the process that filters out models that are irrelevant to the real world.

that is not what I said

Once again, he claimed that CDS data accurately modeled market data. The argument over whether using real-time market data is valid as a general concept is not relevant to this discussion.

CDS data is not isomorphic. That's really what I am saying. There is no inverse, it is not linear, it is not bounded. That's clear from the parameters, the characteristics of CDS itself, without even going that much into actual credit default swap values.

A correlation coefficient must have certain properties, number one being a linear relation. That's what I mean by it's not "1:1".

CDSes, from everything I have read (and I haven't gone mathematically diving into them), by the parameters I am aware of to date, simply do not have that relationship.

Therefore, use of CDS values, or their first-order derivatives (i.e. the daily spread), is invalid as an input for a required parameter in copulas, by the definition of the copula theorem itself.
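A quick toy illustration of the linearity point (my own example, nothing to do with actual CDS data): below, y is completely determined by x, yet Pearson's correlation coefficient comes out near zero.

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 100_000)
y = x ** 2                      # y is completely determined by x, nonlinearly
r = np.corrcoef(x, y)[0, 1]
print(round(r, 4))              # approximately 0: Pearson's r sees nothing
# A dependence measure that reads "zero" on a deterministic relationship
# is a poor thing to treat as "the" correlation input to a copula.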

I do not expect most people on EP to be able to read Li's paper and digest what he proposed.

I did it.....because I like to go dumpster diving into various mathematics-based research and proofs.

On the other hand, I'm really not interested in arguing with some bizarre philosophy or belief that somehow the = sign is the root of all evil.

One can find plenty of bad math out there; just pull up any corporate lobbyist's "research" paper and you can dig it out in no time. But all of this tells me Academia and "research" have become increasingly political and are abandoning the original intent of the open, objective scientific method, thorough peer review, and debate.

Bad math does not imply all evil lies at some very misunderstood concept, such as what equilibrium actually means.

Again, equilibrium is not the issue I see. It is the correlation coefficient properties and how CDS swaps do not possess those required mathematical properties.

That is not what I said either ...

"Bad math does not imply all evil lies at some very misunderstood concept, such as what equilibrium actually means.

Again, equilibrium is not the issue I see. It is the correlation coefficient properties and how CDS swaps do not possess those required mathematical properties."

I don't follow what this is trying to say. What equilibrium "actually means" in mainstream economics is that two equations derived from optimizing models are set equal so that you can solve for price and quantity.

You are saying that Li's mathematical model is built on assumptions which are not the same as the actual properties of the transactions being modeled. In other words, you are saying that Li worked like a mainstream economist.

The most common problem that crops up in mainstream economic theory ... and in mainstream economics, theory is identically equal to mathematical modeling ... is not that important real world relationships are misunderstood, but rather that they are not compatible with the modeling techniques in the toolkit and so must be replaced with a proxy.

And those same mathematical models must be mastered in order to gain most graduate degrees in finance.

No, that is again incorrect

A is a member of B does not imply B is a member of A.

The sky is blue in California on Wednesday does not imply the sky is blue for all time, around the world.

A flawed mathematical model does not imply all mathematical models are flawed.

A branch of mathematics does not imply all economic theory.

You have a serious logic problem going on.

and I'm kind of sick of it. Look, not only is EP an "all things economics" community site, but we are also a reality based site.

So, trying to push some assumption that all mathematical models are bad and all economic theory is wrong just is not going to fly on EP because that is a belief.

If you wish to prove it....well, ya kind of need some data, mathematics and facts. (Oopsy!)

Once again Li was an Actuarial Science PhD and worked in structured finance. That is not the same as having a PhD in say Macro Economics or International Economics or labor economics.

Also, Li was working at a company that had a vested financial interest in being able to quickly evaluate the risk of CDOs.

There are mathematical models and then there is poor interpretation, wrong application, or not understanding the limits of those various models.

One can take trade theory as an example, and it will clearly show offshore outsourcing will harm America. The entire second half of Gomory and Baumol's book is nothing but mathematics. They show clearly that free trade is not always a "win-win", especially when one allows certain variables, such as labor supply, to not be static (i.e. to be tradable).

The entire first half of the book is written for lay people, those who are mathematics illiterate, where they describe their findings...but those findings are from the mathematical models.

Paul Samuelson also proved this to be the case with a simple use of the bivariate trade theory model; he just tweaked a couple of variables.

So to sum:

That an economist uses mathematics does not imply that economist is a neoconservative.

Classical economics does not imply one is in the Hayek school.

An equal sign does not imply the broad concept of equilibrium.

One correction, Robert.

LOL

What gave you the impression that Mainstream Economics ...

... is driven by a concern about market equilibria? It's a program in applying a certain type of mathematical toolkit to economic modeling. The historical data is never the primary driver of the modeling.

The reason for the focus on market equilibria is that it is a big part of the rationalization for the use of the toolkit. But mainstream economic models are originally mathematical models, and only secondarily about the economy as such.

After all, the whole reason for the existence of the field of econometrics is to use characteristics of mathematical models in order to take shortcuts that would not be justified based on the statistics alone. If it were not for those shortcuts, economics would just use statistics, as the natural and social sciences do.

modeling in general, as a concept

This isn't quite true. Mathematical modeling does come originally from data. One gets a hypothesis, an idea, a "spark", and looks to express that in mathematical terms. One observes a pattern, a suspicion. Mathematical models are also validated against data.

"Natural sciences" I hate to tell you this, use mathematical models in droves. The Human Genome project is just one of those. The patterns of flights of birds.....mathematical models. The pattern of algae in ponds, sea dead zones from too much carbon dioxide....again, mathematical models.

Econometrics is not about "shortcuts". It is about validating current economic theory with mathematical modeling and heavy use of statistics. With the advent of processing power, one can "crunch numbers" to validate, test, disprove, many an economic theory as well as many a mathematical model.

Yes, but the mathematical models of economics ...

... did not come originally from data in economics; they are borrowed from the physical sciences.

It's not the presence or absence of mathematical modeling that makes a field of study a science; it's the effort to provide cause and effect explanations of what is happening.

I apologize in advance if "try to provide cause and effect explanations" is an excessively obscure philosophical concept.

And, yes, econometrics is quite certainly about validating the application of mathematical models with mathematical models and heavy use of statistics, where the statistics alone would not validate the model. The rationalization within econometrics is that it is "knowledge about the working of the economy" that is being added to the statistics, but that then begs the question, since if you incorporate the underlying assumptions of your model when validating an application of the model, the underlying assumptions are no longer subject to being invalidated by the statistical analysis.

if only your cause and effect

was even valid. I just wrote down some Philosophy 101 logic statements in the last comment.

What constitutes a science is the scientific method, and that does include mathematics and modeling.

One of the basic laws of supply and demand came from a Muslim economist in the 12th century. i.e. not "the physical sciences".

When one builds a model, they are not using vague philosophical assumptions for validation. They are using historical results, real-world information, real-world data, and experimentation to validate or invalidate the model.

That statement on model validation is so FUBAR I just don't know where to start. I'll say this, it appears you negated your own tautology.

mainstream economics and efficient market theory

With all due respect, Bruce, I don't think your last paragraph is particularly correct. Actually, let me correct myself here, because really I'm focusing on the first sentence of that last paragraph. Economics has been trying to explain the causes and effects of the decisions people make and the costs associated with them.

Regarding efficient market theory: that is not held by the majority of those that follow prices; at most it's half. What is being sought is price discovery. Prices are shown, but do they reflect the true nature of the situation? At best, someone offers a bid or ask based on the information they've digested. But to assume the price you see is the correctly valued one assumes that all the market participants have accounted for all available information equally. History has proven otherwise.

I'm not an adherent of efficient market theory, because the primary element behind prices is human beings. Human beings are fallible, and in this light one must realize that one will not get all the information, or even comprehend it correctly. They say that computers have taken care of this problem, but it is humans who develop these programs. The only ones I have come across who are, though not entirely, orthodox efficient-price adherents have been technicians (chart followers) and those who subscribe to Burton Malkiel's Random Walk theory.

I'm to the point where I'd say

That an efficient market is impossible outside a homogeneous population of fewer than 500 human beings with the same ethical belief system or religion.

As soon as you have ONE actor who is anonymous or does not follow that ethical belief system or religion, and holds values counter to the rest, the inevitable distortions in price that we see with any fraudulent deal will appear.
-------------------------------------
Executive compensation is inversely proportional to morality and ethics.

-------------------------------------
Maximum jobs, not maximum profits.

He developed a special meat

He developed a special meat grinder: in goes chicken, out comes steak. The rest is sausage-making history...or rather, mortgage securitization history. ;-)

Actuaries

“I clearly remember [Li] mention that if you are an actuarial guy, you can earn a lot of money.” Hahaha, you said it man! I must admit, it's my interest in eventually earning an actuary salary that led me to this post. But wow, what an incredibly substantial discussion! Thanks everyone for stimulating my mind... a lot is over my head, but hopefully what I've absorbed will come in handy.