Yes, models in natural sciences can be quite predictive. A predictive model can be constructed that accurately reveals what occurs with the limited variables offered: Wind of a specified density runs at a specified velocity over and under a wing, the wing is shaped in a specified way. Voilà, predictable lift or drag occurs. Follow the Greek letters in the natural-sciences equation and the predictable outcome appears… and it will appear the next time and the time after that.
But even the certainty in natural sciences has its limits. Heisenberg's uncertainty principle asserts a fundamental limit to the precision with which certain pairs of a particle's physical properties can be known. In finance and economics, the paired properties of individuals are far less certain than those of particles. While it's possible to posit behavior for particles that are hypothesized as identical in structure and unchanging in composition over time, it's impossible to do the same with people. People are undeniably individual and unpredictably changeable.
Because people are unpredictably changeable, ceteris paribus – other things equal – is impossible. Predictive finance models, therefore, are impossible. Of course, models can be descriptive, assuming ceteris paribus: Monetary inflation, through central bank asset purchases and lowering of short-term interest rates, will lead to consumer-price and asset-price inflation, all things equal. That's descriptive. The grit in the gear of the predictive model is that all things are never equal. The ungodly number of possible reactions of unpredictably changeable individuals to even one variable – money – is impossible to fathom, much less capture.
Nevertheless, finance and economics continually attempt to predict human behavior in the language of math. But much to the frustration of their practitioners, math fails to deliver.
For one, calculus itself can never accurately capture human action. Calculus is expressed in the smoothest of curves, but what happens in markets is never smooth, because people are never smooth. No one acts in infinitesimally small steps. We act in discrete, discernible chunks. To represent human behavior as a smooth calculus-derived curve is a fundamental and intractable flaw.
Averages and probability distributions are no better. Sharpe ratios and Markowitz efficient frontiers, whether calculated ex-ante or ex-post, and as intuitive as their variables and calculations appear, are really useless. They reveal nothing about present and future risk, nor do they reveal present and future potential returns. I say this as correlation coefficients of risky assets rush toward 1.
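To see why the ratio reveals so little, consider a minimal sketch of the ex-post Sharpe ratio with two hypothetical return streams. They contain the exact same returns – same mean, same standard deviation, identical Sharpe ratio – yet one bunches its losses and suffers nearly three times the drawdown. All figures are invented for illustration.

```python
def sharpe_ratio(returns, risk_free=0.0):
    """Ex-post Sharpe ratio: mean excess return over sample standard deviation."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((x - mean) ** 2 for x in excess) / (n - 1)
    return mean / var ** 0.5

def max_drawdown(returns):
    """Deepest peak-to-trough loss along the compounded wealth path."""
    wealth = peak = 1.0
    worst = 0.0
    for r in returns:
        wealth *= 1 + r
        peak = max(peak, wealth)
        worst = max(worst, 1 - wealth / peak)
    return worst

alternating = [0.05, -0.03] * 3          # gains and losses interleaved
clustered = [0.05] * 3 + [-0.03] * 3     # same returns, losses bunched at the end

# Same multiset of returns, so the Sharpe ratios are identical --
# but the clustered stream's peak-to-trough loss is far deeper (~8.7% vs 3%).
```

The ratio is blind to the ordering of returns, which is precisely where the pain lives.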
Perhaps more misleading, and more dangerous, are the value-at-risk models. VaR models calculate an amount of money and the probability of the portfolio losing that amount of money over a specified time horizon. Set the VaR with either 1% or 5% probabilities and one-day and two-week horizons and then blissfully engage every moral hazard that comes your way.
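The mechanics behind those comforting numbers are simple. Here is a sketch of the parametric ("variance-covariance") flavor of VaR, which rests on the assumption of i.i.d. normally distributed returns – exactly the ceteris-paribus assumption at issue. The portfolio value and volatility are hypothetical.

```python
from math import sqrt

# One-tailed standard-normal quantiles for the usual confidence levels.
Z = {0.01: 2.326, 0.05: 1.645}

def parametric_var(portfolio_value, daily_vol, probability=0.05, horizon_days=1):
    """Loss threshold expected to be exceeded with the given probability,
    assuming i.i.d. normal daily returns (the heroic assumption)."""
    return portfolio_value * Z[probability] * daily_vol * sqrt(horizon_days)

# A hypothetical $100m portfolio with 1% daily volatility:
one_day_5pct = parametric_var(100e6, 0.01, probability=0.05, horizon_days=1)
two_week_1pct = parametric_var(100e6, 0.01, probability=0.01, horizon_days=10)
# ~ $1.65m (1-day, 5%) and ~ $7.36m (two-week, 1%)
```

Two inputs and a table lookup produce a precise-looking dollar figure; nothing in the formula knows whether tomorrow resembles yesterday.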
Fortunately for the practitioner, VaR is like a gift from heaven to be presented to the risk-averse CYA regulator. Everything goes wrong, but here's the number to show why nothing should have gone wrong, and why nothing would go wrong during the "stress test." Ceteris paribus, which we assume, tells us everything is OK. The regulator is placated no matter how the chips eventually fall.
VaR's perceived strength (a strength shared with many other financial models) is really its weakness: the generalization, the probability distribution for a portfolio's market value. An average, and then the probability distribution, is a response to the elusiveness of reality. Differences are not eliminated but merely made indeterminate. By extending averages into distributions, the data appear to retain their differences, and can then be expressed within readily conceived parameters – the mean and the standard deviation.
But danger lurks. The average is taken to be a value of the variable; therefore, it is of the same dimensions as the variable. If averages are constructs at some remove from reality, then the fact that they are expressed in the same terms as the variable itself can lead the incautious to confuse shadow with form.
More sinister yet, stability appears to increase directly with the inclusiveness of the average. Stability is frequently conjured by eliminating significant causative differences. If the average is sufficiently inclusive and covers a sufficient time frame, even the business cycle disappears. Models constructed on averages can only minimize differences, never magnify them.
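A toy example makes the vanishing act concrete. Take a synthetic "growth" series oscillating around a 2% trend with a fixed cycle length (all numbers invented): a moving average whose window spans one full cycle flattens the boom-bust swing to nothing.

```python
from math import sin, pi

# Quarterly "growth" with a 16-quarter cycle: 2% trend plus a ±3% swing.
cycle_len = 16
series = [0.02 + 0.03 * sin(2 * pi * t / cycle_len) for t in range(64)]

def moving_average(xs, window):
    """Simple trailing moving average over the given window."""
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

raw_range = max(series) - min(series)          # ~0.06: boom to bust, fully visible
smoothed = moving_average(series, cycle_len)   # window = one complete cycle
smoothed_range = max(smoothed) - min(smoothed) # ~0: the cycle has disappeared
```

The averaged series is a flat 2% line. Nothing was falsified; the differences were simply made indeterminate, as the text says.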
Perhaps it’s better to treat financial analysis and economics less as empirical sciences, as the empirical positivists would prefer, and more as a branch of logic. We can still use math, but in more relevant, more holistic, more informative ways. Ensure everyone knows the math is purely descriptive. When contemplating the future, would it not be more honest to simply acknowledge: "If we leverage the joint 10-to-1, not only can something go wrong, it will go wrong. We can't possibly know the extent, so perhaps it's best not to leverage 10-to-1" – rather than present a model to circumvent the logic and leverage anyway?
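The arithmetic behind that honest acknowledgment needs no model at all. A sketch, with hypothetical numbers: at 10-to-1 leverage, every 1% move in the assets is a 10% move in the equity, so a mere 10% asset decline is a total wipeout.

```python
def equity_after_move(equity, leverage, asset_return):
    """Remaining equity after assets (= equity * leverage) move by asset_return."""
    assets = equity * leverage
    return equity + assets * asset_return

# A hypothetical $100 of equity levered 10-to-1 controls $1,000 of assets.
after_small_dip = equity_after_move(100, 10, -0.05)  # -5% on assets: half the equity gone
after_bad_day = equity_after_move(100, 10, -0.10)    # -10% on assets: wiped out entirely
```

No Greek letters required – just multiplication, which is exactly why the logic is so easy to circumvent with a model.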
But where's the fun in that? It can be far more immediately remunerative to construct a model that spits out a billion-to-one chance of anything going wrong if you leverage 10-to-1. Then when the joint implodes after leveraging 10-to-1 you take the model to the regulators and convince them the implosion was the rarest of outliers -- an impossibility really. After the regulators succumb to your math and bestow forgiveness, then you take that same model to new investors, recapitalize, and do it all over again. Now that's fun.