Some forecasting researchers love complexity. Read their papers and you might feel guilty that you are not integrating genetic fuzzy systems with data clustering to forecast sales in
your company. If you are a long-term forecaster, you may be dreading the day that your boss finds out that you have not been using a neuro-fuzzy-stochastic frontier analysis approach.
Or perhaps you should be using utility-based models, incorporating price forecasts based on an experience curve, which have been fitted to your data using non-linear least squares.


Suppose that you make an outrageously extreme forecast. Perhaps you predict a quadrupling of the demand for your company’s product within a year, or 30% inflation in
the US economy by next October, or the successful implementation of a technology that makes road accidents impossible within three years. Your friends give you strange looks and
think you may have been working too hard recently. You catch your colleagues mocking your prediction in the coffee room. And then, incredibly, your forecast turns out to be right.
You are suddenly transformed into a visionary and your boss now hangs on your every word, preparing to invest millions of dollars in every scheme you suggest. Clearly, you are a
person who possesses unbelievably good judgment. Or are you? Not according to Jerker Denrell and Christine Fang (Denrell and Fang, 2010). Their research suggests that those
who accurately predict the next big thing are actually displaying poor judgment.


Predicting the demand for new products poses special problems for forecasters. By
definition, a new product will have little or no demand history. Unless the demand history of
similar existing products is available and is considered to be relevant, this usually rules out
the use of statistical approaches, like exponential smoothing or ARIMA models, which are
designed to detect and extrapolate past patterns in demand.
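To make the point concrete, here is a minimal sketch of simple exponential smoothing, the kind of extrapolative method the paragraph above says is ruled out for new products. The smoothing constant and the toy demand figures are our own illustrative assumptions, not values taken from the text.

```python
def simple_exponential_smoothing(demand_history, alpha=0.3):
    """Return a one-step-ahead forecast by smoothing past demand."""
    if not demand_history:
        raise ValueError("No demand history: the method cannot be applied")
    level = demand_history[0]                            # initialise with the first observation
    for demand in demand_history[1:]:
        level = alpha * demand + (1 - alpha) * level     # update the smoothed level
    return level                                         # forecast for the next period

# With an established product the method extrapolates the demand history...
print(simple_exponential_smoothing([120, 135, 128, 142, 150]))   # about 136 units
# ...but a genuinely new product has no history to smooth:
# simple_exponential_smoothing([])  would raise ValueError
```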


Recent high-profile events have led to skepticism about forecasting. Most forecasters failed to foresee Donald Trump’s election victory in the USA, or the Brexit vote in the UK.
Yet forecasting brings many benefits to society. In the USA, weather forecasts have been estimated to be worth $286 per year to each household. In the UK in 2015, public weather
forecasts were estimated to bring economic benefits of between £1 billion and £1.5 billion per year. Without forecasting, those frustrating waits for a call centre to pick up your call would doubtless be much longer, electricity companies would struggle to match supply with demand, and governments would be sailing their ‘macro-economic ships’ without any idea of where they might be heading.


In a recent blog post, Uriel Haran and Don Moore (Haran and Moore, 2014) present a simple method that aims to improve the accuracy of judgmental forecasts involving probability
distributions. They call their method SPIES (Subjective Probability Interval Estimate). You start by estimating the lowest possible value for whatever you are forecasting and the
highest possible value. You then split this range into a set of sub-ranges or bins and estimate the probability that the outcome will fall into each bin.
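As a rough illustration of how such bin probabilities might be used, the sketch below derives a central 90% interval from them. The function name, bin edges, and probabilities are invented for the example; they are not taken from Haran and Moore's post.

```python
def interval_from_bins(bin_edges, bin_probs, coverage=0.9):
    """Approximate a central `coverage` interval from binned probability judgments."""
    assert abs(sum(bin_probs) - 1.0) < 1e-9, "probabilities should sum to 1"
    lower_tail = (1 - coverage) / 2
    upper_tail = 1 - lower_tail

    def quantile(p):
        cumulative = 0.0
        for (lo, hi), prob in zip(zip(bin_edges[:-1], bin_edges[1:]), bin_probs):
            if cumulative + prob >= p:
                # interpolate linearly within the bin that crosses probability p
                return lo + (hi - lo) * (p - cumulative) / prob
            cumulative += prob
        return bin_edges[-1]

    return quantile(lower_tail), quantile(upper_tail)

# Example: next month's sales (units), judged over five bins spanning the
# lowest to highest possible values.
edges = [0, 100, 200, 300, 400, 500]
probs = [0.05, 0.20, 0.40, 0.25, 0.10]     # judged probability for each bin
print(interval_from_bins(edges, probs))    # -> (100.0, 450.0)
```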


GOOD AND BAD JUDGMENT IN FORECASTING: LESSONS FROM FOUR COMPANIES
Robert Fildes and Paul Goodwin

If you are a forecaster in a supply chain company, you probably spend a lot of your working life adjusting the statistical demand forecasts that roll down your computer
screen. Like most forecasters, your aim is to improve accuracy. Perhaps your gut feeling is that a statistical forecast just doesn’t look right. Or maybe you have good reason to
make an adjustment because a product is being promoted next month and you know that the statistical forecast has taken no account of this.
But if you are spending hours trying to explain the latest twist in every sales graph or agonizing over the possible impact of Wal-Mart’s forthcoming price cut, is this time well spent?
Would it make any difference to forecast accuracy if you halved the number of adjustments you made and spent your newly found free time chatting with colleagues at the water cooler?


AVOIDING JAIL
In October 2012, the scientific world was shocked when seven people (engineers, scientists, and a civil servant) were jailed in Italy following an earthquake in the city of L’Aquila in
which 309 people died. They had been involved in a meeting of the National Commission for Forecasting and Preventing Major Risks following a seismic swarm in the region. At their trial it was alleged that they had failed in their duty by not properly assessing and communicating the risk that an earthquake in the area was imminent. Their mistake had been that they had simply conveyed the most likely outcome – no earthquake – rather than a probabilistic forecast that might have alerted people to the small chance of a strong earthquake (Mazzotti, 2013).