How ‘Forecastability’ Analysis Improves Forecast Accuracy

What is ‘Forecastability’?


Every forecaster knows that there is no single, ‘best practice’ way to forecast sales of an item.  But if different items possess different ‘forecastability’ characteristics, surely one can infer different best methods for forecasting them, and hence improve productivity and forecast accuracy.

By ‘forecastability’ (a word that does not actually exist in the dictionary!) we mean identifying how forecastable an item actually is.  Are some items totally unforecastable, for example?

How ‘Forecastability’ Helps Forecasters

Consider an item that is ordered very intermittently and irregularly through time.  Assume also that each order is totally different in quantity to the preceding and following order.  Demand like this is called ‘lumpy’ in the forecasting literature.  Arguably, items like this are actually impossible to forecast accurately:

Lumpy demand pattern

Options for addressing ‘Lumpy’ demand are:

    1. Ask the customer whether they would be willing to discuss making the demand pattern more predictable.
    2. If not, consider forecasting the item at a higher level of the customer hierarchy (aggregation).  This may smooth out the irregularity of level and interval and make the item more forecastable.
    3. If this is not possible, consider a safety stock strategy (especially for low volume / value items) because the item is simply not forecastable.  (Holding stock is clearly a difficult ‘sell’ for the forecaster because no one wants to tie up money in inventory.  However, ‘Forecastability’ measurement will help the ‘sell’ by providing an objective and irrefutable measurement of the issue to the business.)

If none of these options are possible then a question has to be answered over whether it is economic to sell this item due to the high cost of meeting demand.
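For option 3, a common textbook safety stock formulation can put a rough number on the cost of holding stock against unforecastable demand.  The z value and demand figures below are illustrative assumptions, not something the strategy above prescribes:

```python
# A standard textbook safety stock calculation: z-score * demand
# standard deviation * sqrt(lead time). The service level, z value
# and demand figures are illustrative assumptions.
from math import sqrt

def safety_stock(demand_std, lead_time_periods, z=1.65):
    """z = 1.65 corresponds to roughly a 95% service level."""
    return z * demand_std * sqrt(lead_time_periods)

# An unforecastable item with a demand std dev of 40 units/period
# and a 4-period replenishment lead time:
print(round(safety_stock(demand_std=40, lead_time_periods=4), 1))  # 132.0
```

A number like this, alongside the item's value, helps make the economics of the "is it worth selling?" question concrete.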

The other end of the ‘forecastability’ spectrum is items which are ordered regularly and where the demand level is similar each period.  These items have ‘Smooth’ demand:

Smooth demand characteristic

These items are relatively easy to forecast.  They are very suited to automated statistical forecasting (e.g. via Prophecy’s R engine).  Software, such as Prophecy, should let you quickly identify these classic statistical forecasting candidates and push them into the statistical engine.

Between the ‘Lumpy’ and ‘Smooth’ group of items are the ‘in-between’ items.

An item with ‘Intermittent’ demand has irregular ordering time periods, but a similar quantity in each order:

Intermittent demand pattern

These items require a similar approach to the ‘Lumpy’ category.  If it’s not possible to work with the customer, forecasting at a higher level of product aggregation can even out the demand interval and make the item easier to forecast.  For low volume items, the cost of safety stock may be a fallback where neither of these options works.

Finally, ‘Erratic’ items.  These have orders every forecast period but at vastly different levels.  They are likely to be highly seasonal or promotional.

Erratic demand pattern

Seasonal items can still be forecasted using statistical methods.  Many statistical forecasting models, including those in Prophecy™, employ ‘outlier detection’ to remove the peaks and troughs of promotions, allowing you to forecast the ‘standard sales’ statistically and overlay promotions afterwards (e.g. using Prophecy™’s Trade Promotions Planning module).

These items require the highest ‘judgemental’ element in the forecasting process, particularly where the statistical forecast’s MAPE (mean absolute percentage error) proves high – i.e. the demand is erratic with little or no seasonality.

A forecaster can only “play the hand they’ve been dealt” and one has to be realistic about how easy or difficult the different items can be to forecast.

However, the strategies described here at least provide a starting point to making the best of the hand.

Pulling the four demand categories together into a single view we get:

Forecastability Quadrants

The Mechanics of Forecastability

As illustrated above, forecastability is a function of demand level variability (the x-axis on the quadrant graphic) and demand interval variability (the y-axis on the quadrant graphic).

Demand level variability is measured by the Coefficient of Variation (CV): the ratio of the standard deviation to the mean.  A high variation from the mean indicates a high variability of demand.

Demand interval variability is measured using the Average Demand Interval (ADI).  For example, say there are 36 time periods in the history but only 12 have a sale.  The ADI would then be 36 / 12 = 3.  In other words, on average, a sale occurs every three periods.
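The two measures can be combined to classify each item into the four quadrants.  Here is a minimal sketch; the cutoff values are illustrative assumptions (not Prophecy's actual thresholds), and CV is computed over the non-zero demand periods:

```python
# Sketch: classify an item into the four 'forecastability' quadrants
# from its demand history. Cutoff values are illustrative assumptions.
from statistics import mean, stdev

def classify_demand(history, cv_cutoff=0.5, adi_cutoff=1.32):
    """history: list of per-period demand quantities (0 = no sale)."""
    sales = [q for q in history if q > 0]
    if len(sales) < 2:
        return None  # not enough data to measure variability
    cv = stdev(sales) / mean(sales)   # demand level variability
    adi = len(history) / len(sales)   # average demand interval
    if adi <= adi_cutoff:
        return "Smooth" if cv <= cv_cutoff else "Erratic"
    return "Intermittent" if cv <= cv_cutoff else "Lumpy"

# 36 periods, 12 with a sale -> ADI = 36 / 12 = 3; similar quantities
history = [0, 0, 40] * 12
print(classify_demand(history))  # Intermittent
```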

Plotting the CV and the ADI provides the quadrant analysis shown in the preceding graphic.  In Prophecy™ you’ll see it rendered like this:

Forecastability quadrant plot in Prophecy

The red lines delineate the quadrants.  Hover tooltips help identify the quadrants or the item at each plot point.  The items from any quadrant can be instantly pushed into a Prophecy report and statistically / judgementally forecasted with a single click of one of the buttons in the panel labelled ‘Push into Prophecy Report..’.  You can filter out low volume items, which can make it harder to spot the key forecastability opportunities.  Finally, the graph can also be rendered as a ‘bubble’ plot, where the size of each bubble corresponds to the average sale per period.

The ‘pay-off’

Being able to categorise items for their ‘forecastability’ characteristics has the potential to make forecasters more productive, by helping them to:

  • Choose the best forecasting strategy for each item.
  • Instantly identify the items that lend themselves to automatic forecasting.
  • Objectively measure the difficulty of forecasting.

Some people, particularly those who are not directly responsible for forecasting, have unrealistic expectations about forecast accuracy.  ‘Forecastability’ measurement can help them understand the issues in a more data-driven and objective way, particularly relating to the ‘lumpy’, or unforecastable items.

6 key components of an integrated sales forecasting solution

This post covers the 6 main pillars of any sales forecasting solution. Yes, it relates them to our solution, Prophecy. But they apply equally to any solution, not just ours, and therefore represent a blueprint for a sales forecasting solution.

Here’s a summary graphic:

6 components of a comprehensive sales forecasting software solution

1. Multi-level forecasting

Multi-level sales forecasting means forecasting through time, over measures, across multi-level hierarchies of customers and products.

In other words, it lets you forecast at the level of hierarchical detail that exactly meets your need at the time. Bottom-up, top-down or middle and both ways.

It’s true that there is probably no substitute for forecasting at the sku level for the best accuracy. However, a top-down view and forecast override can help correct generally over-optimistic or over-pessimistic forecasts at the sku level.

And top-down saves potentially many hours of manual work when flexing Budget.

Prophecy supports bottom-up, top-down and middle and both ways multi-level forecasting, subject to each user’s security setting.
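As an illustration, a top-down override might be spread down to skus like this.  Proportional allocation and the numbers are assumptions for the sketch, not necessarily how Prophecy allocates:

```python
# Sketch of a top-down override: spread a revised group-level total
# down to skus in proportion to their current forecasts. Proportional
# allocation is an assumption here, not a documented Prophecy rule.
def spread_top_down(sku_forecasts, new_group_total):
    factor = new_group_total / sum(sku_forecasts.values())
    return {sku: f * factor for sku, f in sku_forecasts.items()}

skus = {"Sku-1": 600, "Sku-2": 300, "Sku-3": 100}
revised = spread_top_down(skus, 1100)  # group raised from 1000 to 1100
print({k: round(v) for k, v in revised.items()})
# {'Sku-1': 660, 'Sku-2': 330, 'Sku-3': 110}
```

One group-level change replaces three manual sku edits – the saving compounds quickly over large hierarchies, e.g. when flexing a Budget.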

2. Multi-user forecasting

Forecasting is seldom a single-user activity. Frequently it is the major account sales people who are the active forecasters – after all, they know most about what is going on with their customer accounts.

Concurrent multi-user access to a forecasting solution is a technical challenge, and spreadsheet-based solutions face this major hurdle.

Without a single-source, shared forecasting solution you get:

  • Cumbersome manual consolidations
  • Integrity issues with multiple individual documents
  • Maintenance issues – e.g. adding new products, actuals, etc.
  • Delays in the forecast / review / re-forecast cycle

Prophecy has supported multiple user, concurrent read-write access to a single, shared database from day 1.

3. A statistical forecasting engine

Business forecasting in the real world doesn’t always lend itself to statistical forecasting. Peaks and troughs caused by promotions, competitor activity, distribution and listing changes and other non-recurrent factors don’t reliably happen at the same time each year. Statistical algorithms don’t ‘know’ about them.

In addition, adding one new period of actuals to the history used by the statistical algorithm surely adds a lot less value to the new forecast than the up-to-date specialist market and customer knowledge of the business forecaster. Please see our white-paper on the pros and cons of statistical forecasting here.

On the other hand, statistical forecasting may be useful as a starting point for an initial forecast. For example, when you’re initially forecasting a new year. It may also be useful in applying the Pareto principle – where 80% of skus typically account for just 20% of sales. These smaller-selling items can be forecasted with statistical forecasting and reviewed on an exceptions basis via periodic accuracy analysis.

Prophecy has long had a core statistical forecasting engine. It generates forecasts in batch, on demand or as a way of calculating the value added factor of human forecasting.

However, with effect from February 2019, Prophecy statistical forecasting received a major ‘power-up’. It now enables Prophecy users to run all of the forecasting algorithms in the open source R framework for statistical computing, either in ‘Automatic’ mode (the forecaster requires minimal statistical and zero R knowledge) or ‘R-Expert’ mode (which provides ‘to the metal’ access to all elements of the full R framework).

As a result, Prophecy enables the generation of statistical forecasts on a par with the very best statistical forecasting systems.

4. Multi-level revenue and profit forecasting

If your forecasting solution cannot integrate volume forecasting for production and inventory control with financial forecasting, how do you know whether what’s planned to sell will deliver the financial forecast?

At a minimum, the volume plan must feed through selling prices and standard costs into revenue and margin forecasts. Ideally, volume, revenue and margin forecasting will be handled in the same solution, removing latency and giving an instant financial forward view.
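The minimum feed-through described above is simple arithmetic.  A sketch, with invented prices and costs for illustration:

```python
# Sketch: deriving revenue and margin forecasts from a volume
# forecast via selling price and standard cost (illustrative figures).
def financials(volume, price, std_cost):
    revenue = volume * price
    margin = revenue - volume * std_cost
    return revenue, margin

revenue, margin = financials(volume=1000, price=2.50, std_cost=1.75)
print(revenue, margin)  # 2500.0 750.0
```

The point is latency: when this calculation lives inside the forecasting solution, every volume edit updates the financial view immediately.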

It probably won’t be relevant (or practical) to analyse revenues and margins at sku and customer level – way too much detail. So using a multi-level view of the financial measures through time is the way to go. Start, for example, at a product group and customer sector level. Then drill through the hierarchies to identify issues or opportunities.

In Prophecy, depending on the measures you configure in your database, you can view revenue, margin or any other measure at any combination of product and customer level through time. For every volume forecast there is a revenue and margin forecast. Prices can be imported or manually forecasted and standard costs are imported automatically. Prophecy handles the hierarchical solves, in real time, providing an always-on view of the financial data broken down over the full hierarchies.

5. Analysis and control

Sales forecasting generates A LOT of numbers! Especially when you’re extending to revenues and margins over large hierarchies of products and customers. You need a solution which lets you tame this data overload and helps you compare the evolving forecasts with previous years, Budgets and Plans.

Prophecy uses a multi-dimensional reporting and graphical environment which lets you flexibly create your own reports and forecast update screens. These screens support drilling through the hierarchies as well as ‘data-driven’ product and customer selections. (An example of a ‘data-driven’ selection is “retrieve the data for products which are currently forecasted at 10% or more below Budget”.)
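The quoted ‘data-driven’ selection could be sketched like this.  The field names and figures are hypothetical, not Prophecy’s actual schema:

```python
# Sketch of a 'data-driven' selection: pick products currently
# forecasted 10% or more below Budget. Field names are hypothetical.
def below_budget(rows, pct=10):
    return [r["product"] for r in rows
            if r["forecast"] < r["budget"] * (1 - pct / 100)]

rows = [
    {"product": "A", "forecast": 85,  "budget": 100},  # 15% below
    {"product": "B", "forecast": 95,  "budget": 100},  # only 5% below
    {"product": "C", "forecast": 110, "budget": 100},  # above Budget
]
print(below_budget(rows))  # ['A']
```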

Prophecy supports a ‘rolling’ forecast for the standard forecasting cycle – i.e. a new forecast version is published each month, with a backward view of (say) the previous 11 versions. In addition, any number of ‘ad-hoc’ versions can be created – for Budgets, Targets or What-If explorations.

Prophecy’s powerful analysis and reporting environment supports full comparatives between all these versions, such as current forecast version versus a Budget.

Prophecy also provides an optional forecaster / reviewer management work-flow cycle.

6. Plan Trade Promotions

Itemised Trade Promotions are very important to many businesses. For these businesses, it’s important to be able to handle the forecasting of these promotions from within their overall forecasting solution.

Prophecy has an optional Trade Promotions Planning module that lets your forecasters plan and manage trade promotions, and to analyse each promotion’s planned and actual incrementality within Prophecy. It aims to provide an appropriate level of business benefit from Trade Promotions Planning, without the very considerable implementation and maintenance complexities and expenses associated with the more full-blown Trade Promotions systems available.


This post has identified the 6 major elements of a viable sales forecasting software solution. One that services all the business processes and helps everyone plan and control the future from within a single, consistent, robust solution. If you’re looking to cover these bases in your new sales forecasting software solution, Prophecy is a great place to start your search!

5 methods for measuring sales forecast accuracy

Why Measure?

Failing to learn from your forecasting mistakes makes you a lot less likely to become a better sales forecaster.  Measuring accuracy should be a positive thing (and definitely not a stick for beating sales forecasters with).

However, calculating accuracy data over hundreds, possibly thousands, of items can be a real challenge – especially finding a balanced ‘single number’ measure for accuracy.  Here, we explain the different alternatives for calculating accuracy available in Prophecy, Data Perceptions’ sales forecasting software solution.  They are mostly well-known methods for getting to a single, summary number that describes the overall accuracy of a group of many separate forecasts.

Exceptions Analysis

Before we get to exceptions analysis, let’s remember that summary measurement is useful for tracking accuracy over time.  Exceptions analysis – identifying and explaining the reasons for the biggest / most expensive forecast errors – arguably has a bigger payoff.  It provides a valuable opportunity to learn from mistakes and apply the lessons of experience to current and future forecasts.

Therefore, step 1 in accuracy analysis should be a process for rapidly identifying the exceptions – the big deviations that caused the most problems.  Prophecy’s Accuracy Analysis control dialog lets you specify an exception percent and only items exceeding this limit are listed in the resulting detail report:

Prophecy’s Accuracy Analysis exceptions report
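A minimal sketch of that kind of exceptions filter – the threshold and the item layout are illustrative assumptions:

```python
# Sketch: list only items whose absolute % error exceeds a threshold,
# biggest deviations first. Items with no actual need separate handling.
def exceptions(items, threshold_pct=15):
    out = []
    for name, forecast, actual in items:
        if actual == 0:
            continue  # zero-actual lines reviewed separately
        err = 100 * abs(forecast - actual) / actual
        if err > threshold_pct:
            out.append((name, round(err, 1)))
    return sorted(out, key=lambda x: -x[1])

items = [("Sku-1", 100, 90), ("Sku-2", 120, 100), ("Sku-3", 80, 110)]
print(exceptions(items))  # [('Sku-3', 27.3), ('Sku-2', 20.0)]
```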

The next step is to review these exceptions and try to explain them.  Could the reasons have been anticipated?  Forecasting is constrained to skilfully playing the hand the forecaster was dealt at the time the forecast was made.  What’s to learn from reviewing the exceptions?  Clearly, if better information or interpretation was available at the time the forecast was made, there is a lesson to apply to future forecasts.

Moving on to obtaining a single number to describe forecast accuracy, there are several common measures and a couple of less standard ones.  Prophecy’s Accuracy module calculates them ‘for free’ – Excel forecasters will need a little more work, but they are mathematically simple.

The important thing to bear in mind when looking at all these measures is that they measure what they measure – i.e. simple maths.  They DO NOT measure the real impact on your company’s supply chain or sales, revenue and profit.

Weighted Average % Error

This statistic calculates a percentage deviation by dividing the sum of the absolute, unit deviations by the sum of the actuals, as illustrated in the following example:

Weighted Average % Error worked example

The ‘simple’ % error, 15%, shown in the ‘TOTAL’ line does not allow for a mix of over and under-forecasts.  They cancel each other out and are therefore misleading.

The weighted average method effectively weights the absolute variance by the size of the actual.  This gives a more appropriate indication of overall forecast accuracy relative to volume, but assumes a product selling twice as much has twice as much impact.  This is not necessarily the case.  For example, where one item has a very high price and the other a very low price.  In that case, consider measuring accuracy using revenue instead of quantity.

However, if you’re trying to measure supply chain impact, both quantity and revenue can be misleading.  Sku-1 could have a long lead time, Sku-2 a short lead time.  If they sell the same and are the same price, how can accuracy measures measure supply chain impact?
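Setting those caveats aside, the weighted calculation itself is simple.  The numbers here are illustrative, not those from the worked example above:

```python
# Sketch of the Weighted Average % Error: sum of the absolute unit
# deviations divided by the sum of the actuals.
def weighted_avg_pct_error(forecasts, actuals):
    abs_dev = sum(abs(f - a) for f, a in zip(forecasts, actuals))
    return 100 * abs_dev / sum(actuals)

forecasts = [100, 120, 80]
actuals   = [ 90, 100, 110]
# The 'simple' total error is 0% (over- and under-forecasts cancel),
# but the absolute deviations are 10 + 20 + 30 = 60, so:
print(weighted_avg_pct_error(forecasts, actuals))  # 20.0
```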

Alternate Weighted Average % Error

This calculation is similar to the Weighted Average % Error.  However, instead of dividing by the sum of the actuals as shown in the example above, it divides by the sum of the forecast or actual for each line, whichever is greater.

Alternate Weighted Average % Error worked example

This means that actual=0/forecast=100 has the same weight as actual=100/forecast=0.  For two items with these forecasts and actuals, the weighted percent is 200% (misleadingly high?) whereas the Alternate Weighted percent is 100% (perhaps closer to ‘correct’ because both items were effectively 100% out?).

The Alternate Weighted Average % approach was suggested by a Prophecy customer and is thought to be non-standard.  Nevertheless, it has the stated advantages over the more usual calculation.
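A sketch of the alternate calculation, using the actual=0/forecast=100 example above:

```python
# Sketch of the Alternate Weighted Average % Error: the absolute
# deviations are divided by the larger of forecast and actual per line.
def alt_weighted_avg_pct_error(forecasts, actuals):
    abs_dev = sum(abs(f - a) for f, a in zip(forecasts, actuals))
    denom = sum(max(f, a) for f, a in zip(forecasts, actuals))
    return 100 * abs_dev / denom

# Two items: actual=0/forecast=100 and actual=100/forecast=0.
# Standard weighted: 200 / 100 = 200%. Alternate: 200 / 200 = 100%.
print(alt_weighted_avg_pct_error([100, 0], [0, 100]))  # 100.0
```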

Mean Absolute Percent Error (MAPE)

This is a commonly used measure in sales forecasting.  It is the mathematical average of the individual percent errors, ignoring their signs, as shown in the cell outlined in red here:

MAPE worked example

It is a moot point whether to divide by 3 or 4 in the example shown, where Sku-4 has no actual.  If you divide by 4 you are rewarded with a lower MAPE than dividing by 3, but if you divide by 3 you are effectively ignoring Sku-4!

In addition, this method gives equal weight to a percent deviation, irrespective of the relative size of the forecast or actual.  An item that sells a million has the same weight as one that sells 10.
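A sketch of MAPE that takes the ‘divide by 3’ route, i.e. skips lines with no actual.  The numbers are illustrative:

```python
# Sketch of MAPE: average of the individual absolute % errors,
# skipping lines where the actual is zero (those are undefined).
def mape(forecasts, actuals):
    pct_errors = [100 * abs(f - a) / a
                  for f, a in zip(forecasts, actuals) if a != 0]
    return sum(pct_errors) / len(pct_errors) if pct_errors else None

forecasts = [100, 120, 80, 50]
actuals   = [ 90, 100, 110, 0]   # the last line has no actual
# (11.1% + 20.0% + 27.3%) / 3
print(round(mape(forecasts, actuals), 1))  # 19.5
```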

Mean Absolute Deviation (MAD)

This measure definitely has the most appropriate acronym!  It is calculated as the mathematical average of the individual unit deviations, regardless of their sign:

MAD worked example

The advantage of MAD over MAPE is that instances like Sku-4, where there is no actual, are included in the measure.  The same percentage error on a high volume item has a much bigger effect on the result.  This may or may not be desirable where, for example, the high seller is low value and the low seller is high value.

In addition, unlike a percent measure, it can be hard to know if the MAD is good or bad – it is just a number.
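In code, MAD keeps every line, including those with no actual (numbers again illustrative):

```python
# Sketch of MAD: the average of the absolute unit deviations,
# including lines with a zero actual.
def mad(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

forecasts = [100, 120, 80, 50]
actuals   = [ 90, 100, 110, 0]
# (10 + 20 + 30 + 50) / 4 = 27.5 units
print(mad(forecasts, actuals))  # 27.5
```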


As with so many areas of sales forecasting, there is no right answer or single ‘best’ measure that can be used to describe sales forecasting accuracy.

The first and most beneficial purpose of accuracy analysis is to learn from your mistakes.  In that sense, exceptions analysis has the highest return.  The summary measures are helpful for tracking cumulative improvement over time and serve as control barometers.

Author: Peter Boulton, Data Perceptions.

Sales Forecasting with Excel Spreadsheets

This won’t be the first or last blog post on the internet about the dangers of large spreadsheets!

It is inspired by a recent Prophecy implementation where the customer, like many Prophecy customers, is migrating from a spreadsheet-based system that had grown over the years and now suffered many of the (not unusual) issues of complex spreadsheets:

  • The original developer was long gone.
  • It did not support multiple concurrent forecasters.  (All customers were forecasted in a single spreadsheet, avoiding the need for multiple linked spreadsheets but preventing multi-user access.)
  • There appeared to be design errors which led to incorrect financial forecasts.
  • It contained no sales history, other than year to date.
  • It was cumbersome to navigate and locate items quickly.

There are, of course, many additional shortcomings associated with using Excel spreadsheets for sales forecasting and they are documented on our main site at

A recent discovery, for me at least, is research and data on the incidence of serious errors (i.e. errors that affect key numbers) in corporate spreadsheets.

The bottom line from this research is that more than 90% of corporate spreadsheets contain serious errors.  Yes, 90%.  Ray Panko is a Professor of IT Management and has curated the results of numerous studies on corporate spreadsheet errors on his website at  If you doubt the claim about 90% errors, please do hop over to his site and see for yourself!

So, the point of this post is really to say that there is a high probability (90%!) that relying on spreadsheets for sales forecasting imposes hidden costs on your organisation through formula and other errors – quite apart from the general limitations of spreadsheets, even in the 10% or so that are actually error-free.

That’s why a purpose-built, proven solution like Prophecy™ has to be the right way to go.  If it does nothing else, it avoids the high probability of spreadsheet errors, as well as opening up a raft of serious, forecaster-specific features that will help your forecasters develop better, more defensible sales forecasts, in less time.


The effect of Spectre and Meltdown on Prophecy performance

The Spectre and Meltdown ‘bugs’ have, rightfully, been hitting the headlines these last two to three weeks.  There is still a lot of ‘noise’ and uncertainty, as well as a degree of hysteria, about these bugs.  And understandably so – Intel (for example) have been inconsistent about the impact of the microcode ‘fixes’ they have released in response.  Initially, they recommended everyone should apply them, but when the number of spontaneous reboots increased ‘out there’ they reined back on that recommendation.  PC vendors were initially encouraged to push the fixes, until some of the deployment issues emerged – principally increased reboots and performance ‘hits’.

What should we do about Spectre and Meltdown?  Definitely apply the fixes, or wait until early adopters work out the bugs?  It is surprising that there has been hardly any questioning of how dangerous these issues actually are in the real world.  The enormity of Spectre/Meltdown relates to the scope of CPUs affected – i.e. virtually all from about 1995 onwards – rather than the size of the risk.  The following mitigating factors appear to be correct:

  1. Exploiting these vulnerabilities requires a significantly greater level of technical ability than more traditional malware development.
  2. The data that can theoretically be obtained is much more limited – i.e. no hard disc data, no keystrokes, etc.
  3. Like traditional malware, something has to run in order to exploit the vulnerability – ‘drive-by’ is not possible.
  4. OS and browser vendors have already moved to reduce the likelihood of these vulnerabilities having real world impacts.

So overall, there are many cherries lower down the tree waiting to be picked.  Why would common-or-garden malware use these exploits when the degree of technical difficulty is so high and the data you can get from them is so limited?

The lowest risk is of course to just patch.

To test the performance impact of the current Windows 10 Spectre/Meltdown mitigations, we ran some Prophecy benchmarks, with and without the patched Windows, on the same machine – a 3rd-generation i5 with 8GB of memory.

Bottom line – the patched Windows took 13% more time in our tests.

So, everyone has to make their own decision, and what we see with Prophecy is sure to be different for other applications.  But a 13% slowdown is non-trivial.

Incidentally, you can test your software in the same way.  Steve Gibson / GRC have a small utility which evaluates your machine for the vulnerability and allows you to toggle the protections on and off.  It’s here: