As you take on that challenge, how can you measure your progress in the most meaningful ways? Simply put, forecasting is your attempt to anticipate what will happen.
A perfect forecast is one that aligns exactly with the actual. The deviation between the forecast and the actual is your miss, your margin of error and your opportunity for further improvement and optimization. Getting as close as possible is what I would call the process of managing forecast accuracy.
For a store operator, that translates to minimizing the delta between what you planned to happen and what actually happened. Better forecast accuracy minimizes the adjustment delta that your managers need to make to handle the business not coming in as planned; this includes actuals coming in both higher and lower than the forecast. Remember, the store already has a number of adjustments to respond to, like surprises in the supply chain, unexpected absences and associate turnover, so minimizing the delta caused by inaccuracy in the planning process seems like a no-brainer to help stores execute your brand more effectively.
Is forecast accuracy as important to all organizations? In our line of work at Arkieva, when we ask business folks, "What is your forecast accuracy?", we can get a wide range of answers for the very same business. How is this possible? Imagine a management team being given this range of numbers on the same metric.
I am sure they will not be happy. In this blog post, we will consider this question and suggest ways to report accuracy so management gets a realistic picture of this important metric. Forecasting and demand planning teams measure forecast accuracy as a matter of course.
Some commonly used metrics include forecast bias, mean absolute deviation (MAD), and mean absolute percentage error (MAPE). All these metrics work well at the level at which they are being calculated. For example, if a business has thousands of SKU–customer combinations, these metrics can be used to calculate the error of each individual combination. However, when there is a need to calculate the metric at an aggregate level, the negative errors and the positive errors cancel each other out and you get a picture that is much rosier than the reality.
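Let us look at a small example to understand this. The sketch below uses invented numbers (two hypothetical SKUs) purely to illustrate the cancellation effect:

```python
# Invented numbers for illustration: two SKUs, each badly forecast,
# whose signed errors are equal and opposite.
forecasts = {"SKU-A": 120, "SKU-B": 80}
actuals = {"SKU-A": 100, "SKU-B": 100}

# At the individual level, both items miss by 20 units.
for sku in forecasts:
    print(sku, "error:", forecasts[sku] - actuals[sku])  # +20 and -20

# At the aggregate level, the positive and negative errors cancel,
# reporting a "perfect" forecast that never existed.
aggregate_error = sum(forecasts.values()) - sum(actuals.values())
print("Aggregate error:", aggregate_error)  # 0
```

Each SKU misses by 20 units, yet the aggregated error is zero, which is exactly the rosier-than-reality picture described above.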
One way to check the quality of your demand forecast is to calculate its forecast accuracy, also called forecast error: the deviation of the actual demand from the forecasted demand.
If you can calculate the level of error in your previous demand forecasts, you can factor this into future ones and make the relevant adjustments to your planning. Statistically, MAPE is defined as the average of the absolute percentage errors. MAD, by contrast, shows the deviation of forecasted demand from actual demand in units: it takes the absolute value of the forecast errors and averages them over the forecasted time periods. Once you have your forecast error calculations, you need to ensure you act on the data.
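To make these definitions concrete, here is how the two metrics are conventionally written (our choice of notation: A_t for actual demand and F_t for forecasted demand in period t, over n periods):

```latex
% Conventional definitions of the two error measures:
\[
\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|
\qquad
\mathrm{MAD} = \frac{1}{n}\sum_{t=1}^{n}\left|A_t - F_t\right|
\]
```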
In any case, setting your operations up so that final decisions on where to position stock are made as late as possible allows for collecting more information and improving forecast accuracy.
In practice, this can mean holding back a proportion of inventory at your distribution centers to be allocated to the regions that have the most favorable conditions and the best chance of selling the goods at full price. Depending on the chosen metric, level of aggregation and forecasting horizon, you can get very different results on forecast accuracy for the exact same data set. To be able to analyze forecasts and track the development of forecast accuracy over time, it is necessary to understand the basic characteristics of the most commonly used forecast accuracy metrics.
There is probably an infinite number of forecast accuracy metrics, but most of them are variations of the following three: forecast bias, mean absolute deviation (MAD), and mean absolute percentage error (MAPE). We will have a closer look at these next. Do not let the simple appearance of these metrics fool you. After explaining the basics, we will delve into the intricacies of how the metrics are calculated in practice and show how simple, completely justifiable changes in the calculation logic can radically alter the forecast accuracy results.
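As a reference for the discussion that follows, here is a minimal Python sketch of the three metrics. The function names and example data are our own; the sign convention follows the one used below (over-forecasting counts as positive bias):

```python
def forecast_bias(forecasts, actuals):
    """Signed total error: positive = over-forecast, negative = under-forecast."""
    return sum(f - a for f, a in zip(forecasts, actuals))

def mad(forecasts, actuals):
    """Mean absolute deviation: the average error, expressed in units."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

def mape(forecasts, actuals):
    """Mean absolute percentage error; assumes no zero actuals."""
    return 100 * sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

forecasts = [90, 105, 110]  # invented three-period example
actuals = [100, 100, 100]
print(forecast_bias(forecasts, actuals))  # +5 units (slight over-forecast)
print(mad(forecasts, actuals))            # ~8.33 units
print(mape(forecasts, actuals))           # ~8.33 %
```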
Forecast bias is the difference between forecast and sales. If the forecast over-estimates sales, the forecast bias is considered positive. If the forecast under-estimates sales, the forecast bias is considered negative. In many cases it is useful to know whether demand is systematically over- or under-estimated. For example, even if a slight forecast bias would not have a notable effect on store replenishment, it can lead to over- or under-supply at the central warehouse or distribution centers if this kind of systematic error concerns many stores.
A word of caution: when looking at aggregations over several products or long periods of time, the bias metric does not give you much information on the quality of the detailed forecasts; it only tells you whether the overall forecast was good or not.
It can easily disguise very large errors. You can find an example of this in Table 1. Mean absolute deviation (MAD) is another commonly used forecasting metric. This metric shows how large an error, on average, you have in your forecast. However, as the MAD metric gives you the average error in units, it is not very useful for comparisons.
An average error of 1,000 units, say, may be very large when looking at a product that sells only 5,000 units per period, but marginal for an item that sells many times that volume in the same time. Mean absolute percentage error (MAPE), in contrast, expresses the error relative to sales: basically, it tells you by how many percentage points your forecasts are off, on average. This is probably the single most commonly used forecasting metric in demand planning. As the MAPE calculation gives equal weight to all items, be they products or time periods, it quickly gives you very large error percentages if you include lots of slow-sellers in the data set, as relative errors amongst slow-sellers can appear rather large even when the absolute errors are not (see Table 2 for an example of this).
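The sketch below (invented numbers) shows how a slow-seller inflates MAPE, and why zero-sales periods, discussed next, need special handling:

```python
def mape(forecasts, actuals):
    # Zero actuals make the percentage error undefined. Skipping those
    # periods, as done here, is one common but debatable workaround.
    terms = [abs(f - a) / a for f, a in zip(forecasts, actuals) if a != 0]
    return 100 * sum(terms) / len(terms) if terms else float("nan")

# Fast-seller: 50 units off on sales of 1,000 -> a modest 5% error.
print(mape([1050], [1000]))  # 5.0

# Slow-seller: only 2 units off on sales of 1 -> a huge 200% error.
print(mape([3], [1]))        # 200.0

# Zero-sales day: the percentage error cannot be computed at all.
print(mape([2], [0]))        # nan
```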
In fact, a typical problem when using the MAPE metric for slow-sellers on the day level is sales being zero, making it impossible to calculate a MAPE score. Measuring forecast accuracy is not only about selecting the right metric or metrics. There are a few more things to consider when deciding how you should calculate your forecast accuracy:
Measuring accuracy or measuring error: This may seem obvious, but we will mention it anyway, as over the years we have seen some very smart people get confused over this. Error and accuracy are mirror images of one another: a 20% error corresponds to an 80% accuracy, so always be explicit about which of the two you are reporting.
Aggregating data or aggregating metrics: One of the biggest factors affecting what results your forecast accuracy metrics produce is the selected level of aggregation in terms of number of products or over time.
As discussed earlier, forecast accuracies are typically better when viewed on the aggregated level. However, when measuring forecast accuracy at aggregate levels, you also need to be careful about how you perform the calculations. As we will demonstrate below, it can make a huge difference whether you apply the metrics to aggregated data or calculate averages of the detailed metrics.
In the example (see Table 3), we have a group of three products, their sales and forecasts from a single week, as well as their respective MAPEs.
The bottom row shows sales, forecasts, and the MAPE calculated at a product group level, based on the aggregated numbers; this group-level figure differs clearly from the average of the individual product MAPEs. Which number is correct? The answer is that both are, but they should be used in different situations and never be compared to one another. The same dynamics are at play when aggregating over periods of time. The data in the previous examples were on a weekly level, but the results would look quite different if we calculated the MAPE for each weekday separately and then took the average of those metrics.
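A small sketch of the two calculation approaches, with invented numbers in the spirit of Table 3:

```python
forecasts = [110, 60, 1000]  # three products, one week (invented data)
actuals = [100, 40, 1020]

# Approach 1: average of the per-product MAPEs.
per_product = [100 * abs(f - a) / a for f, a in zip(forecasts, actuals)]
print(sum(per_product) / len(per_product))  # (10 + 50 + ~1.96) / 3 ≈ 20.7%

# Approach 2: MAPE applied to the aggregated numbers. The over- and
# under-forecasts partly cancel before the error is taken.
agg_forecast, agg_actual = sum(forecasts), sum(actuals)
print(100 * abs(agg_forecast - agg_actual) / agg_actual)  # 10 / 1,160 ≈ 0.9%
```

Both figures describe the same week, yet one reports roughly twenty times the error of the other.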
Which metric is the most relevant? If these were forecasts for a manufacturer that applies weekly or longer planning cycles, measuring accuracy on the week level makes sense. But if we are dealing with a grocery store receiving six deliveries a week and demonstrating a clear weekday-related pattern in sales, keeping track of daily forecast accuracy is much more important, especially if the items in question have a short shelf-life.
After all, Product C represents over two thirds of total sales and its forecast error is much smaller than for the low-volume products. Should not the forecast metric somehow reflect the importance of the different products? This can be resolved by weighting the forecast error by sales, as we have done for the MAPE metric in Table 5 below.
This is because the MAPE for each day is weighted by the sales for that day. On the group level, the volume-weighted MAPE is now much smaller, demonstrating the impact of placing more importance on the more stable high-volume product.
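A minimal sketch of a sales-weighted MAPE (the helper function and numbers are our own, reusing the invented product group from the earlier sketch):

```python
def weighted_mape(forecasts, actuals):
    # Summing absolute errors before dividing by total sales means
    # high-volume products dominate the result.
    total_abs_error = sum(abs(f - a) for f, a in zip(forecasts, actuals))
    return 100 * total_abs_error / sum(actuals)

forecasts = [110, 60, 1000]
actuals = [100, 40, 1020]

# The unweighted average of per-product MAPEs was ~20.7% (see the
# earlier sketch); weighting by volume pulls the figure down, because
# the high-seller is forecast accurately.
print(weighted_mape(forecasts, actuals))  # (10 + 20 + 20) / 1,160 ≈ 4.3%
```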
The choice between arithmetic and weighted averages is a matter of judgment and preference. On the one hand, it makes sense to give more weight to products with higher sales; on the other hand, this way you may lose sight of under-performing slow-movers. The final or earlier versions of the forecast: As discussed earlier, the longer into the future one forecasts, the less accurate the forecast is going to be.
Typically, forecasts are calculated several months into the future and then updated, for example, on a weekly basis. So, for a given week you normally calculate multiple forecasts over time, meaning you have several different forecasts with different time lags. The forecasts should get more accurate when you get closer to the week that you are forecasting, meaning that your forecast accuracy will look very different depending on which forecast version you use in calculating it.
The forecast version you should use when measuring forecast accuracy is the forecast for which the time lag matches when important business decisions are made. In retail distribution and inventory management, the relevant lag is usually the lead time for a product.
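A hypothetical sketch of lag-matched measurement (the data layout and numbers are invented; the point is to score the forecast snapshot that was current when the order had to be placed):

```python
# Forecasts made for week 30, keyed by how many weeks in advance
# each snapshot was taken (invented figures).
forecasts_for_week_30 = {12: 900, 4: 970, 1: 1010}
actual_week_30 = 1000
lead_time_weeks = 12  # long supplier lead time

# Score the snapshot matching the lead time: the forecast the
# purchase order was actually based on.
f = forecasts_for_week_30[lead_time_weeks]
print(100 * abs(f - actual_week_30) / actual_week_30)   # 10.0% error at lag 12

# The lag-1 forecast looks far better, but it arrived too late to act on.
f1 = forecasts_for_week_30[1]
print(100 * abs(f1 - actual_week_30) / actual_week_30)  # 1.0% error at lag 1
```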
If a supplier delivers from the Far East with a lead time of 12 weeks, what matters is what your forecast quality was when the order was created, not what the forecast was when the products arrived. In terms of assessing forecast accuracy, no metric is universally better than another; it is all a question of what you want to use the metric for. The forecast accuracy metric should also be selected to match the relevant levels of aggregation and the relevant planning horizon.