Data, analytics, design & resilience | Highlights

Highlights of real, practical experience from multiple discussions with supply chain practitioners from companies including:

2 Sisters Food Group, 3M, AB Sugar, AB-InBev, Abcam, adidas, AG Barr, Aggregate Industries, AkzoNobel, Aldi, Alexander Dennis, Alstom, Amcor, Amscan, Animalcare Group, AO, Apple, Arla Foods, Arysta, Asahi UK, Asics, AS Watson, ASR Group, Associated British Foods, Aston Martin, Astrak Group, AstraZeneca, Atlas Copco, Avon, Bacardi, Bakkavor, Balt Extrusion, BAT, Bausch Health, Bavaria Breweries, Beiersdorf, Belron, Berkeley Group, BMI Healthcare, Boots, BP, Bridgestone, Bristol Myers Squibb, Brita, British Sugar, Britvic, Brooks Running, BT, Bugaboo, Burberry, Burton's Biscuit Company, C&C Group, Campari, Cantel Medical, Cargill, Carlsberg, Caterpillar, Cath Kidston, Centrica, Charlotte Tilbury, Clariant, Clarks, cmostores.com, Coca-Cola, Colorcon, Corbion, Costa Coffee, Coty, Cra'ster, Currys, Danone, Dawn Foods, Deckers, DFS, Diageo, Dr Oetker, Dreambaby, DS Smith, Dunelm, easyJet, Electrolux, Energizer, Euro Car Parts, Eurofit Group, Fairphone, Ferrero, Flo Gas, Ford, Freesat, Furniture Village, General Mills, Glory Global, Goodyear, Google, Greencore, Greggs, GRS, GSK, Hachette, Haleon, Halfords, Hallmark, Haribo, HARMAN, Hasbro, Heineken, Henkel, Hilti Corporation, Howdens, HP, HTC Europe, IBM, JCB, JDE, Jewson, John Lewis, Johnson + Johnson, Kao Corporation, Karcher, Kerry, Kimberly-Clark, KIND Snacks, Kingfisher, KP, KraftHeinz, Lactalis, LEGO, Leoni, Lululemon, LVMH, M&Co, Macmillan Education, Majestic Wine, Marks & Spencer, Mars-Wrigley, McCormick, McDonalds, Medtronic, Mondelēz, Monica Vinader, Moove Lubricants, Monsanto, Morrisons, Mountain Warehouse, Müller, Nando's, Nestlé, Nike, Novocure, Nutricia, O-I, Opple, Oriflame, Oxford University Press, Pearson, Pentland, PEP&CO, Pepsico, Pernod Ricard, Perrigo, Pfizer, Philip Morris, Philips, Pladis Global, Primark, PZ Cussons, Reckitt, Red Bull, Ricoh, River Island, RS Group, Sainsbury's, SC Johnson, Shell, Siemens Healthineers, Sky, Smith & Nephew, Sodastream, Sony, Specsavers, STADA, Starbucks, STMicro,
Suntory, Superdry, Takeda, TalkTalk, Tata Consumer Goods, Tate & Lyle, Tesco, Teva, The Body Shop, The Nature's Bounty Co., ThermoFisher, The Very Group, TJX Europe, Topps Tiles, TT Electronics, Tupperware, Under Armour, Unilever, Upfield, Vision Engineering, Vivera, Vodafone, Waitrose, Walgreens Boots Alliance, Warburtons, WD-40, Westcoast, Whitworths, WHSmith, William Grant, Wickes and WLI.

Analytics - data first?
FOR: Unless the data is right, the value of any analytics is limited, futile or even counterproductive. The challenge is that data cleaning isn’t ‘sexy’, and securing budget and commitment for it is hard because it’s a means to an end: there is no immediate return on investment.

AGAINST: You could spend months or even years chasing perfect data, which is itself a moving target. The cost of not taking advantage of analytics capability until the data is ‘right’ outweighs the cost of imperfect results, provided those results still bring useful insights and tangible improvements.

Analytics - insource or outsource?
Insource pros
  • More ability to tailor the analytics capabilities to your own specific needs and circumstances;
  • It might not be data science per se, but having internal team members who understand the business and are data literate, even if they don’t do the coding themselves, can really help focus effort on the right things in the right way. The challenge then is: how do you build that data literacy capability?

Insource cons
  • Data science is a huge domain and very few data scientists will be expert in both forecasting and transport network optimisation, for example. Those who are will command very large salaries;
  • It’s very difficult to hire the right people because whoever is doing the hiring needs to have a deep understanding of data science and its applications;
  • Risk of going to significant expense to build an AI capability on top of data, processes and systems that themselves are not optimal;
  • Even if you have good foundations and enough internal data literacy to be able to hire effectively, you still need enough scale to justify the overhead. The areas where that kind of scale is most likely are analytics for forecasting, network and transport optimisation and, perhaps, inventory optimisation;
  • There are also risks around retention of talent and knowledge: you can invest quite a lot into developing people who are then headhunted for those skills, leaving a capability gap. Even if those people are retained, if they are not using those skills continuously, there can be capability attrition or knowledge obsolescence;

Outsource pros
  • Access to ‘best of breed’ technology and data science ‘know how’ that any single organisation alone is not going to be able to keep up with;
  • It is possible to partner successfully with external organisations if they have sufficient scale, and to structure relationships so that there is an ongoing commitment to reviewing and improving the capabilities they have supported;

Outsource cons
  • Hype: a lot of solution providers are claiming to offer AI but, from a pure data science perspective, few of them really do. Aside from, perhaps, automated reporting or advanced BI suites, AI solutions can’t be pre-programmed to deliver meaningful outcomes in many different contexts. Each problem is unique in its attributes: organisational structure, processes, how analytics feed into them and what the outcomes should be whether for inventory optimisation, network design or forecasting;
  • External solutions are typically very expensive and, unless you are very careful, any impact of their involvement quickly dissipates once they've moved on unless there is a plan for how to retain that knowledge and capability.

Analytics - how should cost-to-serve shape SC design?
CTS ‘state of play’
  • A typical CTS measures logistics, but not the full e2e picture - the ‘field to fork’ measure. This can become very complex very quickly, but more companies are now moving towards measuring this. Why? These calculations can then drive commercial teams to price deals, better inform the budgeting process, inform how customers should be serviced most profitably, and can also feed into ESG reporting. 
  • For many, Covid has brought to the fore the need to consider cost-to-serve vs operational risk of depending on import/export. Where products are manufactured or sourced is being reviewed. Use of CTS tends to be unevenly distributed across parts of the business, typically where costs are high or increasing, for example, in shipping and transport costs at the moment. The Amazon effect where customers expect next day delivery is also driving up transport costs so 'traditional CTS' tends to revolve around the logistics function.

CTS today:
  • When data is available, it’s not necessarily complete or transparent, so results are sub-optimal and a clear conclusion isn’t always obvious;
  • CTS is less commonly used to drive routine discussions and decision-making between functions;
  • CTS does not always drive pricing to customer - yet it can and perhaps should;
  • Looking at CTS further up the supply chain quickly becomes complex;
  • Without data in a usable form readily at hand, the situation has often already changed by the time an analysis can be done;
  • One-off CTS analyses typically only provide a snapshot which, while interesting and possibly informative, rarely becomes actionable as an ongoing tool which limits their practical value;
  • As greater emphasis is placed on supply chain resilience, there is a greater need to model the implications of possible configurations of footprints, supplier terms etc. and understand their impact on the bottom line.

Granular CTS analysis
  • SKU range optimisation is a challenge that requires having the right information and insight...something that traditional systems are not always able to provide. Removing SKUs purely based on revenue or units sold can have adverse consequences. Often, SKUs with high revenue or unit sales are relatively low on profit and in some cases loss making. In other cases, lower selling SKUs are the ones delivering the profit.
  • Best practice is to start with an end-to-end cost-to-serve analysis, using a range of cost drivers that accurately reflect the activities required to fulfil demand. This will not only identify which SKUs are delivering the profit and which are eroding it, it will also identify the ‘why’, enabling a more sophisticated approach to range optimisation and profit improvement. Once the analysis has identified the loss-making / low-profit SKUs, a review of each SKU is undertaken:
    • Identifying the strategic SKUs: new SKUs, range credibility SKUs or those that prevent competitors from getting a foothold in the market – these should be retained and regularly reviewed with a view to improving profit;
    • Profit improvement: can we identify actions and interventions at specific points in the supply chain to improve SKU profitability, e.g. changes to fulfilment channels / routes and controls around order multiples;
    • SKUs that cannot be made profitable and are not strategic are the real candidates for range reduction…the critical element is having the right information to get to this point.
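
The review steps above can be sketched as a simple classification pass: compute each SKU's profit after an activity-based cost-to-serve allocation, retain the profitable ones, review strategic loss-makers and flag the rest. All figures, driver names and thresholds below are illustrative assumptions, not data from the discussions:

```python
# Illustrative sketch: classify SKUs after an activity-based
# cost-to-serve allocation. All numbers and drivers are hypothetical.

def sku_profit(revenue, cogs, activity_costs):
    """Net profit after deducting COGS and allocated cost-to-serve."""
    return revenue - cogs - sum(activity_costs.values())

def classify(profit, strategic):
    """Retain profitable SKUs; review strategic loss-makers;
    flag the rest as range-reduction candidates."""
    if profit > 0:
        return "retain"
    return "review" if strategic else "range-reduction candidate"

# sku: (revenue, cogs, {cost driver: allocated cost}, is_strategic)
skus = {
    "A100": (1000, 600, {"picking": 50, "freight": 120, "storage": 30}, False),
    "B200": (400, 250, {"picking": 80, "freight": 90, "storage": 40}, True),
    "C300": (300, 220, {"picking": 60, "freight": 70, "storage": 20}, False),
}

results = {sku: classify(sku_profit(rev, cogs, costs), strategic)
           for sku, (rev, cogs, costs, strategic) in skus.items()}
# A100 is profitable; B200 is loss-making but strategic; C300 is neither.
```

The key point is that the decision uses allocated activity costs, not just revenue or units sold, so a high-revenue SKU can still surface as a loss-maker.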

Best practices
  • ‘Cost’-to-serve is somewhat of a misnomer as it can overly narrow the focus onto the cost side; framing it as something like ‘cost-to-serve contribution’ or ‘holistic customer investment’, or taking an EBIT-level view, offers a better perspective, coupled with an end-to-end view;
  • Incorporate data from other parts of the business in whatever format it is recorded (don’t try to introduce or change data formats, as this will surely impede progress);
  • Start with invoice data as a base and go to each function in your organisation to see what data can be pulled to augment it;
  • Set the CTS rules and get buy-in to allocation methodology. The warehouse is often the most complex area;
  • Ensure CTS is not being used to ‘beat up’ one function. Transparency of CTS data helps defuse a sometimes adversarial dynamic where one function’s or team’s view is pitted against another’s, acting as a glue rather than a wedge. This transparency itself often prompts more and better questions, which are enriched by different viewpoints;
  • Factor in FTE overhead, e.g. how many planners, and how much customer service time is devoted? Those FTEs are most often allocated to customers, geographies or products, so these costs can be attributed;
  • Establish the wins for each function and you’ll get the buy-in! How does better CTS insight help each function perform its role better and contribute to the common goal, e.g. improving EBIT? Consider extending the potential “wins” to include suppliers, who may need to be motivated to support upstream enhancements (open-book discussions). This could be particularly significant given the ESG expectations placed on suppliers;
  • CTS can start in supply chain but often ends up in commercial, where it is used as a framework for productive negotiation and to drive deals and offers. Sharing this information with customers can help influence how they accept being served, or nudge buying practices in a positive way;
  • Make CTS a living breathing exercise - take data feeds once a month. A one-off exercise tends to land and be ignored because there is no change or measurement for improvement. CTS analysis can then be used as a data feed for other changes, projects or servicing decisions;
  • Incorporating ESG metrics into CTS analysis is essentially no different to a standard CTS analysis: start with the data and a transparent rules engine to produce regular routine reports. In time, the rules and reports can be adjusted and fine-tuned and become embedded in the decision-making process.
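
The "transparent rules engine" mentioned above can be illustrated with a minimal pro-rata allocator: each cost pool is spread across customers according to an agreed driver (e.g. pallets for the warehouse, drops for transport). Pool names, drivers and figures are assumptions for illustration only:

```python
# Minimal sketch of a CTS allocation rules engine: each cost pool is
# allocated pro rata to an agreed activity driver. All data is hypothetical.

def allocate(pool_cost, driver_by_customer):
    """Split one cost pool across customers in proportion to a driver."""
    total = sum(driver_by_customer.values())
    return {c: pool_cost * v / total for c, v in driver_by_customer.items()}

# Agreed allocation rules: pool -> (cost, driver value per customer)
rules = {
    "warehouse": (10_000, {"Cust A": 300, "Cust B": 100}),  # pallets handled
    "transport": (6_000, {"Cust A": 40, "Cust B": 60}),     # delivery drops
}

cts = {c: 0.0 for c in ("Cust A", "Cust B")}
for pool, (cost, drivers) in rules.items():
    for customer, share in allocate(cost, drivers).items():
        cts[customer] += share
# Cust A: 7,500 warehouse + 2,400 transport = 9,900
# Cust B: 2,500 warehouse + 3,600 transport = 6,100
```

Because every pool is fully allocated by a visible rule, the methodology is easy to explain when seeking buy-in, and extending it (e.g. adding a CO2 pool for ESG reporting) is just another row in the rules table.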

Lessons learned, as reported by discussion participants:
  • CTS initiatives have been broadly successful and been worth the investment;
  • CTS means different things to different people, so it’s important to be clear: it’s more than cost-to-fulfil and should incorporate gross price and margin;
  • Complex interdependencies mean that brand-level analysis is too coarse and misses the vital links between behaviours and cost drivers;
  • Specify & validate basic assumptions about core drivers of cost in the network and then focus on just a few SKUs that hold the most potential savings as there’s too much complexity to be comprehensive;
    • focus on the main cost drivers and allocate general ledger costs on a volume or value basis;
    • important to combine at customer level;
    • also consider the cash position: two customers may look the same but, once their payment history and a cost of capital is factored in, it can make a 15-20% difference in their profitability;
  • An 80:20 pattern is common where 80% of the profitability is generated by 20% of the SKUs:
    • cutting SKU tails may mean pushing some customers to be served through wholesalers rather than direct;
    • be careful: even unprofitable SKUs may be contributing to fixed asset costs. Cutting them leads to cost absorption by other SKUs...the volume needs to be replaced with more profitable SKUs;
  • Defining costs is important but isn’t the ultimate goal...even if a cost can’t be precisely quantified, there may still be high confidence that a course of action will save money, so it shouldn’t be delayed just because a precise saving hasn’t yet been calculated;
  • General trade terms can offer a better lever to improve CTS than individual customer terms, which become too complex to monitor and manage
    • can’t expect to cut penalties entirely but a few classes of trade terms help set better frameworks & KPIs for managing CTS
    • this may mean the same customer sits in different classes if, for example, they have different internal customers, e.g. convenience stores compared to supermarkets
    • can include terms that help data capture
    • terms can be leveraged with a B2B customer portal by incentivising more profitable behaviours
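
The cash-position point above can be made concrete: financing a receivable until payment carries a working-capital cost, so two customers with identical trading profit can differ materially once payment terms and a cost of capital are factored in. The figures and the 10% cost of capital below are illustrative assumptions, not values from the discussions:

```python
# Illustrative sketch: adjust customer profit for the cost of financing
# receivables. All figures and the cost of capital are hypothetical.

def cash_adjusted_profit(profit, revenue, days_to_pay, cost_of_capital=0.10):
    """Deduct the working-capital cost of carrying the receivable."""
    financing_cost = revenue * cost_of_capital * days_to_pay / 365
    return profit - financing_cost

# Two customers look identical on trading profit...
fast_payer = cash_adjusted_profit(profit=100, revenue=1000, days_to_pay=30)
slow_payer = cash_adjusted_profit(profit=100, revenue=1000, days_to_pay=120)
# ...but payment behaviour opens a meaningful gap between them.
```

Even this crude adjustment shifts the ranking of apparently identical customers, which is why payment history belongs alongside cost drivers in a CTS model.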

Ongoing challenges: 
  • Data integration: overlay / integration with ERP, or incorporating data from 3PLs; secondary freight especially can be a black box
  • Collaborating with commercial teams who are reluctant to risk relationships by talking about perceived constraints to service. This is where a projected saving is helpful to aid the rationale for such customer conversations. 
  • There are often KPIs on NPD, which can result in 100 new SKUs instead of just the best 10

Related content
- Cost-to-serve analytics

Design - how is e2e segmentation improving resilience & profitability?

How is customer segmentation being applied to improve resilience and profitability?

  • Move from a ‘one-size-fits-all’ approach to customer service, which aims at maximum service at minimum cost (and, anyway, tends to fall short somewhere) to a segmented approach which ensures that operational decisions are aligned to commercial priorities. High growth / revenue / profitability customers have supply chain resources prioritised and highest levels of service whereas others potentially have slightly lower service levels;
  • Products might be segmented based on obsolescence, channel, complexity, etc. The combinations of customer and product segments form a matrix which can then be separated into two or three (or more) clusters based on their priority level. Terms vary but they might typically be strategic / prime, standard / sustain and manage. In one scenario, the strategic / prime cluster accounted for 50% of profitability but only about 10-20% of the customer product combinations. In contrast, the manage cluster was between 1-5% of profitability but around 30% of customer product combinations;
  • Strategic / prime clusters may go from 96% to 98% OTIF, particularly on the ‘in full’ element and may require higher safety stocks or more flexible delivery. Standard / sustain clusters might continue to have a similar level of service whereas manage clusters may have ordering limited to certain days of the week when cost is lower or capacity is greater, for example. This cluster also presents opportunities for SKU rationalisation;
  • If / when supply is constrained but there is spare capacity, the default expectation is that it should be used for a prime-cluster product / customer, whereas before it might have gone to whatever was available or to whichever customer was low on stock, even though that wasn’t necessarily the most profitable item;
  • Segmentation / clustering helps develop a common framework and understanding between commercial and supply chain teams to support better strategy and execution.
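
One simple way to form the clusters described above is to rank customer-product combinations by profit and cut on cumulative contribution. The thresholds, cluster names and profit figures are illustrative assumptions; the source describes clusters by their share of profitability rather than prescribing an algorithm:

```python
# Illustrative sketch: bucket customer-product combinations into
# prime / sustain / manage clusters by cumulative profit share.
# Thresholds and figures are hypothetical.

def form_clusters(profits, prime_share=0.5, manage_share=0.05):
    """profits: {(customer, product): profit}, assumed positive here.
    Top combinations covering `prime_share` of total profit are 'prime';
    the bottom `manage_share` are 'manage'; the rest are 'sustain'."""
    total = sum(profits.values())
    ranked = sorted(profits.items(), key=lambda kv: kv[1], reverse=True)
    clusters, cum = {}, 0.0
    for combo, p in ranked:
        cum += p
        if cum <= total * prime_share:
            clusters[combo] = "prime"
        elif cum <= total * (1 - manage_share):
            clusters[combo] = "sustain"
        else:
            clusters[combo] = "manage"
    return clusters

profits = {
    ("Cust1", "ProdA"): 500,  # one combination drives half the profit
    ("Cust2", "ProdA"): 300,
    ("Cust1", "ProdB"): 140,
    ("Cust3", "ProdB"): 60,
}
clusters = form_clusters(profits)
```

In this toy example a single combination lands in the prime cluster while contributing half the profit, echoing the pattern reported above where the strategic / prime cluster accounted for 50% of profitability from only 10-20% of combinations.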

What are the challenges with end-to-end customer segmentation?

  • Data availability for cluster analysis: even standard cost data is good enough to make a meaningful difference to customer segmentation but, where possible, cost-to-serve analysis serves to sharpen those boundaries and build confidence that the changes will have a material impact, particularly because distribution and manufacturing costs tend to be attributed by volume;
  • Buy-in from commercial / sales teams: they need to commit to defining service levels (including the implied trade-offs) and then monitor and manage the implementation through metrics like NPS;
  • Potential complexity / maintaining the same clusters across different markets: for example, an A-class SKU in a smaller, tier 3 market might be a C-class SKU in a larger tier 1 market. Similarly, supply chains with centralised production that export to multiple markets can be harder to cluster. Local regulation governing how discretionary decisions over constrained supply can be made, as well as trade terms, also has an impact;
  • Customer expectations: in cases where one customer takes a large / full range of products that cuts across SKU classes or segmentation clusters, the customer may expect the same level of service across the board. However, as part of perhaps a longer-term collaboration conversation, customers are likely to see the value of having better fill rates;
  • Multi-layer complexity: there may be segmentation for forecasting (e.g. by variability & value), service levels (e.g. variability & volume), replenishment, manufacturing and even raw materials / packs, each using different parameters and, therefore, defining different segments or clusters which are difficult to apply across the end-to-end network. However, customer segmentation can serve as a higher-level framework that helps rationalise and inform strategic and operational decisions, setting the context for the sub-level segmentations.

Design - how are digital twins being used for greater resilience?

A digital twin is a high definition digital replica of the physical supply chain which can be used to optimise performance, evaluate alternatives and determine the impact of changes in a safe, risk-free environment. This supports fast, evidence-based decision making and allows organisations to embark on strategic supply chain transformation programmes with confidence that the changes they make will deliver the expected outcomes.

Potential benefits:
  • Clarity: supply chains evolve over time and accumulate features that inhibit optimal performance due to acquisitions / mergers / divestments, changing strategies, customers and product portfolios, successive technologies, legacy systems etc. Supply chain design modelling introduces a discipline to step back from execution and planning mode into a design mode where default assumptions are challenged and potential new efficiencies are revealed;
  • Granular visibility: by necessity, management teams focus on KPIs which tend to be high level and paint a certain picture of the business. However, they potentially over-simplify the real picture and so obscure issues and opportunities that are revealed at a more granular level;
  • Risk mitigation: enables anticipation and preparation for potential shocks and disruption, opening the doors for greater agility;
  • Higher quality decisions: the discipline of design forces critical evaluation and, once a model is in place, compels decisions to be evidence-based, rationalised and quantified, thereby reducing or eliminating bias. The digital twin / model also helps to democratise sophisticated modelling across business units and functions so reducing potential blind spots and leading to better, more sustainable decisions;
  • Agility / faster decisions: near real-time decision-making becomes possible as the potential impacts of any proposed course of action are modelled and so don’t need to be calculated from zero;
  • Enable further digitalisation: an effective model may enable use of other digital technologies such as RPA, machine learning and artificial intelligence which offer further efficiencies.

Potential challenges:
  • Implementation cost: as ever, it requires significant investment in terms of time, money and effort which also implies opportunity costs;
  • As getting the granular level of data necessary to build simulation models is a common challenge, digital twin initiatives are often run alongside data warehouse / lake projects;
  • Theory vs. practice: can be a disconnect between what is possible and desirable in the model versus what is actionable in reality;
  • Continual investment: supply chain design modelling should not be a one-off or occasional exercise; it must be continually updated to incorporate changes and new data.

Use cases include:
  • Closing gap between plan and execution based on unified view and mutual understanding
  • Post-plan analytics to determine where errors were made
  • Footprint optimisation project which reduced spend by 20%
  • Cost-to-serve analysis & improved customer segmentation
  • Modelling and optimisation of distribution network
  • Contingency / continuity planning, including related to Brexit impact
  • Also done for tier 2 & 3 suppliers
  • Shifted from make-to-order to make-to-stock
  • Advanced analytics to improve data-driven planning & decision-making
  • CO2 analysis & optimisation
  • Omni-channel distribution analysis
  • Supply / capacity planning
  • Network design & redesign in response to supply disruptions
  • Ongoing network risk monitoring & management

Related content

Top SC Planning Best Practices...Digested

Real lessons learned distilled from a series of practitioner exchanges on...

Digital Transformation of Supply Chain Planning

Digital transformation has been on corporate agendas for some time but, borne of necessity, tangible progress has accelerated significantly since 2020. A silver lining of the pandemic may be that the business case is clearer, so the focus has shifted towards implementing and scaling digitalisation initiatives.

Supply Chain Design Modelling and Analytics

Supply chain network (re)design used to come around every few years or so, often based on quite a high level view of customer segments and associated costs. Now, competitiveness and even business continuity relies on the capability to dynamically adapt network flows based on a much more granular understanding of cost drivers.

Volatility, Agility & Resilience: Next-Level Planning

Volatility, uncertainty, complexity and ambiguity were already increasing before the pandemic but, now, it is clear that we need the next level of demand forecasting, sensing, planning and execution.