DATA, MASTERY AND CULTURE: PART 2

In my blog Metamorphosis, I stated, “In 2017 we began our journey with artificial intelligence and in 2018 put in place powerful new core technologies that enable far more than we envisioned when we began. These core technologies are beginning to transform our company.”

As part of this journey, we have been working in conjunction with leading universities on advanced AI research that underpins our technology platform. I am pleased to report that one of our researchers, Mark Traquair at the University of Ottawa, has been awarded the Winter 2019 Cognos Prize for his advanced work on one of our deep learning AI projects. (Link).

In the same blog, I stated, “One area where I want to see improvement in 2019 is a more direct teaming with customers to ensure that our value is maximized within their company.” To this end, my last blog, Data, Mastery and Culture: Part 1, contained an analysis of customers’ results (Figure 1) and suggested some of the things top-performing companies are doing with our tools and data to drive performance.

In this blog, I investigate some of these best practices. I see many parallels between Mark’s work and that of top-performing companies around the theme of data, mastery and culture. It’s a culture of value creation, excellence, deep subject matter expertise and fact-based, data-driven investigations.

I don’t think it is a coincidence that the top performers from last week’s blog are the heaviest users of our tools, but it is how they master our tools that drives results. Note that the star in Figure 1 represents the starting point for the top-performing EMS company, now positioned on the extreme right.

At the top of my list of best practices is the organization of data. If you don’t know what you are buying, how many you’re buying and where you are buying them, you have little hope of getting good results. The procurement and engineering data of many companies are in bad shape. Many of you deal with different and incompatible systems, multiple component naming conventions, spelling errors, and visibility only to supplier part numbers rather than the manufacturer’s, as well as a host of other complexities that make accurate analysis difficult. Our experience shows that there is a gold mine awaiting companies who organize their data and go after component savings simply by eliminating the differences in price being paid for the same components in different factories or locations. I have observed that, for many companies, freebenchmarking.com’s input template is a first step in data organization because it requires information from different systems to be put into a standard format, as sketched below.
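To make the idea concrete, here is a minimal sketch of what moving two sites’ purchasing extracts into one standard layout might look like. The column names, part numbers, prices and quantities are all hypothetical, and this is not Lytica’s actual input template.

```python
# Illustrative sketch: purchasing extracts from two different systems are
# mapped onto one common schema so the same component can be compared
# across sites. All field names and values below are hypothetical.
import pandas as pd

# Extract from site 1's ERP (its own field names)
site1 = pd.DataFrame({
    "PartNo": ["CAP-0100", "RES-0220"],
    "MfrPN": ["GRM155R71H104KE14D", "CRCW060310K0FKEA"],
    "Price": [0.0120, 0.0095],
    "Qty12M": [500_000, 1_200_000],
})

# Extract from site 2's ERP (different field names, same kinds of data)
site2 = pd.DataFrame({
    "item": ["CAP-0100", "RES-0220"],
    "mfr_part": ["GRM155R71H104KE14D", "CRCW060310K0FKEA"],
    "unit_cost": [0.0150, 0.0095],
    "annual_usage": [300_000, 400_000],
})

STANDARD_COLUMNS = ["internal_pn", "mpn", "unit_price", "annual_qty", "site"]

def to_standard(df, mapping, site):
    """Rename a site extract into the standard layout and tag its origin."""
    out = df.rename(columns=mapping)
    out["site"] = site
    return out[STANDARD_COLUMNS]

combined = pd.concat([
    to_standard(site1, {"PartNo": "internal_pn", "MfrPN": "mpn",
                        "Price": "unit_price", "Qty12M": "annual_qty"}, "Site 1"),
    to_standard(site2, {"item": "internal_pn", "mfr_part": "mpn",
                        "unit_cost": "unit_price", "annual_usage": "annual_qty"}, "Site 2"),
], ignore_index=True)

print(combined)
```

Once everything sits in one table with one set of column names, comparing prices for the same component across factories becomes a simple grouping exercise.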

We call these differences Arbitrage and Duplication savings in our reports. Arbitrage savings are those associated with eliminating different prices across the company for the same company-assigned part number. Duplication savings are similar but address different prices for the same or similar MPNs (Manufacturer Part Numbers). Our clients often achieve duplication savings from our tools by matching form, fit and function devices with diverse pricing, or very similar parts sharing a common pricing distribution. In any case, these are savings at your fingertips. How can a supplier not be moved to correct its pricing when you can show you are buying the same thing from them at different prices?
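As a simple illustration of the two measures, the sketch below reprices every purchase at the lowest price observed anywhere in the company for the same key: the company-assigned part number for arbitrage savings, and the exact-match MPN for duplication savings. The data and the repricing rule are my simplification for illustration, not necessarily how our reports compute these figures.

```python
# Illustrative calculation of arbitrage vs. duplication savings on a tiny,
# made-up spend table in the standardized layout from the earlier sketch.
import pandas as pd

spend = pd.DataFrame({
    "internal_pn": ["CAP-0100", "CAP-0100", "CAP-0555"],
    "mpn":         ["GRM155R71H104KE14D"] * 3,   # same device, hypothetical MPN
    "unit_price":  [0.0120, 0.0150, 0.0150],
    "annual_qty":  [500_000, 300_000, 200_000],
    "site":        ["Site 1", "Site 2", "Site 2"],
})

def savings_vs_lowest_price(df, key):
    """Savings if every purchase sharing `key` were made at the lowest
    company-wide price observed for that key."""
    best = df.groupby(key)["unit_price"].transform("min")
    return ((df["unit_price"] - best) * df["annual_qty"]).sum()

# Arbitrage: same company-assigned part number bought at different prices.
arbitrage = savings_vs_lowest_price(spend, "internal_pn")

# Duplication: same (here, exact-match) MPN bought at different prices,
# including cases where the internal part numbers differ.
duplication = savings_vs_lowest_price(spend, "mpn")

print(f"Arbitrage savings:   ${arbitrage:,.2f} per year")
print(f"Duplication savings: ${duplication:,.2f} per year")
```

In this toy example the duplication figure is larger than the arbitrage figure because the same MPN hides behind two different internal part numbers, which is exactly the situation that good data organization exposes.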

Figure 2 shows trendlines for the relationship between duplication savings and Competitiveness Index for all companies that have used Freebenchmarking.com (FBDC) in the past quarter (calendar Q1 2019). In this analysis I used only exact MPN matches; the blue line shows all of the data, while the green line shows the same data with companies that had no duplication savings removed. In either case, a relationship exists. In one extreme case, a client had duplication savings equal to 10% of their total materials spend on electronic components. That’s a steep price to pay for poor data organization.

Vast amounts of intrinsic duplication savings exist within EMS companies, industrial companies and companies that have grown through acquisition. I implore you: if your company is in one or more of these categories, please drop me a note. Remember, cost savings are an annuity. If you save $100K on your component spend next month, you’re likely saving that same $100K each month (until your mix has turned over).

Second to data organization is data quality. Some could argue that quality is number one and organization is number two, and I wouldn’t fight them on it. I have written before about data quality and, in fact, set up a test in my September 2018 blog, Shortage Mitigation Revisited. The test was to spot the MPNs shown in Figure 3 among 20 rows of part numbers typical of what we receive from customers. Cleansing and reformatting input like this is part of what our AI-enabled systems do in order to fix spelling, extract the true part numbers, separate run-on concatenations and even realign information into the correct columns (a crude sketch of this kind of cleansing follows below).

If you are sending this to me, there is a good chance you are sending it to your suppliers as well. You should be thankful that your suppliers accept such input and ship you the correct parts – or maybe you shouldn’t. It is not uncommon for suppliers such as distributors to have long-term employees who know your company so well that when you order X they know to ship Y. This is a significant risk exposure. If that employee quits, gets promoted or changes roles, you may actually get X when you order X, potentially creating yield, quality or reliability issues along with line stoppages, decertification and a host of other horrible outcomes.
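The sketch below is a crude, rule-based stand-in for that kind of cleansing, shown only to make the category of problem concrete; our AI-enabled systems are far more sophisticated than a handful of regular expressions. The sample strings and the known-MPN list are hypothetical.

```python
# Crude illustration of MPN cleansing: normalize noisy free-text fields,
# split run-on concatenations and try to recover the true part number.
import re
from typing import Optional

KNOWN_MPNS = {
    "GRM155R71H104KE14D",   # hypothetical examples of already-clean MPNs
    "CRCW060310K0FKEA",
}

def extract_mpn(raw: str) -> Optional[str]:
    """Try to pull a plausible MPN out of a messy free-text field."""
    # Normalize case and drop characters that rarely appear in MPNs.
    cleaned = re.sub(r"[^A-Za-z0-9\- /]", "", raw).upper()
    # Split run-on concatenations on spaces and slashes, then look for the
    # longest token that is a known MPN or at least MPN-shaped.
    tokens = re.split(r"[\s/]+", cleaned)
    for tok in sorted(tokens, key=len, reverse=True):
        bare = tok.replace("-", "")          # tolerate stray hyphens
        if bare in KNOWN_MPNS:
            return bare
        # Fallback heuristic: long, alphanumeric and containing digits.
        if len(bare) >= 8 and bare.isalnum() and re.search(r"\d", bare):
            return bare
    return None

samples = [
    "  grm155r71h104ke14d  0.1uF 16V X7R 0402",   # description run-on
    "CRCW0603-10K0-FKEA",                          # stray hyphens
    "CAP 0402 SEE QUOTE",                          # no MPN present at all
]
for s in samples:
    print(repr(s), "->", extract_mpn(s))
```

Even this toy version makes the point: if your raw records need this much repair before they can be matched, your suppliers are doing the same guesswork on your behalf.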

Bottom line: high data quality, along with good data organization and access, sits high on the list of best practices leading to competitive pricing.

I will be looking at other best practices related to benchmarking and performance management in my next blog.

Ken Bradley is the Chairman/CTO & founder of Lytica Inc., a provider of supply chain analytics tools, and of Silecta Inc., an SCM operations consultancy.
