Garbage in, garbage out does not have to be your reality
The core message of my previous article in this series on spend analytics was very simple: spend analytics is easier than you think and can deliver results very quickly. It was a bit of a rant and a soapbox moment, but in this day and age, there is no reasonable excuse for a procurement team in any organization, whether private, not-for-profit or public in nature, small, medium or large in size, not to know how much it spends, where, on what and with whom.
In my previous article, I alluded to the fourth excuse we often hear organizations give for why they can’t do spend analysis: their data is so bad that no one, anywhere, could fix it. In any system or organization where hundreds, if not thousands, of people can buy goods and services, there will inevitably be data quality issues that make spend visibility more difficult, but certainly not impossible.
Some of the key data culprits are errors in the data extraction, duplicate suppliers (a problem that grows as purchasing card spend increases), inconsistent, incorrect or missing classifications, and a lack of additional information about suppliers (size, diversity and location). Here, I want to lay out a process that you or your chosen partner can follow when you have decided you are ready to overcome those data challenges and are no longer willing to live with poor quality spend data.
Overcoming the Data Deficit — Collect and Normalize
If you are part of a small organization, you may very well be able to do all of this on spreadsheets with your existing team. If you are a medium-to-large organization, the process becomes more difficult, but the same basic theory still applies.
First, collect the data from each of your transaction systems. At a minimum, this will be accounts payable data for every organization; layered on top of that may be pCard data, eProcurement data and/or contract data. For each transaction, you need to know, at a bare minimum, who was paid, at least one date (typically the invoice, payment or accounting date), how much was paid, an invoice number and, ideally, the part of the organization that spent the money (department, business unit, etc.).
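To make this concrete, here is a minimal sketch in Python with pandas, one of the tools data scientists reach for when spreadsheets run out of steam. The file names and column headings are hypothetical stand-ins for whatever your systems actually export.

```python
import pandas as pd

# Hypothetical extract files; your systems' names and formats will differ.
ap = pd.read_csv("ap_export.csv")        # accounts payable transactions
pcard = pd.read_csv("pcard_export.csv")  # purchasing card transactions

# The bare-minimum fields described above, checked against the AP extract.
# Column names here are illustrative assumptions.
required = ["vendor_name", "invoice_date", "gross_amount",
            "invoice_number", "department"]
missing = [c for c in required if c not in ap.columns]
if missing:
    raise ValueError(f"AP extract is missing required fields: {missing}")
```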
Second, put the data you have into standardized columns. In other words, you may see “transaction date” in your pCard spend file and “invoice date” in your AP file; one source may say “supplier name” while another says “vendor name.” The purpose of this step is to take each heading and match it to its counterparts in the other files. This can be done in Excel or Access, but data scientists use more robust tools to complete this step as the dataset gets larger.
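Continuing the same sketch, a heading map for each source does the matching. The specific column names are, again, assumptions about what your extracts might contain.

```python
# Map each source's headings onto one standard set (illustrative names).
AP_MAP = {"vendor_name": "supplier_name", "invoice_date": "transaction_date",
          "gross_amount": "amount"}
PCARD_MAP = {"merchant_name": "supplier_name", "billed_amount": "amount"}
# (In this example the pCard file's "transaction_date" already matches
# the standard name, so it needs no entry in the map.)

ap = ap.rename(columns=AP_MAP)
pcard = pcard.rename(columns=PCARD_MAP)

# Tag each row with its source system, then stack into one table.
ap["source"] = "AP"
pcard["source"] = "pCard"
spend = pd.concat([ap, pcard], ignore_index=True)
```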
Third, check the fitness and accuracy of the data exports. This is critically important because there is always something wrong with the initial data extract. It could be minor and quickly fixable, but it could also be major, and skipping this step would mean that all of the work you do on your data in later steps is pointless because the base data was wrong. This step can be achieved with some simple tests: assess whether the total spend is what you expect it to be; look at the data by supplier and ask whether your top suppliers are the ones you expect to see; sort departments by total spend and ask the same question. You should also check for duplicated transactions in the file, missing supplier names or addresses, blank department descriptions and out-of-scope spend, to name a few. Some problems can be fixed in place, while others will require that the data be re-extracted.
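Those tests translate into just a few lines against the combined table from the sketch above; the column names remain illustrative.

```python
# Does the total match what finance expects to see?
total = spend["amount"].sum()
print(f"Total spend: {total:,.2f}")

# Top suppliers and departments: do they look right?
print(spend.groupby("supplier_name")["amount"].sum().nlargest(10))
print(spend.groupby("department")["amount"].sum().sort_values(ascending=False))

# Common extract problems: duplicated transactions and blanks.
dupes = spend.duplicated(subset=["supplier_name", "invoice_number", "amount"],
                         keep=False)
print(f"Possible duplicate transactions: {dupes.sum()}")
print(f"Rows missing a supplier name: {spend['supplier_name'].isna().sum()}")
print(f"Rows missing a department: {spend['department'].isna().sum()}")
```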
Overcoming the Data Deficit — Turn Garbage Into Gold
The fourth step is to de-duplicate suppliers. Do you know how many ways there are to spell FedEx or AT&T? Software can help at this stage, but human verification is also important to ensure that suppliers are not rolled together incorrectly. This step is important so that you don’t spend more time than necessary in the later steps.
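As a crude illustration of what deduplication software does under the hood, here is a simple name-normalization pass. Real tools add fuzzy matching and address comparison on top of this, and, as noted, a human should still review the rollups before accepting them.

```python
import re

# Common legal suffixes to strip (longest first so "CORPORATION" wins
# over "CORP"); extend this list for your own data.
SUFFIXES = r"\b(INCORPORATED|CORPORATION|COMPANY|CORP|INC|LLC|LTD|CO)\b"

def normalize(name: str) -> str:
    """Uppercase, strip punctuation and legal suffixes, collapse spaces."""
    name = re.sub(r"[^A-Z0-9& ]", " ", name.upper())
    name = re.sub(SUFFIXES, "", name)
    return re.sub(r"\s+", " ", name).strip()

# "FedEx Corporation" and "FEDEX CORP." now converge on the key "FEDEX".
spend["supplier_key"] = spend["supplier_name"].fillna("").map(normalize)
```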
The fifth step is to classify your suppliers or transaction lines to a common taxonomy. Why a common taxonomy? Because when everyone uses the same taxonomy, it becomes far easier to compare spend and look for collaborative opportunities. When working with a third-party provider, this step is often completed using a software-based classification tool.
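UNSPSC is one example of a widely used common taxonomy in procurement. To show the mechanics only, here is a toy rule-based classifier against a made-up two-level taxonomy; the keywords and categories are purely illustrative, and commercial tools use far richer rules and line-level detail.

```python
# Keyword rules mapping normalized supplier keys to a two-level taxonomy.
RULES = [
    ("FEDEX", ("Logistics", "Parcel & Courier")),
    ("AT&T", ("IT & Telecoms", "Telecommunications")),
    ("STAPLES", ("Office", "Office Supplies")),
]

def classify(supplier_key: str):
    """Return the first matching (level 1, level 2) category, else Unclassified."""
    for keyword, category in RULES:
        if keyword in supplier_key:
            return category
    return ("Unclassified", "Unclassified")

spend["category_l1"], spend["category_l2"] = zip(
    *spend["supplier_key"].map(classify))
```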
The sixth step, if classification wasn’t done manually in the first place, is to manually validate and review the classifications that were applied. Software-based classification tools can only take you so far, and there is no magic system that can correctly classify data to the level of quality and consistency you need. Whether it is your team or a third party, human eyes on the data make a fundamental difference to the output quality.
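One practical way to organize that human review, continuing the sketch: export everything the rules could not classify, plus a random spot-check sample of what they could.

```python
# Build a review queue: all unclassified rows plus a 5% spot-check sample
# of classified rows (the sample rate is an illustrative choice).
review = pd.concat([
    spend[spend["category_l1"] == "Unclassified"],
    spend[spend["category_l1"] != "Unclassified"].sample(frac=0.05,
                                                         random_state=0),
])
review.to_csv("classification_review.csv", index=False)
```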
The seventh step is optional: enrich your data with third-party data sources. This includes data such as the size of the supplier, its diversity ownership status or an environmental friendliness ranking, but could also extend to credit scores or supplier satisfaction scores from external sources.
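Mechanically, enrichment is usually just a join. A sketch, assuming a hypothetical enrichment file keyed on the same normalized supplier name used earlier:

```python
# Hypothetical third-party file with one row per supplier key.
enrich = pd.read_csv("supplier_enrichment.csv")
spend = spend.merge(
    enrich[["supplier_key", "employee_count", "diversity_status"]],
    on="supplier_key",
    how="left",  # keep every transaction even when no enrichment match exists
)
```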
Visualizing and Analyzing Your Transformed Data
The eighth step, and it may surprise many people that it comes this late in the process, is to decide on a business intelligence toolset. This might be as simple as Excel or Microsoft Power BI (which many organizations already have access to through their Microsoft agreements), or one of the many other visualization tools available, which are too numerous to list here. While the choice of analytical tool is important in getting people to use the data to drive change, it ultimately doesn’t matter if you haven’t gone through the data transformation steps above first.
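To underline that point: once the transformation steps are done, even a basic pivot on the table from the sketches above answers “how much, on what, and from which source” without any specialist tooling.

```python
# One pivot over the cleaned table: spend by category and source system.
summary = spend.pivot_table(index="category_l1", columns="source",
                            values="amount", aggfunc="sum", fill_value=0)
print(summary)
```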
Wrapping it all up
If you have worked through the steps above, whether on your own or with help, you will have turned your garbage into gold and will be ready to accelerate your ability to identify cost savings, improve efficiency and reduce risk on bought-in goods and services at your organization. You will be amazed how quickly opportunities to do all of those things jump out once you have visibility in place with data you trust.
Jonathan White, Director, Business Development (Americas) Applied Analytics, Spikes Cavell, part of CSC; 1775 Tysons Blvd., Suite 900, Tysons, VA 22102; (571) 313-5257 ext. 102; [email protected].