Achieve Data Mastery Before Tackling A Digital Transformation


Matt Elenjickal, the Founder and Chief Executive Officer of FourKites, has pointed out that “data is the raw fuel of a digital transformation.” But it must be quality data! “Just as any chef will tell you that anything less than the best quality ingredients can render a dish inedible, so too, bad data can turn any data-driven IT effort sour.”

When enterprise implementations take longer than expected, as almost all do, the time needed to clean up the data is often the leading factor. When the performance of enterprise systems degrades, it is often because the parameters used in those systems are no longer accurate. That, too, is a data issue. Both problems point to master data management (MDM) as a critical performance driver for the enterprise.

What is Master Data Management?

Master data management is the discipline of ensuring data is accurate, accessible, and up to date. Unfortunately, for many departments, the data they need to operate effectively comes from across the enterprise; it is not generated inside the department itself. APQC and the Digital Supply Chain Institute (DSCI) have conducted best practice and benchmarking research on MDM. APQC is a member-based nonprofit that conducts benchmarking, best practices, and performance improvement research. Part of what makes APQC’s research so strong is that it draws large numbers of members to participate and then applies a rigorous validation process. In this research, more than 1,300 qualified respondents participated.

APQC defines master data as the set of identifiers and extended attributes that describe the core entities of an enterprise. These include customers, products, suppliers, and sites/assets. For the supply chain, other core data sets would also include throughput/lead times and global risks. All this data must be kept up to date and clean.
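To make that definition concrete, here is a minimal sketch, in Python, of what a master data record along these lines might look like. The entity types come from APQC’s list above; the specific field names (record_id, attributes, last_validated, and so on) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class EntityType(Enum):
    """Core entities named in APQC's definition of master data."""
    CUSTOMER = "customer"
    PRODUCT = "product"
    SUPPLIER = "supplier"
    SITE_ASSET = "site/asset"


@dataclass
class MasterRecord:
    """An identifier plus the extended attributes that describe one entity."""
    record_id: str                       # the unique identifier
    entity_type: EntityType
    attributes: dict[str, str] = field(default_factory=dict)  # extended attributes
    last_validated: date | None = None   # when the record was last confirmed clean


# Example: a supplier record with a few extended attributes.
acme = MasterRecord(
    record_id="SUP-00042",
    entity_type=EntityType.SUPPLIER,
    attributes={"name": "Acme Industrial", "country": "US", "currency": "USD"},
)
```

The last_validated field reflects the point made below: master data is only trustworthy if someone can say when it was last confirmed to be clean.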

Top performers in APQC’s research engaged in a continuous process of organizing, categorizing, synchronizing, and enriching master data records. Typically enabled by technology, MDM requires business and IT teams to work together to ensure uniformity, accuracy, stewardship, semantic consistency, and accountability for the enterprise’s official shared master data assets.

Effective master data management is not a “one and done” process. It is an ongoing journey requiring clear governance and ongoing attention. Centralized ownership is a foundational requirement.

In APQC’s research, when respondents were asked how master data is managed, only 64% reported that their organization centralizes it in an enterprise-wide function, such as one led by a chief data officer. A decentralized approach – for example, when sales owns customer data and the supply chain owns supplier data – sets an organization up for failure.

High quality data is critical. When an organization ingests data from dozens or even hundreds of diverse applications, it will see duplicated records, incomplete fields, formatting inconsistencies, different languages, and different units of measurement.
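As a small illustration of the kind of cleanup this implies, the sketch below normalizes two common inconsistencies (units of measurement and name formatting) and then flags likely duplicates. The field names and the pounds-to-kilograms conversion are assumptions chosen for illustration, not anything APQC or FourKites prescribes.

```python
# Illustrative cleanup of records arriving from multiple systems.
# Field names and the unit conversion here are hypothetical examples.

LB_PER_KG = 2.20462


def normalize(record: dict) -> dict:
    """Canonicalize formatting and units so records can be compared."""
    clean = dict(record)
    clean["name"] = " ".join(record["name"].split()).title()  # fix casing/whitespace
    if record.get("weight_unit") == "lb":                     # standardize on kilograms
        clean["weight_kg"] = round(record["weight"] / LB_PER_KG, 2)
    else:
        clean["weight_kg"] = record["weight"]
    return clean


def find_duplicates(records: list[dict]) -> list[tuple[dict, dict]]:
    """Flag records whose normalized names collide as likely duplicates."""
    seen: dict[str, dict] = {}
    dupes = []
    for rec in map(normalize, records):
        key = rec["name"].lower()
        if key in seen:
            dupes.append((seen[key], rec))
        else:
            seen[key] = rec
    return dupes


raw = [
    {"name": "ACME  industrial", "weight": 220.46, "weight_unit": "lb"},
    {"name": "Acme Industrial", "weight": 100.0, "weight_unit": "kg"},
]
print(find_duplicates(raw))  # both normalize to the same name -> flagged as duplicates
```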

Achieving Data Mastery

Because so many companies rely on FourKites for real-time transportation visibility, and because of the complexity involved in providing it, FourKites has had to become a master at master data management. The company offers several tips.

Multiple Layers of Checks – “The most effective organizations have multiple layers of checks on data as it’s coming into the organization. The first order of business is stopping bad data from getting into systems in the first place.” (A minimal code sketch of such layered checks appears after these tips.)

Take Humans out of the Loop – Errors are far more common when organizations rely on humans to input the data. Unfortunately, it is not always possible to take humans out of the loop. This is particularly true for entering new customer and supplier data.

Alerts – When bad data is found, the next step is to inform someone. There needs to be an alerting mechanism and a way to escalate alerts for important, time-sensitive data that is dirty. Teams should focus on the most critical data problems first.

Remediation – The next step is remediation. In short, what does an organization need to do when it learns data is bad? Companies need to determine whether they can resolve a given data quality issue themselves, or whether they need to go back to the external providers who were the source of the data.

Use Technology – Pattern recognition and other functionality can help detect when data is likely to be wrong. For example, FourKites relies heavily on latitude and longitude coordinates for freight. They use technology to ensure this data comes in a specific, numeric form.
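Pulling several of these tips together, here is a minimal sketch of what layered ingestion checks with alerting could look like, following the latitude/longitude example. The validation rules, severity handling, and alert function are illustrative assumptions, not FourKites’ actual pipeline.

```python
# A minimal sketch of layered ingestion checks with alerting. The rules and
# alert mechanism here are illustrative assumptions, not a vendor's pipeline.


def check_format(record: dict) -> list[str]:
    """Layer 1: reject coordinate values that are not numeric at all."""
    errors = []
    for key in ("lat", "lon"):
        if not isinstance(record.get(key), (int, float)):
            errors.append(f"{key} is missing or non-numeric")
    return errors


def check_range(record: dict) -> list[str]:
    """Layer 2: numeric, but physically impossible coordinates."""
    errors = []
    if isinstance(record.get("lat"), (int, float)) and not -90 <= record["lat"] <= 90:
        errors.append(f"lat {record['lat']} outside [-90, 90]")
    if isinstance(record.get("lon"), (int, float)) and not -180 <= record["lon"] <= 180:
        errors.append(f"lon {record['lon']} outside [-180, 180]")
    return errors


def alert(record: dict, errors: list[str]) -> None:
    """Stand-in for a real alerting/escalation mechanism (email, queue, ticket)."""
    print(f"ALERT shipment {record.get('shipment_id')}: {'; '.join(errors)}")


def ingest(record: dict) -> bool:
    """Run each layer in order; stop bad data before it enters the system."""
    errors = check_format(record) + check_range(record)
    if errors:
        alert(record, errors)   # inform someone, then keep the record out
        return False
    return True                 # clean record may flow to downstream systems


ingest({"shipment_id": "SH-1", "lat": 41.88, "lon": -87.63})  # passes
ingest({"shipment_id": "SH-2", "lat": "41.88N", "lon": 200})  # blocked and alerted
```

The key design point is that each layer catches a different class of error: format checks stop garbage at the door, while range checks catch data that is well formed but cannot be true.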

Technology for data error detection is also increasingly embedded in other enterprise applications, particularly supply chain solutions. For example, graph databases are beginning to be used to help locate the right master data across the variety of systems involved in a planning implementation. Further, machine learning is being used to keep lead-time information up to date.
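The exact lead-time models vary by vendor. As a simple stand-in for the idea, the sketch below keeps a per-lane lead-time estimate current with an exponentially weighted moving average, updating it as each observed lead time arrives. The lane name and smoothing factor are assumptions for illustration; real implementations may use richer machine learning models.

```python
# A simple stand-in for keeping lead-time master data current: an exponentially
# weighted moving average that updates as each observed lead time arrives.


class LeadTimeEstimator:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                     # weight given to the newest observation
        self.estimates: dict[str, float] = {}  # current estimate per lane

    def update(self, lane: str, observed_days: float) -> float:
        """Blend the new observation into the running estimate for this lane."""
        prev = self.estimates.get(lane)
        if prev is None:
            new = observed_days                # first observation seeds the estimate
        else:
            new = self.alpha * observed_days + (1 - self.alpha) * prev
        self.estimates[lane] = new
        return new


est = LeadTimeEstimator()
for days in [10.0, 12.0, 14.0, 13.0]:          # observed lead times, in days
    current = est.update("Shanghai->Chicago", days)
print(round(current, 2))                       # estimate drifts toward recent reality
```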

Conclusion

One driver of digital transformations is the premise that Big Data can be leveraged for competitive differentiation. Big Data is a popular concept, but most companies are drowning in data. Does all that data need to be checked and cleaned on a continuous basis? Probably not. Not all data is equally important. What is also needed is an understanding of the targeted data required to accomplish specific goals and tasks, and a prioritization of those activities.


