5 Steps to ERP Master Data Rationalization

Master data rationalization is the first step that organizations should undertake in their drive for master data quality. This article suggests 5 steps to successful ERP Master Data Rationalization.


Enterprise software applications have become so indispensable that they have a material effect on company valuations. Over the years, we have seen ERP problems cause companies to incur charges totalling hundreds of millions of dollars, to miss the market with their products, and to watch mergers fail to deliver their intended results. The health and continuing welfare of a company's ERP system is clearly an issue for the CEO. Many ERP efforts focus on process optimization and on extending processes to other enterprise systems such as customer relationship management (CRM) or supplier relationship management (SRM). As processes broaden to involve other organizational units or enterprise applications, many organizations discover that process efficiency and reliability suffer. Accurate reporting is no longer possible, and confidence in the systems drops. Investigation into these problems reveals that bad master data is often the root cause of these process degradations.

Bad master data (that is, master data that is inaccurate, duplicated, incomplete, or out of date) hampers the accuracy of analysis, causes expensive exceptions that must be resolved, and prevents refinement of processes. Moreover, when bad data or flawed analysis is shared with partners, not only are the associated processes affected, but the level of trust is also undermined. Under these conditions, frustrated employees tend to fall back on their manual processes, and future efforts in collaboration, integration, and automation become more difficult because of employee resistance. In short, bad master data will destroy the best-designed business processes.

Master data that is correctly classified with a common taxonomy, and that has normalized and enriched attributes, yields a granular level of visibility that is critical to search and reporting functions. Before undertaking these and similar business initiatives, organizations must first institute the policies, procedures, and tools that ensure master data quality.

Master data rationalization is the first step that organizations should undertake in their drive for master data quality. Key to this process is the proper classification and attribute enrichment of the item master record. Most systems use some sort of taxonomy to classify items. However, for use throughout the enterprise and with external partners, organizations should select a taxonomy that delivers depth and breadth, such as UNSPSC (the United Nations Standard Products and Services Code), and that allows granular visibility of the item. Organizations rarely have the in-house resources to evaluate and select the proper taxonomies, so they should ensure that their consulting partners can demonstrate experience with taxonomy selection and deployment. Item record attributes play a similarly important role. Attributes define the item and are essential for successful parametric searches. Incomplete or incorrect attributes prevent items from being found in the systems, resulting in proliferation of parts and bloated inventories.

Step 1: Extraction and Aggregation

The master data rationalization process begins with extraction of the master data from the various systems of record, whether they are internal systems such as ERP, SRM, or legacy applications, or external systems such as purchasing card suppliers. These records are aggregated in a database that serves as the source for the follow-on processing. Initial validation can take place at this point to send bad records back for repair (see Figure 1).
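As a minimal sketch of this stage, the following Python fragment aggregates records from several systems of record into a single staging table and routes records that fail basic validation back for repair. The source extracts, field names, and validation rule are hypothetical, chosen only to illustrate the flow:

```python
import sqlite3

# Hypothetical extracts from three systems of record; in practice these
# rows would come from ERP, SRM, legacy, or purchasing card feeds.
source_records = [
    ("ERP",    "HN-1420-SS", "HEX NUT 1/4-20 SS316"),
    ("SRM",    "hn1420ss",   "hex nut, stainless"),
    ("LEGACY", "",           "NUT"),  # missing part number: fails validation
]

def is_valid(part_number, description):
    """Initial validation: reject records missing key fields."""
    return bool(part_number.strip()) and bool(description.strip())

conn = sqlite3.connect(":memory:")  # the aggregation database
conn.execute(
    "CREATE TABLE staging (source TEXT, part_number TEXT, description TEXT)"
)

rejected = []  # bad records routed back to their source system for repair
for source, part_number, description in source_records:
    if is_valid(part_number, description):
        conn.execute("INSERT INTO staging VALUES (?, ?, ?)",
                     (source, part_number, description))
    else:
        rejected.append((source, part_number, description))

print(conn.execute("SELECT COUNT(*) FROM staging").fetchone()[0], "records staged")
print(len(rejected), "records sent back for repair")
```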


Step 2: Cleansing

Once aggregated, the data is subjected to an initial screening to identify duplicate records (see Figure 2). Part numbers, descriptions, and attributes (e.g., supplier names) are parsed using predefined rules. Exact matches and probable matches are identified and published. Weeding out duplicate records is an iterative process that requires subject-matter experts to identify the records that cannot be culled in the first round. Rule-based processing alone is inadequate for the volume of data involved; statistical processing and artificial intelligence are needed to ensure the maximum level of automation and accuracy.
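A simplified illustration of this first screening pass, using toy records and an assumed similarity threshold, normalizes part numbers for exact matching and flags probable matches for expert review. Python's standard difflib stands in here for the statistical and AI techniques a production tool would apply:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy aggregated records; real runs involve millions of item masters.
records = [
    {"id": 1, "part": "HN-1420-SS", "desc": "HEX NUT 1/4-20 SS316"},
    {"id": 2, "part": "HN1420SS",   "desc": "Hex Nut 1/4-20 Stainless 316"},
    {"id": 3, "part": "WSH-0500",   "desc": "FLAT WASHER 1/2 IN ZINC"},
]

def normalize(part):
    """Rule-based parsing: strip separators and case before comparing."""
    return "".join(ch for ch in part.upper() if ch.isalnum())

exact, probable = [], []
for a, b in combinations(records, 2):
    if normalize(a["part"]) == normalize(b["part"]):
        exact.append((a["id"], b["id"]))  # published as duplicates
    else:
        score = SequenceMatcher(None, a["desc"].upper(),
                                b["desc"].upper()).ratio()
        if score > 0.7:  # assumed threshold for "probable match"
            probable.append((a["id"], b["id"], round(score, 2)))

print("exact matches:", exact)
print("probable matches for expert review:", probable)
```

Records 1 and 2 collapse to the same normalized part number and are published as exact duplicates; anything below the threshold is left for a later, better-informed pass.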


Step 3: Classification

Classification is a critical step. The master records must be classified correctly, completely, and to a level of detail that makes the record easy to identify for search and reporting functions. Organizations often have multiple classification schemas. Although it is not necessary to choose one particular taxonomy, since taxonomies can coexist, it is necessary to have a taxonomy that supports the enterprise's business initiatives. Our research confirms that the use of widely adopted taxonomies such as UNSPSC, NATO, or eClass improves the performance of enterprise spend management strategies significantly over legacy taxonomies. This step is best executed with the help of a partner that has deep experience in taxonomy deployment.
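The mapping itself can be sketched simply. The rule set below is a hypothetical keyword matcher, and the UNSPSC codes shown are illustrative placeholders rather than verified assignments; a production classifier would rely on statistical or AI-based matching at scale:

```python
# Hypothetical keyword-to-taxonomy rules. The UNSPSC codes are
# illustrative placeholders, not verified code assignments.
UNSPSC_RULES = [
    (("hex", "nut"), "31161700"),   # fastener nuts (illustrative)
    (("washer",),    "31161800"),   # washers (illustrative)
    (("gasket",),    "31401500"),   # seals and gaskets (illustrative)
]

def classify(description):
    """Return the first UNSPSC code whose keywords all appear."""
    words = description.lower()
    for keywords, code in UNSPSC_RULES:
        if all(k in words for k in keywords):
            return code
    return "UNCLASSIFIED"  # route to a subject-matter expert

print(classify("HEX NUT 1/4-20 SS316"))     # 31161700
print(classify("FLAT WASHER 1/2 IN ZINC"))  # 31161800
```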

Step 4: Attribute Extraction and Enrichment

Although classification helps determine what an item is and how it relates to other items, attributes define the characteristics of the item and can run into the hundreds per item. Unfortunately, attributes in the item record may be left blank, be cryptic, or be inaccurate. ERP master records in particular are full of cryptic attributes, owing to poor validation and limited text-field lengths. In this step, attributes are extracted, normalized, and completed as part of record enrichment. This establishes the difference between the discovery of a metal nut and the discovery of a ¼-20 hex nut made of 316 stainless steel. Because of the sheer volume of attributes to be extracted and enriched, an automated approach is the only practical way to execute this step.
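For illustration, a small extraction routine for fastener descriptions might look like the following. The patterns and attribute names are assumptions for this example; real enrichment engines maintain far larger pattern libraries backed by reference data:

```python
import re

# Illustrative patterns for cryptic fastener descriptions.
THREAD = re.compile(r"\b(\d+/\d+|\d+)-(\d+)\b")   # e.g. 1/4-20
MATERIAL = re.compile(r"\bSS\s?(\d{3})\b", re.I)  # e.g. SS316

def extract_attributes(description):
    """Pull structured attributes out of a cryptic text field."""
    attrs = {}
    if m := THREAD.search(description):
        attrs["thread_size"] = f"{m.group(1)}-{m.group(2)}"
    if m := MATERIAL.search(description):
        attrs["material"] = f"{m.group(1)} stainless steel"
    return attrs

print(extract_attributes("HEX NUT 1/4-20 SS316"))
# {'thread_size': '1/4-20', 'material': '316 stainless steel'}
```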

Step 5: Final Duplicate Record Identification

Once the records have been classified and their attributes enriched, the records undergo a second round of duplicate identification. With much more of the record information normalized, enriched, and complete, most of the duplicates are identified automatically during this step. Although the proportion varies by category, a small number of records usually still must be evaluated by subject-matter experts to determine their status.
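To show why enrichment makes this round largely automatic, the hypothetical sketch below groups records by a key built from their normalized attributes. Records that previously differed only in free-text wording now collide on an exact key:

```python
from collections import defaultdict

# Enriched records: after Steps 3 and 4, former free-text variants
# now carry identical normalized classification and attributes.
enriched = [
    {"id": 1, "unspsc": "31161700", "thread_size": "1/4-20",
     "material": "316 stainless steel"},
    {"id": 2, "unspsc": "31161700", "thread_size": "1/4-20",
     "material": "316 stainless steel"},
    {"id": 3, "unspsc": "31161800", "thread_size": None,
     "material": "zinc"},
]

groups = defaultdict(list)
for rec in enriched:
    key = (rec["unspsc"], rec["thread_size"], rec["material"])
    groups[key].append(rec["id"])

duplicates = {k: ids for k, ids in groups.items() if len(ids) > 1}
print(duplicates)  # records 1 and 2 resolve automatically; the rest go to SMEs
```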

Before the development of sophisticated automated tools to perform these functions, this was an expensive, cumbersome, and rarely successful undertaking. If you want to learn more about managing master data, Josh Wess has written many interesting articles on the topic.

www.zycus.com

Friday, September 2, 2011

