Example: Master data synchronization
Reliable master data synchronization is a classic challenge in system integration. Many of our customers already use our standardized MDMS process components (master data management and synchronization) for this task. Using these components dramatically reduces implementation time, improves quality, and provides out-of-the-box monitoring. Their high configurability allows the process operation to be adapted to individual needs.
Which challenges do these components address?
- Detection of source data which has changed and needs to be synchronized
- Target-system-specific extraction of source data
- Dependencies between data records that must be respected in the transfer sequence
- Large data volumes, especially those too large to be transferred in full on a daily basis
- Target systems with limits on the size and number of data packets that can be transferred (in parallel)
Which features are included?
- Recognition of relevant changes in source data per target system
- Controllable processing volumes when reading from the source system and transferring to the target system (packaging and throttling)
- Processing of large data quantities
- Control of the transfer sequence based on dependencies between data records
- Interfaces to start full loads, delta loads or list loads
- Processing of receipt confirmations by the target system
- Configurability
- Error tolerance (error handling and retry)
- Process monitoring
How exactly does it work?
Recognition of changes in source data (change data capture)
To determine which data has changed, hash codes are calculated from the relevant source data for each integration path (change data capture). As soon as a data record changes, its hash code changes as well. The MDMS components record all transferred data together with their hash codes. Comparing the newly calculated hash codes with those of the already transferred data reveals which records have changed.
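The following is a minimal sketch of this hash comparison in Python. It assumes that records are plain dictionaries keyed by an ID and that the hashes of previously transferred records are kept in a simple key-to-hash mapping; the function names are illustrative and not part of the MDMS components.

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    """Hash the relevant fields of a record; the field order is normalized
    so that only real value changes alter the hash."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def detect_changes(source_records: dict[str, dict],
                   sent_hashes: dict[str, str]) -> list[str]:
    """Return the keys of all records whose current hash differs from the
    hash stored at the last successful transfer (new or changed records)."""
    return [key for key, record in source_records.items()
            if sent_hashes.get(key) != record_hash(record)]


# Example: one record is new since the last synchronization run.
source = {
    "4711": {"name": "ACME Ltd.", "city": "Berlin"},
    "4712": {"name": "Globex Corp.", "city": "Hamburg"},
}
already_sent = {"4711": record_hash({"name": "ACME Ltd.", "city": "Berlin"})}
print(detect_changes(source, already_sent))  # -> ['4712']
```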
Packaging of transfer data (create and send packages)
To synchronize large amounts of data, transfer packages can be created, each containing a configurable number of changed data records. To avoid overloading the target system, it is configurable whether packages are transferred to the target system in parallel and, if so, how many at a time and at what rate. Dependencies between data records can also be taken into account in this process step.
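As a rough illustration of packaging and throttling, the sketch below groups changed record keys into packages of a configurable size and sends at most a configurable number of packages in parallel, with an optional pause between waves. The transport call (send_package) is a hypothetical stand-in; in the MDMS components this behavior is configured rather than hard-coded.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, Iterator


def build_packages(changed_keys: list[str], package_size: int) -> Iterator[list[str]]:
    """Group changed record keys into packages of a configurable size."""
    for i in range(0, len(changed_keys), package_size):
        yield changed_keys[i:i + package_size]


def send_packages(packages: Iterable[list[str]],
                  send_package: Callable[[list[str]], object],
                  max_parallel: int = 1,
                  pause_seconds: float = 0.0) -> None:
    """Simple throttle: at most `max_parallel` packages are in flight at once,
    with an optional pause between waves of transfers."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        wave: list[list[str]] = []
        for package in packages:
            wave.append(package)
            if len(wave) == max_parallel:
                list(pool.map(send_package, wave))  # wait for the wave to finish
                wave = []
                time.sleep(pause_seconds)
        if wave:
            list(pool.map(send_package, wave))


# Example: 7 changed records, packages of 3, at most 2 packages in parallel.
send_packages(build_packages([f"id-{n}" for n in range(7)], package_size=3),
              send_package=print, max_parallel=2, pause_seconds=0.1)
```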
Data transfer
In this process step, a complete package is sent to the target system. After successful transmission (with or without a confirmation from the target system), the hash codes of that package are registered as sent. This way, future changes to these data records can be recognized.
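A possible shape of this step is sketched below, assuming a hypothetical send callable (for example an HTTP call to the target system) that reports whether the transfer succeeded. Hash codes are only registered once the package has been transmitted successfully, so failed packages are picked up again on the next run.

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def transfer_package(package: dict[str, dict],
                     send,                      # hypothetical transport call
                     sent_hashes: dict[str, str],
                     expect_confirmation: bool = True) -> None:
    """Send one complete package; only after successful transmission (and,
    if configured, a positive confirmation) are the hash codes registered."""
    confirmed = send(package)
    if expect_confirmation and not confirmed:
        # Leave the hashes untouched so the records are transferred
        # again in the next run (error tolerance / retry).
        return
    for key, record in package.items():
        sent_hashes[key] = record_hash(record)


# Example with a dummy transport that always confirms success.
sent_hashes: dict[str, str] = {}
transfer_package({"4712": {"name": "Globex Corp.", "city": "Hamburg"}},
                 send=lambda pkg: True, sent_hashes=sent_hashes)
print(sent_hashes)
```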
Process overview
At a minimum, the three process steps that access the source and target systems (change detection, package creation, and data transfer) must be implemented to obtain a runnable process.
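To show how these three steps fit together, here is a compact, self-contained sketch of one minimal delta-load run, with in-memory stand-ins for the source and target systems. read_source and send_package are hypothetical placeholders for the system-specific access steps.

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def run_delta_load(read_source, send_package,
                   sent_hashes: dict[str, str], package_size: int = 100) -> None:
    """Minimal runnable process: detect changes (step 1), build packages
    (step 2), transfer them and register the hashes of confirmed packages
    (step 3)."""
    source = read_source()                                    # step 1: read source system
    changed = [k for k, r in source.items()
               if sent_hashes.get(k) != record_hash(r)]       # change data capture
    for i in range(0, len(changed), package_size):            # step 2: create packages
        package = {k: source[k] for k in changed[i:i + package_size]}
        if send_package(package):                             # step 3: transfer to target
            sent_hashes.update({k: record_hash(r) for k, r in package.items()})


# Example run with in-memory stand-ins for the source and target systems.
state: dict[str, str] = {}
run_delta_load(read_source=lambda: {"4711": {"name": "ACME Ltd."}},
               send_package=lambda pkg: True, sent_hashes=state)
print(state)
```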