Data strategy: migration

Data migration moves selected, cleansed data from existing applications and environments into (or away from) Cloud systems or new environments.

Successful migration relies on good preparation – both in terms of data and data structures, and in terms of understanding source systems, particularly if these are CRM or ERP systems.

Each Cloud has its own constraints. Incorvus helps clients to work within those and to extract or prepare data uploads suited to the target environment. Alignment to business processes informs configuration of the target environment and helps to maintain referential integrity as far as possible.

Incorvus’ programme includes preparation, configuration, set-up, upload, integration, post-go-live issue resolution, training and reporting (or completion upon exit).

Clients maximise the benefits of new systems not by recreating the past, but by aligning their intent to the new environment before committing. After preparation, only relevant, clean, correctly migrated data should populate new systems.

Incorvus’ focus is on the data and metadata and their preparation. From use cases we derive business process mapping, data flows and data models (AS IS and TO BE). Putting the data first keeps attention on what is critical and feasible, so that clients do not plan in detail for something that cannot be delivered or will not perform once delivered.

If necessary, trialling and testing can be done with dummy data (to be provided by the client in that case) – or client resources can be mentored blind – to alleviate any data privacy, protection, security or confidentiality concerns.

  • Cloud configuration: the new environment will need configuration before test and live data can be uploaded.
    • Cloud configuration advisory will assist clients who may be uncertain of how best to set up their Cloud or their data, or who wish to review their service level agreements.
  • Migrate, move (or load) prepared data: measure and assess results, performance and success.
  • Validate: check that the uploaded data is in the expected state (a minimal reconciliation sketch follows this list).
  • Linking: once the data is validated, it can be linked to other systems, provided data precedence is understood and in place. If not, linking must wait until the precedence rules are understood and mapped, so that multi-lateral inputs do not fall out of sync, degrade the quality of the uploaded data or spoil the migration. This should include any archive processes.
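
What validation looks like varies by platform, but a minimal reconciliation sketch in Python may help illustrate the idea. The CSV read-back, the file names and the field names ("id", "email") are assumptions for the sketch, not a prescription for any particular Cloud.

```python
# Illustrative post-load checks: compare a source extract with records
# read back from the target system. Field names are hypothetical.
import csv

def load_rows(path):
    """Read a CSV extract into a list of dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def validate(source_rows, target_rows, key="id", mandatory=("id", "email")):
    """Reconcile a source extract against records read back from the target."""
    results = {}
    # 1. Row counts should match between the source extract and the target.
    results["count_match"] = len(source_rows) == len(target_rows)
    # 2. The business key must be unique in the target.
    target_keys = [r[key] for r in target_rows]
    results["keys_unique"] = len(target_keys) == len(set(target_keys))
    # 3. No source record should be missing from the target.
    results["missing_keys"] = sorted({r[key] for r in source_rows} - set(target_keys))
    # 4. Mandatory fields must not be blank after migration.
    results["blank_mandatory"] = [
        r[key] for r in target_rows
        if any(not (r.get(f) or "").strip() for f in mandatory)
    ]
    return results

if __name__ == "__main__":
    print(validate(load_rows("source_extract.csv"), load_rows("target_readback.csv")))
```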

Data precedence rules: where input is complex and multi-lateral, these should set out the rules and standards for ongoing data quality, e.g. merge and de-dupe rules for sibling records; data parsing; and sibling precedence, e.g. by timestamp. This may also be handled as part of wider curation activities, or revised to take account of greater complexities therein.
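
As an illustration of sibling precedence by timestamp, the sketch below merges duplicate records so that the most recently updated sibling wins, with blank fields back-filled from older siblings. The field names ("record_id", "updated_at") and the back-fill behaviour are assumptions for the example, not fixed rules.

```python
# Minimal sketch: de-dupe sibling records, newest timestamp wins per key.
from datetime import datetime
from itertools import groupby

def parse_ts(value):
    """Parse an ISO-8601 timestamp, e.g. '2024-05-01T09:30:00'."""
    return datetime.fromisoformat(value)

def merge_siblings(records, key="record_id", ts_field="updated_at"):
    """De-duplicate sibling records; the newest timestamp takes precedence."""
    merged = []
    for _, group in groupby(sorted(records, key=lambda r: r[key]), key=lambda r: r[key]):
        siblings = sorted(group, key=lambda r: parse_ts(r[ts_field]), reverse=True)
        winner = dict(siblings[0])          # newest sibling wins
        for older in siblings[1:]:          # back-fill blank fields from older siblings
            for field, value in older.items():
                if not winner.get(field):
                    winner[field] = value
        merged.append(winner)
    return merged

# Example: two siblings for the same record_id; the 2024 row wins,
# but its blank "phone" is filled from the 2023 row.
rows = [
    {"record_id": "A1", "updated_at": "2023-11-02T10:00:00", "phone": "01234 567890", "email": ""},
    {"record_id": "A1", "updated_at": "2024-05-01T09:30:00", "phone": "", "email": "a@example.com"},
]
print(merge_siblings(rows))
```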