Data migration process checklist
Get your data migration project off to a great start with our tried and tested data migration checklist.
With so many ingredients involved in the data migration process, it's no wonder that so many projects run over time, over budget and sometimes stall completely. Gartner has even reported that as many as 83% of data migration projects exceed their deadline and/or budget, and sometimes fail completely [1]. Not the kind of statistic you want to read when you are considering a migration project, but don't worry, it's not all doom and gloom.
At PhixFlow, we have helped many customers who approached us whilst struggling to move data, so we thought we would share some of that knowledge to help you get your data migration projects off to a great start. From integrating data from your legacy systems, to transferring it to target systems, to weighing up post-migration options, our checklist will ensure your data migration plan has all the options covered.
Every data migration is different, with varying numbers of systems and volumes of data to be consolidated and moved. No matter the reason behind your data migration, or its complexity, there are a few basics you should consider to make the process as smooth as possible.
Whilst all data migrations share the same three basic stages, Extract, Transform and Load (ETL), it is important to have a solid migration plan and process to follow to ensure project success. We have identified six key steps in the data migration process, set out below.
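To make the three stages concrete, here is a minimal sketch of an ETL pass in Python. The file name, the legacy column names (CustID, Customer Name, Balance) and the target schema are illustrative assumptions, not part of any particular tool:

```python
import csv

def extract(path):
    """Read raw rows from a legacy source (here, an assumed CSV export)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Reshape rows to match the target system's (assumed) schema."""
    return [
        {
            "customer_id": row["CustID"].strip(),
            "name": row["Customer Name"].title(),
            "balance": float(row["Balance"] or 0),
        }
        for row in rows
    ]

def load(rows, target):
    """Hand the prepared rows to the target (here, just a plain list)."""
    target.extend(rows)

staging = []
load(transform(extract("legacy_customers.csv")), staging)
```

In practice the load step would write to your target system rather than an in-memory list, but the shape of the pipeline stays the same.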
1. Identify sources of data
Before any migration starts, it is important to understand the legacy data. More specifically, you need to know what data is needed, where it is, what format it is in and how it will be accessed.
In more complex migrations, for example, there may be multiple sources of data: some stored in databases, some in Excel spreadsheets and some in other systems. Knowing what data is where, and how to access it, is an important first step in the migration process.
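One lightweight way to capture this knowledge is a simple source inventory recording where each dataset lives, what format it is in and how it will be accessed. The entries below are hypothetical examples of what such a catalogue might contain:

```python
# A hypothetical inventory of legacy data sources: what the data is,
# where it lives, what format it is in, and how it will be accessed.
data_sources = [
    {
        "name": "customer_master",
        "location": "legacy Oracle database, CRM schema",
        "format": "relational tables",
        "access": "read-only SQL account",
    },
    {
        "name": "regional_sales",
        "location": "shared drive /finance/sales/",
        "format": "Excel spreadsheets, one per region",
        "access": "file share export",
    },
    {
        "name": "support_tickets",
        "location": "helpdesk system",
        "format": "JSON via REST API",
        "access": "API key, paginated endpoint",
    },
]

# Review the inventory before writing any extraction code.
for source in data_sources:
    print(f"{source['name']}: {source['format']} ({source['access']})")
```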
If you are using a platform such as PhixFlow, you do not need to be exhaustive at this stage: the iterative process means you will be able to identify any gaps in the data and adjust your queries accordingly as the project gathers momentum.
2. Identify the filtering rules
Quite often you will find that a lot of the legacy data isn't required, so it's important to identify this data early on, as excluding it speeds up processing times when you are running simulations.
For example, when migrating data from a financial system it may be decided that old purchase orders are no longer required. That sounds reasonable, but some companies create 'open' purchase orders that remain active for many years. A filter based purely on creation date would drop these open orders, even though they are still required.
There may also be compliance issues to consider. In some instances, companies are required to retain records for a set period of time.
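A filtering rule that captures both concerns might look like the sketch below: it keeps anything still marked open, applies a longer retention window to regulated record types, and otherwise drops old records. The field names, the two-year business cutoff and the seven-year retention period are all assumptions for illustration:

```python
from datetime import date, timedelta

BUSINESS_CUTOFF = timedelta(days=2 * 365)    # assumed: older records not required
RETENTION = timedelta(days=7 * 365)          # assumed compliance retention period
REGULATED_TYPES = {"invoice", "tax_record"}  # assumed record types with retention rules

def should_migrate(record, today):
    """Keep a record if it is still open, recent enough, or inside retention."""
    if record["status"] == "open":         # open POs stay regardless of age
        return True
    age = today - record["created"]
    if record["type"] in REGULATED_TYPES:  # compliance: keep regulated records longer
        return age <= RETENTION
    return age <= BUSINESS_CUTOFF

# An old but still-open purchase order is kept, not dropped by its creation date.
po = {"status": "open", "type": "purchase_order", "created": date(2009, 3, 14)}
print(should_migrate(po, date.today()))  # True
```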
The earlier you can identify the data you do and do not need, the better. By eliminating the data that is not required, you will be able to process test data much faster and identify errors sooner.
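Where the source is a database, filters like these can often be pushed into the extraction query itself, so the excluded rows never leave the legacy system. The sketch below assumes a SQL source (sqlite3 stands in for whatever driver your legacy database needs) and hypothetical table and column names:

```python
import sqlite3  # stand-in: swap for the driver your legacy database uses

# Pushing the filtering rules into the extract query means excluded rows
# are never pulled across, so test runs and error checking are much faster.
QUERY = """
    SELECT po_number, status, created
    FROM purchase_orders
    WHERE status = 'open'
       OR created >= :cutoff
"""

def extract_filtered(conn, cutoff):
    """Extract only the rows the migration actually needs."""
    return conn.execute(QUERY, {"cutoff": cutoff}).fetchall()
```

Note that placeholder syntax varies between database drivers; :cutoff is the sqlite3 form.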