Data Migration Workbench

PeopleSoft PeopleTools 8.54 and above include a tool for data migration. The tool is set up and run from the PIA (the web front end), so data can be moved by functional analysts as well as technicians.

The tool is known as Application Data Sets (ADS) or the PeopleSoft Data Migration Workbench.

The tool includes the ability to compare incoming data with data already in the database and to check translate value and prompt table edits.

Data Migration Workbench is made up of two pieces:

  • The data set generator
  • The data migration workbench itself

The data set generator groups together records that share the same root key. A data set can be generated from a component or built manually by naming the records individually.
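
As a rough illustration of that grouping, a data set can be modelled as a root record plus child records that carry the root key. The sketch below is only a mental model of the idea, not how ADS stores its definitions, and the record and key names are examples:

    # Sketch only: a data set as a root record plus child records that share
    # the root key fields. Record and key names are examples, not a
    # definition generated by PeopleTools.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataSetRecord:
        record_name: str       # e.g. "DEPT_TBL_LANG"
        key_fields: List[str]  # leading keys must match the root keys

    @dataclass
    class DataSet:
        root_record: str                  # e.g. "DEPT_TBL"
        root_keys: List[str]              # e.g. ["SETID", "DEPTID", "EFFDT"]
        child_records: List[DataSetRecord] = field(default_factory=list)

        def add_record(self, record: DataSetRecord) -> None:
            # A record only belongs in the data set if it carries the root key.
            if record.key_fields[: len(self.root_keys)] != self.root_keys:
                raise ValueError(f"{record.record_name} does not share the root key")
            self.child_records.append(record)

    departments = DataSet("DEPT_TBL", ["SETID", "DEPTID", "EFFDT"])
    departments.add_record(
        DataSetRecord("DEPT_TBL_LANG", ["SETID", "DEPTID", "EFFDT", "LANGUAGE_CD"]))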

Generating a data set from a component in PeopleTools 8.54 has the following limitations:

  • Components whose search record has no keys cannot be auto-generated.
  • Components that affect other records only through PeopleCode will not automatically include those records, although the records can be added manually.
  • Some components with unusual key structures either do not generate automatically or leave records out when they do.

However they are generated, the data sets have the following issue:

  • Records without a key (such as PS_INSTALLATION) cannot be migrated. As a workaround you could add a key or create a simple view with a false key, but this does seem a lot of bother.

The Data Migration Workbench allows you to group data sets together into "projects" and determine which keys you want to select for migration.

These projects are loaded from file and checked against the database as a whole. This leads to some limitations:

  • Prompt tables need to be in place before the main table is loaded. This means that prompt tables need to be migrated in separate data migration projects and loaded earlier, which builds up a hierarchy of tables to be loaded (see the sketch after this list).
  • Some tables and columns are difficult to populate at all:
      • DEPT_TBL.MANAGER_ID needs the employees to be loaded, but the employees need DEPT_TBL to be loaded.
      • POSITION_DATA.REPORTS_TO needs the positions above it in the hierarchy to already be in the database.
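
One way to reason about that load order is as an ordinary dependency sort: a table can be loaded once all of its prompt tables are loaded, and anything circular (such as DEPT_TBL and the employee data) has to be broken by hand with a two-pass load. The sketch below only illustrates the idea; the dependency map is made-up example data, not metadata read from PeopleTools:

    # Illustrative sketch: orders tables so that prompt tables load before the
    # tables that reference them. Table names and dependencies are examples.
    from graphlib import TopologicalSorter, CycleError

    # table -> the prompt tables it depends on
    prompt_dependencies = {
        "LOCATION_TBL": set(),
        "COMPANY_TBL": set(),
        "DEPT_TBL": {"LOCATION_TBL", "COMPANY_TBL"},
        "POSITION_DATA": {"DEPT_TBL"},
    }

    # Prompt tables come out first, so each table's edits pass when it is loaded.
    print(list(TopologicalSorter(prompt_dependencies).static_order()))

    # The circular reference described above makes the sort fail; that is the
    # case that needs a two-pass load (load DEPT_TBL with MANAGER_ID blank,
    # load the employees, then update MANAGER_ID).
    prompt_dependencies["DEPT_TBL"].add("EMPLOYEES")   # MANAGER_ID prompts on employees
    prompt_dependencies["EMPLOYEES"] = {"DEPT_TBL"}    # employees prompt on DEPT_TBL
    try:
        list(TopologicalSorter(prompt_dependencies).static_order())
    except CycleError:
        print("DEPT_TBL and EMPLOYEES form a cycle - resolve with a two-pass load")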

The data migration project stores the keys of each data set as a list of values. You state a set of criteria, and these are then translated into a list of values; the list has to be edited or recreated if you add additional keys.

Large lists of keys can make Data Migration Workbench very slow. For instance, in one installation we tried to move bank sort codes from one database to another; there were approximately 15,000 codes. To get the codes loaded they had to be broken into three groups, and even then the generous application server timeouts were still being hit. Data Mover was found to be a much faster and more convenient option.

Maintaining an explicit list of key values in this way can also be a disadvantage if you are reloading data sets repeatedly as values are added.
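
If a large key list does have to go through the workbench, splitting it into smaller projects by hand is tedious. A throwaway script along these lines at least makes the chunking mechanical (purely illustrative; the batch size is a guess, not a documented limit):

    # Illustrative helper: splits a large key list (e.g. ~15,000 bank sort
    # codes) into smaller batches so each data migration project stays small
    # enough to avoid the application server timeout. The batch size is a guess.
    from typing import Iterable, Iterator, List

    def chunk_keys(keys: Iterable[str], batch_size: int = 5000) -> Iterator[List[str]]:
        batch: List[str] = []
        for key in keys:
            batch.append(key)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch

    sort_codes = [f"{n:06d}" for n in range(15000)]   # stand-in for the real codes
    for i, batch in enumerate(chunk_keys(sort_codes), start=1):
        with open(f"bank_codes_batch_{i}.txt", "w") as out:
            out.write("\n".join(batch))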

What is Data Migration Workbench useful for?

Once a system is up and running, moving groups of new codes from Development to Test to Production can be done from the web front end (provided the data set definitions themselves have already been migrated to each database).

This can be a welcome relief to a stretched technical support team. Departments, locations and many other frequently changing values can be maintained across several databases without technical assistance.

The compare step ensures that the basic edits are not broken (and in general reminds you if there are supporting codes that still need to be loaded).

When is Data Migration Workbench not so helpful?

During a large-scale install, the old-fashioned Data Mover export and import scripts still win, in my opinion. The scripts can be automated and will pick up new values without additional work.
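
For that kind of bulk transfer, a matching pair of Data Mover scripts can be generated and run as part of the install. The sketch below is one way of generating them; the record names and file names are examples, and the resulting .dms files would be run with Data Mover (or the psdmtx command line) against the source and target databases:

    # Illustrative sketch: writes matching Data Mover export and import scripts
    # for a list of records so the whole transfer can be automated. Record
    # names and file names are examples only.
    RECORDS = ["BANK_EC_TBL", "DEPT_TBL", "LOCATION_TBL"]

    def write_scripts(records, data_file="install_data.dat"):
        with open("export_install_data.dms", "w") as exp:
            exp.write("SET LOG export_install_data.log;\n")
            exp.write(f"SET OUTPUT {data_file};\n")
            for rec in records:
                exp.write(f"EXPORT {rec};\n")

        with open("import_install_data.dms", "w") as imp:
            imp.write("SET LOG import_install_data.log;\n")
            imp.write(f"SET INPUT {data_file};\n")
            for rec in records:
                imp.write(f"IMPORT {rec};\n")

    write_scripts(RECORDS)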

Further Reading

Business Analysis