When updating large databases you need to consider the following issues:

Memory

You need to make sure enough memory is available so that the build does not run out of memory.

When using a 32-bit Java Virtual Machine, the maximum memory available to SuperCHANNEL on Windows is approximately 1.6 GB.

Paging

Paging can degrade database performance. If paging occurs, decrease the caching block size; the cache size is set in the sxv4drv.conf configuration file.

Disk Utilisation

When working with large databases it is important that disk utilisation is efficient. It is recommended that you use a monitoring tool to track disk access.

You may need to configure the commit points for your database (see Using Commit Points below).

Using Commit Points

Commit points specify the number of records channelled before changes are committed to the target database.

If a build process is interrupted, you can recover it using the specified commit points and resume channelling from the last committed record.

Commit points affect the disk utilisation:

  • More frequent commit points generally allow better disk utilisation.
  • When updating records, use commit points to decrease disk fragmentation.
  • If the commit interval is too large, the JVM may run out of memory.
  • After the first update, keep the commit points setting the same so that disk utilisation remains consistent.
  • A table sorted by key does not require commit points.
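The batching behaviour described above can be sketched in Python. This is a conceptual illustration only, not the SuperCHANNEL API: the `channel_records` function and the `commit` callback are hypothetical names introduced for the example.

```python
def channel_records(records, channelled_per_commit, commit):
    """Channel records in batches, committing after every
    `channelled_per_commit` records so an interrupted run can
    resume from the last committed record."""
    batch = []
    committed = 0
    for record in records:
        batch.append(record)
        if len(batch) == channelled_per_commit:
            commit(batch)            # flush this batch to the target database
            committed += len(batch)
            batch = []
    if batch:                        # commit any trailing partial batch
        commit(batch)
        committed += len(batch)
    return committed

# Example: 150,000 records committed every 5,000 records -> 30 commits.
commits = []
total = channel_records(range(150_000), 5_000, commits.append)
print(len(commits), total)  # 30 150000
```

A smaller `channelled_per_commit` bounds the amount of uncommitted work held in memory, which is why an oversized commit interval can exhaust the JVM heap.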

To calculate the number of commit points, you need to know the following:

  • Number of records to be updated.
  • Number of records to be channelled before the commit is performed.

CommitPoints = UpdateRecords / ChannelledRecords

For example, updating 150,000 records with a commit every 5,000 records gives:

150,000 / 5,000 = 30 commit points
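The calculation above can be checked with a few lines of Python. The function name is illustrative; ceiling division is used so that a final partial batch still counts as one more commit.

```python
import math

def commit_points(update_records, channelled_records):
    """Number of commits needed: records to update divided by
    records channelled per commit, rounded up for a partial batch."""
    return math.ceil(update_records / channelled_records)

print(commit_points(150_000, 5_000))  # 30
print(commit_points(152_500, 5_000))  # 31: the last 2,500 records need one more commit
```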
