How archiving works
All completed and aborted releases are archived, which means that they are removed from the Digital.ai Release repository and stored in a separate internal database called the archive database. This improves performance and allows you to create custom hooks that export release information to external databases or reporting tools.
The differences between archived and unarchived releases are:
- Archived releases are read-only. You cannot add comments to tasks in an archived release.
- Archived releases appear in reports. Releases that are not archived do not appear in reports.
- You can create a custom hook that runs when a release is archived; for example, to store the release in an external reporting database.
- From Digital.ai Release 8.5.0 onwards, pre-archived releases are available. These are releases that are completed or aborted but not yet archived. For more information, see Pre-archived releases.
Releases are archived automatically after the configured archiving delay. Every minute, Release runs an archiving job that scans the repository for completed and aborted releases that are due for archiving, exports them to the archive database, runs custom export hooks, and removes the releases from the repository.
You can configure the following parameters for the archiving job:
- In Settings > General settings > Archiving, you can configure how long a release must be completed or aborted before it is moved to the archive. By default, this is 30 days. If you would like to be able to add comments to a completed release, increase this value; however, keep in mind that releases will not appear in reports until they are archived.
- In the `xlrelease.ArchivingSettings.archivingJobCronSchedule` property in the `deployit-defaults.properties` file, you can configure how frequently the archiving job runs. You must specify the frequency in cron syntax, which allows you to set schedules such as "every hour" or "every day at midnight".
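For example, an entry in `deployit-defaults.properties` might look like the following sketch. The cron expression format shown is an assumption (Quartz-style, with a leading seconds field); check the documentation for your Release version for the exact syntax:

```
# Run the archiving job every minute (illustrative value)
xlrelease.ArchivingSettings.archivingJobCronSchedule=0 * * * * ?

# Or: run every day at midnight
# xlrelease.ArchivingSettings.archivingJobCronSchedule=0 0 0 * * ?
```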
Ensure that you configure the archiving job settings so that the number of releases that can be archived per day is not less than the number of releases being completed or aborted per day. For example, do not configure the cron schedule to run once per week without changing the throttling properties: with the default throttling, only approximately 18 releases would be archived per week, while many more releases may be completed or aborted during that time.
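To sanity-check a schedule, you can estimate the job's maximum throughput. The sketch below assumes the default throttling described in this article (roughly one release per second, with each run archiving about 18 releases before it stops):

```python
# Rough upper bound on archiving throughput for a given cron schedule.
# RELEASES_PER_RUN = 18 reflects the default throttling described in
# this article: ~1 release per second, ~18 releases per job run.
RELEASES_PER_RUN = 18

def max_archived(runs_per_period: int,
                 releases_per_run: int = RELEASES_PER_RUN) -> int:
    """Maximum number of releases the job can archive in a period."""
    return runs_per_period * releases_per_run

print(max_archived(24 * 60))  # every minute: 25920 releases per day
print(max_archived(1))        # once per week: only ~18 releases per week
```

If your organization completes more releases than this bound, either run the job more frequently or relax the throttling properties.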
From Digital.ai Release 8.5 onwards, the default archiving delay is 30 days. This keeps completed and aborted releases in the 'active' database (the Digital.ai Release repository) for a while, so that related releases can be shown in a single view.
You can throttle the archiving job. This is useful if you have many releases to archive, as it ensures that the archiving job does not use a large amount of system resources and impede Release’s performance.
The following throttling properties are available in the `deployit-defaults.properties` file:

| Property | Description |
|----------|-------------|
| `maxSecondsPerRun` | Maximum amount of time that one execution of the archiving job is allowed to take (in seconds). With the default setting, approximately 18 releases are archived, and then the job stops; the next execution triggers after 1 minute. Set to `-1` to remove the time limit. |
| `sleepSecondsBetweenReleases` | Time to wait between archiving each release (in seconds). With the default setting, the job will not archive more than 1 release per second. Set to any negative number to remove the wait time. |
| `searchPageSize` | Search page size when searching for releases to archive. |
| | Enables the archiving job. Use this setting to temporarily disable the job while you configure or troubleshoot it. Important: Do not permanently disable the archiving job; this causes a negative performance impact. |
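For reference, entries for these properties in `deployit-defaults.properties` might look like the following sketch. The values are illustrative (not the shipped defaults), and the `xlrelease.ArchivingSettings` key prefix is an assumption based on the naming of the `archivingJobCronSchedule` property:

```
# Illustrative values only; verify keys against your Release version
xlrelease.ArchivingSettings.maxSecondsPerRun=60
xlrelease.ArchivingSettings.sleepSecondsBetweenReleases=1
xlrelease.ArchivingSettings.searchPageSize=5
```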
You can configure the throttling properties at runtime using the JMX managed bean at the path `com.xebialabs.xlrelease:name=Archiving`. Note that changing properties through JMX does not change the default values that are stored in the `deployit-defaults.properties` file. Therefore, after the Digital.ai Release server is restarted, the configuration is reset to the values in the file.
The `searchPageSize` property is a low-level setting that should not be changed in most cases. It limits the number of releases that the archiving job finds before it archives the set. For example, if the property is set to `5`, Release finds five completed releases, archives them, searches for the next five completed releases, and so on, until it has archived all required releases or the `maxSecondsPerRun` limit has been reached.
This property can be used, for example, if the repository contains thousands of releases that must be archived and you want Release to find the releases as quickly as possible. For example, if `maxSecondsPerRun` and `sleepSecondsBetweenReleases` are both `-1`, the next archiving job works as fast as possible to archive all releases. However, the CPU usage of Release will be very high for the entire time that the archiving job runs.
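The paging and throttling behavior described above can be sketched as follows. `find_completed_releases` and `archive` are hypothetical stand-ins for Release's internal operations, not real API calls:

```python
import time

def run_archiving_job(find_completed_releases, archive,
                      search_page_size=5,
                      max_seconds_per_run=60,
                      sleep_seconds_between_releases=1):
    """Sketch of one archiving job run: page through completed
    releases, archiving each one, until none remain or the time
    budget is spent."""
    start = time.monotonic()
    while True:
        # Fetch the next page of completed/aborted releases
        page = find_completed_releases(limit=search_page_size)
        if not page:
            return  # nothing left to archive
        for release in page:
            archive(release)  # export to archive DB, run hooks, remove
            # A negative sleep value disables per-release throttling
            if sleep_seconds_between_releases >= 0:
                time.sleep(sleep_seconds_between_releases)
            # A negative run limit disables the time budget entirely
            if 0 <= max_seconds_per_run <= time.monotonic() - start:
                return  # stop; the next scheduled run picks up the rest
```

With both limits set to `-1`, the loop never sleeps and never stops early, which matches the "as fast as possible" behavior described above.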
From Digital.ai Release 8.5.0 onwards, pre-archived releases are available. Pre-archiving a release is the process of copying it from the live database to the archive database; this happens automatically when a release reaches the completed or aborted state. Pre-archiving gives you insight into these releases through the global dashboard before they are archived. It also removes the restriction on status filters on the releases overview page, which you can access by clicking Folders > Releases tab. From this page, you can select any combination of release statuses in the filters. Pre-archived releases are included in the results, while archived releases are excluded.
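The filter semantics can be illustrated with a small sketch. The data model below is hypothetical; it only demonstrates that pre-archived releases (completed or aborted, still in the live database) appear in overview results while archived releases do not:

```python
# Hypothetical release records: status plus an "archived" flag
releases = [
    {"title": "R1", "status": "in_progress", "archived": False},
    {"title": "R2", "status": "completed",   "archived": False},  # pre-archived
    {"title": "R3", "status": "aborted",     "archived": True},   # archived
]

def overview(releases, statuses):
    """Return titles of releases matching the status filter,
    excluding releases that have already been archived."""
    return [r["title"] for r in releases
            if r["status"] in statuses and not r["archived"]]

print(overview(releases, {"in_progress", "completed", "aborted"}))
# ['R1', 'R2'] -- R3 is archived, so it is excluded
```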
Release supports custom export hooks that you can use to export information about completed and aborted releases. They are run when a release is archived.
Export hooks are written in Jython. You can add them to Release as JAR files or by placing files in the Release classpath.
You can define export hooks in two ways:
- Generic export hooks that you can use to export information to any type of storage
- JDBC export hooks that can export data to an SQL database
A sample export hook implementation is available on GitHub.
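As an illustration only, the core logic of a generic export hook might resemble the following Jython-compatible sketch. The actual hook interface is defined by Release (see the GitHub sample); the function name and the release structure used here are hypothetical:

```python
import json

def export_release(release):
    """Append basic information about an archived release to a
    line-delimited JSON file. A real hook would instead write to an
    external database or reporting tool."""
    record = {
        "id": release["id"],
        "title": release["title"],
        "status": release["status"],
    }
    with open("archived-releases.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```

A JDBC export hook would follow the same shape but write the record to an SQL database instead of a file.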