2 PICARD overview

 2.1 Requirements for running PICARD
 2.2 Important environment variables
 2.3 Running PICARD
 2.4 PICARD options

Picard is a tool for analyzing and combining a batch of astronomical data files that have previously had their instrumental signatures removed (for example by running Orac-dr on the raw data). It is designed to be instrument-independent. Picard uses the same infrastructure as Orac-dr, where data are processed by recipes which contain a series of processing steps called primitives.

Picard is designed to be easy to use. It needs no initialization, has few options and, by default, assumes that all input/output occurs in the current working directory.

2.1 Requirements for running PICARD

Orac-dr (and thus Picard) requires a recent Starlink installation. The latest release may be obtained from http://starlink.eao.hawaii.edu/starlink. Since Orac-dr development is an ongoing process, it is recommended that the newest builds be used. These builds can be obtained from: http://starlink.eao.hawaii.edu/starlink/rsyncStarlink and may be kept up-to-date with rsync.

The Starlink Perl installation (Starperl) must be used to run the pipeline due to the module requirements. The Starlink environment should be initialized as usual before running Picard.

2.2 Important environment variables

Picard does not need to have specific environment variables defined (other than those initialized as part of Starlink). Data are read from and written to the current working directory by default. However, it is possible to define an alternative location for the output data via ORAC_DATA_OUT (which is used by Orac-dr).
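For example, output can be redirected to a dedicated directory before starting Picard. A minimal sketch (the path is illustrative; any writable directory will do):

```shell
# Point ORAC_DATA_OUT at an alternative output directory and make sure
# it exists before starting Picard. The location here is an example.
export ORAC_DATA_OUT="${TMPDIR:-/tmp}/picard_out"
mkdir -p "$ORAC_DATA_OUT"
```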

Two other specialized environment variables may be defined by users who wish to write their own processing routines: see Section 4 for more information.

2.3 Running PICARD

The only mandatory arguments are the name of the recipe and a list of the files to process. Running Picard is as easy as typing

  % picard [options] RECIPE *.sdf

where RECIPE is the name of the processing recipe to use and *.sdf is the list of files to process. In practice, everything after the recipe name is treated as an input file. The recipe will be applied to all input files, which must be in NDF format. Currently there is no automated conversion from FITS.
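As a concrete sketch, the following applies the MOSAIC_JCMT_IMAGES recipe (used here purely for illustration; substitute whichever recipe suits your data) to every NDF file in the current directory, logging to both the screen and a log file. The guard makes the snippet harmless on a system where Starlink is not installed:

```shell
# Apply a recipe to all NDF files in the current directory, logging to
# the screen (s) and a log file (f). MOSAIC_JCMT_IMAGES is an
# illustrative recipe name; substitute the recipe you need.
if command -v picard >/dev/null 2>&1; then
    picard -log sf MOSAIC_JCMT_IMAGES *.sdf
else
    echo "picard not found; initialize Starlink first"
fi
```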

More generally:

  % picard [options] RECIPE FILES

where [options] are command-line options of the form -option or -option value. Note that the options must be given before the recipe. The options are described in more detail below.

2.4 PICARD options

Picard has a number of command-line options which may be used to control the processing and feedback.

General Options

-h

Lists help text summarizing the command usage.

-version

Prints out the pipeline version information.

-man

Displays the full manual page.

-debug

Enable debugging output, listing primitive entry and exit points, timing and calls to algorithm engines.

-verbose

Enable verbose output from algorithm engines.

-files file

The name of a flat ASCII text file containing a list of files to be processed, one file per line. Files specified this way are added to the list of files given as command-line arguments.


Show help text for the given recipe.
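The -files option is convenient when the input list is long. A minimal sketch of its use (the list name mylist.lis and the file names are illustrative):

```shell
# Write one input file name per line to a plain text list.
printf '%s\n' map001.sdf map002.sdf map003.sdf > mylist.lis

# The list is then passed to Picard alongside the recipe, e.g.
#   picard -files mylist.lis RECIPE
cat mylist.lis
```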

Windows and Output

-log sfhx

As in Orac-dr, this option controls whether the pipeline output is logged to the terminal screen (s), a log file (f), an html log file (h) or an X-window (x). The default is fx. To avoid opening an X-window, sf is recommended.

-nodisplay

Do not launch the display system. No data will be displayed, and GWM, GAIA, etc. windows will not be launched.

Recipe Selection

-recsuffix suffix

Modify the recipe search algorithm so that a recipe variant is selected if available. For example, with ‘-recsuffix QL’ a recipe named MYRECIPE_QL would be picked up in preference to MYRECIPE.

Multiple suffixes can be supplied as a comma-separated list:

 -recsuffix QL1,QL2

Recipe behaviour can be controlled by specifying a recipe parameters file. This is a file in INI format with a block per recipe name.

-recpars file

See the documentation for individual recipes in Appendix B for supported recipe parameters.
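For illustration, a recipe parameters file might look like the following sketch. The recipe and parameter names are placeholders; consult Appendix B for the parameters each recipe actually supports:

```ini
[RECIPE_NAME]
PARAM1 = value1
PARAM2 = value2
```

The file is named on the command line before the recipe, for example ‘picard -recpars mypar.ini RECIPE_NAME files’.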