2 PICARD overview

 2.1 Requirements for running PICARD
 2.2 Important environment variables
 2.3 Running PICARD
 2.4 PICARD options

Picard is a tool for analyzing and combining a batch of astronomical data files that have previously had their instrumental signatures removed (for example by running Orac-dr on the raw data). It is designed to be instrument-independent. Picard uses the same infrastructure as Orac-dr, where data are processed by recipes which contain a series of processing steps called primitives.

Picard is designed to be easy to use. It needs no initialization, has few options and, by default, assumes that all input/output occurs in the current working directory.

2.1 Requirements for running PICARD

Orac-dr (and thus Picard) requires a recent Starlink installation. The latest release may be obtained from http://starlink.eao.hawaii.edu/starlink. Since Orac-dr development is ongoing, it is recommended that the newest builds be used; these can be obtained from http://starlink.eao.hawaii.edu/starlink/rsyncStarlink and kept up to date with rsync.
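
For example, an existing build in /star-build might be kept up to date with a command along the following lines (the exact rsync source path for each platform is listed on the rsyncStarlink page above; the path shown here is only a placeholder):

  % rsync -av --delete rsync://starlink.eao.hawaii.edu/<platform-build-path>/ /star-build/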

The Starlink Perl installation (Starperl) must be used to run the pipeline due to the module requirements. The Starlink environment should be initialized as usual before running Picard.
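
As a reminder, in a Bourne-type shell the initialization typically amounts to something like the following (assuming Starlink is installed in /star; C-shell users source the corresponding cshrc/login files instead):

  % export STARLINK_DIR=/star
  % source $STARLINK_DIR/etc/profile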

2.2 Important environment variables

Picard does not need to have specific environment variables defined (other than those initialized as part of Starlink). Data are read from and written to the current working directory by default. However, it is possible to define an alternative location for the output data via ORAC_DATA_OUT (which is used by Orac-dr).
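
For example, to have Picard write its output files to a separate directory (sh-type syntax; C-shell users would use setenv):

  % export ORAC_DATA_OUT=/path/to/output
  % picard RECIPE *.sdf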

Two other specialized environment variables may be defined by users who wish to write their own processing routines: see Section 4 for more information.

2.3 Running PICARD

The only mandatory arguments are the name of the recipe and a list of the files to process. Running Picard is as easy as typing

  % picard <options> RECIPE *.sdf

where RECIPE is the name of the processing recipe to use and *.sdf is the list of files to process. In practice, everything after the recipe name is treated as an input file. The recipe will be applied to all input files, which must be in NDF format. Currently there is no automated conversion from FITS.

More generally:

  % picard [options] RECIPE FILES

where [options] are command-line options of the form -option or -option value. Note that the options must be given before the recipe. The options are described in more detail below.
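
For example, the following command (with mypars.ini standing in for a recipe parameter file) sends output to both the screen and a log file, disables the display system and supplies a recipe parameter file; these options are described in the next section:

  % picard -log sf -nodisplay -recpars mypars.ini RECIPE *.sdf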

2.4 PICARD options

Picard has a number of command-line options which may be used to control the processing and feedback.

-help

Lists help text summarizing Picard usage.

-version

Prints out the version information.

-man

Displays the help text as a manual page.

-verbose

Enable verbose output from algorithm engines (e.g. Smurf makemap).

-debug

Enable debugging output, listing primitive entry and exit points, timing, and calls to algorithm engines.

-log sfhx

Control where text output is displayed: on the terminal screen (s), in a log file (f), in an HTML log file (h), or in an X-window (x). The default is fx; for most recipes, sf is recommended.

-nodisplay

Do not launch the display system. No data will be displayed, and no GWM, Gaia or other display windows will be opened.

-recsuffix SUFFIX

Modify the recipe search algorithm such that a recipe variant can be selected if available. For example with -recsuffix QL a recipe named MYRECIPE_QL would be picked up in preference to MYRECIPE.

Multiple suffixes can be supplied using a comma separator, e.g. -recsuffix QL1,QL2.
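
For instance, to pick up the _QL variant of a recipe where one exists:

  % picard -recsuffix QL MYRECIPE *.sdf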

-recpars filename

Recipe behaviour can be controlled by specifying a recipe parameters file. This is a file in INI format with a block per recipe name:

  [RECIPE_NAME]
  param1 = value1
  param2 = value2

See the documentation for individual recipes in Appendix B for supported parameters.
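
The parameter file is then passed to Picard on the command line, for example (with mypars.ini standing in for a file containing the block above):

  % picard -recpars mypars.ini RECIPE_NAME *.sdf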