hscPipe basic information

This page summarizes technical terms used in hscPipe.

repository, registry

hscPipe generates a database used for data reduction. This database must be located in a directory tree that follows a fixed layout. This directory tree is called the “repository” and the database itself the “registry”. Both are generated by hscPipe commands. The structure of the repository is shown in Structure of repository.
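
A minimal sketch of creating a repository and registry, assuming the hscIngestImages.py ingest task and the _mapper convention of hscPipe (the exact command may differ between versions; all paths below are placeholders):

  mkdir -p ~/HSC                                    # repository root (placeholder path)
  echo "lsst.obs.hsc.HscMapper" > ~/HSC/_mapper     # tells hscPipe which camera mapper to use
  hscIngestImages.py ~/HSC /path/to/raw/HSCA*.fits --mode=link --create
  # --create builds the registry (an SQLite file) at the top of the repository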


rerun

We call a series of data processing steps, from raw data to reduced images and catalogs with a common set of parameters, a rerun. All of the data generated in a rerun are stored in a single directory named after the rerun. If you reduce your data with different parameters, the result is regarded as a different rerun. When you change processing parameters or the target region, we recommend creating a new rerun.
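
As a sketch, most hscPipe command-line tasks take the rerun name through the --rerun option, and their outputs go into a directory of that name under the repository (the task name, rerun name, and dataId values below are placeholders; processCcd.py assumes a recent hscPipe release):

  # outputs of this run are written under ~/HSC/rerun/cosmos_v1
  processCcd.py ~/HSC --rerun cosmos_v1 --id visit=XXXX ccd=YY
  # reducing the same data with different parameters should use a new rerun name,
  # e.g. --rerun cosmos_v2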


dataId

dataId is a unique identifier that specifies input data. The following keywords are often used to define a dataId (a usage sketch follows the list):

  • visit : ID assigned to each observing shot (exposure)

  • ccd : CCD chip ID

  • tract : ID specifying the region of observed sky

  • pointing : Observing epoch

  • field : Target name, corresponding to OBJECT in the FITS header

  • dateObs : Date of observation, corresponding to DATE-OBS in the FITS header

  • filter : Filter name, corresponding to FILTER01 in the FITS header
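
As a usage sketch, dataId keywords are passed to hscPipe tasks through the --id option; the task name and values here are illustrative:

  # reduce CCD 42 of visit 9000
  processCcd.py ~/HSC --rerun cosmos_v1 --id visit=9000 ccd=42
  # lists and ranges are also accepted, e.g.
  processCcd.py ~/HSC --rerun cosmos_v1 --id visit=9000^9002 ccd=0..103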


tract, patch

The tract and patch are IDs that specify the observed sky area. The tract is the larger unit: a square region usually defined so as to include all of the observed sky. The smaller regions into which a tract is divided are called patches. In the case of HSC SSP data, the tract size is about 2 × 2 square degrees. Each tract contains ~100 patches of 4200 × 4200 pixels (1 pixel corresponds to 0.168”). Although you can set both the tract and patch sizes yourself, you should choose an appropriate tract size to avoid the large distortion that can appear near the edge of a tract.
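
Tract and patch are selected through the same --id mechanism at the coadd stage; a sketch with illustrative values, where someCoaddTask.py is a placeholder for whichever coadd-level task you run:

  # select patch (5,5) of tract 9000 in the HSC-I band
  someCoaddTask.py ~/HSC --rerun cosmos_v1 --id tract=9000 patch=5,5 filter=HSC-I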


PBS/TORQUE

Some hscPipe commands can be run through a batch processing system called TORQUE (Terascale Open-source Resource and QUEue Manager). TORQUE is a popular variant of PBS (Portable Batch System) and manages jobs and queues. Because the available resources depend on your computing environment, please check the status of your system and submit your job to the smallest queue that can accommodate it.

Here we introduce some Linux commands for managing jobs (a short usage sketch follows the list).

  • qstat : Check job status

  • qdel : Kill a job. You can find the job ID with the qstat command.

  • qsub : Submit a job.
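
A minimal sketch of the typical job cycle, assuming a TORQUE/PBS installation (the queue name, script name, and job ID are placeholders):

  qsub -q small myjob.sh    # submit a job script to a queue named "small"
  qstat -u $USER            # check the status of your jobs and find their job IDs
  qdel 12345                # kill the job whose ID is 12345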


Schema file

The column reference for catalog data is called the schema file, which is generated under [reduction directory]/rerun/[rerun]/schema.
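
As a sketch, assuming a finished rerun, the schema files can simply be listed in that directory; they are FITS tables, so they can also be inspected with lsst.afw.table or any FITS viewer (the file names shown are illustrative and depend on the hscPipe version):

  ls [reduction directory]/rerun/[rerun]/schema/
  # e.g. icSrc.fits  src.fits  ...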