Measuring the camera transfer constant

The photon transfer curve is one of the most commonly used measurements in characterizing CCD camera performance.  The photon transfer curve consists of a plot of the temporal signal variance in a single CCD pixel as a function of pixel illumination intensity.  The slope of the transfer curve represents the CCD gain in ADU/electron, and its extrapolated intercept at zero signal gives the CCD readout noise.  If there were no noise associated with measuring the number of photo-electrons collected in the CCD photo-sites, then the only noise remaining would be "photon noise".  Photon noise is due to the random arrival in time of light photons at the CCD; like death and taxes, it is completely unavoidable.  The RMS variation (or standard deviation) of the number of photons counted for a given photon flux in a given time interval is simply the square root of the average number of photons counted in that interval.  Therefore the only way to reduce the relative photon noise (that is, to improve the signal-to-noise ratio) is to collect more photons, for example by increasing the exposure time.  Since the noise is proportional to the square root of the pixel intensity, a large increase in the number of photons counted must be made before a significant improvement is obtained in the signal-to-noise ratio.
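As a worked example of this square-root behaviour: if a pixel collects an average of 10000 photo-electrons during an exposure, the photon noise is sqrt(10000) = 100 electrons, giving a signal-to-noise ratio of 100:1.  To double the signal-to-noise ratio to 200:1, four times as many photons (40000) must be collected, i.e. the exposure time must be quadrupled.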

The square of the standard deviation is the variance, so it follows that the variance of the photon noise is equal to the average photon count itself.  Now if we plot the variance of the signal (in ADU) for a single CCD pixel across several images taken with the same illumination versus the illumination intensity, we should obtain a linear plot in which the extrapolated variance at zero intensity gives the readout noise (which can be expressed in electrons using the measured gain, once the offset level is subtracted) and the slope of the line gives the gain of the video processing chain in ADU/electron.  This may sound simple enough; however, at least 50-100 images are required at each intensity to obtain a single point on the photon transfer curve!  Also, the light source used must be temporally stable if the results are to be interpreted correctly.
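To illustrate the statistics involved, the following sketch (not part of Pyxis; it assumes the image sequence is already loaded as a NumPy array) computes the temporal mean and variance for each pixel of a stack of frames taken at the same illumination and averages them into a single {intensity, variance} point of the kind that is plotted on the transfer curve:

    import numpy as np

    def transfer_point(stack, offset_adu=0.0):
        # stack: hypothetical array of shape (n_frames, height, width) holding
        # the raw pixel values (in ADU) of frames taken at the same illumination
        mean_adu = stack.mean(axis=0) - offset_adu   # temporal mean per pixel, offset removed
        var_adu2 = stack.var(axis=0, ddof=1)         # temporal variance per pixel (ADU^2)
        # Average over the sampled pixels to reduce scatter, as described above
        return mean_adu.mean(), var_adu2.mean()

Plotting the variance against the mean for many such points gives the photon transfer curve described above.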

Fortunately there is no need to use a particularly stable light source if one measures many pixels (as one usually does when taking images) and uses the average level of the image to "normalize" the effective intensity of each image frame.  For small intensity variations, the error introduced by this normalization is completely negligible.  An LED powered by an ordinary linear regulator (e.g. LM7805) is a suitable light source.  The statistics can be improved by averaging the results from a few neighbouring pixels.  The images should be integrated over a period sufficiently long that the jitter in the shutter opening time does not significantly affect the effective integration time; 4 seconds is sufficient, 1 second is too short.  The need to take several pictures at many different illumination levels (or different integration times) is obviated by illuminating the CCD with a smooth light intensity gradient and sampling different regions of the frame in the statistical processing.  A gradient that varies by a factor of 2-4 across the image is ideal.  Also, not all of the pixels on the CCD frame need to be digitized (read out); a broad strip through the gradient image is sufficient.

Using this technique, only 3-4 batches of 100 images each are required to obtain a good transfer curve.  For example, the first batch of images may have an intensity gradient that varies from 7000 ADU to 18000 ADU.  The second sequence of images may contain gradients from 18000 ADU to 35000 ADU, and the third could include gradients from 35000 to 63000 ADU.  Note that the measurements at lower intensities are the most useful and should be weighted most heavily when determining the readout noise.  Near saturation, nonlinearities arise in the CCD and signal processing chain that make interpretation of the transfer curve difficult and lead to an incorrect estimate of the readout noise when the transfer curve is extrapolated to zero intensity.

Note that if the gain is already known, then the readout noise can be determined without going through all of the trouble of obtaining the photon transfer curve.  All one needs to do is to look at the standard deviation of the pixel intensities in ADU for a dark frame with the camera cooled (the image statistics tool is handy for this).  Multiplying the standard deviation of the pixel intensity by the gain in electrons/ADU yields the readout noise in photo-electrons.  The nominal value for the gain of Pyxis's signal processing chain is 1.72 electrons/ADU with the KAF401e chip installed.  This measurement is almost always sufficient for troubleshooting purposes and is simple enough to perform that it can be done at the beginning of each session to ensure that the camera is operating as expected.
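A minimal sketch of this shortcut, assuming the dark frame is available as a NumPy array of ADU values and using the nominal gain quoted above:

    import numpy as np

    GAIN_E_PER_ADU = 1.72   # nominal gain for Pyxis with the KAF401e chip

    def readout_noise_electrons(dark_frame_adu, gain=GAIN_E_PER_ADU):
        # The standard deviation of the dark-frame pixel values (in ADU),
        # multiplied by the gain in electrons/ADU, gives the readout noise
        # in photo-electrons (assuming the camera is cooled so that dark
        # current contributes negligibly to the scatter).
        return np.std(dark_frame_adu) * gain

For a properly operating camera the result should be close to the readout noise obtained from the full photon transfer curve.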


The transfer constant reduction tool

Once a set of images has been acquired, it can be quickly and easily reduced using the transfer-constant reduction tool in Pyxis.  This tool performs all of the statistical calculations required to reduce the acquired image sequences and determine the transfer constant.  The photon transfer curve is displayed as it is built up from user-selected sample regions of the image frame.  Each new datum in the transfer curve is added to a comma-delimited file that can be imported into a spreadsheet for further analysis or graphing if desired.

The image below shows what the transfer constant form looks like after it has just been opened:



The user begins by entering the file prefix used in the captured image sequence.  Note that the filenames used must consist of an alpha-numeric text string, followed by an underscore, followed by a numeric file number, and finally the file extension string (e.g. ".pxi", ".fits", etc.).  Image sequences obtained with the Pyxis software already conform to this format.  All files used must have a file number within the range "First file number" to "Last file number"; a file MUST exist for each integer in the selected range.  The file prefix consists of the alpha-numeric text portion up to the last underscore in the filename.  If the file open dialog is used (the button with ">>" on the right) and a file is selected, then the file prefix will be entered automatically in the file prefix text box when the dialog closes.
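For example, a sequence captured with the prefix "flat" might consist of the files flat_1.pxi through flat_100.pxi (hypothetical names, used only for illustration).  The sketch below shows how the list of files implied by the prefix and the first/last file numbers could be built; it assumes the file numbers carry no zero-padding:

    def sequence_filenames(prefix, first, last, ext=".pxi"):
        # prefix, first and last correspond to the "File prefix",
        # "First file number" and "Last file number" fields.
        # A file must exist for every integer in the selected range.
        return [f"{prefix}_{n}{ext}" for n in range(first, last + 1)]

    # e.g. sequence_filenames("flat", 1, 3) -> ["flat_1.pxi", "flat_2.pxi", "flat_3.pxi"]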

The size of the region to sample and average is given by the "Pixels to sample (NxN)" value; the default value of 4 (16 pixels) is sufficient.

The comma-delimited text file where the statistics are to be stored must be specified.  Each {intensity, variance} pair calculated will be stored in this file.  If the file already exists, then the new values are appended to the end of the file.  If the last datum calculated appears to be an outlier on the transfer curve, it may be removed from the file by clicking "Clear last point".  The entire file contents can be erased by clicking "Clear all points".  When "Display stats file" is clicked, the sequence of points is read from the statistics file and the points are displayed in the right graphics window.  In this way, previous processing sessions may be reloaded and additional points added to the transfer curve.
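Because the statistics file is plain comma-delimited text with one {intensity, variance} pair per line, it can also be read directly outside of Pyxis for further analysis.  A sketch (assuming a hypothetical filename and no header row):

    import numpy as np

    # Load the {intensity, variance} pairs written by the reduction tool.
    intensity_adu, variance_adu2 = np.loadtxt("transfer_stats.csv",
                                              delimiter=",", unpack=True)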

The option "subtract bias during normalization" subtracts the offset level from the image before normalization - this option should always be selected unless the user has a good understanding why this shouldn't be done on their data.

The first processing step consists of obtaining normalization values for each image in the selected sequence.  The normalization value for an image is computed by averaging the pixel intensities (less the offset level, if so selected) in the central 100 x 100 pixel region of the image.  If the image is less than 100 pixels high or 100 pixels wide, then a smaller region of the image is used for the averaging.  Once an average intensity is obtained for each image, the average of this list of averages is calculated and each image average is divided by the list average.  The resulting values are referred to as the normalization values for the images.  As far as the user is concerned, all that is involved is clicking the "Normalize image intensities" button; the processing operation will then commence.
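The normalization step can be summarized by the following sketch (a simplified illustration only; it assumes the frames have already been loaded as NumPy arrays, whereas the real tool operates on the files in the selected sequence):

    import numpy as np

    def normalization_values(frames, offset_adu=0.0, box=100):
        # One normalization value per frame, as described above.
        averages = []
        for frame in frames:
            h, w = frame.shape
            b = min(box, h, w)                     # shrink the box for small images
            r0, c0 = (h - b) // 2, (w - b) // 2    # central b x b region
            region = frame[r0:r0 + b, c0:c0 + b]
            averages.append(region.mean() - offset_adu)
        averages = np.array(averages)
        return averages / averages.mean()          # each average divided by the list average

These per-frame values are what allow small drifts in the light source intensity to be removed from the subsequent statistics.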

Once the images have been successfully normalized, the last image in the sequence is displayed in the left image box, as shown in the example below:



Note that the graphics window on the right would normally be empty at this stage.  The pixel intensity is displayed as the mouse is moved over the image in the left graphics window, allowing the user to select an image region with the desired intensity to process.  Select a region to process by left-clicking on the desired image region; calculation of the statistics for the selected image pixels will begin immediately.

Once two data points are available (i.e. the user has processed at least two image regions), these points will be displayed on a graph in the right graphics window.  Once several points have been obtained and the linear trend in the photon transfer curve is clearly visible, the "Determine camera constant" button can be clicked.  A linear regression line will be computed and an estimate of the camera constant (gain) and readout noise will be displayed.  Note that the image offset in ADU must be provided to obtain a correct value for the readout noise.  The image offset to use can be determined by looking at the "bias level" tag in a few of the processed images to obtain a representative value for the offset (typically about 6000 ADU for the KAF401e chip).  The bias level tag can be read by opening the image in Pyxis and clicking the header information button, or by opening the image in a text editor and looking for the bias level entry in the header.

The points used in the linear regression computation can be restricted to a specific range of intensities by setting "Minimum/Maximum intensity to include in fit" to the appropriate values.  This is useful if a significant departure from the linear transfer curve is observed at certain pixel intensities (typically near saturation).  Note that a severe departure from the transfer curve at moderate ADU levels can indicate a problem with the signal processing chain.  For example, insufficient settling time for the clamp or sampling interval could lead to problems at higher signal levels.
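The fit performed by the tool can be approximated by the following sketch.  It is a simplified illustration, and it assumes that the plotted mean intensities are raw ADU values (offset not subtracted), so the fitted line must be evaluated at the offset level rather than at zero to obtain the zero-light variance; if the intensities in the statistics file are already offset-subtracted, an offset of zero would be used instead:

    import numpy as np

    def camera_constant(intensity_adu, variance_adu2, offset_adu,
                        min_adu=None, max_adu=None):
        # Returns (gain in ADU/electron, readout noise in electrons).
        x = np.asarray(intensity_adu, dtype=float)
        y = np.asarray(variance_adu2, dtype=float)
        # Restrict the fit to the requested intensity range, if given
        keep = np.ones_like(x, dtype=bool)
        if min_adu is not None:
            keep &= x >= min_adu
        if max_adu is not None:
            keep &= x <= max_adu
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        gain_adu_per_e = slope
        # Variance of the zero-light signal: the fitted line evaluated at the offset level
        read_var_adu2 = slope * offset_adu + intercept
        read_noise_e = np.sqrt(max(read_var_adu2, 0.0)) / gain_adu_per_e
        return gain_adu_per_e, read_noise_e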