Queued Service Observing with MegaPrime:

Semester 2004A Report



A - Introduction

The Queued Service Observing (QSO) Project is part of a larger ensemble of software components defining the New Observing Process (NOP), which includes NEO (acquisition software), Elixir (data analysis), and DADS (data archiving and distribution). Semester 2004A was the third semester in which MegaPrime was used with the NOP system. It was a difficult semester, mostly because of extremely bad weather on Mauna Kea (in fact, statistics suggest that it was the worst winter on Mauna Kea since 1949!). The overheads with MegaPrime are still longer than with CFH12K, so we are not as efficient on the sky as we were in the past, although guide star acquisition is now much more reliable. As shown below, there is some imbalance in the time distribution between the different Agencies, due mostly to the bad weather affecting some Agencies more than others (e.g. because of the RA distribution of their targets). The final share of observing time between the different components of the CFHTLS was also not satisfactory at the end of the semester. This is hardly surprising, since the largest component of the LS has strong time constraints, which become even more dominant when the weather is unstable and bad over long periods of time. A full solution to this important problem, which will probably include a dynamic way of shifting priorities between programs, remains to be found.

B - General Comments

The semester started very badly: the entire run in February was lost to technical problems with Megacam (leaks in the dewar and an upper-end mechanical issue; 15 nights lost) and to bad weather (the last seven nights). Acquiring no data at all during an entire run is awful (even though the run was extended by several nights), and we suffered from it for the rest of the semester. The run in March was not much better: we lost 70% of the run (13 full nights out of 19) to bad weather, and the seeing was decent (at best) for only two of the nights when we could observe. The third run, in April, was better, with "only" about 50% of the run lost to bad weather. Some good nights with good seeing helped us catch up somewhat on some high-priority programs. The last four runs of the semester were by far the best, and we were able to gather much more data. We still lost some time to weather, bad seeing, and technical problems, but this is where we obtained the bulk of the data for the semester. As detailed below, the global fraction of time lost to weather and technical problems in 2004A is very high, in fact a factor of 2 higher than what we could expect. The final statistics for 2004A are therefore far lower than we would like. This is discussed in section C below.

Some general remarks on QSO in general for the semester 2004A:

1. Technically, the entire chain of operation, QSO --> NEO --> TCS, is efficient and robust. The time lost to the NOP chain is very small. This is a complex system and we have worked very hard to reduce its overheads. Glitches appear from time to time, mostly during guide star acquisition, but the system is now quite reliable and efficient.

2. The QSO concept is sound. By preparing several queues covering a wide range of possible sky conditions in advance of an observing night, a very large fraction of the observations were done within specifications. The ensemble of QSO tools also allows the quick preparation of queues during an observing night, to adapt to variable conditions or unexpected overheads. The introduction of the CFHTLS, with time-constrained observations on a large scale, adds significant complexity to queue scheduling and requires much more work when planning a run. For 2004A, the global validation rate (validated/observed) was higher than for 2003B. A discussion of this is included in section C.

3. QSO is well adapted to time-constrained programs. The Phase 2 Tool allows the PIs to specify time constraints. Two of the components of the CFHTLS have very restrictive time constraints. We can handle those easily if the weather cooperates (of course!), although the introduction of time-constrained observations on a large scale definitely adds complexity to the scheduling process.

4. Highly variable seeing and non-photometric nights represent the worst sky conditions for the QSO mode. In 2004A, we were still short of "snapshot" programs and regular programs requesting mediocre conditions. As a result, we were often forced to observe programs in conditions worse than requested, because the weather was very unstable at the beginning of the semester. Still, we were able to calibrate all the fields requesting photometry that were originally observed in non-photometric conditions. The availability of SkyProbe and its real-time measurements of the transparency is extremely valuable and is regularly used to decide which observations should be undertaken.

5. Observations of moving targets are feasible in queue mode. During the 2003A semester, we implemented a way of preparing observations of moving targets in our Phase 2 Tool (ephemeris tables). The process is a bit laborious but works very well, and several programs used this option in 2004A as well. Non-sidereal guiding is not yet offered.

C - Global Statistics, Program Completeness, and Overheads

1) Global Statistics

The following table presents some general numbers regarding the queue observations for 2004A (C, F, H, K, L, and T, D-time, excluding snapshot programs):

Total number of nights:
Nights fully lost to weather:                        ~32 (27%)
Nights lost to engineering + technical problems:     ~4 + 20 = 24 (20%)
QSO programs requested:
QSO programs started:                                28
QSO programs completed:                              7
Total I-time requested (hr):
Total I-time validated (hr):                         282.3 (47%)
Queue validation efficiency:                         ~84%
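The two percentages in the table are simple ratios of I-time totals, so the missing totals can be inferred from the validated hours alone. The short sketch below (illustrative only, not QSO tooling) shows how the figures relate, using only the three numbers given in the table:

```python
# Relating the 2004A queue statistics (values from the table above).
validated_hours = 282.3        # total I-time validated
validated_fraction = 0.47      # validated / requested
validation_efficiency = 0.84   # validated / observed

# Totals implied by the two ratios:
requested_hours = validated_hours / validated_fraction
observed_hours = validated_hours / validation_efficiency

print(f"implied I-time requested: {requested_hours:.0f} hr")  # ~601 hr
print(f"implied I-time observed:  {observed_hours:.0f} hr")   # ~336 hr
```

The "queue validation efficiency" is thus the fraction of exposures actually taken that met their requested conditions, while the 47% figure measures how much of the total demand was satisfied.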


2) Program Completeness

The figure below presents the completion level for all of the programs in 2004A, according to their grade:


3) Overheads

There is no doubt that the overheads with MegaPrime are larger than with CFH12K. The following table lists the main operational overheads (that is, other than the readout time of the mosaic) with MegaPrime during semester 2004A. The numbers have not changed since 2003B. This is given as a reference; overheads are highly variable during a given night, depending on the conditions, the complexity of the science programs, etc.

Operation                  Events / night    Time / event       Total overhead / night
Filter change              15 - 25           90 s / change      1500 - 2200 seconds
Focus sequence             8 - 12            200 s / seq        1600 - 2400 seconds
Dome rotation > 45 deg     ~5 (?)                               < 600 seconds
Guide star acquisition     20 - 30 (?)       30 - 40 s / acq    600 - 1200 seconds


Note that overheads for calibrations (standard stars and Q98 short exposures for photometric purposes) are not included in this table. In 2004A, we observed about two standard star fields per photometric night (12 minutes per field due to filter changes).
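The per-night totals in the table are just event counts multiplied by per-event costs. The sketch below (illustrative, not an operations tool) sums the tabulated ranges into a rough nightly overhead budget:

```python
# Rough nightly overhead budget from the table above, in seconds.
# Each entry: ((min events, max events), (min time/event, max time/event)).
overheads = {
    "filter change":   ((15, 25), (90, 90)),
    "focus sequence":  ((8, 12), (200, 200)),
    "guide star acq":  ((20, 30), (30, 40)),
}

low = sum(n_lo * t_lo for (n_lo, _), (t_lo, _) in overheads.values())
high = sum(n_hi * t_hi for (_, n_hi), (_, t_hi) in overheads.values())

# Dome rotations are only bounded as "< 600 s" total in the table,
# so add that bound to the upper estimate.
high += 600

print(f"total operational overhead: {low/60:.0f} - {high/60:.0f} min/night")
```

This gives roughly one to two hours per night of operational overhead on top of mosaic readout, consistent with the ranges quoted row by row in the table.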


D - Agency Time Accounting

1) Global Accounting

Balancing the telescope time between the different Agencies is another constraint in the selection of the programs used to build the queues. The figure below presents the Agency time accounting for 2004A. The top panel shows the relative fraction requested by each Agency, according to the total I-time allocated from the Phase 2 database. The bottom panel shows the relative fraction validated for each Agency, that is, [Total I-time validated for a given Agency]/[Total I-time validated]. As shown in the plots, the relative distribution of the total integration time of validated exposures between the Agencies was balanced by the end of the semester.
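The per-Agency fraction defined above is straightforward to compute. The sketch below illustrates the calculation; the agency names and hour values are placeholders for illustration, not the actual 2004A numbers:

```python
# Illustration of the agency-share formula:
# share = validated I-time for agency / total validated I-time.
# Values below are hypothetical placeholders, not 2004A data.
validated = {"Canada": 120.0, "France": 110.0, "Hawaii": 30.0, "Taiwan": 22.3}

total = sum(validated.values())
shares = {agency: hours / total for agency, hours in validated.items()}

for agency, share in sorted(shares.items()):
    print(f"{agency}: {share:.1%}")
```

Comparing these validated shares against the requested shares from the Phase 2 database (the top panel of the figure) is what reveals any imbalance to correct in subsequent queue runs.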



2) CFHTLS Accounting

The following figures show the time accounting for the different CFHTLS components:

Since each component of the survey is divided into two programs, the global fractions are given in the following table:

Component                  Fraction Requested    Fraction Validated (2004A)
Deep Synoptic (L01 + L04)  30% + 13% = 43%       58% + 6% = 64%
Wide Synoptic (L02 + L05)  17% + 17% = 34%       9% + 6% = 15%
Very Wide (L03 + L06)      10% + 12% = 22%       1% + 19% = 20%



E - Conclusion

Our third semester of queue mode with MegaPrime was a difficult one due to the time lost to weather and technical problems. Even though the statistics are poor, we have already learned a great deal, and much progress was made during the semester, notably on some of the operational overheads. Improving efficiency remains a high priority, in particular by increasing the validation rate and implementing the auto-focus feature, and we are hopeful that 2004B will be more productive on the science side.