ds1-ds5 comparisons

I'm comparing simulated ds1-ds5 comparison tests to their real counterparts. In the simulated tests, I compare the recovered map, after 20 iterations with 13 PCA components subtracted, to the input map. There are figures showing this comparison for the ds1 and ds5 images individually, in addition to one showing the comparison between ds1 and ds5. The agreement is about as good as you could ask for. These simulations are the most realistic run yet: they include a simulated atmosphere that is perfectly correlated between all bolometers apart from Gaussian noise, with the relative sensitivity of the bolometers varied.
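A minimal sketch of what that simulation setup amounts to, assuming a single shared atmosphere timestream scaled by per-bolometer gains (the array size, scan length, and noise level below are made up for illustration, and the PCA cleaning is done with a plain SVD rather than whatever the real pipeline uses):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bolos, n_samples = 144, 5000  # hypothetical array size and scan length

# One shared atmosphere timestream, perfectly correlated across bolometers,
# modeled here as a slow random-walk drift.
atmosphere = 0.1 * np.cumsum(rng.normal(size=n_samples))
sensitivity = rng.uniform(0.8, 1.2, size=n_bolos)          # varied relative gains
noise = rng.normal(scale=0.05, size=(n_bolos, n_samples))  # independent Gaussian noise

timestreams = sensitivity[:, None] * atmosphere[None, :] + noise

def pca_subtract(data, n_components=13):
    """Remove the leading principal components (the post subtracts 13)."""
    centered = data - data.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    model = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    return centered - model

cleaned = pca_subtract(timestreams, n_components=13)
```

Because the simulated atmosphere is a rank-one signal (one template times a gain vector), the first component alone captures nearly all of it, and the cleaned residual is essentially the Gaussian noise floor.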

This is what a 'real' ds1-ds5 comparison looks like. The image shown is a "cross-linked" observation of Uranus with downsampling off and on. Note that downsampling clearly smears the source flux.
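The smearing itself is easy to reproduce in a toy model: averaging blocks of samples (boxcar downsampling) over a point-source crossing broadens the bump and lowers its peak. The beam width and downsampling factor below are invented for illustration, not taken from the real data:

```python
import numpy as np

# A point source crossed at constant scan speed gives a Gaussian bump
# in the timestream.
t = np.arange(1000)
beam_sigma = 3.0  # source width in raw samples (assumed)
signal = np.exp(-0.5 * ((t - 500) / beam_sigma) ** 2)

# Downsample by averaging non-overlapping blocks of samples.
factor = 8  # hypothetical downsampling factor
downsampled = signal.reshape(-1, factor).mean(axis=1)
```

With the block width comparable to the beam width, the downsampled peak is noticeably suppressed relative to the raw peak, which is the "smeared flux" effect in miniature.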

The same image with "beam location correction" looks no better.

The problem is essentially the same with the individual scan directions:

What is causing this difference?

  • higher-order corrections to the atmosphere calculation?
  • inadequate sampling of the model?
  • "pointing" offsets between the model and the data (note that these are NOT pointing offsets, but they may be "distortion map" offsets)?
  • Other?

Examining the weights and scales for two individual (real) observations, ds1 followed by ds5, is not particularly telling; one additional outlier bolometer is flagged out in the ds1 observation, but there is nothing obviously wrong with that bolometer (it may simply have much lower high-frequency noise than the others).

The simulations actually have more discrepant weights, but that doesn't seem to cause any problems:

The timestreams both have similar artifacts:

while the simulated versions really don't:

This is true even when the relative strength of the atmosphere is higher:

I think the most viable candidate is the 'pointing offset' idea, which will take a little work to simulate properly...
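A rough feel for why a small offset matters, before doing the real simulation: if the model beam locations are shifted slightly relative to the data (a "distortion map" error rather than a true pointing error), subtracting the model leaves a dipole-shaped residual whose peak grows roughly linearly with the offset for small shifts. Everything below (1-D beam, widths, offsets) is a toy illustration, not the actual pipeline geometry:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
sigma = 2.0  # beam width, arbitrary units (assumed)

def beam(x0):
    """Gaussian beam centered at x0."""
    return np.exp(-0.5 * ((x - x0) / sigma) ** 2)

data = beam(0.0)
offsets = (0.1, 0.2, 0.4)
# Peak residual left after subtracting a slightly shifted model beam.
residuals = [np.abs(data - beam(off)).max() for off in offsets]
```

Doubling the offset roughly doubles the peak residual in this regime, so even a sub-beam-width distortion-map error could plausibly produce the kind of ds1 vs. ds5 discrepancy seen above.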
