The recovery for an Airy function is not particularly good, since the pipeline assumes a purely positive response....
Definitive Testing
It is now possible to make artificial timestreams! First round of tests:
- For pointing maps with ~21 scans in 15" steps, the array angle is approximately negligible.
- deconv does NOT recover point sources, but reconv does (perfectly).
- The algorithm starts to decay and produce weird ringey residuals at S/N ~300; the residuals are only at the 1% level out to S/N ~1000.
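For reference, here is a minimal sketch of what I mean by an artificial timestream for these tests: raster a Gaussian point source in ~21 scans of 15" steps and add white noise at a chosen peak S/N. The scan length, sample count, and 33" FWHM are illustrative assumptions, not the actual simulator.

import numpy as np

def fake_timestream(peak_snr, n_scans=21, step_arcsec=15.0, n_samp=200,
                    scan_len_arcsec=600.0, fwhm_arcsec=33.0, noise_rms=1.0):
    """Simulated raster over a Gaussian point source, plus white noise.
    Returns an (n_scans, n_samp) array with peak/noise_rms = peak_snr."""
    sigma = fwhm_arcsec / (2 * np.sqrt(2 * np.log(2)))
    x = np.linspace(-scan_len_arcsec / 2, scan_len_arcsec / 2, n_samp)   # along-scan offsets
    y = (np.arange(n_scans) - (n_scans - 1) / 2) * step_arcsec           # cross-scan offsets
    xx, yy = np.meshgrid(x, y)
    signal = peak_snr * noise_rms * np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return signal + np.random.normal(0.0, noise_rms, signal.shape)

# e.g. compare deconv/reconv residuals on fake_timestream(300.) vs fake_timestream(1000.)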
Tests running
- Re-doing December 2010 calibration (remember to check & compare AFGL 4029)
- Mapping L351 with deconv, reconv, nodeconv
ARGH, something is wrong with the L351 pointing! Is the pointing model broken? What HAPPENED?!

OK, on to 4/8/2011 stuff: I've done more work on the Memo from a few weeks ago, and now have a concrete recommendation that we do some more observing in 2011. Why I torture myself so, I do not know....

Examination of the 2009 tests shows that reconv is by far the superior approach for pointing observations, as previously noted. For science fields reconv doesn't look bad, though it is definitely not the best, so it might be the compromise we have to make. nodeconv completely fails for ds5 data, but the ds5 data in general are abominable: they drop by 15-20% compared to ds1 for Mars. The problem with nodeconv appears to be extremely high-noise maps.
Examining deconvolution strategies
This is a direct continuation of what I did yesterday (but only posted today because I forgot to click post). No-deconvolution doesn't work, at least not reliably. It recovers structures that are too large to be believable. Perhaps a higher threshold should be required to include signal in the model? The noise actually looks too high anyway. Also, the flagging doesn't look great. Grr.

The agreement between the three is somewhat better, down to a 50% increase of reconv over deconv:

          BMAJ        BMIN        PPBEAM   SUM/PPBEAM  Sum      Mean     Median    RMS       NPIX
deconv    0.00916667  0.00916667  23.8028  8.19765     195.127  1.04346  0.751545  0.78342   187
nodeconv  0.00916667  0.00916667  23.8028  10.1592     241.816  1.29314  1.04031   0.778854  187
reconv    0.00916667  0.00916667  23.8028  12.1694     289.664  1.54901  1.31107   0.808915  187

But this time there is less indication of negative residuals than previously. I was very confused by negatives being included in the model for nodeconv, then realized it's because of the grow_mask option. sncut = 1 is now the default for nodeconv; I think it makes sense. (sncut = 1 drops the flux by <10%: SUM/PPBEAM: 9.31507, Sum: 221.725, Mean: 1.18569, Median: 0.915137, RMS: 0.783316, NPIX: 187, with the same BMAJ/BMIN/PPBEAM as above.)

While reconv has produced reasonable results in some cases, a close look at the maps shows that deconv ~ nodeconv << reconv. There is something wrong with reconv: it spreads out and increases the flux artificially. So.... why did it work so damn well for point sources?

The new weighting scheme seems to flag a dangerous number of bolos as 'high weight'. It drops after iteration #1, but not all the way. Need to remember to reprocess all December 2009 data with more flagged bolos.
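For reference, the SUM/PPBEAM numbers above are just the aperture sum converted from per-beam to integrated units. A minimal sketch, assuming square pixels; the 7.2" pixel scale is inferred from PPBEAM ~ 23.8 for a 33" (0.00917 deg) beam, not taken from the pipeline itself:

import numpy as np

def ppbeam_and_flux(cutout, bmaj_deg, bmin_deg, pixscale_arcsec=7.2):
    """Pixels per Gaussian beam, and the aperture sum divided by it
    (i.e. the PPBEAM and SUM/PPBEAM entries in the stats above)."""
    fwhm_to_area = np.pi / (4 * np.log(2))    # ~1.1331, Gaussian beam area / (FWHMmaj * FWHMmin)
    ppbeam = fwhm_to_area * (bmaj_deg * 3600) * (bmin_deg * 3600) / pixscale_arcsec**2
    aperture_sum = np.nansum(cutout)          # the 'Sum' entry
    return ppbeam, aperture_sum / ppbeam

# ppbeam_and_flux(cutout, 0.00916667, 0.00916667) -> (~23.8, Sum / 23.8)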
Deconvolve and Epochs
I've spent a large portion of the last week working on the deconvolver. I found previously that a reconvolved map does a better job of restoring flux than the straight-up deconvolved map for point sources / pointing observations. However, the same update broke the regular mapping modes, leading to horrible instability in the mapping routines for large maps such as W5. Curiously, it seems that the aspect that breaks is the weighting; somehow the noise drops precipitously in certain bolometers, leading to extremely high weights. Perhaps they somehow dominate the PCA subtraction and therefore have all their noise removed? Either way, there are a few large-scale changes that need to be made:
- Since Scaling and Weighting are now done on a whole-timestream basis, we should only map single epochs at once and coadd them after the fact. This approach will also help relieve RAM strain. Since it appears that individual observations are now reasonably convergent with the proper treatment of NANs in the deconvolution scheme, it should be possible to take any individual map and coadd it in a reasonable way.
- Bolometers with bad weights need to be thrown out. Alternatively, and more appropriately, I need to discover WHY their weights are going bad.
We also need to explore different weighting schemes.
- 1/Variance over whole timestream (current default)
- 1/Variance on a per-scan basis (previous default) [based on PSDs]
- Minimum Chi2 with Astrophysical Model (??)
- Min Chi2 on a per-scan basis?
Because of the extensive testing this will require, it is really becoming essential that we develop an arbitrary map creation & testing routine.
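As a reference point for that testing, here is a minimal sketch of the first two weighting schemes in the list above (array shapes and names are hypothetical, not the pipeline code). Note that scheme 1 is exactly where a bolometer with essentially no sky response, and therefore a tiny variance, ends up with an enormous weight unless it gets flagged.

import numpy as np

def weights_whole_timestream(timestream):
    """Scheme 1: one weight per bolometer, 1/variance over the whole timestream.
    timestream has shape (nbolos, nsamples)."""
    return 1.0 / np.nanvar(timestream, axis=1)                 # shape (nbolos,)

def weights_per_scan(timestream, scan_bounds):
    """Scheme 2: 1/variance per bolometer per scan (previously estimated from PSDs).
    scan_bounds is a list of (start, stop) sample indices."""
    w = np.empty((timestream.shape[0], len(scan_bounds)))
    for j, (lo, hi) in enumerate(scan_bounds):
        w[:, j] = 1.0 / np.nanvar(timestream[:, lo:hi], axis=1)
    return w                                                   # shape (nbolos, nscans)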
Weird problem
I'm remapping everything, and there's a really strange situation in ic1396... only one field has a source that doesn't have rotation problems; every other observation is clearly improperly rotated. The weird thing is that it's NOT the one you'd expect from the information below:

readcol,'/Volumes/disk2/sliced/infiles/ic1396_infile.txt',filelist,format='A'
for i=0,6 do begin & ncdf_varget_scale,filelist[i],'array_params',ap & print,filelist[i],ap & endfor

/Volumes/disk2/sliced/ic1396/070911_o19_raw_ds5.nc      7.70000  31.2000  70.7000  0.00000  0.00000
/Volumes/disk2/sliced/ic1396/070912_o26_raw_ds5.nc      7.70000  31.2000  84.0000  0.00000  0.00000
/Volumes/disk2/sliced/ic1396_d/070913_o19_raw_ds5.nc    7.70000  31.2000  84.0000  0.00000  0.00000
/Volumes/disk2/sliced/ic1396_l/070913_o17_raw_ds5.nc    7.70000  31.2000  84.0000  0.00000  0.00000
/Volumes/disk2/sliced/ic1396_r/070913_o18_raw_ds5.nc    7.70000  31.2000  84.0000  0.00000  0.00000
/Volumes/disk2/sliced/ic1396_u/070913_o20_raw_ds5.nc    7.70000  31.2000  84.0000  0.00000  0.00000

One of these things is not like the others... but the field without rotation problems is 070913_o18_raw_ds5.nc, not /Volumes/disk2/sliced/ic1396/070911_o19_raw_ds5.nc.
Progress, but still ds1-ds5 issues
ds1 and ds5 agree pretty well with the recent upgrades to delining and deconvolution. However, there are still counterexamples, e.g. 101208_o13, in which ds5 < ds1:
The 'fitted' values agree better than the 'measured' values now that NANs are treated properly. I spent a few hours today trying to figure out whether weighting can explain the difference between ds1 and ds5; it appears to make up for most of it, so I'm doing some more experiments. Why is there so much parameter space? Why can't weighting just work? It doesn't....

I also wasted a few hours trying to write a python drizzling algorithm, which I unfortunately couldn't manage without resorting to an inefficient for loop.

Finally got some minor results. It really looks like there is a trend pushing up the recovered flux (i.e. higher volts/Jy) for ds5 over ds1. There is a discrepancy between map types for ds1 but not for ds5, which is actually backwards from what I would have expected, since ds1 will get better-sampled maps.
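For context, a minimal sketch of this kind of loop-based regridding (nearest-pixel weighted accumulation rather than a true drizzle; all names are made up for illustration):

import numpy as np

def regrid_naive(x, y, values, weights, shape):
    """Accumulate weighted samples onto the nearest output pixel, then normalize.
    x, y are sample positions in output-pixel coordinates."""
    num = np.zeros(shape)
    den = np.zeros(shape)
    for xi, yi, vi, wi in zip(x, y, values, weights):
        ix, iy = int(round(xi)), int(round(yi))
        if 0 <= iy < shape[0] and 0 <= ix < shape[1]:
            num[iy, ix] += wi * vi
            den[iy, ix] += wi
    with np.errstate(invalid="ignore", divide="ignore"):
        return num / den     # pixels with no samples come out NaN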
Luckily, the difference between peak fitting and "measuring" results in very small (insignificant) changes to the calibration curve (recall fitting is direct gaussian fitting; 'measuring' is using the gaussian-fit width and total flux in an ellipse to infer a peak assuming a point source):
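A minimal sketch of the 'measuring' approach just described, i.e. inferring the peak a point source would have from the aperture sum and the Gaussian-fit width (function and argument names are hypothetical, not the pipeline routine):

import numpy as np

def measured_peak(aperture_sum, fwhm_maj_pix, fwhm_min_pix):
    """Peak of a 2D Gaussian whose integral equals aperture_sum (widths in pixels)."""
    sigma_maj = fwhm_maj_pix / (2 * np.sqrt(2 * np.log(2)))
    sigma_min = fwhm_min_pix / (2 * np.sqrt(2 * np.log(2)))
    gauss_area_pix = 2 * np.pi * sigma_maj * sigma_min   # integral of a unit-peak Gaussian
    return aperture_sum / gauss_area_pix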
Since this work has all been done for the 'bootstrapping' observations that are supposed to tell us whether different map sizes are compatible, I have included the map sizes in the diagrams now. However, to really understand the ds1/ds5 difference, there are much better data sets, which I'm now reprocessing using the new and improved methods. (The whole BGPS is also processing with the new methods in the background, though since the methods are being updated live there may be more changes and it will have to be re-run.... Initial looks at W5 are BAD, but L030 is GOOD, bordering on amazing.)
Careful comparison of ds1 and ds5
I looked very closely at the timestream and maps of 101208_o11 and had a pretty hard time figuring out why the data were different, but it looked like the data really did differ on a point-by-point basis (according to pyflagger). The only conclusion I was able to draw is that the scaling must be off. I realized that the scaling was being done before delining. I moved scaling from readall_pc to premap, and it brought at least this one source into agreement. Time to run ds1-ds5 comparisons again! (this means that ds1 data MUST have deline run on it, but ds5 data doesn't really need it) Here are examples of ds1 and ds5 timestreams, with and without scaling, and ds1 with and without delining:
Trying to bootstrap
I've concluded, based on previous posts http://bolocam.blogspot.com/2011/02/downsampling-why-is-dec-2010-different.html, http://bolocam.blogspot.com/2011/03/revisiting-calibration-yet-again.html, and http://bolocam.blogspot.com/2011/03/workaround-for-individual-maps.html, that ds5 is a problem primarily for undersampled images, i.e. those taken in the normal mapping mode. This makes bootstrapping a bit tricky.
There are two options:
- Map Uranus and AFGL 4029 both in Volts, and figure out what flux density AFGL 4029 must have to lie on the Uranus calibration curve.
- Map Uranus and compute a calibration curve, apply that calibration curve to AFGL 4029, and then compare derived flux densities.
Both have the major problem that the individual AFGL 4029 maps will forcibly be undersampled if I use ds5 data (which is normally OK, according to the first paragraph). In the second case, it is possible to co-add the images and get around the under-sampling issue, while in the first case it is not because of the dependence on total loading (MEANDC).
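To make option 2 concrete, a minimal sketch of the idea, assuming the calibration curve is Volts-per-Jy as a function of total loading (MEANDC) and that a low-order polynomial is an adequate form; both are illustrative assumptions:

import numpy as np

def fit_cal_curve(uranus_volts, uranus_jy, uranus_meandc, deg=1):
    """Fit Volts-per-Jy as a polynomial in total loading, using Uranus observations."""
    return np.polyfit(uranus_meandc, uranus_volts / uranus_jy, deg)

def apply_cal_curve(coeffs, source_volts, source_meandc):
    """Convert a measured peak in Volts to Jy with the fitted curve."""
    return source_volts / np.polyval(coeffs, source_meandc)

# coeffs = fit_cal_curve(uranus_volts, uranus_jy, uranus_meandc)
# afgl4029_jy = apply_cal_curve(coeffs, afgl4029_volts, afgl4029_meandc)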
The real problem is that the whole goal of these observations was to compare the different observing methods and see if they agree (1x1, 3x1, pointing, etc.) since the pointing-style observations were used to calibrate the others. But if the 1x1s are just straight-up unreliable, how can we do the comparison? I think the co-added AFGL 4029 is the only option, but then how do I test if it's correct? It would be really nice to have AFGL 4029 observed with both scan types...
Alright, onto the data. After last week's fix of the bad bolos, I really hope ds1 and ds5 agree. However, first glance at the cal curves says they don't. ds1 and ds2 agree, but ds5 is different.
After checking them out in ds9:

ds9 *ds[15]*13pca*_map10.fits -scale limits -1 1000 -log -cmap hsv -match colorbars -match scales -match frames wcs &

it appears that the _mask_ data is all... wrong, somehow. That's OK, I want to discard the mask data anyway, so I'm happy to NOT spend time debugging it.
Even after careful examination showing that the fits look good - and noting that the fluxes look pretty much the same - the calibration curves still look rather different. Unfortunately I had to spend 3 hours debugging IDL plotting commands; I want to show the fits each time and save them as PostScript files. What does "xyouts" with "/device,/normal" do? I thought that should plot x,y text at the coordinates specified in the plot window... but no, that is JUST /normal.
Anyway, realized that centroid_map treated NANs as zero. Added ERR keyword (with a reasonable estimate of the error) in centroid_map to ignore NANs. It looks like improper treatment of NANs is responsible for a lot of the scatter seen in the calibration plots.
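For illustration, a NaN-aware centroid of the kind centroid_map should now compute (a Python sketch, not the IDL routine itself):

import numpy as np

def centroid_ignore_nans(image):
    """Intensity-weighted centroid (x, y), ignoring NaN pixels rather than
    treating them as zero."""
    yy, xx = np.indices(image.shape)
    good = np.isfinite(image)
    total = image[good].sum()
    return (xx[good] * image[good]).sum() / total, (yy[good] * image[good]).sum() / total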
There is a substantial difference between the "fitted" peak and the "measured" peak (the latter computed by taking the sum of the pixels divided by the area of the fitted gaussian). It looks like the "measured" version is more robust, at first glance. However, unfortunately, for 101208_o11, the difference between ds1 and ds5 exists in both quantities. I will have to examine timestreams now... ARGH.
Well, the timestreams show... that indeed the model is lower in ds1, but not why. The "remainder" (new_astro; the stuff that never gets incorporated into the model but DOES get incorporated into the map) appears to be the same in both. Similarly, there is little to no flux in the PCA atmosphere, so it's not simply being cleaned out. Where is the flux going or coming from?
A workaround for individual maps?
I closely examined the timestreams of 101208_ob7 as I said I would yesterday. Unfortunately, all I can do is describe the symptoms: the first deconvolution model looks good, though it isn't quite as wide as the true source (this should be OK; it is an iterative method, after all). In the second iteration, though, the deconvolution model is even smaller and lower amplitude... and it goes on like that.
Not deconvolving results in a healthy-looking clean map - pretty much what you expect and want to see.
This implies that somehow removing an incomplete deconvolved model leads to more of the source being included in the 'atmosphere' than would have been included with no model subtraction at all. I'm not sure how this is possible. In fact... I'm really quite sure that it is not.

The workaround is to only add positive changes to the model. This should 'definitely work' but may be non-convergent, and it assumes that the model never has anything wrong with it at any iteration. I have demonstrated that this works nicely for the two Uranus observations I tested on, but now I have to run the gamut of tests.... The first (very obvious) problem is that the background is now positive, which is dead wrong. This workaround is not viable.

Alright, so what next? I've described the symptoms, and I've argued that they can't occur... A closer look shows that new_astro is not being incorporated into astro_model at the second iteration. Why? AHA! Pyflagger + find_all_points reveals the problem!
Map value: 16.939728
Weighted average: 17.476323
Unweighted Average: 524.573136

scan, bolo, time:     mapped       astro      flags     weight    scale
 3,  22, 12:        8.380408  13.561113   0.000000   0.025132  1.000000
 4, 124, 23:      822.005327  13.561113   0.000000   0.000038  1.118012
 4,  21, 38:      719.408983  13.561113   0.000000   0.000037  0.946721
 5,  20,  7:        4.470616  13.561113   0.000000   0.013303  1.400000
 5, 119, 23:      882.508303  13.561113   0.000000   0.000033  0.926887
 5, 100, 35:      327.007750  13.561113   0.000000   0.000074  1.184397
 5, 106, 38:      162.562098  13.561113   0.000000   0.000704  0.970000
 6, 116, 27:      779.075640  13.561113   0.000000   0.000033  0.891768
 8, 112,  3:      235.557390  13.561113   0.000000   0.000147  0.947130
 9,   3, 14:      966.721773  13.561113   0.000000   0.000032  1.166292
 9, 109, 41:      139.753656  13.561113   0.000000   0.000753  1.075269
10, 104,  8:      641.121935  13.561113   0.000000   0.000050  0.927827
10, 105, 24:        4.323228  13.561113   0.000000   0.032759  0.019022
10,  32, 36:      847.646990  13.561113   0.000000   0.000034  1.099406
11,  36,  9:      834.757586  13.561113   0.000000   0.000038  1.184751
11,  76, 37:      566.851891  13.561113   0.000000   0.000040  1.111000
12,  77, 13:      834.603090  13.561113   0.000000   0.000034  1.128464
12,  44, 44:      335.465654  13.561113   0.000000   0.000195  2.165775
13,  26, 17:       50.423143  13.561113   0.000000   0.004826  0.829932
13,  75, 29:      724.884676  13.561113   0.000000   0.000042  0.923077
14,  49, 21:      797.618990  13.561113   0.000000   0.000038  1.091918
14,  29, 33:      743.856012  13.561113   0.000000   0.000035  1.050360
15,  33, 13:      660.670099  13.561113   0.000000   0.000031  0.832180
15,  53, 25:      604.174286  13.561113   0.000000   0.000047  0.889922
15,  88, 40:        4.626476  13.561113   0.000000   0.008241  0.191489
17,  64, 20:      778.950533  13.561113   0.000000   0.000037  1.233108
18,  68, 30:      686.048136  13.561113   0.000000   0.000040  1.387283
Note that the lowest points have the highest weights. They DEFINITELY shouldn't. What's wrong with them? Apparently they have NO sensitivity to the sky! What?! There were a bunch of bad bolos in Dec2010 that weren't flagged out... I wonder if that problem persists to other epochs. Still, why does it only affect pointing observations? Looking at the power spectra... the large-timescale stuff becomes less dominant when scans are longer, but the noisy spectra are still clearly noise-only. How odd. Dropped to 112 good bolos from 134. That is much more believable. Have to go back and fix Dec09 data too...

Even after fixing the bad bolos, the model drops with iteration number. Why why why? Well, looking at deconv_map, I've always returned the truly deconvolved version, not the reconvolved... maybe the reconvolved really is better? Again, this will have to be extensively tested, but it certainly gets rid of the obvious/dominant error that the model kept dropping off.

However, FINALLY, based on how ridiculously good the reconv-deconvolved map looks, I think I'm ready to do the extensive pipeline tests. So, 10dec_caltest has been started up with all of the new bolo_params applied and the changes in place to deconv_map... let's see what happens.
After that runs, I'll have to re-run the fit_and_plot routines
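As a final aside on the bad-weight bolometers: the failure mode is that a bolometer with no sky response has a tiny variance and therefore an enormous 1/variance weight. A minimal sketch of a guard that would catch them; the 10x-median cut is an arbitrary illustration, not what bolo_params actually does:

import numpy as np

def flag_high_weight_bolos(weights, threshold=10.0):
    """Boolean mask of bolometers whose weight is suspiciously high (or not finite)."""
    med = np.median(weights[np.isfinite(weights)])
    return ~np.isfinite(weights) | (weights > threshold * med)

# e.g. weights[flag_high_weight_bolos(weights)] = 0.0   # drop them from the map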