Haloes are when images look like this: [halo image]
instead of this, as they should: [clean image]
Things to try:
- Pass /return_deconv to deconv_map
- Pass /linear to deconv_map
- Disable deconvolve - deconvolve=0
In l123 & l169, at least, /return_deconv worked
I closely examined the timestreams of 101208_ob7 as I said I would yesterday. Unfortunately, all I can do is describe the symptoms: the first deconvolution model looks good, though it isn't quite as wide as the true source (this should be OK; it is an iterative method, after all). In the second iteration, though, the deconvolution model is even smaller and lower amplitude... and it goes on like that.
Not deconvolving results in a healthy-looking clean map - pretty much what you expect and want to see.
This implies that somehow removing an incomplete deconvolved model leads to more of the source being included in the 'atmosphere' than would have been included with no model subtraction at all. I'm not sure how this is possible. In fact... I'm really quite sure that it is not.

The workaround is to only add positive changes to the model. This should 'definitely work', but it may be non-convergent and it assumes that the model is never wrong at any iteration. I have demonstrated that it works nicely for the two Uranus observations I tested on, but now I have to run the gamut of tests... and the first (very obvious) problem is that the background is now positive, which is dead wrong. This workaround is not viable.

Alright, so what next? I've described the symptoms and argued that they can't occur... A closer look shows that new_astro is not being incorporated into astro_model at the second iteration. Why? AHA! Pyflagger + find_all_points reveals the problem; the dump is below.
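(First, for the record, the abandoned positive-only update amounted to something like this. This is a minimal Python sketch of the idea only; the pipeline itself is IDL and the function name is mine.)

```python
import numpy as np

def update_model_positive_only(astro_model, new_astro):
    """Positive-only model update (illustration of the abandoned idea).

    Only pixels where the new deconvolved model exceeds the current one
    are allowed to change; decreases are thrown away.  This stops the
    model from shrinking with iteration, but it also forces the
    background to stay positive, which is why it was dropped.
    """
    delta = new_astro - astro_model
    return astro_model + np.where(delta > 0, delta, 0.0)
```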
Map value: 16.939728   Weighted average: 17.476323   Unweighted average: 524.573136

scan, bolo, time:   mapped      astro      flags     weight    scale
 3,  22, 12:          8.380408  13.561113  0.000000  0.025132  1.000000
 4, 124, 23:        822.005327  13.561113  0.000000  0.000038  1.118012
 4,  21, 38:        719.408983  13.561113  0.000000  0.000037  0.946721
 5,  20,  7:          4.470616  13.561113  0.000000  0.013303  1.400000
 5, 119, 23:        882.508303  13.561113  0.000000  0.000033  0.926887
 5, 100, 35:        327.007750  13.561113  0.000000  0.000074  1.184397
 5, 106, 38:        162.562098  13.561113  0.000000  0.000704  0.970000
 6, 116, 27:        779.075640  13.561113  0.000000  0.000033  0.891768
 8, 112,  3:        235.557390  13.561113  0.000000  0.000147  0.947130
 9,   3, 14:        966.721773  13.561113  0.000000  0.000032  1.166292
 9, 109, 41:        139.753656  13.561113  0.000000  0.000753  1.075269
10, 104,  8:        641.121935  13.561113  0.000000  0.000050  0.927827
10, 105, 24:          4.323228  13.561113  0.000000  0.032759  0.019022
10,  32, 36:        847.646990  13.561113  0.000000  0.000034  1.099406
11,  36,  9:        834.757586  13.561113  0.000000  0.000038  1.184751
11,  76, 37:        566.851891  13.561113  0.000000  0.000040  1.111000
12,  77, 13:        834.603090  13.561113  0.000000  0.000034  1.128464
12,  44, 44:        335.465654  13.561113  0.000000  0.000195  2.165775
13,  26, 17:         50.423143  13.561113  0.000000  0.004826  0.829932
13,  75, 29:        724.884676  13.561113  0.000000  0.000042  0.923077
14,  49, 21:        797.618990  13.561113  0.000000  0.000038  1.091918
14,  29, 33:        743.856012  13.561113  0.000000  0.000035  1.050360
15,  33, 13:        660.670099  13.561113  0.000000  0.000031  0.832180
15,  53, 25:        604.174286  13.561113  0.000000  0.000047  0.889922
15,  88, 40:          4.626476  13.561113  0.000000  0.008241  0.191489
17,  64, 20:        778.950533  13.561113  0.000000  0.000037  1.233108
18,  68, 30:        686.048136  13.561113  0.000000  0.000040  1.387283
Note that the lowest points have the highest weights. They DEFINITELY shouldn't. What's wrong with them? Apparently they have NO sensitivity to the sky! What?! There were a bunch of bad bolos in Dec2010 that weren't flagged out... I wonder whether that problem persists in other epochs. Still, why does it only affect pointing observations? Looking at the power spectra... the large-timescale stuff becomes less dominant when scans are longer, but the noisy spectra are still clearly noise-only. How odd. Flagging them drops the count from 134 to 112 good bolos, which is much more believable. I'll have to go back and fix the Dec09 data too...

Even after fixing the bad bolos, the model drops with iteration number. Why why why? Well, looking at deconv_map, I've always returned the truly deconvolved version, not the reconvolved one... maybe the reconvolved really is better? Again, this will have to be extensively tested, but it certainly gets rid of the obvious/dominant error that the model kept dropping off.

FINALLY, based on how ridiculously good the reconv-deconvolved map looks, I think I'm ready to do the extensive pipeline tests. So, 10dec_caltest has been started up with all of the new bolo_params applied and the changes to deconv_map in place... let's see what happens.
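Two changes came out of this, sketched below in Python (the real code is IDL; the function names and the correlation threshold are mine, not the pipeline's): flag bolometers that show no response to the sky, and build the model from the reconvolved rather than the raw deconvolved image.

```python
import numpy as np
from scipy.signal import fftconvolve

def flag_dead_bolos(timestreams, sky_template, min_corr=0.1):
    """Hypothetical check for bolometers with no sky response.

    timestreams: (n_bolo, n_time); sky_template: (n_time,), e.g. the
    array-average signal.  Bolometers that don't track the template are
    noise-only and must get zero weight so they can't dominate the
    weighted average the way the low-'scale' points in the dump did.
    """
    corr = np.array([np.corrcoef(ts, sky_template)[0, 1] for ts in timestreams])
    return np.abs(corr) < min_corr

def reconvolved_model(deconvolved, kernel):
    """Model built from the deconvolved image re-convolved with the
    kernel, i.e. the smooth 'reconvolved' version rather than the raw
    deconvolution; this is the deconv_map change being tested."""
    return fftconvolve(deconvolved, kernel, mode='same')
```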
After that runs, I'll have to re-run the fit_and_plot routines
The fundamental problem at this point is making the pipeline run faster. At current speeds, with undownsampled data, it may take ~days to process a single map. Ideas for faster processing:
First comment: delining has no effect on downsampled data. At least for the 0709 epoch, there were NO lines AT ALL in the data; from 0 to 5 Hz it was just empty. So we don't have to worry about that... the problem only affects fully-sampled data. (A sketch of what I mean by delining follows the examples below.)

Then, on to map comparisons. Curiously, the noise levels don't drop after delining; they actually go up a bit, perhaps because of the effect on PCA cleaning. However, flux levels in the sources go up by 0-10%, and as usual the flux change varies from field to field for no obvious reason.

Example 1: A pointing field. The source is ~2% brighter in the delined version, but otherwise the match between the two is nearly perfect.
Example 2: A bigger map, where the flux recovery is much greater when delining, but the background levels are also higher.
The captions are pretty much the same as for the previous post, but this is a larger field and the effects are more serious.
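For reference, 'delining' here means pulling narrow spectral lines out of each bolometer timestream before cleaning. A minimal sketch of the idea (mine, not the pipeline's implementation; the threshold is arbitrary):

```python
import numpy as np

def deline(timestream, nsigma=5.0):
    """Rough sketch of a delining step (not the pipeline's actual code).

    Flag narrow frequency bins whose amplitude sticks out above the
    median spectrum by nsigma robust sigma, replace them with the median
    amplitude and a random phase, and transform back.  On the 0.1 s
    downsampled data there were no such lines below 5 Hz, so this step
    only matters for full-rate timestreams.
    """
    ft = np.fft.rfft(timestream)
    amp = np.abs(ft)
    med = np.median(amp)
    sigma = 1.4826 * np.median(np.abs(amp - med)) + 1e-30
    lines = amp > med + nsigma * sigma
    ft[lines] = med * np.exp(1j * np.random.uniform(0, 2 * np.pi, lines.sum()))
    return np.fft.irfft(ft, n=timestream.size)
```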
Background: Downsampling is performed using Old Pipeline code called process_ncdf. All BGPS data were downsampled by a factor of 5 before mapping because of data size concerns. I did this 'blindly' (i.e., just accepted that I should) because James said I could. However, I had previously noted that the pointing files could not be done with downsampled data because the beams 'looked funny' or something along those lines; it may also have been a simple map-sampling issue in which not all pixels were filled when mapping downsampled data. Anyway, I decided to go back and quantify the effects. The plots below are from the single "pointing-style" observation of OMC1 from 2009. The units are volts. 'ds1' indicates sampling every 0.02 seconds, 'ds5' indicates sampling every 0.1 seconds. The scan rate was 120"/s.
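The raw numbers already hint at the problem. The arithmetic, using the scan rate and sample intervals quoted above (Python, as a worked example):

```python
# Sample spacing along the scan for the two samplings (values from above):
scan_rate = 120.0              # arcsec / s
dt_ds1, dt_ds5 = 0.02, 0.10    # s / sample
spacing_ds1 = scan_rate * dt_ds1   # = 2.4 arcsec between samples
spacing_ds5 = scan_rate * dt_ds5   # = 12.0 arcsec between samples
# Assuming the circular-fit beam size below (~30) is in arcsec, ds5 gives
# only ~2-3 samples per beam FWHM along the scan direction, so beam
# broadening and lost small-scale power are not surprising.
```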
The beam sizes were measured from the autocorrelation maps. However, because there is structure on many scales in this map, I had to use a rather ad-hoc method to remove the correlated structure: I fitted a Gaussian to the elliptical northwest-southeast structure, removed it, then fitted a Gaussian to the remaining circular thing in the center, which is approximately the beam. If I fit the "beam" Gaussian with an ellipse, I get:
Beamsize 1_1: 36.15, 26.23
Beamsize 1_5: 48.39, 30.21
With a circle:
Beamsize 1_1: 29.51
Beamsize 1_5: 35.31
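The ad-hoc two-stage fit, roughly, in Python (a sketch of the procedure, not the code actually used; the function names and initial-guess parameters are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, theta, offset):
    """Elliptical 2D Gaussian evaluated on flattened (x, y) coordinates."""
    x, y = coords
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return offset + amp * np.exp(-0.5 * (xr / sx) ** 2 - 0.5 * (yr / sy) ** 2)

def beam_from_autocorr(acorr, p0_wide, p0_beam):
    """Two-stage fit: remove the broad NW-SE structure, then fit the
    residual central peak, whose width approximates the beam.
    p0_wide / p0_beam are initial-guess parameter tuples for gauss2d."""
    ny, nx = acorr.shape
    y, x = np.mgrid[0:ny, 0:nx]
    coords = (x.ravel(), y.ravel())
    p_wide, _ = curve_fit(gauss2d, coords, acorr.ravel(), p0=p0_wide)
    residual = acorr.ravel() - gauss2d(coords, *p_wide)
    p_beam, _ = curve_fit(gauss2d, coords, residual, p0=p0_beam)
    return p_beam   # sx, sy (in pixels) characterize the beam
```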
The ds1 and ds5 images compared.
The PSDs of the two images (on identical grids). Note that ds5 loses power at small spatial scales, 50% at 40"!
The pixel-pixel plot with a fit that shows a 10% overall flux loss (best-fit).
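The pixel-pixel comparison is just a straight-line fit; a minimal sketch (hypothetical helper, not the plotting code actually used):

```python
import numpy as np

def pixel_pixel_slope(map_ds1, map_ds5):
    """Linear fit of ds5 pixel values against ds1 pixel values on the
    same grid; a best-fit slope near 0.9 corresponds to the ~10% overall
    flux loss quoted above.  Only finite pixels are used."""
    good = np.isfinite(map_ds1) & np.isfinite(map_ds5)
    slope, intercept = np.polyfit(map_ds1[good], map_ds5[good], 1)
    return slope, intercept
```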
I'm running 0, 1, 2, 3, 5, 7, 10, 16, and 19 PCA-component, 51-iteration maps of Gem OB1 with deconvolution. No clue when they'll be done because they're at the end of a long queue. The next (important!) step is to re-run the simulations with linear source sizes but with different numbers of PCA components, different kernel sizes, etc.... there is a LOT of parameter space to cover.
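For bookkeeping, the queued grid looks like this (an illustrative sketch only; the parameter names are mine, not the actual batch setup):

```python
# Gem OB1 mapped once per PCA-component count, 51 iterations, deconvolution on.
n_pca_values = [0, 1, 2, 3, 5, 7, 10, 16, 19]
jobs = [dict(field='GemOB1', n_pca=n, n_iter=51, deconvolve=True)
        for n in n_pca_values]
```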
Despite a slew of alignment errors, it appears that the alignment for MOST fields turns out OK using Method 3 of the pixel-shift code, even though the signal-to-noise is VERY low in a lot of fields. 070724_o38 does not come up with a good fit, for a very good reason: there appears to be no signal at all. 070907_o20 is a problem. The offset was 27 pixels, which is too large, but nonetheless correct; I had to institute the plane fitter at an earlier stage to get it to work. However, the biggest problem is that the SCUBA source aligns with 070907_o20 but not with the rest of the maps, so I needed to re-fit everything. That was a BIG mistake; we need to check carefully for it in other fields.
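For context, the kind of shift estimate involved is a map-to-map cross-correlation. A generic sketch (this is not necessarily what 'Method 3' of the pixel-shift code does, just an illustration of the idea):

```python
import numpy as np

def estimate_shift(image, reference):
    """Generic integer pixel-shift estimate via FFT cross-correlation.

    Both maps must be on the same grid; NaNs are zeroed after mean
    subtraction.  Large formal offsets (like the 27-pixel 070907_o20
    shift) deserve a by-eye check even when the peak is real.
    """
    a = np.nan_to_num(image - np.nanmean(image))
    b = np.nan_to_num(reference - np.nanmean(reference))
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # wrap to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```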
I ran the v0.7 reductions with deconvolution on for 50 iterations. I had cut out deconvolution originally because of the funky noise maps, but that was partly an error on my part. There is also an issue with bright sources being largely left over in the noise maps. The deconvolver does MUCH better at filtering out the fuzzy atmospheric emission, so I want to use it. It leaves some flux from bright point sources behind, though, so I decided to try to make the deconvolution kernel smaller to see if that recovers more of the pointlike flux.
0709 11-13 needed to be 83.88 degrees; they're all fixed now. I flagged the l060 070911 maps. I'm not convinced that was weather: there were fluctuations of up to 2400 Jy! Maybe a cloud would do that, though.
Problems in the latest run:
- l359 - missing files?
  Reading files from /scratch/sliced/INFILES/l359\_infile.txt
  FIELD v0.7\_l359 BEGUN at Fri Jan 16 20:11:34 2009
  MRDFITS: File access error
  % HEULER: ERROR - First parameter must be a FITS header or astrometry structure
- l012 - missing files?
  % READFITS: ERROR - Unable to locate file /scratch/adam\_work/l012/060614\_o10\_raw\_ds5.nc\_indiv13pca\_map01.fits
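A trivial guard would catch these before the run starts. A sketch (hypothetical helper, not part of the pipeline, assuming the infile list is one path per line):

```python
import os

def check_infile(infile_list):
    """Verify every file named in an infile list exists before launching
    a reduction, so missing-file problems like the l359/l012 ones above
    surface up front instead of as MRDFITS/READFITS errors mid-run."""
    with open(infile_list) as f:
        files = [line.strip() for line in f if line.strip()]
    missing = [fn for fn in files if not os.path.exists(fn)]
    for fn in missing:
        print('MISSING:', fn)
    return missing
```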