I removed sigmadeglitch from deline, and now it's permanently gone. It might be worth exploring re-inserting sigmadeglitch somewhere. My higher-order polynomial fit is creating deeper bowls around sources, which is a big problem, but if I use a lower order I get the bad streaks. What's the best way to deal with this? I'm thinking perhaps only doing the polysub on the second iteration (i.e. after a source model has been subtracted). Delining runs into some issues now, though, because not all frequencies are sampled (?).
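The second-iteration polysub idea amounts to fitting the baseline only to samples a source model says are off-source, so bright sources can't drag the fit down and dig bowls. A minimal sketch of that masking step (first order only, all names hypothetical; the real fit is higher order):

```python
# Sketch of masked baseline subtraction: fit the baseline only to
# off-source samples, so the fit cannot dig a "bowl" around sources.
# Hypothetical helper, not the pipeline's actual polysub routine.

def masked_linear_baseline(timestream, source_mask):
    """Least-squares line fit using only samples where source_mask is False."""
    xs = [i for i, m in enumerate(source_mask) if not m]
    ys = [timestream[i] for i in xs]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    # Evaluate the baseline over the FULL timestream, sources included
    return [intercept + slope * i for i in range(len(timestream))]

# Usage: a linear drift plus a bright "source" bump in the middle
drift = [0.5 * i for i in range(100)]
data = [d + (50.0 if 45 <= i < 55 else 0.0) for i, d in enumerate(drift)]
mask = [45 <= i < 55 for i in range(100)]
baseline = masked_linear_baseline(data, mask)
residual = [d - b for d, b in zip(data, baseline)]
```

With the source masked, the fitted baseline recovers the drift and the source flux survives subtraction intact; fitting without the mask would pull the baseline up through the source and carve a negative bowl around it.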
Articles by Adam (adam.g.ginsburg@gmail.com)
Wow - FFT failure
On my huge Cygnus run, the FFT keeps failing in the deline code. It's actually pretty impressive: there are 844604 points in the timestream, and the prime factorization of 844604 is 2^2 * 211151. This is just damned bad luck, because an FFT is extremely inefficient when the length has a large prime factor - the fast recursion can't split that stage. What's the best workaround? .... 9 AM update: I've rewritten the deliner to work on a scan-by-scan basis. It's possible that the delining failed in the past because it was essentially removing a constant amplitude at the line frequencies across the observation (or combined observations!), which is not likely to be true.
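The prime-factor pathology can be checked directly. Besides the scan-by-scan split, the other standard workaround is zero-padding to a nearby "smooth" length (what `scipy.fft.next_fast_len` does); a minimal sketch, with hypothetical helper names:

```python
# Why N = 844604 is a pathological FFT length: the Cooley-Tukey
# recursion only speeds things up when N splits into small primes,
# and 844604 = 2^2 * 211151 leaves one enormous prime-length stage
# that degenerates toward an O(N^2) direct DFT.

def prime_factors(n):
    """Trial-division factorization (fine for timestream-sized n)."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def next_smooth(n, primes=(2, 3, 5)):
    """Smallest length >= n whose only prime factors are in `primes`;
    zero-padding the timestream to this length keeps the FFT fast."""
    def is_smooth(m):
        for p in primes:
            while m % p == 0:
                m //= p
        return m == 1
    while not is_smooth(n):
        n += 1
    return n

bad_factors = prime_factors(844604)  # the Cygnus timestream length
padded = next_smooth(844604)         # a nearby 5-smooth FFT length
```

The trade-off is that padding assumes the signal tolerates the implied discontinuity (or a window/taper), whereas the scan-by-scan rewrite also fixes the constant-amplitude assumption mentioned above.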
RMS fails to shrink
Spent a while today working on these plots after rejecting every obvious bad point:

/scratch/adam_work/plots/sourcecompare_0_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_10_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_1_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_2_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_3_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_4_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_5_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_6_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_7_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_8_rawcsoptg_0707.ps
/scratch/adam_work/plots/sourcecompare_9_rawcsoptg_0707.ps
/scratch/adam_work/plots/models_rawcsoptg_0707.ps

The net result is, I still don't have a nice small RMS offset.

Update 8/12/08: using RA/DEC mapping doesn't help.
Cleaning ideas
It's proving very difficult to get rid of glitches, so here are some more ideas:
Median filter the whole timestream (downsampled) at a resolution that will pull down the glitch peak. Subtract out the median-filtered timestream from the original, and look for outliers in that distribution: in principle, those should be glitches.
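The median-filter idea above can be sketched quickly (all names hypothetical; a real implementation would work on the downsampled timestream and tune the window to the glitch width):

```python
# Sketch: run a median filter wide enough that a glitch spike cannot
# pull it up, subtract it, and flag outliers in the residual
# distribution - in principle, those outliers are the glitches.
import math
import random
from statistics import median

def median_filter(ts, width):
    """Running median with a centered window (clipped at the edges)."""
    half = width // 2
    return [median(ts[max(0, i - half):i + half + 1]) for i in range(len(ts))]

def flag_glitches(ts, width=11, nsigma=5.0):
    smooth = median_filter(ts, width)
    resid = [t - s for t, s in zip(ts, smooth)]
    # Robust width estimate from the median absolute deviation (MAD)
    sigma = 1.4826 * median(abs(r) for r in resid)
    return [abs(r) > nsigma * sigma for r in resid]

# Usage: slow sky signal + noise, with a ~20 Jy glitch injected
random.seed(0)
ts = [math.sin(i / 20.0) + random.gauss(0, 0.1) for i in range(300)]
ts[150] += 20.0
flags = flag_glitches(ts)
```

The median follows the slow sky signal but ignores the single-sample spike, so the glitch stands out in the residual while real (extended) structure does not.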
Another option: subtract out the noise map AND the best-model map from the original timestream before trying to pull out additional astro model stuff. The map-to-timestream astro model can't include glitches because they're averaged over, but the sky-subtracted timestream DOES include glitches (no matter how many PCA components are removed) because glitches are NOT correlated across detectors. The weird thing is that this is effectively subtracting out exactly what was calculated from the sky subtraction - the question is whether subtracting it out BEFORE sky subtracting again gets you any benefit. If it was just noise, subtracting noise from noise is fine, but the "noise" does include some residual signal.
Glitches and errors
It's definitely important to get rid of the spikes and, more importantly, the exponential decay that follows them. The current function is pretty good, but possibly not good enough: there are situations in which the turnaround decay is obviously not dealt with well enough and results in high/low streaks. The glitches can be ~20 Jy, and the problem is that they sometimes show up in the middle of sources (e.g. in 060609_o26), which will badly distort fluxes. My ideal is still to use a median instead of a mean combination of data points into image points, but that seems to be impractical to implement. Other ideas...
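The median-combination idea is simple to state even if it's impractical in the real mapper; a toy sketch with hypothetical names shows both the payoff and the cost:

```python
# Sketch of per-pixel median (vs. mean) combination of timestream
# samples into map pixels. The practical obstacle: every sample
# hitting a pixel must be stored until the median can be taken,
# unlike a mean, which can be accumulated on the fly.
from collections import defaultdict
from statistics import median

def bin_samples(samples):
    """samples: iterable of (pixel, value) pairs -> (mean map, median map)."""
    per_pixel = defaultdict(list)
    for pix, val in samples:
        per_pixel[pix].append(val)
    mean_map = {p: sum(v) / len(v) for p, v in per_pixel.items()}
    median_map = {p: median(v) for p, v in per_pixel.items()}
    return mean_map, median_map

# A ~20 Jy glitch landing on a 1 Jy source: nine clean hits, one glitched
hits = [(0, 1.0)] * 9 + [(0, 21.0)]
mean_map, median_map = bin_samples(hits)
```

With one 20 Jy glitch among ten hits, the mean is pulled from 1.0 up to 3.0 Jy while the median stays at 1.0 - which is exactly why the median is the ideal, and why the memory cost of keeping all samples per pixel is what makes it impractical.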
Big run this weekend...
In case anyone is wondering why Milkyway is going really slowly, I'm mapping a 69-observation set of Cygnus. It ought to prove an interesting test of Milkyway's swap capacity, but other than that I doubt it will be useful. While the pointing is reasonably good at this point (30" still, but whatever), I haven't done ANY work on filtering out bad observations / flagging stuff in Cyg. Data massaging is going to be a long process; it would be great if I could do that instead of pointing stuff. Argh. One thing to note is that this file: /scratch/sliced/l078/070702_o33_raw_ds5.nc is a "_ds5.nc" but is NOT downsampled! Update: Mapped the individual files successfully and picked out the noisy ones. The overall map failed - just not enough memory to do a field that large. I split it up into two sets of 25 observations, plus I'll be mapping each L 70-L 90 field separately (I didn't get rid of noisy observations for this). I'm also remapping the individual observations with PSD flagging enabled to see how that works. For notes on the P Cyg observations, see the file /scratch/adam_work/texts/cygnus_for_pat.in
Images with artifacts
Here's the list from 0507:

Streaky offsets: /scratch/adam_work/g34.3/rawcsoptg050706_o47_raw_ds1.nc_indiv3pca_map00.fits

.. image:: http://3.bp.blogspot.com/_lsgW26mWZnU/SJtP9wKu4_I/AAAAAAAADNo/gmMEFkzCX-A/s320/050706_o47.png

Totally messed up (two sets of images many degrees apart): /scratch/adam_work/g34.3/rawcsoptg050713_o40_raw_ds1.nc_indiv3pca_map00.fits

070709_ob7:

.. image:: http://4.bp.blogspot.com/_lsgW26mWZnU/SJuIDL161GI/AAAAAAAADN4/D4mxFyx1Fr0/s320/070709_ob7_1730m130_peanut.png

070703_o48:

.. image:: http://4.bp.blogspot.com/_lsgW26mWZnU/SJuICzavE_I/AAAAAAAADNw/70Hl5930eU8/s320/070703_o48_3c454.3.png
Looking back at the old pipeline
I've obviously missed something. So to try to figure out what it is, I'm going back to the old pipeline... again... in map_ncdf_reading, lines 441-448, there is something curious that goes back to a definition-of-variables problem: ddec and dra are ADDED to ra and dec to get the new "ra_all" and "dec_all" variables. ddec and dra are calculated from eaz and eel: ERROR OFFSETS in Az and El. Why? What?! I added a new piece of code, correct_eaz_eel.pro. It is extremely short, but extremely necessary.
pro correct_eaz_eel, ra, dec, el, az, eel, eaz, pa
  ; Rotate the Az/El "error" offsets (arcsec) into RA/Dec by the
  ; parallactic angle, then apply them (RA is in hours, hence the /15)
  dra = -eaz*cos(!dtor*pa)*cos(!dtor*el) + eel*sin(!dtor*pa)
  ddec = eaz*sin(!dtor*pa)*cos(!dtor*el) + eel*cos(!dtor*pa)
  dec += ddec/3600.
  ra += dra/3600. / cos(!dtor*dec) / 15.
end
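A hedged Python translation of the same rotation, useful for sanity-checking the geometry outside IDL (assuming eaz/eel are in arcsec and ra is in hours, which is why the last line divides by 15; the unused az argument is dropped):

```python
# Python check of the eaz/eel -> dra/ddec rotation used in
# correct_eaz_eel. Same formulas, same order of operations (dec is
# updated before it is used in the RA cos(dec) correction).
import math

def correct_eaz_eel(ra, dec, el, eel, eaz, pa):
    d2r = math.pi / 180.0  # IDL's !dtor
    dra = -eaz * math.cos(d2r * pa) * math.cos(d2r * el) \
          + eel * math.sin(d2r * pa)
    ddec = eaz * math.sin(d2r * pa) * math.cos(d2r * el) \
           + eel * math.cos(d2r * pa)
    dec += ddec / 3600.0                              # arcsec -> deg
    ra += dra / 3600.0 / math.cos(d2r * dec) / 15.0   # arcsec -> hours
    return ra, dec

# Sanity check: at pa = 0, el = 0, dec = 0 the rotation is trivial, so
# a +3600" azimuth "error" shifts RA by -3600"/15 and leaves Dec alone.
ra, dec = correct_eaz_eel(0.0, 0.0, 0.0, eel=0.0, eaz=3600.0, pa=0.0)
```

This makes the sign convention explicit: a positive azimuth offset moves the pointing to smaller RA, which is the kind of thing that's easy to verify here and painful to verify inside the mapper.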
This comes back to the fact that I don't know what ANY of the variables in the NCDF header are supposed to be. Why are "error" variables actually OFFSET variables, and why didn't anyone know about them?
something is wrong in 0507....
Look at that zoomed in. Note that the first 20ish are from 0507. I don't know what's causing the double source. Maybe a 90 degree rotation would do it, I don't know. I do know that I'm applying an array angle of 76.2 degrees for those July observations and 113.6 degrees for the June 05 observations. Maybe that got swapped or something.... It looks like Meredith ran into the same problems when mapping 0506, but not 0507.
Comparing mine to hers (I even used the same downsampled/cleaned file), it is obvious that something went wrong with the mapping. I tested a number of different fiducial array angles and none of them generated real maps, so the mapping is correct except for offsets in the scan direction, as usual. What the heck.
Distortion (not distortion maps)
I still see distortion in a lot of maps, but not all. I think there is a correlation with altitude ~70+/-2 degrees. So far it's most obvious in the 0606 pointing sources.