Despite a slew of alignment errors, the alignment for MOST fields turns out OK using Method 3 of the pixel-shift code; the signal-to-noise is VERY low in a lot of fields, though. 070724_o38 does not come up with a good fit, for a very good reason - there appears to be no signal at all. 070907_o20 is a problem: the offset was 27 pixels, which is too large, but nonetheless correct. I had to institute the plane fitter at an earlier stage to get it to work. The biggest problem, however: the SCUBA source aligns with 070907_o20 but not with the rest of the maps, so I needed to re-fit everything. That was a BIG mistake; we need to check carefully for it in other fields.
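The plane fitter itself isn't shown here. As a minimal sketch of what "fitting a plane" to a map stage could look like - a least-squares fit with hypothetical names, not the pipeline's code:

```python
import numpy as np

def remove_plane(image):
    """Fit and subtract a best-fit plane z = a*x + b*y + c from a 2D map.
    (Hypothetical helper; the actual plane fitter is not reproduced here.)"""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix for the least-squares plane fit
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(nx * ny)])
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(ny, nx)
    return image - plane, coeffs

# A purely tilted map should come back flat
yy, xx = np.mgrid[0:16, 0:16]
tilted = 0.5 * xx - 0.25 * yy + 3.0
flat, coeffs = remove_plane(tilted)
```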
Methods Paper: Figures / analysis to produce
The methods paper needs some justification of the number of PCA components used. This will require maps of some field made with a range of numbers of PCA components. Plan: simulate a map of L111 (the most square field) with 0-20 PCA components x 21 iterations and a variety of source sizes, and plot the recovered flux vs. number of PCA components. Ideally, do this both with and without deconvolution. Estimated processing time is ~24 hours. A plot of flux vs. iteration number will also be useful.

Glitch filtering: the method has been modified: "Glitches are removed by drizzling each bolometer measurement into a given pixel using the mapping M[p], but retaining each pixel as an array of measurements. Measurements exceeding $3\times$ MAD (Median Absolute Deviation) are then flagged out in the timestream. In cases where there were too few ($<3$) hits per pixel, the pixel was completely flagged out. This only occurred for pixels at scan edges."

Data flagging: partly covered by deglitching. Many scans were flagged by hand to remove overly noisy scans and those observed to confuse the iterative mapper. Hand flagging is more robust than automated flagging and can remove features caused by the filter convolved with the glitch.

Creation of astrophysical model: not entirely sure what this section entails, but it should have a subsection on deconvolution.

Jackknifing has not generally been done...
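The deglitching scheme described above (drizzle each sample into its pixel, flag samples beyond 3x MAD, drop pixels with too few hits) can be sketched in a few lines. This is a hypothetical reimplementation for illustration, not the pipeline code:

```python
import numpy as np

def deglitch(pixel_index, data, nsigma=3.0, min_hits=3):
    """Flag timestream samples deviating from their pixel's median by more
    than nsigma * MAD (Median Absolute Deviation).  pixel_index[i] gives the
    map pixel that sample i drizzles into.  Pixels with fewer than min_hits
    samples are flagged entirely.  Returns a boolean mask: True = keep."""
    keep = np.ones(data.size, dtype=bool)
    for pix in np.unique(pixel_index):
        hits = np.where(pixel_index == pix)[0]
        if hits.size < min_hits:
            keep[hits] = False  # too few hits: flag the whole pixel
            continue
        med = np.median(data[hits])
        mad = np.median(np.abs(data[hits] - med))
        if mad > 0:
            keep[hits[np.abs(data[hits] - med) > nsigma * mad]] = False
    return keep
```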
4.3 Relative Alignment and Mosaicing
Relative alignment was performed by finding the peak of the cross-correlation between each image and a pointing master, selected from the epoch with the best-constrained pointing model for that field. Each observation was initially mapped individually, then all observations of a given field were cross-correlated with the selected master image of that field. The cross-correlation peak was fit with a Gaussian, and the difference between the Gaussian peak and the image center was taken as the pixel offset. The offsets were recorded and written to the timestreams. Finally, all observations of a field were merged into a single timestream, with pointing offsets applied, to create the field mosaic.
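The cross-correlation step can be sketched as follows. This is a hypothetical stand-in: it refines the peak with a 1D parabola along each axis rather than the Gaussian fit the pipeline used:

```python
import numpy as np

def xcorr_offset(image, master):
    """Estimate the (dy, dx) pixel shift of `image` relative to `master`
    from the peak of their FFT cross-correlation.  Sub-pixel refinement here
    uses a parabola through the peak (the pipeline fit a Gaussian instead)."""
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(master))).real
    corr = np.fft.fftshift(corr)  # zero shift now maps to the array center
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    offsets = []
    for axis, p in ((0, py), (1, px)):
        # 1D parabola through the peak and its two neighbors along this axis
        m = corr[py - 1:py + 2, px] if axis == 0 else corr[py, px - 1:px + 2]
        denom = m[0] - 2 * m[1] + m[2]
        frac = 0.5 * (m[0] - m[2]) / denom if denom != 0 else 0.0
        offsets.append(p + frac - corr.shape[axis] // 2)
    return tuple(offsets)
```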
a bunch of plots
I wasted a lot of time making these so I figured I might as well waste a little space showing them.
A new series of problems
- There are severe (~5 pixel) pointing offsets in the mosaics. They are caused by IRAF, and I can't figure out exactly why.
- Deconvolution has created more artifacts at l=54, 70, 357. I don't know how to fix them.
- Either my earlier time estimates were way off, or the mapping has gotten slower. It now takes ~120 computer hours (72 real hours) where before it was taking closer to 48.
Return to Deconvolution
I ran the v0.7 reductions with deconvolution on for 50 iterations. I had originally cut out deconvolution because of the funky noise maps, but that was partly an error on my part. There is also an issue with bright sources being largely left over in the noise maps. The deconvolver does MUCH better at filtering out the fuzzy atmospheric emission, so I want to use it; since it leaves some flux from bright point sources behind, I decided to try making the deconvolution kernel smaller to see if that recovers more of the pointlike flux.
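The pipeline's deconvolver isn't reproduced here. As a generic stand-in, a Richardson-Lucy iteration illustrates how the kernel (and its width) enters the deconvolution; everything in this sketch - names and the algorithm itself - is a substitute, not the actual method:

```python
import numpy as np

def gaussian_kernel(size, fwhm):
    """Normalized circular Gaussian kernel on a (size x size) grid."""
    sigma = fwhm / 2.3548
    r = np.arange(size) - size // 2
    g = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def _conv(img, psf):
    """Circular FFT convolution with the PSF centered at the origin."""
    ny, nx = img.shape
    big = np.zeros((ny, nx))
    ky, kx = psf.shape
    big[:ky, :kx] = psf
    big = np.roll(big, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(big)))

def richardson_lucy(image, psf, niter=30):
    """Textbook Richardson-Lucy deconvolution (NOT the pipeline's code)."""
    est = np.full_like(image, image.mean())
    for _ in range(niter):
        ratio = image / np.maximum(_conv(est, psf), 1e-12)
        est = est * _conv(ratio, psf[::-1, ::-1])
    return est

# A point source blurred by a FWHM = 3 pixel beam, then deconvolved:
truth = np.zeros((32, 32))
truth[16, 16] = 10.0
psf = gaussian_kernel(15, 3.0)
blurred = _conv(truth, psf)
recovered = richardson_lucy(blurred, psf)
```

Deconvolving with a narrower kernel than the true beam leaves pointlike flux only partially sharpened, which is the behavior being probed above.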
planet fluxes
; ephemerides from the JCMT
;
; MARS:
; June 30 2005: 730.14 Jy UT:53551
; July 15 2005: 872.83 Jy UT:53566
; Sept 10 2005: 1941.72 Jy UT:53623
; June 5 2006: 553.13 Jy UT:53891
; June 23 2006: 674.14 Jy UT:53544
; Sept 10 2006: 135.79 Jy UT:53896
; July 20 2007: 381.18 Jy UT:54301
; Sept 10 2007: 597.87 Jy UT:54353
;
; URANUS:
; June 30 2005: 43.43 Jy UT:53551
; July 15 2005: 44.35 Jy UT:53566
; Sept 10 2005: 45.78 Jy UT:53623
; June 5 2006: 41.71 Jy UT:53891
; June 23 2006: 42.96 Jy UT:53544
; Sept 10 2006: 41.62 Jy UT:53896
; July 20 2007: 43.90 Jy UT:54301
; Sept 10 2007: 45.57 Jy UT:54353
;
; NEPTUNE:
; June 30 2005: 17.42 Jy UT:53551
; July 15 2005: 17.58 Jy UT:53566
; Sept 10 2005: 17.50 Jy UT:53623
; June 5 2006: 17.04 Jy UT:53891
; June 23 2006: 17.33 Jy UT:53544
; Sept 10 2006: 17.09 Jy UT:53896
; July 20 2007: 17.59 Jy UT:54301
; Sept 10 2007: 17.56 Jy UT:54353
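A hypothetical convenience for looking up a calibrator flux between the epochs listed above (the 2006 entries are omitted here because their listed MJDs are out of order; interpolating across the gap between observing seasons is crude, and real calibration should use the full ephemeris):

```python
import numpy as np

# Uranus fluxes (Jy) vs. MJD, from the JCMT ephemerides listed above
URANUS_MJD = np.array([53551.0, 53566.0, 53623.0, 54301.0, 54353.0])
URANUS_JY = np.array([43.43, 44.35, 45.78, 43.90, 45.57])

def calibrator_flux(mjd):
    """Linearly interpolate the Uranus flux at a given MJD.
    (Hypothetical helper, not part of the pipeline.)"""
    return float(np.interp(mjd, URANUS_MJD, URANUS_JY))
```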
Rant: calc_beam_locations
It took me a few days to figure this out, but calc_beam_locations is about 800 lines of wasted space. It does nothing substantive until line 335; everything to that point is parameter parsing. There doesn't need to be any of that crap, really, and it should have been outsourced to functions to begin with. NCDF files are read to get the rotation angle - JUST as an error check! There is no a priori reason to include it. All the code does is read in a centroid file (a list of x,y offsets), rotate them, and output them as r, theta, error. Sure, there's a bunch of automated outlier rejection etc., but... seriously?! We don't have enough observations to support the statistics necessary for that to begin with - NO ONE would if each observation takes an hour. It's absurd. Odd as it is coming from me, manual rejection makes a lot more sense in this case. Now I still have to understand WHY the beam locations are rotated by the fiducial array angle.
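What the code boils down to, per the description above, fits in a few lines. The rotation convention here is a guess, and the names are hypothetical:

```python
import numpy as np

def beam_locations(x, y, rotation_deg):
    """Rotate (x, y) centroid offsets by the array angle and convert to
    polar (r, theta in degrees).  The sense of the rotation is an
    assumption, not taken from calc_beam_locations itself."""
    a = np.radians(rotation_deg)
    xr = x * np.cos(a) - y * np.sin(a)
    yr = x * np.sin(a) + y * np.cos(a)
    return np.hypot(xr, yr), np.degrees(np.arctan2(yr, xr))

# (1, 0) rotated by 90 deg lands at theta = 90; (0, 2) lands at theta = 180
r, theta = beam_locations(np.array([1.0, 0.0]), np.array([0.0, 2.0]), 90.0)
```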
distortion mapping done?
Created 'beam_locations_0707.txt' from uranus_070702_o42, with a few contributions from g34.3_070630_o34. The rest were created by averaging over all of the beam-location files.
0707 distortion maps
They're consistent but not very close to each other.