Made a little code to check out convergence, but frankly it's pretty easy to just build a mapcube and plot lines along the iteration axis. The code is in bgps_pipeline/postproc/compareiters.pro; it is not a single program. Comparing deconvolution to no deconvolution, deconvolution is a lot better: without it, the negative regions are much more substantial. In the GC, my test region, the noise-dominated areas were about the same, though the no-deconvolve map had a little more large-scale structure. The signal-dominated regions were very nearly uniformly brighter; the deconvolved map was ~3 Jy (4%) brighter in Sgr B2.
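For reference, here's a minimal sketch of the mapcube-and-plot idea (this is NOT compareiters.pro; readfits() is from the IDL Astronomy Library, and the file list and pixel coordinates are hypothetical inputs)::

    ; Sketch only: stack the per-iteration maps into a cube and plot a few
    ; pixels along the iteration axis to eyeball convergence.
    pro check_convergence, files, xpix, ypix
      nit = n_elements(files)
      map0 = readfits(files[0], hdr)
      sz = size(map0, /dimensions)
      cube = fltarr(sz[0], sz[1], nit)
      for i = 0, nit-1 do cube[*, *, i] = readfits(files[i])
      ; each requested pixel traces one line along the iteration axis
      plot, reform(cube[xpix[0], ypix[0], *]), xtitle='Iteration', ytitle='Map value'
      for j = 1, n_elements(xpix)-1 do oplot, reform(cube[xpix[j], ypix[j], *])
    end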
TEN!?!
Look at that. Seriously? A 10' offset? That's ridiculous! There is no WAY our pointing models could be off by that much! Even if I got the signs entirely wrong, that's just not possible. What's going on? First theory: Galactic coordinates fail. Any other suggestions?
Task List
For the moment, I'm going to leave the pointing alone. This leaves us (me) with a very long list of things to do before the data release:
- Determine ideal iterative mapping strategy for low-flux fields
- Determine ideal iterative mapping strategy for fields with bright sources
- Optimize at least the following:
  - scan flattening (polynomial subtraction)
  - PCA subtraction
  - Pre-PCA sky subtraction (necessary?)
  - Deconvolution/No deconvolution
- GENERATE MAPS [top priority, but it can't really happen until after the above]
- Ensure consistency with catalogs (e.g. Motte, SCUBA, etc.)
- Figure out what needs to go into the FITS header
- Survive semester before comps [lowest priority]
- sleep [oh, right, real lowest priority]
Sarcasm helps me survive. Almost as much as beer.
Got a sign wrong again
Last night's run failed because I had a wrong sign in the pointing model. Do-over time!
FFTs and unlucky primes
I've encountered a lot (!) of unluckily prime array lengths that I'm trying to FFT in too many places, and they basically stop the code. 211151 was impossibly bad; 5827 is pretty bad too. I solved the first by splitting into by-scan delining, which is better anyway. But the second is a map size, and I KNOW zero-padding a map is OK. So... are there any functions that find the nearest reasonably efficient map size?
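I don't know of a built-in that does this, but something along these lines would do it - a hypothetical helper (next_fft_size is a made-up name, not a pipeline routine) that pads up to the next integer whose only prime factors are 2, 3, and 5, since FFTs of those lengths stay fast::

    ; Hypothetical helper, not part of the pipeline: smallest integer >= n
    ; whose only prime factors are 2, 3, and 5 (a "5-smooth" FFT-friendly size).
    function next_fft_size, n
      m = long(n) > 1L
      while 1 do begin
        k = m
        while (k mod 2) eq 0 do k = k / 2
        while (k mod 3) eq 0 do k = k / 3
        while (k mod 5) eq 0 do k = k / 5
        if k eq 1 then return, m   ; m is 5-smooth
        m = m + 1
      endwhile
    end

    ; e.g. next_fft_size(5827) returns 5832 (= 2^3 * 3^6), so pad the map
    ; axis from 5827 up to 5832 before the FFT.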
Big run this weekend...
In case anyone is wondering why Milkyway is going really slowly, I'm mapping a 69-observation set of Cygnus. It ought to be an interesting test of Milkyway's swap capacity, but other than that I doubt it will be useful. While the pointing is reasonably good at this point (30" still, but whatever), I haven't done ANY work on filtering out bad observations / flagging stuff in Cyg. Data massaging is going to be a long process; it would be great if I could do that instead of pointing stuff. Argh. One thing to note: the file /scratch/sliced/l078/070702_o33_raw_ds5.nc is named "_ds5.nc" but is NOT downsampled!

Update: I mapped the individual files successfully and picked out the noisy ones. The overall map failed - just not enough memory to do a field that large. I split it up into two sets of 25 observations, plus I'll be mapping each L 70 - L 90 field separately (I didn't get rid of noisy observations for this). I'm also remapping the individual observations with PSD flagging enabled to see how that works. For notes on the P Cyg observations, see the file /scratch/adam_work/texts/cygnus_for_pat.in
Images with artifacts
Here's the list from 0507:

Streaky offsets: /scratch/adam_work/g34.3/rawcsoptg050706_o47_raw_ds1.nc_indiv3pca_map00.fits

.. image:: http://3.bp.blogspot.com/_lsgW26mWZnU/SJtP9wKu4_I/AAAAAAAADNo/gmMEFkzCX-A/s320/050706_o47.png

Totally messed up (two sets of images many degrees apart): /scratch/adam_work/g34.3/rawcsoptg050713_o40_raw_ds1.nc_indiv3pca_map00.fits

070709_ob7:

.. image:: http://4.bp.blogspot.com/_lsgW26mWZnU/SJuIDL161GI/AAAAAAAADN4/D4mxFyx1Fr0/s320/070709_ob7_1730m130_peanut.png

070703_o48:

.. image:: http://4.bp.blogspot.com/_lsgW26mWZnU/SJuICzavE_I/AAAAAAAADNw/70Hl5930eU8/s320/070703_o48_3c454.3.png
something is wrong in 0507....
Look at that zoomed in. Note that the first 20ish are from 0507. I don't know what's causing the double source. Maybe a 90 degree rotation would do it, I don't know. I do know that I'm applying an array angle of 76.2 degrees for those July observations and 113.6 degrees for the June 05 observations. Maybe that got swapped or something.... It looks like Meredith ran into the same problems when mapping 0506, but not 0507.
Comparing mine to hers (I even used the same downsampled/cleaned file), it is obvious that something went wrong with the mapping. I tested a number of different fiducial array angles and none of them generated real maps, so the mapping is correct except for offsets in the scan direction, as usual. What the heck.
0507 mapping
Most of the 0507 pointing source maps seem to have failed. Some of them look like rotator and position angle problems, others have multiple copies of sources mapped to different locations. I don't know what's up, but my first bet would be to change the fiducial array angle. After that, I'd check on rotang and then, if desperate, see what the PA is doing.