About milkyway?
This post is a red herring, but: I decided to go ahead and run the mapper on Cygnus, L33, L111, and the Galactic Center again. The pointing has gotten to the stage where I'm certain I can't do anything more without a stroke of pure brilliance or a conversation with someone who hasn't touched a thing - e.g. Jason - that suddenly enlightens me. To fully reproduce Meredith's results, I'd probably have to go back through and follow her 'pipeline' process step by step as well, and I suspect that, if I had used her method on her RA/Dec maps, I would have come up with exactly the same problem I currently see. Therefore, I won't do anything about it. The most appropriate response at this point MAY be to just fit a damned polynomial/sine curve in az and include that as part of the pointing model, but I can't justify where that comes from. All I know is that it's present in Meredith's data as well as my own.

What's the next step with the PPSes? We need to do that. More important, though, is getting some image optimization ready FAST. I need to be running this stuff before September! Tonight's run will be a test of numbers of PCA components. At the very least, v0.5 should have a consistent set of images even if we cut out the high-flux ones; we'll deal with that later. For the high-flux objects, e.g. g34.3, since we have so much overlapping data, a simple average/magical baseline subtraction might be just as effective as (if not more effective than) PCA subtraction, so that's one way around it.
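To be concrete about what I mean by 'simple average' subtraction: remove the mean across bolometers (the common mode) at each time sample, which is roughly what the first PCA component captures anyway. This is just a sketch - the array name and [n_time, n_bolos] shape are my assumptions, not the pipeline's actual variables:

    ; Sketch: common-mode (average) subtraction as a PCA alternative.
    ; Assumes data is a hypothetical [n_time, n_bolos] float array.
    sz = size(data, /dimensions)
    common_mode = total(data, 2) / sz[1]     ; mean over bolometers at each sample
    for b = 0, sz[1]-1 do data[*, b] = data[*, b] - common_mode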
Modifications
Significant changes in the past 24 hours:
- In apply_pointing_model, the signs of FAZO and FZAO have been swapped. I BELIEVE this is correct, but things still aren't working out.
- In do_the_pointing I have changed from "eq2hor" and "hor2eq" to "my_eq2hor" and "my_hor2eq". These implement two major changes:
  - The LST is passed as a parameter rather than calculated within the ASTROLIB code.
  - The conversion is calculated with BOTH hadec2altaz and getaltaz (and similarly for the opposite transformation) and compared for error-checking purposes. If they differ by more than one arcsecond, the code will spit out an error message and use getaltaz (sketched below).
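Schematically, the error check looks like this standalone sketch (the procedure and argument names are hypothetical; inputs are the two alt/az solutions in degrees):

    ; Sketch of the >1" consistency check between the two conversions.
    ; alt1/az1 from hadec2altaz, alt2/az2 from getaltaz, all in degrees.
    pro check_altaz, alt1, az1, alt2, az2, alt, az
      dalt = (alt1 - alt2) * 3600.                       ; arcsec
      daz  = (az1  - az2 ) * 3600. * cos(alt1 * !dtor)   ; arcsec on the sky
      bad  = max(abs(dalt)) gt 1. or max(abs(daz)) gt 1.
      if bad then message, 'hadec2altaz and getaltaz disagree by >1"', /informational
      alt = bad ? alt2 : alt1     ; fall back to getaltaz on disagreement
      az  = bad ? az2  : az1
    end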
Inconsistent
James: "I think it's fair to say, though, that there is some problem with the simultaneous assumption that CSO coords are geo and that we are applying the ab/nut correction correctly." Yep. The relevant files are in /scratch/adam_work/plots/: models_noabnut_radec_0707.ps models_noabnut_rawcsoptg_radec_0707.ps models_oppositeabnut_radec_0707.ps models_oppositeabnut_rawcso_radec_0707.ps I'm afraid they're pretty confusing. 'noabnut' means no aberration/nutation correction was applied during the mapping process. 'oppositeabnut' means that an aberration/nutation correction that, according to the eq2hor and hor2eq texts, should actually convert heliocentric to geocentric, is being used on data that we believe is starting in a geocentric frame. There are two possibilities: 1. We are wrong 2. eq2hor/hor2eq are wrong. The 'rawcso' files have FAZO/FZAO "subtracted out" (removed) in pages 1 and 3, and FAZO/FZAO added back in on pages 2 and 4. 'rawcso' means that we're looking at the ra/dec the CSO gave without the users' FAZO/FZAO corrections. The non-rawcso have the NCDF ra/dec vectors, with precession correction applied (and some form of ab/nut correction), but the FAZO/FZAO are still present. THESE should be equivalent to Meredith's plots, e.g. where FAZO_SET/FZAO_SET are on the y axis. However, that's only true for pages 1 and 3. Pages 2 and 4 have FAZO/FZAO essentially double-subtracted, so they should be ignored. Pages 1 and 3 of the non-rawcso files should be equivalent to pages 2 and 4 of the rawcso files with the exception that the sigma-rejection used to choose the yellow points is different. So what's in each page? On pages 1 and 2, the red lines are my best fit to the yellow data, which is the black data with an iterative sigma rejection applied (i.e. reject at 1 sigma, recalculate sigma from good data, reject at 2 sigma). The blue lines are Meredith's models. The right side includes only the yellow data points with the red line subtracted. On pages 3 and 4, the same is pretty much true except that ONLY the altitude-dependent line has been subtracted: there is no fit to azimuth. Also, the 'sourcecompare' files are similar. I'd check those out too. I still don't have the extremely low RMS that Meredith saw. I'm going to go through and try to reject bad data points by hand to see if I can get to that level. We'll see.
Direct comparison with Meredith's pointing calculations
I've gotten to the point that I'm directly comparing my pointing calculations to Meredith's. So far, it looks like:
1. I can very nearly reproduce Meredith's results on her maps, though there are a few differences in centroids, and I missed the source a few times where she didn't.
2. There is still a spread between my map centroids and hers.
The code I've written to enable this comparison: publish/debug_testing/compare_me_meredith.pro. I'm still a bit stumped, but I think I have a path to the answer mapped out.
Centroiding and Meredith's files
I tried the pointing offset calculations on Meredith's maps and I still get an RMS of ~9" in both RA and DEC. Check out the files: /scratch/adam_work/plots/models_meredith* So... what?
Searching for more offsets
The defining feature of my pointing calculations - and their inconsistency - is the large spread for individual sources, which means that there's still something wrong in the way the bulk RA/Dec are being calculated.
I checked apply_distortion_map_radec to see if it might have inserted some bulk offset, but it only changes the average position by <2". That might be an issue, but the boresight isn't necessarily aligned with the mean location of the bolometers: it should be shifted by at least 0.3" according to a simple average of the sin/cos of the bolo_params angles.
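The 0.3" figure comes from something like the following sketch; I'm assuming bolo_params reduces to a per-bolometer radial distance and position angle (bolo_dist, bolo_angle are my placeholder names, which may not match the real structure):

    ; Sketch: mean bolometer offset from the boresight, assuming
    ; hypothetical arrays bolo_dist (arcsec) and bolo_angle (radians).
    dx = mean(bolo_dist * cos(bolo_angle))   ; mean offset along one axis
    dy = mean(bolo_dist * sin(bolo_angle))   ; and the other
    print, sqrt(dx^2 + dy^2), ' arcsec mean boresight shift'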
Comparing my pointing code to map_ncdf_reading directly:

readstruct = map_ncdf_reading(filename, /nopixoff)

- If I don't subtract FAZO/FZAO, the offsets are small (e.g. -0.050665803, -0.0029540043 arcseconds in RA/Dec).
- If I do subtract them, the RA/Dec offsets are remarkably close to FAZO/FZAO themselves:
  92.830656 -126.12773 vs. 95.000000 -120.00000
  which is because RA/Dec are pretty closely aligned with Az/El for these data.

So, any difference comes from at least one of the keyword parameters. Therefore I AM missing something, and that something is one of the keywords to map_ncdf_reading.
FFTs and unlucky primes
I've run into a lot (!) of unlucky prime-length arrays that I'm trying to FFT, in too many places, and they basically stop the code. 211151 was impossibly bad; 5827 is pretty bad too. I solved the first by splitting into by-scan delining, which is better anyway. But the second is a map, for which I KNOW zero-padding is OK. So... are there any functions that find the nearest reasonably efficient map size? I haven't found a built-in; a sketch of one is below.
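A minimal sketch, assuming padding to a 2-3-5-smooth length is acceptable (the function name is mine, not a library routine):

    ; Hypothetical helper: smallest 2-3-5-smooth integer >= n,
    ; so a zero-padded FFT runs at full speed.
    function next_fast_len, n
      m = long(n)
      repeat begin
        k = m
        while k mod 2 eq 0 do k = k / 2
        while k mod 3 eq 0 do k = k / 3
        while k mod 5 eq 0 do k = k / 5
        if k ne 1 then m = m + 1
      endrep until k eq 1
      return, m
    end

For example, next_fast_len(5827) returns 5832 = 2^3 * 3^6, so padding the map by five pixels would fix the 5827 case.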
More cleaning modifications
I had removed sigmadeglitch from deline; now it's permanently gone. It might be worth exploring re-inserting sigmadeglitch somewhere. My higher-order polynomial fit is creating deeper bowls around sources, which is a big problem, but if I use a lower order I get the bad streaks back. What's the best way to deal with this? I'm thinking perhaps only doing the polysub on the second iteration, i.e. after a source model has been subtracted (sketched below). Delining runs into some issues now, though, because not all frequencies are sampled (?).
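Roughly what I have in mind, as a sketch (the variable names are hypothetical; poly_fit/poly are the standard IDL routines):

    ; Sketch: only fit/subtract the polynomial baseline once a source
    ; model exists, so the fit can't dig bowls around real emission.
    if iteration ge 2 then begin
      resid  = timestream - model_timestream       ; source-free residual
      coeffs = poly_fit(t, resid, polyorder)       ; fit baseline to residual
      timestream = timestream - poly(t, coeffs)    ; remove baseline, keep source
    endif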
Wow - FFT failure
On my huge Cygnus run, the FFT keeps failing in the deline code. It's actually pretty impressive: there are 844604 points in the timestream, and the prime factorization of 844604 is 2^2 * 211151. This is just damned bad luck, because an FFT is extremely inefficient when the length doesn't factor into small primes. What's the best workaround? .... 9 AM update: I've rewritten the deliner to work on a scan-by-scan basis (sketched below). It's possible that the delining failed in the past because it was essentially removing a constant amplitude at the line frequencies across the whole observation (or combined observations!), which is not likely to be true.
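The scan-by-scan version looks roughly like this. It's a sketch only: scan_start/scan_end, line_freqs, df (notch half-width, Hz), and dt (sample spacing, s) are my placeholder names, not the pipeline's:

    ; Sketch: deline each scan separately so the line amplitude can
    ; vary from scan to scan (and each scan is a short, friendly FFT).
    for s = 0, n_scans - 1 do begin
      ts   = timestream[scan_start[s]:scan_end[s]]
      nf   = n_elements(ts)
      spec = fft(ts, -1)                            ; forward FFT of this scan only
      freq = findgen(nf) / (nf * dt)
      freq = freq < (1./dt - freq)                  ; fold to |f| (handles both halves)
      for j = 0, n_elements(line_freqs) - 1 do begin
        bad = where(abs(freq - line_freqs[j]) lt df, nbad)
        if nbad gt 0 then spec[bad] = 0.            ; notch the line
      endfor
      timestream[scan_start[s]:scan_end[s]] = real_part(fft(spec, 1))
    endfor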
RMS fails to shrink
Spent a while today working on these plots after rejecting every obvious bad point:
- /scratch/adam_work/plots/sourcecompare_0_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_1_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_2_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_3_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_4_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_5_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_6_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_7_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_8_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_9_rawcsoptg_0707.ps
- /scratch/adam_work/plots/sourcecompare_10_rawcsoptg_0707.ps
- /scratch/adam_work/plots/models_rawcsoptg_0707.ps

The net result: I still don't have a nice small RMS offset.

Update 8/12/08: using RA/Dec mapping doesn't help.