• ## BGPS Point Source Recovery - Experiment 23 results

Experiment 23, run in mid-May 2013, created power-law synthetic astrophysical sky maps with added point sources in order to examine the pipeline + bolocat's ability to recover point source flux densities accurately.

For the purpose of the BGPS v2 paper, this data is incorporated into Section 5.3: the angular transfer function and comparison with other data sets.

The major parameter explored is the power-spectrum index of the background, with three values $\alpha_{ps}=1, 1.5, 2$ (where the background power spectrum goes as $P(k) \propto k^{-\alpha_{ps}}$). A background with $\alpha_{ps}=2$ is approximately what we measure in both HiGal and the reliably-recovered regions of the BGPS power spectrum, so it's probably the most realistic, but it is also very bad for the pipeline: most of the power is on the largest angular scales. $\alpha_{ps}=1$, on the other hand, beautifully creates point-source-like structures.
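As a concrete illustration, here is a minimal sketch (not the actual experiment code) of how such a power-law background can be generated; the normalization to Jy/beam and the instrument noise are handled separately in the real simulations:

```python
# Minimal sketch: a random field whose power spectrum goes as P(k) ~ k^-alpha_ps.
# The peak normalization is arbitrary here; the real runs set it to ~1-10 Jy/beam.
import numpy as np

def powerlaw_sky(npix=512, alpha_ps=2.0, seed=0):
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(np.fft.fftfreq(npix), np.fft.fftfreq(npix), indexing='ij')
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                      # suppress the mean (k=0) mode
    amplitude = k ** (-alpha_ps / 2.0)    # P(k) = |A(k)|^2 ~ k^-alpha_ps
    phases = np.exp(2j * np.pi * rng.uniform(size=(npix, npix)))
    sky = np.fft.ifft2(amplitude * phases).real
    return sky / sky.max()                # unit peak; rescale to taste

sky = powerlaw_sky(alpha_ps=2.0)          # alpha_ps=1 yields pointier structure
```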

These two images show the results for $\alpha_{ps}=2$ and $\alpha_{ps}=1$ skies respectively, with faint point sources on top. The point sources are pointier and stick out more in the $\alpha_{ps}=1$ case, but the $\alpha_{ps}=1$ map also looks more generally point-like overall, so there is greater confusion. If you believe that there are genuine point sources in the 1.1 mm maps, the $\alpha_{ps}=1$ case would actually be very difficult to deal with, since many extracted sources would actually be part of the background.

We also explored two different sets of source brightnesses, 0.1-1 and 1-10 Jy/beam, uniformly distributed. We picked a high source density, 500 sources in a 512x512 pixel map, to get decent source-extraction statistics - the crowding is still fairly low and does not significantly affect the extracted source properties (only a few sources overlap with others). The bright sources are obviously better-recovered than the faint ones, with good recovery for all of the backgrounds explored. The faint sources, on the other hand, were fairly sensitive to the background.

The background levels used were $\sim1,2,10$ Jy/beam at the peak for the $\alpha=2$ maps (they were lower for the other power-laws, so those will be ignored from now on). As described in the paper, the faint source recovery was good for the low peak backgrounds, but recovery was essentially nonexistent for the 10 Jy/beam background - point sources were not detected at all.

Despite the relative simplicity of this experiment, the data occupied 51 GB of storage and the runs took about half a day to complete.

In principle, one would like to examine a range of different source distributions (power-law flux distribution, upper/lower limits, sizes) on a range of different power-spectrum backgrounds - for the purpose of the v2 paper, this approach would be thoroughly excessive. However, I expect Tim will be taking this sort of approach for the next paper.

• ## BGPS Point Source Recovery

In the last couple drafts of the BGPS paper (e.g. the May 15, 2013 draft), we've included a single simulation demonstrating the flux recovery of the Bolocat:

These figures are regarded as very important, as they demonstrate the capability of recovering accurate flux density measurements from a realistic sky using bolocat.

However, the above figures show a simulation only for one set of parameters, with α = 1 in the power-law flux distribution, which unfortunately is not realistic according to "Figure 8":

So I've started up a new experiment, experiment #23, to examine this problem.

The problem has a few layers:

1. The reason I used α = 1 is that it looks much like a realistic BGPS map after processing, in the sense that most of the field is empty but there are a few hundred sources in the map. However, α = 1 is not a realistic representation of the measured power spectra.
2. α = 2 maps with the previous normalization had a peak value of 18 Jy, which resulted in heavily signal-dominated output maps that did not resemble BGPS maps.
3. The normalization is tricky. One of the key goals of the simulations was to test the effect of different atmospheric-to-astrophysical signal ratios on the angular transfer function; in order to accomplish this, it was necessary to scale the atmospheric power based on the astrophysical power at its peak in fourier space. That is, in real timestreams we can measure the astrophysical-to-atmospheric power ratio, but we have to perform that measurement somewhere that the angular transfer function is known to be reliable. This is done at about 1 Hz (see the sketch after this list).
4. The normalization is important because of the noise level. In the simulations, we use a fixed noise level of about 30 mJy in the timestreams to match our best observations (though it is not difficult to scale this to other levels). This fixed noise level means that, for some normalizations, all pixels are statistically significant. Also, even though the timestream noise level is fixed, the effective noise in the maps will be higher because of the intrinsic fluctuations in a power-law distributed map.
5. The normalizations used in experiment #21, the angular transfer function measurement, were selected such that there would be high signal-to-noise at all angular scales. This means that white noise would not be dominant on any angular scale, since white noise is equivalent to α = 0. So it wasn't crazy to use these ridiculously high-flux maps, but it is not feasible to use the same maps for analysis of point sources. In maps for which we're interested in small-angular-scale features (<100"), we want the maps to be primarily noise-dominated with a handful of bright features either caused by adding point sources directly or from the local peaks in the power-law distributed flux.
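To make item 3 concrete, here is a hedged sketch of that normalization step; `welch` is scipy's PSD estimator, and the timestream arrays, sample rate, and target ratio are placeholders rather than pipeline values:

```python
# Scale the simulated atmosphere so the astrophysical/atmospheric power ratio
# at ~1 Hz matches a target value measured from real timestreams.
import numpy as np
from scipy.signal import welch

def scale_atmosphere(astro_ts, atmo_ts, target_ratio, fsample=100.0, fref=1.0):
    f, p_astro = welch(astro_ts, fs=fsample)
    _, p_atmo = welch(atmo_ts, fs=fsample)
    iref = np.argmin(np.abs(f - fref))            # PSD bin nearest 1 Hz
    current = p_astro[iref] / p_atmo[iref]
    # power scales as amplitude squared, hence the square root
    return atmo_ts * np.sqrt(current / target_ratio)
```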

Some notes along the way:

• Using a power-law background, the point-source sensitivity is much worse than without a power-law background. This is intuitive: a 100 mJy source on a 200 mJy background (which may easily include power fluctuations on the smallest scales of comparable magnitude) is not going to be recovered.
• Doubly important: a 1 Jy source should have peak amplitude 1 Jy, but the current method of adding point sources adds them as delta functions that are later convolved, conserving the total flux rather than the peak flux. This needs to be changed! (It has been now; see the sketch below.)
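A minimal sketch of the corrected injection (the names and the beam width are my placeholders, not the pipeline code): each source is added as a gaussian whose peak equals the requested flux density, instead of a delta function convolved afterwards:

```python
import numpy as np

def inject_sources(sky, n_sources=500, fmin=0.1, fmax=1.0, fwhm_pix=4.4, seed=0):
    # fwhm_pix is a placeholder for the beam FWHM in pixels
    rng = np.random.default_rng(seed)
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    yy, xx = np.mgrid[0:sky.shape[0], 0:sky.shape[1]]
    out = sky.copy()
    for _ in range(n_sources):
        y0, x0 = rng.uniform(0, sky.shape[0]), rng.uniform(0, sky.shape[1])
        peak = rng.uniform(fmin, fmax)    # uniform flux distribution, Jy/beam
        out += peak * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    return out                            # conserves peak flux, not total flux
```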

Here are some examples of what the before/after look like with point sources added. The first has bright sources, the second faint sources:

With these new figures, the 40" apertures work fine, but the 120" apertures are still utter junk. This does not make sense.

A careful analysis of a single source shows that something is wrong. Here are some annular extractions followed by the image:

Input Map:
reg  sum      npix  mean     median   min       max      var          stddev     rms
---  ---      ----  ----     ------   ---       ---      ---          ------     ---
1    26.3323  22    1.19692  1.18083  1.10787   1.28825  0.00208826   0.0456975  1.1978
2    77.8553  74    1.0521   1.047    1.00426   1.123    0.000929507  0.0304878  1.05254
3    124.868  122   1.02351  1.02869  0.996566  1.04427  0.000260295  0.0161337  1.02363

Output Map:
reg sum         npix    mean        median      min         max         var         stddev      rms
--- ---         ----    ----        ------      ---         ---         ---         ------      ---
1   3.89157     23      0.169199    0.175204    0.086872    0.255151    0.00206254  0.0454152   0.175188
2   2.06843     74      0.0279517   0.0275116   -0.0695484  0.155258    0.00210834  0.0459167   0.0537554
3   0.502601    123     0.00408619  0.00629906  -0.121023   0.0834974   0.00143969  0.0379432   0.0381626

Backgrounds:
Input Map:
reg  sum      npix  mean      median    min       max      var          stddev     rms
---  ---      ----  ----      ------    ---       ---      ---          ------     ---
1    297.054  291   1.0208    1.02293   0.98094   1.05234  0.000313419  0.0177037  1.02096
3    2538.6   2618  0.969671  0.972413  0.859551  1.12495  0.0015557    0.0394423  0.970473

Output Map:
reg  sum      npix  mean        median      min        max       var         stddev     rms
---  ---      ----  ----        ------      ---        ---       ---         ------     ---
1    1.49461  291   0.00513613  0.00729431  -0.121023  0.133141  0.001586    0.0398247  0.0401545
3    5.83372  2618  0.00222831  0.00194597  -0.195075  0.181155  0.00211747  0.046016   0.0460699

Bolocat for this source:
In [200]: fields
Out[200]: ['FLUX_40', 'FLUX_40_NOBG', 'BG_40', 'FLUX_120', 'FLUX_120_NOBG', 'BG_120']

In [198]: [inp[61][f] for f in fields]
Out[198]: [0.1896538, 1.3536235, 0.38071653, 0.83893013, 10.77737, 0.42352486]

In [199]: [m20[61][f] for f in fields]
Out[199]: [0.18015364, 0.18674377, -0.016305592, 0.2868295, 0.29759517, -0.00027596406]


With background apertures:

However, note that the backgrounds are computed using the mmm.pro sky-background estimation procedure over a range of 2r to 4r (i.e., 40-80" and 120-240" radius for the 40" and 120" diameter apertures).
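For illustration, a rough python stand-in for that procedure (mmm.pro itself does a mode-like estimate; here a sigma-clipped median is substituted, and the function names are mine):

```python
import numpy as np

def annulus_background(image, x0, y0, r_aper):
    """Background estimate in the 2r-4r annulus around (x0, y0)."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    rr = np.hypot(xx - x0, yy - y0)
    sky = image[(rr >= 2 * r_aper) & (rr < 4 * r_aper)]
    for _ in range(5):                        # crude iterative 3-sigma clip
        med, std = np.median(sky), sky.std()
        sky = sky[np.abs(sky - med) < 3 * std]
    return np.median(sky)
```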

The numbers shown by ds9 disagree fairly severely with those from bolocat. In particular, it appears that the background estimate returned by mmm.pro is off by a factor of 2, in this case giving 0.42 instead of 0.97. Turns out this was due to an indexing error that did not affect the pipeline results in any way.

Out of date analysis: Bolocat's flux total in the 120" aperture is 10.77 Jy/beam, background is 0.42 Jy/beam. There are 218 pixels. The resulting flux should be (10.77-0.42*218/23.8), but this gives 6.9 instead of the expected 0.84. Why?

If we do the same with the ds9 numbers, we get a total of 9.62 Jy/beam, background 0.97 Jy/beam average, so: (9.62 - 0.97*218/23.8) = 0.74. This is consistent with bolocat, and very very wrong.

If we take our background to be 1.02 instead of 0.97, we get 0.28 Jy/beam, which is exactly the right answer according to the pipeline. The 1.02 comes from taking a much more local background, from r=40 to r=80 arcsec, which isn't really acceptable. If we instead go from r=60 to r=120, the disagreement remains fairly bad (f=0.38 Jy/beam), but it is certainly a lot better. The arithmetic is spelled out in the sketch below.
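The arithmetic from the last three paragraphs, as executed (23.8 is the beam area in pixels, so dividing the per-pixel background sum by it converts Jy/beam to Jy; 218 is the number of pixels in the 120" aperture):

```python
ppbeam = 23.8       # pixels per beam: converts summed Jy/beam to Jy
npix_aper = 218     # pixels in the 120-arcsec aperture

for label, total, bg in [("bolocat (mmm.pro bg)", 10.77, 0.42),
                         ("ds9, 120-240 arcsec annulus", 9.62, 0.97),
                         ("ds9, 40-80 arcsec annulus", 9.62, 1.02)]:
    flux = total - bg * npix_aper / ppbeam
    print(f"{label}: {flux:.2f} Jy")
# -> 6.92, 0.74, and 0.28 Jy respectively; only the last matches the pipeline
```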

Bolocat after the correction:
In [297]: [m20[61][f] for f in fields]
Out[297]: [0.18015364, 0.18674377, 0.00578439, 0.25477698, 0.29759517, 0.0041758944]

In [298]: [inp[61][f] for f in fields]
Out[298]: [0.1896538, 1.3536235, 1.0216572, 0.40137589, 10.77737, 1.0119308]


This may mean that we'll need to re-do aperture extraction with a tighter background region everywhere. I made the change to object_photometry.

However, even with the change, even in the best case, FLUX_120 appears to be totally unreliable. FLUX_80 is acceptable with the change, but only for bright sources (for faint sources, <1 Jy, there is no recovery at all - I think this must be an issue of the source brightness still not being calculated correctly):

I'll have to continue this analysis tomorrow once the full suite of simulations has completed, but I strongly suspect that we'll have to recommend strictly against using FLUX_120 if the background is expected to be α = 2 distributed.

## Day 2

The simulations have partly completed. After correcting the error with convolved point sources vs. delta functions, I reset the flux distributions to be 0.05 to 1.0 Jy for the "faint" distribution and 0.1 to 50.0 Jy for the "bright" distribution (both with power-law distributions α = 2).

The results can be summarized:

1. In the presence of a complex background, i.e., power-law distributed flux density with amplitude comparable to the detected source brightness, only the smallest aperture (40") is reliable (i.e., recovers consistent fluxes from the input and processed maps).
2. The recovered flux density has a very high dispersion in the presence of high-amplitude power-law flux. "Very high" = σ≳1 around a mean of 1.
3. There is no evident dependence of the maps on the atmospheric properties. Therefore, there's no sense in varying the atmospheric properties in the simulations.

I decided the simulations needed changing again. First, there was excessive sampling of the astro/atmo parameter space; this is not needed (see point 3). More important is the peak flux of the power-law distributed map. Also, it is more important to get decent sampling of bright-ish sources than to have a "physically accurate" point source distribution; the distribution of sources does not affect the recovery, but it does affect the signal-to-noise of the recovery measurement. Please don't ask me to defend this statement; it would require another 10 hours of computer time + me time that I really don't want to allocate, but I'm confident that it is true.

The essential conclusion is that, for α = 2, point source recovery is only possible if the point source is brighter than the background, which is a very intuitive result. Background annulus subtraction isn't very effective at pulling out sources.

Conclusions:

1. These experiments show that source recovery is very poor in the presence of a bright power-law background: it is not possible to reliably extract point sources from a map filled with power-law distributed emission brighter than or comparable to the point sources.
2. The 120" apertures aren't really good for anything.
3. There is so much source-extraction parameter space out there that any further study would really deserve its own paper.

40" aperture in the presence of a bright background with faint sources:

Versus the same with a faint background:

Compare these to the 120" equivalents (bright then faint background):

It's fairly easy to see why there are issues with the bright background and the 120" apertures. In this image, bright background on the left, faint background on the right, with faint sources (0.1-1 Jy).

It's more helpful to look at that previous image with the source contours superposed. These images really give a nice feel for what it means to have point sources subsumed in α = 2 background.

• ## Catalog vs Image shift? A possible solution to the ATLASGAL issue

In the previous post, I came up with a final plot showing the pointing offset was, on average, not significant, even in the ATLASGAL overlap zone. So why did the ATLASGAL group infer a net pointing offset? The problem is probably one or two fields with a slight pointing offset but a huge number of sources. l=1 has an offset of the right sign and is the single most source-rich degree in the survey, with 368 sources.

This figure shows the v1 vs v2 source locations in grey, their average and standard deviation in green, and the cross-correlation offset in red. The plot is somewhat difficult to interpret, but it appears that the v1 point sources are systematically shifted to more negative longitudes than v2, and the point sources more so than the maps themselves. There may have been some reason sources were systematically selected at more negative longitudes in the v1 catalog; around Sgr B2 there's a lot of structure that had to be decomposed somehow but was not necessarily "source".

One thing to note is the reversal of left-right (in pixel space) versus positive/negative in longitude. The above plot is correct (negative longitudes, as shown on the plot, are "right" in images), but most of my other plots have the X-axis flipped.

In the end, after spending two weeks hammering my head against this, I find no clear evidence for an offset between the BGPS and Herschel or v1/v2 data, overall or in the ATLASGAL fields. In any individual field, that statement is not necessarily true. Despite the strong statistical evidence, it is really hard to be sure about sub-pixel offsets, since the "model" image is never perfect. I think we can safely state the ~1/2 pixel (~3") offsets, but I just don't feel confident about numbers below that range for ALL fields.

• ## Idea: Multispectral Eigenimage decomposition...

Can use BGPS + HiGal to look for correlated (thermal) components and decorrelated (free-free) components. Obviously needs to be tried in the GC first. Also need to figure out a method to mitigate the negative bowls from unsharp-masking in the herschel-bolocam comparison. A quick sketch of the decomposition idea:
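This is purely a hypothetical sketch (names and approach are mine, nothing is implemented yet): stack the co-gridded band images and take an SVD, so the leading eigenimage traces the correlated component and the lower-rank ones the decorrelated part.

```python
import numpy as np

def eigenimages(maps):
    """maps: list of 2D arrays (BGPS + HiGal bands) on a common grid."""
    shape = maps[0].shape
    stack = np.vstack([(m - m.mean()).ravel() for m in maps])  # (n_bands, n_pix)
    u, s, vt = np.linalg.svd(stack, full_matrices=False)
    return [v.reshape(shape) for v in vt], s   # eigenimages, singular values
```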

• ## Pointing & Cross-Correlation yet again

Prompted in part by a recent ATLASGAL paper identifying pointing offsets of about 3" in the BGPS, we revisit the BGPS pointing. The ATLASGAL team compared the source locations in their catalog to source locations in the Bolocam catalog by doing "nearest-match" searches within a 40" radius (see their Figure 8, reproduced here).

Their comparison was over the range -10 < l < 21, so it only covered a small fraction of the BGPS.  It covered 13 fields with independent pointing solutions, so it's possible that they have actually discovered an offset only in some of our fields.

The catalog comparison, while interesting, is potentially quite flawed.  There's no guarantee that a source extraction algorithm will measure source centers accurately when a "source" is just a local overdensity on a complex background.  Using source comparison will also lead to a bias towards the most source-rich fields, e.g. l000 and l001, so an offset in one of those fields would drastically affect the catalog offset.

There is a better way to compare the pointing between two images that are expected to be (nearly) identical. It is well-known that cross-correlation is an effective technique for determining the offset between two shifted but otherwise identical images; I'll briefly summarize some of the literature here.

Gratadour et al 2005 used a maximum likelihood estimator approach to determine the "best-fit" offset between two images.  This approach is comparable to Guizar et al (2008), who implemented a fast solution for (highly) sub-pixel image registration in matlab.  In order for the image registration to be fast, it must operate in fourier space, but to get sub-pixel registration in fourier space, you need to either pad (which is slow, and increases memory use drastically) or fit some functional form around the peak of the cross-correlation image.  The alternative approach implemented by Guizar utilizes the Fourier scaling theorem to create a zoomed-in image of the peak pixel, which allows you to get much higher precision for a much lower computational cost. My innovation is to use the minimum $\chi^2$ estimator to determine the goodness of fit and therefore error bars on the best-fit offset.

Because the $\chi^2$ value for each offset is determined simply by sums and multiplications ($\chi^2 = \sum_i (x_i - m_i)^2 / \sigma_i^2$, where $m_i$ is the shifted model), we can compute each term that goes into the $\chi^2$ value independently with fourier transforms, then create goodness-of-fit contours around the $\chi^2$ minimum. The statistical requirement for this approach to make sense is that the errors on the data are gaussian distributed, which is an assumption we inevitably make for astronomical images. I believe there is also a requirement that the errors are independent, which may be more difficult to satisfy, but in the Bolocam images it is satisfied, especially when multiple independent observations are combined.
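A minimal sketch of that decomposition (not the image-registration implementation itself): expanding $\chi^2(\Delta) = \sum_i (d_i - m_{i+\Delta})^2/\sigma_i^2$ gives one constant term plus two shift-dependent terms, each computable as a cross-correlation.

```python
import numpy as np
from scipy.signal import fftconvolve

def chi2_map(data, model, err):
    """chi^2 at every integer (dy, dx) offset, via three FFT-computed terms."""
    ivar = 1.0 / err**2
    xcorr = lambda a, b: fftconvolve(a, b[::-1, ::-1], mode='same')
    term1 = np.sum(data**2 * ivar)         # constant in the offset
    term2 = xcorr(data * ivar, model)      # cross term: data x shifted model
    term3 = xcorr(ivar, model**2)          # shifted model norm, weighted by errors
    return term1 - 2 * term2 + term3
```

The minimum of the returned map gives the best-fit offset, contours of $\Delta\chi^2$ around that minimum give the error bars, and sub-pixel precision then comes from the Fourier upsampling described above.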

Strictly, this approach can only be used when the model data have the same multiplicative scale as the fitted data.  The peak will never be wrong using this method, but the errors could be incorrect if the model and data are multiplicatively offset.  In principle, this can be resolved in the future using a Mellin transform [see this site or this for a matlab approach and this for an academic paper on it].

This is the approach I have implemented at  image-registration.rtfd.org.  I used simulated test cases to demonstrate that it is, indeed, effective and accurate.  I used this method to measure the offsets between the v2 data and the v1 data (which should, in principle, be the same as the offsets between ATLASGAL and v1) and the v2 vs Herschel Hi-Gal data (which should be zero). There are actually a few methods implemented in image-registration, and I compared those.  There's a "dft" and a "$\chi^2$" approach, which are the same (except $\chi^2$ includes realistic errors), a method where a 2D gaussian is fit to the peak of the cross-correlation image, and a method where a 2nd-order Taylor expansion is performed around the peak of the cross-correlation image.  The latter two are biased.  An example comparison plot looks like this:

The grey dots are catalog centroid position offsets measured between v1 and v2. The green cross represents the mean and standard deviation of the grey points. The other data points, as labeled, show the offsets between the l000 images in v1 and v2 as measured by the method shown. They all have errorbars plotted, but the errorbars are generally smaller than the points. The dark spot seen behind the purple point shows the $\chi^2$ contours out to 8-$\sigma$: the error in the offset is tiny, sub-arcsecond. In this case, the offsets nearly agree:

l000 catalog dx:  -0.31 +/- 0.68   dy: 1.48 +/- 0.64

l000 $\chi^2$ dx:   1.74 +/- 0.03  dy: 1.41 +/- 0.03

This field agreed nicely between v1 and v2.

The comparison to Hi-Gal is perhaps more important; HiGal's pointing is calibrated based on multi-wavelength observations, some of which include actual stars.  It's a space-based mission, so its pointing is more stable.  And finally, being a space mission, there's a large dedicated team instead of a single, part-time individual working on the data. Our offsets from Hi-Gal are pretty small in general, though not trivially small.

And it turns out, the region that overlaps with ATLASGAL had more serious pointing errors than the rest of the survey:

(note: both of the above plots are missing L=359 because I forgot it.  Fixing that now...)

The clearest problem field is l=15, with a longitude offset of -6" between v2 and HiGal.... that's not the question, though.  Somehow I've lost the code that did the v1-HiGal offsets; I'll have to re-write that tomorrow and let it run...

Update 12/13: I've spent the last couple days clearing up some issues with the offsets. The error bars should be MUCH smaller than in the above plots. The means are pretty similar, though. Short story: the offsets between v1 and Hi-Gal are greater in the ATLASGAL overlap regions than elsewhere, and in the right general direction, but not quite as serious as they claimed. In v2, the ATLASGAL overlap fields and the rest of the survey have the same mean offsets, and those offsets are small (-0.5" in l, -1" in b).

The problem now is the table. If everything made sense, (v1-v2)+(v2-higal)+(higal-v1) = 0. But that clearly isn't the case, which implies an error in the method, which sucks since I'm claiming this method is superior to the alternatives. It's possible that I'm actually underestimating the errors against Hi-Gal - that can be fixed relatively easily - but the magnitude of the error won't affect the centroid measurements. So I probably need to investigate one case very carefully. l050 is a big problem case, with vector sums >1 pixel in both directions. That will be my next line of investigation. The approach will be:

- crop identical fields within l050 from v1, v2, and herschel
- perform the pointing comparison between them
- check that the vector sum < the sum of the errors

I think - and hope - the trouble is just that I'm using inconsistent sub-fields to compare Herschel with the two different Bolocam versions, which is possible because of the way I selected these sub-fields. I'll do more careful cropping, and probably re-do this analysis degree-by-degree (with $512^2$ fields, in the hope that it speeds up the FTs).

Update 12/14: I've now cropped identical sections in each of the surveys, 1 square degree (512 pixels) each - which is great for speed. As a sidenote, a little line profiling revealed that the make_cross_plots code was the slow point in the process, and it is dominated by savefig calls, not ffts. I've run a careful examination of self-consistency on the l=0 field, with positive results: the offsets agree to well within the errorbars (though there is some residual error at the 0.5" level).

However, a similar inspection of l=50 resulted in a major failure:

In this case, the problem is caused by W51 being exactly on the field edge, leading to huge cross-correlation power at dx=0, but spread over a large y range.  My first thought is to try to downweight the edges, which can be achieved by "zero-padding" the noise image, but with high values instead of zero... or alternatively, by setting the edge region to zero smoothly.

OK, first thought: Bad idea.  Increasing the noise along the edges drastically increases the small-shift autocorrelation for the noise, which in turn ends up ruling out the small shifts as a fit possibility.  I don't think this really makes sense mathematically, but each step does.  Why would increasing the noise along the edges make the $\chi^2$ fit worse?

This revealed a serious bug in the code that, luckily, only affected non-uniform error maps.  Basically, I had decomposed the $\chi^2$ equation wrong (which is as bad as it sounds).

That total mess has been resolved now.  The image edges are downweighted with a gaussian of 12 pixels, error=100 outside and weight=0 outside (with weight^2 inside... best to just view the source if you really want to know the details).  The new versions of the above diagrams:

Less than spectacular for l=50, but acceptable given the errors, which are indeed significantly larger, as you might expect given the lower total signal in l=50. Now I need to re-run the fits on every field.
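For reference, a rough sketch of the edge-downweighting scheme described above (my reconstruction from the description; the real details live in the source):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apodize_errors(err, edge=12, big=100.0):
    """Taper weights to zero at the image edges; inflate errors accordingly."""
    weight = np.zeros_like(err)
    weight[edge:-edge, edge:-edge] = 1.0
    weight = gaussian_filter(weight, edge)          # gaussian roll-off, ~12 pix
    err_out = np.where(weight > 0.01, err / np.clip(weight, 0.01, 1.0), big)
    return weight, err_out
```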

OK, cool, last thing accomplished today (...by 8pm): offset comparison by square degree for all fields.  Again, I don't reproduce the magnitude of the ATLASGAL-measured offsets, but the ATLASGAL fields are, on average, more offset in longitude (to the negative) than the overall average.

Curiously, for both v1 and v2, there appears to be a -1.5" shift in latitude from Hi-Gal.

The vector sums are mostly sub-arcsecond, with most exceptions at l>50. l=59, 64, and 65 are particularly bad - but l=50 isn't so bad. So I should do the "deep" examination of one or two of those fields... who knows what new errors I'll turn up?

Here's the new v1-ATLASGAL offset plot:

• ## Cross-Correlation Offsets Revisited

Since last time (Taylor Expansion & Cross Correlation, Coalignment Code), I have attempted to re-do the cross-correlation with an added component: error estimates. It turns out there is a better method than the Taylor expansion around the cross-correlation peak: Fourier upsampling can be used to efficiently determine precise sub-pixel offsets (matlab version, Manuel Guizar, author, refereed article). However, in the published methods just cited, there is no way to determine the error - those algorithms are designed to measure offsets between identical images corrupted by noise but still strongly dominated by signal. We're more interested in the case where individual pixels may well be noise-dominated, but the overall signal in the map is still large.

So, I've developed a python translation of the above codes, and then some: Image Registration on github. The docstrings are pretty solid, but there is no overall documentation. However, there's a pretty good demo of the simulation AND fitting code here: Tests and Examples. The results for the Bolocam data are here (only applied to v2-Herschel offsets):
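For anyone wanting to reproduce this, usage looks roughly like the following (keyword names may differ between versions of the package, and `im1`, `im2`, and `errmap` are placeholder arrays):

```python
from image_registration import chi2_shift

# im1: reference image; im2: offset image; errmap: per-pixel errors on im2
dx, dy, edx, edy = chi2_shift(im1, im2, err=errmap,
                              return_error=True, upsample_factor='auto')
# dx, dy: sub-pixel offsets; edx, edy: 1-sigma errors from the chi^2 surface
```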

• ## How does Bolocam data improve greybody fits?

Long wavelength data can be very useful for constraining the value of beta in a greybody fit.
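To make that concrete, the greybody (modified blackbody) form being fit is roughly the following; in the optically thin limit $S_\nu \propto \nu^\beta B_\nu(T)$, so a point on the long-wavelength tail near 271 GHz (1.1 mm) helps break the $T$-$\beta$ degeneracy. A minimal sketch, with an assumed reference frequency:

```python
import numpy as np

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8      # SI units

def greybody(nu, T, beta, tau0, nu0=3e11):
    """Greybody flux density (arbitrary normalization); nu in Hz."""
    b_nu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))  # Planck function
    return b_nu * (1.0 - np.exp(-tau0 * (nu / nu0)**beta))

# optically thin limit: greybody ~ tau0 * (nu/nu0)**beta * B_nu(T)
```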

• ## Bolocat V1 vs V2

I've done some very extensive comparisons of v1 and v2. The plots below are included in the current BGPS draft, but I'll go into more excessive detail here. ALL plots below show Version 1 fluxes versus Version 2 fluxes using Bolocat V1 apertures. This means there are only two possible effects in play:

1. Different fluxes in the v1 and v2 maps
2. Pointing (spatial) offsets between the v1 and v2 maps [see http://bolocam.blogspot.com/2012/05/bgps-v2-pointing.html]

Therefore, the plots below are just different ways of visualizing the same information. This holds true despite the fact that different "correction factors" appear in different plots.

Ratios of v2 fluxes to v1 fluxes in the listed apertures. The curves represent best-fit gaussian distributions to the data after excluding outliers using a minimum covariance determinant method (sketched below).
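A sketch of that outlier exclusion (assumed names; scikit-learn's MinCovDet provides the minimum covariance determinant estimator, and `flux_v1`/`flux_v2` are placeholder arrays):

```python
import numpy as np
from sklearn.covariance import MinCovDet

ratio = (flux_v2 / flux_v1).reshape(-1, 1)
mcd = MinCovDet().fit(ratio)
keep = mcd.mahalanobis(ratio) < 3.0**2             # drop >~3-sigma outliers
mu, sigma = ratio[keep].mean(), ratio[keep].std()  # gaussian fit to the rest
```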

v1 vs v2 with a background subtracted around the source equal to the source area (this was not reported in Bolocat v1, but is a tool Erik implemented so I used it)

v1 vs v2 in 40" apertures, as stated.  There are y=x and y=1.5x lines plotted: these are NOT fits to the data!  The green line is a Total Least Squares linear fit to the data weighted by the measured errors.
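The error-weighted total least squares fit can be reproduced with scipy's orthogonal distance regression; a sketch, with placeholder arrays for the fluxes and their errors:

```python
from scipy import odr

linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(flux_v1, flux_v2, sx=err_v1, sy=err_v2)
fit = odr.ODR(data, linear, beta0=[1.5, 0.0]).run()   # start near the 1.5x line
slope, intercept = fit.beta
```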

Same as above, but for the 80" apertures: the best fit slope is steeper. The best explanation for the steeper slope (i.e., v2 > 1.5(v1)) is that more extended flux is recovered in v2 around bright sources; therefore, in the larger source masks, there is greater flux than would be recovered if a simple 1.5x corrective factor were applied.

Same for 120" apertures:

For all three of the 40", 80", and 120" apertures, the 1.5x correction factor is nearly perfect (it agrees to <5%). The background subtraction seems to have different effects depending on aperture size. I welcome Erik to comment on this, but I do not think it is particularly important.

The figures below require some explanation. NONE of the circular apertures use background subtraction in this comparison (i.e., compare to the RIGHT column above). These figures are histograms of the flux ratio within a given aperture as a function of flux in the v1 aperture. From bottom to top, the flux in the v1 aperture goes from 0.1 to 10 Jy. The X-axis shows the ratio of the v2 flux to the v1 flux. The black dots with error bars represent the best-fit gaussian distribution to each flux bin. The colorbar shows the log of the number of sources; the most in any bin is about $10^{2.5} \approx 300$.

In short, there is some sign that the ratio of v2/v1 flux varies with v1 flux. This effect could be seen in the figures above, since a linear fit is imperfect. The effect is not very strong. Again, I believe the explanation here is the changed spatial transfer function in v2.

• ## BGPS V2 pointing

BGPS V2.0 pointing offsets relative to V1 and Herschel:

Cumulative Distribution Function of the total offsets.

Histograms of the total offsets.

X-offsets vs Y-offsets (X and Y are GLON and GLAT). The ellipses are centered at the mean of the X/Y offsets and have major and minor axes corresponding to the standard deviations.
