• A quick performance test on CASA cleaning

I ran some speed tests to compare CASA's speed when imaging single channels vs. blocks of channels. The tests were run with savemodel='none' on a lustre node.

nchan    concat    individual
    1       249          523
    5       803          916
   20      2303         2346
   40      4010         4136
   80      8101         9298


The linear fit to these timings is y = 98x + 237, where x is the number of channels.

So, at least until swapping happens, there is no serious advantage or disadvantage to imaging a different number of channels simultaneously.
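This is roughly the kind of loop used to produce the timings above (a sketch; the measurement set name, image names, and imaging parameters are placeholders, not the actual test setup):

import time

results = {}
for nchan in (1, 5, 20, 40, 80):
    t0 = time.time()
    tclean(vis='concatenated.ms',   # or loop over the individual MSes for the '_individual' case
           imagename='speedtest_nchan{0}_concat'.format(nchan),
           specmode='cube', nchan=nchan, start=0, width=1,
           niter=1000, savemodel='none')
    results['speedtest_nchan{0}_concat'.format(nchan)] = int(time.time() - t0)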

• High-mass Protostellar SiO Masers

In working on a high-mass star formation proposal, I reviewed the discoveries of vibrationally excited SiO masers toward high-mass star-forming regions.

Known sources include (with discovery articles listed):

If you know of any I missed, or have a better collection of these masers, let me know.

• Publications from 2016

I haven't updated the 'publications' subdirectory for a long time, mostly because I forgot it existed. I'd like to go through these and update them with highlight figures etc, but there's no guarantee I'll find the time... this is really the kind of thing that should be done as the papers are submitted. But that checklist has been getting very long lately.

Immer2016a
Temperature structures in Galactic center clouds. Direct evidence for gas heating via turbulence

A&A, 595, A94 (2016)

Galametz2016a
Water, methanol and dense gas tracers in the local ULIRG Arp 220: results from the new SEPIA Band 5 Science Verification campaign

MNRAS, 462, L36-L40 (2016)

Ginsburg2016b
Toward gas exhaustion in the W51 high-mass protoclusters

A&A, 595, A27 (2016)

Battersby2016a
A Brief Update on the CMZoom Survey

(2016)

Lin2016a
Cloud Structure of Galactic OB Cluster-forming Regions from Combining Ground- and Space-based Bolometric Observations

ApJ, 828, 32 (2016)

Eisner2016a
Protoplanetary Disks in the Orion OMC1 Region Imaged with ALMA

ApJ, 826, 16 (2016)

McLeod2016b
Connecting the dots: a correlation between ionising radiation and cloud mass-loss rate traced by optical integral field spectroscopy

arXiv e-prints (2016)

Youngblood2016b
The Orion Fingers: Near-IR Spectral Imaging of an Explosive Outflow

AJ, 151, 173 (2016)

Svoboda2016a
The Bolocam Galactic Plane Survey. XIV. Physical Properties of Massive Starless and Star-forming Clumps

ApJ, 822, 59 (2016)

Henshaw2016a
Molecular gas kinematics within the central 250 pc of the Milky Way

MNRAS, 457, 2675-2702 (2016)

Robitaille2016a
Python in Astronomy 2016 Unproceedings

35  (2016)

McLeod2016a
A nebular analysis of the central Orion nebula with MUSE

MNRAS, 455, 4057-4086 (2016)

Ginsburg2016a
Dense gas in the Galactic central molecular zone is warm and heated by turbulence

A&A, 586, A50 (2016)

Ginsburg2016
CAMELOT: the Cloud Archive for MEtadata, Library & Online Toolkit

(2016)

Goddi2016a
Hot ammonia around young O-type stars. III. High-mass star formation and hot core activity in W51 Main

(2016)

Longmore2016a
Using young massive star clusters to understand star formation and feedback in high-redshift-like environments

(2016)

• CASA synthetic observations

To evaluate imaging robustness & quality, it is necessary to do some sort of synthetic observation. These synthetic observations can be done on real images, e.g. Herschel data shifted to greater distances, or on simulated data.

Some examples:

General process:

1. Import the file into CASA .image format (e.g., with importfits)
2. Load the .ms file and replace its visibilities with the Fourier transform of the image
3. Clean.

The hard part is setting up the files to be imported.

For example, you can import a FITS file, which to me seems to be the most straightforward approach:

importfits(fitsimage=perseus_rescaled,       # FITS image to "observe" (filename string defined earlier)
           imagename=perseus_casa_image,     # output CASA .image name
           overwrite=True,
           defaultaxes=True,                 # ['RA---TAN', 'DEC--TAN', 'FREQUENCY', 'STOKES']
           defaultaxesvalues=['2.909234541667E+02deg',
                              '1.451177222222E+01deg',
                              '233.9468GHz', 'I'],
           # 18" = 1.22 lambda/D, rescaled for the assumed source distance
           beam=["{0}deg".format(18/3600.*distance_scaling),
                 "{0}deg".format(18/3600.*distance_scaling),
                 "0deg"],
           )


The beam probably matters for the Jy->K or Jy/beam->Jy conversion. The defaultaxesvalues should correspond to the pointing center of the interferometric pointing or mosaic.

Next is bringing this image - which we now pray is in the correct units - into UV space:

sm.openfromms("continuum_7m12m_noflag.ms")    # open the real measurement set
sm.setvp()                                    # set the voltage pattern / primary beam model
success = sm.predict(perseus_casa_image)      # replace the visibilities with the FT of the image
assert success
# TODO: get these from ASDM_CALWVR and WEATHER
success2 = sm.setnoise(mode='tsys-atm', relhum=60.0, pwv='2mm', tatmos=265.0)
success3 = sm.corrupt()                       # apply the atmospheric noise to the visibilities
sm.done()
sm.close()


This snippet loads the .ms file (the visibilities), then overwrites them with the Fourier transform of the image using sm.predict. It's not clear whether setvp is necessary. setnoise + corrupt just add "appropriate" atmospheric noise to the visibilities.

From there, you should just use the same clean parameters as for the original data, or try to optimize clean here and use these parameters on the real data.
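As a concrete, hypothetical example, the final imaging step might look something like this; everything except the measurement set name is a placeholder and should be replaced by the parameters used on the real data:

tclean(vis='continuum_7m12m_noflag.ms',   # now contains the predicted (and corrupted) visibilities
       imagename='perseus_synthetic',
       specmode='mfs', deconvolver='hogbom', gridder='mosaic',
       imsize=[1024, 1024], cell='0.5arcsec',
       weighting='briggs', robust=0.5,
       niter=10000, threshold='1mJy',
       interactive=False)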

The result will be the 'perfect' (no phase error, no amplitude error) interferometric image.

• CASA clean: "splotchy" artifacts

This image shows artifacts that look "splotchy". They are locations where clean has placed a negative model component where none belongs, so the residual is highly positive and the resulting image is negative.

Discussing with others, it appears that this artifact may be unique to multiscale clean and therefore may be a result of a too-strong component being added on a large scale. It is not yet clear how to suppress these artifacts, but I'm attempting to change the scales included. Other ideas would be welcome in the comments.
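For reference, the scales are controlled through the multiscale deconvolver's scales parameter; the values below are placeholders for illustration, not a recommendation:

tclean(vis='mydata.ms', imagename='test_multiscale',
       deconvolver='multiscale',
       scales=[0, 5, 15, 45],   # in pixels; 0 recovers point-like components
       niter=10000)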

• Single Dish Combination Continued

Continuing from yesterday, I've done some more analysis of my HC3N data on different scales. The first critical note is that there appears to be disagreement between the 7m and 12m data. In principle, smoothing the 12m data to the 7m resolution should at most recover the 7m brightness; instead it seems that the 12m data significantly overpredict the 7m brightness.

This discrepancy is also evident in the power spectra, where there appears to be missing flux on 20-100 arcsecond scales.

So maybe it's just that the 7m data are bad?

I discussed the image combination process with Baobab, and he suggested that the problem may not be the flux calibration of the images (their absolute scaling) but rather the relative weights used to combine them. The default weights, and pretty much all variants I had experimented with, use the single-dish beam as the starting point: you weight the interferometer data by (1-SDbeam) in the Fourier domain.

It turns out that, while this weighting scheme is the obvious thing to do because the single-dish beam is well-characterized, it may not be the best. The interferometer data may lose sensitivity on larger scales much faster than (1-SDbeam) would suggest.

Carrying the single-dish weight down to smaller angular scales does a better job of ensuring there are no negatives, but is it really "right"? Unfortunately, there's no good answer to this; the process is essentially the same as modifying the Briggs parameter when weighting interferometric data for cleaning. You just have to choose a scale, or a weighting function, and go with it. As long as the weights are conservative (i.e., sum to 1), the resulting image should in some sense be correct.
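To make "choose a weighting function" concrete, here is a minimal numpy sketch of a Fourier-domain combination in which the single-dish weight is a Gaussian of selectable FWHM; the arrays are hypothetical and assumed to be on the same grid, in the same units, with any beam-area scaling already applied:

import numpy as np

def combine_with_gaussian_weight(im_interf, im_sd, fwhm_pix):
    # Fourier transform of an image-plane Gaussian with the chosen FWHM (in pixels)
    sigma = fwhm_pix / np.sqrt(8 * np.log(2))
    ky, kx = np.meshgrid(np.fft.fftfreq(im_interf.shape[0]),
                         np.fft.fftfreq(im_interf.shape[1]), indexing='ij')
    w_sd = np.exp(-2 * (np.pi * sigma)**2 * (kx**2 + ky**2))
    # the single dish dominates large scales, the interferometer small scales;
    # the two weights sum to 1 at every spatial frequency
    combined = w_sd * np.fft.fft2(im_sd) + (1 - w_sd) * np.fft.fft2(im_interf)
    return np.fft.ifft2(combined).real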

In that figure, the weighting scheme for the large angular scale data was set to a series of different Gaussians. The resolution of the 7m+TP data is (theoretically) 24x11 arcseconds, but carrying the single-dish weight down to scales below the major beam axis seems to do a better job of keeping the image positive.

The effect is more pronounced if we just combine the 12m+TP data, ignoring the 7m data (which might be wrong anyway). For weighting FWHM above 30 arcseconds, the interferometric data dominate the map.

• Single Dish - Interferometer Combination Experiments

I spent 3 days in Köln with Alvaro Sanchez-Mongé, Peter Schilke, Fanyi Meng, and Anika Schmiedeke working on Sgr B2 data and specifically on trying to combine the single dish (total power) data with my merged ACA+12m data from ALMA program 2013.1.00269.S.

tl;dr: feathering appears to work fine, and many other methods work equally well (assuming they can be implemented, which is uniformly harder). Negative bowls persisting after feathering are an indication of a problem with the input data.

Feathering

Fourier space combination of the single-dish and interferometric data is by far the most straightforward to implement and the fastest. However, in the HC3N data, it left strong negative bowls, which should not be possible.

On the Einstein image, the image quality from feathering was not great, but there were no negative bowls left. The dynamic range is lower in the Einstein data, but this still hints that there is a problem with the HC3N images. We're investigating the possibility that the 7m data are improperly calibrated or weighted.

We never managed to create UV data from the single dish data because of complaints about the pointing information.

Cleaning with a single dish image as an input model

We attempted to clean the data using the single dish image as an input model.

On the real data, this failed with both tclean and clean. With tclean, there was no error message; it just hung indefinitely. With clean, the error message is:

*** Error ***  LatticeExprNode - coordinates of operands mismatch
Scanned so far: modelos_0 + __temp_model2

2016-06-01 10:35:05     SEVERE  clean::::       An error occurred running task clean.


Changing CDELT to 1 GHz, which solved a previous issue, had no effect here.

It turns out tclean will fail silently if it doesn't find the startmodel. For versions before 4.6, the startmodel has to be specified as an image prefix and the image has to exist as a .model file; from 4.6 on, the model can be referenced directly (as in clean). Eventually, tclean seems to have completed, though the results indicate that it does not treat the units as I expected: the total power data seem to be over-weighted by a factor of 10^3 or more, probably by the ratio of the beam areas.
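For concreteness, the eventual invocation looks schematically like this (filenames are placeholders; note that tclean's model images are in Jy/pixel, so a Jy/beam single-dish image would have to be rescaled first, which is presumably where the beam-area factor above comes from):

tclean(vis='sgrb2_hc3n.ms',
       imagename='hc3n_with_sdmodel',
       startmodel='sd_model.model',   # must exist and match the grid of the output image
       specmode='cube', deconvolver='hogbom',
       niter=10000)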

For the simulated images (einstein), we get the error:

2016-06-01 10:55:11 SEVERE  SynthesisImager::defineImage (file /Users/rpmbuild/gradle/workdir/casasources/release-4_5/code/synthesis/ImagerObjects/SynthesisImager.cc, line 668)    Error in adding Mapper : Error in createImStore : ::operator!= (const IPosition&, const IPosition&) - left and right operand do not conform
2016-06-01 10:55:11 SEVERE  tclean::task_tclean::   Exception from task_tclean : 2016-06-01 10:55:11        SEVERE  SynthesisImager::defineImage (file /Users/rpmbuild/gradle/workdir/casasources/release-4_5/code/synthesis/ImagerObjects/SynthesisImager.cc, line 668)    Error in adding Mapper : Error in createImStore : ::operator!= (const IPosition&, const IPosition&) - left and right operand do not conform


This is probably an issue with regridding. imregrid doesn't like our data:

2016-06-01 11:00:41 SEVERE  imregrid::image::regrid Exception Reported: Exception: The number of pixel axes in the output shape and Coordinate System must be the same. Shape has size 4. Output coordinate system has 3 axes.
2016-06-01 11:00:41 SEVERE  imregrid::image::regrid+        ... thrown by SPIIF casa::ImageRegridder::_regrid() const at File: /Users/rpmbuild/gradle/workdir/casasources/release-4_5/code/imageanalysis/ImageAnalysis/ImageRegridder.cc, line: 138


Adding a frequency axis to the FITS data (which was missing before...) seems to have fixed this error for the Einstein data, but not for the real HC3N data.

Linear Combination in Image Space

Linear combination of the single dish and interferometer data in image space, followed by image-space deconvolution, has been used successfully on HI data. In principle, this is very straightforward, except for the need for deconvolution. CASA now has an image-space deconvolution program (deconvolve), so I was able to implement this approach. However, the deconvolver seems to only work on the inner 1/4 of the image in each dimension, which left incomplete images that were difficult to compare. Additionally, CASA does not (obviously) carry the appropriate machinery for computing the residual image and adding the convolved model back to the residual image. Still, this is a promising route forward as it is computationally relatively cheap and mostly easy to implement.
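The combination step itself is just a weighted average of the two images (a numpy sketch, roughly following the prescription used for HI data; the arrays are hypothetical, assumed regridded to the same pixels and in Jy/beam, and the subsequent deconvolution with the correspondingly combined beam is not shown):

import numpy as np

def linear_combine(im_int_dirty, im_sd, beam_area_int, beam_area_sd):
    # weight the single-dish image by the ratio of the beam areas
    alpha = beam_area_int / beam_area_sd
    # the matching combined beam would be (dirty_beam_int + alpha * sd_beam) / (1 + alpha)
    return (im_int_dirty + alpha * im_sd) / (1 + alpha)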

Linear combination is partly implemented now using the CASA deconvolve task; it hasn't been generalized but you can see the outline / single working case at this link.

• What does feather do?

This is an examination of the 'feather' task in CASA.

First we have to track down what code is actually called.

On mac, this is:

• /Applications/CASA.app/Contents/Resources/python/feather_cli.py -> feather_cli
• Imager.cc

Within the imager, setvp is used to do something with the primary beam. As far as I can tell, this function spews a lot of text into the logger, sets local variables, sets one larger-scope variable (BeamSquint::), then calls destroySkyEquation(). In short, I don't know what this does. Probably my reading of the C++ code is incorrect and these variables are not local, in which case it is just setting a bunch of default beam parameters.

Then, Imager.cc calls Feather.cc.

Feather sets parameters based on the effective dish diameter. It uses 1.22*(speed of light)/frequency/dish_diameter, so it is retrieving the FWHM assuming a Gaussian beam by default.

A lovely aside: Imager.cc passes the local variable lowPassFilterSD to Feather.cc, where it is read into the input boolean parameter doHiPassFilterOnSD. That seems like a potential place for confusion.

If doHiPassFilterOnSD is set or the effective beam diameter is set as a parameter, the single dish image is deconvolved with its beam using the GaussianDeconvolver. This is actually disabled by default, though.

The low resolution image is regridded to match the high resolution image.

The applyFeather command seems to do the actual work. It starts with calcCWeightImage, which sets the "weight" to one minus the peak-normalized Fourier transform of the low-resolution (single-dish) beam (Feather.cc, line 427), with the result stored in the variable cwImage_p. applyFeather then just multiplies the high-resolution image by this weight.

Back in the saveFeatheredImage task, the single-dish data are Fourier transformed, then a scaling factor, the ratio between the low and high beam areas, is computed. Finally, the (scaled) single-dish and interferometer Fourier-transformed images are added.
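Putting those pieces together, the whole operation appears to amount to something like this schematic numpy sketch (not the actual Feather.cc code; the array names are hypothetical and the two images are assumed to share a grid, in Jy/beam):

import numpy as np

def feather_schematic(im_int, im_sd, sd_beam, beam_area_int, beam_area_sd):
    # calcCWeightImage: one minus the peak-normalized FT of the single-dish beam
    wt = np.abs(np.fft.fft2(sd_beam))
    weight = 1.0 - wt / wt.max()
    ft_int = np.fft.fft2(im_int) * weight      # applyFeather: only the interferometer image is weighted
    scale = beam_area_int / beam_area_sd       # ratio of the beam areas, applied to the single dish
    ft_sd = np.fft.fft2(im_sd) * scale         # the single-dish image itself is not weighted
    return np.fft.ifft2(ft_int + ft_sd).real   # saveFeatheredImage: add and transform back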

Is the single-dish image weighted by the beam at all? The single-dish image will be convolved with a kernel only if doHighPassFilterOnSD is set or if the effective dish diameter is set. If the beam is the same size as the original beam, nothing will happen. In setEffectiveDishDiam the low-resolution image is convolved with the *deconvolved* beam.

The most important note in this post is that the single-dish image is not weighted unless doHighPassFilterOnSD is set, and even then nothing will be done if the single-dish beam size passed as a parameter is the same as the beam size specified in the header. That means only the interferometer image is weighted. Maybe that is the correct behavior? By weighting the interferometer data by (1 - the Fourier-transformed single-dish beam), which zeroes the interferometer image's contribution to the total flux, the process is guaranteed to be flux conserving.

• CASA coordinate defaults

A brief public service announcement: CASA coordinate strings follow a convention of their own making that I, at least, find thoroughly counterintuitive. If you specify a sexagesimal coordinate, e.g. 05:12:15.3 -5:10:20.5 or something similar, CASA interprets that as 5 hours, -5 hours: the declination is interpreted as being in hours, units no one has ever used. CASA instead requires the declination part of the sexagesimal string to be period-separated rather than colon-separated:

2016-05-19 00:45:13     WARN    SynthesisParams::stringToMDirection (file /var/rpmbuild/BUILD/casa-prerelease/casa-prerelease-4.6.0/code/synthesis/ImagerObjects/SynthesisUtilMethods.cc, line 786)     You provided the Declination/Latitude value "-28:23:29.22" which is understood to be in units of hours.
2016-05-19 00:45:13     WARN    SynthesisParams::stringToMDirection (file /var/rpmbuild/BUILD/casa-prerelease/casa-prerelease-4.6.0/code/synthesis/ImagerObjects/SynthesisUtilMethods.cc, line 786)+    If you meant degrees, please replace ":" by ".".


This has broken my scripts more than a handful of times now, since copying & pasting coordinates from any other program inevitably has the "wrong" units.
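In practice, this means a direction string that CASA parses as intended uses colons in the RA (hours) but periods in the declination (degrees); for example (the RA value here is illustrative):

# parsed as RA = 17h47m19.4s, Dec = -28d23m29.22s
phasecenter = 'J2000 17:47:19.4 -28.23.29.22'
# parsed as a declination in *hours*, triggering the warning above:
# phasecenter = 'J2000 17:47:19.4 -28:23:29.22'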

• CASA tclean adds negative components

I'm trying to examine some of my 7m+12m combined data and have run into yet another mysterious behavior of CASA. tclean is adding negative components in regions with exclusively positive emission.

In this image, the leftmost panel is the clean image, which doesn't make any sense; it looks like it should be a residual image but it is not. The second panel shows the clean components (negative!), the third the residual, and the fourth the dirty map.

One tricky thing for this particular image is that the emission is real, BUT it is also directly on top of a major sidelobe of the brightest source in the region. Nonetheless, I cannot imagine any way to explain the negative component. I don't really understand when a negative component is added, but presumably it ought to happen when the residual has a negative value but also the highest amplitude in the map; this obviously never happens here since the residual in fact has one of the highest positive amplitudes in the map.

Most imaging parameters seem not to affect this problem; I attempted reducing the gain to 0.05 to no avail. Changing from natural to robust weighting, however, did remove most of the negative components.

Failed combination of 7m and 12m data

The above was a tangent. The real reason I was working on this data set was to combine the 7m and 12m spectral data. The problem I keep seeing is very large, smooth blobs that are spatially distinct from the more compact emission. It's as if someone took the two independent cubes (12m and 7m) and just added them together; it looks like flux is not being conserved.

The UV plots, especially weight vs. uvdist, look fine. The 7m antennas are downweighted relative to the 12m antennas by a factor of ~2.

One of the bigger problems is that I have a vague - possibly not justified - sense that there is a velocity offset of ~1 km/s between the 7m and 12m data. If this is the case, it's no wonder the combination isn't working. However, such an offset seems nearly inconceivable.

Followup Feb 2017

That vague fear turned out to be entirely justified: the 7m and 12m array data sets were not in a common velocity frame, and they were therefore offset by something like the Earth's change in velocity between the observation dates. The solution was to cvel the data sets to the LSRK frame prior to concatenating them. While I know I made these changes, I unfortunately can't find the specific commit where I made them.
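A sketch of the fix described above (the filenames are placeholders):

cvel(vis='calibrated_7m.ms', outputvis='calibrated_7m_LSRK.ms', outframe='LSRK')
cvel(vis='calibrated_12m.ms', outputvis='calibrated_12m_LSRK.ms', outframe='LSRK')
concat(vis=['calibrated_7m_LSRK.ms', 'calibrated_12m_LSRK.ms'],
       concatvis='calibrated_7m12m_LSRK.ms')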
