scipy is capable of doing FFT-based cross-correlation, convolution, etc., but it requires the stsci package, which is not generally easy to install. For that matter, scipy itself can be a pain some of the time. So agpy now includes a 2D cross-correlation code (correlate2d) and a power spectrum / power spectral density code (psds). These are pure-numpy codes that should be easy to use without any other bothersome dependencies. EDIT: I have them check for scipy (a bad scipy install, e.g. 32-bit executables on a 64-bit system, can cause crashes) because scipy uses FFTW and numpy appears not to. This code and related material have also been discussed on AstroBetter.
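For the curious, the core of the FFT-based approach is only a few lines of numpy. This is a minimal sketch of the technique, not the agpy implementation; the function name and the equal-shape assumption are mine:

    import numpy as np

    def cross_correlate_2d(a, b):
        # Circular cross-correlation of two equal-shaped 2D arrays:
        # by the correlation theorem, multiply one transform by the
        # conjugate of the other, invert, and shift the zero-lag peak
        # to the center of the output array.
        fa = np.fft.fft2(a)
        fb = np.fft.fft2(b)
        return np.fft.fftshift(np.fft.ifft2(fa * np.conj(fb)).real)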
pstopng
Following this thread and my need to convert IDL .ps files to .pngs so that I can view them in Mac Preview without going through an (often failed) conversion process, I made a few discoveries. First, it is challenging to get ImageMagick's convert to make an opaque background for the .ps file without seriously degrading the resolution. This can be accomplished by passing both ``-alpha Off`` and ``-density 300`` simultaneously. This is slow, though, and recommendations to speed it up using the ``-limit area 4096 -limit memory 4096`` flags actually made it slower! However, the thread pointed out that ghostscript can do the conversion directly:

    gs -dBATCH -sDEVICE=png16m -r300 -dEPSCrop -dNOPAUSE -sOutputFile=XXX.png XXX.ps

``-sDEVICE`` sets the output to png, ``-r300`` sets the density to 300 pixels/inch, ``-dEPSCrop`` is necessary to get the right-sized image out (otherwise it defaults to a portrait 8.5x11), and ``-dBATCH`` prevents the gs command line from activating after the command is executed. I'm not sure whether ``-dNOPAUSE`` is necessary, but apparently if you don't set it you have to do something after every page is processed. My code to do batch ps-to-png conversion is available at http://code.google.com/p/agpy/source/browse/trunk/agpy/pstopng.

Timing demonstrations (units are seconds; R is 'real' or clock time, U is user time, and S is system time):

    /usr/local/bin/convert -density 300 -alpha Off deline_zero_10hz_timestreams_003.ps deline_zero_10hz_timestreams_003.png
    TIMING: R: 3.126 U: 2.948 S: 0.084

    /usr/local/bin/convert -limit area 4096 -limit memory 4096 -density 300 -alpha Off deline_zero_10hz_timestreams_003.ps deline_zero_10hz_timestreams_003.png
    TIMING: R: 3.800 U: 2.970 S: 0.161

    gs -dBATCH -sDEVICE=png16m -r300 -dEPSCrop -dNOPAUSE -sOutputFile=deline_zero_10hz_timestreams_003.png deline_zero_10hz_timestreams_003.ps
    TIMING: R: 0.801 U: 0.781 S: 0.017
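The batch wrapper itself can be very simple. This is just a rough sketch of the idea, not the actual pstopng script linked above, and it assumes ``gs`` is on your PATH:

    import glob
    import subprocess

    # Convert every .ps file in the current directory to a 300 dpi png via ghostscript.
    for psfile in glob.glob('*.ps'):
        pngfile = psfile[:-3] + '.png'
        subprocess.call(['gs', '-dBATCH', '-dNOPAUSE', '-sDEVICE=png16m', '-r300',
                         '-dEPSCrop', '-sOutputFile=' + pngfile, psfile])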
pwv
Precipitable water vapor, precipitable water vapor,
go away, come again some other observer's day.
Little HARPy wants to receive,
so clear up before I leave,
stupid bumping-head old man.
pyspeckit: an astronomical spectroscopic toolkit
Jordan and I have been working on our Python-based spectroscopic analysis tool for a while now: pyspeckit is a pretty awesome tool, now functional though incomplete (and incompletely documented).
Python 64 bit!
I got 64-bit Python to compile, but it required a number of tricky steps. First, this guy has the instructions I followed: captnswing. However, it didn't work entirely as advertised. I ran the configure as advertised:

    ./configure --enable-framework=/Library/Frameworks \
        --enable-universalsdk=/ \
        MACOSX_DEPLOYMENT_TARGET=10.5 \
        --with-universal-archs=all \
        --with-readline-dir=/usr/local

then the make install, but /usr/local/bin/python pointed to the wrong place, so I replaced the symbolic link in my python path with the correct one:

    sudo rm /Library/Frameworks/Python.framework/Versions/2.6/bin/python
    sudo ln -s /Library/Frameworks/Python.framework/Versions/2.6/bin/python-64 /Library/Frameworks/Python.framework/Versions/2.6/bin/python

Now python is 64 bit:

    eta ~$ python -c "import sys; print sys.maxint"
    9223372036854775807

I haven't checked whether it works yet, though...

Update: I had to reinstall with GNU readline installed. I also have to install PyQt4 and might have to recompile numpy... numpy won't compile with python 2.6.2:

    C compiler: gcc -arch i386 -arch ppc -arch ppc64 -arch x86_64 -isysroot / -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
    compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c'
    gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c
    In file included from numpy/core/include/numpy/ndarrayobject.h:26,
                     from numpy/core/include/numpy/noprefix.h:7,
                     from numpy/core/src/_sortmodule.c.src:29:
    numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness
    lipo: can't figure out the architecture type of: /var/folders/ni/ni+DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//cceaWIvZ.out
    In file included from numpy/core/include/numpy/ndarrayobject.h:26,
                     from numpy/core/include/numpy/noprefix.h:7,
                     from numpy/core/src/_sortmodule.c.src:29:
    numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness
    lipo: can't figure out the architecture type of: /var/folders/ni/ni+DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//cceaWIvZ.out
    error: Command "gcc -arch i386 -arch ppc -arch ppc64 -arch x86_64 -isysroot / -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Inumpy/core/include -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c -o build/temp.macosx-10.5-universal-2.6/build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o" failed with exit status 1

That sucks.
Python 64-bit on Mac OS X 10.6 Snow Leopard
After yesterday's disastrous attempt to install various python packages, I started from scratch. First, I got rid of all of my python frameworks (backed up but removed from the path). Then, I compiled python 2.7 from scratch:
I got some help from http://blog.mahmoudimus.com/2009/12/python-2-6-4-and-twisted-9-on-os-x-10-6-snow-leopard/:

    ./configure --enable-framework --enable-universalsdk=/Developer/SDKs/MacOSX10.6.sdk MACOSX_DEPLOYMENT_TARGET=10.6 --with-universal-archs=intel --with-readline-dir=/usr/local
    make -j 17
    make -j 17 test

make results:
    Python build finished, but the necessary bits to build these modules were not found:
        _bsddb  dl  gdbm  imageop  linuxaudiodev  ossaudiodev  spwd  sunaudiodev
    To find the necessary bits, look in setup.py in detect_modules() for the module's name.
I'm not concerned about these - I don't use any of them and I assume I need to install some other packages to get them to work. During make test, I had two failures that resulted in "python crash" pop-up boxes:
    test_subprocess
        this bit of output is from a test of stdout in a different process ...
        this bit of output is from a test of stdout in a different process ...
    test_sunaudiodev
Then, I got some malloc errors:
    test_io
    Testing large file ops skipped on darwin.
    It requires 2147483648 bytes and a long time.
    Use 'regrtest.py -u largefile test_io' to run it.
    Testing large file ops skipped on darwin.
    It requires 2147483648 bytes and a long time.
    Use 'regrtest.py -u largefile test_io' to run it.
    python.exe(22914,0x7fff70d3ebe0) malloc: *** mmap(size=9223372036854775808) failed (error code=12)
    *** error: can't allocate region
    *** set a breakpoint in malloc_error_break to debug
    python.exe(22914,0x7fff70d3ebe0) malloc: *** mmap(size=9223372036854775808) failed (error code=12)
    *** error: can't allocate region
    *** set a breakpoint in malloc_error_break to debug
    python.exe(22914,0x7fff70d3ebe0) malloc: *** mmap(size=9223372036854775808) failed (error code=12)
    *** error: can't allocate region
    *** set a breakpoint in malloc_error_break to debug
    test_ioctl
Python magic / advanced numpy indexing
Yeah, indexing python arrays should really be easy. Stefan van der Walt's page pointed the way:

    In [85]: bi = (f.bolo_indices[np.newaxis,:] + zeros([7751,1])).astype('int')
    In [86]: whc = (whscan[:,np.newaxis] + zeros([1,107])).astype('int')
    In [87]: array2d[whc,bi] = temp2d
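Here is a toy version of the same trick (made-up shapes and values, not the original data), just to show how two broadcast integer index arrays pick out a block of the target array in a single assignment:

    import numpy as np

    # Two integer index arrays whose shapes broadcast against each other
    # select a (2, 2) set of row/column combinations in one assignment.
    array2d = np.zeros((5, 4))
    rows = np.array([1, 3])[:, np.newaxis]   # shape (2, 1)
    cols = np.array([0, 2])[np.newaxis, :]   # shape (1, 2)
    array2d[rows, cols] = 99.0               # sets elements (1,0), (1,2), (3,0), (3,2)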
Python: one-line arrays
Ahhh, refreshing: ``whscan = asarray([arange(scanlen)+i for i in scans_info[:,0]]).ravel()``. Not like IDL, which takes at least 4 lines because of the variable declaration. There's probably a better way to do that too (see the sketch below).
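One candidate for that better way, assuming scans_info[:,0] holds the starting sample of each scan and every scan is scanlen samples long, is to let broadcasting build the index array directly (an untested sketch with stand-in values):

    import numpy as np

    scanlen = 100
    scan_starts = np.array([0, 500, 1200])   # stand-in for scans_info[:, 0]
    # (nscans, 1) + (scanlen,) broadcasts to (nscans, scanlen), then flatten.
    whscan = (scan_starts[:, np.newaxis] + np.arange(scanlen)).ravel()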
Python: setting matplotlib defaults
Setting matplotlib defaults is a lot more difficult than it should be.

    matplotlib.defaultParams['image.origin'] = 'lower'
    matplotlib.defaultParams['image.interpolation'] = 'nearest'
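In more recent matplotlib the runtime equivalent goes through rcParams; a hedged sketch (the keys should match the ones above, but check your matplotlib version):

    import matplotlib

    # Runtime defaults: subsequent imshow() calls pick these up.
    matplotlib.rcParams['image.origin'] = 'lower'
    matplotlib.rcParams['image.interpolation'] = 'nearest'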
python stuff
To get imshow defaults to be nearest-neighbor, you need to edit ~/.matplotlib/matplotlibrc. I still don't know how to change the default command-line output format.
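For reference, the relevant matplotlibrc entries look something like the following (same keys as the rcParams mentioned above; the exact set of valid values depends on your matplotlib version):

    # in ~/.matplotlib/matplotlibrc
    image.interpolation : nearest
    image.origin : lower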