Memo for WCSTools

Recently I have grown increasingly dissatisfied with astrometry.net, because no matter how I tweak the parameters, in some portions of an image the catalogue stars' positions can never be overlaid on the corresponding actual stars. I have also learnt that astrometry.net only provides a crude solution for the plate constants of an image; therefore, if one needs high-quality astrometry, the WCS from astrometry.net is not yet accurate enough, and better astrometric software should be used. This is why I need to use WCSTools.

The following is simply a memo recording my steps to install the software. My reference is basically this webpage.

(1) Download the latest version of WCSTools (3.9.4).

(2) Go to the directory where the file was downloaded and type the following commands to unpack and build it:

$ tar xvfz wcstools-3.9.4.tar.gz
$ cd wcstools-3.9.4
$ make all

(3) Now copy the programs, header files and library into the system directories:

$ sudo cp wcstools /usr/local/bin/
$ sudo cp ./bin/* /usr/local/bin/
$ sudo cp libwcs/*.h /usr/local/include/
$ sudo cp libwcs/libwcs.a /usr/local/lib/

(4) Test the installation:

$ wcstools

(5) You should see a very lengthy message including the version of the software as well as a list of programme names. The following is what I get (truncated):

WCSTools 3.9.4 Programs
http://tdc-www.harvard.edu/software/wcstools/

addpix:    Add a constant value(s) to specified pixel(s)
bincat:    Bin a catalog into a FITS image in flux or number
char2sp:   Replace this character with spaces in output (default=_)
conpix:    Operate on all of the pixels of an image
cphead:    Copy keyword values between FITS or IRAF images
crlf:      Change CR's to newlines in text file (for imwcs, imstar logs)
delhead:   Delete specified keywords from FITS or IRAF image file headers
...

Since I am using an online server to access the star catalogues, environment variables such as the following have been set:

setenv UB1_PATH http://tdc-www.harvard.edu/cgi-bin/scat
setenv SAO_PATH http://tdc-www.harvard.edu/cgi-bin/scat
setenv UCAC3_PATH http://tdc-www.harvard.edu/cgi-bin/scat
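
These lines are in tcsh syntax; if the login shell is bash instead, the equivalent export form would be (a pure syntax translation of the lines above):

export UB1_PATH=http://tdc-www.harvard.edu/cgi-bin/scat
export SAO_PATH=http://tdc-www.harvard.edu/cgi-bin/scat
export UCAC3_PATH=http://tdc-www.harvard.edu/cgi-bin/scat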

Note that I have not yet figured out how to connect to the UCAC4 catalogue. Apparently WCSTools cannot cope with the newly released Gaia catalogue either.

As I am still in the process of learning the software, I may occasionally update this post in the future.


Install OrbFit on My Mac

So far I have not yet been fully successful with the software, but the problem may well be intrinsic to the code rather than a mistake during my installation; I have reported a bug to the OrbFit consortium. Anyway, I think I should record my installation steps, which may benefit me in the future if I ever need to reinstall the software.

Step 1: Download the software from the website and unzip it.

Step 2: Switch the terminal shell to tcsh.

Step 3: Type config -O gfortran, since I am using gfortran. Other modes add the -static flag, which leads to the error ld: library not found for -lcrt0.o.

Step 4: Type make. It should take seconds to finish.

Step 5: Download the ASCII-formatted JPL DExxx ephemeris files into the directory src/jpleph. I downloaded DE431. Edit the makefile there if necessary.

Step 6: Type make ephemerides. This converts the DExxx files from ASCII to binary, which takes about 20 seconds on my computer. After it completes, a file called jpleph will appear.

Step 7: Copy that file to lib under the main OrbFit directory, or go to that directory with cd ../../lib and then type ln -s ../src/jpleph/jpleph jpleph to create a symbolic link.

Step 8: Go to the main directory and test the installation:

make tests

This is where I am confronted with segmentation fault errors, with the latest version, OrbFit 5.0.
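
Putting the commands from Steps 3 to 8 together for my own future reference (assuming the shell is already tcsh, everything is run from the unzipped OrbFit main directory, and the DE431 ASCII files have already been placed in src/jpleph as in Step 5):

config -O gfortran
make
cd src/jpleph
make ephemerides
cd ../../lib
ln -s ../src/jpleph/jpleph jpleph
cd ..
make tests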


Weird Function TRIANGULATE in IDL

Time to update my blog again; it has been more than a year since my last update after all. Today I encountered a weird error in IDL while trying to improve the computational precision of my own polar-transform image routines. The source was traced back to the function TRIANGULATE. So far I really have no idea why it occurs. An example follows:

IDL> theta=dindgen(360L)
IDL> rad=dindgen(500L)
IDL> xpol=rad#cos(theta/1.8d2*!dpi)
IDL> ypol=rad#sin(theta/1.8d2*!dpi)
IDL> triangulate,xpol,ypol,tri
% TRIANGULATE: Points are co-linear, no solution.
% Execution halted at: $MAIN$

However, if I type the following, TRIANGULATE works flawlessly:

IDL> xpol=rad#cos(float(theta/1.8d2*!dpi))
IDL> ypol=rad#sin(float(theta/1.8d2*!dpi))
IDL> triangulate,xpol,ypol,tri
IDL> help,tri
TRI             LONG      = Array[3, 358920]

This is really annoying, as I cannot figure out how co-linearity could possibly be related to float versus double precision. I am not sure whether anyone can help me with this, but I will just throw the problem out here and see if anyone else has faced the same thing, so that we can potentially sigh together…
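
One thing I do notice is that the whole rad = 0 ring collapses onto 360 identical points at the origin, which may or may not be related. A generic workaround I have seen suggested for the co-linear complaint (not a real fix, just a sketch of the idea) is to add a tiny random jitter to the coordinates before triangulating, so that no points are exactly coincident or co-linear:

IDL> eps = 1d-6
IDL> seed = 1L
IDL> dims = size(xpol, /dimensions)
IDL> xj = xpol + eps*(randomu(seed, dims[0], dims[1]) - 0.5)
IDL> yj = ypol + eps*(randomu(seed, dims[0], dims[1]) - 0.5)
IDL> triangulate, xj, yj, tri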


My Record of Installing IDL, SSW and SSWDB on My New Mac OS X

I got a new Mac OS X laptop from Dave and needed to install IDL, SSW and SSWDB on it. The following are the main steps by which I installed them successfully.

 

Basic steps of installing IDL 8.3 on my Mac OS X

1. Download the installation package from EXELIS.

2. Click on the icon and follow the instructions of the installation wizard.

3. Put the license file into the directory /Applications/exelis/license.

4. The license wizard pops up automatically after the installation completes. Now select the license file in the panel.

5. Now IDL can be launched successfully by double-clicking the Workbench icon; however, if you type idl in the terminal, you will be told idl: Command not found. The environment has to be set.

6. Type the following command in the terminal:

vi .tcshrc

7. Check whether the following lines are already in the file, and insert them if not.

setenv IDL_DIR /Applications/exelis/idl83
setenv OS darwin

In a tcsh command line, typing idl will now successfully run IDL.
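
As an aside, if I recall correctly IDL also ships a csh setup script under its bin directory that defines the idl command and the related variables, so sourcing it from .tcshrc should achieve much the same thing (the path below is assumed from the installation directory above):

source /Applications/exelis/idl83/bin/idl_setup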

 

Steps of installing SSW

1. Fill out and submit the form at http://www.mssl.ucl.ac.uk/surf/sswdoc/solarsoft/ssw_install.html.

2. Follow the instructions on that website and the links therein.

3. Typing sswidl in the terminal fails at this point, so insert the following lines into the .tcshrc file:

set path=(/usr/local/bin /sw/bin /sw/sbin /usr/local /usr/local/ssw/gen/mirror /usr/X11R6/bin /Applications/exelis/idl83/bin $path .)
setenv SSW_INSTR "lasco secchi"
setenv SSW /usr/local/ssw
setenv SSWDB /usr/local/ssw/sswdb
setenv ssw_quiet 1
source $SSW/gen/setup/setup.ssw
source $SSW/soho/lasco/setup/setup.lasco_env
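
With these lines in place, open a new terminal (or source the file again) and sswidl should now start IDL with the SolarSoft environment loaded:

source ~/.tcshrc
sswidl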

Installation of SSWDB

1. Somehow I was not able to generate a configuration file from http://www.lmsal.com/solarsoft/sswdb_configure.html, but there is a file called setup.sswdb_upgrade_template in the directory ssw/site/setup. Add lines for the packages you need, or, if the packages are already listed in the template, remove the corresponding "#" at the beginning of their lines. Root privileges are required:

sudo vi setup.sswdb_upgrade

2. Follow the steps listed at http://www.mssl.ucl.ac.uk/surf/sswdoc/solarsoft/sswdb_install.html.

3. If Perl is not installed in /usr/local/bin (mine lives in /usr/bin), create a symbolic link by typing

ln -s /usr/bin/perl /usr/local/bin/perl

in the terminal.

4. In IDL, run:

IDL> sswdb_upgrade, /spawn, /passive_ftp

5. Be patient with Step 4; it took me rather long.

6. Done!


Further to My Previous Blog, rgd RVSF

I got in touch with Dr. Nalin Samarasinha at PSI, who wrote the code for all of the special cometary processing filters on the PSI website. He has been patiently answering every one of my questions, so I now understand what his RVSF code is basically doing, even though I know very little FORTRAN. The discrepancy I spotted between the source code and the explanation file is indeed a typo.

I added a new keyword to my IDL routine so that the user can choose whether to switch sub-pixel sampling of the image on or off before the filter is applied. Through some simple tests I realized that the difference between the sub-pixelized and non-sub-pixelized results is not very obvious on visual inspection. However, the speed differs considerably, especially when the input image, or the region selected, is large; in that case the processing time with sub-pixelization can be substantial. So it is a good idea to first preview the enhanced image without sub-pixel sampling, which not only saves a great amount of time but also gives you a basic idea of whether your kernel parameters are appropriate, and then process the image with sub-pixelization using those reasonable kernel parameters.
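
In practice the workflow therefore looks something like the following (the routine name and keywords here are purely illustrative placeholders, not the actual interface of my code):

IDL> quick = rvsf_enhance(img, a=4.0, b=4.0, n=0.4)             ; fast preview without sub-pixel sampling
IDL> final = rvsf_enhance(img, a=4.0, b=4.0, n=0.4, /subpixel)  ; final run with sub-pixel sampling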

I could not discern any difference between the enhanced images with and without sub-pixelization; however, subtracting one from the other clearly reveals what was previously hidden. See the following image.

The difference between non-sub-pixelization and sub-pixelization sampling. Linear stretch.

Additionally, I am delighted to find that the enhanced image produced with sub-pixelization is very similar to the one produced by Nalin's code or PSI's online tool, so I think my routine is quite successful.

Finally, before the end of this update, I need to confess that I had mistakenly assumed the author of the source code to be Padma Yanamandra-Fisher. I was confused by their names, which all look rather long to me… ><


Testing My Radially Variable Spatial Filter Code

I think I need to update my blog; it has been so long since my last update… I should also prove that I am still alive here.

I saw a message posted by Martino Nicolini in the comets-ml that the PSI website has released a web tool for processing cometary images, including azimuthal median/average/renormalization filters, 1/rho coma-model division, and, quite unfamiliar to me, a radially variable spatial filter (http://www.psi.edu/research/cometimen). I attempted to process an HST image of comet C/2012 S1 (ISON) taken in May 2013 with the online tool; however, I had difficulty retrieving the enhanced data. Weirdly, the size of the file to be downloaded was always 0 KB, which is obviously problematic. Considering that the network is not always available to me, it would be absurd and a waste of time to wait for an accessible network before producing specially processed cometary images, and therefore I decided to write code for my own purposes.

So I did, referring to the explanation file of the enhancement techniques to understand the overall idea of how the radially variable spatial filter works. Implementing the algorithm of the filter proved easy, and it did not take me long to finish the IDL routine.

I did some tests with the image CometCIEF_test.fits provided on this page. The following was generated with kernel parameters A = 4.0, B = 4.0, N = 0.4. It looks correct anyway, quite similar to the appearance in the tutorial file.

RVSF test image

However, I found my result would look somewhat different from those presented in the tutorial file if the kernel size was smaller; probably the scaling plays a role there, but in any case my outcome would look less detailed. The corresponding FORTRAN source code of the filter on the PSI page seems to have a typo, which I have already reported to Padma Yanamandra-Fisher. I am still comparing my code against PSI's…


Solution to IDL Memory Allocation Problem

Long time since the last update of this blog.

I was confronted with the following error when trying to process a large array made up of SECCHI COR2 data, and it was extremely painstaking before a solution was found. Many of the suggested solutions simply tell you to switch platforms, e.g. from 32-bit Windows to Linux, because the problem stems from memory fragmentation: the free memory is not contiguous. For a better understanding, suppose you have 1000 MB of free space in total, but it may well be that this total is the sum of several discontiguous chunks, say five of 200 MB each. If you then want to allocate 500 MB in one go in IDL, the following error message pops up:

   % Unable to allocate memory: to make array.
      Not enough space
   % Execution halted at: $MAIN$
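
To put the numbers in perspective, a single 500 MB array of 4-byte floats amounts to about 131 million elements that must live in one contiguous chunk of address space, which is exactly what a fragmented 32-bit process cannot provide:

   IDL> a = fltarr(500L * 1024L * 1024L / 4L)   ; about 131 million elements, roughly 500 MB in one piece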

I found an excellent alternative way to partly overcome this problem after reading Coyote's Guide. It works for me! At least I can reduce a median background from the daily stack.

When you hit the aforementioned error message, exit the Workbench first. The next step is to find a file called idlde.ini somewhere in the IDL installation directories and open it with a text editor. It may read like the following:

   -vm
   {VM_DIR}
   -vmargs
   -Xms256M
   -Xmx768M
   -XX:MaxPermSize=128m

Make sure that {VM_DIR} remains unchanged; it is the path to the Java VM on your machine, and malfunctions may occur otherwise. Change the lines to:

   -vm
   {VM_DIR}
   -vmargs
   -Xms128M
   -Xmx128M
   -XX:MaxPermSize=128m

Restart your IDL Workbench and you may see the difference. But please do not get too exhilarated: if you still work with an oversized array, this may not resolve your problem. You can never be too careful when running IDL on a 32-bit Windows machine. Frankly speaking, it is better to run IDL on non-Windows machines, on which IDL is said to be more capable, but I cannot say more as I lack such experience.
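
If the array is still too large even after the idlde.ini change, the fallback I would try is to build the median background block by block rather than holding the whole daily stack in memory at once. The following is only a rough sketch of that idea, not my actual reduction code; the file list, frame size and the use of readfits are placeholder assumptions:

   ; Compute a per-pixel median background from a list of FITS frames
   ; ("files" is a hypothetical string array) without ever allocating the
   ; full nx x ny x nfiles cube. Each block of rows is read and reduced on
   ; its own, trading extra file reads for a much smaller memory footprint.
   nx = 2048 & ny = 2048                ; assumed frame size
   nfiles = n_elements(files)
   nblock = 64                          ; rows per block; tune to the memory available
   bkg = fltarr(nx, ny)
   for y0 = 0L, ny - 1, nblock do begin
      y1 = (y0 + nblock - 1) < (ny - 1)
      slab = fltarr(nx, y1 - y0 + 1, nfiles)
      for k = 0L, nfiles - 1 do begin
         img = readfits(files[k], /silent)
         slab[*, *, k] = img[*, y0:y1]
      endfor
      bkg[*, y0:y1] = median(slab, dimension=3)
   endfor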
