Tuesday, November 5, 2013

pyAstroStack: Getting prepared for first release

...where getting prepared means a LOT of work. Measured in lines of code, maybe even more than the project already contains.

Things I'm working on, one by one:

Project file

I want it to be possible to resume the stacking process at a later time from any point. I also want to make it possible to, for example, run the stacking a couple of times with different settings (perhaps to test which method works best).

I'm implementing this via a project file. It holds all the necessary information about the source photos and what has been done with them. The program reads the project file and can tell the user what's already done and what the next step might be. It also knows the locations of the uncalibrated CFA images, calibrated images, demosaiced images, registered images and so on, so the user can redo any step if wanted.
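A minimal sketch of what I mean, assuming an INI-style layout read with Python's configparser (the section and key names here are made up for illustration):

import configparser

project = configparser.ConfigParser()
project["Default"] = {"name": "Andromeda"}
project["State"] = {"calibrated": "no", "demosaiced": "no", "registered": "no"}
# Paths to the source frames; hypothetical keys, one entry per image
project["Light"] = {"1": "/media/.../Andromeda/IMG_6234.CR2"}

with open("Andromeda.project", "w") as f:
    project.write(f)

Reading the file back then tells the program which steps are done and where each intermediate file lives.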

The project file approach is also required by my user interface.

User Interface

Eventually I'd like to have a graphical user interface (most likely PyQt), but for now it's command line only. I've started to like IRIS and its command window, so that's what I originally had in mind. I've deviated from it a bit...

Here's how it's going to work:

AstroStack <operation> <project> <arguments>

Some examples (the ones I have already implemented):

AstroStack init Andromeda
- Initialize a new project by the name of Andromeda

AstroStack adddir Andromeda light /media/.../Andromeda/
- Add all files of the proper type from the specified directory to the project as light frames. Actually this isn't fully implemented: it currently works without the "light" argument and asks for the type instead.

AstroStack addfile Andromeda light /media/.../Andromeda/IMG_6234.CR2
- Add a single file to the project. Otherwise the same as above. This should probably be made to understand wildcards.

AstroStack stack Andromeda flat
- Stack the flat frames in the project. This creates a file called masterflat.fits in the working directory and saves the information about the masterflat in the project file. I'll change all the file names to be project specific so that several simultaneous projects can exist without interfering with each other.

AstroStack demosaic Andromeda
AstroStack register Andromeda
- These are in the code but haven't been tested. They probably won't work yet.
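A rough sketch of how such a command dispatch could look (the helper functions here are hypothetical; the real code is organized differently, no doubt):

import sys

def main():
    if len(sys.argv) < 3:
        print("Usage: AstroStack <operation> <project> <arguments>")
        sys.exit(1)
    operation, project, args = sys.argv[1], sys.argv[2], sys.argv[3:]
    if operation == "init":
        init_project(project)            # hypothetical helper
    elif operation == "adddir":
        add_directory(project, *args)    # hypothetical helper
    elif operation == "stack":
        stack_frames(project, *args)     # hypothetical helper
    else:
        print("Unknown operation: " + operation)

if __name__ == "__main__":
    main()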

I hope this shows the idea of how the program is going to work. I'm also thinking about the GUI implementation all the time and trying to write everything to be compatible with it, so no rewrite should be necessary later.

Check and rewrite everything for UI compatibility

I didn't really think about how the UI would work when I first wrote this. Everything is classes and objects, so it should be trivial to get everything working in a UI instead of a test script. Mostly the changes have been about where a method gets its arguments. Incorporating the project files also requires changes so that there won't be several places to update information.

Decide on image format

First I used FITS via astropy.io.fits, but I ran into some problems with it. I changed the image format to TIFF via Pillow's Image module, but I've run into even more problems... FITS might still be the better choice. AstroPy also has one advantage over Pillow: it's designed for exactly this use. As I understand it, it can automatically slice large arrays into smaller ones, which is handy for calculating medians.
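If I go back to FITS, the slicing would look something like this (a sketch with astropy.io.fits; .section reads just the requested rows from disk instead of loading the whole image):

from astropy.io import fits

hdulist = fits.open("masterflat.fits")   # data is not loaded yet
band = hdulist[0].section[0:512, :]      # read only rows 0-511 from disk
hdulist.close()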

Problems with TIFF
  • Pillow indexes arrays differently than SExtractor does with FITS. This requires coordinate system changes in registration.
  • Saving intermediate files with better precision than the final output. Pillow doesn't like my numpy.float32 data and such.
  • Saving RGB images. For some reason all the RGB images I tried to save came out as a mess.
Problems with FITS
  • Creating them. I learned the right switches for Rawtran, but the problem is the program itself. I dislike the idea of a dependency people have to compile themselves. I'll try to change this to DCRaw only, or a DCRaw + ImageMagick combo.
  • The final result should be something other than FITS for easier postprocessing. TIFF suits fine, but that means Pillow would remain a dependency.
ImageMagick is being used for the affine transformations and I'm not sure whether it can do them on FITS files. Better to try that soon before deciding. Otherwise I'm choosing FITS.

Change IDE

This does concern the program a bit. Eclipse was massive and sluggish, and I constantly had problems with either PyDev or EGit (or whatever the Git plugin was called). I ranted about this on IRC and PyCharm was suggested. The free version seems extremely nice and fast, and Git works out of the box. The first thing I did after changing to PyCharm was to fix all the PEP 8 problems it pointed out. Agreed, the code is much more readable now.

I'm also trying a 2-panel setup. I've been switching between two files so much that this could be useful.

Better demosaic and median stacking

I want these to be ready before releasing version 0.1. The demosaic is currently bilinear, and the way it lost all the colours on my test images suggests it doesn't work too well. I found something called the LaRoche-Prescott demosaicing algorithm and decided to give it a go.

Stacking currently works only by mean value, which is fast and easy to implement. Eventually I want sigma-clipped medians and such, but for the first release I want at least median stacking.
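For frames that fit in memory, median stacking is nearly a one-liner with numpy (a sketch; a memory-friendlier version follows in the next section):

import numpy as np

# Stand-ins for real calibrated light frames
frames = [np.random.rand(100, 100) for _ in range(5)]

stack = np.array(frames, dtype=np.float32)   # shape (n_frames, height, width)
master = np.median(stack, axis=0)            # per-pixel median over the frames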

Memory usage

While running tests, I ran out of RAM. And I have 16 GB of it. So that's a problem. It seems Python's garbage collection isn't that efficient without the programmer's help. I managed to make stacking work in about 5 GB (with 30 light frames), but I still think that's too much. I have to get that lower.
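One way to get the memory use down, sketched here under the assumption that the frames are FITS files: combine the median with the section-based reads shown earlier, so only a horizontal band of each image is in RAM at a time.

import numpy as np
from astropy.io import fits

def stack_median(paths, rows_per_chunk=256):
    # Open all frames lazily; .section reads only the requested rows
    hduls = [fits.open(p) for p in paths]
    ny = hduls[0][0].header["NAXIS2"]   # image height
    nx = hduls[0][0].header["NAXIS1"]   # image width
    result = np.empty((ny, nx), dtype=np.float32)
    for y0 in range(0, ny, rows_per_chunk):
        y1 = min(y0 + rows_per_chunk, ny)
        band = np.array([h[0].section[y0:y1, :] for h in hduls],
                        dtype=np.float32)
        result[y0:y1, :] = np.median(band, axis=0)
    for h in hduls:
        h.close()
    return result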

That's about it...

A lot to do, as I said. Where am I now?


This can already be done, so the core modules seem to work. It's everything around them that needs work.

By the way, I've been using the Andromeda galaxy as my test data, and hence all the images processed so far have been of Andromeda. I decided to give the names of celestial objects to the different versions of pyAstroStack. Version 0.1 will be Andromeda. That's already the name of a branch in Bitbucket.


Tuesday, October 22, 2013

pyAstroStack: Calibration works, affine transformations by ImageMagick

Actually all of that worked already before the previous post. I had to make a couple of changes to the calibration functions because the images are now monochrome where they used to be RGB, but otherwise everything was in order.

I had some problems with the master dark being white, but I found out that was because the program was reusing old temporary dark#.tiff files. The master bias was subtracted once every time I ran the program, causing the uint16 values to wrap around below zero. I've now changed the program to use float32 during the process and to output int16 only when everything is ready. Temporary files are also now manually removed before every new test run.
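In numpy terms the fix is roughly this (a sketch; the names and values are illustrative):

import numpy as np

light = np.random.randint(0, 65535, (10, 10)).astype(np.uint16)  # stand-in frame
masterbias = np.full((10, 10), 512, dtype=np.float32)            # stand-in bias

# Do the arithmetic in float32 so subtraction below zero doesn't wrap around
calibrated = light.astype(np.float32) - masterbias

# Only at the very end clip back into range and convert to integers
final = np.clip(calibrated, 0, 65535).astype(np.uint16)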

I noticed ImageMagick can do affine transformations based on matching point pairs. Just what I need! It's a lot faster than scikit-image, which I won't be needing anymore, so that's one less dependency to worry about. I could probably do a lot more with ImageMagick as well, so I have to look into it some more. This project isn't about coding everything myself. It's about getting astronomical image stacking done with open source software. Hence, ImageMagick is fine.
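The point-pair form looks roughly like this, called here via subprocess (a sketch; the coordinate pairs are made-up example values for three matched stars, each given as "source x,y destination x,y"):

import subprocess

# Three matched star pairs (made-up values): "x,y X,Y" per pair
pairs = "10,15 12,14  230,95 233,93  170,240 172,239"
subprocess.check_call(["convert", "light3.tiff",
                       "-distort", "Affine", pairs,
                       "light3_registered.tiff"])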

Next I think I should make some kind of project file which holds information about the temporary files: which of them can be removed and which reused.

So here's the first result of full calibration process.


That's of course not what my code outputs. Postprocessing has been done with Darktable. A bigger version is again on Flickr: http://www.flickr.com/photos/96700120@N06/10427590704/

In addition to the project file, I'll start working on interpolating the calibrated raw images into RGB.

By the way, the whole process now takes 7 min 30 s. That's for 30 bias, 10 dark, 7 flat and 30 light frames. Quite good, I think.

Monday, October 21, 2013

Unexpected problems with image alignment

Continuing the development of pyAstroStack (I have to come up with a better name for it...), I ran into problems where I didn't quite expect them.

I'm also starting to find out that where I thought I knew what happens in image registration and stacking, I'm actually missing quite a lot. I knew about the Bayer filter in a DSLR, but it seems I misunderstood how it's used. I thought there were 12M red pixels, 12M blue pixels and 24M green pixels in a 12 Mpix sensor, when there actually are 3M red, 3M blue and 6M green. I also didn't understand that converting a raw photo into FITS or TIFF with DCRaw's (or Rawtran's) default settings doesn't give me the real raw data. So the first version of my program used interpolated colour data from the beginning. I started to fix this.

I had difficulties understanding Rawtran's switches, but I knew what I had to do with DCRaw to get the un-interpolated Bayer data out of the raws. Rawtran gave me FITS, but DCRaw gives PPM or TIFF. I wanted my program to output TIFF, so I decided to change astropy.io.fits to something that uses TIFF. I found Pillow. It can also import images into numpy arrays, so I should be able to make the transition easily...
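For reference, the DCRaw invocation in question, wrapped in subprocess (as far as I can tell: -D skips both demosaicing and scaling, -4 gives linear 16-bit output and -T writes TIFF instead of PPM):

import subprocess

# -D: raw CFA data, no demosaic or scaling; -4: linear 16-bit; -T: TIFF output
subprocess.check_call(["dcraw", "-D", "-4", "-T", "IMG_6234.CR2"])
# Produces IMG_6234.tiff next to the original raw file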

I made a lot of changes before running proper tests. I tried to include dark, bias and flat calibrations at the same time, and when I finally ran tests, all the resulting images were mostly black. I removed the calibration functions from my test program, but to no effect. I stretched the images to the extreme and found this:


It seemed like registration was failing. I really couldn't understand this since I hadn't touched anything related to registration; all the changes were in image loading and stacking. The weirdest thing to me was why alignment failed only in the Y coordinate while X was OK. It took me a while to figure this out. I reverted to an older, working version of the code and started making all the changes to it one by one, running tests after each change. Pillow and TIFF were the cause. Still I couldn't understand why, until I ran everything using FITS but wrote the output as TIFF. The result was a perfectly aligned image, upside down! Either TIFF handles coordinates in a different order than FITS, or the Pillow library builds the numpy array with Y flipped. But now that I knew the cause, it was simple to fix.

Star coordinates were fetched with SExtractor using FITS even when everything else used TIFF, so the Y coordinate was always reversed. I simply changed the Y that SExtractor gave into Ymax - Y and everything worked.
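In code the fix is just this (a sketch; whether the -1 is needed depends on whether the coordinates are 0- or 1-based):

import numpy as np

image = np.zeros((3456, 5184))   # stand-in for a loaded frame
y_sex = 120.0                    # a Y coordinate as reported by SExtractor

height = image.shape[0]          # number of rows in the numpy array
y_fixed = height - 1 - y_sex     # mirror Y into Pillow's row order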


My plan was to have calibration done by now, but this coordinate mix-up took more time than it should have. It reminds me how much of an amateur I still am...

What next?

Maybe now I can work on the calibration. From what I've understood, the procedure is:
  • masterbias = stack(bias)
  • masterdark = stack(dark - masterbias)
  • masterflat = stack(flat - masterdark - masterbias)
  • stack((light - masterdark - masterbias)/masterflat)
It feels like the masterflat should be normalized. It doesn't make any sense to me to divide by the same or bigger values than you find in the light images. Dividing by a flat normalized to [0,1] feels like a better idea.
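Put together as code, the procedure with a normalized flat would look something like this (a sketch using medians as the stacking operation; I normalize by the flat's mean here, which is one common convention, [0,1] scaling being another):

import numpy as np

def calibrate(light, bias_frames, dark_frames, flat_frames):
    # All inputs are float32 arrays; frame stacks have shape (n, height, width)
    masterbias = np.median(bias_frames, axis=0)
    masterdark = np.median(dark_frames - masterbias, axis=0)
    masterflat = np.median(flat_frames - masterdark - masterbias, axis=0)
    masterflat /= masterflat.mean()   # normalize so the division keeps the scale
    return (light - masterdark - masterbias) / masterflat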

Colouring the images would also be nice. As I said, the first images were made from interpolated raws, and now I'm using properly un-interpolated Bayer data (I think). After calibration I should interpolate the monochromes into colour images with the correct Bayer mask. If I'm right about how it's done, it doesn't sound like a fast operation in Python. Perhaps PyCUDA here? Some PyCUDA introduction I read said it's at its best at calculations on numpy arrays. See the sketch below for the easy part.
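At least picking the colour channels out of the CFA data should be fast, since it's pure numpy slicing (a sketch assuming an RGGB pattern; the actual offsets depend on the camera):

import numpy as np

cfa = np.zeros((3456, 5184), dtype=np.float32)   # stand-in for a calibrated CFA frame

# RGGB layout assumed: every 2x2 cell is [[R, G], [G, B]]
red    = cfa[0::2, 0::2]
green1 = cfa[0::2, 1::2]
green2 = cfa[1::2, 0::2]
blue   = cfa[1::2, 1::2]

The slow part is the interpolation that fills in the missing two thirds of each channel.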

Sunday, October 13, 2013

New project: pyAstroStack

It has been bothering me that there is no free stacking software for astrophotographers on Linux. I've heard PixInsight is awesome and I have no doubt it is, but it costs money. I wonder why no one has ever made a free (as in freedom) alternative. Maybe because there are decent free (as in free beer) programs such as DSS, Regim or IRIS (which I compared here).

I decided to try and code one myself. I basically understand a lot of the mathematics involved, and I've studied programming a bit alongside physics and mathematics, so I thought I might have the skills... Still, there have been some problems where I least expected them. For example, making an affine transform of a data matrix was surprisingly difficult.

So now I announce:

pyAstroStack

An open source stacking software for astronomical images

For now the program is extremely limited. It works from the command line and is configured by editing the source code. It also stacks only by average value, doesn't calibrate images with darks, flats and biases, and saves the result only as three FITS files (one for each colour channel)... But it works for my test data! That's when I decided to make this public.

My test data was the best astrophoto I've taken. Not much, as you can see, but nevertheless it is my best. Here's the first successful result of my own code.

Andromeda, stacked with pyAstroStack and postprocessed with ImageMagick and Darktable
Stacking was done in pyAstroStack, and open source software was also used for the postprocessing. First I tried IRIS for setting the colour balance on the FITS and saving it as TIFF, and it did a much better job than I could in Darktable. But my goal was to have everything done with open source software, so that's why no IRIS was used on this image.

And here's the same stacked with Iris http://www.flickr.com/photos/96700120@N06/10002768985/in/set-72157634344389164

The code can be seen on Bitbucket. It's licensed under GPLv3. I hope this goes somewhere and that I have the time and resources to make it easier to use and install. If you've read this far, I assume you are somewhat interested in the project. Awesome. If you have any ideas on how to make the registering faster or reduce the number of required Python libraries, I'm all ears.

Feel free to add enhancement or proposal ideas to the issue tracker on Bitbucket.

Sunday, September 29, 2013

Comparing DeepSkyStacker, IRIS and Regim

As I recently wrote, I've been trying to learn AstroSurf IRIS. I had been doing everything in DeepSkyStacker until now, but the result always seems too colourless. Sometimes it looks like grayscale, sometimes sepia... But rarely do the colours seem real. I thought it was just my lack of skills, but it seems that's not necessarily so. IRIS and Regim do give me colours on the same data where DSS does not.

On a side note: IRIS works quite well with Wine on Linux. The drag & drop dialog for raw conversion does not, but it's possible to convert raws using the command line. First you have to place the photos in IRIS's working directory and name them with the same scheme IRIS uses everywhere. This time I copied all the CR2s to /media/data/Temp/iris and renamed them andromeda#.cr2, bias#.cr2, flat#.cr2 and dark#.cr2. Then you just run

> CONVERTRAW andromeda andro 31

where "andro" is the generic name for converted image and 31 is the number of pictures. Same for flats, biases and darks, of course. From there on everything seemed to work just as in Windows.


DeepSkyStacker

The procedure is quite straightforward. You load lights, darks, flats and biases, start registering, and DSS recommends the best settings for the stacking method and the rest. The process takes a while, and afterwards you have a "ready" photo. Of course there's still all the postprocessing to be done, but that I did in Darktable.

Here's the photo
Andromeda with DeepSkyStacker. Same on Flickr

Regim

The procedure is even more straightforward than with DSS. You load lights, darks and flats (no biases possible), tell it to begin, and after a while the picture is done. Usually the white balance is wrong, but there are good automatic and manual tools for that. This time Regim's astrometrics recognized Andromeda and set the white balance correctly. Otherwise you have to set a B-V value manually for some star in the picture. I've found these values in KStars, Stellarium or, failing those, by googling the star's name.

Regim also has an automatic gradient remover which works quite well, but as you can see, I forgot to use it this time. Postprocessing was done in Darktable. I think I overdid the denoising on my first try; it looked too smudgy. Here's the second version.

Andromeda with Regim. Same on Flickr

IRIS

IRIS has the steepest learning curve. I explained the procedure briefly in a previous post. Again the postprocessing was done in Darktable. This time I was more careful with the denoising.

Andromeda with IRIS. Same on Flickr

Results

A 100% zoom of the three images. You might argue that DSS gives just as much detail as the others, but I just couldn't get more than that out of it in Darktable. I tried not to mess with colour profiles in postprocessing and to show the colours the stacking software gave. Of course I adjusted the RGB balance in IRIS and did an automatic white balance adjustment in Regim. Seems like they don't agree on this.


I like IRIS's colours the best. The level of detail is about the same in Regim and IRIS. IRIS is no doubt the most difficult to learn, but in return it also gives the user real insight into what's actually going on in the stacking.

Neptune found

Now I've seen or photographed every planet in this solar system (except for Earth, of course). Neptune was the last to be caught.

I was taking photos of Andromeda, and while the camera was shooting I watched stars with my telescope. The star chart showed that Uranus and Neptune were to the south, and in that direction I have a clear view. I had already photographed Uranus, so I aimed for Neptune. I probably saw it in the telescope but didn't recognize the star-like object as a planet. With the camera I had more luck. Neptune was in an easy place in the middle of stars visible to the naked eye. I pointed my camera there and took a couple of shots.

Afterwards came the more difficult task of comparing the images to a star chart. KStars didn't show enough stars, so I tried Stellarium, which also showed only stars up to about magnitude 10. That wasn't enough to recognize which spot was Neptune. I fought with it for a while, adjusting settings and downloading more star catalogues... until I realized Stellarium has to be restarted to make it use the new catalogues.

Here it is.

Neptune. At least according to the Stellarium star chart.
I wouldn't realize it's a planet if the charts didn't show it. No wonder Galileo mistook it for a fixed star. With enough imagination you might see some blueish pixels on top of the dot. Those might even be real colours, if I managed to set the balance correctly in IRIS.

Here's the image on Flickr. http://www.flickr.com/photos/96700120@N06/9949334363/
And here's the original whole frame. Good luck finding Neptune there: http://www.flickr.com/photos/96700120@N06/9997488305/

I still need a photo of Mercury. Otherwise I have photos (of some kind) of every planet. I've seen Mercury with the naked eye and through a telescope, but at the time I had no camera to attach to the telescope. Also, my photos of Venus, Mars and Saturn are from a time before I had a Barlow.

Thursday, September 26, 2013

Beginning to learn IRIS (Andromeda and Double Cluster)

Seems like I'm starting to learn stuff about astrophotography, since DeepSkyStacker is starting to bug me a lot. It's easy to learn and does something nice, but the results aren't that nice to work with further. Everyone seems to promote AstroSurf IRIS as a free (as in free beer) stacking software. It's not that easy, though.

To use IRIS you really need to understand what happens in the whole stacking process. I read some ebook about it (can't find the link) and thought I understood it, but it seems I didn't. I think I finally got it after following this guide: IRIS:tä aloittelijoille ("IRIS for beginners"; sorry, it's in Finnish).

If I understood correctly, here's how it goes:

  • Make the master offset/bias, flat and dark frames. These are all important, so don't forget to take them.
  • Calibrate the images ("light frames"). This roughly means subtracting the processed bias and dark frames and dividing by the flat.
  • Transform the images into RGB.
  • Register, which means aligning the images so that the stars are in exactly the same places in every photo.
  • The stacking itself.
  • Postprocessing (colour balance, colour profiles...)
I felt enlightened when I realized that. Maybe I should have read the ebook I mentioned with a bit more thought, or perhaps reading IRIS's manual would have been a good idea as well. Anyway, this fits perfectly with everything else in my stargazing hobby: I tend to learn everything by doing it wrong first.

Then the photos:


The Andromeda Galaxy: M31, M32 and M110 of the Messier objects. I took 31 exposures of 18 seconds each. 20 s made the stars trail a bit, but at 18 s it was unnoticeable. The first one was to check that everything was in order, and then 30 more. The photos were taken in my front yard. Luckily Andromeda was in such a position that none of the streetlights was in front of the camera. I also looked at it visually, and I really have to say it was the most I've ever seen of that galaxy. Quite a beautiful gray spot in the eyepiece.





The Double Cluster: NGC 884 and NGC 869. While shooting Andromeda I looked through my 200 mm Dobson at everything I thought I might see. I realized I had never seen the Double Cluster, although it should be easy to find. It looked nice, so after the 31 exposures of M31 I took 30 of this. The same 18 second exposures as before.



Monday, September 9, 2013

Test run on Raspberry controlled camera (and some nice photos)

I never got around to testing the camera controlling script I wrote for the Raspberry Pi until now. I didn't remember the internals of the script, but luckily I had made it extremely easy to use. I had Raspbian on an SD card, installed Gphoto2 and GCC (required to compile usbreset), and took the Raspi to the balcony with the EQ3-2 and the camera.

It took me a while to set up networking (cable out of the window) so I could control the Raspi remotely, but once I had it all set up, everything worked perfectly. I turned the camera to face the brightest star I saw (Altair), focused, and took some test photos. Here's a single shot with 200 mm focal length, 30 s exposure time and ISO 1600. Quite good considering I'm shooting in a city not long after sunset.


The sky was clear. All the crappiness is from my camera.

Benefits of the Raspi system

I can't take very long exposures. With a 200 mm focal length it seems 25-30 seconds is the maximum, and the EOS 1100D can do that on its own. Why is this Raspi setup nice, then?
  • More than 10 shots at a time. With just the camera I can take 10 at a time, then I have to go and press the trigger once more. This is surprisingly annoying.
  • Automatic upload to the NAS. I sat at my computer and watched new photos flow in. With just the camera I have to look at test photos on its screen. Now I saw them instantly on a good screen, and the subsequent adjustments were a lot easier to make.
  • Automatic naming of photos. I have way too many photos named IMG_4244.CR2 and such. I organize them according to date and object, but when unloading the camera I have to remember everything I tried to capture.
I'm considering building some kind of system for the Raspi to control the EQ3-2's motor. Last night I had to get up, go out and turn the camera a bit, come back inside and check the results, then do the same again n times. I've been told the motor on the EQ3-2 is way too slow for this, so perhaps I'll think of some other solution... Anyway, to business:

The test photos

From a star chart I saw that M11, the Wild Duck Cluster, should be near my "calibration" star Altair. It was quite easy to locate the cluster, and after a couple of test photos I got it centered. Later on I noticed M26 was just a bit lower and turned the camera there. It fit easily in the same photo. Here are some results:

Whole view with M11, M26, NGC 6712 and NGC 6704
The last time I tried to upload a photo this size, Google automatically resized it, so here's the original on Flickr in case that happens. I also put this on Astrobin.

Nova.astrometry.net is a nice service for astrometric plate solving. The software is free, but I still haven't got around to installing it; it wasn't trivial. Meanwhile the service works. Astrobin also does plate solving, but for some reason it doesn't show me the annotations on the full resolution image. That would be nice. Anyway, I got this from nova.astrometry.net:
Source of annotated image: http://nova.astrometry.net/user_images/75597#annotated
Quite a lot in one photo. Some of those I cropped into their own pictures:

M11 - Wild duck cluster

M26

NGC 6712

NGC 6704... I guess.
Move along. Nothing to see here.

Tuesday, July 30, 2013

Gentoo, Linux 3.10 and Nvidia drivers (also Linux 3.11)

Update: Check the end for Linux 3.11

It looks like the binary Nvidia drivers aren't compatible with Linux kernel 3.10 for some reason. There's a patch (I found it on the Nvidia DevZone forum), but for some reason the Gentoo maintainers won't include it in gentoo-sources even behind a USE flag. Portage only tells me:

* Gentoo supports kernels which are supported by NVIDIA
* which are limited to the following kernels:
* <sys-kernel/gentoo-sources-3.10
* <sys-kernel/vanilla-sources-3.10

* You are free to utilize epatch_user to provide whatever
* support you feel is appropriate, but will not receive
* support as a result of those changes.

* Do not file a bug report about this.

I hadn't heard about epatch_user before, but it seems perfectly reasonable for Gentoo to have something like this. Basically it is a way for the user to easily patch the sources to be built without editing ebuilds.

I started to look for info on how to use this epatch_user. The Gentoo handbook was the first place to look, but I couldn't get it to work with that info. After looking at other pages I found out what was wrong. I think the handbook should say
/etc/portage/patches/<category>/<package>/<name>[-<version>[-<revision>]]
instead of
/etc/portage/patches/<category>/<package>[-<version>[-<revision>]]
where <name> is an arbitrary name for the patch. Anyway, I wrote a (very) short howto for those with the same problem.

A short howto

Run all the commands as root:
mkdir -p /etc/portage/patches/x11-drivers/nvidia-drivers
wget http://pastie.org/pastes/8123499/download -O /etc/portage/patches/x11-drivers/nvidia-drivers/nvidia-drivers-325.08-lnx311-rc0.patch
emerge nvidia-drivers

Quite easy. Of course this assumes you have gentoo-sources-3.10.x configured and compiled, and nvidia-drivers-325.08 unmasked.

Linux 3.11

It seems something different is needed for Linux 3.11. That's solved by a patch from https://bugs.gentoo.org/show_bug.cgi?id=482168. Put that file in /etc/portage/patches/x11-drivers/nvidia-drivers-325.15/ and Nvidia 325.15 should compile for Linux 3.11.

Monday, July 8, 2013

Regim and processing astrophotos in Linux only

Thanks to the Astrophotography community on G+ I found Regim. I've stacked my astrophotos using DeepSkyStacker in Windows, while everything else (with the computer in general) I do in Linux, so booting into Windows is always a pain. Now it seems that's over. Regim is written in Java and works quite well on Linux. I've experienced a couple of crashes, but otherwise everything seems good.

Regim is easy to use, highly automatic, and easily does some things I haven't found in DSS, for example removing gradients. That is quite handy for me since I take photos in my backyard, and every one of them has some gradient from nearby city lights.

On the other hand, Regim is quite limited in what it does, and it really requires proper image processing software to finish anything.

I tried Regim on some of my old imaging data and compared it to what I got with DSS. Here's comet PanSTARRS close to Andromeda, processed with DSS.

PanSTARRS and Andromeda stacked with DeepSkyStacker

Here's what I got with Regim. After stacking I removed the gradient, adjusted the white balance with Regim's "manual" tool (I set the B-V value of Andromeda's center to 0.63), and did the final adjustments in Darktable. So, everything in Linux only. Finally.

PanSTARRS and Andromeda stacked with Regim

Now if only Gimp could handle 16-bit colours...

Tuesday, May 14, 2013

Windows and error messages

People say Linux is difficult to use because when something goes wrong, the error messages are too cryptic. I disagree.

I recently installed new Nvidia drivers on Windows 7 and ended up with the computer stuck in a reboot cycle. The Blue Screen Of Death flashed for a fraction of a second. I understand Microsoft wanting to get rid of the notorious BSOD, but I don't see how hiding the error message altogether helps anything. Of course F8 on boot helped, and I got to see the BSOD and its message. It said something about Nvidia, and booting with the "last known good configuration" (or something like that) worked. Great.

A couple of boots later (and after Windows had updated its SATA drivers) I got the same reboot cycle with a flashing BSOD. This time the last known good configuration didn't help, so I went to see the BSOD. Here it is:
A user-friendly error message
I'll also type out the technical information in case someone searches Google for help, as I did.
*** STOP: 0x0000007B (0xFFFFF880009A9928,0xFFFFFFFFC0000034,0x0000000000000000,0x0000000000000000)

My searches didn't give me a direct answer right away on what to do, but I found out this has something to do with SATA (and Windows had just updated the SATA drivers... hmm...). I went to the BIOS, changed all the SATA buses from AHCI to IDE, and Windows booted up nicely.

Of course this was after a couple of days of troubleshooting from Linux. I ran ntfsck and all the other rescue tools I could think of. I also tried all the tools on the Windows installation disc.

So... I'm just wondering why the error can't say, for example, "Can't read system disk" or something else less cryptic than 0x0000007B.

A totally different story is my problems after buying a new motherboard, which has EFI instead of BIOS. Windows didn't start up; it only rebooted the computer without any error messages at all. Nothing in the logs either. After a couple of days of rescue discs and such, I found out the reason was an unpartitioned 500 GB HDD. I set up a partition table on it and Windows booted up just fine.

Anyway... Linux has always told me what exactly is wrong. If my skills haven't been good enough to fix it, Google has always found a solution. I've known what to look for.

Tuesday, April 16, 2013

Gentoo on Raspberry Pi

I have some problems with both Raspbian and RaspBMC. RaspBMC is perfect for XBMC, which I'm used to having working whenever I need it. Everything else is difficult. I'd like to install some additional software such as Gphoto, irssi and screen, but I have to be extra careful not to break everything. I've borked the installation more than five times and had to do it all over again.

Raspbian is Debian, and a proper one. It works out of the box with everything but XBMC. I found a selfprogramming blog with compiled packages and instructions for installing them. Nice, but "difficult": there's no repository, and you have to softlink files manually. Maintaining this XBMC will not be easy. You also have to set it up to start automatically yourself, but I don't consider that a bad thing. I still haven't figured out how to do that so that XBMC won't be running all the time, but will be when I want it to.

Anyway... I don't believe Gentoo will magically do everything I want just the way I want. I'm trying it because I'm more familiar with it than with Debian. I know how to use dpkg and apt, but that's about as deep as my Debian knowledge goes.

My plan is to eventually cross-compile everything on my desktop. For now I have distcc set up, and the desktop handles some of the compilation. It really speeds up the emerge process. My plan also includes not having the Portage tree or kernel sources clogging up the SD card. I'll try installing XBMC first before attending to those problems.

For now, Gentoo is running. The installation finished with some minor problems, which I'll address in separate posts on this blog.

Friday, April 5, 2013

PanSTARRS and Andromeda

Yesterday, April 4th, must have been quite a day for astrophotographers. Comet C/2011 L4 (PanSTARRS) was as close as it gets to the Andromeda galaxy. I too went outside in the hope of catching the view on camera.

The weather could have been better. There were thin clouds obstructing the view and spoiling my photos. The comet itself was quite easy to find, although there was no chance of seeing it with the naked eye or even through the camera. I pointed my camera to where I knew Andromeda was, and after a couple of shots I had them both in the image.

I took several images, hoping that DeepSkyStacker could find a bit more of Andromeda than just the center. Here's what I have:


If you know what Andromeda looks like, you can see it. I've taken much better photos of Andromeda before; now it was too early for that. Also, the gradient is quite disturbing. I've heard IRIS might be able to remove the gradient. DSS certainly did not.


Friday, March 22, 2013

PanSTARRS comet and Aurora Borealis

I saw a picture of comet PanSTARRS (C/2011 L4) taken in Turku, Finland on March 16th, 2013, and decided to go out and search for it myself the next day. So on Sunday the 17th I went to Kuuvuori ("Moon mountain" in English) in Turku with some other stargazers who had been there the day before. I was at the spot at 18:50 and started to set up my equipment.

At about 19:50 we found it in my Dobson. Hmm... I'd better check the time, because that seems quite late. 19:52, says the EXIF timestamp of my first photo of it. Maybe it was a bit earlier... Anyway, here it is.


Whoa! Or, not much to see... depends on how much you respect the astronomy itself. That photo is actually not the first. All the photos through my Dobson were more or less blurred. This one was taken with a 75-300 mm zoom lens at maximum zoom. Maybe I should have shot a video through the Dobson, like I do with planets.

A couple of minutes after really seeing the comet, something happened. I saw "clouds" on the recently clear sky, or at least I thought I did. The "clouds" turned green and the show started. All the previous Aurora Borealis I had seen were faint and required a bit of imagination to see. This one did not. The sky was flashing green and purple so fast my camera couldn't catch it. I saw arches, rays, something that reminded me of a sci-fi movie's "warp speed".

Warp speed
For a while the northern lights filled the whole sky. I decided to enjoy the view rather than fiddle with my camera to get better pictures. I got one with both the comet and the aurora in view. The comet is quite small in the photo, so I marked it.


I had my camera on an equatorial mount so the ground is tilted. Don't mind that.

I won't lie when I say that half hour was probably the best show I've ever seen.



Saturday, February 9, 2013

Comparing planet stacking software

Living near city lights makes it quite difficult for me to see deep sky objects. I can see many, but the weather has to be excellent. Planets are a much easier target. Lately Jupiter has been in a good position for viewing, and I've tried to take a lot of photos. One of them was quite a success (with my skills and equipment) because you can see the moons as well.

A single shot of Jupiter. No stacking used here. 
This is the first photo of Jupiter I've ever taken that didn't require any imagination to recognize the planet. Well... the best of the hundred I took that night. But still just a single shot and a bit of help from Gimp.

Planets should be photographed by taking a video and stacking that. I've found three programs for this: RegiStax, AutoStakkert! and AviStack. None of them is open source, but AviStack should work on Linux once you succeed in installing the IDL Virtual Machine. For some reason I can't make it work.

I took my Skyliner 200, 2x Barlow, Canon EOS 1100D and a laptop with EOS Camera Movie Recorder (which is open source, yay!). I recorded this video:


...or actually about six of those. They're all quite identical, so no need to share them all.

With the same movie I tried the different stacking programs. My skills with them are almost non-existent, so this comparison is mostly about how well the programs do automatically.

Here's the result:

I used drizzle when available, but I have no idea whether that was a good idea. I'm quite sure there was no need for it, but I don't know if it did any damage. This needs more testing.

AutoStakkert! was perhaps the most automatic. It also did the best job, although it has no wavelet sharpening; I did that with RegiStax 6. The result is a lot better than the image stacked with RegiStax itself.

Friday, January 25, 2013

Sharpening Moon with deconvolution

I bought a 2x Barlow lens for my Skyliner 200 and immediately went out to test it. I got quite lucky to have a clear sky that night. I took some photos of Jupiter and the Moon and shared them on IRC. A friend suggested I sharpen the photo of the Moon with something like this: http://www.iceinspace.com.au/63-455-0-0-1-0.html

Having studied physics, I'm familiar with the concept of convolution, and I'm aware that it should in theory be possible to reverse the process. Still, the images on that page seem like magic to me. The software, Astra Image, seems like a nice program, but of course it has a price. I'm not against commercial software and the price isn't too high, but I always try to look for an open source alternative to everything.
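As an aside, there's an open source route in Python too: scikit-image ships a Richardson-Lucy deconvolution. A sketch, assuming a Gaussian blur kernel (for real photos the PSF would have to be estimated, e.g. from a star):

import numpy as np
from skimage import restoration

# Hypothetical Gaussian PSF; its width should match the actual blur
x = np.arange(-7, 8)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))
psf /= psf.sum()

blurred = np.random.rand(128, 128)   # stand-in for the image, values in [0, 1]
sharp = restoration.richardson_lucy(blurred, psf, 30)   # 30 iterations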

I found GREYC's Magic Image Converter, G'MIC. It even has a plugin for Gimp. Better yet, it installs right from Portage on Gentoo. There are .debs on their page, so installation is probably easy on Debian-based distros such as Ubuntu too.

I haven't tried the command line version yet, only the Gimp plugin. Some page I found (and lost again, so no link) suggested that you get better results on the command line than with the plugin. Perhaps there are more parameters to set or something. I'll try that later, but for now, the Gimp plugin.

I took some photos of the Moon. With my new Barlow, I can fit about one fourth of the Moon in one picture. I made a panorama of six photos with Hugin. Because of my haste to get something ready, the images were JPEGs from the camera. That's the reason for the dark grey circle around the Moon. I should probably do some editing on the raw photos before stitching the panorama.

The photo is huge. Here's a part of it:


And here is the same photo deconvoluted:

See the difference? I applied (in haste, as usual) G'MIC's deconvolution filter with almost the default settings, and that's what I got. By the way, it took quite a while for a 4200x4300 image.

Here are the original and deconvoluted photos:

The left one is the original and the right one deconvoluted.
EDIT: Looks like Google doesn't show the photos in their original size. I'll add them elsewhere and link here.

Sunday, January 6, 2013

Script to control Canon 1100D on Raspberry Pi

I've finished the first part of my plan to use a Raspberry Pi to remotely control a camera. I put together the efforts from http://mikkolaine.blogspot.fi/2012/08/controlling-canon-eos-1100d-with-linux.html and http://mikkolaine.blogspot.fi/2012/12/raspberry-pi-canon-eos-1100d-and-gphoto.html and wrote a Python script to do it all.

Why? With the camera itself I can take at most 10 photos sequentially before having to go to the camera and press the trigger again, I can use exposure times of at most 30 seconds, and as a result I get hundreds of photos named IMG_6013.CR2 and such.

The script I wrote:
  1. Asks for
    • ISO
    • exposure time
    • number of photos
    • a name for the object in the photos.
    The name is required for naming the pictures and the directory for them.
  2. Sets all the necessary camera settings. You of course need to aim the camera, focus it manually and set the control wheel to manual mode (M).
  3. Takes the requested number of photos, resetting the USB connection after each one (because on the RasPi this is required).
  4. Downloads each photo instantly from the camera and uploads it to my NAS (mounted over NFS on the RasPi). The photos are put in a directory named after the object and are named with a timestamp, for example M42_2012-12-16_16.22.28.099570.cr2.
I just ran a test of 200 photos with an exposure time of 31 seconds, and it seems to have worked perfectly.
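In condensed form the core of such a script can be surprisingly small. A sketch using the gphoto2 command line tool (the paths and config names here are illustrative; the real script also resets the USB connection between shots and handles the NAS mount):

import datetime
import subprocess

name, iso, exposure, count = "M42", "1600", "30", 10   # answers from step 1

subprocess.check_call(["gphoto2", "--set-config", "iso=" + iso])
subprocess.check_call(["gphoto2", "--set-config", "shutterspeed=" + exposure])
for i in range(count):
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H.%M.%S.%f")
    # Capture one frame and download it straight into the NAS directory
    subprocess.check_call([
        "gphoto2", "--capture-image-and-download",
        "--filename", "/mnt/nas/{0}/{0}_{1}.cr2".format(name, stamp),
    ])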

The script and instructions for it can be seen here. It's easier to share files on Google Sites than on Blogger, and I also want a permanent page for it. I'll update the page to always include the newest version of the script.

Tuesday, January 1, 2013

Mosh - Mobile Shell and unavailable ports

The idea of Mosh is awesome: an SSH connection that does not disconnect even if the network does. In a mobile environment and behind unstable connections, that's an extremely nice feature. After setting it up on our IRC shell server I've been testing it with my Galaxy Tab. I found Mosh for Irssi ConnectBot and have been using it for a while now.

Problem

The server I referred to is almost under our control. It's a virtual server, and we have a domain and lots of open ports. No IP address of our own, though; that we have to share with a couple of other servers. That's why we don't have access to the ports Mosh needs. Mosh works by taking a normal SSH connection and starting a mosh-server in user space, which picks an open port in the range 60000-61000. Too bad we don't have access to those.

Solution 1

You can choose a port outside the default range. It's easy. Just connect with
$ mosh -p <port> <user>@<host>
I really don't see this as an option. Our server has about 20 users at the moment, some more technically oriented than others. To connect, one would have to know the available port range (which is 41000-41999 or something...) and a port that's free at that moment, since there are 20 other users. Maybe we could give everyone two or three ports for their personal use...

No. This is not an option.

Solution 2

Since the default port range is hardcoded in the source and there is no config file, I thought I'd change the code. In version 1.2.3, in the file mosh-1.2.3/src/network/network.h, there are the lines
static const int PORT_RANGE_LOW  = 60001;
static const int PORT_RANGE_HIGH = 60999;
Change those to the preferred ones, run make and make install, and things should work.

I didn't get around to testing this before I found myself once again bothering the developers on IRC. This seems to be an efficient method of solving problems with open source. I only hope the developers don't mind...

Better solution

I had to compile it from source after all, but without any changes. The Git version has a feature for giving a port range as an argument to mosh-server. Mosh-server is the program mosh runs on the server side after a successful SSH connection; mosh-client then connects to that mosh-server. They both have to know which port to use.

I installed mosh under /usr/local and renamed /usr/local/bin/mosh-server to /usr/local/bin/mosh-server.real. Then I created a new /usr/local/bin/mosh-server:

#!/bin/bash
# Pass the original arguments through and append our allowed port range
/usr/local/bin/mosh-server.real "$@" -p 44800:44999

Mosh calls mosh-server, which in turn calls mosh-server.real, passing on the original arguments and adding its own port range.

Users don't have to know any of this. They can now connect with
$ mosh <user>@<host>
and everything works as it should.

Best solution (in my opinion)

...would be a server-side config file for defining the default ports. Too bad there is no such file at the moment.