Monday, November 13, 2017

From One Gun Guy To Another: A Proposal To Reduce Victims In Mass Shootings

Like all of my blog posts, this article gives my personal views and does not represent the views of my employers, past or present. 

Like a lot of folks in America, I grew up around guns. My grandfather and dad taught me to shoot in the clay pits of south Alabama, with a strong focus on safety and responsibility. As I got older, I realized that it was not only a great deal of fun, but a method of effectively protecting myself and my family against folks who might try to do us harm. I take that right and responsibility quite seriously. I practice regularly and have done a lot of reading to educate myself on the implications of using a weapon in self defense. I had a concealed carry permit and carried regularly for years, both at home and when backpacking. I'm a gun guy. I enjoy using them, and I stay ready in case I ever have to use one in defense.

Like a lot of gun guys over the last few years, I've been watching the growing number of mass shootings and struggling with it. Yes, I believe guns in the hands of good guys are a good thing, but how do we balance that against folks who are mentally ill killing or injuring dozens at a time? A lot of gun folks insist it's a people problem, not a gun problem. I argue that it's both. Clearly, mentally ill people must not have access to firearms, to the greatest extent possible. But we all know how hard that is. The background check system fails sometimes, or an ill person simply has someone buy the weapon for them. Sometimes they take a family member's weapon that is not well secured.

So we have a certain number of mentally ill folks that are going to have access to guns. How can we limit the amount of damage they can do without dramatically reducing the freedoms of millions of lawful gun owners? 

We should treat high capacity magazines (>10 rounds) the same way we treat sound suppressors ("silencers"). Existing high capacity magazines should be bought back at fair market value, or the owner should acquire the tax stamp and pass the background check. As with suppressors, the penalty for unlawful possession must have serious teeth.

To buy a suppressor, you are subjected to an extensive background check - more extensive than for buying a firearm. For each suppressor you buy, you must also buy a tax stamp, at a cost that is affordable if someone really wants or needs it, but high enough to make it painful to buy a lot of them. The current cost for the stamp, per suppressor, is $200.

Here are the reasons why this is an effective strategy.

1) It will dramatically reduce the number available to an attacker. Only people who really need them will keep them, and those folks will pass a more stringent background check. They WILL be available to those who need them, though, in modest number.

2) Your kids are probably receiving some form of armed intruder response training in school. I've gotten two variants of it in the workplace. They teach you to run, hide, and as a last option, to wait until the attacker is reloading, and fight. You are trained to throw things, from books to fire extinguishers, and to gang up on the attacker to immobilize him.

If an attacker has a collection of 30 round magazines, pauses to reload don't happen very often. If the magazine capacity is limited to ten rounds, the victims get two additional windows of opportunity per magazine in which to flee or fight. 

3) If someone is planning an attack and tries to accumulate a lot of them, it will be noticed.

4) It effectively limits the number of rounds an attacker can easily carry which are ready to fire.

Existing magazines being registered would require serialization. A system would need to be created to allow a worn magazine to be exchanged for a new one without requiring another stamp, with the worn magazine destroyed.

Undoubtedly, you are raising concerns. They are probably the reactions that I tend to have to any proposed gun control too. 

"The bad guys won't turn theirs in, so it won't help." 

A lot of guns used in mass shootings are obtained legally by the shooter or through family members. Making high capacity magazines harder to obtain will make it more likely that a shooter won't have a pile of those magazines on them when they commit their crime. 

"I can change a magazine really quickly. It won't make a difference."

For some people who are extensively trained to use guns under pressure - military, for example - that's probably reasonably true. For those of us who spend our days in offices, though, changing a magazine at the range doesn't translate to doing it while people are pelting you with books or whacking you over the head with a fire extinguisher. That's hard. That requires more focus than the vast majority of civilians have. 

Besides - if there is no advantage to them, why do you want one? You know that argument doesn't hold water. If they didn't have a significant advantage in an offensive situation, you wouldn't see them issued to infantry around the world.

"If we give an inch, they will take a mile. They want to ban all guns!"

If we refuse to acknowledge that the gun is a force multiplier, and do not genuinely try to find a solution that reduces harm to society, we are far more likely to lose our gun rights. Casually replying that guns don't kill people in response to a tragedy where one mentally ill person has killed dozens of innocents is callous, and counterproductive. 

Decades ago, we as a nation decided that machine guns, suppressors, high explosives, and heavy weapons should be heavily controlled because they place unreasonable power to do harm in the hands of a single person. Our gun rights have not been eroded as a result. This should be extended to high capacity magazines to limit the harm a single ill person can do.

"The second amendment isn't about hunting. We must be ready to repel a foreign invader or a hostile government."

You can do that with a good scoped bolt action hunting rifle. You can do that with 10 round magazines. If you feel really, really strongly about it, obey the law, pass the background check, and buy yourself some high capacity magazines. Then lock them up.

"I need high capacity magazines to defend myself or my family."

High capacity magazines in a pistol are certainly potentially useful in an extended altercation on the street. If you really feel the need, they would still be available; it would just be a bit harder. It should be noted, though, that a review of the NRA's Armed Citizen covering 5 years of incidents involving the use of firearms by civilians for self defense indicated the following:

"As might be expected, the majority of incidents (52%) took place in the home. Next most common locale (32%) was in a business. Incidents took place in public places in 9% of reports and 7% occurred in or around vehicles.

The most common initial crimes were armed robbery (32%), home invasion (30%), and burglary (18%). Overall, shots were fired by the defender in 72% of incidents. The average and median number of shots fired was 2. When more than 2 shots were fired, it generally appeared that the defender’s initial response was to fire until empty." (emphasis mine)

Now, if you are talking about your AR, consider that carefully. If you live in a typical city or suburban area and are firing 30 rounds of .223 at your attacker, those rounds will go through walls and stand a high likelihood of injuring someone other than your attacker. 

"High capacity magazines are fun."

They sure are fun. But that is an absolutely inadequate reason to keep them so readily available when they make it easier for one mentally ill person to kill a couple dozen people. 

"The solution is to have more armed law abiding citizens."

That is an entirely separate potential solution that deserves careful study. It doesn't invalidate any of the arguments presented here.

Making high capacity magazines harder for an attacker to get will give victims under attack more time to flee, hide, or fight. 

We, as pro-gun people, need to acknowledge that the weapons we keep DO play a part in these tragedies. T-shirt slogans aren't good enough. We need to propose effective approaches to reducing the death toll from these terrible incidents.

Friday, November 3, 2017

Automated meteor/aircraft/satellite detection for sky camera in Python



Introduction

In a previous post, I described the construction of a simple networked sky camera built with a Raspberry Pi. I was pleased with how well it worked, but quickly figured out that manually reviewing the more than 5000 frames it generated per night was a drag. 

The following describes a Python script that makes calls to ImageMagick on a Linux computer to identify frames that potentially contain something interesting. In a typical night, it reduces the frames for manual review from 5000+ to a dozen or so, and it picks up lighter streaks than I tend to see rapidly skimming through by hand. There are certainly more sophisticated ways to do this, but I was surprised at how well it works. Specific drawbacks are mentioned below.



Things you are likely to catch with a sky camera

My sky camera exposes for 10 seconds per frame. As a result, a bright moving object will leave a line. You can interpret the line to determine what it is.

1) Airplanes. Lots of airplanes. You'll be surprised at how many airplanes fly over your house or observatory.

Airplanes that fly at night are required to have strobe lights on the wing tips and vertical stabilizer. As a result, they produce a distinctive dot-dash-dot-dash pattern as they fly across the sky during a long exposure, which is easy for a person to identify.

Also, they cross from one side of the frame to the other, and take several frames to do it - typically three to four. They don't appear in the middle of the sky and then disappear after short distances.




Airplanes

2) Satellites. Fewer than airplanes, and they are fairly easily distinguished. First, they don't have strobes, so they appear as a fairly consistent brightness that travels in a straight line. They also usually take 3-4 frames to cross from one side to the other. If it's a straight, non-strobing line that lasts more than two frames, it's not a meteor, since it is travelling too slowly across the sky.



A satellite - 1 frame of 3

3) Meteors. If it appears in no more than 2 frames (since it's possible for it to occur at the end of one frame and the start of another) and it doesn't cross the whole frame, congratulations - you've probably captured a rock from space! An airplane or satellite is not expected to start in the middle of the sky, but a meteor can.


Orionid meteor - Oct 2017. Stack of two sequential images that the meteor was on, cropped
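The identification rules above can be sketched as a tiny classifier. This is purely illustrative - the function and its inputs (how many consecutive frames the streak spans, and whether it shows a strobe pattern) are hypothetical, and the detection script later in this post does not attempt classification:

```python
def classify_streak(frames_seen, has_strobes):
    """Rough classification of a streak using the rules described above.

    frames_seen: number of consecutive frames the streak appears in.
    has_strobes: True if the streak shows a dot-dash strobe pattern.
    """
    if has_strobes:
        return "airplane"   # strobing wingtip/tail lights
    if frames_seen <= 2:
        return "meteor"     # fast; spans at most one frame boundary
    return "satellite"      # steady straight line over 3+ frames

print(classify_streak(1, False))  # meteor
```

In practice you would still review the flagged frames by eye; the point is just that frame persistence and strobing carry most of the information.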

Finding frames with possible objects in them

If you are running your sky camera on a clear night, detecting moving objects can be as simple as identifying frames that differ significantly from the frame before them. Since the Earth is rotating, we expect the star field to change a little, so a "fuzz" factor is applied - a small amount of change is ignored. If that threshold is exceeded, though, the frame is set aside for a person to look at. This simple operation does a remarkably good job of sifting through thousands of frames and pulling out the ones where something is happening. 

Of course, if you have trees or clouds in your frame, it's likely that motion of those will trigger it too - a false positive. Clouds can be identified reasonably well by simply dropping frames that exceed a certain brightness threshold, but that increases computing time and you might miss a frame that has something interesting in it that also has a cloud. It's a tradeoff. At the moment, I only capture images on pretty clear nights, so I leave the brightness thresholding turned off to speed processing. If you want the function enabled, you can just uncomment the line in main() that calls it.
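The comparison itself doesn't have to go through ImageMagick. As a rough illustration of the underlying idea - count the pixels that changed by more than a fuzz threshold between consecutive frames - here is a minimal sketch using numpy (an assumption on my part; the script below shells out to ImageMagick's compare instead):

```python
import numpy as np

def changed_pixels(frame_a, frame_b, fuzz=38):
    """Count pixels that differ by more than the fuzz threshold.

    frame_a, frame_b: 2-D numpy arrays of 8-bit gray levels.
    fuzz: absolute gray-level difference to ignore (roughly equivalent
          to ImageMagick's -fuzz 15%, i.e. 0.15 * 255 = 38).
    """
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    return int(np.count_nonzero(diff > fuzz))

# Two mostly identical 4x4 frames with one bright "streak" pixel:
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1, 2] = 200
print(changed_pixels(a, b))  # 1
```

A frame would be flagged for review when this count exceeds a threshold, just as the script below flags frames where ImageMagick reports too many differing pixels.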

Running the program

The program currently runs on a Linux computer with Python and ImageMagick installed. Most distros have these already. An example command looks like this. The first argument is the directory containing the images you captured, and the second is your desired output directory. Frames that the program determines may be interesting are copied to the output folder.

./findMeteors.py 20-10-2017 20-10-2017-output/

You must assign the program file execute privileges on your Linux box - usually, this will do the trick:

chmod u+x findMeteors.py


Program listing and download


Listing follows... please let me know if you find it useful. 


#!/usr/bin/env python

import sys
import os
import subprocess
import shutil

def removeBrightImages(images, threshold):
 #The average gray level of an image may be found using the format string "%[mean]":
 #convert image -format "%[mean]" info:
 darkFrames = []
 for image in images:
  command = 'convert ' + image + ' -format "%[mean]" info:'
  try:
   val = subprocess.check_output(command, shell=True)
  except subprocess.CalledProcessError as e:
   val = e.output

  if float(val) < threshold:
   print "Adding " + image + ": brightness under threshold, queued for comparison"
   darkFrames.append(image)
  else:
   print "Rejecting " + image + ": brightness exceeds threshold"
 return darkFrames

def findChanges(images, fuzz, threshold):
 count = 0
 changedFrames = []
 for image in images:
  if count > 0:
   #Trying to do this the better way with arguments and no shell=True results in the conversion
   #of the output to an int failing below, and I have not figured out why.
   command = "compare -metric ae -fuzz " + str(fuzz) + "% " + lastImage + " " + image + " null: 2>&1"
   try:
    val = subprocess.check_output(command, shell=True)
   except subprocess.CalledProcessError as e:
    #compare returns 0 if the images are similar, 1 if they are dissimilar, and 2 on error.
    val = e.output
    if e.returncode == 2:
     print "Error in image comparison routine."
     sys.exit()

   if int(val) > threshold:
    print image + ": " + val + ", item found"
    changedFrames.append(image)
   else:
    print "Checking: " + image

  lastImage = image
  count = count + 1

 return changedFrames

def saveImages(images):
 for image in images:
  shutil.copy2(image, sys.argv[2]) 



def genImageList():
 imageList = []
 for name in os.listdir(sys.argv[1]):
  if name.endswith(".jpg"):
   imageList.append(os.path.join(sys.argv[1], name))
 imageList.sort()
 return imageList


def checkArgs():
 if len(sys.argv) != 3:
  print "Usage: ./findMeteors.py <input directory> <output directory>"
  sys.exit()

 if os.path.isdir(sys.argv[2]):
  print "Output directory exists already, refusing to overwrite it. Exiting."
  sys.exit()
 else:
  print "Creating output directory"
  os.makedirs(sys.argv[2])



def main():

 #if brightness thresholding is enabled, set threshold. Higher is brighter.
 brightnessThreshold = 2000

 #frame difference comparison sensitivity. A higher value reduces sensitivity. 0-100.
 fuzzFactor = 15

 #the number of pixels that must be different between frames in order to flag it as interesting.
 diffThreshold = 100

 checkArgs()
 images = genImageList()
 #if you want to enable brightness thresholding, uncomment the line below
 #images = removeBrightImages(images, float(brightnessThreshold))
 changedFrames = findChanges(images, fuzzFactor, diffThreshold)
 saveImages(changedFrames)

if __name__ == '__main__':
  main()

Thursday, October 12, 2017

Raspberry Pi Skycam w/ NoIR V2 Camera and Light Pollution Filter




NASA seems to assign every scientific instrument a contrived acronym. This camera detects rocks from space. Therefore, may I present WHAMMO - the Wadsworth Hillbilly Automated Meteor/Meteorology Observatory.

Update: I have a working detection script running on the Linux box storing the images - full article is here.

Introduction

I am interested in building a sky camera for capturing meteors, and also for checking sky conditions. I had a Raspberry Pi and camera. Initial tests were done with a normal V1 camera, which is limited to a 6 second exposure. I switched to a No-IR V2 camera, which allows 10 second exposures and doesn't have an infrared filter. The V2 camera produced much better results.

I added a simple cell phone medium-wide angle lens and found through experimentation that a light pollution filter normally used for observing improved the contrast considerably.

The camera is weather resistant and mounts a shared directory on a network computer, rather than writing to the Pi's flash card. This is to improve reliability, since the Pi's flash would wear fairly rapidly due to the high rate of writes during capture.

This post documents the development of this system and outlines next steps.

Example Video

The camera captures 10 second frames all night long. The frames can easily be stitched together with ffmpeg into a video. The command line I used was:

ffmpeg -thread_queue_size 512 -r 30 -f image2 -i sky_%04d.jpg -vcodec libx264 -preset slow -crf 22 -profile:v baseline -pix_fmt yuv420p test.mp4

The video is best viewed in full resolution rather than a small window - the stars are single pixels, in many cases. The MP4 is available for download here or you can follow the link below to Vimeo and full-screen it there.


Raspberry Pi Skycam w/ NOIR v2 Camera and Light Pollution Filter from Jason Bowling on Vimeo.


The camera was in my back yard in Wadsworth, a bright orange band on the dark sky finder map.




Still Frame + Star Chart



A full resolution copy of this image is here

The image is the result of stacking a set of 25 images in Deep Sky Stacker with the default settings. A single raw frame is available for comparison here.

You can then take that image and do a simple level adjustment to improve contrast - at that point there are far more stars in the image than I can see by eye alone. The Pleiades are easily resolved into multiple stars in the image, even though the cluster is only visible at my house with some effort.


Full resolution version of the stacked and level-adjusted image is here.
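A level adjustment like the one described above amounts to a linear stretch of the gray levels. Here is a minimal sketch with numpy - an illustration of the idea, not the exact Gimp operation used on the image:

```python
import numpy as np

def stretch_levels(img, black=0, white=255):
    """Linearly remap gray levels so `black` maps to 0 and `white` to 255.

    img: numpy array of 8-bit gray levels. Values outside the
    [black, white] range are clipped.
    """
    scaled = (img.astype(float) - black) / float(white - black)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# A dim frame whose brightest pixel is 80 gets stretched to full range:
frame = np.array([10, 20, 40, 80], dtype=np.uint8)
print(stretch_levels(frame, black=10, white=80))
```

Picking `black` just above the sky background and `white` near the brightest star is what pulls the faint stars out of the noise.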

V1 vs V2 camera

I tried the V1 camera first, since I had one, but wasn't very happy with the results. I found the results with the V2 NoIR camera to be much better in terms of sensitivity and noise. The V1 did work reasonably well with the saturation increased. Example video from the V1 camera is here.

Wide Lens

I used a cell phone lens kit similar to this one - the exact kit is no longer available on Amazon. I used the medium-wide lens - the widest resulted in significant distortion and I have too many trees in the back yard to make much use of a wider view anyway. I just tacked the lens to the 3d printed camera case with hot glue, since it's easily removable. I considered cyanoacrylate glue but didn't want to fog the lens.

Capture method

The command below captures the images in timelapse mode, at a resolution selected because it uses 2x2 binning to effectively increase pixel size. This improves low light sensitivity and signal to noise ratio.


#1640x1232 is 2x2 binning mode with v2 cam

mkdir /home/pi/storage/"$(date +"%d-%m-%Y")"

#note: this command is all one line
raspistill -o /home/pi/storage/"$(date +"%d-%m-%Y")"/sky_%04d.jpg -t 500000000 -tl 0 -ss 9900000 -ISO 800 -bm --nopreview -w 1640 -h 1232 &


Light pollution filter 

I found that the addition of a light pollution filter significantly improved contrast for images taken from my home. The filter works reasonably well for visual observation, but the improvement is much more noticeable with a photograph. A comparison is given below. This image is best viewed at full resolution. 



The mount was just a disk of foam board friction fit to the wide angle lens body. The filter is again secured to the mount with hot glue.

The filter does cause a bit of vignetting. I think it's worth it for my purposes.






Mounting A Filesystem On Another Computer

To avoid wear and tear on the flash, and since the Pi wouldn't be up to the image processing tasks I had in mind, I chose to use SSHFS to mount a directory on a Linux server in the house over WiFi.

The command below, all one line, has worked well for me. Without the included options, I occasionally had the mount point drop out between writes - it has been very reliable since the keep-alive options were added.

sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 jbowling@192.168.1.105:/home/jbowling/skycam storage

If you prefer, you can also mount a share on a Windows computer.

Black Out Those LEDs...

I turned off the board LEDs in software, and blacked out the LED on my WiFi adapter with primer to avoid leaking light up into the dome.

Mechanical Construction

I chose a 6x6x4 junction box from Home Depot (marked Item 10030 on the display case). It doesn't have an IP rating I could find, but has a nice gasket and I've found it held up well under the garden hose.

I chose this acrylic dome and this gasket material.

I drilled holes to mount the Pi on standoffs, and also a radial pattern of larger holes in order to dump heat from the box up into the dome, to reduce dew. 5W isn't a lot of heat, but it might as well get used. A slot is also cut for the camera cable.




I traced the dome with an ink pen and cut it out with scissors, with a final trim by a razor hobby knife.




A shingle nail worked well to punch holes to pass the screws through the gasket. This passed several tests with the garden hose. I am going to use cable glands to waterproof the power line when done.  This design has not been tested long term in heavy weather. 


Power

So far, the camera has just been powered from a 12V jump start battery's USB output. I considered several options for when the camera is mounted permanently.

I considered running an AC power cord into the box and using a standard USB power block to power it, but then I'd have 110V AC in my junction box, and that's not something I want. I decided the way to go was to run 12V over an extension cord with the ends removed - that way it's nice, heavy, double insulated wire being fed by an approved 12V power supply kept nice and dry in the garage. That also gives me spare capacity to install some resistors to reduce ice and dew on the dome, if I choose to. I can install an appropriate low-amperage fuse in the line too, in case some water does make its way in.

If you happen to have a Power over Ethernet (PoE) switch, there are converters that can power the Pi from PoE. In my case, that wasn't economical. 

Next Steps

1) The next interesting part will be software to sift through the many thousands of frames and find the meteors and airplanes. That will be a neat challenge.

2) Dew heater. I'll probably add a small network of power resistors under the dome to improve resistance to dew and to melt light ice and snow. The existing setup usually doesn't dew up, but I've had it happen once.

Saturday, September 9, 2017

Sunspots AR 2679, AR 2675, AR 2673 on 09-Sept-2017


Sunspots AR 2679, AR 2675, AR 2673 on 09-Sept-2017 w/ 127SLT and ASI290MC at prime focus.


Given the unusual sunspot activity over the last few days, I made a Scheiner mask to aid in focusing, since you can't use a Bahtinov mask without a star to focus on. The laptop was set up in the back of our SUV to shield it from glare and I used my DIY solar filter mount w/ Thousand Oaks film on the 127SLT. Best 10% of frames from 1 minute of video.

Tuesday, August 29, 2017

4k wallpaper - Partial Moon




Want some high resolution moon wallpaper? This image is free for noncommercial use.

This picture is a composite from video, taken with a 127SLT and ZWO ASI290MC at prime focus. The video was composited with Microsoft's Image Composite Editor (ICE), edited in Gimp to place it on a black background of the right proportions, sharpened with Registax Wavelets, and level adjusted in Gimp.

Download the full resolution image from my Dropbox. In the upper right corner, you'll find a button with two dots. Click it and a dropdown box will open - an option there will let you download the full resolution file.

If you want a similar shot with a fuller moon, I have one here. I personally prefer this one, since it shows features in 3D a bit better due to the more oblique lighting.

Lunar 100 Target 4: Apennine Mountain Range


Image Details: 127SLT w/ 1.5x Barlow and ASI290MC, stacked from one minute of video



Somewhere around 3.75 billion years ago, during the Late Heavy Bombardment, a large asteroid or protoplanet hit the northern hemisphere of the moon. The impact caused an enormous impact crater known as the Imbrium Basin, bordered by high steep walls of rock.

Later, lava partially filled in the basin and hardened into a smooth, dark surface known as Mare Imbrium (the Sea of Showers). The Apennine mountains are part of the remnant of the basin's high edges. Even though they are partially buried by lava, the highest peaks are 5 km / 3.1 mi high.

The mountain range is about 600 km / 370 mi long. It is easily visible with binoculars and is a pretty stunning sight when the terminator line is near it, bringing the 3 dimensional structure into view.

The Apollo 15 mission landed here - the position is marked on the photograph.










Lunar 100 Target 2: Earthshine


Photo Details: 127SLT with SLR at prime focus, stacked from video


Shortly after sunrise or before sunset, when the moon is just a bright sliver, you can sometimes see the dark portion illuminated with a soft, dim glow. To photograph it you have to overexpose the lit portion, resulting in the loss of most detail there.

This dim light is sunlight that has first been reflected from the lit day side of the Earth, bounced off the part of the near side of the moon that is not directly lit, and then traveled back down to your retina or camera. Cool, huh?

This diagram from Wikipedia demonstrates the concept very well. 






Saturday, July 22, 2017

Low cost DIY solar filter for small/medium telescopes


Photo Details: 127SLT with SLR at prime focus, stacked from 1 minute of video


In preparation for the coming eclipse, I decided I wanted to get a solar filter for solar observation and photography. What I quickly found was that the actual filter material is not expensive, but buying a filter with a mount designed for your specific telescope can be. I decided to build a mount. Here's one approach that has worked for me.

Warning

There are relatively few ways to seriously injure yourself in amateur astronomy, but solar observation and photography is absolutely one of them. All it takes is a glance through an unfiltered telescope to destroy your eye, rendering yourself blind. You remember how you can set ants on fire with a magnifying glass? A telescope is a very large magnifying glass. Read the warnings that come with the solar filter film and follow them. Cover your finder scope! Be careful. Really careful. The information presented here is what worked for me, but your safety is your responsibility alone. If you are not confident in your ability to build a solar filter that will be securely attached to your telescope, don't undertake a project like this. This filter is for occasional use in dry conditions - it will not hold up with exposure to moisture.




This picture shows how the filter mounts - note that you must cover the finder scope before use!

I checked into the various types of solar film, and decided I like the yellow cast that the Thousand Oaks Optical solar film gives. Amazon sells sheets of it in various sizes. I decided that the easiest way to mount it was to buy a sheet larger than my scope's aperture, and sandwich it between two sheets of foam board. I'd then stack some layers of foam board on the back with a cylinder cut out, so that it had a snug friction fit over the telescope's tube. I bought the 8x8" sheet for my 5"/127 mm scope, which cost about $20.

You don't want the filter falling off while you're observing. A gust of wind must not be able to remove it, so I made it as snug a fit as I reasonably could.

Here are the steps I took. I started by cutting two pieces of foam that were a bit larger than my 8x8" solar film. Those two pieces will support the film and serve as the two layers of the foam/film/foam sandwich.

I then cut four more pieces that were a little smaller than 8x8" to serve as the friction mount on the tube. I used the telescope cap as a guide - remember that you want the inside diameter of the cap, though, not the outside diameter.  I traced the outside diameter and then conservatively freehanded the inside diameter. I cut to the inside diameter, leaving a small amount of material. Remember, we want a snug fit - we can't have this thing falling off and letting the sun burn a hole in our retina or camera sensor. Safety first!



The two sandwich pieces should have a hole cut that is smaller than the tube diameter, because you want the filter mount to slide over the tube and then stop. You want it to hit foam board before it hits film.

I stacked four of the smaller friction mount pieces and glued them together with hot glue, and then carefully sanded the inner hole until it fit very snugly over the optical tube. I then hot glued the stack to one of the sandwich mount pieces.




I then sandwiched the film in between the two front sandwich mount pieces, and taped them together securely with electrical tape.  Here you see the finished filter face down, from the back/telescope side. Remember to observe the orientation of the film as specified in the film's instructions!





Your finder scope must be covered, or have a filter of its own. You can get the alignment close by watching the shadow cast by the scope. I usually remove the filter and put the protective cap on the telescope (to protect the optics and against mistakes) and then move the scope until the shadow is a round circle. That gets you pretty close. Then I remove the cap and quickly install the filter.

Go slow, and think through every move before you make it - your natural temptation is to look up at your target. If there are kids or adults who are unfamiliar with telescopes and solar observations with you, be cautious and communicate the hazards to them.

I really enjoy using the filter for both observing and photography. Here's the video that the first image was stacked from, just to give you an idea of what to expect. Be careful, and have fun!













Saturday, July 15, 2017

Lunar 100 Target 15 - The Straight Wall


Photo Details: Celestron 127SLT, ZWO ASI290NC, 1.5x Barlow, stacked from 1 minute video

View Full Size Image

Midway across the moon's southern hemisphere, just north of Tycho crater, is an odd sight. On a lunar surface pocked with round craters, a seemingly straight line cuts across one of the dark, smooth cooled lava plains. This is Rupes Recta, or the Straight Wall. It's the best example of a linear fault line to be seen on the moon with a small telescope.

A fault is a crack in an otherwise continuous section of rock. In this case, it is thought that the crack resulted from tension in the crust. The rock would have deformed at first, and then broken. One side drops, exposing a rock face called a scarp. The "wall" looks nearly vertical, but is known to have a slope ranging from 7-20 degrees. It is about 110 km/ 68 miles long and 2.5 km/1.5 miles wide. Estimates of its height range from 240m/800 ft to 500m/1640 ft.
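As a quick cross-check of the quoted numbers (my own arithmetic, not from the original sources): a scarp roughly 300 m high spread over the quoted 2.5 km width implies a slope near the low end of the 7-20 degree range:

```python
import math

# Hypothetical cross-check: rise over run for the Straight Wall's scarp.
height_m = 300.0   # mid-range of the 240-500 m height estimates
width_m = 2500.0   # quoted width of the scarp
slope_deg = math.degrees(math.atan(height_m / width_m))
print(round(slope_deg, 1))  # 6.8 - close to the 7 degree low end
```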

The Straight Wall was first recorded in a drawing by Christiaan Huygens in 1686.


Friday, June 30, 2017

Lunar 100 Target 3 - Mare/highland dichotomy


Photo Details: Celestron 127SLT, ZWO ASI290MC, composite from video
03-Apr-2017, cropped from full disk.




A very large high resolution version of this image is available for download.


Note: As mentioned in the first post, these posts will not cover the Lunar 100 features in order, because not every feature is visible or well lit at any given time. Item 2, earthshine, will be posted when viewing conditions allow.

The third target on the Lunar 100 list is the mare/highland dichotomy. When you look at the surface of the moon, you'll notice two distinctive surface types. The smoother dark areas, the maria, are so named because early astronomers mistook them for actual seas. The cropped photo above highlights Mare Crisium, which is approximately 550 km wide. The maria are relatively smooth plains of basaltic rock that formed from cooling lava produced by volcanic eruptions between 3 and 4 billion years ago. It is believed that deep basins formed by impacts were filled with magma, which then hardened to form the maria. Higher concentrations of titanium and iron make this rock significantly darker than the rest of the lunar surface. The maria are the youngest lunar surfaces, and show significantly fewer impact craters than the lunar highlands.


The darker, smoother mare shows a lower density of impact craters than the surrounding lighter highlands


The lighter areas of the lunar surface are significantly older. They are believed to have formed between 4 and 4.5 billion years ago, when the surface of the moon was still molten. They are composed primarily of anorthosite, an igneous rock that forms when molten rock cools more slowly than the basalts of the maria did. This indicates that the highlands solidified under different conditions than the maria. The highlands formed very early in the history of the solar system, which is itself estimated to be 4.6 billion years old.

Notably, rocks from the lunar highlands are older than the oldest Earth rocks found thus far. On Earth, the igneous rocks that formed early in the planet's life have been largely recycled by tectonic activity or buried by sedimentary rock formation. The moon has cooled to the point that it has no significant tectonic activity, and its lack of water and atmosphere makes sedimentary rock formation impossible. The oldest rocks on the moon are still exposed at the surface.



Lunar 100 Target 1 - The Moon


Photo Details: Celestron 127SLT, ZWO ASI290MC, composite from video,
07-May-2017. A larger version configured as 4k wallpaper is here


Welcome! This series of posts will document my efforts to explore, photograph and learn about the Lunar 100. This is a list of significant features on the moon that together tell the story of the moon's geology and history. It was published by Charles A. Wood in a Sky and Telescope article as an effort to give lunar observers a list similar to the Messier list of deep-sky objects.

The list is sorted by difficulty, with the easiest features at the top. However, it is not possible to observe every object at any given time, and certainly not optimal to photograph them. The best time to photograph a lunar feature is when it is near the terminator - the line separating dark and light - since the low, raking light there throws features into relief. Much like a portrait is unflattering when shot with a camera's pop-up flash, and far better with off-axis studio light, lunar features are much more prominent and aesthetically pleasing when shot near the terminator. As a result, I will likely present this list out of order, as I shoot each feature successfully. My goal is to present a good photograph of each item and a few paragraphs describing its significance. The original list gives only a few words about each, so I will learn a lot doing the research. I hope you find it interesting.

The first entry in the list is simply the moon itself. The Lunar 100 list describes its significance as simply "large satellite".

The distance from the moon to Earth varies through its orbit, but averages 238,000 miles/383,000 km. By an odd coincidence, the disk of the full moon appears the same size in the sky as the disk of the sun - this is why a total solar eclipse can blot out the sun's bright disk while leaving the surrounding corona visible. The moon has effectively no atmosphere, and a mass about 1/80th that of Earth.
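That coincidence is easy to check with a little arithmetic: the sun is roughly 400 times wider than the moon, but also roughly 400 times farther away. Here's a quick sketch using approximate average figures:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular size of a sphere viewed from a given distance."""
    return math.degrees(2 * math.atan((diameter_km / 2) / distance_km))

# Approximate mean diameters and distances
moon = angular_diameter_deg(3_474, 383_000)
sun = angular_diameter_deg(1_391_000, 149_600_000)

print(f"moon: {moon:.2f} deg, sun: {sun:.2f} deg")
```

Both work out to about 0.52-0.53 degrees - about half a degree - which is why a total eclipse covers the sun's disk so neatly.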

The moon is tidally locked with the Earth. This means it takes the same time to complete one revolution about its axis of rotation (one lunar day) as it does to complete one revolution around the Earth. As a result, the same side of the moon always faces the Earth (with minor variance due to libration). An excellent animation showing how this works is here - I had trouble grasping it conceptually, but it makes sense instantly when you see it in motion. The far side of the moon is thus never visible from Earth - our only imagery of it comes from orbiting spacecraft.

The moon's near side is pockmarked with craters from impacts with rocks from space. The far side is even more badly marked. It's enough to make one appreciate Earth's atmosphere even more than before.

There are two related theories as to how the moon formed. The composition of lunar material is very similar to that found on Earth. The most generally accepted theory is that the moon formed from material ejected by a glancing impact with an object about the size of Mars some four and a half billion years ago. The ejected material orbited the Earth and then slowly coalesced into the moon.

However, simulations of this type of event show that such a moon and orbit only rarely result from a single large impact. An alternative theory holds that numerous smaller impacts ejected material into orbit, which formed rings that condensed into smaller moons, and finally into the moon. An excellent article on these two theories was published by Sky and Telescope magazine.






A Beginner's Guide to Solar System Photography with the Celestron 127 SLT (and other Alt/Az Scopes) Part 4

Update: Based on feedback, I have broken the original large post into four smaller posts to make it easier to read.









Capture Software Settings and Exposure Determination

I'd recommend you select RAW16 as the format, which will generate SER files as the output. These preserve more color information than an 8-bit per channel AVI. The camera is capable of 12 bits per channel, and SER retains that. It is read by most astronomy image processing software directly.

If your software supports it, turn on WinJupos file naming conventions. This will be useful when you progress to the point of wanting to do software de-rotation.

While you are hunting for your target you'll want the widest view you can get, so set your resolution to max to start.

Gain is similar to the ISO setting on a film or digital camera. It sets the light sensitivity of the camera. Set your gain in the middle for planets and lunar work to start. My camera has a gain range of 0-600, and I have found that shooting in the 250-350 range works best: you get more frames, and are more likely to catch sharp views as they flicker past. The tradeoff is noisier individual images, but stacking compensates for that, and having more sharp frames to work with is good. Past about 350, the resulting images are too noisy for my taste.

Set your exposure time to about 300 ms while hunting for your target. This is comparable to shutter speed on a traditional camera. This ensures that even one of Jupiter's moons will be hard to miss as you scan around, and the glow from the planet will be visible when you get close, before it is visible in the frame.

At this exposure, moons are clearly visible and all the surface detail is lost, but you sure can see it.




Launch the histogram function and adjust the exposure until the histogram tops out at 65%-70%. This ensures that the frames will not be overexposed, which would wash out all surface detail.
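Since the sensor response is essentially linear, you can estimate how much to change the exposure to land the peak in that window. A tiny sketch (the 65%-70% target is from the text above; the helper function and its linearity assumption are mine):

```python
def suggest_exposure_ms(current_ms, histogram_peak_pct, target_pct=67.5):
    """Scale exposure linearly so the histogram peak lands near the target."""
    return current_ms * (target_pct / histogram_peak_pct)

# Peak hitting 90% (nearly overexposed) with a 20 ms exposure:
print(suggest_exposure_ms(20, 90))  # -> 15.0
```

In practice you'll nudge the exposure slider and watch the live histogram, but this is the mental math behind it.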




Center your target in the frame, and reduce your resolution. The only time I use full resolution is for lunar shots. Planets are fairly small in your view, and capturing the center 800x600 pixels is usually plenty. Remember, the data from the camera is uncompressed, and uncompressed video is HUGE. You'll use a lot less disk space, enabling you to capture longer.

Additionally, you can capture more frames per second if the target is bright enough, and more frames in a short period of time is the name of the game. The USB connection can pass a lot of data, but if your exposure time is short, you can max it out with larger frames.
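To see why the capture area matters so much, work out the raw data rate: with RAW16, each pixel costs 2 bytes per frame. A rough sketch (the resolutions and frame rates here are illustrative examples, not camera specs):

```python
def data_rate_mb_per_s(width, height, fps, bytes_per_pixel=2):
    """Uncompressed data rate in MB/s; RAW16 is 2 bytes per pixel."""
    return width * height * bytes_per_pixel * fps / 1e6

full = data_rate_mb_per_s(1936, 1096, 30)   # full sensor at 30 fps
roi = data_rate_mb_per_s(800, 600, 100)     # 800x600 capture area at 100 fps

print(f"full frame: {full:.0f} MB/s, cropped: {roi:.0f} MB/s")
```

Even at more than three times the frame rate, the cropped capture moves less data than the full frame, and it fills your disk far more slowly.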





Processing your video into images - Stacking

There are two very popular stacking programs in use by hobbyists at the time of writing. The first is Autostakkert, and the other is Registax. I personally consider the stacking in Autostakkert to be more robust and produce better results for me. However, Registax is superb for the next step in the processing chain, so I use both. I stack the video frames with Autostakkert and then sharpen the resulting image in Registax.

There is a very good Autostakkert tutorial in the documentation.

It takes some experimentation to figure out what stacking percentage works best. On an average night, I find stacking the best 15-25% of frames gives me the best images. If you stack more, you start to include frames which are not optimal. If you stack less, your noise level increases. It's a balance.

On nights of superb clarity, you might be able to stack as many as the best 40-50% of frames.

Processing your video into images -  Wavelet Sharpening

As you see above, stacking images from video dramatically improves your signal to noise ratio, but the image is probably not as sharp as it could be. This is due to atmosphere and other variables. You can dramatically improve it with cautious wavelet sharpening in Registax. I have had the best results with the noise-trapping technique shown in this set of tutorials.

It is very easy to over-do the use of wavelet sharpening, which results in an image that looks artificial. It's a highly subjective process - it will take some time to determine what you like. I personally like to err on the side of under-sharpening.

Lunar Photography

The process for lunar photography is similar. I still focus on a nearby star with the Bahtinov mask, and then swing the scope over to the moon. Since the moon is so bright, your shutter speeds tend to be pretty fast, and shooting at full resolution uses a ton of disk space, so I usually only shoot a minute's worth of video on a particular target. I sometimes bracket the same shots doing one minute clips at slightly different exposures to see what I like best.

You will get the best results by far if you wait until your target is near the line between light and dark. Shooting a full moon is actually pretty boring, because the light makes everything look flat. It's similar to how a person photographed with on-camera flash will look harsh and flat. Side light is much nicer and brings out far more detail.

Here's an image from stacked video of the Apennine mountain range, near Copernicus crater.



It's also a ton of fun to take a video and pan across the surface of the whole moon, pausing for a few seconds over each area. You can then use Microsoft's Image Composite Editor to stitch together a very high resolution image of the moon. It takes a fair amount of processing time, but the results are really good, and the stitched images can be enormous.

If lunar photography interests you, you might want to check out my blog dedicated to photographing the Lunar 100.

A link to a higher resolution version of this image, scaled down  for use as 4k wallpaper, is here.



Useful software for planning

A key part of astrophotography is planning. The following resources and programs are very helpful.

Stellarium is simply outstanding. There are versions for the PC and mobile devices. I use the PC version to see what will be in the sky and where it will be at a given date and time. You can even see where the moons of Jupiter will be and whether the Great Red Spot will be facing you. If you photograph a moon of Jupiter, you can go back in time later in Stellarium and figure out which moon it was. You can even control a scope with it, but that's a topic for a later article.

Virtual Moon Atlas is terrific for planning your lunar photography session. It shows where the line between light and dark will be and helps you identify what you are seeing.

Weather apps and sites are very helpful for figuring out when the skies will be suitable for observing and photography. The best I've found are these:

Clear Dark Sky
The Clear Outside app for Android

Of special note is windy.com, which can visualize the jetstream. If you set the wind altitude to 9000 m, you can see the jetstream's path, which varies surprisingly from day to day. Atmospheric seeing is best when you aren't under it, so if you see a day that is clear and the jetstream has moved off of you, get outside! Their cloud cover map is also great.




Sample workflow

Image capture:

1) Plan the session using Stellarium and the weather applications. Look for a night with good or better-than-average seeing where your target is high in the sky. The higher in the sky it is, the less atmosphere sits between your camera and it. Ensure the laptop and telescope batteries are charged, and make sure you have more than 50 GB of free disk space.

2) Carefully align your telescope as precisely as you can, according to the manufacturer's instructions.  I like to use an eyepiece that results in a fairly high magnification for this to ensure the alignment star is as centered as I can make it. An EP with a reticle would be handy too!

3) Install your camera and Barlow, as determined in the section on optimal magnification.

4) Install Bahtinov mask

5) Slew to a bright star near your target.

6) Use the focusing aid in your capture software to get the error as close to zero as you can. After this don't touch the focuser.

7) Remove Bahtinov mask. Really. It's easy to forget. :-)

8) Set camera gain to the middle and exposure time to 200-300 ms.

9) Set camera resolution/capture area to maximum.

10) Slew to the target and center it in the capture window. Using reduced motor speeds helps here (motor speed 3-5 on SLT scopes).

11) Start the histogram and adjust exposure so that the peak is between 65% and 70% to avoid overexposing.

12) Capture video. Limit videos to times appropriate for the rotation of your target. Optionally, capture a series of 4-6 two-minute videos and combine them later using PIPP. This is handy for picking the periods of best seeing, and the segments can also be used in derotation software later.

13) Sleep. Try to resist the urge to look at what you just captured, other than to back it up if desired.

Processing

1) Optionally, use PIPP to crop and center the planet. You can also use it to join multiple 2 minute segments into 1.

2) Stack the video.

3) Use Registax for wavelet sharpening

4) Use Photoshop/GIMP/etc for final level/contrast/saturation adjustments as desired. You can also correct orientation and scale the images. I had best results using the Lanczos algorithm.

Thank you! I hope you have found this series useful. If you have, I'd appreciate you sharing the link.

- Jason











A Beginner's Guide to Solar System Photography with the Celestron 127 SLT (and other Alt/Az Scopes) Part 3

Update: Based on feedback, I have broken the original large post into four smaller posts to make it easier to read.









Focus, Grasshopper....

Believe it or not, the trickiest part of the whole process is getting a sharp focus. You'd think it would be easy, watching the laptop screen, but because the atmosphere is moving, and because the scope mount shakes a bit when you touch it, it's actually kind of hard.

Trust me on this. As early in the process as you can, make or buy a Bahtinov mask. They are a wonderful focusing assist tool. You will save yourself a great deal of frustration. There is nothing quite like the feeling of rolling out of bed at 3:00 AM to shoot Saturn, and later processing the images and determining that you were a touch out of focus. My eyes don't work right at 3:00 AM.

Making one is easy, thanks to the Bahtinov mask generator at AstroJargon. Simply plug in your scope's numbers and print the pattern in black. Then have the paper laminated, and cut out the pattern with an X-Acto blade. I left 4 tabs evenly spaced around the circle that I folded over and added velcro tabs to secure it to the telescope. I also used a Sharpie to black out the back side to minimize reflections.



You can also 3d print one, if you have access to a fairly large printer. The laminated paper one is holding up just fine.

To use it, attach it over the front of the scope. Orientation doesn't matter. If you have a dew cap, you can attach it to the end of the dew cap with no concerns - the distance to the objective is not critical.



Point the telescope at a bright star as close to your target as is reasonable. You need a point light source, not a disk, so pointing it at the moon or a planet is not recommended. Adjust your gain/exposure until you can see three lines making up the diffraction pattern. Two will cross in an X, and the third will be off center when it crosses the other two. As you focus the telescope, that third line will move. When it's centered at the intersection of the other two lines, you're in focus. A gentle touch on the focuser helps, and you'll need to let the scope settle each time you touch it. One of these days I'm going to build an electric focuser, but that's a project for another article.

In the picture below, the middle line, closest to vertical, is the focus indicator. It's not centered between the other two lines, indicating the system is not in focus.



The capture software I'm using has a Bahtinov assist feature, which quantifies how many pixels off center you are. It doesn't always reliably identify all three lines, so it can take a little time for it to settle down, but if that number is oscillating around 0 or 1, you're in good focus. Here's what the sequence looks like.


Way out of focus.  Turn the focuser in the direction that makes this smaller.



Getting better, but still off by 4 pixels



Focus achieved.

Once you are properly focused, remove the mask and slew the scope over to your actual target. Remember, the mask won't work pointed at a disk - it needs a bright point source. A bright star near your target is best. Slewing the scope can impact your focus, so minimize the movement.

"Lucky Imaging" - making cleaner still images from video

It may seem counter-intuitive at first, but unlike terrestrial photography, you'll get your best results not by shooting a single frame, but by shooting hundreds to thousands of frames. You then use stacking software like AutoStakkert to automatically select the best frames, align them, and combine them into a single image.

The advantage of this approach is that you can overcome some of the variability in atmospheric seeing that is causing the view to swirl and ripple. There are split seconds during which the image is clearer than average, and those frames get selected and combined. By combining many of these images, you can reduce noise, and dramatically improve detail. Essentially, you are improving the signal to noise ratio by integrating data over time into a single image - the details and noise don't happen in the same parts of the image each time, so you get better detail over the whole image as you add frames.
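The select-and-average idea can be sketched in a few lines. This toy model stands in random numbers for real frames, and a simple seeing-quality score for AutoStakkert's sharpness metric (both are illustrative assumptions, not how the real software works internally):

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # the "real" detail we want to recover

# Simulate 1000 frames: each has a seeing-dependent quality score,
# and worse seeing means more noise in the frame's value.
frames = []
for _ in range(1000):
    quality = random.random()               # 0 = terrible seeing, 1 = perfect
    noise = random.gauss(0, 20 * (1 - quality))
    frames.append((quality, TRUE_VALUE + noise))

# Keep the best 20% of frames by quality, then average (stack) them.
frames.sort(key=lambda f: f[0], reverse=True)
keep = frames[: len(frames) // 5]
stacked = sum(value for _, value in keep) / len(keep)

print(f"stacked estimate: {stacked:.2f} (true value {TRUE_VALUE})")
```

The stacked estimate lands much closer to the true value than any single frame would, for the same reason stacking video frames beats a single exposure: the noise averages out while the signal reinforces.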

When you are shooting, use a gain in the middle of your camera's range. If you decrease the gain, you'll get cleaner individual frames, because noise increases with gain. However, your exposure time has to increase, and this means you can't capture as many frames. If the camera exposes for 1/30 of a second for each frame, you get 30 frames per second, maximum. Modern astro cameras are much faster than that - connected to a USB3 port, you won't be bandwidth limited until 100 fps or more if you aren't capturing the camera's full view. More frames increase the odds of catching those split seconds of clarity as the atmosphere shifts and ripples.

USB3 and a solid state disk (SSD) are preferred, but I'm currently using a laptop that is limited to USB2 with acceptable results. It does limit my maximum frame rate when shooting bright targets. You can reduce the impact by only capturing a small window of the camera's view - the planet will generally easily fit in the middle 800x600 or 640x480 pixels, so you really only need to capture that.

The resulting files are BIG. It is really easy to shoot 100 GB in a session. Fifty GB is about the minimum. The resulting video files are not compressed.

Start doing video captures of 2-6 minutes. I set the motor speed of my control handset to 3-4 after the planet is centered, so that I can make very small adjustments to keep it there. The tracking is good if you align the scope carefully, but not perfect.

During capture, don't move the scope more than you have to - remember that the stacking software can take care of even significant drifting from center as it aligns the image. Moving the scope more than needed will reduce the available pool of good frames the stacking software has available to it.



Raw video frame 

 

Best 20% Stack of Video Frames


Wavelet Sharpened


Effects of Planet Rotation and duration of video capture

After a couple of tests where you capture a few minutes of video, it will occur to you that you can get more frames simply by recording longer videos. I tried 10 minutes at a time on Jupiter, and couldn't figure out why the results showed very little detail compared to the 4 minute captures.

It's pretty simple - everything's moving. While you are taking video of it, Jupiter is rotating, and it rotates FAST. It completes a full rotation in just under 10 hours! If you take a couple of sequential 5 minute captures and produce stacked images from each, you'll be stunned at how much it has moved in that time - you can actually see it rotating if you cycle through the resulting pictures.
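You can put numbers on this. A feature on Jupiter's equator moves fast enough that, over a few minutes, its apparent drift exceeds what a small scope can resolve. A back-of-the-envelope sketch (the opposition distance is a rough figure):

```python
import math

ROTATION_PERIOD_H = 9.93    # Jupiter's rotation period, hours
EQ_RADIUS_KM = 71_492       # Jupiter's equatorial radius
DISTANCE_KM = 630e6         # rough Earth-Jupiter distance near opposition

# Speed of a feature on the equator due to rotation
speed_km_s = 2 * math.pi * EQ_RADIUS_KM / (ROTATION_PERIOD_H * 3600)

# How far it drifts during a 5 minute capture, in km and in arcseconds
drift_km = speed_km_s * 5 * 60
drift_arcsec = math.degrees(drift_km / DISTANCE_KM) * 3600

print(f"{speed_km_s:.1f} km/s -> {drift_arcsec:.2f} arcsec in 5 minutes")
```

That's roughly 1.2 arcseconds of drift in five minutes - more than the ~0.9 arcsecond resolution limit of a 127 mm scope, which is why longer uncompensated captures smear out detail.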

Limit your exposures of Jupiter to 4 to 6 minutes, and your exposures of Saturn to perhaps 7 or 8. Repeat this several times if you can - the seeing varies substantially from minute to minute, and one of these will usually turn out better than the others in the same session. The time required to shoot another 6 minute sequence is small compared to the time required to set up and focus.

There is a way to use software to de-rotate the planet in longer exposures, but that is beyond the scope of this article. I will link to it when completed. If you are interested, check out the software package Winjupos and the tutorials online.