Saturday, April 14, 2012

More Radars and Optical Sensors


In my last post, I talked about one class of space surveillance sensors – radars.  Radars are active sensors and come with one huge advantage and a similarly huge disadvantage.  The advantage is that radars are not limited by the weather at their location.  They can be affected by solar weather, but that is less likely.  So you can use a radar to track any resident space object – RSO – at any time of your choosing.  This is both operationally and tactically important.  The disadvantage is equally huge:  radars can only see what they can illuminate.  Since the operating principle is transmitting radio waves and receiving their reflections, a radar can only see as far as it can transmit.  Most of the radars in the US’s Space Surveillance Network (SSN), especially the dual-mission missile warning and space surveillance sites, can only see a few hundred miles into space.   There are a few in the SSN that can do much better than that, and one or two coming on line in the next five years will be able to reach several thousand miles out.  But there is a huge price in power:  the inverse square law demands that the transmitted energy drop off as one over the square of the distance from the emitter.  Even with tight beam control – which most radars do have – this law ends up dictating how much power is needed to see objects of a given size (as measured by reflectivity) at a given distance.  You can see the same effect with a flashlight.  Shine it directly in your eyes and you’re blinded.  Do it again at 100 paces and it’s actually dim. 
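To put a number on that, here is a minimal sketch (Python, with made-up figures) of the one-way inverse square fall-off for an idealized isotropic emitter.  A real radar’s antenna gain changes the constant but not the scaling – and since the echo has to make the same trip back, the returned signal actually falls off as the fourth power of range.

    import math

    def one_way_flux(p_transmit_w, range_m):
        """Power density (W/m^2) at a given range from an isotropic emitter.

        Doubling the range cuts the flux to a quarter -- the inverse
        square law the flashlight analogy describes.
        """
        return p_transmit_w / (4.0 * math.pi * range_m ** 2)

    # Hypothetical numbers, purely to show the scaling.
    p = 1.0e6  # a 1 MW transmitter
    for km in (500, 1000, 5000):
        print(f"{km:>5} km: {one_way_flux(p, km * 1000.0):.3e} W/m^2")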

There’s another way to detect and track objects in space, one that has been used for centuries – telescopes.  And if you can use a telescope to look back nearly 13.8 billion years toward the afterglow of the Big Bang, then you can surely look for satellites barely 22,300 miles above our heads.  And that’s exactly what we do.  For RSOs that are too far from the Earth’s surface to be illuminated by a radar, we look for them, literally.  Over the years, the sophistication of space surveillance telescopes – optical sensors in Air Force parlance – has grown to the point where we can build one telescope that will track most of the RSOs that are invisible to radar in one go – a view so comprehensive that it will overwhelm the Air Force’s current space surveillance processing capability (more on that in another post). 

Optical tracking has its own challenges.  An obvious one is that weather, the Moon, and the Sun can ruin a good night’s tracking.  Weather:  can’t see anything through the clouds.  Moon:  lunar glare can blot out the faint tracks of satellites.  Sun:  the same, but to a much greater degree – and solar glare can damage delicate optics and the associated CCDs.  But with all that accounted for, optical tracking is the best way we have to maintain awareness of the geo belt (geosynchronous orbit, roughly 22,240 mi/35,790 km altitude).
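As an illustration of how scheduling software might account for those constraints, here is a hedged sketch – the exclusion angles are placeholders I made up, not any real site’s limits – that rejects targets whose line of sight passes too close to the Sun or Moon.

    import numpy as np

    def separation_deg(u, v):
        """Angle in degrees between two unit line-of-sight vectors."""
        return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

    def is_observable(target, sun, moon, sun_excl_deg=30.0, moon_excl_deg=10.0):
        """Crude check that a target isn't washed out by solar or lunar glare.

        The exclusion angles are illustrative placeholders only.
        """
        return (separation_deg(target, sun) > sun_excl_deg and
                separation_deg(target, moon) > moon_excl_deg)

    # Hypothetical unit vectors for a quick check.
    tgt = np.array([0.0, 1.0, 0.0])
    sun = np.array([1.0, 0.0, 0.0])
    moon = np.array([0.0, 0.0, 1.0])
    print(is_observable(tgt, sun, moon))  # True: 90 degrees from both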

A really big challenge for optical trackers is how to determine an RSO’s position.  Angles are (relatively) easy – you are collecting a streak across the field of view so you can compute the observational angle at the start and finish of the streak.  Altitude is a lot trickier.  Basically it starts with geometry and immediately gets a lot more interesting.  In effect, you have this situation:
[Figure: angles-only observation geometry – the observer and the RSO form a triangle with known angles 1, 2, and 3 and known arc C, but unknown ranges A and B.]
You know angles 1, 2, and 3.  You know arc length C (more or less – it’s an arc, not a straight line, so getting its actual length requires some computation and some guesswork).  You have no idea how long sides A and B are – the ranges to the RSO.  (And, by the way, they might not be the same; the satellite’s orbit is probably eccentric, so your view is a little skewed.)  Observations like these are known as “angles-only,” and they are difficult to work with. 
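To make that concrete, here is a small sketch of the standard first step of any angles-only method: turn each observed angle pair (right ascension and declination here) into a line-of-sight unit vector.  The observation values are invented for illustration.

    import numpy as np

    def los_unit_vector(ra_deg, dec_deg):
        """Line-of-sight unit vector from right ascension and declination.

        Each telescope observation gives only a direction like this one --
        the range along it (sides A and B above) is the unknown.
        """
        ra, dec = np.radians(ra_deg), np.radians(dec_deg)
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

    # The RSO lies somewhere along  r = site + rho * L,  with rho unknown.
    L = los_unit_vector(44.0, -6.0)  # made-up observation angles
    print(L)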

These problems aren’t unique to space surveillance – astronomers have had to deal with them for centuries.  So there are a number of ways to calculate the information you need – altitude, in this case.  Classical answers include Laplace’s method and Gauss’s method, the latter famously developed to recover the asteroid Ceres.  Selecting among them can be based on the information you have, the computers you are using, the quality of the observations from the telescope . . . it gets complicated. 
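For a taste of what “the math” looks like, here is a sketch of the step at the heart of Gauss’s classical angles-only method.  It assumes the hard part is already done – reducing the three line-of-sight vectors and observation times to scalar coefficients a, b, and c – and just solves the resulting eighth-degree polynomial for the range at the middle observation.

    import numpy as np

    def gauss_middle_range(a, b, c):
        """Solve  r**8 + a*r**6 + b*r**3 + c = 0  for the middle-observation
        range in Gauss's angles-only method.

        a, b, c are assumed to come from the observation geometry; only
        positive real roots are physically meaningful.
        """
        coeffs = [1.0, 0.0, a, 0.0, 0.0, b, 0.0, 0.0, c]
        roots = np.roots(coeffs)
        return [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0.0]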

There are also a number of ways to capture the data.  Centuries ago, astronomers wrote down or drew what they saw.  They compiled tables of measurements that were then analyzed by hand.  This, by the way, is how Kepler gained his famous insight about elliptical orbits – he pored over tables of data collected by other astronomers, most famously Tycho Brahe, and found that the orbits weren’t perfect circles after all – they were best described as ellipses.  Details can be found in any orbital mechanics text.

Much later, data was collected on photographic plates and then on film.  The resulting pictures had to be analyzed manually and caused many cases of permanent eye-strain.  In the early days of the US space program, Baker-Nunn cameras captured satellite streaks this way, on long strips of film.  The next big step was electronic collection of the data: the film gave way to sensitive camera tubes, and the next – and current – step was to replace the tubes with charge-coupled devices – CCDs, the same data collection device in your digital camera or cell phone.  Once this step was taken, the data could be transferred directly to the computer doing the math.  The last few decades have seen improvements in both the density of the CCDs and the kinds of algorithms used to extract orbital information from the raw data.
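One standard piece of that pipeline is the plate solution: fit a simple linear model against catalog stars in the frame, then use it to turn the pixel coordinates of a streak’s endpoints into angles.  A minimal sketch, assuming star centroids and catalog tangent-plane positions are already in hand:

    import numpy as np

    def fit_plate_constants(pixels, angles):
        """Fit a six-constant linear plate model mapping CCD pixels to angles.

        pixels: (N, 2) star centroids (x, y); angles: (N, 2) catalog
        positions of the same stars.  Model: xi = a*x + b*y + c,
        eta = d*x + e*y + f.
        """
        x, y = pixels[:, 0], pixels[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, angles[:, 0], rcond=None)
        (d, e, f), *_ = np.linalg.lstsq(A, angles[:, 1], rcond=None)
        return (a, b, c), (d, e, f)

    def pixel_to_angle(px, py, xi_c, eta_c):
        a, b, c = xi_c
        d, e, f = eta_c
        return a * px + b * py + c, d * px + e * py + f

    # Synthetic check: stars generated from a known linear plate model.
    px = np.array([[10.0, 20.0], [200.0, 40.0], [120.0, 300.0], [300.0, 310.0]])
    truth = np.column_stack([0.01 * px[:, 0] + 0.001 * px[:, 1] + 0.5,
                             -0.001 * px[:, 0] + 0.01 * px[:, 1] - 0.2])
    xi_c, eta_c = fit_plate_constants(px, truth)
    print(pixel_to_angle(150.0, 150.0, xi_c, eta_c))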

The current generation of space surveillance telescopes is called GEODSS – Ground-based Electro-Optical Deep Space Surveillance.   The GEODSS sites have gone through several upgrades but are now being replaced by the Space Surveillance Telescope (SST) – a picture of which appears at the top of this post.

So now we’ve covered the major space surveillance sensors.  Next post, we’ll talk about what to do with all those radar and optical observations.  We’ll explore the space surveillance processing center – the Joint Space Operations Center (JSpOC).

1 comment:

  1. Anyone who is conversant in optical tracking has probably noticed that I told only half the story. The tracking example I gave above is called sidereal track – the stars are held constant (by slewing the telescope to track them – hence sidereal track) so that the desired satellite is a streak. Its sibling is called rate track, where – you guessed it – the telescope is slewed at the satellite's rate so it appears as a dot while the stars are the streaks. But computing the RSO's position is still done the same general way. And that's all I know about astrodynamics!
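A quick back-of-envelope illustrates the difference. Under sidereal track, a geostationary satellite drifts through the fixed star field at roughly the sidereal rate, so its streak length is just that rate times the (assumed) exposure time:

    # Back-of-envelope streak length under sidereal track (assumed numbers).
    SIDEREAL_RATE_ARCSEC_PER_S = 360.0 * 3600.0 / 86164.1  # ~15.04 "/s

    exposure_s = 2.0  # hypothetical exposure time
    streak_arcsec = SIDEREAL_RATE_ARCSEC_PER_S * exposure_s
    print(f"A geostationary RSO streaks ~{streak_arcsec:.1f} arcsec "
          f"in a {exposure_s:.0f} s sidereal-track exposure.")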
