Tuesday, December 25, 2012

It's Raining — What Does That Mean? Part 2

This note is a follow-up to Part 1, and there is a Part 3.

Part 1 covers definitions and background.

Part 2 gives an example of rain at the border between Light Rain and Moderate Rain.

We now have a tipping bucket rain gauge and can be more specific in this study. Our goal is to teach a practical interpretation of rain intensity that will help with our own observations, and perhaps assist the VOS program once we get a good database established.

The original motivation came from a long-distance outdoor skate (the Red Hook Haul Ash... can be done by bike or skates). During the last event it was raining at just 0.04"/hr and we got soaked, which seemed surprising for such a light rain. But it turns out that a remarkably low inches-per-hour figure is really a lot of rain. What is called "heavy rain" would better be called amazingly heavy rain. See the earlier article on terms.

We spent Christmas with a nice steady 0.10 to 0.13 inches per hour of real rain. That is, it was not showers, which would mean the rain could be different a few blocks from here. Thus, besides our own gauge, we have other data from the neighborhood that support the intensity: namely, the rooftop gauge at UW and the Seattle RainWatch program, another UW product. The picture below is from the UW roof.

This is accumulated rain, so the steady slope means the rate was constant, and if you divide 0.45" by 4 hours you get the 0.11"/hr or so we experienced here, 4 miles away.
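That slope-to-rate step is simple to script once gauge data are in hand. Here is a minimal sketch; the accumulation values are illustrative stand-ins, not our actual gauge log:

```python
# Average rain rate (in/hr) from accumulated-rain readings.
# The sample readings below are illustrative, not actual gauge data.
readings = [
    (0.0, 0.00),  # (hours elapsed, accumulated inches)
    (2.0, 0.22),
    (4.0, 0.45),
]

t0, a0 = readings[0]
t1, a1 = readings[-1]
rate = (a1 - a0) / (t1 - t0)  # slope of the accumulation curve
print(f"average rate = {rate:.2f} in/hr")  # 0.45 / 4 = 0.11 in/hr
```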

Below is a capture (made a bit too late) showing the screen from Seattle RainWatch. To use this neat site, set the picture to "1 hr pcp" and choose "local metro." This one shows lighter gray for under 0.10, but at the time of our test it was dark gray, meaning solid at or over 0.10; our own tests and the UW data show it was really 0.10 to 0.13. They have animations showing the rain moving, and forecasts.

Now for the data:

Here is what 0.10 to 0.13 looks like in a puddle.



Next, we see what this looks like on the water, from a distance off. Notice that it barely shows, and has only a slight influence on visibility.



Finally we look at 0.10" per hour on a windscreen with windshield wipers running. We include this example because mariners on ships and power-driven vessels often use rain on the windscreen as a gauge for reporting rainfall intensity. Our goal is to build up a database of this type of video for various rain intensities. This will take a while, but we will plod on with it as best we can.

Notice the difference in apparent rain intensity between the car (vessel) underway and stopped. Moving at higher speeds you get more rain on the windows. This car was moving at about 12 kts, maybe a bit faster at times, and is shown stopped twice for comparison.



End of story. The main point is that even what appears to be significant rain is actually still a small number, i.e., 0.10 inches per hour. Put the other way, 0.50 inches per hour is a real downpour. Now that we have our neat gauge, we will get some data to demonstrate this.

By the way, the folks who have really studied this subject are those who have invented automatic windshield wipers. They need multiple ways to decide how often to wipe the screen. I just learned that their patent applications have many interesting graphs of their studies, using real values of inches-per-hour intensity, so we will try to sort that out and include some here.


First Pressure Check on Lacrosse C86234

We have a new Lacrosse weather station model C86234 from Costco. On sale for about $60; list is some $280. It seems to work well so far. (We did a rain gauge check today at about 0.1"/hr for 3 or 4 hours, and will write that up shortly. The rain study is the main reason we got the device.) It has a tipping bucket gauge that is wireless for about 200 ft to a temperature and humidity sensor, which is then wireless for an additional 80 ft to an electronic panel, which can in turn be accessed up to 30 ft away by Bluetooth from a PC.

We obviously have barometers coming out our ears here, but the pressure was easy to check quickly with online resources, and we explain the process here so home units can be checked as well. We are in sight of the WPOW1 NWS station, so that is an easy comparison. You can download their data for the past 24 hr and paste it into a spreadsheet. The 24-hr data is UTC; the onscreen data is local or UTC. WPOW1 has a great plot of the pressure and trend, and even pressure plus wind. The Lacrosse barograph display on the device is not very useful, but you can download the data to a PC. The process is a bit clunky, and the connection is lost if you close the program, but it will make a plot, or you can export to txt and import to Excel.

Here is the plot from today at WPOW1. The plot runs backwards in time, so for comparison I add another copy, flipped over and stretched to match the Excel output.



Next we imported both sets of data to Excel and subtracted them. This is a crude test, as I am not 100% certain of the times: I had just set up the Lacrosse and had to guess a correction since it had stored data with the wrong times in it. But this does not really matter; it just means the comparison will be easier when done properly.
Here are both sets of data overlaid, and we see at a glance that the comparison is pretty good, at least on a cursory level. Small, short-lived trends are well reproduced. Next we show the difference, with the scatter marked off, which is about ±0.3 mb, centered at −0.3 mb. I would guess we are slightly higher than that, maybe −0.4 or a bit more. We can check that later. In any event, on first pass their barometer looks pretty good over the range 1000 to 1020 mb... especially for $60!
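The spreadsheet step can also be scripted. Here is a minimal sketch of the comparison, assuming both records have been exported to simple two-column CSV files of hour and pressure; the file names and layout are assumptions for illustration:

```python
# Compare a home barometer record to a nearby reference station.
# Assumes two-column CSV files (hour, pressure in mb) with no header,
# covering the same UTC hours. File names are hypothetical.
import csv
from statistics import mean, pstdev

def load(filename):
    """Read {hour: pressure_mb} from a two-column CSV."""
    with open(filename) as f:
        return {row[0]: float(row[1]) for row in csv.reader(f)}

station = load("wpow1_24h.csv")    # NWS station download
home = load("lacrosse_24h.csv")    # home unit export

diffs = [home[t] - station[t] for t in station if t in home]
print(f"offset = {mean(diffs):+.2f} mb, scatter = +/- {pstdev(diffs):.2f} mb")
```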

On the other hand, most electronic units will work well over that range. It is outside of this range that the bigger challenge lies, i.e., P > 1025 mb and P < 990 mb. We will know soon and post a follow-up or amend this one.

We can later take this into our calibration bench and test it there, but the main point of this exercise is to demonstrate how you can do a quick check by comparison to a nearby station. If you do not have a nearby station, you can interpolate with our free online function at www.starpath.com/barometers.

Next we will add Part 2 to our ongoing notes on rain, now that we have a tipping bucket rain gauge at hand.





Monday, December 17, 2012

A Local Mini Surge

The day after writing about the Sandy surge, we got our own local mini surge, which did flood some low-lying waterfront homes. Again, it was a very deep low, with strong winds that hit the Sound near a spring tide. Winds were 30 kts, pressures down to 978 mb, and predicted tides at 12 ft. The surge added just over 2 ft throughout the tide cycle, peaking at about 14 ft.

The main effect of the surge, however, took place a few hours before high tide, when the winds were strongest and the pressure lowest.

Here are the pics. (I do not need to add that this was not influenced by climate-warmed higher sea levels! By the way, you can see the Puget Sound sea level history here.)

Here is the storm about 4 hours before high water (as forecast the day before).

Here are winds and pressure. Wind peaked at about 30 kts; low pressure was about 975 mb.

Here is the wind direction. At high tide the winds were SW, which is onto the beach in the two pictures of high water that follow.


Notice that the surge (2 ft above normal) actually started at low water before midnight, but it did most damage about 6 am, at peak wind time.


These canal front homes were flooded. They are at the entrance to the Locks, just around the corner from the next two pictures.


In the above pic the tide height is about 6 ft.... from another day.


This is 9 am on surge day; the tide was about 14 ft at this point (2 ft higher than predicted), but notice the debris that got pushed up earlier in the morning, when the tide was much lower but the waves were bigger. This is an example of tide and wind almost in phase, but not quite. Had they been truly in phase, the flooding and damage would have been worse.


Friday, December 14, 2012

NOAA Chief Misspeaks on Sandy Surge


I am a strong supporter of the work of NOAA, especially the NWS, NHC, NCEP, and NOS. We praise their work daily, and teach students how to use the many excellent resources and services they provide. But once in a while, spokesmen of even the best programs can wander off the trail. According to a chief NOAA spokesman:

"Storms today are different now because of sea level rise. The (Sandy induced) storm surge was much more intense, much higher than it would have been in a non-climate changed world."

This statement is very misleading, which has led me to write this note.

Climate change, global warming, and the associated rise in sea level are very serious issues that warrant our attention. Erroneous statements like this, however, cause much damage, because the media pick up on them and spin off into further sensationalism, which just gets worse with cross-quoting. They also do not seem to be much into checking what they publish these days.

The interview appeared yesterday on NPR. The NPR interviewer immediately takes the bait, adding: “Even garden-variety storms may someday heave water up to your doorstep...How do we prepare?”  Check your favorite news source  and you will see multiple examples of this—higher sea levels accounting for worse storms and surges.

Later it was stated: "The evidence is indeed piling up that climate change is no longer something that is happening in future decades, and everyone's eyes are glazing over as the scientists are talking about it,"  

And can we blame them for the eye glaze? This statement on the Sandy surge is particularly damaging because it dilutes the true message on global warming. It just provides fodder to those politicians who do not believe in science.

Here is some background on this issue.

The height and extent of a storm surge (which means higher-than-average waves and tides hitting the beach, bringing unusual amounts of water onto the land) is determined by several factors:

1) The strength and direction of the wind. This creates a wind-driven surface current that adds to whatever was going to flow in that direction. A first guess is that it adds a current of about 3% of the wind speed (though at these high wind speeds the factor is more like 2%, according to recent research), not to mention the dominant factor that strong winds build huge waves. Related to this is the fetch of the waves as they build offshore. A big storm can “trap its fetch” and bring much larger waves than a smaller system moving diagonal to the beach.

2) The tide height itself. This is a bit circular, because in a sense surge is unusual tide height, but clearly if a storm hits at high tide it will have a larger net effect than if it hits at low tide. On the other hand, a long-lasting, large storm will inevitably hit at one of the high tides of the day. If it hits at a time of month that has a high tide to begin with, i.e. new or full moon near a solstice in many parts of the world, then you start out with high water, and it takes less extra water to go over the barriers.

3) Low atmospheric pressure adds to the tides. This is called the inverse barometer effect. It was known to mariners in the 1800s, though they did not know the cause. We cover this interesting effect in The Barometer Handbook. It adds about 10 cm of tide above the average tide for every 10 mb of pressure below the average pressure (see the sketch just after this list).

4) And there are other, smaller effects that contribute, such as prevailing offshore currents: if an anomalous current eddy happened to be offshore, it could drive notable amounts of extra water into the region. And indeed the height of the sea level can matter as well.
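The inverse barometer arithmetic from point 3 is easy to check numerically. Here is a minimal sketch, using the Sandy figures worked out below:

```python
# Inverse barometer effect: roughly 1 cm of extra water for every mb
# the pressure sits below the local average.
CM_PER_MB = 1.0
CM_PER_FT = 30.48

def inverse_barometer_ft(mean_mb, observed_mb):
    """Extra tide height (ft) from low atmospheric pressure."""
    return (mean_mb - observed_mb) * CM_PER_MB / CM_PER_FT

# Sandy: October mean ~1018 mb, storm low ~945 mb -> ~2.4 ft of extra water
print(f"{inverse_barometer_ft(1018, 945):.1f} ft")
```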

But this is the main point. She says the Sandy surge was much higher than it would have been in the sea level of a "non-climate changed world."  What does that mean? 

Let us say that means 100 years ago, when the average sea level off NY and NJ was about 1 to 1.3 ft lower. The peak water level at NJ was 9 or 10 ft, depending on location, which was about 4 ft of predicted tide and 5 ft of storm-induced extra water (surge). That extra foot of average sea level had very little to do with the magnitude of that surge at a particular time. Furthermore, the surge, though routinely referred to as a record high for the area, was not really that far off what they have had before, just a little way down the coast, in 1962 (the Ash Wednesday Storm: 7 ft water level in Atlantic City, 9 ft in Norfolk, VA). The mid-Atlantic also had a high surge in a 1956 storm. Had these earlier storms had properties like Sandy's, they would have set records that Sandy did not match.

The point is, the high surge in Sandy was not due to a climate-changed higher average sea level; it was due to a bad storm hitting the beach in conditions that could have happened anytime in history. The only historical difference is that Sandy was remarkably well forecasted many days in advance, whereas the terrible Ash Wednesday Storm was not anticipated even a day or two before it hit.

The tides in NJ at the time were not even unusual, contrary to a lot of reports in the news. The high tide was more or less normal. A few weeks' different timing and the tide could have been 1 ft higher than it was, clearly ruling out 1 ft of sea level change as the key factor. Or, had the landfall been 6 hours earlier or later, the tide would have been 4 ft lower, with a dramatic reduction in effects: again, a major influence on surge intensity totally unrelated to the height of the sea level.

Not to mention the pressure. Sandy did indeed set low pressure records for the region, getting down to 945 mb or so. The mean pressure in October for that region is about 1018 mb. Thus we were down 73 mb from normal, which implies 73 cm of extra tide. So the pressure alone added some 2.4 ft to the surge.

There have been other storms with very low pressures that reach NE waters, but they usually curve the other way, and head out to sea to wreak their havoc there. The bad luck in this case was an upper air pattern that turned the storm to the left instead of the right. That had nothing to do at all with higher average sea levels.

Here are some pictures to document the notes above.

The storm came ashore at 00z on Oct 30, seen in this document from NHC.
The wind speed and direction offshore on the approach was documented  by an OSCAT satellite pass. Notice the onshore component and huge fetch to the NE.

 

Here are the average sea level pressures over the NE in October from The Barometer Handbook.


 And the standard deviations from these pressures in October.
The average October pressure is 1018 mb with a SD of 6 to 8 mb.

The pressure drop as Sandy passed was recorded by a NOAA CO-OPS station.
The actual NJ tides in October 2012 show an average of about 4.5 ft high water, but with some high tides as high as 5.7 ft.

The actual tides at the time in the area along with the predicted normal tides are available online at the NOAA/NOS CO-OPS site, which has interesting data for several applications. The green curve is the anomalous component (observed minus predicted) that led to a storm surge.
Notice that 00z on the 30th was high tide, but ±6 hr either side was more than 4 ft lower.

Here are the NOAA records of the rising sea levels due to global warming.


I must stress very clearly that global warming is obviously causing the sea level to rise, and this rise could accelerate with time, and we should be more alarmed about this than many are. It will sink atoll communities, and flood major cities, maybe during this generation.

And indeed, we will see more severe surges and flooding with time because of this. Sandy dropped some 8 to 12 inches of rain on this community, which contributed much of the damage caused–presumably the "more intense" aspect of the surge referred to in the NOAA quote. This will happen more frequently in the future, not just because of the sea level rise, but also because there is likely to be a higher percentage of intense storms, just as there will be more droughts in other parts of the world.

We must be aware of this and take actions to prepare for it. But we most definitely do not further our cause by making claims based on evidence that is not appropriate. There is plenty of unambiguous evidence to cite.

Indeed, if we assume the tides were about the same 100 years ago–I don't know if that is true or not–then this sea level difference could have enhanced this particular maximum water level by some 10 to maybe 20%, depending on how you define surge and on what the actual sea level looked like at the time. It is not uniform in time or space (compare the NY and NJ data above); recorded data are just annual averages. But I still believe that statements like this run the risk of doing more damage than good.

And here is one final caveat to consider. "Surge" is most reasonably interpreted as the rise of the tide above the predicted values. The predicted tide heights are based on the known average sea level at the time. Therefore, by definition, sea level cannot have any influence on surge at all. Higher sea level just causes the high water of the normal predicted tides to get gradually higher with time. In short, intelligent, useful conversation on these topics will require more specific terminology.

It would be interesting to look at a plot of the maximum high tide over these last 100 years–a number we can all understand and interpret properly–rather than average sea level offshore, which is a much more complex concept.  It is rather like comparing the phone book, whose data are right or wrong and easy to check, to the Bible, from which you can prove whatever you want.




Thursday, December 13, 2012

Great Circle Sailing by Sight Reduction

It is not often we need to know the route of shortest distance (the great circle route, GC) from one position to another on the globe, because usually wind or other matters dominate the routing. But sometimes we do–more often than not when we are taking a navigation exam for some license or certification!

The best practical solution is to just type the Lat-Lon of departure and destination into an electronic charting system (ECS) or a computer or calculator program, and it will give the results immediately. The basic results are the distance between the two points and the initial heading of the route (in GC sailing the heading changes continuously along the route). To apply this route, however, we need waypoints along it, since the heading is changing. This is usually accomplished by telling the program a longitude interval; the program then tells you the Lat at each of these Lon intervals along the route–in other words, a set of waypoints.

Another parameter often given out is the vertex of the route, which is the Lat-Lon of the highest Lat along the route.  Since the GC route only differs notably from the rhumb line route (RL) for high Lat at departure and destination, we often find out that the vertex hits the ice, so we can’t use this route anyway!

If you do not have an ECS device to compute the basic data you can get all the answers by drawing a straight line from departure to destination on what is called a great circle chart or plotting sheet. Then just pull off the waypoints with dividers from the grid on the chart. The distance can also be summed up this way in steps along the route. Places like Captains Nautical Supply sell GC Plotting Sheets.

Without ECS or GC plotting sheets we are left to compute these things on our own. This can be done from the basic spherical trig equations used in cel nav, or we can use the sight reduction tables that celestial navigators use to solve routine position fixing from sextant sights. The best set of tables for this application is Pub. 229, and at this point we will have to assume you are familiar with these tables. This application is standard for the most part, but you will see we need a couple extra interpolations not usually called for, but in principle always a small improvement.

We just do a sight reduction but replace the Assumed Position (AP) with the Departure (dep), and replace the Geographical Position (GP) with the Destination (dest). In other words, the sight reduction process tells us the angular height (Hc) and azimuth (Zn) of a star viewed from the assumed position (a-Lat, a-Lon). And we know that the zenith distance (z = 90º - Hc) is the distance from the AP to the GP, so the GC distance is just z converted from angle to nmi at the rate of 1º = 60 nmi. Zn is then the initial heading (always poleward of the RL heading).

Let us work an example: What is the GC distance and initial heading from 13º 12’ N, 49º 35’ E to 15º 04.6’ N, 54º 49.2’ E? This brings up the point that we can use this procedure to figure the course and distance between two points even if we do not care about any difference between GC and RL. This example is at low Lat and short distance, so there will not be much difference in the two solutions.

Dear Reader: at this point you have to decide if you really care about this subject, because it gets more tedious from here on.

a-Lat = dep. Lat = 13º 12’ N
a-Lon = dep. Lon = 49º 35’ E

dec = dest. Lat = N 15º 04.6’ (We write the N in front when calling it a declination.)
GHA = dest. Lon =  54º 49.2’ E = 305º 10.8’ (see below)

Now we spot a bit of a twist in the process. GHA is measured 0º to 360º heading W from Greenwich, whereas Lon goes 0º to 180º W and 0º to 180º E from Greenwich (Lon = 0). So we have to convert our dest. Lon into the equivalent meridian labeled as if it were a GHA.

(For example, 20º W Lon is just GHA = 20º, but 20º E Lon would be 360º − 20º = 340º GHA. To get to the meridian of 20º E, I have to go west 340º.)

So Lon 54º 49.2’ E is the same as GHA = 359º 60’ − 54º 49.2’ = 305º 10.8’.

Then, following regular sight reduction procedures, we figure the Local Hour Angle (LHA) = GHA + a-Lon(E), where we choose the minutes of a-Lon so that they add with the GHA to a whole degree. Thus a-Lon = 49º 49.2’, so LHA = 305º 10.8’ + 49º 49.2’ = 354º 60’ = 355º.

Next we choose an AP with whole degrees since the tables only have whole degrees of a-Lat and LHA. In this application we can just round them off to get a-Lat = 13º N and LHA = 355º, and we note that since our declination is also N, we have a Same Name solution.

Now we enter Pub 229 with a-Lat = 13 N, dec = N 15, and LHA = 355, Same Name, to get: Hc = 84º 45.2', d = −26.9'*, and Z = 067.0º. Since the Hc is so high, we need to get Z from a-Lat = 14 as well, and it is Z = 057.6. In other words, when bodies are high in the sky, any small change in anything changes the bearing, so we will have to interpolate for Lat 13º 12'. (d = altitude difference, and the * means a DSD correction is called for, but we are skipping this for now.)



i.e., Lat interpolation for consecutive declinations gives:
Zn = Z = 67.0 + [(12/60) × (77.7 − 67.0)] = 069.14 for dec = 15 at Lat 13º 12'
Zn = Z = 66.9 + [(12/60) × (66.9 − 57.6)] = 068.76 for dec = 16 at Lat 13º 12'

Now interpolate for the minutes of dec (which at 4.6/60 should be small):
Final Zn = 069.14 − [(4.6/60) × (69.14 − 68.76)] = 069.1º, and that is the initial heading of the GC route we are after.

To find the distance we have to finish getting an accurate Hc. With dec min = 4.6' and d = −26.9’ we get the Hc correction in three parts from Pub 229, but we can skip the DSD correction and just add the tens (1.5’) and units (0.5’) corrections to get 2.0’, which is negative, so we have Hc = 84º 45.2’ − 2.0’ = 84º 43.2’.

The altitude difference (d) of 26.9' means tens = 20, units = 6, decimals = 0.9, and you get the total correction as shown below.


Then z = 89º 60’ − 84º 43.2’ = 5º 16.8’, which converted to nmi = 300 + 16.8 = 316.8 nmi. But this is the GC distance from the AP, not from the departure, so we have to make a correction, which is best done by plotting.


Here we see where we computed from (AP = center of plotting sheet) and where we actually were, and thus we had to add the projected distance (red line) to what we got. This step could also diminish the earlier result. There are several forms of cel nav plotting sheets that can be used for this; see some on the support page for our cel nav text.

After plotting we see we need to add another 9.5 nmi, for a total GC distance of 316.8 + 9.5 = 326.3 nmi.

——————————

Alternatively, if you know these formulas and have a trig calculator, you can get the result this way:


lat = 13º 12' = 13.2º,
dec = 15º 04.6' = 15.077º
LHA = GHA + Lon = 305º 10.8' + 49º 35.0' = 354º 45.8' = 354.763º (we do not need to use AP when doing a direct computation.)

and sin Hc = sin(13.2) sin(15.077) + cos(13.2) cos(15.077) cos(354.763) = 0.995539.
Solve the arcsin to get Hc = 84.5862º, so z = 90 − Hc = 5.41378º; then × 60 gives GC distance = 324.8 nmi. (Our plot and table work was off a hair.)

And:
tan Z = cos(15.077) sin(354.763) / [ cos(13.2) sin(15.077) − sin(13.2) cos(15.077) cos(354.763) ] = −2.6172338039, and solving for the arc tangent:
Z = Zn = 069.09º, which is what we got from the tables (069.1).

Notation note: capital Z = azimuth angle; lower case z = zenith distance; Zn = true bearing or azimuth.

The above formulas are what the programs use. It is obviously much easier to use a calculator that has these formulas already programmed in. We offer a free calculator for this at www.starpath.com/navpubs.
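For readers who want to script it, here is a minimal sketch of that direct computation in Python; it is a stand-in illustration, not our programmed calculator:

```python
# Great circle distance and initial heading by direct computation,
# using the same spherical trig as the example above.
from math import sin, cos, acos, atan2, degrees, radians

def great_circle(lat1, lon1, lat2, lon2):
    """Positions in signed decimal degrees, N and E positive.
    Returns (distance_nmi, initial_heading_true)."""
    l1, l2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)  # equivalent to using LHA = -dlon
    # distance: zenith-distance form, 1 degree of arc = 60 nmi
    z = degrees(acos(sin(l1) * sin(l2) + cos(l1) * cos(l2) * cos(dlon)))
    # initial heading: the tan Z formula above, via atan2 for the quadrant
    zn = degrees(atan2(sin(dlon) * cos(l2),
                       cos(l1) * sin(l2) - sin(l1) * cos(l2) * cos(dlon)))
    return z * 60.0, zn % 360.0

dist, hdg = great_circle(13.2, 49.583, 15.077, 54.82)
print(f"{dist:.1f} nmi, initial heading {hdg:05.1f} T")  # ~324.8 nmi, ~069.1 T
```

The atan2 form handles the quadrant bookkeeping that the tables cover with the Z-to-Zn labeling rules.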

=========

PS. The above note is for doing GC computations over large distances (without a computer!).  If the run you need to compute is less than 500 miles or so at low latitude, you can get a good estimate more easily with mid-latitude sailing. That is:

Solve a right triangle with one side = dLat = 15.077 − 13.2 = 1.877º × 60 = 112.62 nmi,

and on the other side take the departure as dLon × cos(mid-lat), converted to decimal degrees:

= (54.82 − 49.58) × cos[(15.077 + 13.2)/2] = 5.062º × 60 = 303.7 nmi.

Then the run is the hypotenuse = sq root (112.62² + 303.7²) = 323.9 nmi, compared to 324.8,

and the course is then E xx N, where xx = arc tan(112.6/303.7), so xx = 20.3º, and CMG = 90 − 20.3 = 069.7, compared to 069.1.
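The same mid-latitude arithmetic in a minimal code sketch, with the conventions of the great circle code above:

```python
# Mid-latitude sailing estimate of run and course, per the steps above.
from math import atan2, cos, degrees, hypot, radians

def mid_lat_sailing(lat1, lon1, lat2, lon2):
    """Positions in signed decimal degrees, N and E positive.
    A good estimate for runs under ~500 nmi at low latitudes."""
    dlat_nmi = (lat2 - lat1) * 60.0  # 1 degree of Lat = 60 nmi
    dep_nmi = (lon2 - lon1) * cos(radians((lat1 + lat2) / 2.0)) * 60.0
    run = hypot(dlat_nmi, dep_nmi)   # hypotenuse of the right triangle
    cmg = degrees(atan2(dep_nmi, dlat_nmi)) % 360.0
    return run, cmg

run, cmg = mid_lat_sailing(13.2, 49.583, 15.077, 54.82)
print(f"run {run:.1f} nmi, CMG {cmg:05.1f} T")  # ~324 nmi, ~069.7 T
```

The small difference from the 323.9 above is just rounding in the hand arithmetic.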

========

Summary:  after you pass all of your tests on these subjects, buy a calculator that will do all this for you... and much more.




Thursday, December 6, 2012

Tactical Use of Scatterometer Data

We will be watching the tropical Atlantic winds very closely for the next 3  months as we assist in the tactical routing of the OAR Northwest expedition from Dakar to Miami. This will bring up many examples of the value of scatterometer winds, about which we have several posts. Here we will just document a few as we proceed, starting with the one below. We highlight the importance of this analysis in our text Modern Marine Weather.

The top picture is our best surface analysis map of the ITCZ (doldrums) just SW of Dakar, valid at 18z. The closest ASCAT pass (hi-res from KNMI) is some hours earlier, at 1030z, but this pattern does not change rapidly, and our point for now is just to show the type of detail we can see. Searching around the data, we can often find cases closer in time.

The blue rectangle marks the region shown below in the ASCAT data.


The surface analysis does not tell us much about the winds at all, but since this is the ITCZ we would expect the NE trades to be meeting the SE trades, as they are indeed doing. This we see from the ASCAT winds, but notice how much more detail we get. If you are rowing or sailing in this area, this is tremendously valuable information. Notice how the zone really splits up wind-wise at the eastern end of the region measured, about which we have no idea at all from the surface analysis alone.

Below we see the GFS model wind for 18z, which is effectively a surface analysis for this 18z run, but we do not learn much from it. In short, scatterometer winds are the most precise wind data we can get at sea... or get at home for a particular part of the ocean.


This is the end of this example. We will add more as we run across them or share ones we actually use in routing.



Study on your own
To make a comparison of this type on your own for any location:

(1) get the top picture: opc.ncep.noaa.gov/UA/Atl_Tropics.gif

(2) get the middle picture: knmi.nl/scatterometer/ascat_b_osi_co_prod/ascat_app.cgi

(3) get the bottom picture: passageweather.com/maps/arc/mappage.htm




Friday, November 2, 2012

OPC maps for Google Earth

The Ocean Prediction Center now offers links that will put some of their maps (including Gulf Stream and Unified Analysis) directly onto Google Earth. Once you have done this, they will update automatically the next time you open Google Earth. It is hard to think of a more convenient way to get a quick picture of what is going on weather-wise on a large scale.

Get the products and instructions at this OPC webpage. Then just save the files anywhere and drag them onto Google Earth. Then save your My Places when you close GE (it should ask you to save), and they will be there and update automatically for you. Very slick.

Here are a couple samples:








We have also made similar links ourselves for getting the ASCAT winds onto GE. This is a project I would like to expand, but I will need some help. I call it the ASCAT Genome Project: we want to get all ASCAT and OSCAT winds worldwide linked to GE. It will be a great service, and once I figure out a way to coordinate the help of others, we will post a note here so you can join us if you like.

Track of HMS Bounty and the NHC Forecasts

In Modern Marine Weather we point out that if you want to sail in a hurricane you can. We know where they take place and when they take place—and when they do occur, their location and projected tracks are remarkably well forecasted. To sail in one, just look up this information and go there.

On the other hand, if you do not want to sail in a hurricane, look up the same information and then do not go there.

Seems simple enough, but this nutshell summary of avoiding hurricanes is obviously oversimplified. Huge areas of the ocean and long seasons would be blocked out by such a simple guideline. Fortunately, the National Hurricane Center (NHC), in collaboration with experienced mariners over the years, has come up with more realistic guidelines for safe navigation in the presence of tropical systems. It comes in the form of two simple rules: The 34-kt Rule and The Mariner’s 1-2-3 Rule.

The 1-2-3 Rule is simple and easy to apply based on text or voice reports of the storm locations, which are given several times a day in the high-seas broadcasts. Namely, the danger area to be avoided expands the forecasted danger zone by 100 nmi per day, as shown in Figure 1.

The danger zone for this application is characterized by the NHC as the radius about the storm center that includes winds greater than 34 kt.  Thus we start with what the NHC calls The 34-kt Rule: “For vessels at sea, avoiding the 34-kt wind field of a hurricane is paramount. Thirty-four knots is chosen as the critical value because as wind speed increases to this speed, sea state development approaches critical levels resulting in rapidly decreasing limits to ship maneuverability.” They add the natural precaution that sea state outside of the 34-kt radius can also be significant enough to limit course and speed options, so we should monitor this carefully.


Figure 1. The 34-kt Rule and The Mariner’s 1-2-3 Rule




We can use the forecast of Sandy on Thursday, Oct 25 (below) as an example of a long-term forecast for a storm headed north:

NWS NATIONAL HURRICANE CENTER MIAMI FL      
0300 UTC THU OCT 25 2012

[Day 0 - Thursday = 00h]
HURRICANE CENTER LOCATED NEAR 19.4N  76.3W AT 25/0300Z.  POSITION ACCURATE WITHIN  20 NM
PRESENT MOVEMENT TOWARD THE NORTH OR  10 DEGREES AT  11 KT. ESTIMATED MINIMUM CENTRAL PRESSURE  954 MB. EYE DIAMETER  20 NM. MAX SUSTAINED WINDS  80 KT WITH GUSTS TO 100 KT.

64 KT....... 25NE  20SE  20SW  20NW.
50 KT....... 50NE  60SE  40SW  40NW.
34 KT.......110NE 120SE  70SW  60NW.
12 FT SEAS..120NE 300SE 120SW 120NW.

WINDS AND SEAS VARY GREATLY IN EACH QUADRANT.  RADII IN NAUTICAL MILES ARE THE LARGEST RADII EXPECTED ANYWHERE IN THAT QUADRANT.

[Day 1 Friday = 24h]
FORECAST VALID 26/0000Z 24.4N  76.2W
MAX WIND  70 KT...GUSTS  85 KT.
64 KT... 20NE  20SE   0SW   0NW.
50 KT... 70NE  70SE  40SW  50NW.
34 KT...150NE 120SE  70SW  90NW.

[Day 2 Saturday = 48h]
FORECAST VALID 27/0000Z 27.6N  77.2W
MAX WIND  65 KT...GUSTS  80 KT.
50 KT...120NE 100SE  90SW 120NW.
34 KT...250NE 160SE 100SW 230NW.

[Day 3 Sunday = 72h]
FORECAST VALID 28/0000Z 30.5N  74.5W
MAX WIND  60 KT...GUSTS  75 KT.
50 KT...120NE 120SE 120SW 100NW.
34 KT...300NE 270SE 180SW 300NW.

EXTENDED OUTLOOK. NOTE...ERRORS FOR TRACK HAVE AVERAGED NEAR 175 NM ON DAY 4 AND 225 NM ON DAY 5...AND FOR INTENSITY NEAR 20 KT EACH DAY.

[Day 4 Monday = 4-day]
OUTLOOK VALID 29/0000Z 33.5N  71.5W
MAX WIND  60 KT...GUSTS  75 KT.

[Day 5 Tuesday = 5-day]
OUTLOOK VALID 30/0000Z 37.0N  70.0W...POST-TROPICAL
MAX WIND  60 KT...GUSTS  75 KT.


Plot the location of the storm at forecast time, and then plot on your chart the forecasted locations on Days 1, 2, and 3. Then check the forecast for the maximum radius of 34-kt winds; they are given for each quadrant in each day’s forecast. On Day 1 this was 150 nmi, which occurred in the NE quadrant. With a drawing compass, draw a circle around each of the 3 locations with the 34-kt radii (in this example) of 150, 250, and 300 nmi. You then have a plot of forecasted storm sizes on these 3 days, which is shown in Figure 2.

These are not the safety zones of the 1-2-3 Rule, these are the actual forecasted locations and sizes of the areas that should be avoided. Next we apply the Mariner’s 1-2-3 Rule to account for uncertainties in forecast accuracy.

To each of these 3 radii we then add 100, 200, and 300 nmi to account for historic uncertainty in the forecasted track. This guideline is based on the NHC’s record of forecast track errors over the past 10 years. The track locations are actually more precise than this in recent years, and indeed getting better continually, but the uncertainty in the intensity and size of the forecasted systems remains more of a challenge, hence the larger safety zones.
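The arithmetic of the two rules is easy to script. Here is a minimal sketch using the Day 1 to Day 3 quadrant radii from the Sandy forecast quoted above:

```python
# 34-kt Rule danger radius plus the Mariner's 1-2-3 Rule safety margin.
# Quadrant radii (NE, SE, SW, NW) are the 34-kt values from the
# Oct 25 Sandy forecast quoted above.
forecast_34kt_radii_nmi = {
    1: (150, 120, 70, 90),    # Day 1
    2: (250, 160, 100, 230),  # Day 2
    3: (300, 270, 180, 300),  # Day 3
}

for day, quadrants in forecast_34kt_radii_nmi.items():
    danger = max(quadrants)       # 34-kt Rule: largest quadrant radius
    safety = danger + 100 * day   # 1-2-3 Rule: add 100 nmi per forecast day
    print(f"Day {day}: danger radius {danger} nmi, safety radius {safety} nmi")
# Day 1: 150 -> 250; Day 2: 250 -> 450; Day 3: 300 -> 600
```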

Furthermore, this guideline as presented is for tropical systems that are fueled from the warm water below them. Once a storm moves out of the tropics and becomes extratropical, it begins to gain energy from a much broader source—the temperature difference between cool northern air and warmer southern air masses. When this happens the system can quickly become much larger and more intense. This is precisely what happened with the transition between Hurricane Sandy and “Superstorm Sandy.” It got much larger, with a much broader band of strong winds and high seas.

This one became known in the media as a “superstorm” (not a defined meteorological term) because that transition took place at a time that was uniquely favorable to enhanced extratropical development. Extratropical storms on the surface are strongly influenced by the wind patterns in the upper atmosphere, and this one just happened to move north at the worst possible time—not only for enhancement, but for a forced turn to the west, rather than the more normal route to the east. This, however, was not a surprise. This storm was indeed a testimony to the skill of modern numerical weather prediction, which had accounted for these effects.  They were included in the forecasts as shown by the examples here.

In short, we should not look at The 34-kt Rule and The Mariner’s 1-2-3 Rule as an over-conservative guideline. It was spot on in the case of Sandy, with tragic consequences for the Bounty.



Figure 2. Plot of the 3-day and extended forecasts for Hurricane Sandy issued on Thursday, Oct 25, 2012, valid at 00z. Also shown is the track of the HMS Bounty, which departed for Florida on that date. The shaded areas encompass the 34-kt wind regions for the next three days, with the radial lines marking the largest radius in each case. The associated colored dashed circles mark the limits of the Mariner’s 1-2-3 Rule. The two top storm locations mark the extended forecasts, with the dark circles marking the quoted uncertainties in track location. These two dark circles do not imply any wind or storm size predictions. The 5-day forecast of the ECMWF model correctly called for an earlier turn to the west than the GFS model did, but the NHC always balances the input from multiple models when making their forecasts.

In this storm, it was well forecasted that the hurricane would turn extratropical, expand, and intensify, so the 34-kt Rule alone blocked off essentially all of the coastal waters before even applying the 1-2-3 Rule. The 1-2-3 Rule showed how far out to sea the risk extended and remained valuable in the waters off Florida for several days. Figure 3 shows this extra caution was justified.


Figure 2 shows the forecast at the time of departure. The inevitability of the encounter with the 34-kt wind field did not diminish with time.



Figure 3. Satellite wind measurements at 1710z  on Sunday, Oct 28, 2012. For comparison, the 72-hr forecasted location of the storm for that time made four days earlier on Thursday is shown, along with the 34-kt wind and 50-kt wind radii (not part of either of these rules).

The 50-kt wind field was very well forecasted, and for the most part the 34-kt winds as well. But there were large regions of the ocean beyond that 34-kt radius that had winds well above 34 kt, which shows the value of the 1-2-3 Rule. That rule at 72 hr out calls for a large clearance in such a large storm. For storms within the tropics the 34-kt wind range might not be as large, so maneuvering options could be greater, but tropical storm sizes vary significantly.

The NHC website publishes the 1-2-3 Rule boundaries for all storms they track, so watching these online is a good way to get a feeling for how they evolve.

The satellite data here are from the Indian scatterometer OceanSat-2 (OSCAT). The European instrument ASCAT is a primary source for this type of data, but it did not have a pass at this time. The sections it did show confirm that the winds were at least as strong as shown here. We have made a link to all of the scatterometer data, along with a convenient way to get ASCAT winds by email, at www.starpath.com/ascat.


--------
To see how the 34-kt wind field predictions varied over this period, along with the related probabilities that give some insight into the 1-2-3 Rule, look at this video of the 39-mph winds.

For further information see:

www.nhc.noaa.gov/marine
www.nhc.noaa.gov/prepare/marine.php
www.ecmwf.int






Wednesday, October 31, 2012

The direction of sunrise and sunset—the old fashioned way


…or another value of a good compass to sailors, photographers, and lawyers.



Sailors care about accurate compass bearings for position fixes and for evaluating the risk of collision as required of all mariners in the Navigation Rules:

Rule 7, Risk of Collision

(d) In determining if risk of collision exists the following considerations shall be among those taken into account:

(i) such risk shall be deemed to exist if the compass bearing of an approaching vessel does not appreciably change;

Note the word “shall.” This is not optional. The only way to measure such bearing changes in the allotted time is with a high quality compass.

Photographers sometimes need to know the precise direction of sunset so they can frame a picture exactly as they want it. A cheap compass will not do, as it cannot take bearings accurately enough to pinpoint a spot on the horizon. A good candidate for this task is the famous French model, known for some thirty years or more as the “hockey puck compass,” though no importer uses that name directly. It sells for about $120. It can read a bearing to within ±1° or so, but we cannot count on that being exactly the correct bearing, because local disturbances can throw this off somewhat, no matter how remote the site. Leaning on a car would make much bigger errors than that. Your eyeglass frames could too, as could your watch, when holding this hand-held instrument up to your eye.

So standard compass precautions must be taken, but that done, this compass will do the job nicely. It is also small and rugged, which are bonuses in the field. We would like to think that the compass in our iPhone might do the job, but they are not dependable at this precision; all electronic compasses are very sensitive to tilt angle.

But having the right compass is just step one. We then need to know what the direction of the sun is when we want to photograph it. If we are sticking with sunrise and sunset, we do not have to worry about time keeping. It happens when it happens.  Lawyers, on the other hand, might want to prove the sun was shining in their client’s eyes at any random time of day, so they do have to worry about time keeping.  We will come back to that.

I do not know of any one magic table that tells us exactly what we want, namely the magnetic bearing of the sun on the horizon for any latitude and longitude on any date. So we have to do a couple simple computations, after which we could create special tables for special locations.

First, the direction of the sun is an astronomical property, totally independent of the magnetic field of the earth. Thus we must start with true directions. From the Nautical Almanac we can compute the direction of the sun at any time from any place, but this will be true directions (labeled T). That is, north is 000 T, east is 090 T, south is 180 T, and west is 270 T. Southwest would be 225 T. Or we can be more specific, as we will soon want to be, and the direction that is 20° south of west would be 250 T.

Now that the math is done, what do we need to know besides the date?  We need to know our latitude and we need to know the local magnetic variation. You can get your latitude from Google Earth. Just find the location you care about and look to the bottom of the screen. Magnetic variation (often called declination on land) is the difference between True North and Magnetic North. You can get it from the National Geophysical Data Center (www.ngdc.noaa.gov). The variation will have a label, E or W. It is defined in such a way that true bearings = magnetic bearings + Var E (or – Var W).  For our application, we will be going backwards, so:

Magnetic bearing = True bearing + Var W
or
Magnetic bearing = True bearing – Var E.
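In code form, that sign convention looks like this; a minimal sketch, with variation carried as a signed number (east positive):

```python
# True/magnetic bearing conversions; variation signed east positive.
def magnetic_from_true(true_brg, variation):
    """Magnetic = True - Var(E), or equivalently True + Var(W)."""
    return (true_brg - variation) % 360.0

def true_from_magnetic(mag_brg, variation):
    """True = Magnetic + Var(E), or equivalently Magnetic - Var(W)."""
    return (mag_brg + variation) % 360.0

print(magnetic_from_true(250.0, -10.0))  # 250 T with 10 W variation -> 260.0 M
```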


Now we are left with finding the true bearing of the sun at sunrise and sunset. A celestial navigator can compute this readily from the Nautical Almanac, but that takes tools we do not need here. There is no one table that does this job specifically, but there is one we can use: Table 22, Amplitudes of the Sun, from Bowditch’s American Practical Navigator (1977 or earlier).

In Table 22, the word amplitude means the angular difference between the direction of sunrise and due east or the direction of sunset and due west. The motion of the sun is symmetric across the horizon, so these values are the same on a given day. If the sun rises 20° south of east, it will set 20° south of west. In this case the amplitude would be S 20°.

We do have to keep the mind engaged, however, because the arithmetic switches. This is apparent if you look at a compass rose—always a good idea at this stage. That is, 20° south of east means 090T + 20 = 110T, whereas 20° S of west means 270T – 20 = 250T.

And we are now almost done. We are asking for a rather sophisticated result, so it should not be a surprise that we have a couple steps to take. Bear in mind as well that we are doing this with paper tables. In the end, we have apps for this and you can punch a button and get it from your cell phone!

To simplify its presentation, Table 22 does not use the date, but rather the astronomically more significant parameter called the sun’s declination. This is the same word as used on land for variation, but it is a totally different concept: it is the latitude directly below the sun on this day of the year. (You see now why mariners like to use magnetic variation rather than magnetic declination.) We are using the sun’s declination just as an index to access the tables; see the linked table of the sun's declination.

Suppose we are at latitude 38N, on May 13, and the local magnetic variation is 10° W. What is the compass bearing of sunrise and sunset?

Refer to Declination Table to learn that on May 13 the sun’s declination is N 18° 14’ (18.23°).

Then turn to Table 22 for latitude 38 and see that the amplitude is about 23.5° (i.e., about halfway between 23.1 and 23.7; fractions of a degree do not matter here at all, so you can round up or down).

Then give the amplitude the same label, N or S, as the declination. North in this case, so the amplitude is N 23.5°.

Then the true sunset direction would be located at 270 + 23.5 = 293.5T

And the magnetic (compass) direction, since we have west variation, would be:

Compass bearing of sunset = 293.5 + 10 = 303.5 M.
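If you want to check the table work by computation, the amplitude follows from the standard relation sin(amplitude) = sin(declination) / cos(latitude), ignoring refraction. A minimal sketch for the example above:

```python
# Sunrise/sunset amplitude from latitude and the sun's declination,
# then the compass bearing of sunset, as in the worked example.
from math import asin, cos, degrees, radians, sin

def amplitude_deg(lat_deg, dec_deg):
    """Angular distance of sunrise/sunset from due east/west."""
    return degrees(asin(sin(radians(dec_deg)) / cos(radians(lat_deg))))

lat, dec, var_west = 38.0, 18.233, 10.0   # the May 13 example
amp = amplitude_deg(lat, dec)             # ~23.4, vs ~23.5 from Table 22
true_sunset = 270.0 + amp                 # N amplitude: north of due west
mag_sunset = true_sunset + var_west       # add the west variation
print(f"amplitude N {amp:.1f}, sunset {true_sunset:.1f} T = {mag_sunset:.1f} M")
```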

This method works with just the tables included here and nothing more. If you have a computer, you can in principle get this data from various sources, but it is not quite as simple as one might guess. The main problem, if you want accurate data (i.e., ±2°), is that you get involved in time keeping: on that approach you need to know the time of sunrise or sunset, and the values you find in the newspapers will not be accurate, while the accurate ones will require you to adjust the times for your longitude... something we have not even mentioned here.

If you care to pursue this, the place to start is www.starpath.com/usno: find the time of sunset for your location, then come back to that page to get the precise true direction. You still have to apply the magnetic variation on your own... or post questions here and we will try to help.




Sunday, October 21, 2012

Gill Pressure Ports

A barometer reading is sensitive to the wind. A sensor directly exposed to the wind can show variations of 2 or 3 mb with strong wind gusts [1], and the effect depends on wind angle as well as wind speed. Even with a sensor indoors or below decks with a leaky seal to the outside, you will see variations of up to a mb with outdoor gusts [2]. Warships and first-responder vessels all have pressurized pilothouses, so the barometer must be read from a lead to the outside, usually just a small Tygon tube.

Here is a version from RM Young [3]. This one sells for about $140. Very simple, no moving parts; but the engineering that goes into them is not so simple and has evolved over the years, starting in the mid 70s. They are now often called a Gill Pressure Port, in honor of the inventor, Gerald C. Gill, who developed it for what is now called the National Data Buoy Center.

We have tracked down his original extensive technical report on the development of this device [4], which we will make into a pdf ebook that is inexpensive and easy to access. (For now it is too big, and takes some cleaning up.)

The history of this device is quite fascinating, and seems to be an example of some lack of communication between researchers–at least several of the key players do not reference each other during the days of its development. See for example the early work of Miksad [5] and a related patent application [6], filed in 1989 and canceled in 1996... maybe there is a story there as well!

As should not be surprising, the best of these devices comes from Paroscientific, our neighbors just across the bridge in Redmond, WA. They are the world leader in all matters relating to high-accuracy barometric pressure measurement [7]. Paroscientific has a report on the testing of this device somewhere on their website, but we have not found it yet.

And we should of course mention Vaisala, the world leader in production of the full range of weather instrumentation, which has a top-of-the-line model as well [8]. These devices are often referred to as static pressure ports, without reference to Gill, but it seems he was a pioneer in the development of the engineering.

For completeness,  note that the pressure port requirement has applications in other areas, such as pressure drop measurements in air conditioning conduits where there is a varying wind flow through the system [9].


THIS ARTICLE IS A WORK IN PROGRESS. IF YOU HAVE AN IMMEDIATE NEED FOR MORE DETAIL, LET US KNOW AND WE WILL PUSH IT UP THE LIST. FOR NOW I JUST WANTED TO COMPILE THE PIECES WE HAVE AS A REMINDER TO MAKE THE GILL EBOOK... which is in keeping with one of our ebook goals: to preserve obscure but important texts that somehow Google did not preserve for us. (Ed. note: we published this one in 2019; see below.)


References:

[1] Guide to Meteorological Instruments and Observing Practices, WMO-8 (related section cited in [7], below. Many copies of the full document online.)

[2] The Barometer Handbook
 
[3] RM Young model 61002

[4] Development and Testing of a No-moving-parts Static Pressure Inlet for Use on Ocean Buoys, Gerald C. Gill 1975-76. 120 pages.  Now available as a Kindle ebook.

[5] An Omni Directional Static Pressure Port, Richard Miksad, 1975-76

[6] Pressure Port patent application (I do not know where we found this; it was part of the research for The Barometer Handbook. It at least shows the insides of one design, as does the original Gill article. We will, of course, never see the insides of the RM Young or Paroscientific models online, unless we buy one and take it apart.)

[7] Paroscientific precision pressure port model 8007 (manual)

[8] Vaisala Static Pressure Head SPH10 / SPH20

[9] An Inexpensive Method for Measurements of Static Pressure Fluctuations, Liberzon and Shemer, 2009



Tuesday, September 25, 2012

Dew Point and Temperature vs. Altitude

In a recent post we discussed a very simple (clearly oversimplified) method of estimating cloud ceilings in special circumstances. A point of the discussion was that the dew point lapse rate, as used in this model, is less than the environmental air temperature lapse rate. We came up with something like −1ºF/1000 ft for the dew point drop with altitude, compared to an average environmental air temperature lapse rate, which we took to equal the Standard Atmosphere value of −3.6ºF/1000 ft.
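For reference, here is that simple model in code form. This is a minimal sketch; k is the assumed closing rate between T and DP aloft, which is exactly the number under scrutiny in this post:

```python
# Rough cloud-base estimate from surface temperature and dew point.
# k is the assumed T-DP closing rate in degrees F per 1000 ft; books
# often quote ~2.5, and testing that value is the point of this post.
def ceiling_ft(temp_f, dew_point_f, k=2.5):
    """Estimated altitude (ft) where T cools down to the dew point."""
    return (temp_f - dew_point_f) / k * 1000.0

print(f"{ceiling_ft(60.0, 50.0):.0f} ft")  # a 10 F spread -> ~4000 ft
```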

So now we have to face the truth and see if this model can be justified at all, by looking at real soundings and then real cloud height measurements. Soundings are the measured values of T, DP, and pressure (among other parameters) as a function of altitude. Soundings are nicely presented at the University of Wyoming. As you surf around the world looking at these, the first thing you notice is that the temperature and dew point changes with altitude are all over the place. A first reaction would be that there is no way at all to predict this behavior.

But we are looking at very special cases: namely, we must have low clouds there in the first place, which means the air temp must drop to the dew point at some altitude less than 7,000 ft (2134 m). In other words, we are not predicting clouds at a certain altitude based on T and DP, but instead saying that if we do have low clouds, then we might estimate the ceiling or cloud base from the T and DP at the surface.

So step one in the data search is to find those cases where we do see the T and DP coming together at some height less than 2000 m. Below is a pic explaining the diagrams (details here), followed by some pictures taken at random. After that we analyze them in light of our past discussions.


The dew point is always on the left. Background lines are theoretical values of the dry rate (green, 9.8ºC/km ≈ 5ºF/1000 ft) and the moist rate (blue, 6ºC/km ≈ 3ºF/1000 ft). Notice that the dry rate slope is about 10º shallower, and this 10º corresponds to a lapse rate difference of about 2ºF. Notice, too, there are no low clouds in this case; it is even an inversion: for a while the temp is increasing with altitude.

Here are some examples taken at random. Again, the only criterion was that T and DP met below 2000 m. In each of the pictures we marked the temp with a green line and the DP with a red one; then we duplicated a segment of the green line and moved it to the base of the DP line so you can see the difference in slope more clearly.










We see first that indeed, in these cases, the DP lapse rate is lower than the temperature lapse rate. To get a better feeling we can analyze the slopes of the lines we marked in the figures (1 = top, 8 = bottom), and from these compute the lapse rates.


Sample    slope DP   slope T   DP ºC/km   T ºC/km   DP ºF/1000ft   T ºF/1000ft   Delta (T−DP)
  1          349        333      -3.8       -9.9        -2.1           -5.4          -3.3
  2          350        341      -3.4       -6.7        -1.9           -3.7          -1.8
  3          351        341      -3.1       -6.7        -1.7           -3.7          -2.0
  4          346        339      -4.8       -7.4        -2.6           -4.1          -1.4
  5          347        337      -4.5       -8.2        -2.5           -4.5          -2.1
  6          352        336      -2.7       -8.6        -1.5           -4.7          -3.2
  7          344        338      -5.6       -7.8        -3.0           -4.3          -1.2
  8          349        340      -3.8       -7.0        -2.1           -3.9          -1.8

Averages                         -3.9       -7.8        -2.2           -4.3          -2.1

This brief analysis seems to imply that a 2º DP lapse rate is better than the 1º we came up with, but the closing rate used to estimate cloud ceilings is reasonably close: this shows an average of 2.1 and we had 2.6. But we have to admit this is all very crude analysis. It can only show that there is some ballpark value, and that the standard values we see in pilot's license training materials and other books (usually 2.5) are not unreasonable.

There is still another way we can look into the usefulness of this approach, and that is to check actual METAR reports from airports. They report T, DP, and cloud height, which is presumably measured with some form of ceilometer–a laser device for measuring cloud height. We have started this list, but we are already detecting the limits of this analysis: namely, the ceilometer data are not being reported to very high precision. In fact, it is lower than some standards say it should be.



Station   T (ºC)   DP (ºC)   Cloud height (m)   k (ºF/1000 ft)
KCHS       18.3      15.0          300               1.8
MIAMI      26.7      23.3          600               1.9
YBRK       19.6      18.5          600               0.6
KMFL       30.6      23.9          600               3.7
8557       14.0       7.8          600               3.4
FZAB       31.0      20.4          600               5.8
KLAX       21.7      15.6          600               3.3
KLGA       12.8       4.4         1000               4.6
DULUTH      4.4      -5.0         1500               5.2
KDLH        7.2      -2.2         1500               5.2

Average k = 3.5

We need to get a lot more of this data before drawing any conclusions. We have limited the cloud height to below 2000 m, but what we find in the data is that the next step up is 2500 m, and the other 6 stations found with cloud heights all reported 2500 m, which can't be right. Reminds me of the old days when all ship reports had wind only from the cardinal and intercardinal directions!

So far we did a crude analysis and predicted k = 2.6, but this came from an estimated DP lapse rate of 1ºF/1000 ft, which was then subtracted from an estimated average T lapse rate of 3.6. This is not consistent with the soundings, which showed averages of 2.2 and 4.3, for an average k = 2.1 ±1.

The 10 METAR data points so far give k = 3.5 ±2. We do not have a lot of data, but the data included all that matched our criteria: from the soundings we took all we found where T and DP met before 2000 m, and from the METAR data we took all we found that gave measured cloud heights below 2000 m. So the statistics are low, but they should form a realistic sample. (PS: it could be I am misinterpreting the METAR data; I will check that.)

Therefore I must conclude that there is not a simple way to predict useful cloud heights based on surface values of T and DP. So we have to change our textbook from saying "this is a formula for estimating cloud height" to something like "this is the formula some books say can be used to estimate cloud height, but it must have large uncertainties."

This is the end of this discussion for now. I have to wait and see if someone who knows about these matters might shed some light on this topic... I am a bit in the dark here.