Wednesday, December 13, 2017

Pub. 249 Vol. 1, USNO, and OpenCPN

Alert! These notes only make sense to those familiar with cel nav.

The cel nav sight reduction tables Pub 249 come in 3 volumes. Vol. 2 and 3 are similar to Pub 229, in that you enter with a-Lat, Dec, and LHA and come out with Hc and Zn.  And like 229, there are specific volumes for specific latitude ranges. Also like 229, these are permanent publications; they never change. If you see Vol. 2 or 3 at a swap meet or used book store for a couple bucks (list is $25 each plus shipping), you have a good buy. But do not buy an old edition of Vol. 1. Despite its symmetric name, Vol. 1 is a totally different kind of sight reduction table. It is not permanent; it is issued every five years (latest is Epoch 2020, which is good for ± 4 years).

On the other hand, unlike Pub 229 and the NAO tables, which can be used to sight reduce any sight, Pub 249 Vol. 2 and 3 are intended for sun, moon, and planets... and, coincidentally, any star with declination less than 29º, which is the maximum those intended bodies can have. Thus, in part because Vol. 2 and Vol. 3 will not do stars in general, there is a Vol. 1 intended for "selected stars."  I say "in part" because all of Pub 249 was developed for aircraft cel nav, which has inherently less accuracy (and hence less need for versatility) and also needs a method that is fast and easy to apply. Pub 249 has stayed in print beyond its expected lifetime (aircraft cel nav has been rare for many decades now) because these books became popular with yachtsmen. The British Admiralty call these the Rapid Sight Reduction Tables; they are $55 per volume for identical content. The US versions are online as free PDFs, although you could not print and bind them for the $25 they sell for in print.

Use of Vol. 1 takes a new approach to star sights. We figure twilight time from the Almanac, then we look up the GHA of Aries at that time, and from our DR at that time we find the LHA of Aries at the proposed sight time. Then we round our DR-Lat to the nearest whole degree, and we have effectively established the sky that is overhead. Knowing this, Vol. 1 gives us a selection of 7 stars by name suitable for sights, with the 3 best ones marked with diamonds. Stars in all caps are bright ones. The LHA Aries marks a specific time, so Vol. 1 can tell us the Hc and Zn of each of the 7 stars. It has precomputed these stars for us, which we would otherwise have to do with Pub 229 or a calculator.

Next we take sights to the three stars in the normal manner, noting Hs and WT for each sight as in standard practice. Convert Hs to Ho and WT to UTC and we are ready to complete an abbreviated sight reduction to get the a-value.  Don't worry, you do not have to know these stars, nor how to identify them in the sky.  Just go out at about the time you used, set the sextant to the Hc given, and point in the Zn given, and your star will be there.  A point of pure light in a pale blue sky, often not even visible to the naked eye without a telescope pointing in the right place. Bring it to the horizon and you are done.  Indeed, it is not unreasonable to use Vol. 1 just to select the best stars and get this precomputation done for you. After the sights you can reduce them however you like, but Vol. 1 itself can be used as shown below.

We illustrate the use of Vol. 1 with a trick way to practice cel nav for any type of sight reduction: we use the USNO computation of celestial bodies to tell us what the heights are from a given time and place, then we pretend that is what we measured, and we use our sight reduction method of choice to see if we can reproduce the Lat-Lon we gave to the USNO.  (The only better practice is to use our book Hawaii by Sextant!)

We start by choosing a DR and a date, then figure the twilight times from the almanac as shown below.

Fig.1 Set, civil, and nautical twilight. Sailing from the West Coast, with WT = PDT (ZD=+7)

So this is where we start, and from the almanac we learn sight time will be about 0440 UTC on July 5. Note this is 2018, and today is mid-December, 2017, which reminds us we can do this for any time. As we shall see shortly, before an ocean voyage you can know ahead of time which stars will be best on any night.  This can change with cloud cover, but the intended sights can all be planned.

To figure the stars, we round to Lat = 35N and look up in the NA the GHA of Aries at 0440z on 7/5/18, which is 353º 7.9', and subtract from that our DR Lon of 130º 23.4' to get an LHA Aries of 222º 44.5', which for now we can call 222 or 223; it will not matter for this planning of the practice.
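The arithmetic here is just GHA Aries minus west longitude (plus east longitude), carried out in degrees and minutes modulo 360º. A minimal sketch (function name and argument layout are my own):

```python
def lha_aries(gha_deg, gha_min, lon_deg, lon_min, west=True):
    """LHA Aries = GHA Aries - west longitude (+ east longitude),
    mod 360.  Inputs are whole degrees and decimal minutes, as
    tabulated in the Nautical Almanac.  Returns (degrees, minutes)."""
    gha = gha_deg + gha_min / 60.0
    lon = lon_deg + lon_min / 60.0
    lha = (gha - lon) % 360.0 if west else (gha + lon) % 360.0
    d = int(lha)
    m = round((lha - d) * 60.0, 1)
    if m == 60.0:            # guard against minutes rounding up to 60.0'
        d, m = d + 1, 0.0
    return d % 360, m
```

The same function handles the "borrow a degree" cases and the wrap past 360º that trip people up doing this by hand.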

Now we turn to Vol. 1 to see what stars they recommend. Note that the DR is fixed, so LHA Aries varies as GHA Aries, which increases at 15º/hr, or 1º per 4 minutes. So the LHA Aries column is essentially a time scale, at 4-minute intervals, with 222.75 or so equal to 0440 UTC. We are looking here at the best choices and heights of the stars over roughly an hour (15 x 4 min). But we also notice that the best 3 stars, the ones with the diamonds, do not change; of these, Antares and Regulus are magnitude 1 or brighter.

So we will choose those three stars to "take sights of" for this practice with Vol. 1. At this point we could do the same thing using Pub 229 or the NAO tables.

Fig. 2. Section from Pub 249, Vol 1. (We see later why Antares is marked at the next line.)

It takes a couple minutes per sight, and we would typically take them in sequence and then repeat the sequence 3 or 4 times, or as long as we can see the horizon in the evening, or until the stars disappear in the morning.

We make this choice for practice:

Kochab taken at 04 40 23  (hh mm ss) 
Regulus taken at 04 42 19, and 
Antares taken at 04 45 03. 

I chose this order at random for this exercise, but in practice there can be a preferred order. Ideally we want to get 4 or so rounds of each sight, so the order would not matter much, but we should be aware of their bearings relative to sunset. In July at 35N the sun sets pretty far north, around 300º, so the sky will darken and show stars earlier in the opposite direction, at about 120º.  So we might learn in practice that we can get a couple of sights of Antares earlier than, say, Regulus, though it might not matter much for these particular stars. That is just a side note to think on. As a rule, you want to stretch out the useful sight time as long as possible to get as many rounds of sights as you can.  Do not add more stars! Just get more sights of these three... but I wander into details from our textbook.

Now we go to the USNO to get realistic practice sights. We have made a shortcut to it. This is not related to Starpath; it is just a quick way to navigate to an important place—what we call the navigator's dream machine—that is not so easy to find at random.

The input page looks like this:

The output for this first sight is

At this point for practice, we can simply use the USNO Hc for our Ho, or we can make practice problems by adding the corrections we are going to take out (IC, dip, and refraction). They even tell us what the refraction is, -0.8'.  We can then assume some watch error if we like, and fill out a real form to look like this, which is a Starpath form dedicated to Vol. 1.

The form shows the actual time we "took the sight" and then we find GHA Aries at that time from the Nautical Almanac to enter the form. This has an hours part with a minutes and seconds correction.
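That minutes-and-seconds correction is just the steady motion of Aries, about 15º 2.5' per hour. A sketch of the increment computation (the constant and function name are my own):

```python
# Aries moves 360 deg per sidereal day, i.e. about 360.986 deg
# per 24-hour UT day -- roughly 15 deg 2.5' per hour.
ARIES_RATE = 360.98565 / 24.0   # degrees per hour

def aries_increment(minutes, seconds):
    """Minutes-and-seconds correction to GHA Aries, in decimal
    degrees, to add to the whole-hour value from the Almanac."""
    return (minutes * 60 + seconds) * ARIES_RATE / 3600.0
```

This is the quantity the Almanac's yellow-page increment tables give you; 4 minutes of time comes out to just over 1º, which is why the LHA Aries column in Vol. 1 works as a 4-minute time scale.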

Since we are using the same DR for all three sights, we have to assume we are not moving.  This shows what the top of the form would look like for a real sight.  For the others we dispense with that.

This form is essentially the same as the one we use for Vol. 2 and Vol. 3, but has several parts removed. Copies of our forms with instructions are available for download, along with other tools of interest.

Once we know actual time of sight, we figure the actual LHA Aries (using Almanac and DR-Lon) for it and return to Vol. 1 to get Hc and Zn. The first dip into the tables was just to see what stars to shoot, at some approximate time. Now we have real times, so we need real LHA Aries. All the rest of the form is the same as using Vol. 2 and Vol. 3.

Then we repeat the process for the next sight times, and get two more LOPs. Note that you get to double check that you looked up GHA Aries correctly, and to double check that you got the right Zn. We are not using either of these from the USNO, only the heights they give.

With these examples, we now skip the sextant and time corrections and go direct to the meat of using Vol 1.

And finally the Antares sight.

Now we have 3 LOPs and we can plot them for a fix. They are summarized here.

Now we can plot these in the normal way to see if we get back what we started with, namely 34º 56.7' N and 130º 23.4' W.

This is a zoomed-in view of the plotting solution done in OpenCPN. I will add a video on that trick shortly. We just plant a waypoint at the assumed position, draw a route in direction Zn, add a range ring to the mark with radius = the a-value, and where they cross draw a perpendicular route line, which is the LOP.
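The plotting geometry is simple enough to sketch in code. This assumes a flat local plot with the assumed position at the origin, x east and y north in nautical miles (the names are my own):

```python
from math import sin, cos, radians

def lop_endpoints(zn_deg, a_nm, toward=True, half_len=5.0):
    """Given azimuth Zn (degrees) and intercept a (nm) from an
    assumed position at (0, 0), return two endpoints of the LOP:
    a line through the intercept point, perpendicular to the
    azimuth.  toward=False plots the intercept 'away' (reciprocal)."""
    zn = radians(zn_deg if toward else (zn_deg + 180) % 360)
    px, py = a_nm * sin(zn), a_nm * cos(zn)   # intercept point
    ux, uy = cos(zn), -sin(zn)                # unit vector along the LOP
    return ((px - half_len * ux, py - half_len * uy),
            (px + half_len * ux, py + half_len * uy))
```

This is the same construction as the range-ring trick: the intercept point is where the ring of radius a crosses the Zn route, and the LOP is the perpendicular through it.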

We are looking for 34º 56.7' N and 130º 23.4' W.  We are off by about 1 mile, but I was not as careful as possible with the plotting, and Vol. 1 only has an inherent ± 0.4' accuracy... i.e., it rounds all sights to 1' and it rounds all azimuths to 1º. You can in each case use the USNO data to see how accurate Vol. 1 was on Hc and Zn. In short, the result here is about as good as we could expect, i.e., it all works.

With this method you can practice any cel nav sight reduction, for any ocean, for any time of the year.  You can also get the Vol. 1 forms at our cel nav book support page cited above.

Here is a link to this form alone: Form_111_Pub_49_Vol_1.pdf.  This is new as of this post. We will incorporate it into our full set of forms, which are available as free downloads or as a bound set of perforated sheets.

Sunday, December 10, 2017

Why the Book "Hawaii by Sextant" is Unique

There never has been in the past, nor will there likely be in the future,
        such a thoroughly documented navigation study of a voyage
        relying purely on celestial navigation to cross an ocean.

The future part is easy. It is near impossible to find an ocean going vessel without a GPS on board, or in someone’s cellphone. From a legal point of view, it would likely be considered negligent to make such a voyage without GPS.

One might argue that a voyage could be or was navigated by cel nav without looking at the GPS, but even that does not really count. Knowing you have a backup solution changes the mentality of the navigator and biases the navigation. Not to mention that you are less likely to stand on deck with sextant in hand for hours waiting for the sun to peek out for a few seconds to get a sight. With a GPS in a box somewhere, you can more likely gamble that you will eventually get a sight and not have to work so hard at the moment… nor would you be forced to study limited data for hours to figure the most likely position.

Navigators can certainly document good cel nav practice underway in the ocean with detailed information, and such studies are indeed valuable contributions, but that is different from relying on it as the sole source of navigation, regardless of conditions. This book shows what it was like to navigate by cel nav with nothing else to go by but compass and log. It raises the questions you would have to face, and proposes solutions to analyzing difficult data.

Furthermore, suppose such a voyage were carried out and good records maintained. Then we have to fold in the probability that someone would devote the enormous amount of time and energy required to organize and present the information in a usable manner for students. We venture that this is highly unlikely... maybe for a few days' sail, but not for a crossing of an ocean.  A look into the past treatment of this challenge only reinforces this point.

Why no such study exists from the past is a more interesting point–especially since we describe our book as being “in the spirit of early Bowditch editions.”  Bowditch’s American Practical Navigator (1802 up until 1900 or so) and Norie’s Epitome of Practical Navigation (roughly the same period) are two classic 19th-century texts on navigation.  And sure enough, each of these does include very detailed practice voyages with celestial sight data and logbooks. It is curious that they all involve voyages to or from Madeira, which must have some historic significance, but that is not an issue now.

The key point is that even though these records list the vessel names, voyage dates, captains’ names, and log keepers’ names, they are all fiction when it comes to the celestial navigation. In short, they made up the data to demonstrate what they wanted to teach… which is what all navigation teachers do at some point to this day.

These Bowditch and Norie Journals may have been based on actual voyages made at some time in the past, but the data presented is blatantly artificial. (We leave it as an exercise to confirm this observation. The books are online.)

We agree with these masters of navigation that this is the best way to teach the process. And certainly we do not even approach the skill and seamanship they represent, nor can we hope to emulate the high standards they set in navigation. The only point we make is that the data in Hawaii by Sextant is real; the comparable data in these classic texts was manufactured… and we know as well as anyone why they might have done what they did.  After an ocean passage under sail, it is sometimes difficult to put the pieces back together to present a coherent picture of the full navigation, day to day–not to mention that the many details required to reproduce the results are tedious. Few would consider them worth preserving. As we point out in the text, and as you see in our own records, when the going gets tough, the boat gets more attention than the logbook.

In more modern times there have also been a couple books published that present ocean exercises in celestial navigation, but these too have been based on manufactured data.  We thus maintain that Hawaii by Sextant is a unique contribution to the library of navigation textbooks.

It is obviously not at all unique to sail across an ocean by cel nav alone; thousands of mariners over the years have done so. Bowditch did, many times. It is not clear that Norie actually navigated; he was more famous as an author and book publisher at the time. He was the founder of what is now Imray Nautical Publications in the UK. That fact is also not surprising: many early classic texts on navigation were from scholars who did not have practical experience underway. Bowditch, Lecky, and Thoms were notable exceptions.

PS. We have been told that there are several highly experienced navigators who do still have all the records of their early voyages done by cel nav alone.  We look forward to their publications if they choose to do so.  The more data we have of this type the better our learning will be. If these are from larger vessels (we understand they were commercial ships), then that too will add another perspective.

Wednesday, November 22, 2017


Someone has to care about the details of marine navigation and weather, or things slip by, which might one day show up and cause confusion, and we don't want confusion. Indeed, a hallmark of good navigation and seamanship is clarity in communications. Today, we ran across a gold nugget of doublespeak: MSLP.

We have published a book called the Mariners Pressure Atlas, which contains pressure statistics that are difficult to find, despite their great value for weather tactics in the tropical-storm-prone waters of the world. The book contains global plots of the mean values of the sea level pressure, called mean sea level pressure (MSLP) patterns (isobars), along with the standard deviations (SD) of these pressures on a month-to-month basis. The SD is a measure of the variation in pressure we can expect purely from a statistical spread around the mean value. Sample sections are below.

These are the mean sea level pressures (MSLP) in this part of the world in July. Below are the SD values.

For example, in the tropics with a MSLP of 1012 mb and an SD of 2.0 mb, we know that an observed pressure of 1010 is one SD below the mean and a pressure of 1008 is 2 SD below the mean. When we observe an average pressure of 1007 mb, we are 2.5 SD below the mean. That takes on special meaning when we look at the probabilities.

In other words, the probability of a normal pressure fluctuation being down 2.5 SD is 0.6%. A pressure that low is almost certainly (99.4%) not a normal fluctuation—that is the approach of a tropical storm! The wind will certainly not warn of it at this point, and the clouds on the horizon might not either, but there is indeed a tropical storm headed your way.
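That 0.6% is just the lower tail of the normal distribution at 2.5 SD, easy to check with the error function (the function name here is my own):

```python
from math import erfc, sqrt

def prob_below(z):
    """Probability that a normally distributed value falls more
    than z standard deviations below the mean (lower tail)."""
    return 0.5 * erfc(z / sqrt(2.0))

# Example from the text: mean 1012 mb, SD 2.0 mb, observed 1007 mb
z = (1012.0 - 1007.0) / 2.0          # 2.5 SD below the mean
print(round(100 * prob_below(z), 2), "%")   # about 0.62 %
```

So an observed 1007 mb against a 1012 mb mean with a 2.0 mb SD has only about a 0.6% chance of being ordinary statistical variation.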

This powerful storm warning technique was well known in the late 1700s and early 1800s, when ships carried accurate mercury barometers. Unfortunately, with the advent of aneroids by 1860 or so, this knowledge slipped away, because the aneroids of the day were not accurate enough to do the job.  Eventually even the textbooks stopped talking about absolute pressures and just started preaching up or down, fast or slow, which is useless for this type of long-range storm forecasting.  Now we have accurate barometers (including accurate aneroids), which is why we rejuvenated this classic method... mentioned in Bowditch, but sadly without a link to the crucial MSLP and SD data.

There are spot values of mean sea level pressures in Appendix B of the Coast Pilots.

But that is not the point at hand. What we are dealing with in the above is the mean value of the sea level pressure, which we periodically see abbreviated MSLP in weather and navigation documents.

Now, however, look at these weather maps from Australia and Canada.  The UK Met Office also uses the MSLP notation to describe a surface analysis map.

Now we have an all new meaning of MSLP.  This cannot be the mean sea level pressure we discussed above;  these are the actual values of the pressure at sea level at the valid map time. What is going on here is they are not calling the reference plane "sea level," which we often see, i.e., sea level pressure (SLP), but instead they are calling the reference datum "mean sea level."  MSLP in this context is the same as SLP.

Here is an example of aviation weather (METARs) using SLP; it is also used in some numerical model outputs.

Here is another example that shows the fluidity of the terminology.  The ECMWF defines

"MSLP is the surface pressure reduced to sea level." 

So they know that "sea level" is the same as "mean sea level," but they choose to help us make our point!

It is not unreasonable to tack on the "M"; the sea level does change with the tides (to a good approximation, mean sea level, MSL, is halfway between MLW and MHW), not to mention that it varies with the pressure above it (called the inverse barometer effect). In fact, MSL is an even more complex concept, but in ways that do not at all affect our use of it as a pressure reference. For the present context, this just reminds us to think through the terms we use.  We might note that on nautical charts, buildings, towers, lights, and bridge clearances are referenced to MHW, but spot elevations on land and elevation contours are actually referenced to MSL.

We thus have in common navigation conversation both:

     MSLP = M - SLP,   being the mean value of the sea level pressure, and

     MSLP = MSL - P,   being the pressure at mean sea level.

If you found this abbreviation in some context of your work and then went to a navigation or weather glossary to look it up, the chances are only about 50% that the glossary will come back with the appropriate answer for your inquiry. Put another way, you will not find an official glossary that has both definitions; they will have one or the other.

... which I thought we should document, so no one gets the impression we are sitting around the office all day working on trivial matters.

PS. Just ran across this at the Navy site (FNMOC):

Sea Level Pressure (MSLP): The model-estimated pressure reduced to sea level. Units are in millibars; contour intervals are 4 mb. 

Maybe the "M" stands for "model-estimated"?     

Tuesday, November 7, 2017

Global Warming and Tropical Cyclone Statistics

We happen to be updating some of our training materials today and thought to check the latest stats on tropical storms and hurricanes... there is much talk these days in the news about the various implications of global warming, including effects on tropical storms. Partial results are shown below.

These storms may be getting more severe on average, and maybe wandering off to higher latitudes more often; we have not checked that. All we did was compare what Bowditch reported in 1977 with what they report in the brand new 2017 edition as to the total number of systems.  In 1977 there were not many convenient sources of this data. Now we have all the detail we could ever want about every system, and we get it directly from the primary sources, but it is not clear that the new Bowditch data might not itself need updating.

The statistics shown below are all data up to 1977 compared with the latest systematic study in the 2017 Bowditch, described as 1981 to 2010. You can click the pic for better view.

The values are average number of incidents per month. "S" is number of tropical storms, meaning sustained (> 1 minute) winds ≥ 34 kts. "H" is number of hurricane-force systems, also sustained.

Note that the storms include the hurricanes... all hurricanes start out as storms.  So of the 12.1 storms per year on average in the North Atlantic, only 6.4 on average proceeded to become hurricanes. (If you happen to look at the 1977 Bowditch data, they used a different convention for presenting this information; we regrouped that early data to make this comparison.)

The North Atlantic region (including Caribbean and Gulf of Mexico) definitely has more storms (we have about 29% higher chance of seeing storm force winds), but there is a slightly lower chance according to this data that these become hurricanes—but this still leaves us with slightly more hurricanes than earlier, about 20%.

But these are statistics. We could have 10 hurricanes this year (as we did), then 2 the next, and we are back to the average of 6. The question is, how likely is that, just 2? If we have 10 next year as well, we had better have zero the year after that just to approach the average.  In short, it could be that these Bowditch stats need to include more recent data, i.e., 2011 to 2017.

For example, here are the recent data from the NHC.

For the North Atlantic over the past 7 years (not included in Bowditch 2017) we see 13.8 and 7.1, which is higher than 12.1 and 6.4, but not by much.

Over the past 7 years the East Pacific shows 15.9 and 9.4, compared to 16.6 and 8.9 in Bowditch 2017, which is about the same... but both sets show notably more hurricanes than in 1977 (15.2 and 5.8).

With our check of the recent data we can compute the standard deviations (SD), which are:

East Pacific:  15.9 ( 4.6) and 9.4 (3.0)
North Atlantic: 13.8 (4.8) and 7.1 (3.4)
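Computing a mean and SD from the yearly counts is a one-liner with the standard library. The counts below are hypothetical, for illustration only; the real values come from the NHC season summaries:

```python
from statistics import mean, stdev

# Hypothetical yearly hurricane counts, for illustration only --
# substitute the actual NHC counts for the years of interest.
counts = [7, 2, 10, 4, 8, 6, 12]

print(round(mean(counts), 1), round(stdev(counts), 1))  # prints: 7.0 3.4
```

Note that `stdev` is the sample standard deviation (divides by n-1), which is the appropriate choice for a short run of years treated as a sample.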

We do not have a lot of data here, but these are large SDs, which means we can expect large variations of these numbers from year to year. Below is the distribution of events if the variation is indeed random.

This means that 68% of the values should be within 1 SD of the mean, or we can look at it as shown below.

With, say, 7 hurricanes per year and an SD of 3, there is only roughly a 16% chance of having 4 or fewer; or, put the other way, there is also a 16% chance of having 10 or more. But if we have 13 events (2 SD above the average), then we are down to a 2.3% chance of that happening at random, which raises the issue of looking for trends.  It would be nice to have the SD for the 2017 Bowditch data. That was not given in the book, but it is fairly easy to look up the actual values and compute it as we did here.
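Those 16% and 2.3% figures are the one-sided tails of the normal distribution at 1 and 2 SD, which you can verify directly (the function name is my own):

```python
from math import erfc, sqrt

def upper_tail(z):
    """Chance that a normal value lands more than z SD above
    (or, by symmetry, below) the mean."""
    return 0.5 * erfc(z / sqrt(2.0))

# Mean 7 hurricanes, SD 3: one SD away is 4 or 10; two SD above is 13.
for z in (1.0, 2.0):
    print(z, round(100 * upper_tail(z), 1), "%")
```

This prints roughly 15.9% at 1 SD and 2.3% at 2 SD, matching the figures quoted above.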

Without an in-depth analysis, it seems we can likely rely on the numbers in the 2017 Bowditch, with the awareness that these do appear to be still rising slightly beyond that 2010 data sample.  Other notable changes can be seen in the other zones.

It is likely a more interesting study for climatologists to look at severity, but this is of little interest to mariners; i.e., we would obviously treat a 150-kt forecast the same as we would a 115-kt one... but we might want to keep an eye on storm size. With all the data that is available, one could do a very precise study as a home project on, say, the average area covered by 34-kt winds from inception up to hurricane strength, and then the area covered by 50-64 kt and then >64 kt winds after that.

The other study would be how far north they go, and indeed how long they last.  If you have a student with a science project on the horizon, this is very easy data to get online, and the analysis would be a good exercise in using numbers.  Furthermore, it has much value to mariners, and we cannot count on anyone without a maritime interest putting these specific values together. (If a student is interested, they can call us and we will help.)

Check out the 2017 Bowditch, Chapter 39. They have very good coverage of tropical systems, which even includes QR codes to go directly to the various Regional Specialized Meteorological Centers (RSMC) that do the job of our National Hurricane Center for other tropical cyclone zones.

If you plan to be sailing in a hurricane zone, a mandatory reference is:

Note especially the Mariners 34-kt Rule and the Mariners 1-2-3 Rule on storm track uncertainty.  I would also like to think that our own book would be helpful.

Here is a sample of the 2017 Bowditch's extensive use of QR codes, which is pretty techy, though all the links in the PDF are interactive in the first place.

Wednesday, November 1, 2017

Compass Bearing Fix — An Overview

This topic is presented in several sections; you can skip to the ones that might be of interest.

     What is a compass bearing fix?

     What vessels work best and where to stand

     Compass choice

     Choosing targets for the bearings

     Practice bearing fixes in your neighborhood

     Finding the most likely position from a three LOP bearing fix

     Fix error due to a constant error in both bearings of a two-LOP fix

What is a compass bearing fix?
If the compass bearing to a lighthouse is 045M, then we can go to that light on the chart and draw a line emanating from it in the opposite direction (225M) and we know we are somewhere on that line. If we were to the right of that line, the bearing would have been smaller; on the left of the line, the bearing would be bigger. So we know we are on that line, but we do not know where on that line. That line on the chart is a line of position (LOP).

The next step is to find another identifiable landmark well to the left or right of that one, and take another bearing line and plot that one. The intersection of those two LOPs is a bearing fix. But we do not know much about that fix from these two measurements alone. The two lines will always cross at some position.

If the compass is wrong by some small amount, that fix will be wrong by some amount. The size of fix error depends on the compass error and the angle between the two targets. At the end of this overview there is a note on calculating the error in a two-LOP fix. You can use it to show that a 90º separation minimizes this error, but you do not gain much above 60º separation, whereas you lose fast in accuracy below a 30º intersection.  As we shall see, we learn much more about the accuracy of our fix if we have 3 LOPs.
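The dependence on separation angle can be sketched with a small-error approximation. For a constant error in both bearings, the fix displacement works out to the error (in radians) times the distance between the two targets, divided by the sine of the angle between the bearings (the function name and units are my own):

```python
from math import sin, radians

def fix_error(eps_deg, target_sep, theta_deg):
    """Approximate fix displacement from a constant compass error
    eps (degrees) applied to both bearings of a two-LOP fix.
    target_sep = distance between the two targets (any unit);
    theta = angle between the two bearings.  Small-error sketch:
    error = eps(radians) * target_sep / sin(theta)."""
    return radians(eps_deg) * target_sep / sin(radians(theta_deg))

# 2 deg compass error, targets 3 nm apart, at several separations:
for theta in (30, 60, 90):
    print(theta, round(fix_error(2.0, 3.0, theta), 3), "nm")
```

For fixed targets, sin(theta) is largest at 90º, which is why 90º minimizes the error; the printout also shows there is not much gain from 60º to 90º, while the error grows quickly below 30º.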

What vessels work best and where to stand
A position fix from 2 or 3 magnetic compass bearings is one of the basic piloting tools in marine navigation... at least for non-steel vessels. On steel vessels there is likely some disturbance of the compass, and that disturbance (deviation) will likely change from one place to another on the vessel. On ships this type of fix is carried out with gyro bearings, but the error analysis presented below will still apply.

Even on a non-steel power boat there can be issues that need to be checked if you are anywhere near the wheelhouse during the sights.  If you have a favorite place to take sights but are not sure about it, stand there and take a bearing to a distant landmark as you slowly swing ship. If there is no deviation, the bearing to the landmark will remain the same on all headings. If the bearing to the same target from the same location on the boat changes as we turn in a small circle, then we know we have a problem. On one boat we used for our Inside Passage training, we could take good bearings from the starboard door of the wheelhouse, but not from the port side door.  From the cockpit of a non-steel sailboat this is rarely an issue.

You can do this test tied up at the dock as well. Just take bearings to several close targets and see if they cross at your location. That is essentially the process presented below, but we add some details, and we want to start off some place where external deviation is not an issue.

Compass choice
For the practice suggested later on you can use any compass you have.  But thinking ahead to options for use underway, the first choice that comes to mind is the "hockey puck" compass.  This has been the bearing compass of choice for sailors for more than 30 years. It can be read to half a degree—which is not to imply the accuracy is that good, but we always want to start off with the best numbers we can read. A compass with index marks only every 5º can be used with practice, but it takes more concentration to interpolate the bearings to a degree.

Another excellent option is a good compass in a pair of binoculars. This has many virtues, not the least of which is you get a better view of the target. When personal vision is limited in twilight, which is a common issue, this becomes a top choice for compass bearings. These cost anywhere from less than a hockey puck (~$120) up to $600 or more for top-of-the-line models.

There are many options for compasses these days. I have not surveyed the market in a long time. Electronic compasses would seem an ideal choice, but this choice takes special care. The primary issue for most of them is that they are very sensitive to roll and pitch, so we need some way to be sure they are very level. Apps often show a bubble level or other graphic aid to ensure the device is level. Electronic compasses also typically offer some means of calibration for local deviation (rotating the unit in some prescribed pattern), but in a sense this just adds to the mystery of the number we read. Some handheld GPS units include an electronic compass; these have the same issues mentioned. Which reminds us that a magnet glued to the bottom of a floating card (a magnetic compass!) is a pretty transparent tool—rather like the wheel when it comes to function and simplicity.

Choosing targets for the bearings
The first step is to choose the best targets when we have a choice. Ideally we want three targets 120º apart, so the goal is to find the three that best match that. If we have just two, then close to 90º is best, but with just two we do not get a real measure of our fix uncertainty, which can be as important as the fix itself. So we concentrate on three sights. We do not really gain by taking more than three.  It is better to take 4 or 5 sights each of 3 targets (1,2,3 1,2,3... not 111, 222...) than it is to take one or two sights of 10 different targets.
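With several candidate targets in view, the choice can even be automated. A brute-force sketch (function names and scoring are my own): score each trio by the smallest angle at which any pair of its LOPs would cross, and keep the best. Three targets 120º apart give the ideal 60-60-60 crossings.

```python
from itertools import combinations

def crossing_angle(b1, b2):
    """Angle (0-90 deg) at which the LOPs from two bearings intersect.
    Bearings 180 deg apart give the same line, i.e. a 0 deg crossing."""
    d = abs(b1 - b2) % 180
    return min(d, 180 - d)

def best_three(bearings):
    """Pick the three targets whose LOPs are best conditioned:
    maximize the smallest pairwise crossing angle."""
    return max(combinations(bearings, 3),
               key=lambda trio: min(crossing_angle(a, b)
                                    for a, b in combinations(trio, 2)))

print(best_three([10, 20, 130, 250]))   # -> (10, 130, 250)
```

Here the 10-130-250 trio wins because its bearings are 120º apart, so every pair of LOPs crosses at a healthy 60º.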

The targets should be as well defined as possible, i.e., a sharp peak rather than a round peak, and as close to you as possible. Of two equally good targets, equally well spaced, take the closer one. If you have your arm around a post with a light on it, the bearing to the light (using the other hand) could be totally wrong and you would still get a good fix (i.e., you know where you are), whereas the bearing to Mt. Rainier (90 mi off) is essentially the same from one side of Puget Sound to the other, so it is useless for navigation. Fixed aids are much superior to floating aids, because we know where they are. Also an obvious issue: the target we use must be identifiable on the chart... unless we are just using compass bearings to find distance off of that object, not caring what or where it is. That we can do with compass bearings alone, but that is another topic.

When moving we want to take the first bearing to the target whose bearing is changing the least with time (near dead ahead or astern), and the last to the target whose bearing changes fastest (on the beam). We do this so we have the minimum DR run between sights for a running fix. This is not an issue for practice at home on land... unless you want to practice running fixes using a bike or car, but that too is another topic.

Practice bearing fixes in your neighborhood
As noted, for this practice to learn the process, plotting, and analysis, it does not matter what compass you use. To drive that point home, we used an iPhone compass app for this exercise. We learn the process using any compass, and indeed our analysis should accommodate the good ones and the bad ones, provided we have multiple sights of each target.

With that background, we look at one way to practice using a Google Earth (GE) screen capture for a chart. We do not need actual coordinates for this, since we can print the picture and use plotting tools; but we do need the scale, which can be read from the GE ruler tool. Be sure to click the N button (top right of the GE screen) to get north up, and also be sure the picture is flat, with no tilt (shift + mouse roll controls this). In lieu of printing and plotting tools, you can also do this with a graphics program as we have done here. Printing, however, offers better hands-on practice.

On the other hand, if you want to import this image as an actual chart in, say, OpenCPN, then put a GE push pin near two opposite corners and record the lat-lon of these. Then you can use those locations to georeference the image in OpenCPN very easily with the WeatherFax plug-in (I need to make a video on that process.)

We used a compass app in an iPhone for this to show that you can use any compass.  Not the best, but usable for this exercise. The targets were three telephone poles. I marked the base of their shadows as the locations. The green circle is where I was standing to do the sights.

Here are the data of the three sights, the averages of which are plotted above. These are expressed as True bearings, which is an option of the phone compass app. There are many free versions of these apps. The compass, like the barometer, inclinometer (heel sensor!), and other sensors in the phone, does not have a native display, so to read any sensor we have to load a third-party app.

Note that the standard deviations of the actual bearing angles measured for each sight happened to be about what was estimated, but that is not really pertinent here, so long as they were not notably larger. It is some level of testimony for the phone app, which after all we just point at the target.

We see that even with this poor compass, we did get a triangle surrounding our actual position. So in one sense we can stop here, and you can use the above procedures to practice basic bearing fixes. You will soon learn that the averages of several sights in rotation are better than just taking three.  Needless to say, this would be a great exercise to do from an anchorage on any day sail. Then you can use real charts or echart programs.

Finding the most likely position from a three LOP bearing fix
For those who want to pursue more details, we carry on to look into the accuracy of the fix and what point inside that triangle we might call the most likely position (MLP). This would have to be considered an advanced topic in navigation, and one that will not often be needed. It is for those circumstances where we want to do the best possible navigation with what we have to work with.

Once a triangle of LOPs has been plotted (often called a "cocked hat"), a common practice is to choose some center value of the triangle as the MLP, such as the intersection of the angle bisectors. This is better justified if the navigator is confident that the accuracies of the 3 lines are the same. If we have reason to believe the accuracies are not the same, or better put, that the uncertainties in the lines are not the same, then such a center choice is not correct. To improve on that we must make some assessment of the uncertainty in each of the lines. That process and what we do with it is discussed below.

We have to first assume there is no local deviation, which, if present, could cause a different error for each direction. In our land-based practice from a fixed point, this would have to be something that rotates with us, like a wrench in our pocket, or steel screws in eyeglasses. A steel telephone pole on the corner would not really matter, as it would shift all sights the same amount. Furthermore, in principle, a compass app could detect this when we rotate the phone in the calibration mode, and correct for it. The pole would just distort the magnetic field where we stand, regardless of which way we are pointed, unlike on a boat where the disturbance rotates with the boat, causing different errors on different headings.

Recall how the compass works. We are standing in a magnetic field that orients the compass card in the direction of that field—it has a magnet glued to its bottom side. Then when we turn the compass to take another bearing, the compass housing and index mark fixed to it rotates around the compass card, which itself is not moving. It swings about a bit, but goes right back to its original orientation, magnet pointing in the direction of the strongest magnetic field, which is called "magnetic north." Electronic compasses work in a different way but that same principle applies.

We have in the practice example a relatively good distribution of targets; not ideal, but not far off.  We can fairly assume—as we must with most magnetic compass bearings—that we have a bearing uncertainty of not better than ± 1º. Even with a best possible magnetic compass, we have the uncertainty of not just the reading of it (this one showed whole degrees only; tenths would be better), but also we have uncertainty of magnetic variation when underway, and potential errors in plotting and reading the plots.

We can nail the variation issue at the geomag web site. We have here 15.8º E, where I was standing,  as of today. Underway you will have a larger uncertainty, unless you install the program geomag on your computer or phone, which is easy to do. Many ECS programs do this automatically for you (OpenCPN has a plug in for this). Note you do not get this from GPS satellites. If your GPS is telling you variation, then the GPS unit itself has this program installed. The satellites tell it where you are, and the software in the GPS unit computes the variation for you.

So if we optimistically assume a 1º uncertainty in each measurement, then we can use geometry to figure how much that offsets the LOPs near the place they intersected, which will depend on how far off they are... again, this is why we want close ones. We do an approximation here. We have an angle uncertainty and want to translate that into a lateral uncertainty—effectively, how wide is the line? One way to estimate the width uncertainty of a bearing line is to call it equal to the tangent of the bearing uncertainty multiplied by the distance off. This in turn can be well approximated with the small-angle rule that is useful for many tasks in navigation, namely that the tangent of 6º is about 1/10. (One application of the rule: if I steer a wrong course by 6º I will go off my intended track by 1 mile for every 10 I sail; there are many applications.) This means that a working uncertainty in bearing lines can be estimated as

sigma = (target distance) /60.

We call this uncertainty "sigma," as it is on some level representing the standard deviation (often abbreviated with the Greek letter sigma) we might expect among a series of sights to the same target. In the LOP of side 2 above, the sigma from this reasoning is ± 1.3 yd. It is almost certainly larger than this, but for now the key issue is that we use the same system for each of the three sights. That is, we could double it for each and it would not matter much. The key issue in this type of analysis is the relative uncertainties of the sides. Below are the MLP data for these three sights.
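The distance/60 rule is easy to check against the exact tangent. Here is a short Python sketch; the helper name `lop_sigma` and the 78 yd example distance are my own, chosen to roughly reproduce the ± 1.3 yd sigma quoted for side 2:

```python
import math

def lop_sigma(target_distance, bearing_error_deg=1.0):
    """Lateral uncertainty ("width") of a bearing LOP at the target:
    distance off times the tangent of the bearing uncertainty."""
    return target_distance * math.tan(math.radians(bearing_error_deg))

# Small-angle rule: tan(1 deg) ~ 1/60, so sigma ~ distance / 60.
exact = lop_sigma(78)   # hypothetical target 78 yd off: ~1.36 yd
rule = 78 / 60          # ~1.3 yd, close enough for plotting work
```

For any realistic bearing uncertainty the small-angle rule agrees with the exact tangent to within a few percent, which is far better than we can plot anyway.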

Below is the triangle expanded showing the most likely position calculated from this data: the black dot inside the light blue ring.

The MLP is the red dot. The plot is scaled to the lengths of the sides, given above. The location is plotted relative to the bottom intersection. The green circle was just an estimate of where I was standing doing the fix, made before we did any analysis. You can solve for this MLP manually with a form we have available, or use a free app (MLP.exe) that computes the location based on the three sides and three sigmas. It is a simple computation that is easily done with a calculator. This work was largely motivated by the need to analyze cel nav fixes, so it is discussed further in this note (Analysis of a Celestial Sight Session). We will have the full derivation and other discussion online shortly. Below are screen caps from the app, which has a direct digital solution as well as an interactive graphic solution.

The light colored lines are marking the sigma values for each line, which we enter with the sliders on the left. The triangle is formed by dragging any corner. The black ellipses outline the 90% and 50% confidence levels. In this example we multiplied the sides by 10 to make a bigger triangle. (The MLP values shown reflect its location on the plot, which is marked in 20-unit steps.) The scale in this type of analysis does not matter. The main point here is that once you have a triangle, the MLP is not necessarily any of the conventional center points of the triangle, such as intersection of medians or bisectors. Each of these conventional center points can be compared using buttons on the bottom left of the app. The MLP depends on the shape of the triangle and on the sigmas, and on a fixed error if present.

The introduction of a fixed error that applies to all bearings complicates the analysis. First, with a fixed error the directions of the LOPs matter, which is why these LOPs show arrows on them. This is how we distinguish three LOPs at 60º apart from three at 120º apart, even though the triangles are identical! With no fixed error (as in this example) the arrows do not matter.  Practice with the app will show that if the fixed error is larger than the random errors (sigmas), then the MLP will actually be outside of the triangle whenever the span of the bearing directions is less than 180º. Again, that is why we ideally want three bearings at 120º apart. Then a fixed (unknown) error will just make the triangle larger, but the MLP will still be located inside of the triangle. With our app you can play with various configurations to study this behavior.

Again, this is an attempt to get the very most out of our navigation measurements, which is not always needed. This requires extra analysis, and in particular needs realistic estimates of the uncertainty in each of the lines. This is always possible on some level, i.e., a compass bearing line on a chart will never be more accurate than ± 1º, and twice that is more likely. Nevertheless, even with these rather large uncertainties, we can obtain a final fix precision that is notably better than might be suspected based on the uncertainties in the individual lines—assuming we can make realistic assessments of these.  Sometimes doable, other times not.

We also see numerically and graphically with the app what many navigators know intuitively. If one of the lines is notably better (smaller sigma) than the other two, then the fix will be on that line, and the other two just serve to determine where on that line.
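That intuitive behavior falls out naturally if the MLP is framed as a weighted least-squares fit to the lines, each weighted by 1/sigma². The Python sketch below is a minimal version of that idea only, not the actual MLP.exe formalism; it ignores fixed errors, and the helper `mlp_from_lops` is hypothetical:

```python
import math

def mlp_from_lops(lops):
    """Most likely position as a weighted least-squares fit to straight LOPs.
    Each LOP is (x0, y0, theta, sigma): a point on the line, the line's
    direction in degrees (math convention, from the x-axis), and its lateral
    uncertainty. Minimizes sum((perpendicular distance / sigma)**2)."""
    a = b = c = rx = ry = 0.0                # 2x2 normal-equation sums
    for x0, y0, theta, sigma in lops:
        t = math.radians(theta)
        nx, ny = -math.sin(t), math.cos(t)   # unit normal to the line
        w = 1.0 / sigma ** 2
        d = nx * x0 + ny * y0                # line offset: n . p0
        a += w * nx * nx; b += w * nx * ny; c += w * ny * ny
        rx += w * d * nx; ry += w * d * ny
    det = a * c - b * b
    return (c * rx - b * ry) / det, (a * ry - b * rx) / det

# Three lines that happen to concur at (1, 1): MLP ~ (1, 1).
p1 = mlp_from_lops([(1, 0, 90, 1.0), (0, 1, 0, 1.0), (0, 2, -45, 1.0)])
# One line 100x tighter than the other two: the fix lands on that line.
p2 = mlp_from_lops([(1, 0, 90, 0.01), (0, 0, 0, 1.0), (0, 2, 0, 1.0)])
```

In the second example the vertical line x = 1 has a sigma 100 times smaller than the two horizontal lines, and the solution lands on it, with the weaker lines only fixing the position along it, which is exactly the behavior navigators know intuitively.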

Fix error due to a constant error in both bearings of a two-LOP fix
We can compute the error in a 2 LOP fix if there is a constant error in the bearing to each of them. For two targets a distance (D) apart that are separated by an angle (A) from your perspective, a bearing error (E) will cause the fix to move by an amount (fix error FE) given by

FE = D x sin(E)/sin(A)

For example, when the bearing error is 1.5º for two sights to targets 2.2 miles apart, that are separated by 45º, the fix will be in error by 2.2 x sin(1.5)/sin(45) = 0.08 nmi.

In the neighborhood practice example we have three pairs of 2 LOP sights. If we check 1 and 2, the fix error of using just those two (with a 1º compass error) would be

FE = 112 yds x sin(1º)/sin(326.5º - 192.8º) = 2.7 yds.

Note that E will always be a small angle, so you can solve this equation for E = 1º, and then just scale it. In the last example, if the E had been 2º, we would have got 2x2.7 = 5.4 yds.
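The formula and the worked example above are easy to script. A minimal Python version (the function name is mine):

```python
import math

def two_lop_fix_error(dist_between, error_deg, angle_deg):
    """Fix displacement from a constant bearing error E applied to both
    LOPs: FE = D * sin(E) / sin(A), where D is the distance between the
    two targets and A is the angle between the two bearings."""
    return (dist_between * math.sin(math.radians(error_deg))
            / math.sin(math.radians(angle_deg)))

# Neighborhood example: targets 112 yd apart, 1 deg error,
# bearings 326.5 and 192.8, so A = 133.7 deg -> ~2.7 yd
fe = two_lop_fix_error(112, 1.0, 326.5 - 192.8)
```

Because sin(E) is nearly linear for small E, doubling E very nearly doubles FE, which is the scaling trick noted above.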

I might note that the solution for this error seems to have been wrong when it first appeared in the 1977 edition of Bowditch, and the rendering of it has just gotten worse with subsequent versions, leaving it pretty mangled up in the 2017 edition. It appears they copied it in 1977 from the 1938 British Admiralty Manual of Navigation, Vol 3, which itself somehow ends up with a formula that mixes up radians and degrees as well as nautical miles and arc minutes, and that seems to have been copied without noting what they had done.

You might wonder how it can be that the US Bible of Navigation can have this fundamental point wrong for so long? The answer from a colleague here at Starpath is: "Because you work on things no one cares about."   (Maybe I am misinterpreting what is in Bowditch? Let me know and I will remove this.)

Below is a numerical example for two bearings (040 and 100) with an error of 5º using two targets that are 2.24 nmi apart, plotted in OpenCPN.

Monday, October 30, 2017

Analysis of a Celestial Navigation Sight Session

This note is background for another article to be posted shortly by Richard Rice and me entitled "Most Likely Position (MLP) from Three LOPs."  We use the sights below as one example in that article, but do not include these details that must precede an optimized cel nav fix. We go over this process here and present a free Windows app for computing MLP along with a work form for solving for it manually.

We are analyzing a sight session from the book Hawaii by Sextant (HBS), which is a record of the last voyage I did by pure cel nav.  It was in July 1982, Victoria, BC to Maui, HI. No electronics at all except RDF, and that did not work well. The lack of electronic navigation was not a choice willfully made; there simply were no options at the time, which in light of how far we have come, was not that long ago. There are many sight sessions in the book; all are analyzed in detail. It is set up as an exercise in ocean navigation that is intended to be used as a training tool to master skills in cel nav and ocean navigation in general.  When you have nothing but cel nav to go by, it is important to do the best you can with each sight, and to maintain good logbook procedures.  This of course remains true today.

Here is a picture of how that fix might show up on a plotting sheet of the DR track. We are looking at the Jupiter-Vega-Altair fix at Log 2614. And indeed—in light of the present topic—we do not have this fix plotted in the best possible position. In the actual voyage we just took a center value of the intersecting LOPs as the fix. Most star sights were pretty good, so this was not an issue at the time, but now we want to concentrate on doing the best possible analysis of a single sight session. This process can be helpful when sights are limited or in question, and it is the only fair way to evaluate what we can and cannot achieve with cel nav. This type of extra work is not often required in mid ocean, but could be valuable on approaches.

Figure 1
(In the book HBS, the fixes along the DR track as shown above were not made from the optimum choice of sights. Each body, as seen below, had several sights, and it is the task of the navigator to analyze these to come up with the best representative of the full set. That process is left as an exercise in the book, with detailed instructions and many examples. The sights used for the sample plots (such as the one above) were selected almost randomly from the sights taken, with the hopes that the readers strive to improve the choices and overall navigation. In short, just one sight of each body was selected, so it had no benefit at all of the others taken. The plots included are just intended to show what the layout of the DR track would look like—although the plotting is accurate, once the choices were made.)

[ Note in passing, since it shows on the plot sheet: The narrow running fix at 1227 was clearly not a very good one as plotted here, but it got corrected with a long LAN measurement with plenty of sights on both sides of LAN to get a good fix. Had we just carried on with good DR we would have been at the 2240 location fairly well. On the other hand, that running fix was not as bad as shown in this plot if it had been analyzed more carefully. The plots in the book, again, just take two random sights from each session to show a full work form and plot. We encourage readers to improve on what is shown, given the full set of data. ]

The actual 1982 navigation logbook page of the log 2614 sight session showing analysis done at the time is below.

Figure 2

First some background notes.  The actual sight reduction at the time was done with an early version of the HP-41 nav calculator that we added a few of our own functions to. With this type of sight reduction we do not use an assumed position, but do all reductions from a common DR position, given at the top of the page.

The vessel was moving at 7.3 kts on course 227 T. The average speed during the sight session, which lasted 41 minutes, was figured by subtracting log readings taken before and after the session. A standard procedure when preparing to advance all sights to a common time.

The altitude intercepts (a-values) in black pencil were done without advancing the individual lines to a common time. The red pencil values are the advanced ones, which were figured from a' = a + D x cos(C - Zn), a mathematical way to advance the sights that is explained in the book. They are all advanced to the time of the last sight at 2240 watch time.
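That advance formula is simple enough to script. A hedged Python sketch, where the sign conventions are my assumptions (a in arc-minutes with Toward positive, D the run in nmi, C and Zn in degrees true) and the example numbers are made up, not from the logbook:

```python
import math

def advance_a_value(a, run, course, zn):
    """Advance an altitude intercept to a later time:
    a' = a + D * cos(C - Zn). Since 1 nmi = 1' of a-value,
    the units are consistent."""
    return a + run * math.cos(math.radians(course - zn))

# Hypothetical: a = 1.5' Toward, 2.0 nmi run on course 227 T, Zn 200 T
a_advanced = advance_a_value(1.5, 2.0, 227, 200)   # ~3.3' Toward
```

As a sanity check on the geometry: running directly toward the body (C equal to Zn) adds the full run to the intercept, while running perpendicular to the azimuth leaves it unchanged.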

It appears that the last Vega sight was discounted at the time because it was so far off the others. In retrospect, that was a mistake!  If you advance that one you get a = 3.6 A, which is about the same as one that I did keep at the time. (Below we describe the argument used in HBS for throwing both of them out, but that was not applied to this session when underway.)

For the first step in this analysis, we now use all of the sights. It is crucial that they be advanced. The fix in this case would be wrong by a couple miles if that were not done. It is taken for granted that all sights must be advanced to a common time.

When the sights are advanced to a common time, we can (as a first approximation) just average them to get an average LOP for each body... assuming the Zn has not changed more than a degree, which is the case here.

So with that averaging of the red ones we get:

Jupiter: a = 3.0' A 200
Vega: a = 3.0' A 058
Altair: a = 5.1' A 090

Note that including the last Vega sight we threw out underway changed the a-value from 2.8 (in the logbook) to 3.0.

These sights are shown below as the light purple lines. The blue ones actually plotted were from our fit-slope analysis, described below. This is an improvement over simple averaging.

Figure 3

Some background on this plotting: When doing sight reduction by calculator from a common DR position, the plotting is very easy. We just make a plotting sheet centered on the DR, expanded as needed. In this case, what is normally taken as 60 nmi between parallels, we just change to 6 nmi between parallels. The plotting sheet section shown is about 12 nmi square. Then the plotting is very fast and accurate, all done from the center of the compass rose, with a scale marked in tenths of a mile.

In HBS, we explain what we call the fit-slope method. That is a way to decide from a set of sights which ones are the most consistent with the known slope of the star height versus time. In other words, for five sights taken from five different DR positions (the boat is moving), we can calculate what the heights should be if we were exactly at those positions. We don't expect to get exactly those values, as we do not know if we were on that track or not, but the slope of these computed heights will be the same for anywhere near those locations. To find the slope, we don't need to calculate all of them; we just need a calculation of height from a convenient time before and after the sight session.

When we apply that analysis we get more insight into which sights were likely good, and more to the point, which ones were likely not as good. If we have 3 sights that increased at the right slope and one that was notably off of it, then we can throw that one out and average the rest. Or sometimes we get one that is off and throw it out, and then the others scatter above and below a line at the right slope, so we can average those, or better still, take one that is right on the line. Doing this we get the a-values below, which are plotted as the blue triangle above. We get a smaller triangle at a slightly different location. That alone is not justification for the method, which has a sounder analytical basis. We encourage anyone who wants to improve their cel nav accuracy to look into the fit-slope method, explained in detail with many examples in HBS.

Jupiter: a = 2.7' A 200
Vega: a = 2.6' A 058
Altair: a = 4.7' A 090

The fit-slope analysis almost always improves the sights. It is effectively a more logical way to do the averaging of the sights. It also demonstrates why it is so crucial to take 4 or 5 sights of each star. It is far better to take 4 or 5 sights of three well positioned stars than it is to take 1 or 2 sights of 10 different stars.
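The core of the fit-slope idea can be sketched in a few lines of Python. This is a simplified illustration only, not the HBS procedure itself, and all the numbers in the example are hypothetical: we take the slope from heights computed before and after the session, fit a line of that fixed slope through the observations, and inspect the residuals.

```python
def fit_slope_residuals(times, heights, hc1, t1, hc2, t2):
    """times in minutes, observed heights (Hs) in arc-minutes; hc1/hc2
    are computed heights bracketing the session at times t1/t2. Returns
    each sight's departure from the best-fit line of the known slope."""
    slope = (hc2 - hc1) / (t2 - t1)          # computed slope, '/min
    # best vertical offset for a line of this fixed slope (least squares)
    offset = sum(h - slope * t for t, h in zip(times, heights)) / len(times)
    return [h - (offset + slope * t) for t, h in zip(times, heights)]

# Four sights of a star rising at 10'/min; the last one is off the slope
resid = fit_slope_residuals([1, 3, 5, 7], [2011, 2029, 2050, 2075],
                            2000, 0, 2100, 10)
```

In this made-up session the first three sights track the computed slope within a couple of arc-minutes while the last departs by roughly 4', so it would be the candidate to throw out before averaging.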

Below, for example, shows all the sights plotted (the red a-values in the logbook), with the fit-slope choices of LOPs now marked in light blue.

Figure 4.
It seems one could do some sort of filtering on this display alone, but the results of the fit-slope method do not always match what we might conclude from such a plot of all sights. In short, you cannot tell by the spread of the sights alone which are the good ones. However, we will indeed use the spread (standard deviation) of these multiple sights in the final analysis, below. The red circle is 9 nmi in diameter.

The blue triangle is the plot of the 3 LOPs listed above. For the most likely position (MLP) analysis we will want to know the lengths of the sides, 1.9 nmi, 1.7 nmi, 3.0 nmi, which can be read directly from the plot. (This triangle is less than half the size we get from a random selection of LOPs, as plotted in Figure 1.) And we need to know the standard deviation of each of these sights.

One can argue about the right way to know the best variance on each sight session, but the definition of standard deviation is clear, and likely valid if the sights are truly independent of each other. This is why we encourage navigators to give the sextant knob a very good turn off of the sight after reading it, so that the next measurement is independent. Just following a star up or down with small adjustments is not a good way to get independent measurements.

The standard deviation is the square root of the sum of the squares of the differences between individual values and the mean value, divided by one less than the total number of sights. If you are using Excel, the function STDEV computes this value for a list of measurements.

Using that we get these standard deviations (sigmas) for the three bodies (advanced and slope-fitted)

Jupiter: sigma = 0.6 nmi
Vega: sigma = 0.6 nmi
Altair: sigma = 0.9 nmi

Without pursuing the validity of the standard deviation for such sights, these are indeed reasonable values for good cel nav sextant sights.
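The definition above maps directly to code. A quick Python check, equivalent to Excel's STDEV and Python's statistics.stdev, both of which use the n - 1 divisor (the example a-values are hypothetical, not from the logbook):

```python
import math

def sample_stdev(values):
    """sqrt( sum((x - mean)^2) / (n - 1) ): the sample standard deviation."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

# Hypothetical advanced a-values for one body, in arc-minutes:
sigma = sample_stdev([2.4, 3.1, 2.9, 3.6])   # ~0.50'
```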

And now after that outline of the process,
we get to the main point of this background information... 

Once we have the best triangle we can come up with, where is our most likely position (MLP) within or near that triangle? This is the same question we would have if these were three compass bearing lines, or any other triangle of three LOPs. It is a fundamental question in marine navigation.

And that is what we have a new solution for, which is noteworthy because we believe we have treated the standard deviations and fixed errors correctly, and we can also formulate the solution in a way that is easy to compute manually underway with a simple calculator.

The manual approach is crucial for marine navigation dependability, but we also offer a free Windows app that computes the answer directly, based on entering just the three sides and three sigmas. The formulas can also be easily incorporated into a spreadsheet or a programmable calculator.

In addition to that, we have a free graphic app (to be released shortly) that lets users vary the triangle and sigmas to study how these affect the  MLP. This tool includes the addition of a fixed error that applies to each sight. Once you choose to add a fixed error, then the direction of the LOPs makes a difference, thus you see below the graphic solution has arrows on the LOPs. For cel nav sights, these arrows are perpendicular to the azimuths of the sights.

Figure 5
The graphic image above is set to closely match the example above. These sights had no known fixed errors, so the arrows do not matter. With this tool you can drag the points around to match your triangle and then experiment with the sigmas and fixed error. The light colored lines either side of the LOPs mark the extent of the sigmas you entered.  You can sometimes tell from this analysis if your data requires a fixed error to be consistent with your choice of sigmas. The ellipses mark the 50% and 90% confidence levels, discussed in the other article.

There is also a work form you can download and use to solve for the MLP by hand. A section of the form is shown below.  There are 5 solutions per page, each showing a numerical example and ways to define the triangles, which is needed because a purely manual solution requires the navigator to measure the location of corner Q3 (x,y) relative to Q1 (0,0), in addition to measuring the 3 sides, and assigning 3 sigmas.

Figure 6

Once the 3 LOPs are plotted on your chart, it should take just a couple minutes to measure what is needed and fill out the form to find the MLP, given relative to Q1. The orientation of the triangle does not matter, and it does not matter which corner you call Q1. The diagrams show the labeling once you choose Q1.

Below is a manual solution compared to our digital solution which is part of the free MLP app.

Figure 7

The top is a spreadsheet solution to the manual computation that can be done with a calculator. I do not get precisely the same Px and Py manually as when computed, due to the precision of reading the values from the chart, but they are close. Also it is just a coincidence that Px happens to nearly equal the triangle side "s3" in this example, within the precision used.

With a purely manual solution we must measure location of Q3 (x=2.5, y=1.5), but this is not needed for the digital solution with the app. With the app or a spreadsheet you just enter sides and sigmas, and the solution takes seconds, not minutes.

Below is the plot used to measure Q3 and to plot the resulting MLP.

Figure 8

We should have the main note on this solution to MLP online shortly (this week I hope), with an outline of the derivation and a link to the graphic app. If you have sight data available with enough measurements or other ways to assign the sigmas then you can practice applying this. As explained in the other note, if the sigmas are all the same and there are no fixed errors, the MLP reduces to what is called the symmedian point, which is known by some navigators, but rarely used. The interesting behavior shows up when these sigmas are not the same, and when there is a fixed error folded in as well. The formalism we have is easy to incorporate into any computed solution, and indeed can be solved by hand if needed.


Please send us your thoughts, suggestions, experience, etc with this solution to MLP. They will be much appreciated.

Download MLP form.

Download MLP.exe   Computes MLP from 3 sides and 3 sigmas.

The Mac and PC versions of the interactive graphic solution should be available shortly.

Tuesday, October 24, 2017

100-foot Waves Expected near Aleutian Islands

There is a monster storm in progress in the Bering Strait headed to the Aleutians that promises to generate extremely high waves today and tomorrow. The unusual shape to the Low is a partial result of two storms coming together, which resulted in this long fetch. An isolated system would be more round, with shorter fetch.

By tomorrow about 06z the wave forecasts call for 60 ft significant wave height of combined seas (SWH).

WW3 model forecast displayed in LuckGrib. The color bar is for wave height in meters: light blue is 17-18 m (60 ft), pink is 16 m, red 14 m, yellow 10 m.

We see SWH of combined seas of 60 ft, but strangely this appears to be mostly all wind waves. The swell component in that region at the time is very low... in fact the WW3 model does not give swell data in the high wave region at all.  They just give combined seas and wind waves, and throughout that region the wind wave heights are within a few feet of the combined seas heights. The forecasted hurricane force winds are completely dominating the sea state.  Swell directions along the perimeter of the system are in all directions. This storm will indeed generate huge swells for other places later on, but at the actual storm site the prevailing swell seems to be very low.

SWH is the average of the highest one third of all waves in a statistical distribution. Other wave heights in that distribution are given in this table from our text Modern Marine Weather.

With SWH at 60 ft, we expect the average of the highest 10% to be about 78 ft and 1 in 2,000 to be 120 ft. These waves have a period of about 17 seconds, so 2,000 x 17 = 34,000 seconds, which is about 9.4 hours. So very roughly every 9 hours or so a region would have a wave of 120 ft, and it would not be considered a rogue wave. It is just at the far edge of the expected distribution of wave heights.
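Those factors follow from the Rayleigh distribution of individual wave heights. The sketch below uses the standard Rayleigh values, which are close to, but not identical with, the table in Modern Marine Weather (it gives about 76 ft rather than 78 ft for the average of the highest 10%); the function name is mine:

```python
import math

def wave_stats(swh, period_s, n=2000):
    """Rayleigh-distribution estimates from significant wave height (SWH).
    Returns (average of the highest 10%, height exceeded by ~1 wave in n,
    rough hours between encounters with that 1-in-n wave)."""
    avg_high_10 = 1.27 * swh                     # standard Rayleigh factor
    h_1_in_n = swh * math.sqrt(math.log(n) / 2)  # ~2.0 * SWH for n = 2000
    wait_hr = n * period_s / 3600                # n waves at this period
    return avg_high_10, h_1_in_n, wait_hr

hi10, h2000, hours = wave_stats(60, 17)   # ~76 ft, ~117 ft, ~9.4 hr
```

With SWH 60 ft and a 17-second period this reproduces the roughly 120 ft, once-per-9-hours wave discussed above.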

But there remains a valid question of where the base 60 ft wave height comes from. Note too that an average period of 17 seconds is more typical of swells than wind waves, but these are very big waves and this is indeed a normal period for waves this size. In fact, the waves themselves are consistent with this wind pattern, which has had a fetch of some 700 nmi for a day or more. These stats are compiled in the diagram below.

The winds have been 60+ for a long time, so just follow the 65 kt line across the diagram to a fetch of 700 or so and you see 60 ft waves with a period of 17 seconds... also note that the duration is 36 hr, all consistent with the present system.

So, in short, we get big waves, as would be expected in this system.  A few pics below present other specifications of the system.

This pic shows the swell direction on the edges and lack of swell ht forecast in the system.  The red line is 863 nmi long, and this storm needs only 700 or so to fully develop these seas.

Below are images from the WW3 site, showing period and direction of peak wave energy, SWH of wind waves, and wind wave direction and period. These can be seen by googling "NCEP model guidance" to find the page, then choosing NPAC (North Pacific) and the WW3 model, then the time frame and product.

Another valuable presentation of even more WW3 parameters is at

Note the red patch of 18-sec period at the storm are waves, but the red 18 seconds off of Baja and farther to the SE are swells.  See below to note there are no wind waves in the Baja region... and the ones that are there farther offshore are going opposite directions, likely remnants of a front that is long past.

Here is an ASCAT satellite pass measuring true winds of 50 to 66 kts over a 500 mile swath at about 4 PM PDT today.  Viewed in LuckGrib.  This is a big system! Not clear how far it extends to the west.  Watch the ASCAT data online to see real values tomorrow.  LuckGrib can show these in GRIB format, as can the Ocens Grib Explorer Pro for iPad.


Here is a follow-up for the next day, Thursday, Oct 26. Reports of measurements show SWH of 57.8 ft. Nice to see that science works! This gcaptain report mixes up terminology a bit, but actual values will be known better later on.