Part 2


       Chapter 2: My SOSUS Years: 1966-1975

            In 1952, AT&T management was asked by the Navy Department to investigate the feasibility of installing an undersea sound surveillance system that would allow the Navy to detect and track potentially hostile submarines in the Atlantic and Pacific oceans. The motivation for this effort was the discovery in the late 1940s by ocean scientists at the Woods Hole and Scripps oceanographic institutions of a sound-conducting channel in the oceans. They found that the changes in pressure and temperature as a function of depth created a sound-refracting effect that enabled ocean sounds to propagate within a deep ocean channel, which came to be called the SOFAR layer. While higher-frequency sounds were attenuated over short distances within this layer, low-frequency sounds, in the range of 10-200 hertz, could propagate for many hundreds of miles. This discovery led US Navy strategists to the idea of creating an undersea listening system that would process sounds propagating in the SOFAR layer and, they hoped, enable them to detect the low-frequency sounds radiated from the propulsion and other rotating systems on submarines.

The Navy officials knew that AT&T’s subsidiary, Western Electric, had the experience in laying undersea telephone cables for AT&T’s long-distance service across the Atlantic ocean to Europe, and across the Pacific ocean to Asia, and Bell Labs had the expertise in processing and analyzing acoustic signals. So, in the cold war standoff between the US and the USSR, both of which had nuclear-missile submarines patrolling the oceans within range of each other’s territory, the Navy persuaded AT&T to commit Bell Labs and Western Electric to develop and deploy a system of hydrophone (underwater microphone) arrays that could silently detect and identify the sounds radiated into the SOFAR layer by submarines. Over the next decade hydrophone arrays were manufactured and installed by Western Electric on the continental shelves of the Atlantic and Pacific coasts of the United States at a depth that provided detection of low-frequency acoustic signals propagated in the SOFAR layer. The signals from the arrays were conducted by undersea cables to nearby onshore Navy stations, called NavFacs. This was a covert listening system, giving the Navy “ears” in the ocean, and even the system’s code name, SOSUS (sound surveillance system), was classified(!), and the cover name for the NavFacs was “Oceanographic Research Stations”.

A look at a world map shows that the US had a big advantage in undersea surveillance, since all Soviet submarines had to pass on either side of Iceland in order to get into the Atlantic, and near Japan or near the Aleutian island chain in order to get into the Pacific. Western Electric laid hydrophone arrays in the ocean off both the east and west coasts of Iceland, connected to a NavFac in Keflavik, Iceland, and in the areas around Japan and the Bering Sea, connected to NavFacs on Guam and on Adak island in the Aleutians. These were the “trip-wire” sites, used to detect Soviet submarines deploying into, or returning from, patrol stations in the Atlantic and Pacific oceans. Additional hydrophone arrays were laid and connected to NavFacs on the eastern Canadian and US coastlines, and at Bermuda and several Caribbean islands; also to NavFacs along the US west coast and at Oahu and Midway islands for the Pacific area. As ‘holes’ in the coverage were discovered, other arrays and stations were added. The last installations I was aware of were two arrays in the eastern Atlantic, connected to a NavFac located in Brawdy, Wales. The ocean chart below shows the SOSUS deployment covering the North Atlantic region in 1974:




Locations of the Atlantic SOSUS NavFacs where the signals from their undersea hydrophone arrays were beam-formed and analyzed for acoustic ‘signatures’ of USSR submarines. Clockwise from the bottom are Barbados, Antigua, Puerto Rico, Grand Turk, San Salvador, Eleuthera, Bermuda, Cape Hatteras, NC, Lewes, DE, Nantucket, MA, Shelburne, Nova Scotia, Argentia, Newfoundland, Keflavik, Iceland, and Brawdy, Wales, UK. In 1957 the extension of SOSUS to the Eastern Pacific began, with the installation of NavFacs and associated arrays at San Nicolas Island, Point Sur, and Centerville Beach, CA; Coos Bay, OR; and Pacific Beach, WA. Still later, additional arrays would be terminated at Guam, Midway, Adak (in the Aleutians), and Barber’s Point, Oahu, HI. (source:


The hydrophone arrays comprised a string of hydrophones, which converted underwater sounds into electrical signals that were conducted back to the NavFacs via the undersea cable. At the NavFacs, a signal processing technique called ‘beam forming’ was applied: specific time delays were added to the hydrophone signals so that sounds arriving at the hydrophones from a particular direction were reinforced, while signals from other directions were attenuated. In this way, 15 to 20 separate directionally-sensitive beams were formed. The resultant signal for each beam was converted to its frequency spectrum and recorded on a strip chart, using a sound frequency-time display called a LOFARgram, similar to the sonagram display that was developed earlier at Bell Labs to analyze the harmonic components of speech [Note 2-1]. The display room in the NavFacs contained a battery of LOFARgram display consoles, one for each beam formed with the signals from the hydrophones in the arrays.
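In modern digital terms, this delay-and-sum beam forming can be sketched in a few lines of Python. This is only an illustration of the principle, with an idealized uniform line array, invented parameter values, and crude integer-sample delays; the actual SOSUS beam formers were built as special-purpose hardware:

```python
import numpy as np

def delay_and_sum(signals, spacing_m, sound_speed_mps, fs_hz, steer_deg):
    """Steer a uniform line array of hydrophones toward steer_deg
    (0 = broadside) by advancing each channel to undo the arrival
    delay of a plane wave from that direction, then summing."""
    n_hydrophones, n_samples = signals.shape
    # Arrival delay at each element for a plane wave from steer_deg.
    element_delays = (np.arange(n_hydrophones) * spacing_m
                      * np.sin(np.radians(steer_deg)) / sound_speed_mps)
    beam = np.zeros(n_samples)
    for ch, delay in enumerate(element_delays):
        shift = int(round(delay * fs_hz))     # delay in whole samples
        beam += np.roll(signals[ch], -shift)  # advance to align the wavefront
    return beam / n_hydrophones
```

A plane wave simulated from 30 degrees off broadside sums coherently on a beam steered to 30 degrees and largely cancels on a beam steered elsewhere, which is how the 15 to 20 directional beams discriminated among arrival directions.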

Natural sea noises have a broad, non-harmonic range of frequencies, so they appear as a random distribution of dots on the SOSUS display. Any periodic signal, such as the rotation of a submarine screw or the pulsation of pumps on a submarine, contains a specific set of frequency components (like the overtones of human speech, or of musical instruments), so it appears on the display as a set of dark marks at a fixed set of frequencies (hence at the same points on the horizontal scale of the display for each pass of the marking pen). Over time, this created lines at particular frequencies on the upward-moving chart.
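The effect is easy to reproduce digitally. The sketch below (synthetic data, with invented ‘shaft-rate’ harmonics at 25, 50, and 75 Hz buried in noise) builds a crude LOFARgram as a stack of short-time spectra, in which the periodic source stands out from the sea noise as persistent lines at a fixed set of frequencies:

```python
import numpy as np

def lofar_gram(signal, fs_hz, n_fft=1024):
    """A crude digital LOFARgram: successive FFT magnitude spectra,
    one row per time slice -- the moving strip chart, in effect."""
    n_slices = len(signal) // n_fft
    rows = []
    for k in range(n_slices):
        segment = signal[k * n_fft:(k + 1) * n_fft] * np.hanning(n_fft)
        rows.append(np.abs(np.fft.rfft(segment)))
    return np.array(rows)  # shape: (time slices, frequency bins)
```

Averaged down the time axis, the harmonic bins stand far above the median noise bin; that time-integration is what made the lines visible on the paper charts.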




A SOSUS display room in a multi-array NavFac with consoles continuously recording the LOFARgrams for each beam on each array. Navy sonar technicians on watch walked along the aisles scanning each display for a reportable contact.




A LOFARgram display, looking up (back in time), with the horizontal axis showing the frequency spectrum, from about 10-200 Hz, of the ocean sounds detected on one of the beams of the hydrophone array terminating at a SOSUS NavFac. Note the steady dark line near the center that had recently shifted down in frequency, probably indicating a change in speed or rotation rate of some equipment on the sound source; at that same time, some higher-frequency harmonics disappear. The scattered speckling on the display represents non-harmonic ‘noise’ in the ocean. The strong harmonic signal can also be seen on the two adjacent beam displays. The time-integrated pattern of harmonic lines like this can represent the “signature” of a particular submarine class, or even a particular submarine.


Some of the naval personnel reading these charts eventually became very skilled in recognizing the spectral line patterns (‘signatures’) of the various Soviet submarines, and in discriminating them from the signatures of US, Canadian, and British submarines (which had been engineered to radiate less sound). So, for example, a pattern picked up near Iceland that matched the characteristic signature of a Soviet submarine class, or that did not correlate with any of the NATO submarine signatures, would be presumed to be an unfriendly Soviet submarine. The start time of contact, its estimated bearing, the spectral line pattern of its signature, and the times of any changes were transmitted over secure data lines to the Evaluation Center (EC) at the Norfolk Naval Base, and from there all Atlantic NavFacs were alerted to be on the lookout for a target with that particular acoustic ‘signature’. If a single station held the contact, it would report a bearing to the target and sometimes a rough range estimate. If two stations detected the target and reported bearings at close to the same time, then the EC could plot the two bearings and get a ‘cross-fix’, an estimated position where the bearings from the two arrays crossed on the ocean charts. If a sequence of such cross-fixes was obtained, a predicted track of the target could be established. However, simultaneous detections from two of the widely separated listening stations were infrequent, so the tracks of the submarines were often difficult to establish.
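On a flat chart, computing such a cross-fix amounts to intersecting two lines of bearing. The short Python sketch below illustrates the geometry; the station positions and bearings are invented for the example, and a real plot would of course account for the chart projection and the earth’s curvature:

```python
import numpy as np

def cross_fix(sta1, brg1_deg, sta2, brg2_deg):
    """Intersect two lines of bearing on a flat chart.
    Coordinates are (x east, y north); bearings are degrees
    clockwise from north, as reported by the stations."""
    u1 = np.array([np.sin(np.radians(brg1_deg)), np.cos(np.radians(brg1_deg))])
    u2 = np.array([np.sin(np.radians(brg2_deg)), np.cos(np.radians(brg2_deg))])
    # Solve sta1 + t1*u1 = sta2 + t2*u2 for the ranges t1, t2.
    a = np.column_stack([u1, -u2])
    rhs = np.asarray(sta2, float) - np.asarray(sta1, float)
    t1, _t2 = np.linalg.solve(a, rhs)
    return np.asarray(sta1, float) + t1 * u1
```

For instance, a bearing of 45 degrees from a station at the origin and a bearing of 315 degrees from a station 200 miles to its east cross at the point (100, 100).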

To supplement the SOSUS information, the Navy would often send its Lockheed P3 anti-submarine patrol planes out to the estimated target area, where they would drop a pattern of sonobuoys. When each sonobuoy hit the water, it lowered a hydrophone and transmitted the hydrophone signal back to the plane by radio. If any of the sonobuoys detected the target, its location would help to narrow down the submarine’s position. Another means of fixing the target’s position occurred if several NavFacs recorded a sound transient from the target, for example when pumps were started up or turned off, or when the submarine blew air from its ballast tanks. If three or more NavFacs reported the time of the transient sound, this could provide an accurate time-difference fix on the target [Note 2-3].

Because at least three stations had to detect a transient sound event, time-difference localizations were fairly rare in the tracking of Soviet submarines. However, I remember one significant case where it was used. In 1968, a Soviet nuclear-missile submarine had an accidental explosion that caused it to sink in the North Pacific, killing the entire crew. The sound from this explosion was so strong that it was picked up by many of the NavFacs, and it enabled the US Navy to accurately locate the position. Using this position estimate, a US patrol submarine was able to pinpoint the sunken submarine using its active sonar scanner. Amazingly, the CIA then mounted a deep-ocean salvage operation, code-named “Jennifer”, to retrieve it from the ocean floor. Had it succeeded, it probably would have set off a military confrontation, since Soviet Navy ships were also in the general area, searching for their missing submarine. (There is material for a Hollywood thriller here.)

When my group had become familiar with the SOSUS operations, the thing that struck us was how little of the target data reported by the NavFacs was being used to establish an estimated position and heading of the target submarine. The reason was that simultaneous detections by two or more NavFacs were required for localization: only when two or more NavFacs reported contacts on the same target, with bearing data that corresponded closely in time, could the EC establish a valid cross-fix. Unlike active sonar contacts, the passive SOSUS detections provided direction but no reliable range data, because the complex path of sound propagating in the ocean made range estimates highly unreliable, unless undersea topographic conditions happened to provide a good one. Likewise, the occasional time-difference data from transient sounds was usable only if three or more NavFacs reported the same event.

Because of this problem, I initiated a research project based on a mathematical model of the dynamics of submarine motion and a technique called the Kalman filter algorithm to statistically combine every reported bearing and time-difference measurement with a model-predicted estimate of the target’s position at the time of each measurement. The program also used the error covariance computed at each iteration to generate an elliptical region about the estimated position, representing the two-sigma (86%) level of confidence in the maximum-likelihood position.
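The MST code itself is long gone, but the heart of the approach can be sketched in modern form. The fragment below is an illustrative extended Kalman filter with a constant-velocity motion model and a single bearing measurement; the state layout, noise values, and units are invented for the example and are not those of the actual program:

```python
import numpy as np

def predict(x, P, dt, q=1e-4):
    """Model prediction: state [x, y, vx, vy] under constant-velocity dynamics."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    Q = q * np.diag([dt ** 3 / 3, dt ** 3 / 3, dt, dt])  # simple process noise
    return F @ x, F @ P @ F.T + Q

def update_bearing(x, P, station, bearing_rad, sigma_rad):
    """Fold one reported bearing (radians clockwise from north) and its
    standard deviation into the predicted state -- the statistical
    combination of measurement and model described above."""
    dx, dy = x[0] - station[0], x[1] - station[1]
    predicted = np.arctan2(dx, dy)
    r2 = dx * dx + dy * dy
    H = np.array([dy / r2, -dx / r2, 0.0, 0.0])   # Jacobian of the bearing
    innovation = (bearing_rad - predicted + np.pi) % (2 * np.pi) - np.pi
    S = H @ P @ H + sigma_rad ** 2                # innovation variance
    K = P @ H / S                                 # Kalman gain
    return x + K * innovation, P - np.outer(K, H @ P)
```

Each bearing report nudges the predicted position toward the reported line of bearing, weighted by the bearing’s reported standard deviation, and the updated covariance P is what generates the confidence ellipse around each position estimate.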

At the same time we were working on this new approach, a team of Bell Labs and Western Electric engineers in Greensboro, NC, was developing a computer system to replace the manual plotting procedures and print the localization results on computer-generated ocean charts at the ECs. This program was dubbed MSL, for multi-sensor localization. As an upgrade to this program, we proposed our Kalman filter tracking program (dubbed MST, for multi-sensor tracking), and it was approved by the Bell Labs and Navy project managers.

Then began an intensive period of writing and testing the computer code for the MST program by my group at Whippany, and of compiling and testing that code on the Navy computers by the software development group in Greensboro. To aid in the training of the Navy SOSUS personnel on this new approach, I wrote “Principles of Localization and Tracking”, a mostly non-technical primer on how to use the MST program. An important key to the proper operation of this program was the reporting not only of the target bearing, but also of the error estimate (standard deviation) on each reported bearing measurement. This depended mainly on the number and angular width of the directional beams detecting the target. To help the NavFac personnel set a proper standard deviation value on each bearing report, I wrote a bearing estimation training manual, and our team spent quite a bit of time training the personnel at the Atlantic and Pacific NavFacs.

We also participated in training sessions on the operation of the MST program in the Evaluation Centers and at their Naval commands. This included a rather unusual session at the Pacific Fleet Command on the Pearl Harbor Naval Base where my talk was suddenly interrupted by a re-enactment of the attack on Pearl Harbor [Note 2-4].

When the MST program was finally in operation in the ECs at the Norfolk Naval Base in Virginia and the Pearl Harbor Naval Base in Hawaii, the full use of all target data began to produce target tracks on the computer-generated maps in a detail that was never before possible. Shown below is a portion of the MST-generated track of a Soviet missile-launching submarine moving westward from its deployment in the Pacific. (This illustration is taken from the front and back covers of the final version of the MST manual.)


The triangle at the center of each 86% confidence ellipse on this plot indicates the estimated position and heading of the submarine. The size and orientation of each ellipse indicate the size and direction of the uncertainty associated with each track estimate. Note that where there were time gaps in contact reports, the uncertainty generally increased; when contact resumed, the uncertainty was reduced. The actual track, if it were known, was probably a somewhat smoother path from the lower right to the upper left of this picture.
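The ellipses themselves come directly from the filter’s 2x2 position covariance. A minimal sketch of the computation (assuming a Gaussian position error; for a two-dimensional Gaussian the two-sigma ellipse contains 1 - exp(-2), about 86%, of the probability, which is where the 86% figure comes from):

```python
import numpy as np

def confidence_ellipse(P, k=2.0):
    """Semi-axes and orientation (degrees from the x-axis) of the
    k-sigma error ellipse for a 2x2 position covariance P."""
    eigvals, eigvecs = np.linalg.eigh(P)   # eigenvalues in ascending order
    semi_minor, semi_major = k * np.sqrt(eigvals)
    # The major axis lies along the eigenvector of the larger eigenvalue.
    angle_deg = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    return semi_major, semi_minor, angle_deg
```

A long, narrow ellipse means the position is well constrained across the bearing lines but poorly constrained along them, which is exactly the pattern visible during gaps between contacts.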

Prior to the operation of this program, the manual plot on this target would have contained just a few scattered position estimates from the few two-bearing cross-fixes available. As a result of this program, the Navy analysts were able to establish a much clearer picture of Soviet nuclear-missile submarine deployments and on-station holding patterns off the east and west coasts of the US. This provided a strategic US advantage, since we knew where each of their deployed missile submarines was. The Navy issued an official commendation to the Bell Labs team that installed our MSL and MST localization and tracking programs in their computers. We were all greatly satisfied that our research and development effort had made a major difference in the cold-war struggle, and I believe it helped avert the horror of a nuclear holocaust during that critical period.

A side benefit of this hard-working period at Bell Labs was that it took me to some very interesting places to visit: Iceland (twice), Nova Scotia, Bermuda, Cape Hatteras, Pt. Sur, San Francisco and Centerville, CA, Hawaii (ten times!), and Midway Island; but also (and more times than I wish to remember), it took me to meetings with high-level Navy officials in Norfolk, VA and Washington, DC.

In spite of all the secrecy of the SOSUS operation, the Soviet naval intelligence knew of its existence. There was even some concern that their warships might attempt to cut the undersea cables that connected the hydrophone arrays to the NavFacs. Partly because of this concern, the movements of their surface ships were closely monitored, so any such attempt would have been challenged. The Russians did make an intensive effort to acquire the technology to machine propulsion screws like those used on our quiet-running nuclear submarines. Through some subterfuge they were almost able to buy the computer-controlled machining equipment from a Norwegian company that would have enabled them to produce these screws, but the plot was discovered and blocked by the US State Department. However, the success of SOSUS did prompt an intense noise-reduction effort by the Soviet Navy, and by the mid-1980s their new classes of nuclear-powered submarines were quieter and somewhat more difficult for SOSUS to detect and track.

The first public disclosure of the SOSUS operation, to my knowledge, was in a chapter of Tom Clancy’s book “The Hunt for Red October”, published in 1984. In the book, the fictional new Soviet submarine Red October had a ‘hydrodynamic’ propulsion system that allowed it to run silent and avoid detection by SOSUS (no such silent propulsion system ever emerged). A few years after his book came out, Tom Clancy was invited to give a talk at the Bell Labs Whippany, NJ location. He claimed that he had no access to classified documents and no secret informers regarding SOSUS. He said all his information came from public sources, such as Aviation Week magazine.

In April of 1975, I left the SOSUS project and no longer had the security clearance to learn what was happening. But in the late 1980s, I visited Dan McMillin, whose group had programmed my group’s tracking algorithm, and I learned that they had continued to provide the Navy with improvements to the system. Advances in computers, communications, and signal processing techniques had allowed much more sophisticated analysis and display of the SOSUS tracking data. The old thermal-paper strip-chart recorder consoles shown above were replaced by more versatile computer workstations and more centralized operations.

The breakup of the Soviet Union in 1991, and the resulting end of the cold-war era, reduced the military need for SOSUS. I believe many of the NavFacs have been shut down, and unless the remaining ones can be transferred to oceanographic institutes for ocean research, they will probably also be shut down. To close this chapter on my SOSUS years, I quote the final paragraph of the Navy’s SOSUS report from their web-site:


“In reflecting on the early years of SOSUS, what is most striking is how much was accomplished in a remarkably short time. Certainly a major factor was the serendipitous confluence of events — the discovery that  low-frequency sounds could travel great distances in the ocean, the realization that submarines radiate identifiable low-frequency energy, and the pioneering work at Bell Laboratories on visual speech analysis. Ease of contracting was also an important element. The Navy’s resolve to conduct undersea surveillance was crucial. The commitment of Western Electric and Bell Laboratories and their decision to assign some of their best people to the project were of considerable consequence.”











2-1. Sound spectrum analysis consists of converting the electrical signals of sounds from microphones into their frequency components as a function of time. This is basically what our auditory system does when we hear sounds. For example, when someone speaks, we hear the basic pitch of the voice (the rate of vibration of the vocal cords) as it changes over time, plus the various harmonics (overtones) produced in the vocal tract which indicate vowels and voiced consonants, and the non-harmonic (noise-like) sounds which indicate the non-voiced consonants in speech, such as “sp” and “ch” in “speech”. Bell Labs developed the sound spectrogram (sonagram) that provided a time-frequency record of acoustic signals. In addition to providing information on the basic components of speech, it was also used to obtain “voice-prints” of individual speakers and, as described here, the acoustic “signatures” of submarines.


2-2. The large baleen whales, such as the grey whales, feed on the huge schools of krill that exist in the northern ocean waters; then each year the whales migrate southward to breed in temperate waters. The krill emitted a noisy spectral ‘signature’ on the NavFac strip charts that became stronger and weaker as they moved deeper and shallower in the ocean. Sometimes a strange low-frequency signal, around 10-15 Hz, appeared. When the NavFac personnel looked at this signal in detail, it appeared to be a series of regular pulses, sometimes slower, sometimes faster. They first thought this might be an acoustic signaling device used by Soviet submarines, and there was a flurry of effort by the Navy and Bell Labs scientists to find the source of this strange signal. One of my colleagues at the Labs, Dick Walker, was part of the group assigned to locate its source. They used Navy blimps to do visual surveys and drop sonobuoys in the areas off the coast of the US from where the mysterious signals were emanating. They discovered no submarines, but did often spot groups of whales in these search areas. Although to my knowledge it has not been proven, they believed that the mysterious fast and slow pulses picked up by SOSUS emanated from the whales as they rose up and dove down in the ocean after their food source. As baleen whales feed, they open their great jaws wide to ingest the krill. The whale’s large heart is near the base of the open mouth, and its pulses propagate into the water with the open jaws acting like a giant underwater loudspeaker. When they dove deep, their heart rate slowed, and as they rose up it quickened — and hence the mysterious code-like pulses? Some investigators even speculated that the whales might have learned to use the SOFAR layer to communicate with other whales over long distances. Now that the cold war is over, most or all of the NavFacs are being closed. 
But recognizing the potential of SOSUS as a tool for the study of ocean life, some oceanographic institutes are attempting to maintain their operation. Perhaps the NavFacs will fulfill their cover name and actually become oceanographic research stations.


2-3. When a transient acoustic event in the ocean, such as a sudden change in a target submarine’s signature pattern, is reported by any two NavFacs, a curve can be plotted on an ocean chart that is the locus of points from which the transient would have produced the difference between the arrival times reported by the two NavFacs. If a third NavFac also reported the transient, then three time-difference curves could be plotted, one for each pair of NavFacs, and the region within the intersection of the three curves would provide a good estimate of the location of the source. To improve the accuracy of this method, Bell Labs had the Navy conduct experiments with a ship following prescribed paths in the ocean, dropping depth charges every hour, so we could determine the average velocity of the sound from the explosions to each of the SOSUS hydrophone arrays. I used the data from these tests to provide accurate sound velocity values for the time-difference localization computations.
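Numerically, a time-difference fix amounts to finding the point whose predicted pairwise arrival-time differences best match the reported ones. The brute-force Python sketch below illustrates the idea on a flat chart, with an invented geometry and a single sound speed; the real computations used the measured sound velocities described above:

```python
import numpy as np

def tdoa_locate(stations, arrival_times, sound_speed, grid):
    """Pick the grid point whose predicted time differences (relative to
    station 0, since the emission time is unknown) best match those
    reported by the stations, in the least-squares sense."""
    stations = np.asarray(stations, float)
    times = np.asarray(arrival_times, float)
    best, best_err = None, np.inf
    for p in grid:
        travel = np.hypot(stations[:, 0] - p[0],
                          stations[:, 1] - p[1]) / sound_speed
        predicted = travel[1:] - travel[0]   # differences vs. station 0
        measured = times[1:] - times[0]
        err = np.sum((predicted - measured) ** 2)
        if err < best_err:
            best, best_err = p, err
    return np.asarray(best, float)
```

Each pair of stations constrains the source to one hyperbolic locus; with three stations the two independent loci intersect in a small region, which is why three or more reporting NavFacs were needed for a fix.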


2-4. While I was conducting a training session at the Pearl Harbor Naval Station, I noticed out the window a large number of old propeller airplanes coming low over our building on Ford Island. As they passed over, I could see the red circle insignia of Japanese warplanes. They were dropping torpedoes in the water as they approached, and huge bursts of flame erupted from the ships docked alongside of Ford Island. Pearl Harbor was being attacked by Japanese planes! This was February 1969,  not December 1941, and the “attack” was part of the filming of the movie “Tora! Tora! Tora!”. Needless to say, our training session was abandoned as we all watched the spectacle. The filmmakers had built facades of the old battleships onto smaller ships, and gas jets that spewed flames in the air. An old B-17 bomber was coming into the small airstrip on Ford Island, and several Japanese Zeros were ‘attacking’ it and ‘strafing’ the airstrip, blowing up our fighter planes lined up there. Many of the parked planes were plywood cutouts, painted to look like real planes. The battleships and the planes looked fake to us, but when I saw the film, it all looked very real. We had witnessed some of the magic of Hollywood!




“The nation’s fixed undersea surveillance assets—A national resource for the future”, ASA 127th Meeting, MIT, 1994 June 6-10


Dziak, Robert P., Christopher G. Fox, Haruyoshi Matsumoto, and Anthony E. Schreiner, 1997, The April 1992 Cape Mendocino earthquake sequence: Seismo-acoustic analysis utilizing fixed hydrophone arrays, Marine Geophysical Researches, vol.19, p. 137-162.


Fox, Christopher G., W. Eddie Radford, Robert P. Dziak, Tai-Kwan Lau, Haruyoshi Matsumoto, and Anthony E. Schreiner, 1995, Acoustic detection of a seafloor spreading episode on the Juan de Fuca Ridge using military hydrophone arrays, Geophysical Research Letters, vol. 22, no. 2, p. 131-134.


Fox, Christopher G., Robert P. Dziak, Haruyoshi Matsumoto, and Anthony E. Schreiner, 1994, Potential for monitoring low-level seismicity on the Juan de Fuca Ridge using military hydrophone arrays, Marine Technology Society Journal, vol. 27, no. 4, p. 22-30.


Fox, Christopher G., and Stephen R. Hammond, 1994, The VENTS Program T-phase project and NOAA’s role in ocean environmental research, Marine Technology Society Journal, vol. 27, no. 4, p. 70-74.


Fox, Christopher G., 1992, NOAA plans for monitoring the northeast Pacific using fixed hydrophone arrays, Proceedings: Characterization of Mid-Ocean Ridge Earthquake Activity Using Acoustic Data from U.S. Navy Permanent Hydrophone Arrays, Woods Hole, April 23-24, 1991.


Nishimura, Clyde E., and Dennis M. Conlon, 1994, IUSS Dual Use: Monitoring whales and earthquakes using SOSUS, Marine Technology Society Journal, vol. 27.

