# Alternate Propagation Models

March 8, 2013

Prior to the ubiquity of substantial computing horsepower, we had to more or less rely solely on the Commission's standard contour methodology for prediction of coverage. Don't get me wrong, this method works well in many cases. Indeed, the Commission designed the method to be reasonably accurate over a fairly broad slice of circumstances that exist in the United States. That being said, like anything else, it tends to fall apart at the margins -- those situations in which terrain is abnormally smooth or cases where a particular propagation path has some fairly significant undulations. Even in the great flat Midwest, a place I affectionately call home, the standard contour method does not necessarily provide the full picture of what is transpiring.

The limitations in these spheres result from the basic construct of the model. The contour method samples terrain at discrete increments along a given path from 3 to 16 kilometers from the transmitter, and then averages these individual elevations. This average is then applied to either the F(50,50) or F(50,10) curves, depending on whether coverage or interference is sought, yielding the distance to the particular contour. If the variation in the elevations along this path is around 50 meters, plus or minus 25 meters or so, then a reasonable approximation of the field strength is obtained. It is important to remember that the model is statistically based, and as such, really only says that the desired field strength will be achieved at 50 percent of the locations within the contour for 50 percent (or 10 percent) of the time.
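The averaging step can be sketched in a few lines. This is a minimal illustration only; the sample spacing and elevation values below are hypothetical, and an actual study repeats the computation along each of several radials using real terrain data.

```python
# Sketch of the 3-16 km terrain-averaging step. Elevation samples are
# hypothetical placeholders, not real terrain data.

def average_terrain(elevations_m):
    """Average the elevation samples taken along the 3-16 km span."""
    return sum(elevations_m) / len(elevations_m)

def haat(center_of_radiation_m, elevations_m):
    """Height above average terrain for one radial."""
    return center_of_radiation_m - average_terrain(elevations_m)

# Hypothetical samples every 0.5 km from 3 to 16 km (27 points):
samples = [250, 255, 248, 260, 252] * 5 + [250, 251]
print(round(haat(350, samples), 1))  # height above average terrain, meters
```

The resulting height above average terrain is what gets applied to the curves for each radial.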

Carrying this a step further, consider a generic Class A facility. If we assume that the ERP of this facility is 6 kW, the center of radiation is at 100 meters above average terrain, and that the terrain is perfectly uniform within the 3-16 kilometer range, then the 60 dBu contour will lie at a radius of 28.3 kilometers from the site. But if I were able to construct a substantial concrete wall 17 kilometers from the transmitter, the standard method would still place the 60 dBu contour at 28.3 kilometers, although intuitively we know that would not be the case. Enter alternate propagation models.
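Were the curve lookup automated, it might look something like the sketch below. The (HAAT, distance) pairs are illustrative placeholders, not the actual F(50,50) data the Commission publishes; only the 6 kW/100 m point is anchored to the figure cited above.

```python
# Hedged sketch of a contour-distance lookup. CURVE_60DBU_6KW holds
# hypothetical (HAAT meters, distance km) pairs -- NOT the real FCC curves.
import bisect

CURVE_60DBU_6KW = [(50, 21.0), (100, 28.3), (150, 33.0), (200, 37.0)]

def contour_distance(haat_m, curve=CURVE_60DBU_6KW):
    """Linearly interpolate the contour distance for the given HAAT."""
    haats = [h for h, _ in curve]
    i = bisect.bisect_left(haats, haat_m)
    if i == 0:
        return curve[0][1]
    if i == len(curve):
        return curve[-1][1]
    (h0, d0), (h1, d1) = curve[i - 1], curve[i]
    return d0 + (d1 - d0) * (haat_m - h0) / (h1 - h0)

print(contour_distance(100))  # 28.3, matching the Class A example above
```

Note that nothing in this lookup knows about the concrete wall; the obstruction simply never enters the calculation, which is the point of the example.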

Although there are several different ways of approaching this problem, I will concentrate on the Longley-Rice model. The Longley-Rice model differs from the FCC contour method in that it predicts the field strength at a given pixel, or cell, instead of deriving an average contour. If, like the FCC staff, you prefer using contours, one can still be constructed using Longley-Rice, but by so doing, the subtleties and wealth of information provided by Longley-Rice are watered down.

## Running numbers

Because Longley-Rice calculates the field strength for a given cell, it looks at the terrain along the path as well as localized conditions in the vicinity of the cell. For an accurate prediction, one where localized subtleties are fully considered, a cell size on the order of 100 meters may be desirable. If 360 radials out to 150 kilometers are considered, then more than half a million different locations have to be considered, sampled, and calculated. Even on a robust platform, this takes a fair amount of time and resources. It seems logical that for this reason, as well as the ability Longley-Rice offers to skew results, the Commission, at least in the Audio Division, has historically shied away from permitting widespread use of Longley-Rice.
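The back-of-the-envelope count behind that "half a million" figure works out as follows, using the study geometry from the paragraph above:

```python
# Count of Longley-Rice evaluation points for the scenario in the text:
# 360 radials, 150 km reach, cells roughly 100 m on a side.
radials = 360
reach_km = 150.0
cell_km = 0.1  # 100 m cells

points_per_radial = round(reach_km / cell_km)  # 1500 cells along each radial
total_points = radials * points_per_radial
print(total_points)  # 540000 -- "more than half a million" locations
```

Each of those points gets its own terrain profile and field-strength calculation, which is where the computational load comes from.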

Once computing power got to the point where most consultants were able to easily run such predictions with regularity, the inevitable happened. Everybody tried using Longley-Rice for most circumstances, which not only overloaded the OET guys, but also resulted in a growing lack of consistency in showings. In 2003, an unpublished letter issued by the Commission reined in the use of Longley-Rice by laying out specific criteria that must be demonstrated for its use. Among these was a showing that Longley-Rice predicted coverage at least 10 percent larger than the standard method, as well as the necessity of demonstrating that the variation in terrain (delta-h) from the transmitter to the city of license is less than 20 meters or greater than 100 meters. Although unpublished, every consultant has a copy of this letter, which served as the basis for Longley-Rice use for the next several years.


Now under the Skytower decision, released in 2010, the Commission has liberalized the use of supplemental methods. Gone is the delta-h requirement. It has been replaced with a simple demonstration that the 70 dBu contour distance exceeds the predicted standard distance by at least 10 percent. It is important to note that supplemental showings pertain only to city-grade field strengths, and by extension apply only to prediction of coverage over the city of license or compliance with the main studio rule. The protected contours for the various classes of FM facilities must still be determined through the use of the standard method.
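The 10-percent screen reduces to a one-line comparison. The function and variable names below are my own shorthand, not Commission terminology:

```python
# Sketch of the post-Skytower 10-percent screen: the Longley-Rice 70 dBu
# distance must exceed the standard prediction by at least 10 percent.
def qualifies_for_supplemental(standard_km, longley_rice_km, threshold=0.10):
    """True when the supplemental distance beats the standard one by
    at least the required fraction (10 percent)."""
    return longley_rice_km >= standard_km * (1.0 + threshold)

print(qualifies_for_supplemental(20.0, 22.5))  # True  (12.5 percent larger)
print(qualifies_for_supplemental(20.0, 21.0))  # False (only 5 percent)
```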

At some point in the not-too-distant future, the Commission is expected to become better equipped to handle supplemental showings without referral to the OET. What that means for future use of alternate methods is somewhat murky, but the hope is that a standard set of study parameters will be released to the engineering community so everybody winds up on the same page. This will of course reduce the necessity of conflicting interpretations, thereby conserving the limited resources available to the staff.

As I previously mentioned, Longley-Rice lends itself to some fairly wide variations in how the model can be tweaked. For instance, at what height should the receive antenna be considered? The curves were developed with a receive antenna height of 9.1 meters, or 30 feet, AGL. Obviously, varying receive heights will yield different field strengths. Similarly, what receive antenna gain should be considered? Is 0 dBd appropriate, or is a different value, perhaps more representative of mean antenna gain, a better illustration?

By the same token, how should we address the impacts of localized groundcover? Without delving very deeply into the esoteric mathematics of the model, one can intuitively understand that in a very dense urban environment, the field strength would be expected to be lower due to many more objects "soaking" up the signal. Finally, even the atmospheric conditions in a particular locale will have a bearing on the predicted coverage. Coverage in desert environments such as Phoenix is treated differently than in Chicago, with the latter environment resulting in a larger footprint.
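The knobs discussed in the last two paragraphs can be gathered into a single parameter record. The field names and default values below are illustrative assumptions on my part; they do not mirror any particular Longley-Rice implementation's inputs.

```python
# Hypothetical bundle of Longley-Rice study parameters. Names and defaults
# are illustrative, not taken from any real implementation.
from dataclasses import dataclass

@dataclass
class StudyParameters:
    rx_height_m: float = 9.1         # curves assume 9.1 m (30 ft) AGL
    rx_gain_dbd: float = 0.0         # 0 dBd, or perhaps a mean-gain estimate
    clutter_loss_db: float = 0.0     # extra loss for dense urban groundcover
    surface_refractivity: float = 301.0  # climate-dependent, in N-units

# Two hypothetical study setups reflecting the Phoenix/Chicago contrast:
chicago = StudyParameters(clutter_loss_db=6.0, surface_refractivity=320.0)
phoenix = StudyParameters(surface_refractivity=280.0)
```

The point of writing it out this way is that two consultants choosing different values for any of these fields will produce legitimately different coverage maps from the same model, which is exactly the consistency problem the hoped-for standard parameter set would address.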

I'll share two coverage examples comparing the FCC standard method and Longley-Rice. The first shows the coverage of one of the FM stations in Chicago; the colored shading underneath represents the Longley-Rice predicted signal levels, with FCC-derived contours overlaid. The second demonstrates coverage for a station in the area of the Rockies. The effect of terrain is quite obvious in this instance.

Comparison of the FCC standard method and Longley-Rice for a station in Chicago (top) and in the Rockies.
