I believe it's fair to say that most radio broadcast stations in the United States are using computer-based audio systems (CBAS). Further, I believe these systems, if not used for complete automation of a broadcast facility, are used for audio sourcing in some mixture of live-assist or partial automation. This being the case, the keys to the revenue-producing potential of a facility are performance, reliability and ease of operation of the CBAS. Here are some ways to improve performance and reliability of the CBAS in your facility. Ease of operation I'll leave to the vendors.
Problems, proposals and projects
Let's explore some possible scenarios: (1) Your system consists of a server, several workstations and network switches. It used to run OK, but lately the jocks are complaining of sluggishness while recording voice tracks, or that when they press a start button on the studio touch screen, the audio cut starts 100 milliseconds later (an eternity!). (2) HD Radio is being rolled out for the facility next year and management has presented you with a conundrum: multicast for all six stations, fully automated, with limited capital outlay for the required hardware.
You figure you can lower the multicast hardware cost by playing a few rounds of golf with the vendor's rep, but you still have to purchase 12 additional workstations for your CBAS, and possibly upgrade the network switch and the server. In this scenario you crunch numbers and find you can purchase the multicast hardware and the workstations, but have nowhere near enough to purchase a new server. The additional load as a result of 12 new workstations is significant. Perhaps there is something we can do to make the existing server suffice, but nothing beats having funding for first-rate hardware.
We often establish baseline performance for our transmitter systems through the use of manufacturer test data sheets, weekly logging of the parameters and the initial sweep of the antenna system by the station's consultant. Of course, when an AM directional antenna system is installed we perform field proofs to establish conductivity and system RMS. We also (hopefully) log all branch impedances, currents and inductor tap positions, and have a record of transmission line lengths. These things serve us by establishing a reference, or baseline, for future problem isolation and performance verification. We should do the same for automation, database and file servers, and networks. Fortunately, many tools exist to assist you at little to no cost. Allocating the necessary time for this while the system is operating normally will be time well spent.
CBAS baseline performance tools
Figure 1. Windows Performance Monitor
It is my belief that Microsoft Windows 2000/2003 is the prevailing operating system for CBAS servers and, fortunately for us, Bill Gates provided us with literally hundreds of baseline tools. To use them, open Performance Monitor (aka perfmon); by default, the three most-used performance counters will appear: Average Disk Queue Length, Pages/Sec and Percent Processor Time. Numerous options can be set for these and all other counters, such as color, scale, sample rate and graph background. Please bear in mind that using Performance Monitor has a drawback: in order to measure something, you disturb it. Performance Monitor (as seen in Fig. 1), while providing insight into your server's performance, uses resources and in so doing slightly degrades performance. Naturally, the more counters you use and the more objects (hard disk, processor, pages) you choose to monitor, the more pronounced this degradation becomes.
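Once you have a counter log captured while the system is healthy, boiling it down into a reference is straightforward. Here is a minimal sketch in Python (the article itself names no scripting tool, and the sample values and the three-sigma cutoff are purely illustrative) of turning baseline readings into statistics you can compare later readings against:

```python
from statistics import mean, pstdev

# Hypothetical ADQL samples collected while the system ran normally,
# e.g. exported from a Performance Monitor counter log.
baseline_adql = [1.8, 2.1, 1.5, 2.4, 1.9, 2.0]

def baseline_stats(samples):
    """Summarize a healthy-period counter log into a reference baseline."""
    return {"mean": mean(samples), "stdev": pstdev(samples), "peak": max(samples)}

stats = baseline_stats(baseline_adql)

def is_anomalous(reading, stats, sigmas=3):
    """Flag a later reading that falls well outside the baseline.

    The three-sigma cutoff is an illustrative choice, not a standard.
    """
    return reading > stats["mean"] + sigmas * stats["stdev"]
```

With such a record on file, a later reading of 40 to 50 is no longer a judgment call; it is demonstrably far outside anything the system did when it was behaving.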
Let's address the first scenario where the jocks are complaining of sluggish performance. You might suspect the cause to be network bottlenecks due to congestion, high server processor utilization, or poor disk input/output performance. Windows Performance Monitor and Windows Task Manager may help. Often, I believe you'll find that network utilization is fairly low (a few percent unless file transfers or data backups are underway), and in most modern servers, processor utilization also will be 10 percent or less (although occasional “spikes” of near 100 percent utilization may be normal and no real cause for concern). An Average Disk Queue Length (ADQL) of 40 or 50 could be worrisome, but that depends on several factors including the type of disk storage system the server uses for audio data: a RAID (Redundant Array of Inexpensive Disks) system, Network Attached Storage (NAS), or a single disk. Most often, computer-based audio systems use some form of RAID array, and we'll make that assumption here.
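The triage described above can be sketched as a simple check of the three usual suspects. This is a hypothetical helper, not part of any vendor's system; the network and CPU cutoffs merely echo the rough norms quoted in the text, and the ADQL limit is passed in because, as noted, it depends on the storage hardware:

```python
def suspect_bottlenecks(net_util_pct, cpu_util_pct, adql, adql_limit):
    """First-pass triage of a sluggish CBAS server.

    Cutoffs are illustrative, taken from the rough norms in the text:
    network traffic of a few percent and sustained CPU of 10 percent
    or less are ordinary; transient CPU spikes should be ignored
    before calling this (pass a sustained figure, not a peak).
    """
    suspects = []
    if net_util_pct > 5:       # more than "a few percent" sustained
        suspects.append("network congestion")
    if cpu_util_pct > 10:      # sustained, not momentary spikes
        suspects.append("server processor")
    if adql > adql_limit:      # limit depends on the disk subsystem
        suspects.append("disk I/O")
    return suspects
```

With the numbers from the scenario (low network, low CPU, ADQL of 40 to 50 against a limit of 24), only disk I/O survives the cut.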
The ADQL number, which you'll note is dimensionless, represents the average number of read and write requests that were queued for the disk during the sample interval. The rule of thumb for interpreting this parameter is two times the number of spindles, or hard disks, in the system. So, if the RAID array consists of 12 drives (data drives only), then an ADQL of 24 or less is probably OK for the system. For this example, let us suppose that your system does have 12 drives in the array, and you are consistently seeing ADQLs around 40 to 50. The most salient point here is that without a baseline established when things were running smoothly, you cannot be certain whether this is the issue.
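The two-per-spindle rule of thumb is easy to capture in a couple of lines. A hedged sketch, assuming the ADQL reading and the data-drive count are already in hand:

```python
def adql_threshold(spindles):
    """Rule of thumb: a healthy array averages no more than two
    outstanding requests per spindle (count data drives only)."""
    return 2 * spindles

def adql_ok(adql, spindles):
    """True when the measured ADQL is within the rule of thumb."""
    return adql <= adql_threshold(spindles)
```

For the 12-drive array in the example, `adql_threshold(12)` gives 24, so sustained readings of 40 to 50 fail the check and point at a disk I/O bottleneck.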