Automation Server Performance Enhancement

January 1, 2008


I believe it's fair to say that most radio broadcast stations in the United States are using computer-based audio systems (CBAS). Further, I believe these systems, if not used for complete automation of a broadcast facility, are used for audio sourcing in some mixture of live-assist or partial automation. This being the case, the keys to the revenue-producing potential of a facility are performance, reliability and ease of operation of the CBAS. Here are some ways to improve performance and reliability of the CBAS in your facility. Ease of operation I'll leave to the vendors.

Problems, proposals and projects

Let's explore some possible scenarios: (1) Your system consists of a server, several workstations and network switches. It used to run OK, but lately the jocks are complaining of sluggishness while recording voice tracks, or when they press a start button on the studio touch screen, the audio cut starts 100 milliseconds later (an eternity!). (2) HD Radio is being rolled out for the facility next year and management has presented you with a conundrum: multicast for all six stations, fully automated, with limited capital outlay for the required hardware.

You figure you can lower the multicast hardware cost by playing a few rounds of golf with the vendor's rep, but you still have to purchase 12 additional workstations for your CBAS, and possibly upgrade the network switch and the server. In this scenario you crunch numbers and find you can purchase the multicast hardware and the workstations, but have nowhere near enough to purchase a new server. The additional load as a result of 12 new workstations is significant. Perhaps there is something we can do to make the existing server suffice, but nothing beats having funding for first-rate hardware.

Baseline performance

We often establish baseline performance for our transmitter systems through the use of manufacturer test data sheets, weekly logging of the parameters and the initial sweep of the antenna system by the station's consultant. Of course when an AM directional antenna system is installed we perform field proofs to establish conductivity and system RMS. We also (hopefully) log all branch impedances, currents, inductor tap positions and have a record of transmission line lengths. These things serve us by establishing a reference, or baseline for future problem isolation and performance verification. We should do this for automation, database and file servers, and networks as well. Fortunately, many tools exist to assist you at little to no cost. Allocating the necessary time for this when the system is operating normally will be time well spent.
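The same logging discipline we apply to transmitter parameters can be applied to the server. As a minimal illustrative sketch (the metric names and sources here are placeholders, not any particular vendor's API; on a real Windows server you would wire these to Performance Monitor counters or a tool such as typeperf), a simple baseline logger might look like this:

```python
import csv
import time

def log_baseline(metrics, path, samples=5, interval=1.0):
    """Sample each named metric and append the readings to a CSV file.

    `metrics` maps a column name to a zero-argument function returning a
    number. The callables are placeholders for real counter sources
    (e.g. Avg. Disk Queue Length, % Processor Time, Pages/sec).
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + list(metrics))
        for _ in range(samples):
            # One row per sample: wall-clock time plus each metric reading.
            writer.writerow([time.time()] + [fn() for fn in metrics.values()])
            time.sleep(interval)
```

Run quietly for a few days while the system is healthy, a log like this becomes the reference you will reach for when the jocks start complaining.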

CBAS baseline performance tools

Figure 1. Windows Performance Monitor

It is my belief that Microsoft Windows 2000/2003 is the prevailing operating system for CBAS servers and fortunately for us, Bill Gates provided us with literally hundreds of baseline tools. To use them, open Performance Monitor (aka, perfmon) and by default, the three most-used performance counters will appear: Average Disk Queue Length, Pages/Sec, and Percent Processor Time. There are numerous options that can be set with these and all other counters such as color, scale, sample rate, graph background, etc. Please bear in mind that using Performance Monitor has a drawback. In order to measure something, you disturb it. Performance Monitor (as seen in Fig. 1), while providing insight into your server's performance, uses resources and in so doing slightly degrades performance. Naturally the more counters you use and the more objects (hard disk, processor, pages) you choose to monitor, the more pronounced this degradation becomes.

Let's address the first scenario where the jocks are complaining of sluggish performance. You might suspect the cause to be network bottlenecks due to congestion, high server processor utilization, or poor disk input/output performance. Windows Performance Monitor and Windows Task Manager may help. Often, I believe you'll find that network utilization is fairly low (a few percent unless file transfers or data backups are underway), and in most modern servers, processor utilization also will be 10 percent or less (although occasional “spikes” of near 100 percent utilization may be normal and no real cause for concern). An Average Disk Queue Length (ADQL) of 40 or 50 could be worrisome, but that depends on several factors including the type of disk storage system the server uses for audio data: a RAID (Redundant Array of Inexpensive Disks) system, Network Attached Storage (NAS), or a single disk. Most often, computer-based audio systems use some form of RAID array, and we'll make that assumption here.

The ADQL number, which you'll note is dimensionless, represents the average number of read and write requests that were queued for the disk during the sample interval. The rule of thumb for interpreting this parameter is two times the number of spindles, or hard disks, in the system. So if the RAID array consists of 12 drives (data drives only), then an ADQL of 24 or less is probably OK for the system. For this example, let us suppose that your system does have 12 drives in the array, and you are seeing ADQLs around 40 to 50 consistently. The most salient point here is that without a baseline established when things were running smoothly, you cannot be absolutely certain whether this is the issue.
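The rule of thumb reduces to simple arithmetic. The helper names below are illustrative, not part of any tool:

```python
def adql_threshold(data_drives, factor=2):
    """Rule-of-thumb ceiling for Average Disk Queue Length:
    roughly two outstanding requests per spindle (data drives only)."""
    return factor * data_drives

def adql_ok(observed, data_drives):
    """True if the observed ADQL is within the rule-of-thumb limit."""
    return observed <= adql_threshold(data_drives)

# The 12-drive array from the example: the threshold is 24, so a
# sustained ADQL of 40-50 points to a disk I/O bottleneck.
```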



Server performance enhancement: RAID systems

How do you go about fixing lackluster server performance and getting the most from what you have? Poor disk I/O performance can result from outdated or incorrect disk controller drivers, a less-than-optimum setup of the RAID array, or a defective controller. For CBAS servers, we tend to use a RAID 1 (mirror), or RAID 1 with duplexing (see Fig. 2), for the operating system. A RAID 1 array consists of two drives in a mirror arrangement, with a capacity of (n/2)X, where n is the number of drives and X is the individual drive capacity. For audio storage we most often use a RAID 5 array, which takes us to the intersection of performance, reliability and efficiency. A RAID 5 array consists of several hard drives (at least three) with a capacity of (n-1)X. A RAID 5 system usually also has one or more drives declared as global or dedicated hot spares for failover.

Note that the efficiency of surface real estate in a RAID 5 array improves as the number of drives in the array increases. To illustrate, consider a RAID 5 consisting of the minimum three drives, each with a capacity of 73GB. Using the formula above, the array's capacity is 73GB × (3-1), which equals 146GB, for a surface-real-estate efficiency of 67 percent. If instead we have an array of fourteen 73GB drives, two of which are used as global hot spares (leaving 12 for data), the capacity is 73GB × (12-1), which equals 803GB. That works out to an efficiency of 803GB/876GB × 100 percent, or approximately 92 percent. This is one of the benefits of RAID 5 versus other RAID arrangements.
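Both worked examples fall out of the (n-1)X formula. A small sketch (function name is mine, for illustration only):

```python
def raid5_capacity(total_drives, drive_gb, hot_spares=0):
    """Usable RAID 5 capacity: (n - 1) * X, where n counts only the
    data drives (hot spares excluded) and X is the per-drive capacity.

    Returns (usable GB, efficiency as a fraction of the data drives'
    raw capacity, as the article computes it).
    """
    n = total_drives - hot_spares        # data drives in the array
    usable = (n - 1) * drive_gb          # one drive's worth goes to parity
    raw = n * drive_gb                   # raw capacity of the data drives
    return usable, usable / raw

# Three 73GB drives: 146GB usable, about 67 percent efficient.
# Fourteen 73GB drives, two hot spares: 803GB usable, about 92 percent.
```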

We want to make certain that our RAID arrays are up to standards in order to realize the best performance. For a SCSI (Small Computer System Interface) system, you would want drives that spin at 7,200rpm or faster with an Ultra 320 interface, and an Ultra 320 SCSI controller card with 128MB or more of on-board cache. You really don't want to skimp here. The SCSI controller card may be either the non-RAID or the RAID type. If the card does not support RAID, the array will be set up under the operating system. A RAID-type controller card will generally support a variety of RAID levels, such as RAID 0, 1, 5 and 10. Software RAID (set up under the operating system) and hardware RAID each have advantages and disadvantages.

In most cases hardware RAID is the optimum choice. It is then important to set up the array properly for best performance using the RAID controller's BIOS settings. The key settings are RAID stripe size, write-back and read-ahead. RAID stripe size refers to the width of the data stripe in the array, and is not connected to the block size, that is, the size of the allocation units formatted under NTFS (NT File System, used with Windows).

I mention this because this does cause some confusion, as some administrators believe they should set the array stripe size to be the same as the block size. The optimum block size to set before formatting a drive with NTFS is often a setting recommended by your automation system vendor. For use with streaming music, the maximum setting with NTFS of 64KB is often used (but this may vary with the vendor). In any event, the array stripe size can affect performance of the array quite significantly. Some experts say the bigger the stripe size, the better. I have found 128KB stripe size to be the best for performance with heavily used file servers with multiple clients (as in CBAS operation). That said, go with your vendor's suggestions.

Write-back improves the RAID array's performance by writing data to cache first, then to the disks. This allows the CPU(s) to command a write of the data and then move on, leaving the RAID controller to complete the write as hardware timing permits. It's very important to maintain the RAID system's battery when using this option in order to avoid data loss. With regard to the read-ahead setting, you'll find you have three choices: read-ahead, adaptive read-ahead and no read-ahead.

Read-ahead works like this: a block is read from the disk, and then additional sequential blocks are read in the expectation that they will be required next. Adaptive read-ahead uses an algorithm that keeps prefetching until the last two or so read operations were not sequential. For file servers used for audio streaming it's generally best to turn off read-ahead, since file block reads are likely not to be sequential. Note that the no-read-ahead setting may be labeled "normal" on your RAID controller.
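The adaptive behavior can be sketched as a decision rule. This is an illustrative model of the idea, not any controller's actual firmware logic:

```python
def should_prefetch(block_history, window=2):
    """Adaptive read-ahead, modeled loosely: keep prefetching only
    while the last `window` read operations were sequential blocks.

    `block_history` is the list of block numbers read so far.
    """
    if len(block_history) < window + 1:
        return True  # not enough history yet; default to prefetching
    recent = block_history[-(window + 1):]
    # Sequential means each block number is exactly one past the last.
    return all(b2 == b1 + 1 for b1, b2 in zip(recent, recent[1:]))
```

With many clients pulling different audio cuts at once, the history rarely stays sequential, which is why disabling read-ahead tends to win for CBAS file serving.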



Server performance enhancement: swap file

Windows makes use of a virtual memory mechanism called a swap file (or page file). When Windows has insufficient physical memory (RAM) for data, program executables or data transfers, it can swap that data to and from a swap file on a designated hard disk. This of course slows system performance, inasmuch as hard disks are much slower for reads and writes than system RAM. One very significant thing you can do to improve your server's performance is to move the swap file from the drive(s) used for the operating system to another drive. Often a machine built to be a server provides several slots for additional hard drives.

Consider installing a drive to serve as a swap file surface and moving the swap file from the OS drive to the new drive. This can be done by selecting Properties from My Computer, then the Advanced tab, then Performance/Settings/Virtual Memory. Select Change, then select the new drive. Then select Custom Size and set the initial and maximum size to 1.5 times the size of the system memory (so for 2GB of system RAM, set the swap file size to 3GB). After this procedure is completed, delete the existing page file on the OS drive. Setting the initial and maximum swap file sizes to the same value reduces the possibility of page file fragmentation, which would reduce performance. The page file does not need to be on a redundant drive; placing it on one would also degrade performance. Format the partition used for the page file with 4KB blocks.
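The 1.5× rule of thumb is easy to sanity-check before you open the Virtual Memory dialog. A tiny sketch (the function is mine, for illustration):

```python
def swap_file_mb(ram_gb, multiplier=1.5):
    """Initial and maximum page file sizes per the 1.5x rule of thumb.

    Returns identical values so the file never grows at runtime, which
    is what prevents page file fragmentation.
    """
    size = int(ram_gb * multiplier * 1024)  # convert GB to MB
    return size, size

# 2GB of system RAM: a fixed 3,072MB (3GB) page file.
```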

Server performance enhancement: memory (RAM)

More system RAM is better. As you increase the amount of RAM, you reduce the number of I/O operations to the page file. If you see many page faults in Performance Monitor, you need more RAM. If you're looking for a number, most experts would agree 2GB is a good starting point for a CBAS server. Note that server memory is most often of the Error Correcting Code (ECC) variety, which means it costs more than workstation RAM. Be certain to use memory of the type specified by the server's manufacturer, and note whether the memory sticks need to be installed in pairs or singly.

Conclusion

With careful application of the above information, you can get the most out of your server and perhaps delay having to purchase a new 8-core Blazemaster 3000 server for a while. By establishing a baseline for your server, you will be in a better position to evaluate performance and see the results of any changes or tweaks you make. If you'd like to know more, the Internet and your local bookstore are overflowing with information on Windows 2003 Server performance tweaks. Adaptec's website has some useful white papers on RAID arrays and good general information. I'd like to acknowledge Dave Dart and John Pike from Google/Maestro and Dave Turner of Enco for giving me some of their time and valuable input.


Sloatman is chief engineer for Cox Radio, Orlando, FL.


