In-Depth
What’s New With NICs
Evolutionary rather than revolutionary, today's network interface cards offer increased speed, better CPU utilization and specialized functionality.
IT’S HARD TO ARGUE with the notion that network interface cards (NICs)
have become commodity items, especially in the SOHO environment; but most
large companies still try to insist on a uniform manufacturer, or close
to it. Despite this, lesser-known NICs inevitably escape IT surveillance when users bring in laptops, unauthorized desktops or machines with bundled or integrated NICs, or set up their own wireless LANs.
described in this article may be enough to allow the top-tier manufacturers
to garner a premium price as well as access to the corporate boardroom.
The second tier has done a good job of keeping close in terms of performance,
but the products lack some of the specialized, higher-level features.
Except as noted, all the gigabit adapters are backward-compatible with 10- and 100Mbps networks. All of the 64-bit adapters are backward-compatible with 32-bit slots, but some older PCI slots may have too much "lip" to allow the far edge of the card to overhang. The 64-bit cards are also a little longer than the 32-bit cards, so you must also have enough board clearance for the 64-bit card to seat properly.
The innovation in the cards collected in this review falls into roughly
four categories:
- Additional ports on the same card, sometimes permitting trunking
for greater combined throughput or additional fault-tolerance.
- Offloading of various functions to an application-specific integrated circuit (ASIC) on the NIC rather than to the CPU. Even at today's CPU speeds, offloading encryption or stack functions improves CPU utilization as well as network throughput, because at least a whole cycle of communication with the CPU (and its subsequent interrupt) is saved (see the checksum sketch after this list).
- Gigabit speed over standard cables and the ability to auto-detect
what kind of connection is on the other end. This means no need to stock
crossover cables.
- A higher level of integration in NIC construction. This is partly
what led to a general price depression; but, more important, it should
give much higher reliability, less heat and greater throughput than
the older generation of cards.
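To make the offload point concrete, here's a minimal Python sketch (my own illustration, not vendor code) of the Internet checksum defined in RFC 1071. Computing this ones'-complement sum over every outgoing frame is exactly the kind of repetitive work a Tx-checksum-offload ASIC removes from the host CPU:

    import time

    def internet_checksum(data: bytes) -> int:
        """RFC 1071 ones'-complement checksum over 16-bit words --
        the per-frame arithmetic a checksum-offload ASIC handles."""
        if len(data) % 2:
            data += b"\x00"                 # pad odd-length buffers
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold carry bits
        return ~total & 0xFFFF

    # Time the checksum over 10,000 MTU-sized frames (~15MB of traffic).
    frame = bytes(1500)
    start = time.perf_counter()
    for _ in range(10_000):
        internet_checksum(frame)
    print(f"software checksums took {time.perf_counter() - start:.2f}s of CPU")

At gigabit rates, that loop would run tens of thousands of times per second, which is why offloading it (along with encryption and parts of the stack) shows up directly in the CPU-utilization numbers later in this review.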
Product Information

Adaptec Quartet66 (ANA-64044), $550 for the kit
Adaptec, 408-957-2550
www.adaptec.com

Intel Pro/1000 MT Dual Port Server Adapter (PWLA8492MT), $229
Intel, 408-765-8080
www.intel.com

Linksys Instant Gigabit Network Adapter 32-bit (EG1032), $75; 64-bit (EG1064), $99
Linksys, 949-261-1288
www.linksys.com

NetGear 32-bit PCI Copper Gigabit Adapter (GA302T), $53 to $80, depending on retailer
NetGear, 408-907-8000
www.netgear.com

SMC EZ Card 1000 Gigabit Ethernet PCI Network Card (SMC9462TX), $102 to $139.99, depending on retailer
SMC Networks, 949-679-8000
www.smc.com

3Com Gigabit Server NIC (3C996B-T), $169; 10/100 Secure NIC (3CR990-TX-97), $120; 10/100 Secure Server NIC (3CR990SVR97), $129
3Com Corp., 408-326-5000
www.3com.com
Gigabit Cards
3Com Gigabit Server NIC
The 3C996B-T is a well-constructed card, on a two-layer, two-sided
printed circuit board (PCB), with a 64-bit Peripheral Component Interconnect
(PCI) connector. It’s backward compatible with most 32-bit slots. This
card, in an appropriate PCI-X slot, will outperform the older, more expensive
3Com gigabit NIC with 1MB of external memory, according to 3Com. There
are four annunciators (lights) on the back of the card to aid in troubleshooting.
The ASIC, a collaboration between Broadcom and 3Com, in the 3C996B-T has
an integrated buffer and performs duplex flow control. With a single pair
of connections (multiplied by a set number of virtual endpoints), the
3Com card returned the second highest throughput of the cards I tested
(see the sidebar). With multiple pairs, the 3Com card again placed second
in throughput. The total throughput increased with more pairs of connections,
indicating a well-made, high-performance card at a good price.
This card requires a PCI-X 1.0- or PCI 2.2-compliant bus. Most major OSs are supported. As with most NICs in this roundup, detailed instructions come via software on CD. 3Com includes a Connection Assistant to aid
troubleshooting and various management programs, including a server control
suite (Figure 1) that provides diagnostics, as well as a cable-analysis
program that measures cable performance (in Hertz and dB attenuation per
distance).
Intel Pro/1000
Of all the boards reviewed here, the Pro/1000 (PWLA8492MT) has
the highest level of integration. There are no capacitors of the “cap”
type visible—there’s an Intel ASIC, a large IC, a power transistor, two
tiny op-amps and a crystal. The magnetic “phy” circuit and box, visible
on every other card, is hidden (most likely in the two RJ-45 sockets that
extend onto the board). Unfortunately, the annunciators and signal lights
are also placed on the outside rim of the RJ-45 sockets, leaving room
for only two lights per socket, though they are dual-colored. With multiple
pairs of connections and both ports active, this Intel NIC averaged a
whopping 643Mbps.
Figure 1. 3Com's Gigabit Server NIC diagnostic suite includes a program measuring cable performance.
This is also a 64-bit board, with compatibility ranging from PCI-X, 133MHz,
to standard PCI 2.1, 33MHz. As with the 3Com card, I tested these cards
in both slots. They even fit in a very old Intel motherboard, but not
the old Biostar. Both the 3Com and Intel cards operated flawlessly. Beyond a modest difference in throughput, the main distinction is that the 3Com card comes with more comprehensive management utilities, though the Intel card supports DMI (think Intel LANDesk) and Wired for Management (WfM). When more than one Pro server NIC is used, Intel supports load balancing.
SMC EZ Card 1000 Gigabit Ethernet PCI Network Card
The SMC9462TX is a 64-bit card supporting up to 66MHz slots. It’s
also well laid out, but it's somewhat of a hybrid: it pairs two large ASICs, some high-quality metal capacitors and a socket for PXE boot with tightly integrated surface-mount components on a partial two-layer, double-sided trace board. It also has my favorite feature:
five annunciators on the back of the card, visible even when the cable
is plugged in. The lights cover speed (10Mbps, 100Mbps and 1,000Mbps),
with separate activity and link lights. The drivers come on a floppy disk,
which also holds some documentation, most of which is also contained within
a slim booklet. This card ran very cool, probably the coolest of all.
SMC provides a lifetime warranty.
Figure 2. SMC's EZ Card 1000 Gigabit Ethernet PCI Network Card averages a blazing 428Mbps.
With one pair, the SMC card averaged 482Mbps (Figure 2), but with multiple pairs it averaged 428Mbps, with CPU utilization peaking at between 47 percent and 57 percent. This contrasts with the Intel entry, which averaged utilization in the high 20s, and the 3Com, which peaked in the 30s.
Linksys Instant Gigabit Network Adapter
Linksys provided lots of equipment for testing, but because I had
a 64-bit-capable server, I elected to mix a 32-bit card (the EG1032) at one endpoint with a 64-bit card (the EG1064T) at the other. Although throughput with one paired connection was low (154Mbps), the CPU utilization was the lowest of any gigabit card tested—a miserly
7 percent on the dual Athlon MP server. On the endpoint single Athlon,
utilization was at 14 percent, never rising to more than 27 percent for
20 pairs. Contrast this to the Intel gigabit entry, for which I recorded
CPU utilization on the dual Athlon MP of 20 percent average for 20 pairs,
but a high 97 percent to 99 percent CPU utilization on the single Athlon
server.
If you plan to run gigabit cards on a server that will handle file and print processes, especially large file transfers, you'll want a good excess of processing power. A workstation, too, will devote significant portions of its processing power to handling the transfer. In both cases above, for the Linksys and Intel cards, the default Tx checksum offload was enabled.
Figure 3. The Linksys Instant Gigabit Network Adapter ran well in high-demand scenarios.
To further investigate the seeming disparity regarding the Linksys pair,
I tested multiple sets of pairs. At three pairs, I recorded 371Mbps; at five pairs, 475Mbps; and at 10 and 20 pairs, 621Mbps. As the Linksys pair
is stressed by high demand and simultaneous operations, the throughput
actually increased considerably, bringing it within range of the top-performing
Intel and 3Com entries (Figure 3).
The Linksys drivers ship on floppy disks but include a great manual—a slim, detailed booklet. For those of us who dislike manuals, Linksys provides
a handy, one-page quick start card for each common OS. (Why don’t more
manufacturers do this?) The Linksys card appears to be a hybrid design
somewhat similar to that of the SMC gigabit card: one of the ASICs has a heat sink but didn't get unusually warm during testing. There are only
two annunciators, as well as several poly “caps” scattered throughout
the two-sided trace board.
NetGear 32-bit PCI Copper Gigabit Adapter
NetGear advertises that the GA302T runs on standard Category 5
cables and has auto-negotiation to determine if the card should emulate
a crossover cable. I used Category 6 cable for the testing. Although the
card is compact, the first thing one notices is the large heat sink over
the Broadcom ASIC. The board itself became quite warm to the touch. As if to rub salt in the wounds of top-tier and other manufacturers, the graphic on the back of the box claims throughput numbers showing NetGear besting both a similar (but not identical) Intel card and a D-Link entry. The power transistor
is enveloped by a comparatively large heat sink screwed to the board.
The board appears to be dual-sided, at least for the traces, and there
are two wires covered by epoxy running from the MagTek magnetics chip,
indicating a later engineering change. There’s even a ceramic disk capacitor
soldered to the board, though most of the components appear to be wave-soldered.
NetGear warrants its products for five years and includes a CD of driver support, along with a quick-start poster.
Although I used shipping drivers for all the other cards, I updated the driver for this card from the NetGear Web site after the shipping driver, with five pairs, consistently generated a bluescreen when running Chariot, a performance-evaluation tool.
With the second driver and one pair of connections, I generated an average throughput of 378Mbps at 45 percent utilization, and three pairs returned 389Mbps at 46 percent utilization, both respectable. At five pairs with the high-throughput script, one of the endpoints failed in a bluescreen, indicating driver failure. In attempts to track this down, I ran many other scripts, including the throughput script (which writes a much smaller file) and all the Microsoft scripts I could find. None resulted in a bluescreen,
but when I increased the size of the message contained in the Exchange
script by a factor of three, Chariot would fail near the end with more
than five pairs, indicating a TCP connection failure. Interestingly, the
Chariot trace shows that the throughput periodically dips before recovering,
then quitting.
When running very large file transfers under my simulated heavy loads,
the drivers appear to have a TCP ack communication problem. With smaller
transfers and moderate loads, there was no evidence of any problem; the
throughput, under certain conditions, was much higher than I indicated
above. To be fair, the conditions I created may not exactly simulate real-world
conditions, and a NetIQ white paper states that simulating large numbers
of clients with few endpoints may lead to TCP ack failure. Nevertheless,
the other entries tested showed little evidence of any problem.
10/100 Cards
Most of the 10/100 cards showed similar performance. Obviously, with a
larger test bed, you’d expect differences to become more significant.
There are at least two outstanding entries here, however.
Adaptec Quartet66
Even though the Adaptec Quartet66 (ANA-64044) is only a 10/100
64-bit, 66MHz card with four ports in one slot, it provides great flexibility.
With the included software—Duralink64 Failover—it’s possible to have one
link take over for a failed link. I tested this, and it worked as designed.
I didn’t have the opportunity to test the next feature: the possibility
to trunk not only all four ports in one card (tested), but an aggregate
of three ports’ worth of cards together, up to 1.2GB. In addition to failover
for Ethernet link loss, you may set watchdog timers, abnormal hardware
interrupt, too many collisions or abnormal counters to trigger a failover.
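The takeover behavior itself is conceptually simple. Here's a small Python sketch of priority-ordered failover (my own generic illustration with simulated link states, not Adaptec's Duralink64 implementation, which does this work in the driver):

    # Simulated link states standing in for what the driver reports
    # (link loss, watchdog expiry, abnormal counters and so on).
    LINK_UP = {"port1": True, "port2": True, "port3": True, "port4": True}

    def pick_active(ports: list[str], current: str) -> str:
        """Stay on the current port while it's healthy; otherwise fail
        over to the first healthy port in priority order."""
        if LINK_UP[current]:
            return current
        for port in ports:
            if LINK_UP[port]:
                print(f"{current} down; failing over to {port}")
                return port
        raise RuntimeError("all ports down")

    ports = ["port1", "port2", "port3", "port4"]
    active = pick_active(ports, "port1")   # stays on port1
    LINK_UP["port1"] = False               # simulate a pulled cable
    active = pick_active(ports, active)    # fails over to port2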
As with 3Com, the number of operating systems supported by this Adaptec
Quartet66 is enormous, even supporting Windows 95 OSR 1. The Quartet also
supports both 3.3- and 5.0-volt PCI slots. All documentation comes on a
CD, along with the drivers and some impressive management utilities. There
are three other versions of the Quartet66, with varying port densities
and varying PCI speeds supported (33MHz or both 33/66MHz). This Quartet66 is neatly laid out on a dual-layer, double-sided board, with four ASICs next to four other ICs, all neatly integrated. Having only two annunciators per port, however, is a disappointment.
When I trunked all four ports together via Duralink64 Port Aggregation, the theoretical maximum was 800Mbps, with Adaptec claiming 532Mbps nominal (see the arithmetic sketched below).
With the severe tests from Chariot, I generated in the middle to high
300s, depending on the test, well within the claimed range considering
Ethernet inefficiencies. This card also supports Cisco Fast EtherChannel.
For Port Aggregation to work, you must use a switch at the “hub” point.
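As a back-of-the-envelope check on those numbers (my own reading of Adaptec's figures, not its published math): four 100Mbps ports running full duplex give the 800Mbps theoretical ceiling, and an overall efficiency factor of roughly two-thirds, covering framing, interframe gaps and TCP/IP overhead, lands on the 532Mbps nominal figure:

    ports, line_rate = 4, 100                 # four ports at 100Mbps each
    theoretical = ports * line_rate * 2       # full duplex: 800Mbps aggregate
    efficiency = 532 / theoretical            # back-solved from Adaptec's claim
    print(f"efficiency factor: {efficiency:.3f}")        # ~0.665
    print(f"nominal: {theoretical * efficiency:.0f}Mbps") # 532Mbps

My measured results in the mid-to-high 300s work out to roughly 45 to 50 percent of that theoretical ceiling, reasonable for Chariot's demanding scripts.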
The Quartet reminded me of my old favorite, a Mylex four-port, 10Mbps
card, but with better utilities. Rumor has it that Adaptec will be coming
out with a gigabit version of this card. This model is also the only card in this roundup to fully support iSCSI, so it would be useful for small- or mid-sized SAN or NAS connectivity.
3Com 10/100 Secure NIC and 10/100 Secure Server NIC
The 3CR990-TX-97 and 3CR990SVR97 cards, with a 3Com 3XP ASIC processor,
are used in concert—one at the server, the other in the workstation. This
model boasts 168-bit encryption or “triple DES” and is intended to run
an embedded firewall. An additional box includes the firewall software.
The advantage of running the software on the NIC is that it frees up the host processor and provides an additional level of security at the actual servers and workstations in use.
to stop certain attacks originating from within the enterprise. In typical
3Com fashion, this can all be managed from one console, as the management
utilities are included.
As an example, one of the 3Com manuals describes, in simple terms, how to create a no-ping policy to keep hackers from performing basic network discovery.
In the policy server, under the management utility with the Firewall policy
server, you simply select a NIC, click on the Policy tab at the bottom
of the screen, and click on the Create Policy icon. A wizard guides you
through selecting Deny ICMP and the proper direction, which blocks ping
attempts. 3Com also provides a “standard Windows 2000 rule set,” which
will block Nmap and Ethereal, as well as provide resistance to Nimda and
Code Red. The policy is associated with the NIC, which is simple to accomplish
through the wizard.
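To illustrate what such a policy boils down to, here's the deny-ICMP idea expressed in a few lines of Python (a generic sketch of first-match rule evaluation, not 3Com's actual policy engine or file format):

    from dataclasses import dataclass

    @dataclass
    class Rule:
        protocol: str    # "ICMP", "TCP", "UDP"
        direction: str   # "inbound", "outbound" or "both"
        action: str      # "deny" or "permit"

    # The wizard's no-ping policy: deny inbound ICMP so echo
    # requests never reach the host.
    no_ping = [Rule("ICMP", "inbound", "deny")]

    def evaluate(proto: str, direction: str, rules: list[Rule]) -> str:
        """Return the first matching rule's action, else permit."""
        for rule in rules:
            if rule.protocol == proto and rule.direction in (direction, "both"):
                return rule.action
        return "permit"          # default-permit keeps the sketch simple

    print(evaluate("ICMP", "inbound", no_ping))   # deny -- ping is blocked
    print(evaluate("TCP", "inbound", no_ping))    # permit -- traffic flows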
For performance testing, I created some rule sets and measured performance with the rules applied, using Chariot and QCheck. Results were in the same ballpark
as my older 3Com NICs, so the idea of embedding the firewall functionality
within the ASIC NIC processor does save CPU time and, under my conditions,
didn’t appreciably affect performance. This solution would be most useful
for small- to mid-sized companies where state-of-the-art firewalls aren’t
in place or not updated regularly.
Testing Specifications
My test bed consisted of servers and workstations with
a variety of motherboards—Iwill, Intel and Biostar—and
a mixture of CPUs (both Intel and AMD, from 450MHz to
1800+MHz). All computers had at least 192MB RAM (most
were from 256MB to 512MB) and ran Windows 2000, 2000
Server or 2000 Advanced Server at Service Pack 2 or
3. I used a D-Link gigabit switch and a 24-port 10/100
switch with two gigabit ports from Linksys (www.linksys.com).
Though some NICs claimed to be able to do gigabit Ethernet
over Category 5 copper cabling, I wanted to eliminate
cable crosstalk and, particularly, attenuation as a
variable. With the exception of two crossover cables,
which were Category 5e (Belkin, www.belkin.com),
I used Belkin Category 6 cables exclusively.
For throughput tests, I used NetIQ Corp.’s Chariot
(www.netiq.com),
a complex, script-driven performance-evaluation tool that's used by many NIC manufacturers and is capable of simulating large environments with very few workstations. A variety
of ready-made scripts available on its Web site simulates
different application mixes. You have to install an
endpoint that runs as a service on each of the pairs
you test. NetIQ doesn’t recommend that you install the
Chariot manager portion on a computer that’s part of
the testing, even if you run the manager from a command
line. An ancillary program available from the company’s
Web site permits the creation of up to hundreds of virtual
IP addresses per network card (setaddr). Another utility
(connect_em.exe) creates open TCP ports. I typically
used between three and 50 pairs in my testing and used
two scripts: a Microsoft script and a high throughput
benchmarking script.
NetIQ’s Web site provides a useful utility that uses
some of the same endpoint principles as Chariot, for
a free download (7MB-plus), called QCheck. QCheck gives
single numbers that reflect average throughput performance,
as well as testing connectivity. I also used NetBench
7.0.2, from www.etestinglabs.com.
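For readers who want a feel for what these endpoint-based tools measure, here's a stripped-down Python sketch of a single sender/receiver pair (my own illustration of the principle; Chariot and QCheck orchestrate many such pairs, with scripts, virtual addresses and far more careful timing):

    import socket, threading, time

    CHUNK = 64 * 1024
    TOTAL = 100 * 1024 * 1024        # push 100MB per pair

    def endpoint(port: int) -> None:
        """Receiver endpoint: accept one connection and drain it."""
        with socket.socket() as srv:
            srv.bind(("", port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                while conn.recv(CHUNK):
                    pass

    def run_pair(host: str, port: int) -> float:
        """Sender side: time the transfer, return throughput in Mbps.
        (Timing stops when the send buffer drains, so this slightly
        flatters the result -- fine for a sketch.)"""
        buf = b"\x00" * CHUNK
        with socket.socket() as s:
            s.connect((host, port))
            start, sent = time.perf_counter(), 0
            while sent < TOTAL:
                s.sendall(buf)
                sent += len(buf)
        return sent * 8 / (time.perf_counter() - start) / 1e6

    threading.Thread(target=endpoint, args=(5001,), daemon=True).start()
    time.sleep(0.2)                  # give the listener time to start
    print(f"{run_pair('127.0.0.1', 5001):.0f}Mbps over loopback")

Run across two machines rather than loopback, with one pair per simulated client, and you have the skeleton of the throughput numbers reported in this review.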
Main Server
Because some of the gigabit cards, as well as the Adaptec four-port card, were capable of using 64-bit PCI slots, I needed a server with 64-bit slots. Desktop computers
typically have 32-bit, 33MHz PCI slots, even those adhering
to the most recent specification, PCI 2.2. Higher-end
servers usually have a PCI-X slot: a 64-bit slot capable of speeds up to 133MHz that will also support 66MHz and 33MHz.
To build a practical, cost-efficient server overnight,
I selected the following components, for the following
reasons, with much help from the various vendors listed
below:
AMD vs. Intel processor (www.amd.com)—The
only 64-bit slots I’ve seen on Intel boards were those
that supported dual Xeons, which are pricey. I selected
an Iwill MPX2 motherboard (rev 1.4) because it provides two 64-bit slots, high performance and good ancillary utilities, and has received very good reviews.
Iwill’s MPX2 (www.iwillusa.com)—It
supports two AMD MP processors, and I used two Athlon MP 1800+ samples from AMD. The newest BIOS from Iwill supports
at least MP 2200+.
Kingston DDR memory (www.kingston.com)—Experience
has shown Kingston to be reliable. There are or will
be other motherboards that support faster memory (for
example, many of the Intel motherboards support either
Rambus or DDR memory that lets the front side bus run
at 400 or 533MHz), but it wasn’t my goal to optimize
front-side bus configuration.
Antec case (www.Antec-inc.com)—The
SOHO 1080AMG Performance Plus includes a massive, powerful,
430-watt power supply that has the bonus of being quiet.
In fact, the whole case, with its three other fans,
ran more quietly than the D-Link gigabit switch. The case's side fan sits directly over the AGP slot and one of the 64-bit slots. As some of the NICs ran extremely hot, as measured by CompuNurse, the side fan location is perfect. Of course, the case is easy to assemble
and has other great features, but one of the best may
be the price ($159).
Western Digital Special Edition Ultra ATA 100, 80GB,
7200 rpm (www.westerndigital.com)—I
thought I wanted only an ATA/133 drive. Western Digital's
Special Edition ATA 100 drives, with an 8MB cache, are
probably faster, though a recent claim that they’re
as fast as my other, comparable SCSI drives is almost
certainly exaggerated.
SiSoftware’s Sandra Pro utility (www.sisoftware.co.uk)—A
free basic download, the Pro version verified that this
drive is very fast, close to single-task SCSI performance.
ATI All-In-Wonder Radeon 8500 (www.ati.com)—It
may be overkill for a server, but most servers and server motherboards from a wide variety of manufacturers use ATI's Rage chipset for video, and that chipset is now available only to OEMs.
—Doug Mechaber
It’s Time to Upgrade!
If you haven’t looked at NICs in the last few years, you’ll find
these cards to be more evolutionary than revolutionary, but include some
“very good to have” features. Top-tier manufacturers usually provide management
utilities with the hardware, and support was good across the board for
these manufacturers.
All of the cards I’ve described here work well and improve throughput
and response in most settings. In moderately sized environments needing
firewall functionality, the obvious choice is the 3Com 3CR990 combo. In
niche settings, my absolute favorite is the Adaptec Quartet66—more ports
are better—but if price is a consideration, make sure you need the additional
functions. Larger, high-throughput environments should move to gigabit-over-copper to preserve the existing cable plant; my choice there is either the 3Com or Intel
entries. Though Intel has slightly higher throughput and dual ports, my
favorite is the 3Com NIC for all the management utilities it includes.
When weighing your next NIC purchase, carefully consider who has, or will have, .NET support. Even if you aren't yet thinking about another migration, the average NIC lasts four years or more, even in corporate environments. The incremental cost of higher-performance cards purchased now will save time and money later.