2015-04-21



To handle “Big Data,” data centers require thousands of servers.

Just about everyone, and certainly every engineer, has heard of Moore’s Law, in which Gordon Moore predicted that technological advances would lead to a doubling of the number of transistors on a chip approximately every two years. Fewer people have heard of its networking equivalent, Metcalfe’s Law, formulated by Robert Metcalfe, which states that the value of a telecommunications network is proportional to the square of the number of connected users of the system. Simply put, the greater the number of users of a networked service, the more valuable the service becomes to the community.
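As a quick, back-of-the-envelope illustration of Metcalfe’s Law (the proportionality constant and user counts below are arbitrary placeholders, not figures from the article), doubling the number of connected users roughly quadruples the relative value of the network:

```python
# Illustrative sketch of Metcalfe's Law: network value ~ k * n^2.
# k is an arbitrary proportionality constant; values are relative, not dollars.

def metcalfe_value(users, k=1.0):
    """Relative value of a network with `users` connected endpoints."""
    return k * users ** 2

for n in (1_000, 2_000, 4_000):
    print(f"{n:>6} users -> relative value {metcalfe_value(n):,.0f}")

# Doubling the user count quadruples the relative value:
# 1,000 -> 1,000,000; 2,000 -> 4,000,000; 4,000 -> 16,000,000
```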

Now think of the Internet of Things (IoT), in which the user need not be a human, but rather a machine. Ethernet was developed as a system for connecting computers within a building over cabling run from machine to machine. It has evolved into a family of networking technologies, and its latest iteration, the 40/100 Gigabit Ethernet (GE) standard known as IEEE 802.3ba, was written with data center communications in mind.

To serve a high-speed world of constant connectivity, today’s data center is home to thousands of host servers organized into clusters. Each host consists of one or more processors, memory, a network interface, and local high-speed I/O, all tightly connected by a high-bandwidth network. Ethernet serves as the cluster interconnect in the majority of cases (with InfiniBand in second place).

Unprecedented Growth

The data center industry is constantly growing, and at an accelerating rate, as more of the world comes online and more businesses turn to the cloud for their data infrastructure. But perhaps more than any other factor, it is the IoT that could transform the data center market, along with its providers and technologies. The research firm Gartner, Inc. estimates that by 2020 the IoT will include 26 billion installed units, generating an almost unfathomably large quantity of Big Data that needs to be processed and analyzed in real time. This data will represent an ever-larger proportion of data center workloads, leaving providers facing new capacity, speed, analytics, and security challenges.

Total connected devices, billions of units (installed base). (Source: Gartner)

The Necessary Bandwidth

Search engine providers and other Big Data users (social media forums, online shopping sites, streaming-video suppliers) pay a lot of money for thick pipes to connect their data centers. Using search engines as an example, thousands of servers in a data center index the entire Web using keywords and metadata; Google indexes 20 billion pages each day. Once built, these indexes have to be moved quickly to other data centers to remain relevant. The pipe connecting the data centers must be large enough to accommodate these transfers, but after the indexes have been moved, pipe utilization drops, and the servers, which could then be reassigned to other jobs, can stall if the data does not move fast enough.
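To see why pipe size matters, consider a rough, hypothetical calculation; the 100-TB index size below is an assumption chosen purely for illustration, not a figure from Google or this article, and protocol overhead is ignored:

```python
# Hypothetical example: time to replicate a bulk search index between data centers.
# The 100-TB index size is an assumed figure for illustration; link rates are raw
# line rates with no protocol overhead, so real transfers would take somewhat longer.

INDEX_BYTES = 100e12            # 100 TB, assumed index snapshot size
BITS = INDEX_BYTES * 8

for gbps in (10, 40, 100):
    seconds = BITS / (gbps * 1e9)
    print(f"{gbps:>3} Gbit/s pipe: {seconds / 3600:5.1f} hours")

# 10 Gbit/s -> ~22.2 hours, 40 Gbit/s -> ~5.6 hours, 100 Gbit/s -> ~2.2 hours
```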

So bandwidth is one of the biggest considerations for Big Data. It’s a simple, straightforward equation: the faster the connection, the better the service. Currently, 10-Gbit/s transmissions are the fastest Ethernet connections in widespread use. To put this into perspective, consider that most homes and businesses connect to Ethernet over Category 5 twisted-pair cable, which can transmit up to 1 Gbit/s.

For their internal infrastructure, data centers are beginning to adopt the IEEE 802.3ba standard for 40- and 100-Gbit/s Ethernet connections, which are 40 and 100 times faster, respectively, than the household twisted-pair cable. First defined by the IEEE in 2010, 100 Gigabit Ethernet (100GbE) and 40 Gigabit Ethernet (40GbE) represent the first instance in which two different Ethernet speeds were specified in a single standard. The decision to include both speeds came from pressure to support the 40-Gbit/s rate for local server applications, while 100GbE better targets network aggregation applications such as service provider client connections, Internet backbones, and network cores. Two years ago the IEEE Bandwidth Assessment Report estimated that core networking bandwidth was doubling every 18 months, with server bandwidth doubling every 24 months.
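Those doubling rates compound quickly. Below is a minimal sketch of how demand grows under them, normalized to an arbitrary baseline of 1.0 (the baseline and time spans are illustrative, not taken from the report):

```python
# Rough projection of the IEEE Bandwidth Assessment doubling rates.
# The starting value of 1.0 is a normalized baseline in relative units,
# not an actual traffic measurement.

def growth(years, doubling_months):
    """Relative bandwidth demand after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for years in (2, 4, 6):
    core = growth(years, 18)     # core networking: doubles every 18 months
    server = growth(years, 24)   # server I/O: doubles every 24 months
    print(f"after {years} years: core x{core:.1f}, server x{server:.1f}")

# After 6 years, core demand is ~16x the baseline while server demand is ~8x.
```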

Deployment of 40- and 100-Gbit/s Ethernet links within data centers has mostly started where traffic is heaviest, such as from rack to rack within the center. Most centers are using 40GbE, but with demand increasing, rapid migration to 100GbE is just a matter of time. Internet service providers have been installing 100GbE since it became available on routers, because they need the biggest pipes.

Mobile device apps are also driving what is known as east-west traffic (traffic between and among servers, or traffic from storage to server) instead of the traditional north-south traffic (client to server). According to Cisco, last year’s mobile data traffic was nearly 18 times the size of the entire global Internet in 2000. One Exabyte (EB) of traffic traversed the global Internet in 2000 (1 EB equals 10^18 bytes, or 1 billion gigabytes), and in 2013 mobile networks carried nearly 18 EB of traffic.

Intel has calculated that for every 600 phones that are turned on, a whole server’s worth of capacity has to be utilized to keep these phones fed. Every 120 tablets require another server’s worth of capacity, and so do every 20 digital signs and every 12 surveillance cameras.
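Intel’s rule of thumb translates directly into a capacity estimate. In the sketch below, only the per-server ratios come from the figures above; the deployment counts are made-up inputs for illustration:

```python
import math

# Devices supported per server, taken from Intel's rule of thumb quoted above.
DEVICES_PER_SERVER = {"phone": 600, "tablet": 120, "digital_sign": 20, "camera": 12}

# Hypothetical deployment (made-up numbers, purely for illustration).
deployment = {"phone": 1_200_000, "tablet": 60_000, "digital_sign": 4_000, "camera": 2_400}

total = 0
for device, count in deployment.items():
    servers = math.ceil(count / DEVICES_PER_SERVER[device])
    total += servers
    print(f"{count:>9,} {device}s -> {servers:,} servers")

print(f"total server capacity required: {total:,}")
```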

At the Speed of Light

Fiber-optic lines transfer bits and bytes as light pulses moving along a cable. In a data center, the data goes into racks connected to internal routers, which in turn direct the information to servers. The IEEE 802.3ba standard allows multiple 10-Gbit/s channels to run in parallel or via wavelength division multiplexing (WDM), depending on whether single-mode or multimode fiber (MMF) cables are used. Four or ten 10-Gbit/s channels are aggregated to reach 40 or 100 Gbit/s. In most cases, MMF cables are used to provide the additional fiber strands needed for 40- and 100-Gbit/s connections.
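The lane arithmetic is straightforward aggregation; the sketch below shows only the nominal lane math and ignores real-world encoding and alignment overhead:

```python
# Illustrative lane math for IEEE 802.3ba: aggregating 10-Gbit/s lanes into 40/100 GbE.
# Real PHYs add 64b/66b encoding and alignment overhead, which is ignored here.

LANE_RATE_GBPS = 10

def aggregate_rate(lanes, lane_rate_gbps=LANE_RATE_GBPS):
    """Nominal aggregate rate of a multi-lane Ethernet link."""
    return lanes * lane_rate_gbps

print(f"40GbE:  {aggregate_rate(4)} Gbit/s over 4 lanes")
print(f"100GbE: {aggregate_rate(10)} Gbit/s over 10 lanes")
```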

Engineers can find Fiber Optic Transmitters/Transceivers/Receivers on the Mouser website from suppliers including Avago, Emerson Connectivity, Omron, Sharp, Toshiba, and TT Electronics.



The structure of a typical single-mode fiber: 1. Core 8 µm diameter; 2. Cladding 125 µm dia.; 3. Buffer 250 µm dia.; 4. Jacket 400 µm dia. (Source: Wikipedia)

With a larger core diameter, MMF cable permits multiple modes (paths) of light to travel down its length. Single-mode optical fiber (SMF) is designed to carry light directly down the fiber in a single mode and has a much narrower core than MMF. SMF is better at retaining the fidelity of each light pulse over longer distances than multimode fiber, because intermodal dispersion cannot occur and the pulses spread out less as they travel.

WDM combines multiple wavelengths onto a single fiber for single-mode transfer. This allows more data to be transferred on a single cable by using different wavelengths (i.e., colors) of laser light for different pieces of information. A multiplexer and a de-multiplexer, placed at either end of the cable, join or split this mixed-light signal.
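A toy model of that multiplex/demultiplex step might look like the following; the wavelength values are illustrative placeholders rather than a grid specified in the article:

```python
# Toy model of wavelength-division multiplexing: several independent data streams
# are tagged with different wavelengths, travel over one fiber, and are separated
# again at the far end. Wavelengths (in nm) are illustrative placeholders.

def multiplex(channels):
    """Combine {wavelength_nm: payload} channels into one 'fiber' signal (a list)."""
    return [(wavelength, payload) for wavelength, payload in channels.items()]

def demultiplex(fiber_signal):
    """Separate the combined signal back into per-wavelength channels."""
    return {wavelength: payload for wavelength, payload in fiber_signal}

channels = {1271: b"lane-0 data", 1291: b"lane-1 data",
            1311: b"lane-2 data", 1331: b"lane-3 data"}

fiber_signal = multiplex(channels)          # one physical fiber carries all four
recovered = demultiplex(fiber_signal)       # receiver splits them by wavelength

assert recovered == channels
print(f"{len(channels)} channels recovered intact from a single fiber")
```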

Engineers can find Ethernet media converter modules from suppliers such as Phoenix Contact that allow full-duplex transmission from 10/100Base-TX (the Fast Ethernet standard supported by the vast majority of Ethernet hardware currently produced) to a single simplex glass fiber using WDM technology. For example, the manufacturer’s part 2902659 offers full-duplex communication over only one fiber and transmission ranges up to 38 km.

Going the Distance

Data centers are becoming massive in scale, occupying millions of square feet and requiring longer and longer reaches for connectivity. A typical cluster has several kilometers of fiber-optic cable acting as a highway system interconnecting racks of servers on the data center floor. The main barrier to adoption of 100-Gbit/s Ethernet connectivity has been not only the expense but also the lack of switch density. The distance between switches in modern data centers often exceeds 100 m; in many cases it reaches 500 m, and in some it can be a kilometer or more.

This leaves an enormous opportunity for suppliers to develop high-speed, low-power optical links that can span great distances in data centers while operating at data rates of up to 100 Gbit/s. Several consortiums have recently emerged to satisfy data center operators’ demand for an affordable, low-power 100GbE optical interface that can reach beyond 100 m, a range that falls between the IEEE’s 100GBASE-SR4 specification, which covers reaches up to 100 m, and 100GBASE-LR4, which targets links up to 10 km.

Earlier this year Intel and Arista (along with eBay, Altera, Dell, Hewlett-Packard and others) formed an open industry group and a specification that addresses data center reaches of up to 2 km over duplex single-mode fiber with four lanes of 25-Gbit/s light paths. The CLR4 100G alliance is designing an affordable, low-power optical interface for a Quad Small Form-factor Pluggable (QSFP or QSFP+) transceiver. Today’s standard optics support 10 lanes of 10 Gbit/s, which leads to thicker, more expensive cables. The CLR4 100G group says its standard will reduce fiber count by 75 percent.
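One plausible reading of that 75 percent figure (this is an interpretation, not a breakdown published by the alliance) compares a four-lane parallel optic, which needs a fiber per lane in each direction, against a WDM optic that carries all four lanes on one duplex fiber pair:

```python
# Hedged interpretation of the fiber-count comparison for a single 100GbE link.
# Parallel optics need one fiber per lane per direction; WDM optics stack all
# lanes onto one fiber per direction.

def fibers_parallel(lanes):
    return lanes * 2          # transmit + receive fiber for every lane

def fibers_wdm():
    return 2                  # one duplex fiber pair, lanes separated by wavelength

parallel = fibers_parallel(4)                 # 4-lane parallel optic: 8 fibers
wdm = fibers_wdm()                            # CLR4/CWDM4-style optic: 2 fibers
reduction = 100 * (parallel - wdm) / parallel

print(f"parallel 4-lane link: {parallel} fibers, WDM link: {wdm} fibers "
      f"({reduction:.0f}% fewer)")
```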

Finisar’s FTL410QD2C QSFP+ transceiver module. (Source: Finisar)

Mouser offers QSFP+ transceivers from Avago Technologies, Finisar, 3M and TE Connectivity. The compact QSFP+ form factor enables low power consumption and high density. For example, Finisar’s FTL410QD2C QSFP+ transceiver module is designed for use in 40-Gbit/s links over parallel multimode fiber, including breakout into four 10-Gbit/s channels.

CWDM4 MSA (Coarse Wavelength Division Multiplexed 4x25G Multi-Source Agreement) is another group addressing 100GbE over 500 m to 2 km. The four members of the CWDM4 MSA (Avago Technologies, Finisar Corp., JDSU, and Oclaro) say they will offer interoperable 2-km 100G interfaces taking a 4x25G approach over duplex single-mode fiber (SMF).

Six technology vendors have created the Parallel Single Mode 4-lane (PSM4) MSA Group, which will use a four-fiber, parallel approach to 100 Gbits/s in the data center. The companies (Avago Technologies, Brocade, JDSU, Luxtera, Oclaro, and Panduit) say that there is a need for PSM4 optical transceivers to fill the requirement for low-cost 100-Gbit/s connections at reaches of 500 m.

More to Come

The rapid growth of server, network, and Internet traffic is driving the need for ever-higher data rates, higher density, and lower-cost optical fiber Ethernet solutions. To support evolving architectures, the IEEE is working on new physical layer requirements. This project aims to specify additions to, and appropriate modifications of, IEEE Standard 802.3: it adds 100-Gbit/s Physical Layer (PHY) specifications and management parameters using a four-lane electrical interface for operation on multimode and single-mode fiber-optic cables, and it specifies optional Energy Efficient Ethernet (EEE) for 40- and 100-Gbit/s operation over fiber-optic cables. In addition, it will add 40-Gbit/s PHY specifications and management parameters for operation on extended-reach (greater than 10 km) single-mode fiber-optic cables. Called P802.3bm, the standard is estimated to be complete in the first quarter of 2015.

400 Gbit/s is under development as the next Ethernet speed and is expected on the market after 2016. The IEEE 802.3 400 GE Study Group, formed in March 2013, is establishing initial objectives for 400GE using OM3 or OM4 fiber and 25 Gbit/s per channel, similar to the proposed P802.3bm standard. The new 400 GE standard is estimated to be complete by 2017.

Data is being generated in unprecedented quantities. Research firm IDC predicts that the volume of data will double every 18 months. Twitter receives over 200 million tweets per day; Facebook collects more than 15 TBytes every day. And the Internet of Things, with machine-generated data from sensors, devices, RFID tags, and more, could easily dwarf these numbers. But volume is only part of the equation. Velocity matters, too. As more entities engage in social media and tie into the Internet, real-time or near-real-time response becomes critical. This rising demand for web services and cloud computing has created a need for large-scale data centers. But without big pipes and fast speeds, the data centers designed to cope with Big Data will drown in it.
