2015-05-22

Intel’s new power chip aims at the high-end market dominated by RISC, POWER, mainframes and OLTP; new ASICs will make it scream in hyperscale datacenters as well.



Intel did something a little different with the design of version 3 of its top-end Xeon E7 processor family: It took advantage of Xeon Phi’s new position as Intel’s main entry in supercomputing to expand the range of Xeon’s capabilities, covering not only Big-Data applications but full-bore online transaction processing systems, the traditional stronghold of mainframes and RISC processing architectures.

They got more power, too, of course. The maximum core count in the Xeon E7-4800 v3 and E7-8800 v3 product families rose 20 percent, from 15 to 18 – while the cores themselves got an upgrade and power boost from the switch from the Ivy Bridge microarchitecture to Haswell, according to Tom’s IT Pro.

The E7’s maximum cache rose from 37.5MB to 45MB, while the maximum memory per socket rose from 1TB to 1.5TB. Servers can be configured with as many as 32 sockets, with a maximum memory load of 12 terabytes per eight-socket installation.

Support for version two of the Advanced Vector Extensions (AVX2) widens SIMD integer instructions for numeric processing and adds improved data handling, data access and memory management functions, all of which will boost performance in servers dedicated to data-intensive workloads, according to Intel.

The new processors also support DDR4 memory as well as the slower DDR3. Among the OLTP/mainframe-market-challenging additions to the feature list is a whole raft of reliability, availability and serviceability (RAS) functions delivered via Intel’s Run Sure technology, along with the Intel Advanced Encryption Standard New Instructions (AES-NI). Those additions are designed to manage memory more smoothly and efficiently and to handle large chunks of data with more alacrity than previous generations. (An Intel white paper posted here goes into more detail on the E7’s RAS enhancements.)

The result, according to Intel’s announcement, is a processor that can deliver up to 10x the performance per dollar, with a total cost of ownership only 15 percent as high as previous editions – all packaged in 12 different configurations of processors and servers from 17 manufacturers.

Those reliability features and sky-high per-socket scalability could serve ably in Cloud implementations, but the E7 leaves most of those jobs to Xeon E3 and E5 and takes aim instead at both real-time Big-Data business analytics and at the online transaction processing (OLTP) server market, whose upper reaches are dominated by RISC-based UNIX servers.

Intel was still leaning on cost/performance comparisons highlighting the cost advantage of x86 over RISC as recently as last year, but the enhancements in v3 of the E7 family could deliver 3.2 times the output of an E7 v2 system running SAP’s HANA in-memory database platform, according to analyst Kurt Marko of MarkoInsights.

That’s a huge jump in the kind of performance required for in-memory databases as well as for the OLTP applications in which the E7 v3 would compete for business with IBM POWER8 systems that cost roughly 10 times as much as an equivalent E7 v3-powered system, Marko wrote.

The higher-end IBM machines still have some advantages in memory management and in some I/O interfaces that can drastically cut their cost per transaction – especially for in-memory databases built largely on solid-state drives tweaked to behave like RAM, at 1/17 the cost of actual memory, Marko wrote.

The flexibility of Intel’s lineup of two-, four- and eight-socket systems, however, can help customers reduce their spending. For example, where OLTP licenses are priced by core count, choosing processors with fewer cores but frequencies high enough to compensate can cut licensing costs, Marko suggested.

“If you want the absolutely highest performance on SAP, you would look to IBM, but it comes with some caveats,” Patrick Moorhead, principal analyst at Moor Insights & Strategy, told PCMag. “The big difference comes in performance per dollar, where Intel-based systems from HP, Dell, or Lenovo could perform better by 5-10x versus an IBM Power-based system.”

Intel is trying to build that advantage into an even bigger one by adding application-specific integrated circuits (ASICs) to custom-designed Xeon chips in order to deliver even more workload-specific power to large-scale Cloud companies that are already buying versions of Xeon customized to the workloads in their particular hyperscale datacenters.

The chips are coming from eASIC, whose claim to fame is the ability to design and ship custom ASICs with unusually short lead times, based on its specialized eASIC Nextreme architecture, designed to cut in half the time required to write software for the custom chips compared to coding for field-programmable gate arrays (FPGAs), the other major option for adding special functions to existing chip designs.

“Having the ability to highly customize our solutions for a given workload will not only make the specific application run faster, but also help accelerate the growth of exciting new applications like visual search,” Diane Bryant, SVP and GM of Intel’s Data Center Group, said in a release announcing the deal. “This announcement helps broaden our portfolio of customized products to provide our customers with the flexibility and performance they need.”

Intel did not announce which chips or which custom-chip customers would get the new eASIC-enabled Xeons, just as it has not announced details on when it plans to add FPGA processors to expand the abilities of Xeon chips. Because they’re programmable, FPGA chips would allow customers to tune performance for a specific workload and then retune it later to adapt to changes in the workloads they run.

“You want to use an ASIC when the software never changes or when you are looking for the lowest cost or highest level of performance per die size,” Patrick Moorhead told eWEEK. “ASICs are used where standards rarely change if ever.”

The post Intel Fires New Xeon at Cloud, Mainframe Simultaneously appeared first on Go Parallel.
