Intel Reveals Details for Future High-Performance Computing System Building Blocks as Momentum Builds for Intel® Xeon Phi™ Product
Discloses Future Generation Intel Xeon Phi Processor and New Performance and Architectural Details for Intel® Omni-Path Fabric Interconnect Technology
November 17, 2014
NEW ORLEANS–(BUSINESS WIRE)–SUPERCOMPUTING CONFERENCE (SC14)–Intel Corporation today announced several new and enhanced technologies bolstering its leadership in high-performance computing (HPC). These include disclosure of the future generation Intel® Xeon Phi™ processor, code-named Knights Hill, and new architectural and performance details for Intel® Omni-Path Architecture, a new high-speed interconnect technology optimized for HPC deployments.
Intel also announced new software releases and collaborative efforts designed to make it easier for the HPC community to extract the full performance potential from current and future Intel industry-standard hardware.
Together, these new HPC building blocks and industry collaborations will help to address the dual challenges of extreme scalability and mainstream use of HPC while providing the foundation for a cost-effective path to exascale computing.
News Facts
Intel disclosed that its future, third-generation Intel Xeon Phi product family, code-named Knights Hill, will be built using Intel’s 10nm process technology and integrate Intel Omni-Path Fabric technology. Knights Hill will follow the upcoming Knights Landing product, with first commercial systems based on Knights Landing expected to begin shipping next year.
Industry investment in Intel Xeon Phi processors continues to grow with more than 50 providers expected to offer systems built using the new processor version of Knights Landing, with many more systems using the coprocessor PCIe card version of the product. To date, committed customer deals using the Knights Landing processor represent over 100 PFLOPS of system compute.
Recent high-profile Knights Landing deals include the Trinity supercomputer, a joint effort between Los Alamos and Sandia National Laboratories, and the Cori supercomputer, announced by the U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center. Additionally, DownUnder GeoSolutions, a geosciences company, recently announced the largest commercial deployment of current-generation Intel Xeon Phi coprocessors, and the National Supercomputing Center IT4Innovations announced a new supercomputer that will become the largest Intel Xeon Phi coprocessor-based cluster in Europe.
Intel disclosed that the Intel Omni-Path Architecture is expected to offer 100 Gbps line speed and up to 56 percent lower switch fabric latency in medium-to-large clusters than InfiniBand alternatives.1 The Intel Omni-Path Architecture will use a 48-port switch chip to deliver greater port density and system scaling compared to the current 36-port InfiniBand alternatives. Providing up to 33 percent more nodes per switch chip is expected to reduce the number of switches required, simplifying system design and reducing infrastructure costs at every scale. Expected system scaling benefits, illustrated in the worked example after this list, include:
Up to 1.3x greater port density than InfiniBand – enabling smaller clusters to maximize single switch investments.2
Up to 50 percent fewer switches than a comparable medium- to large-size InfiniBand-based cluster.3
Up to 2.3x higher scaling in a two-tier fabric configuration using the same number of switches as an InfiniBand-based cluster – allowing for more cost-effective scaling for very large cluster-based systems.4
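How the larger switch radix drives these figures can be illustrated with a rough back-of-envelope check. The short Python sketch below assumes a standard full fat-tree (folded-Clos) topology built from k-port switch ASICs, which supports up to 2 × (k/2)³ end nodes; this formula, the variable names, and the three-level layout are illustrative assumptions rather than Intel’s published sizing methodology, but they reproduce the node counts cited in footnote 4.

    # Back-of-envelope check of the switch-scaling claims above.
    # Assumption: a full fat-tree (folded Clos) built from k-port switch ASICs
    # supports up to 2 * (k // 2) ** levels end nodes -- a textbook approximation,
    # not Intel's published sizing methodology.

    def max_nodes(ports_per_asic: int, levels: int = 3) -> int:
        """Maximum end nodes in a full fat-tree of k-port switch ASICs."""
        return 2 * (ports_per_asic // 2) ** levels

    omni_path_ports = 48   # Intel Omni-Path switch ASIC
    infiniband_ports = 36  # current InfiniBand switch ASIC

    # "Up to 1.3x greater port density" / "33 percent more nodes per switch chip"
    print(f"port ratio: {omni_path_ports / infiniband_ports:.2f}x")          # ~1.33x

    # Footnote 4: 27,648 vs. 11,664 nodes -> "up to 2.3x higher scaling"
    op_nodes = max_nodes(omni_path_ports)     # 27,648
    ib_nodes = max_nodes(infiniband_ports)    # 11,664
    print(f"{op_nodes} vs. {ib_nodes} nodes -> {op_nodes / ib_nodes:.2f}x")  # ~2.37x

The 48/36 port ratio matches the roughly 1.3x port-density and 33 percent node-count claims, and the resulting node-count ratio of about 2.37x is consistent with the "up to 2.3x higher scaling" figure cited in footnote 4.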
Intel launched the Intel Fabric Builders Program to create an ecosystem working together to enable solutions based on the Intel Omni-Path Architecture. An expansion of the Intel Parallel Computing Centers was also announced, bringing the total to more than 40 centers in 13 countries working to modernize more than 70 of HPC’s most popular community codes.
Intel expanded its Lustre* software capabilities with the release of Intel® Enterprise Edition for Lustre software v2.2 and Intel® Foundation Edition for Lustre software. New appliances using the enhanced Intel® Solutions for Lustre software are currently being offered from Dell*, DataDirect Networks* and Dot Hill*.
Continued TOP500 Momentum
Intel-based systems account for 86 percent of all supercomputers and 97 percent of all new additions, according to the 44th edition of the TOP500 list announced today. Two years after the introduction of the first-generation Intel Xeon Phi product family, systems based on these many-core coprocessors now account for 17 percent of the aggregate performance of all TOP500 supercomputers. The complete TOP500 list is available at www.top500.org.
Supporting Quotes
“Intel is excited about the strong market momentum and customer investment in the development of HPC systems based on current and future Intel Xeon Phi processors and high-speed fabric technology,” said Charles Wuischpard, vice president, Data Center Group, and general manager of Workstations and HPC at Intel. “The integration of these fundamental HPC building blocks, combined with an open standards-based programming model, will maximize HPC system performance, broaden accessibility and use, and serve as the on-ramp to exascale.”
“The combination of Intel Xeon Phi coprocessors with our proprietary software allows us to provide our customers with one of the most powerful geo-processing production systems to date,” said Dr. Matt Lamont, managing director, DownUnder GeoSolutions. “Our Intel Xeon Phi powered solutions enable interactive processing and imaging from each of our geophysicists’ individual computers. A testing regime that once took weeks can now be achieved in days. We’re thrilled with the Intel Xeon Phi coprocessors and look forward to evaluating the next-generation product.”
Supporting Resources
Intel Xeon Phi product page: www.intel.com/xeonphi
Intel Omni-Path Architecture page: www.intel.com/omnipath
Intel Fabric Builders Program page: http://fabricbuilders.intel.com
Intel Parallel Computing Centers page: https://software.intel.com/en-us/ipcc
Intel Enterprise Edition for Lustre software v2.2:
http://info.intel.com/HPDDSC14AnnouncementLandingPage.html
Intel Foundation Edition for Lustre software:
http://info.intel.com/HPDDSC14AnnouncementLandingPage2.html
About Intel
Intel (NASDAQ: INTC) is a world leader in computing innovation. The company designs and builds the essential technologies that serve as the foundation for the world’s computing devices. As a leader in corporate responsibility and sustainability, Intel also manufactures the world’s first commercially available “conflict-free” microprocessors. Additional information about Intel is available at newsroom.intel.com and blogs.intel.com, and about Intel’s conflict-free efforts at conflictfree.intel.com.
Intel, the Intel logo, Xeon and Intel Xeon Phi are trademarks of Intel Corporation in the United States and other countries.
*Other names and brands may be claimed as the property of others.
1 Latency reductions based on Mellanox CS7500 Director Switch and Mellanox SB7700/SB7790 Edge switches compared to preliminary Intel simulations for Intel Omni-Path switches based on a 1024-node full bisectional bandwidth (FBB) Fat-Tree configuration (2-tier, 5 total switch hops), using a 48-port switch for Intel Omni-Path cluster and 36-port switch ASIC for either Mellanox or Intel® True Scale clusters. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.
2 As compared to a shipping 36-port edge InfiniBand switch.
3 Claim of up to 50 percent fewer switches based on a 1024-node full bisectional bandwidth (FBB) Fat-Tree configuration, using a 48-port switch for the Intel Omni-Path cluster and a 36-port switch ASIC for either Mellanox or Intel® True Scale clusters.
4 2.3X claim based on a cluster of up to 27,648 nodes configured with the Intel Omni-Path Architecture using 48-port switch ASICs, as compared with a 36-port switch chip that can support up to 11,664 nodes.