2014-01-15

Intel’s Math Kernel Library provides multicore and vectorized implementations of mathematical functions, eliminating the need to code your own math routines. Jeff Cogswell explores MKL, which includes some implementations of existing standard math libraries.

In the new year, we’re going to be taking on a lot of topics that we haven’t explored much before here at Go Parallel. Intel’s Parallel Studio is an entire suite of products, and one piece I haven’t covered in depth in my blogs so far is the Intel Math Kernel Library, or MKL.

Here’s the basic idea behind MKL. It’s a library of advanced mathematical routines, developed by Intel’s engineers to take advantage of SIMD and multicore technology while remaining compatible, in terms of function names and prototypes, with the standard math libraries programmers already use. By using this library, you don’t need to code your own mathematical routines; instead, you can start with a heavily tested library developed by a team of knowledgeable engineers. In addition to its own math functions, MKL provides implementations of the standard BLAS, LAPACK, and FFTW interfaces.

Basic Linear Algebra Subprograms

BLAS stands for Basic Linear Algebra Subprograms. The BLAS library, originally developed for Fortran in 1979, is divided into three levels based on what you’re operating on: vectors or matrices. If you’re doing operations on two vectors, that’s level 1. If you’re operating on a vector and a matrix, that’s level 2. And if you’re operating on two matrices, that’s level 3.

You can see, then, that level 1 (vector by vector) includes operations such as the dot product, where you multiply the corresponding elements of the two vectors and sum the products. Level 2 covers work such as solving triangular systems, which involves a vector and a matrix. And when you multiply two matrices together, you’re working at level 3. We’ll be covering these in detail in future blogs.
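To give you a taste, here’s a minimal sketch of a level 1 dot product using the CBLAS-style cblas_ddot routine that MKL exposes. The single mkl.h header and the build details are assumptions about a typical MKL installation; check the documentation for your own copy.

// Minimal sketch: BLAS level 1 dot product via MKL's CBLAS interface.
// Assumes the umbrella header <mkl.h>; compile/link flags vary by installation.
#include <mkl.h>
#include <cstdio>

int main() {
    const int n = 4;
    double x[] = {1.0, 2.0, 3.0, 4.0};
    double y[] = {5.0, 6.0, 7.0, 8.0};

    // Multiply corresponding elements of x and y and sum the products.
    // The two stride arguments (1, 1) say the elements are contiguous.
    double result = cblas_ddot(n, x, 1, y, 1);

    std::printf("dot product = %f\n", result);   // expect 70.0
    return 0;
}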

MKL includes an implementation of BLAS that you can use whether you’re compiling C++ or Fortran.
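At the other end of the scale, a level 3 matrix-matrix multiply goes through the standard dgemm routine. Here’s a rough sketch using the CBLAS calling convention; the row-major layout, the no-transpose options, and the alpha/beta scaling parameters follow the general CBLAS conventions rather than anything MKL-specific.

// Minimal sketch: BLAS level 3 multiply, C = alpha*A*B + beta*C,
// using the standard CBLAS dgemm call that MKL provides.
#include <mkl.h>
#include <cstdio>

int main() {
    // A 2x3 matrix times a 3x2 matrix gives a 2x2 result (row-major storage).
    double A[] = {1, 2, 3,
                  4, 5, 6};
    double B[] = { 7,  8,
                   9, 10,
                  11, 12};
    double C[4] = {0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3,        // M, N, K
                1.0, A, 3,      // alpha, A, leading dimension of A
                B, 2,           // B, leading dimension of B
                0.0, C, 2);     // beta, C, leading dimension of C

    // Expect: 58 64 / 139 154
    std::printf("%f %f\n%f %f\n", C[0], C[1], C[2], C[3]);
    return 0;
}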

Linear Algebra Package

Another standard that MKL supports is LAPACK, which stands for Linear Algebra Package. LAPACK was first released in 1992, also for Fortran. Although the library’s focus is linear algebra, that covers a lot of ground in practice. For example, it provides routines for solving systems of linear equations. Recall from mathematics that to solve a system of linear equations, you can take the coefficients of the equations and place them in a matrix, then grind through various procedures to determine the solutions. This is one area where LAPACK excels: you provide the coefficients, and it provides the routines that will find the solutions. Other areas where LAPACK can help are the fundamentals of linear algebra, such as finding eigenvalues and calculating matrix factorizations. There is much more, of course, as you’ll see in future blogs. As with BLAS, you can use LAPACK from both Fortran and C++.
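To make that concrete, here’s a minimal sketch of solving a small system of linear equations through the LAPACKE C interface that ships with MKL. LAPACKE_dgesv is the standard LAPACKE name for the double-precision general solver; the mkl.h header, the lapack_int type, and the build details are assumptions about a typical installation.

// Minimal sketch: solve the system  x + 2y = 5,  3x + 4y = 6
// with the standard LAPACKE_dgesv routine included in MKL.
#include <mkl.h>     // pulls in the LAPACKE declarations in MKL
#include <cstdio>

int main() {
    // Coefficient matrix (row-major) and right-hand side.
    double a[] = {1.0, 2.0,
                  3.0, 4.0};
    double b[] = {5.0, 6.0};
    lapack_int ipiv[2];   // pivot indices produced by the LU factorization

    // n = 2 equations, 1 right-hand side; the solution overwrites b.
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1,
                                    a, 2, ipiv, b, 1);

    if (info == 0)
        std::printf("x = %f, y = %f\n", b[0], b[1]);  // expect x = -4, y = 4.5
    else
        std::printf("dgesv failed: info = %d\n", (int)info);
    return 0;
}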

Fast Fourier Transforms

MKL also includes an implementation of the interface of the FFTW library, which stands for Fastest Fourier Transform in the West. FFTW was developed at MIT and first released in 1997.

Fast Fourier Transforms, or FFTs, are efficient algorithms for computing the Discrete Fourier Transform. Fourier analysis is the branch of mathematics that deals with finding the component frequencies that make up a wave. For example, in music, the complex sound produced by a clarinet is built from a whole series of sine waves: a fundamental plus its overtones. If the clarinet plays a C note, you can, in theory, use a Fourier transform to calculate all the sine waves which, when played simultaneously, would sound identical to the clarinet’s note. The classical Fourier transform is a purely mathematical operation defined on continuous functions. Since computers work with discrete samples, computer scientists developed a version suited to them, the Discrete Fourier Transform. From there, various “fast” algorithms have been developed, and the MIT library’s developers claimed theirs was the fastest at the time, which is reflected in its name.

Math Kernel Library includes an implementation of the FFTW interface, and, as with the others, you can call it from both C++ and Fortran.
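Here’s a minimal sketch of what that looks like in C++ through the FFTW3-style API, transforming a short complex signal. MKL ships wrapper headers for this interface, but the exact header name (fftw3.h here) and the link flags depend on how your copy of MKL is installed, so treat the build details as assumptions.

// Minimal sketch: forward DFT of a short complex signal using the
// standard FFTW3-style API that MKL's FFTW wrappers implement.
#include <fftw3.h>
#include <cmath>
#include <cstdio>

int main() {
    const int n = 8;
    fftw_complex* in  = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex* out = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * n);

    // A simple test signal: one cosine cycle spread across the 8 samples.
    for (int i = 0; i < n; ++i) {
        in[i][0] = std::cos(2.0 * 3.14159265358979 * i / n);  // real part
        in[i][1] = 0.0;                                        // imaginary part
    }

    // Plan the transform, execute it, and inspect the frequency-domain result.
    // For this input, bins 1 and 7 should each come out near 4 + 0i.
    fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);

    for (int i = 0; i < n; ++i)
        std::printf("bin %d: %f + %fi\n", i, out[i][0], out[i][1]);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}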

Conclusion

In addition to supporting multicore and vectorized versions of standard math libraries, MKL also includes many other libraries that aren’t implementations of existing standards but are just as useful. Next time we’ll look at what’s available there before diving in and trying out the MKL.
