Recommendations for a usable, fast C++ matrix library?

  • Does anyone have recommendations on a usable, fast C++ matrix library?

    What I mean by usable is the following:

    • Matrix objects have an intuitive interface (e.g., I can index by rows and columns)
    • I can do anything with the matrix class that I can do with LAPACK and BLAS
    • Easy to learn and use API
    • Relatively painless to install in Linux (I use Ubuntu 11.04 right now)

    To me, usability is more important right now than speed or memory usage, to avoid premature optimization. In writing the code, I could always use 1-D arrays (or STL vectors) with proper index or pointer arithmetic to emulate a matrix, but I'd prefer not to, in order to avoid bugs. I'd also like to focus my mental effort on the actual problem I'm trying to solve and program in the problem domain, rather than spend part of my finite attention remembering all of the little programming tricks needed to emulate matrices as arrays, the LAPACK calling conventions, et cetera. Plus, the less code I have to write, and the more standardized it is, the better.

    Dense versus sparse doesn't matter yet; some of the matrices I am dealing with will be sparse, but not all of them. However, if a particular package handles dense or sparse matrices well, it is worth mentioning.

    Templating doesn't matter much to me either, since I'll be working with standard numeric types and don't need to store anything other than doubles, floats, or ints. It's nice, but not necessary for what I'd like to do.

    Is using CUDA an option?

    It could be, later on. I'm not interested in CUDA right now because I'm building a library for an application where matrix multiplication is the least of my concerns. The bulk of the effort will be spent on calling a mixed-integer linear program solver, so using CUDA would be overkill. After I finish my thesis, I plan on looking at algorithms that are more linear algebra heavy and less optimization-centric. I definitely encourage you to post about CUDA libraries, though, if you have experience with them, because I'm sure that other people would be interested to know your thoughts.

    What about Intel MKL and IPP?

  • I've gathered the following from online research so far:

    I've used Armadillo a little bit, and found the interface intuitive enough; it was also easy to locate binary packages for Ubuntu (and, I'm assuming, other Linux distros). I haven't compiled it from source, but my hope is that it wouldn't be too difficult. It meets most of my design criteria and focuses on dense linear algebra. It can call LAPACK or MKL routines. There is generally no need to compile Armadillo at all: it is a purely template-based library, so you just include the header and link against BLAS/LAPACK, MKL, etc.

    I've heard good things about Eigen, but haven't used it. It claims to be fast, uses templating, and supports dense linear algebra. It doesn't have LAPACK or BLAS as a dependency, but appears to be able to do everything that LAPACK can do (plus some things LAPACK can't). A lot of projects use Eigen, which is promising. It has a binary package for Ubuntu, but as a header-only library it's trivial to use elsewhere too.

    The Matrix Template Library, version 4, also looks promising, and uses templating. It supports both dense and sparse linear algebra, and can call UMFPACK as a sparse solver. The full feature set is somewhat unclear from the project website. It has a binary package for Ubuntu, downloadable from that site.

    PETSc, written by a team at Argonne National Laboratory, has access to sparse and dense linear solvers, so I'm presuming that it can function as a matrix library. It's written in C, but has C++ bindings, I think (and even if it didn't, calling C from C++ is no problem). The documentation is incredibly thorough. The package is a bit overkill for what I want to do now (matrix multiplication and indexing to set up mixed-integer linear programs), but could be useful as a matrix format for me in the future, or for other people who have different needs than I do.

    Trilinos, written by a team at Sandia National Laboratories, provides object-oriented C++ interfaces for dense and sparse matrices through its Epetra component, and templated interfaces for dense and sparse matrices through its Tpetra component. It also has components that provide linear solver and eigensolver functionality. The documentation does not seem to be as polished or prominent as PETSc's; Trilinos seems like the Sandia analog of PETSc. PETSc can call some of the Trilinos solvers. Binaries for Trilinos are available for Linux.

    Blitz is a C++ object-oriented library that has Linux binaries. It doesn't seem to be actively maintained (2012-06-29: a new version has just appeared yesterday!), although the mailing list is active, so there is some community that uses it. It doesn't appear to do much in the way of numerical linear algebra beyond BLAS, and looks like a dense matrix library. It uses templates.

    Boost::uBLAS is a C++ object-oriented library and part of the Boost project. It supports templating and dense numerical linear algebra. I've heard it's not particularly fast.

    The Template Numerical Toolkit is a C++ object-oriented library developed by NIST. Its author, Roldan Pozo, seems to contribute patches occasionally, but it doesn't seem to be under active development any longer (last update was 2010). It focuses on dense linear algebra, and provides interfaces for some basic matrix decompositions and an eigenvalue solver.

    Elemental, developed by Jack Poulson, is a distributed memory (parallel) dense linear algebra software package written in a style similar to FLAME. For a list of features and background on the project, see his documentation. FLAME itself has an associated library for sequential and shared-memory dense linear algebra, called libflame, which appears to be written in object-oriented C. Libflame looks a lot like LAPACK, but with better notation underlying the algorithms to make development of fast numerical linear algebra libraries more of a science and less of a black art.

    There are other libraries that can be added to the list; if we're counting sparse linear algebra packages as "matrix libraries", the best free one I know of in C is SuiteSparse, which is programmed in object-oriented style. I've used SuiteSparse and found it fairly easy to pick up; it depends on BLAS and LAPACK for some of the algorithms that decompose sparse problems into lots of small, dense linear algebra subproblems. The lead author of the package, Tim Davis, is incredibly helpful and a great all-around guy.

    The Harwell Subroutine Libraries are famous for their sparse linear algebra routines, and are free for academic users, though you have to go through a process of filling out a form and receiving an e-mail for each file that you want to download. Since the subroutines often have dependencies, using one solver might require downloading five or six files, and the process can get somewhat tedious, especially since form approval is not instantaneous.

    There are also other sparse linear algebra solvers, but as far as I can tell, MUMPS and other packages are focused mostly on the solution of linear systems, and solving linear systems is the least of my concerns right now. (Maybe later, I will need that functionality, and it could be useful for others.)

    I guess my first question would be: Do you ever want to run anything in parallel?

    @MattKnepley: Parallelizing the linear algebra is not my primary concern because the matrix data is going to be fed into a mixed-integer linear programming (MILP) solver anyway, and typically, the MILP solver is the bottleneck, not the linear algebra I do outside of the MILP solver call. That said, I am sure that other people would be interested in hearing about libraries such as Elemental that can be used for parallel numerical linear algebra.

    @JackPoulson: I probably should have worded it "heard from Jack Poulson about Eigen", because I believe we talked about it briefly on Google+. I apologize for the confusion; I didn't mean to imply that you wrote the Eigen package.

    I should mention Trilinos... though it doesn't get a lot of visibility here (yet), it is a viable alternative to PETSc, with a templated matrix package, an eigensolver, and a sparse matrix solver. It also has a package meant specifically for abstracting out the bookkeeping of an algorithm, though I don't know how well it works.

    @AndrewSpott: Good call. I will add links and a description.

    Eigen seems great - a colleague of mine used it in a professional context, and it can get you up and running quickly enough without sacrificing performance.

    You should probably differentiate between header-only template libraries such as Eigen, Boost::uBLAS, and Blitz++ and traditional compiled libraries such as PETSc and Trilinos.

    I would also add the following libraries to your answer:

    • ViennaCL - an OpenCL-based C++ header library which can interface with Eigen and MTL.
    • PLASMA - a UTK-based redesign of the BLAS and LAPACK libraries featuring tile-based decompositions.
    • MAGMA - another UTK project which focuses on improving LAPACK/BLAS performance.

    Perhaps it's worth noting that though there may be a binary package for Ubuntu, Eigen is trivial to install anywhere since it's a header-only library: just add it to the include path (or even copy it directly into your own project) and you're done.

    I'm in favor of Armadillo. I have tried Armadillo, Boost::uBLAS, and Eigen, and found that Armadillo has the best API. I believe in the value of good APIs. Thanks for the recommendation.

    OpenBLAS also seems interesting. It offers SMP and vectorization support.

    I can also recommend Eigen. What I find especially useful is that it can work together seamlessly with the Intel Math Kernel Library (MKL), which offers automatic parallelization, for instance.

License under CC-BY-SA with attribution

Content dated before 6/26/2020 9:53 AM