Large-scale testing

Symmetric Computing in Boston (http://www.symmetriccomputing.com) have very kindly offered some time on their demonstration Trio machines.  Symmetric sell shared-memory supercomputers, and from what I hear they're well suited to HPC workloads beyond the bioinformatics showcased on their website; I know of others who have run large finite-element models on their machines with good results.  I'll be running the pipeline network models writ large, with millions of cells, in the hope of breaking the 100–200 million-equation barrier.

I need to make a 64-bit openSUSE build of the platform, which should be straightforward given that I develop on 64-bit Ubuntu right now.  The Trio machines have 16 to 96 cores and between 300 and 700 GB of memory, so there's plenty of headroom.  A potential client has pushed our nonlinear solver to a few tens of millions of equations on 16 cores and 32 GB under Windows (and it ran really well, too…).  This will be a great opportunity to see how far everything can be pushed before it breaks, and I owe Symmetric a big nod for making their hardware available.
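As a rough sanity check on those numbers, here's a back-of-envelope estimate of the memory a sparse system of that size might need.  The figures in the sketch (about 20 non-zeros per Jacobian row, CSR-style storage, double precision, a handful of solver work vectors) are assumptions for illustration, not measurements from the actual solver.

```python
# Back-of-envelope memory estimate for a large sparse nonlinear system.
# Illustrative assumptions only: ~20 non-zeros per Jacobian row, 8-byte values,
# 4-byte column indices, 8-byte row pointers (CSR), plus a few work vectors.

def sparse_system_memory_gb(n_equations, nnz_per_row=20, n_work_vectors=8):
    """Rough footprint in GB for a CSR Jacobian plus solver work vectors."""
    nnz = n_equations * nnz_per_row
    jacobian_bytes = nnz * (8 + 4) + (n_equations + 1) * 8  # values + column indices + row pointers
    vector_bytes = n_work_vectors * n_equations * 8         # residuals, updates, scratch
    return (jacobian_bytes + vector_bytes) / 1e9

for n in (10_000_000, 100_000_000, 200_000_000):
    print(f"{n // 1_000_000:>4} M equations -> ~{sparse_system_memory_gb(n):.0f} GB")
```

If those assumptions are anywhere near right, raw capacity shouldn't be the limiting factor on a 300–700 GB machine; it's the behaviour at that scale I want to see.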

Comments

  1. Tom Everett says:

    That’s very exciting. Will you be posting the results?

  2. @Tom: Absolutely.

  3. That’s excellent, Damien.

    • Hope it works… 🙂

      The point of this is to prove the scalability of the simulation framework. It *should* comfortably host and manage 100 million equations, but I need to know whether it actually will, and how quickly. There will be a billion-ish objects zinging around, and I don’t know how fragmented things are going to get. It’s fine at 10 million. The nice thing about software is that destructive testing isn’t that expensive when you do it to yourself… (a rough sketch of what I mean is below).
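A toy sketch of the kind of destructive scale test meant above: keep growing the number of small objects and watch build time and traced memory for signs of trouble.  The `Cell` layout and the counts are invented for illustration; the real framework is obviously not a Python list of objects.

```python
# Toy destructive scale test: allocate ever more small objects and watch how
# build time and traced memory behave. Layout and counts are illustrative only.
import time
import tracemalloc

class Cell:
    __slots__ = ("pressure", "flow", "neighbours")
    def __init__(self, i):
        self.pressure = 0.0
        self.flow = 0.0
        self.neighbours = (i - 1, i + 1)

tracemalloc.start()
for n in (1_000_000, 5_000_000, 10_000_000):
    t0 = time.perf_counter()
    cells = [Cell(i) for i in range(n)]
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    print(f"{n:>10,} cells: {elapsed:5.1f} s to build, ~{peak / 1e9:.2f} GB traced peak")
    del cells
    tracemalloc.reset_peak()
```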
