Why are we here?

Just over thirty years ago, the hardware revolution began when Intel commercialised the 8086.  Just under thirty years ago, the software revolution began with MS-DOS.  Twenty years ago, Windows 3.0 changed everything.  Whether you love or hate Microsoft, they gave us the foundation of today’s marketplace.  For twenty years, software has been in the driver’s seat.  Software requirements, particularly gaming, drove faster clock rates and better graphics, and the exploding market paid for it all.  Ten years ago HPC started its growth spurt with the Pentium III, Gigabit Ethernet, MPI and the first Beowulf clusters, riding that same mass-market hardware wave.  AMD commercialised the x86-64 instruction set, and 64-bit went mainstream.  In the last ten years Linux took over the Top 500, growing from a 5% share in 2000 to 81% in 2010.

Five years ago the hardware companies started ringing alarm bells.  Herb Sutter wrote his landmark piece on multicore, “The Free Lunch Is Over” (http://www.gotw.ca/publications/concurrency-ddj.htm).  Moore’s Law looked like it was stumbling, and the March of the Gigahertz was going to end, replaced by this thing called Multicore.  We could still have our next doubling to 6 GHz, but now it arrived as a dual-core processor with two 3 GHz cores on one chip.  A couple of years ago triple- and quad-cores started arriving, and we could have over 10 GHz as a 4 x 2.66 GHz part.  So far we’ve coped well.  Multicore processors have helped people multitask more efficiently on their machines, and most of the office productivity arena can’t do much more even with 12 GHz available.  Gaming still drives a lot of performance gains, and graphics cards have seen staggering leaps in performance, leading to the recent explosion in GPU-based supercomputing.  Memory bandwidth is becoming the next barrier: CPUs and GPUs can process enormous amounts of data today, but we can’t move the data around fast enough.

In the supercomputing world, things are looking very interesting right now.  With Intel and AMD’s latest-generation CPUs, you can have several tens of cores, 100 GHz of aggregate speed and 100 GB of memory on your desktop (and 100 TB of storage if you can afford it).  That’s about 3000 times the power and 3000 times the workspace we had with the first Pentiums, and that’s without any GPU acceleration.  Unfortunately, most software won’t let you use all of that power; it simply doesn’t scale.  There are at least 32 cores in that machine, and next year you can have 64.  Hardware is now driving the bus, and software needs to catch up.  Multicore is a stepping stone, and Manycore is coming.  CPUs provide task parallelism, and GPUs provide data parallelism.  It’s a fantastic software opportunity.
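To make that last distinction concrete, here is a minimal sketch in C.  It uses OpenMP purely as an illustration (my choice, not a prescription; the same ideas apply to pthreads, TBB or CUDA): the sections construct runs two independent jobs on separate cores, which is task parallelism, while the parallel for applies one operation to every element of an array, which is the data-parallel pattern GPUs are built for.

    #include <stdio.h>

    #define N (1 << 20)

    static double a[N], b[N], c[N];

    int main(void)
    {
        /* Task parallelism: two independent pieces of work,
           each handed to its own CPU core. */
        #pragma omp parallel sections
        {
            #pragma omp section
            for (int i = 0; i < N; i++) a[i] = 1.0 * i;   /* task 1 */

            #pragma omp section
            for (int i = 0; i < N; i++) b[i] = 2.0 * i;   /* task 2 */
        }

        /* Data parallelism: the same operation applied to every element
           of a large array.  This is the pattern that maps onto a GPU's
           thousands of lightweight threads; here OpenMP simply spreads
           it across the CPU cores. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[%d] = %f\n", N - 1, c[N - 1]);
        return 0;
    }

Compile it with something like gcc -fopenmp and it uses every core it can find; compile it without the flag and the pragmas are ignored, so it runs on one core.  That, in a nutshell, is the gap between software that scales and software that doesn’t.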
