Technical Computing and Industry

While doing some site admin I realised I had left this private.  It’s captured in the downloads section anyway as part of a larger doc, but I thought I’d release it into the wild.

In May 2008 the US Council on Competitiveness and IDC published a study on technical computing and HPC.  About 30 companies were surveyed about their use of technical computing.  Here’s what they found:

•       97% link technical computing to competitiveness

•       57% have problems they can’t solve today

•       53% scale down problems to fit the desktop

•       > 50% increased prototyping for lack of HPC

•       < 20% of applications scale beyond 32 processors

The first two shouldn’t surprise anyone.  The third and fourth are interesting and speak directly to a lack of software scale-up as well as a lack of hardware.  Scaling problems down to fit the desktop could change significantly with the power available in a desktop in the next few years.  The fifth point is a killer.  Not only is there a lack of software, but the majority of software that does exist doesn’t scale.  There’s a lot of potential for improvement.
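The fifth point becomes less surprising once you run the numbers behind Amdahl’s law: even a small serial fraction caps the speedup a parallel application can achieve. A minimal sketch, assuming a hypothetical 5% serial fraction (an illustrative figure, not one from the study):

```python
# Amdahl's law: ideal speedup for a program with a given serial
# (non-parallelisable) fraction, run on n processors.
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With just 5% of the work serial, returns diminish fast past 32 processors.
for n in (8, 32, 128, 1024):
    print(f"{n:5d} procs -> {amdahl_speedup(0.05, n):5.1f}x speedup")
```

At 32 processors the speedup is already only about 12.5x, and going to 1,024 processors buys less than 20x, which is one plausible reason so much existing software was never pushed beyond that scale.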

IDC regularly publish a report, the “Worldwide High-Performance and Technical Computing Server Forecast”.  This diagram is reconstructed from their 2008 report:

Capability clusters are the leading edge in terms of performance and size.  The Top 500 Supercomputing list contains the bleeding edge of this category.  31% of clusters on the June 2010 list have 2,000 – 4,000 cores, and 59% have 4,000 – 8,000.  It costs several million dollars to build a cluster that can be on the Top 500, and $100MM+ to make it into the top 10.  These machines typically cost more to run than they do to purchase.  You can’t get onto the list now with fewer than 1,000 cores.  The Top 500 ranking, based on Linpack, isn’t a realistic measure of real-world productivity, although 302 of the 500 machines are used in industry.

Capacity clusters are the volume market.  These typically have 150 CPUs or fewer and run commercial ISV software; with multicore CPUs that means most of these systems will have hundreds, but not thousands, of cores.  Based on IDC’s predictions for 2012, and ignoring software costs, there are 6,800 new systems coming in the $250k – $499k range, 28,500 new systems in the $100k – $249k range, and 35,000 for less than $100k.  These clusters are where industry gets its work done and where better software can deliver the most benefit.
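To get a feel for the size of that volume market, here is some back-of-envelope arithmetic over the system counts quoted above. The per-band midpoints are my own illustrative assumptions (IDC publishes counts per price band, not average prices), so the total is a rough sketch, not an IDC figure:

```python
# IDC 2012 predicted new-system counts per price band (from the text),
# paired with assumed midpoint prices -- the midpoints are illustrative.
bands = {
    "$250k-$499k": (6_800, (250_000 + 499_000) / 2),
    "$100k-$249k": (28_500, (100_000 + 249_000) / 2),
    "<$100k":      (35_000, 50_000),  # assumed midpoint for the open-ended band
}

total_systems = sum(count for count, _ in bands.values())
total_value = sum(count * midpoint for count, midpoint in bands.values())

print(f"{total_systems:,} systems, roughly ${total_value / 1e9:.1f}B in hardware")
```

Even with conservative midpoints, that’s around 70,000 new systems a year and on the order of $9B in hardware, which is why better-scaling software at this end of the market matters so much.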

Overall, HPC is seeing a shift in demographics.  P = Performance is giving way to P = Productivity.  Cluster size is shrinking but core counts are going up.  Commercialisation is increasing HPC adoption, and there are many more first-time users of HPC.  20% of all CPUs sold are used in technical computing, and there were approximately 50 million CPUs sold in 2009.  That’s a significant hardware growth opportunity if new software can exploit multicore processors, especially at the productivity end of the market.
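Combining the two figures above gives a quick estimate of the annual CPU volume going into technical computing:

```python
# Back-of-envelope from the figures in the text: ~50 million CPUs sold in
# 2009, with 20% of all CPUs sold going into technical computing.
cpus_sold_2009 = 50_000_000
technical_share = 0.20

technical_cpus = cpus_sold_2009 * technical_share
print(f"~{technical_cpus / 1e6:.0f} million CPUs/year into technical computing")
```

Roughly 10 million CPUs a year is a large addressable base for any software that can actually exploit the cores on them.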

At the end of September 2010, the US National Center for Manufacturing Sciences presented their strategy for revitalising American small and medium-size manufacturers:

Like many of the reports from the US Council on Competitiveness, the strategy identifies simulation and HPC as critical enablers for competitiveness.  The paper identifies some other key points:

•       Advanced computational methods provide a competitive advantage

•       Sufficient HPC human expertise in the workforce is lacking

•       Small and medium-sized manufacturers don’t use HPC to the same extent as their larger counterparts, partly because of risk aversion

There’s clearly a lot of potential for volume HPC if there’s software that makes it easier to apply.  The hardware exists today; it’s the software that’s letting industry down.
