

Late last year, we began running tests to see how efficient Amazon's AWS cloud services are at protein folding, a computationally heavy class of physical simulations used in a variety of medical research. We discovered a remarkable result – under certain conditions, running the simulations on AWS is cheaper than the small additional cost of electricity you'd consume running them at home.

Given this discovery, we were eager to continue folding (and we still are – follow us on Twitter to learn about our findings on Google's Compute platform and Microsoft's Azure). We were especially excited about Amazon's announcement of the c4 instance type, the successor to the c3 (the instance type that showed the best performance in our previous tests). These machines offer a newer, faster processor, which we hoped would translate into even more folding efficiency.

A Minor Update

After collecting the data, however, we were underwhelmed – the c4 instances barely edged out their c3 counterparts in most cases. Other benchmarks (not specific to protein folding) have found a larger performance increase, which makes our results all the more surprising.

Compare the performance of each c3 machine with its c4 counterpart below. Mouse over each bar to see its exact value and select different headers to see the values that went into calculating folding per dollar.

Folding points per dollar = Folding points per hour / Dollars per hour
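The calculation above is simple enough to sketch in a few lines of Python. The figures below are made-up placeholders for illustration, not our measured results:

```python
def points_per_dollar(points_per_hour, dollars_per_hour):
    """Folding points per dollar = folding points per hour / dollars per hour."""
    return points_per_hour / dollars_per_hour

# Hypothetical example: an instance earning 12,000 points/hour at $1.50/hour
print(points_per_dollar(12_000, 1.50))  # 8000.0 points per dollar
```

Because both inputs are rates per hour, the hours cancel and the result is a pure points-per-dollar figure, so instances of different sizes can be compared directly.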

See our original post to learn about our methodology.

Many Possible Explanations

There are many reasons the new c4 instances might not have delivered the performance boost we hoped for. Maybe folding@home is limited more by memory than by CPU (this seems unlikely, given that our original tests showed the memory-optimized r3 instances underperforming the c3). Maybe folding@home is not taking full advantage of the c4's 36 vCPUs. Or perhaps a more statistically rigorous test would show larger gains.

Are you a folding@home aficionado, or do you live and breathe cloud computing? We'd love to hear your thoughts - did you expect larger improvements? Why do you think we saw the results we did? Leave a comment below, or tweet us!