Archive | November, 2012

A few thoughts on ARM vs x86

7 Nov

I’ve exchanged a lot of email about ARM vs. x86 recently. Here are two snippets I’ve used repeatedly; they are my own perspective, somewhat different from what you might be reading in the press:

On ARM vs Intel ATOM power consumption…

I personally believe that ARM the processor alone (the cores currently in volume use at e.g. Apple, Samsung, and Qualcomm, which are mostly ARMv7 / Cortex-A* designs) is not that far ahead of Intel ATOM in reduced power consumption, and that is a huge change from 18–24 months ago. Intel has clearly invested deeply, even though perhaps they should have done so much earlier and wouldn’t have lost so much ground to smartphones and tablets. (At Microsoft we told them to invest in power in 1999; they told us hitting 3–4W was good enough and dumped StrongARM.) ARM continues to be ahead in reduced transistor count, yet the newest Intel ATOM SoCs are winning on power thanks to Intel’s much better manufacturing prowess (vs. Samsung, TSMC, etc.) at making smaller, more complicated dies, as well as their crazy power-management circuitry.

Note that what the smartphone (and tablet) folks want is not just a low-power CPU; they want a whole SoC that deals with 3G/LTE, the GPU, and a host of other things. This is where any advantage purely in the CPU’s power consumption is eliminated, and this is where ARM can continue to have an advantage: these devices are about customizing silicon for a very specific situation, and Intel is not going to be able to create the same level of variety that the many ARM licensees will. Still, Intel is likely to be able to spot segments that are big, go in and design an “Intel SoC” to fit that market, and sell into it in volume and on the latest process. Intel also happens to be able to pick up and combine all the same bits of third-party silicon that the ARM guys do; for example, the Medfield ATOM SoC licenses the same PowerVR graphics engine that Apple uses in the A-series. You’ll notice Intel couldn’t possibly use their own integrated graphics because (a) they suck, and (b) they are a power hog.

The catch about Intel and ATOM is that I’m not sure where their heart is here: they are in a very tough spot with respect to pricing, because they can’t truly compete with ARM without reducing their prices dramatically. Jean-Louis Gassée’s article makes this point really, really well with concrete numbers. Though I don’t know any pricing details about Medfield, I suspect pricing will be a challenge, and Intel will price higher on the strength of “x86 compatibility.”

The other catch is that just because you can make devices that run all the old x86 software doesn’t mean you should. Specifically, lots of that old software simply doesn’t make sense in the form factor, or with the interface, of the new devices. Furthermore, the old software probably does a lot of things that suck the life out of batteries. (Personally I think this is one good excuse for Microsoft making Surface RT: it can tout long battery life because only new apps that don’t do battery-dumb things will run on it. The same won’t be true of true Windows 8 tablets allowed to run old cr-Apps.)

On Apple shifting to ARM from x86…

I think this is definitely more a question of “when” than “if,” and IMHO it’s less about power consumption (which is what the tech press is fixated on) and more about cost: the bill of materials (BOM) around multiprocessing and integrated graphics. Lower power is a useful side effect, but improving performance without inflating the overall BOM is, I think, far more important to Apple in terms of keeping up its margins and differentiating.

Consider first that the dual- and quad-core Intel chips in the top-of-the-line Macs (1.8–2.5 GHz for laptops, mid-3 GHz for iMacs and minis) are likely costing Apple $100–$300; then pull in another $40–$50 for the (crappy) companion Intel integrated graphics, and another $150–$200 for an additional nVidia chip for high-end graphics. If you are Apple and you are building all of those things, a dual-core CPU at 1.4 GHz and a 4-core GPU at 2 GHz, into iPads and iPad minis for $16–$30 per A6X, you have got to be scratching your head about why Intel and nVidia should get so much of your BOM in non-tablets. App compatibility is simply not an issue this time around: for an additional few dollars per device, Apple could dedicate several extra cores just to x86 emulation, which wasn’t something that was possible back when the transition to x86 happened.
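The ranges above can be tallied in a few lines of Python. All figures are the post’s own rough estimates, not verified part prices:

```python
# Rough BOM comparison using the (low, high) dollar ranges quoted above.
intel_cpu = (100, 300)   # dual/quad-core Intel CPU in a Mac
intel_igpu = (40, 50)    # companion Intel integrated graphics
nvidia_gpu = (150, 200)  # discrete nVidia chip in high-end models
apple_a6x = (16, 30)     # estimated cost of an A6X-class SoC

# Sum the lows together and the highs together.
x86_total = tuple(sum(parts) for parts in zip(intel_cpu, intel_igpu, nvidia_gpu))

print(f"x86 silicon BOM: ${x86_total[0]}-${x86_total[1]}")   # $290-$550
print(f"A6X-class SoC:   ${apple_a6x[0]}-${apple_a6x[1]}")   # $16-$30
print(f"Gap per unit:    ${x86_total[0] - apple_a6x[1]}-${x86_total[1] - apple_a6x[0]}")
```

Even at the bottom of every range, the gap is a couple of hundred dollars of BOM per machine, which is the margin pressure the post is pointing at.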

The key performance characteristic Apple can differentiate on in laptops and desktops is cores: offering new laptops and desktops with 8, 16, 32, or 64 cores would be a massive differentiator, one Intel would have tremendous trouble offering to the low-end laptop OEMs. And under the hood, Apple’s software has become multi-core friendly with the introduction of Grand Central Dispatch (GCD) and block programming.
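GCD itself is Apple’s C/Objective-C API, but the pattern it enables, handing small blocks of work to a queue that fans them out across however many cores exist, can be sketched with Python’s standard library. This is an analogy to the GCD model, not GCD itself:

```python
# A GCD-style fan-out sketch using Python's standard library: submit
# independent units of work to a pool sized to the machine's core count.
# With more cores, the same code simply runs faster, which is the point
# of writing software this way.
import os
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # Stand-in for one CPU-bound "block" of work.
    return sum(i * i for i in range(n))

def main():
    jobs = [50_000] * 8  # eight independent blocks
    # Analogous to dispatching blocks onto a concurrent queue: the pool
    # spreads them across all available cores.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(work, jobs))
    print(len(jobs), "blocks completed on", os.cpu_count(), "cores")

if __name__ == "__main__":
    main()
```

Software structured this way, as Apple’s increasingly was via GCD, scales with core count rather than clock speed, which is what would make a many-core ARM Mac attractive.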

So the biggest unspoken pressure on Intel will not be power; that is a red herring. It is per-core pricing, which is going to suffer versus Apple if Apple decides that multi-core SoCs are how it is going to compete in making better laptops and desktops (which I think it will, and is already doing). Intel does not know how to deal with radical multi-core in their business model. (See also his perspective on Tick-Tock.)