The mystery behind computer speed

Apple’s Mac Pro, which can be configured to house up to a 12-core processor. Photo: Bloomberg

It used to be that megahertz and gigahertz were what sold computers. But clock speeds long ago stabilised around a few gigahertz and the new attraction is the number of cores.

We used to be told that a faster clock means a faster computer. Now we’re told that more cores are better, since they let the computer do more things at once.

Well, yes and no. Just as the earlier obsession with clock speed oversimplified things – lots of factors affect the speed of a computer, and the clock is just one potential bottleneck – multiple cores are not a magic bullet.

Go slow zone: Lots of factors affect how fast your computer operates. Photo: iStock

A computer program is just a sequence of instructions for a processor to follow.


In the early days a computer had only one processor, which is why the processor’s clock speed was seen as so important.

The faster the clock ticked, the more instructions were run per second.

But there are good reasons to build computers with more than one processor, each executing its own sequence of instructions but sharing all the other parts of the system, such as memory and input-output hardware. Each processor can execute a different program, or alternatively the processors can split the work of one program among them "in parallel" to get it done in less time.

Because of the added cost of duplicating the processor and supporting hardware, multiple processors came rather late to personal computing.

It only really took off once manufacturers started putting more than one core on the same chip.

Each core is effectively a separate processor.

It might seem that you need multiple processors to support multitasking, but that isn’t true.

Computers can multitask fine with just one processor. The true power of multiple processors is dividing up complex tasks into lots of similar sub-tasks which are then handled in parallel.

3D graphics, for example, is particularly amenable to this approach. The 3D model in the computer can be divided up into polygons that are treated independently.

The pixels that make up the 2D image can also be divided up for independent treatment. Modern graphics hardware contains hundreds or thousands of specialised parallel processors to take advantage of this. There are lots of more specialised applications of parallel processing, too, such as scientific simulations, software development and weather forecasting.

But a task can only be parallelised if it can be cleanly subdivided.

Any interaction between the sub-tasks reduces the effectiveness of parallel processing as they pause to co-ordinate their activities.

As an extreme example, if one sub-task depends on the results of another then the first will end up waiting for the second and the benefits of parallelism will be lost entirely.
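This limit is usually expressed as Amdahl's law (the article doesn't name it, but it is the standard formula for the point above): the part of a task that cannot be parallelised caps the overall speedup, no matter how many cores are thrown at the rest.

```python
def speedup(parallel_fraction, cores):
    # Amdahl's law: the serial fraction caps the overall speedup,
    # however many cores work on the parallel part.
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

# If 90% of a task can be parallelised, 12 cores give well under 12x.
for cores in (2, 4, 12):
    print(cores, round(speedup(0.9, cores), 2))
```

With 90 per cent of the work parallelisable, twelve cores deliver only about a 5.7-times speedup, because the remaining 10 per cent still runs one instruction at a time.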

For those programs that can benefit, the parallel approach has to be explicitly added by the programmers.

The hardware alone won’t help you if software doesn’t take advantage of it.

Most things we mere mortals do with our computers do not benefit much from multiple processors – or cores.

One processor is usually enough; two makes things run a bit more smoothly during bursts of activity, but the returns diminish rapidly thereafter.

Performance doesn’t always improve just because you pile on the processors.
