The first computer I used was a real performance beast. Equipped with an Intel 486 clocking in at 66 MHz, this machine was ready to take on whatever challenges the future would bring. CPU clock speeds increased and soon passed 500 MHz, 1 GHz, and continued upwards. Around 2005, the top speed of high-end processors settled around 4 GHz and hasn’t increased much since then. Why is that? I’ll explain.
Why Are We Talking About Clock Speeds?
Even though the clock speed doesn’t tell us everything about a processor, most of us automatically connect higher clock speeds with faster processors.
1. How come?
2. What does this number tell us about the performance of a processor?
To understand this, let’s briefly look at how a processor is constructed.
The Processor and Clock Speed
The most important parts in a processor are the transistors, the electronic devices that act as switches in order to construct logical gates. These logical gates are the hard-working components of our processors. Put together in different combinations, they form units capable of arithmetic and complex logical operations.
The speed at which such an operation can be performed is, in layman’s terms, limited by the frequency at which the transistor can switch from on (1) to off (0) and still perform without failure. Since transistors are the building blocks of the logical gates, this switching frequency also limits the operating speed of our processor.
So, if we feed our processor with one input signal per second and the processor performs our operations error-free, we say that the processor is clocked at 1 Hz. In other words, the clock speed (sometimes referred to as clock frequency or clock rate) of the processor is a kind of certification telling us how often we can give it instructions and still have failure-free operation. A processor with a clock speed of 3 GHz allows us to feed it with 3 billion operations per second, and we can still expect it to perform as predicted.
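To make the relationship concrete, here is a small Python sketch (my own illustration, not from the original article) converting a clock speed into the duration of a single cycle:

```python
def cycle_time_ns(clock_hz):
    """Return the duration of one clock cycle in nanoseconds."""
    return 1e9 / clock_hz

# A 3 GHz processor completes one cycle roughly every third of a nanosecond.
print(cycle_time_ns(3e9))  # about 0.333 ns
```

At 3 GHz, each instruction slot lasts only about a third of a nanosecond, which gives a feel for how little time the transistors have to settle between switches.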
Now it is easy to see why we are interested in a higher clock speed. More operations per second mean that we can get more work done per unit time. For the user, this means that the programs on the computer will run faster — and this without much modification to the code. No wonder all processor manufacturers pushed for higher and higher switching frequencies.
Why Has the CPU Clock Speed Stopped Increasing?
Why do we see so few Intel® processors over 3.7 GHz? And why does it seem that the highest clock speeds — requiring cooling through liquid nitrogen — are stuck between 8.5 and 9 GHz?
To understand this, we need to look at another aspect of the processors, namely the transistor count. The transistor count of a processor is the number of transistors that the processor is equipped with. Since the CPUs stay roughly the same size, the transistor count is directly related to the size of the transistors.
Comparing the 170 million transistors that an Intel® Pentium® 4 processor from 2004 was equipped with to the 4.3 billion transistors of a 15-core Intel® Xeon® Ivy Bridge processor from 2013, we see that the sizes of the transistors have shrunk enormously. This trend was observed by Gordon Moore and described by Moore’s law, which states that the integration density of transistors doubles every 18 to 24 months.
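We can check these two data points against Moore’s law with a quick back-of-envelope calculation (a sketch of my own, using the transistor counts quoted above):

```python
import math

transistors_2004 = 170e6   # Intel Pentium 4 (2004)
transistors_2013 = 4.3e9   # 15-core Intel Xeon Ivy Bridge (2013)
years = 2013 - 2004

# Number of doublings needed to go from 170 million to 4.3 billion.
doublings = math.log2(transistors_2013 / transistors_2004)
months_per_doubling = years * 12 / doublings

# Roughly 4.7 doublings over 9 years, i.e. about 23 months per doubling,
# which sits comfortably inside Moore's 18-to-24-month window.
print(round(doublings, 2), round(months_per_doubling, 1))
```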
Thermal losses occur when you put several billion transistors together on a small area and switch them on and off several billion times per second. The faster we switch the transistors, the more heat is generated. Without proper cooling, they might fail and be destroyed. One implication of this is that a lower operating clock speed generates less heat and ensures the longevity of the processor. Another severe drawback is that an increase in clock speed requires an increase in operating voltage, and since power consumption is proportional to voltage squared times frequency, the power drawn grows roughly with the cube of the clock speed. Power costs are an important factor to consider when operating computing centres.
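The cubic dependence is easy to express as a one-liner. This is a simplified model of my own (real dynamic power also depends on capacitance and how far voltage can actually be scaled), assuming voltage scales linearly with frequency:

```python
def relative_power(clock_ratio):
    """Power relative to baseline, assuming P is proportional to f^3
    (P ~ V^2 * f, with V scaling linearly with f)."""
    return clock_ratio ** 3

# Running at 70% of the original clock speed needs only about 34% of the power.
print(relative_power(0.7))
```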
But how can we get more computing power out of more transistors without increasing the clock speed? Through the application of multicore computing. The overwhelming benefit of multicores can be derived from the following reasoning: When cutting down the clock speed by 30%, the power is reduced to 35% of its original consumption.
Yet, computing performance is also reduced by 30%. But when operating two compute cores running at 70% of the original clock speed, we get 140% of the original compute power using only 70% of the original power consumption (2 x 35%). Of course, to reach this type of efficiency, the code would have to be parallelized so that it perfectly exploits both cores operating at the same time.
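The trade-off above can be written out as a short calculation (my own sketch, assuming perfect parallelization and the cubic power model from earlier):

```python
def multicore_tradeoff(n_cores, clock_ratio):
    """Relative performance and power for n cores at a reduced clock,
    assuming a perfectly parallelizable workload and per-core power
    proportional to the cube of the clock ratio."""
    performance = n_cores * clock_ratio
    power = n_cores * clock_ratio ** 3
    return performance, power

# Two cores at 70% clock: 1.4x the performance for roughly 0.69x the power.
perf, power = multicore_tradeoff(2, 0.7)
print(perf, power)
```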
The Solution and Way Forward
Obviously, this clock speed plateau hasn’t stopped the engineers at Intel and the like from pushing the envelope to achieve more performance. We have already seen that they can fit more and more transistors on the same chip, by making the step from 2D, or planar, transistors to 3D, or tri-gate, transistors.
This step not only decreases the power the processors require, by reducing the current flowing through the transistor to almost zero when it is in the “off” state, but it also allows as much current as possible through when it is in the “on” state, thereby increasing performance.
Yet, it is the utilization of multicore computing that has contributed and will continue to contribute to increased computer performance.
The Intel® Pentium® 4 processor of 2004 was just a single-core processor, but today we are talking about 8, 10, 12, or even 15 cores in a workstation, and these cores can execute instructions independently of each other. This is one of the reasons why an 8-core processor with a lower clock speed of 2.6 GHz can solve many of your multi-physics models much faster than a dual-core processor with a higher clock speed of 3.5 GHz.
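For a rough intuition, compare the aggregate throughput of the two processors mentioned above. This is a deliberately crude estimate of my own: it assumes a perfectly parallelizable workload and identical per-clock performance per core, which real code rarely achieves.

```python
# Aggregate clock cycles available per second, summed across cores (GHz).
eight_core = 8 * 2.6   # 20.8 GHz of aggregate throughput
dual_core = 2 * 3.5    # 7.0 GHz of aggregate throughput

# Under these idealized assumptions, the 8-core chip offers roughly
# three times the total throughput despite its lower clock speed.
print(eight_core, dual_core)
```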
Source: Intel
Thanks for reading. Hope to see you again in the next post.