When we were discussing the difference between RISC and CISC architectures, we came across a term called ‘pipeline.’ A pipeline is a series of units connected in succession so that the processor can work on multiple instructions simultaneously. Pipelining also makes it possible to raise the operating frequency of the processor.
Is there a simple analogy to understand pipelining?
A common analogy used in classes is that of an automobile assembly line. Imagine for a second, if an automobile manufacturing plant, or any manufacturing plant for that matter, took in just one car at a time.
So the steel sheets would come in, the chassis would be prepared, and only after the car was completely built would the assembly line begin work on the next car. This process is clearly inefficient.
What if we divide the manufacturing of the car into different stages? And then run all stages simultaneously?
With each stage placed in succession after the previous one, we will be able to work on more than one car at the same time.
If there are three stages, we will have three cars on the assembly line at the same time. If there are five stages, we will have five cars on the assembly line at the same time.
The higher the number of stages, the higher the throughput of the manufacturing plant.
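The benefit of the assembly line can be quantified with a simple sketch. Assuming every stage takes exactly one time unit, an unpipelined plant needs stages × cars time units, while a pipelined one only needs to fill the line once and then finishes one car per unit. The function names and numbers below are illustrative, not from the original text:

```python
# Illustrative sketch: time to build n cars with a k-stage line,
# assuming every stage takes exactly one time unit.

def unpipelined_time(n_cars, n_stages):
    # Each car passes through all stages before the next car starts.
    return n_cars * n_stages

def pipelined_time(n_cars, n_stages):
    # The first car takes n_stages units to emerge; after that,
    # one finished car rolls off the line every time unit.
    return n_stages + (n_cars - 1)

print(unpipelined_time(100, 5))  # 500 time units
print(pipelined_time(100, 5))    # 104 time units
```

For 100 cars on a 5-stage line, pipelining cuts the total time from 500 units to 104; as the number of cars grows, the speedup approaches the number of stages.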
How does a pipeline work in computer architecture or a processor?
The same principle, as seen in the above analogy, is applied to processors. The execution of an instruction is divided into stages based on the processor architecture. For example, the ARM7 has a three-stage pipeline, the ARM9 has a five-stage pipeline, and so on.
Let us take the case of an ARM7 processor. The three stages are Fetch, Decode, and Execute. Use the diagram below to understand the concept better.
In the first cycle, the processor fetches the instruction from memory. As you can see, every stage takes one cycle to complete. Such single-cycle stages are characteristic of RISC processors, which is why RISC architectures lend themselves so well to pipelining.
Moving on, in the second cycle, the first instruction moves along the pipeline and reaches stage 2, i.e., Decode. In this cycle, the processor also fetches the second instruction from memory.
So now, we have two stages executing simultaneously — the decoding of the first instruction and the fetching of the second. In the third cycle, both instructions move along the pipeline, and a third instruction is fetched by the processor.
This whole process is known as filling the pipeline. As the number of pipeline stages offered by a processor increases, the amount of work done in each stage decreases, so each stage can finish within a shorter clock cycle. Consequently, the operating frequency of the processor can be increased.
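The cycle-by-cycle fill described above can be sketched in code. The following is a minimal simulation, modelled on the three-stage Fetch, Decode, Execute pipeline of the ARM7 example; the instruction names are made up for illustration:

```python
# Minimal sketch of a three-stage (Fetch, Decode, Execute) pipeline.
# Instruction names below are hypothetical placeholders.

def simulate(instructions, stages=("Fetch", "Decode", "Execute")):
    """Return, for each cycle, which instruction occupies each stage."""
    depth = len(stages)
    total_cycles = len(instructions) + depth - 1
    timeline = []
    for cycle in range(total_cycles):
        row = {}
        for s, stage in enumerate(stages):
            i = cycle - s  # index of the instruction in this stage, if any
            if 0 <= i < len(instructions):
                row[stage] = instructions[i]
        timeline.append(row)
    return timeline

for cycle, row in enumerate(simulate(["ADD", "SUB", "MOV"]), start=1):
    print(f"Cycle {cycle}: {row}")
```

Running this shows the pipeline filling up: in cycle 1 only Fetch is busy, in cycle 2 Fetch and Decode are busy, and from cycle 3 onward all three stages work simultaneously until the last instruction drains out.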