RISC vs CISC
Instruction sets come in two broad families:
- RISC (reduced instruction set computer), and
- CISC (complex instruction set computer).
These two design philosophies govern fundamental choices in CPU design and implementation.
RISC architectures, such as ARM, use a smaller set of simple instructions. Each instruction is designed to execute quickly, usually in a single clock cycle. Instructions are usually fixed in length and follow a regular structure, which simplifies decoding and supports high-performance implementation techniques such as pipelining. The number of instructions varies by architecture: some RISC processors have around 50 instructions; others have hundreds.
CISC architectures, such as Intel’s x86, on the other hand, have larger and more varied instruction sets. In these designs, a single instruction can perform a complex, multi-step operation, and instructions may vary in length. While this can make programs shorter, it complicates decoding, execution, and chip design. CISC designs typically have hundreds or even a thousand or more instructions.
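To make the contrast concrete, the sketch below shows how a single assignment might compile on each kind of machine. The assembly in the comments is illustrative pseudocode, not the exact output of any particular compiler.

```c
/* A single C statement can compile to very different instruction
 * sequences on RISC and CISC machines. The assembly in the comments
 * is illustrative only, not exact compiler output. */

long a, b, c;   /* assume these variables live in memory */

void add_example(void) {
    a = b + c;
    /* On a RISC machine (e.g., ARM-like), arithmetic operates only on
     * registers, so the compiler emits separate load/add/store steps:
     *     load  r1, [b]
     *     load  r2, [c]
     *     add   r3, r1, r2
     *     store r3, [a]
     *
     * On a CISC machine (e.g., x86), an instruction may take a memory
     * operand, so some of the work folds into fewer instructions:
     *     mov  rax, [b]
     *     add  rax, [c]     ; add reads its second operand from memory
     *     mov  [a], rax
     */
}
```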
Stating the exact number of instructions is a bit of a challenge, because most architectures nowadays have modular instruction sets. Each has a base instruction set (the mandatory core, required to meet the architecture's specification) and a set of extensions (optional but standardized add-ons).
Optional doesn’t mean at the programmer’s option; it means at the chip designer’s option. For example, a design might include extensions for cryptography or media processing that aren’t part of the base instruction set. That doesn’t mean the base instruction set can’t be used for cryptography or media processing; those tasks can always be done with base instructions, but the extensions provide dedicated operations that make them faster and more energy-efficient. In this way, chip designers have flexibility to tailor a processor to perform better on certain kinds of tasks.
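As an illustration of how software can take advantage of an extension when the chip designer has included it, here is a minimal C sketch that dispatches at run time. It assumes a GCC or Clang compiler targeting x86, where the __builtin_cpu_supports built-in reports CPU features; the two aes_encrypt_* functions are hypothetical stand-ins for real implementations.

```c
#include <stdio.h>

/* Hypothetical placeholders: one path would use dedicated AES
 * instructions, the other only base instructions. Only the dispatch
 * logic is sketched here. */
void aes_encrypt_hw(void)       { puts("using the AES extension"); }
void aes_encrypt_portable(void) { puts("using base instructions only"); }

int main(void) {
    /* __builtin_cpu_supports is a GCC/Clang built-in for x86 targets
     * that checks CPU features at run time. */
    __builtin_cpu_init();
    if (__builtin_cpu_supports("aes")) {
        aes_encrypt_hw();        /* the chip includes the extension */
    } else {
        aes_encrypt_portable();  /* same result, just slower */
    }
    return 0;
}
```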
Is RISC vs CISC still meaningful?
Historically, RISC emphasized fewer, simpler instructions executed in a single cycle, with the compiler doing more of the work, whereas CISC (like x86) emphasized complex instructions (e.g., string operations, memory-to-memory arithmetic) so that each instruction did more.
Today, the line is blurred. Modern CPUs, even “CISC” ones like x86, translate complex instructions internally into simpler, RISC-like micro-operations. Meanwhile, RISC instruction sets have tended to grow: ARM and RISC-V now have many optional extensions, vector instructions, and specialized operations, so they aren’t “reduced” anymore in any strict sense.
Where it still matters
- x86 uses variable-length instructions, while ARM/RISC-V use fixed or semi-fixed lengths, simplifying decode.
- x86 has to keep supporting instructions from the 1980s; newer RISC designs don’t have this challenge.
- CISC decoders are more complex, but once decoded, execution looks very similar across architectures.
For now, it is enough to understand that ARM is an example of RISC, that x86 is an example of CISC, and that there are trade-offs between complexity of design and efficiency of operation.
Adapted from "Patterson and Hennessy, Computer Organization, ARM edition" by Clayton Cafiero and Surya Malik.
No generative AI was used in writing this material. This was written the old-fashioned way.