OK, so it's obvious to me, from reading the Intel "bible" (the Software Developer's Manual), that Intel processors are definitely not RISC designs.
I suppose, in my limited experience, that one hallmark of CISC systems is a complicated instruction-encoding scheme.
That is, a CISC system built on an n-bit architecture doesn't necessarily use fixed-size n-bit instructions.
For example, modern Intel 64-bit processors execute instructions that may be anywhere from 1 to 15 bytes long.
My question is: how does an Intel processor determine that the next instruction is n bytes long, so that the instruction pointer (RIP) can be advanced accordingly?
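To make the length variation concrete, here are a few real x86-64 encodings (lengths as given in the Intel SDM opcode tables; this is just an illustrative sketch, not a decoder):

```python
# Three real x86-64 instruction encodings of different lengths:
#   nop           -> 0x90                      (1 byte: bare opcode)
#   mov eax, 1    -> 0xB8 + 32-bit immediate   (5 bytes: opcode + imm32)
#   add rax, rbx  -> 0x48 0x01 0xD8            (3 bytes: REX.W prefix + opcode + ModRM)
examples = {
    "nop":          bytes([0x90]),
    "mov eax, 1":   bytes([0xB8, 0x01, 0x00, 0x00, 0x00]),
    "add rax, rbx": bytes([0x48, 0x01, 0xD8]),
}

for mnemonic, enc in examples.items():
    print(f"{mnemonic:<14} {enc.hex(' '):<16} {len(enc)} byte(s)")
```

The decoder has to walk these bytes left to right (prefixes, then opcode, then ModRM/SIB/displacement/immediate) before it even knows where the next instruction starts.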