Instruction set

From Wikipedia, the free encyclopedia


An instruction set is a list of all the instructions, and all their variations, that a processor (or in the case of a virtual machine, an interpreter) can execute.

Instructions include:

  • Arithmetic such as add and subtract
  • Logic instructions such as and, or, and not
  • Data instructions such as move, input, output, load, and store
  • Control flow instructions such as goto, if ... goto, call, and return.

An instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external I/O. An ISA includes a specification of the set of opcodes (machine language), the native commands implemented by a particular CPU design.

Instruction set architecture is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Computers with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.

This concept can be extended to unique ISAs like TIMI (Technology-Independent Machine Interface) present in the IBM System/38 and IBM AS/400. TIMI is an ISA that is implemented as low-level software and functionally resembles what is now referred to as a virtual machine. It was designed to increase the longevity of the platform and applications written for it, allowing the entire platform to be moved to very different hardware without having to modify any software except that which comprises TIMI itself. This allowed IBM to move the AS/400 platform from an older CISC architecture to the newer POWER architecture without having to recompile any parts of the OS or software associated with it. Nowadays there are several open-source operating systems that can be ported with relative ease to any existing general-purpose CPU, because compilation from source is an essential part of their design (for example, when new software is installed).


Machine language

Machine language is built up from discrete statements or instructions. Depending on the processing architecture, a given instruction may specify:

  • Particular registers for arithmetic, addressing, or control functions
  • Particular memory locations or offsets
  • Particular addressing modes used to interpret the operands

More complex operations are built up by combining these simple instructions, which (in a von Neumann machine) are executed sequentially, or as otherwise directed by control flow instructions.

Some operations available in most instruction sets include:

  • moving
    • set a register (a temporary "scratchpad" location in the CPU itself) to a fixed constant value
    • move data from a memory location to a register, or vice versa. This is done to obtain the data to perform a computation on it later, or to store the result of a computation.
    • read and write data from hardware devices
  • computing
    • add, subtract, multiply, or divide the values of two registers, placing the result in a register
    • perform bitwise operations, taking the conjunction/disjunction (and/or) of corresponding bits in a pair of registers, or the negation (not) of each bit in a register
    • compare two values in registers (for example, to see if one is less, or if they are equal)
  • affecting program flow
    • jump to another location in the program and execute instructions there
    • jump to another location if a certain condition holds
    • jump to another location, but save the location of the next instruction as a point to return to (a call)
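
As a concrete sketch of how these simple operations combine, the toy machine below computes c = a + b using only moving, computing, and control-flow instructions. The opcode names, register count, and memory layout are illustrative assumptions, not taken from any real ISA.

    /* A toy instruction set interpreter: illustrative only, not a real ISA. */
    #include <stdio.h>

    enum op { LOAD, STORE, ADD, JNZ, HALT };   /* JNZ shows a conditional jump; unused below */

    struct insn { enum op op; int a, b, c; };  /* opcode plus up to three operand fields */

    int main(void) {
        int reg[4] = {0};
        int mem[4] = {5, 7, 0, 0};             /* a = mem[0], b = mem[1], c = mem[2] */

        const struct insn prog[] = {
            { LOAD,  0, 0, 0 },                /* moving:    reg0 <- mem[0]       */
            { LOAD,  1, 1, 0 },                /* moving:    reg1 <- mem[1]       */
            { ADD,   2, 0, 1 },                /* computing: reg2 <- reg0 + reg1  */
            { STORE, 2, 2, 0 },                /* moving:    mem[2] <- reg2       */
            { HALT,  0, 0, 0 },                /* control flow: stop              */
        };

        for (int pc = 0; ; pc++) {             /* sequential (von Neumann) execution */
            const struct insn i = prog[pc];
            switch (i.op) {
            case LOAD:  reg[i.a] = mem[i.b];                 break;
            case STORE: mem[i.b] = reg[i.a];                 break;
            case ADD:   reg[i.a] = reg[i.b] + reg[i.c];      break;
            case JNZ:   if (reg[i.a] != 0) pc = i.b - 1;     break;
            case HALT:  printf("c = %d\n", mem[2]);          return 0;
            }
        }
    }

Running the sketch prints c = 12; a longer program would use the conditional jump to build loops out of the same simple instructions.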

Some computers include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions typically take multiple steps, control multiple functional units, or otherwise appear on a larger scale than the bulk of simple instructions implemented by the given processor. Some examples of "complex" instructions include:

  • saving many registers on the stack at once
  • moving large blocks of memory (a short sketch of what such an instruction replaces follows this list)
  • complex and/or floating-point arithmetic (sine, cosine, square root, etc.)
  • performing an atomic test-and-set instruction
  • instructions that combine an ALU operation with an operand fetched from memory rather than from a register
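
As a rough sketch of what one such complex instruction replaces, the loop below moves a block of memory one byte at a time; a machine with a block-move instruction (the x86 REP MOVSB string move is one real example) can express the same work as a single instruction. The buffer names and contents are illustrative.

    /* Moving a block of memory with simple instructions: each loop iteration
     * is a load, a store, an index update, and a branch. */
    #include <stdio.h>
    #include <stddef.h>

    static void block_move(unsigned char *dst, const unsigned char *src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }

    int main(void) {
        unsigned char src[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        unsigned char dst[8] = {0};
        block_move(dst, src, sizeof src);
        for (size_t i = 0; i < 8; i++)
            printf("%u ", dst[i]);             /* prints: 1 2 3 4 5 6 7 8 */
        printf("\n");
        return 0;
    }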

A complex instruction type that has become particularly popular recently is the SIMD (Single Instruction stream, Multiple Data stream) operation, or vector instruction, which performs the same arithmetic operation on multiple pieces of data at the same time. SIMD instructions can manipulate large vectors and matrices in minimal time, and they allow easy parallelization of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as MMX, 3DNow! and AltiVec.
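
The sketch below models in plain scalar code what a single 4-lane integer SIMD add does; on a real SIMD-capable machine one instruction (for example, the x86 PADDD packed add) produces all four results at once. The type and function names are illustrative assumptions.

    /* A scalar model of a 4-lane SIMD add: four additions per conceptual operation. */
    #include <stdio.h>

    typedef struct { int lane[4]; } vec4;

    static vec4 vec4_add(vec4 x, vec4 y) {
        vec4 r;
        for (int i = 0; i < 4; i++)
            r.lane[i] = x.lane[i] + y.lane[i];   /* one SIMD instruction does all four */
        return r;
    }

    int main(void) {
        vec4 a = {{1, 2, 3, 4}}, b = {{10, 20, 30, 40}};
        vec4 c = vec4_add(a, b);
        for (int i = 0; i < 4; i++)
            printf("%d ", c.lane[i]);            /* prints: 11 22 33 44 */
        printf("\n");
        return 0;
    }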

The design of instruction sets is a complex issue. Historically, the microprocessor passed through two broad stages. The first used CISC (complex instruction set computer) designs, in which many instructions were implemented. In the 1970s, research at IBM and elsewhere showed that many of these instructions could be eliminated. The result was the RISC (reduced instruction set computer) architecture, which uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption; a more complex one may optimize common operations, improve memory/cache efficiency, or simplify programming.

Instruction set implementation

Any given instruction set can be implemented in a variety of ways. All ways of implementing an instruction set give the same programming model, and they all are able to run the same binary executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc.

When designing microarchitectures, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture. There are two basic ways to build a control unit to implement this description (although many designs use hybrid approaches or compromises), as contrasted in the sketch after the following list:

  1. Early computer designs and some of the simpler RISC computers "hard-wired" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
  2. Other designs employ microcode routines and/or tables to do this -- typically as on-chip ROMs and/or PLAs (although separate RAMs have been used historically).
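
As a rough software analogy (real control units are hardware, so this is only a sketch, and the opcode and micro-operation names are illustrative), the code below contrasts the two styles: fixed sequencing written directly into a switch statement stands in for hard-wired logic, while a per-opcode table of micro-operations stands in for a microcode ROM.

    /* Hard-wired vs. microcoded sequencing, modelled in software. */
    #include <stdio.h>

    enum uop { U_FETCH_A, U_FETCH_B, U_ALU_ADD, U_WRITEBACK, U_END };

    /* Microcode style: each opcode indexes a ROM-like table of micro-ops. */
    static const enum uop microcode[][5] = {
        [0] = { U_FETCH_A, U_FETCH_B, U_ALU_ADD, U_WRITEBACK, U_END },  /* opcode 0: ADD */
    };

    static void run_microcoded(int opcode) {
        for (const enum uop *u = microcode[opcode]; *u != U_END; u++)
            printf("micro-op %d\n", (int)*u);
    }

    /* Hard-wired style: the sequencing is fixed directly in the logic. */
    static void run_hardwired(int opcode) {
        switch (opcode) {
        case 0:  /* ADD: fetch operands, run ALU, write back */
            printf("fetch A, fetch B, ALU add, write back\n");
            break;
        }
    }

    int main(void) {
        run_hardwired(0);
        run_microcoded(0);
        return 0;
    }

In practice many processors mix the two, hard-wiring the common simple instructions and falling back to microcode for rare or complex ones.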

There are also some new CPU designs which compile the instruction set to a writable RAM or FLASH inside the CPU (such as the Rekursiv processor and the Imsys Cjip)[1], or an FPGA (reconfigurable computing). The Western Digital MCP-1600 is an older example, using a dedicated, separate ROM for microcode.

An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless the hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready.

Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of the instruction pipeline only allow a single memory load or memory store per instruction, leading to a load-store architecture (RISC). For another example, some early ways of implementing the instruction pipeline led to a delay slot.

The demands of high-speed digital signal processing have pushed in the opposite direction -- forcing instructions to be implemented in a particular way. For example, in order to compute digital filters fast enough, the multiply-accumulate (MAC) instruction in a typical digital signal processor (DSP) must be implemented using a kind of Harvard architecture that can fetch an instruction and two data words simultaneously, and it requires a single-cycle multiply-accumulate unit.
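
The inner loop of a finite impulse response (FIR) filter shows why this matters: each step is one multiply-accumulate, plus two data fetches that a Harvard-style DSP performs in the same cycle. The sketch below uses illustrative coefficients and input samples.

    /* FIR filter: the inner statement is exactly the multiply-accumulate
     * that a DSP's MAC instruction performs in a single cycle. */
    #include <stdio.h>

    #define TAPS 4

    int main(void) {
        const double h[TAPS] = {0.25, 0.25, 0.25, 0.25};   /* filter coefficients */
        const double x[8]    = {1, 2, 3, 4, 5, 6, 7, 8};   /* input samples */

        for (int n = TAPS - 1; n < 8; n++) {
            double acc = 0.0;
            for (int k = 0; k < TAPS; k++)
                acc += h[k] * x[n - k];                    /* multiply-accumulate */
            printf("y[%d] = %g\n", n, acc);
        }
        return 0;
    }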

Instruction set design

Some instruction set designers reserve one or more opcodes for some kind of software interrupt. For example, the MOS Technology 6502 uses 00H, the Zilog Z80 uses the eight codes C7,CF,D7,DF,E7,EF,F7,FFH[1], while the Motorola 68000 uses codes in the range A000..AFFFH.

Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements.

The NOP slide used in Immunity Aware Programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP.

On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement if the instruction set includes support for something like "fetch-and-increment" or "load linked/store conditional (LL/SC)" or "atomic compare and swap".
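
As a small sketch of how such a primitive is used, the retry loop below builds a non-blocking increment on atomic compare-and-swap via C11's <stdatomic.h>; the same pattern can be written with LL/SC, or replaced entirely by a fetch-and-increment instruction. This is a generic idiom, not tied to any particular ISA.

    /* Non-blocking increment built on compare-and-swap (C11 atomics). */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int counter = 0;

    static void nonblocking_increment(void) {
        int old = atomic_load(&counter);
        /* Retry until no other thread changed the value between our load
         * and the swap; on failure, 'old' is reloaded with the current value. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
            /* nothing to do; just retry with the refreshed 'old' */
        }
    }

    int main(void) {
        nonblocking_increment();
        printf("%d\n", atomic_load(&counter));   /* prints 1 */
        return 0;
    }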

Code density

In early computers, program memory was expensive, so minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the combined size of all the instructions needed to perform a particular task, the code density, was an important characteristic of any instruction set. Computers with high code density also often had (and have still) complex instructions for procedure entry, parameterized returns, loops etc (therefore retroactively named Complex Instruction Set Computers, CISC). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, indexed etc). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment etc. Software-implemented instruction sets may have even more complex and powerful instructions.

Reduced instruction set computers (RISC) were first widely implemented during a period of rapidly growing memory subsystems; they sacrifice code density in order to simplify implementation circuitry and thereby try to increase performance via higher clock frequencies and more registers. RISC instructions typically perform only a single operation, such as an "add" of registers or a "load" from a memory location into a register; they also normally use a fixed instruction width, whereas a typical CISC instruction set has many instructions shorter than this fixed length. Fixed-width instructions are less complicated to handle than variable-width instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary[2] for instance), and are therefore somewhat easier to optimize for speed. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.

Minimal instruction set computers (MISC) are a form of stack machine, where there are few separate instructions (16-64), so that multiple instructions can fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA or in a multi-core form. Code density is similar to RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.

There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this.

Number of operands

Instruction sets may be categorized by the number of operands (registers, memory locations, or immediate values) in their most complex instructions. This does not refer to the arity of the operators, but to the number of operands explicitly specified as part of the instruction. Thus, implicit operands stored in a special-purpose register or on top of the stack are not counted.

(In the examples that follow, a, b, and c refer to memory addresses, and reg1 and so on refer to machine registers.)

  • 0-operand ("zero address machines") -- these are also called stack machines, and all operations take place using the top one or two positions on the stack. Add two numbers in seven instructions: #a, load, #b, load, add, #c, store (a sketch of a stack machine executing this sequence follows this list);
  • 1-operand ("one address machines") -- often called accumulator machines -- include most early computers. Each instruction performs its operation using a single operand specifier. The single accumulator register is implicit -- source, destination, or often both -- in almost every instruction: load a, add b, store c;
  • 2-operand -- many RISC machines fall into this category, though many CISC machines fall here as well. For a RISC machine (requiring explicit memory loads), the instructions would be: load a, reg1; load b, reg2; add reg1,reg2; store reg2,c;
  • 3-operand CISC -- some CISC machines fall into this category. The above example might be performed in a single instruction in a machine with memory operands: add a, b, c, or more typically (most machines permit a maximum of two memory operations even in three-operand instructions): move a, reg1; add reg1, b, c;
  • 3-operand RISC -- most RISC machines fall into this category, because it allows "better reuse of data"[2]. In a typical three-operand RISC machine, all three operands must be registers, so explicit load/store instructions are needed. An instruction set with 32 registers requires 15 bits to encode three register operands, so this scheme is typically limited to instruction sets with 32-bit instructions or longer. Example: load a, reg1; load b, reg2; add reg1+reg2->reg3; store reg3,c;
  • more operands -- some CISC machines permit a variety of addressing modes that allow more than 3 operands (registers or memory accesses), such as the VAX "POLY" polynomial evaluation instruction.
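
Complementing the register-machine sketch earlier, the toy stack machine below executes the seven-instruction 0-operand sequence from the list above (#a, load, #b, load, add, #c, store). The opcode names and memory layout are illustrative assumptions.

    /* A toy 0-operand (stack) machine: illustrative only. */
    #include <stdio.h>

    enum op { PUSH_ADDR, LOAD, ADD, STORE, HALT };

    struct insn { enum op op; int addr; };       /* addr used only by PUSH_ADDR */

    int main(void) {
        int mem[3] = {5, 7, 0};                  /* a = mem[0], b = mem[1], c = mem[2] */
        int stack[8], sp = 0;                    /* operand stack and stack pointer */

        const struct insn prog[] = {
            { PUSH_ADDR, 0 },                    /* #a    */
            { LOAD,      0 },                    /* load  */
            { PUSH_ADDR, 1 },                    /* #b    */
            { LOAD,      0 },                    /* load  */
            { ADD,       0 },                    /* add   */
            { PUSH_ADDR, 2 },                    /* #c    */
            { STORE,     0 },                    /* store */
            { HALT,      0 },
        };

        for (int pc = 0; ; pc++) {
            const struct insn i = prog[pc];
            switch (i.op) {
            case PUSH_ADDR: stack[sp++] = i.addr;                             break;
            case LOAD:      stack[sp - 1] = mem[stack[sp - 1]];               break;
            case ADD:       sp--; stack[sp - 1] = stack[sp - 1] + stack[sp];  break;
            case STORE:   { int addr = stack[--sp]; mem[addr] = stack[--sp];  break; }
            case HALT:      printf("c = %d\n", mem[2]);                       return 0;
            }
        }
    }

Note that every instruction here fits in very few bits because no operand registers are named, which is the code-density argument made for MISC-style stack machines above.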

List of ISAs

This list is far from comprehensive, as old architectures are abandoned and new ones invented on a continual basis. There are many commercially available microprocessors and microcontrollers implementing ISAs. Customised ISAs are also quite common in some applications, e.g. application-specific integrated circuits (ASICs), FPGAs, and reconfigurable computing.

ISAs implemented in hardware

ISAs commonly implemented in software with hardware incarnations

ISAs only implemented in software

  • ISAC (Instruction Set Architecture C) - a "C" back-end that outputs "C" code instead of assembly language. Very few examples exist; among them are ISAC and Lissom. The LLVM Compiler Infrastructure also implements a C and C++ back-end.

ISAs never implemented in hardware

See also

Categories of ISA

Applications where specialized instruction sets are used

Device types that implement some ISA

Others

Programming language targeting ISA


References

  1. Ganssle, Jack. "Proactive Debugging". Published February 26, 2001.
  2. Cocke, John. "The evolution of RISC technology at IBM". IBM Journal of Research and Development, vol. 44, no. 1/2, p. 48, 2000.
