Stack Computer Architecture

By "stack architecture", I refer to second-generation designs which use hardware stacks, separate from main memory, to perform expression evaluation, parameter passing, and subroutine linkage. These stacks typically make only their top-most or top two elements accessible, granting faster access than register files. Typically, such second-generation stack computers have two stacks: a Data Stack attached to the ALU, and a Return Stack which holds the return addresses of subroutine calls. Philip Koopman wrote the main text [1] on stack machines, though it strongly favours horizontally microcoded designs (which has its own pros and cons) over hardcoded logic.

Stack computers excel at control-heavy tasks, with fast lightweight subroutine calls and minimal interrupt latencies. They also require fewer transistors for a basic useful implementation than plain MIPS-like computers. However, stack computers perform relatively poorly at numerical processing tasks, since operands must constantly be copied and shuffled on the stack. Future designs should consider instead adding stacks to conventional register files, granting the best of both worlds. Richard L. Sites outlined this idea in a 1978 one-page paper [2].
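
The cost of that copying is easy to see in a worked example. The C sketch below (the stack-op macros are invented for illustration) evaluates x*x + y*y: the stack version needs seven stack operations where a three-register machine needs only three instructions, because every reuse of x and y forces an explicit copy (DUP).

    #include <stdio.h>

    static int ds[16]; static int sp = -1;
    #define PUSH(v) (ds[++sp] = (v))
    #define DUP()   (ds[sp+1] = ds[sp], ++sp)   /* copy top of stack    */
    #define MUL()   (ds[sp-1] *= ds[sp], --sp)  /* ALU on top two items */
    #define ADD()   (ds[sp-1] += ds[sp], --sp)

    int main(void) {
        int x = 3, y = 4;
        PUSH(x); DUP(); MUL();   /* x*x       -- 3 stack ops            */
        PUSH(y); DUP(); MUL();   /* y*y       -- 3 more                 */
        ADD();                   /* x*x + y*y -- 7 ops total, versus
                                    3 register instructions:
                                    mul r1,rx,rx; mul r2,ry,ry; add r1,r1,r2 */
        printf("%d\n", ds[sp]);  /* prints 25 */
        return 0;
    }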

Nonetheless, for conventional two-stack architectures, my research [3] suggests that with additional improvements (instruction folding, pipelining, an Instruction Stack) they should reach roughly the same performance as a plain DLX-like or MIPS-like scalar processor.
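
As a rough illustration of one of those improvements, the C sketch below shows instruction folding as a peephole in the fetch loop: a literal push immediately followed by ADD executes as a single add-immediate step instead of two stack operations. The encoding and the folding rule here are simplifications of my own, not the exact mechanism from the thesis.

    #include <stdio.h>

    enum { PUSH, ADD, HALT };

    int main(void) {
        int mem[]  = { PUSH, 10, PUSH, 32, ADD, HALT };
        int ds[16], dsp = -1, pc = 0, cycles = 0;

        while (mem[pc] != HALT) {
            if (mem[pc] == PUSH && mem[pc + 2] == ADD) {
                /* Folded: push-literal + ADD becomes one add-immediate,
                   skipping a trip through the Data Stack. */
                ds[dsp] += mem[pc + 1];
                pc += 3;
            } else if (mem[pc] == PUSH) {
                ds[++dsp] = mem[pc + 1];
                pc += 2;
            } else { /* plain ADD */
                ds[dsp - 1] += ds[dsp];
                dsp--; pc++;
            }
            cycles++;
        }
        printf("result %d in %d cycles\n", ds[dsp], cycles);  /* 42 in 2 */
        return 0;
    }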

  1. Philip Koopman, "Stack Computers: the new wave", Ellis Horwood, 1989. (See also his Stack Computers & Forth page.)
  2. Richard L. Sites, "A Combined Register-Stack Architecture", ACM SIGARCH Computer Architecture News, vol. 6, no. 8, p. 19, 1978.
  3. Charles Eric LaForest, "Second-Generation Stack Computer Architecture", BIS Thesis, University of Waterloo, 2007.
