  1. .\" $Id$
  2. .RP
  3. .ND July 1984
  4. .tr ~
  5. .ds as *
  6. .TL
  7. A Practical Tool Kit for Making Portable Compilers
  8. .AU
  9. Andrew S. Tanenbaum
  10. Hans van Staveren
  11. E. G. Keizer
  12. Johan W. Stevenson
  13. .AI
  14. Mathematics Dept.
  15. Vrije Universiteit
  16. Amsterdam, The Netherlands
  17. .AB
  18. The Amsterdam Compiler Kit is an integrated collection of programs designed to
  19. simplify the task of producing portable (cross) compilers and interpreters.
  20. For each language to be compiled, a program (called a front end)
  21. must be written to
  22. translate the source program into a common intermediate code.
  23. This intermediate code can be optimized and then either directly interpreted
  24. or translated to the assembly language of the desired target machine.
  25. The paper describes the various pieces of the tool kit in some detail, as well
  26. as discussing the overall strategy.
  27. .sp
  28. Keywords: Compiler, Interpreter, Portability, Translator
  29. .sp
  30. CR Categories: 4.12, 4.13, 4.22
  31. .sp 12
  32. Author's present addresses:
  33. A.S. Tanenbaum, H. van Staveren, E.G. Keizer: Mathematics
  34. Dept., Vrije Universiteit, Postbus 7161, 1007 MC Amsterdam,
  35. The Netherlands
  36. J.W. Stevenson: NV Philips, S&I, T&M, Building TQ V5, Eindhoven,
  37. The Netherlands
  38. .AE
  39. .NH 1
  40. Introduction
  41. .PP
As more and more organizations acquire large numbers of micro- and minicomputers,
the need for portable compilers becomes increasingly acute.
  44. The present situation, in which each hardware vendor provides its own
  45. compilers -- each with its own deficiencies and extensions, and none of them
  46. compatible -- leaves much to be desired.
  47. The ideal situation would be an integrated system containing a family
  48. of (cross) compilers, each compiler accepting a standard source language and
  49. producing code for a wide variety of target machines.
  50. Furthermore, the compilers should be compatible, so programs written in
  51. one language can call procedures written in another language.
  52. Finally, the system should be designed so as to make adding new languages
  53. and new machines easy.
  54. Such an integrated system is being built at the Vrije Universiteit.
Its design and implementation are the subject of this article.
  56. .PP
  57. Our compiler building system, which is called the "Amsterdam Compiler Kit"
  58. (ACK), can be thought of as a "tool kit."
  59. It consists of a number of parts that can be combined to form compilers
  60. (and interpreters) with various properties.
  61. The tool kit is based on an idea (UNCOL) that was first suggested in 1960
  62. [7], but which never really caught on then.
  63. The problem which UNCOL attempts to solve is how to make a compiler for
  64. each of
  65. .I N
  66. languages on
  67. .I M
  68. different machines without having to write
  69. .I N
  70. x
  71. .I M
  72. programs.
  73. .PP
  74. As shown in Fig. 1, the UNCOL approach is to write
  75. .I N
  76. "front ends," each
  77. of which translates one source language to a common intermediate language,
  78. UNCOL (UNiversal Computer Oriented Language), and
  79. .I M
  80. "back ends," each
  81. of which translates programs in UNCOL to a specific machine language.
  82. Under these conditions, only
  83. .I N
  84. +
  85. .I M
  86. programs must be written to provide all
  87. .I N
  88. languages on all
  89. .I M
  90. machines, instead of
  91. .I N
  92. x
  93. .I M
  94. programs.
  95. .PP
  96. Various researchers have attempted to design a suitable UNCOL
  97. [2,8], but none of these have become popular.
  98. It is our belief that previous attempts have failed because they have been
  99. too ambitious, that is, they have tried to cover all languages
  100. and all machines using a single UNCOL.
  101. Our approach is more modest: we cater only to algebraic languages
  102. and machines whose memory consists of 8-bit bytes, each with its own address.
  103. Typical languages that could be handled include
  104. Ada, ALGOL 60, ALGOL 68, BASIC, C, FORTRAN,
  105. Modula, Pascal, PL/I, PL/M, PLAIN, and RATFOR,
  106. whereas COBOL, LISP, and SNOBOL would be less efficient.
  107. Examples of machines that could be included are the Intel 8080 and 8086,
  108. Motorola 6800, 6809, and 68000, Zilog Z80 and Z8000, DEC PDP-11 and VAX,
and IBM 370, but not the Burroughs 6700, CDC Cyber, or Univac 1108 (because
  110. they are not byte-oriented).
  111. With these restrictions, we believe the old UNCOL idea can be used as the
  112. basis of a practical compiler-building system.
  113. .KF
  114. .sp 15P
  115. .ce 1
  116. Fig. 1. The UNCOL model.
  117. .sp
  118. .KE
  119. .NH 1
  120. An Overview of the Amsterdam Compiler Kit
  121. .PP
  122. The tool kit consists of eight components:
  123. .sp
  124. 1. The preprocessor.
  125. 2. The front ends.
  126. 3. The peephole optimizer.
  127. 4. The global optimizer.
  128. 5. The back end.
  129. 6. The target machine optimizer.
  130. 7. The universal assembler/linker.
  131. 8. The utility package.
  132. .sp
  133. .PP
  134. A fully optimizing compiler,
  135. depicted in Fig. 2, has seven cascaded phases.
  136. Conceptually, each component reads an input file and writes a
  137. transformed output file to be used as input to the next component.
In practice, some components may use temporary files, either to allow multiple
passes over the input or to hold internal intermediate results.
  140. .KF
  141. .sp 12P
  142. .ce 1
  143. Fig. 2. Structure of the Amsterdam Compiler Kit.
  144. .sp
  145. .KE
  146. .PP
  147. In the following paragraphs we will briefly describe each component.
  148. After this overview, we will look at all of them again in more detail.
  149. A program to be compiled is first fed into the (language independent)
  150. preprocessor, which provides a simple macro facility,
and similar textual facilities.
  152. The preprocessor's output is a legal program in one of the programming
  153. languages supported, whereas the input is a program possibly augmented
  154. with macros, etc.
  155. .PP
  156. This output goes into the appropriate front end, whose job it is to
  157. produce intermediate code.
  158. This intermediate code (our UNCOL) is the machine language for a simple
  159. stack machine called EM (Encoding Machine).
  160. A typical front end might build a parse tree from the input, and then
  161. use the parse tree to generate EM code, which is similar to reverse Polish.
  162. In order to perform this work, the front end has to maintain tables of
  163. declared variables, labels, etc., determine where to place the
  164. data structures in memory, and so on.
  165. .PP
  166. The EM code generated by the front end is fed into the peephole optimizer,
  167. which scans it with a window of a few instructions, replacing certain
  168. inefficient code sequences by better ones.
Such a search is worthwhile because EM contains instructions to handle
  170. numerous important special cases efficiently
  171. (e.g., incrementing a variable by 1).
  172. It is our strategy to relieve the front ends of the burden of hunting for
  173. special cases because there are many front ends and only one peephole
  174. optimizer.
  175. By handling the special cases in the peephole optimizer,
the front ends become simpler, easier to write, and easier to maintain.
  177. .PP
Following the peephole optimizer is a global optimizer [5], which,
  179. unlike the peephole optimizer, examines the program as a whole.
  180. It builds a data flow graph to make possible a variety of
  181. global optimizations,
among them, moving invariant code out of loops, avoiding redundant
computations, performing live/dead analysis, and eliminating tail recursion.
  184. Note that the output of the global optimizer is still EM code.
  185. .PP
  186. Next comes the back end, which differs from the front ends in a
  187. fundamental way.
  188. Each front end is a separate program, whereas the back end is a single
  189. program that is driven by a machine dependent driving table.
  190. The driving table for a specific machine tells how the EM code is mapped
  191. onto the machine's assembly language.
  192. Although a simple driving table might just macro expand each EM instruction
  193. into a sequence of target machine instructions, a much more sophisticated
  194. translation strategy is normally used, as described later.
  195. For speed, the back end does not actually read in the driving table at run time.
  196. Instead, the tables are compiled along with the back end in advance, resulting
  197. in one binary program per machine.
  198. .PP
  199. The output of the back end is a program in the assembly language of some
  200. particular machine.
  201. The next component in the pipeline reads this program and performs peephole
  202. optimization on it.
The optimizations performed here involve idiosyncrasies
  204. of the target machine that cannot be performed in the machine-independent
  205. EM-to-EM peephole optimizer.
  206. Typically these optimizations take advantage of special instructions or special
  207. addressing modes.
  208. .PP
  209. The optimized target machine assembly code then goes into the final
  210. component in the pipeline, the universal assembler/linker.
  211. This program assembles the input to object format, extracting routines from
  212. libraries and including them as needed.
  213. .PP
  214. The final component of the tool kit is the utility package, which contains
  215. various test programs, interpreters for EM code,
  216. EM libraries, conversion programs, and other aids for the implementer and
  217. user.
  218. .NH 1
  219. The Preprocessor
  220. .PP
  221. The function of the preprocessor is to extend all the programming languages
  222. by adding certain generally useful facilities to them in a uniform way.
  223. One of these is a simple macro system, in which the user can give names to
  224. character strings.
  225. The names can be used in the program, with the knowledge that they will be
  226. macro expanded prior to being input to the front end.
  227. Macros can be used for named constants, expanding short "procedures"
  228. in line, etc.
  229. .PP
  230. Another useful facility provided by the preprocessor is the ability to
  231. include compile-time libraries.
  232. On large projects, it is common to have all the declarations and definitions
  233. gathered together in a few files that are textually included in the programs
  234. by instructing the preprocessor to read them in, thus fooling the front end
  235. into thinking that they were part of the source program.
  236. .PP
  237. A third feature of the preprocessor is conditional compilation.
  238. The input program can be split up into labeled sections.
  239. By setting flags, some of the sections can be deleted by the preprocessor,
  240. thus allowing a family of slightly different programs to be conveniently stored
in a single file.
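.PP
To make these facilities concrete, the fragment below shows the three ideas
in the notation of the familiar C preprocessor.
It is purely illustrative; the kit's own preprocessor has its own directive
syntax, which is not defined in this paper.
.sp
.nf
#define MAXUSERS 100      /* named constant, expanded before the front end sees it */
#include "globals.h"      /* compile-time library of shared declarations */
#ifdef VAX                /* labeled section kept or deleted by a compile-time flag */
    typedef long whole;   /* 32-bit integers are cheap on the VAX */
#else
    typedef int whole;    /* elsewhere settle for the natural word size */
#endif
.fi
.sp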
  242. .NH 1
  243. The Front Ends
  244. .PP
  245. A front end is a program that converts input in some source language to a
  246. program in EM.
  247. At present, front ends
  248. exist or are in preparation for Pascal, C, and Plain, and are being considered
  249. for Ada, ALGOL 68, FORTRAN 77, and Modula 2.
  250. Each of the present front ends is independent of all the other ones,
  251. although a general-purpose, table-driven front end is conceivable, provided
  252. one can devise a way to express the semantics of the source language in the
  253. driving tables.
  254. The Pascal front end uses a top-down parsing algorithm (recursive descent),
  255. whereas the C and Plain front ends are bottom-up.
  256. .PP
  257. All front ends, independent of the language being compiled,
  258. produce a common intermediate code called EM, which is
  259. the assembly language for a simple stack machine.
  260. The EM machine is based on a memory architecture
  261. containing a stack for local variables, a (static) data area for variables
  262. declared in the outermost block and global to the whole program, and a heap
  263. for dynamic data structures.
  264. In some ways EM resembles P-code [6], but is more general, since it is
  265. intended for a wider class of languages than just Pascal.
  266. .PP
  267. The EM instruction set has been described elsewhere
  268. [9,10,11]
  269. so we will only briefly summarize it here.
  270. Instructions exist to:
  271. .sp
  272. 1. Load a variable or constant of some length onto the stack.
  273. 2. Store the top item on the stack in memory.
  274. 3. Add, subtract, multiply, divide, etc. the top two stack items.
  275. 4. Examine the top one or two stack items and branch conditionally.
  276. 5. Call procedures and return from them.
  277. .sp
  278. .PP
  279. Loads and stores come in several variations, corresponding to the most common
  280. programming language semantics, for example, constants, simple variables,
  281. fields of a record, elements of an array, and so on.
  282. Distinctions are also made between variables local to the current block
  283. (i.e., stack frame), those in the outermost block (static storage), and those
  284. at intermediate lexicographic levels, which are accessed by following the
  285. static chain at run time.
  286. .PP
  287. All arithmetic instructions have a type (integer, unsigned, real,
  288. pointer, or set) and an
operand length, which may be either explicit or popped from the stack
  290. at run time.
  291. Monadic branch instructions pop an item from the stack and branch if it is
  292. less than zero, less than or equal to zero, etc.
  293. Dyadic branch instructions pop two items, compare them, and branch accordingly.
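.PP
As an illustration, consider the assignment M := I + J + 5, with 2-byte
integers and all three variables local to the current procedure.
Using the mnemonics that appear later in this paper (LOL = load local,
LOC = load constant, ADI = add integers, STL = store local), a front end
could emit reverse-Polish EM code along the following lines (a sketch;
the operand notation is simplified here):
.sp
.nf
LOL I       push local variable I onto the stack
LOL J       push local variable J
ADI 2       pop both, add them as 2-byte integers, push the sum
LOC 5       push the constant 5
ADI 2       add again
STL M       pop the result and store it in local M
.fi
.sp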
  294. .PP
  295. In addition to these basic EM instructions, there is a collection of special
  296. purpose instructions (e.g., to increment a local variable), which are typically
  297. produced from the simple ones by the peephole optimizer.
  298. Although the complete EM instruction set contains nearly 150 instructions,
  299. only about 60 of them are really primitive; the rest are simply abbreviations
  300. for commonly occurring EM instruction sequences.
  301. .PP
  302. Of particular interest is the way object sizes are parametrized.
  303. The front ends allow the user to indicate how many bytes an integer, real, etc.
  304. should occupy.
  305. Given this information, the front ends can allocate memory, determining
  306. the placement of variables within the stack frame.
  307. Sizes for primitive types are restricted to 8, 16, 32, 64, etc. bits.
  308. The front ends are also parametrized by the target machine's word length
  309. and address size so they can tell, for example, how many "load" instructions
  310. to generate to move a 32-bit integer.
  311. In the examples used henceforth,
  312. we will assume a 16-bit word size and 16-bit integers.
  313. .PP
  314. Since only byte-addressable target machines are permitted,
  315. it is nearly
always possible to implement any requested size on any target machine.
  317. For example, the designer of the back end tables for the Z80 should provide
  318. code for 8-, 16-, and 32-bit arithmetic.
  319. In our view, the Pascal, C, or Plain programmer specifies what lengths
  320. are needed,
  321. without reference to the target machine,
and the back end provides them.
  323. This approach greatly enhances portability.
  324. While it is true that doing all arithmetic using 32-bit integers on the Z80
  325. will not be terribly fast, we feel that if that is what the programmer needs,
  326. it should be possible to implement it.
  327. .PP
  328. Like all assembly languages, EM has not only machine instructions, but also
  329. pseudoinstructions.
  330. These are used to indicate the start and end of each procedure, allocate
  331. and initialize storage for data, and similar functions.
  332. One particularly important pseudoinstruction is the one that is used to
  333. transmit information to the back end for optimization purposes.
  334. It can be used to suggest variables that are good candidates to assign to
  335. registers, delimit the scope of loops, indicate that certain variables
  336. contain a useful value (next operation is a load) or not (next operation is
a store), and so on.
  338. .NH 1
  339. The Peephole Optimizer
  340. .PP
  341. The peephole optimizer reads in unoptimized EM programs and writes out
  342. optimized ones.
  343. Both the input and output are expressed in a highly compact code, rather than
in ASCII, to reduce the I/O time, which would otherwise dominate the CPU
  345. time.
  346. The program itself is table driven, and is, by and large, ignorant of the
  347. semantics of EM.
  348. The knowledge of EM is contained in a
  349. language- and machine-independent table consisting of about 400
  350. pattern-replacement pairs.
  351. We will briefly describe the kinds of optimizations it performs below;
  352. a more complete discussion can be found in [9].
  353. .PP
  354. Each line in the driving table describes one optimization, consisting of a
  355. pattern part and a replacement part.
  356. The pattern part is a series of one or more EM instructions and a boolean
  357. expression.
  358. The replacement part is a series of EM instructions with operands.
  359. A typical optimization might be:
  360. .sp
  361. LOL LOC ADI STL ($1 = $4) and ($2 = 1) and ($3 = 2) ==> INL $1
  362. .sp
  363. where the text prior to the ==> symbol is the pattern and the text after it is
  364. the replacement.
  365. LOL loads a local variable onto the stack, LOC loads a constant onto the stack,
  366. ADI is integer addition, and STL is store local.
  367. The pattern specifies that four consecutive EM instructions are present, with
  368. the indicated opcodes, and that furthermore the operand of the first
  369. instruction (denoted by $1) and the fourth instruction (denoted by $4) are the
  370. same, the constant pushed by LOC is 1, and the size of the integers added by
  371. ADI is 2 bytes.
  372. (EM instructions have at most one operand, so it is not necessary to specify
  373. the operand number.)
  374. Under these conditions, the four instructions can be replaced by a single INL
  375. (increment local) instruction whose operand is equal to that of LOL.
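.PP
Although the real driving table is written in a special notation and compiled
into the optimizer, its flavor can be sketched in a few lines of C.
All names and values below are hypothetical and serve only to show how a
pattern part (a list of opcodes plus a boolean condition on their operands)
can be paired with a replacement part; the entry shown encodes the INL
optimization given above.
.sp
.nf
/* Hypothetical sketch only; this is not the actual ACK table format. */
enum opcode { LOL, LOC, ADI, STL, INL };    /* invented opcode values */

struct em_instr {           /* an EM instruction: opcode plus at most one operand */
    enum opcode op;
    long arg;
};

/* Condition for the pattern LOL LOC ADI STL: ($1 = $4) and ($2 = 1) and ($3 = 2). */
static int inl_cond(const struct em_instr *w)
{
    return w[0].arg == w[3].arg && w[1].arg == 1 && w[2].arg == 2;
}

/* Replacement: a single INL whose operand is that of the LOL. */
static void inl_repl(const struct em_instr *w, struct em_instr *out)
{
    out[0].op = INL;
    out[0].arg = w[0].arg;
}

struct optimization {
    int npat;                                     /* instructions in the pattern */
    enum opcode pat[4];                           /* their opcodes */
    int (*cond)(const struct em_instr *);         /* boolean part of the pattern */
    int nrepl;                                    /* instructions in the replacement */
    void (*repl)(const struct em_instr *, struct em_instr *);
};

static const struct optimization inl_opt =
    { 4, { LOL, LOC, ADI, STL }, inl_cond, 1, inl_repl };
.fi
.sp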
  376. .PP
  377. Although the optimizations cover a wide range, the main ones
  378. can be roughly divided into the following categories.
  379. \fIConstant folding\fR
  380. is used to evaluate constant expressions, such as 2*3~+~7 at
  381. compile time instead of run time.
  382. \fIStrength reduction\fR
  383. is used to replace one operation, such as multiply, by
  384. another, such as shift.
  385. \fIReordering of expressions\fR
  386. helps in cases like -K/5, which can be better
  387. evaluated as K/-5, because the former requires
  388. a division and a negation, whereas the latter requires only a division.
  389. \fINull instructions\fR
  390. include resetting the stack pointer after a call with 0 parameters,
  391. offsetting zero bytes to access the
first element of a record, and jumping to the next instruction.
  393. \fISpecial instructions\fR
  394. are those like INL, which deal with common special cases
  395. such as adding one to a variable or comparing something to zero.
  396. \fIGroup moves\fR
  397. are useful because a sequence
  398. of consecutive moves can often be replaced with EM code
  399. that allows the back end to generate a loop instead of in line code.
  400. \fIDead code elimination\fR
  401. is a technique for removing unreachable statements, possibly made unreachable
  402. by previous optimizations.
  403. \fIBranch chain compression\fR
  404. can be applied when a branch instruction jumps to another branch instruction.
  405. The first branch can jump directly to the final destination instead of
  406. indirectly.
  407. .PP
  408. The last two optimizations logically belong in the global optimizer but are
in the local optimizer for historical reasons: for many years the local
optimizer was the only optimizer, and these optimizations were
easy to do there.
  412. .NH 1
  413. The Global Optimizer
  414. .PP
  415. In contrast to the peephole optimizer, which examines the EM code a few lines
  416. at a time through a small window, the global optimizer examines the
  417. program's large scale structure.
  418. Three distinct types of optimizations can be found here:
  419. .sp
  420. 1. Interprocedural optimizations.
  421. 2. Intraprocedural optimizations.
  422. 3. Basic block optimizations.
  423. .sp
  424. We will now look at each of these in turn.
  425. .PP
  426. Interprocedural optimizations are those spanning procedure boundaries.
  427. The most important one is deciding to expand procedures in line,
  428. especially short procedures that occur in loops and pass several parameters.
  429. If it takes more time or memory to pass the parameters than to do the work,
  430. the program can be improved by eliminating the procedure.
  431. The inverse optimization -- discovering long common code sequences and
  432. turning them into a procedure -- is also possible, but much more difficult.
  433. Like much of the global optimizer's work, the decision to make or not make
  434. a certain program transformation is a heuristic one, based on knowledge of
  435. how the back end works, how most target machines are organized, etc.
  436. .PP
  437. The heart of the global optimizer is its analysis of individual
  438. procedures.
  439. To perform this analysis, the optimizer must locate the basic blocks,
  440. instruction sequences which can be entered only at the top and exited
  441. only at the bottom.
  442. It then constructs a data flow graph, with the basic blocks as nodes and
  443. jumps between blocks as arcs.
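.PP
In outline, the graph the optimizer manipulates might be declared as below
(a sketch in C with invented names; the real data structures are more
elaborate).
A cycle among the successor arcs, found here by a depth-first search, is
what identifies a loop.
.sp
.nf
struct em_instr;                      /* the EM instructions themselves, details omitted */

struct basic_block {
    struct em_instr *first, *last;    /* the straight-line code of this block */
    struct basic_block *succ[2];      /* arcs to the blocks control may reach next */
    int nsucc;
    int state;                        /* 0 = unvisited, 1 = on current path, 2 = finished */
};

/* An arc back to a block still on the current search path is a cycle, hence a loop. */
static int has_loop(struct basic_block *b)
{
    int i;

    b->state = 1;
    for (i = 0; i < b->nsucc; i++) {
        struct basic_block *s = b->succ[i];
        if (s->state == 1)
            return 1;
        if (s->state == 0 && has_loop(s))
            return 1;
    }
    b->state = 2;
    return 0;
}
.fi
.sp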
  444. .PP
  445. From the data flow graph, many important properties of the program can be
  446. discovered and exploited.
  447. Chief among these is the presence of loops, indicated by cycles in the graph.
  448. One important optimization is looking for code that can be moved outside the
  449. loop, either prior to it or subsequent to it.
  450. Such code motion saves execution time, although it does not save memory.
  451. Unrolling loops is also possible and desirable in some cases.
  452. .PP
  453. Another area in which global analysis of loops is especially important is
register allocation.
  455. While it is true that EM does not have any registers to allocate,
  456. the optimizer can easily collect information to allow the
  457. back end to allocate registers wisely.
  458. For example, the global optimizer can collect static frequency-of-use
  459. and live/dead information about variables.
  460. (A variable is dead at some point in the program if its current value is
  461. not needed, i.e., the next reference to it overwrites it rather than
  462. reading it; if the current value will eventually be used, the variable is
  463. live.)
  464. If two variables are never simultaneously live over some interval of code
  465. (e.g., the body of a loop), they can be packed into a single variable,
  466. which, if used often enough, may warrant being assigned to a register.
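.PP
A small source-level example (a sketch, not optimizer output) may help.
In the function below, i is dead before j becomes live, so the two can share
one storage location, and hence possibly one register.
.sp
.nf
void clear_and_bump(int *a, int *b, int n)
{
    int i, j;

    for (i = 0; i < n; i++)      /* i is live only during this loop */
        a[i] = 0;
    /* i is dead from here on: its value is never read again */
    for (j = 0; j < n; j++)      /* j becomes live only here */
        b[j] = b[j] + 1;
    /* i and j are never live at the same time, so they can be packed together */
}
.fi
.sp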
  467. .PP
Many loops involve arrays, and this leads to other optimizations.
  469. If an array is accessed sequentially, with each iteration using the next
  470. higher numbered element, code improvement is often possible.
  471. Typically, a pointer to the bottom element of each array can be set up
  472. prior to the loop.
  473. Within the loop the element is accessed indirectly via the pointer, which is
  474. also incremented by the element size on each iteration.
  475. If the target machine has an autoincrement addressing mode and the pointer
  476. is assigned to a register, an array access can often be done in a single
  477. instruction.
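.PP
In source terms the transformation looks roughly as follows (a C sketch;
the optimizer of course performs it on the EM code, not on the source).
.sp
.nf
int sum_array(const int *a, int n)
{
    int i, sum = 0;
    const int *p;

    /* before: each iteration in effect recomputes the address of a[i] */
    for (i = 0; i < n; i++)
        sum += a[i];

    /* after: a pointer is set up before the loop and bumped by the element
       size on each iteration; with p in a register this maps directly onto
       an autoincrement addressing mode where the target machine has one */
    sum = 0;
    p = a;
    for (i = 0; i < n; i++)
        sum += *p++;

    return sum;
}
.fi
.sp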
  478. .PP
  479. Other intraprocedural optimizations include removing tail recursion
  480. (last statement is a recursive call to the procedure itself),
  481. topologically sorting the basic blocks to minimize the number of branch
  482. instructions, and common subexpression recognition.
  483. .PP
  484. The third general class of optimizations done by the global optimizer is
  485. improving the structure of a basic block.
  486. For the most part these involve transforming arithmetic or boolean
  487. expressions into forms that are likely to result in better target code.
  488. As a simple example, A~+~B*C can be converted to B*C~+~A.
  489. The latter can often
  490. be handled by loading B into a register, multiplying the register by C, and
  491. then adding in A, whereas the former may involve first putting A into a
  492. temporary, depending on the details of the code generation table.
  493. Another example of this kind of basic block optimization is transforming
  494. -B~+~A~<~0 into the equivalent, but simpler, A~<~B.
  495. .NH 1
  496. The Back End
  497. .PP
  498. The back end reads a stream of EM instructions and generates assembly code
  499. for the target machine.
  500. Although the algorithm itself is machine independent, for each target
  501. machine a machine dependent driving table must be supplied.
  502. The driving table effectively defines the mapping of EM code to target code.
  503. .PP
  504. It will be convenient to think of the EM instructions being read as a
  505. stream of tokens.
  506. For didactic purposes, we will concentrate on two kinds of tokens:
  507. those that load something onto the stack, and those that perform some operation
  508. on the top one or two values on the stack.
  509. The back end maintains at compile time a simulated stack whose behavior
  510. mirrors what the stack of a hardware EM machine would do at run time.
  511. If the current input token is a load instruction, a new entry is pushed onto
  512. the simulated stack.
  513. .PP
  514. Consider, as an example, the EM code produced for the statement K~:=~I~+~7.
  515. If K and I are
  516. 2-byte local variables, it will normally be LOL I; LOC 7; ADI~2; STL K.
  517. Initially the simulated stack is empty.
  518. After the first token has been read and processed, the simulated stack will
  519. contain a stack token of type MEM with attributes telling that it is a local,
  520. giving its address, etc.
  521. After the second token has been read and processed, the top two tokens on the
  522. simulated stack will be CON (constant) on top and MEM directly underneath it.
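.PP
Pictorially, the first two tokens are processed as follows.
.sp
.nf
token read      simulated stack afterwards (top at the right)
LOL I           MEM(I)
LOC 7           MEM(I)  CON(7)
.fi
.sp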
  523. .PP
  524. At this point the back end reads the ADI~2 token and
  525. looks in the driving table to find a line or lines that define the
  526. action to be taken for ADI~2.
  527. For a typical multiregister machine, instructions will exist to add constants
  528. to registers, but not to memory.
  529. Consequently, the driving table will not contain an entry for ADI~2 with stack
  530. configuration CON, MEM.
  531. .PP
  532. The back end is now faced with the problem of how to get from its
  533. current stack configuration, CON, MEM, which is not listed, to one that is
  534. listed.
  535. The table will normally contain rules (which we call "coercions")
  536. for converting between CON, REG, MEM, and similar tokens.
  537. Therefore the back end attempts to "coerce" the stack into a configuration
  538. that
  539. .I is
  540. present in the table.
  541. A typical coercion rule might tell how to convert a MEM into
  542. a REG, namely by performing the actions of allocating a
  543. register and emitting code to move the memory word to that register.
  544. Having transformed the compile-time stack into a configuration allowed for
  545. ADI~2, the rule can be carried out.
  546. A typical rule
  547. for ADI~2 might have stack configuration REG, MEM
  548. and would emit code to add the MEM to the REG, leaving the stack
  549. with a single REG token instead of the REG and MEM tokens present before the
  550. ADI~2.
  551. .PP
  552. In general, there will be more than one possible coercion path.
  553. Assuming reasonable coercion rules for our example,
  554. we might be able to convert
  555. CON MEM into CON REG by loading the variable I into a register.
  556. Alternatively, we could coerce CON to REG by loading the constant into a register.
  557. The first coercion path does the add by first loading I into a register and
  558. then adding 7 to it.
  559. The second path first loads 7 into a register and then adds I to it.
  560. On machines with a fast LOAD IMMEDIATE instruction for small constants
  561. but no fast ADD IMMEDIATE, or vice
  562. versa, one code sequence will be preferable to the other.
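.PP
For a PDP-11-like target the two paths would produce code along the following
lines (illustrative assembly code only).
.sp
.nf
coerce MEM to REG first         coerce CON to REG first
    mov  I,r0                       mov  #7,r0
    add  #7,r0                      add  I,r0
    mov  r0,K                       mov  r0,K
.fi
.sp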
  563. .PP
  564. In fact, we actually have more choices than suggested above.
  565. In both coercion paths a register must be allocated.
  566. On many machines, not every register can be used in every operation, so the
  567. choice may be important.
  568. On some machines, for example, the operand of a multiply must be in an odd
  569. register.
  570. To summarize, from any state (i.e., token and stack configuration), a
  571. variety of choices can be made, leading to a variety of different target
  572. code sequences.
  573. .PP
  574. To decide which of the various code sequences to emit, the back end must have
  575. some information about the time and memory cost of each one.
  576. To provide this information, each rule in the driving table, including
  577. coercions, specifies both the time and memory cost of the code emitted when
  578. the rule is applied.
  579. The back end can then simply try each of the legal possibilities (including all
  580. the possible register allocations) to find the cheapest one.
  581. .PP
  582. This situation is similar to that found in a chess or other game-playing
  583. program, in which from any state a finite number of moves can be made.
  584. Just as in a chess program, the back end can look at all the "moves" that can
  585. be made from each state reachable from the original state, and thus find the
  586. sequence that gives the minimum cost to a depth of one.
  587. More generally, the back end can evaluate all paths corresponding to accepting
  588. the next
  589. .I N
  590. input tokens, find the cheapest one, and then make the first move along
  591. that path, precisely the way a chess program would.
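.PP
A minimal sketch of this search is given below in C.
The structures and helper functions are hypothetical (the real back end is
considerably more elaborate): from a given state every applicable rule is
tried, the search recurses to a fixed depth of moves, and the cost of the
cheapest subtree is reported back; the caller then makes the first move
along the cheapest path, as described above.
.sp
.nf
struct state;        /* position in the token stream plus simulated-stack contents */
struct rule;         /* one table entry: pattern, replacement code, and its cost */

/* Assumed helpers, not shown here. */
int applicable(const struct state *s, const struct rule **out, int max);
struct state *apply(const struct state *s, const struct rule *r);  /* new state */
int cost(const struct rule *r);          /* e.g. 0.3 x time + 0.7 x memory, scaled */

/* Cheapest total cost reachable from s within the next n moves. */
int cheapest(const struct state *s, int n)
{
    const struct rule *moves[64];
    int i, nmoves, best, c;

    if (n == 0)
        return 0;
    nmoves = applicable(s, moves, 64);
    if (nmoves == 0)
        return 0;                        /* no rule applies: nothing left to do */
    best = 1000000000;
    for (i = 0; i < nmoves; i++) {
        c = cost(moves[i]) + cheapest(apply(s, moves[i]), n - 1);
        if (c < best)
            best = c;
    }
    return best;
}
.fi
.sp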
  592. .PP
  593. Since the back end is analogous to both a parser and a chess playing program,
  594. some clarifying remarks may be helpful.
First, chess programs and the back end must do some lookahead, whereas the
parser for a well-designed grammar can usually manage with a single input token
  597. because grammars are supposed to be unambiguous.
  598. In contrast, many legal mappings
  599. from a sequence of EM instructions to target code may exist.
  600. Second, like a parser but unlike a chess program, the back end has perfect
  601. information -- it does not have to contend with an unpredictable opponent's
  602. moves.
  603. Third, chess programs normally make a static evaluation of the board and
  604. label the
  605. .I nodes
  606. of the tree with the resulting scores.
  607. The back end, in contrast, associates costs with
  608. .I arcs
  609. (moves) rather than nodes (states).
However, the difference is not essential, since the back end could
  611. also label each node with the cumulative cost from the root to that node.
  612. .PP
  613. As mentioned above, the cost field in the table contains
  614. .I both
  615. the time and memory costs for the code emitted.
  616. It should be clear that the back end could use either one
  617. or some linear combination of them as the scoring function for evaluating moves.
  618. A user can instruct the compiler to optimize for time or for memory or
  619. for, say, 0.3 x time + 0.7 x memory.
  620. Thus the same compiler can provide a wide range of performance options to
  621. the user.
  622. The writer of the back end table can take advantage of this flexibility by
  623. providing several code sequences with different tradeoffs for each EM
  624. instruction (e.g., in line code vs. call to a run time routine).
  625. .PP
  626. In addition to the time-space tradeoffs, by specifying the depth of search
  627. parameter,
  628. .I N ,
the user can effectively also trade off compile time against object
  630. code quality, for whatever code metric has been chosen.
  631. In summary, by combining the properties of a parser and a game playing program,
  632. it is possible to make a code generator that is table driven,
highly flexible, and able to produce good code from a
stack-machine intermediate code.
  635. .NH 1
  636. The Target Machine Optimizer
  637. .PP
In the model of Fig. 2, the peephole optimizer comes before the global
  639. optimizer.
  640. It may happen that the code produced by the global optimizer can also
  641. be improved by another round of peephole optimization.
  642. Conceivably, the system could have been designed to iterate peephole and
  643. global optimizations until no more of either could be performed.
  644. .PP
  645. However, both of these optimizations are done on the machine independent
  646. EM code.
Neither is able to take advantage of the peculiarities and idiosyncrasies with
  648. which most target machines are well endowed.
  649. It is the function of the final
  650. optimizer to do any (peephole) optimizations that still remain.
  651. .PP
  652. The algorithm used here is the same as in the EM peephole optimizer.
  653. In fact, if it were not for the differences between EM syntax, which is
  654. very restricted, and target assembly language syntax,
  655. which is less so, precisely the same program could be used for both.
  656. Nevertheless, the same ideas apply concerning patterns and replacements, so
  657. our discussion of this optimizer will be restricted to one example.
  658. .PP
  659. To see what the target optimizer might do, consider the
  660. PDP-11 instruction sequence sub #2,r0; mov (r0),x.
First, 2 is subtracted from register 0; then the word it points to
is moved to x.
  663. The PDP-11 happens to have an addressing mode to perform this sequence in
  664. one instruction: mov -(r0),x.
  665. Although it is conceivable that this instruction could be included in the
  666. back end driving table for the PDP-11, it is awkward to do so because it
  667. can occur in so many contexts.
  668. It is much easier to catch things like this in a separate program.
  669. .NH 1
  670. The Universal Assembler/Linker
  671. .PP
  672. Although assembly languages for different machines may appear very different
  673. at first glance, they have a surprisingly large intersection.
  674. We have been able to construct an assembler/linker that is almost entirely
  675. independent of the assembly language being processed.
  676. To tailor the program to a specific assembly language, it is necessary to
  677. supply a table giving the list of instructions, the bit patterns required for
  678. each one, and the language syntax.
  679. The machine independent part of the assembler/linker is then compiled with the
  680. table to produce an assembler and linker for a particular target machine.
  681. Experience has shown that writing the necessary table for a new machine can be
  682. done in less than a week.
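.PP
The flavor of such a table can be suggested by a few entries, sketched here
in C.
The field names are invented and the bit patterns are merely illustrative;
the real table also describes the operand syntax, which is omitted here.
.sp
.nf
struct opcode_entry {
    const char *mnemonic;      /* as written in the assembly language */
    unsigned bits;             /* bit pattern of the opcode field */
    int noperands;             /* how many operands the syntax allows */
};

static const struct opcode_entry table[] = {
    { "mov", 0010000, 2 },
    { "add", 0060000, 2 },
    { "clr", 0005000, 1 },
};
.fi
.sp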
  683. .PP
  684. To enforce a modicum of uniformity, we have chosen to use a common set of
  685. pseudoinstructions for all target machines.
They are used to initialize memory, allocate uninitialized memory, select the
current segment, and perform similar functions found in most assemblers.
  688. .PP
  689. The assembler is also a linker.
  690. After assembling a program, it checks to see if there are any
  691. unsatisfied external references.
  692. If so, it begins reading the libraries to find the necessary routines, including
  693. them in the object file as it finds them.
  694. This approach requires libraries to be maintained in assembly language form,
  695. but eliminates the need for inventing a language to express relocatable
  696. object programs in a machine independent way.
  697. It also simplifies the assembler, since producing absolute object code is
  698. easier than producing relocatable object code.
  699. Finally, although assembly language libraries may be somewhat larger than
  700. relocatable object module libraries, the loss in speed due to having more
  701. input may be more than compensated for by not having to pass an intermediate
  702. file between the assembler and linker.
  703. .NH 1
  704. The Utility Package
  705. .PP
  706. The utility package is a collection of programs designed to aid the
  707. implementers of new front ends or new back ends.
  708. The most useful ones are the test programs.
  709. For example, one test set, EMTEST, systematically checks out a back end by
  710. executing an ever larger subset of the EM instructions.
  711. It starts out by testing LOC, LOL and a few of the other essential instructions.
  712. If these appear to work, it then tries out new instructions one at a time,
  713. adding them to the set of instructions "known" to work as they pass the tests.
  714. .PP
  715. Each instruction is tested with a variety of operands chosen from values
  716. where problems can be expected.
For example, on target machines that have 16-bit index registers but allow
only 8-bit displacements, a fundamentally different algorithm may be needed
for accessing
local variables at small offsets than for those at offsets in the thousands.
  721. The test programs have been carefully designed to thoroughly test all relevant
  722. cases.
  723. .PP
  724. In addition to EMTEST, test programs in Pascal, C, and other languages are also
  725. available.
  726. A typical test is:
  727. .sp
  728. i := 9; \fBif\fP i + 250 <> 259 \fBthen\fP error(16);
  729. .sp
  730. Like EMTEST, the other test programs systematically exercise all features of the
  731. language being tested, and do so in a way that makes it possible to pinpoint
  732. errors precisely.
  733. While it has been said that testing can only demonstrate the presence of errors
  734. and not their absence, our experience is that
  735. the test programs have been invaluable in debugging new parts of the system
  736. quickly.
  737. .PP
  738. Other utilities include programs to convert
  739. the highly compact EM code produced by front ends to ASCII and vice versa,
  740. programs to build various internal tables from human writable input formats,
  741. a variety of libraries written in or compiled to EM to make them portable,
  742. an EM assembler, and EM interpreters for various machines.
  743. .PP
  744. Interpreting the EM code instead of translating it to target machine language
  745. is useful for several reasons.
  746. First, the interpreters provide extensive run time diagnostics including
  747. an option to list the original source program (in Pascal, C, etc.) with the
  748. execution frequency or execution time for each source line printed in the
  749. left margin.
  750. Second, since an EM program is typically about one-third the size of a
  751. compiled program, large programs can be executed on small machines.
  752. Third, running the EM code directly makes it easier to pinpoint errors in
  753. the EM output of front ends still being debugged.
  754. .NH 1
  755. Summary and Conclusions
  756. .PP
  757. The Amsterdam Compiler Kit is a tool kit for building
  758. portable (cross) compilers and interpreters.
  759. The main pieces of the kit are the front ends, which convert source programs
  760. to EM code, optimizers, which improve the EM code, and back ends, which convert
  761. the EM code to target assembly language.
  762. The kit is highly modular, so writing one front end
  763. (and its associated runtime routines)
  764. is sufficient to implement
  765. a new language on a dozen or more machines, and writing one back end table
  766. and one universal assembler/linker table is all that is needed to bring up all
  767. the previously implemented languages on a new machine.
In this manner, the contents, and hopefully the usefulness, of the tool kit
will increase over time.
  770. .PP
  771. We believe the principal lesson to be learned from our work is that the old
  772. UNCOL idea is basically a sound way to produce compilers, provided suitable
  773. restrictions are placed on the source languages and target machines.
We also believe that although compilers produced by this technology may not
equal the very best handcrafted compilers in terms of object code quality,
they are certainly
  777. competitive with many existing compilers.
  778. However, when one factors in the cost of producing the compiler,
  779. the possible slight loss in performance may be more than compensated for by the
  780. large decrease in production cost.
  781. As a consequence of our work and similar work by other researchers [1,3,4],
  782. we expect integrated compiler building kits to become increasingly popular
  783. in the near future.
  784. .PP
The tool kit is now available for various computers running the
  786. .UX
  787. operating system.
  788. For information, contact the authors.
  789. .NH 1
  790. References
  791. .LP
  792. .nr r 0 1
  793. .in +4
  794. .ti -4
  795. \fB~\n+r.\fR Graham, S.L.
  796. Table-Driven Code Generation.
  797. .I "Computer~13" ,
  798. 8 (August 1980), 25-34.
  799. .PP
  800. A discussion of systematic ways to do code generation,
  801. in particular, the idea of having a table with templates that match parts of
  802. the parse tree and convert them into machine instructions.
  803. .sp 2
  804. .ti -4
  805. \fB~\n+r.\fR Haddon, B.K., and Waite, W.M.
  806. Experience with the Universal Intermediate Language Janus.
  807. .I "Software Practice & Experience~8" ,
  808. 5 (Sept.-Oct. 1978), 601-616.
  809. .PP
  810. An intermediate language for use with ALGOL 68, Pascal, etc. is described.
  811. The paper discusses some problems encountered and how they were dealt with.
  812. .sp 2
  813. .ti -4
  814. \fB~\n+r.\fR Johnson, S.C.
  815. A Portable Compiler: Theory and Practice.
  816. .I "Ann. ACM Symp. Prin. Prog. Lang." ,
  817. Jan. 1978.
  818. .PP
  819. A cogent discussion of the portable C compiler.
  820. Particularly interesting are the author's thoughts on the value of
  821. computer science theory.
  822. .sp 2
  823. .ti -4
\fB~\n+r.\fR Leverett, B.W., Cattell, R.G.G., Hobbs, S.O., Newcomer, J.M.,
  825. Reiner, A.H., Schatz, B.R., and Wulf, W.A.
  826. An Overview of the Production-Quality Compiler-Compiler Project.
  827. .I Computer~13 ,
  828. 8 (August 1980), 38-49.
  829. .PP
  830. PQCC is a system for building compilers similar in concept but differing in
  831. details from the Amsterdam Compiler Kit.
  832. The paper describes the intermediate representation used and the code generation
  833. strategy.
  834. .sp 2
  835. .ti -4
  836. \fB~\n+r.\fR Lowry, E.S., and Medlock, C.W.
  837. Object Code Optimization.
.I "Commun.~ACM~12" ,
  839. (Jan. 1969), 13-22.
  840. .PP
  841. A classic paper on global object code optimization.
  842. It covers data flow analysis, common subexpressions, code motion, register
  843. allocation and other techniques.
  844. .sp 2
  845. .ti -4
\fB~\n+r.\fR Nori, K.V., Ammann, U., Jensen, K., and Nageli, H.
  847. The Pascal P Compiler Implementation Notes.
  848. Eidgen. Tech. Hochschule, Zurich, 1975.
  849. .PP
  850. A description of the original P-code machine, used to transport the Pascal-P
  851. compiler to new computers.
  852. .sp 2
  853. .ti -4
\fB~\n+r.\fR Steel, T.B., Jr. UNCOL: the Myth and the Fact. In
  855. .I "Ann. Rev. Auto. Prog."
  856. Goodman, R. (ed.), vol 2., (1960), 325-344.
  857. .PP
  858. An introduction to the UNCOL idea by its originator.
  859. .sp 2
  860. .ti -4
  861. \fB~\n+r.\fR Steel, T.B., Jr.
  862. A First Version of UNCOL.
  863. .I "Proc. Western Joint Comp. Conf." ,
  864. (1961), 371-377.
  865. .PP
  866. The first detailed proposal for an UNCOL. By current standards it is a
  867. primitive language, but it is interesting for its historical perspective.
  868. .sp 2
  869. .ti -4
  870. \fB~\n+r.\fR Tanenbaum, A.S., van Staveren, H., and Stevenson, J.W.
  871. Using Peephole Optimization on Intermediate Code.
  872. .I "ACM Trans. Prog. Lang. and Sys. 3" ,
1 (Jan. 1982), 21-36.
  874. .PP
  875. A detailed description of a table-driven peephole optimizer.
  876. The driving table provides a list of patterns to match as well as the
  877. replacement text to use for each successful match.
  878. .sp 2
  879. .ti -4
\fB~\n+r.\fR Tanenbaum, A.S., Stevenson, J.W., Keizer, E.G., and van Staveren, H.
  881. Description of an Experimental Machine Architecture for use with Block
  882. Structured Languages.
  883. Informatica Rapport 81, Vrije Universiteit, Amsterdam, 1983.
  884. .PP
  885. The defining document for EM.
  886. .sp 2
  887. .ti -4
\fB~\n+r.\fR Tanenbaum, A.S.
  889. Implications of Structured Programming for Machine Architecture.
  890. .I "Comm. ACM~21" ,
  891. 3 (March 1978), 237-246.
  892. .PP
  893. The background and motivation for the design of EM.
  894. This early version emphasized the idea of interpreting the intermediate
  895. code (then called EM-1) rather than compiling it.