
.TL
A prototype code expander
.NH
Introduction
.PP
A program to be compiled with ACK is first fed into the preprocessor.
The output of the preprocessor goes into the appropriate front end,
whose job it is to produce EM. The EM code generated is
fed into the peephole optimizer, which scans it with a window of a few
instructions, replacing certain inefficient code sequences by better
ones. The peephole optimizer is followed by a back end, which produces
good assembly code. The assembly code goes into the assembler, and the object
code then goes into the loader/linker, the final component in the pipeline.
.PP
For various applications this scheme is too slow, for example when testing
programs. In that case the program has to be translated quickly, and the
run time of the object code may be slower. A solution is to build a code
expander (\fBce\fR) which translates EM code directly to object code. Of course this
has to
be done automatically by a code expander generator, but to get some feeling
for the problem we started out by building prototypes.
We built two types of ce's: one which translated EM to assembly and one
which translated EM to object code.
.NH
EM to assembly
.PP
We made one for the 8086 and one for the vax4. These ce's are instances of the
EM_CODE(3L) interface and produce for a single EM instruction a set
of assembly instructions which are semantically equivalent.
In the 8086 ce we implemented push/pop optimization.
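.PP
To illustrate the idea behind the push/pop optimization, the sketch below
delays each "push" and, when the very next request is a "pop", replaces the
pair by a register move (or by nothing when both use the same register).
This is only a minimal sketch with hypothetical names; it is not the actual
EM_CODE(3L) interface.
.DS
/* hypothetical push/pop optimizer for an 8086-style ce */
#include <stdio.h>
#include <string.h>

static const char *pending;     /* register of a delayed push, or NULL */

void push(const char *reg)
{
    if (pending)
        printf("\\tpush %s\\n", pending);   /* flush the older push */
    pending = reg;
}

void pop(const char *reg)
{
    if (pending) {              /* push x; pop y  ==>  mov y,x */
        if (strcmp(pending, reg) != 0)
            printf("\\tmov %s,%s\\n", reg, pending);
        pending = NULL;
    } else {
        printf("\\tpop %s\\n", reg);
    }
}

/* a real ce would also flush "pending" before labels and other instructions */
int main(void)
{
    push("ax");
    pop("bx");                  /* emitted as: mov bx,ax */
    push("cx");
    pop("cx");                  /* disappears completely */
    return 0;
}
.DE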
.NH
EM to object code
.PP
Instead of producing assembly code we tried to produce vax4 object code.
During execution, ce builds in core a machine-independent
object file (NEW A.OUT(5L)), and just before dumping the tables this
object file is converted to a Berkeley 4.2BSD a.out file. We built two versions:
one with static memory allocation and one with dynamic memory allocation.
If the first one runs out of memory it gives an error message and stops;
the second one allocates more memory and proceeds with producing
object code.
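.PP
The conversion step can be pictured with the sketch below. The struct
mirrors the traditional 4.2BSD a.out header; the segment buffers passed in
(text, data, relocation, symbols, strings) are hypothetical and stand for
whatever ce keeps in core, so this is an illustration rather than the actual
code of ce.
.DS
#include <stdio.h>

struct exec {                   /* 4.2BSD a.out header layout */
    long a_magic;               /* magic number, e.g. 0407 (OMAGIC) */
    unsigned long a_text;       /* size of text segment */
    unsigned long a_data;       /* size of initialized data */
    unsigned long a_bss;        /* size of uninitialized data */
    unsigned long a_syms;       /* size of symbol table */
    unsigned long a_entry;      /* entry point */
    unsigned long a_trsize;     /* size of text relocation */
    unsigned long a_drsize;     /* size of data relocation */
};

int dump_aout(FILE *fp,
    char *text, long tsize, char *data, long dsize,
    char *trelo, long trsize, char *drelo, long drsize,
    char *syms, long ssize, char *strings, long strsize)
{
    struct exec hdr;

    hdr.a_magic = 0407;         /* OMAGIC */
    hdr.a_text = tsize;
    hdr.a_data = dsize;
    hdr.a_bss = 0;
    hdr.a_syms = ssize;
    hdr.a_entry = 0;
    hdr.a_trsize = trsize;
    hdr.a_drsize = drsize;

    /* header, text, data, relocation, symbols, strings: in that order */
    fwrite(&hdr, sizeof hdr, 1, fp);
    fwrite(text, 1, tsize, fp);
    fwrite(data, 1, dsize, fp);
    fwrite(trelo, 1, trsize, fp);
    fwrite(drelo, 1, drsize, fp);
    fwrite(syms, 1, ssize, fp);
    fwrite(strings, 1, strsize, fp);
    return ferror(fp) ? -1 : 0;
}
.DE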
.PP
The C front end calls the EM_CODE interface, so after linking the front end
and the ce we have the whole pipeline in one program, saving a lot of I/O.
It is interesting to compare this C compiler (called fcemcom) with "cc -c".
fcemcom1 (the dynamic variant of fcemcom) is tuned in such a way that
alloc() won't be called.
.NH 2
Compile time
.PP
fac.c is a small program that computes n! (see below); foo.c is a small program
that loops a lot.
.TS
center, box, tab(:);
c | c | c | c | c | c
c | c | n | n | n | n.
compiler : program : real : user : sys : object size (bytes)
=
fcemcom : sort.c : 31.0 : 17.5 : 1.8 : 23824
fcemcom1 : : 59.0 : 21.2 : 3.3 :
cc -c : : 50.0 : 38.0 : 3.5 : 6788
_
fcemcom : ed.c : 37.0 : 23.6 : 2.3 : 41744
fcemcom1 : : 1.16.0 : 28.3 : 4.6 :
cc -c : : 1.19.0 : 54.8 : 4.3 : 11108
_
fcemcom : cp.c : 4.0 : 2.4 : 0.8 : 4652
fcemcom1 : : 9.0 : 3.0 : 1.0 :
cc -c : : 8.0 : 5.2 : 1.6 : 1048
_
fcemcom : uniq.c : 5.0 : 2.5 : 0.8 : 5568
fcemcom1 : : 9.0 : 2.9 : 0.8 :
cc -c : : 13.0 : 5.4 : 2.0 : 3008
_
fcemcom : btlgrep.c : 24.0 : 7.2 : 1.4 : 12968
fcemcom1 : : 23.0 : 8.1 : 1.2 :
cc -c : : 1.20.0 : 15.3 : 3.8 : 2392
_
fcemcom : fac.c : 1.0 : 0.1 : 0.5 : 216
fcemcom1 : : 2.0 : 0.2 : 0.5 :
cc -c : : 3.0 : 0.7 : 1.3 : 92
_
fcemcom : foo.c : 4.0 : 0.2 : 0.5 : 272
fcemcom1 : : 11.0 : 0.3 : 0.5 :
cc -c : : 7.0 : 0.8 : 1.6 : 108
.TE
.NH 2
Run time
.LP
Is the run time very bad?
.TS
tab(:), box, center;
c | c | c | c | c
c | c | n | n | n.
compiler : program : real : user : system
=
fcem : sort.c : 22.0 : 17.5 : 1.5
cc : : 5.0 : 2.4 : 1.1
_
fcem : btlgrep.c : 1.58.0 : 27.2 : 4.2
cc : : 12.0 : 3.6 : 1.1
_
fcem : foo.c : 1.0 : 0.7 : 0.1
cc : : 1.0 : 0.4 : 0.1
_
fcem : uniq.c : 2.0 : 0.5 : 0.3
cc : : 1.0 : 0.1 : 0.2
.TE
.NH 2
Quality of the object code
.LP
The run time is very bad, so it is interesting to have a look at the code
produced by fcemcom and by cc -c. I took a program which recursively
computes n!.
.DS
long fac();

main()
{
    int n;

    scanf( "%D", &n);
    printf( "fac is %D\\n", fac( n));
}

long fac( n)
int n;
{
    if ( n == 0)
        return( 1);
    else
        return( n * fac( n-1));
}
.DE
.br
.br
.br
.br
.LP
"cc -c fac.c" produces:
.DS
fac:    tstl    4(ap)
        bnequ   7f
        movl    $1, r0
        ret
7f:     subl3   $1, 4(ap), r0
        pushl   r0
        call    $1, fac
        movl    r0, -4(fp)
        mull3   -4(fp), 4(ap), r0
        ret
.DE
.br
.br
.LP
"fcem fac.c fac.o" produces:
.DS
_fac:   0
42:     jmp     be
48:     pushl   4(ap)
4e:     pushl   $0
54:     subl2   (sp)+,(sp)
57:     tstl    (sp)+
59:     bnequ   61
5b:     jmp     67
61:     jmp     79
67:     pushl   $1
6d:     jmp     ba
73:     jmp     b9
79:     pushl   4(ap)
7f:     pushl   $1
85:     subl2   (sp)+,(sp)
88:     calls   $0,_fac
8f:     addl2   $4,sp
96:     pushl   r0
98:     pushl   4(ap)
9e:     pushl   $4
a4:     pushl   $4
aa:     jsb     .cii
b0:     mull2   (sp)+,(sp)
b3:     jmp     ba
b9:     ret
ba:     movl    (sp)+,r0
bd:     ret
be:     jmp     48
.DE
.NH 1
Conclusions
.PP
The table below compares "fcemcom" with "cc -c"; each entry is the ratio
of the fcemcom figure to the corresponding "cc -c" figure.
.LP
.TS
center, box, tab(:);
c | c s | c | c s
^ | c s | ^ | c s
^ | c | c | ^ | c | c
l | n | n | n | n | n.
program : compile time : object size : runtime
:_::_
: user : sys :: user : sys
=
sort.c : 0.47 : 0.5 : 3.5 : 7.3 : 1.4
_
ed.c : 0.46 : 0.5 : 3.8 : :
_
cp.c : 0.46 : 0.5 : 4.4 : :
_
uniq.c : 0.46 : 0.4 : 1.8 : :
_
btlgrep.c : 0.47 : 0.3 : 5.4 : 7.5 : 3.8
_
fac.c : 0.14 : 0.4 : 2.3 : 1.8 : 1.0
_
foo.c : 0.25 : 0.3 : 2.5 : 5.0 : 1.5
.TE
.PP
The results for fcemcom1 are almost identical; the only thing that changes
is that fcemcom1 is 1.2 times slower than fcemcom in compile time. This is due
to a different data structure. In the static version we use huge arrays for
the text and
data segments, the relocation information, the symbol table and the string area.
In the dynamic version we use linked lists, which makes it expensive to get
and to put a byte at an arbitrary memory location. So it is probably better
to use realloc(), because in most cases there will be enough memory.
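.PP
A minimal sketch of that alternative, with hypothetical names: keep each
segment in a single buffer that starts out empty and grow it with realloc()
only when a byte is stored beyond its current end, so random access stays
cheap.
.DS
#include <stdlib.h>
#include <string.h>

struct segment {
    char *base;                 /* start of the in-core segment ({ NULL, 0 } initially) */
    unsigned long size;         /* bytes currently allocated */
};

/* store a byte at an arbitrary offset, growing the buffer if needed */
int put_byte(struct segment *seg, unsigned long off, int byte)
{
    if (off >= seg->size) {
        unsigned long nsize = seg->size ? seg->size : 1024;
        char *nbase;

        while (off >= nsize)
            nsize *= 2;         /* grow geometrically */
        nbase = realloc(seg->base, nsize);
        if (nbase == NULL)
            return -1;          /* out of memory */
        memset(nbase + seg->size, 0, nsize - seg->size);
        seg->base = nbase;
        seg->size = nsize;
    }
    seg->base[off] = byte;
    return 0;
}
.DE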
.PP
The quality of the object code is very bad. The reason is that the front end
generates bad code and expects the peephole optimizer to improve it.
This is also one of the main reasons that the run time is very bad
(e.g. the expensive "cii" with arguments 4 and 4 could be deleted).
So it seems a good
idea to put a new peephole optimizer between the front end and the ce.
.PP
Using the peephole optimizer, the ce would produce:
.DS
_fac:   0
        pushl   4(ap)
        tstl    (sp)+
        beqlu   1f
        jmp     3f
1:      pushl   $1
        jmp     2f
3:      pushl   4(ap)
        decl    (sp)
        calls   $0,_fac
        addl2   $4,sp
        pushl   r0
        pushl   4(ap)
        mull2   (sp)+,(sp)
        movl    (sp)+,r0
2:      ret
.DE
.PP
Bruce McKenzy already implemented it and made some improvements in the
source code of the ce. The compile time is two to two and a half times better
and the
size of the object code is two to three times bigger (compared with "cc -c").
Still we could do better.
.PP
Using peephole and push/pop optimization, the ce could produce:
.DS
_fac:   0
        tstl    4(ap)
        beqlu   1f
        jmp     2f
1:      pushl   $1
        jmp     3f
2:      decl    4(ap)
        calls   $0,_fac
        addl2   $4,sp
        mull3   4(ap), r0, -(sp)
        movl    (sp)+, r0
3:      ret
.DE
.PP
prof doesn't cooperate, so no profile information is available.
.PP