// -*- mode:doc; -*-
// vim: set syntax=asciidoc:

[[configure]]
== Buildroot configuration

All the configuration options in +make *config+ have a help text
providing details about the option. However, a number of topics
require additional details that cannot easily be covered in the help
text and are therefore covered in the following sections.
=== Cross-compilation toolchain

A compilation toolchain is the set of tools that allows you to compile
code for your system. It consists of a compiler (in our case, +gcc+),
binary utils like assembler and linker (in our case, +binutils+) and a
C standard library (for example
http://www.gnu.org/software/libc/libc.html[GNU Libc] or
http://www.uclibc.org/[uClibc]).
The system installed on your development station certainly already has
a compilation toolchain that you can use to compile an application
that runs on your system. If you're using a PC, your compilation
toolchain runs on an x86 processor and generates code for an x86
processor. Under most Linux systems, the compilation toolchain uses
the GNU libc (glibc) as the C standard library. This compilation
toolchain is called the "host compilation toolchain". The machine on
which it is running, and on which you're working, is called the "host
system" footnote:[This terminology differs from what is used by GNU
configure, where the host is the machine on which the application will
run (which is usually the same as target)].

The compilation toolchain is provided by your distribution, and
Buildroot has nothing to do with it (other than using it to build a
cross-compilation toolchain and other tools that are run on the
development host).
As said above, the compilation toolchain that comes with your system
runs on and generates code for the processor in your host system. As
your embedded system has a different processor, you need a
cross-compilation toolchain - a compilation toolchain that runs on
your _host system_ but generates code for your _target system_ (and
target processor). For example, if your host system uses x86 and your
target system uses ARM, the regular compilation toolchain on your host
runs on x86 and generates code for x86, while the cross-compilation
toolchain runs on x86 and generates code for ARM.
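
To make the distinction concrete, here is a hypothetical session on an
x86 host that has an ARM cross-compilation toolchain installed (the
+arm-linux-gnueabihf-+ prefix is only an example; the actual prefix
depends on the toolchain):

------------------------------------------------------------------------
$ gcc -o hello hello.c                          # host compiler: x86 code
$ arm-linux-gnueabihf-gcc -o hello-arm hello.c  # cross compiler: ARM code
$ file hello hello-arm                          # reports one x86 and one ARM binary
------------------------------------------------------------------------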
Buildroot provides two solutions for the cross-compilation toolchain:

* The *internal toolchain backend*, called +Buildroot toolchain+ in
  the configuration interface.

* The *external toolchain backend*, called +External toolchain+ in
  the configuration interface.

The choice between these two solutions is done using the +Toolchain
Type+ option in the +Toolchain+ menu. Once one solution has been
chosen, a number of configuration options appear; they are detailed
in the following sections.
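
For reference, this choice ends up as a single option in the saved
Buildroot +.config+ file. A minimal sketch, assuming the usual symbol
names (check the +Toolchain+ menu of your Buildroot version):

------------------------------------------------------------------------
# Exactly one of these is set in the Buildroot .config:
BR2_TOOLCHAIN_BUILDROOT=y      # internal toolchain backend
# BR2_TOOLCHAIN_EXTERNAL=y     # external toolchain backend
------------------------------------------------------------------------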
[[internal-toolchain-backend]]
==== Internal toolchain backend

The _internal toolchain backend_ is the backend where Buildroot builds
by itself a cross-compilation toolchain, before building the userspace
applications and libraries for your target embedded system.

This backend supports several C libraries:
http://www.uclibc.org[uClibc],
http://www.gnu.org/software/libc/libc.html[glibc] and
http://www.eglibc.org[eglibc].

Once you have selected this backend, a number of options appear. The
most important ones allow you to:
* Change the version of the Linux kernel headers used to build the
  toolchain. This item deserves a few explanations. In the process of
  building a cross-compilation toolchain, the C library is being
  built. This library provides the interface between userspace
  applications and the Linux kernel. In order to know how to "talk"
  to the Linux kernel, the C library needs to have access to the
  _Linux kernel headers_ (i.e. the +.h+ files from the kernel), which
  define the interface between userspace and the kernel (system
  calls, data structures, etc.). Since this interface is backward
  compatible, the version of the Linux kernel headers used to build
  your toolchain does not need to match _exactly_ the version of the
  Linux kernel you intend to run on your embedded system. It only
  needs to be equal to or older than the version of the Linux
  kernel you intend to run. If you use kernel headers that are more
  recent than the Linux kernel you run on your embedded system, then
  the C library might be using interfaces that are not provided by
  your Linux kernel.

* Change the version of the GCC compiler, binutils and the C library.

* Select a number of toolchain options (uClibc only): whether the
  toolchain should have largefile support (i.e. support for files
  larger than 2 GB on 32-bit systems), IPv6 support, RPC support
  (used mainly for NFS), wide-char support, locale support (for
  internationalization), C++ support or thread support. Depending on
  which options you choose, the number of userspace applications and
  libraries visible in Buildroot menus will change: many applications
  and libraries require certain toolchain options to be enabled. Most
  packages show a comment indicating which toolchain options they
  require. If needed, you can further refine the uClibc configuration
  by running +make uclibc-menuconfig+, as shown below. Note however
  that all packages in Buildroot are tested against the default uClibc
  configuration bundled in Buildroot: if you deviate from this
  configuration by removing features from uClibc, some packages may no
  longer build.
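
The uClibc refinement mentioned above uses the same Kconfig-style
interface as Buildroot itself. A minimal sketch of the workflow (only
the make target names come from the text above; the rest is
illustrative):

------------------------------------------------------------------------
$ make menuconfig          # select the internal toolchain backend and its options
$ make uclibc-menuconfig   # optionally fine-tune the uClibc configuration
------------------------------------------------------------------------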
It is worth noting that whenever one of those options is modified,
the entire toolchain and system must be rebuilt. See
xref:full-rebuild[].
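
In practice, such a full rebuild boils down to the following commands
(a sketch; see xref:full-rebuild[] for the details):

------------------------------------------------------------------------
$ make clean    # remove all build output, including the toolchain
$ make          # rebuild everything with the new configuration
------------------------------------------------------------------------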
Advantages of this backend:

* Well integrated with Buildroot

* Fast, only builds what's necessary

Drawbacks of this backend:

* Rebuilding the toolchain is needed when doing +make clean+, which
  takes time. If you're trying to reduce your build time, consider
  using the _External toolchain backend_.
[[external-toolchain-backend]]
==== External toolchain backend

The _external toolchain backend_ allows you to use existing pre-built
cross-compilation toolchains. Buildroot knows about a number of
well-known cross-compilation toolchains (from
http://www.linaro.org[Linaro] for ARM,
http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/[Sourcery
CodeBench] for ARM, x86, x86-64, PowerPC, MIPS and SuperH,
https://blackfin.uclinux.org/gf/project/toolchain[Blackfin toolchains
from Analog Devices], etc.) and is capable of downloading them
automatically, or it can be pointed to a custom toolchain, either
available for download or installed locally.

You then have three solutions to use an external toolchain:
* Use a predefined external toolchain profile, and let Buildroot
  download, extract and install the toolchain. Buildroot already knows
  about a few CodeSourcery, Linaro, Blackfin and Xilinx toolchains.
  Just select the toolchain profile in +Toolchain+ from the
  available ones. This is definitely the easiest solution.

* Use a predefined external toolchain profile, but instead of having
  Buildroot download and extract the toolchain, you can tell Buildroot
  where your toolchain is already installed on your system. Just
  select the toolchain profile in +Toolchain+ from the available
  ones, unselect +Download toolchain automatically+, and fill the
  +Toolchain path+ text entry with the path to your cross-compiling
  toolchain.

* Use a completely custom external toolchain. This is particularly
  useful for toolchains generated using crosstool-NG or with Buildroot
  itself. To do this, select the +Custom toolchain+ solution in the
  +Toolchain+ list. You need to fill in the +Toolchain path+, +Toolchain
  prefix+ and +External toolchain C library+ options. Then, you have
  to tell Buildroot what your external toolchain supports. If your
  external toolchain uses the 'glibc' library, you only have to tell
  whether your toolchain supports C\++ or not and whether it has
  built-in RPC support. If your external toolchain uses the 'uClibc'
  library, then you have to tell Buildroot if it supports largefile,
  IPv6, RPC, wide-char, locale, program invocation, threads and
  C++. At the beginning of the execution, Buildroot will tell you if
  the selected options do not match the toolchain configuration. A
  sample configuration is sketched after this list.
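
As an illustration of the third solution, here is a hypothetical
configuration for a crosstool-NG toolchain installed locally (the
path and prefix are made up; the option labels are those named
above):

------------------------------------------------------------------------
Toolchain
  -> Toolchain                      (Custom toolchain)
  -> Toolchain path                 (/opt/arm-ctng-toolchain)
  -> Toolchain prefix               (arm-unknown-linux-gnueabi)
  -> External toolchain C library   (glibc)
------------------------------------------------------------------------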
Our external toolchain support has been tested with toolchains from
CodeSourcery and Linaro, toolchains generated by
http://crosstool-ng.org[crosstool-NG], and toolchains generated by
Buildroot itself. In general, all toolchains that support the
'sysroot' feature should work. If not, do not hesitate to contact the
developers.
We do not support toolchains or SDKs generated by OpenEmbedded or
Yocto, because these toolchains are not pure toolchains (i.e. just the
compiler, binutils, the C and C++ libraries). Instead these toolchains
come with a very large set of pre-compiled libraries and
programs. Therefore, Buildroot cannot import the 'sysroot' of the
toolchain, as it would contain hundreds of megabytes of pre-compiled
libraries that are normally built by Buildroot.

We also do not support using the distribution toolchain (i.e. the
gcc/binutils/C library installed by your distribution) as the
toolchain to build software for the target. This is because your
distribution toolchain is not a "pure" toolchain (i.e. only with the
C/C++ library), so we cannot import it properly into the Buildroot
build environment. So even if you are building a system for an x86 or
x86_64 target, you have to generate a cross-compilation toolchain with
Buildroot or crosstool-NG.
If you want to generate a custom toolchain for your project, that can
be used as an external toolchain in Buildroot, our recommendation is
definitely to build it with http://crosstool-ng.org[crosstool-NG]. We
recommend building the toolchain separately from Buildroot, and then
_importing_ it in Buildroot using the external toolchain backend.
Advantages of this backend:

* Allows you to use well-known and well-tested cross-compilation
  toolchains.

* Avoids the build time of the cross-compilation toolchain, which is
  often very significant in the overall build time of an embedded
  Linux system.

* Not limited to uClibc: glibc and eglibc toolchains are supported.

Drawbacks of this backend:

* If your pre-built external toolchain has a bug, it may be hard to
  get a fix from the toolchain vendor, unless you built your external
  toolchain yourself using crosstool-NG.
===== External toolchain wrapper

When using an external toolchain, Buildroot generates a wrapper
program that transparently passes the appropriate options (according
to the configuration) to the external toolchain programs. In case you
need to debug this wrapper to check exactly what arguments are passed,
you can set the environment variable +BR2_DEBUG_WRAPPER+ to one of:

* +0+, empty or not set: no debug

* +1+: trace all arguments on a single line

* +2+: trace one argument per line
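
For example, to trace one argument per line while building (a minimal
sketch):

------------------------------------------------------------------------
$ BR2_DEBUG_WRAPPER=2 make
------------------------------------------------------------------------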
=== /dev management

On a Linux system, the +/dev+ directory contains special files, called
_device files_, that allow userspace applications to access the
hardware devices managed by the Linux kernel. Without these _device
files_, your userspace applications would not be able to use the
hardware devices, even if they are properly recognized by the Linux
kernel.

Under +System configuration+, +/dev management+, Buildroot offers four
different solutions to handle the +/dev+ directory:
* The first solution is *Static using device table*. This is the old
  classical way of handling device files in Linux. With this method,
  the device files are persistently stored in the root filesystem
  (i.e. they persist across reboots), and there is nothing that will
  automatically create and remove those device files when hardware
  devices are added or removed from the system. Buildroot therefore
  creates a standard set of device files using a _device table_, the
  default one being stored in +system/device_table_dev.txt+ in the
  Buildroot source code. This file is processed when Buildroot
  generates the final root filesystem image, and the _device files_
  are therefore not visible in the +output/target+ directory. The
  +BR2_ROOTFS_STATIC_DEVICE_TABLE+ option allows you to change the
  default device table used by Buildroot, or to add an additional
  device table, so that additional _device files_ are created by
  Buildroot during the build. So, if you use this method, and a
  _device file_ is missing in your system, you can for example create
  a +board/<yourcompany>/<yourproject>/device_table_dev.txt+ file
  that contains the description of your additional _device files_,
  and then you can set +BR2_ROOTFS_STATIC_DEVICE_TABLE+ to
  +system/device_table_dev.txt
  board/<yourcompany>/<yourproject>/device_table_dev.txt+. For more
  details about the format of the device table file, see
  xref:makedev-syntax[]; a sample entry is sketched after this list.

* The second solution is *Dynamic using devtmpfs only*. _devtmpfs_ is
  a virtual filesystem inside the Linux kernel that was introduced in
  kernel 2.6.32 (if you use an older kernel, it is not
  possible to use this option). When mounted in +/dev+, this virtual
  filesystem will automatically make _device files_ appear and
  disappear as hardware devices are added and removed from the
  system. This filesystem is not persistent across reboots: it is
  filled dynamically by the kernel. Using _devtmpfs_ requires the
  following kernel configuration options to be enabled:
  +CONFIG_DEVTMPFS+ and +CONFIG_DEVTMPFS_MOUNT+. When Buildroot is in
  charge of building the Linux kernel for your embedded device, it
  makes sure that those two options are enabled. However, if you
  build your Linux kernel outside of Buildroot, then it is your
  responsibility to enable those two options (if you fail to do so,
  your Buildroot system will not boot).

* The third solution is *Dynamic using mdev*. This method also relies
  on the _devtmpfs_ virtual filesystem detailed above (so the
  requirement to have +CONFIG_DEVTMPFS+ and +CONFIG_DEVTMPFS_MOUNT+
  enabled in the kernel configuration still applies), but adds the
  +mdev+ userspace utility on top of it. +mdev+ is a program, part of
  BusyBox, that the kernel will call every time a device is added or
  removed. Thanks to the +/etc/mdev.conf+ configuration file, +mdev+
  can be configured to, for example, set specific permissions or
  ownership on a device file, call a script or application whenever a
  device appears or disappears, etc. Basically, it allows _userspace_
  to react to device addition and removal events. +mdev+ can for
  example be used to automatically load kernel modules when devices
  appear on the system. +mdev+ is also important if you have devices
  that require firmware, as it will be responsible for pushing the
  firmware contents to the kernel. +mdev+ is a lightweight
  implementation (with fewer features) of +udev+. For more details
  about +mdev+ and the syntax of its configuration file (a sample
  rule is sketched after this list), see
  http://git.busybox.net/busybox/tree/docs/mdev.txt.

* The fourth solution is *Dynamic using eudev*. This method also
  relies on the _devtmpfs_ virtual filesystem detailed above, but
  adds the +eudev+ userspace daemon on top of it. +eudev+ is a daemon
  that runs in the background, and gets called by the kernel when a
  device gets added or removed from the system. It is a more
  heavyweight solution than +mdev+, but provides higher flexibility.
  +eudev+ is a standalone version of +udev+, the original userspace
  daemon used in most desktop Linux distributions, which is now part
  of +systemd+. For more details, see http://en.wikipedia.org/wiki/Udev.
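
As an illustration of the static method, a device table entry for a
hypothetical serial port could look like this (the columns are name,
type, mode, uid, gid, major, minor, start, increment and count; see
xref:makedev-syntax[] for the exact format):

------------------------------------------------------------------------
# name       type mode uid gid major minor start inc count
/dev/ttyS0   c    660  0   0   4     64    -     -   -
------------------------------------------------------------------------

Similarly, an illustrative +/etc/mdev.conf+ rule for the *Dynamic
using mdev* method could set the ownership and permissions of serial
devices (refer to the mdev documentation linked above for the full
syntax):

------------------------------------------------------------------------
# device regex   user:group   mode
ttyS[0-9]*       root:root    660
------------------------------------------------------------------------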
The Buildroot developers' recommendation is to start with the *Dynamic
using devtmpfs only* solution, until you need userspace to be
notified when devices are added/removed, or need firmware files to be
loaded, in which case *Dynamic using mdev* is usually a good solution.

Note that if +systemd+ is chosen as init system, +/dev+ management will
be performed by the +udev+ program provided by +systemd+.
=== init system

The _init_ program is the first userspace program started by the
kernel (it carries the PID number 1), and is responsible for starting
the userspace services and programs (for example: web server,
graphical applications, other network servers, etc.).

Buildroot allows you to use three different types of init systems,
which can be chosen from +System configuration+, +Init system+:
* The first solution is *BusyBox*. Amongst many programs, BusyBox has
  an implementation of a basic +init+ program, which is sufficient
  for most embedded systems. Enabling +BR2_INIT_BUSYBOX+ will
  ensure that BusyBox builds and installs its +init+ program. This is
  the default solution in Buildroot. The BusyBox +init+ program will
  read the +/etc/inittab+ file at boot to know what to do. The syntax
  of this file can be found in
  http://git.busybox.net/busybox/tree/examples/inittab (note that
  BusyBox +inittab+ syntax is special: do not use a random +inittab+
  documentation from the Internet to learn about BusyBox
  +inittab+). The default +inittab+ in Buildroot is stored in
  +system/skeleton/etc/inittab+, and a condensed example is shown
  after this list. Apart from mounting a few important
  filesystems, the main job the default inittab does is to start the
  +/etc/init.d/rcS+ shell script, and start a +getty+ program (which
  provides a login prompt).

* The second solution is *SystemV*. This solution uses the old
  traditional _sysvinit_ program, packaged in Buildroot in
  +package/sysvinit+. This was the solution used in most desktop
  Linux distributions, until they switched to more recent
  alternatives such as Upstart or systemd. +sysvinit+ also works with
  an +inittab+ file (which has a slightly different syntax than the
  one from BusyBox). The default +inittab+ installed with this init
  solution is located in +package/sysvinit/inittab+.

* The third solution is *systemd*. +systemd+ is the new generation
  init system for Linux. It does far more than traditional _init_
  programs: it has aggressive parallelization capabilities, uses
  socket and D-Bus activation for starting services, offers on-demand
  starting of daemons, keeps track of processes using Linux control
  groups, supports snapshotting and restoring of the system state,
  etc. +systemd+ will be useful on relatively complex embedded
  systems, for example the ones requiring D-Bus and services
  communicating with each other. It is worth noting that +systemd+
  brings a fairly large number of big dependencies: +dbus+, +udev+
  and more. For more details about +systemd+, see
  http://www.freedesktop.org/wiki/Software/systemd.
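
To give an idea of the BusyBox +inittab+ format mentioned in the
first item above, here is a condensed, hypothetical example in the
spirit of Buildroot's default +system/skeleton/etc/inittab+ (each
line is +id::action:process+; the serial console device is an
assumption):

------------------------------------------------------------------------
# id::action:process
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/etc/init.d/rcS
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
::shutdown:/bin/umount -a -r
------------------------------------------------------------------------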
The solution recommended by Buildroot developers is to use *BusyBox
init* as it is sufficient for most embedded
systems. *systemd* can be used for more complex situations.
== Configuration of other components

include::customize-busybox-config.txt[]

include::customize-uclibc-config.txt[]

include::customize-kernel-config.txt[]