// -*- mode:doc; -*-
// vim: set syntax=asciidoc:

[[configure]]
== Buildroot configuration

All the configuration options in +make *config+ have a help text
providing details about the option.

The +make *config+ commands also offer a search tool. Read the help
message in the different frontend menus to know how to use it:

* in _menuconfig_, the search tool is called by pressing +/+;
* in _xconfig_, the search tool is called by pressing +Ctrl+ + +f+.

The result of the search shows the help message of the matching items.
In _menuconfig_, numbers in the left column provide a shortcut to the
corresponding entry. Just type this number to directly jump to the
entry, or to the containing menu in case the entry is not selectable due
to a missing dependency.

Although the menu structure and the help text of the entries should be
sufficiently self-explanatory, a number of topics require additional
explanation that cannot easily be covered in the help text and are
therefore covered in the following sections.

=== Cross-compilation toolchain

A compilation toolchain is the set of tools that allows you to compile
code for your system. It consists of a compiler (in our case, +gcc+),
binary utils like assembler and linker (in our case, +binutils+) and a
C standard library (for example
http://www.gnu.org/software/libc/libc.html[GNU Libc],
http://www.uclibc-ng.org/[uClibc-ng]).

The system installed on your development station certainly already has
a compilation toolchain that you can use to compile an application
that runs on your system. If you're using a PC, your compilation
toolchain runs on an x86 processor and generates code for an x86
processor. Under most Linux systems, the compilation toolchain uses
the GNU libc (glibc) as the C standard library. This compilation
toolchain is called the "host compilation toolchain". The machine on
which it is running, and on which you're working, is called the "host
system" footnote:[This terminology differs from what is used by GNU
configure, where the host is the machine on which the application will
run (which is usually the same as target)].

The compilation toolchain is provided by your distribution, and
Buildroot has nothing to do with it (other than using it to build a
cross-compilation toolchain and other tools that are run on the
development host).

As said above, the compilation toolchain that comes with your system
runs on and generates code for the processor in your host system. As
your embedded system has a different processor, you need a
cross-compilation toolchain - a compilation toolchain that runs on
your _host system_ but generates code for your _target system_ (and
target processor). For example, if your host system uses x86 and your
target system uses ARM, the regular compilation toolchain on your host
runs on x86 and generates code for x86, while the cross-compilation
toolchain runs on x86 and generates code for ARM.

Buildroot provides two solutions for the cross-compilation toolchain:

* The *internal toolchain backend*, called +Buildroot toolchain+ in
  the configuration interface.

* The *external toolchain backend*, called +External toolchain+ in
  the configuration interface.

The choice between these two solutions is done using the +Toolchain
Type+ option in the +Toolchain+ menu. Once one solution has been
chosen, a number of configuration options appear; they are detailed in
the following sections.

[[internal-toolchain-backend]]
==== Internal toolchain backend

The _internal toolchain backend_ is the backend where Buildroot builds
by itself a cross-compilation toolchain, before building the userspace
applications and libraries for your target embedded system.

This backend supports several C libraries:
http://www.uclibc-ng.org[uClibc-ng],
http://www.gnu.org/software/libc/libc.html[glibc] and
http://www.musl-libc.org[musl].

Once you have selected this backend, a number of options appear. The
most important ones allow you to:

* Change the version of the Linux kernel headers used to build the
  toolchain. This item deserves a few explanations. In the process of
  building a cross-compilation toolchain, the C library is being
  built. This library provides the interface between userspace
  applications and the Linux kernel. In order to know how to "talk"
  to the Linux kernel, the C library needs to have access to the
  _Linux kernel headers_ (i.e. the +.h+ files from the kernel), which
  define the interface between userspace and the kernel (system
  calls, data structures, etc.). Since this interface is backward
  compatible, the version of the Linux kernel headers used to build
  your toolchain does not need to match _exactly_ the version of the
  Linux kernel you intend to run on your embedded system. It only
  needs to be equal to or older than the version of the Linux kernel
  you intend to run. If you use kernel headers that are more recent
  than the Linux kernel you run on your embedded system, then the C
  library might be using interfaces that are not provided by your
  Linux kernel. A sketch of how to check this is shown after this
  list.

* Change the version of the GCC compiler, binutils and the C library.

* Select a number of toolchain options (uClibc only): whether the
  toolchain should have RPC support (used mainly for NFS),
  wide-char support, locale support (for internationalization),
  C++ support or thread support. Depending on which options you choose,
  the number of userspace applications and libraries visible in
  Buildroot menus will change: many applications and libraries require
  certain toolchain options to be enabled. Most packages show a comment
  when a certain toolchain option is required to be able to enable
  those packages. If needed, you can further refine the uClibc
  configuration by running +make uclibc-menuconfig+. Note however that
  all packages in Buildroot are tested against the default uClibc
  configuration bundled in Buildroot: if you deviate from this
  configuration by removing features from uClibc, some packages may no
  longer build.
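
As a rough way to verify the kernel headers rule from the first item
above, you can compare the headers version recorded in the toolchain
sysroot with the kernel actually running on the target. The sketch
below assumes an ARM internal toolchain in the default +output+
directory; the compiler name and the version numbers shown are purely
illustrative:

-----
# On the target board: the kernel that is actually running
$ uname -r
4.19.208

# On the build machine: the headers the toolchain was built against
# (LINUX_VERSION_CODE encodes major << 16 | minor << 8 | patch)
$ SYSROOT=$(output/host/bin/arm-buildroot-linux-gnueabi-gcc -print-sysroot)
$ grep LINUX_VERSION_CODE "$SYSROOT/usr/include/linux/version.h"
#define LINUX_VERSION_CODE 267008
-----

In this example, +267008+ decodes to 4.19.0, which is not newer than
the 4.19.208 kernel running on the target, so the combination is safe.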

It is worth noting that whenever one of those options is modified, the
entire toolchain and system must be rebuilt. See xref:full-rebuild[].
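
In an existing build directory, such a full rebuild is typically
triggered as follows (see xref:full-rebuild[] for the details and
caveats):

-----
make clean all
-----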

Advantages of this backend:

* Well integrated with Buildroot
* Fast, only builds what's necessary

Drawbacks of this backend:

* Rebuilding the toolchain is needed when doing +make clean+, which
  takes time. If you're trying to reduce your build time, consider
  using the _External toolchain backend_.

[[external-toolchain-backend]]
==== External toolchain backend

The _external toolchain backend_ allows you to use existing pre-built
cross-compilation toolchains. Buildroot knows about a number of
well-known cross-compilation toolchains (from
http://www.linaro.org[Linaro] for ARM, and
http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/[Sourcery
CodeBench] for ARM, x86-64, PowerPC, and MIPS) and is capable of
downloading them automatically, or it can be pointed to a custom
toolchain, either available for download or installed locally.

Then, you have three solutions to use an external toolchain:

* Use a predefined external toolchain profile, and let Buildroot
  download, extract and install the toolchain. Buildroot already knows
  about a few CodeSourcery and Linaro toolchains. Just select the
  toolchain profile in +Toolchain+ from the available ones. This is
  definitely the easiest solution.

* Use a predefined external toolchain profile, but instead of having
  Buildroot download and extract the toolchain, you can tell Buildroot
  where your toolchain is already installed on your system. Just
  select the toolchain profile in +Toolchain+ from the available
  ones, unselect +Download toolchain automatically+, and fill the
  +Toolchain path+ text entry with the path to your cross-compiling
  toolchain.

* Use a completely custom external toolchain. This is particularly
  useful for toolchains generated using crosstool-NG or with Buildroot
  itself. To do this, select the +Custom toolchain+ solution in the
  +Toolchain+ list. You need to fill the +Toolchain path+, +Toolchain
  prefix+ and +External toolchain C library+ options. Then, you have
  to tell Buildroot what your external toolchain supports. If your
  external toolchain uses the 'glibc' library, you only have to tell
  whether your toolchain supports C\++ or not and whether it has
  built-in RPC support. If your external toolchain uses the 'uClibc'
  library, then you have to tell Buildroot if it supports RPC,
  wide-char, locale, program invocation, threads and C++.
  At the beginning of the execution, Buildroot will tell you if
  the selected options do not match the toolchain configuration.
  A configuration sketch for this case is shown after this list.
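
As an illustration of the last, fully custom case, the resulting
configuration fragment could look roughly like the sketch below. The
option names are assumptions based on recent Buildroot versions, and
the path, prefix and C library values are purely illustrative; set the
actual values through +make menuconfig+ for your toolchain:

-----
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
# Toolchain already installed locally
BR2_TOOLCHAIN_EXTERNAL_PREINSTALLED=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/my-arm-toolchain"
# Prefix of the cross tools, e.g. <prefix>-gcc
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-linux"
# Properties of the toolchain: C library, C++ support, ...
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
-----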

Our external toolchain support has been tested with toolchains from
CodeSourcery and Linaro, toolchains generated by
http://crosstool-ng.org[crosstool-NG], and toolchains generated by
Buildroot itself. In general, all toolchains that support the
'sysroot' feature should work. If not, do not hesitate to contact the
developers.

We do not support toolchains or SDKs generated by OpenEmbedded or
Yocto, because these toolchains are not pure toolchains (i.e. just the
compiler, binutils, the C and C++ libraries). Instead these toolchains
come with a very large set of pre-compiled libraries and
programs. Therefore, Buildroot cannot import the 'sysroot' of the
toolchain, as it would contain hundreds of megabytes of pre-compiled
libraries that are normally built by Buildroot.

We also do not support using the distribution toolchain (i.e. the
gcc/binutils/C library installed by your distribution) as the
toolchain to build software for the target. This is because your
distribution toolchain is not a "pure" toolchain (i.e. only with the
C/C++ library), so we cannot import it properly into the Buildroot
build environment. So even if you are building a system for an x86 or
x86_64 target, you have to generate a cross-compilation toolchain with
Buildroot or crosstool-NG.

If you want to generate a custom toolchain for your project, that can
be used as an external toolchain in Buildroot, our recommendation is
to build it either with Buildroot itself (see
xref:build-toolchain-with-buildroot[]) or with
http://crosstool-ng.org[crosstool-NG].

Advantages of this backend:

* Allows you to use well-known and well-tested cross-compilation
  toolchains.

* Avoids the build time of the cross-compilation toolchain, which is
  often very significant in the overall build time of an embedded
  Linux system.

Drawbacks of this backend:

* If your pre-built external toolchain has a bug, it may be hard to
  get a fix from the toolchain vendor, unless you build your external
  toolchain by yourself using Buildroot or crosstool-NG.

[[build-toolchain-with-buildroot]]
==== Build an external toolchain with Buildroot

The Buildroot internal toolchain option can be used to create an
external toolchain. Here is a series of steps to build an internal
toolchain and package it up for reuse by Buildroot itself (or other
projects).

Create a new Buildroot configuration, with the following details:

* Select the appropriate *Target options* for your target CPU
  architecture

* In the *Toolchain* menu, keep the default of *Buildroot toolchain*
  for *Toolchain type*, and configure your toolchain as desired

* In the *System configuration* menu, select *None* as the *Init
  system* and *none* as */bin/sh*

* In the *Target packages* menu, disable *BusyBox*

* In the *Filesystem images* menu, disable *tar the root filesystem*

Then, we can trigger the build, and also ask Buildroot to generate an
SDK. This will conveniently generate for us a tarball which contains
our toolchain:

-----
make sdk
-----

This produces the SDK tarball in +$(O)/images+, with a name similar to
+arm-buildroot-linux-uclibcgnueabi_sdk-buildroot.tar.gz+. Save this
tarball, as it is now the toolchain that you can re-use as an external
toolchain in other Buildroot projects.

In those other Buildroot projects, in the *Toolchain* menu:

* Set *Toolchain type* to *External toolchain*
* Set *Toolchain* to *Custom toolchain*
* Set *Toolchain origin* to *Toolchain to be downloaded and installed*
* Set *Toolchain URL* to +file:///path/to/your/sdk/tarball.tar.gz+

===== External toolchain wrapper

When using an external toolchain, Buildroot generates a wrapper program
that transparently passes the appropriate options (according to the
configuration) to the external toolchain programs. In case you need to
debug this wrapper to check exactly what arguments are passed, you can
set the environment variable +BR2_DEBUG_WRAPPER+ to one of:

* +0+, empty or not set: no debug
* +1+: trace all arguments on a single line
* +2+: trace one argument per line
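
For example, to trace one argument per line during a build, a usage
sketch is simply to set the variable in the environment of the +make+
invocation:

-----
BR2_DEBUG_WRAPPER=2 make
-----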

=== /dev management

On a Linux system, the +/dev+ directory contains special files, called
_device files_, that allow userspace applications to access the
hardware devices managed by the Linux kernel. Without these _device
files_, your userspace applications would not be able to use the
hardware devices, even if they are properly recognized by the Linux
kernel.
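
As an illustration, this is what such a _device file_ typically looks
like (the output below is only an example; the +4, 64+ pair is the
major/minor number through which the kernel identifies the first
serial port):

-----
$ ls -l /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Jan  1 00:00 /dev/ttyS0
-----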

Under +System configuration+, +/dev management+, Buildroot offers four
different solutions to handle the +/dev+ directory:

* The first solution is *Static using device table*. This is the old
  classical way of handling device files in Linux. With this method,
  the device files are persistently stored in the root filesystem
  (i.e. they persist across reboots), and there is nothing that will
  automatically create and remove those device files when hardware
  devices are added to or removed from the system. Buildroot therefore
  creates a standard set of device files using a _device table_, the
  default one being stored in +system/device_table_dev.txt+ in the
  Buildroot source code. This file is processed when Buildroot
  generates the final root filesystem image, and the _device files_
  are therefore not visible in the +output/target+ directory. The
  +BR2_ROOTFS_STATIC_DEVICE_TABLE+ option allows you to change the
  default device table used by Buildroot, or to add an additional
  device table, so that additional _device files_ are created by
  Buildroot during the build. So, if you use this method, and a
  _device file_ is missing in your system, you can for example create
  a +board/<yourcompany>/<yourproject>/device_table_dev.txt+ file
  that contains the description of your additional _device files_,
  and then you can set +BR2_ROOTFS_STATIC_DEVICE_TABLE+ to
  +system/device_table_dev.txt
  board/<yourcompany>/<yourproject>/device_table_dev.txt+. For more
  details about the format of the device table file, see
  xref:makedev-syntax[]; an example entry is also shown after this
  list.

* The second solution is *Dynamic using devtmpfs only*. _devtmpfs_ is
  a virtual filesystem inside the Linux kernel that has been
  introduced in kernel 2.6.32 (if you use an older kernel, it is not
  possible to use this option). When mounted in +/dev+, this virtual
  filesystem will automatically make _device files_ appear and
  disappear as hardware devices are added and removed from the
  system. This filesystem is not persistent across reboots: it is
  filled dynamically by the kernel. Using _devtmpfs_ requires the
  following kernel configuration options to be enabled:
  +CONFIG_DEVTMPFS+ and +CONFIG_DEVTMPFS_MOUNT+. When Buildroot is in
  charge of building the Linux kernel for your embedded device, it
  makes sure that those two options are enabled. However, if you
  build your Linux kernel outside of Buildroot, then it is your
  responsibility to enable those two options (if you fail to do so,
  your Buildroot system will not boot).

* The third solution is *Dynamic using devtmpfs + mdev*. This method
  also relies on the _devtmpfs_ virtual filesystem detailed above (so
  the requirement to have +CONFIG_DEVTMPFS+ and
  +CONFIG_DEVTMPFS_MOUNT+ enabled in the kernel configuration still
  applies), but adds the +mdev+ userspace utility on top of it. +mdev+
  is a program that is part of BusyBox and that the kernel will call
  every time a device is added or removed. Thanks to the
  +/etc/mdev.conf+ configuration file, +mdev+ can be configured to,
  for example, set specific permissions or ownership on a device file,
  call a script or application whenever a device appears or
  disappears, etc. Basically, it allows _userspace_ to react to device
  addition and removal events. +mdev+ can for example be used to
  automatically load kernel modules when devices appear on the
  system. +mdev+ is also important if you have devices that require
  firmware, as it will be responsible for pushing the firmware
  contents to the kernel. +mdev+ is a lightweight implementation (with
  fewer features) of +udev+. For more details about +mdev+ and the
  syntax of its configuration file, see
  http://git.busybox.net/busybox/tree/docs/mdev.txt.

* The fourth solution is *Dynamic using devtmpfs + eudev*. This
  method also relies on the _devtmpfs_ virtual filesystem detailed
  above, but adds the +eudev+ userspace daemon on top of it. +eudev+
  is a daemon that runs in the background, and gets called by the
  kernel when a device is added to or removed from the system. It is a
  more heavyweight solution than +mdev+, but provides higher
  flexibility. +eudev+ is a standalone version of +udev+, the
  original userspace daemon used in most desktop Linux distributions,
  which is now part of systemd. For more details, see
  http://en.wikipedia.org/wiki/Udev.
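
As an example for the static solution, an additional
+board/<yourcompany>/<yourproject>/device_table_dev.txt+ file could
contain a line such as the one sketched below, which creates a
character device node for the first serial port. The values are
illustrative; the exact column syntax is documented in
xref:makedev-syntax[]:

-----
# <name>     <type> <mode> <uid> <gid> <major> <minor> <start> <inc> <count>
/dev/ttyS0   c      666    0     0     4       64      -       -     -
-----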

The Buildroot developers' recommendation is to start with the *Dynamic
using devtmpfs only* solution, until you have the need for userspace
to be notified when devices are added/removed, or if firmware files are
needed, in which case *Dynamic using devtmpfs + mdev* is usually a
good solution.

Note that if +systemd+ is chosen as init system, +/dev+ management will
be performed by the +udev+ program provided by +systemd+.

=== init system

The _init_ program is the first userspace program started by the
kernel (it carries the PID number 1), and is responsible for starting
the userspace services and programs (for example: web server,
graphical applications, other network servers, etc.).

Buildroot allows you to use three different types of init systems,
which can be chosen from +System configuration+, +Init system+:

* The first solution is *BusyBox*. Amongst many programs, BusyBox has
  an implementation of a basic +init+ program, which is sufficient
  for most embedded systems. Enabling the +BR2_INIT_BUSYBOX+ option
  will ensure that BusyBox builds and installs its +init+ program.
  This is the default solution in Buildroot. The BusyBox +init+
  program will read the +/etc/inittab+ file at boot to know what to
  do. The syntax of this file can be found in
  http://git.busybox.net/busybox/tree/examples/inittab (note that
  BusyBox +inittab+ syntax is special: do not use a random +inittab+
  documentation from the Internet to learn about BusyBox
  +inittab+). The default +inittab+ in Buildroot is stored in
  +system/skeleton/etc/inittab+. Apart from mounting a few important
  filesystems, the main job the default inittab does is to start the
  +/etc/init.d/rcS+ shell script, and start a +getty+ program (which
  provides a login prompt). A small +inittab+ sketch is shown after
  this list.

* The second solution is *SystemV*. This solution uses the old
  traditional _sysvinit_ program, packaged in Buildroot in
  +package/sysvinit+. This was the solution used in most desktop
  Linux distributions, until they switched to more recent
  alternatives such as Upstart or systemd. +sysvinit+ also works with
  an +inittab+ file (which has a slightly different syntax than the
  one from BusyBox). The default +inittab+ installed with this init
  solution is located in +package/sysvinit/inittab+.

* The third solution is *systemd*. +systemd+ is the new generation
  init system for Linux. It does far more than traditional _init_
  programs: it has aggressive parallelization capabilities, uses
  socket and D-Bus activation for starting services, offers on-demand
  starting of daemons, keeps track of processes using Linux control
  groups, supports snapshotting and restoring of the system state,
  etc. +systemd+ will be useful on relatively complex embedded
  systems, for example the ones requiring D-Bus and services
  communicating with each other. It is worth noting that +systemd+
  brings a fairly large number of big dependencies: +dbus+, +udev+
  and more. For more details about +systemd+, see
  http://www.freedesktop.org/wiki/Software/systemd.
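
To give an idea of the first solution, a minimal BusyBox +inittab+ in
the spirit of the default Buildroot one could look like the sketch
below. The exact lines of the real default file differ; each entry
follows the BusyBox format +<id>::<action>:<command>+ described in the
documentation referenced above:

-----
# Mount basic filesystems, then run the startup script
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/bin/mount -o remount,rw /
::sysinit:/etc/init.d/rcS
# Keep a login prompt available on the console
console::respawn:/sbin/getty -L console 0 vt100
# Unmount filesystems on shutdown/reboot
::shutdown:/bin/umount -a -r
-----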

The solution recommended by Buildroot developers is to use the
*BusyBox init* as it is sufficient for most embedded
systems. *systemd* can be used for more complex situations.