.. SPDX-License-Identifier: GPL-2.0+
.. Copyright 2021 Google LLC
.. sectionauthor:: Simon Glass <sjg@chromium.org>

Writing Tests
=============

This describes how to write tests in U-Boot and outlines the possible options.

Test types
----------

There are two basic types of test in U-Boot:

- Python tests, in test/py/tests
- C tests, in test/ and its subdirectories

(there are also UEFI tests in lib/efi_selftest/ not considered here.)

Python tests talk to U-Boot via the command line. They support both sandbox and
real hardware. They typically do not require building test code into U-Boot
itself. They are fairly slow to run, due to the command-line interface and there
being two separate processes. Python tests are fairly easy to write. They can
be a little tricky to debug sometimes due to the voluminous output of pytest.

C tests are written directly in U-Boot. While they can be used on boards, they
are more commonly used with sandbox, as they obviously add to U-Boot code size.
C tests are easy to write so long as the required facilities exist. Where they
do not, it can involve refactoring or adding new features to sandbox. They are
fast to run and easy to debug.

Regardless of which test type is used, all tests are collected and run by the
pytest framework, so there is typically no need to run them separately. This
means that C tests can be used when it makes sense, and Python tests when it
doesn't.

This table shows how to decide whether to write a C or Python test:

===================== =========================== =============================
Attribute             C test                      Python test
===================== =========================== =============================
Fast to run?          Yes                         No (two separate processes)
Easy to write?        Yes, if required test       Yes
                      features exist in sandbox
                      or the target system
Needs code in U-Boot? Yes                         No, provided the test can be
                                                  executed and the result
                                                  determined using the command
                                                  line
Easy to debug?        Yes                         No, since access to the U-Boot
                                                  state is not available and the
                                                  amount of output can
                                                  sometimes require a bit of
                                                  digging
Can use gdb?          Yes, directly               Yes, with --gdbserver
Can run on boards?    Some can, but only if       Some
                      compiled in and not
                      dependent on sandbox
===================== =========================== =============================

Python or C
-----------

Typically in U-Boot we encourage C tests using sandbox for all features. This
allows fast testing, easy development and allows contributors to make changes
without needing dozens of boards to test with.

When a test requires setup or interaction with the running host (such as
generating images and then running U-Boot to check that they can be loaded), or
cannot be run on sandbox, Python tests should be used. These should typically
NOT rely on running with sandbox, but instead should function correctly on any
board supported by U-Boot.

How slow are Python tests?
--------------------------

Under the hood, when running on sandbox, Python tests work by starting a sandbox
process and connecting to it via a pipe. Each interaction with the U-Boot process
requires at least a context switch to handle the pipe interaction. The test
sends a command to U-Boot, which then reacts and shows some output, then the
test sees that and continues. Of course on real hardware, communication delays
(e.g. with a serial console) make this slower.

For comparison, consider a test that checks the 'md' (memory dump) command. All
times below are approximate, as measured on an AMD 2950X system. Here is the
test in Python::

    @pytest.mark.buildconfigspec('cmd_memory')
    def test_md(u_boot_console):
        """Test that md reads memory as expected, and that memory can be modified
        using the mw command."""

        ram_base = u_boot_utils.find_ram_base(u_boot_console)
        addr = '%08x' % ram_base
        val = 'a5f09876'
        expected_response = addr + ': ' + val
        u_boot_console.run_command('mw ' + addr + ' 0 10')
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(not (expected_response in response))
        u_boot_console.run_command('mw ' + addr + ' ' + val)
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(expected_response in response)

This runs a few commands and checks the output. Note that it runs a command,
waits for the response and then checks it against what is expected. If run by
itself it takes around 800ms, including test collection. For 1000 runs it takes
19 seconds, or 19ms per run. Of course 1000 runs is not that useful since we
only want to run it once.

There is no exactly equivalent C test, but here is a similar one that tests 'ms'
(memory search)::

    /* Test 'ms' command with bytes */
    static int mem_test_ms_b(struct unit_test_state *uts)
    {
            u8 *buf;

            buf = map_sysmem(0, BUF_SIZE + 1);
            memset(buf, '\0', BUF_SIZE);
            buf[0x0] = 0x12;
            buf[0x31] = 0x12;
            buf[0xff] = 0x12;
            buf[0x100] = 0x12;
            ut_assertok(console_record_reset_enable());
            run_command("ms.b 1 ff 12", 0);
            ut_assert_nextline("00000030: 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................");
            ut_assert_nextline("--");
            ut_assert_nextline("000000f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12 ................");
            ut_assert_nextline("2 matches");
            ut_assert_console_end();

            ut_asserteq(2, env_get_hex("memmatches", 0));
            ut_asserteq(0xff, env_get_hex("memaddr", 0));
            ut_asserteq(0xfe, env_get_hex("mempos", 0));

            unmap_sysmem(buf);

            return 0;
    }
    MEM_TEST(mem_test_ms_b, UT_TESTF_CONSOLE_REC);

This runs the command directly in U-Boot, then checks the console output, also
directly in U-Boot. If run by itself this takes 100ms. For 1000 runs it takes
660ms, or 0.66ms per run.

So overall, running a C test is perhaps 8 times faster individually and the
interactions are perhaps 25 times faster.

It should also be noted that the C test is fairly easy to debug. You can set a
breakpoint on do_mem_search(), which is what implements the 'ms' command,
single step to see what might be wrong, etc. That is also possible with the
pytest, but requires two terminals and --gdbserver.

Why does speed matter?
----------------------

Many development activities rely on running tests:

- 'git bisect run make qcheck' can be used to find a failing commit
- test-driven development relies on quick iteration of build/test
- U-Boot's continuous integration (CI) systems make use of tests. Running
  all sandbox tests typically takes 90 seconds and running each qemu test
  takes about 30 seconds. This is currently dwarfed by the time taken to
  build all boards

As U-Boot continues to grow its feature set, fast and reliable tests are a
critical factor in developer productivity and happiness.

Writing C tests
---------------

C tests are arranged into suites which are typically executed by the 'ut'
command. Each suite is in its own file. This section describes how to accomplish
some common test tasks.

(there are also UEFI C tests in lib/efi_selftest/ not considered here.)
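
On sandbox, a suite is normally run from the U-Boot prompt with the 'ut'
command; for example, the driver model suite can typically be run with::

    => ut dm

An individual test can usually be selected by passing its name as an extra
argument to the suite's subcommand.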

Add a new driver model test
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when adding a test for a new or existing uclass, adding new operations
or features to a uclass, adding new ofnode or dev_read_() functions, or anything
else related to driver model.

Find a suitable place for your test, perhaps near other test functions in
existing code, or in a new file. Each uclass should have its own test file.

Declare the test with::

    /* Test that ... */
    static int dm_test_uclassname_what(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    DM_TEST(dm_test_uclassname_what, UT_TESTF_SCAN_FDT);

Replace 'uclassname' with the name of your uclass, if applicable. Replace 'what'
with what you are testing.

The flags for DM_TEST() are defined in test/test.h and you typically want
UT_TESTF_SCAN_FDT so that the devicetree is scanned and all devices are bound
and ready for use. The DM_TEST macro adds UT_TESTF_DM automatically so that
the test runner knows it is a driver model test.

Driver model tests are special in that the entire driver model state is
recreated anew for each test. This ensures that if a previous test deletes a
device, for example, it does not affect subsequent tests. Driver model tests
also run both with livetree and flattree, to ensure that both devicetree
implementations work as expected.
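
As an illustrative sketch only (the uclass, function name and checks below are
hypothetical, not part of any particular suite), a filled-in driver model test
might look like this::

    /* Test that the first MISC device can be found and probed */
    static int dm_test_misc_probe(struct unit_test_state *uts)
    {
            struct udevice *dev;

            /* UT_TESTF_SCAN_FDT means the devices are already bound */
            ut_assertok(uclass_get_device(UCLASS_MISC, 0, &dev));
            ut_assertnonnull(dev);

            return 0;
    }
    DM_TEST(dm_test_misc_probe, UT_TESTF_SCAN_FDT);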

Example commit: c48cb7ebfb4 ("sandbox: add ADC unit tests") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/c48cb7ebfb4

Add a C test to an existing suite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when you are adding to or modifying an existing feature outside driver
model. An example is bloblist.

Add a new function in the same file as the rest of the suite and register it
with the suite. For example, to add a new mem_search test::

    /* Test 'ms' command with 32-bit values */
    static int mem_test_ms_new_thing(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    MEM_TEST(mem_test_ms_new_thing, UT_TESTF_CONSOLE_REC);

Note that the MEM_TEST() macro is defined at the top of the file.
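
Such per-suite registration macros generally follow the same one-line pattern,
along these lines (shown here only for illustration; check the file itself for
the exact definition)::

    /* Declare a new mem test */
    #define MEM_TEST(_name, _flags) UNIT_TEST(_name, _flags, mem_test)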

Example commit: 9fe064646d2 ("bloblist: Support relocating to a larger space") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/9fe064646d2

Add a new test suite
~~~~~~~~~~~~~~~~~~~~

Each suite should focus on one feature or subsystem, so if you are writing a
new one of those, you should add a new suite.

Create a new file in test/ or a subdirectory and define a macro to register the
suite. For example::

    #include <common.h>
    #include <console.h>
    #include <mapmem.h>
    #include <dm/test.h>
    #include <test/ut.h>

    /* Declare a new wibble test */
    #define WIBBLE_TEST(_name, _flags) UNIT_TEST(_name, _flags, wibble_test)

    /* Tests go here */

    /* At the bottom of the file: */
    int do_ut_wibble(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
    {
            struct unit_test *tests = UNIT_TEST_SUITE_START(wibble_test);
            const int n_ents = UNIT_TEST_SUITE_COUNT(wibble_test);

            return cmd_ut_category("cmd_wibble", "wibble_test_", tests, n_ents, argc, argv);
    }

Then add new tests to it as above.
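
For example, a first test for the new suite might look like this (the body is
just a placeholder)::

    /* Test that wibble works */
    static int wibble_test_base(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    WIBBLE_TEST(wibble_test_base, 0);

Note that the function name starts with the 'wibble_test\_' prefix passed to
cmd_ut_category() above, which is used when selecting individual tests by name.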

Register this new suite in test/cmd_ut.c by adding to cmd_ut_sub[]::

    /* Within cmd_ut_sub[]... */
    U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),

and adding new help to ut_help_text[]::

    "ut wibble - Test the wibble feature\n"

If your feature is conditional on a particular Kconfig, then you can use #ifdef
to control that.
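
For instance, assuming a hypothetical CONFIG_WIBBLE option, a guard might look
like this::

    #ifdef CONFIG_WIBBLE
    /* Tests which need the wibble feature go here */
    static int wibble_test_advanced(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    WIBBLE_TEST(wibble_test_advanced, 0);
    #endif /* CONFIG_WIBBLE */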

Finally, add the test to the build by adding to the Makefile in the same
directory::

    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o

Note that CMDLINE is never enabled in SPL, so this test will only be present in
U-Boot proper. See below for how to do SPL tests.

As before, you can add an extra Kconfig check if needed::

    ifneq ($(CONFIG_$(SPL_)WIBBLE),)
    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o
    endif

Example commit: 919e7a8fb64 ("test: Add a simple test for bloblist") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/919e7a8fb64

Making the test run from pytest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All C tests must run from pytest. Typically this is automatic, since pytest
scans the U-Boot executable for available tests to run. So long as you have a
'ut' subcommand for your test suite, it will run. The same applies for driver
model tests since they use the 'ut dm' subcommand.

See test/py/tests/test_ut.py for how unit tests are run.

Add a C test for SPL
~~~~~~~~~~~~~~~~~~~~

Note: C tests are only available for sandbox_spl at present. There is currently
no mechanism in other boards to run existing SPL tests even if they are built
into the image.

SPL tests cannot be run from the 'ut' command since there are no commands
available in SPL. Instead, sandbox (only) calls ut_run_list() on start-up, when
the -u flag is given. This runs the available unit tests, no matter what suite
they are in.
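
As a rough usage sketch (assuming a sandbox_spl build in the current directory,
with the SPL binary in its standard output location)::

    ./spl/u-boot-spl -u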

To create a new SPL test, follow the same rules as above, either adding to an
existing suite or creating a new one.

An example SPL test is spl_test_load().

Writing Python tests
--------------------

See :doc:`py_testing` for brief notes on how to write Python tests. You
should be able to use the existing tests in test/py/tests as examples.