NOTE: ksymoops is useless on 2.6.  Please use the Oops in its original format
(from dmesg, etc).  Ignore any references in this or other docs to "decoding
the Oops" or "running it through ksymoops".  If you post an Oops from 2.6 that
has been run through ksymoops, people will just tell you to repost it.

Quick Summary
-------------

Find the Oops and send it to the maintainer of the kernel area that seems to be
involved with the problem.  Don't worry too much about getting the wrong
person.  If you are unsure, send it to the person responsible for the code
relevant to what you were doing.  If it occurs repeatably, try to describe how
to recreate it.  That's worth even more than the oops.

If you are totally stumped as to whom to send the report, send it to
linux-kernel@vger.kernel.org.  Thanks for your help in making Linux as
stable as humanly possible.

Where is the Oops?
------------------

Normally the Oops text is read from the kernel buffers by klogd and
handed to syslogd, which writes it to a syslog file, typically
/var/log/messages (this depends on /etc/syslog.conf).  Sometimes klogd
dies, in which case you can run "dmesg > file" to read the data from the
kernel buffers and save it.  Or you can run "cat /proc/kmsg > file";
however, you have to interrupt the transfer by hand (e.g. with ^C),
since kmsg is a "never ending file".
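
A minimal capture session might look like this (the output file name is
arbitrary):

	# Save the kernel ring buffer once the Oops has been printed:
	dmesg > oops.txt

	# Or stream messages as they arrive; /proc/kmsg never ends,
	# so interrupt with ^C once the Oops has scrolled past:
	cat /proc/kmsg > oops.txt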

If the machine has crashed so badly that you cannot enter commands or
the disk is not available then you have three options :-

(1) Hand copy the text from the screen and type it in after the machine
    has restarted.  Messy but it is the only option if you have not
    planned for a crash.  Alternatively, you can take a picture of
    the screen with a digital camera - not nice, but better than
    nothing.  If the messages scroll off the top of the console, you
    may find that booting with a higher resolution (eg, vga=791)
    will allow you to read more of the text.  (Caveat: This needs vesafb,
    so won't help for 'early' oopses.)

(2) Boot with a serial console (see Documentation/serial-console.txt),
    run a null modem to a second machine and capture the output there
    using your favourite communication program.  Minicom works well.

(3) Use Kdump (see Documentation/kdump/kdump.txt) and extract the
    kernel ring buffer from the old kernel's memory using the dmesg
    gdbmacro in Documentation/kdump/gdbmacros.txt, as sketched below.
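
Assuming a crash dump has been captured (e.g. at /var/crash/vmcore - the
path is illustrative) and the matching vmlinux with debug info is at
hand, the extraction looks roughly like this:

	gdb /usr/src/linux/vmlinux /var/crash/vmcore
	(gdb) source Documentation/kdump/gdbmacros.txt
	(gdb) dmesg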

Full Information
----------------

NOTE: the message from Linus below applies to the 2.4 kernel.  I have
preserved it for historical reasons, and because some of the information
in it still applies.  In particular, please ignore any references to
ksymoops.

From: Linus Torvalds <torvalds@osdl.org>

How to track down an Oops..  [originally a mail to linux-kernel]

The main trick is having 5 years of experience with those pesky oops
messages ;-)

Actually, there are things you can do that make this easier.  I have two
separate approaches:

	gdb /usr/src/linux/vmlinux
	gdb> disassemble <offending_function>

That's the easy way to find the problem, at least if the bug-report is
well made (like this one was - run through ksymoops to get the
information of which function and the offset in the function that it
happened in).

Oh, it helps if the report happens on a kernel that is compiled with the
same compiler and similar setups.
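
As an illustration, a session along these lines (the function name and
offset are made up) maps a <function>+<offset> from a report back to the
code, and - if the kernel was built with debug info - to a source line:

	gdb /usr/src/linux/vmlinux
	(gdb) disassemble some_function
	(gdb) list *(some_function+0x30)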

The other thing to do is disassemble the "Code:" part of the bug report:
ksymoops will do this too with the correct tools, but if you don't have
the tools you can just do a silly program:

	char str[] = "\xXX\xXX\xXX...";
	main(){}

and compile it with gcc -g and then do "disassemble str" (where the "XX"
stuff are the values reported by the Oops - you can just cut-and-paste
and do a replace of spaces to "\x" - that's what I do, as I'm too lazy
to write a program to automate this all).
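
A variant that skips the helper program entirely is to feed the raw
bytes to objdump; the byte values below are placeholders for whatever
your "Code:" line contains (this sketch assumes a 32-bit x86 oops):

	printf '\xc7\x00\x05\x00\x00\x00\xeb\x08' > code.bin
	objdump -D -b binary -m i386 code.bin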

Finally, if you want to see where the code comes from, you can do

	cd /usr/src/linux
	make fs/buffer.s	# or whatever file the bug happened in

and then you get a better idea of what happens than with the gdb
disassembly.

Now, the trick is just then to combine all the data you have: the C
sources (and general knowledge of what it _should_ do), the assembly
listing and the code disassembly (and additionally the register dump you
also get from the "oops" message - that can be useful to see _what_ the
corrupted pointers were, and when you have the assembler listing you can
also match the other registers to whatever C expressions they were used
for).

Essentially, you just look at what doesn't match (in this case it was the
"Code" disassembly that didn't match with what the compiler generated).
Then you need to find out _why_ they don't match.  Often it's simple - you
see that the code uses a NULL pointer and then you look at the code and
wonder how the NULL pointer got there, and if it's a valid thing to do
you just check against it..

Now, if somebody gets the idea that this is time-consuming and requires
some small amount of concentration, you're right.  Which is why I will
mostly just ignore any panic reports that don't have the symbol table
info etc looked up: it simply gets too hard to look it up (I have some
programs to search for specific patterns in the kernel code segment, and
sometimes I have been able to look up those kinds of panics too, but
that really requires pretty good knowledge of the kernel just to be able
to pick out the right sequences etc..)

_Sometimes_ it happens that I just see the disassembled code sequence
from the panic, and I know immediately where it's coming from.  That's when
I get worried that I've been doing this for too long ;-)

		Linus

---------------------------------------------------------------------------

Notes on Oops tracing with klogd:

In order to help Linus and the other kernel developers there has been
substantial support incorporated into klogd for processing protection
faults.  In order to have full support for address resolution at least
version 1.3-pl3 of the sysklogd package should be used.

When a protection fault occurs the klogd daemon automatically
translates important addresses in the kernel log messages to their
symbolic equivalents.  This translated kernel message is then
forwarded through whatever reporting mechanism klogd is using.  The
protection fault message can be simply cut out of the message files
and forwarded to the kernel developers.

Two types of address resolution are performed by klogd.  The first is
static translation and the second is dynamic translation.  Static
translation uses the System.map file in much the same manner that
ksymoops does.  In order to do static translation the klogd daemon
must be able to find a system map file at daemon initialization time.
See the klogd man page for information on how klogd searches for map
files.
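
When klogd is not available, the same static lookup can be done by hand
against System.map, which is sorted by address.  A rough sketch (the EIP
value is hypothetical, and gawk is assumed for strtonum()):

	eip=c0123456	# faulting address taken from the Oops report
	gawk -v eip="$eip" '
	    # remember the last symbol whose address is <= the EIP
	    strtonum("0x" $1) <= strtonum("0x" eip) {
	        sym = $3; base = strtonum("0x" $1)
	    }
	    END { printf("%s+0x%x\n", sym, strtonum("0x" eip) - base) }
	' /boot/System.map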

Dynamic address translation is important when kernel loadable modules
are being used.  Since memory for kernel modules is allocated from the
kernel's dynamic memory pools there are no fixed locations for either
the start of the module or for functions and symbols in the module.

The kernel supports system calls which allow a program to determine
which modules are loaded and their location in memory.  Using these
system calls the klogd daemon builds a symbol table which can be used
to debug a protection fault which occurs in a loadable kernel module.

At the very minimum klogd will provide the name of the module which
generated the protection fault.  There may be additional symbolic
information available if the developer of the loadable module chose to
export symbol information from the module.

Since the kernel module environment can be dynamic there must be a
mechanism for notifying the klogd daemon when a change in module
environment occurs.  There are command line options available which
allow klogd to signal the currently executing daemon that symbol
information should be refreshed.  See the klogd manual page for more
information.
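
For example, something along these lines (check klogd(8) for the exact
option letters, which have varied between sysklogd versions):

	# ask the running klogd daemon to reload module symbol
	# information after a module has been loaded or unloaded
	klogd -i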

A patch is included with the sysklogd distribution which modifies the
modules-2.0.0 package to automatically signal klogd whenever a module
is loaded or unloaded.  Applying this patch provides essentially
seamless support for debugging protection faults which occur with
kernel loadable modules.

The following is an example of a protection fault in a loadable module
processed by klogd:

---------------------------------------------------------------------------
Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
Aug 29 09:51:01 blizard kernel: *pde = 00000000
Aug 29 09:51:01 blizard kernel: Oops: 0002
Aug 29 09:51:01 blizard kernel: CPU:    0
Aug 29 09:51:01 blizard kernel: EIP:    0010:[oops:_oops+16/3868]
Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
Aug 29 09:51:01 blizard kernel: eax: 315e97cc   ebx: 003a6f80   ecx: 001be77b   edx: 00237c0c
Aug 29 09:51:01 blizard kernel: esi: 00000000   edi: bffffdb3   ebp: 00589f90   esp: 00589f8c
Aug 29 09:51:01 blizard kernel: ds: 0018   es: 0018   fs: 002b   gs: 002b   ss: 0018
Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
Aug 29 09:51:01 blizard kernel:        00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
Aug 29 09:51:01 blizard kernel:        bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3
---------------------------------------------------------------------------

Dr. G.W. Wettstein           Oncology Research Div. Computing Facility
Roger Maris Cancer Center    INTERNET: greg@wind.rmcc.com
820 4th St. N.
Fargo, ND  58122
Phone: 701-234-7556

---------------------------------------------------------------------------

Tainted kernels:

Some oops reports contain the string 'Tainted: ' after the program
counter.  This indicates that the kernel has been tainted by some
mechanism.  The string is followed by a series of position-sensitive
characters, each representing a particular tainted value.

  1: 'G' if all modules loaded have a GPL or compatible license, 'P' if
     any proprietary module has been loaded.  Modules without a
     MODULE_LICENSE or with a MODULE_LICENSE that is not recognised by
     insmod as GPL compatible are assumed to be proprietary.

  2: 'F' if any module was force loaded by "insmod -f", ' ' if all
     modules were loaded normally.

  3: 'S' if the oops occurred on an SMP kernel running on hardware that
     hasn't been certified as safe to run multiprocessor.
     Currently this occurs only on various Athlons that are not
     SMP capable.

  4: 'R' if a module was force unloaded by "rmmod -f", ' ' if all
     modules were unloaded normally.

  5: 'M' if any processor has reported a Machine Check Exception,
     ' ' if no Machine Check Exceptions have occurred.

  6: 'B' if a page-release function has found a bad page reference or
     some unexpected page flags.

  7: 'U' if a user or user application specifically requested that the
     Tainted flag be set, ' ' otherwise.

The primary reason for the 'Tainted: ' string is to tell kernel
debuggers if this is a clean kernel or if anything unusual has
occurred.  Tainting is permanent: even if an offending module is
unloaded, the tainted value remains to indicate that the kernel is not
trustworthy.
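
The same flags are also exposed as a bitmask in /proc/sys/kernel/tainted.
A minimal decoding sketch, assuming the bit positions correspond to the
list above (bit 0 = 'P', bit 1 = 'F', and so on):

	t=$(cat /proc/sys/kernel/tainted)
	[ $((t & 1))  -ne 0 ] && echo "P: proprietary module loaded"
	[ $((t & 2))  -ne 0 ] && echo "F: module force loaded"
	[ $((t & 4))  -ne 0 ] && echo "S: SMP with unsafe processors"
	[ $((t & 8))  -ne 0 ] && echo "R: module force unloaded"
	[ $((t & 16)) -ne 0 ] && echo "M: machine check exception occurred"
	[ $((t & 32)) -ne 0 ] && echo "B: bad page reference found"
	[ $((t & 64)) -ne 0 ] && echo "U: taint requested by user"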