
		The MSI Driver Guide HOWTO
	Tom L Nguyen tom.l.nguyen@intel.com
			10/03/2003
	Revised Feb 12, 2004 by Martine Silbermann
		email: Martine.Silbermann@hp.com
	Revised Jun 25, 2004 by Tom L Nguyen

1. About this guide

This guide describes the basics of Message Signaled Interrupts (MSI),
the advantages of using MSI over traditional interrupt mechanisms,
and how to enable your driver to use MSI or MSI-X. Also included is
a Frequently Asked Questions (FAQ) section.

1.1 Terminology

PCI devices can be single-function or multi-function. In either case,
when this text talks about enabling or disabling MSI on a "device
function," it is referring to one specific PCI device and function and
not to all functions on a PCI device (unless the PCI device has only
one function).

2. Copyright 2003 Intel Corporation

3. What is MSI/MSI-X?

Message Signaled Interrupt (MSI), as described in the PCI Local Bus
Specification Revision 2.3 or later, is an optional feature, and a
required feature for PCI Express devices. MSI enables a device function
to request service by sending an Inbound Memory Write on its PCI bus to
the FSB as a Message Signaled Interrupt transaction. Because MSI is
generated in the form of a Memory Write, all transaction conditions,
such as a Retry, Master-Abort, Target-Abort or normal completion, are
supported.

A PCI device that supports MSI must also support the pin IRQ assertion
interrupt mechanism to provide backward compatibility for systems that
do not support MSI. In systems which support MSI, the bus driver is
responsible for initializing the message address and message data of
the device function's MSI/MSI-X capability structure during device
initial configuration.

An MSI capable device function indicates MSI support by implementing
the MSI/MSI-X capability structure in its PCI capability list. The
device function may implement both the MSI capability structure and
the MSI-X capability structure; however, the bus driver should not
enable both.

The MSI capability structure contains the Message Control register,
the Message Address register and the Message Data register. These
registers provide the bus driver control over MSI. The Message Control
register indicates the MSI capability supported by the device. The
Message Address register specifies the target address and the Message
Data register specifies the characteristics of the message. To request
service, the device function writes the content of the Message Data
register to the target address. The device and its software driver
are prohibited from writing to these registers.

The MSI-X capability structure is an optional extension to MSI. It
uses an independent and separate capability structure. There are
some key advantages to implementing the MSI-X capability structure
over the MSI capability structure as described below.

	- Support a larger maximum number of vectors per function.

	- Provide the ability for system software to configure
	  each vector with an independent message address and message
	  data, specified by a table that resides in Memory Space.

	- MSI and MSI-X both support per-vector masking. Per-vector
	  masking is an optional extension of MSI but a required
	  feature for MSI-X. Per-vector masking provides the kernel the
	  ability to mask/unmask a single MSI while running its
	  interrupt service routine. If per-vector masking is
	  not supported, then the device driver should provide the
	  hardware/software synchronization to ensure that the device
	  generates MSI when the driver wants it to do so.

4. Why use MSI?

As a benefit to the simplification of board design, MSI allows board
designers to remove out-of-band interrupt routing. MSI is another
step towards a legacy-free environment.

Due to increasing pressure on chipset and processor packages to
reduce pin count, the need for interrupt pins is expected to
diminish over time. Devices, due to pin constraints, may implement
messages to increase performance.

PCI Express endpoints use INTx emulation (in-band messages) instead
of IRQ pin assertion. Using INTx emulation requires interrupt
sharing among devices connected to the same node (PCI bridge), while
MSI is unique (non-shared) and does not require BIOS configuration
support. As a result, the PCI Express technology requires MSI
support for better interrupt performance.

Using MSI enables the device functions to support two or more
vectors, which can be configured to target different CPUs to
increase scalability.

5. Configuring a driver to use MSI/MSI-X

By default, the kernel will not enable MSI/MSI-X on all devices that
support this capability. The CONFIG_PCI_MSI kernel option
must be selected to enable MSI/MSI-X support.

5.1 Including MSI/MSI-X support into the kernel

To allow MSI/MSI-X capable device drivers to selectively enable
MSI/MSI-X (using pci_enable_msi()/pci_enable_msix() as described
below), the VECTOR based scheme needs to be enabled by setting
CONFIG_PCI_MSI during kernel config.

Since the target of the inbound message is the local APIC,
CONFIG_X86_LOCAL_APIC must be enabled as well as CONFIG_PCI_MSI.
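
As an illustration, a kernel configuration fragment satisfying both
requirements on an x86 build might contain the following (a minimal
sketch; the exact set of related options depends on the kernel
version):

	CONFIG_X86_LOCAL_APIC=y
	CONFIG_PCI_MSI=y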

5.2 Configuring for MSI support

Due to the non-contiguous fashion in vector assignment of the
existing Linux kernel, this version does not support multiple
messages regardless of whether a device function is capable of
supporting more than one vector. To enable MSI on a device function's
MSI capability structure requires a device driver to call the function
pci_enable_msi() explicitly.

5.2.1 API pci_enable_msi

int pci_enable_msi(struct pci_dev *dev)

With this new API, a device driver that wants to have MSI
enabled on its device function must call this API to enable MSI.
A successful call will initialize the MSI capability structure
with ONE vector, regardless of whether a device function is
capable of supporting multiple messages. This vector replaces the
pre-assigned dev->irq with a new MSI vector. To avoid a conflict
between the newly assigned vector and the existing pre-assigned
vector, a device driver is required to call this API before calling
request_irq().
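
The ordering is illustrated by the sketch below, written against a
hypothetical "foo" driver; foo_interrupt() and foo_setup_irq() are
placeholders, and the request_irq() handler prototype assumed here is
the one used by kernels of roughly the same vintage as this document.

	#include <linux/interrupt.h>
	#include <linux/pci.h>

	static irqreturn_t foo_interrupt(int irq, void *data)
	{
		/* acknowledge and handle the device event here */
		return IRQ_HANDLED;
	}

	static int foo_setup_irq(struct pci_dev *dev)
	{
		/* Enable MSI *before* request_irq() so that dev->irq
		 * already holds the MSI vector. If this fails, dev->irq
		 * still holds the pin-based IRQ and the device simply
		 * stays in INTx/pin assertion mode. */
		if (pci_enable_msi(dev))
			printk(KERN_INFO "foo: MSI not available, using INTx\n");

		return request_irq(dev->irq, foo_interrupt, IRQF_SHARED,
				   "foo", dev);
	}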

5.2.2 API pci_disable_msi

void pci_disable_msi(struct pci_dev *dev)

This API should always be used to undo the effect of pci_enable_msi()
when a device driver is unloading. This API restores dev->irq with
the pre-assigned IOAPIC vector and switches a device's interrupt
mode to PCI pin-irq assertion/INTx emulation mode.

Note that a device driver should always call free_irq() on the MSI vector
that it has done request_irq() on before calling this API. Failure to do
so results in a BUG_ON() and the device will be left with MSI enabled
and its vector leaked.
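
A matching teardown sketch, continuing the hypothetical "foo" driver
from the previous section, illustrates the required ordering:

	/* Release the MSI vector with free_irq() first, then disable MSI
	 * so that dev->irq reverts to the pre-assigned IOAPIC vector. */
	static void foo_teardown_irq(struct pci_dev *dev)
	{
		free_irq(dev->irq, dev);	/* same dev_id used in request_irq() */
		pci_disable_msi(dev);		/* back to INTx/pin assertion mode */
	}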

5.2.3 MSI mode vs. legacy mode diagram

The below diagram shows the events which switch the interrupt
mode on the MSI-capable device function between MSI mode and
PIN-IRQ assertion mode.

	 ------------   pci_enable_msi   --------------------------
	|            | <===============  |                          |
	|  MSI MODE  |                   |  PIN-IRQ ASSERTION MODE  |
	|            | ===============>  |                          |
	 ------------   pci_disable_msi  --------------------------

		Figure 1. MSI Mode vs. Legacy Mode

In Figure 1, a device operates by default in legacy mode. Legacy
in this context means PCI pin-irq assertion or PCI-Express INTx
emulation. A successful MSI request (using pci_enable_msi()) switches
a device's interrupt mode to MSI mode. The pre-assigned IOAPIC vector
stored in dev->irq will be saved by the PCI subsystem and a newly
assigned MSI vector will replace dev->irq.

To return back to its default mode, a device driver should always call
pci_disable_msi() to undo the effect of pci_enable_msi(). Note that a
device driver should always call free_irq() on the MSI vector it has
done request_irq() on before calling pci_disable_msi(). Failure to do
so results in a BUG_ON() and the device will be left with MSI enabled
and its vector leaked. Otherwise, the PCI subsystem restores a device's
dev->irq with the pre-assigned IOAPIC vector and marks the released
MSI vector as unused.

Once being marked as unused, there is no guarantee that the PCI
subsystem will reserve this MSI vector for a device. Depending on
the availability of current PCI vector resources and the number of
MSI/MSI-X requests from other drivers, this MSI may be re-assigned.

For the case where the PCI subsystem re-assigns this MSI vector to
another driver, a request to switch back to MSI mode may result
in being assigned a different MSI vector or a failure if no more
vectors are available.

5.3 Configuring for MSI-X support

Due to the ability of the system software to configure each vector of
the MSI-X capability structure with an independent message address
and message data, the non-contiguous fashion in vector assignment of
the existing Linux kernel has no impact on supporting multiple
messages on an MSI-X capable device function. To enable MSI-X on
a device function's MSI-X capability structure requires its device
driver to call the function pci_enable_msix() explicitly.

The function pci_enable_msix(), once invoked, enables either
all or nothing, depending on the current availability of PCI vector
resources. If the PCI vector resources are available for the number
of vectors requested by a device driver, this function will configure
the MSI-X table of the MSI-X capability structure of a device with
the requested messages. For example, a device may be capable of
supporting a maximum of 32 vectors while its software driver usually
requests only 4 vectors. It is recommended that the device driver
call this function once during the initialization phase of the device
driver.

Unlike the function pci_enable_msi(), the function pci_enable_msix()
does not replace the pre-assigned IOAPIC dev->irq with a new MSI
vector because the PCI subsystem writes the 1:1 vector-to-entry mapping
into the field 'vector' of each element contained in the second argument.
Note that the pre-assigned IOAPIC dev->irq is valid only if the device
operates in PIN-IRQ assertion mode. In MSI-X mode, any attempt at
using dev->irq by the device driver to request interrupt service
may result in unpredictable behavior.

For each MSI-X vector granted, a device driver is responsible for calling
other functions like request_irq(), enable_irq(), etc. to enable
this vector with its corresponding interrupt service handler. It is
a device driver's choice to assign all vectors the same
interrupt service handler or each vector a unique interrupt
service handler.

5.3.1 Handling MMIO address space of MSI-X Table

The PCI 3.0 specification has implementation notes that MMIO address
space for a device's MSI-X structure should be isolated so that the
software system can set different pages for controlling accesses to the
MSI-X structure. The implementation of MSI support requires the PCI
subsystem, not a device driver, to maintain full control of the MSI-X
table/MSI-X PBA (Pending Bit Array) and MMIO address space of the MSI-X
table/MSI-X PBA. A device driver is prohibited from requesting the MMIO
address space of the MSI-X table/MSI-X PBA. Otherwise, the PCI subsystem
will fail enabling MSI-X on its hardware device when it calls the function
pci_enable_msix().

5.3.2 Handling MSI-X allocation

Determining the number of MSI-X vectors allocated to a function is
dependent on the number of MSI capable devices and MSI-X capable
devices populated in the system. The policy of allocating MSI-X
vectors to a function is defined as the following:

	#of MSI-X vectors allocated to a function = (x - y)/z where

	x =	The number of available PCI vector resources by the time
		the device driver calls pci_enable_msix(). The PCI vector
		resources are the sum of the number of unassigned vectors
		(new) and the number of released vectors when any MSI/MSI-X
		device driver switches its hardware device back to a legacy
		mode or is hot-removed. The number of unassigned vectors
		may exclude some vectors reserved, as defined in the
		parameter NR_HP_RESERVED_VECTORS, for the case where the
		system is capable of supporting hot-add/hot-remove
		operations. Users may change the value defined in
		NR_HP_RESERVED_VECTORS to meet their specific needs.

	y =	The number of MSI capable devices populated in the system.
		This policy ensures that each MSI capable device has its
		vector reserved to avoid the case where some MSI-X capable
		drivers may attempt to claim all available vector resources.

	z =	The number of MSI-X capable devices populated in the system.
		This policy ensures that the maximum (x - y) is distributed
		evenly among MSI-X capable devices.

Note that the PCI subsystem scans y and z during a bus enumeration.
When the PCI subsystem completes configuring the MSI/MSI-X capability
structure of a device as requested by its device driver, y/z is
decremented accordingly.
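
As a worked example of this policy (the numbers are illustrative only):
if 130 PCI vectors are still available when a driver calls
pci_enable_msix() (x = 130), and the system contains 2 MSI capable
devices (y = 2) and 4 MSI-X capable devices (z = 4), then at most
(130 - 2)/4 = 32 MSI-X vectors can be allocated to each MSI-X capable
function.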

5.3.3 Handling MSI-X shortages

For the case where fewer MSI-X vectors are allocated to a function
than requested, the function pci_enable_msix() will return the
maximum number of MSI-X vectors available to the caller. A device
driver may re-send its request with a number of vectors less than or
equal to the returned value (a sketch of this retry logic follows the
list below). For example, if a device driver requests 5 vectors, but
the number of available vectors is 3, a value of 3 will be returned
by the pci_enable_msix() call. A function could be designed for its
driver to use only 3 MSI-X table entries in different combinations
such as ABC--, A-B-C, A--CB, etc. Note that this patch does not
support multiple entries with the same vector. An attempt by a device
driver to use 5 MSI-X table entries with 3 vectors as ABBCC, AABCC,
BCCBA, etc. will result in a failure of the function
pci_enable_msix(). Below are the reasons why supporting multiple
entries with the same vector is an undesirable solution.

	- The PCI subsystem cannot determine the entry that
	  generated the message to mask/unmask MSI while handling
	  the software driver ISR. Attempting to walk through all MSI-X
	  table entries (2048 max) to mask/unmask any matching vector
	  is an undesirable solution.

	- Walking through all MSI-X table entries (2048 max) to handle
	  SMP affinity of any matching vector is an undesirable solution.
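
The retry on a shortage can look like the following sketch (not taken
from a real driver; FOO_NVEC, struct foo_dev and the field names are
hypothetical, and the pci_enable_msix() return-value semantics assumed
are exactly the ones described above):

	#include <linux/pci.h>

	#define FOO_NVEC 5			/* vectors the driver would like */

	struct foo_dev {
		struct msix_entry msix[FOO_NVEC];
		int nvec;			/* vectors actually granted */
	};

	static int foo_enable_msix(struct foo_dev *foo, struct pci_dev *dev)
	{
		int i, err, nvec = FOO_NVEC;

		/* Each element must name a unique MSI-X table entry. */
		for (i = 0; i < FOO_NVEC; i++)
			foo->msix[i].entry = i;

		for (;;) {
			err = pci_enable_msix(dev, foo->msix, nvec);
			if (err == 0) {		/* all nvec vectors granted */
				foo->nvec = nvec;
				return 0;
			}
			if (err < 0)		/* hard failure */
				return err;
			nvec = err;		/* shortage: retry with what is available */
		}
	}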

5.3.4 API pci_enable_msix

int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)

This API enables a device driver to request the PCI subsystem
to enable MSI-X messages on its hardware device. Depending on
the availability of PCI vector resources, the PCI subsystem enables
either all or none of the requested vectors.

Argument 'dev' points to the device (pci_dev) structure.

Argument 'entries' is a pointer to an array of msix_entry structs.
The number of entries is indicated in argument 'nvec'.
struct msix_entry is defined in drivers/pci/msi.h:

	struct msix_entry {
		u16	vector; /* kernel uses to write allocated vector */
		u16	entry;	/* driver uses to specify entry */
	};

A device driver is responsible for initializing the field 'entry' of
each element with a unique entry supported by the MSI-X table. Otherwise,
-EINVAL will be returned. A successful return of zero indicates that
the PCI subsystem completed initializing each of the requested entries
of the MSI-X table with message address and message data.
Last but not least, the PCI subsystem will write the 1:1
vector-to-entry mapping into the field 'vector' of each element. A
device driver is responsible for keeping track of allocated MSI-X
vectors in its internal data structure.

A return of zero indicates that the requested number of MSI-X vectors
was successfully allocated. A return of greater than zero indicates an
MSI-X vector shortage. A return of less than zero indicates
a failure. This failure may be a result of duplicate entries
specified in the second argument, of no available vectors,
or of failing to initialize MSI-X table entries.
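
Once pci_enable_msix() has returned zero, the driver registers a
handler per granted vector, using the 'vector' value written by the
kernel into each element. A minimal sketch, continuing the hypothetical
foo_* names and struct foo_dev from the sketch in section 5.3.3
(foo_msix_isr() is a placeholder handler):

	#include <linux/interrupt.h>

	static irqreturn_t foo_msix_isr(int irq, void *data)
	{
		/* per-vector handling goes here */
		return IRQ_HANDLED;
	}

	static int foo_request_vectors(struct foo_dev *foo)
	{
		int i, err;

		for (i = 0; i < foo->nvec; i++) {
			/* The kernel wrote the allocated vector into
			 * msix[i].vector; MSI-X vectors are never shared. */
			err = request_irq(foo->msix[i].vector, foo_msix_isr,
					  0, "foo-msix", foo);
			if (err)
				goto undo;
		}
		return 0;

	undo:
		while (--i >= 0)
			free_irq(foo->msix[i].vector, foo);
		return err;
	}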

5.3.5 API pci_disable_msix

void pci_disable_msix(struct pci_dev *dev)

This API should always be used to undo the effect of pci_enable_msix()
when a device driver is unloading. Note that a device driver should
always call free_irq() on all MSI-X vectors it has done request_irq()
on before calling this API. Failure to do so results in a BUG_ON() and
the device will be left with MSI-X enabled and its vectors leaked.
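
The corresponding teardown, again as a hedged sketch reusing the
hypothetical foo_* structures from the earlier sketches:

	static void foo_disable_msix(struct foo_dev *foo, struct pci_dev *dev)
	{
		int i;

		/* Free every granted MSI-X vector first ... */
		for (i = 0; i < foo->nvec; i++)
			free_irq(foo->msix[i].vector, foo);

		/* ... then switch the function back to legacy mode. */
		pci_disable_msix(dev);
	}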

5.3.6 MSI-X mode vs. legacy mode diagram

The below diagram shows the events which switch the interrupt
mode on the MSI-X capable device function between MSI-X mode and
PIN-IRQ assertion mode (legacy).

	 --------------   pci_enable_msix(,,n)  --------------------------
	|              | <===============  |                          |
	|  MSI-X MODE  |                   |  PIN-IRQ ASSERTION MODE  |
	|              | ===============>  |                          |
	 --------------   pci_disable_msix  --------------------------

		Figure 2. MSI-X Mode vs. Legacy Mode

In Figure 2, a device operates by default in legacy mode. A
successful MSI-X request (using pci_enable_msix()) switches a
device's interrupt mode to MSI-X mode. The pre-assigned IOAPIC vector
stored in dev->irq will be saved by the PCI subsystem; however,
unlike MSI mode, the PCI subsystem will not replace dev->irq with an
assigned MSI-X vector because the PCI subsystem already writes the 1:1
vector-to-entry mapping into the field 'vector' of each element
specified in the second argument.

To return back to its default mode, a device driver should always call
pci_disable_msix() to undo the effect of pci_enable_msix(). Note that
a device driver should always call free_irq() on all MSI-X vectors it
has done request_irq() on before calling pci_disable_msix(). Failure
to do so results in a BUG_ON() and the device will be left with MSI-X
enabled and its vectors leaked. Otherwise, the PCI subsystem switches a
device function's interrupt mode from MSI-X mode to legacy mode and
marks all allocated MSI-X vectors as unused.

Once being marked as unused, there is no guarantee that the PCI
subsystem will reserve these MSI-X vectors for a device. Depending on
the availability of current PCI vector resources and the number of
MSI/MSI-X requests from other drivers, these MSI-X vectors may be
re-assigned.

For the case where the PCI subsystem re-assigns these MSI-X vectors
to other drivers, a request to switch back to MSI-X mode may result
in being assigned another set of MSI-X vectors or a failure if no
more vectors are available.

5.4 Handling a function implementing both MSI and MSI-X capabilities

For the case where a function implements both MSI and MSI-X
capabilities, the PCI subsystem enables a device to run either in MSI
mode or MSI-X mode but not both. A device driver determines whether it
wants MSI or MSI-X enabled on its hardware device. Once a device
driver requests MSI, for example, it is prohibited from requesting
MSI-X; in other words, a device driver is not permitted to ping-pong
between MSI mode and MSI-X mode during run-time.
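
Drivers for such hardware therefore commit to a single mode at
initialization time. One common pattern is sketched below, reusing the
hypothetical foo_enable_msix() from section 5.3.3; this is an
illustration of how a driver may choose a mode, not a requirement of
the API:

	static void foo_choose_irq_mode(struct foo_dev *foo, struct pci_dev *dev)
	{
		if (foo_enable_msix(foo, dev) == 0)
			return;			/* running in MSI-X mode */

		if (pci_enable_msi(dev) == 0)
			return;			/* running in MSI mode */

		/* Neither mode could be enabled: the function simply stays
		 * in PIN-IRQ assertion/INTx emulation mode. */
	}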

5.5 Hardware requirements for MSI/MSI-X support

MSI/MSI-X support requires support from both system hardware and
individual hardware device functions.

5.5.1 System hardware support

Since the target of an MSI address is the local APIC of a CPU, enabling
MSI/MSI-X support in the Linux kernel is dependent on whether existing
system hardware supports a local APIC. Users should verify that their
system supports local APIC operation by testing that it runs when
CONFIG_X86_LOCAL_APIC=y.

In an SMP environment, CONFIG_X86_LOCAL_APIC is automatically set;
however, in a UP environment, users must manually set
CONFIG_X86_LOCAL_APIC. Once CONFIG_X86_LOCAL_APIC=y, setting
CONFIG_PCI_MSI enables the VECTOR based scheme and the option for
MSI-capable device drivers to selectively enable MSI/MSI-X.

Note that the CONFIG_X86_IO_APIC setting is irrelevant because an
MSI/MSI-X vector is allocated anew during runtime and MSI/MSI-X support
does not depend on BIOS support. This key independency enables MSI/MSI-X
support on future IOxAPIC free platforms.

5.5.2 Device hardware support

The hardware device function supports MSI by indicating the
MSI/MSI-X capability structure on its PCI capability list. By
default, this capability structure will not be initialized by
the kernel to enable MSI during the system boot. In other words,
the device function is running in its default pin assertion mode.
Note that in many cases the hardware supporting MSI has bugs,
which may result in system hangs. The software driver of specific
MSI-capable hardware is responsible for deciding whether to call
pci_enable_msi or not. A return of zero indicates the kernel
successfully initialized the MSI/MSI-X capability structure of the
device function. The device function is now running in MSI/MSI-X mode.

5.6 How to tell whether MSI/MSI-X is enabled on a device function

At the driver level, a return of zero from the function call of
pci_enable_msi()/pci_enable_msix() indicates to a device driver that
its device function is initialized successfully and ready to run in
MSI/MSI-X mode.

At the user level, users can use the command 'cat /proc/interrupts'
to display the vectors allocated for devices and their interrupt
MSI/MSI-X modes ("PCI-MSI"/"PCI-MSI-X"). The output below shows MSI
mode enabled on a SCSI Adaptec 39320D Ultra320 controller.

	           CPU0       CPU1
	  0:     324639          0    IO-APIC-edge   timer
	  1:       1186          0    IO-APIC-edge   i8042
	  2:          0          0          XT-PIC   cascade
	 12:       2797          0    IO-APIC-edge   i8042
	 14:       6543          0    IO-APIC-edge   ide0
	 15:          1          0    IO-APIC-edge   ide1
	169:          0          0   IO-APIC-level   uhci-hcd
	185:          0          0   IO-APIC-level   uhci-hcd
	193:        138         10         PCI-MSI   aic79xx
	201:         30          0         PCI-MSI   aic79xx
	225:         30          0   IO-APIC-level   aic7xxx
	233:         30          0   IO-APIC-level   aic7xxx
	NMI:          0          0
	LOC:     324553     325068
	ERR:          0
	MIS:          0

6. MSI quirks

Several PCI chipsets or devices are known not to support MSI.
The PCI stack provides three possible levels of MSI disabling:

	* on a single device
	* on all devices behind a specific bridge
	* globally

6.1. Disabling MSI on a single device

Under some circumstances it might be required to disable MSI on a
single device. This may be achieved either by not calling
pci_enable_msi() at all, or by setting the pci_dev->no_msi flag
beforehand (most of the time in a quirk).
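
As an illustration of the quirk approach, a sketch along the lines of
the existing PCI fixups might look like the following; the vendor and
device IDs and the quirk name are placeholders, not real entries from
drivers/pci/quirks.c:

	#include <linux/pci.h>

	/* Hypothetical quirk: keep MSI permanently disabled on one device. */
	static void quirk_foo_no_msi(struct pci_dev *dev)
	{
		dev->no_msi = 1;
	}
	DECLARE_PCI_FIXUP_FINAL(0x1234 /* vendor */, 0x5678 /* device */,
				quirk_foo_no_msi);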

6.2. Disabling MSI below a bridge

The vast majority of MSI quirks are required by PCI bridges not
being able to route MSI between busses. In this case, MSI has to be
disabled on all devices behind this bridge. This is achieved by setting
the PCI_BUS_FLAGS_NO_MSI flag in the pci_bus->bus_flags of the bridge's
subordinate bus. There is no need to set the same flag on bridges that
are below the broken bridge. When pci_enable_msi() is called to enable
MSI on a device, pci_msi_supported() takes care of checking the NO_MSI
flag in all parent busses of the device.
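
A quirk implementing this could be sketched as follows; the IDs are
again placeholders, and real examples live in drivers/pci/quirks.c:

	/* Hypothetical quirk: disable MSI for everything behind this bridge. */
	static void quirk_foo_bridge_no_msi(struct pci_dev *dev)
	{
		if (dev->subordinate)
			dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
	}
	DECLARE_PCI_FIXUP_FINAL(0x1234 /* vendor */, 0xabcd /* bridge device */,
				quirk_foo_bridge_no_msi);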

Some bridges actually support dynamic MSI support enabling/disabling
by changing some bits in their PCI configuration space (especially
the Hypertransport chipsets such as the nVidia nForce and Serverworks
HT2000). It may then be required to update the NO_MSI flag on the
corresponding devices in the sysfs hierarchy. To enable MSI support
on device "0000:00:0e", do:

	echo 1 > /sys/bus/pci/devices/0000:00:0e/msi_bus

To disable MSI support, echo 0 instead of 1. Note that it should be
used with caution since changing this value might break interrupts.

6.3. Disabling MSI globally

Some extreme cases may require disabling MSI globally on the system.
For now, the only known case is a Serverworks PCI-X chipset (MSI is
not supported on several busses that are not all connected to the
chipset in the Linux PCI hierarchy). In the vast majority of other
cases, disabling MSI only behind a specific bridge is enough.

For debugging purposes, the user may also pass pci=nomsi on the kernel
command line to explicitly disable MSI globally. But, once the
appropriate quirks are added to the kernel, this option should not be
required anymore.

6.4. Finding why MSI cannot be enabled on a device

Assuming that MSI is not enabled on a device, you should look at
dmesg to find messages that quirks may output when disabling MSI
on some devices, some bridges or even globally.

Then, lspci -t gives the list of bridges above a device. Reading
/sys/bus/pci/devices/0000:00:0e/msi_bus will tell you whether MSI
is enabled (1) or disabled (0). If a 0 is found in any single bridge's
msi_bus file above the device, MSI cannot be enabled.

7. FAQ

Q1. Are there any limitations on using MSI?

A1. If the PCI device supports MSI and conforms to the
specification and the platform supports the APIC local bus,
then using MSI should work.

Q2. Will it work on all the Pentium processors (P3, P4, Xeon,
AMD processors)? In P3 IPI's are transmitted on the APIC local
bus and in P4 and Xeon they are transmitted on the system
bus. Are there any implications with this?

A2. MSI support enables a PCI device sending an inbound
memory write (0xfeexxxxx as target address) on its PCI bus
directly to the FSB. Since the message address has the
redirection hint bit cleared, it should work.

Q3. The target address 0xfeexxxxx will be translated by the
Host Bridge into an interrupt message. Are there any
limitations on the chipsets such as Intel 8xx, Intel e7xxx,
or VIA?

A3. If these chipsets support an inbound memory write with
target address set as 0xfeexxxxx, as conforming to the PCI
Specification Revision 2.3 or later, then it should work.

Q4. From the driver point of view, if the MSI is lost because
of errors occurring during inbound memory write, then it may
wait forever. Is there a mechanism for it to recover?

A4. Since the target of the transaction is an inbound memory
write, all transaction termination conditions (Retry,
Master-Abort, Target-Abort, or normal completion) are
supported. A device sending an MSI must abide by all the PCI
rules and conditions regarding that inbound memory write. So,
if a retry is signaled it must retry, etc... We believe that
the recommendation for Abort is also a retry (refer to the PCI
Specification Revision 2.3 or later).