.. SPDX-License-Identifier: GPL-2.0+

======================================================
IBM Virtual Management Channel Kernel Driver (IBMVMC)
======================================================

:Authors:
    Dave Engebretsen <engebret@us.ibm.com>,
    Adam Reznechek <adreznec@linux.vnet.ibm.com>,
    Steven Royer <seroyer@linux.vnet.ibm.com>,
    Bryant G. Ly <bryantly@linux.vnet.ibm.com>

Introduction
============

Note: Knowledge of virtualization technology is required to understand
this document.

A good reference document would be:

https://openpowerfoundation.org/wp-content/uploads/2016/05/LoPAPR_DRAFT_v11_24March2016_cmt1.pdf

The Virtual Management Channel (VMC) is a logical device which provides an
interface between the hypervisor and a management partition. This interface
is like a message passing interface. This management partition is intended
to provide an alternative to Hardware Management Console (HMC)-based system
management.

The primary hardware management solution that is developed by IBM relies
on an appliance server named the Hardware Management Console (HMC),
packaged as an external tower or rack-mounted personal computer. In a
Power Systems environment, a single HMC can manage multiple POWER
processor-based systems.

Management Application
----------------------

In the management partition, a management application exists which enables
a system administrator to configure the system's partitioning
characteristics via a command line interface (CLI) or Representational
State Transfer (REST) APIs.

The management application runs on a Linux logical partition on a
POWER8 or newer processor-based server that is virtualized by PowerVM.
System configuration, maintenance, and control functions which
traditionally require an HMC can be implemented in the management
application using a combination of HMC to hypervisor interfaces and
existing operating system methods. This tool provides a subset of the
functions implemented by the HMC and enables basic partition configuration.

The set of HMC to hypervisor messages supported by the management
application component is passed to the hypervisor over a VMC interface,
which is defined below.

The VMC enables the management partition to provide basic partitioning
functions:

- Logical Partitioning Configuration
- Start and stop actions for individual partitions
- Display of partition status
- Management of virtual Ethernet
- Management of virtual Storage
- Basic system management

Virtual Management Channel (VMC)
--------------------------------

A logical device, called the Virtual Management Channel (VMC), is defined
for communicating between the management application and the hypervisor. It
basically creates the pipes that enable virtualization management
software. This device is presented to a designated management partition as
a virtual device.

This communication device uses Command/Response Queue (CRQ) and the
Remote Direct Memory Access (RDMA) interfaces. A three-way handshake is
defined that must take place to establish that both the hypervisor and
management partition sides of the channel are running prior to
sending/receiving any of the protocol messages.

This driver also utilizes Transport Event CRQs. CRQ messages are sent
when the hypervisor detects one of the peer partitions has abnormally
terminated, or one side has called H_FREE_CRQ to close their CRQ.

Two new classes of CRQ messages are introduced for the VMC device. VMC
Administrative messages are used by each partition using the VMC to
communicate capabilities to its partner. HMC Interface messages are used
for the actual flow of HMC messages between the management partition and
the hypervisor. As most HMC messages far exceed the size of a CRQ buffer,
a virtual DMA (RDMA) of the HMC message data is done prior to each HMC
Interface CRQ message. Only the management partition drives RDMA
operations; hypervisors never directly cause the movement of message data.
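
To make the two message classes concrete, the sketch below shows what a
16-byte CRQ entry carrying either class might look like. This is
illustrative only: the field names and layout here are hypothetical, not
the driver's actual definitions (those live in the driver source)::

    #include <linux/types.h>

    /*
     * Hypothetical CRQ entry layout, for illustration only; the
     * driver's real message definitions differ in names and detail.
     */
    struct vmc_crq_entry_sketch {
            __u8   valid;        /* CRQ entry valid/transport-event marker */
            __u8   type;         /* selects the class: administrative
                                  * (e.g. capabilities) vs. HMC Interface
                                  * (e.g. signal, add buffer) */
            __u8   status;       /* response status code */
            __u8   rsvd;
            __be16 hmc_session;  /* session value for this HMC connection */
            __be16 hmc_index;    /* HMC index for this connection */
            __be64 var;          /* message-specific data, e.g. the id of
                                  * the buffer just written via RDMA */
    };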

Terminology
-----------

RDMA
    Remote Direct Memory Access is DMA transfer from the server to its
    client or from the server to its partner partition. DMA refers
    to both physical I/O to and from memory operations and to memory
    to memory move operations.
CRQ
    Command/Response Queue is a facility which is used to communicate
    between partner partitions. Transport events which are signaled
    from the hypervisor to the partition are also reported in this queue.

Example Management Partition VMC Driver Interface
=================================================

This section provides an example for the management application
implementation where a device driver is used to interface to the VMC
device. This driver consists of a new device, for example /dev/ibmvmc,
which provides interfaces to open, close, read, write, and perform
ioctls against the VMC device.

VMC Interface Initialization
----------------------------

The device driver is responsible for initializing the VMC when the driver
is loaded. It first creates and initializes the CRQ. Next, an exchange of
VMC capabilities is performed to indicate the code version and number of
resources available in both the management partition and the hypervisor.
Finally, the hypervisor requests that the management partition create an
initial pool of VMC buffers, one buffer for each possible HMC connection,
which will be used for management application session initialization.

Prior to completion of this initialization sequence, the device returns
EBUSY to open() calls. EIO is returned for all open() failures.

::

    Management Partition                  Hypervisor
                    CRQ INIT
    ---------------------------------------->
               CRQ INIT COMPLETE
    <----------------------------------------
                  CAPABILITIES
    ---------------------------------------->
             CAPABILITIES RESPONSE
    <----------------------------------------
         ADD BUFFER (HMC IDX=0,1,..)           _
    <----------------------------------------  |
             ADD BUFFER RESPONSE               | - Perform # HMCs Iterations
    ---------------------------------------->  -
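
Because open() fails with EBUSY until this handshake has completed, a
management application will typically retry the open rather than treat the
failure as fatal. A minimal userspace sketch follows; the retry count and
one-second interval are arbitrary choices, not driver requirements::

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Retry open() while the driver is still initializing the VMC
     * (EBUSY); any other failure is reported as EIO by the driver,
     * so retrying would not help. Illustrative sketch only.
     */
    static int vmc_open_retry(const char *path, int max_tries)
    {
            for (int i = 0; i < max_tries; i++) {
                    int fd = open(path, O_RDWR);

                    if (fd >= 0)
                            return fd;
                    if (errno != EBUSY)
                            break;      /* EIO: give up immediately */
                    sleep(1);           /* arbitrary retry interval */
            }
            perror("open /dev/ibmvmc");
            return -1;
    }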

VMC Interface Open
------------------

After the basic VMC channel has been initialized, an HMC session level
connection can be established. The application layer performs an open() to
the VMC device and executes an ioctl() against it, indicating the HMC ID
(32 bytes of data) for this session. If the VMC device is in an invalid
state, EIO will be returned for the ioctl(). The device driver creates a
new HMC session value (ranging from 1 to 255) and HMC index value (starting
at index 0 and ranging to 254) for this HMC ID. The driver then does an
RDMA of the HMC ID to the hypervisor, and then sends an Interface Open
message to the hypervisor to establish the session over the VMC. After the
hypervisor receives this information, it sends Add Buffer messages to the
management partition to seed an initial pool of buffers for the new HMC
connection. Finally, the hypervisor sends an Interface Open Response
message, to indicate that it is ready for normal runtime messaging. The
following illustrates this VMC flow:

::

    Management Partition                  Hypervisor
               RDMA HMC ID
    ---------------------------------------->
              Interface Open
    ---------------------------------------->
                Add Buffer                     _
    <----------------------------------------  |
            Add Buffer Response                | - Perform N Iterations
    ---------------------------------------->  -
         Interface Open Response
    <----------------------------------------
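
From the application's point of view, this entire flow is driven by a
single open() and ioctl() pair. A hedged sketch follows; VMC_IOCTL_SETHMCID
is a stand-in name for the driver-defined ioctl request code, which should
be taken from the driver's header rather than from this example::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define HMC_ID_LEN 32           /* the HMC ID is 32 bytes of data */

    /* Placeholder only: use the request code exported by the driver. */
    #define VMC_IOCTL_SETHMCID 0

    int main(void)
    {
            char hmc_id[HMC_ID_LEN] = "example-hmc-id";  /* illustrative */
            int fd = open("/dev/ibmvmc", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* EIO here indicates the VMC device is in an invalid state. */
            if (ioctl(fd, VMC_IOCTL_SETHMCID, hmc_id) < 0) {
                    perror("ioctl");
                    close(fd);
                    return 1;
            }

            /* Session established: read()/write() may now be used. */
            close(fd);
            return 0;
    }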

VMC Interface Runtime
---------------------

During normal runtime, the management application and the hypervisor
exchange HMC messages via the Signal VMC message and RDMA operations. When
sending data to the hypervisor, the management application performs a
write() to the VMC device, and the driver RDMA's the data to the hypervisor
and then sends a Signal Message. If a write() is attempted before VMC
device buffers have been made available by the hypervisor, or no buffers
are currently available, EBUSY is returned in response to the write(). A
write() will return EIO for all other errors, such as an invalid device
state.

When the hypervisor sends a message to the management partition, the data
is put into a VMC buffer and a Signal Message is sent to the VMC driver in
the management partition. The driver RDMA's the buffer into the partition
and passes the data up to the appropriate management application via a
read() to the VMC device. The read() request blocks if there is no buffer
available to read. The management application may use select() to wait for
the VMC device to become ready with data to read.

::

    Management Partition                  Hypervisor
                MSG RDMA
    ---------------------------------------->
               SIGNAL MSG
    ---------------------------------------->
               SIGNAL MSG
    <----------------------------------------
                MSG RDMA
    <----------------------------------------
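
A minimal sketch of this request/response pattern, assuming fd refers to
an already-opened session on /dev/ibmvmc (see the previous example)::

    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    /* Send one HMC message, then wait in select() until the
     * hypervisor's Signal Message makes response data readable.
     * Illustrative sketch only; a real application would retry
     * a write() that fails with EBUSY (no buffer available yet).
     */
    static ssize_t vmc_transact(int fd, const void *msg, size_t len,
                                void *resp, size_t resp_len)
    {
            fd_set rfds;

            if (write(fd, msg, len) < 0) {
                    perror("write");   /* EBUSY: no buffer; EIO: other */
                    return -1;
            }

            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) {
                    perror("select");
                    return -1;
            }

            /* Data is ready, so this read() will not block. */
            return read(fd, resp, resp_len);
    }

Closing the descriptor afterwards tears the session down, as described in
the next section.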

VMC Interface Close
-------------------

HMC session level connections are closed by the management partition when
the application layer performs a close() against the device. This action
results in an Interface Close message flowing to the hypervisor, which
causes the session to be terminated. The device driver must free any
storage allocated for buffers for this HMC connection.

::

    Management Partition                  Hypervisor
             INTERFACE CLOSE
    ---------------------------------------->
         INTERFACE CLOSE RESPONSE
    <----------------------------------------

Additional Information
======================

For more information on the documentation for CRQ Messages, VMC Messages,
HMC interface Buffers, and signal messages, please refer to the Linux on
Power Architecture Platform Reference, Section F.