.. _page_migration:

==============
Page migration
==============

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

Also see :ref:`Heterogeneous Memory Management (HMM) <hmm>`
for migrating pages to or from device private memory.

The main intent of page migration is to reduce the latency of memory accesses
by moving pages near to the processor where the process accessing that memory
is running.

Page migration allows a process to manually relocate the node on which its
pages are located through the MF_MOVE and MF_MOVE_ALL options while setting
a new memory policy via mbind(). The pages of a process can also be relocated
by another process using the sys_migrate_pages() function call. The
migrate_pages() function call takes two sets of nodes and moves pages of a
process that are located on the from nodes to the destination nodes.

Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which offers an interface similar to other NUMA functionality for page
migration. ``cat /proc/<pid>/numa_maps`` allows an easy review of where the
pages of a process are located. See also the numa_maps documentation in the
proc(5) man page.
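
For example, a process can combine a new memory policy with a request to move
the pages it has already touched. The sketch below is illustrative only: it
assumes the ``<numaif.h>`` header from libnuma, a system where node 1 is
online, and it keeps error handling to a minimum (link with ``-lnuma``)::

  /*
   * Illustrative sketch: bind an existing buffer to node 1 and ask the
   * kernel to also migrate any pages that already live elsewhere.
   */
  #include <numaif.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 64UL << 20;                /* 64 MB */
          unsigned long nodemask = 1UL << 1;      /* node 1 only (assumed to exist) */
          char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (buf == MAP_FAILED)
                  return 1;
          memset(buf, 0, len);                    /* fault the pages in */

          /* MPOL_MF_MOVE migrates pages mapped only by this process. */
          if (mbind(buf, len, MPOL_BIND, &nodemask,
                    sizeof(nodemask) * 8, MPOL_MF_MOVE) != 0) {
                  perror("mbind");
                  return 1;
          }
          printf("buffer is now bound to (and resident on) node 1\n");
          munmap(buf, len);
          return 0;
  }
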

Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. A special function call
"move_pages" allows the moving of individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.
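
The move_pages() call mentioned above can be driven from such a profiler. The
sketch below is again only an illustration: it assumes the libnuma wrapper for
move_pages() from ``<numaif.h>`` and that node 0 is online (link with
``-lnuma``)::

  /*
   * Illustrative sketch: move a single page of the calling process to
   * node 0 and report where it ended up.
   */
  #include <numaif.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
          long page_size = sysconf(_SC_PAGESIZE);
          void *page = aligned_alloc(page_size, page_size);
          void *pages[1] = { page };
          int nodes[1] = { 0 };           /* desired target node (assumed online) */
          int status[1] = { -1 };         /* per-page result */

          if (!page)
                  return 1;
          *(volatile char *)page = 1;     /* fault the page in first */

          /* pid == 0 means "the calling process". */
          if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) != 0) {
                  perror("move_pages");
                  return 1;
          }
          printf("page now resides on node %d\n", status[0]);
          free(page);
          return 0;
  }
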

Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (See
:ref:`CPUSETS <cpusets>`).

Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then all its pages are moved with it so that the
performance of the process does not sink dramatically. The pages of
processes in a cpuset are also moved if the allowed memory nodes of the
cpuset are changed.

Page migration preserves the relative location of pages within a group of
nodes: all migration techniques preserve the memory allocation pattern that a
process has generated, even after the process is migrated. This is necessary
in order to preserve the memory latencies, so processes will run with similar
performance after migration.

Page migration occurs in several steps. First comes a high level description
for those trying to use migrate_pages() from the kernel (for userspace usage
see Andi Kleen's numactl package mentioned above), followed by a low level
description of how the details work.

In kernel use of migrate_pages()
================================

1. Remove pages from the LRU.

   Lists of pages to be migrated are generated by scanning over
   pages and moving them into lists. This is done by
   calling isolate_lru_page().
   Calling isolate_lru_page() increases the references to the page
   so that it cannot vanish while the page migration occurs.
   It also prevents the swapper or other scans from encountering
   the page.

2. We need to have a function of type new_page_t that can be
   passed to migrate_pages(). This function should figure out
   how to allocate the correct new page given the old page.

3. The migrate_pages() function is called which attempts
   to do the migration. It will call the function to allocate
   the new page for each page that is considered for
   moving.
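
A minimal sketch that puts these three steps together from kernel context is
shown below. It is an illustration only, not code from mm/: prototypes such as
migrate_pages(), new_page_t and the isolation helpers have changed across
kernel versions, and the helper migrate_list_to_node() is invented here, so
verify everything against the headers of the kernel you are actually
targeting::

  /*
   * Illustrative in-kernel sketch only; exact prototypes differ between
   * kernel versions.  'migrate_list_to_node' is a made-up helper name.
   */
  #include <linux/gfp.h>
  #include <linux/migrate.h>
  #include <linux/mm.h>

  /* Step 2: a new_page_t callback; 'private' carries the target node. */
  static struct page *new_page_on_node(struct page *old, unsigned long private)
  {
          return alloc_pages_node((int)private, GFP_HIGHUSER_MOVABLE, 0);
  }

  /*
   * Step 3: migrate everything already collected on 'pagelist'.
   * Step 1 happened while the list was built: each page was taken off
   * the LRU with isolate_lru_page() and added via
   * list_add_tail(&page->lru, pagelist).
   */
  static int migrate_list_to_node(struct list_head *pagelist, int nid)
  {
          int err;

          err = migrate_pages(pagelist, new_page_on_node, NULL,
                              (unsigned long)nid, MIGRATE_SYNC, MR_SYSCALL);
          if (err)
                  /* Leftover pages go back to where they came from. */
                  putback_movable_pages(pagelist);
          return err;
  }
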

How migrate_pages() works
=========================

migrate_pages() does several passes over its list of pages. A page is moved
if all references to the page are removable at the time. The page has
already been removed from the LRU via isolate_lru_page() and the refcount
is increased so that the page cannot be freed while page migration occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses to
   this (not yet up-to-date) page immediately block while the move is in
   progress.

4. All the page table references to the page are converted to migration
   entries. This decreases the mapcount of a page. If the resulting
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock
   or wait for the migration page table entry to be removed.

5. The i_pages lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain.
   Otherwise, we know that we are the only one referencing this page.

7. The radix tree is checked and if it does not contain the pointer to this
   page then we back out because someone else modified the radix tree.

8. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

9. The radix tree is changed to point to the new page.

10. The reference count of the old page is dropped because the address space
    reference is gone. A reference to the new page is established because
    the new page is referenced by the address space.

11. The i_pages lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page does
    not provide any information anymore.

15. Queued up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace them
    with real ptes. Doing so will enable access for user space processes not
    already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper,
    etc. again.

Non-LRU page migration
======================

Although migration originally aimed for reducing the latency of memory accesses
for NUMA, compaction also uses migration to create high-order pages.

A current problem of the implementation is that it is designed to migrate only
*LRU* pages. However, there are potential non-LRU pages which can be migrated
by drivers, for example, zsmalloc and virtio-balloon pages.

For virtio-balloon pages, some parts of the migration code path were hooked up
and virtio-balloon specific functions were added to intercept the migration
logic. This approach is too specific to one driver, so other drivers that want
to make their pages movable would have to add their own specific hooks in the
migration path.

To overcome the problem, VM supports non-LRU page migration, which provides
generic functions for non-LRU movable pages without driver specific hooks
in the migration path.

If a driver wants to make its pages movable, it should define three functions
which are function pointers of struct address_space_operations (a sketch of a
hypothetical driver follows the list below).

1. ``bool (*isolate_page) (struct page *page, isolate_mode_t mode);``

   What VM expects from the driver's isolate_page() function is to return
   *true* if the driver isolates the page successfully. On returning true, VM
   marks the page as PG_isolated so that concurrent isolation on several CPUs
   skips the page. If a driver cannot isolate the page, it should return
   *false*.

   Once the page is successfully isolated, VM uses the page.lru fields, so the
   driver shouldn't expect to preserve values in those fields.

2. ``int (*migratepage) (struct address_space *mapping,``
   ``struct page *newpage, struct page *oldpage, enum migrate_mode);``

   After isolation, VM calls the driver's migratepage() with the isolated page.
   The function of migratepage() is to move the contents of the old page to the
   new page and to set up the fields of struct page newpage. Keep in mind that
   you should indicate to the VM that the oldpage is no longer movable via
   __ClearPageMovable() under page_lock if you migrated the oldpage
   successfully and returned MIGRATEPAGE_SUCCESS. If the driver cannot migrate
   the page at the moment, it can return -EAGAIN. On -EAGAIN, VM will retry
   page migration in a short time because VM interprets -EAGAIN as "temporary
   migration failure". On returning any error except -EAGAIN, VM will give up
   the page migration without retrying.

   The driver shouldn't touch the page.lru field while in the migratepage()
   function.

3. ``void (*putback_page)(struct page *);``

   If migration fails on the isolated page, VM should return the isolated page
   to the driver, so VM calls the driver's putback_page() with the isolated
   page. In this function, the driver should put the isolated page back into
   its own data structure.

4. non-LRU movable page flags

   There are two page flags for supporting non-LRU movable pages.

   * PG_movable

     The driver should use the function below to make a page movable under
     page_lock::

        void __SetPageMovable(struct page *page, struct address_space *mapping)

     It needs the address_space argument for registering the migration family
     of functions which will be called by VM. Exactly speaking, PG_movable is
     not a real flag of struct page. Rather, VM reuses the lower bits of
     page->mapping to represent it::

        #define PAGE_MAPPING_MOVABLE 0x2
        page->mapping = page->mapping | PAGE_MAPPING_MOVABLE;

     so the driver shouldn't access page->mapping directly. Instead, the driver
     should use page_mapping(), which masks off the low two bits of
     page->mapping under the page lock so it can get the right struct
     address_space.

     For testing of non-LRU movable pages, VM supports the __PageMovable()
     function. However, it doesn't guarantee to identify non-LRU movable pages
     because the page->mapping field is unified with other variables in struct
     page. If the driver releases the page after isolation by VM, page->mapping
     doesn't have a stable value although it has PAGE_MAPPING_MOVABLE set
     (look at __ClearPageMovable). But __PageMovable() is cheap to call whether
     the page is LRU or non-LRU movable once the page has been isolated because
     LRU pages can never have PAGE_MAPPING_MOVABLE set in page->mapping. It is
     also good for just peeking to test for non-LRU movable pages before more
     expensive checking with lock_page() in pfn scanning to select a victim.

     For guaranteeing a non-LRU movable page, VM provides the PageMovable()
     function. Unlike __PageMovable(), PageMovable() validates page->mapping
     and mapping->a_ops->isolate_page under lock_page(). The lock_page()
     prevents sudden destruction of page->mapping.

     Drivers using __SetPageMovable() should clear the flag via
     __ClearPageMovable() under page_lock() before releasing the page.

   * PG_isolated

     To prevent concurrent isolation among several CPUs, VM marks the isolated
     page as PG_isolated under lock_page(). So if a CPU encounters a
     PG_isolated non-LRU movable page, it can skip it. The driver doesn't need
     to manipulate the flag because VM will set/clear it automatically. Keep in
     mind that if the driver sees a PG_isolated page, it means the page has
     been isolated by the VM, so it shouldn't touch the page.lru field. The
     PG_isolated flag is aliased with the PG_reclaim flag, so drivers shouldn't
     use PG_isolated for their own purposes.
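
To give a rough picture of how the callbacks and flags above fit together, the
fragment below sketches a hypothetical driver. Every ``foo_*`` name is invented
for illustration, and the hooks shown are the address_space_operations members
described in this document; later kernels have reorganized these hooks, so
check your kernel's headers before reusing anything::

  /*
   * Hypothetical driver sketch; all 'foo' names are invented.  It wires
   * up the three callbacks described above and marks/unmarks its pages
   * with __SetPageMovable()/__ClearPageMovable() under the page lock.
   */
  #include <linux/highmem.h>
  #include <linux/migrate.h>
  #include <linux/mm.h>
  #include <linux/pagemap.h>

  static bool foo_isolate_page(struct page *page, isolate_mode_t mode)
  {
          /* Detach the page from the driver's own lists; true on success. */
          return true;
  }

  static int foo_migratepage(struct address_space *mapping,
                             struct page *newpage, struct page *page,
                             enum migrate_mode mode)
  {
          /* Move contents (and any driver metadata) to the new page. */
          copy_highpage(newpage, page);

          /* Tell the VM the old page is no longer movable. */
          __ClearPageMovable(page);
          return MIGRATEPAGE_SUCCESS;     /* or -EAGAIN for a temporary failure */
  }

  static void foo_putback_page(struct page *page)
  {
          /* Migration failed: reinsert the page into the driver's structures. */
  }

  static const struct address_space_operations foo_aops = {
          .isolate_page   = foo_isolate_page,
          .migratepage    = foo_migratepage,
          .putback_page   = foo_putback_page,
  };

  /*
   * Called when the driver creates a page it wants the VM to migrate;
   * 'mapping->a_ops' is assumed to point at foo_aops.
   */
  static void foo_make_page_movable(struct page *page,
                                    struct address_space *mapping)
  {
          lock_page(page);
          __SetPageMovable(page, mapping);
          unlock_page(page);
  }
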

Monitoring Migration
=====================

The following events (counters) can be used to monitor page migration.

1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a
   page was migrated. If the page was a non-THP page, then this counter is
   increased by one. If the page was a THP, then this counter is increased by
   the number of THP subpages. For example, migration of a single 2MB THP that
   has 4KB-size base pages (subpages) will cause this counter to increase by
   512.

2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
   PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
   if it was a THP.

3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.

4. THP_MIGRATION_FAIL: A THP could not be migrated, nor could it be split.

5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP had
   to be split. After splitting, a migration retry was used for its sub-pages.

THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.
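
These counters are exported in lower case through ``/proc/vmstat``. A minimal
sketch that prints just the migration related counters::

  /*
   * Minimal sketch: print the page migration counters from /proc/vmstat.
   * The names below are the lower-case spellings of the events listed in
   * this section.
   */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          static const char *names[] = {
                  "pgmigrate_success", "pgmigrate_fail",
                  "thp_migration_success", "thp_migration_fail",
                  "thp_migration_split",
          };
          char line[256];
          FILE *f = fopen("/proc/vmstat", "r");

          if (!f) {
                  perror("/proc/vmstat");
                  return 1;
          }
          while (fgets(line, sizeof(line), f)) {
                  for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
                          size_t n = strlen(names[i]);

                          if (!strncmp(line, names[i], n) && line[n] == ' ')
                                  fputs(line, stdout);
                  }
          }
          fclose(f);
          return 0;
  }
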

Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.