Page migration
--------------

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

The main intent of page migration is to reduce the latency of memory
access by moving pages near to the processor where the process accessing
that memory is running.

Page migration allows a process to manually relocate its pages onto other
nodes through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while setting
a new memory policy via mbind(). The pages of a process can also be
relocated from another process using the sys_migrate_pages() function
call. The migrate_pages() call takes two sets of nodes and moves the
pages of a process that are located on the source nodes to the
destination nodes. Page migration functions are provided by the numactl
package by Andi Kleen (a version later than 0.9.3 is required; get it
from ftp://ftp.suse.com/pub/people/ak). numactl provides libnuma, which
offers an interface similar to the other NUMA functionality for page
migration. cat /proc/<pid>/numa_maps allows an easy review of where the
pages of a process are located. See also the numa_maps manpage in the
numactl package.

Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides manual page
migration support. Automatic page migration may be implemented through
user space processes that move pages. A special function call
"move_pages" allows the moving of individual pages within a process.
A NUMA profiler may, for example, obtain a log showing frequent off node
accesses and may use the result to move pages to more advantageous
locations.

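The move_pages() call mentioned above can be used roughly as follows.
This is a minimal sketch assuming the wrapper from libnuma's <numaif.h>
(link with -lnuma); the number of pages, the target node (node 0) and
the use of an anonymous mapping are arbitrary choices for illustration.

    #include <numaif.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define NR_PAGES 4

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        void *pages[NR_PAGES];
        int nodes[NR_PAGES];
        int status[NR_PAGES];
        int i;
        char *buf;

        buf = mmap(NULL, NR_PAGES * page_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;

        for (i = 0; i < NR_PAGES; i++) {
            buf[i * page_size] = 0;          /* fault the page in */
            pages[i] = buf + i * page_size;
            nodes[i] = 0;                    /* desired target node */
        }

        /*
         * A pid of 0 means the calling process. status[] reports the
         * node each page ended up on, or a negative error code per page.
         */
        if (move_pages(0, NR_PAGES, pages, nodes, status, MPOL_MF_MOVE) < 0)
            perror("move_pages");
        else
            for (i = 0; i < NR_PAGES; i++)
                printf("page %d is now on node %d\n", i, status[i]);

        return 0;
    }
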
Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see ../cpusets.txt).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then all of its pages are moved with it so that the
performance of the process does not degrade dramatically. The pages of
processes in a cpuset are also moved if the allowed memory nodes of a
cpuset are changed.

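Using the cpuset filesystem this might look as follows. The sketch only
illustrates the idea described above: the mount point /dev/cpuset, the
cpuset name "batch" and the pid 1234 are hypothetical values; the
memory_migrate flag is the cpuset interface documented in cpusets.txt.

    #include <stdio.h>

    /* Write a single value into a cpuset control file. */
    static int write_string(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fprintf(f, "%s\n", val);
        return fclose(f);
    }

    int main(void)
    {
        /*
         * With memory_migrate enabled, pages of tasks attached to the
         * cpuset are migrated when a task is moved in or when the
         * cpuset's memory nodes are changed.
         */
        write_string("/dev/cpuset/batch/memory_migrate", "1");

        /* Move the (hypothetical) pid 1234 into the cpuset. */
        write_string("/dev/cpuset/batch/tasks", "1234");

        return 0;
    }
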
All migration techniques preserve the relative location of pages within
a group of nodes, so that a particular memory allocation pattern is kept
intact even after migrating a process. This is necessary in order to
preserve the memory latencies: processes will run with similar
performance after migration.

Page migration occurs in several steps. First comes a high level
description for those trying to use migrate_pages() from the kernel (for
userspace usage see Andi Kleen's numactl package mentioned above),
followed by a low level description of how the details work.

A. In kernel use of migrate_pages()
-----------------------------------

1. Remove pages from the LRU.

   Lists of pages to be migrated are generated by scanning over
   pages and moving them into lists. This is done by
   calling isolate_lru_page().
   Calling isolate_lru_page() increases the references to the page
   so that it cannot vanish while the page migration occurs.
   It also prevents the swapper or other scans from encountering
   the page.

2. We need to have a function of type new_page_t that can be
   passed to migrate_pages(). This function should figure out
   how to allocate the correct new page given the old page.

3. The migrate_pages() function is called which attempts
   to do the migration. It will call the function to allocate
   the new page for each page that is considered for
   moving.

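Put together, the three steps above look roughly like the sketch below.
The prototypes of isolate_lru_page(), new_page_t and migrate_pages()
have changed over kernel versions; the forms used here follow the era of
this document and should be checked against the current tree, and
move_page_to_node()/new_node_page() are just illustrative helper names.

    #include <linux/gfp.h>
    #include <linux/list.h>
    #include <linux/migrate.h>
    #include <linux/mm.h>

    /*
     * Step 2: a new_page_t style allocation callback. The target node is
     * passed through the "private" argument of migrate_pages().
     */
    static struct page *new_node_page(struct page *page, unsigned long node,
                                      int **result)
    {
        return alloc_pages_node((int)node, GFP_HIGHUSER, 0);
    }

    /* The caller is assumed to already hold a reference on the page. */
    static int move_page_to_node(struct page *page, int node)
    {
        LIST_HEAD(pagelist);
        int err;

        /* Step 1: take the page off the LRU and onto a private list. */
        err = isolate_lru_page(page, &pagelist);
        if (err)
            return err;

        /*
         * Step 3: migrate_pages() does the actual work and calls the
         * allocation callback for every page on the list.
         */
        err = migrate_pages(&pagelist, new_node_page, (unsigned long)node);

        /*
         * Depending on the kernel version, pages that could not be
         * migrated may still be on the list and must be returned to the
         * LRU; putback_lru_pages() is a no-op on an empty list.
         */
        putback_lru_pages(&pagelist);
        return err;
    }
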
B. How migrate_pages() works
----------------------------

migrate_pages() does several passes over its list of pages. A page is
moved if all references to the page are removable at that time. The page
has already been removed from the LRU via isolate_lru_page() and the
refcount has been increased so that the page cannot be freed while page
migration occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Prep the new page that we want to move to. It is locked
   and set to not being uptodate so that all accesses to the new
   page immediately block while the move is in progress.

4. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

5. All the page table references to the page are converted
   to migration entries or dropped (nonlinear vmas).
   This decreases the mapcount of the page. If the resulting
   mapcount is not zero then we do not migrate the page.
   All user space processes that attempt to access the page
   will now wait on the page lock.

6. The radix tree lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the radix tree lock.

7. The refcount of the page is examined and we back out if references
   remain. Otherwise we know that we are the only one referencing this
   page.

8. The radix tree is checked and if it does not contain the pointer to
   this page then we back out because someone else modified the radix
   tree.

9. The radix tree is changed to point to the new page.

10. The reference count of the old page is dropped because the radix tree
    reference is gone. A reference to the new page is established because
    the new page is referenced by the radix tree.

11. The radix tree lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the
    tree_lock to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page does
    not provide any information anymore.

15. Queued up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace
    them with real ptes. Doing so will enable access for user space
    processes not already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper
    etc. again.

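Steps 6 through 11 are the core of the mapping handover. A condensed
sketch of how they look in code, modeled on the migrate_page_move_mapping()
logic in mm/migrate.c of kernels of this era, is shown below. The lock
type, the exact reference count check and the radix tree helpers differ
between kernel versions, so treat it as an illustration rather than the
authoritative implementation.

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/radix-tree.h>

    static int sketch_move_mapping(struct address_space *mapping,
                                   struct page *newpage, struct page *page)
    {
        struct page **slot;

        /*
         * Step 6: take the radix tree lock. Lookups through the mapping
         * now block until we drop it again.
         */
        write_lock_irq(&mapping->tree_lock);

        slot = (struct page **)radix_tree_lookup_slot(&mapping->page_tree,
                                                      page_index(page));

        /*
         * Steps 7 and 8: back out if someone else still holds a
         * reference or if the radix tree no longer points at this page.
         */
        if (page_count(page) != 2 + !!PagePrivate(page) || *slot != page) {
            write_unlock_irq(&mapping->tree_lock);
            return -EAGAIN;
        }

        /*
         * Steps 9 and 10: make the radix tree point at the new page and
         * shift the cache reference from the old page to the new one.
         */
        get_page(newpage);
        *slot = newpage;
        __put_page(page);           /* cannot be the last reference */

        /*
         * Step 11: drop the tree lock. Waiters now sleep on the locked
         * new page instead of spinning on the tree lock.
         */
        write_unlock_irq(&mapping->tree_lock);
        return 0;
    }
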
Christoph Lameter, May 8, 2006.