   Linux Driver for Mylex DAC960/AcceleRAID/eXtremeRAID PCI RAID Controllers

                        Version 2.2.11 for Linux 2.2.19
                        Version 2.4.11 for Linux 2.4.12

                              PRODUCTION RELEASE

                                11 October 2001

                               Leonard N. Zubkoff
                               Dandelion Digital
                               lnz@dandelion.com

         Copyright 1998-2001 by Leonard N. Zubkoff <lnz@dandelion.com>

                                 INTRODUCTION

Mylex, Inc. designs and manufactures a variety of high performance PCI RAID
controllers.  Mylex Corporation is located at 34551 Ardenwood Blvd., Fremont,
California 94555, USA and can be reached at 510.796.6100 or on the World Wide
Web at http://www.mylex.com.  Mylex Technical Support can be reached by
electronic mail at mylexsup@us.ibm.com, by voice at 510.608.2400, or by FAX at
510.745.7715.  Contact information for offices in Europe and Japan is available
on their Web site.

The latest information on Linux support for DAC960 PCI RAID Controllers, as
well as the most recent release of this driver, will always be available from
my Linux Home Page at URL "http://www.dandelion.com/Linux/".  The Linux DAC960
driver supports all current Mylex PCI RAID controllers including the new
eXtremeRAID 2000/3000 and AcceleRAID 352/170/160 models which have an entirely
new firmware interface from the older eXtremeRAID 1100, AcceleRAID 150/200/250,
and DAC960PJ/PG/PU/PD/PL.  See below for a complete controller list as well as
minimum firmware version requirements.  For simplicity, in most places this
documentation refers to DAC960 generically rather than explicitly listing all
the supported models.

Driver bug reports should be sent via electronic mail to "lnz@dandelion.com".
Please include with the bug report the complete configuration messages reported
by the driver at startup, along with any subsequent system messages relevant to
the controller's operation, and a detailed description of your system's
hardware configuration.  Driver bugs are actually quite rare; if you encounter
problems with disks being marked offline, for example, please contact Mylex
Technical Support, as the problem is related to the hardware configuration
rather than the Linux driver.

Please consult the RAID controller documentation for detailed information
regarding installation and configuration of the controllers.  This document
primarily provides information specific to the Linux support.

                                DRIVER FEATURES

The DAC960 RAID controllers are supported solely as high performance RAID
controllers, not as interfaces to arbitrary SCSI devices.  The Linux DAC960
driver operates at the block device level, the same level as the SCSI and IDE
drivers.  Unlike other RAID controllers currently supported on Linux, the
DAC960 driver is not dependent on the SCSI subsystem, and hence avoids all the
complexity and unnecessary code that would be associated with an implementation
as a SCSI driver.  The DAC960 driver is designed for as high a performance as
possible with no compromises or extra code for compatibility with lower
performance devices.  The DAC960 driver includes extensive error logging and
online configuration management capabilities.  Except for initial configuration
of the controller and adding new disk drives, almost everything can be handled
from Linux while the system is operational.

The DAC960 driver is architected to support up to 8 controllers per system.
Each DAC960 parallel SCSI controller can support up to 15 disk drives per
channel, for a maximum of 60 drives on a four channel controller; the fibre
channel eXtremeRAID 3000 controller supports up to 125 disk drives per loop for
a total of 250 drives.  The drives installed on a controller are divided into
one or more "Drive Groups", and then each Drive Group is subdivided further
into 1 to 32 "Logical Drives".  Each Logical Drive has a specific RAID Level
and caching policy associated with it, and it appears to Linux as a single
block device.  Logical Drives are further subdivided into up to 7 partitions
through the normal Linux and PC disk partitioning schemes.  Logical Drives are
also known as "System Drives", and Drive Groups are also called "Packs".  Both
terms are in use in the Mylex documentation; I have chosen to standardize on
the more generic "Logical Drive" and "Drive Group".

DAC960 RAID disk devices are named in the style of the obsolete Device File
System (DEVFS).  The device corresponding to Logical Drive D on Controller C
is referred to as /dev/rd/cCdD, and the partitions are called /dev/rd/cCdDp1
through /dev/rd/cCdDp7.  For example, partition 3 of Logical Drive 5 on
Controller 2 is referred to as /dev/rd/c2d5p3.  Note that unlike with SCSI
disks, the device names will not change in the event of a disk drive failure.

The DAC960 driver is assigned major numbers 48 - 55 with one major number per
controller.  The 8 bits of minor number are divided into 5 bits for the Logical
Drive and 3 bits for the partition.
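Thus, for example, /dev/rd/c2d5p3 is accessed through major number 50
(48 + 2 for Controller 2) and minor number 43 (5 * 8 + 3 for Logical Drive 5,
partition 3).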

          SUPPORTED DAC960/AcceleRAID/eXtremeRAID PCI RAID CONTROLLERS

The following list comprises the supported DAC960, AcceleRAID, and eXtremeRAID
PCI RAID Controllers as of the date of this document.  It is recommended that
anyone purchasing a Mylex PCI RAID Controller not in the following table
contact the author beforehand to verify that it is or will be supported.

eXtremeRAID 3000
            1 Wide Ultra-2/LVD SCSI channel
            2 External Fibre FC-AL channels
            233MHz StrongARM SA 110 Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            32MB/64MB ECC SDRAM Memory

eXtremeRAID 2000
            4 Wide Ultra-160 LVD SCSI channels
            233MHz StrongARM SA 110 Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            32MB/64MB ECC SDRAM Memory

AcceleRAID 352
            2 Wide Ultra-160 LVD SCSI channels
            100MHz Intel i960RN RISC Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            32MB/64MB ECC SDRAM Memory

AcceleRAID 170
            1 Wide Ultra-160 LVD SCSI channel
            100MHz Intel i960RM RISC Processor
            16MB/32MB/64MB ECC SDRAM Memory

AcceleRAID 160 (AcceleRAID 170LP)
            1 Wide Ultra-160 LVD SCSI channel
            100MHz Intel i960RS RISC Processor
            Built in 16MB ECC SDRAM Memory
            PCI Low Profile Form Factor - fit for 2U height

eXtremeRAID 1100 (DAC1164P)
            3 Wide Ultra-2/LVD SCSI channels
            233MHz StrongARM SA 110 Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            16MB/32MB/64MB Parity SDRAM Memory with Battery Backup

AcceleRAID 250 (DAC960PTL1)
            Uses onboard Symbios SCSI chips on certain motherboards
            Also includes one onboard Wide Ultra-2/LVD SCSI Channel
            66MHz Intel i960RD RISC Processor
            4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

AcceleRAID 200 (DAC960PTL0)
            Uses onboard Symbios SCSI chips on certain motherboards
            Includes no onboard SCSI Channels
            66MHz Intel i960RD RISC Processor
            4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

AcceleRAID 150 (DAC960PRL)
            Uses onboard Symbios SCSI chips on certain motherboards
            Also includes one onboard Wide Ultra-2/LVD SCSI Channel
            33MHz Intel i960RP RISC Processor
            4MB Parity EDO Memory

DAC960PJ    1/2/3 Wide Ultra SCSI-3 Channels
            66MHz Intel i960RD RISC Processor
            4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

DAC960PG    1/2/3 Wide Ultra SCSI-3 Channels
            33MHz Intel i960RP RISC Processor
            4MB/8MB ECC EDO Memory

DAC960PU    1/2/3 Wide Ultra SCSI-3 Channels
            Intel i960CF RISC Processor
            4MB/8MB EDRAM or 2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960PD    1/2/3 Wide Fast SCSI-2 Channels
            Intel i960CF RISC Processor
            4MB/8MB EDRAM or 2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960PL    1/2/3 Wide Fast SCSI-2 Channels
            Intel i960 RISC Processor
            2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960P     1/2/3 Wide Fast SCSI-2 Channels
            Intel i960 RISC Processor
            2MB/4MB/8MB/16MB/32MB DRAM Memory

For the eXtremeRAID 2000/3000 and AcceleRAID 352/170/160, firmware version
6.00-01 or above is required.

For the eXtremeRAID 1100, firmware version 5.06-0-52 or above is required.

For the AcceleRAID 250, 200, and 150, firmware version 4.06-0-57 or above is
required.

For the DAC960PJ and DAC960PG, firmware version 4.06-0-00 or above is required.

For the DAC960PU, DAC960PD, DAC960PL, and DAC960P, either firmware version
3.51-0-04 or above is required (for dual Flash ROM controllers), or firmware
version 2.73-0-00 or above is required (for single Flash ROM controllers).

Please note that not all SCSI disk drives are suitable for use with DAC960
controllers, and only particular firmware versions of any given model may
actually function correctly.  Similarly, not all motherboards have a BIOS that
properly initializes the AcceleRAID 250, AcceleRAID 200, AcceleRAID 150,
DAC960PJ, and DAC960PG because the Intel i960RD/RP is a multi-function device.
If in doubt, contact Mylex RAID Technical Support (mylexsup@us.ibm.com) to
verify compatibility.  Mylex makes available a hard disk compatibility list at
http://www.mylex.com/support/hdcomp/hd-lists.html.

                              DRIVER INSTALLATION

This distribution was prepared for Linux kernel version 2.2.19 or 2.4.12.

To install the DAC960 RAID driver, you may use the following commands,
replacing "/usr/src" with wherever you keep your Linux kernel source tree:

  cd /usr/src
  tar -xvzf DAC960-2.2.11.tar.gz (or DAC960-2.4.11.tar.gz)
  mv README.DAC960 linux/Documentation
  mv DAC960.[ch] linux/drivers/block
  patch -p0 < DAC960.patch (if DAC960.patch is included)
  cd linux
  make config
  make bzImage (or zImage)
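
During "make config", remember to enable the Mylex DAC960 PCI RAID Controller
support option (typically found in the block devices section of the kernel
configuration); otherwise the driver will not be built into the kernel.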

Then install "arch/i386/boot/bzImage" or "arch/i386/boot/zImage" as your
standard kernel, run lilo if appropriate, and reboot.

To create the necessary devices in /dev, the "make_rd" script included in
"DAC960-Utilities.tar.gz" from http://www.dandelion.com/Linux/ may be used.
LILO 21 and FDISK v2.9 include DAC960 support; also included in this archive
are patches to LILO 20 and FDISK v2.8 that add DAC960 support, along with
statically linked executables of LILO and FDISK.  This modified version of LILO
will allow booting from a DAC960 controller and/or mounting the root file
system from a DAC960.
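
For reference, the following is a minimal sketch (not the actual make_rd
script) of how the device nodes for controller 0 might be created by hand
using the major/minor scheme described above; the loop covers only the first
8 logical drives for brevity:

  # Illustrative only - prefer the make_rd script from DAC960-Utilities.tar.gz.
  # Controller 0 uses major number 48; minor = (logical drive * 8) + partition.
  mkdir -p /dev/rd
  for d in 0 1 2 3 4 5 6 7; do
      mknod /dev/rd/c0d${d} b 48 $((d * 8))
      for p in 1 2 3 4 5 6 7; do
          mknod /dev/rd/c0d${d}p${p} b 48 $((d * 8 + p))
      done
  done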

Red Hat Linux 6.0 and SuSE Linux 6.1 include support for Mylex PCI RAID
controllers.  Installing directly onto a DAC960 may be problematic from other
Linux distributions until their installation utilities are updated.

                               INSTALLATION NOTES

Before installing Linux or adding DAC960 logical drives to an existing Linux
system, the controller must first be configured to provide one or more logical
drives using the BIOS Configuration Utility or DACCF.  Please note that since
there are at most 6 usable partitions on each logical drive, systems requiring
more partitions should subdivide a drive group into multiple logical drives,
each of which can have up to 6 usable partitions.  Also, note that with large
disk arrays it is advisable to enable the 8GB BIOS Geometry (255/63) rather
than accepting the default 2GB BIOS Geometry (128/32); failing to do so will
cause the logical drive geometry to have more than 65535 cylinders, which will
make it impossible for FDISK to be used properly.  The 8GB BIOS Geometry can be
enabled by configuring the DAC960 BIOS, which is accessible via Alt-M during
the BIOS initialization sequence.
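As a rough worked example (assuming 512-byte sectors): with the 2GB geometry,
each cylinder holds 128 * 32 * 512 bytes = 2MB, so the 65535-cylinder limit is
reached at roughly 128GB of logical drive capacity; with the 8GB geometry, each
cylinder holds about 255 * 63 * 512 bytes = 8MB, raising that limit to roughly
500GB.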

For maximum performance and the most efficient E2FSCK operation, it is
recommended that EXT2 file systems be built with a 4KB block size and 16 block
stride to match the DAC960 controller's 64KB default stripe size.  The command
"mke2fs -b 4096 -R stride=16 <device>" is appropriate.  Unless there will be a
large number of small files on the file systems, it is also beneficial to add
the "-i 16384" option to increase the bytes per inode parameter, thereby
reducing the file system metadata.  Finally, on systems that will only be run
with Linux 2.2 or later kernels, it is beneficial to enable sparse superblocks
with the "-s 1" option.
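
For example, assuming the file system is to be created on /dev/rd/c0d0p1 (an
illustrative device name), these options might be combined as:

  mke2fs -b 4096 -R stride=16 -i 16384 -s 1 /dev/rd/c0d0p1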

                       DAC960 ANNOUNCEMENTS MAILING LIST

The DAC960 Announcements Mailing List provides a forum for informing Linux
users of new driver releases and other announcements regarding Linux support
for DAC960 PCI RAID Controllers.  To join the mailing list, send a message to
"dac960-announce-request@dandelion.com" with the line "subscribe" in the
message body.

               CONTROLLER CONFIGURATION AND STATUS MONITORING

The DAC960 RAID controllers running firmware 4.06 or above include a Background
Initialization facility so that system downtime is minimized both for initial
installation and subsequent configuration of additional storage.  The BIOS
Configuration Utility (accessible via Alt-R during the BIOS initialization
sequence) is used to quickly configure the controller, and then the logical
drives that have been created are available for immediate use even while they
are still being initialized by the controller.  The primary need for online
configuration and status monitoring is then to avoid system downtime when disk
drives fail and must be replaced.  Mylex's online monitoring and configuration
utilities are being ported to Linux and will become available at some point in
the future.  Note that with a SAF-TE (SCSI Accessed Fault-Tolerant Enclosure)
enclosure, the controller is able to rebuild failed drives automatically as
soon as a drive replacement is made available.

The primary interfaces for controller configuration and status monitoring are
special files created in the /proc/rd/... hierarchy along with the normal
system console logging mechanism.  Whenever the system is operating, the DAC960
driver queries each controller for status information every 10 seconds, and
checks for additional conditions every 60 seconds.  The initial status of each
controller is always available for controller N in /proc/rd/cN/initial_status,
and the current status as of the last status monitoring query is available in
/proc/rd/cN/current_status.  In addition, status changes are also logged by the
driver to the system console and will appear in the log files maintained by
syslog.  The progress of asynchronous rebuild or consistency check operations
is also available in /proc/rd/cN/current_status, and progress messages are
logged to the system console at most every 60 seconds.

Starting with the 2.2.3/2.0.3 versions of the driver, the status information
available in /proc/rd/cN/initial_status and /proc/rd/cN/current_status has been
augmented to include the vendor, model, revision, and serial number (if
available) for each physical device found connected to the controller:

***** DAC960 RAID Driver Version 2.2.3 of 19 August 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PRL PCI RAID Controller
  Firmware Version: 4.07-0-07, Channels: 1, Memory Size: 16MB
  PCI Bus: 1, Device: 4, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFE300000 mapped at 0xA0800000, IRQ Channel: 21
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  SAF-TE Enclosure Management Enabled
  Physical Devices:
    0:0  Vendor: IBM       Model: DRVS09D          Revision: 0270
         Serial Number: 68016775HA
         Disk Status: Online, 17928192 blocks
    0:1  Vendor: IBM       Model: DRVS09D          Revision: 0270
         Serial Number: 68004E53HA
         Disk Status: Online, 17928192 blocks
    0:2  Vendor: IBM       Model: DRVS09D          Revision: 0270
         Serial Number: 13013935HA
         Disk Status: Online, 17928192 blocks
    0:3  Vendor: IBM       Model: DRVS09D          Revision: 0270
         Serial Number: 13016897HA
         Disk Status: Online, 17928192 blocks
    0:4  Vendor: IBM       Model: DRVS09D          Revision: 0270
         Serial Number: 68019905HA
         Disk Status: Online, 17928192 blocks
    0:5  Vendor: IBM       Model: DRVS09D          Revision: 0270
         Serial Number: 68012753HA
         Disk Status: Online, 17928192 blocks
    0:6  Vendor: ESG-SHV   Model: SCA HSBP M6      Revision: 0.61
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 89640960 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

To simplify the monitoring process for custom software, the special file
/proc/rd/status returns "OK" when all DAC960 controllers in the system are
operating normally and no failures have occurred, or "ALERT" if any logical
drives are offline or critical or any non-standby physical drives are dead.
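
As an illustration of such custom monitoring, the following minimal sketch
(not part of the driver or of any Mylex utility; the mail command and recipient
are simply assumptions) could be run periodically, for example from cron, to
report any problem:

  #!/bin/sh
  # Alert if any DAC960 controller reports a problem.
  if [ "`cat /proc/rd/status`" != "OK" ]; then
      cat /proc/rd/c*/current_status | mail -s "DAC960 ALERT" root
  fi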

Configuration commands for controller N are available via the special file
/proc/rd/cN/user_command.  A human readable command can be written to this
special file to initiate a configuration operation, and the results of the
operation can then be read back from the special file in addition to being
logged to the system console.  The shell command sequence

  echo "<configuration-command>" > /proc/rd/c0/user_command
  cat /proc/rd/c0/user_command

is typically used to execute configuration commands.  The configuration
commands are:

  flush-cache

    The "flush-cache" command flushes the controller's cache.  The system
    automatically flushes the cache at shutdown or if the driver module is
    unloaded, so this command is only needed to be certain a write back cache
    is flushed to disk before the system is powered off by a command to a UPS.
    Note that the flush-cache command also stops an asynchronous rebuild or
    consistency check, so it should not be used except when the system is
    being halted.

  kill <channel>:<target-id>

    The "kill" command marks the physical drive <channel>:<target-id> as DEAD.
    This command is provided primarily for testing, and should not be used
    during normal system operation.

  make-online <channel>:<target-id>

    The "make-online" command changes the physical drive <channel>:<target-id>
    from status DEAD to status ONLINE.  In cases where multiple physical
    drives have been killed simultaneously, this command may be used to bring
    all but one of them back online, after which a rebuild to the final drive
    is necessary.

    Warning: make-online should only be used on a dead physical drive that is
    an active part of a drive group, never on a standby drive.  The command
    should never be used on a dead drive that is part of a critical logical
    drive; rebuild should be used if only a single drive is dead.

  make-standby <channel>:<target-id>

    The "make-standby" command changes physical drive <channel>:<target-id>
    from status DEAD to status STANDBY.  It should only be used in cases where
    a dead drive was replaced after an automatic rebuild was performed onto a
    standby drive.  It cannot be used to add a standby drive to the controller
    configuration if one was not created initially; the BIOS Configuration
    Utility must be used for that currently.

  rebuild <channel>:<target-id>

    The "rebuild" command initiates an asynchronous rebuild onto physical
    drive <channel>:<target-id>.  It should only be used when a dead drive
    has been replaced.

  check-consistency <logical-drive-number>

    The "check-consistency" command initiates an asynchronous consistency
    check of <logical-drive-number> with automatic restoration.  It can be
    used whenever it is desired to verify the consistency of the redundancy
    information.

  cancel-rebuild
  cancel-consistency-check

    The "cancel-rebuild" and "cancel-consistency-check" commands cancel any
    rebuild or consistency check operations previously initiated.
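
As a convenience, the echo/cat sequence shown above can be wrapped in a small
shell helper so that a command and its result are handled in one step; the
following is an illustrative sketch only (the function name is arbitrary):

  # Usage: rd_command <controller-number> "<configuration-command>"
  # e.g.:  rd_command 0 "check-consistency 1"
  rd_command() {
      echo "$2" > /proc/rd/c$1/user_command
      cat /proc/rd/c$1/user_command
  }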

               EXAMPLE I - DRIVE FAILURE WITHOUT A STANDBY DRIVE

The following annotated logs demonstrate the controller configuration and
online status monitoring capabilities of the Linux DAC960 Driver.  The test
configuration comprises 6 1GB Quantum Atlas I disk drives on two channels of a
DAC960PJ controller.  The physical drives are configured into a single drive
group without a standby drive, and the drive group has been configured into
two logical drives, one RAID-5 and one RAID-6.  Note that these logs are from
an earlier version of the driver and the messages have changed somewhat with
newer releases, but the functionality remains similar.  First, here is the
current status of the RAID configuration:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 3305472 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

gwynedd:/u/lnz# cat /proc/rd/status
OK

The above messages indicate that everything is healthy, and /proc/rd/status
returns "OK" indicating that there are no problems with any DAC960 controller
in the system.  For demonstration purposes, while I/O is active Physical Drive
1:1 is now disconnected, simulating a drive failure.  The failure is noted by
the driver within 10 seconds of the controller's having detected it, and the
driver logs the following console status messages indicating that Logical
Drives 0 and 1 are now CRITICAL as a result of Physical Drive 1:1 being DEAD:

DAC960#0: Physical Drive 1:2 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:3 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:1 killed because of timeout on SCSI command
DAC960#0: Physical Drive 1:1 is now DEAD
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now CRITICAL
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now CRITICAL

The Sense Keys logged here are just Check Condition / Unit Attention conditions
arising from a SCSI bus reset that is forced by the controller during its error
recovery procedures.  Concurrently with the above, the driver status available
from /proc/rd also reflects the drive failure.  The status message in
/proc/rd/status has changed from "OK" to "ALERT":

gwynedd:/u/lnz# cat /proc/rd/status
ALERT

and /proc/rd/c0/current_status has been updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Dead, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

Since there are no standby drives configured, the system can continue to access
the logical drives in a performance degraded mode until the failed drive is
replaced and a rebuild operation completed to restore the redundancy of the
logical drives.  Once Physical Drive 1:1 is replaced with a properly
functioning drive, or if the physical drive was killed without having failed
(e.g., due to electrical problems on the SCSI bus), the user can instruct the
controller to initiate a rebuild operation onto the newly replaced drive:

gwynedd:/u/lnz# echo "rebuild 1:1" > /proc/rd/c0/user_command
gwynedd:/u/lnz# cat /proc/rd/c0/user_command
Rebuild of Physical Drive 1:1 Initiated

The echo command instructs the controller to initiate an asynchronous rebuild
operation onto Physical Drive 1:1, and the status message that results from the
operation is then available for reading from /proc/rd/c0/user_command, as well
as being logged to the console by the driver.

Within 10 seconds of this command the driver logs the initiation of the
asynchronous rebuild operation:

DAC960#0: Rebuild of Physical Drive 1:1 Initiated
DAC960#0: Physical Drive 1:1 Error Log: Sense Key = 6, ASC = 29, ASCQ = 01
DAC960#0: Physical Drive 1:1 is now WRITE-ONLY
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 1% completed

and /proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Write-Only, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 6% completed

As the rebuild progresses, the current status in /proc/rd/c0/current_status is
updated every 10 seconds:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Write-Only, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 15% completed

and every minute a progress message is logged to the console by the driver:

DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 32% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 63% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 94% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 94% completed

Finally, the rebuild completes successfully.  The driver logs the status of the
logical and physical drives and the rebuild completion:

DAC960#0: Rebuild Completed Successfully
DAC960#0: Physical Drive 1:1 is now ONLINE
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now ONLINE
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now ONLINE

/proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 3305472 blocks, Write Thru
  Rebuild Completed Successfully

and /proc/rd/status indicates that everything is healthy once again:

gwynedd:/u/lnz# cat /proc/rd/status
OK

                EXAMPLE II - DRIVE FAILURE WITH A STANDBY DRIVE

The following annotated logs demonstrate the controller configuration and
online status monitoring capabilities of the Linux DAC960 Driver.  The test
configuration comprises 6 1GB Quantum Atlas I disk drives on two channels of a
DAC960PJ controller.  The physical drives are configured into a single drive
group with a standby drive, and the drive group has been configured into two
logical drives, one RAID-5 and one RAID-6.  Note that these logs are from an
earlier version of the driver and the messages have changed somewhat with
newer releases, but the functionality remains similar.  First, here is the
current status of the RAID configuration:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Standby, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

gwynedd:/u/lnz# cat /proc/rd/status
OK

The above messages indicate that everything is healthy, and /proc/rd/status
returns "OK" indicating that there are no problems with any DAC960 controller
in the system.  For demonstration purposes, while I/O is active Physical Drive
1:2 is now disconnected, simulating a drive failure.  The failure is noted by
the driver within 10 seconds of the controller's having detected it, and the
driver logs the following console status messages:

DAC960#0: Physical Drive 1:1 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:3 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:2 killed because of timeout on SCSI command
DAC960#0: Physical Drive 1:2 is now DEAD
DAC960#0: Physical Drive 1:2 killed because it was removed
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now CRITICAL
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now CRITICAL

Since a standby drive is configured, the controller automatically begins
rebuilding onto the standby drive:

DAC960#0: Physical Drive 1:3 is now WRITE-ONLY
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 4% completed

Concurrently with the above, the driver status available from /proc/rd also
reflects the drive failure and automatic rebuild.  The status message in
/proc/rd/status has changed from "OK" to "ALERT":

gwynedd:/u/lnz# cat /proc/rd/status
ALERT

and /proc/rd/c0/current_status has been updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Write-Only, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 2754560 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 4% completed

As the rebuild progresses, the current status in /proc/rd/c0/current_status is
updated every 10 seconds:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Write-Only, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 2754560 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 40% completed

and every minute a progress message is logged on the console by the driver:

DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 40% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 76% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 66% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 84% completed

Finally, the rebuild completes successfully.  The driver logs the status of the
logical and physical drives and the rebuild completion:

DAC960#0: Rebuild Completed Successfully
DAC960#0: Physical Drive 1:3 is now ONLINE
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now ONLINE
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now ONLINE

/proc/rd/c0/current_status is updated:

***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  Rebuild Completed Successfully

and /proc/rd/status indicates that everything is healthy once again:

gwynedd:/u/lnz# cat /proc/rd/status
OK

Note that the absence of a viable standby drive does not create an "ALERT"
status.  Once dead Physical Drive 1:2 has been replaced, the controller must be
told that this has occurred and that the newly replaced drive should become the
new standby drive:

gwynedd:/u/lnz# echo "make-standby 1:2" > /proc/rd/c0/user_command
gwynedd:/u/lnz# cat /proc/rd/c0/user_command
Make Standby of Physical Drive 1:2 Succeeded

The echo command instructs the controller to make Physical Drive 1:2 into a
standby drive, and the status message that results from the operation is then
available for reading from /proc/rd/c0/user_command, as well as being logged to
the console by the driver.  Within 60 seconds of this command the driver logs:

DAC960#0: Physical Drive 1:2 Error Log: Sense Key = 6, ASC = 29, ASCQ = 01
DAC960#0: Physical Drive 1:2 is now STANDBY
DAC960#0: Make Standby of Physical Drive 1:2 Succeeded

and /proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Standby, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  Rebuild Completed Successfully