Calculating correct stripe size for Linux mdadm RAID10 arrays in "far" layout


http://serverfault.com – I'm creating a RAID10 array from 6 drives. When it is created in the near layout, e.g.:

mdadm --create /dev/md2 --chunk=64 --level=10 --raid-devices=6 --layout=n2 /dev/sda1 ...

I check the stripe size as reported by the system:

cat /sys/devices/virtual/block/md2/queue/optimal_io_size

The result is 196608, as expected: 3 data drives (50% of the 6 total in RAID10) x 64K chunk = 192K stripe. Now, when creating the same array with the --layout=f2 option, optimal_io_size reports 393216, i.e. twice as large. According to Neil Brown (the mdadm RAID10 author), the "far" layout lays all the data out in a
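The arithmetic behind the two reported values can be sketched as follows. This is only a check of the numbers quoted in the question (chunk size, drive count, two copies per block), not mdadm's actual internal calculation:

```python
# Arithmetic check of the optimal_io_size values quoted in the question.
# Assumptions: --chunk=64 (KiB), 6 drives, 2 copies (the "2" in n2/f2).

CHUNK = 64 * 1024   # chunk size in bytes
DRIVES = 6          # --raid-devices=6
COPIES = 2          # layouts n2 and f2 keep two copies of each block

# Near layout: copies sit next to each other, so a full stripe of
# unique data spans DRIVES / COPIES = 3 drives' worth of chunks.
near_stripe = (DRIVES // COPIES) * CHUNK
print(near_stripe)  # 196608 (3 x 64K = 192K), matching --layout=n2

# Far layout as reported: the kernel advertises a stripe spanning
# all 6 drives, i.e. twice the near value.
far_stripe = DRIVES * CHUNK
print(far_stripe)   # 393216 (6 x 64K = 384K), matching --layout=f2
```

Whether the far layout *should* advertise the doubled value is exactly what the question is asking; the sketch only reproduces the reported figures.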