As a follow-on to the previous post, I wanted to see what happens with various dd block sizes compared against the default v3xfersize parameter of 32768 bytes on Celerra NFSv3.
Test 1: 35840 byte block sizes (35KB as per customer)
Test 2: 8192 byte block sizes (8KB)
Test 3: 32768 byte block sizes (32KB)
Test 4: 65536 byte block sizes (64KB)
Test 5: 75368 byte block sizes (73.6KB chosen at random)
Previously, at a 35840 byte block size, which we chose to match the customer's typical file size, we had the following data set:
Command: ./perf.pl 35840 100000 1 1 10
Running Parameters:
basedir = /mnt/nfs3/testing
dd blocksize = 35840
dd blocksperfile = 100000
total files = 1
files per directory = 1
loop iterations = 10
dd command: dd if=/dev/zero of=mkfile.dat bs=35840 count=100000 2>/dev/null
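perf.pl itself isn't reproduced here, but each iteration essentially times a single dd write and converts the elapsed time into the capacity and transfer-rate columns shown in the tables below. A minimal shell sketch of one iteration, assuming that behaviour (the variable names and timing method are mine, not the script's):

```sh
#!/bin/sh
# Sketch of one iteration: time a single dd write, then derive the
# Capacity (MB), MB/s and Mb/s values the way the tables report them.
BS=35840              # dd block size in bytes
COUNT=100000          # dd blocks per file
DIR=/mnt/nfs3/testing

cd "$DIR" || exit 1

START=$(date +%s)
dd if=/dev/zero of=mkfile.dat bs=$BS count=$COUNT 2>/dev/null
END=$(date +%s)
SECS=$((END - START))

# Capacity (MB) = bs * count / 2^20; Mb/s = MB/s * 8
echo "$BS $COUNT $SECS" | awk '{ mb = $1 * $2 / 1048576;
  printf "capacity=%.3f MB  seconds=%d  rate=%.2f MB/s (%.2f Mb/s)\n",
         mb, $3, mb / $3, (mb / $3) * 8 }'
```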
| Block Size | Block Count | Capacity (MB) | Iteration | Total Seconds | Transfer Rate (MB/s) | Transfer Rate (Mb/s) |
|---|---|---|---|---|---|---|
| 35840 | 100000 | 3417.969 | 1 | 53 | 64.48997642 | 515.9198113 |
| 35840 | 100000 | 3417.969 | 2 | 53 | 64.48997642 | 515.9198113 |
| 35840 | 100000 | 3417.969 | 3 | 53 | 64.48997642 | 515.9198113 |
| 35840 | 100000 | 3417.969 | 4 | 54 | 63.29571759 | 506.3657407 |
| 35840 | 100000 | 3417.969 | 5 | 53 | 64.48997642 | 515.9198113 |
| 35840 | 100000 | 3417.969 | 6 | 53 | 64.48997642 | 515.9198113 |
| 35840 | 100000 | 3417.969 | 7 | 54 | 63.29571759 | 506.3657407 |
| 35840 | 100000 | 3417.969 | 8 | 53 | 64.48997642 | 515.9198113 |
| 35840 | 100000 | 3417.969 | 9 | 53 | 64.48997642 | 515.9198113 |
| 35840 | 100000 | 3417.969 | 10 | 55 | 62.14488636 | 497.1590909 |
| | | | Average | 53.4 | 64.01661565 | 512.1329252 |
Changing the dd block size to fit within the v3xfersize transfer size, we see slightly faster transfer times:
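Note that as the block size changes between tests, the block count is scaled so that each run writes roughly the same total amount of data. A quick sanity check against the Capacity (MB) column (which is MiB, i.e. 2^20 bytes):

```sh
# 8192 B x 400000, 32768 B x 100000 and 65536 B x 50000 all write the same amount:
echo $(( 8192 * 400000 )) $(( 32768 * 100000 )) $(( 65536 * 50000 ))
# -> 3276800000 3276800000 3276800000 bytes, i.e. 3276800000 / 1048576 = 3125.000 MB
```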
Command: ./perf.pl 8192 400000 1 1 10
Running Parameters:
basedir = /mnt/nfs3/testing
dd blocksize = 8192
dd blocksperfile = 400000
total files = 1
files per directory = 1
loop iterations = 10
| Block Size | Block Count | Capacity (MB) | Iteration | Total Seconds | Transfer Rate (MB/s) | Transfer Rate (Mb/s) |
|---|---|---|---|---|---|---|
| 8192 | 400000 | 3125.000 | 1 | 48 | 65.10416667 | 520.8333333 |
| 8192 | 400000 | 3125.000 | 2 | 47 | 66.4893617 | 531.9148936 |
| 8192 | 400000 | 3125.000 | 3 | 48 | 65.10416667 | 520.8333333 |
| 8192 | 400000 | 3125.000 | 4 | 48 | 65.10416667 | 520.8333333 |
| 8192 | 400000 | 3125.000 | 5 | 47 | 66.4893617 | 531.9148936 |
| 8192 | 400000 | 3125.000 | 6 | 48 | 65.10416667 | 520.8333333 |
| 8192 | 400000 | 3125.000 | 7 | 47 | 66.4893617 | 531.9148936 |
| 8192 | 400000 | 3125.000 | 8 | 48 | 65.10416667 | 520.8333333 |
| 8192 | 400000 | 3125.000 | 9 | 47 | 66.4893617 | 531.9148936 |
| 8192 | 400000 | 3125.000 | 10 | 48 | 65.10416667 | 520.8333333 |
| | | | Average | 47.6 | 65.65824468 | 525.2659574 |
Command: ./perf.pl 32768 100000 1 1 10
Running Parameters:
basedir = /mnt/nfs3/testing
dd blocksize = 32768
dd blocksperfile = 100000
total files = 1
files per directory = 1
loop iterations = 10
| Block Size | Block Count | Capacity (MB) | Iteration | Total Seconds | Transfer Rate (MB/s) | Transfer Rate (Mb/s) |
|---|---|---|---|---|---|---|
| 32768 | 100000 | 3125.000 | 1 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 2 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 3 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 4 | 50 | 62.5 | 500 |
| 32768 | 100000 | 3125.000 | 5 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 6 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 7 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 8 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 9 | 48 | 65.10416667 | 520.8333333 |
| 32768 | 100000 | 3125.000 | 10 | 49 | 63.7755102 | 510.2040816 |
| | | | Average | 48.3 | 64.71088435 | 517.6870748 |
Command: ./perf.pl 65536 50000 1 1 10
Running Parameters:
basedir = /mnt/nfs3/testing
dd blocksize = 65536
dd blocksperfile = 50000
total files = 1
files per directory = 1
loop iterations = 10
| Block Size | Block Count | Capacity (MB) | Iteration | Total Seconds | Transfer Rate (MB/s) | Transfer Rate (Mb/s) |
|---|---|---|---|---|---|---|
| 65536 | 50000 | 3125.000 | 1 | 47 | 66.4893617 | 531.9148936 |
| 65536 | 50000 | 3125.000 | 2 | 48 | 65.10416667 | 520.8333333 |
| 65536 | 50000 | 3125.000 | 3 | 48 | 65.10416667 | 520.8333333 |
| 65536 | 50000 | 3125.000 | 4 | 49 | 63.7755102 | 510.2040816 |
| 65536 | 50000 | 3125.000 | 5 | 48 | 65.10416667 | 520.8333333 |
| 65536 | 50000 | 3125.000 | 6 | 48 | 65.10416667 | 520.8333333 |
| 65536 | 50000 | 3125.000 | 7 | 47 | 66.4893617 | 531.9148936 |
| 65536 | 50000 | 3125.000 | 8 | 48 | 65.10416667 | 520.8333333 |
| 65536 | 50000 | 3125.000 | 9 | 47 | 66.4893617 | 531.9148936 |
| 65536 | 50000 | 3125.000 | 10 | 48 | 65.10416667 | 520.8333333 |
| | | | Average | 47.8 | 65.38685953 | 523.0948762 |
And going back to another randomly chosen I/O block size, again larger than the 32768 byte transfer size:
Command: ./perf.pl 75368 50000 1 1 10
Running Parameters:
basedir = /mnt/nfs3/testing
dd blocksize = 75368
dd blocksperfile = 50000
total files = 1
files per directory = 1
loop iterations = 10
| Block Size | Block Count | Capacity (MB) | Iteration | Total Seconds | Transfer Rate (MB/s) | Transfer Rate (Mb/s) |
|---|---|---|---|---|---|---|
| 75368 | 50000 | 3593.826 | 1 | 56 | 64.17546953 | 513.4037563 |
| 75368 | 50000 | 3593.826 | 2 | 56 | 64.17546953 | 513.4037563 |
| 75368 | 50000 | 3593.826 | 3 | 58 | 61.96252231 | 495.7001785 |
| 75368 | 50000 | 3593.826 | 4 | 56 | 64.17546953 | 513.4037563 |
| 75368 | 50000 | 3593.826 | 5 | 55 | 65.34229625 | 522.73837 |
| 75368 | 50000 | 3593.826 | 6 | 55 | 65.34229625 | 522.73837 |
| 75368 | 50000 | 3593.826 | 7 | 55 | 65.34229625 | 522.73837 |
| 75368 | 50000 | 3593.826 | 8 | 57 | 63.0495841 | 504.3966728 |
| 75368 | 50000 | 3593.826 | 9 | 55 | 65.34229625 | 522.73837 |
| 75368 | 50000 | 3593.826 | 10 | 55 | 65.34229625 | 522.73837 |
| | | | Average | 55.8 | 64.42499963 | 515.399997 |
Note that while sniffing the NS20 port for NFS traffic, all write requests came across as 32768 bytes on the wire, regardless of the dd block size used on the client. I expected to see better performance with block sizes aligned to the v3xfersize parameter. The slight anomaly was at 32768, where the average transfer rate dropped slightly compared to the 8192 and 65536 block sizes.
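That behaviour is expected for NFSv3: the client splits large writes into WRITE calls no bigger than the negotiated transfer size, which is the smaller of the client's wsize mount option and the server's maximum (v3xfersize). If you want to reproduce the sniffing step, something along these lines works; the interface name, server name, and export path below are placeholders for your own environment:

```sh
# Capture the NFS traffic for inspection in Wireshark; the WRITE call's
# count field shows the on-the-wire transfer size (32768 here).
tcpdump -s 0 -i eth0 -w nfs-writes.pcap port 2049

# Client-side counterpart of v3xfersize: the rsize/wsize mount options.
# The effective transfer size is the minimum of what the client asks for
# and what the server allows.
mount -t nfs -o vers=3,rsize=32768,wsize=32768 ns20:/testing /mnt/nfs3/testing
```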
All in all, though, I got fairly consistent transfer rates across the various test cases: the averages ranged from roughly 64.0 MB/s at 35840 bytes to 65.7 MB/s at 8192 bytes, with the two block sizes that don't align with the 32768 byte transfer size (35840 and 75368) coming in at the low end.