CLARiiON Enterprise Flash Drives: Achieve Highest Performance

Missed the first 10-15 minutes of the session. Notes begin thereafter.

Lab Test Scenario

RAID 5 (4+1), 73 GB EFD vs. 15k RPM FC

SP cache off for EFD (the default), on for the FC LUNs

Random, multi-threaded 8 KB test workloads

IO Type             FC resp. time (ms)   FC IOPS   EFD resp. time (ms)   EFD IOPS   EFD IOPS per drive
Read                22                   1,450     1.1                   30,000     6,000
Write               42                   590       6.1                   15,000     3,000
Read/Write (60/40)  25                   1,000     3.0                   7,500      1,500

Note: the EFD IOPS numbers are roughly accurate from what the presenter showed; I didn't have enough time to capture them all.

Bandwidth and Rules of Thumb

  • Bandwidth is not EFD's strongest suit
  • FC drives are very effective at low stream counts, with SP caching
  • EFD beats FC when multiple sequential streams are running, even without SP caching
  • For simple estimation of all-EFD, mixed read/write workloads, figure 2,500 IOPS and 100 MB/s per drive
  • Compare to roughly 180 IOPS for a 15k FC drive
  • Remember this is a back-end measure; you still need to add RAID parity penalty calculations
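To make the parity penalty concrete, here is a minimal sketch of the front-end to back-end conversion. The 2,500 IOPS/drive figure comes from the session; the write penalties are the standard textbook values (an R5 small write costs 4 back-end I/Os: read data, read parity, write data, write parity; R10 costs 2). The function and constants are my own illustration, not from the presentation.

```python
# Rough front-end IOPS estimate for an EFD RAID group, using the
# back-end rule of thumb from the session (~2,500 IOPS per drive).
BACKEND_IOPS_PER_EFD = 2500
WRITE_PENALTY = {"r5": 4, "r10": 2}   # back-end I/Os per host write

def frontend_iops(drives: int, read_fraction: float, raid: str = "r5") -> float:
    """Host-visible IOPS a group can sustain for a given read/write mix."""
    backend_budget = drives * BACKEND_IOPS_PER_EFD
    # Each host read costs 1 back-end I/O; each host write costs the penalty.
    backend_per_host_io = read_fraction * 1 + (1 - read_fraction) * WRITE_PENALTY[raid]
    return backend_budget / backend_per_host_io

# R5 4+1 (5 drives) at the session's 60/40 read/write mix:
print(round(frontend_iops(5, 0.60, "r5")))   # ~5,682 host IOPS
```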

System Limitations

  • Ports – FC connections, 4 Gb/s bandwidth, 75,000 IOPS
  • SP – CPU processing rate, memory system bandwidth
  • Buses – FC loops, 4 Gb/s bandwidth, 75,000 IOPS
  • Limits can be reached with a large number of HDDs or a small number of EFDs
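Some back-of-the-envelope math (my own arithmetic, using the slide's limits and the per-drive rules of thumb above) shows why a handful of EFDs can hit limits that would take hundreds of HDDs. The ~400 MB/s usable figure for a 4 Gb/s FC link is my approximation.

```python
# How many drives does it take to hit one bus/port limit?
BUS_IOPS_LIMIT = 75_000
BUS_MBPS_LIMIT = 400          # ~4 Gb/s Fibre Channel, usable payload (approx.)

EFD_IOPS, EFD_MBPS = 2_500, 100   # session rules of thumb, per EFD
HDD_IOPS = 180                     # 15k FC drive, per the session

print(BUS_IOPS_LIMIT // EFD_IOPS)   # 30 EFDs saturate the IOPS limit
print(BUS_IOPS_LIMIT // HDD_IOPS)   # vs. 416 HDDs for the same limit
print(BUS_MBPS_LIMIT // EFD_MBPS)   # only 4 EFDs saturate the bandwidth
```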

Configuring Cache

Default – cache is off for EFD LUNs. Not necessarily a best practice, just the default.

Why?

  • Uncached EFD performance is excellent
  • Allocate cache pages to the HDD LUNs that benefit more from them

When to change the default:

  • Sequential reads
    • Prefetch creates multiple concurrent requests from a single-threaded load
    • Prefetch multipliers can be tuned to suit EFDs: more, smaller requests
  • Response-time-critical writes
    • SP cache can hide the RAID parity overhead
    • Helps bring the slower write times in line with EFD's fast read times
  • EFD-only array (there are no HDD LUNs to give the cache pages to instead)
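As a toy illustration of the write-caching point (a minimal sketch with assumed latencies, not EMC figures): a write absorbed by SP write cache acknowledges at memory speed, while an uncached R5 EFD write pays the full parity overhead, so even a modest cache hit rate pulls the average write time toward the read times in the table above.

```python
# Toy model: effective write response time with and without SP write cache.
CACHE_ACK_MS = 0.3        # write-back ack from SP mirrored cache (assumed)
EFD_R5_WRITE_MS = 6.1     # uncached R5 EFD write, from the session's table

def effective_write_ms(cache_hit_fraction: float) -> float:
    """Blend of cache-speed acks and uncached (destaged) writes."""
    return (cache_hit_fraction * CACHE_ACK_MS
            + (1 - cache_hit_fraction) * EFD_R5_WRITE_MS)

for h in (0.0, 0.5, 0.9):
    print(f"{h:.0%} cached: {effective_write_ms(h):.2f} ms")
# 0% cached: 6.10 ms, 50% cached: 3.20 ms, 90% cached: 0.88 ms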

Configuring Layout

  • All RAID types are available
    • Per-GB cost and low service times favor R5
    • A narrow R10 is a good option for small deployments; it gives the best write response time with cache off
    • The usual best practices still apply
  • LUNs
    • High concurrency gives the highest EFD throughput
    • Carve multiple LUNs from each RAID group
    • Use dual SP ownership
  • Bandwidth
    • Spread across buses in cases where bandwidth is required
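A minimal sketch of the layout advice in code form: carve multiple LUNs from each RAID group and alternate their default SP ownership so both storage processors share the load. The group and LUN names are hypothetical; this only illustrates the balancing idea, not an actual provisioning tool.

```python
# Alternate default SP ownership across LUNs carved from each RAID group,
# so both SPs (and their front-end ports) share the load.
raid_groups = ["RG0", "RG1"]   # e.g., two R5 4+1 EFD groups (hypothetical)
luns_per_rg = 2                # multiple LUNs per RG for concurrency

layout = []
for rg_index, rg in enumerate(raid_groups):
    for lun_index in range(luns_per_rg):
        sp = "SPA" if (rg_index * luns_per_rg + lun_index) % 2 == 0 else "SPB"
        layout.append((f"{rg}_LUN{lun_index}", rg, sp))

for lun, rg, sp in layout:
    print(f"{lun}: RAID group {rg}, default owner {sp}")
```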

Virtual Provisioning

  • Pool management overhead reduces maximum performance
  • Policy-based data placement means more performance variance
  • Makes sense when the very highest EFD performance is not required and provisioning convenience is the higher priority

EFD-friendly Workloads

  • High read ratio, with a high percentage of random reads
    • Reads are faster than writes on EFD
    • Large and small blocks are both winners
  • Requirement for very low latency
  • High concurrency
    • Many threads are needed to obtain the highest throughput
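The "many threads" requirement follows from Little's Law (outstanding I/Os = throughput × response time). A quick check against the read row of the table above, using my own arithmetic:

```python
# Little's Law: N = X * R
# EFD read numbers from the session's table: 30,000 IOPS at 1.1 ms.
iops = 30_000
resp_ms = 1.1
outstanding = iops * resp_ms / 1000   # response time converted to seconds
print(round(outstanding, 1))          # ~33 concurrent I/Os to sustain that rate
```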

Replication Advantage

  • COFW I/O operations on the source LUN contend with all other traffic and queue on the source drives
  • Use EFD for the source LUN and HDD for the reserved LUN pool; the fast COFW reads off the EFD source improve source-host response time
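For context, copy-on-first-write (COFW) turns the first host write to each chunk into extra back-end work: read the original chunk from the source LUN, preserve it in the reserved LUN pool, then apply the host write. A sketch of that sequence follows; the chunk size and data structures are hypothetical stand-ins, not CLARiiON internals.

```python
# Sketch of copy-on-first-write (COFW): the first write to a chunk copies
# the original data from the source LUN to the reserved LUN pool before
# the new data lands. With the source on EFD, that extra read is fast,
# which is the benefit the session called out.
source = {}       # chunk_id -> data (stands in for the source LUN)
save_area = {}    # chunk_id -> original data (reserved LUN pool)

def cofw_write(chunk_id: int, data: bytes) -> None:
    if chunk_id not in save_area and chunk_id in source:
        # First write since the snapshot: preserve the original copy
        # (read from source on EFD, write to save area on HDD).
        save_area[chunk_id] = source[chunk_id]
    source[chunk_id] = data   # apply the host write

source[0] = b"original"
cofw_write(0, b"new data")     # triggers the copy
cofw_write(0, b"newer data")   # no copy: chunk already preserved
print(save_area[0])            # b'original'
```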
