Performance Improvements and Enhancements of Enginuity 5874 for Symmetrix VMAX: What’s New Since Last Year

Each engine comprises two directors, and a system scales up to 8 engines.

  • Up to 16 physical/virtual front-end connections
  • Up to 16 back-end FC connections
  • Four quad-core processors (2.33 GHz) per engine
  • Global memory is mirrored to a different director for full redundancy in the case of a director failure
  • The Virtual Matrix interface connects to the redundant RapidIO backplane


  • 1-8 Engines
  • 48-2400 disk drives
  • 128 FC front-end ports
  • 64 FICON front-end ports
  • 64 GigE/iSCSI front-end ports
  • 200/400GB EFD drives
  • 146/300/450/600GB 15k FC drives
  • 400/450/600GB 10k FC drives
  • 1TB SATA drives

Front-end Connectivity

  • zHPF
    • High Performance FICON; encapsulates CCWs (channel command words)
    • IBM licenses this as a separate feature; it is included with VMAX
  • 8 Gb Fibre Channel
    • Go wide across the A ports before going deep to the B ports
    • Mix different systems across the A and B ports
    • Take care to provide enough ports when planning frame configurations
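The "wide before deep" guidance above can be sketched as a simple ordering loop. This is only an illustration of the assignment order, not symcli output; the engine numbers and A/B port labels are illustrative:

```shell
# Hypothetical sketch of "go wide before deep": fill the A port on every
# engine before touching any B port (engine/port labels are illustrative).
ORDER=""
for port in A B; do            # depth dimension: A ports first, then B
  for engine in 1 2 3 4; do    # width dimension: spread across engines
    ORDER="$ORDER ${engine}${port}"
  done
done
echo "connection order:$ORDER"  # 1A 2A 3A 4A before any B port is used
```

Spreading connections across engines first keeps load balanced across directors before any single director takes a second port's worth of traffic.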

SRDF Software Compression

  • Enginuity 5874.207 introduced SRDF software compression; it is off by default
  • Can be enabled via inlines
    • Cannot be done via symcli
  • Can be turned on for SRDF/A or adaptive copy (ACP)
  • Not recommended for SRDF/S, but can be enabled
    • There is overhead associated with it
    • SRDF/S environments typically have more bandwidth

Asynchronous Copy on First Write (CoFW) with Virtual Provisioning

  • A versioned write-pending slot is set aside for performing this activity
  • Removes the ongoing impact of CoFW; the only impact is at the time of activation

Fully Automated Storage Tiering (FAST)

  • The first release works on a full-volume basis
    • Unsure if this has changed
  • Unlike Symmetrix Optimizer, FAST will swap between classes of disk technology
  • Uses multiple algorithms to make swap/move decisions
  • Supported for mainframe in VMAX

Extended Drive Loop Option

  • At GA, the first four VMAX engines had a maximum of one direct and one daisy-chained drive bay
  • The first four VMAX engines may now have up to four daisy-chained bays added
  • After the third drive bay is attached, the system is limited to four engines
    • This is because of cabling length limitations
  • Considered a capacity configuration (versus a performance configuration)
  • Response times don’t suffer as you go out to more DAEs (deeper in the drive loop)

Virtual Provisioning Zero Reclaim

  • Returns (de-allocates) thin extents that contain all zeros to the free pool list
  • Filesystems will need to zero out deleted files in order for this to happen
  • User initiated:
    • symconfigure commit -cmd "free tdev 1E3:1F2 size=8192MB type=zero;"
  • Restrictions apply
    • TDEVs with ready RDF
    • Two others I missed
    • Two others I missed
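Since reclaim only returns extents that read as all zeros, a host-side zero-fill pass is the usual way to make deleted-file space eligible before running the symconfigure command above. A minimal sketch, assuming a filesystem mounted at a hypothetical path `/mnt/thin_fs` that sits on a TDEV:

```shell
# Hypothetical sketch: zero-fill a filesystem's free space so that blocks
# from deleted files read back as zeros and become reclaimable.
# MNT is an assumed mount point on a thin (TDEV-backed) filesystem.
MNT="${MNT:-/mnt/thin_fs}"
dd if=/dev/zero of="$MNT/zerofill.tmp" bs=1M 2>/dev/null || true  # runs until the filesystem is full
sync                         # flush the zeros out to the array
rm -f "$MNT/zerofill.tmp"    # the freed blocks stay zeroed on disk
```

After this, the zero-reclaim operation can find and de-allocate the all-zero thin extents.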

Virtual Provisioning Write Level Rate

  • Devices are added to the virtual provisioning pool and writes are balanced across existing and new devices
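The balancing described above amounts to rotating new extent allocations across all pool members, old and new alike. A rough sketch with illustrative device names (not symcli syntax):

```shell
# Hypothetical sketch of pool write leveling: after a new device joins the
# pool, new extent allocations rotate across old and new members alike.
POOL="dev1 dev2 dev3 dev4"    # dev4 newly added; names are illustrative
PICKS=""
for cycle in 1 2; do          # each cycle allocates one extent per device
  for dev in $POOL; do
    PICKS="$PICKS $dev"
  done
done
echo "allocation order:$PICKS"
```

The new device receives its proportional share of incoming writes without any manual rebalancing step.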

Enginuity Performance Enhancements from GA

  • Enhancements to locality within cache/directors (accessing data from memory on the associated director versus the other director through the Rapid IO matrix)
