Thursday, June 4, 2009

PCS - Predictive Cache Statistics for PAM cards

This post continues the last part, where I covered the details of PAM. In this part I will cover PCS, which you can use to determine whether a PAM module will benefit your workload before you order one from NetApp.

PCS: Determining If PAM Will Improve Performance

To determine whether your storage systems can benefit from added cache, NetApp has developed its Predictive Cache Statistics software, which is currently available in Data ONTAP 7.3 and later releases. PCS allows you to predict the effects of adding the cache equivalent of two, four, and eight times system memory.

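For example, on a controller with 4GB of system memory, the simulated caches would correspond to totals of roughly 8GB (2x), 16GB (4x), and 32GB (8x); the 4GB figure is just an illustration, and the simulated sizes scale with however much memory your system actually has.
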
Using PCS, you can determine whether PAM will improve performance for your workloads and decide how many modules you will need. PCS also allows you to test the different modes of operation to determine whether the default, metadata, or low-priority mode is best.

To begin using PCS, you enable the feature with the command:

options flexscale.enable pcs

Don’t enable PCS if your storage system is consistently above 80% CPU utilization. Once PCS is enabled, you have to let the simulated cache “warm up,” that is, gather data blocks. Once the cache is warmed up, you can review and analyze the data with the perfstat tool.

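If you collect the data with perfstat, an invocation along these lines is typical; the filer name, duration, and iteration count below are only placeholders, and the exact flags depend on the perfstat version you are using:

perfstat -f filer1 -t 5 -i 4 > pcs_perfstat.out
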
This procedure simulates caching using the default caching mode that includes both metadata and normal user data. You can also test the other operating modes.

To enable metadata mode:

options flexscale.normal_data_blocks off

To enable low-priority mode:

options flexscale.normal_data_blocks on
options flexscale.lopri_blocks on

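To go back to the default mode after trying these, you would turn low-priority caching off again and leave normal data blocks on (this simply reverses the two options above; double-check the defaults against your Data ONTAP documentation):

options flexscale.normal_data_blocks on
options flexscale.lopri_blocks off
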
Once you have completed testing, disable PCS:

options flexscale.enable off

With PCS enabled, you can monitor the simulated caches using the following command:

stats show -p flexscale-pcs

Sample output is shown below:

Example PCS output.

Use the following guidelines to help you interpret the data; a short worked example follows the list:

· If the hit/(invalidate+evict) ratio is small, then a lot of data is being discarded before it is used. The corresponding cache instance (ec0, ec1, or ec2) may be too small.

· If the (hit+miss)/invalidate ratio is too small, it might indicate a workload with a large number of updates; switch to metadata mode and check the hit% again.

· If the usage is stable and there are a small number of invalidates and evictions, then the working set fits well in the simulated cache.

· The KB/s served by the cache is approximately equal to the hit/s × 4KB per block.

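As a quick worked example with purely hypothetical counter values: suppose an instance shows 9,000 hits, 1,000 misses, 400 invalidates, and 100 evicts over an interval. Then hit/(invalidate+evict) = 9,000/500 = 18 and (hit+miss)/invalidate = 10,000/400 = 25, both comfortably large; if the second ratio were instead close to 1, you would suspect an update-heavy workload and retest in metadata mode.
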
Note that the three caches simulated in PCS are cascading caches. In the example above, ec0 represents the first cache of size 8GB, ec1 represents the second cache of size 8GB, and ec2 represents the third cache of size 16GB. The hits per second for a 32GB cache are the sum of the hits per second across all three caches. The key advantage of cascading caches is that in the process of measuring an accurate hit rate for a 32GB cache, we also obtain hit rate estimates for both 8GB and 16GB caches. This gives us three points on the hit rate curve and the ability to estimate hit rates for intermediate cache sizes.

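To make the cascading behavior concrete with hypothetical numbers: if ec0 shows 1,200 hits/s, ec1 shows 400 hits/s, and ec2 shows 200 hits/s, then the 8GB estimate is ec0 alone (1,200 hits/s), the 16GB estimate is ec0+ec1 (1,600 hits/s), and the 32GB estimate is all three (1,800 hits/s, or roughly 7,200KB/s served from cache at 4KB per block).
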
PAM and FlexShare

FlexShare™ is a Data ONTAP option that allows you to set priorities for system resources (processors, memory, and I/O) at a volume level, thereby allocating more resources to workloads on particular volumes when the controller is under significant load. FlexShare is fully compatible with PAM, and settings made in FlexShare apply to the data kept in the PAM cache. With FlexShare, finer-grained control can be applied on top of the global policies you implement with PAM. For example, if an individual volume is given a higher priority with FlexShare, data from that volume will receive a higher priority in the cache.

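For example, assuming a volume named vol_db (a placeholder), you would enable FlexShare and raise that volume's priority with commands along these lines; check the priority command options for your Data ONTAP release:

priority on
priority set volume vol_db level=high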