[PLUG-TALK] SSD lstat performance questions and seeking hard proof

Chris Schafer xophere at gmail.com
Sun Nov 27 04:29:53 UTC 2016


OK, this was some good information.  I think if you aren't doing HA or
running more than one VMware host, getting off a 1GbE NFS share will likely
help a lot.  I would suggest getting SSD just for the data that needs it,
or finding a filesystem that will auto-tier it for you.

I would recommend getting data on the number of IOPS and the size of the
reads you are doing.  That will help you design the right solution.
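
If you don't have better tooling handy, something quick and dirty like the
Python below will give you a rough read-IOPS figure and average read size
from /proc/diskstats.  Treat it as a sketch: "sda" is just a placeholder
for whatever device backs the VM data, and the interval is arbitrary.

#!/usr/bin/env python3
# Rough read-IOPS / average-read-size sampler from /proc/diskstats.
import time

DEV = "sda"        # placeholder: use the disk that backs the VM data
INTERVAL = 5       # seconds between samples

def read_stats(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                # fields[3] = reads completed, fields[5] = sectors read
                # (sectors are 512 bytes each)
                return int(fields[3]), int(fields[5])
    raise SystemExit("device %s not found" % dev)

r1, s1 = read_stats(DEV)
time.sleep(INTERVAL)
r2, s2 = read_stats(DEV)

reads = r2 - r1
iops = reads / INTERVAL
avg_kb = (s2 - s1) * 512 / 1024 / reads if reads else 0.0
print("read IOPS: %.1f  avg read size: %.1f KiB" % (iops, avg_kb))

Run it inside the guest (and on the NFS server if you can) while the app is
under its normal load, so the numbers reflect the workload you actually
need to design for.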

Why not just add SSD to what you have and keep the lower-priority data on
the array?

Does the array have any decent tools for performance monitoring?

I guess you do need to get some better data on where the bottleneck is.

My thought was that if you weren't sure whether the app would scale with
better storage, AWS could answer that.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

I am presuming you have tried something like this:

http://bencane.com/2012/08/06/troubleshooting-high-io-wait-in-linux/
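
That write-up leans on iostat and iotop.  If you just want a number to
paste into a reply, a minimal sketch like this (sampling the aggregate cpu
line in /proc/stat) will show the iowait percentage over a few seconds:

#!/usr/bin/env python3
# Quick iowait check -- roughly the %wa column from top or iostat.
import time

def cpu_times():
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]  # "cpu" line

a = cpu_times()
time.sleep(5)
b = cpu_times()
delta = [y - x for x, y in zip(a, b)]
total = sum(delta)
iowait_pct = 100.0 * delta[4] / total if total else 0.0  # 5th field is iowait
print("iowait over the last 5 seconds: %.1f%%" % iowait_pct)

High iowait with low disk throughput usually means lots of small or random
requests; high iowait with the NAS links saturated points at the network.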

I wonder if your VMs are short of memory as well.
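
A quick way to rule that out is to compare MemAvailable with MemTotal
inside each guest.  A minimal sketch, assuming a kernel new enough to
report MemAvailable (older ones only have MemFree):

#!/usr/bin/env python3
# Check for memory pressure inside a guest via /proc/meminfo.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])      # values are in kB
    return info

m = meminfo()
total = m["MemTotal"]
avail = m.get("MemAvailable", m["MemFree"])        # fall back on old kernels
print("MemTotal: %d MiB  available: %d MiB (%.0f%%)"
      % (total / 1024, avail / 1024, 100.0 * avail / total))

If available memory is consistently low there is little room for page
cache, and reads that could have been served from memory end up hitting the
NAS again.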

So do you know how busy the 1GbE NAS links are?  That might tell you
something.  If they are running at line speed, you may just need a bigger
pipe.
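
If the array or the switch doesn't give you counters, you can sample
/proc/net/dev on the hosts.  Again just a sketch; "eth0" is a placeholder
for whichever interface carries the NFS traffic:

#!/usr/bin/env python3
# Rough utilisation sampler for a 1GbE link, from /proc/net/dev counters.
import time

IFACE = "eth0"          # placeholder: the interface carrying NFS traffic
INTERVAL = 5            # seconds between samples
LINE_RATE = 1e9 / 8     # 1GbE in bytes per second

def byte_counters(iface):
    with open("/proc/net/dev") as f:
        for line in f:
            if ":" not in line:
                continue
            name, rest = line.split(":", 1)
            if name.strip() == iface:
                fields = rest.split()
                return int(fields[0]), int(fields[8])   # rx bytes, tx bytes
    raise SystemExit("interface %s not found" % iface)

rx1, tx1 = byte_counters(IFACE)
time.sleep(INTERVAL)
rx2, tx2 = byte_counters(IFACE)

rx_rate = (rx2 - rx1) / INTERVAL
tx_rate = (tx2 - tx1) / INTERVAL
print("rx %.1f MB/s (%.0f%% of line rate)  tx %.1f MB/s (%.0f%%)"
      % (rx_rate / 1e6, 100 * rx_rate / LINE_RATE,
         tx_rate / 1e6, 100 * tx_rate / LINE_RATE))

If either direction sits near line rate during the slow periods, the 1GbE
link itself is the first thing to fix.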

I would think that many spindles at that speed would outrun 1GbE,
especially if we are talking about reads rather than writes.