Multi-Client iSCSI Evaluation

As virtualization becomes increasingly popular even in home and power-user settings, the iSCSI feature set of any COTS NAS is an important consideration. Starting with our ioSafe 1513+ review, we have devoted a separate section (in reviews of NAS units targeting SMBs and SMEs) to the evaluation of iSCSI performance. QNAP supports only one type of iSCSI LUN: regular files.

We evaluated the performance of the QNAP TS-853 Pro with file-based LUNs, using the same standard IOMeter benchmarks employed for our multi-client CIFS evaluation. The main difference to note is that the CIFS evaluation was performed on a mounted network share, while the iSCSI evaluation was done on a 'clean physical disk' (from the viewpoint of the virtual machine).

Performance Numbers

The four IOMeter traces were run on the physical disk presented to each VM by mapping the iSCSI target. The benchmarking started with one VM accessing the NAS, and the number of VMs simultaneously playing out the trace was incremented one by one until all 25 VMs were in the fray. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:
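The incremental scaling procedure above can be sketched as follows. The per-VM and ceiling throughput figures here are synthetic placeholders (the real numbers are in the linked IOMeter listings), chosen only to illustrate how aggregate throughput flattens once the shared NAS/network ceiling is reached:

```python
# Sketch of the 1-to-25-VM scaling methodology described above.
# The throughput model is synthetic and illustrative only; actual results
# come from the IOMeter runs linked below.

def aggregate_throughput(per_vm_mbps, ceiling_mbps, num_vms):
    """Total throughput (MBps) when num_vms clients replay the trace at
    once, capped by the shared NAS/network ceiling."""
    return min(per_vm_mbps * num_vms, ceiling_mbps)

# Replay the trace with 1, 2, ... 25 VMs, as in the article's methodology.
results = {n: aggregate_throughput(110.0, 440.0, n) for n in range(1, 26)}

# Throughput scales linearly until the shared ceiling is hit, then plateaus.
assert results[1] == 110.0
assert results[4] == 440.0
assert results[25] == 440.0
```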

QNAP TS-853 Pro - LUNs (Regular Files) - Multi-Client Performance - 100% Sequential Reads

QNAP TS-853 Pro - LUNs (Regular Files) - Multi-Client Performance - Max Throughput - 50% Reads

QNAP TS-853 Pro - LUNs (Regular Files) - Multi-Client Performance - Random 8K - 70% Reads

QNAP TS-853 Pro - LUNs (Regular Files) - Multi-Client Performance - Real Life - 65% Reads

QNAP's implementation delivers very good results for purely sequential workloads compared to Synology; however, the latter wins out when it comes to random workloads. On the whole, the performance of iSCSI LUNs in QTS is satisfactory, and features such as VAAI and ODX are available. That said, Synology edges out QNAP by offering multiple ways of configuring iSCSI LUNs, each striking a different balance between performance and flexibility.

Comments

  • ap90033 - Wednesday, December 31, 2014 - link

    RAID is not a REPLACEMENT for BACKUP and BACKUP is not a REPLACEMENT for RAID.... RAID 5 can be perfectly fine... Especially if you have it backed up. ;)
  • shodanshok - Wednesday, December 31, 2014 - link

    I think you should consider RAID10: recovery is much faster (the system "only" needs to copy the contents of one disk to another) and the URE-imposed threat is way lower.

    Moreover, remember that large RAIDZ arrays have the IOPS of a single disk. While you can use a large ZIL device to transform random writes into sequential ones, the moment you hit the platters the low IOPS performance can bite you.

    For reference: https://blogs.oracle.com/roch/entry/when_to_and_no...
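The URE exposure mentioned in the comment above can be made concrete with a back-of-the-envelope sketch. The 1-in-1e14 URE rate and 4 TB disk capacity are assumed, consumer-class figures for illustration, not measurements from this review:

```python
# Rough sketch of rebuild-time URE exposure for RAID5 vs RAID10.
# Assumptions (illustrative): URE rate of 1 per 1e14 bits read, 4 TB disks.

URE_RATE = 1e-14          # unrecoverable read errors per bit read
DISK_BITS = 4e12 * 8      # bits per 4 TB disk

def p_ure_during_rebuild(disks_read):
    """Probability of hitting at least one URE while reading
    `disks_read` full disks during a rebuild."""
    return 1 - (1 - URE_RATE) ** (DISK_BITS * disks_read)

# A RAID5 rebuild on a 5-disk array must read all 4 surviving disks in
# full; a RAID10 rebuild reads only the failed disk's mirror partner.
raid5 = p_ure_during_rebuild(4)
raid10 = p_ure_during_rebuild(1)
assert raid10 < raid5
```

Under these assumptions the RAID5 rebuild has a substantially higher chance of tripping over a URE, which is the "URE-imposed threat" the comment refers to.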
  • shodanshok - Wednesday, December 31, 2014 - link

    I agree.

    The only thing to remember when using large RAIDZ systems is that, by design, RAIDZ arrays have the IOPS of a single disk, no matter how many disks you throw at them (throughput will increase linearly, though). For increased IOPS capability, you should construct your ZPOOL from multiple, striped RAIDZ arrays (similar to how RAID50/RAID60 work).

    For more information: https://blogs.oracle.com/roch/entry/when_to_and_no...
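The IOPS rule of thumb in the comment above is easy to illustrate with a quick sketch. The 150 IOPS figure is an assumed nominal value for a 7200 RPM disk, used here only to show the scaling:

```python
# Illustration of the RAIDZ IOPS rule of thumb: each RAIDZ vdev delivers
# roughly the random IOPS of a single member disk, so pool IOPS scale with
# the number of striped vdevs, not the total disk count.

PER_DISK_IOPS = 150  # assumed nominal random IOPS of one 7200 RPM disk

def pool_random_iops(num_vdevs):
    """Approximate random IOPS of a ZPOOL built from striped RAIDZ vdevs."""
    return num_vdevs * PER_DISK_IOPS

# 12 disks as one wide RAIDZ2 vdev: roughly a single disk's IOPS.
assert pool_random_iops(1) == 150
# The same 12 disks as two striped 6-disk RAIDZ2 vdevs: roughly twice that.
assert pool_random_iops(2) == 300
```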
  • ap90033 - Friday, January 2, 2015 - link

    That is why RAID is not Backup and Backup is not RAID. ;)
  • cjs150 - Wednesday, January 7, 2015 - link

    Totally agree. As a home user, RAID5 on a 4-bay NAS unit is fine, but I have had it fall over twice in 4 years: once when a disk failed, and a second time when a disk worked loose (probably my fault). The failure was picked up, the disk replaced and the RAID rebuilt. Once you have 5+ disks, RAID5 is too risky for me.
  • jwcalla - Monday, December 29, 2014 - link

    Just doing some research and it's impossible to find out if this has ECC RAM or not, which is usually a good indication that it doesn't. (Which is kind of surprising for the price.)

    I don't know why they even bother making storage systems w/o ECC RAM. It's like saying, "Hey, let's set up this empty fire extinguisher here in the kitchen... you know... just in case."
  • Brett Howse - Monday, December 29, 2014 - link

    The J1900 doesn't support ECC:
    http://ark.intel.com/products/78867/Intel-Celeron-...
  • icrf - Monday, December 29, 2014 - link

    I thought the whole "ECC required for a reliable file system" was really only a thing for ZFS, and even then, only barely, with dangers generally over-stated.
  • shodanshok - Wednesday, December 31, 2014 - link

    It's not over-stated: any filesystem that proactively scrubs the disk/array subsystem (BTRFS and ZFS, at the moment) _needs_ ECC memory.

    While you can ignore this on a client system (where the value of the corrupted data is probably low), on a NAS or multi-user storage system ECC is almost mandatory.

    This is the very same reason why hardware RAID cards have ECC memory: when they scrub the disks, any memory-related corruption can wreak havoc on array (and data) integrity.

    Regards.
  • creed3020 - Monday, December 29, 2014 - link

    I hope that Synology is working on something similar to the QvM solution here. The day I started my Synology NAS was the day I shut down my Windows Server. I would, however, still love to have an always-on Windows machine for the use cases that my NAS cannot perform, or that would be onerous to set up and get running.
