
Mount Sinai reduces genomic sequencing time


Mount Sinai School of Medicine in New York has implemented an Avere Systems FXT Series to reduce job processing times for I/O-intensive genomic sequencing research. Modern experimental technologies for genetic and genome sequencing projects produce massive datasets, requiring researchers to think not only about the biology of their projects but also about the technologies needed to analyse the hundreds of millions of genomic sequences generated.

‘Organisations performing data-intensive operations have for too long gotten by with a “good enough” approach to their computational needs,’ said Ron Bianchini, CEO at Avere. ‘They too often accept read/write times as fixed, turn to over-provisioning of disk-based systems or add expensive Flash aimlessly to make up for this loss. By implementing automatically tiered NAS appliances, like the FXT 2300, organisations like Mount Sinai School of Medicine can accelerate read and write workloads by dynamically moving data in real time to the most-appropriate storage tier to improve the cost/performance ratio without additional administrative overhead.’

The Avere FXT Series combines solid-state storage and traditional hard drives to optimise performance across all workload types without compromise. Reads, writes, and metadata are allocated to storage media via Avere's dynamic tiering. Allocation algorithms running on the FXT appliances monitor access-frequency patterns and workload type, and manage data placement across multiple internal tiers. This increases performance, distributes the workload across the cluster, and minimises requests to the mass storage server. Movement of data occurs automatically, in real time, at the file or block level.
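To illustrate the general idea behind frequency-driven tiering, here is a minimal sketch in Python. This is a hypothetical toy model, not Avere's actual algorithm: it assumes a simple fixed tier hierarchy and a made-up promotion threshold, and promotes a block one tier toward faster media once it has been accessed often enough.

```python
from collections import defaultdict

# Hypothetical tier hierarchy, fastest to slowest. The names and the
# promotion rule below are illustrative assumptions, not Avere's design.
TIERS = ["ram", "ssd", "hdd", "mass_storage"]

class TieringCache:
    """Toy model of access-frequency-based data placement."""

    def __init__(self, promote_threshold=3):
        self.access_counts = defaultdict(int)  # block id -> hits since last move
        self.placement = {}                    # block id -> current tier name
        self.promote_threshold = promote_threshold

    def access(self, block_id):
        """Record one access; promote the block when it gets hot enough."""
        self.access_counts[block_id] += 1
        tier = self.placement.get(block_id, "mass_storage")
        idx = TIERS.index(tier)
        # After enough hits, move one tier toward faster media and
        # reset the counter so promotion requires sustained activity.
        if self.access_counts[block_id] >= self.promote_threshold and idx > 0:
            idx -= 1
            self.access_counts[block_id] = 0
        self.placement[block_id] = TIERS[idx]
        return self.placement[block_id]

cache = TieringCache()
for _ in range(3):
    tier = cache.access("block-42")  # a hot block climbs toward faster tiers
```

A production system would also demote cold data, weigh read versus write patterns, and account for tier capacity, but the core loop, observe access frequency and migrate data accordingly, is the same.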