NSULATE™

GPU-accelerated alternative to RAID

  • Enables real-time hyperscale erasure coding up to 255 parity
  • Cryptographic checksums and real-time corruption recovery
  • Create highly parallel arrays with hundreds of devices
  • High performance, even with massive degradation
  • Compatible with all Linux filesystems and applications


NSULATE is available via our distribution partners. For a trial license, contact us.

Say Hello to NSULATE™

RAID6 was standardized in 1993, in an era of single-core computing. For exascale computing, RAID is an obstacle to higher performance and resilience. NSULATE revolutionises the role of the storage controller by replacing a fixed-function RAID controller with a powerful general-purpose GPU. Using a GPU as the storage controller allows several storage functions to be calculated on the same high-performance device, making storage processing more efficient without sacrificing performance. The result is that modern storage appliances can deliver unprecedented speed, scale, security, storage efficiency and intelligence in real time.


Extreme Resilience

NSULATE offers extreme data resilience. It uses a GPU to compute erasure-coded parity, enabling automatic data recovery at scales impossible with a RAID card or a CPU.

While traditional RAID and erasure coding solutions support between 2 and 6 parity blocks, NSULATE supports real-time Reed-Solomon erasure coding with up to 255 parity blocks. Stable I/O throughput can be maintained even while an array experiences dozens of simultaneous device failures and corruption events.
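
To make the scale concrete: Reed-Solomon codes of this kind operate over the 256-element Galois field GF(256), which is why data plus parity is capped at 256 blocks (see the feature table below). The following Python sketch is purely illustrative and is not NSULATE's implementation; it computes parity with simple Vandermonde-style coefficients, whereas production codes use carefully constructed generator matrices (for example Cauchy matrices) to guarantee that every combination of erasures is recoverable.

    # Illustrative sketch only (not NSULATE's implementation) of Reed-Solomon-style
    # parity over GF(256), showing why data + parity blocks must total <= 256.
    GF_EXP = [0] * 512
    GF_LOG = [0] * 256
    x = 1
    for i in range(255):
        GF_EXP[i] = x
        GF_LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= 0x11D          # reduction polynomial for GF(256)
    for i in range(255, 512):
        GF_EXP[i] = GF_EXP[i - 255]

    def gf_mul(a, b):
        """Multiply two GF(256) elements using the log/antilog tables."""
        if a == 0 or b == 0:
            return 0
        return GF_EXP[GF_LOG[a] + GF_LOG[b]]

    def encode_parity(data_blocks, n_parity):
        """Compute n_parity parity blocks from equal-length data blocks.

        Parity row j uses coefficients (alpha^j)^i. Each column needs its own
        field element and GF(256) has only 256 of them, hence data + parity <= 256.
        Recovery from erasures solves a small GF(256) linear system (not shown).
        """
        k = len(data_blocks)
        assert k + n_parity <= 256, "GF(256) limits data + parity to 256 blocks"
        block_len = len(data_blocks[0])
        parity = [bytearray(block_len) for _ in range(n_parity)]
        for j in range(n_parity):
            for i, block in enumerate(data_blocks):
                coeff = GF_EXP[(j * i) % 255]      # (alpha^j)^i
                for pos in range(block_len):
                    parity[j][pos] ^= gf_mul(coeff, block[pos])
        return parity

    # Example: 10 data blocks protected by 4 parity blocks (tolerates 4 erasures).
    data = [bytes([i] * 16) for i in range(10)]
    print([bytes(p).hex() for p in encode_parity(data, 4)])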


Continuous Verification

NSULATE adds cryptographic data verification and recovery to all storage applications. It includes a complete suite of hash functions for corruption detection and recovery, including CRC32C as well as the NIST-approved cryptographic hash functions SHA2 and SHA3. NSULATE also supports the blockchain cryptographic hash functions SHA2 Merkle and SHA3 Merkle for blockchain-auditable storage solutions.
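
As an illustration of how per-block checksums roll up into a single blockchain-auditable value, the short Python sketch below builds a Merkle root over block digests using the standard hashlib module. The block size, tree shape and padding rule here are assumptions made for the example, not NSULATE's on-disk format.

    # Minimal Merkle-root sketch over per-block SHA2/SHA3 digests (illustrative only).
    import hashlib

    def merkle_root(blocks, algo="sha256"):
        """Hash each block, then pairwise-hash levels upward to a single root."""
        level = [hashlib.new(algo, b).digest() for b in blocks]
        if not level:
            return hashlib.new(algo, b"").digest()
        while len(level) > 1:
            if len(level) % 2:                     # duplicate the last node on odd levels
                level.append(level[-1])
            level = [hashlib.new(algo, level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    blocks = [b"block-%d" % i for i in range(8)]
    print(merkle_root(blocks).hex())               # SHA2 (SHA-256) Merkle root
    print(merkle_root(blocks, "sha3_256").hex())   # SHA3 (SHA3-256) Merkle root

Because the root changes if any block changes, storing or publishing successive roots gives an auditable history of the array's contents.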

NSULATE's patrol scans continuously verify data cryptographically and rebuild missing drives and corrupted data. Because NSULATE is extremely resilient to data corruption, this background process can run at very low priority while still maintaining the array. NSULATE can rebuild corrupt or missing data in real time, and full rebuilds can often be deferred indefinitely thanks to the level of parity that can be configured.
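
Conceptually, a patrol scan is a throttled background loop: re-read a stripe, recompute its checksum, and trigger a repair on mismatch. The sketch below only illustrates that idea; read_stripe, stored_checksum and rebuild_stripe are hypothetical placeholders, not NSULATE APIs.

    # Hypothetical patrol-scan loop (illustrative only; placeholder array API).
    import hashlib
    import time

    def patrol_scan(array, throttle_s=0.01):
        for stripe_id in range(array.stripe_count):
            data = array.read_stripe(stripe_id)                        # placeholder API
            if hashlib.sha256(data).digest() != array.stored_checksum(stripe_id):
                # Corruption detected: reconstruct the stripe from data + parity.
                array.rebuild_stripe(stripe_id)                        # placeholder API
            time.sleep(throttle_s)   # sleep between stripes to keep I/O priority low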


Converged Storage-Processing

NSULATE can further reduce infrastructure requirements by sharing GPU resources for compute and storage on the same physical node. Storage nodes can be configured to double as processing nodes for I/O bound computing steps. This further accelerates big data and HPC processing and storage access by reducing the distance between GPU resources and storage.


Technical Specification

Solution: Software block device for Linux that enables enterprise GPUs to function as storage controllers
Form Factor: Software - Linux kernel module and software daemon
Connectors: Any provided by the accompanying RAID card, HBA or motherboard
Device Support: 1024+ SAS/SATA/NVMe devices, limited by the underlying hardware configuration
Data Transfer Rates: Up to 12 GB/s per GPU
Cache Memory: NVMe / NV-RAM caching, 16-256 GB
Key Resilience and Data Protection Features:
  • High parity erasure coding where data + parity <= 256, no hot-spares needed
  • Online capacity expansion
  • Real-time consistency check and recovery for data integrity
  • Fast initialisation for quick array setup
  • Up to 256 Virtual Drives
  • Runs well with degraded or failed drives
  • CRC32, SHA2 or SHA3 (256-512 bit) cryptographic checksum verification
Management: Command-line Interface
Supported Operating Systems: Ubuntu 16.04.4+, CentOS 7.2+

High Availability Configuration with NSULATE

High Availability helps minimize the downtime experienced by your users. NSULATE can be configured to run on multiple systems in a high availability configuration.

NSULATE is configured on two nodes (u901, u902), where the journal device of each NSULATE array is replicated with DRBD (Distributed Replicated Block Device). The first node is active while the second node is a passive standby. The passive server acts as a failover node that is ready to take over operation across the same drives as soon as the first node fails. This configuration is shown in the following diagram.

It's important that the two nodes have the same settings. If changes are made on the active node, those changes must be replicated on the passive (failover) node. This ensures that clients won't be able to tell the difference when the failover node takes over.

[Diagram: High Availability configuration with NSULATE]

High Availability Configuration Setup

Hardware Required

The Inventec U90G3 series are 4U ultra-dense storage servers with up to seventy 3.5'' large-form-factor hard drive bays and dual server nodes, each based on the two-socket Intel® Xeon® processor E5 v3/v4 family. U90G3 storage servers feature a 12G SAS interface with dual-domain support and offer two HDD control configurations: a single node accessing all 70 drives for the best price per drive (suited to cold storage), or two nodes accessing all 70 drives for deployments that need failover.

Software Required

NSULATE, the block device.

Heartbeat, a subsystem that allows a primary and a backup Linux server to determine whether the other is 'alive' and, if the primary isn't, to fail over resources to the backup.

DRBD is a kernel block-level synchronous replication facility which serves as an important shared-nothing cluster building block.

The NFS kernel server is the in-kernel Linux NFS daemon. It serves locally mounted file systems out to clients via the NFS network protocol.
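
Putting these pieces together, the standby node's takeover amounts to promoting the DRBD resource that holds the replicated journal, bringing up the NSULATE array on the shared drives, mounting the filesystem and re-exporting it over NFS. The Python sketch below only illustrates that order of operations under assumed names (DRBD resource r0, device /dev/nsulate0, mount point /export/data); in a real deployment Heartbeat resource scripts drive these steps, and the NSULATE assembly step is a placeholder for the command documented in the setup guide.

    # Conceptual failover sequence run on the standby node (u902) once Heartbeat
    # declares the active node (u901) dead. Names are assumptions, not defaults.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def take_over():
        # 1. Promote the DRBD resource holding the replicated NSULATE journal.
        run(["drbdadm", "primary", "r0"])
        # 2. Bring up the NSULATE array on the shared drives (placeholder step;
        #    see the setup guide for the actual command).
        run(["sh", "-c", "echo 'assemble the NSULATE array here'"])
        # 3. Mount the filesystem that sits on top of the NSULATE block device.
        run(["mount", "/dev/nsulate0", "/export/data"])
        # 4. Refresh NFS exports so clients resume against the new active node.
        run(["exportfs", "-ra"])

    if __name__ == "__main__":
        take_over()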

 

System Specification

Chassis: Inventec U90 Dual Motherboard, 4U
Operating System: Ubuntu Server 16.04.5, 4.15 kernel
CPU: Intel Xeon E5-2620 v4 @ 2.10GHz x 2 per node
GPU: Nvidia Tesla P4 x 1 per node
RAM: 8GB 2667MHz x 8 per node
PSU: 1400W (220V) Platinum (2+2 redundancy); two are active and two are for failover
Hard Drives: 70 x 2TB HDDs shared between the two nodes, plus 2 x 250GB SSDs per node, which can be configured in RAID1 for the operating system

Software Setup Guide

Beyond the steps described in the setup guide, the installation does not require any additional configuration; further tuning comes down to individual use cases.

Download Setup Guide