Bit Hoarder Software-Defined-Storage (SDS) Network-Attached-Storage (NAS) appliances are designed to connect your users in new and unique ways and to provide highly reliable services for your data. With data demands growing and throughput requirements ever increasing, our Professional and Enterprise classes of appliances deliver blazing fast storage suitable for any application.
With access to a state-of-the-art file system, you will gain flexibility, file integrity, and performance. With our software you will gain intuitive access to snapshots, deduplication, user management, monitoring and trending, and storage management. Please select a link on the left to learn more.
ZFS stands for Zettabyte File System. ZFS is a highly robust, scalable and state of the art file system that fundamentally changes the way file systems are created and managed. ZFS provides features and benefits not found in any other file system available today.
Software Defined Storage (SDS) is a revolutionary new technique for managing digital data using policy-based provisioning defined by software. Traditional storage systems use firmware (e.g. BIOS) or hardware (e.g. a RAID controller) to manage storage policies.
Network Attached Storage (NAS) is a data storage device that provides file-based shared storage through a local area network (LAN).
Multiple users on completely different platforms can access and edit the same files using standard network file sharing protocols such as SMB and NFS.
Owning a NAS is like owning your own private cloud except that it's faster, less expensive and gives you complete control over the balance between storage capacity, speed and redundancy.
A snapshot is a virtual copy of an entire volume for a single moment in time. A snapshot is not a backup.
Backups are still recommended, but maintaining multiple backup versions is less critical than it used to be. Because snapshots track all changes locally, a single off-site backup is usually sufficient to achieve peace of mind.
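The snapshot-versus-backup distinction comes down to copy-on-write: a snapshot only records references to existing blocks, so it is instant and nearly free, but it lives on the same disks as the data it protects. A minimal sketch of the idea (the `Volume` class and its methods are illustrative only, not Bit Hoarder's API):

```python
class Volume:
    """Toy volume: block number -> data. Illustrates copy-on-write snapshots."""

    def __init__(self):
        self.blocks = {}     # live block map
        self.snapshots = {}  # name -> frozen block map (references, not copies)

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def snapshot(self, name):
        # A snapshot just records the current block map; unchanged blocks
        # are shared between the snapshot and the live volume.
        self.snapshots[name] = dict(self.blocks)

    def rollback(self, name):
        self.blocks = dict(self.snapshots[name])


vol = Volume()
vol.write(0, b"report v1")
vol.snapshot("before-edit")
vol.write(0, b"report v2 (broken)")
vol.rollback("before-edit")
print(vol.blocks[0])  # b'report v1'
```

Note that a failed disk takes the snapshots down with the live data, which is exactly why a snapshot is not a backup.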
Deduplication is the process of eliminating identical copies of data.
Intentional redundancy is a good thing; unintentional redundancy only wastes disk space, and deduplication removes only those unnecessary copies. Deduplication happens at the block level: to the user, the duplicate copies are still visible, but on disk the shared blocks are stored only once.
For example, say a coworker downloads a project folder and works on the project locally while on an airplane. Back at the office, they upload the revised project to the server as a new revision, keeping the old folder just in case. In a traditional filesystem, all common data in these two folders would be replicated on disk as a side effect. However, SDS deduplication automatically links common data at the block level, so only the data that was added or changed consumes additional disk space. We also offer a file level deduplication option for scheduled deduplication tasks.
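The block level linking described above can be sketched in a few lines of Python (the `DedupStore` class, block size, and file names are illustrative, not part of any real on-disk format):

```python
import hashlib

BLOCK_SIZE = 4096


class DedupStore:
    """Toy block-level dedup: identical blocks are stored once and
    referenced by their checksum."""

    def __init__(self):
        self.blocks = {}  # checksum -> block data (stored once)
        self.files = {}   # filename -> ordered list of block checksums

    def write_file(self, name, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store only if new
            refs.append(digest)
        self.files[name] = refs

    def read_file(self, name):
        return b"".join(self.blocks[d] for d in self.files[name])


store = DedupStore()
original = b"A" * 8192 + b"B" * 4096
revised = b"A" * 8192 + b"C" * 4096  # only the last block differs
store.write_file("project/v1", original)
store.write_file("project/v2", revised)

# Both files read back intact, but six written blocks collapse
# to three unique blocks on "disk".
print(len(store.blocks))  # 3
```

Each user still sees two complete folders; only the storage layer knows most of the blocks are shared.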
Due to the automatic linking discussed above, block level deduplication requires stronger checksum algorithms and an in-memory deduplication table, which can have a small performance impact because the table consumes RAM. Performance is generally unaffected unless all RAM is utilized. If that occurs, adding RAM or a cache device, or switching to scheduled file level deduplication tasks, will resolve the issue.
Generally speaking, the more space deduplication saves, the more the benefits will outweigh the cost.
An advantage of ZFS deduplication is that it doesn't require nightly tasks to run, data is de-duplicated before it is even written to disk.
It all depends on the type of data you are dealing with. If there is a high potential for duplicated data, then the savings will be significant. For example, hosting hundreds of nearly identical virtual machines, or keeping multiple versions of the same product for historical purposes, would both yield significant savings. However, storing any number of completely unique photographs will yield no savings, while still incurring a slight checksumming overhead.
Multiple users can be created with a few easy mouse clicks. Each user account is assigned a profile which includes enough information to identify and contact the associated user, which eliminates the possibility of orphaned accounts.
With Bit Hoarder's unique web based interface, users can be assigned to groups, and groups can then be assigned to volumes. This restricts access when needed. When access restrictions are not required, Bit Hoarder provides a guest mode for all volumes allowing anyone to access the volume.
Bit Hoarder also provides an interface for managing users relative to the group. This additional interface allows faster management of users under certain scenarios, such as when a large number of users need to be removed from a single group.
Access to Bit Hoarder's interface is accomplished using the same accounts that are used for volume access. This eliminates the need to have two interfaces for managing what a user can do, as is typical with other systems.
Business moves fast and every second counts. Our real time gauges provide supervisors and administrators with immediate knowledge of system throughput and performance.
System performance statistics are logged in real time to the Bit Hoarder database. All software versions allow you to customize the frequency, age and type of statistics logged. Logs can be maintained for up to 365 days. Feature rich graphical trending displays allow for troubleshooting and workflow optimization.
Bit Hoarder works off the fundamental concept of virtualizing your hardware. When you understand that, everything else makes sense. "Virtual devices" (vdevs) are defined by the user and then combined into a pool to create what is known as hybrid or nested RAID (i.e. RAID 0/10/50/60/70).
Data written to a pool is striped across all virtual devices within that pool to achieve exceptional performance. Because the loss of any one virtual device means the loss of the entire pool, virtual devices need to be resilient.
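The striping behavior can be sketched as simple round-robin block placement across vdevs (the `StripedPool` class is illustrative only). Note that every vdev ends up holding a share of every large write, which is why a single non-resilient vdev puts the whole pool at risk:

```python
class StripedPool:
    """Toy pool: blocks are striped round-robin across virtual devices,
    so reads and writes are spread over all of them in parallel."""

    def __init__(self, num_vdevs):
        self.vdevs = [[] for _ in range(num_vdevs)]

    def write(self, blocks):
        for i, block in enumerate(blocks):
            # Each successive block lands on the next vdev.
            self.vdevs[i % len(self.vdevs)].append(block)


pool = StripedPool(3)
pool.write([f"blk{i}" for i in range(6)])
print(pool.vdevs)
# [['blk0', 'blk3'], ['blk1', 'blk4'], ['blk2', 'blk5']]
```

Losing any one of the three vdevs here would leave every file with missing blocks, so the redundancy has to live inside each vdev.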
Virtual devices come in the form of RAID 0, 1, 5, 6 or 7 and special virtual devices used to bolster performance. The RAID levels, or virtual device types, are discussed below.
RAIDz 0 (striping): All data is striped across every disk. This configuration is not recommended due to the potential catastrophic loss of data that you would experience if you lost even a single drive from a striped array. The advantage is maximum space savings since this is a non-redundant configuration.
RAIDz 1 (mirror): A mirror of two or more physical devices. Data is replicated in an identical fashion across all components of a mirror.
RAIDz 5 (single parity): A variation on RAID-5 (single parity) that allows for better distribution of parity and eliminates the 'RAID-5 write hole' (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks.
RAIDz 6 (double parity): A variation on RAIDz-5, but with two parity blocks per stripe instead of one.
RAIDz 7 (triple parity): A variation on RAIDz-5, but with three parity blocks per stripe instead of one.
Assuming N is the number of disks and all disks are the same, of size X, we can estimate some important properties about each virtual device type.
| | RAIDz 0 | RAIDz 1 | RAIDz 5 | RAIDz 6 | RAIDz 7 |
|---|---|---|---|---|---|
| Minimum number of disks | 1 | 2 | 3 | 4 | 5 |
| Recommended number of disks1 | Any | 2-6 | 3-9 | 4-12 | 5-15 |
| Approximate bytes available | N × X | X | (N-1) × X | (N-2) × X | (N-3) × X |
| Maximum disk failures without potential data loss | 0 | N-1 | 1 | 2 | 3 |
1Actual recommended number of disks depends on the anticipated size of the entire pool, but should be a multiple of the minimum number of disks for that level.
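The formulas in the table can be expressed as a small helper for estimating a vdev's usable capacity and fault tolerance (the function name and return convention are our own, for illustration):

```python
def vdev_estimate(raid_level, n_disks, disk_bytes):
    """Estimate (usable bytes, tolerated disk failures) for a vdev of
    n_disks identical disks of disk_bytes each, per the table above."""
    if raid_level == 0:
        return n_disks * disk_bytes, 0          # stripe: no redundancy
    if raid_level == 1:
        return disk_bytes, n_disks - 1          # mirror: one disk's worth
    parity = {5: 1, 6: 2, 7: 3}[raid_level]     # parity blocks per stripe
    return (n_disks - parity) * disk_bytes, parity


TB = 10**12
usable, failures = vdev_estimate(6, 6, 4 * TB)  # six 4 TB disks in RAIDz 6
print(usable // TB, failures)  # 16 2
```

So a six-disk RAIDz 6 vdev of 4 TB drives yields roughly 16 TB of usable space and survives any two disk failures.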