
What is HPSS Storage? The Backup of IBM HPSS

Updated 30th January 2024, Rob Morrison

What is HPSS?

High-Performance Storage System (HPSS) is a highly scalable and flexible storage management software suite developed by the HPSS Collaboration to provide a policy-based, software-defined Hierarchical Storage Management (HSM) system. It is in demand across many HPC (High-Performance Computing) and supercomputing environments, in part because a single HPSS namespace can hold billions of files, ingest anywhere from a few to thousands of files per second, and scale from petabytes to exabytes of data.

HPSS uses a combination of SAN, LAN, and cluster technologies to bring many different storage devices – computers, disks, tape drives, and tape libraries – together into a single infrastructure.

HPSS supports several data access methods: it can be used through FUSE, FTP, parallel FTP, and a client API with parallel I/O. The client API is supported on Solaris, Linux, and AIX, with full Linux support added in version 7.5. HPSS keeps its metadata in Db2, IBM's scalable relational database management system (RDBMS).
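As a minimal illustration of the FTP-style access path, the Python sketch below stores and retrieves a file over a plain FTP connection. The host name, credentials, and file names are placeholder assumptions, not real HPSS endpoints – an actual deployment would use the site-specific HPSS FTP or parallel FTP gateway.

```python
from ftplib import FTP

# Placeholder connection details -- a real deployment would use the
# site-specific HPSS FTP gateway, credentials, and directory layout.
HPSS_FTP_HOST = "hpss-ftp.example.org"
USER, PASSWORD = "demo_user", "demo_password"

def archive_file(local_path: str, remote_name: str) -> None:
    """Upload a local file into the FTP-accessible namespace."""
    with FTP(HPSS_FTP_HOST) as ftp:
        ftp.login(USER, PASSWORD)
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)

def retrieve_file(remote_name: str, local_path: str) -> None:
    """Download a file from the FTP-accessible namespace."""
    with FTP(HPSS_FTP_HOST) as ftp:
        ftp.login(USER, PASSWORD)
        with open(local_path, "wb") as fh:
            ftp.retrbinary(f"RETR {remote_name}", fh.write)

if __name__ == "__main__":
    archive_file("results.h5", "results.h5")
    retrieve_file("results.h5", "results_copy.h5")
```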

Advantages of HPSS

HPSS storage is very complex under the hood, with most of its features created to provide long-term scalable storage for enterprise needs. Some of the most significant advantages of IBM HPSS include:

  • Availability. Because Db2 ensures metadata integrity and offers fast failure recovery, HPSS can provide long-term data protection alongside high data availability. HPSS RAIT (Redundant Array of Independent Tapes) delivers cost-effective data redundancy, while logical block protection and file checksum validation safeguard data accuracy.
  • Efficiency. HPSS combines several techniques to achieve high performance: access latency is reduced by reordering reads and writes, large-file transfer speed is improved through parallel transfers and tape collocation, and policy-driven automation keeps the process transparent to end users (see the sketch after this list for a simplified illustration of parallel transfer and checksum validation).
  • Support. HPSS is supported and provided by IBM with an impressive number of benefits – installation, configuration, test results, component verification, and a detailed solution architecture.
  • Huge Scalability. HPSS has a unique structure that allows it to scale incrementally when necessary. Adding storage, network, and computing resources to the namespace makes it possible for exabytes of data and billions of files to be stored within that same namespace.
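To make the parallel-transfer and checksum-validation ideas above more concrete, here is a hedged Python sketch that copies a large file in fixed-size stripes using a pool of workers and then verifies the result with a SHA-256 digest. It is a generic illustration of the technique, not HPSS code; the file names and stripe size are purely illustrative assumptions.

```python
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

STRIPE_SIZE = 64 * 1024 * 1024  # 64 MiB stripes (illustrative value)

def copy_stripe(src_path: str, dst_path: str, offset: int, length: int) -> None:
    """Copy one byte range; each worker uses its own file handles."""
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        src.seek(offset)
        dst.seek(offset)
        dst.write(src.read(length))

def parallel_copy(src_path: str, dst_path: str, workers: int = 4) -> None:
    size = os.path.getsize(src_path)
    # Pre-size the destination so stripes can be written at any offset.
    with open(dst_path, "wb") as dst:
        dst.truncate(size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(copy_stripe, src_path, dst_path, off,
                        min(STRIPE_SIZE, size - off))
            for off in range(0, size, STRIPE_SIZE)
        ]
        for f in futures:
            f.result()  # surface any worker exception

def sha256sum(path: str) -> str:
    """Checksum validation: hash the file in 1 MiB blocks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(block)
    return digest.hexdigest()

if __name__ == "__main__":
    parallel_copy("big_dataset.bin", "big_dataset_copy.bin")
    assert sha256sum("big_dataset.bin") == sha256sum("big_dataset_copy.bin")
```

Real HPSS parallel I/O operates across network movers and tape subsystems rather than within one host, but the principle – split a large transfer into independent stripes and confirm integrity afterwards – is the same.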

Innovations of HPSS

HPSS is a rare example of software that has remained in active use for decades rather than being replaced. The 2022 HPSS User Forum marked the thirtieth anniversary of the software, and it is still going strong to this day. Some of the more prominent innovations that HPSS brought to the industry include:

  • Remote procedure calls. HPSS is widely considered one of the first infrastructures to take advantage of distributed computing through remote procedure calls.
  • Hierarchical storage management. Since HSM is a tiered storage model, implementing it in a practical environment can be extremely difficult. HPSS is widely regarded as the first commercially successful HSM implementation (see the sketch after this list for the core idea of tiered migration in miniature).
  • Network-based architecture. During the 1990s, practically all HPC systems adopted a distributed design model as their baseline, which made using the network for data transfer all but mandatory. HPSS was one of the industry’s first successful implementations of distributed network capability.
  • A clear divide between control and data traffic. HPSS significantly improved its scalability by completely separating the control (metadata) path from the data path.
  • Distributed transactions. HPSS was among the small number of early solutions that championed a fully distributed architecture, and its implementation of distributed transactions was a primary catalyst for that approach.
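To clarify what hierarchical storage management means in practice, the hedged Python sketch below implements the simplest possible migration policy: files in a fast "disk" tier that have not been accessed for a configurable number of days are moved to a slower "archive" tier. Real HSM systems such as HPSS do this transparently, policy-driven, and at vastly larger scale; the directories and threshold here are purely illustrative assumptions.

```python
import shutil
import time
from pathlib import Path

# Illustrative tier locations and policy -- purely hypothetical paths.
FAST_TIER = Path("/srv/disk_tier")
ARCHIVE_TIER = Path("/srv/tape_tier")
MAX_IDLE_DAYS = 90

def migrate_cold_files() -> None:
    """Move files not accessed within MAX_IDLE_DAYS to the archive tier."""
    cutoff = time.time() - MAX_IDLE_DAYS * 24 * 3600
    for path in FAST_TIER.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            target = ARCHIVE_TIER / path.relative_to(FAST_TIER)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))

if __name__ == "__main__":
    migrate_cold_files()
```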

The origins of HPSS

HPSS was initially created in 1992 and made possible by the National Storage Laboratory’s (NSL) research. The primary purpose of NSL was to commercialize both hardware and software technologies in order to overcome various digital-information bottlenecks, such as those in data storage and computing.

NSL was created as a collaboration between IBM and five national laboratories of the Department of Energy in the U.S.:

  • Oak Ridge (ORNL);
  • Lawrence Livermore (LLNL);
  • Sandia (SNL);
  • Los Alamos (LANL);
  • Lawrence Berkeley (LBL).

The above group of research organizations realized that a so-called “data storage explosion” was about to happen, one that would drive data volumes, data transfer speeds, and computing power to rise tremendously. The collaboration aimed to create and deploy an infrastructure that could scale effortlessly with these parameters and beyond: a system capable of supporting transfer speeds of gigabytes per second, throughput in the tens of terabytes, and petabytes or even exabytes of stored data.

The original IBM HPSS collaboration clearly understood that no single organization in the world had the resources and experience to meet all of the new storage and transfer requirements at once. Over the project’s lifetime, more than twenty companies and organizations have contributed to its development, including NSF supercomputer centers, U.S. federal laboratories, and universities.

As of 2022, the core HPSS development team still consisted of all six original collaborators: IBM Global Business Services, LLNL, ORNL, LANL, SNL, and LBNL. The National Energy Research Scientific Computing Center (NERSC) is also considered a significant contributor to HPSS as a product.

The most prominent achievements of HPSS

During its thirty-year-long history, the HPSS storage system managed to change and evolve, bringing new achievements and capabilities to the industry. Here are a few examples of that:

  • A little-known test involving the backup of a billion files was first performed successfully in November 2007 by the San Diego Supercomputer Center – the data in question was copied from GPFS (IBM’s own clustered file system) to HPSS.
  • The National Center for Supercomputing Applications in Illinois launched an HPSS infrastructure with 380 Petabytes of storage in May 2013 – an enormous amount of storage at the time.

Noteworthy examples of HPSS usage

HPSS is used by dozens of well-known organizations around the world, giving them more accessible and more efficient access to large data pools. The list below presents a number of projects that rely on HPSS for long-term data storage:

  • The Dark Energy Spectroscopic Instrument – more than 5 Petabytes of experiment results and simulation data.
  • The Joint Genome Institute – over 20 Petabytes of information, including mapped sequences, assembled genomes, quality-controlled sequences, raw sequences, transcriptomes, and more.
  • The Advanced Light Source (Berkeley Lab) – over 4 Petabytes of data spanning ten years, including all of the tomography beamline information.
  • The Intergovernmental Panel on Climate Change – more than 30 Petabytes of information, including earth system simulations, climate simulations, and a lot more data that contributed to the Twentieth Century Reanalysis (an international project with the goal of creating an atmospheric circulation dataset for the entire 20th century).
  • Cosmic Microwave Background research – at least 5.5 Petabytes of simulations and data from various experiments, including South Pole experiments, BICEP, Keck, and 17 different telescopes across the planet.

The present and the future of HPSS

HPSS was initially created to push the world forward in terms of network standards, storage capacities, transfer rates, and more. The project has remained at the forefront of technological progress for more than thirty years after its creation – and there is no doubt that it will continue to do so in the future.

The system has continued to evolve, grow, and implement new capabilities over time, introducing solutions to existing problems and raising the bar of various standards for large-scale data management. For example, user-friendliness is now seen as the next big goal – an attempt to make HPSS storage easier to work with while also addressing other well-known challenges of the system, such as file size and file length limitations.

HPSS and Bacula Enterprise

HPSS is a very case-specific data storage solution, typically used in scientific, research, and laboratory environments – often in government-level organizations. The fact that these use cases are far removed from plain-vanilla business needs does not mean this data should go unprotected – usually quite the opposite. Luckily, solutions such as Bacula Enterprise exist to protect and safeguard many different data types and storage environments, including those of HPC and supercomputing.

Bacula Enterprise is relied on by government-level entities such as NASA and the US National Laboratories to safeguard many petabytes of data stored using IBM HPSS. For example, some of the reasons NASA chose Bacula for its demanding environments were its out-of-the-box HPSS support, multi-user access, FIPS-compliant encryption, and the absence of capacity-based licensing. In addition to connecting seamlessly with HPSS and matching its vast scalability, Bacula tends to be the favored backup solution in supercomputing and HPC deployments because of its high security, its special HPC management tools, and its ability to handle billions of files. Bacula’s licensing model also does not charge by data volume, which reduces costs significantly.

Learn more about Bacula Enterprise’s success with NASA (as well as Bacula’s backup and recovery capabilities for HPSS) in our dedicated article about this topic.

About the author
Rob Morrison
Rob Morrison is the marketing director at Bacula Systems. He started his IT marketing career with Silicon Graphics in Switzerland, performing strongly in various marketing management roles for almost 10 years. Over the following 10 years, Rob held various marketing management positions at JBoss, Red Hat and Pentaho, ensuring market share growth for these well-known companies. He is a graduate of Plymouth University, holds an Honours degree in Digital Media and Communications, and completed an Overseas Studies Program.