# The HPSS Archive System¶

## Introduction¶

The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. HPSS is intended for long term storage of data that is not frequently accessed.

HPSS is Hierarchical Storage Management (HSM) software developed by a collaboration of DOE labs, of which NERSC is a participant, and IBM. The HPSS system is a tape system that uses HSM software to ingest data onto a high performance disk cache and automatically migrate it to a very large enterprise tape subsystem for long-term retention. The disk cache in HPSS is designed to retain many days' worth of new data, and the tape subsystem is designed to provide the most cost-effective long-term scalable data storage available.

## Accessing HPSS¶

You can access NERSC's HPSS in a variety of ways. hsi and htar are the best ways to transfer data in and out of HPSS from within NERSC. hsi is used to put individual files into HPSS, and htar is used to put bundles of files into HPSS (similar to how tar works). For tips on how best to use hsi and htar, see the Best Practices section below. Globus is recommended for transfers between HPSS and facilities outside NERSC. We also offer access via pftp and ftp.

NERSC's HPSS system can be accessed at archive.nersc.gov (this is set by default for hsi, htar, and Globus). By default every user has an HPSS account.

### Automatic Token Generation¶

The first time you try to connect from a NERSC system (Cori, DTNs, etc.) using a NERSC provided client like hsi, htar, or pftp, you will be prompted for your NERSC password + one-time password, which will generate a token stored in $HOME/.netrc. After completing this step you will be able to connect to HPSS without typing a password:

```
nersc$ hsi
Generating .netrc entry...
```


If you are having problems connecting see the Troubleshooting section below.

### Session Limits¶

Users are limited to 15 concurrent sessions. This number can be temporarily reduced if a user is impacting system usability for others.

### hsi¶

hsi is a flexible and powerful command-line utility for accessing the NERSC HPSS storage system. You can use it to store and retrieve files, and it has a large set of commands for listing your files and directories, creating directories, changing file permissions, etc. The command set has a UNIX look and feel (e.g. mv, mkdir, rm, cp, cd, etc.), so moving through your HPSS directory tree is close to what you would find on a UNIX file system. hsi can be used both interactively and in batch scripts. hsi doesn't offer compression options, but the HPSS tape system uses hardware compression, which is as effective as software compression.

The hsi utility is available on all NERSC production computer systems and it has been configured on these systems to use high-bandwidth parallel transfers.

#### hsi Usage Examples¶

All of the NERSC computational systems available to users have the hsi client already installed. To access the Archive storage system you can type hsi with no arguments: this will put you in an interactive command shell, placing you in your home directory on the Archive system. From this shell, you can run the ls command to see your files, cd into storage system subdirectories, put files into the storage system and get files from it.

Most of the standard Linux commands work in hsi (cd, ls, rm, chmod, mkdir, rmdir, etc.). There are a few commands that are unique to hsi:

| Command | Function |
| --- | --- |
| put | Archive one or more local files into HPSS, overwriting the destination file if it exists |
| get | Download one or more HPSS files to local storage, overwriting the destination file if it exists |
| cput | Conditional put: archive a file only if it does not already exist in HPSS or the local file is newer than the existing HPSS file |
| cget | Conditional get: retrieve the file only if a local copy does not already exist or the HPSS file is newer than the existing local file |
| mget/mput | Interactive get/put: prompts for user confirmation before copying each file |

hsi also has a series of "local" commands, that act on the non-HPSS side of things:

| Command | Function |
| --- | --- |
| lcd | Change local directory |
| lls | List local directory |
| lmkdir | Make a local directory |
| lpwd | Print current local directory |
| command | Issue shell command |

The hsi utility uses a special syntax to specify local and HPSS file names when using the put and get commands. The local file name is always on the left, the HPSS file name is always on the right, and ` : ` (a colon character with a space on each side) separates the local and HPSS paths.

You don't need to provide the separator at all if you want the destination file to use the same name as the source file; you can also combine this with a cd command, e.g. hsi "cd my_hpss_dir/; put my_local_file; get my_hpss_file"

Here are some usage examples:

• Show the content of your HPSS home directory: hsi ls
• Show the content of a specific directory: hsi ls /path/to/hpss/dir/
• Create a remote directory in your home: hsi mkdir new_dir_123
• Store a single file from your local home into your HPSS home: hsi put my_local_file : my_hpss_file
• Store a single file into HPSS without renaming: hsi put my_local_file
• Store a directory tree, creating sub-dirs when needed: hsi put -R my_local_dir/
• Fetch a single file from HPSS, from a specific directory: hsi get /path/to/my_local_file : /path/to/my_hpss_file
• Fetch a single file from HPSS into the local directory without renaming: hsi get /path/to/my_hpss_file
• Delete a file from HPSS: hsi rm /path/to/my_hpss_file; use hsi rm -i if you want to confirm the deletion of each file;
• To recursively remove a directory and all of its contained sub-directories and files: hsi rm -R /path/to/my_hpss_dir/;
• Delete an empty directory: hsi rmdir /path/to/my_hpss_dir/.

Make sure to escape bash expansions; e.g. place quotes around * to prevent bash from replacing the symbol with the files in your local directory: hsi rm -i "*" or hsi rm -i \*.

In addition to the interactive shell, you can run hsi commands in several different ways:

• Single-line execution, e.g. to create a new dir and copy a file into it: hsi "mkdir my_hpss_dir; cd my_hpss_dir; put bigdata.123"
• Read commands from a file: hsi "in command_file"
• Read commands from standard input: hsi < command_file
• Read commands from a pipe: cat command_file | hsi
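For scripted workflows, the command file itself can be generated programmatically. Below is a minimal Python sketch; the directory and file names are hypothetical, and it only writes the command file — you would still run it with hsi "in command_file":

```python
from pathlib import Path

def write_hsi_command_file(hpss_dir, local_files, out_path="command_file"):
    """Write an hsi command file that creates a directory, cds into it,
    and puts each local file (names here are hypothetical examples)."""
    lines = [f"mkdir {hpss_dir}", f"cd {hpss_dir}"]
    lines += [f"put {name}" for name in local_files]
    Path(out_path).write_text("\n".join(lines) + "\n")
    return out_path

# Example: generates a file you could pass to `hsi "in command_file"`.
write_hsi_command_file("my_hpss_dir", ["run01.dat", "run02.dat"])
```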

#### hsi Storage Verification¶

HPSS provides a built-in checksum mechanism to verify data integrity while archiving to HPSS, but you can also calculate checksums for files already stored in HPSS. All checksums are stored separately from the files.

Checksum generation is very CPU-intensive and can significantly impact file transfer performance; as much as 80% degradation in transfer rates has been observed during testing of this feature. Checksum verification also takes time, proportional to the size of the file being hashed.

Some examples:

• To calculate the checksum of a file during a transfer to HPSS, use hsi put -c on local.file : hpss.file (specifying a destination file is optional, see the examples section above);
• You can calculate the hash of an existing file already in HPSS with hsi hashcreate hpss.file; to calculate hashes of all files in a HPSS directory recursively, use hsi hashcreate -R;
• Similarly, you can verify that a file on HPSS still matches its hash using hsi hashverify (-R for directories);
• To show the stored hash of a file, use hsi hashlist (-R to recurse in directories).

The easiest way to verify the integrity of a file in HPSS is to record a checksum during the transfer, which can then be used to verify that the data on tape still matches what was originally stored. Therefore the recommended approach is to use hsi put -c on to store data, and hsi hashverify before deleting the source files from local storage.
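If you also want a checksum recorded outside HPSS, you can hash the file locally before archiving and compare it later with the value hsi hashlist reports. Here is a sketch using Python's hashlib; md5 is shown on the assumption that it matches the hash type your hsi client uses — verify that before relying on the comparison:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute an md5 hex digest, streaming in 1 MiB chunks so
    arbitrarily large files never need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Record the digest in your own bookkeeping, then compare it with the hash shown by hsi hashlist after the transfer.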

Sort Your Files for Large Hash Calculations

If you are calculating hashes for a large number of files (more than a few tens), please make sure to sort the files in tape order. You can use our file sorting script.

#### Removing Older Files¶

You can find and remove older files in HPSS using the hsi find command. This may be useful if you're doing periodic backups of directories (not recommended for software version control; use a versioning system like git instead) and want to delete older backups. Since you can't use a Linux pipe (|) within hsi, you need a multi-step process. The example below finds files older than 10 days and deletes them from HPSS.
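The same multi-step process can be scripted. Here is a hypothetical Python helper that mirrors the awk step in the shell example below, turning hsi find output into a command file for hsi in; it does not itself talk to HPSS:

```python
def build_rm_commands(find_output_lines):
    """Turn `hsi find` output lines (one path per line) into
    `rm -R` commands suitable for `hsi in <command_file>`."""
    return [f"rm -R {line.strip()}" for line in find_output_lines if line.strip()]

# Usage sketch: read temp.txt produced by hsi find, write temp1.txt
# for `hsi in temp1.txt` (file names are the ones used in the example).
```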

```
hsi -q "find . -ctime 10" > temp.txt 2>&1
cat temp.txt | awk '{print "rm -R",$0}' > temp1.txt
hsi in temp1.txt
```

### htar¶

htar is a command line utility that is ideal for storing groups of files in HPSS. Since the tar file is created directly in HPSS, it is generally faster and, unlike creating a local tar file and then storing that into HPSS, uses no local space. htar preserves the directory structure of stored files. htar doesn't have options for compression, but the HPSS tape system uses hardware compression, which is as effective as software compression.

htar creates an index file that (by default) is stored along with the archive in HPSS. This allows you to list the contents of an archive without first retrieving it from tape. The index file is only created if the htar bundle is successfully stored in the archive.

htar is installed and maintained on all NERSC production systems. If you need to access the member files of an htar archive from a system that does not have the htar utility installed, you can retrieve the tar file to a local file system and extract the member files using the local tar utility.

If you have a collection of files and store them individually with hsi, the files will likely be distributed across several tapes, requiring long delays (due to multiple tape mounts) when fetching them from HPSS. Instead, group these files in an htar archive file, which will likely be stored on a single tape, requiring only a single tape mount when it comes time to retrieve the data.

The basic syntax of htar is similar to the standard tar utility:

```
htar -{c|K|t|x|X} -f tarfile [directories] [files]
```

As with the standard unix tar utility, the -c, -x, and -t options create, extract, and list tar archive files. The -K option verifies an existing tarfile in HPSS, and the -X option can be used to re-create the index file for an existing archive. Please note, you cannot add or append files to an existing htar file.
If your htar files are 100 GB or larger and you only want to extract one or two small member files, you may find faster retrieval rates by skipping staging the file to the HPSS disk cache; add the -Hnostage option to your htar command.

#### htar Usage Examples¶

Create an archive with directory nova and file simulator:

```
nersc$ htar -cvf nova.tar nova simulator
HTAR: a   nova/
HTAR: a   nova/sn1987a
HTAR: a   nova/sn1993j
HTAR: a   nova/sn2005e
HTAR: a   simulator
HTAR: a   /scratch/scratchdirs/elvis/HTAR_CF_CHK_61406_1285375012
HTAR Create complete for nova.tar. 28,396,544 bytes written for 4 member files, max threads: 4 Transfer time: 0.420 seconds (67.534 MB/s)
HTAR: HTAR SUCCESSFUL
```


Now list the contents:

```
nersc$ htar -tf nova.tar
HTAR: drwx------  elvis/elvis          0 2010-09-24 14:24  nova/
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1987a
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1993j
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn2005e
HTAR: -rwx------  elvis/elvis     398552 2010-09-24 17:35  simulator
HTAR: -rw-------  elvis/elvis        256 2010-09-24 17:36  /scratch/scratchdirs/elvis/HTAR_CF_CHK_61406_1285375012
HTAR: HTAR SUCCESSFUL
```

As an example, use hsi to remove the nova.tar.idx index file from HPSS (note: you generally do not want to do this):

```
nersc$ hsi "rm nova.tar.idx"
rm: /home/e/elvis/nova.tar.idx (2010/09/24 17:36:53 3360 bytes)
```


Now try to list the archive contents without the index file:

```
nersc$ htar -tf nova.tar
ERROR: No such file: nova.tar.idx
ERROR: Fatal error opening index file: nova.tar.idx
HTAR: HTAR FAILED
```

Here is how to rebuild the index file if it is accidentally deleted:

```
nersc$ htar -Xvf nova.tar
HTAR: i nova
HTAR: i nova/sn1987a
HTAR: i nova/sn1993j
HTAR: i nova/sn2005e
HTAR: i simulator
HTAR: i /scratch/scratchdirs/elvis/HTAR_CF_CHK_61406_1285375012
HTAR: Build Index complete for nova.tar, 5 files 6 total objects, size=28,396,544 bytes
HTAR: HTAR SUCCESSFUL
```

```
nersc$ htar -tf nova.tar
HTAR: drwx------  elvis/elvis          0 2010-09-24 14:24  nova/
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1987a
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1993j
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn2005e
HTAR: -rwx------  elvis/elvis     398552 2010-09-24 17:35  simulator
HTAR: -rw-------  elvis/elvis        256 2010-09-24 17:36  /scratch/scratchdirs/elvis/HTAR_CF_CHK_61406_1285375012
HTAR: HTAR SUCCESSFUL
```

Here is how to extract a single file from an htar file:

```
htar -xvf nova.tar simulator
```

##### Using ListFiles to Create an htar Archive¶

Rather than specifying the list of files and directories on the command line when creating an htar archive, you can place the list of file and directory pathnames into a ListFile and use the -L option. The ListFile must contain exactly one pathname per line.

```
nersc$ find nova -name 'sn19*' -print > novalist
nersc$ cat novalist
nova/sn1987a
nova/sn1993j
```

Now create an archive containing only these files:

```
nersc$ htar -cvf nova19.tar -L novalist
HTAR: a   nova/sn1987a
HTAR: a   nova/sn1993j
```
```
nersc$ htar -tf nova19.tar
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1987a
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1993j
```

##### Soft Delete and Undelete¶

The -D option can be used to "soft delete" one or more member files or directories from an htar archive. The files are not really deleted, but simply marked as deleted in the index file. A file that is soft-deleted will not be retrieved from the archive during an extract operation. If you list the contents of the archive, soft-deleted files will have a D character after the mode bits in the listing:

```
nersc$ htar -Df nova.tar nova/sn1993j
HTAR: d  nova/sn1993j
HTAR: HTAR SUCCESSFUL
```


Now list the files and note that sn1993j is marked as deleted:

```
nersc$ htar -tf nova.tar
HTAR: drwx------  elvis/elvis          0 2010-09-24 14:24  nova/
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1987a
HTAR: -rwx------ D elvis/elvis    9331200 2010-09-24 14:24  nova/sn1993j
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn2005e
```

To undelete the file, use the -U option:

```
nersc$ htar -Uf nova.tar nova/sn1993j
HTAR: u  nova/sn1993j
HTAR: HTAR SUCCESSFUL
```


List the file and note that the 'D' is missing

```
nersc$ htar -tf nova.tar nova/sn1993j
HTAR: -rwx------  elvis/elvis    9331200 2010-09-24 14:24  nova/sn1993j
```

#### htar Archive Verification¶

Performance degradation

Similarly to hsi, calculating checksums for htar archives reduces file transfer speed; calculating and verifying checksums also takes time proportional to the size of the files being hashed.

You can request that htar compute and save checksum values for each member file during archive creation. The checksums are saved in the corresponding htar index file. You can then further request that htar compute checksums of the files as you extract them from the archive and compare the values to what it has stored in the index file.

```
nersc$ htar -Hcrc -cvf nova.tar nova
HTAR: a   nova/
HTAR: a   nova/sn1987a
HTAR: a   nova/sn1993j
HTAR: a   nova/sn2005e
```


Now, in another directory, extract the files and request verification

```
nersc$ htar -Hverify=crc -xvf nova.tar
HTAR: x nova/
HTAR: x nova/sn1987a, 9331200 bytes, 18226 media blocks
HTAR: x nova/sn1993j, 9331200 bytes, 18226 media blocks
```

#### htar Limitations¶

htar has several limitations to be aware of:

• Member File Path Length: File path names within an htar aggregate of the form prefix/name are limited to 154 characters for the prefix and 99 characters for the file name. Link names cannot exceed 99 characters.
• Member File Size: The maximum file size the NERSC archive will support is approximately 20 TB. However, we recommend you aim for htar aggregate sizes between 100 GB and 2 TB. Member files within an htar aggregate are limited to approximately 68 GB.
• Member File Limit: htar aggregates have a default soft limit of 1,000,000 (1 million) member files. Users can increase this limit to a maximum hard limit of 5,000,000 member files.

You can work around these limitations by using tar and then hsi put to store the tar file in HPSS. If the tarballs will be very large, you can split them up by following the instructions in the "Avoid Very Large Files" section.

### Globus¶

Globus is recommended for transfers between sites (i.e. non-NERSC to NERSC). To access the HPSS system using Globus, you first need to create a Globus account. Once you've created an account, you can log in either with your Globus information or with your NERSC account information. The NERSC HPSS endpoint is called "NERSC HPSS". You can use the web interface to transfer files. Currently, there is no explicit ordering by tape of file retrievals for Globus.

Caution

If you're retrieving a large data set from HPSS with Globus, please see this page for instructions on how to best retrieve files in correct tape order using the command line interface for Globus.

### Pftp and ftp¶

Files can be transferred between HPSS and remote sites via the standard internet protocol ftp. However, we recommend you use Globus for better performance.
There is no sftp (secure ftp) or scp access. Standard ftp clients only support authentication via the transmission of unencrypted passwords, which is not allowed by NERSC policy. Instead you must manually generate an access token.

## Best Practices¶

HPSS is intended for long term storage of data that is not frequently accessed. The best guide for how files should be stored in HPSS is how you might want to retrieve them. If you are backing up against accidental directory deletion or failure, you would want to store your files in a structure where you use htar to separately bundle up each directory. On the other hand, if you are archiving data files, you might want to bundle things up according to the month the data was taken, detector run characteristics, etc. The optimal size for htar bundles is between 100 GB and 2 TB, so you may need to create several htar bundles for each set depending on the size of the data.

### Group Small Files Together¶

HPSS is optimized for file sizes between 100 GB and 2 TB. If you need to store many files smaller than this, please use htar to bundle them together before archiving. HPSS is a tape system and responds differently than a typical file system. If you upload large numbers of small files, they will be spread across dozens or hundreds of tapes, each requiring a load into a tape drive and positioning of the tape. Storing many small files in HPSS without bundling them together will result in extremely long retrieval times for these files and will slow down the HPSS system for all users.

### Order Large Retrievals¶

If you are retrieving many files (> 100) from HPSS, you need to order your retrievals so that all files on a single tape will be retrieved in a single pass in the order they are on the tape. NERSC has several scripts to help you generate an ordered list for retrievals with both hsi and htar.
Caution

If you're retrieving a large data set from HPSS with Globus, please see the Globus CLI Section for instructions on how to best retrieve files in correct tape order using the command line interface for Globus.

#### Generating A Tape Sorted List¶

The script generate_sorted_list_for_hpss.py will generate a tape-sorted list of files. This list can be used with htar or hsi to extract the files. For hsi, please see the description below of a more advanced script that will also re-create the directory structure you had in HPSS.

To use the script, you first need a list of fully qualified file path names. If you do not already have such a list, you can query HPSS using the following command:

```
hsi -q 'ls -1 -R <HPSS_files_you_want_to_retrieve>' |& grep -v '/$' > temp.txt
```


(the stdout+stderr pipe to grep removes directories from the output, keeping only files). Once you have the list of files, feed it to the sorting script:

```
generate_sorted_list_for_hpss.py -i temp.txt > sorted_list.txt
```


The file sorted_list.txt will contain a sorted list of files to retrieve. If these are htar files, you can extract them with htar into your current directory:

```
nersc$ cat sorted_list.txt | awk '{print "htar -xvf",$1}' > extract.script
nersc$ chmod u+x extract.script
nersc$ ./extract.script
```


Tip

You can use the xfer queue to parallelize your extractions using the sorted list. Just split the list into N smaller lists and submit N separate xfer jobs.
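One way to split the sorted list without destroying tape order is to cut it into contiguous chunks (round-robin splitting would interleave tapes across jobs). A Python sketch; the function is ours, not part of the NERSC scripts:

```python
def split_sorted_list(lines, n_jobs):
    """Split a tape-sorted file list into at most n_jobs contiguous
    chunks, so each xfer job still reads its files in tape order."""
    chunk = -(-len(lines) // n_jobs)  # ceiling division
    return [lines[i:i + chunk] for i in range(0, len(lines), chunk)]
```

Each sublist can then be written to its own file and handed to a separate xfer job.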

#### Ordering hsi Retrievals and Recreating Directory Structure¶

The script, hpss_get_sorted_files.py, will retrieve the files in the proper tape order and also recreate the directory structure the files had in HPSS.

To use the script, you first need a list of fully qualified file path names and/or directory path names. If you do not already have such a list, you can query HPSS using the following command:

```
hsi -q 'ls -1 -R <HPSS_files_or_directories_you_want_to_retrieve>' |& grep -v '/$' > temp.txt
```

(the stdout+stderr pipe to grep removes directories from the output, keeping only files). Once you have the list of files, feed it to the sorting script:

```
hpss_get_sorted_files.py -i temp.txt -o <your_target_directory, default is current directory> -s <strip string, default is NONE>
```

For files in HPSS under /home/e/elvis/unique_data, you might want to strip off /home/e/elvis from the target directory. You can do that by adding the -s /home/e/elvis flag.

### Avoid Very Large Files¶

File sizes greater than 2 TB can be difficult for HPSS to work with and lead to longer transfer times, increasing the possibility of transfer interruptions. Generally it's best to aim for file sizes in the 100 GB to 2 TB range. You can use tar and split to break up large aggregates or large files into 500 GB chunks:

```
tar cvf - myfiles* | split -d --bytes=500G - my_output_tarname.tar.
```

This will generate a number of files with names like my_output_tarname.tar.00, my_output_tarname.tar.01, etc., which you can archive into HPSS with hsi put. When you retrieve these files, you can recombine them with cat:

```
cat my_output_tarname.tar.* | tar xvf -
```

Tip

If you're generating these chunks on the Lustre file system, be sure to follow the Lustre striping guidelines.

### Accessing HPSS Data Remotely¶

We recommend a two-stage process to move data to / from HPSS and a remote site. Use Globus to transfer the data between NERSC and the remote site (your scratch directory would make a useful temporary staging point), and use hsi or htar to move the data into HPSS.

When connecting to HPSS via ftp or pftp, it is not uncommon to encounter problems due to firewalls at the client site. Often you will have to configure your client firewall to allow connections to HPSS and generate a token for accessing HPSS remotely.
#### Manual Token Generation¶

You can generate a string for access to NERSC HPSS from outside the NERSC network by logging in to Iris and selecting the blue "Storage" tab. Scroll down to the section labeled "HPSS Tokens" and you will see buttons to generate a token for access from an external IP address or from within NERSC. Either button will generate a token which you can paste into a file named .netrc in your home directory:

```
machine archive.nersc.gov login <your NERSC user name> password <token generated by Iris>
```

The .netrc file should only have user-readable permissions. If it is group or world readable, HPSS access will fail.

#### Firewalls and External Access¶

Most firewalls are configured to deny incoming network connections unless access is explicitly granted. Systems running htar or hsi that want to connect to the archive at NERSC must accept network connections initiated by the HPSS Movers (helper machines that initiate multi-stream data movement into and out of the archive). By default, hsi is configured with Firewall Mode set to on and will usually work without any firewall changes. To configure your system to allow connections from HPSS Movers at NERSC, you will need to grant access for TCP connections originating from the 128.55.32.0/22, 128.55.80.0/21, 128.55.88.0/24, 128.55.136.0/22, and 128.55.207.0/24 subnets.

### Use the Xfer Queue¶

Use the dedicated xfer queue for long-running transfers to / from HPSS. You can also submit jobs to the xfer queue after your computations are done. The xfer queue is configured to limit the number of running jobs per user to the same number as the limit of HPSS sessions.

## HPSS Usage Charging¶

DOE's Office of Science awards an HPSS quota to each NERSC project every year. Users charge their HPSS space usage to the HPSS repos of which they are members. Users can check their HPSS usage and quotas with the hpssquota command on Cori. You can view usage at the user level:

```
nersc$ hpssquota -u usgtest
HPSS Usage for User usgtest
REPO                          STORED [GB]      REPO QUOTA [GB]     PERCENT USED [%]
-----------------------------------------------------------------------------------
nstaff                             144.25              49500.0                  0.3
matcomp                              10.0                950.0                  1.1
-----------------------------------------------------------------------------------
TOTAL USAGE [GB]                   154.25
```


Here, "Stored" shows you how much data you have stored in HPSS. Data stored in HPSS could potentially be charged to any repo that you are a member of (see below for details). The "Repo Quota" shows you the maximum amount your PI has allocated for you to store data, and the "Percent Used" shows the percentage of the quota you've used.

You can also view usage on a repo level:

```
nersc$ hpssquota -r ntrain
HPSS Usage for Repo ntrain
USER        STORED [GB]     USER QUOTA [GB]     PERCENT USED [%]
---------------------------------------------------------------------------------------------
train1           100.00               500.0                 20.0
train2             0.35                50.0                  0.1
train47            0.12               500.0                  0.0
train28            0.09               500.0                  0.0
---------------------------------------------------------------------------------------------
TOTAL USAGE [GB]       TOTAL QUOTA [GB]      PERCENT USED
          100.56                  500.0             20.11
```

"Stored" shows how much data each user has in HPSS that is charged to this repo. "User Quota" shows how much total space the PI has allocated for that user (by default this is 100%; PIs may want to adjust these for each user, see below for more info), and "Percent Used" is the percentage of the allocated quota each user has used. The totals at the bottom show the total usage and quota for the whole repo.

You can also check the HPSS quota for a repo by logging in to Iris and clicking on the "Storage" menu.

### Apportioning User Charges to Repositories: Project Percents¶

The HPSS system has no notion of repo accounting, only of user accounting. Users must say "after the fact" how to distribute their HPSS data usage among the HPSS repos to which they belong. If a user belongs to only one HPSS repo, all usage is charged to that repo. If a user belongs to multiple repos, usage is apportioned among the user's repos. By default the split is based on the size of each repo's storage allocation. Users (only the user, not the project managers) can change what percentage of their HPSS usage is charged to which repo in the Storage menu in Iris.

### Adding or Removing Users¶

If a user is added to a new repo or removed from an existing repo, the project percents for that user are adjusted based on the size of the quotas of the repos to which the user currently belongs. However, if the user has previously changed the default project percents, the relative ratios of the previously set project percents are respected.
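The adjustment rule can be expressed compactly. The Python sketch below reproduces the worked example that follows; it is an illustration of the documented rule, not Iris's actual implementation:

```python
def recompute_percents(current, new_repo, new_alloc):
    """Add a repo to a user's project percents.

    `current` maps repo -> (allocation_gb, percent). The new repo takes a
    share proportional to its allocation; existing percents are scaled
    down uniformly, preserving their relative ratios."""
    old_alloc = sum(alloc for alloc, _ in current.values())
    total = old_alloc + new_alloc
    new_pct = 100.0 * new_alloc / total
    scale = (100.0 - new_pct) / 100.0
    updated = {repo: (alloc, pct * scale) for repo, (alloc, pct) in current.items()}
    updated[new_repo] = (new_alloc, new_pct)
    return updated
```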
As an example, user u1 belongs to repos r1 and r2 and has changed the project percents from the default of 50% for each repo to 40% for r1 and 60% for r2:

| Login | Repo | Allocation (GBs) | Project % |
| --- | --- | --- | --- |
| u1 | r1 | 500 | 40 |
| u1 | r2 | 500 | 60 |

If u1 then becomes a new member of repo r3, which has a storage allocation of 1,000 GBs, the project percents will be adjusted as follows (to preserve the old ratio of 40:60 between r1 and r2 while adding r3, which has the same SRU allocation as r1+r2):

| Login | Repo | Allocation (GBs) | Project % |
| --- | --- | --- | --- |
| u1 | r1 | 500 | 20 |
| u1 | r2 | 500 | 30 |
| u1 | r3 | 1,000 | 50 |

If a repo is retired, the percentage charged to that repo is spread among the remaining repos while keeping their relative values the same.

## HPSS Project Directories¶

A special "project directory" can be created in HPSS for groups of researchers who wish to easily share files. The files in this directory will be readable by all members of a particular unix file group. This file group can have the same name as the repository (in which case all members of the repository will have access to the project directory), or a new name can be requested (in which case only those users added to the new file group by the requester will have access to the project directory).

HPSS project directories have the following properties:

• located under /home/projects
• owned by the PI, a PI Proxy, or a Project Manager of the associated repository
• have a suitable group attribute (including the setgid bit)

To request creation of an HPSS project directory, the PI, a PI Proxy, or a Project Manager of the requesting repository should open a ticket.

## Troubleshooting¶

Some frequently encountered issues and how to solve them.

### Trouble connecting¶

The first time you try to connect using a NERSC provided client like hsi, htar, or pftp, you will be prompted for your NERSC password + one-time password, which will generate a token stored in $HOME/.netrc. This allows you to connect to HPSS without typing a password.
However, sometimes this file can become out of date or otherwise corrupted. This generates errors that look like this:

```
nersc$ hsi
result = -11000, errno = 29
Unable to authenticate user with HPSS.
result = -11000, errno = 9
Unable to setup communication to HPSS...
*** HSI: error opening logging
Error - authentication/initialization failed
```

If this error occurs, try moving the $HOME/.netrc file to $HOME/.netrc_temp. Then connect to the HPSS system again and enter your NERSC password + one-time password when prompted. A new $HOME/.netrc file will be generated with a new entry/token. If the problem persists, contact account support.
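Before regenerating the token, you can sanity-check the existing file. The Python sketch below (our helper, not a NERSC tool) applies the documented requirements: the file must exist, be readable only by its owner, and contain an entry for archive.nersc.gov:

```python
import os
import stat

def netrc_ok(path=os.path.expanduser("~/.netrc")):
    """Return (ok, reason) for an HPSS .netrc token file."""
    if not os.path.exists(path):
        return False, "no .netrc file"
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        return False, "permissions too open (should be 0600)"
    with open(path) as f:
        if "archive.nersc.gov" not in f.read():
            return False, "no archive.nersc.gov entry"
    return True, "ok"
```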

### Cannot transfer files using htar¶

htar requires the node you're on to accept incoming connections from its movers. This is not possible from a compute node at NERSC, so htar transfers will fail. Instead we recommend you use our special xfer queue for data transfers.

### Globus transfer errors¶

Globus transfers will fail if you don't have permission to read the source directory or space to write in the target directory. One common mistake is to make the files readable, but forget to make the directory holding them readable. You can check directory permissions with ls -ld. At NERSC you can make sure you have enough space to write in a directory by using the myquota command.