Using the REST API for PixStor

Background

The PixStor Management REST API uses an object model which is an abstraction over the underlying file system.

Consequently, there is not a one-to-one mapping between REST API objects and PixStor objects such as filesets.

This page lays out how PixStor objects and operations map to their nearest equivalent REST API methods.

This is a relatively high-level description, ignoring things like authentication. For a more detailed look at the REST API, see REST API Overview

List Filesystems

Filesystems roughly correspond to Native Exposers

GET /exposers/?where={"type":"gpfsnative"}
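
As an illustration, the query can be issued with Python and the requests library as sketched below. The base URL is a placeholder for your own deployment, authentication is omitted (see REST API Overview), and the where filter is passed as a JSON-encoded query parameter.

import json
import requests

BASE = "https://pixstor.example.com/api"  # placeholder endpoint - substitute your own

# The 'where' filter is a JSON document passed as a query parameter;
# requests URL-encodes it automatically.
params = {"where": json.dumps({"type": "gpfsnative"})}

# Authentication is omitted here - see the REST API Overview.
response = requests.get(BASE + "/exposers/", params=params)
response.raise_for_status()
print(response.json())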

List Filesets

Filesets correspond to Spaces

GET /spaces/

List filesets in a filesystem

GET /spaces/?where={"exposers.filesystem":"mmfs1"}

This won’t return unlinked filesets, or filesets excluded by the apconfig setting arcapix.management.filesets.exclude

List Snapshots

GET /snapshots/

List filesystem global snapshots

GET /snapshots/?where={"exposer.filesystem":"mmfs1"}

List fileset snapshots

GET /snapshots/?where={"space.name":"projects"}

NOTE: This searches by space name, not fileset name

Typically, the fileset name will be the space name prefixed with the target pool name, e.g. sas1-projects
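
As a small illustration of that convention (the exact naming is deployment-specific), the snippet below builds the typical fileset name from a pool and a space name:

# Illustrative only: the typical convention joins the placement pool
# and the space name with a hyphen. Deployments may differ.
pool = "sas1"
space_name = "projects"

fileset_name = f"{pool}-{space_name}"
print(fileset_name)  # sas1-projects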

List NFS Shares

GET /exposers/?where={"type":"nfs"}

List Samba Shares

GET /exposers/?where={"type":"cifs"}

Create a Fileset

For a more detailed discussion of creating a fileset (space), see Walkthrough: Creating a Space

In the PixStor Admin UI, there are three types of fileset: Independent, Dependent, and Templated

In the REST API, there are just dependent and independent filesets, both of which can have templates installed.

Settings

Name

Name of the space. Note that the fileset name will be the space name prefixed with the name of the placement pool.

So if the space name is projects and the placement pool is sas1, then the fileset will be named sas1-projects

Profile

The profile determines the filesystem that the fileset belongs to, and which pool the fileset data is allocated to.

You can create your own profile, or you can use one of the predefined ones. For every filesystem and data pool, there is a corresponding profile. There is also a special ‘default’ profile for each filesystem, which corresponds to whatever the filesystem's default placement pool is.

These predefined profiles are named like {filesystem}-{pool}. So to create a fileset in mmfs1, allocated to pool sas1, we need to look up the id of the corresponding profile

GET /profiles/?where={"name":"mmfs1-sas1"}
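
A sketch of that lookup in Python follows. The base URL is a placeholder, authentication is omitted, and the exact response layout is not reproduced in this guide, so extract the profile id from whatever structure your deployment returns in order to build the /profiles/<id> reference used below.

import json
import requests

BASE = "https://pixstor.example.com/api"  # placeholder endpoint

# Look up the predefined profile for filesystem 'mmfs1' and pool 'sas1'.
params = {"where": json.dumps({"name": "mmfs1-sas1"})}
response = requests.get(BASE + "/profiles/", params=params)
response.raise_for_status()

# The matching profile's id is needed to build the "/profiles/<id>"
# reference used when creating a space. The response layout is not
# shown in this guide - inspect it to locate the id field.
print(response.json())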

Templates

A fileset can optionally have one or more templates applied to it.

A template is a pre-defined directory structure with permissions and optional dependent filesets.

Exposers

GPFS native exposers roughly correspond to filesystems. However, as previously mentioned, the filesystem that a space (fileset) is assigned to is determined by its profile.

The exposer is optional, but if provided it must be the GPFS native exposer corresponding to the profile's filesystem.

GET /exposers/?where={"type":"gpfsnative","name":"mmfs1"}

If an exposer is provided, the fileset will be linked; if not, it will be left unlinked

Relative Path

The relative path defines where the fileset will be linked if an exposer is provided

For example, if the native exposer has mountpoint /mmfs1 and the space has relative path projects, then the fileset will be linked at /mmfs1/projects

Size

If a size is specified, a block quota will be created for the fileset. The hard limit will be the specified size, and the soft limit will be 90% of the size.

If a size is not specified, no quota will be applied.
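
For example, the size value 4294967296 used in the creation example below is simply 4 GiB expressed in bytes; the resulting soft limit would be 90% of that figure:

# Sizes are raw byte counts. 4 GiB, as used in the example below:
size = 4 * 2**30              # 4294967296 bytes (hard limit)
soft_limit = int(size * 0.9)  # 3865470566 bytes (soft limit, 90% of size)
print(size, soft_limit)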

Extras

There are various additional settings which can be provided via extras:

gpfs.fileset.dependent

Indicates whether the fileset should be dependent or independent.

An independent fileset has its own inodespace, while a dependent fileset belongs to another fileset’s inodespace

gpfs.fileset.inodespace

The inodespace the fileset should be assigned to.

This only has meaning for dependent filesets. Independent filesets always belong to inodespace 0

If not provided, the inodespace for dependent filesets will be inferred from the link path (via the space’s exposer and relative path).

gpfs.fileset.maxinodes

The maximum inodes allocation of the corresponding fileset.

If not provided, a default value will be used. For PixStor Management 0.3 and earlier, the default value was 20,000. As of PixStor Management 0.4, this default is configurable via the apconfig setting arcapix.management.filesets.maxinodes

gpfs.fileset.allocinodes

The number of inodes allocated to the corresponding fileset. allocinodes must be less than maxinodes.

If not provided, a default value will be used. For PixStor Management 0.3 and earlier, the default value was 10,000. As of PixStor Management 0.4, this default is configurable via the apconfig setting arcapix.management.filesets.allocinodes

Note

Inode limits can’t be set for dependent filesets. If provided, these settings will be ignored.

Templated Independent Fileset

POST /spaces/
{
    "template": {
        "data": [
        {
            "name": "name",
            "value": "projects"
        },
        {
            "name": "profile",
            "value": "/profiles/116cca6e-21cf-4ee0-b950-d37eca6d29d7"
        },
        {
            "name": "templates",
            "value": "/templates/397efad5-4ab5-4b01-8955-954360acffc7"
        },
        {
            "name": "exposers",
            "value": "/exposers/0f34e6c4-880d-479f-86bd-4de952697cdf"
        },
        {
            "name": "relativepath",
            "value": "projects"
        },
        {
            "name": "size",
            "value": 4294967296
        },
        {
            "name": "extras",
            "value": {
                "gpfs.fileset.allocinodes": 102400,
                "gpfs.fileset.maxinodes": 204800
            }
        }
        ]
    }
}
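
A minimal sketch of submitting that request with Python follows, assuming the API accepts a standard application/json body. The base URL is a placeholder, authentication is omitted, and the profile, template, and exposer URIs would come from the lookups described above.

import requests

BASE = "https://pixstor.example.com/api"  # placeholder endpoint

body = {
    "template": {
        "data": [
            {"name": "name", "value": "projects"},
            # URIs taken from the /profiles/, /templates/ and /exposers/ lookups above
            {"name": "profile", "value": "/profiles/116cca6e-21cf-4ee0-b950-d37eca6d29d7"},
            {"name": "templates", "value": "/templates/397efad5-4ab5-4b01-8955-954360acffc7"},
            {"name": "exposers", "value": "/exposers/0f34e6c4-880d-479f-86bd-4de952697cdf"},
            {"name": "relativepath", "value": "projects"},
            {"name": "size", "value": 4294967296},
            {"name": "extras", "value": {
                "gpfs.fileset.allocinodes": 102400,
                "gpfs.fileset.maxinodes": 204800,
            }},
        ]
    }
}

# Authentication omitted - see the REST API Overview.
response = requests.post(BASE + "/spaces/", json=body)
response.raise_for_status()
print(response.status_code)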

Dependent Fileset

POST /spaces/
{
    "template": {
        "data": [
        {
            "name": "name",
            "value": "assets"
        },
        {
            "name": "profile",
            "value": "/profiles/116cca6e-21cf-4ee0-b950-d37eca6d29d7"
        },
        {
            "name": "exposers",
            "value": "/exposers/0f34e6c4-880d-479f-86bd-4de952697cdf"
        },
        {
            "name": "relativepath",
            "value": "projects/assets"
        },
        {
            "name": "extras",
            "value": {
                "gpfs.fileset.dependent": true,
                "gpfs.fileset.inodespace": 13
            }
        }
        ]
    }
}

Change fileset quota

PATCH /spaces/e8919907-f896-4e89-883c-2ae68ad5b7b3
{
    "template": {
        "data": [
        {
            "name": "size",
            "value": 8589934592
        }
        ]
    }
}
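
The same call might look like the following in Python (placeholder base URL and space id; authentication omitted). Setting size to 0 removes the quota, as shown in the next example.

import requests

BASE = "https://pixstor.example.com/api"  # placeholder endpoint
space_id = "e8919907-f896-4e89-883c-2ae68ad5b7b3"  # id of the space to update

body = {"template": {"data": [{"name": "size", "value": 8589934592}]}}

response = requests.patch(f"{BASE}/spaces/{space_id}", json=body)
response.raise_for_status()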

Unset quota

PATCH /spaces/e8919907-f896-4e89-883c-2ae68ad5b7b3
{
    "template": {
        "data": [
        {
            "name": "size",
            "value": 0
        }
        ]
    }
}

Create a Snapshot

Global snapshot

To create a global snapshot, you create a snapshot of the GPFS native exposer corresponding to the filesystem you want to snapshot

GET /exposers/?where={"type":"gpfsnative","name":"mmfs1"}
POST /snapshots/
{
    "template": {
        "data": [
        {
            "name": "name",
            "value": "global-snapshot1"
        },
        {
            "name": "type",
            "value": "gpfsnativeexposersnapshot",
        },
        {
            "name": "exposer",
            "value": "/exposers/b7a39fd6-436e-469c-83c0-16f8cc7d7dd1"
        }
        ]
    }
}
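
A sketch of the two-step flow in Python (placeholder base URL; authentication omitted; the exposer URI would be taken from the lookup response, whose exact layout is not reproduced here):

import json
import requests

BASE = "https://pixstor.example.com/api"  # placeholder endpoint

# Step 1: find the GPFS native exposer for the filesystem to snapshot.
params = {"where": json.dumps({"type": "gpfsnative", "name": "mmfs1"})}
lookup = requests.get(BASE + "/exposers/", params=params)
lookup.raise_for_status()
# In practice, extract the exposer URI from the lookup response;
# the value below reuses the id from the example above.
exposer_uri = "/exposers/b7a39fd6-436e-469c-83c0-16f8cc7d7dd1"

# Step 2: create the global snapshot against that exposer.
body = {
    "template": {
        "data": [
            {"name": "name", "value": "global-snapshot1"},
            {"name": "type", "value": "gpfsnativeexposersnapshot"},
            {"name": "exposer", "value": exposer_uri},
        ]
    }
}
response = requests.post(BASE + "/snapshots/", json=body)
response.raise_for_status()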

Fileset snapshot

Similarly, to create a fileset snapshot, you create a snapshot of the space corresponding to the fileset you want to snapshot

POST /snapshots/
{
    "template": {
        "data": [
        {
            "name": "name",
            "value": "projects.1565013620"
        },
        {
            "name": "type",
            "value": "gpfsspacesnapshot",
        },
        {
            "name": "space",
            "value": "/spaces/9a54fd89-c56e-46c9-bd16-e6d6d3d91e12"
        }
        ]
    }
}

Note

Snapshots cannot be created for dependent filesets

Create an NFS Share

readonly is the only ‘first-class’ setting. All other valid NFS settings can be passed via extras

POST /exposers/
{
    "template": {
        "data": [
        {
            "name": "sharepath",
            "value": "/mmfs1/data"
        },
        {
            "name": "client",
            "value": "*"
        },
        {
            "name": "type",
            "value": "nfs"
        },
        {
            "name": "readonly",
            "value": true
        },
        {
            "name": "extras",
            "value": {
                "no_root_squash": true,
                "secure": true
            }
        }
        ]
    }
}

Create a Samba Share

readonly and visible are ‘first-class’ settings. All other valid Samba settings can be passed via extras

visible is equivalent to the Samba browsable setting

POST /exposers/
{
    "template": {
        "data": [
        {
            "name": "name",
            "value": "some_data"
        },
        {
            "name": "sharepath",
            "value": "/mmfs1/data"
        },
        {
            "name": "type",
            "value": "cifs"
        },
        {
            "name": "visible",
            "value": true
        },
        {
            "name": "readonly",
            "value": true
        },
        {
            "name": "extras",
            "value": {
                "gpfs:hsm": true,
                "guest ok": false
            }
        }
        ]
    }
}