Storage

Pantavisor needs to store some artifacts on disk. These are the elements at the root of the /storage space:

  • boot: information written by Pantavisor so the bootloader can run the proper revision after booting up.
  • cache: to store metadata and other data that is not tied to revisions.
  • config: to store part of the configuration.
  • disks: permanent and revision disks.
  • logs: both containers and Pantavisor logs.
  • objects: binary artifacts that are part of revisions.
  • trails: list of revisions.

Disks

Pantavisor offers a way to define the physical storage medium by setting up disks and disk v2.

There are currently 4 disk types supported for disk:

  • Non-encrypted directory
  • Device Mapper crypt without hardware acceleration (versatile)
  • Device Mapper crypt using i.MX CAAM
  • Device Mapper crypt using i.MX DCP

The new disk v2 supports all the previous ones and adds 2 new types:

  • Swap disk based on zram
  • Volume disk based on zram

Each disk defined in the state JSON, with the exception of the swap disk, can then be privately used by the containers. Disks can also be used internally by Pantavisor, as in the case of metadata.
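
As an illustrative sketch, a disk definition in the state JSON could look like the following. The field values and the exact type names here are assumptions for illustration; check the state format reference of your Pantavisor version for the real schema:

[
    {
        "name": "pv--fdes",
        "type": "dm-crypt-versatile",
        "path": "dm-crypt-file/fdes.img"
    }
]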

Volumes

Additional volumes can be specified to be used internally by Pantavisor. There are two volumes that are currently supported:

  • pv--devmeta: if defined, Pantavisor will use this volume to save and load device metadata.
  • pv--usrmeta: if defined, Pantavisor will use this volume to save and load user metadata.
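
As a hypothetical sketch, assuming each of these volumes is assigned to one of the disks defined in the state JSON (the exact schema and location may differ in your Pantavisor version):

{
    "pv--devmeta": {
        "disk": "pv--fdes"
    },
    "pv--usrmeta": {
        "disk": "pv--fdes"
    }
}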

Metadata

Metadata is the information that is exchanged between Pantavisor and the outside world, either remotely or locally, that is not attached to a revision itself.

Metadata is just a list of key-value pairs of string type. This list is persistent and stored on disk. By default, it is saved in plain text, but this can be changed in favour of volumes on a crypt disk.

Device metadata

This type of metadata is meant to be generated by Pantavisor or any of the running containers to expose internal information.

The list of device metadata is persistent on disk and can be managed locally by mgmt containers. Their values will be uploaded to the cloud in remote mode.

Check the list of default key-value pairs that Pantavisor sends.
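
For illustration only, a device metadata list is a flat map of string keys to string values; the keys below are made-up examples, not the actual defaults:

{
    "pantavisor.version": "018",
    "my-container.status": "ready"
}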

User metadata

This type of metadata is meant to be generated by the user so Pantavisor or any of the containers can consume it.

The list of user metadata is persistent on disk and can be managed locally by mgmt containers. Its values will be deleted or overwritten by the list stored in the cloud in remote mode.

Check out the list of default user metadata consumed by Pantavisor.

Logs

Pantavisor can centralize all of your container logs, separated by revision, in one place on-disk. It does so by running a small server (Log Server) which offers a couple of sockets to the containers.

Output types

There are a number of parameters that can be tweaked from the configuration that affect the Log Server, such as log.capture, log.maxsize, log.level, etc. Among them, there is an important parameter called log.server.outputs that allows changing how the logs will be stored:

These options are not mutually exclusive and can be combined in any fashion.
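
As a sketch, several outputs could be enabled at once in pantavisor.config; the value names below are assumptions, so check the configuration reference for the exact spelling of each sink:

log.server.outputs=filetree,singlefile,stdout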

File tree

This is the default option. In this case, the log output is delivered to a separate file per container. Pantavisor logs are also stored in their own directory.

Single file

With this option, all logs are centralized in one file. These logs are delivered as JSON objects following this structure:

{
    "tsec": 9,
    "tnano": 0,
    "plat": "pantavisor",
    "lvl": "DEBUG",
    "src": "logserver",
    "msg": "starting logserver loop"
}

Standard Output

Outputs logs to stdout, which can be useful for debugging. Unlike stdout_direct, this sink is processed by the log server. This provides more control over the output than its direct counterpart and a way of working that is more similar to the other sinks.

Note

In embedded mode, this is the same as sending the logs to dmesg. You can set ignore_loglevel and printk.devkmsg=on on the kernel command line to get all messages on the console.

Standard Output Pantavisor

Same as Standard Output but filtering out all messages except Pantavisor ones.

Standard Output Containers

Same as Standard Output but filtering out all messages except container ones.

Standard Output Direct

Outputs logs to stdout. Unlike stdout, this sink is not processed by the log server. This means the logs are printed directly without going through the server. It has a faster response and provides logs during device initialization and teardown, when plain stdout would not be available.

This sink will be automatically enabled when stdout is enabled and the log server is not available; that is, before its initialization and after its shutdown. Furthermore, it will always let all FATAL level log messages pass through.

NULL Sink

Sends logs to /dev/null.

Timestamp format

All the log outputs, except for the NULL Sink, print the timestamp alongside the log lines. It is possible to configure the format of the timestamp with the configuration key log.<output>.timestamp.format, which can be set to the following values:

value             example
golang:Layout     "01/02 03:04:05PM '06 -0700"
golang:RubyDate   "Mon Jan 02 15:04:05 -0700 2006"
golang:ANSIC      "Mon Jan _2 15:04:05 2006"
golang:RFC822Z    "02 Jan 06 15:04 -0700"
golang:RFC1123Z   "Mon, 02 Jan 2006 15:04:05 -0700"

Those formats are based on the Golang time constants. For other formats, strftime formatters can be used by setting the strftime: prefix, for example: strftime:%d, %T %Y. Please check the strftime manual for more information about formatters.
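
For example, assuming the single file output is named singlefile in the configuration (substitute the name of the output you are configuring), the strftime format from above would be set like this:

log.singlefile.timestamp.format=strftime:%d, %T %Y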

Trails and objects

Pantavisor trails, with their state JSONs, are stored for each installed revision. The rest of the artifacts, which can be shared between revisions, are called objects and are stored separately.

Garbage Collector

The Pantavisor garbage collector is in charge of cleaning up /storage by automatically removing unused Pantavisor artifacts.

It works by removing logs, trails and stored disks that belong to old revisions. After that, any orphan objects that were linked only to the removed revisions are deleted too.

The garbage collector will not affect any of the files related to the running revision, the revision that is being updated or the latest DONE revision. This set of protected revisions can be extended to include the factory revision using the storage.gc.keep_factory configuration parameter.

Currently, it can be triggered by three different events:

  • by a remote update that requires more disk space than available
  • if a threshold of disk usage is surpassed
  • with a command

Remote update

If a remote update requires more disk space than is available, the garbage collector will be activated. The amount of reserved space can be adjusted with the storage.gc.reserved configuration parameter.

Note

This type of trigger will not work with local updates. For those, you will need to use one of the following two options.

Threshold

Disk usage is checked periodically and the garbage collector is activated if the configured threshold is reached. This threshold is disabled by default and can be set with the storage.gc.threshold parameter.

If an object is put using the objects endpoint, the threshold will be temporarily disabled to avoid removing objects that could be linked to the upcoming new revision. This behavior can also be tuned from the configuration.
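
Putting the garbage collector parameters together, a pantavisor.config sketch might look like this; the values are illustrative, and the units and defaults should be checked in the configuration reference:

storage.gc.reserved=5
storage.gc.threshold=80
storage.gc.keep_factory=1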

On Demand

Finally, the garbage collector can be triggered on demand by issuing a command through the Pantavisor control socket.
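
As a sketch, such a command is a small JSON document sent to the control socket; the operation name below is an assumption, so check the control socket reference for the exact command:

{
    "op": "GC_RUN",
    "payload": ""
}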

Integrity

Pantavisor offers secureboot, a security mechanism to ensure the integrity of the artifacts that are part of a revision. As revisions are composed of a state JSON and a series of artifacts pointed to by that JSON, Pantavisor can perform a two-level validation: artifact checksum and state signature.

Artifact Checksum

By default, the objects and JSONs belonging to the revision that is about to be run after booting up are checked against the checksums stored in the state JSON.

As running a full SHA hashsum of all artifacts can be time consuming for low-spec targets and thus increase boot-up time, a lazy integrity validation on container storage can be performed using handlers, which are specified in the state JSON. Handlers are scripts to mount/unmount volumes. The only supported handler so far is dm-verity.

Checksum validation can also be completely disabled at the configuration level if boot-up speed needs to be boosted further.

State Signature

In addition to artifact checksums, the state JSON can be signed. This can be done from your host computer or from an automated CI, and it is then validated by Pantavisor using a cryptographic hash algorithm. It can be tuned according to one of these four levels of severity:

  • disabled: no signature validation will be performed.
  • audit: all validations from lenient and strict will be performed; errors will be reflected in the log, but nothing will fail.
  • lenient: only the signed artifacts in the revision will be validated. Enabled by default.
  • strict: all artifacts in the revision must be signed and will be validated.
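
Assuming the severity is selected with a secureboot.mode configuration key (the key name here is an assumption), the default level would read:

secureboot.mode=lenient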

Right now, the RS256, ES256, ES384 and ES512 algorithms are supported, and Pantavisor must know the public key, either by storing it on disk directly or by using a certificate chain.

Validation is performed in Pantavisor in two places:

  • when booting up, right after Pantavisor is initialized and before the current revision is loaded. In case of failure, the update will be reported as ERROR if possible and the device will either roll back or just reset.
  • when a new remote update is received or a new local update is run. In case of failure, the update will be reported as WONTGO.

In either case, Pantavisor will parse all signatures, parse the protected JSON, calculate the hashes depending on the elements included in the JSON and compare them with the signatures.

On-disk public key

With this option, Pantavisor needs the RSA public key to be stored on disk. To do so, it is necessary to add it at compile time with the PV_PVS_PUB_PEM option.

The revision checkout will need to include the signature and information about how it was generated.

Certificate chain of trust

In this case, the public key travels in the revision checkout in an x509 certificate. The rest of the chain can be added to the checkout, but the root certificate has to be stored on disk at compile time using the PV_PVS_CERT_PEM option.

The revision checkout will also need to include the signature and information about how it was generated.