

Storage

Pantavisor needs to store some artifacts on disk. These are the elements at the root of the /storage space:

  • boot: information written by Pantavisor so that the bootloader can run the proper revision after boot up.
  • cache: metadata and other data that is not tied to revisions.
  • config: to store part of the configuration.
  • disks: permanent and revision disks.
  • logs: Pantavisor and container logs.
  • objects: binary artifacts that are part of revisions.
  • trails: list of revisions.


Disks

Pantavisor offers a way to define the physical storage medium by setting up disks.

There are currently four disk types supported:

  • Non-encrypted directory
  • Device Mapper crypt without hardware acceleration (versatile)
  • Device Mapper crypt using i.MX CAAM
  • Device Mapper crypt using i.MX DCP

Each disk that is defined in the state JSON can then be privately used by the containers. They can also be internally used by Pantavisor, as in the case of metadata.
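As an illustration, a set of disks might be declared like the sketch below; the field names and type identifiers here are assumptions based on the four supported types, so check them against your Pantavisor version:

```json
[
    {
        "name": "plain-disk",
        "type": "directory",
        "path": "/storage/disks/plain"
    },
    {
        "name": "secure-disk",
        "type": "dm-crypt-versatile",
        "path": "/storage/disks/secure"
    }
]
```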


Volumes

Additional volumes can be specified to be used internally by Pantavisor. There are two volumes that are currently supported:

  • pv--devmeta: if defined, Pantavisor will use this volume to save and load device metadata.
  • pv--usrmeta: if defined, Pantavisor will use this volume to save and load user metadata.


Metadata

Metadata is the information that is exchanged between Pantavisor and the outside world, either remotely or locally, and that is not attached to a revision itself.

Metadata is just a list of key-value pairs of string type. This list is persistent and stored on disk. By default, it is saved in plain text, but this can be changed in favour of volumes on a crypt disk.

Device metadata

This type of metadata is meant to be generated by Pantavisor or any of the running containers to expose internal information.

The list of device metadata is persistent on disk and can be managed locally by mgmt containers. Its values will be uploaded to the cloud in remote mode.

Check the list of default key-value pairs that Pantavisor sends.

User metadata

This type of metadata is meant to be generated by the user so Pantavisor or any of the containers can consume it.

The list of user metadata is persistent on disk and can be managed locally by mgmt containers. Its values will be deleted or overwritten by the list stored in the cloud when in remote mode.

Check out the list of default user metadata consumed by Pantavisor.
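As a sketch of how a container might consume such metadata, assuming a layout where each key is exposed as a plain-text file in a directory (the directory path and one-file-per-key layout are assumptions for illustration, not the guaranteed interface):

```python
import os

def load_meta(meta_dir):
    """Load a key-value metadata directory (one file per key) into a dict.

    Pantavisor-style metadata is a flat list of string key-value pairs;
    the on-disk layout assumed here (one plain-text file per key) is an
    illustration, not the authoritative format.
    """
    meta = {}
    for key in os.listdir(meta_dir):
        path = os.path.join(meta_dir, key)
        if os.path.isfile(path):
            with open(path, "r") as f:
                meta[key] = f.read().strip()
    return meta
```

A container could then call load_meta() on the directory where Pantavisor exposes user metadata and react to the keys it cares about.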


Logs

Pantavisor can centralize both its own logs and your container logs, separated by revision, in one place on disk.

It does so by running a small server (Logserver) that offers two sockets: pv-ctrl-log, into which containers can direct their logs, and pv-fd-log, through which a file descriptor can be subscribed.

There are a number of parameters that can be tweaked from the configuration, such as log.capture, log.maxsize, log.level, etc.

Among the aforementioned parameters there is an important option called log.server.outputs, which allows you to change how the logs will be organized. Currently, two options exist:

File tree (filetree)

This is the default option. The log output is delivered in different files, separated into one folder per container (/pantavisor/logs/current/<container_name>/) plus one for Pantavisor. Inside each container folder there are two subfolders: lxc, where the LXC logs (console.log and lxc.log) can be found, and var, where the syslog and messages logs lie.

Single file (singlefile)

With this option, all logs are centralized in one file called pv.log, which can be found at /pantavisor/logs/current/pv.log. These logs are delivered in JSON format following this structure (the values shown are illustrative):

    {
      "tsec": 1700000000,
      "tnano": 123456789,
      "plat": "pantavisor",
      "lvl": "DEBUG",
      "src": "logserver",
      "msg": "starting logserver loop"
    }
  • tsec: seconds
  • tnano: nanoseconds
  • plat: platform, could be "pantavisor" or some container name
  • lvl: message level (could be DEBUG, WARN, INFO or ERROR)
  • src: denote the origin of the message. In the case of Pantavisor ("plat": "pantavisor") it means a Pantavisor sub-system, otherwise "src" refers to the original file name (for example: "lxc/console.log")
  • msg: contains the log message.
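Since every line of pv.log is a self-contained JSON object, the file is easy to post-process. A minimal Python sketch that filters entries by platform and level:

```python
import json

def filter_log(lines, plat=None, lvl=None):
    """Parse pv.log-style JSON entries and filter by platform and/or level."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        if plat is not None and entry.get("plat") != plat:
            continue
        if lvl is not None and entry.get("lvl") != lvl:
            continue
        entries.append(entry)
    return entries
```

For instance, filter_log(open("/pantavisor/logs/current/pv.log"), plat="pantavisor", lvl="ERROR") would keep only Pantavisor error entries.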

The filetree and singlefile options aren't exclusive and can be used together. Please check the configuration section for more details.

Trails and objects

Pantavisor trails, with their state JSONs, are stored for each installed revision. The rest of the artifacts, which can be shared between revisions, are called objects and are stored separately.

Garbage Collector

The Pantavisor garbage collector is in charge of cleaning up the /storage by automatically removing unused Pantavisor artifacts.

It works by removing logs, trails and stored disks that belong to old revisions. After that, all orphan objects that were linked to the removed revisions are deleted too.

The garbage collector will not affect any of the files related to the running revision, the revision that is being updated or the latest DONE revision. This set of protected revisions can be expanded to include the factory revision using the storage.gc.keep_factory configuration parameter.

Currently, it can be triggered by three different events:

  • by a remote update that requires more disk space than available
  • if a threshold of disk usage is surpassed
  • with a command

Remote update

If a remote update requires more disk space than is available, the garbage collector will be activated. The amount of reserved space can be adjusted with the storage.gc.reserved configuration parameter.


This type of trigger will not work with local updates. For those, you will need to use one of the following two options.


Threshold

Disk usage is checked periodically, and the garbage collector will be activated if the configured threshold is reached. By default this is disabled; it can be changed with the storage.gc.threshold parameter.

If an object is put using the objects endpoint, the threshold will be temporarily disabled to avoid removing objects that could be linked to the upcoming new revision. This behaviour can be adjusted from the configuration.
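The threshold logic boils down to comparing the fraction of used space against a configured percentage. A rough Python sketch of that check (the convention of 0 meaning disabled mirrors the default described above):

```python
import os

def disk_usage_percent(path):
    """Return used space as a percentage, similar to what a periodic
    threshold check could compute before triggering garbage collection."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    if total == 0:
        return 0.0
    return 100.0 * (total - free) / total

def should_collect(path, threshold):
    """Sketch of the storage.gc.threshold idea: trigger GC when usage
    crosses the configured percentage (0 meaning disabled)."""
    return threshold > 0 and disk_usage_percent(path) >= threshold
```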

On Demand

Finally, our garbage collector can be triggered on demand by issuing a command through the Pantavisor control socket.
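Commands to the control socket are JSON payloads. The sketch below only composes such a payload; the operation name "run_gc" and the socket path are assumptions to verify against your Pantavisor version:

```python
import json

# Typical control socket location; verify on your device (assumption).
PV_CTRL_SOCKET = "/pv/pv-ctrl"

def build_command(op, payload=""):
    """Serialize a Pantavisor-style control command as JSON."""
    return json.dumps({"op": op, "payload": payload})

# Hypothetical garbage collection request; the exact op name may differ.
gc_request = build_command("run_gc")
```

The resulting JSON would then be sent over the pv-ctrl Unix socket.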


Secure Boot

Pantavisor offers secureboot, a security mechanism to ensure the integrity of the artifacts that are part of a revision. It can perform a two-level validation: artifact checksum and state signature.

Artifact Checksum

By default, the objects and JSONs belonging to the revision that is about to be run after booting up are checked against the checksums stored in their state JSON.

This verification can be disabled if boot up speed needs to be boosted.
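Conceptually, the checksum validation hashes each object and compares the digest with the one recorded in the state JSON. A minimal sketch, assuming SHA-256 digests:

```python
import hashlib

def verify_object(path, expected_sha256):
    """Hash an object file in chunks and compare against the checksum
    recorded for it in the state JSON (SHA-256 assumed here)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```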

State Signature

In addition to the artifact checksums, the artifacts that form a revision can also be signed. This can be done from your host computer or from an automated CI, and it is then validated by Pantavisor using a cryptographic hash algorithm. It can be tuned according to one of these four levels of severity:

  • disabled: no signature validation will be performed.
  • audit: all validations from lenient and strict will be performed, errors will be reflected in logs but nothing will fail.
  • lenient: only the signed artifacts in the revision will be validated. Enabled by default.
  • strict: all artifacts in the revision must be signed and will be validated.

Right now, the RS256, ES256, ES384 and ES512 algorithms are supported, and Pantavisor must know the public key, either by storing it on disk directly or by using a certificate chain.

Validation is performed in Pantavisor in two places:

  • when booting up, right after Pantavisor is initialized and before the current revision is loaded. In case of failure, the update will be reported as ERROR if possible, and the device will either roll back or just reset.
  • when a new remote update is received or a new local update is run. In case of failure, the update will be reported as WONTGO.

In either case, Pantavisor will parse all signatures, parse the protected JSON, calculate the hash over the elements included in the JSON, and compare the results with the signatures.

On-disk public key

With this option, Pantavisor needs the RSA public key to be stored on disk. To do so, it is necessary to add it at compile time with the PV_PVS_PUB_PEM option.

It will need the signature and information about how to generate it in the revision checkout.

Certificate chain of trust

In this case, the public key travels in the revision checkout inside an x509 certificate. The rest of the chain can be added to the checkout, but the root certificate has to be stored on disk at compile time using the PV_PVS_CERT_PEM option.

It will also need the signature and information about how to generate it in the revision checkout.