
Welcome

Anchore Open Source Software (OSS) is a suite of tools for Software Bill of Materials (SBOM) Generation, Vulnerability Scanning, and License Scanning.

Start by going to the project overview of Anchore OSS to learn more about the basic concepts and functions.

Installing the Tools

The tools are available through official and community curated distribution channels.

Using the Tools

Step-by-step guides for common use cases:

Or check out the Reference section for CLI and configuration details.

Contributing

Interested in contributing to Anchore OSS?

Check out the contribution guides for how to get started!

1 - Projects

Overview of Anchore Open Source tools

We maintain three popular command-line tools, some libraries, and supporting utilities. Most are written in Go, with a few in Python. They are all released under the Apache-2.0 license. For the full list, see our GitHub org.

Anchore’s tools follow a simple workflow: surface evidence in the form of a Software Bill of Materials (SBOM) using Syft, then analyze that SBOM with Grype for security vulnerabilities and with Grant for open source license compliance.

%%{init: {'theme':'base', 'themeVariables': { 'primaryColor':'#f8fafc','primaryTextColor':'#1e293b','primaryBorderColor':'#cbd5e1','lineColor':'#94a3b8','secondaryColor':'#f8fafc','tertiaryColor':'#f8fafc'}}}%%
graph LR
    software["📦 Your Software<br/><small>Container Images<br/>Filesystems<br/>Archives</small>"]
    syft["🔍 Syft<br/><small>SBOM Generator</small>"]
    sbom@{ shape: doc, label: "📋 SBOM<br/><small>Software Bill<br/>of Materials</small>"}
    grype["🛡️ Grype<br/><small>Vulnerability<br/>Scanner</small>"]
    grant["⚖️ Grant<br/><small>License<br/>Scanner</small>"]
    vulns@{ shape: doc, label: "Security Report<br/><small>CVE findings</small>"}
    licenses@{ shape: doc, label: "License Report<br/><small>Compliance info</small>"}

    software -.->|scan| syft
    syft -->|generates| sbom
    sbom -->|analyze| grype
    sbom -->|analyze| grant
    grype -->|produces| vulns
    grant -->|produces| licenses

    classDef inputStyle fill:#f8fafc,stroke:#cbd5e1,stroke-width:2px,stroke-dasharray: 5 5,color:#64748b
    classDef syftStyle fill:#fdf4ff,stroke:#e879f9,stroke-width:2px,color:#6b21a8
    classDef grypeStyle fill:#eff6ff,stroke:#3b82f6,stroke-width:2px,color:#1e3a8a
    classDef grantStyle fill:#f0fdf4,stroke:#00b388,stroke-width:2px,color:#065f46
    classDef docStyle fill:#ffffff,stroke:#cbd5e1,stroke-width:1px,color:#475569

    class software inputStyle
    class syft syftStyle
    class grype grypeStyle
    class grant grantStyle
    class sbom,vulns,licenses docStyle

This modular approach lets you generate the SBOM once with Syft, then use Grype and Grant independently to scan for different types of risk.

Syft logo Syft

SBOM Generator and library

Syft (pronounced like sift) is an open-source command-line tool and Go library. Its primary function is to scan container images, file systems, and archives to automatically generate a Software Bill of Materials, making it easier to understand the composition of software.  

Grype logo Grype

Vulnerability Scanner

Grype (rhymes with hype) is an open-source vulnerability scanner specifically designed to analyze container images and filesystems. It works by comparing the software components it finds against a database of known vulnerabilities, providing a report of potential risks so they can be addressed.

Grant logo Grant

License Scanner

Grant is an open-source command-line tool designed to discover and report on the software licenses present in container images, SBOM documents, or filesystems. It helps users understand the licenses of their software dependencies and can check them against user-defined policies to ensure compliance.

2 - Installation

Official and community maintained packages of Anchore OSS Tools

Any of the tools can be installed with:

curl -sSfL https://get.anchore.io/TOOLNAME | sudo sh -s -- -b /usr/local/bin

However, there are additional installation options for each tool, so see the individual pages for more information.

2.1 - Installing Syft

Official builds

The Anchore OSS team publish official source archives and binary builds of Syft for Linux, macOS and Windows. There are also numerous community-maintained builds of the tools for different platforms.

Installer script

Syft binaries are provided for Linux, macOS and Windows.

curl -sSfL https://get.anchore.io/syft | sudo sh -s -- -b /usr/local/bin

Install script options:

  • -b: Specify a custom installation directory (defaults to ./bin)
  • -d: More verbose logging levels (-d for debug, -dd for trace)
  • -v: Verify the signature of the downloaded artifact before installation (requires cosign to be installed)

Updating Syft

Syft checks for new versions on launch. It will print a message at the end of the output if the version in use is not the latest.

A newer version of syft is available for download: 1.20.0 (installed version is 1.19.2)

Docker container

docker pull anchore/syft

GitHub releases

  • Download the file for your operating system and architecture from the GitHub releases page
  • In the case of .deb or .rpm, install them using your package manager
  • For compressed archives, unpack the file, and copy the syft binary to a folder in your path such as /usr/local/bin
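The unpack-and-copy steps look like this in practice. The release archive below is a locally created stand-in (a stub script named syft), not a real download, so the example is self-contained:

```shell
# Stand-in for a downloaded release archive: a stub "syft" script.
mkdir -p /tmp/release
printf '#!/bin/sh\necho "syft stub"\n' > /tmp/release/syft
chmod +x /tmp/release/syft
tar -czf /tmp/syft_release.tar.gz -C /tmp/release syft

# Unpack and copy the binary to a directory on your PATH
# (a user-writable directory here instead of /usr/local/bin):
mkdir -p "$HOME/.local/bin"
tar -xzf /tmp/syft_release.tar.gz -C "$HOME/.local/bin"
"$HOME/.local/bin/syft"   # prints "syft stub"
```

With a real release archive, the same tar -xzf step places the actual syft binary on your PATH.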

Community builds of syft

Alpine Linux

apk add syft

Thanks to Michał Polański for maintaining this package.

Homebrew

brew tap anchore/syft
brew install syft

Thanks to the Syft community for maintaining this package.

Kali Linux

sudo apt install syft

Thanks to Sophie Brun for maintaining this package.

Nix

Syft is available in the stable channel since NixOS 22.05.

nix-env -i syft

Alternatively, just try it out in an ephemeral nix shell.

nix-shell -p syft

WinGet

winget install Anchore.syft

Thanks to Alan Pope for maintaining this package.

Scoop

scoop bucket add main
scoop install main/syft

Snapcraft

snap install syft

Thanks to Alan Pope for maintaining this package.

2.2 - Installing Grype

Official builds

The Anchore OSS team publish official source archives and binary builds of Grype for Linux, macOS and Windows. There are also numerous community-maintained builds of the tools for different platforms.

Installer script

Grype binaries are provided for Linux, macOS and Windows.

curl -sSfL https://get.anchore.io/grype | sudo sh -s -- -b /usr/local/bin

Install script options:

  • -b: Specify a custom installation directory (defaults to ./bin)
  • -d: More verbose logging levels (-d for debug, -dd for trace)
  • -v: Verify the signature of the downloaded artifact before installation (requires cosign to be installed)

Updating Grype

Grype checks for new versions on launch. It will print a message at the end of the output if the version in use is not the latest.

A newer version of grype is available for download: 0.92.0 (installed version is 0.91.2)

Docker container

docker pull anchore/grype

GitHub releases

  • Download the file for your operating system and architecture from the GitHub releases page
  • In the case of .deb or .rpm, install them using your package manager
  • For compressed archives, unpack the file, and copy the grype binary to a folder in your path such as /usr/local/bin

Community builds of Grype

Arch Linux

sudo pacman -S grype-bin

Homebrew

brew tap anchore/grype
brew install grype

MacPorts

sudo port install grype

Winget

winget install Anchore.Grype

Scoop

scoop bucket add main
scoop install main/grype

Snapcraft

snap install grype

2.3 - Installing Grant

Official builds

The Anchore OSS team publish official source archives and binary builds for Linux and macOS. There are also some community-maintained builds of the tools for different platforms.

Installer script

Grant binaries are provided for Linux and macOS.

curl -sSfL https://get.anchore.io/grant | sudo sh -s -- -b /usr/local/bin

Install script options:

  • -b: Specify a custom installation directory (defaults to ./bin)
  • -d: More verbose logging levels (-d for debug, -dd for trace)
  • -v: Verify the signature of the downloaded artifact before installation (requires cosign to be installed)

GitHub releases

  • Download the file for your operating system and architecture from the GitHub releases page
  • In the case of .deb or .rpm, install them using your package manager
  • For compressed archives, unpack the file, and copy the grant binary to a folder in your path such as /usr/local/bin

Community builds of grant

Homebrew

brew tap anchore/grant
brew install grant

2.4 - Verifying Downloads

Verifying release assets after downloading

Why verify downloads?

Verifying your downloads ensures that:

  • The files haven’t been tampered with during transit
  • You’re installing authentic Anchore software
  • Your supply chain is secure from the start

All release artifacts include checksums, and the checksum file itself is cryptographically signed using cosign for verification.

Prerequisites

Before verifying downloads, you need:

  • The binary you want to verify (see Installation)
  • Cosign installed (for signature verification)

Note: Checksum verification doesn’t require additional tools beyond your operating system’s built-in utilities.

Cosign signature verification

This method verifies that your download is both authentic (from Anchore) and hasn’t been tampered with.

Step 1: Download the files

Download your tool binary and the verification files from the appropriate GitHub releases page:

You’ll need:

  • The binary file (e.g., syft_1.23.1_darwin_arm64.tar.gz)
  • checksums.txt
  • checksums.txt.pem
  • checksums.txt.sig

Step 2: Verify the signature

Use cosign to verify the checksum file’s signature:

cosign verify-blob <path to checksums.txt> \
  --certificate <path to checksums.txt.pem> \
  --signature <path to checksums.txt.sig> \
  --certificate-identity-regexp 'https://github\.com/anchore/<tool-name>/\.github/workflows/.+' \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com"

Replace <tool-name> with syft, grype, or grant depending on which tool you’re verifying.

Expected output on success:

Verified OK

Step 3: Verify the checksum

Once the signature is confirmed as valid, verify that the SHA256 checksum matches your downloaded file:

sha256sum --ignore-missing -c checksums.txt

Expected output on success:

<your-binary-file>: OK

Complete example

Here’s a complete example verifying Syft v1.23.1 for macOS ARM64:

Download the files:

# Download the binary
wget https://github.com/anchore/syft/releases/download/v1.23.1/syft_1.23.1_darwin_arm64.tar.gz

# Download verification files
wget https://github.com/anchore/syft/releases/download/v1.23.1/syft_1.23.1_checksums.txt
wget https://github.com/anchore/syft/releases/download/v1.23.1/syft_1.23.1_checksums.txt.pem
wget https://github.com/anchore/syft/releases/download/v1.23.1/syft_1.23.1_checksums.txt.sig

Verify the signature:

cosign verify-blob ./syft_1.23.1_checksums.txt \
  --certificate ./syft_1.23.1_checksums.txt.pem \
  --signature ./syft_1.23.1_checksums.txt.sig \
  --certificate-identity-regexp 'https://github\.com/anchore/syft/\.github/workflows/.+' \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com"

Output:

Verified OK

Verify the checksum:

sha256sum --ignore-missing -c syft_1.23.1_checksums.txt

Output:

syft_1.23.1_darwin_arm64.tar.gz: OK

Checksum verification

If you can’t use cosign, you can verify checksums manually. This verifies file integrity but not authenticity.

Step 1: Download the files

Download your tool binary and the checksums file:

# Example for Syft v1.23.1
wget https://github.com/anchore/syft/releases/download/v1.23.1/syft_1.23.1_darwin_arm64.tar.gz
wget https://github.com/anchore/syft/releases/download/v1.23.1/syft_1.23.1_checksums.txt

Step 2: Verify the checksum

sha256sum --ignore-missing -c syft_1.23.1_checksums.txt

Expected output:

syft_1.23.1_darwin_arm64.tar.gz: OK
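The checksum mechanics can also be tried locally without downloading anything; release.bin below is a hypothetical stand-in for a real release artifact:

```shell
# Create a stand-in artifact and a checksums file in the same
# "<digest>  <filename>" format used by the release checksums.txt.
mkdir -p /tmp/checksum-demo && cd /tmp/checksum-demo
printf 'example artifact contents\n' > release.bin
sha256sum release.bin > checksums.txt

# --ignore-missing skips checksum entries for files you didn't download.
sha256sum --ignore-missing -c checksums.txt   # prints "release.bin: OK"
```

This is exactly what happens with a real checksums file: only the entries matching files present on disk are checked.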

Troubleshooting

Verification failed

If cosign verification fails, check these common issues:

  • Mismatched certificate identity: Ensure you’re using the correct tool name (syft, grype, or grant) in the certificate identity pattern
  • Outdated cosign: Update to the latest version of cosign
  • Network connectivity: Cosign requires internet access to verify against transparency logs
  • Corrupted download: Try downloading the verification files again

Checksum doesn’t match

If the checksum verification fails:

  • Download again: The file may have been corrupted during download
  • Check the filename: Ensure you’re comparing the checksum for the correct file (right version, architecture, and tool)
  • Do not proceed: A mismatched checksum indicates a potential security issue or corruption

Platform-specific issues

macOS:

  • If you get a “command not found” error for sha256sum, use shasum -a 256 instead
  • Example: shasum -a 256 syft_1.23.1_darwin_arm64.tar.gz

Windows:

  • Use PowerShell’s Get-FileHash command:

    Get-FileHash .\syft_1.23.1_windows_amd64.zip -Algorithm SHA256
    

Need help?

If you’re still having issues:

3 - Guides

Step-by-step guides for common use cases

3.1 - SBOM Generation

Learn how to create Software Bills of Materials (SBOMs) for container images, filesystems, and archives using Syft.

3.1.1 - Getting Started

Use Syft to generate your first SBOM from container images, directories, or archives.

What is an SBOM?

A Software Bill of Materials (SBOM) is a detailed list of all libraries and components that make up software.

  • For developers, it’s crucial for tracking dependencies, identifying vulnerabilities, and ensuring license compliance.

  • For organizations, it provides transparency into the software supply chain to assess security risks.

Syft is a CLI tool for generating an SBOM from container images and filesystems.

Installation

Syft is provided as a single compiled executable and requires no external dependencies to run. Run the command for your platform to download the latest release.

curl -sSfL https://get.anchore.io/syft | sudo sh -s -- -b /usr/local/bin
brew install syft
winget install Anchore.Syft

Check out the installation guide for the full list of official and community-maintained packaging options.

Find packages within a container image

Run syft against a small container image; the output will be a simple human-readable table of the installed packages found:

syft alpine:latest
NAME                    VERSION      TYPE
alpine-baselayout       3.6.8-r1     apk
alpine-baselayout-data  3.6.8-r1     apk
alpine-keys             2.5-r0       apk
alpine-release          3.21.3-r0    apk
apk-tools               2.14.6-r3    apk
busybox                 1.37.0-r12   apk
busybox-binsh           1.37.0-r12   apk
...

Create an industry-standard SBOM

This command will display the human-readable table and write SBOMs in both SPDX and CycloneDX formats, the two primary industry standards.

# Scan alpine:latest; print a human-readable table to stdout, and write
# SPDX-JSON and CycloneDX-JSON SBOMs to files
syft alpine:latest \
  -o table \
  -o spdx-json=alpine.spdx.json \
  -o cyclonedx-json=alpine.cdx.json

The same table will be displayed, and two SBOM files will be created in the current directory.

Examine the SBOM file contents

We can use jq to extract specific package data from the SBOM files. (By default Syft outputs JSON on a single line; you can enable pretty-printing with the SYFT_FORMAT_PRETTY=true environment variable.) The two formats structure package information differently:

SPDX format:

jq '.packages[].name' alpine.spdx.json

CycloneDX format:

jq '.components[].name' alpine.cdx.json

Both commands show the packages that Syft found in the container image:

"alpine-baselayout"
"alpine-baselayout-data"
"alpine-keys"
"alpine-release"
"apk-tools"
"busybox"
"busybox-binsh"
...

By default, Syft shows only software visible in the final container image (the “squashed” representation). To include software from all image layers, regardless of its presence in the final image, use --scope all-layers:

syft <image> --scope all-layers

FAQ

Does Syft need internet access?

Only for downloading container images. By default, scanning works offline.

What about private container registries?

Syft supports authentication for private registries. See Private Registries.

Can I use Syft in CI/CD pipelines?

Absolutely! Syft is designed for automation. Generate SBOMs during builds and scan them for vulnerabilities.

What data does Syft send externally?

Nothing. Syft runs entirely locally and doesn’t send any data to external services.

Next steps

Now that you’ve generated your first SBOM, here are additional resources:

  • Scan for vulnerabilities: Use Grype to find security issues in your SBOMs
  • Check licenses: Learn about License Scanning to understand dependency licenses
  • Customize output: Explore different Output Formats for various tools and workflows
  • Query SBOM data: Master Working with Syft JSON for advanced data extraction

3.1.2 - Supported Scan Targets

Explore the different scan targets Syft supports including container images, OCI registries, directories, files, and archives.

Syft can generate an SBOM from a variety of scan targets including container images, directories, files, and archives. In most cases, you can simply point Syft at what you want to analyze and it will automatically detect and catalog it correctly.

Catalog a container image from your local daemon or a remote registry:

syft alpine:latest

Catalog a directory (useful for analyzing source code or installed applications):

syft /path/to/project

Catalog a container image archive:

syft image.tar

To explicitly specify the scan target type, use the --from flag:

--from ARG       Description
docker           Use images from the Docker daemon
podman           Use images from the Podman daemon
containerd       Use images from the Containerd daemon
docker-archive   Use a tarball from disk for archives created from docker save
oci-archive      Use a tarball from disk for OCI archives (from Skopeo or otherwise)
oci-dir          Read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)
singularity      Read directly from a Singularity Image Format (SIF) container file on disk
dir              Read directly from a path on disk (any directory)
file             Read directly from a path on disk (any single file)
registry         Pull image directly from a registry (bypassing any container runtimes)

Instead of using the --from flag explicitly, you can:

  • provide no hint and let Syft detect the scan target type automatically based on the input provided

  • provide the scan target type as a URI scheme in the target argument (e.g., docker:alpine:latest, oci-archive:/path/to/image.tar, dir:/path/to/dir)

Scan Target-Specific Behaviors

Container Image Scan Targets

When working with container images, Syft applies the following defaults and behaviors:

  • Registry: If no registry is specified in the image reference (e.g. alpine:latest instead of docker.io/alpine:latest), Syft assumes docker.io
  • Platform: For image references that are not pinned to a specific manifest (i.e. tags), or for multi-arch images pointing to an index (not a manifest), Syft analyzes the linux/amd64 manifest by default. Use the --platform flag to target a different platform.

When you provide an image reference without specifying a scan target type (i.e. no --from flag), Syft attempts to resolve the image using the following scan targets in order:

  1. Docker daemon
  2. Podman daemon
  3. Containerd daemon
  4. Direct registry access

For example, when you run syft alpine:latest, Syft will first check your local Docker daemon for the image. If Docker isn’t available, it tries Podman, then Containerd, and finally attempts to pull directly from the registry.

You can override this default behavior with the default-image-pull-source configuration option to always prefer a specific scan target. See Configuration for more details.

Directory Scan Targets

When you provide a directory path as the scan target, Syft recursively scans the directory tree to catalog installed software packages and files.

When you point Syft at a directory (especially system directories like /), it automatically skips certain filesystem types to improve scan performance and avoid indexing areas that don’t contain installed software packages.

Filesystems always skipped

  • proc / procfs - Virtual filesystem for process information
  • sysfs - Virtual filesystem for kernel and device information
  • devfs / devtmpfs / udev - Device filesystems

Filesystems conditionally skipped

tmpfs filesystems are only skipped when mounted at these specific locations:

  • /dev - Device files
  • /sys - System information
  • /run and /var/run - Runtime data and process IDs
  • /var/lock - Lock files

These paths are excluded because they contain virtual or temporary runtime data rather than installed software packages. Skipping them significantly improves scan performance and enables you to catalog entire system root directories without getting stuck scanning thousands of irrelevant entries.

Syft identifies these filesystems by reading your system’s mount table (/proc/self/mountinfo on Linux). When a directory matches one of these criteria, the entire directory tree under that mount point is skipped.
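To see what that lookup involves, the snippet below parses a few sample mountinfo lines (embedded here so the example is self-contained; a real scan reads /proc/self/mountinfo). The filesystem type is the field immediately after the "-" separator:

```shell
# Sample /proc/self/mountinfo lines (real files have many more entries).
cat > /tmp/mountinfo.sample <<'EOF'
36 25 0:31 / /proc rw,nosuid - proc proc rw
37 25 0:32 / /sys rw,nosuid - sysfs sysfs rw
38 25 0:33 / /run rw,nosuid - tmpfs tmpfs rw,mode=755
39 25 8:1 / / rw,relatime - ext4 /dev/sda1 rw
EOF

# Print "<mount point> <fs type>": field 5 is the mount point, and the
# type follows the "-" separator (optional fields may precede it, hence
# the scan for "-").
awk '{ for (i = 7; i <= NF; i++) if ($i == "-") { print $5, $(i + 1); break } }' \
  /tmp/mountinfo.sample
```

In this sample, /proc, /sys, and the tmpfs at /run would be skipped, while the ext4 root filesystem would be scanned.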

File types excluded

These file types are never indexed during directory scans:

  • Character devices
  • Block devices
  • Sockets
  • FIFOs (named pipes)
  • Irregular files

Regular files, directories, and symbolic links are always processed.

Archive Scan Targets

Syft automatically detects and unpacks common archive formats, then catalogs their contents. If an archive is a container image archive (from docker save or skopeo copy), Syft treats it as a container image.

Supported archive formats:

Standard archives:

  • .zip
  • .tar (uncompressed)
  • .rar (read-only extraction)

Compressed tar variants:

  • .tar.gz / .tgz
  • .tar.bz2 / .tbz2
  • .tar.br / .tbr (brotli)
  • .tar.lz4 / .tlz4
  • .tar.sz / .tsz (snappy)
  • .tar.xz / .txz
  • .tar.zst / .tzst (zstandard)

Standalone compression formats (extracted if containing tar):

  • .gz (gzip)
  • .bz2 (bzip2)
  • .br (brotli)
  • .lz4
  • .sz (snappy)
  • .xz
  • .zst / .zstd (zstandard)
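As a concrete (hypothetical) example, a source directory packaged as a .tar.gz can be handed to Syft directly. The scan itself is shown as a comment, since it requires Syft to be installed:

```shell
# A tiny project with one dependency manifest.
mkdir -p /tmp/demo-project
cat > /tmp/demo-project/package.json <<'EOF'
{ "name": "demo", "version": "1.0.0", "dependencies": { "left-pad": "1.3.0" } }
EOF

# Package it as a compressed tar archive.
tar -czf /tmp/demo-project.tar.gz -C /tmp demo-project

# syft /tmp/demo-project.tar.gz   # would unpack the archive and catalog package.json
```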

OCI Archives and Layout Scan Targets

Syft automatically detects OCI archive and directory structures (including OCI layouts and SIF files) and catalogs them accordingly.

OCI archives and layouts are particularly useful for CI/CD pipelines, as they allow you to catalog images, scan for vulnerabilities, or perform other checks without publishing to a registry. This provides a powerful pattern for build-time gating.

Create OCI scan targets without a registry

OCI archive from an image:

skopeo copy \
  docker://alpine@sha256:eafc1edb577d2e9b458664a15f23ea1c370214193226069eb22921169fc7e43f \
  oci-archive:alpine.tar

OCI layout directory from an image:

skopeo copy \
  docker://alpine@sha256:eafc1edb577d2e9b458664a15f23ea1c370214193226069eb22921169fc7e43f \
  oci:alpine

Container image archive from an image:

docker save -o alpine.tar alpine:latest

Container Runtime Configuration

Image Availability and Authentication

When using container runtime scan targets (Docker, Podman, or Containerd):

  • Missing images: If an image doesn’t exist locally in the container runtime, Syft attempts to pull it from the registry via the runtime
  • Private images: You must be logged in to the registry via the container runtime (e.g., docker login) or have credentials configured for direct registry access. See Authentication with Private Registries for more details.

Environment Variables

Syft respects the following environment variables for each container runtime:

Scan Target   Environment Variable    Description
Docker        DOCKER_HOST             Docker daemon socket/host address (supports ssh:// for remote connections)
              DOCKER_TLS_VERIFY       Enable TLS verification (auto-sets DOCKER_CERT_PATH if not set)
              DOCKER_CERT_PATH        Path to TLS certificates (defaults to ~/.docker if DOCKER_TLS_VERIFY is set)
              DOCKER_CONFIG           Override default Docker config directory
Podman        CONTAINER_HOST          Podman socket/host address (e.g., unix:///run/podman/podman.sock or ssh://user@host/path/to/socket)
              CONTAINER_SSHKEY        SSH identity file path for remote Podman connections
              CONTAINER_PASSPHRASE    Passphrase for the SSH key
Containerd    CONTAINERD_ADDRESS      Containerd socket address (overrides default /run/containerd/containerd.sock)
              CONTAINERD_NAMESPACE    Containerd namespace (defaults to default)

Podman Daemon Requirements

Unlike Docker Desktop, which typically auto-starts, Podman requires the service to be started explicitly.

Syft attempts to connect to Podman using the following methods in order:

  1. Unix Socket (primary)

    • Checks CONTAINER_HOST environment variable first
    • Falls back to Podman config files
    • Finally tries default socket locations ($XDG_RUNTIME_DIR/podman/podman.sock and /run/podman/podman.sock)
  2. SSH (fallback)

    • Configured via CONTAINER_HOST, CONTAINER_SSHKEY, and CONTAINER_PASSPHRASE environment variables
    • Used for remote Podman instances

Direct Registry Access

The registry scan target bypasses container runtimes entirely and pulls images directly from the registry.

Credentials are resolved in the following order:

  • Syft first attempts to use default Docker credentials from ~/.docker/config.json if they exist
  • If default credentials are not available, you can provide credentials via environment variables. See Authentication with Private Registries for more details.

Troubleshooting

Image not found in local daemon

If Syft reports an image doesn’t exist but you know it’s available:

  • Check which daemon has the image: Run docker images, podman images, or nerdctl images to see where the image exists
  • Specify the scan target type explicitly: Use --from docker, --from podman, or --from containerd to target the correct daemon
  • Pull from registry: Use --from registry to bypass local daemons and pull directly

Authentication failures with private registries

If you get authentication errors when scanning private images:

  • For daemon scan targets: Ensure you’re logged in via the daemon (e.g., docker login registry.example.com)
  • For registry scan target: Configure credentials in ~/.docker/config.json or use environment variables (see Private Registries)
  • Verify credentials: Check that your credentials haven’t expired and have appropriate permissions

Podman connection issues

If Syft can’t connect to Podman:

  • Start the service: Run podman system service to start the Podman socket
  • Check socket location: Verify the socket exists at $XDG_RUNTIME_DIR/podman/podman.sock or /run/podman/podman.sock
  • Use environment variable: Set CONTAINER_HOST to point to your Podman socket location

Slow directory scans

If scanning a directory takes too long:

  • Exclude unnecessary paths: Use file selection options to skip build artifacts, caches, or virtual environments (see File Selection)
  • Avoid system directories: Scanning / includes all mounted filesystems; consider scanning specific application directories instead
  • Check mount points: Ensure you’re not accidentally scanning network mounts or remote filesystems

Next steps

Additional resources:

3.1.3 - Output Formats

Choose from multiple SBOM output formats including SPDX, CycloneDX, and Syft’s native JSON format.

Syft supports multiple output formats to fit different workflows and requirements by using the -o (or --output) flag:

syft <image> -o <format>

Available formats

-o ARG        Description
table         A columnar summary (default)
json          Native output for Syft; use this to get as much information out of Syft as possible! (see the JSON schema)
purls         A line-separated list of Package URLs (PURLs) for all discovered packages
github-json   A JSON report conforming to GitHub’s dependency snapshot format
template      Lets you specify a custom output format via Go templates (see Templates for more detail)
text          A row-oriented, human-and-machine-friendly output
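The template format reads a Go template from a file passed with -t (or --template); the fields available are the same as in the native JSON output. A minimal CSV-style template might look like this (csv.tmpl is a hypothetical filename):

```
"Package","Version Installed","Found by"
{{- range .artifacts}}
"{{.name}}","{{.version}}","{{.foundBy}}"
{{- end}}
```

It would be invoked as, e.g., syft alpine:latest -o template -t csv.tmpl.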

CycloneDX

CycloneDX is an OWASP-maintained industry standard SBOM format.

-o ARG            Description
cyclonedx-json    A JSON report conforming to the CycloneDX specification
cyclonedx-xml     An XML report conforming to the CycloneDX specification

SPDX

SPDX (Software Package Data Exchange) is an ISO/IEC 5962:2021 industry standard SBOM format.

-o ARG            Description
spdx-json         A JSON report conforming to the SPDX JSON Schema
spdx-tag-value    A tag-value formatted report conforming to the SPDX specification

Format versions

Some output formats support multiple schema versions. Specify a version by appending @<version> to the format name:

syft <source> -o <format>@<version>

Examples:

# Use CycloneDX JSON version 1.4
syft <source> -o cyclonedx-json@1.4

# Use SPDX JSON version 2.2
syft <source> -o spdx-json@2.2

# Default to latest version if not specified
syft <source> -o cyclonedx-json

Formats with version support:

  • cyclonedx-json: 1.2, 1.3, 1.4, 1.5, 1.6
  • cyclonedx-xml: 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6
  • spdx-json: 2.2, 2.3
  • spdx-tag-value: 2.1, 2.2, 2.3

When no version is specified, Syft uses the latest supported version of the format.

Format examples

table:

NAME     VERSION  TYPE
busybox  1.37.0   binary

json:
{
  "artifacts": [
    {
      "id": "fe44cee3fe279dfa",
      "name": "busybox",
      "version": "1.37.0",
      "type": "binary",
      "foundBy": "binary-classifier-cataloger",
      "locations": [
        {
          "path": "/bin/[",
          "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05",
          "accessPath": "/bin/busybox",
          "annotations": {
            "evidence": "primary"
          }
        }
      ],
      "licenses": [],
      "language": "",
      "cpes": [
        {
          "cpe": "cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*",
          "source": "nvd-cpe-dictionary"
        }
      ],
      "purl": "pkg:generic/busybox@1.37.0",
      "metadataType": "binary-signature",
      "metadata": {
        "matches": [
          {
            "classifier": "busybox-binary",
            "location": {
              "path": "/bin/[",
              "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05",
              "accessPath": "/bin/busybox",
              "annotations": {
                "evidence": "primary"
              }
            }
          }
        ]
      }
    }
  ],
  "artifactRelationships": [
    {
      "parent": "396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
      "child": "fe44cee3fe279dfa",
      "type": "contains"
    },
    {
      "parent": "fe44cee3fe279dfa",
      "child": "3a6b3df220691408",
      "type": "evident-by",
      "metadata": {
        "kind": "primary"
      }
    }
  ],
  "files": [
    {
      "id": "3a6b3df220691408",
      "location": {
        "path": "/bin/[",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "metadata": {
        "mode": 755,
        "type": "RegularFile",
        "userID": 0,
        "groupID": 0,
        "mimeType": "application/x-sharedlib",
        "size": 1119808
      },
      "digests": [
        {
          "algorithm": "sha1",
          "value": "5231d5d79cb52f3581f9c137396e7d9df7aa6d6b"
        },
        {
          "algorithm": "sha256",
          "value": "f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5"
        }
      ],
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": true,
        "importedLibraries": ["libm.so.6", "libresolv.so.2", "libc.so.6"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": false,
          "nx": true,
          "relRO": "partial",
          "pie": true,
          "dso": true,
          "safeStack": false
        }
      }
    },
    {
      "id": "eab1ede6d517d844",
      "location": {
        "path": "/bin/getconf",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": true,
        "importedLibraries": ["libc.so.6"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": false,
          "nx": true,
          "relRO": "full",
          "pie": true,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "9c61e609f3b76f4a",
      "location": {
        "path": "/lib/ld-linux-aarch64.so.1",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": true,
        "importedLibraries": [],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": true,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "456b7910a9499337",
      "location": {
        "path": "/lib/libc.so.6",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": true,
        "importedLibraries": ["ld-linux-aarch64.so.1"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": true,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "9376910c472a1ddd",
      "location": {
        "path": "/lib/libm.so.6",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": false,
        "importedLibraries": ["libc.so.6", "ld-linux-aarch64.so.1"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": true,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "383904be0603bd22",
      "location": {
        "path": "/lib/libnss_compat.so.2",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": false,
        "importedLibraries": ["libc.so.6", "ld-linux-aarch64.so.1"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": true,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "324828ff45e1fc0b",
      "location": {
        "path": "/lib/libnss_dns.so.2",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": false,
        "importedLibraries": ["libc.so.6"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": false,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "9a791682497737bd",
      "location": {
        "path": "/lib/libnss_files.so.2",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": false,
        "importedLibraries": ["libc.so.6"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": false,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "c6f668db34996e30",
      "location": {
        "path": "/lib/libnss_hesiod.so.2",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": false,
        "importedLibraries": ["libresolv.so.2", "libc.so.6", "ld-linux-aarch64.so.1"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": true,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "d5aa00430d994aa8",
      "location": {
        "path": "/lib/libpthread.so.0",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": false,
        "importedLibraries": ["libc.so.6"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": false,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    },
    {
      "id": "5804ce9e713c7582",
      "location": {
        "path": "/lib/libresolv.so.2",
        "layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "executable": {
        "format": "elf",
        "hasExports": true,
        "hasEntrypoint": false,
        "importedLibraries": ["libc.so.6", "ld-linux-aarch64.so.1"],
        "elfSecurityFeatures": {
          "symbolTableStripped": true,
          "stackCanary": true,
          "nx": true,
          "relRO": "full",
          "pie": false,
          "dso": true,
          "safeStack": false
        }
      },
      "unknowns": ["unknowns-labeler: no package identified in executable file"]
    }
  ],
  "source": {
    "id": "396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
    "name": "busybox",
    "version": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
    "type": "image",
    "metadata": {
      "userInput": "busybox:latest",
      "imageID": "sha256:eade5be814e817df411f138aa7711c3f81595185eb54b3257fd19f6c4966b285",
      "manifestDigest": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "tags": [],
      "imageSize": 4170774,
      "layers": [
        {
          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
          "digest": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05",
          "size": 4170774
        }
      ],
      "manifest": "ewoJInNjaGVtYVZlcnNpb24iOiAyLAoJIm1lZGlhVHlwZSI6ICJhcHBsaWNhdGlvbi92bmQub2NpLmltYWdlLm1hbmlmZXN0LnYxK2pzb24iLAoJImNvbmZpZyI6IHsKCQkibWVkaWFUeXBlIjogImFwcGxpY2F0aW9uL3ZuZC5vY2kuaW1hZ2UuY29uZmlnLnYxK2pzb24iLAoJCSJkaWdlc3QiOiAic2hhMjU2OmVhZGU1YmU4MTRlODE3ZGY0MTFmMTM4YWE3NzExYzNmODE1OTUxODVlYjU0YjMyNTdmZDE5ZjZjNDk2NmIyODUiLAoJCSJzaXplIjogNDc3Cgl9LAoJImxheWVycyI6IFsKCQl7CgkJCSJtZWRpYVR5cGUiOiAiYXBwbGljYXRpb24vdm5kLm9jaS5pbWFnZS5sYXllci52MS50YXIrZ3ppcCIsCgkJCSJkaWdlc3QiOiAic2hhMjU2OjViYzUxYjg3ZDRlY2NlMDYyOWM0ODg2NzRlMjU4MGEzZDU4ZDI5MzdkNzBjODFkNGY2ZDQ4NWQ0M2UwNmViMDYiLAoJCQkic2l6ZSI6IDE5MDI5OTEKCQl9CgldLAoJImFubm90YXRpb25zIjogewoJCSJvcmcub3BlbmNvbnRhaW5lcnMuaW1hZ2UudXJsIjogImh0dHBzOi8vZ2l0aHViLmNvbS9kb2NrZXItbGlicmFyeS9idXN5Ym94IiwKCQkib3JnLm9wZW5jb250YWluZXJzLmltYWdlLnZlcnNpb24iOiAiMS4zNy4wLWdsaWJjIgoJfQp9Cg==",
      "config": "ewoJImNvbmZpZyI6IHsKCQkiQ21kIjogWwoJCQkic2giCgkJXSwKCQkiRW52IjogWwoJCQkiUEFUSD0vdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluOi9zYmluOi9iaW4iCgkJXQoJfSwKCSJjcmVhdGVkIjogIjIwMjQtMDktMjZUMjE6MzE6NDJaIiwKCSJoaXN0b3J5IjogWwoJCXsKCQkJImNyZWF0ZWQiOiAiMjAyNC0wOS0yNlQyMTozMTo0MloiLAoJCQkiY3JlYXRlZF9ieSI6ICJCdXN5Qm94IDEuMzcuMCAoZ2xpYmMpLCBEZWJpYW4gMTMiCgkJfQoJXSwKCSJyb290ZnMiOiB7CgkJInR5cGUiOiAibGF5ZXJzIiwKCQkiZGlmZl9pZHMiOiBbCgkJCSJzaGEyNTY6MWEzODI3NDBjNTY0MmU0NjA3NDEyYTM0MWRmMzcxNmMyMjI4N2ZmYTZhZGY5MmVhZmY1NGUwNzlhMTkwMmYwNSIKCQldCgl9LAoJImFyY2hpdGVjdHVyZSI6ICJhcm02NCIsCgkib3MiOiAibGludXgiLAoJInZhcmlhbnQiOiAidjgiCn0K",
      "repoDigests": [
        "index.docker.io/library/busybox@sha256:e3652a00a2fabd16ce889f0aa32c38eec347b997e73bd09e69c962ec7f8732ee"
      ],
      "architecture": "arm64",
      "os": "linux"
    }
  },
  "distro": {
    "prettyName": "BusyBox v1.37.0",
    "name": "busybox",
    "id": "busybox",
    "idLike": ["busybox"],
    "version": "1.37.0",
    "versionID": "1.37.0"
  },
  "descriptor": {
    "name": "syft",
    "version": "1.38.0",
    "configuration": {
      "catalogers": {
        "requested": {
          "default": ["image", "file"]
        },
        "used": [
          "alpm-db-cataloger",
          "apk-db-cataloger",
          "binary-classifier-cataloger",
          "bitnami-cataloger",
          "cargo-auditable-binary-cataloger",
          "conan-info-cataloger",
          "dotnet-deps-binary-cataloger",
          "dotnet-packages-lock-cataloger",
          "dpkg-db-cataloger",
          "elf-binary-package-cataloger",
          "file-content-cataloger",
          "file-digest-cataloger",
          "file-executable-cataloger",
          "file-metadata-cataloger",
          "gguf-cataloger",
          "go-module-binary-cataloger",
          "graalvm-native-image-cataloger",
          "homebrew-cataloger",
          "java-archive-cataloger",
          "java-jvm-cataloger",
          "javascript-package-cataloger",
          "linux-kernel-cataloger",
          "lua-rock-cataloger",
          "nix-cataloger",
          "pe-binary-package-cataloger",
          "php-composer-installed-cataloger",
          "php-interpreter-cataloger",
          "php-pear-serialized-cataloger",
          "portage-cataloger",
          "python-installed-package-cataloger",
          "r-package-cataloger",
          "rpm-db-cataloger",
          "ruby-installed-gemspec-cataloger",
          "snap-cataloger",
          "wordpress-plugins-cataloger"
        ]
      },
      "data-generation": {
        "generate-cpes": true
      },
      "files": {
        "content": {
          "globs": null,
          "skip-files-above-size": 0
        },
        "hashers": ["sha-1", "sha-256"],
        "selection": "owned-by-package"
      },
      "licenses": {
        "coverage": 75,
        "include-content": "none"
      },
      "packages": {
        "binary": [
          "python-binary",
          "python-binary-lib",
          "pypy-binary-lib",
          "go-binary",
          "julia-binary",
          "helm",
          "redis-binary",
          "nodejs-binary",
          "go-binary-hint",
          "busybox-binary",
          "util-linux-binary",
          "haproxy-binary",
          "perl-binary",
          "php-composer-binary",
          "httpd-binary",
          "memcached-binary",
          "traefik-binary",
          "arangodb-binary",
          "postgresql-binary",
          "mysql-binary",
          "mysql-binary",
          "mysql-binary",
          "xtrabackup-binary",
          "mariadb-binary",
          "rust-standard-library-linux",
          "rust-standard-library-macos",
          "ruby-binary",
          "erlang-binary",
          "erlang-alpine-binary",
          "erlang-library",
          "swipl-binary",
          "dart-binary",
          "haskell-ghc-binary",
          "haskell-cabal-binary",
          "haskell-stack-binary",
          "consul-binary",
          "hashicorp-vault-binary",
          "nginx-binary",
          "bash-binary",
          "openssl-binary",
          "gcc-binary",
          "fluent-bit-binary",
          "wordpress-cli-binary",
          "curl-binary",
          "lighttpd-binary",
          "proftpd-binary",
          "zstd-binary",
          "xz-binary",
          "gzip-binary",
          "sqlcipher-binary",
          "jq-binary",
          "chrome-binary",
          "ffmpeg-binary",
          "ffmpeg-library",
          "ffmpeg-library",
          "elixir-binary",
          "elixir-library",
          "java-binary",
          "java-jdb-binary"
        ],
        "dotnet": {
          "dep-packages-must-claim-dll": true,
          "dep-packages-must-have-dll": false,
          "propagate-dll-claims-to-parents": true,
          "relax-dll-claims-when-bundling-detected": true
        },
        "golang": {
          "local-mod-cache-dir": "/root/go/pkg/mod",
          "local-vendor-dir": "",
          "main-module-version": {
            "from-build-settings": true,
            "from-contents": false,
            "from-ld-flags": true
          },
          "proxies": ["https://proxy.golang.org", "direct"],
          "search-local-mod-cache-licenses": false,
          "search-local-vendor-licenses": false,
          "search-remote-licenses": false
        },
        "java-archive": {
          "include-indexed-archives": true,
          "include-unindexed-archives": false,
          "maven-base-url": "https://repo1.maven.org/maven2",
          "maven-localrepository-dir": "/root/.m2/repository",
          "max-parent-recursive-depth": 0,
          "resolve-transitive-dependencies": false,
          "use-maven-localrepository": false,
          "use-network": false
        },
        "javascript": {
          "include-dev-dependencies": false,
          "npm-base-url": "https://registry.npmjs.org",
          "search-remote-licenses": false
        },
        "linux-kernel": {
          "catalog-modules": true
        },
        "nix": {
          "capture-owned-files": false
        },
        "python": {
          "guess-unpinned-requirements": false,
          "pypi-base-url": "https://pypi.org/pypi",
          "search-remote-licenses": false
        }
      },
      "relationships": {
        "exclude-binary-packages-with-file-ownership-overlap": true,
        "package-file-ownership": true,
        "package-file-ownership-overlap": true
      },
      "search": {
        "scope": "squashed"
      }
    }
  },
  "schema": {
    "version": "16.1.0",
    "url": "https://raw.githubusercontent.com/anchore/syft/main/schema/json/schema-16.1.0.json"
  }
}
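The Syft JSON document above is plain JSON, so it is easy to post-process with standard tooling. A minimal sketch using only the Python standard library (the embedded snippet is a hypothetical trimmed excerpt mirroring the fields shown above, not the full output) that pulls each artifact's purl together with the binary-classifier match paths that evidence it:

```python
import json

# Trimmed stand-in for a full `syft scan -o json` document, mirroring the
# artifact fields shown above (hypothetical excerpt, not the complete output).
sbom_text = """
{
  "artifacts": [
    {
      "name": "busybox",
      "version": "1.37.0",
      "purl": "pkg:generic/busybox@1.37.0",
      "metadata": {
        "matches": [
          {"classifier": "busybox-binary",
           "location": {"path": "/bin/[", "accessPath": "/bin/busybox"}}
        ]
      }
    }
  ]
}
"""

sbom = json.loads(sbom_text)
for artifact in sbom["artifacts"]:
    # Each binary-signature match records the file location that proved
    # the package exists.
    paths = [m["location"]["path"]
             for m in artifact.get("metadata", {}).get("matches", [])]
    print(artifact["purl"], "evidenced by", ", ".join(paths))
# → pkg:generic/busybox@1.37.0 evidenced by /bin/[
```

The same pattern applies to a real SBOM file: replace `sbom_text` with the contents of the JSON document written by `syft -o json`.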
pkg:generic/busybox@1.37.0
{
  "$schema": "http://cyclonedx.org/schema/bom-1.6.schema.json",
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "serialNumber": "urn:uuid:8831f243-6dcd-4bdd-a2b0-562480154c9b",
  "version": 1,
  "metadata": {
    "timestamp": "2025-11-21T20:47:28Z",
    "tools": {
      "components": [
        {
          "type": "application",
          "author": "anchore",
          "name": "syft",
          "version": "1.38.0"
        }
      ]
    },
    "component": {
      "bom-ref": "e98d5f0296649c51",
      "type": "container",
      "name": "busybox",
      "version": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b"
    }
  },
  "components": [
    {
      "bom-ref": "pkg:generic/busybox@1.37.0?package-id=fe44cee3fe279dfa",
      "type": "application",
      "name": "busybox",
      "version": "1.37.0",
      "cpe": "cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*",
      "purl": "pkg:generic/busybox@1.37.0",
      "properties": [
        {
          "name": "syft:package:foundBy",
          "value": "binary-classifier-cataloger"
        },
        {
          "name": "syft:package:type",
          "value": "binary"
        },
        {
          "name": "syft:package:metadataType",
          "value": "binary-signature"
        },
        {
          "name": "syft:location:0:layerID",
          "value": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
        },
        {
          "name": "syft:location:0:path",
          "value": "/bin/["
        }
      ]
    },
    {
      "bom-ref": "os:busybox@1.37.0",
      "type": "operating-system",
      "name": "busybox",
      "version": "1.37.0",
      "description": "BusyBox v1.37.0",
      "swid": {
        "tagId": "busybox",
        "name": "busybox",
        "version": "1.37.0"
      },
      "properties": [
        {
          "name": "syft:distro:extendedSupport",
          "value": "false"
        },
        {
          "name": "syft:distro:id",
          "value": "busybox"
        },
        {
          "name": "syft:distro:idLike:0",
          "value": "busybox"
        },
        {
          "name": "syft:distro:prettyName",
          "value": "BusyBox v1.37.0"
        },
        {
          "name": "syft:distro:versionID",
          "value": "1.37.0"
        }
      ]
    },
    {
      "bom-ref": "3a6b3df220691408",
      "type": "file",
      "name": "/bin/[",
      "hashes": [
        {
          "alg": "SHA-1",
          "content": "5231d5d79cb52f3581f9c137396e7d9df7aa6d6b"
        },
        {
          "alg": "SHA-256",
          "content": "f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5"
        }
      ]
    }
  ]
}
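CycloneDX has no free-form per-component metadata object, so syft flattens its extra detail (cataloger, package type, layer locations) into `properties` name/value pairs, as seen above. Post-processors usually fold those pairs back into a dictionary; a small sketch (the embedded component is a hypothetical trimmed excerpt of the one above):

```python
import json

# Trimmed CycloneDX component mirroring the busybox entry above
# (hypothetical excerpt).
component = json.loads("""
{
  "name": "busybox",
  "purl": "pkg:generic/busybox@1.37.0",
  "properties": [
    {"name": "syft:package:foundBy", "value": "binary-classifier-cataloger"},
    {"name": "syft:package:type", "value": "binary"},
    {"name": "syft:location:0:path", "value": "/bin/["}
  ]
}
""")

# Fold the name/value pairs back into a plain dict for easier lookups.
props = {p["name"]: p["value"] for p in component.get("properties", [])}
print(props["syft:package:foundBy"])
# → binary-classifier-cataloger
```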
<?xml version="1.0" encoding="UTF-8"?>
<bom xmlns="http://cyclonedx.org/schema/bom/1.6" serialNumber="urn:uuid:33ad49e5-992c-4f1e-be05-68f4095b764f" version="1">
  <metadata>
    <timestamp>2025-11-21T20:47:29Z</timestamp>
    <tools>
      <components>
        <component type="application">
          <author>anchore</author>
          <name>syft</name>
          <version>1.38.0</version>
        </component>
      </components>
    </tools>
    <component bom-ref="e98d5f0296649c51" type="container">
      <name>busybox</name>
      <version>sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b</version>
    </component>
  </metadata>
  <components>
    <component bom-ref="pkg:generic/busybox@1.37.0?package-id=fe44cee3fe279dfa" type="application">
      <name>busybox</name>
      <version>1.37.0</version>
      <cpe>cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*</cpe>
      <purl>pkg:generic/busybox@1.37.0</purl>
      <properties>
        <property name="syft:package:foundBy">binary-classifier-cataloger</property>
        <property name="syft:package:type">binary</property>
        <property name="syft:package:metadataType">binary-signature</property>
        <property name="syft:location:0:layerID">sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05</property>
        <property name="syft:location:0:path">/bin/[</property>
      </properties>
    </component>
    <component bom-ref="os:busybox@1.37.0" type="operating-system">
      <name>busybox</name>
      <version>1.37.0</version>
      <description>BusyBox v1.37.0</description>
      <swid tagId="busybox" name="busybox" version="1.37.0"></swid>
      <properties>
        <property name="syft:distro:extendedSupport">false</property>
        <property name="syft:distro:id">busybox</property>
        <property name="syft:distro:idLike:0">busybox</property>
        <property name="syft:distro:prettyName">BusyBox v1.37.0</property>
        <property name="syft:distro:versionID">1.37.0</property>
      </properties>
    </component>
    <component bom-ref="3a6b3df220691408" type="file">
      <name>/bin/[</name>
      <hashes>
        <hash alg="SHA-1">5231d5d79cb52f3581f9c137396e7d9df7aa6d6b</hash>
        <hash alg="SHA-256">f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5</hash>
      </hashes>
    </component>
  </components>
</bom>
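When consuming the XML form, note that CycloneDX declares a default XML namespace on the `<bom>` element, so naive unqualified tag lookups find nothing. A minimal sketch with Python's `xml.etree.ElementTree` (the embedded document is a hypothetical trimmed excerpt of the BOM above):

```python
import xml.etree.ElementTree as ET

# Trimmed CycloneDX 1.6 XML mirroring the document above (hypothetical excerpt).
bom_xml = """<?xml version="1.0" encoding="UTF-8"?>
<bom xmlns="http://cyclonedx.org/schema/bom/1.6" version="1">
  <components>
    <component type="application">
      <name>busybox</name>
      <version>1.37.0</version>
      <purl>pkg:generic/busybox@1.37.0</purl>
    </component>
  </components>
</bom>
"""

# Every tag lookup must be qualified against the default namespace.
ns = {"cdx": "http://cyclonedx.org/schema/bom/1.6"}
root = ET.fromstring(bom_xml)
components = [
    (c.findtext("cdx:name", namespaces=ns),
     c.findtext("cdx:purl", namespaces=ns))
    for c in root.findall("cdx:components/cdx:component", ns)
]
print(components)
# → [('busybox', 'pkg:generic/busybox@1.37.0')]
```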
{
  "spdxVersion": "SPDX-2.3",
  "dataLicense": "CC0-1.0",
  "SPDXID": "SPDXRef-DOCUMENT",
  "name": "busybox",
  "documentNamespace": "https://anchore.com/syft/image/busybox-9730898a-4b77-4396-b39c-e08a872ec19f",
  "creationInfo": {
    "licenseListVersion": "3.27",
    "creators": ["Organization: Anchore, Inc", "Tool: syft-1.38.0"],
    "created": "2025-11-21T20:47:30Z"
  },
  "packages": [
    {
      "name": "busybox",
      "SPDXID": "SPDXRef-Package-binary-busybox-fe44cee3fe279dfa",
      "versionInfo": "1.37.0",
      "supplier": "NOASSERTION",
      "downloadLocation": "NOASSERTION",
      "filesAnalyzed": false,
      "sourceInfo": "acquired package info from the following paths: /bin/[",
      "licenseConcluded": "NOASSERTION",
      "licenseDeclared": "NOASSERTION",
      "copyrightText": "NOASSERTION",
      "externalRefs": [
        {
          "referenceCategory": "SECURITY",
          "referenceType": "cpe23Type",
          "referenceLocator": "cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*"
        },
        {
          "referenceCategory": "PACKAGE-MANAGER",
          "referenceType": "purl",
          "referenceLocator": "pkg:generic/busybox@1.37.0"
        }
      ]
    },
    {
      "name": "busybox",
      "SPDXID": "SPDXRef-DocumentRoot-Image-busybox",
      "versionInfo": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
      "supplier": "NOASSERTION",
      "downloadLocation": "NOASSERTION",
      "filesAnalyzed": false,
      "checksums": [
        {
          "algorithm": "SHA256",
          "checksumValue": "396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseDeclared": "NOASSERTION",
      "copyrightText": "NOASSERTION",
      "externalRefs": [
        {
          "referenceCategory": "PACKAGE-MANAGER",
          "referenceType": "purl",
          "referenceLocator": "pkg:oci/busybox@sha256%3A396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b?arch=arm64&tag=latest"
        }
      ],
      "primaryPackagePurpose": "CONTAINER"
    }
  ],
  "files": [
    {
      "fileName": "bin/[",
      "SPDXID": "SPDXRef-File-bin---3a6b3df220691408",
      "fileTypes": ["APPLICATION", "BINARY"],
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "5231d5d79cb52f3581f9c137396e7d9df7aa6d6b"
        },
        {
          "algorithm": "SHA256",
          "checksumValue": "f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "bin/getconf",
      "SPDXID": "SPDXRef-File-bin-getconf-eab1ede6d517d844",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/ld-linux-aarch64.so.1",
      "SPDXID": "SPDXRef-File-lib-ld-linux-aarch64.so.1-9c61e609f3b76f4a",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libc.so.6",
      "SPDXID": "SPDXRef-File-lib-libc.so.6-456b7910a9499337",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libm.so.6",
      "SPDXID": "SPDXRef-File-lib-libm.so.6-9376910c472a1ddd",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libnss_compat.so.2",
      "SPDXID": "SPDXRef-File-lib-libnss-compat.so.2-383904be0603bd22",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libnss_dns.so.2",
      "SPDXID": "SPDXRef-File-lib-libnss-dns.so.2-324828ff45e1fc0b",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libnss_files.so.2",
      "SPDXID": "SPDXRef-File-lib-libnss-files.so.2-9a791682497737bd",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libnss_hesiod.so.2",
      "SPDXID": "SPDXRef-File-lib-libnss-hesiod.so.2-c6f668db34996e30",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libpthread.so.0",
      "SPDXID": "SPDXRef-File-lib-libpthread.so.0-d5aa00430d994aa8",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    },
    {
      "fileName": "lib/libresolv.so.2",
      "SPDXID": "SPDXRef-File-lib-libresolv.so.2-5804ce9e713c7582",
      "checksums": [
        {
          "algorithm": "SHA1",
          "checksumValue": "0000000000000000000000000000000000000000"
        }
      ],
      "licenseConcluded": "NOASSERTION",
      "licenseInfoInFiles": ["NOASSERTION"],
      "copyrightText": "NOASSERTION",
      "comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
    }
  ],
  "relationships": [
    {
      "spdxElementId": "SPDXRef-Package-binary-busybox-fe44cee3fe279dfa",
      "relatedSpdxElement": "SPDXRef-File-bin---3a6b3df220691408",
      "relationshipType": "OTHER",
      "comment": "evident-by: indicates the package's existence is evident by the given file"
    },
    {
      "spdxElementId": "SPDXRef-DocumentRoot-Image-busybox",
      "relatedSpdxElement": "SPDXRef-Package-binary-busybox-fe44cee3fe279dfa",
      "relationshipType": "CONTAINS"
    },
    {
      "spdxElementId": "SPDXRef-DOCUMENT",
      "relatedSpdxElement": "SPDXRef-DocumentRoot-Image-busybox",
      "relationshipType": "DESCRIBES"
    }
  ]
}
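In the SPDX JSON form, the structure of the image is carried entirely by the `relationships` array: the document `DESCRIBES` the image root, the root `CONTAINS` packages, and syft encodes its evident-by links as `OTHER` relationships with an explanatory comment. A sketch of walking those edges (the embedded snippet is a hypothetical trimmed excerpt of the relationships above):

```python
import json

# Trimmed SPDX 2.3 relationships mirroring the document above
# (hypothetical excerpt).
spdx_text = """
{
  "relationships": [
    {"spdxElementId": "SPDXRef-DOCUMENT",
     "relatedSpdxElement": "SPDXRef-DocumentRoot-Image-busybox",
     "relationshipType": "DESCRIBES"},
    {"spdxElementId": "SPDXRef-DocumentRoot-Image-busybox",
     "relatedSpdxElement": "SPDXRef-Package-binary-busybox-fe44cee3fe279dfa",
     "relationshipType": "CONTAINS"}
  ]
}
"""

rels = json.loads(spdx_text)["relationships"]

# Start from the document and follow DESCRIBES, then CONTAINS edges.
root = next(r["relatedSpdxElement"] for r in rels
            if r["spdxElementId"] == "SPDXRef-DOCUMENT"
            and r["relationshipType"] == "DESCRIBES")
packages = [r["relatedSpdxElement"] for r in rels
            if r["spdxElementId"] == root
            and r["relationshipType"] == "CONTAINS"]
print(root, "contains", packages)
```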
SPDXVersion: SPDX-2.3
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: busybox
DocumentNamespace: https://anchore.com/syft/image/busybox-04c37b1f-d42c-4c7b-847b-747d25fb694c
LicenseListVersion: 3.27
Creator: Organization: Anchore, Inc
Creator: Tool: syft-1.38.0
Created: 2025-11-21T20:47:30Z

##### Unpackaged files

FileName: bin/[
SPDXID: SPDXRef-File-bin---3a6b3df220691408
FileType: APPLICATION
FileType: BINARY
FileChecksum: SHA1: 5231d5d79cb52f3581f9c137396e7d9df7aa6d6b
FileChecksum: SHA256: f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: bin/getconf
SPDXID: SPDXRef-File-bin-getconf-eab1ede6d517d844
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/ld-linux-aarch64.so.1
SPDXID: SPDXRef-File-lib-ld-linux-aarch64.so.1-9c61e609f3b76f4a
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libc.so.6
SPDXID: SPDXRef-File-lib-libc.so.6-456b7910a9499337
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libm.so.6
SPDXID: SPDXRef-File-lib-libm.so.6-9376910c472a1ddd
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libnss_compat.so.2
SPDXID: SPDXRef-File-lib-libnss-compat.so.2-383904be0603bd22
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libnss_dns.so.2
SPDXID: SPDXRef-File-lib-libnss-dns.so.2-324828ff45e1fc0b
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libnss_files.so.2
SPDXID: SPDXRef-File-lib-libnss-files.so.2-9a791682497737bd
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libnss_hesiod.so.2
SPDXID: SPDXRef-File-lib-libnss-hesiod.so.2-c6f668db34996e30
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libpthread.so.0
SPDXID: SPDXRef-File-lib-libpthread.so.0-d5aa00430d994aa8
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

FileName: lib/libresolv.so.2
SPDXID: SPDXRef-File-lib-libresolv.so.2-5804ce9e713c7582
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05

##### Package: busybox

PackageName: busybox
SPDXID: SPDXRef-DocumentRoot-Image-busybox
PackageVersion: sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b
PackageSupplier: NOASSERTION
PackageDownloadLocation: NOASSERTION
PrimaryPackagePurpose: CONTAINER
FilesAnalyzed: false
PackageChecksum: SHA256: 396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b
PackageLicenseConcluded: NOASSERTION
PackageLicenseDeclared: NOASSERTION
PackageCopyrightText: NOASSERTION
ExternalRef: PACKAGE-MANAGER purl pkg:oci/busybox@sha256%3A396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b?arch=arm64&tag=latest

##### Package: busybox

PackageName: busybox
SPDXID: SPDXRef-Package-binary-busybox-fe44cee3fe279dfa
PackageVersion: 1.37.0
PackageSupplier: NOASSERTION
PackageDownloadLocation: NOASSERTION
FilesAnalyzed: false
PackageSourceInfo: acquired package info from the following paths: /bin/[
PackageLicenseConcluded: NOASSERTION
PackageLicenseDeclared: NOASSERTION
PackageCopyrightText: NOASSERTION
ExternalRef: SECURITY cpe23Type cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*
ExternalRef: PACKAGE-MANAGER purl pkg:generic/busybox@1.37.0

##### Relationships

Relationship: SPDXRef-Package-binary-busybox-fe44cee3fe279dfa OTHER SPDXRef-File-bin---3a6b3df220691408
RelationshipComment: evident-by: indicates the package's existence is evident by the given file
Relationship: SPDXRef-DocumentRoot-Image-busybox CONTAINS SPDXRef-Package-binary-busybox-fe44cee3fe279dfa
Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-DocumentRoot-Image-busybox
{
  "version": 0,
  "job": {},
  "detector": {
    "name": "syft",
    "url": "https://github.com/anchore/syft",
    "version": "1.38.0"
  },
  "metadata": {
    "syft:distro": "pkg:generic/busybox@1.37.0?like=busybox"
  },
  "manifests": {
    "busybox:latest:/bin/busybox": {
      "name": "busybox:latest:/bin/busybox",
      "file": {
        "source_location": "busybox:latest:/bin/busybox"
      },
      "metadata": {
        "syft:filesystem": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
      },
      "resolved": {
        "pkg:generic/busybox@1.37.0": {
          "package_url": "pkg:generic/busybox@1.37.0",
          "relationship": "direct",
          "scope": "runtime"
        }
      }
    }
  },
  "scanned": "2025-11-21T20:47:31Z"
}

Writing output to files

Direct Syft output to a file instead of stdout by appending =<file> to the format option:

# Write JSON to a file
syft <source> -o json=sbom.json

# Write to stdout (default behavior)
syft <source> -o json

Multiple outputs

Generate multiple SBOM formats in a single run by specifying multiple -o flags:

syft <source> \
  -o json=sbom.json \
  -o spdx-json=sbom.spdx.json

You can both display to the terminal and write to a file:

# report the table to stdout and also write JSON to a file
syft <source> \
  -o table \
  -o json=sbom.json

FAQ

Which format should I use?

  • For human review: Use table (default) for quick package lists
  • For automation and queries: Use json to access all Syft data including file details, relationships, and metadata
  • For compliance and sharing: Use spdx-json or cyclonedx-json - both are widely supported industry standards
  • For custom formats: Use template to create your own output format

Can I convert between formats?

Yes! See the Format Conversion guide to convert existing SBOMs between formats without re-scanning.

Do all formats contain the same information?

No. Syft’s native json format contains the most complete information. Standard formats (SPDX, CycloneDX) contain package data but may not include all file details or Syft-specific metadata. Some data may be omitted or transformed to fit the target schema.

Which version should I use for SPDX or CycloneDX?

Use the latest version (the default) unless you need compatibility with specific tools that require older versions; you can pin a version by appending it to the format name, e.g. -o spdx-json@2.2. Check your downstream tools’ documentation for version requirements.

Next steps

Additional resources:

3.1.4 - Working with JSON

Learn how to work with Syft’s native JSON format including querying with jq, extracting metadata, and understanding the SBOM structure.

Syft’s native JSON format provides the most comprehensive view of discovered software components, capturing all package metadata, file details, relationships, and source information.

Since Syft can convert its native JSON into the standard SBOM formats, capturing your SBOM as Syft JSON lets you generate any other format later as compliance requirements arise.

Data Shapes

A Syft JSON output contains these main sections:

{
  "artifacts": [], // Package nodes discovered
  "artifactRelationships": [], // Edges between packages and files
  "files": [], // File nodes discovered
  "source": {}, // What was scanned (the image, directory, etc.)
  "distro": {}, // Linux distribution discovered
  "descriptor": {}, // Syft version and configuration that captured this SBOM
  "schema": {} // Schema version
}
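These sections can be treated as a small graph: the artifacts and files arrays are the nodes, and artifactRelationships holds the edges, while the remaining sections describe the scan itself. A minimal Python sketch over a hypothetical, heavily abbreviated document (all IDs and values are invented for illustration):

```python
import json

# Hypothetical, heavily abbreviated Syft document; real output has far more
# fields per node, and IDs are content-addressed hashes.
doc = json.loads("""
{
  "artifacts": [{"id": "a1", "name": "openssl", "version": "1.1.1k", "type": "apk"}],
  "artifactRelationships": [{"parent": "src1", "child": "a1", "type": "contains"}],
  "files": [{"id": "f1", "location": {"path": "/usr/bin/example"}}],
  "source": {"id": "src1", "type": "image", "name": "alpine:3.9.2"},
  "distro": {"name": "alpine", "version": "3.9.2"},
  "descriptor": {"name": "syft", "version": "1.0.0"},
  "schema": {"version": "x.y.z"}
}
""")

# The three array sections form the graph; the rest is metadata about the scan
node_counts = {key: len(val) for key, val in doc.items() if isinstance(val, list)}
print(node_counts)
```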

Package (artifacts)

A software package discovered by Syft (library, application, OS package, etc.).

{
  "id": "74d9294c42941b37", // Unique identifier for this package that is content addressable
  "name": "openssl",
  "version": "1.1.1k",
  "type": "apk", // Package ecosystem (apk, deb, npm, etc.)
  "foundBy": "apk-cataloger",
  "locations": [
    // Paths used to populate information on this package object
    {
      "path": "/lib/apk/db/installed", // Always the real-path
      "layerID": "sha256:...",
      "accessPath": "/lib/apk/db/installed", // How Syft accessed the file (may be a symlink)
      "annotations": {
        "evidence": "primary" // Qualifies the kind of evidence extracted from this location (primary, supporting)
      }
    }
  ],
  "licenses": [
    {
      "value": "Apache-2.0", // Raw value discovered
      "spdxExpression": "Apache-2.0", // Normalized SPDX expression of the discovered value
      "type": "declared", // "declared", "concluded", or "observed"
      "urls": ["https://..."],
      "locations": [] // Where license was found
    }
  ],
  "language": "c",
  "cpes": [
    {
      "cpe": "cpe:2.3:a:openssl:openssl:1.1.1k:*:*:*:*:*:*:*",
      "source": "nvd-dictionary" // Where the CPE was derived from (nvd-dictionary or syft-generated)
    }
  ],
  "purl": "pkg:apk/alpine/openssl@1.1.1k",
  "metadata": {} // Ecosystem-specific fields (varies by type)
}

File

A file found on disk or referenced in package manager metadata.

{
  "id": "def456",
  "location": {
    "path": "/usr/bin/example",
    "layerID": "sha256:..." // For container images
  },
  "metadata": {
    "mode": 493, // File permissions in octal
    "type": "RegularFile",
    "mimeType": "application/x-executable",
    "size": 12345 // Size in bytes
  },
  "digests": [
    {
      "algorithm": "sha256",
      "value": "abc123..."
    }
  ],
  "licenses": [
    {
      "value": "Apache-2.0", // Raw value discovered
      "spdxExpression": "Apache-2.0", // Normalized SPDX expression of the discovered value
      "type": "declared", // "declared", "concluded", or "observed"
      "evidence": {
        "confidence": 100,
        "offset": 1234, // Byte offset in file
        "extent": 567 // Length of match
      }
    }
  ],
  "executable": {
    "format": "elf", // "elf", "pe", or "macho"
    "hasExports": true,
    "hasEntrypoint": true,
    "importedLibraries": [
      // Shared library dependencies
      "libc.so.6",
      "libssl.so.1.1"
    ],
    "elfSecurityFeatures": {
      // ELF binaries only
      "symbolTableStripped": false,
      "stackCanary": true, // Stack protection
      "nx": true, // No-Execute bit
      "relRO": "full", // Relocation Read-Only
      "pie": true // Position Independent Executable
    }
  }
}

Relationship

Connects any two nodes (package, file, or source) with a typed relationship.

{
  "parent": "package-id", // Package, file, or source ID
  "child": "file-id",
  "type": "contains" // contains, dependency-of, etc.
}
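Because parent and child carry only IDs, consuming code typically joins the relationship edges back to the package and file node lists. A sketch of that join with hypothetical IDs:

```python
from collections import defaultdict

# Hypothetical nodes and edges; real Syft IDs are content-addressed hashes
packages = {"pkg-1": "busybox"}
files = {"file-1": "/bin/busybox", "file-2": "/bin/sh"}
relationships = [
    {"parent": "pkg-1", "child": "file-1", "type": "contains"},
    {"parent": "pkg-1", "child": "file-2", "type": "contains"},
]

# Index "contains" edges: package ID -> paths of the files it owns
owned_paths = defaultdict(list)
for rel in relationships:
    if rel["type"] == "contains" and rel["parent"] in packages:
        owned_paths[rel["parent"]].append(files[rel["child"]])
```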

Source

Information about what was scanned (container image, directory, file, etc.).

{
  "id": "sha256:...",
  "name": "alpine:3.9.2", // User input
  "version": "sha256:...",
  "type": "image", // image, directory, file
  "metadata": {
    "imageID": "sha256:...",
    "manifestDigest": "sha256:...",
    "mediaType": "application/vnd.docker...",
    "tags": ["alpine:3.9.2"],
    "repoDigests": []
  }
}

Distribution

Linux distribution details from /etc/os-release or similar sources.

{
  "name": "alpine",
  "version": "3.9.2",
  "idLike": ["alpine"] // Related distributions
}

Location

Describes where a package or file was found.

{
  "path": "/lib/apk/db/installed",
  "layerID": "sha256:...",
  "accessPath": "/var/lib/apk/installed",
  "annotations": {
    "evidence": "primary"
  }
}

The path field always contains the real path after resolving symlinks, while accessPath shows how Syft accessed the file (which may be through a symlink).

The evidence annotation indicates whether this location was used to discover the package (primary) or contains only auxiliary information (supporting).

Descriptor

Syft version and configuration used to generate this SBOM.

{
  "name": "syft",
  "version": "1.0.0",
  "configuration": {} // Syft configuration used
}

The Syft JSON schema is versioned and available in the Syft repository.

JQ Recipes

jq is a command-line tool for querying and manipulating JSON. The following examples demonstrate practical queries for working with Syft JSON output.

Find packages by name pattern

Uses regex pattern matching to find security-critical packages

.artifacts[] |
  select(.name | test("^(openssl|ssl|crypto)")) |  # Regex pattern match on package name
  {
    name,
    version,
    type  # Package type (apk, deb, rpm, etc.)
  }
syft alpine:3.9.2 -o json | \
  jq '.artifacts[] |
  select(.name | test("^(openssl|ssl|crypto)")) |
  {
    name,
    version,
    type
  }'
{
  "name": "ssl_client",
  "version": "1.29.3-r10",
  "type": "apk"
}

Location of all JARs

Shows Java packages with their primary installation paths

.artifacts[] |
  select(.type == "java-archive") |  # Filter for JAR packages
  {
    package: "\(.name)@\(.version)",
    path: (.locations[] | select(.annotations.evidence == "primary") | .path)  # Primary installation path
  }
syft openjdk:11.0.11-jre-slim -o json | \
  jq '.artifacts[] |
  select(.type == "java-archive") |
  {
    package: "\(.name)@\(.version)",
    path: (.locations[] | select(.annotations.evidence == "primary") | .path)
  }'
{
  "package": "jrt-fs@11.0.11",
  "path": "/usr/local/openjdk-11/lib/jrt-fs.jar"
}

All executable files

Lists all binary files with their format and entry point status

.files[] |
  select(.executable != null) |  # Filter for executable files
  {
    path: .location.path,
    format: .executable.format,  # ELF, Mach-O, PE, etc.
    importedLibraries: .executable.importedLibraries  # Shared library dependencies
  }
syft alpine:3.9.2 -o json | \
  jq '.files[] |
  select(.executable != null) |
  {
    path: .location.path,
    format: .executable.format,
    importedLibraries: .executable.importedLibraries
  }'
{
  "path": "/bin/busybox",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/lib/ld-musl-aarch64.so.1",
  "format": "elf",
  "importedLibraries": []
}
{
  "path": "/lib/libcrypto.so.1.1",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/lib/libssl.so.1.1",
  "format": "elf",
  "importedLibraries": [
    "libcrypto.so.1.1",
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/lib/libz.so.1.2.11",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/sbin/apk",
  "format": "elf",
  "importedLibraries": [
    "libssl.so.1.1",
    "libcrypto.so.1.1",
    "libz.so.1",
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/sbin/mkmntdirs",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/bin/getconf",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/bin/getent",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/bin/iconv",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/bin/scanelf",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/bin/ssl_client",
  "format": "elf",
  "importedLibraries": [
    "libtls-standalone.so.1",
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/lib/engines-1.1/afalg.so",
  "format": "elf",
  "importedLibraries": [
    "libcrypto.so.1.1",
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/lib/engines-1.1/capi.so",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/lib/engines-1.1/padlock.so",
  "format": "elf",
  "importedLibraries": [
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/lib/libtls-standalone.so.1.0.0",
  "format": "elf",
  "importedLibraries": [
    "libssl.so.1.1",
    "libcrypto.so.1.1",
    "libc.musl-aarch64.so.1"
  ]
}

Binaries not owned by packages

Uses set operations on relationships to identify untracked binaries that might indicate supply chain issues

. as $root |
  [.files[] | select(.executable != null) | .id] as $binaries |  # All binary IDs
  [.artifactRelationships[] | select(.type == "contains") | .child] as $owned |  # Package-owned files
  ($binaries - $owned) as $unowned |  # Set subtraction to find unowned binaries
  $root.files[] |
  select(.id as $id | $unowned | index($id)) |  # Filter to unowned binaries
  {
    path: .location.path,
    sha256: .digests[] | select(.algorithm == "sha256") | .value  # For integrity verification
  }
Capturing digests for files not owned by any package requires enabling full file metadata:

# .syft.yaml
file:
  metadata:
    selection: all

syft httpd:2.4.65 -o json | \
  jq '. as $root |
  [.files[] | select(.executable != null) | .id] as $binaries |
  [.artifactRelationships[] | select(.type == "contains") | .child] as $owned |
  ($binaries - $owned) as $unowned |
  $root.files[] |
  select(.id as $id | $unowned | index($id)) |
  {
    path: .location.path,
    sha256: .digests[] | select(.algorithm == "sha256") | .value
  }'
{
  "path": "/usr/local/apache2/bin/ab",
  "sha256": "1aa76de1f9eb534fe22d35a01ccbf7ede03e250f6f5d0a00553e687187565d3a"
}
{
  "path": "/usr/local/apache2/bin/checkgid",
  "sha256": "af3372d60eee3f8132d2bdd10fb8670db8a9965b2e056c267131586184ba11fb"
}
{
  "path": "/usr/local/apache2/bin/fcgistarter",
  "sha256": "eea2fa75671e7e647692cd0352405ef8a0b17167a05770b9552602a3c720bfdb"
}
{
  "path": "/usr/local/apache2/bin/htcacheclean",
  "sha256": "94e0fd5f0f5cf6231080177072846a4e99846f1f534224911e3bed17ce27ec38"
}
{
  "path": "/usr/local/apache2/bin/htdbm",
  "sha256": "e2a41d96c92cb16c98972a043ac380c06f19b5bddbafe0b2d2082ed174f8cfe3"
}
{
  "path": "/usr/local/apache2/bin/htdigest",
  "sha256": "0881598a4fd15455297c186fa301fdb1656ff26d0f77626d54a15421095e047f"
}
{
  "path": "/usr/local/apache2/bin/htpasswd",
  "sha256": "871ef0aa4ae0914747a471bf3917405548abf768dd6c94e3e0177c8e87334d9e"
}
{
  "path": "/usr/local/apache2/bin/httpd",
  "sha256": "2f3b52523394d1f4d4e2c5e1c5565161dcf8a0fc8e957e8d2d741acd3a111563"
}
{
  "path": "/usr/local/apache2/bin/httxt2dbm",
  "sha256": "1d5eb8e5d910760aa859c45e79b541362a84499f08fb79b8773bf9b8faf7bbdb"
}
{
  "path": "/usr/local/apache2/bin/logresolve",
  "sha256": "de8ed1fa5184170fca09980025f40c55d9fbf14b47c73b2575bc90ac1c9bf20e"
}
{
  "path": "/usr/local/apache2/bin/rotatelogs",
  "sha256": "f5ed895712cddcec7f542dee08a1ff74fd00ae3a9b0d92ede429e04ec2b9b8ae"
}
{
  "path": "/usr/local/apache2/bin/suexec",
  "sha256": "264efc529c09a60fed57fcde9e7a2c36f8bb414ae0e1afc9bb85595113ab4ec2"
}
{
  "path": "/usr/local/apache2/modules/mod_access_compat.so",
  "sha256": "0d6322b7d7d3d6c459751f8b271f733fa05a8b56eecd75f608100a5dbf464fc2"
}
{
  "path": "/usr/local/apache2/modules/mod_actions.so",
  "sha256": "6dc5dea7137ec0ae139c545b26efd860c6de7bcc19d2e31db213399c86bf2ead"
}
{
  "path": "/usr/local/apache2/modules/mod_alias.so",
  "sha256": "bb422c4486600ec349ac9b89acaa3793265d69498c30370e678a362900daea04"
}
{
  "path": "/usr/local/apache2/modules/mod_allowmethods.so",
  "sha256": "99a9db80c8f18fe3defb315731af3bceef321a98bd52f518f068ca2632596cee"
}
{
  "path": "/usr/local/apache2/modules/mod_asis.so",
  "sha256": "039014ad5ad3f357e811b570bd9977a772e74f191856981a503e57263b88cc44"
}
{
  "path": "/usr/local/apache2/modules/mod_auth_basic.so",
  "sha256": "1f9534187df98194fa60259c3d9feca05f1b2564d49b37b49da040232e7a327b"
}
{
  "path": "/usr/local/apache2/modules/mod_auth_digest.so",
  "sha256": "ad77d0457b773c9d13097adf47bebcd95297466fc9fb6886b7bff85e2acdd99d"
}
{
  "path": "/usr/local/apache2/modules/mod_auth_form.so",
  "sha256": "ceb56183d83c22ff08853982b0f35f122185cf69d3bcfd948eeb1df32dd12bbb"
}
{
  "path": "/usr/local/apache2/modules/mod_authn_anon.so",
  "sha256": "44308e1d5a65ab64232d27f24a827aa1afdb2fef580dd1a8454788431ebd639f"
}
{
  "path": "/usr/local/apache2/modules/mod_authn_core.so",
  "sha256": "9cbf85b1a20da26483ca4a57186161a2876ca296dd1174ed5a5af9f5301fe5e8"
}
{
  "path": "/usr/local/apache2/modules/mod_authn_dbd.so",
  "sha256": "08dc7b848a67131a091563046e3fc6914e86f248740bd2f23905f2f6df3ce541"
}
{
  "path": "/usr/local/apache2/modules/mod_authn_dbm.so",
  "sha256": "1e5900c8b41ca227b59ba54738154e04841cef2045d8040747e4b7887526a763"
}
{
  "path": "/usr/local/apache2/modules/mod_authn_file.so",
  "sha256": "74f83d5717276ae6a37f4a2d0c54f8d23e57ae1c3f73bb2b332c77860b7421ed"
}
{
  "path": "/usr/local/apache2/modules/mod_authn_socache.so",
  "sha256": "2f51212b62c5bbda54ddec0c1a07f523e96c2b56d987fefa43e0cc42dbf6f5d0"
}
{
  "path": "/usr/local/apache2/modules/mod_authnz_fcgi.so",
  "sha256": "4fa0fa7d3d4b742b3f73a781d2e8d4625d477c76aa0698aa0d499f87e6985554"
}
{
  "path": "/usr/local/apache2/modules/mod_authnz_ldap.so",
  "sha256": "dccffc453f46d201ecb1003b372a6ca417ac40a33036500a2215697b2e5ac0af"
}
{
  "path": "/usr/local/apache2/modules/mod_authz_core.so",
  "sha256": "e2b825ec9e2992b1cc157aef12c4ecd75960604658c3b7aa4a370088e89455b5"
}
{
  "path": "/usr/local/apache2/modules/mod_authz_dbd.so",
  "sha256": "61b427078b5d11b3fd8693cbfa22cb5871dc9784b08d3182b73ad3e99b8579d9"
}
{
  "path": "/usr/local/apache2/modules/mod_authz_dbm.so",
  "sha256": "1d99ed703743d9dd2185a0d7e9e351fa38066b3234ae997e87efa6dc1e4513eb"
}
{
  "path": "/usr/local/apache2/modules/mod_authz_groupfile.so",
  "sha256": "3e9adb775d41a8b01802ff610dda01f8e62a0d282ea0522d297a252207453c4d"
}
{
  "path": "/usr/local/apache2/modules/mod_authz_host.so",
  "sha256": "c0fcd53dc9596fd6bc280c55d14b61c72dc12470bf5c1bc86e369217af05cb2c"
}
{
  "path": "/usr/local/apache2/modules/mod_authz_owner.so",
  "sha256": "e8923ef5f11e03c37b4579e18d396758ee085bae4dadc0519374ca63da86c932"
}
{
  "path": "/usr/local/apache2/modules/mod_authz_user.so",
  "sha256": "3c5674a1e7af6b7d09e8c66f973a3138fed0dde4dfaee98fc132c89730cd9156"
}
{
  "path": "/usr/local/apache2/modules/mod_autoindex.so",
  "sha256": "2d992f31f40be2c0ec34a29981191c3bfb9e4448a2099f11a4876ba4d394dc2f"
}
{
  "path": "/usr/local/apache2/modules/mod_brotli.so",
  "sha256": "73bfe5aeff2040a7b56a0bf822bc4069ce3e9954186f81322060697f5cf0546f"
}
{
  "path": "/usr/local/apache2/modules/mod_bucketeer.so",
  "sha256": "9f146159e928405d2a007dba3690566a45e5793cde87871a30dbfd1dc9114db1"
}
{
  "path": "/usr/local/apache2/modules/mod_buffer.so",
  "sha256": "710bd1b238a7814963b2857eb92c891bafeff61d9e40f807d68ded700c8c37f2"
}
{
  "path": "/usr/local/apache2/modules/mod_cache.so",
  "sha256": "976222e2c7ddb317d8804383801b310be33c6b3542f6972edd12c38ddc527e38"
}
{
  "path": "/usr/local/apache2/modules/mod_cache_disk.so",
  "sha256": "c5359004a563b9b01bf0416cbe856bb50de642bf06649383ffcae26490dc69c8"
}
{
  "path": "/usr/local/apache2/modules/mod_cache_socache.so",
  "sha256": "94abdf3779a9f7d258b1720021e1e3f10c630e625f5aa13c683c3c811b8dac10"
}
{
  "path": "/usr/local/apache2/modules/mod_case_filter.so",
  "sha256": "79a0a336c1bacd06c0fc5ca14cfc97223c92f0f5b0c88ec95f7e163e8cdf917d"
}
{
  "path": "/usr/local/apache2/modules/mod_case_filter_in.so",
  "sha256": "aa5e1c9452e1be3789a8a867a98dab700e4a579c0ea1ff7180adf4e41b8495e3"
}
{
  "path": "/usr/local/apache2/modules/mod_cern_meta.so",
  "sha256": "1a6da74d768c01b1a96f5c0f0e74686d5b0f51c3d7f1149fa1124cdf10ba842a"
}
{
  "path": "/usr/local/apache2/modules/mod_cgi.so",
  "sha256": "f2716c663f4f7db8cd78f456e5bd098a62c1b8fde86253ed4617edfe9cdb93b2"
}
{
  "path": "/usr/local/apache2/modules/mod_cgid.so",
  "sha256": "d5a19aeeb7b9063bac25e4a172ea7578e83bb32da4fe21ecd858409115de166c"
}
{
  "path": "/usr/local/apache2/modules/mod_charset_lite.so",
  "sha256": "9c4a1b27532c5f47eea7cfc61f65a7cf2f132286e556175ec28e313024641c9d"
}
{
  "path": "/usr/local/apache2/modules/mod_data.so",
  "sha256": "4dcae9a704c7d9861497e57b15423b9ce3fc7dda6544096ecfff64e4223f3684"
}
{
  "path": "/usr/local/apache2/modules/mod_dav.so",
  "sha256": "1a33728b16ad05b12fbecf637168608cb10f258ef7a355bd37cef8ce2ed86fd7"
}
...
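The same set subtraction is straightforward outside jq as well. A Python sketch over a hypothetical, minimal document (IDs and paths invented for illustration):

```python
# Hypothetical minimal Syft document; real IDs are content-addressed hashes
doc = {
    "files": [
        {"id": "f1", "location": {"path": "/usr/bin/httpd"}, "executable": {"format": "elf"}},
        {"id": "f2", "location": {"path": "/opt/mystery"}, "executable": {"format": "elf"}},
        {"id": "f3", "location": {"path": "/etc/motd"}},  # not executable
    ],
    "artifactRelationships": [
        {"parent": "pkg-1", "child": "f1", "type": "contains"},
    ],
}

# All executable file IDs, minus those owned by some package via "contains"
binaries = {f["id"] for f in doc["files"] if f.get("executable")}
owned = {r["child"] for r in doc["artifactRelationships"] if r["type"] == "contains"}
unowned_paths = [f["location"]["path"]
                 for f in doc["files"] if f["id"] in binaries - owned]
```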

Binary file digests

Useful for verifying binary integrity and detecting tampering

.files[] |
  select(.executable != null) |  # Filter for executable files
  {
    path: .location.path,
    digests: [.digests[] | {algorithm, value}]  # All available hash algorithms
  }
syft alpine:3.9.2 -o json | \
  jq '.files[] |
  select(.executable != null) |
  {
    path: .location.path,
    digests: [.digests[] | {algorithm, value}]
  }'
{
  "path": "/bin/busybox",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "7423801dfb28659fcaaaa5e8d41051d470b19008"
    },
    {
      "algorithm": "sha256",
      "value": "2c1276c3c02ccec8a0e1737d3144cdf03db883f479c86fbd9c7ea4fd9b35eac5"
    }
  ]
}
{
  "path": "/lib/ld-musl-aarch64.so.1",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "0b83c1eb91d633379e0c17349e7dae821fa36dbb"
    },
    {
      "algorithm": "sha256",
      "value": "0132814479f1acc1e264ef59f73fd91563235897e8dc1bd52765f974cde382ca"
    }
  ]
}
{
  "path": "/lib/libcrypto.so.1.1",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "e9d1540e5bbd9e77b388ab0e6e2f52603eb032a4"
    },
    {
      "algorithm": "sha256",
      "value": "6c597c8ad195eeb7a9130ad832dfa4cbf140f42baf96304711b2dbd43ba8e617"
    }
  ]
}
{
  "path": "/lib/libssl.so.1.1",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "a8d5036010b52a80402b900c626fe862ab06bd8b"
    },
    {
      "algorithm": "sha256",
      "value": "fb72f4615fb4574bd6eeabfdb86be47012618b9076d75aeb1510941c585cae64"
    }
  ]
}
{
  "path": "/lib/libz.so.1.2.11",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "83378fc7a19ff908a7e92a9fd0ca39eee90d0a3c"
    },
    {
      "algorithm": "sha256",
      "value": "19e790eb36a09eba397b5af16852f3bea21a242026bbba3da7b16442b8ba305b"
    }
  ]
}
{
  "path": "/sbin/apk",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "adac7738917adecff81d4a6f9f0c7971b173859a"
    },
    {
      "algorithm": "sha256",
      "value": "22d7d85bd24923f1f274ce765d16602191097829e22ac632748302817ce515d8"
    }
  ]
}
{
  "path": "/sbin/mkmntdirs",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "fff9b110ad6c659a39681e7be3b2a036fbbcca7b"
    },
    {
      "algorithm": "sha256",
      "value": "a14a5a28525220224367616ef46d4713ef7bd00d22baa761e058e8bdd4c0af1b"
    }
  ]
}
{
  "path": "/usr/bin/getconf",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "06ed40070e1c2ad6d4171095eff4a6bdf9c8489b"
    },
    {
      "algorithm": "sha256",
      "value": "82bcde66ead19bc3b9ff850f66c2dbf5eaff36d481f1ec154100f73f6265d2ef"
    }
  ]
}
{
  "path": "/usr/bin/getent",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "c318a3a780fc27ed7dba57827a825191fa7ee8bd"
    },
    {
      "algorithm": "sha256",
      "value": "53ffb508150e91838d795831e8ecc71f2bc3a7db036c6d7f9512c3973418bb5e"
    }
  ]
}
{
  "path": "/usr/bin/iconv",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "eb98f04742e41cfc3ed44109b0e059d13e5523ea"
    },
    {
      "algorithm": "sha256",
      "value": "1c99d1f4edcb8da6db1da60958051c413de45a4c15cd3b7f7285ed87f9a250ff"
    }
  ]
}
{
  "path": "/usr/bin/scanelf",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "cb085d106f35862e44e17849026927bd05845bff"
    },
    {
      "algorithm": "sha256",
      "value": "908da485ad2edea35242f8989c7beb9536414782abc94357c72b7d840bb1fda2"
    }
  ]
}
{
  "path": "/usr/bin/ssl_client",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "7e17cb64c3fce832e5fa52a3b2ed1e1ccd26acd0"
    },
    {
      "algorithm": "sha256",
      "value": "67ab7f3a1ba35630f439d1ca4f73c7d95f8b7aa0e6f6db6ea1743f136f074ab4"
    }
  ]
}
{
  "path": "/usr/lib/engines-1.1/afalg.so",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "6bd2c385e3884109c581659a8b184592c86e7cee"
    },
    {
      "algorithm": "sha256",
      "value": "ea7c2f48bc741fd828d79a304dbf713e20e001c0187f3f534d959886af87f4af"
    }
  ]
}
{
  "path": "/usr/lib/engines-1.1/capi.so",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "41bb990b6f8e2013487980fd430455cc3b59905f"
    },
    {
      "algorithm": "sha256",
      "value": "b461ed43f0f244007d872e84760a446023b69b178c970acf10ed2666198942c6"
    }
  ]
}
{
  "path": "/usr/lib/engines-1.1/padlock.so",
  "digests": [
    {
      "algorithm": "sha1",
      "value": "82d8308700f481884fd77c882e0e9406fb17b317"
    },
    {
      "algorithm": "sha256",
      "value": "0ccb04f040afb0216da1cea2c1db7a0b91d990ce061e232782aedbd498483649"
    }
  ]
}
{
  "path": "/usr/lib/libtls-standalone.so.1.0.0",
  "digests": [
    {
      "algorithm": "sha1",
...

Binaries with security features

Analyzes ELF security hardening features extracted during SBOM generation

.files[] |
  select(.executable != null and .executable.format == "elf") |  # ELF binaries only
  {
    path: .location.path,
    pie: .executable.elfSecurityFeatures.pie,  # Position Independent Executable
    stackCanary: .executable.elfSecurityFeatures.stackCanary,  # Stack protection
    nx: .executable.elfSecurityFeatures.nx  # No-Execute bit
  }
syft alpine:3.9.2 -o json | \
  jq '.files[] |
  select(.executable != null and .executable.format == "elf") |
  {
    path: .location.path,
    pie: .executable.elfSecurityFeatures.pie,
    stackCanary: .executable.elfSecurityFeatures.stackCanary,
    nx: .executable.elfSecurityFeatures.nx
  }'
{
  "path": "/bin/busybox",
  "pie": true,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/lib/ld-musl-aarch64.so.1",
  "pie": false,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/lib/libcrypto.so.1.1",
  "pie": false,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/lib/libssl.so.1.1",
  "pie": false,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/lib/libz.so.1.2.11",
  "pie": false,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/sbin/apk",
  "pie": true,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/sbin/mkmntdirs",
  "pie": true,
  "stackCanary": false,
  "nx": true
}
{
  "path": "/usr/bin/getconf",
  "pie": true,
  "stackCanary": false,
  "nx": true
}
{
  "path": "/usr/bin/getent",
  "pie": true,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/usr/bin/iconv",
  "pie": true,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/usr/bin/scanelf",
  "pie": true,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/usr/bin/ssl_client",
  "pie": true,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/usr/lib/engines-1.1/afalg.so",
  "pie": false,
  "stackCanary": true,
  "nx": true
}
{
  "path": "/usr/lib/engines-1.1/capi.so",
  "pie": false,
  "stackCanary": false,
  "nx": true
}
{
  "path": "/usr/lib/engines-1.1/padlock.so",
  "pie": false,
  "stackCanary": false,
  "nx": true
}
{
  "path": "/usr/lib/libtls-standalone.so.1.0.0",
  "pie": false,
  "stackCanary": true,
  "nx": true
}

Binaries importing specific libraries

Identifies which binaries depend on specific shared libraries for security audits

.files[] |
  select(.executable != null and .executable.importedLibraries != null) |
  select(.executable.importedLibraries[] | contains("libcrypto")) |  # Find binaries using libcrypto
  {
    path: .location.path,
    imports: .executable.importedLibraries  # Shared library dependencies
  }
syft alpine:3.9.2 -o json | \
  jq '.files[] |
  select(.executable != null and .executable.importedLibraries != null) |
  select(.executable.importedLibraries[] | contains("libcrypto")) |
  {
    path: .location.path,
    imports: .executable.importedLibraries
  }'
{
  "path": "/lib/libssl.so.1.1",
  "imports": [
    "libcrypto.so.1.1",
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/sbin/apk",
  "imports": [
    "libssl.so.1.1",
    "libcrypto.so.1.1",
    "libz.so.1",
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/lib/engines-1.1/afalg.so",
  "imports": [
    "libcrypto.so.1.1",
    "libc.musl-aarch64.so.1"
  ]
}
{
  "path": "/usr/lib/libtls-standalone.so.1.0.0",
  "imports": [
    "libssl.so.1.1",
    "libcrypto.so.1.1",
    "libc.musl-aarch64.so.1"
  ]
}

Extract Package URLs (PURLs)

Extracts Package URLs for cross-tool SBOM correlation and vulnerability matching

.artifacts[] |
  select(.purl != null and .purl != "") |  # Filter packages with PURLs
  {
    name,
    version,
    purl  # Package URL for cross-tool compatibility
  }
syft alpine:3.9.2 -o json | \
  jq '.artifacts[] |
  select(.purl != null and .purl != "") |
  {
    name,
    version,
    purl
  }'
{
  "name": "alpine-baselayout",
  "version": "3.1.0-r3",
  "purl": "pkg:apk/alpine/alpine-baselayout@3.1.0-r3?arch=aarch64&distro=alpine-3.9.2"
}
{
  "name": "alpine-keys",
  "version": "2.1-r1",
  "purl": "pkg:apk/alpine/alpine-keys@2.1-r1?arch=aarch64&distro=alpine-3.9.2"
}
{
  "name": "apk-tools",
  "version": "2.10.3-r1",
  "purl": "pkg:apk/alpine/apk-tools@2.10.3-r1?arch=aarch64&distro=alpine-3.9.2"
}
{
  "name": "busybox",
  "version": "1.29.3-r10",
  "purl": "pkg:apk/alpine/busybox@1.29.3-r10?arch=aarch64&distro=alpine-3.9.2"
}
{
  "name": "ca-certificates-cacert",
  "version": "20190108-r0",
  "purl": "pkg:apk/alpine/ca-certificates-cacert@20190108-r0?arch=aarch64&distro=alpine-3.9.2&upstream=ca-certificates"
}
{
  "name": "libc-utils",
  "version": "0.7.1-r0",
  "purl": "pkg:apk/alpine/libc-utils@0.7.1-r0?arch=aarch64&distro=alpine-3.9.2&upstream=libc-dev"
}
{
  "name": "libcrypto1.1",
  "version": "1.1.1a-r1",
  "purl": "pkg:apk/alpine/libcrypto1.1@1.1.1a-r1?arch=aarch64&distro=alpine-3.9.2&upstream=openssl"
}
{
  "name": "libssl1.1",
  "version": "1.1.1a-r1",
  "purl": "pkg:apk/alpine/libssl1.1@1.1.1a-r1?arch=aarch64&distro=alpine-3.9.2&upstream=openssl"
}
{
  "name": "libtls-standalone",
  "version": "2.7.4-r6",
  "purl": "pkg:apk/alpine/libtls-standalone@2.7.4-r6?arch=aarch64&distro=alpine-3.9.2"
}
{
  "name": "musl",
  "version": "1.1.20-r3",
  "purl": "pkg:apk/alpine/musl@1.1.20-r3?arch=aarch64&distro=alpine-3.9.2"
}
{
  "name": "musl-utils",
  "version": "1.1.20-r3",
  "purl": "pkg:apk/alpine/musl-utils@1.1.20-r3?arch=aarch64&distro=alpine-3.9.2&upstream=musl"
}
{
  "name": "scanelf",
  "version": "1.2.3-r0",
  "purl": "pkg:apk/alpine/scanelf@1.2.3-r0?arch=aarch64&distro=alpine-3.9.2&upstream=pax-utils"
}
{
  "name": "ssl_client",
  "version": "1.29.3-r10",
  "purl": "pkg:apk/alpine/ssl_client@1.29.3-r10?arch=aarch64&distro=alpine-3.9.2&upstream=busybox"
}
{
  "name": "zlib",
  "version": "1.2.11-r1",
  "purl": "pkg:apk/alpine/zlib@1.2.11-r1?arch=aarch64&distro=alpine-3.9.2"
}

Group packages by language

Groups and counts packages by programming language

[.artifacts[] | select(.language != null and .language != "")] |
  group_by(.language) |  # Group by programming language
  map({
    language: .[0].language,
    count: length  # Count packages per language
  }) |
  sort_by(.count) |
  reverse  # Highest count first
syft node:18-alpine -o json | \
  jq '[.artifacts[] | select(.language != null and .language != "")] |
  group_by(.language) |
  map({
    language: .[0].language,
    count: length
  }) |
  sort_by(.count) |
  reverse'
[
  {
    "language": "javascript",
    "count": 204
  }
]

Count packages by type

Provides a summary count of packages per ecosystem

[.artifacts[]] |
  group_by(.type) |  # Group packages by ecosystem type
  map({
    type: .[0].type,
    count: length  # Count packages in each group
  }) |
  sort_by(.count) |
  reverse  # Highest count first
syft node:18-alpine -o json | \
  jq '[.artifacts[]] |
  group_by(.type) |
  map({
    type: .[0].type,
    count: length
  }) |
  sort_by(.count) |
  reverse'
[
  {
    "type": "npm",
    "count": 204
  },
  {
    "type": "apk",
    "count": 17
  },
  {
    "type": "binary",
    "count": 1
  }
]

Package locations

Maps packages to their filesystem locations

.artifacts[] |
  {
    name,
    version,
    type,
    locations: [.locations[] | .path]  # All filesystem locations
  }
syft alpine:3.9.2 -o json | \
  jq '.artifacts[] |
  {
    name,
    version,
    type,
    locations: [.locations[] | .path]
  }'
{
  "name": "alpine-baselayout",
  "version": "3.1.0-r3",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "alpine-keys",
  "version": "2.1-r1",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "apk-tools",
  "version": "2.10.3-r1",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "busybox",
  "version": "1.29.3-r10",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "ca-certificates-cacert",
  "version": "20190108-r0",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "libc-utils",
  "version": "0.7.1-r0",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "libcrypto1.1",
  "version": "1.1.1a-r1",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "libssl1.1",
  "version": "1.1.1a-r1",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "libtls-standalone",
  "version": "2.7.4-r6",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "musl",
  "version": "1.1.20-r3",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "musl-utils",
  "version": "1.1.20-r3",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "scanelf",
  "version": "1.2.3-r0",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "ssl_client",
  "version": "1.29.3-r10",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}
{
  "name": "zlib",
  "version": "1.2.11-r1",
  "type": "apk",
  "locations": [
    "/lib/apk/db/installed"
  ]
}

Files by MIME type

Filters files by MIME type, useful for finding specific file types

.files[] |
  select(.metadata.mimeType == "application/x-sharedlib") |  # Filter by MIME type
  {
    path: .location.path,
    mimeType: .metadata.mimeType,
    size: .metadata.size  # File size in bytes
  }
syft alpine:3.9.2 -o json | \
  jq '.files[] |
  select(.metadata.mimeType == "application/x-sharedlib") |
  {
    path: .location.path,
    mimeType: .metadata.mimeType,
    size: .metadata.size
  }'
{
  "path": "/bin/busybox",
  "mimeType": "application/x-sharedlib",
  "size": 841320
}
{
  "path": "/lib/ld-musl-aarch64.so.1",
  "mimeType": "application/x-sharedlib",
  "size": 616960
}
{
  "path": "/lib/libcrypto.so.1.1",
  "mimeType": "application/x-sharedlib",
  "size": 2321984
}
{
  "path": "/lib/libssl.so.1.1",
  "mimeType": "application/x-sharedlib",
  "size": 515376
}
{
  "path": "/lib/libz.so.1.2.11",
  "mimeType": "application/x-sharedlib",
  "size": 91888
}
{
  "path": "/sbin/apk",
  "mimeType": "application/x-sharedlib",
  "size": 218928
}
{
  "path": "/sbin/mkmntdirs",
  "mimeType": "application/x-sharedlib",
  "size": 5712
}
{
  "path": "/usr/bin/getconf",
  "mimeType": "application/x-sharedlib",
  "size": 33544
}
{
  "path": "/usr/bin/getent",
  "mimeType": "application/x-sharedlib",
  "size": 48704
}
{
  "path": "/usr/bin/iconv",
  "mimeType": "application/x-sharedlib",
  "size": 21968
}
{
  "path": "/usr/bin/scanelf",
  "mimeType": "application/x-sharedlib",
  "size": 79592
}
{
  "path": "/usr/bin/ssl_client",
  "mimeType": "application/x-sharedlib",
  "size": 9808
}
{
  "path": "/usr/lib/engines-1.1/afalg.so",
  "mimeType": "application/x-sharedlib",
  "size": 18568
}
{
  "path": "/usr/lib/engines-1.1/capi.so",
  "mimeType": "application/x-sharedlib",
  "size": 5672
}
{
  "path": "/usr/lib/engines-1.1/padlock.so",
  "mimeType": "application/x-sharedlib",
  "size": 5672
}
{
  "path": "/usr/lib/libtls-standalone.so.1.0.0",
  "mimeType": "application/x-sharedlib",
  "size": 96032
}

Dependency relationships

Traverses package dependency graph using relationships

. as $root |
  .artifactRelationships[] |
  select(.type == "dependency-of") |  # Filter for dependency relationships
  .parent as $parent |
  .child as $child |
  {
    parent: ($root.artifacts[] | select(.id == $parent).name),  # Parent package name
    child: ($root.artifacts[] | select(.id == $child).name)  # Dependency name
  }
syft node:18-alpine -o json | \
  jq '. as $root |
  .artifactRelationships[] |
  select(.type == "dependency-of") |
  .parent as $parent |
  .child as $child |
  {
    parent: ($root.artifacts[] | select(.id == $parent).name),
    child: ($root.artifacts[] | select(.id == $child).name)
  }'
{
  "parent": "ca-certificates-bundle",
  "child": "apk-tools"
}
{
  "parent": "alpine-keys",
  "child": "alpine-release"
}
{
  "parent": "alpine-baselayout-data",
  "child": "alpine-baselayout"
}
{
  "parent": "musl",
  "child": "ssl_client"
}
{
  "parent": "musl",
  "child": "libgcc"
}
{
  "parent": "musl",
  "child": "libstdc++"
}
{
  "parent": "musl",
  "child": "musl-utils"
}
{
  "parent": "musl",
  "child": "libssl3"
}
{
  "parent": "musl",
  "child": "busybox"
}
{
  "parent": "musl",
  "child": "apk-tools"
}
{
  "parent": "musl",
  "child": "scanelf"
}
{
  "parent": "musl",
  "child": "libcrypto3"
}
{
  "parent": "musl",
  "child": "zlib"
}
{
  "parent": "libgcc",
  "child": "libstdc++"
}
{
  "parent": "libssl3",
  "child": "ssl_client"
}
{
  "parent": "libssl3",
  "child": "apk-tools"
}
{
  "parent": "busybox",
  "child": "busybox-binsh"
}
{
  "parent": "scanelf",
  "child": "musl-utils"
}
{
  "parent": "busybox-binsh",
  "child": "alpine-baselayout"
}
{
  "parent": "libcrypto3",
  "child": "ssl_client"
}
{
  "parent": "libcrypto3",
  "child": "libssl3"
}
{
  "parent": "libcrypto3",
  "child": "apk-tools"
}
{
  "parent": "zlib",
  "child": "apk-tools"
}

Files without packages

Finds orphaned files not associated with any package

. as $root |
  [.files[].id] as $allFiles |  # All file IDs
  [.artifactRelationships[] | select(.type == "contains") | .child] as $ownedFiles |  # Package-owned files
  ($allFiles - $ownedFiles) as $orphans |  # Set subtraction for unowned files
  $root.files[] |
  select(.id as $id | $orphans | index($id)) |  # Filter to orphaned files
  .location.path
syft alpine:3.9.2 -o json | \
  jq '. as $root |
  [.files[].id] as $allFiles |
  [.artifactRelationships[] | select(.type == "contains") | .child] as $ownedFiles |
  ($allFiles - $ownedFiles) as $orphans |
  $root.files[] |
  select(.id as $id | $orphans | index($id)) |
  .location.path'
"/lib/apk/db/installed"

Largest files

Identifies the top 10 largest files by size

[.files[] |
  {
    path: .location.path,
    size: .metadata.size,
    mimeType: .metadata.mimeType
  }] |
  sort_by(.size) |
  reverse |  # Largest first
  .[0:10]  # Top 10 files
syft alpine:3.9.2 -o json | \
  jq '[.files[] |
  {
    path: .location.path,
    size: .metadata.size,
    mimeType: .metadata.mimeType
  }] |
  sort_by(.size) |
  reverse |
  .[0:10]'
[
  {
    "path": "/lib/libcrypto.so.1.1",
    "size": 2321984,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/bin/busybox",
    "size": 841320,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/lib/ld-musl-aarch64.so.1",
    "size": 616960,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/lib/libssl.so.1.1",
    "size": 515376,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/etc/ssl/cert.pem",
    "size": 232598,
    "mimeType": "text/plain"
  },
  {
    "path": "/sbin/apk",
    "size": 218928,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/usr/lib/libtls-standalone.so.1.0.0",
    "size": 96032,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/lib/libz.so.1.2.11",
    "size": 91888,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/usr/bin/scanelf",
    "size": 79592,
    "mimeType": "application/x-sharedlib"
  },
  {
    "path": "/usr/bin/getent",
    "size": 48704,
    "mimeType": "application/x-sharedlib"
  }
]

Extract CPEs

Lists Common Platform Enumeration identifiers for vulnerability scanning

.artifacts[] |
  select(.cpes != null and (.cpes | length) > 0) |  # Filter packages with CPEs
  {
    name,
    version,
    cpes: [.cpes[].cpe]  # Extract CPE strings
  }
syft alpine:3.9.2 -o json | \
  jq '.artifacts[] |
  select(.cpes != null and (.cpes | length) > 0) |
  {
    name,
    version,
    cpes: [.cpes[].cpe]
  }'
{
  "name": "alpine-baselayout",
  "version": "3.1.0-r3",
  "cpes": [
    "cpe:2.3:a:alpine-baselayout:alpine-baselayout:3.1.0-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine-baselayout:alpine_baselayout:3.1.0-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine_baselayout:alpine-baselayout:3.1.0-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine_baselayout:alpine_baselayout:3.1.0-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine:alpine-baselayout:3.1.0-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine:alpine_baselayout:3.1.0-r3:*:*:*:*:*:*:*"
  ]
}
{
  "name": "alpine-keys",
  "version": "2.1-r1",
  "cpes": [
    "cpe:2.3:a:alpine-keys:alpine-keys:2.1-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine-keys:alpine_keys:2.1-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine_keys:alpine-keys:2.1-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine_keys:alpine_keys:2.1-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine:alpine-keys:2.1-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:alpine:alpine_keys:2.1-r1:*:*:*:*:*:*:*"
  ]
}
{
  "name": "apk-tools",
  "version": "2.10.3-r1",
  "cpes": [
    "cpe:2.3:a:apk-tools:apk-tools:2.10.3-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:apk-tools:apk_tools:2.10.3-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:apk_tools:apk-tools:2.10.3-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:apk_tools:apk_tools:2.10.3-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:apk:apk-tools:2.10.3-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:apk:apk_tools:2.10.3-r1:*:*:*:*:*:*:*"
  ]
}
{
  "name": "busybox",
  "version": "1.29.3-r10",
  "cpes": [
    "cpe:2.3:a:busybox:busybox:1.29.3-r10:*:*:*:*:*:*:*"
  ]
}
{
  "name": "ca-certificates-cacert",
  "version": "20190108-r0",
  "cpes": [
    "cpe:2.3:a:ca-certificates-cacert:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca-certificates-cacert:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca_certificates_cacert:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca_certificates_cacert:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca-certificates:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca-certificates:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca_certificates:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca_certificates:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:mozilla:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:mozilla:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:ca:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*"
  ]
}
{
  "name": "libc-utils",
  "version": "0.7.1-r0",
  "cpes": [
    "cpe:2.3:a:libc-utils:libc-utils:0.7.1-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:libc-utils:libc_utils:0.7.1-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:libc_utils:libc-utils:0.7.1-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:libc_utils:libc_utils:0.7.1-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:libc:libc-utils:0.7.1-r0:*:*:*:*:*:*:*",
    "cpe:2.3:a:libc:libc_utils:0.7.1-r0:*:*:*:*:*:*:*"
  ]
}
{
  "name": "libcrypto1.1",
  "version": "1.1.1a-r1",
  "cpes": [
    "cpe:2.3:a:libcrypto1.1:libcrypto1.1:1.1.1a-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:libcrypto1.1:libcrypto:1.1.1a-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:libcrypto:libcrypto1.1:1.1.1a-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:libcrypto:libcrypto:1.1.1a-r1:*:*:*:*:*:*:*"
  ]
}
{
  "name": "libssl1.1",
  "version": "1.1.1a-r1",
  "cpes": [
    "cpe:2.3:a:libssl1.1:libssl1.1:1.1.1a-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:libssl1.1:libssl:1.1.1a-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:libssl:libssl1.1:1.1.1a-r1:*:*:*:*:*:*:*",
    "cpe:2.3:a:libssl:libssl:1.1.1a-r1:*:*:*:*:*:*:*"
  ]
}
{
  "name": "libtls-standalone",
  "version": "2.7.4-r6",
  "cpes": [
    "cpe:2.3:a:libtls-standalone:libtls-standalone:2.7.4-r6:*:*:*:*:*:*:*",
    "cpe:2.3:a:libtls-standalone:libtls_standalone:2.7.4-r6:*:*:*:*:*:*:*",
    "cpe:2.3:a:libtls_standalone:libtls-standalone:2.7.4-r6:*:*:*:*:*:*:*",
    "cpe:2.3:a:libtls_standalone:libtls_standalone:2.7.4-r6:*:*:*:*:*:*:*",
    "cpe:2.3:a:libtls:libtls-standalone:2.7.4-r6:*:*:*:*:*:*:*",
    "cpe:2.3:a:libtls:libtls_standalone:2.7.4-r6:*:*:*:*:*:*:*"
  ]
}
{
  "name": "musl",
  "version": "1.1.20-r3",
  "cpes": [
    "cpe:2.3:a:musl-libc:musl:1.1.20-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:musl_libc:musl:1.1.20-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:musl:musl:1.1.20-r3:*:*:*:*:*:*:*"
  ]
}
{
  "name": "musl-utils",
  "version": "1.1.20-r3",
  "cpes": [
    "cpe:2.3:a:musl-utils:musl-utils:1.1.20-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:musl-utils:musl_utils:1.1.20-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:musl_utils:musl-utils:1.1.20-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:musl_utils:musl_utils:1.1.20-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:musl:musl-utils:1.1.20-r3:*:*:*:*:*:*:*",
    "cpe:2.3:a:musl:musl_utils:1.1.20-r3:*:*:*:*:*:*:*"
  ]
}
{
  "name": "scanelf",
  "version": "1.2.3-r0",
  "cpes": [
    "cpe:2.3:a:scanelf:scanelf:1.2.3-r0:*:*:*:*:*:*:*"
  ]
}
{
  "name": "ssl_client",
  "version": "1.29.3-r10",
  "cpes": [
    "cpe:2.3:a:ssl-client:ssl-client:1.29.3-r10:*:*:*:*:*:*:*",
    "cpe:2.3:a:ssl-client:ssl_client:1.29.3-r10:*:*:*:*:*:*:*",
    "cpe:2.3:a:ssl_client:ssl-client:1.29.3-r10:*:*:*:*:*:*:*",
    "cpe:2.3:a:ssl_client:ssl_client:1.29.3-r10:*:*:*:*:*:*:*",
    "cpe:2.3:a:ssl:ssl-client:1.29.3-r10:*:*:*:*:*:*:*",
    "cpe:2.3:a:ssl:ssl_client:1.29.3-r10:*:*:*:*:*:*:*"
  ]
}
{
  "name": "zlib",
  "version": "1.2.11-r1",
  "cpes": [
    "cpe:2.3:a:zlib:zlib:1.2.11-r1:*:*:*:*:*:*:*"
  ]
}

Packages without licenses

Identifies packages missing license information for compliance audits

.artifacts[] |
  select(.licenses == null or (.licenses | length) == 0) |  # Packages without license info
  {
    name,
    version,
    type,
    locations: [.locations[].path]  # Where package is installed
  }
syft httpd:2.4.65 -o json | \
  jq '.artifacts[] |
  select(.licenses == null or (.licenses | length) == 0) |
  {
    name,
    version,
    type,
    locations: [.locations[].path]
  }'
{
  "name": "httpd",
  "version": "2.4.65",
  "type": "binary",
  "locations": ["/usr/local/apache2/bin/httpd"]
}

Packages with CPE identifiers

Lists packages with CPE identifiers indicating potential CVE matches

.artifacts[] |
  select(.cpes != null and (.cpes | length) > 0) |  # Packages with CPE identifiers
  {
    name,
    version,
    type,
    cpeCount: (.cpes | length)  # Number of CPE matches
  }
syft alpine:3.9.2 -o json | \
  jq '.artifacts[] |
  select(.cpes != null and (.cpes | length) > 0) |
  {
    name,
    version,
    type,
    cpeCount: (.cpes | length)
  }'
{
  "name": "alpine-baselayout",
  "version": "3.1.0-r3",
  "type": "apk",
  "cpeCount": 6
}
{
  "name": "alpine-keys",
  "version": "2.1-r1",
  "type": "apk",
  "cpeCount": 6
}
{
  "name": "apk-tools",
  "version": "2.10.3-r1",
  "type": "apk",
  "cpeCount": 6
}
{
  "name": "busybox",
  "version": "1.29.3-r10",
  "type": "apk",
  "cpeCount": 1
}
{
  "name": "ca-certificates-cacert",
  "version": "20190108-r0",
  "type": "apk",
  "cpeCount": 12
}
{
  "name": "libc-utils",
  "version": "0.7.1-r0",
  "type": "apk",
  "cpeCount": 6
}
{
  "name": "libcrypto1.1",
  "version": "1.1.1a-r1",
  "type": "apk",
  "cpeCount": 4
}
{
  "name": "libssl1.1",
  "version": "1.1.1a-r1",
  "type": "apk",
  "cpeCount": 4
}
{
  "name": "libtls-standalone",
  "version": "2.7.4-r6",
  "type": "apk",
  "cpeCount": 6
}
{
  "name": "musl",
  "version": "1.1.20-r3",
  "type": "apk",
  "cpeCount": 3
}
{
  "name": "musl-utils",
  "version": "1.1.20-r3",
  "type": "apk",
  "cpeCount": 6
}
{
  "name": "scanelf",
  "version": "1.2.3-r0",
  "type": "apk",
  "cpeCount": 1
}
{
  "name": "ssl_client",
  "version": "1.29.3-r10",
  "type": "apk",
  "cpeCount": 6
}
{
  "name": "zlib",
  "version": "1.2.11-r1",
  "type": "apk",
  "cpeCount": 1
}

Troubleshooting

jq command not found

Install jq to query JSON output:

  • macOS: brew install jq
  • Ubuntu/Debian: apt-get install jq
  • Fedora/RHEL: dnf install jq
  • Windows: Download from jqlang.org

Empty or unexpected query results

Common jq query issues:

  • Wrong field path: Use jq 'keys' to list available top-level keys, then explore nested structures
  • Missing select filter: Remember to use select() when filtering (e.g., .artifacts[] | select(.type=="apk"))
  • String vs array: Some fields like licenses are arrays; use .[0] or iterate with .[]
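As a quick sanity check, you can probe the document structure with jq itself before writing a full query. The inline JSON below is a minimal stand-in for real `syft <target> -o json` output:

```shell
# List top-level keys to confirm field paths before querying deeper.
# The echoed document is a stand-in for `syft <target> -o json`.
echo '{"artifacts": [], "files": [], "descriptor": {"name": "syft"}}' |
  jq -r 'keys | join(", ")'
# → artifacts, descriptor, files
```

Note that jq's `keys` returns keys in sorted order, which is handy for comparing documents.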

Query works in terminal but not in scripts

When using jq in shell scripts:

  • Quote properly: Single quotes prevent shell variable expansion (e.g., jq '.artifacts' not jq ".artifacts")
  • Escape for heredocs: Use different quotes or escape when embedding jq in heredocs
  • Pipe errors: Add set -o pipefail to catch jq errors in pipelines
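A minimal sketch of these points in a script, assuming jq is installed (the inline JSON stands in for real Syft output):

```shell
#!/usr/bin/env bash
set -euo pipefail  # fail the script if any command in a pipeline errors, not just the last one

# Single quotes keep the jq filter away from shell expansion;
# pass shell variables in with --arg instead of interpolating them into the filter.
pkg="zlib"
echo '{"artifacts": [{"name": "zlib"}, {"name": "musl"}]}' |
  jq -r --arg n "$pkg" '.artifacts[] | select(.name == $n) | .name'
# → zlib
```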

Performance issues with large SBOMs

For very large JSON files:

  • Stream processing: Use jq’s --stream flag for memory-efficient processing
  • Filter early: Apply filters as early as possible in the pipeline to reduce data volume
  • Use specific queries: Avoid .[] on large arrays; be specific about what you need
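For instance, projecting out just the field you need before grouping keeps the working set small. A sketch on an inline stand-in document:

```shell
# Project only .type before grouping, rather than carrying whole package objects
# through group_by and sort.
echo '{"artifacts": [{"name": "a", "type": "apk"}, {"name": "b", "type": "npm"}, {"name": "c", "type": "apk"}]}' |
  jq -c '[.artifacts[].type] | group_by(.) | map({type: .[0], count: length})'
# → [{"type":"apk","count":2},{"type":"npm","count":1}]
```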

Next steps

Additional resources:

3.1.5 - Package Catalogers

Configure which package catalogers Syft uses to discover software components including language-specific and file-based catalogers.

Catalogers are Syft’s detection modules that identify software packages in your projects. Each cataloger specializes in finding specific types of packages—for example, python-package-cataloger finds Python dependencies declared in requirements.txt, while python-installed-package-cataloger finds Python packages that have already been installed.

Syft includes dozens of catalogers covering languages like Python, Java, Go, JavaScript, Ruby, Rust, and more, as well as OS packages (APK, RPM, DEB) and binary formats.

Default Behavior

Syft uses different cataloger sets depending on what you’re scanning:

| Scan Type | Default Catalogers | What They Find | Example |
|---|---|---|---|
| Container Image | Image-specific catalogers | Installed packages only | Python packages in site-packages |
| Directory | Directory-specific catalogers | Installed packages + declared dependencies | Python packages in site-packages AND requirements.txt |

This behavior ensures accurate results across different contexts. When you scan an image, Syft assumes installation steps have completed, so you get results for software that is positively present. When you scan a directory (like a source code repository), Syft looks for both what’s installed and what’s declared as a dependency, so you get results not only for what’s installed but also for what you intend to install.

Why use different catalogers for different sources?

Most of the time, files that hint at the intent to install software do not have enough information in them to determine the exact version of the package that would be installed. For example, a requirements.txt file might specify a package without a version, or with a version range. By looking at installed packages in an image, after any build tooling has been invoked, Syft can provide more accurate version information.
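For example, a (hypothetical) requirements.txt often cannot pin what will actually land in the image:

```
# requirements.txt: declared intent, not installed reality
requests              # no version at all
flask>=2.0,<3.0       # a range; the exact version is resolved at install time
```

After `pip install` runs during an image build, the installed-package cataloger sees the exact resolved versions in site-packages.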

Example: Python Package Detection

Scanning an image:

syft <container-image> --select-catalogers python
# Uses: python-installed-package-cataloger
# Finds: Packages in site-packages directories

Scanning a directory:

syft <source-directory> --select-catalogers python
# Uses: python-installed-package-cataloger, python-package-cataloger
# Finds: Packages in site-packages + requirements.txt, setup.py, Pipfile, etc.

Viewing Active Catalogers

The most reliable way to see which catalogers Syft used is to check the SBOM itself. Every SBOM captures both the catalogers that were requested and those that actually ran:

syft busybox:latest -o json | jq '.descriptor.configuration.catalogers'

Output:

{
  "requested": {
    "default": [
      "image",
      "file"
    ]
  },
  "used": [
    "alpm-db-cataloger",
    "apk-db-cataloger",
    "binary-classifier-cataloger",
    "bitnami-cataloger",
    "cargo-auditable-binary-cataloger",
    "conan-info-cataloger",
    "dotnet-deps-binary-cataloger",
    "dotnet-packages-lock-cataloger",
    "dpkg-db-cataloger",
    "elf-binary-package-cataloger",
    ...
  ]
}

This shows what catalogers were attempted, not just what found packages. The requested field shows your cataloger selection strategy, while used lists every cataloger that ran.

You can also see cataloger activity in real time using verbose logging, though this is less comprehensive and not as direct.

Exploring Available Catalogers

Use the syft cataloger list command to see all available catalogers, their tags, and test selection expressions.

List all catalogers

syft cataloger list

Output shows file and package catalogers with their tags:

┌───────────────────────────┬───────────────────────┐
│ FILE CATALOGER            │ TAGS                  │
├───────────────────────────┼───────────────────────┤
│ file-content-cataloger    │ content, file         │
│ file-digest-cataloger     │ digest, file          │
│ file-executable-cataloger │ binary-metadata, file │
│ file-metadata-cataloger   │ file, file-metadata   │
└───────────────────────────┴───────────────────────┘
┌────────────────────────────────────┬────────────────────────────────────────────────────────┐
│ PACKAGE CATALOGER                  │ TAGS                                                   │
├────────────────────────────────────┼────────────────────────────────────────────────────────┤
│ python-installed-package-cataloger │ directory, image, installed, language, package, python │
│ python-package-cataloger           │ declared, directory, language, package, python         │
│ java-archive-cataloger             │ directory, image, installed, java, language, maven     │
│ go-module-binary-cataloger         │ binary, directory, go, golang, image, installed        │
│ ...                                │                                                        │
└────────────────────────────────────┴────────────────────────────────────────────────────────┘

Test cataloger selection

Preview which catalogers a selection expression would use:

syft cataloger list --select-catalogers python
Default selections: 1
'all'
Selection expressions: 1
'python' (intersect)

┌────────────────────────────────────┬────────────────────────────────────────────────────────┐
│ PACKAGE CATALOGER                  │ TAGS                                                   │
├────────────────────────────────────┼────────────────────────────────────────────────────────┤
│ python-installed-package-cataloger │ directory, image, installed, language, package, python │
│ python-package-cataloger           │ declared, directory, language, package, python         │
└────────────────────────────────────┴────────────────────────────────────────────────────────┘

This shows exactly which catalogers your selection expression will use, helping you verify your configuration before running a scan.

Output formats

Get cataloger information in different formats:

# Table format (default)
syft cataloger list

# JSON format (useful for automation)
syft cataloger list -o json

Cataloger References

You can refer to catalogers in two ways:

  • By name: The exact cataloger identifier (e.g., java-pom-cataloger, go-module-binary-cataloger)
  • By tag: A group label for related catalogers (e.g., java, python, image, directory)

Common tags include:

  • Language tags: python, java, go, javascript, ruby, rust, etc.
  • Scan type tags: image, directory
  • Installation state tags: installed, declared
  • Ecosystem tags: maven, npm, cargo, composer, etc.

Customizing Cataloger Selection

Syft provides two flags for controlling catalogers:

--select-catalogers: Modify Defaults

Use this flag to adjust the default cataloger set. This is the recommended approach for most use cases.

Syntax:

| Operation | Syntax | Example | Description |
|---|---|---|---|
| Filter | <tag> | --select-catalogers java | Use only Java catalogers from the defaults |
| Add | +<name> | --select-catalogers +sbom-cataloger | Add a specific cataloger to defaults |
| Remove | -<name-or-tag> | --select-catalogers -rpm | Remove catalogers by name or tag |
| Combine | <tag>,+<name>,-<tag> | --select-catalogers java,+sbom-cataloger,-maven | Multiple operations together |

Selection Logic:

  1. Start with default catalogers (image or directory based)
  2. If tags provided (without + or -), filter to only those tagged catalogers
  3. Remove any catalogers matching -<name-or-tag>
  4. Add any catalogers specified with +<name>

--override-default-catalogers: Replace Defaults

Use this flag to completely replace Syft’s default cataloger selection. This bypasses the automatic image vs. directory behavior.

Syntax:

--override-default-catalogers <comma-separated-names-or-tags>

When to use:

  • You need catalogers from both image and directory sets
  • You want to use catalogers that aren’t in the default set
  • You need precise control regardless of scan type

Examples by Use Case

Filtering to Specific Languages

Scan for only Python packages using defaults for your scan type:

syft <target> --select-catalogers python

Scan for only Java and Go packages:

syft <target> --select-catalogers java,go

Adding Catalogers

Use defaults and also include the SBOM cataloger (which finds embedded SBOMs):

syft <target> --select-catalogers +sbom-cataloger

Scan with defaults plus both SBOM and binary catalogers:

syft <target> --select-catalogers +sbom-cataloger,+binary-classifier-cataloger

Removing Catalogers

Use defaults but exclude all RPM-related catalogers:

syft <target> --select-catalogers -rpm

Scan with defaults but remove Java JAR cataloger specifically:

syft <target> --select-catalogers -java-archive-cataloger

Combining Operations

Scan for Go packages, always include SBOM cataloger, but exclude binary analysis:

syft <container-image> --select-catalogers go,+sbom-cataloger,-binary
# Result: go-module-binary-cataloger, sbom-cataloger
# (binary cataloger excluded even though it's in go tag)

Filter to Java, add POM cataloger, remove Gradle:

syft <directory> --select-catalogers java,+java-pom-cataloger,-gradle

Complete Override Examples

Use only binary analysis catalogers regardless of scan type:

syft <target> --override-default-catalogers binary
# Result: binary-cataloger, cargo-auditable-binary-cataloger,
#         dotnet-portable-executable-cataloger, go-module-binary-cataloger

Use exactly two specific catalogers:

syft <target> --override-default-catalogers go-module-binary-cataloger,go-module-file-cataloger

Use all directory catalogers even when scanning an image:

syft <container-image> --override-default-catalogers directory

Troubleshooting

My language isn’t being detected

Check which catalogers ran and whether they found packages:

# See which catalogers were used
syft <target> -o json | jq '.descriptor.configuration.catalogers.used'

# See which catalogers found packages
syft <target> -o json | jq '.artifacts[].foundBy'

# See packages found by a specific cataloger
syft <target> -o json | jq '.artifacts[] | select(.foundBy == "python-package-cataloger") | .name'

If your expected cataloger isn’t in the used list:

  1. Verify the cataloger exists for your scan type: Use syft cataloger list --select-catalogers <tag> to preview
  2. Check your selection expressions: You may have excluded it with - or not included it in your filter
  3. Check file locations: Some catalogers look for specific paths (e.g., site-packages for Python)

If the cataloger ran but found nothing, check that:

  • Package files exist in the scanned source
  • Files are properly formatted
  • Files are in the expected locations for that cataloger

How do I know if I’m using image or directory defaults?

Check the SBOM’s cataloger configuration:

syft <target> -o json | jq '.descriptor.configuration.catalogers.requested'

This shows the selection strategy used:

  • "default": ["image", "file"] indicates image defaults
  • "default": ["directory", "file"] indicates directory defaults

What’s the difference between a name and a tag?

  • Name: The unique identifier for a single cataloger (e.g., python-package-cataloger)
  • Tag: A label that groups multiple catalogers (e.g., python includes both python-package-cataloger and python-installed-package-cataloger)

Use tags when you want to downselect from the default catalogers, and names when you need to target a specific cataloger.

Why use --select-catalogers vs --override-default-catalogers?

  • --select-catalogers: Respects Syft’s automatic image/directory behavior, safer for most use cases
  • --override-default-catalogers: Ignores scan type, gives complete control, requires more knowledge

When in doubt, use --select-catalogers.

Technical Reference

For reference, here’s the formal logic Syft uses for cataloger selection:

image_catalogers = all_catalogers AND catalogers_tagged("image")

directory_catalogers = all_catalogers AND catalogers_tagged("directory")

default_catalogers = image_catalogers OR directory_catalogers

sub_selected_catalogers = default_catalogers INTERSECT catalogers_tagged(TAG) [ UNION sub_selected_catalogers ... ]

base_catalogers = default_catalogers OR sub_selected_catalogers

final_set = (base_catalogers SUBTRACT removed_catalogers) UNION added_catalogers

This logic applies when using --select-catalogers. The --override-default-catalogers flag bypasses the default cataloger selection entirely and starts with the specified catalogers instead.

Next steps

Additional resources:

3.1.6 - File Selection

Control which files and directories Syft includes or excludes when generating SBOMs.

By default, Syft catalogs file details and digests for files owned by discovered packages. You can change this behavior using the SYFT_FILE_METADATA_SELECTION environment variable or the file.metadata.selection configuration option.

Available options:

  • all: capture all files from the search space
  • owned-by-package: capture only files owned by packages (default)
  • none: disable file information capture

Excluding file paths

You can exclude specific files and paths from scanning using glob patterns with the --exclude parameter. Use multiple --exclude flags to specify multiple patterns.

# Exclude a specific directory
syft <source> --exclude /etc

# Exclude files by pattern
syft <source> --exclude './out/**/*.json'

# Combine multiple exclusions
syft <source> --exclude './out/**/*.json' --exclude /etc --exclude '**/*.log'

Exclusion behavior by source type

How Syft interprets exclusion patterns depends on whether you’re scanning an image or a directory.

Image scanning

When scanning container images, Syft scans the entire filesystem. Use absolute paths for exclusions:

# Exclude system directories
syft alpine:latest --exclude /etc --exclude /var

# Exclude files by pattern across entire filesystem
syft alpine:latest --exclude '/usr/**/*.txt'

Directory scanning

When scanning directories, Syft resolves exclusion patterns relative to the specified directory. All exclusion patterns must begin with ./, */, or **/.

# Scanning /usr/foo
syft /usr/foo --exclude ./package.json        # Excludes /usr/foo/package.json
syft /usr/foo --exclude '**/package.json'     # Excludes all package.json files under /usr/foo
syft /usr/foo --exclude './out/**'            # Excludes everything under /usr/foo/out

Path prefix requirements for directory scans:

Pattern   Meaning                          Example
./        Relative to scan directory root  ./config.json
*/        One level of directories         */temp
**/       Any depth of directories         **/node_modules
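
The prefix rule for directory scans can be expressed as a small sanity check. This is a hypothetical helper for illustration, not part of Syft:

```python
# Hypothetical validator for the directory-scan prefix rule described above:
# exclusion patterns must begin with ./, */, or **/.
def valid_directory_exclusion(pattern: str) -> bool:
    return pattern.startswith(("./", "*/", "**/"))

assert valid_directory_exclusion("./config.json")
assert valid_directory_exclusion("*/temp")
assert valid_directory_exclusion("**/node_modules")
assert not valid_directory_exclusion("/etc")  # absolute paths are rejected
```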

Common exclusion patterns

# Exclude all JSON files
syft <source> --exclude '**/*.json'

# Exclude build output directories
syft <source> --exclude '**/dist/**' --exclude '**/build/**'

# Exclude dependency directories
syft <source> --exclude '**/node_modules/**' --exclude '**/vendor/**'

# Exclude test files
syft <source> --exclude '**/*_test.go' --exclude '**/test/**'

FAQ

Why is my exclusion pattern not working?

Common issues:

  • Missing quotes: Wrap patterns in single quotes to prevent shell expansion ('**/*.json' not **/*.json)
  • Wrong path prefix: Directory scans require ./, */, or **/ prefix; absolute paths like /etc won’t work
  • Pattern syntax: Use glob syntax, not regex (e.g., **/*.txt not .*\.txt)

What’s the difference between owned-by-package and all file metadata?

  • owned-by-package (default): Only catalogs files that belong to discovered packages (e.g., files in an RPM’s file manifest)
  • all: Catalogs every file in the scan space, which significantly increases SBOM size and scan time

Use all when you need complete file listings for compliance or audit purposes.

Can I exclude directories based on .gitignore?

Not directly, but you can convert .gitignore patterns to --exclude flags. Note that .gitignore syntax differs from glob patterns, so you may need to adjust patterns (e.g., node_modules/ becomes **/node_modules/**).
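
A naive conversion can be scripted. The sketch below is a hypothetical starting point that covers only the common cases; `.gitignore` features like negation (`!`) and anchored patterns need more careful handling:

```python
# Naive sketch converting simple .gitignore lines to Syft --exclude globs.
# Hypothetical helper; negation (!) and anchored patterns are skipped here.
def gitignore_to_exclude(line: str):
    line = line.strip()
    if not line or line.startswith("#") or line.startswith("!"):
        return None  # skip blanks, comments, and negations
    if line.endswith("/"):
        # directory pattern: match the directory and everything inside it
        return f"**/{line.rstrip('/')}/**"
    return f"**/{line}"  # plain file pattern, matched at any depth

print(gitignore_to_exclude("node_modules/"))  # → **/node_modules/**
```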

Do exclusions affect package detection?

Yes! If you exclude a file that a cataloger needs (like package.json or requirements.txt), Syft won’t detect packages from that file. Exclude carefully to avoid missing dependencies.

Next steps

Additional resources:

  • Configure catalogers: See Package Catalogers to control which package types are detected
  • Configuration file: Use Configuration to set persistent exclusion patterns
  • Scan target types: Review Supported Scan Targets to understand scanning behavior for different scan target types

3.1.7 - Using Templates

Create custom SBOM output formats using Go templates with available data fields to build tailored reports for specific tooling or compliance requirements.

Syft lets you define custom output formats using Go templates. This is useful for generating custom reports, integrating with specific tools, or extracting only the data you need.

How to use templates

Set the output format to template and specify the template file path:

syft <image> -o template -t ./path/to/custom.tmpl

You can also configure the template path in your configuration file:

# .syft.yaml
format:
  template:
    path: "/path/to/template.tmpl"

Available fields

Templates receive the same data structure as the syft-json output format. The Syft JSON schema is the source of truth for all available fields and their structure.

To see what data is available:

# View the full JSON structure
syft <image> -o json

# Explore specific fields
syft <image> -o json | jq '.artifacts[0]'

Key fields commonly used in templates:

  • .artifacts - Array of discovered packages
  • .files - Array of discovered files
  • .source - Information about what was scanned
  • .distro - Detected Linux distribution (if applicable)
  • .descriptor - Syft version and configuration

Common package (artifact) fields:

  • .name, .version, .type - Basic package info
  • .licenses - License information (array)
  • .purl - Package URL
  • .cpes - Common Platform Enumerations
  • .locations - Where the package was found

Template functions

Syft templates support:

Function      Arguments   Description
getLastIndex  collection  Returns the last index of a slice (length - 1), useful for comma-separated lists
hasField      obj, field  Checks if a field exists on an object, returns boolean

Examples

The following examples show template source code and the rendered output when run against alpine:3.9.2:

CSV output

"Package","Version","Type","Found by"
{{- range .artifacts}}
"{{.name}}","{{.version}}","{{.type}}","{{.foundBy}}"
{{- end}}
"Package","Version","Type","Found by"
"alpine-baselayout","3.1.0-r3","apk","apk-db-cataloger"
"alpine-keys","2.1-r1","apk","apk-db-cataloger"
"apk-tools","2.10.3-r1","apk","apk-db-cataloger"
"busybox","1.29.3-r10","apk","apk-db-cataloger"
"ca-certificates-cacert","20190108-r0","apk","apk-db-cataloger"
"libc-utils","0.7.1-r0","apk","apk-db-cataloger"
"libcrypto1.1","1.1.1a-r1","apk","apk-db-cataloger"
"libssl1.1","1.1.1a-r1","apk","apk-db-cataloger"
"libtls-standalone","2.7.4-r6","apk","apk-db-cataloger"
"musl","1.1.20-r3","apk","apk-db-cataloger"
"musl-utils","1.1.20-r3","apk","apk-db-cataloger"
"scanelf","1.2.3-r0","apk","apk-db-cataloger"
"ssl_client","1.29.3-r10","apk","apk-db-cataloger"
"zlib","1.2.11-r1","apk","apk-db-cataloger"

Filter by package type

{{range .artifacts}}
{{- if eq .type "apk"}}
{{.name}}@{{.version}}{{end}}
{{- end}}
alpine-baselayout@3.1.0-r3
alpine-keys@2.1-r1
apk-tools@2.10.3-r1
busybox@1.29.3-r10
ca-certificates-cacert@20190108-r0
libc-utils@0.7.1-r0
libcrypto1.1@1.1.1a-r1
libssl1.1@1.1.1a-r1
libtls-standalone@2.7.4-r6
musl@1.1.20-r3
musl-utils@1.1.20-r3
scanelf@1.2.3-r0
ssl_client@1.29.3-r10
zlib@1.2.11-r1

Markdown report

# SBOM Report: {{.source.metadata.userInput}}

Scanned: {{.source.name}}:{{.source.version}} ({{.source.type}})
{{- if .distro}}
Distribution: {{.distro.prettyName}}
{{- end}}

## Packages ({{len .artifacts}})

| Package | Version | Type |
|---------|---------|------|
{{- range .artifacts}}
| {{.name}} | {{.version}} | {{.type}} |
{{- end}}
# SBOM Report: alpine:3.9.2

Scanned: alpine:3.9.2 (image)
Distribution: Alpine Linux v3.9

## Packages (14)

| Package                | Version     | Type |
| ---------------------- | ----------- | ---- |
| alpine-baselayout      | 3.1.0-r3    | apk  |
| alpine-keys            | 2.1-r1      | apk  |
| apk-tools              | 2.10.3-r1   | apk  |
| busybox                | 1.29.3-r10  | apk  |
| ca-certificates-cacert | 20190108-r0 | apk  |
| libc-utils             | 0.7.1-r0    | apk  |
| libcrypto1.1           | 1.1.1a-r1   | apk  |
| libssl1.1              | 1.1.1a-r1   | apk  |
| libtls-standalone      | 2.7.4-r6    | apk  |
| musl                   | 1.1.20-r3   | apk  |
| musl-utils             | 1.1.20-r3   | apk  |
| scanelf                | 1.2.3-r0    | apk  |
| ssl_client             | 1.29.3-r10  | apk  |
| zlib                   | 1.2.11-r1   | apk  |

License compliance

{{range .artifacts}}
{{- if .licenses}}
{{.name}}: {{range .licenses}}{{.value}} {{end}}{{end}}
{{- end}}
alpine-baselayout: GPL-2.0
alpine-keys: MIT
apk-tools: GPL2
busybox: GPL-2.0
ca-certificates-cacert: GPL-2.0-or-later MPL-2.0
libc-utils: BSD
libcrypto1.1: OpenSSL
libssl1.1: OpenSSL
libtls-standalone: ISC
musl: MIT
musl-utils: BSD GPL2+ MIT
scanelf: GPL-2.0
ssl_client: GPL-2.0
zlib: zlib

Custom JSON subset

{
  "scanned": "{{.source.metadata.userInput}}",
  "packages": [
    {{- $last := sub (len .artifacts) 1}}
    {{- range $i, $pkg := .artifacts}}
    {"name": "{{$pkg.name}}", "version": "{{$pkg.version}}"}{{if ne $i $last}},{{end}}
    {{- end}}
  ]
}
{
  "scanned": "alpine:3.9.2",
  "packages": [
    { "name": "alpine-baselayout", "version": "3.1.0-r3" },
    { "name": "alpine-keys", "version": "2.1-r1" },
    { "name": "apk-tools", "version": "2.10.3-r1" },
    { "name": "busybox", "version": "1.29.3-r10" },
    { "name": "ca-certificates-cacert", "version": "20190108-r0" },
    { "name": "libc-utils", "version": "0.7.1-r0" },
    { "name": "libcrypto1.1", "version": "1.1.1a-r1" },
    { "name": "libssl1.1", "version": "1.1.1a-r1" },
    { "name": "libtls-standalone", "version": "2.7.4-r6" },
    { "name": "musl", "version": "1.1.20-r3" },
    { "name": "musl-utils", "version": "1.1.20-r3" },
    { "name": "scanelf", "version": "1.2.3-r0" },
    { "name": "ssl_client", "version": "1.29.3-r10" },
    { "name": "zlib", "version": "1.2.11-r1" }
  ]
}

Executable file digests

{{range .files -}}
{{- if .executable}}
{{.location.path}}: {{range .digests}}{{if eq .algorithm "sha256"}}{{.value}}{{end}}{{end}}
{{end}}
{{- end}}
/bin/busybox: 2c1276c3c02ccec8a0e1737d3144cdf03db883f479c86fbd9c7ea4fd9b35eac5

/lib/ld-musl-aarch64.so.1: 0132814479f1acc1e264ef59f73fd91563235897e8dc1bd52765f974cde382ca

/lib/libcrypto.so.1.1: 6c597c8ad195eeb7a9130ad832dfa4cbf140f42baf96304711b2dbd43ba8e617

/lib/libssl.so.1.1: fb72f4615fb4574bd6eeabfdb86be47012618b9076d75aeb1510941c585cae64

/lib/libz.so.1.2.11: 19e790eb36a09eba397b5af16852f3bea21a242026bbba3da7b16442b8ba305b

/sbin/apk: 22d7d85bd24923f1f274ce765d16602191097829e22ac632748302817ce515d8

/sbin/mkmntdirs: a14a5a28525220224367616ef46d4713ef7bd00d22baa761e058e8bdd4c0af1b

/usr/bin/getconf: 82bcde66ead19bc3b9ff850f66c2dbf5eaff36d481f1ec154100f73f6265d2ef

/usr/bin/getent: 53ffb508150e91838d795831e8ecc71f2bc3a7db036c6d7f9512c3973418bb5e

/usr/bin/iconv: 1c99d1f4edcb8da6db1da60958051c413de45a4c15cd3b7f7285ed87f9a250ff

/usr/bin/scanelf: 908da485ad2edea35242f8989c7beb9536414782abc94357c72b7d840bb1fda2

/usr/bin/ssl_client: 67ab7f3a1ba35630f439d1ca4f73c7d95f8b7aa0e6f6db6ea1743f136f074ab4

/usr/lib/engines-1.1/afalg.so: ea7c2f48bc741fd828d79a304dbf713e20e001c0187f3f534d959886af87f4af

/usr/lib/engines-1.1/capi.so: b461ed43f0f244007d872e84760a446023b69b178c970acf10ed2666198942c6

/usr/lib/engines-1.1/padlock.so: 0ccb04f040afb0216da1cea2c1db7a0b91d990ce061e232782aedbd498483649

/usr/lib/libtls-standalone.so.1.0.0: 7f4c2ff4010e30a69f588ab4f213fdf9ce61a524a0eecd3f5af31dc760e8006c

Find binaries importing a library

{{range .files -}}
{{- if .executable}}
{{- $path := .location.path}}
{{- range .executable.importedLibraries}}
{{- if eq . "libcrypto.so.1.1"}}
{{$path}}
{{break}}
{{- end}}
{{- end}}
{{- end}}
{{- end}}
/lib/libssl.so.1.1

/sbin/apk

/usr/lib/engines-1.1/afalg.so

/usr/lib/libtls-standalone.so.1.0.0

Troubleshooting

“can’t evaluate field” errors: The field doesn’t exist or is misspelled. Check field names with syft <image> -o json | jq.

Empty output: Verify your field paths are correct. Use syft <image> -o json to see the actual data structure.

Template syntax errors: Refer to the Go template documentation for syntax help.

Next steps

Additional resources:

3.1.8 - Format Conversion

Convert existing SBOMs between different formats including SPDX and CycloneDX using Syft’s experimental conversion capabilities.

The ability to convert existing SBOMs means you can create SBOMs in different formats quickly, without the need to regenerate the SBOM from scratch, which may take significantly more time.

syft convert <ORIGINAL-SBOM-FILE> -o <NEW-SBOM-FORMAT>[=<NEW-SBOM-FILE>]

We support formats with wide community usage AND good encode/decode support by Syft. The supported formats are:

  • Syft JSON (-o json)
  • SPDX JSON (-o spdx-json)
  • SPDX tag-value (-o spdx-tag-value)
  • CycloneDX JSON (-o cyclonedx-json)
  • CycloneDX XML (-o cyclonedx-xml)

Conversion example:

syft alpine:latest -o syft-json=sbom.syft.json # generate a syft SBOM
syft convert sbom.syft.json -o cyclonedx-json=sbom.cdx.json  # convert it to CycloneDX

Best practices

Use Syft JSON as the source format

Generate and keep Syft JSON as your primary SBOM. Convert from it to other formats as needed:

# Generate Syft JSON (native format with complete data)
syft <source> -o json=sbom.json

# Convert to other formats
syft convert sbom.json -o spdx-json=sbom.spdx.json
syft convert sbom.json -o cyclonedx-json=sbom.cdx.json

Converting between non-Syft formats loses data. Syft JSON contains all information Syft extracted, while other formats use different schemas that can’t represent the same fields.

What gets preserved

Conversions from Syft JSON to SPDX or CycloneDX preserve all standard SBOM fields. Converted output matches directly-generated output (only timestamps and IDs differ).

Avoid chaining conversions (e.g., SPDX → CycloneDX). Each step may lose format-specific data.

Reliably preserved across conversions:

  • Package names, versions, and PURLs
  • License information
  • CPEs and external references
  • Package relationships

May be lost in conversions:

  • Tool configuration and cataloger information
  • Source metadata (image manifests, layers, container config)
  • File location details and layer attribution
  • Package-manager-specific metadata (git commits, checksums, provides/dependencies)
  • Distribution details

When to convert vs regenerate

Convert from Syft JSON when:

  • You need multiple formats for different tools
  • The original source is unavailable
  • Scanning takes significant time

Regenerate from source when:

  • You need complete format-specific data
  • Conversion output is missing critical information

FAQ

Can I convert from SPDX to CycloneDX?

Yes, but it’s not recommended. Converting between non-Syft formats loses data with each conversion. If you have the original Syft JSON or can re-scan the source, that’s a better approach.

Why is some data missing after conversion?

Different SBOM formats have different schemas with different capabilities. SPDX and CycloneDX can’t represent all Syft metadata. Converting from Syft JSON to standard formats works best; converting between standard formats loses more data.

Is conversion faster than re-scanning?

Yes, significantly. Conversion takes milliseconds while scanning can take seconds to minutes depending on source size. This makes conversion ideal for CI/CD pipelines that need multiple formats.

Can I convert back to Syft JSON from SPDX?

Yes, but you’ll lose Syft-specific metadata that doesn’t exist in SPDX (like cataloger information, layer details, and file metadata). The result won’t match the original Syft JSON.

Which format versions are supported?

See the Output Formats guide for supported versions of each format. Syft converts to the latest version by default, but you can specify older versions (e.g., -o spdx-json@2.2).

Next steps

Additional resources:

3.1.9 - Attestation

Generate cryptographically signed SBOM attestations using in-toto and Sigstore to create, verify, and attach attestations to container images for supply chain security.

Overview

An attestation is cryptographic proof that you created a specific SBOM for a container image. When you publish an image, consumers need to trust that the SBOM accurately describes the image contents. Attestations solve this by letting you sign SBOMs and attach them to images, enabling consumers to verify authenticity.

Syft supports two approaches:

  • Keyless attestation: Uses your identity (GitHub, Google, Microsoft) as trust root via Sigstore. Best for CI/CD and teams.
  • Local key attestation: Uses cryptographic key pairs you manage. Best for air-gapped environments or specific security requirements.

Prerequisites

Before creating attestations, ensure you have:

  • Syft installed
  • Cosign ≥ v1.12.0 installed (installation guide)
  • Write access to the OCI registry where you’ll publish attestations
  • Registry authentication configured (e.g., docker login for Docker Hub)

For local key attestations, you’ll also need a key pair. Generate one with:

cosign generate-key-pair

This creates cosign.key (private key) and cosign.pub (public key). Keep the private key secure.

Keyless attestation

Keyless attestation uses Sigstore to tie your OIDC identity (GitHub, Google, or Microsoft account) to the attestation. This eliminates key management overhead.

Create a keyless attestation

syft attest --output cyclonedx-json <IMAGE>

Replace <IMAGE> with your image reference (e.g., docker.io/myorg/myimage:latest). You must have write access to this image.

What happens:

  1. Syft opens your browser to authenticate via OIDC (GitHub, Google, or Microsoft)
  2. After authentication, Syft generates the SBOM
  3. Sigstore signs the SBOM using your identity
  4. The attestation is uploaded to the OCI registry alongside your image

Verify a keyless attestation

Anyone can verify the attestation using cosign:

COSIGN_EXPERIMENTAL=1 cosign verify-attestation <IMAGE>

Successful output shows:

  • Attestation claims are validated
  • Claims exist in the Sigstore transparency log
  • Certificates verified against Fulcio (Sigstore’s certificate authority)
  • Certificate subject (your identity email)
  • Certificate issuer (identity provider URL)

Example:

Certificate subject:  user@example.com
Certificate issuer URL:  https://accounts.google.com

This proves the attestation was created by the specified identity.

Local key attestation

Local key attestation uses cryptographic key pairs you manage. You sign attestations with your private key, and consumers verify with your public key.

Create a key-based attestation

Generate the attestation and save it locally:

syft attest --output spdx-json --key cosign.key docker.io/myorg/myimage:latest > attestation.json

The output is a DSSE envelope containing an in-toto statement with your SBOM as the predicate.
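
To see the shape of that envelope, here is a minimal Python sketch that builds a hand-made DSSE envelope and decodes its payload. The values are made up for illustration; a real attestation also carries signatures over the payload:

```python
import base64
import json

# A minimal hand-built DSSE envelope (illustrative values only; real
# envelopes also include signatures over payloadType + payload).
statement = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "predicateType": "https://spdx.dev/Document",
    "subject": [{"name": "docker.io/myorg/myimage"}],
    "predicate": {"spdxVersion": "SPDX-2.3"},  # the SBOM lives here
}
envelope = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(json.dumps(statement).encode()).decode(),
    "signatures": [],
}

# Decoding: base64-decode the payload, parse the in-toto statement,
# and pull out the predicate (the SBOM itself).
decoded = json.loads(base64.b64decode(envelope["payload"]))
print(decoded["predicate"]["spdxVersion"])  # → SPDX-2.3
```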

Attach the attestation to your image

Use cosign to attach the attestation:

cosign attach attestation --attestation attestation.json docker.io/myorg/myimage:latest

You need write access to the image registry for this to succeed.

Verify a key-based attestation

Consumers verify using your public key:

cosign verify-attestation --key cosign.pub --type spdxjson docker.io/myorg/myimage:latest

Successful output shows:

Verification for docker.io/myorg/myimage:latest --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
  - Any certificates were verified against the Fulcio roots.

To extract and view the SBOM:

cosign verify-attestation --key cosign.pub --type spdxjson docker.io/myorg/myimage:latest | \
  jq '.payload | @base64d | .payload | fromjson | .predicate'

Use with vulnerability scanning

Pipe the verified SBOM directly to Grype for vulnerability analysis:

cosign verify-attestation --key cosign.pub --type spdxjson docker.io/myorg/myimage:latest | \
  jq '.payload | @base64d | .payload | fromjson | .predicate' | \
  grype

This ensures you’re scanning a verified, trusted SBOM.

Troubleshooting

Authentication failures

  • Ensure you’re logged into the registry: docker login <registry>
  • Verify you have write access to the image repository

Cosign version errors

  • Update to cosign ≥ v1.12.0: cosign version

Verification failures

  • For keyless: ensure COSIGN_EXPERIMENTAL=1 is set
  • For key-based: verify you’re using the correct public key
  • Check the attestation type matches (--type spdxjson or --type cyclonedx-json)

Permission denied uploading attestations

  • Verify write access to the registry
  • Check authentication credentials are current
  • Ensure the image exists in the registry before attaching attestations

Next steps

Continue your journey:

  • Scan for vulnerabilities: Use Grype to find security issues in your SBOMs
  • Check licenses: Analyze open source licenses with Grant
  • Reference documentation: Explore Syft CLI reference for all available commands and options
  • Configure Syft: See Configuration for advanced settings and persistent configuration

Key pages to revisit:

3.2 - Vulnerability Scanning

Learn how to scan container images, filesystems, and SBOMs for known software vulnerabilities.

Vulnerability scanning is the automated process of proactively identifying security weaknesses and known exploits within software and systems. This is crucial because it helps developers and organizations find and fix potential security holes before malicious actors can discover and exploit them, thus protecting data and maintaining system integrity.

Grype is an open-source vulnerability scanner specifically designed to analyze container images and filesystems. It works by comparing the software components it finds against a database of known vulnerabilities, providing a report of potential risks so they can be addressed.

3.2.1 - Getting Started

Use Grype to scan your container images, directories, or archives for known vulnerabilities.

What is Vulnerability Scanning?

Vulnerability scanning is the process of identifying known security vulnerabilities in software packages and dependencies.

  • For developers, it helps catch security issues early in development, before they reach production.

  • For organizations, it’s essential for maintaining security posture and meeting compliance requirements.

Grype is a CLI tool for scanning container images, filesystems, and SBOMs for known vulnerabilities.

Installation

Grype is provided as a single compiled executable and requires no external dependencies to run. Run the command for your platform to download the latest release.

curl -sSfL https://get.anchore.io/grype | sudo sh -s -- -b /usr/local/bin
brew install grype
nuget install Anchore.Grype

Check out the installation guide for the full list of official and community-maintained packaging options.

Scan a container image for vulnerabilities

Run grype against a small container image. Grype will download the latest vulnerability database and output a simple human-readable table of vulnerable packages:

grype alpine:latest
 ✔ Loaded image alpine:latest
 ✔ Parsed image sha256:8d591b0b7dea080ea3be9e12ae563eebf9…
 ✔ Cataloged contents 058c92d86112aa6f641b01ed238a07a3885…
   ├── ✔ Packages                        [15 packages]
   ├── ✔ File metadata                   [82 locations]
   ├── ✔ File digests                    [82 files]
   └── ✔ Executables                     [17 executables]
 ✔ Scanned for vulnerabilities     [6 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 0 medium, 6 low, 0 negligible
   └── by status:   0 fixed, 6 not-fixed, 0 ignored
NAME           INSTALLED   FIXED-IN  TYPE  VULNERABILITY   SEVERITY
busybox        1.37.0-r12            apk   CVE-2024-58251  Low
busybox        1.37.0-r12            apk   CVE-2025-46394  Low
busybox-binsh  1.37.0-r12            apk   CVE-2024-58251  Low
busybox-binsh  1.37.0-r12            apk   CVE-2025-46394  Low
ssl_client     1.37.0-r12            apk   CVE-2024-58251  Low
ssl_client     1.37.0-r12            apk   CVE-2025-46394  Low

Scan an existing SBOM for vulnerabilities

Grype can scan container images directly, but it can also scan an existing SBOM document.

grype alpine_latest-spdx.json

Create a vulnerability report in JSON format

The JSON-formatted output from Grype can be processed or visualized by other tools.

Create the vulnerability report using the --output flag:

grype alpine:latest --output json | jq . > vuln_report.json

While the JSON is piped to the file, you’ll see progress on stderr:

 ✔ Pulled image
 ✔ Loaded image alpine:latest
 ✔ Parsed image sha256:8d591b0b7dea080ea3be9e12ae563eebf9869168ffced1cb25b2470a3d9fe15e
 ✔ Cataloged contents 058c92d86112aa6f641b01ed238a07a3885b8c0815de3e423e5c5f789c398b45
   ├── ✔ Packages                        [15 packages]
   ├── ✔ File digests                    [82 files]
   ├── ✔ Executables                     [17 executables]
   └── ✔ File metadata                   [82 locations]
 ✔ Scanned for vulnerabilities     [6 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 0 medium, 6 low, 0 negligible
   └── by status:   0 fixed, 6 not-fixed, 0 ignored

FAQ

Does Grype need internet access?

Only for downloading container images and the vulnerability database. After the initial database download, scanning works offline until you update the database.

What about private container registries?

Grype supports authentication for private registries. See Private Registries.

Can I use Grype in CI/CD pipelines?

Absolutely! Grype is designed for automation. Scan images or SBOMs during builds and fail pipelines based on severity thresholds.
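
As an illustration, a pipeline gate can parse Grype's JSON report and fail the build at a chosen severity. This is a hypothetical script; the `matches[].vulnerability` fields follow Grype's JSON output format:

```python
import json
import sys

# Hypothetical CI gate: return non-zero if any match meets the threshold.
# Reads a report produced with: grype <image> --output json > report.json
LEVELS = ["Negligible", "Low", "Medium", "High", "Critical"]

def gate(report: dict, threshold: str = "High") -> int:
    blocking = [
        m["vulnerability"]["id"]
        for m in report.get("matches", [])
        if LEVELS.index(m["vulnerability"]["severity"]) >= LEVELS.index(threshold)
    ]
    for vuln_id in blocking:
        print(f"blocking: {vuln_id}", file=sys.stderr)
    return 1 if blocking else 0

report = {"matches": [{"vulnerability": {"id": "CVE-2024-0001", "severity": "Critical"}}]}
print(gate(report))  # → 1
```

Note that Grype also provides a built-in --fail-on severity threshold flag that accomplishes this without a separate script.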

What data does Grype send externally?

Nothing. Grype runs entirely locally and doesn’t send any data to external services.

Next steps

Now that you’ve scanned for vulnerabilities, here are additional resources:

3.2.2 - Supported Scan Targets

Explore the different scan targets Grype supports including container images, directories, SBOMs, and individual packages

Grype can scan a variety of scan targets including container images, directories, files, archives, SBOMs, and individual packages. In most cases, you can simply point Grype at what you want to analyze and it will automatically detect and scan it correctly.

Scan a container image from your local daemon or a remote registry:

grype alpine:latest

Scan a directory or file:

grype /path/to/project

Scan an SBOM:

grype sbom.json

To explicitly specify the scan target type, use the --from flag:

--from ARG      Description
docker          Use images from the Docker daemon
podman          Use images from the Podman daemon
containerd      Use images from the Containerd daemon
docker-archive  Use a tarball from disk for archives created from docker save
oci-archive     Use a tarball from disk for OCI archives
oci-dir         Read directly from a path on disk for OCI layout directories
singularity     Read directly from a Singularity Image Format (SIF) container file on disk
dir             Read directly from a path on disk (any directory)
file            Read directly from a path on disk (any single file)
registry        Pull image directly from a registry (bypass any container runtimes)
sbom            Read SBOM from file (supports Syft JSON, SPDX, CycloneDX formats)
purl            Scan individual packages via Package URL identifiers

Instead of using the --from flag explicitly, you can instead:

  • provide no hint and let Grype detect the scan target type automatically based on the input provided

  • provide the scan target type as a URI scheme in the target argument (e.g., docker:alpine:latest, oci-archive:/path/to/image.tar, dir:/path/to/dir)

Scan target-specific behaviors

With each kind of scan target, there are specific behaviors and defaults to be aware of.

For scan target capabilities that are inherited from Syft, please see the SBOM scan targets documentation.

For scan targets that are uniquely supported by Grype, see the sections below.

SBOM Scan Targets

You can scan pre-generated SBOMs instead of scanning the scan target directly. This approach offers several benefits:

  • Faster scans since package cataloging is already complete
  • Ability to cache and reuse SBOMs
  • Standardized vulnerability scanning across different tools

Scan an SBOM file

Grype scans SBOM files in multiple formats. You can provide an explicit sbom: prefix or just provide the file path:

Explicit SBOM prefix:

grype sbom:sbom.json

Implicit detection:

grype sbom.json

Grype automatically detects the SBOM format. Supported formats include:

  • Syft JSON
  • SPDX JSON, XML, and tag-value
  • CycloneDX JSON and XML

Use the explicit sbom: prefix when the file path might be ambiguous or when you want to be clear about the input type.
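
The idea behind format auto-detection can be sketched by sniffing each JSON flavor's distinguishing top-level keys. This is a simplified illustration; Syft's real detection is more thorough (it also handles XML and tag-value variants):

```python
import json

# Simplified sniffing of JSON SBOM flavors by distinguishing top-level keys.
# Illustrative only; real detection covers more formats and edge cases.
def sniff_sbom(text: str) -> str:
    doc = json.loads(text)
    if doc.get("bomFormat") == "CycloneDX":
        return "cyclonedx-json"  # CycloneDX declares its format explicitly
    if "spdxVersion" in doc:
        return "spdx-json"       # SPDX JSON carries a spdxVersion field
    if "artifacts" in doc and "descriptor" in doc:
        return "syft-json"       # Syft's native schema
    return "unknown"

print(sniff_sbom('{"bomFormat": "CycloneDX", "specVersion": "1.5"}'))  # → cyclonedx-json
```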

Scan an SBOM from stdin

You can pipe SBOM output directly from Syft or other SBOM generation tools:

Syft → Grype pipeline:

syft alpine:latest -o json | grype

Read SBOM from file via stdin:

Grype detects stdin input automatically when no command-line argument is provided and stdin is piped:

cat sbom.json | grype

Package scan targets

You can scan specific packages without scanning an entire image or directory. This is useful for:

  • Testing whether a specific package has vulnerabilities
  • Lightweight vulnerability checks
  • Compliance scanning for specific dependencies

Grype supports two formats for individual package scanning: Package URLs (PURLs) and Common Platform Enumerations (CPEs). When Grype receives input, it checks for PURL format first, then CPE format, before trying other scan target types.

Scan Package URLs (PURLs)

Package URLs (PURLs) provide a standardized way to identify software packages.

A PURL has this format:

pkg:<type>/<namespace>/<name>@<version>?<qualifiers>#<subpath>
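
The pieces of that format can be pulled apart with a few string splits. The sketch below is a minimal illustration of the fields, not a spec-complete parser (the packageurl-python reference library handles the full grammar, including percent-encoding):

```python
# Minimal illustration of the PURL fields shown above; not spec-complete.
def parse_purl(purl: str) -> dict:
    assert purl.startswith("pkg:")
    rest = purl[len("pkg:"):]
    rest, _, subpath = rest.partition("#")       # optional #<subpath>
    rest, _, qualifiers = rest.partition("?")    # optional ?<qualifiers>
    rest, _, version = rest.partition("@")       # optional @<version>
    ptype, _, name_path = rest.partition("/")    # <type>/<namespace>/<name>
    namespace, _, name = name_path.rpartition("/")
    quals = dict(q.split("=", 1) for q in qualifiers.split("&")) if qualifiers else {}
    return {"type": ptype, "namespace": namespace, "name": name,
            "version": version, "qualifiers": quals, "subpath": subpath}

print(parse_purl("pkg:apk/alpine/openssl@3.1.5-r0?distro=alpine-3.19"))
```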

Grype can take PURLs from the CLI or from a file. For instance, to scan the Python library urllib3 (version 1.26.7):

grype pkg:pypi/urllib3@1.26.7

You’ll see vulnerabilities for that specific package:

NAME     INSTALLED  FIXED IN  TYPE    VULNERABILITY        SEVERITY  EPSS           RISK
urllib3  1.26.7     1.26.17   python  GHSA-v845-jxx5-vc9f  High      0.9% (74th)    0.6
urllib3  1.26.7     1.26.19   python  GHSA-34jh-p97f-mpxf  Medium    0.1% (35th)    < 0.1
urllib3  1.26.7     1.26.18   python  GHSA-g4mx-q9vg-27p4  Medium    < 0.1% (15th)  < 0.1
urllib3  1.26.7     2.5.0     python  GHSA-pq67-6m6q-mj2v  Medium    < 0.1% (4th)   < 0.1

For operating system packages (apk, deb, rpm), use the distro qualifier to specify the distribution:

grype "pkg:apk/alpine/openssl@3.1.5-r0?distro=alpine-3.19"
grype "pkg:deb/debian/openssl@1.1.1w-0+deb11u1?distro=debian-11"
grype "pkg:rpm/redhat/openssl@1.0.2k-19.el7?distro=rhel-7"

You can specify distribution information with the --distro flag instead:

grype "pkg:rpm/redhat/openssl@1.0.2k-19.el7?arch=x86_64" --distro rhel:7

Without either the distro qualifier or the --distro flag, Grype may not find distribution-specific vulnerabilities.

Other qualifiers include:

  • upstream: The upstream package name or version. Vulnerability information tends to be tracked against the source or origin package rather than the installed package itself (e.g., libcrypto might be installed, but the package it was built from is openssl, which is where vulnerabilities are attributed).
  • epoch: The epoch value for RPM packages. This is necessary when the package in question has changed the methodology for versioning (e.g., switching from date-based versions to semantic versions) and the epoch is used to indicate that change.

You can scan multiple packages from a file. The file contains one PURL per line:

# contents of packages.txt (a text file with one PURL per line):
pkg:npm/lodash@4.17.20
pkg:pypi/requests@2.25.1
pkg:maven/org.apache.commons/commons-lang3@3.12.0

# scan every package listed in the file:
grype ./packages.txt

Grype scans all the packages in the file:

NAME           INSTALLED  FIXED IN  TYPE          VULNERABILITY        SEVERITY
lodash         4.17.20    4.17.21   npm           GHSA-35jh-r3h4-6jhm  High
requests       2.25.1     2.31.0    python        GHSA-j8r2-6x86-q33q  Medium
commons-lang3  3.12.0     3.18.0    java-archive  GHSA-j288-q9x7-2f5v  Medium
...

Scan Common Platform Enumerations (CPEs)

Common Platform Enumeration (CPE) is an older identification format for software and hardware. You can scan using CPE format:

grype "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"

Grype supports multiple CPE formats:

# CPE 2.2 format (WFN URI binding)
grype "cpe:/a:apache:log4j:2.14.1"

# CPE 2.3 format (string binding)
grype "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"

Use CPE when:

  • You’re working with legacy systems that use CPE identifiers
  • You need to test for vulnerabilities in a specific CVE that references a CPE
  • PURL format is not available for your package type

For most modern scanning workflows, PURL format is preferred because it provides better precision and ecosystem-specific information.


3.2.3 - Supported package ecosystems

Learn how Grype selects vulnerability data for different package types and what level of accuracy to expect

Grype automatically selects the right vulnerability data source based on the package type and distribution information in your SBOM. This guide explains how Grype chooses which vulnerability feed to use and what level of accuracy to expect.

How Grype chooses vulnerability data

Grype selects vulnerability feeds based on package type:

  • OS packages (apk, deb, rpm, portage, alpm) use vulnerability data sourced from distribution-specific security feeds.
  • Language packages (npm, PyPI, Maven, Go modules, etc.) use GitHub Security Advisories.
  • Other packages (binaries, Homebrew, Jenkins plugins, etc.) fall back to CPE matching against the NVD.

OS packages

When Grype scans an OS package, it uses vulnerability data sourced from distribution security feeds. Distribution maintainers curate these feeds and provide authoritative information about vulnerabilities affecting specific distribution versions.

For example, when you scan Debian 10, Grype looks for vulnerabilities affecting Debian 10 packages:

$ grype debian:10
NAME          INSTALLED           FIXED IN     TYPE  VULNERABILITY   SEVERITY
libgcrypt20   1.8.4-5+deb10u1     (won't fix)  deb   CVE-2021-33560  High
bash          5.0-4                            deb   CVE-2019-18276  Negligible
libidn2-0     2.0.5-1+deb10u1     (won't fix)  deb   CVE-2019-12290  High

OS distributions

Grype supports major Linux distributions with dedicated vulnerability feeds, including Alpine, Debian, Ubuntu, RHEL, SUSE, and many others. Some distributions have mature security tracking programs that report both fixed and unfixed vulnerabilities, providing comprehensive coverage.

Derivative distributions automatically use their parent distribution’s vulnerability feed. Grype maps derivative distributions to their upstream source using the ID_LIKE field from /etc/os-release. For example, Rocky Linux and AlmaLinux use the RHEL vulnerability feed, while Raspbian uses Debian’s feed.

When scanning Rocky Linux, Grype uses Red Hat security data:

$ grype rockylinux:9 -o json | jq '.matches[0].matchDetails[0].searchedBy.distro'
{
  "type": "rockylinux",
  "version": "9.3"
}

The distro type shows rockylinux, but Grype searches the RHEL vulnerability feed automatically. You don’t need to configure this mapping; it happens transparently based on the distribution’s ID_LIKE field.

Language packages

Language packages use vulnerability data from GitHub Security Advisories (GHSA). GitHub maintains security advisories for major package ecosystems, sourced from package maintainers, security researchers, and automated scanning.

When you scan a JavaScript package, Grype searches GHSA for npm advisories:

$ grype node:18-alpine
NAME         INSTALLED  FIXED IN  TYPE  VULNERABILITY         SEVERITY
cross-spawn  7.0.3      7.0.5     npm   GHSA-3xgq-45jj-v275   High

Supported language ecosystems

Grype supports these language ecosystems through GHSA:

  • Python (PyPI) - Python packages
  • JavaScript (npm) - Node.js packages
  • Java (Maven) - Java archives
  • Go (modules) - Go modules
  • PHP (Composer) - PHP packages
  • .NET (NuGet) - .NET packages
  • Dart (Pub) - Dart and Flutter packages
  • Ruby (RubyGems) - Ruby gems
  • Rust (Crates) - Rust crates
  • Swift - Swift packages
  • GitHub Actions - GitHub Actions workflow dependencies

For language packages, Grype searches GHSA by package name and version, applying ecosystem-specific version comparison rules to determine if your package version falls within the vulnerable range.
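
As a simplified illustration of that range check, the sketch below uses a naive dotted-numeric comparison. Real ecosystems (PEP 440 for Python, semver for npm, Maven's rules for Java) are considerably richer, and Grype applies the appropriate rules per ecosystem:

```python
def version_key(v: str) -> tuple[int, ...]:
    """Naive dotted-numeric parse; real ecosystem version rules are more involved."""
    return tuple(int(part) for part in v.split("."))

def in_vulnerable_range(installed: str, introduced: str, fixed: str) -> bool:
    """Typical advisory range shape: introduced <= installed < fixed."""
    return version_key(introduced) <= version_key(installed) < version_key(fixed)

print(in_vulnerable_range("4.17.20", "0.0.0", "4.17.21"))  # True: 4.17.20 is affected
print(in_vulnerable_range("4.17.21", "0.0.0", "4.17.21"))  # False: the fix version itself
```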

In addition to language packages, Bitnami packages are searched against Bitnami’s vulnerability feed in a similar manner.

Other packages

Packages without dedicated feeds use CPE fallback matching

Packages using CPE matching

These package types rely on Common Platform Enumeration (CPE) matching against the National Vulnerability Database (NVD):

  • Binary executables
  • Homebrew packages
  • Jenkins plugins
  • Conda packages
  • WordPress plugins

CPE matching constructs a CPE string from the package name and version, then searches the NVD for matching vulnerability entries.
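
A minimal sketch of that construction follows. Real CPE 2.3 binding also requires escaping special characters in field values, which is skipped here:

```python
def cpe23_app(vendor: str, product: str, version: str) -> str:
    """Build a CPE 2.3 string for an application; unknown fields stay as '*' wildcards.
    Escaping of special characters (required by the CPE spec) is omitted."""
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

print(cpe23_app("zlib", "zlib", "1.2.11"))
# cpe:2.3:a:zlib:zlib:1.2.11:*:*:*:*:*:*:*
```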

Understanding CPE match accuracy

CPE matching has important limitations:

  • May produce false positives - CPEs often do not distinguish between package ecosystems. For example, the PyPI package docker (a Python library for talking to the Docker daemon) can match vulnerabilities for Docker the container runtime because they share similar CPE identifiers.
  • May miss vulnerabilities - Not all vulnerabilities have CPE entries in the NVD.
  • Requires CPE metadata - Packages must have CPE information for matching to work.

You should verify CPE matches against the actual vulnerability details to confirm they apply to your specific package. Here’s a CPE match example:

{
  "matchDetails": [
    {
      "type": "cpe-match",
      "searchedBy": {
        "cpes": ["cpe:2.3:a:zlib:zlib:1.2.11:*:*:*:*:*:*:*"]
      },
      "found": {
        "versionConstraint": "<= 1.2.12 (unknown)"
      }
    }
  ]
}

Notice that the version constraint shows the (unknown) format rather than ecosystem-specific semantics, and the match type is cpe-match instead of exact-direct-match.

For more details on interpreting match types, confidence levels, and result reliability, see Understanding Grype results.


3.2.4 - Understanding Grype results

Learn how to read and interpret Grype’s vulnerability scan output, including match types, confidence levels, and result reliability

This guide explains how to read and interpret Grype’s vulnerability scan output. You’ll learn what different match types mean, how to assess result reliability, and how to filter results based on confidence levels.

Output formats

Grype supports several output formats for scan results:

  • Table (default) - Human-readable columnar output for terminal viewing
  • JSON - Complete structured data with all match details
  • SARIF - Standard format for tool integration and CI/CD pipelines
  • Template - Custom output using Go templates

This guide focuses on table and JSON formats, which you’ll use most often for understanding scan results.

Reading table output

The table format is Grype’s default output. When you run grype <image>, you see a table displaying one row per unique vulnerability match, with deduplication of identical rows.

Table columns

The table displays eight standard columns, with an optional ninth column for annotations:

  • NAME - The package name
  • INSTALLED - The version of the package
  • FIXED-IN - The version that fixes the vulnerability (shows (won't fix) if the vendor won’t fix it, or empty if no fix is available). See Filter by fix availability to filter results based on fix states
  • TYPE - Package type (apk, deb, rpm, npm, python, java-archive, etc.)
  • VULNERABILITY - The vulnerability identifier (see below)
  • SEVERITY - Vulnerability severity rating (Critical, High, Medium, Low, Negligible, Unknown)
  • EPSS - Exploit Prediction Scoring System score and percentile showing the probability of exploitation
  • RISK - Calculated risk score combining CVSS, EPSS, and other severity metrics into a single numeric value (0.0 to 10.0)
  • Annotations (conditional) - Additional context like KEV (Known Exploited Vulnerability), suppressed status, or distribution version when scanning multi-distro images

Here’s what a typical scan looks like:

NAME          INSTALLED  FIXED-IN     TYPE          VULNERABILITY   SEVERITY  EPSS          RISK
log4j-core    2.4.0      2.12.2       java-archive  CVE-2021-44228  Critical  94.4% (99th)  100.0  (kev)
log4j-core    2.4.0      2.12.2       java-archive  CVE-2021-45046  Critical  94.3% (99th)  99.0   (kev)
apk-tools     2.10.6-r0  2.10.7-r0    apk           CVE-2021-36159  Critical  12% (85th)    8.5
libcrypto1.1  1.1.1k-r0               apk           CVE-2021-3711   Critical  9% (78th)     9.1
libcrypto1.1  1.1.1k-r0  (won't fix)  apk           CVE-2021-3712   High      5% (62nd)     7.2

The Annotations column appears conditionally to provide additional context:

  • KEV or (kev) - Indicates the vulnerability is in CISA’s Known Exploited Vulnerabilities catalog
  • suppressed or suppressed by VEX - Shown when using --show-suppressed flag (see View filtered results)
  • Distribution version (e.g., ubuntu:20.04) - Shown when scan results include matches from multiple different distributions

Understanding vulnerability IDs

The VULNERABILITY column displays different types of identifiers depending on the data source:

  • CVE IDs (e.g., CVE-2024-1234) - Common Vulnerabilities and Exposures identifiers used by most Linux distributions (Alpine, Debian, Ubuntu, RHEL, SUSE) and the NVD
  • GHSA IDs (e.g., GHSA-xxxx-xxxx-xxxx) - GitHub Security Advisory identifiers for language ecosystem packages
  • ALAS IDs (e.g., ALAS-2023-1234) - Amazon Linux Security Advisory identifiers
  • ELSA IDs (e.g., ELSA-2023-12205) - Oracle Enterprise Linux Security Advisory identifiers

By default, Grype displays the vulnerability ID from the original data source. For example, an Alpine package might show CVE-2024-1234 while a GitHub Advisory for the same issue shows GHSA-abcd-1234-efgh. Use the --by-cve flag to normalize results to CVE identifiers:

grype <image> --by-cve

This flag replaces non-CVE vulnerability IDs with their related CVE ID when available, uses CVE metadata instead of the original advisory metadata, and makes it easier to correlate vulnerabilities across different data sources.

Compare the two approaches:

# Default output - shows GitHub Advisory ID
$ grype node:18
NAME     INSTALLED  FIXED-IN  TYPE  VULNERABILITY        SEVERITY
lodash   4.17.20    4.17.21   npm   GHSA-35jh-r3h4-6jhm  High

# With --by-cve - converts to CVE
$ grype node:18 --by-cve
NAME     INSTALLED  FIXED-IN  TYPE  VULNERABILITY   SEVERITY
lodash   4.17.20    4.17.21   npm   CVE-2021-23337  High

Sorting results

By default, Grype sorts vulnerability results by risk score, which combines multiple factors to help you prioritize remediation efforts. Understanding how sorting works and when to use alternative methods helps you build effective security workflows.

Why risk-based sorting works best

The default risk score takes a holistic approach by combining:

  • Threat (likelihood of exploitation) - Based on EPSS (Exploit Prediction Scoring System) scores or presence in CISA’s Known Exploited Vulnerabilities (KEV) catalog
  • Impact (potential damage) - Based on CVSS scores and severity ratings from multiple sources
  • Context (exploitation evidence) - Additional weight for vulnerabilities with known ransomware campaigns

This multi-factor approach aligns with security best practices recommended by the EPSS project, which emphasizes that “CVSS is a useful tool for capturing the fundamental properties of a vulnerability, but it needs to be used in combination with data-driven threat information, like EPSS.”

Risk-based sorting helps you focus on vulnerabilities that are both likely to be exploited AND have significant business impact, optimizing your remediation efficiency.

Why single-metric sorting can be misleading

While Grype offers several sorting options via the --sort-by flag, using single metrics can lead to inefficient prioritization:

Severity-only sorting (--sort-by severity) focuses solely on potential impact:

  • You may waste effort patching Critical severity vulnerabilities that are unlikely to ever be exploited in the wild
  • No consideration for whether attackers are actively targeting the vulnerability
  • Ignores real-world threat intelligence

EPSS-only sorting (--sort-by epss) focuses solely on exploitation likelihood:

  • You may prioritize vulnerabilities with high exploitation probability but low business impact
  • EPSS is not a risk score – it only addresses the threat component, not the complete risk picture
  • Missing context like asset criticality, network exposure, or available compensating controls

The EPSS documentation explicitly states that EPSS scores should be combined with severity information to make informed prioritization decisions, which is exactly what Grype’s risk score does.

Understanding EPSS in Grype

EPSS (Exploit Prediction Scoring System) is a data-driven scoring model that estimates the probability a vulnerability will be exploited in the next 30 days. Grype displays EPSS data in the table output showing both the raw score and percentile, such as 94.4% (99th), which means:

  • 94.4% - The raw EPSS score indicating a 94.4% probability of exploitation within 30 days
  • 99th - The percentile rank, meaning this score is higher than 99% of all EPSS scores

EPSS percentiles help normalize the heavily skewed distribution of EPSS scores, making it easier to set thresholds. For example, a vulnerability in the 90th percentile is more concerning than one in the 50th percentile, even if the raw likelihood values appear to be similar.

Grype incorporates EPSS as the threat component of its risk calculation. When a vulnerability appears in the KEV catalog, Grype automatically treats it as maximum threat (overriding EPSS) since observed exploitation is more significant than predicted exploitation.
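
That interplay can be sketched as a toy calculation. This is not Grype's actual formula, which weighs more inputs; it only illustrates KEV membership overriding EPSS as the threat term:

```python
def toy_risk(cvss: float, epss: float, on_kev: bool = False) -> float:
    """Illustrative only: threat (EPSS probability, or 1.0 when on the KEV list)
    scaled by impact (CVSS). Grype's real risk score uses more inputs."""
    threat = 1.0 if on_kev else epss
    return round(threat * cvss, 1)

print(toy_risk(cvss=10.0, epss=0.944, on_kev=True))  # 10.0 (KEV forces maximum threat)
print(toy_risk(cvss=9.8, epss=0.02))                 # 0.2 (severe but unlikely to be exploited)
```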

For more details on EPSS methodology and interpretation, see the EPSS model documentation.

When to use alternative sorting methods

While risk-based sorting is recommended for most remediation workflows, alternative sorting methods serve specific use cases:

Sort by KEV status (--sort-by kev):

  • When you need to comply with regulatory requirements like CISA BOD 22-01
  • For incident response scenarios focusing on actively exploited vulnerabilities

Sort by severity (--sort-by severity):

  • When organizational SLAs or compliance frameworks specify severity-based remediation timeframes (e.g., “patch all Critical within 7 days”)

Sort by EPSS (--sort-by epss):

  • For threat landscape analysis and security research

Sort by package (--sort-by package):

  • When organizing remediation work by team ownership (different teams maintain different packages)
  • For coordinating updates across multiple instances of the same package

Sort by vulnerability ID (--sort-by vulnerability):

  • When tracking specific CVE campaigns across your environment
  • For correlating findings with external threat intelligence reports

For most security and remediation workflows, stick with the default risk-based sorting. It provides the best balance of threat intelligence and impact assessment to help you prioritize effectively.


3.2.5 - Working with JSON

Learn how to work with Grype’s native JSON format

Grype’s native JSON output format provides a comprehensive representation of vulnerability scan results, including detailed information about each vulnerability, how it was matched, and the affected packages. This guide explains the structure of the JSON output and how to interpret its contents effectively.

Data shapes

The JSON output contains a top-level matches array. Each match has this structure:

{
  "matches": [
    {
      "vulnerability": { ... },
      "relatedVulnerabilities": [ ... ],
      "matchDetails": [ ... ],
      "artifact": { ... }
    }
  ]
}

Ultimately, matches are the core results of a Grype scan. Matches are composed of:

  • vulnerability - Primary vulnerability information
  • matchDetails - How Grype found the match
  • artifact - The package/artifact that was matched against the vulnerability
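
If you are post-processing the report in code rather than with jq, these parts map directly onto the JSON. A minimal sketch that extracts one (package, vulnerability, severity) row per match:

```python
import json

def summarize(report_path: str) -> list[tuple[str, str, str]]:
    """Return (package, vulnerability id, severity) for each match in grype -o json output."""
    with open(report_path) as f:
        report = json.load(f)
    return [
        (m["artifact"]["name"], m["vulnerability"]["id"], m["vulnerability"]["severity"])
        for m in report["matches"]
    ]
```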

Vulnerability fields

The vulnerability object contains the primary vulnerability information:

  • id (string) - The vulnerability identifier (CVE, GHSA, ALAS, ELSA, etc.)
  • dataSource (string) - URL to the vulnerability record in the data feed
  • namespace (string) - The data source namespace (e.g., alpine:distro:alpine:3.10, debian:distro:debian:10, github:language:javascript, nvd:cpe)
  • severity (string) - Severity rating from the data source
  • urls (array) - Reference URLs for the vulnerability
  • description (string) - Human-readable vulnerability description
  • cvss (array) - CVSS score information from various sources
  • fix (object) - Fix information including available versions and fix state (fixed, not-fixed, wont-fix, unknown). See Understanding fix states for details
  • advisories (array) - Related security advisories (where RHSAs appear)
  • risk (float64) - Calculated risk score combining CVSS, EPSS, and other severity metrics

A typical vulnerability object looks like:

{
  "vulnerability": {
    "id": "CVE-2021-36159",
    "dataSource": "https://security.alpinelinux.org/vuln/CVE-2021-36159",
    "namespace": "alpine:distro:alpine:3.10",
    "severity": "Critical",
    "urls": [],
    "fix": {
      "versions": ["2.10.7-r0"],
      "state": "fixed"
    },
    "advisories": [],
    "risk": 0.92
  }
}

Match detail fields

The matchDetails array contains information about how Grype found the match. Each detail object includes:

  • type (string) - Match type: exact-direct-match, exact-indirect-match, or cpe-match
  • matcher (string) - The matcher that produced this result (e.g., apk-matcher, github-matcher, stock-matcher)
  • searchedBy (object) - The specific attributes used to search (package name, version, etc.)
  • found (object) - The specific attributes in the vulnerability data that matched
  • fix (object) - Fix details specific to this match (may differ from vulnerability-level fix)

Here’s what matchDetails looks like:

{
  "matchDetails": [
    {
      "type": "exact-direct-match",
      "matcher": "apk-matcher",
      "searchedBy": {
        "distro": {
          "type": "alpine",
          "version": "3.10.9"
        },
        "package": {
          "name": "apk-tools",
          "version": "2.10.6-r0"
        },
        "namespace": "alpine:distro:alpine:3.10"
      },
      "found": {
        "vulnerabilityID": "CVE-2021-36159",
        "versionConstraint": "< 2.10.7-r0 (apk)"
      }
    }
  ]
}

Understanding match types

Grype determines how it matched a package to a vulnerability based on the available data sources. The match type indicates how the match was made:

  • exact-direct-match means the package name matched directly in a dedicated vulnerability feed. Grype searched the feed using the package name from your scan and found a matching vulnerability entry.

  • exact-indirect-match means the source package name matched in a dedicated vulnerability feed. This occurs when you scan a binary package (e.g., libcrypto1.1) but the feed tracks vulnerabilities under the source package (e.g., openssl). Grype searches the feed using the source package name and maps the results to the binary package.

  • cpe-match means Grype used Common Platform Enumeration (CPE) matching as a fallback when no exact match was found in ecosystem-specific feeds. CPE matching relies on CPE identifiers derived from package metadata and is less precise.

You can loosely think of the match type as a proxy for confidence level in the match, where exact-direct-match has the highest confidence, followed by exact-indirect-match, and finally cpe-match.

CPE matching occurs when:

  • No exact package match exists in ecosystem-specific feeds
  • Grype falls back to the NVD database
  • The match is based on CPE identifiers derived from package metadata

This match type has lower confidence because:

  • CPE matching is generic and not package-ecosystem aware
  • Package naming may not match CPE naming conventions exactly
  • Version ranges may be broader or less precise

Understanding version constraints

The found.versionConstraint field shows the version range from the vulnerability record that the package version falls within (and is therefore affected by the vulnerability). The format indicates the constraint type and the comparison logic used:

  • < 1.2.3 (apk) - Alpine package version constraint using apk version comparison
  • < 1.2.3 (deb) - Debian package version constraint using dpkg version comparison
  • < 1.2.3 (rpm) - RPM package version constraint using rpm version comparison
  • < 1.2.3 (python) - Python package version constraint using PEP 440 comparison
  • < 1.2.3 (semantic) - Semantic versioning constraint using semver comparison
  • < 1.2.3 (unknown) - Unknown version format (lower reliability)

The constraint type tells you how Grype compared versions. Ecosystem-specific formats (apk, deb, rpm) use that ecosystem’s version comparison rules, which handle epoch numbers, release tags, and other format-specific details correctly. Generic formats like unknown may have less precise matching.
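
When post-processing results, it can be handy to split a constraint string into its parts. This sketch handles only single-operator constraints; compound constraints (e.g. ">= 1.0, < 2.0 (semantic)") would need more work:

```python
import re

def parse_constraint(raw: str) -> dict:
    """Split a constraint like '< 2.10.7-r0 (apk)' into operator, version, and format.
    Single-operator constraints only; compound ranges are not handled here."""
    m = re.fullmatch(r"(?P<op>[<>=!]+)\s*(?P<version>\S+)\s+\((?P<format>[\w-]+)\)", raw.strip())
    if not m:
        raise ValueError(f"unrecognized constraint: {raw!r}")
    return m.groupdict()

print(parse_constraint("< 2.10.7-r0 (apk)"))
# {'op': '<', 'version': '2.10.7-r0', 'format': 'apk'}
```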

Filtering and querying results

Use jq to filter and analyze JSON output based on match type, severity, or data source.

Filter by match type

Show only high-confidence exact matches:

grype <image> -o json | jq '.matches[] | select(.matchDetails[0].type == "exact-direct-match")'

Exclude CPE matches:

grype <image> -o json | jq '.matches[] | select(.matchDetails[0].type != "cpe-match")'

Filter by data source

Show only matches from Alpine security data:

grype <image> -o json | jq '.matches[] | select(.vulnerability.namespace | startswith("alpine:"))'

Show only GitHub Security Advisories:

grype <image> -o json | jq '.matches[] | select(.vulnerability.namespace | startswith("github:"))'

Filter by severity

Show only Critical and High severity vulnerabilities:

grype <image> -o json | jq '.matches[] | select(.vulnerability.severity == "Critical" or .vulnerability.severity == "High")'

Combine filters

Show Critical/High severity vulnerabilities with exact matches only:

grype <image> -o json | jq '.matches[] | select(
  (.vulnerability.severity == "Critical" or .vulnerability.severity == "High") and
  (.matchDetails[0].type == "exact-direct-match" or .matchDetails[0].type == "exact-indirect-match")
)'

Count matches by type

grype <image> -o json | jq '[.matches[].matchDetails[0].type] | group_by(.) | map({type: .[0], count: length})'

Understanding a match

Each match in JSON output contains information about how Grype found the vulnerability and links to the original sources. This lets you examine what Grype looked at and verify the match yourself.

Reference URLs

The vulnerability object includes reference URLs from the vulnerability data:

grype <image> -o json | jq '.matches[].vulnerability | {id, dataSource, urls}'

  • dataSource - URL to the vulnerability record in Grype’s data feed
  • urls - Reference URLs from the original vulnerability disclosure (CVE details, vendor advisories, etc.)

These URLs point to the original vulnerability information that Grype used.

What Grype searched for

The matchDetails[].searchedBy field shows what Grype looked at when searching for vulnerabilities:

grype <image> -o json | jq '.matches[].matchDetails[].searchedBy'

For distro packages, this shows the distro, package name, and version. For CPE matches, this shows the CPE strings Grype constructed. This lets you see exactly what Grype queried.

What Grype found

The matchDetails[].found field shows what matched in the vulnerability data:

grype <image> -o json | jq '.matches[].matchDetails[] | {found, type}'

This shows the vulnerability ID and version constraint that matched, along with the match type. Comparing searchedBy and found shows how Grype connected your package to the vulnerability.


3.2.6 - Filter scan results

Control which vulnerabilities Grype reports using filtering flags, configuration rules, and VEX documents

Learn how to control which vulnerabilities Grype reports using filtering flags and configuration options.

Set failure thresholds

Use the --fail-on flag to control Grype’s exit code based on vulnerability severity. This can be helpful for integrating Grype into CI/CD pipelines.

The --fail-on flag (alias: -f) sets a severity threshold. When scanning completes, Grype exits with code 2 if it found vulnerabilities at or above the specified severity:

grype alpine:3.10 --fail-on high

You’ll see vulnerabilities at or above the threshold:

NAME          INSTALLED  FIXED IN   TYPE  VULNERABILITY   SEVERITY  EPSS           RISK
zlib          1.2.11-r1             apk   CVE-2022-37434  Critical  92.7% (99th)   87.1
libcrypto1.1  1.1.1k-r0             apk   CVE-2023-0286   High      89.1% (99th)   66.4
libssl1.1     1.1.1k-r0             apk   CVE-2023-0286   High      89.1% (99th)   66.4
...
[0026] ERROR discovered vulnerabilities at or above the severity threshold

# Exit code: 2

Valid severity values, from lowest to highest:

negligible < low < medium < high < critical

When you set a threshold, Grype fails if it finds vulnerabilities at that severity or higher. For example, --fail-on high fails on both high and critical vulnerabilities.
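
The threshold check boils down to an ordered comparison, sketched below. Grype also has an Unknown severity, which this sketch leaves out:

```python
SEVERITIES = ["negligible", "low", "medium", "high", "critical"]

def should_fail(found: list[str], fail_on: str) -> bool:
    """Sketch of the --fail-on check: any finding at or above the threshold fails the scan.
    Grype's real check also accounts for Unknown severity, omitted here."""
    threshold = SEVERITIES.index(fail_on.lower())
    return any(SEVERITIES.index(s.lower()) >= threshold for s in found)

print(should_fail(["Medium", "Critical"], "high"))  # True  -> grype exits with code 2
print(should_fail(["Medium", "Low"], "high"))       # False -> grype exits with code 0
```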

Filter by fix availability

Grype provides flags to filter vulnerabilities based on whether fixes are available.

Show only vulnerabilities with fixes available

The --only-fixed flag filters scan results to show only vulnerabilities that have fixes available:

grype alpine:latest --only-fixed

This flag filters out vulnerabilities with these fix states:

  • not-fixed - No fix is available yet
  • wont-fix - Maintainers won’t fix this vulnerability
  • unknown - No fix state information is available

This is useful when you want to focus on actionable vulnerabilities that you can remediate by updating packages.

Show only vulnerabilities without fixes available

The --only-notfixed flag filters scan results to show only vulnerabilities that do not have fixes available:

grype alpine:3.10 --only-notfixed

These vulnerabilities don’t have fixes available yet:

NAME          INSTALLED  TYPE  VULNERABILITY   SEVERITY  EPSS           RISK
zlib          1.2.11-r1  apk   CVE-2022-37434  Critical  92.7% (99th)   87.1
libcrypto1.1  1.1.1k-r0  apk   CVE-2023-0286   High      89.1% (99th)   66.4
libssl1.1     1.1.1k-r0  apk   CVE-2023-0286   High      89.1% (99th)   66.4
libcrypto1.1  1.1.1k-r0  apk   CVE-2023-2650   Medium    92.0% (99th)   52.9
libssl1.1     1.1.1k-r0  apk   CVE-2023-2650   Medium    92.0% (99th)   52.9
...

This flag filters out vulnerabilities with fix state fixed. Because none of the remaining vulnerabilities have a fix, the output above has no FIXED-IN column.

This is useful when you want to identify vulnerabilities that require alternative mitigation strategies, such as:

  • Accepting the risk
  • Implementing compensating controls
  • Waiting for a fix to become available
  • Switching to a different package

Understanding fix states

Grype assigns one of four fix states to each vulnerability based on information from vulnerability data sources:

Fix State   Description
fixed       A fix is available for this vulnerability
not-fixed   No fix is available yet, but maintainers may release one
wont-fix    Package maintainers have decided not to fix this vulnerability
unknown     No fix state information is available

Vulnerabilities with no fix state information are treated as unknown. This ensures Grype handles missing data consistently.

Ignore specific fix states

The --ignore-states flag gives you fine-grained control over which fix states to filter out. You can ignore one or more fix states by specifying them as a comma-separated list:

# Ignore vulnerabilities with unknown fix states
grype alpine:3.10 --ignore-states unknown

Only vulnerabilities with known fix states appear:

NAME       INSTALLED  FIXED IN   TYPE  VULNERABILITY   SEVERITY  EPSS         RISK
apk-tools  2.10.6-r0  2.10.7-r0  apk   CVE-2021-36159  Critical  1.0% (76th)  0.9

# Ignore both wont-fix and not-fixed vulnerabilities
grype alpine:3.10 --ignore-states wont-fix,not-fixed

This leaves only fixed vulnerabilities and those with unknown states:

NAME          INSTALLED  FIXED IN   TYPE  VULNERABILITY   SEVERITY  EPSS           RISK
zlib          1.2.11-r1             apk   CVE-2022-37434  Critical  92.7% (99th)   87.1
libcrypto1.1  1.1.1k-r0             apk   CVE-2023-0286   High      89.1% (99th)   66.4
libssl1.1     1.1.1k-r0             apk   CVE-2023-0286   High      89.1% (99th)   66.4
apk-tools     2.10.6-r0  2.10.7-r0  apk   CVE-2021-36159  Critical  1.0% (76th)    0.9
...

Valid fix state values are: fixed, not-fixed, wont-fix, unknown.

If you specify an invalid fix state, Grype returns an error:

grype alpine:latest --ignore-states invalid-state
# Error: unknown fix state invalid-state was supplied for --ignore-states

Combining severity with fix filtering

You can combine --fail-on with fix state filtering to create sophisticated CI/CD policies:

# Fail only if fixable critical or high vulnerabilities exist
grype alpine:3.10 --fail-on high --only-fixed

Grype now only fails on fixable critical/high vulnerabilities:

NAME       INSTALLED  FIXED IN   TYPE  VULNERABILITY   SEVERITY  EPSS         RISK
apk-tools  2.10.6-r0  2.10.7-r0  apk   CVE-2021-36159  Critical  1.0% (76th)  0.9
[0026] ERROR discovered vulnerabilities at or above the severity threshold

# Exit code: 2

# Fail on medium or higher, but ignore wont-fix vulnerabilities
grype alpine:latest --fail-on medium --ignore-states wont-fix

The --fail-on check runs after vulnerability matching and filtering. Grype converts all filtering options (--only-fixed, --only-notfixed, --ignore-states, configuration ignore rules, and VEX documents) into ignore rules and applies them during matching. The severity threshold check then evaluates only the remaining vulnerabilities.

View filtered results

By default, Grype hides filtered vulnerabilities from output. You can view them in table output with --show-suppressed or in JSON output by inspecting the ignoredMatches field.

In table output

The --show-suppressed flag displays filtered vulnerabilities in table output with a (suppressed) label:

grype alpine:3.10 --only-fixed --show-suppressed

Filtered vulnerabilities now appear with a (suppressed) label:

NAME          INSTALLED  FIXED IN   TYPE  VULNERABILITY   SEVERITY  EPSS           RISK
apk-tools     2.10.6-r0  2.10.7-r0  apk   CVE-2021-36159  Critical  1.0% (76th)    0.9
zlib          1.2.11-r1             apk   CVE-2018-25032  High      < 0.1% (26th)  < 0.1  (suppressed)
libcrypto1.1  1.1.1k-r0             apk   CVE-2021-3711   Critical  2.7% (85th)    2.4    (suppressed)
libssl1.1     1.1.1k-r0             apk   CVE-2021-3711   Critical  2.7% (85th)    2.4    (suppressed)
libcrypto1.1  1.1.1k-r0             apk   CVE-2021-3712   High      0.5% (66th)    0.4    (suppressed)
libssl1.1     1.1.1k-r0             apk   CVE-2021-3712   High      0.5% (66th)    0.4    (suppressed)
...

In JSON output

When you use JSON output (-o json), Grype places filtered vulnerabilities in the ignoredMatches array. Non-filtered vulnerabilities appear in the matches array.

For details on the complete JSON structure and all fields, see Reading JSON output.

View the structure:

grype alpine:3.10 --only-fixed -o json | jq '{matches, ignoredMatches}'

The structure separates matched from ignored vulnerabilities:

{
  "matches": [
    {
      "vulnerability": {...},
      "artifact": {...},
      ...
    }
  ],
  "ignoredMatches": [
    {
      "vulnerability": {...},
      "artifact": {...},
      ...
    },
    ...
  ]
}

Inspect a specific ignored vulnerability:

grype alpine:3.10 --only-fixed -o json | jq '.ignoredMatches[0] | {vulnerability: .vulnerability.id, package: .artifact.name, reason: .appliedIgnoreRules}'

Each ignored match shows why it was filtered:

{
  "vulnerability": "CVE-2018-25032",
  "package": "zlib",
  "reason": [
    {
      "namespace": "",
      "fix-state": "unknown"
    }
  ]
}

The appliedIgnoreRules field shows why each vulnerability was filtered.
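That structure is easy to post-process. For example, a few lines of Python can summarize why each match was suppressed (the embedded sample mirrors the output shown above; in practice you would read the output of `grype <image> -o json`):

```python
import json

# Sample grype JSON output (normally piped from: grype alpine:3.10 --only-fixed -o json)
report = json.loads("""
{
  "matches": [],
  "ignoredMatches": [
    {
      "vulnerability": {"id": "CVE-2018-25032"},
      "artifact": {"name": "zlib"},
      "appliedIgnoreRules": [{"namespace": "", "fix-state": "unknown"}]
    }
  ]
}
""")

summary = []
for m in report.get("ignoredMatches", []):
    reasons = ", ".join(r.get("fix-state", "?") for r in m["appliedIgnoreRules"])
    summary.append(f'{m["vulnerability"]["id"]} ({m["artifact"]["name"]}): fix-state={reasons}')

print("\n".join(summary))
```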

Ignore specific vulnerabilities or packages

You can create ignore rules in your .grype.yaml configuration file to exclude specific vulnerabilities or packages from scan results.

Use ignore rules

Create a .grype.yaml file with ignore rules:

ignore:
  # Ignore specific CVEs
  - vulnerability: CVE-2008-4318
  - vulnerability: GHSA-1234-5678-90ab

  # Ignore all vulnerabilities in a package
  - package:
      name: libcurl

  # Ignore vulnerabilities in a specific version
  - package:
      name: openssl
      version: 1.1.1g

  # Ignore by package type
  - package:
      type: npm
      name: lodash

  # Ignore by package location (supports glob patterns)
  - package:
      location: "/usr/local/lib/node_modules/**"

  # Ignore by fix state
  - vulnerability: CVE-2020-1234
    fix-state: not-fixed

  # Combine multiple criteria
  - vulnerability: CVE-2008-4318
    fix-state: unknown
    package:
      name: libcurl
      version: 1.5.1

Valid fix-state values are: fixed, not-fixed, wont-fix, unknown.

When you combine multiple criteria in a rule, all criteria must match for the rule to apply.
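The "all criteria must match" semantics amount to an AND over the rule's fields. A simplified sketch (the rule and match shapes are stand-ins, not Grype's internal types):

```python
# Illustrative AND-matching of ignore-rule criteria (simplified, not Grype's code).
def rule_applies(rule, match):
    """Every criterion present in the rule must match for the rule to apply."""
    checks = {
        "vulnerability":   lambda v: match["vulnerability"] == v,
        "fix-state":       lambda v: match["fix_state"] == v,
        "package-name":    lambda v: match["package"]["name"] == v,
        "package-version": lambda v: match["package"]["version"] == v,
    }
    return all(checks[key](value) for key, value in rule.items())

match = {
    "vulnerability": "CVE-2008-4318",
    "fix_state": "unknown",
    "package": {"name": "libcurl", "version": "1.5.1"},
}

rule = {"vulnerability": "CVE-2008-4318", "fix-state": "unknown",
        "package-name": "libcurl", "package-version": "1.5.1"}

print(rule_applies(rule, match))                            # every criterion matches
print(rule_applies({**rule, "fix-state": "fixed"}, match))  # one mismatch: rule does not apply
```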

Use VEX documents

Grype supports Vulnerability Exploitability eXchange (VEX) documents to provide information about which vulnerabilities affect your software. VEX allows you to communicate vulnerability status in a machine-readable format that follows CISA minimum requirements.

Grype supports two VEX formats as input:

  • OpenVEX - Compact JSON format with minimal required fields
  • CSAF VEX - Comprehensive format with rich advisory metadata (OASIS standard)

VEX-filtered vulnerabilities behave like other filtered results:

  • Table output: Hidden by default, shown with --show-suppressed flag and marked as (suppressed by VEX)
  • JSON output: Moved to the ignoredMatches array with VEX rules listed in appliedIgnoreRules

This guide uses OpenVEX examples for simplicity, but both formats work identically with Grype. The core concepts (status values, product identification, filtering behavior) apply to both formats.

Basic usage

Use the --vex flag to provide one or more VEX documents:

# Single VEX document
grype alpine:latest --vex vex-report.json

# Multiple VEX documents
grype alpine:latest --vex vex-1.json,vex-2.json

You can also specify VEX documents in your configuration file:

# .grype.yaml file
vex-documents:
  - vex-report.json
  - vex-findings.json

VEX status values

VEX documents use four standard status values:

Filtering statuses (automatically applied):

  • not_affected - Product is not affected by the vulnerability
  • fixed - Vulnerability has been remediated

Augmenting statuses (require explicit configuration):

  • affected - Product is affected by the vulnerability
  • under_investigation - Impact is still being assessed

By default, Grype moves vulnerabilities with not_affected or fixed status to the ignored list. Vulnerabilities with affected or under_investigation status are only added to results when you enable augmentation:

vex-add: ["affected", "under_investigation"]

Creating VEX documents with vexctl

The easiest way to create OpenVEX documents is with vexctl:

# Create a VEX statement marking a CVE as not affecting your image
vexctl create \
  --product="pkg:oci/alpine@sha256:4b7ce07002c69e8f3d704a9c5d6fd3053be500b7f1c69fc0d80990c2ad8dd412" \
  --subcomponents="pkg:apk/alpine/busybox@1.37.0-r19" \
  --vuln="CVE-2024-58251" \
  --status="not_affected" \
  --justification="vulnerable_code_not_present" \
  --file="vex.json"

# Use the VEX document with Grype
grype alpine:3.22.2 --vex vex.json

You can also create VEX documents manually. Here’s an OpenVEX example:

{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://openvex.dev/docs/public/vex-07f09249682f6d9d2924be146078475538731fa0ee6a50ad3c9f33617e4a0be4",
  "author": "Alex Goodman",
  "version": 1,
  "statements": [
    {
      "vulnerability": {
        "name": "CVE-2024-58251"
      },
      "products": [
        {
          "@id": "pkg:oci/alpine@sha256:4b7ce07002c69e8f3d704a9c5d6fd3053be500b7f1c69fc0d80990c2ad8dd412",
          "subcomponents": [
            {
              "@id": "pkg:apk/alpine/busybox@1.37.0-r19"
            }
          ]
        }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_present",
      "timestamp": "2025-11-21T20:30:11.725672Z"
    }
  ],
  "timestamp": "2025-11-21T20:30:11Z"
}
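A small script can sanity-check which statements in a document like this will filter results. The `FILTERING` set reflects Grype's default behavior described above; the second statement is a hypothetical entry added for contrast:

```python
# Summarize which OpenVEX statements Grype suppresses by default (illustrative).
vex = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "statements": [
        {"vulnerability": {"name": "CVE-2024-58251"},
         "status": "not_affected",
         "justification": "vulnerable_code_not_present"},
        {"vulnerability": {"name": "CVE-2024-99999"},  # hypothetical second entry
         "status": "under_investigation"},
    ],
}

FILTERING = {"not_affected", "fixed"}              # suppressed by default
AUGMENTING = {"affected", "under_investigation"}   # only applied with vex-add

lines = []
for s in vex["statements"]:
    action = "suppressed" if s["status"] in FILTERING else "needs vex-add"
    lines.append(f'{s["vulnerability"]["name"]}: {s["status"]} -> {action}')
print("\n".join(lines))
```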

CSAF VEX documents have a more complex structure with product trees, branches, and vulnerability arrays. See the CSAF specification for complete structure details.

Justifications for not_affected

OpenVEX provides standardized justification values when marking vulnerabilities as not_affected:

  • component_not_present - The component is not included in the product
  • vulnerable_code_not_present - The vulnerable code is not present
  • vulnerable_code_not_in_execute_path - The vulnerable code cannot be executed
  • vulnerable_code_cannot_be_controlled_by_adversary - The vulnerability cannot be exploited
  • inline_mitigations_already_exist - Mitigations prevent exploitation

CSAF VEX uses a richer product status model with categories like known_not_affected that Grype maps to the standard VEX statuses. See the CSAF specification for details on CSAF-specific fields.

These justifications help security teams understand the rationale behind VEX statements.

Product identification

Grype matches VEX statements to scan results using several identification methods:

Container images (most reliable):

"products": [
  { "@id": "pkg:oci/alpine@sha256:124c7d2707a0ee..." }
]

Image tags (less reliable, can change):

"products": [
  { "@id": "alpine:3.17" }
]

Individual packages via PURLs:

"products": [
  {
    "@id": "pkg:oci/alpine@sha256:124c7d...",
    "subcomponents": [
      { "@id": "pkg:apk/alpine/libssl3@3.0.8-r3" }
    ]
  }
]

Use container digests for the most reliable matching, as tags can move to different images over time.
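Conceptually, a statement applies when one of its product @ids identifies the scanned image and, if subcomponents are listed, the affected package's PURL is among them. A simplified sketch of that logic (not Grype's actual matcher):

```python
# Simplified VEX product/subcomponent matching (illustrative only).
def statement_applies(statement, image_refs, package_purl):
    for product in statement["products"]:
        if product["@id"] not in image_refs:
            continue  # this product does not identify the scanned image
        subs = [s["@id"] for s in product.get("subcomponents", [])]
        # no subcomponents listed means the statement covers the whole product
        if not subs or package_purl in subs:
            return True
    return False

stmt = {"products": [{
    "@id": "pkg:oci/alpine@sha256:124c7d...",
    "subcomponents": [{"@id": "pkg:apk/alpine/libssl3@3.0.8-r3"}],
}]}
refs = {"pkg:oci/alpine@sha256:124c7d..."}

print(statement_applies(stmt, refs, "pkg:apk/alpine/libssl3@3.0.8-r3"))  # listed subcomponent
print(statement_applies(stmt, refs, "pkg:apk/alpine/busybox@1.36.1"))    # not listed
```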

Next steps

Additional resources:

3.2.7 - Vulnerability Database

Using the Grype Vulnerability Database

Grype uses a locally cached database of known vulnerabilities when searching a container, directory, or SBOM for security vulnerabilities. Anchore collates vulnerability data from common feeds, and publishes that data online, at no cost to users.

Updating the local database

When Grype launches, it checks for an existing vulnerability database and looks online for a newer one. If an update is available, Grype downloads the new database automatically.

To update the database manually, use the following command:

grype db update

If you would like to check whether a new database is available without actually updating, use:

grype db check

This will return 0 if the database is up to date, and 1 if an update is available.

Or, you can delete the local database entirely:

grype db delete

Searching the database

The Grype vulnerability database contains detailed information about vulnerabilities and affected packages across all supported ecosystems. While you can examine the raw SQLite database directly (use grype db status to find the local storage path), the grype db search commands provide a much easier way to explore what’s in the database.

Search for affected packages

Use grype db search to find packages affected by vulnerabilities. This is useful when you want to understand what packages are impacted by a specific CVE, or when you want to see all vulnerabilities affecting a particular package.

For example, to find all packages affected by Log4Shell across all ecosystems:

grype db search --vuln CVE-2021-44228

To find all vulnerable versions of the log4j package:

grype db search --pkg log4j

To search by PURL or CPE formats:

grype db search --pkg 'pkg:rpm/redhat/openssl'
grype db search --pkg 'cpe:2.3:a:jetty:jetty_http_server:*:*:*:*:*:*:*:*'

Note that any version value included in the PURL or CPE is ignored entirely.

You can also use these options in combination to filter results further (finding the common intersection); in this example, finding packages named “openssl” in Alpine Linux 3.18 that have fixes available:

grype db search --pkg openssl --distro alpine-3.18 --fixed-state fixed

Search for vulnerabilities

Use grype db search vuln to look up vulnerability details directly, including descriptions, severity ratings, and data sources.

This is subtly different from searching for affected packages, as it focuses on the vulnerabilities themselves, so you can find information about vulnerabilities that may not affect any packages (there are a few reasons why this could happen).

To view full metadata for a specific CVE:

grype db search vuln CVE-2021-44228

To filter by data provider:

grype db search vuln CVE-2021-44228 --provider nvd

Next steps

Now that you understand how Grype’s vulnerability database works, here are additional resources:

3.3 - License Scanning

Learn how to scan container images and filesystems for software licenses covering detection, compliance checking, and managing license obligations.

License scanning involves automatically identifying and analyzing the licenses associated with the various software components used in a project.

This is important because most software relies on third-party and open-source components, each with its own licensing terms that dictate how the software can be used, modified, and distributed, and failing to comply can lead to legal issues.

Grant is an open-source command-line tool designed to discover and report on the software licenses present in container images, SBOM documents, or filesystems. It helps users understand the licenses of their software dependencies and can check them against user-defined policies to ensure compliance.

3.3.1 - Getting Started

License Scanning Getting Started

Introduction

Grant searches SBOMs for licenses and the packages they belong to.

Install the latest Grant release

Grant is provided as a single compiled executable. Issue the command for your platform to download the latest release of Grant. The full list of official and community maintained packages can be found on the installation page.

curl -sSfL https://get.anchore.io/grant | sudo sh -s -- -b /usr/local/bin

brew install grant

  1. Scan a container for all the licenses used

grant alpine:latest

Grant will produce a list of licenses.

* alpine:latest
  * license matches for rule: default-deny-all; matched with pattern *
    * Apache-2.0
    * BSD-2-Clause
    * GPL-2.0-only
    * GPL-2.0-or-later
    * MIT
    * MPL-2.0
    * Zlib
  2. Scan a container for OSI-compliant licenses

Now scan a different container, one containing software distributed under non-OSI-compliant licenses.

grant check pytorch/pytorch:latest --osi-approved

Read more in our License Auditing User Guide.

3.4 - Private Registries

Configure authentication for scanning container images from private registries using credentials, registry tokens, and credential helpers.

The Anchore OSS tools analyze container images from private registries using multiple authentication methods. When a container runtime isn’t available, the tools use the go-containerregistry library to handle authentication directly with registries.

When using a container runtime explicitly (for instance, with the --from docker flag) the tools defer to the runtime’s authentication mechanisms. However, if the registry source is used, the tools use the Docker configuration file and any configured credential helpers to authenticate with the registry.

Registry tokens and personal access tokens

Many registries support personal access tokens (PATs) or registry tokens for authentication. Use docker login with your token, then the tools can use the cached credentials:

# GitHub Container Registry - create token at https://github.com/settings/tokens (needs read:packages scope)
docker login ghcr.io -u <username> -p <token>
syft ghcr.io/username/private-image:latest

# GitLab Container Registry - use deploy token or personal access token
docker login registry.gitlab.com -u <username> -p <token>
syft registry.gitlab.com/group/project/image:latest

The tools read credentials from ~/.docker/config.json, the same file Docker uses when you run docker login. This file can contain either basic authentication credentials or credential helper configurations.

Here are examples of what the config looks like if you are crafting it manually:

Basic authentication example:

{
  "auths": {
    "registry.example.com": {
      "username": "AzureDiamond",
      "password": "hunter2"
    }
  }
}

Token authentication example:

// token auth, where credentials are base64-encoded
{
  "auths": {
    "ghcr.io": {
      "auth": "dXNlcm5hb...m5h=="
    }
  }
}
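The auth value is simply the base64 encoding of `username:password` (or `username:token`), which you can compute yourself when crafting the file by hand. The credentials below are placeholders:

```python
import base64

# Compute the "auth" field for a hand-written config.json (placeholder credentials).
username, token = "octocat", "ghp_exampletoken"
auth = base64.b64encode(f"{username}:{token}".encode()).decode()
print(auth)

# Decoding recovers the raw credentials, so protect this file accordingly.
decoded = base64.b64decode(auth).decode()
print(decoded)
```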

By default, the tools look for credentials in ~/.docker/config.json. You can override this location using the DOCKER_CONFIG environment variable:

export DOCKER_CONFIG=/path/to/custom/config
syft registry.example.com/private/image:latest

You can also use this in a container:

docker run -v ./config.json:/auth/config.json -e "DOCKER_CONFIG=/auth" anchore/syft:latest <private_image>

Docker credential helpers

Docker credential helpers are specialized programs that securely store and retrieve registry credentials. They’re particularly useful for cloud provider registries that use dynamic, short-lived tokens.

Instead of storing passwords as plaintext in config.json, you configure helpers that generate credentials on-demand. This is facilitated by the google/go-containerregistry library.

Configuring credential helpers

Add credential helpers to your config.json:

{
  "credHelpers": {
    // using the docker-credential-gcr for Google Container Registry and Artifact Registry
    "gcr.io": "gcr",
    "us-docker.pkg.dev": "gcloud",

    // using the amazon-ecr-credential-helper for AWS Elastic Container Registry
    "123456789012.dkr.ecr.us-west-2.amazonaws.com": "ecr-login",

    // using the docker-credential-acr for Azure Container Registry
    "myregistry.azurecr.io": "acr"
  }
}

When the tools access these registries, they execute the corresponding helper program (for example, docker-credential-gcr, or more generically docker-credential-NAME where NAME is the config value) to obtain credentials.
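The helper protocol itself is simple: the client runs `docker-credential-<name> get`, writes the registry server URL to the helper's stdin, and reads a JSON credential (ServerURL, Username, Secret) from stdout. A sketch using a stand-in helper function — a real helper is a separate binary invoked via subprocess, and the values below are placeholders:

```python
import json

def fake_helper_get(server_url: str) -> str:
    """Stand-in for `docker-credential-<name> get`; real helpers mint short-lived tokens."""
    return json.dumps({"ServerURL": server_url,
                       "Username": "<token-user>",
                       "Secret": "<short-lived-token>"})

cred_helpers = {"gcr.io": "gcr"}  # the credHelpers section of config.json

def resolve(registry: str):
    helper = cred_helpers.get(registry)
    if helper is None:
        return None  # fall back to static "auths" entries in config.json
    # A real client would run something like:
    #   subprocess.run(["docker-credential-" + helper, "get"],
    #                  input=registry, capture_output=True, text=True)
    return json.loads(fake_helper_get(registry))

cred = resolve("gcr.io")
print(cred["Username"], cred["Secret"])
print(resolve("registry.example.com"))
```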

For more information about Docker credential helpers for various cloud providers:

Within Kubernetes

When running the tools in Kubernetes and you need access to private registries, mount Docker credentials as a secret.

Create secret

Create a Kubernetes secret containing your Docker credentials. The key config.json is important—it becomes the filename when mounted into the pod. For more information about the credential file format, see the go-containerregistry config docs.

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-config
  namespace: syft
data:
  config.json: <base64-encoded-config.json>

Create the secret:

# Base64 encode your config.json
cat ~/.docker/config.json | base64

# Apply the secret
kubectl apply -f secret.yaml

Configure pod

Configure your pod to use the credential secret. The DOCKER_CONFIG environment variable tells the tools where to look for credentials. Setting DOCKER_CONFIG=/config means the tools look for credentials at /config/config.json. This matches the secret key config.json we created above—when Kubernetes mounts secrets, each key becomes a file with that name.

The volumeMounts section mounts the secret to /config, and the volumes section references the secret created in the previous step.

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: syft-k8s-usage
spec:
  containers:
    - image: anchore/syft:latest
      name: syft-private-registry-demo
      env:
        - name: DOCKER_CONFIG
          value: /config
      volumeMounts:
        - mountPath: /config
          name: registry-config
          readOnly: true
      args:
        - <private-image>
  volumes:
    - name: registry-config
      secret:
        secretName: registry-config

Apply and check logs:

kubectl apply -f pod.yaml
kubectl logs syft-private-registry-demo

4 - Capabilities

Summary of package analysis and vulnerability scanning capabilities across ecosystems

Capabilities describe the cross-cutting features available across Anchore’s tools:

  • Package analysis: What Syft can catalog from package manifests, lock files, and installed packages
  • Vulnerability scanning: What Grype can detect using vulnerability databases and matching rules

These capabilities are ecosystem-specific. For example, Python’s capabilities differ from Go’s, and Ubuntu’s capabilities differ from Alpine’s.

Default capabilities require no network access or special configuration (other than having a vulnerability DB downloaded). Some capabilities are conditionally supported, requiring additional configuration or online access to function.

Vulnerability scanning capabilities

Vulnerability data sources vary in the information they provide and in how that information should be interpreted.

Disclosure and fix information

In terms of disclosures and fixes, each data source can be described along the following dimensions:

  • Independent Disclosure: Whether the advisory discloses the vulnerability regardless of fix availability. Sources with this capability report vulnerabilities even when no fix is available yet.

  • Disclosure Date: Whether the data source provides the date when a vulnerability was first publicly disclosed. This helps you understand the timeline of vulnerability exposure.

  • Fix Versions: Whether the data source specifies which package versions contain fixes for a vulnerability. This allows Grype to determine if an installed package version is vulnerable or fixed.

  • Fix Date: Whether the advisory includes a date when the fix was made available. This helps you understand the timeline of vulnerability remediation.

Track by source package

Some ecosystems have parent packages where the source code for the current package is maintained. For example, Debian's libcrypto is part of the larger openssl package (where openssl is denoted the origin package). The same is true for Red Hat-based packages, except the parent is denoted the srcrpm package.

In ecosystems like this, vulnerabilities are often disclosed and fixed at the parent package level (origin and srcrpm). More critically, the parent packages are often not installed on the system, making it impossible to match vulnerabilities against them directly. However, the downstream package's metadata typically records the parent package name and version, which Syft extracts during package analysis.
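A minimal sketch of why the recorded parent matters for matching — package names and advisory data here are illustrative:

```python
# Illustrative source-package matching: advisories are filed against the parent package.
advisories = {"openssl": ["CVE-2023-0464"]}  # disclosed at the origin/srcrpm level

installed = [
    {"name": "libcrypto3", "origin": "openssl"},  # origin recorded in package metadata
    {"name": "libssl3",    "origin": "openssl"},
    {"name": "zlib1g",     "origin": "zlib"},
]

# Match each installed (downstream) package through its parent:
findings = [(pkg["name"], cve)
            for pkg in installed
            for cve in advisories.get(pkg["origin"], [])]
for name, cve in findings:
    print(f"{name}: {cve} (matched via source package)")
```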

Package analysis capabilities

Dependencies

We describe Syft's ability to capture dependency information along the following dimensions:

  • Depth: How far into the true dependency graph we are able to discover package nodes.

    • direct: only captures dependencies explicitly declared by the project, but not necessarily dependencies of those dependencies

    • transitive: all possible depths of dependencies are captured

  • Edges: Whether we are able to capture relationships between packages, and if so, describe the topology of those relationships.

    • flat: we can capture the root package relative to all other dependencies, but are unaware of relationships between dependencies (a simple star topology, where all dependencies point to the root package)

    • complete: all possible relationships between packages are captured (the full dependency graph)

  • Kinds: The types of dependencies we are able to capture.

    • runtime: dependencies required for the package to function at runtime

    • dev: dependencies required for development

Licenses

Indicates whether Syft can detect and catalog license information from package metadata. When supported, Syft extracts license declarations from package manifests, metadata files, or installed package databases.

Package manager features

Syft can extract various package manager metadata beyond basic package information:

  • Files: Whether Syft can catalog the list of files that are part of a package installation. This provides visibility into all files installed by the package manager.

  • Digests: Whether Syft can capture file checksums (digests/hashes) for individual files within a package. This enables integrity verification of installed files. Note: this is not necessarily the actual hash of the file, but instead the claims made by the package manager about those files. We capture actual file hashes in the files section of the SBOM.

  • Integrity Hash: Whether Syft can capture a single package-level integrity hash used by package managers to verify the package archive itself (for example, the go.sum entries for Go packages; see https://go.dev/ref/mod#go-sum-files).
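The difference between a claimed digest and an actual hash comes down to comparing the two. A toy verification check, with illustrative data:

```python
import hashlib

# Compare a package manager's *claimed* file digest against the file's *actual* hash.
def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"#!/bin/sh\necho hello\n"
claim = {"path": "/usr/bin/hello", "digest": sha256(original)}  # claim from the package DB

on_disk = b"#!/bin/sh\necho tampered\n"  # what is actually installed now
status = "ok" if sha256(on_disk) == claim["digest"] else "MODIFIED"
print(claim["path"], status)
```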

Next steps

4.1 - Supported operating systems

A high-level summary of which OS’s are supported

Syft and Grype support several operating systems for package cataloging and vulnerability detection. The table below shows which OS versions are supported and where Grype’s vulnerability data comes from.

| Operating System | Supported Versions | Vunnel Provider | Data Source |
| --- | --- | --- | --- |
| Alpine Linux | 3.2+, edge | alpine | Alpine SecDB |
| Amazon Linux | 2, 2022, 2023 | amazon | Amazon Linux Security Center |
| Azure Linux | 3.0 | mariner | Microsoft CBL-Mariner OVAL |
| CentOS | 5, 6, 7, 8 | rhel | Red Hat Security Data API |
| Chainguard OS | rolling | chainguard | Chainguard Security |
| Debian | 7 (wheezy), 8 (jessie), 9 (stretch), 10 (buster), 11 (bullseye), 12 (bookworm), 13 (trixie), 14, unstable | debian | Debian Security Tracker |
| Echo OS | rolling | echo | ECHO Security |
| CBL-Mariner | 1.0, 2.0 | mariner | Microsoft CBL-Mariner OVAL |
| MinimOS | rolling | minimos | MINIMOS Security |
| Oracle Linux | 5, 6, 7, 8, 9, 10 | oracle | Oracle Linux Security |
| Raspberry Pi OS | 7 (wheezy), 8 (jessie), 9 (stretch), 10 (buster), 11 (bullseye), 12 (bookworm), 13 (trixie), 14, unstable | debian | Debian Security Tracker |
| Red Hat Enterprise Linux | 5, 6, 7, 8, 9, 10; EUS: 5.9, 6.4+, 7, 8.1, 8.2, 8.4, 8.6, 8.8, 9 | rhel | Red Hat Security Data API |
| Rocky Linux | 5, 6, 7, 8, 9, 10 | rhel | Red Hat Security Data API |
| SUSE Linux Enterprise Server | 11, 12, 15 | sles | SUSE Security OVAL |
| Ubuntu | 12.04 (precise), 12.10 (quantal), 13.04 (raring), 14.04 (trusty), 14.10 (utopic), 15.04 (vivid), 15.10 (wily), 16.04 (xenial), 16.10 (yakkety), 17.04 (zesty), 17.10 (artful), 18.04 (bionic), 18.10 (cosmic), 19.04 (disco), 19.10 (eoan), 20.04 (focal), 20.10 (groovy), 21.04 (hirsute), 21.10 (impish), 22.04 (jammy), 22.10 (kinetic), 23.04 (lunar), 23.10 (mantic), 24.04 (noble), 24.10 (oracular), 25.04 (plucky), 25.10 | ubuntu | Ubuntu CVE Tracker |
| Wolfi | rolling | wolfi | Wolfi Security |

4.2 - Supported package ecosystems

A high-level summary of all package detection capabilities across ecosystems

The table below shows which ecosystems support package analysis and vulnerability scanning.

EcosystemCataloger + EvidenceLicensesDependenciesFiles
AI
gguf-cataloger
*.gguf
ALPM
alpm-db-cataloger
var/lib/pacman/local/**/desc
APK
apk-db-cataloger
lib/apk/db/installed
Binary
binary-classifier-cataloger
arangodb-binary (arangosh), bash-binary (bash), busybox-binary (busybox), chrome-binary (chrome), consul-binary (consul), curl-binary (curl), dart-binary (dart), elixir-binary (elixir), elixir-library (elixir/ebin/elixir.app), erlang-alpine-binary (beam.smp), erlang-binary (erlexec), erlang-library (liberts_internal.a), ffmpeg-binary (ffmpeg), ffmpeg-library (libav*, libswresample*), fluent-bit-binary (fluent-bit), gcc-binary (gcc), go-binary (go), go-binary-hint (VERSION*), gzip-binary (gzip), haproxy-binary (haproxy), hashicorp-vault-binary (vault), haskell-cabal-binary (cabal), haskell-ghc-binary (ghc*), haskell-stack-binary (stack), helm (helm), httpd-binary (httpd), java-binary (java), java-jdb-binary (jdb), jq-binary (jq), julia-binary (libjulia-internal.so), lighttpd-binary (lighttpd), mariadb-binary ({mariadb,mysql}), memcached-binary (memcached), mysql-binary (mysql), nginx-binary (nginx), nodejs-binary (node), openssl-binary (openssl), perl-binary (perl), php-composer-binary (composer*), postgresql-binary (postgres), proftpd-binary (proftpd), pypy-binary-lib (libpypy*.so*), python-binary (python*), python-binary-lib (libpython*.so*), redis-binary (redis-server), ruby-binary (ruby), rust-standard-library-linux (libstd-*.so), rust-standard-library-macos (libstd-*.dylib), sqlcipher-binary (sqlcipher), swipl-binary (swipl), traefik-binary (traefik), util-linux-binary (getopt), wordpress-cli-binary (wp), xtrabackup-binary (xtrabackup), xz-binary (xz), zstd-binary (zstd)
elf-binary-package-cataloger
application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable (mimetype)
pe-binary-package-cataloger
*.dll, *.exe
Bitnami
bitnami-cataloger
/opt/bitnami/**/.spdx-*.spdx
C/C++
conan-cataloger
conan.lock
conan-cataloger
conanfile.txt
conan-info-cataloger
conaninfo.txt
Conda
conda-meta-cataloger
conda-meta/*.json
Dart
dart-pubspec-cataloger
pubspec.yml, pubspec.yaml
dart-pubspec-lock-cataloger
pubspec.lock
DPKG
deb-archive-cataloger
*.deb
dpkg-db-cataloger
lib/dpkg/status, lib/dpkg/status.d/*, lib/opkg/info/*.control, lib/opkg/status
Elixir
elixir-mix-lock-cataloger
mix.lock
Erlang
erlang-otp-application-cataloger
*.app
erlang-rebar-lock-cataloger
rebar.lock
GitHub Actions
github-action-workflow-usage-cataloger
.github/workflows/*.yaml, .github/workflows/*.yml
github-actions-usage-cataloger
.github/actions/*/action.yml, .github/actions/*/action.yaml
github-actions-usage-cataloger
.github/workflows/*.yaml, .github/workflows/*.yml
Go
go-module-binary-cataloger
application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable, application/x-executable (mimetype)
go-module-file-cataloger
go.mod
Haskell
haskell-cataloger
cabal.project.freeze
haskell-cataloger
stack.yaml.lock
haskell-cataloger
stack.yaml
Homebrew
homebrew-cataloger
Cellar/*/*/.brew/*.rb, Library/Taps/*/*/Formula/*.rb
Java
graalvm-native-image-cataloger
application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable (mimetype)
java-archive-cataloger
*.jar, *.war, *.ear, *.par, *.sar, *.nar, *.jpi, *.hpi, *.kar, *.lpkg
java-archive-cataloger
*.zip
java-archive-cataloger
*.tar, *.tar.gz, *.tgz, *.tar.bz, *.tar.bz2, *.tbz, *.tbz2, *.tar.br, *.tbr, *.tar.lz4, *.tlz4, *.tar.sz, *.tsz, *.tar.xz, *.txz, *.tar.zst, *.tzst, *.tar.zstd, *.tzstd
java-gradle-lockfile-cataloger
gradle.lockfile*
java-jvm-cataloger
release
java-pom-cataloger
*pom.xml
JavaScript
javascript-lock-cataloger
pnpm-lock.yaml
javascript-lock-cataloger
yarn.lock
javascript-lock-cataloger
package-lock.json
javascript-package-cataloger
package.json
Linux
linux-kernel-cataloger
kernel, kernel-*, vmlinux, vmlinux-*, vmlinuz, vmlinuz-*, lib/modules/**/*.ko
Lua
lua-rock-cataloger
*.rockspec
.NET
dotnet-deps-binary-cataloger
*.deps.json, *.dll, *.exe
dotnet-deps-cataloger deprecated
*.deps.json
dotnet-packages-lock-cataloger
packages.lock.json
dotnet-portable-executable-cataloger deprecated
*.dll, *.exe
Nix
nix-cataloger
nix/var/nix/db/db.sqlite, nix/store/*, nix/store/*.drv
nix-store-cataloger deprecated
nix/store/*, nix/store/*.drv
OCaml
opam-cataloger
*opam
PHP
php-composer-installed-cataloger
installed.json
php-composer-lock-cataloger
composer.lock
php-interpreter-cataloger
php*/**/*.so, php-fpm*, apache*/**/libphp*.so
php-pear-serialized-cataloger
php/.registry/**/*.reg
php-pecl-serialized-cataloger deprecated
php/.registry/.channel.*/*.reg
Portage
portage-cataloger
var/db/pkg/*/*/CONTENTS
Prolog
swipl-pack-cataloger
pack.pl
Python
python-installed-package-cataloger
*.egg-info, *dist-info/METADATA, *egg-info/PKG-INFO, *DIST-INFO/METADATA, *EGG-INFO/PKG-INFO
python-package-cataloger
pdm.lock
python-package-cataloger
uv.lock
python-package-cataloger
setup.py
python-package-cataloger
Pipfile.lock
python-package-cataloger
poetry.lock
python-package-cataloger
*requirements*.txt
R
r-package-cataloger
DESCRIPTION
RPM
rpm-archive-cataloger
*.rpm
rpm-db-cataloger
var/lib/rpmmanifest/container-manifest-2
rpm-db-cataloger
{var/lib,usr/share,usr/lib/sysimage}/rpm/{Packages,Packages.db,rpmdb.sqlite}
Ruby
ruby-gemfile-cataloger
Gemfile.lock
ruby-gemspec-cataloger
*.gemspec
ruby-installed-gemspec-cataloger
specifications/**/*.gemspec
Rust
cargo-auditable-binary-cataloger
application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable, application/x-executable (mimetype)
rust-cargo-lock-cataloger
Cargo.lock
SBOM
sbom-cataloger
*.syft.json, *.bom.*, *.bom, bom, *.sbom.*, *.sbom, sbom, *.cdx.*, *.cdx, *.spdx.*, *.spdx
Snap
snap-cataloger
snap/snapcraft.yaml
snap-cataloger
snap/manifest.yaml
snap-cataloger
doc/linux-modules-*/changelog.Debian.gz
snap-cataloger
usr/share/snappy/dpkg.yaml
snap-cataloger
meta/snap.yaml
Swift
cocoapods-cataloger
Podfile.lock
swift-package-manager-cataloger
Package.resolved, .package.resolved
Terraform
terraform-lock-cataloger
.terraform.lock.hcl
WordPress
wordpress-plugins-cataloger
wp-content/plugins/*/*.php

Legend:

  • : Supported by default
  • : Conditionally supported (requires configuration)
  • (empty): Not supported

4.3 - ALPM

ALPM package format used by Arch-based Linux distributions

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
alpm-db-cataloger
var/lib/pacman/local/**/desc
Depth: Transitive, Edges: Complete, Kinds: Runtime

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by Source Package
National Vulnerability Database (NVD)

Operating systems

| Operating System | Supported Versions | Provider | Data Source |
| --- | --- | --- | --- |
| Arch Linux | minimal support (CPE-based) | nvd | National Vulnerability Database (NVD) |

Contributing

Interested in contributing vulnerability scanning support?

Feel free to add a new vunnel provider for Arch-based distributions. See the existing issue in the Vunnel repository.

Next steps

4.4 - AI

AI model analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
gguf-cataloger
*.gguf

Vulnerability scanning

Unsupported at this time.

Next steps

4.5 - APK

APK package format analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
apk-db-cataloger
lib/apk/db/installed
Depth: Transitive, Edges: Complete, Kinds: Runtime

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by Source Package
Alpine SecDB
National Vulnerability Database (NVD)
Chainguard Security
MINIMOS Security
Wolfi Security

The APK vulnerability matcher searches all data sources for upstream packages, including NVD.

Operating systems

| Operating System | Supported Versions | Provider | Data Source |
| --- | --- | --- | --- |
| Alpine Linux | 3.2+, edge | alpine | Alpine SecDB |
| Chainguard OS | rolling | chainguard | Chainguard Security |
| MinimOS | rolling | minimos | MINIMOS Security |
| Wolfi | rolling | wolfi | Wolfi Security |

The APK vulnerability database (a.k.a. “SecDB”) includes data from the Alpine Security Tracker, which provides fix information for known vulnerabilities that affect Alpine Linux packages. This database only includes vulnerabilities that have fixes available and does not track unfixed vulnerabilities. The maintainers of the SecDB intend for the primary source of truth for disclosures to be the National Vulnerability Database (NVD).

This is true of other APK vulnerability data sources as well (such as Chainguard, Wolfi, and MinimOS).

Next steps

4.6 - Binary

Binary package analysis and vulnerability scanning capabilities

File analysis

Within the .files[].executable sections of the Syft JSON there is an analysis of what features and claims were found within a binary file.

This includes:

  • Imported libraries (use of shared libraries)
  • Exported symbols
  • Security features (like NX, PIE, RELRO, etc)

Security features that can be detected include:

  • if debugging symbols have been stripped
  • presence of Stack Canaries to protect against stack smashing attacks (which result from buffer overflows)
  • NoExecute (NX) bit support to prevent execution of code on the stack or heap
  • Relocation Read-Only (RelRO) to protect the Global Offset Table (GOT) from being overwritten (can be “partial” or “full”)
  • Position Independent Executable (PIE) support such that offsets are used instead of absolute addresses
  • if it is a Dynamic Shared Object (DSO) (not a security feature, but important for analysis)
  • LLVM SafeStack partitioning is in use, which separates unsafe stack objects from safe stack objects to mitigate stack-based memory corruption vulnerabilities
  • LLVM Control Flow Integrity (CFI) is in use, which adds runtime checks to ensure that indirect function calls only target valid functions, helping to prevent control-flow hijacking attacks
  • Clang Fortified Builds is enabled, which adds additional runtime checks for certain standard library functions to detect buffer overflows and other memory errors

When it comes to shared library requirement claims and exported symbol claims, these are used by Syft to:

  • associate file-to-file relationships (in the case of executables/shared libraries being distributed without a package manager)
  • associate file-to-package relationships (when an executable imports a shared library that is managed by a package manager)

Syft can synthesize a dependency graph from the imported libraries and exported symbols found within a set of binaries, even if all package manager information has been removed, allowing for a more complete SBOM to be generated. In a mixed case, where there are some packages managed by package managers and some binaries without package manager metadata, Syft can still use the binary analysis to fill in the gaps. Package-level relationships are preferred over file-level relationships when both are available, which simplifies the dependency graph.

Package analysis

ELF package notes

Syft is capable of looking at ELF formatted binaries, specifically the .note.package note, formatted using the convention established by the systemd project. The spec requires a PE/COFF section that wraps a JSON payload describing the package metadata for the binary; however, Syft does not require the PE/COFF wrapping and can extract the JSON payload directly from the ELF note.

Here’s an example of what the JSON payload looks like:

{
  "name": "my-application",
  "version": "1.2.3",
  "purl": "pkg:deb/debian/my-application@1.2.3?arch=amd64&distro=debian-12",
  "cpe": "cpe:2.3:a:vendor:my-application:1.2.3:*:*:*:*:*:*:*",
  "license": "Apache-2.0",
  "type": "deb"
}

Which, if stored in payload.json, can be injected into an existing ELF binary (shown here with a placeholder your-binary) using the following command:

objcopy --add-section .note.package=payload.json --set-section-flags .note.package=noload,readonly your-binary

Known patterns

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
binary-classifier-cataloger
(see table below)
elf-binary-package-cataloger
application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable (mimetype)
pe-binary-package-cataloger
*.dll, *.exe

Binary Package Details
Class | Files | PURL | CPEs
arangodb-binary | arangosh | pkg:generic/arangodb | cpe:2.3:a:arangodb:arangodb:*:*:*:*:*:*:*:*
bash-binary | bash | pkg:generic/bash | cpe:2.3:a:gnu:bash:*:*:*:*:*:*:*:*
busybox-binary | busybox | pkg:generic/busybox | cpe:2.3:a:busybox:busybox:*:*:*:*:*:*:*:*
chrome-binary | chrome | pkg:generic/chrome | cpe:2.3:a:google:chrome:*:*:*:*:*:*:*:*
consul-binary | consul | pkg:golang/github.com/hashicorp/consul | cpe:2.3:a:hashicorp:consul:*:*:*:*:*:*:*:*
curl-binary | curl | pkg:generic/curl | cpe:2.3:a:haxx:curl:*:*:*:*:*:*:*:*
dart-binary | dart | pkg:generic/dart | cpe:2.3:a:dart:dart_software_development_kit:*:*:*:*:*:*:*:*
elixir-binary | elixir | pkg:generic/elixir | cpe:2.3:a:elixir-lang:elixir:*:*:*:*:*:*:*:*
elixir-library | elixir/ebin/elixir.app | pkg:generic/elixir | cpe:2.3:a:elixir-lang:elixir:*:*:*:*:*:*:*:*
erlang-alpine-binary | beam.smp | pkg:generic/erlang | cpe:2.3:a:erlang:erlang/otp:*:*:*:*:*:*:*:*
erlang-binary | erlexec | pkg:generic/erlang | cpe:2.3:a:erlang:erlang/otp:*:*:*:*:*:*:*:*
erlang-library | liberts_internal.a | pkg:generic/erlang | cpe:2.3:a:erlang:erlang/otp:*:*:*:*:*:*:*:*
ffmpeg-binary | ffmpeg | pkg:generic/ffmpeg | cpe:2.3:a:ffmpeg:ffmpeg:*:*:*:*:*:*:*:*
ffmpeg-library | libav* | pkg:generic/ffmpeg | cpe:2.3:a:ffmpeg:ffmpeg:*:*:*:*:*:*:*:*
ffmpeg-library | libswresample* | pkg:generic/ffmpeg | cpe:2.3:a:ffmpeg:ffmpeg:*:*:*:*:*:*:*:*
fluent-bit-binary | fluent-bit | pkg:github/fluent/fluent-bit | cpe:2.3:a:treasuredata:fluent_bit:*:*:*:*:*:*:*:*
gcc-binary | gcc | pkg:generic/gcc | cpe:2.3:a:gnu:gcc:*:*:*:*:*:*:*:*
go-binary | go | pkg:generic/go | cpe:2.3:a:golang:go:*:*:*:*:*:*:*:*
go-binary-hint | VERSION* | pkg:generic/go | cpe:2.3:a:golang:go:*:*:*:*:*:*:*:*
gzip-binary | gzip | pkg:generic/gzip | cpe:2.3:a:gnu:gzip:*:*:*:*:*:*:*:*
haproxy-binary | haproxy | pkg:generic/haproxy | cpe:2.3:a:haproxy:haproxy:*:*:*:*:*:*:*:*
hashicorp-vault-binary | vault | pkg:golang/github.com/hashicorp/vault | cpe:2.3:a:hashicorp:vault:*:*:*:*:*:*:*:*
haskell-cabal-binary | cabal | pkg:generic/haskell/cabal | cpe:2.3:a:haskell:cabal:*:*:*:*:*:*:*:*
haskell-ghc-binary | ghc* | pkg:generic/haskell/ghc | cpe:2.3:a:haskell:ghc:*:*:*:*:*:*:*:*
haskell-stack-binary | stack | pkg:generic/haskell/stack | cpe:2.3:a:haskell:stack:*:*:*:*:*:*:*:*
helm | helm | pkg:golang/helm.sh/helm | cpe:2.3:a:helm:helm:*:*:*:*:*:*:*:*
httpd-binary | httpd | pkg:generic/httpd | cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*
java-binary | java | pkg:generic/oracle/graalvm | cpe:2.3:a:oracle:graalvm:*:*:*:*:*:*:*:*
java-binary | java | pkg:generic/azul/zulu | cpe:2.3:a:azul:zulu:*:*:*:*:*:*:*:*
java-binary | java | pkg:generic/oracle/openjdk | cpe:2.3:a:oracle:openjdk:{{.primary}}:update{{.update}}:*:*:*:*:*:*
java-binary | java | pkg:generic/oracle/openjdk | cpe:2.3:a:oracle:openjdk:*:*:*:*:*:*:*:*
java-binary | java | pkg:generic/ibm/java | cpe:2.3:a:ibm:java:*:*:*:*:*:*:*:*
java-binary | java | pkg:generic/oracle/jre | cpe:2.3:a:oracle:jre:*:*:*:*:*:*:*:*
java-binary | java | pkg:generic/oracle/jre | cpe:2.3:a:oracle:jre:*:*:*:*:*:*:*:*
java-jdb-binary | jdb | pkg:generic/oracle/graalvm | cpe:2.3:a:oracle:graalvm_for_jdk:*:*:*:*:*:*:*:*
java-jdb-binary | jdb | pkg:generic/azul/zulu | cpe:2.3:a:azul:zulu:*:*:*:*:*:*:*:*
java-jdb-binary | jdb | pkg:generic/oracle/openjdk | cpe:2.3:a:oracle:openjdk:*:*:*:*:*:*:*:*
java-jdb-binary | jdb | pkg:generic/ibm/java_sdk | cpe:2.3:a:ibm:java_sdk:*:*:*:*:*:*:*:*
java-jdb-binary | jdb | pkg:generic/oracle/openjdk | cpe:2.3:a:oracle:openjdk:*:*:*:*:*:*:*:*
java-jdb-binary | jdb | pkg:generic/oracle/jdk | cpe:2.3:a:oracle:jdk:*:*:*:*:*:*:*:*
jq-binary | jq | pkg:generic/jq | cpe:2.3:a:jqlang:jq:*:*:*:*:*:*:*:*
julia-binary | libjulia-internal.so | pkg:generic/julia | cpe:2.3:a:julialang:julia:*:*:*:*:*:*:*:*
lighttpd-binary | lighttpd | pkg:generic/lighttpd | cpe:2.3:a:lighttpd:lighttpd:*:*:*:*:*:*:*:*
mariadb-binary | {mariadb,mysql} | pkg:generic/mariadb | cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*
memcached-binary | memcached | pkg:generic/memcached | cpe:2.3:a:memcached:memcached:*:*:*:*:*:*:*:*
mysql-binary | mysql | pkg:generic/mysql | cpe:2.3:a:oracle:mysql:*:*:*:*:*:*:*:*
mysql-binary | mysql | pkg:generic/percona-server | cpe:2.3:a:oracle:mysql:*:*:*:*:*:*:*:*, cpe:2.3:a:percona:percona_server:*:*:*:*:*:*:*:*
mysql-binary | mysql | pkg:generic/percona-xtradb-cluster | cpe:2.3:a:oracle:mysql:*:*:*:*:*:*:*:*, cpe:2.3:a:percona:percona_server:*:*:*:*:*:*:*:*, cpe:2.3:a:percona:xtradb_cluster:*:*:*:*:*:*:*:*
nginx-binary | nginx | pkg:generic/nginx | cpe:2.3:a:f5:nginx:*:*:*:*:*:*:*:*, cpe:2.3:a:nginx:nginx:*:*:*:*:*:*:*:*
nodejs-binary | node | pkg:generic/node | cpe:2.3:a:nodejs:node.js:*:*:*:*:*:*:*:*
openssl-binary | openssl | pkg:generic/openssl | cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*
perl-binary | perl | pkg:generic/perl | cpe:2.3:a:perl:perl:*:*:*:*:*:*:*:*
php-composer-binary | composer* | pkg:generic/composer | cpe:2.3:a:getcomposer:composer:*:*:*:*:*:*:*:*
postgresql-binary | postgres | pkg:generic/postgresql | cpe:2.3:a:postgresql:postgresql:*:*:*:*:*:*:*:*
proftpd-binary | proftpd | pkg:generic/proftpd | cpe:2.3:a:proftpd:proftpd:*:*:*:*:*:*:*:*
pypy-binary-lib | libpypy*.so* | pkg:generic/pypy | -
python-binary | python* | pkg:generic/python | cpe:2.3:a:python_software_foundation:python:*:*:*:*:*:*:*:*, cpe:2.3:a:python:python:*:*:*:*:*:*:*:*
python-binary-lib | libpython*.so* | pkg:generic/python | cpe:2.3:a:python_software_foundation:python:*:*:*:*:*:*:*:*, cpe:2.3:a:python:python:*:*:*:*:*:*:*:*
redis-binary | redis-server | pkg:generic/redis | cpe:2.3:a:redislabs:redis:*:*:*:*:*:*:*:*, cpe:2.3:a:redis:redis:*:*:*:*:*:*:*:*
ruby-binary | ruby | pkg:generic/ruby | cpe:2.3:a:ruby-lang:ruby:*:*:*:*:*:*:*:*
rust-standard-library-linux | libstd-*.so | pkg:generic/rust | cpe:2.3:a:rust-lang:rust:*:*:*:*:*:*:*:*
rust-standard-library-macos | libstd-*.dylib | pkg:generic/rust | cpe:2.3:a:rust-lang:rust:*:*:*:*:*:*:*:*
sqlcipher-binary | sqlcipher | pkg:generic/sqlcipher | cpe:2.3:a:zetetic:sqlcipher:*:*:*:*:*:*:*:*
swipl-binary | swipl | pkg:generic/swipl | cpe:2.3:a:erlang:erlang/otp:*:*:*:*:*:*:*:*
traefik-binary | traefik | pkg:generic/traefik | cpe:2.3:a:traefik:traefik:*:*:*:*:*:*:*:*
util-linux-binary | getopt | pkg:generic/util-linux | cpe:2.3:a:kernel:util-linux:*:*:*:*:*:*:*:*
wordpress-cli-binary | wp | pkg:generic/wp-cli | cpe:2.3:a:wp-cli:wp-cli:*:*:*:*:*:*:*:*
xtrabackup-binary | xtrabackup | pkg:generic/percona-xtrabackup | cpe:2.3:a:percona:xtrabackup:*:*:*:*:*:*:*:*
xz-binary | xz | pkg:generic/xz | cpe:2.3:a:tukaani:xz:*:*:*:*:*:*:*:*
zstd-binary | zstd | pkg:generic/zstd | cpe:2.3:a:facebook:zstandard:*:*:*:*:*:*:*:*

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities
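As a sketch, this option can be set in a Grype configuration file (assuming a .grype.yaml in the working directory):

```yaml
# .grype.yaml (sketch)
# Enables CPE-based vulnerability matching for packages handled by the
# stock matcher, per the match.stock.using-cpes key above.
match:
  stock:
    using-cpes: true
```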

Next steps

4.7 - Bitnami

Bitnami package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
bitnami-cataloger
/opt/bitnami/**/.spdx-*.spdx
Transitive | Complete | Runtime

Since all package data is gathered from SPDX SBOMs, the quality of the package analysis is dependent on the quality of the provided SBOMs.

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
Bitnami Vulnerability Database

Next steps

4.8 - Conda

Conda package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
conda-meta-cataloger
conda-meta/*.json
Direct | Runtime

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.9 - C/C++

C/C++ package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
conan-cataloger
conan.lock
Transitive | Runtime, Build
conan-cataloger
conanfile.txt
Direct | Runtime
conan-info-cataloger
conaninfo.txt
Direct | Flat | Runtime

We support package detection for v1 and v2 formatted conan.lock files.

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.10 - Dart

Dart package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
dart-pubspec-cataloger
pubspec.yml, pubspec.yaml
Direct | Runtime
dart-pubspec-lock-cataloger
pubspec.lock
Transitive | Runtime, Dev

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.11 - DPKG

Debian package format used by Debian-based Linux distributions

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
deb-archive-cataloger
*.deb
dpkg-db-cataloger
lib/dpkg/status, lib/dpkg/status.d/*, lib/opkg/info/*.control, lib/opkg/status
Transitive | Complete | Runtime

There is additional functionality for:

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
Debian Security Tracker (DSA, DLA)
ECHO Security
Ubuntu CVE Tracker (USN)

Operating systems

Operating System | Supported Versions | Provider | Data Source
Debian | 7 (wheezy), 8 (jessie), 9 (stretch), 10 (buster), 11 (bullseye), 12 (bookworm), 13 (trixie), 14, unstable | debian | Debian Security Tracker
Echo OS | rolling | echo | ECHO Security
Raspberry Pi OS | 7 (wheezy), 8 (jessie), 9 (stretch), 10 (buster), 11 (bullseye), 12 (bookworm), 13 (trixie), 14, unstable | debian | Debian Security Tracker
Ubuntu | 12.04 (precise), 12.10 (quantal), 13.04 (raring), 14.04 (trusty), 14.10 (utopic), 15.04 (vivid), 15.10 (wily), 16.04 (xenial), 16.10 (yakkety), 17.04 (zesty), 17.10 (artful), 18.04 (bionic), 18.10 (cosmic), 19.04 (disco), 19.10 (eoan), 20.04 (focal), 20.10 (groovy), 21.04 (hirsute), 21.10 (impish), 22.04 (jammy), 22.10 (kinetic), 23.04 (lunar), 23.10 (mantic), 24.04 (noble), 24.10 (oracular), 25.04 (plucky), 25.10 | ubuntu | Ubuntu CVE Tracker

Next steps

4.12 - .NET

.NET package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
dotnet-deps-binary-cataloger
*.deps.json, *.dll, *.exe
Transitive | Complete | Runtime
dotnet-deps-cataloger deprecated
*.deps.json
Transitive | Complete | Runtime
dotnet-packages-lock-cataloger
packages.lock.json
Transitive | Complete | Runtime, Dev, Build
dotnet-portable-executable-cataloger deprecated
*.dll, *.exe

Syft Configuration
Configuration Key | Description
dotnet.dep-packages-must-claim-dll | Allows deps.json packages to be included only if a runtime/resource DLL is claimed in the deps.json targets section. This does not require such claimed DLLs to exist on disk.
dotnet.dep-packages-must-have-dll | Allows deps.json packages to be included only if there is a DLL on disk for that package.
dotnet.propagate-dll-claims-to-parents | Allows deps.json packages to be included if any child (transitive) package claims a DLL. This applies to both the claims and evidence-on-disk configurations.
dotnet.relax-dll-claims-when-bundling-detected | Looks for indications of IL bundle tooling via deps.json package names and, if found (and this option is enabled), relaxes dep-packages-must-claim-dll to false in those cases.

When scanning a .NET application, evidence from deps.json (compiler output) and from any built binaries is used together to identify packages. This way we can enrich missing data from any one source and synthesize a more complete and accurate package graph.
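For orientation, here is a heavily abbreviated sketch of a deps.json file; the package name, version, path, and hash are illustrative only. The targets section carries the DLL claims that the configuration options above key off of, while the libraries section carries package hashes:

```json
{
  "targets": {
    ".NETCoreApp,Version=v8.0": {
      "Newtonsoft.Json/13.0.3": {
        "runtime": { "lib/net6.0/Newtonsoft.Json.dll": {} }
      }
    }
  },
  "libraries": {
    "Newtonsoft.Json/13.0.3": {
      "type": "package",
      "sha512": "sha512-..."
    }
  }
}
```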

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
GitHub Security Advisories (GHSA)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.dotnet.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.13 - Elixir

Elixir package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
elixir-mix-lock-cataloger
mix.lock
Transitive | Runtime, Dev

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.14 - Erlang

Erlang package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
erlang-otp-application-cataloger
*.app
Direct | Runtime, Dev
erlang-rebar-lock-cataloger
rebar.lock
Direct | Runtime, Dev

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.15 - GitHub Actions

GitHub Actions package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
github-action-workflow-usage-cataloger
.github/workflows/*.yaml, .github/workflows/*.yml
github-actions-usage-cataloger
.github/actions/*/action.yml, .github/actions/*/action.yaml
github-actions-usage-cataloger
.github/workflows/*.yaml, .github/workflows/*.yml

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
GitHub Security Advisories (GHSA)

Next steps

4.16 - Go

Go package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
go-module-binary-cataloger
application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable (mimetype)
Transitive | Flat | Runtime, Dev
go-module-file-cataloger
go.mod
Transitive | Flat | Runtime, Dev

Syft Configuration
Configuration Key | Description
golang.local-mod-cache-dir | Specifies the location of the local Go module cache directory. When not set, Syft will attempt to discover the GOPATH env or default to $HOME/go.
golang.local-vendor-dir | Specifies the location of the local vendor directory. When not set, Syft will search for a vendor directory relative to the go.mod file.
golang.no-proxy | A list of glob patterns that match Go module names that should not be fetched from the Go proxy. When not set, Syft will use the GOPRIVATE and GONOPROXY env vars.
golang.proxy | A list of Go module proxies to use when fetching Go module metadata and licenses. When not set, Syft will use the GOPROXY env or default to https://proxy.golang.org,direct.
golang.search-local-mod-cache-licenses | Enables searching for Go package licenses in the local GOPATH mod cache.
golang.search-local-vendor-licenses | Enables searching for Go package licenses in the local vendor directory relative to the go.mod file.
golang.search-remote-licenses | Enables downloading Go package licenses from the upstream Go proxy (typically proxy.golang.org).

Version detection for binaries

When Syft scans a Go binary, the main module often has version (devel) because Go doesn’t embed version information by default. Syft attempts to detect the actual version using three strategies (configurable via golang.main-module-version.*):

  1. From ldflags (enabled by default): Looks for version strings passed during build like -ldflags="-X main.version=v1.2.3". Supports common patterns: *.version=, *.gitTag=, *.release=, etc.

  2. From build settings (enabled by default): Uses VCS metadata (commit hash and timestamp) embedded by Go 1.18+ to generate a pseudo-version like v0.0.0-20230101120000-abcdef123456.

  3. From contents (disabled by default): Scans binary contents for version string patterns. Can produce false positives.
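These strategies can be toggled individually in Syft's configuration. A sketch, assuming the boolean keys nest under golang.main-module-version as the option names above suggest (verify against your Syft version's configuration reference):

```yaml
# .syft.yaml (sketch; key nesting is an assumption based on the
# golang.main-module-version.* option names)
golang:
  main-module-version:
    from-ldflags: true         # strategy 1 (default: enabled)
    from-build-settings: true  # strategy 2 (default: enabled)
    from-contents: false       # strategy 3 (default: disabled)
```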

Best practice: Use -ldflags when building to embed your version explicitly.

Example:

go build -ldflags="-X main.version=v1.2.3"

This ensures Syft (and Grype) can accurately identify your application version for vulnerability matching.

Standard library

Syft automatically creates a stdlib package for each Go binary, representing the Go standard library version used to compile it. The version is extracted from the binary’s build metadata (e.g., go1.22.2). This enables Grype to check for vulnerabilities reported against the go standard library.

Why this matters: Vulnerabilities in the Go compiler (like CVEs affecting the crypto library or net/http) can affect your application even if your code doesn’t directly use those packages.

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
GitHub Security Advisories (GHSA)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.golang.using-cpes | Use CPE package identifiers to find vulnerabilities
match.golang.always-use-cpe-for-stdlib | Use CPE matching to find vulnerabilities for the Go standard library
match.golang.allow-main-module-pseudo-version-comparison | Allow comparison between main module pseudo-versions (e.g. v0.0.0-20240413-2b432cf643...)

Main module filtering

Grype skips vulnerability matching for packages that match all these conditions:

  • Package name equals the main module name (from the SBOM metadata)
  • Package version is unreliable:
    • When allow-main-module-pseudo-version-comparison is false (default): version starts with v0.0.0- or is (devel)
    • When allow-main-module-pseudo-version-comparison is true: version is (devel) only

This filtering exists because Go doesn’t have a standard way to embed the main module’s version into compiled binaries (see golang/go#50603). Pseudo-versions in compiled binaries are often unreliable for vulnerability matching.

You can disable this filtering with the allow-main-module-pseudo-version-comparison configuration option.

Troubleshooting

No vulnerabilities found for main module

Cause: The main module has a pseudo-version (v0.0.0-*) or (devel), which Grype filters by default.

Solution: Enable pseudo-version matching in your Grype configuration:

match:
  golang:
    allow-main-module-pseudo-version-comparison: true

No vulnerabilities found for stdlib

Possible causes:

  • Missing CPEs: Verify Syft generates CPEs with generate-cpes: true in .syft.yaml
  • CPE matching disabled: Ensure always-use-cpe-for-stdlib: true in Grype config (default)
  • Incorrect version format: Stdlib version should be go1.18.3, not v1.18.3 (file a Syft bug if incorrect)

Next steps

4.17 - Haskell

Haskell package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
haskell-cataloger
cabal.project.freeze
Transitive | Runtime, Dev
haskell-cataloger
stack.yaml.lock
Transitive | Runtime, Dev
haskell-cataloger
stack.yaml
Direct | Runtime, Dev

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.18 - Homebrew

Homebrew package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
homebrew-cataloger
Cellar/*/*/.brew/*.rb, Library/Taps/*/*/Formula/*.rb

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.19 - Java

Java package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
graalvm-native-image-cataloger
application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable (mimetype)
Transitive | Complete | Runtime, Dev
java-archive-cataloger
*.jar, *.war, *.ear, *.par, *.sar, *.nar, *.jpi, *.hpi, *.kar, *.lpkg
Transitive | Complete | Runtime, Dev
java-archive-cataloger
*.zip
Transitive | Complete | Runtime, Dev
java-archive-cataloger
*.tar, *.tar.gz, *.tgz, *.tar.bz, *.tar.bz2, *.tbz, *.tbz2, *.tar.br, *.tbr, *.tar.lz4, *.tlz4, *.tar.sz, *.tsz, *.tar.xz, *.txz, *.tar.zst, *.tzst, *.tar.zstd, *.tzstd
Transitive | Complete | Runtime, Dev
java-gradle-lockfile-cataloger
gradle.lockfile*
Transitive | Runtime, Dev
java-jvm-cataloger
release
Transitive | Runtime, Dev
java-pom-cataloger
*pom.xml
Direct | Complete | Runtime, Dev

Syft Configuration
Configuration Key | Description
java.maven-local-repository-dir | Specifies the location of the local Maven repository. When not set, defaults to ~/.m2/repository.
java.maven-url | Specifies the base URL(s) to use for fetching POMs and metadata from Maven Central or other repositories. When not set, defaults to https://repo1.maven.org/maven2.
java.max-parent-recursive-depth | Limits how many parent POMs will be fetched recursively before stopping. This prevents infinite loops or excessively deep parent chains.
java.resolve-transitive-dependencies | Enables resolving transitive dependencies for Java packages found within archives.
java.use-maven-local-repository | Enables searching the local Maven repository (~/.m2/repository by default) for parent POMs and other metadata.
java.use-network | Enables network operations for Java package metadata enrichment, such as fetching parent POMs and license information.

Archives

When scanning a Java archive (e.g. jar, war, ear, …), Syft will look for maven project evidence within the archive recursively. This means that if a jar file contains other jar files, Syft will also look for pom.xml files within those nested jar files to identify packages (such as with shaded jars).

Additionally, if opted-in via configuration, Syft will scan non-java archive files (e.g., zip, tar, tar.gz, …) for Java package evidence as well.
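A sketch of that opt-in, assuming the search-indexed-archives/search-unindexed-archives keys (the key names here are an assumption; verify against your Syft version's configuration reference):

```yaml
# .syft.yaml (sketch; key names are an assumption)
package:
  search-indexed-archives: true    # zip-based archives with a central index
  search-unindexed-archives: true  # tar-based archives (slower to scan)
```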

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
GitHub Security Advisories (GHSA)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.java.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.20 - JavaScript

JavaScript package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
javascript-lock-cataloger
pnpm-lock.yaml
Transitive | Runtime
javascript-lock-cataloger
yarn.lock
Transitive | Runtime, Dev
javascript-lock-cataloger
package-lock.json
Transitive | Runtime
javascript-package-cataloger
package.json
Direct | Runtime

Syft Configuration
Configuration Key | Description
javascript.include-dev-dependencies | Controls whether development dependencies should be included in the catalog results, in addition to production dependencies.
javascript.npm-base-url | Specifies the base URL for the NPM registry API used when searching for remote license information.
javascript.search-remote-licenses | Enables querying the NPM registry API to retrieve license information for packages that are missing license data in their local metadata.

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
GitHub Security Advisories (GHSA)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.javascript.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.21 - Linux Kernel

Linux kernel archive and module analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
linux-kernel-cataloger
kernel, kernel-*, vmlinux, vmlinux-*, vmlinuz, vmlinuz-*, lib/modules/**/*.ko

Syft Configuration
Configuration Key | Description
linux-kernel.catalog-modules | Enables cataloging Linux kernel modules (*.ko files) in addition to the kernel itself.

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.22 - Lua

Lua package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
lua-rock-cataloger
*.rockspec

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.23 - Nix

Nix package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
nix-cataloger
nix/var/nix/db/db.sqlite, nix/store/*, nix/store/*.drv
Transitive | Complete | Runtime
nix-store-cataloger deprecated
nix/store/*, nix/store/*.drv
Transitive | Complete | Runtime

Syft Configuration
Configuration Key | Description
nix.capture-owned-files | Determines whether to record the list of files owned by each Nix package discovered in the store. Recording owned files provides more detailed information but increases processing time and memory usage.

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.24 - OCaml

OCaml package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
opam-cataloger
*opam
Direct | Runtime

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.25 - PHP

PHP package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
php-composer-installed-cataloger
installed.json
Transitive | Runtime, Dev
php-composer-lock-cataloger
composer.lock
Transitive | Runtime
php-interpreter-cataloger
php*/**/*.so, php-fpm*, apache*/**/libphp*.so
Direct | Flat | Runtime
php-pear-serialized-cataloger
php/.registry/**/*.reg
Direct | Runtime
php-pecl-serialized-cataloger deprecated
php/.registry/.channel.*/*.reg
Direct | Runtime

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Grype Configuration
Configuration Key | Description
match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities

Next steps

4.26 - Portage

Portage package format used by Gentoo-based Linux distributions

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
portage-cataloger
var/db/pkg/*/*/CONTENTS
Direct | Runtime

Vulnerability scanning

Data Source | Disclosures (Affected, Date) | Fixes (Versions, Date) | Track by (Source Package)
National Vulnerability Database (NVD)

Operating systems

Operating System | Supported Versions | Provider | Data Source
Gentoo Linux | minimal support (CPE-based) | nvd | National Vulnerability Database (NVD)

Next steps

4.27 - Prolog

Prolog package analysis and vulnerability scanning capabilities

Package analysis

Cataloger + Evidence | License | Dependencies (Depth, Edges, Kinds) | Package Manager Claims (Files, Digests, Integrity Hash)
swipl-pack-cataloger
pack.pl
Direct | Runtime, Dev

Vulnerability scanning

Data SourceDisclosuresFixesTrack by
Source
Package
AffectedDateVersionsDate
National Vulnerability Database (NVD)

Grype Configuration
Configuration KeyDescription
match.stock.using-cpesUse CPE package identifiers to find vulnerabilities

Next steps

4.28 - Python

Python package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| python-installed-package-cataloger | *.egg-info, *dist-info/METADATA, *egg-info/PKG-INFO, *DIST-INFO/METADATA, *EGG-INFO/PKG-INFO | Direct | Complete | Runtime |
| python-package-cataloger | pdm.lock | Transitive | Complete | Runtime, Dev, Optional |
| python-package-cataloger | uv.lock | Transitive | Complete | Runtime, Dev, Optional |
| python-package-cataloger | setup.py | Direct |  |  |
| python-package-cataloger | Pipfile.lock | Transitive |  | Runtime |
| python-package-cataloger | poetry.lock | Transitive | Complete | Runtime, Dev, Optional |
| python-package-cataloger | *requirements*.txt | Direct |  | Any |

Syft Configuration

| Configuration Key | Description |
| --- | --- |
| python.guess-unpinned-requirements | Attempts to infer package versions from version constraints when no explicit version is specified in requirements files. |
| python.pypi-base-url | Specifies the base URL for the PyPI registry API used when searching for remote license information. |
| python.search-remote-licenses | Enables querying the PyPI registry API to retrieve license information for packages that are missing license data in their local metadata. |
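These keys live under the python section of Syft's configuration file. A minimal sketch, assuming a .syft.yaml in the working directory (the values shown are illustrative examples, not documented defaults):

```yaml
# illustrative .syft.yaml fragment
python:
  guess-unpinned-requirements: true   # infer versions from constraints in requirements files
  pypi-base-url: https://pypi.org/pypi # base URL for remote license lookups
  search-remote-licenses: false        # query PyPI when local metadata lacks license data
```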

Vulnerability scanning

Data sources:

  • GitHub Security Advisories (GHSA)
  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.python.using-cpes | Use CPE package identifiers to find vulnerabilities |
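In Grype's configuration file, dotted keys like match.python.using-cpes nest under the match section. A minimal sketch, assuming a .grype.yaml in the working directory (the value shown is illustrative, not the default):

```yaml
# illustrative .grype.yaml fragment
match:
  python:
    using-cpes: true  # also match Python packages by CPE identifiers
```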

Next steps

4.29 - R

R package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| r-package-cataloger | DESCRIPTION |  |  |  |

Vulnerability scanning

Data sources:

  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

4.30 - RPM

Red Hat Package Manager format used by Red Hat-based Linux distributions

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| rpm-archive-cataloger | *.rpm |  |  |  |
| rpm-db-cataloger | var/lib/rpmmanifest/container-manifest-2 | Transitive |  | Runtime |
| rpm-db-cataloger | {var/lib,usr/share,usr/lib/sysimage}/rpm/{Packages,Packages.db,rpmdb.sqlite} | Transitive | Complete | Runtime |

Vulnerability scanning

Data sources:

  • AlmaLinux OSV Database (ALSA)
  • Amazon Linux Security Center (ALAS)
  • Microsoft AzureLinux OVAL
  • Red Hat Security Data API (RHSA)
  • National Vulnerability Database (NVD)
  • Microsoft CBL-Mariner OVAL
  • Oracle Linux Security (ELSA)
  • SUSE Security OVAL (SUSE-SU)

Operating systems

| Operating System | Supported Versions | Provider | Data Source |
| --- | --- | --- | --- |
| AlmaLinux | 8, 9, 10 | alma | AlmaLinux OSV Database |
| Amazon Linux | 2, 2022, 2023 | amazon | Amazon Linux Security Center |
| Azure Linux | 3.0 | mariner | Microsoft CBL-Mariner OVAL |
| CentOS | 5, 6, 7, 8 | rhel | Red Hat Security Data API |
| Fedora | minimal support (CPE-based) | nvd | National Vulnerability Database (NVD) |
| CBL-Mariner | 1.0, 2.0 | mariner | Microsoft CBL-Mariner OVAL |
| OpenSUSE Leap | minimal support (CPE-based) | nvd | National Vulnerability Database (NVD) |
| Oracle Linux | 5, 6, 7, 8, 9, 10 | oracle | Oracle Linux Security |
| Photon OS | minimal support (CPE-based) | nvd | National Vulnerability Database (NVD) |
| Red Hat Enterprise Linux | 5, 6, 7, 8, 9, 10 (EUS: 5.9, 6.4+, 7, 8.1, 8.2, 8.4, 8.6, 8.8, 9) | rhel | Red Hat Security Data API |
| Rocky Linux | 5, 6, 7, 8, 9, 10 | rhel | Red Hat Security Data API |
| SUSE Linux Enterprise Server | 11, 12, 15 | sles | SUSE Security OVAL |

Next steps

4.31 - Ruby

Ruby package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| ruby-gemfile-cataloger | Gemfile.lock | Transitive |  | Runtime, Dev |
| ruby-gemspec-cataloger | *.gemspec | Direct |  | Runtime |
| ruby-installed-gemspec-cataloger | specifications/**/*.gemspec | Transitive |  | Runtime |

Vulnerability scanning

Data sources:

  • GitHub Security Advisories (GHSA)
  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.ruby.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

4.32 - Rust

Rust package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| cargo-auditable-binary-cataloger | application/x-executable, application/x-mach-binary, application/x-elf, application/x-sharedlib, application/vnd.microsoft.portable-executable (MIME types) | Transitive | Complete | Runtime |
| rust-cargo-lock-cataloger | Cargo.lock | Transitive | Complete | Runtime, Dev, Build |

Vulnerability scanning

Data sources:

  • GitHub Security Advisories (GHSA)
  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.rust.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

4.33 - SBOM

SBOM package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| sbom-cataloger | *.syft.json, *.bom.*, *.bom, bom, *.sbom.*, *.sbom, sbom, *.cdx.*, *.cdx, *.spdx.*, *.spdx |  |  |  |

Vulnerability scanning

Data sources:

  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

4.34 - Snap

Snap package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| snap-cataloger | snap/snapcraft.yaml |  |  |  |
| snap-cataloger | snap/manifest.yaml |  |  |  |
| snap-cataloger | doc/linux-modules-*/changelog.Debian.gz |  |  |  |
| snap-cataloger | usr/share/snappy/dpkg.yaml |  |  |  |
| snap-cataloger | meta/snap.yaml |  |  |  |

Vulnerability scanning

Data sources:

  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

4.35 - Swift

Swift package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| cocoapods-cataloger | Podfile.lock | Transitive |  | Runtime |
| swift-package-manager-cataloger | Package.resolved, .package.resolved | Transitive |  | Runtime |

Vulnerability scanning

Data sources:

  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

4.36 - Terraform

Terraform package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| terraform-lock-cataloger | .terraform.lock.hcl | Direct |  | Runtime |

Vulnerability scanning

Data sources:

  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

4.37 - WordPress

WordPress package analysis and vulnerability scanning capabilities

Package analysis

| Cataloger | Evidence | Depth | Edges | Kinds |
| --- | --- | --- | --- | --- |
| wordpress-plugins-cataloger | wp-content/plugins/*/*.php |  |  |  |

Vulnerability scanning

Data sources:

  • National Vulnerability Database (NVD)

Grype Configuration

| Configuration Key | Description |
| --- | --- |
| match.stock.using-cpes | Use CPE package identifiers to find vulnerabilities |

Next steps

5 - Contributing

Guidelines for developing & contributing to Anchore Open Source projects

Welcome! We appreciate all contributions to Anchore’s open source projects. Whether you’re fixing a bug, adding a feature, or improving documentation, your help makes these tools better for everyone.

Getting Help

The Anchore open source community is here to help. Use Discourse for questions, discussions, and troubleshooting. Use GitHub for reporting bugs, requesting features, and submitting code contributions. See Issues vs Discussions for guidance on which channel to use.

For security vulnerabilities, email security@anchore.com - do not create public issues. See our Security Policy for details.

5.1 - Issues and Discussions

When to use GitHub Issues versus Discourse Discussions

Understanding where to post helps you get faster, more relevant responses.

GitHub Issues

Use GitHub issues for:

  • Bug reports: Something isn’t working as documented
  • Feature requests: Proposals for new functionality
  • Enhancement requests: Improvements to existing features
  • Security vulnerabilities: Please follow our Security Policy (reported privately)

Creating a good issue

  • Write a clear title: Issue titles become changelog entries in release notes, so make them descriptive and user-focused
  • Search existing issues first: This helps avoid duplicates and keeps discussions in one place
  • Use issue templates: Templates guide you through providing the right information
  • Include version information: Specify which version you’re using
  • Provide reproduction steps: For bugs, describe how to recreate the issue
  • Describe expected vs actual behavior: Explain what you expected to happen and what actually happened
  • Add supporting details: Include relevant logs, error messages, or screenshots

Discourse Discussions

Use the Anchore Discourse for:

  • Questions: “How do I…?” or “Why does…?”
  • Clarifications: Understanding how features work
  • General discussion: Ideas, use cases, and community chat
  • Help requests: Troubleshooting your specific setup
  • Best practices: Sharing knowledge and experiences

Why separate channels?

GitHub issues track work items that require code changes. Each issue represents a potential task for the development team. Discourse provides a better format for conversations, questions, and community support without cluttering the issue tracker.

If you’re unsure which to use, start with Discourse. The community can help identify if an issue should be created.

Security Issues

If you discover a security vulnerability, please report it privately rather than creating a public issue. See our Security Policy for details on how to report security issues responsibly. This gives us time to fix the problem and protect users before details become public.

5.2 - Syft

Developer guidelines when contributing to Syft

Getting started

In order to test and develop in the Syft repo you will need the following dependencies installed:

  • Golang
  • Docker
  • Python (>= 3.9)
  • make

Initial setup

Run once after cloning to install development tools:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make unit - Run unit tests
  • make integration - Run integration tests
  • make cli - Run CLI tests
  • make snapshot - Build release snapshot with all binaries and packages

Testing

Levels of testing

  • unit (make unit): The default level of testing, distributed throughout the repo, is unit tests. Any _test.go file that does not reside somewhere within the /test directory is a unit test; other forms of testing should be organized in the /test directory. These tests should focus on the correctness of functionality in depth. Test coverage metrics consider only unit tests, not other forms of testing.

  • integration (make integration): located within cmd/syft/internal/test/integration, these tests focus on the behavior surfaced by the common library entrypoints from the syft package and make light assertions about the results surfaced. Additionally, these tests tend to make diversity assertions for enum-like objects, ensuring that as enum values are added to a definition that integration tests will automatically fail if no test attempts to use that enum value. For more details see the “Data diversity and freshness assertions” section below.

  • cli (make cli): located within test/cli, these tests verify the correctness of application behavior from a snapshot build. Use this level when a unit or integration test will not do, or when you need in-depth testing of code in the cmd/ package (such as testing the proper behavior of application configuration, CLI switches, and glue code before syft library calls).

  • acceptance (make install-test): located within test/compare and test/install, these are smoke-like tests that ensure that application packaging and installation works as expected. For example, during release we provide RPM packages as a download artifact. We also have an accompanying RPM acceptance test that installs the RPM from a snapshot build and ensures the output of a syft invocation matches canned expected output. New acceptance tests should be added for each release artifact and architecture supported (when possible).

Data diversity and freshness assertions

It is important that tests against the codebase are flexible enough to begin failing when they do not cover “enough” of the objects under test. “Cover” in this case does not mean that some percentage of the code has been executed during testing, but instead that there is enough diversity of data input reflected in testing relative to the definitions available.

For instance, consider an enum-like value like so:

type Language string

const (
  Java            Language = "java"
  JavaScript      Language = "javascript"
  Python          Language = "python"
  Ruby            Language = "ruby"
  Go              Language = "go"
)

Say we have a test that exercises all the languages defined today:

func TestCatalogPackages(t *testing.T) {
  testTable := []struct {
    // ... the set of test cases that test all languages
  }{
    // ...
  }
  for _, test := range testTable {
    t.Run(test.name, func(t *testing.T) {
      // use inputFixturePath and assert that syft.CatalogPackages() returns the set of expected Package objects
      // ...
    })
  }
}

Here each test case has an inputFixturePath that would result in packages from each language. This test is brittle, since it does not directly assert that all languages were exercised, and future modifications (such as adding a new language) won’t be covered by any test cases.

To address this, the enum-like object should have a definition of all objects that can be used in testing:

type Language string

// const( Java Language = ..., ... )

var AllLanguages = []Language{
 Java,
 JavaScript,
 Python,
 Ruby,
 Go,
 Rust,
}

Allowing testing to automatically fail when adding a new language:

func TestCatalogPackages(t *testing.T) {
  testTable := []struct {
    // ... the set of test cases that (hopefully) covers all languages
  }{
    // ...
  }

  // new stuff...
  observedLanguages := strset.New()

  for _, test := range testTable {
    t.Run(test.name, func(t *testing.T) {
      // use inputFixturePath and assert that syft.CatalogPackages() returns the set of expected Package objects
      // ...

      // new stuff...
      for _, actualPkg := range actual {
        observedLanguages.Add(string(actualPkg.Language))
      }
    })
  }

  // new stuff...
  for _, expectedLanguage := range pkg.AllLanguages {
    if !observedLanguages.Contains(string(expectedLanguage)) {
      t.Errorf("failed to test language=%q", expectedLanguage)
    }
  }
}

This is a better test since it will fail when someone adds a new language but fails to write a test case that should exercise that new language. This method is ideal for integration-level testing, where testing correctness in depth is not needed (that is what unit tests are for) but instead testing in breadth to ensure that units are well integrated.

A similar case can be made for data freshness; if the quality of the results will be diminished if the input data is not kept up to date then a test should be written (when possible) to assert any input data is not stale.

An example of this is the static list of licenses that is stored in internal/spdxlicense for use by the SPDX presenters. This list is updated and published periodically by an external group and syft can grab and update this list by running go generate ./... from the root of the repo.

An integration test has been written that grabs the latest license list version externally and compares that version with the version generated in the codebase. If they differ, the test fails, indicating that action is needed to update it.

Snapshot tests

The format objects make a lot of use of “snapshot” testing, where you save the expected output bytes from a call into the git repository and during testing make a comparison of the actual bytes from the subject under test with the golden copy saved in the repo. The “golden” files are stored in the test-fixtures/snapshot directory relative to the go package under test and should always be updated by invoking go test on the specific test file with a specific CLI update flag provided.

Many of the Format tests make use of this approach, where the raw SBOM report is saved in the repo and the test compares that SBOM with what is generated from the latest presenter code. The following command can be used to update the golden files for the various snapshot tests:

make update-format-golden-files

These flags are defined at the top of the test files whose tests use the snapshot files.

Snapshot testing is only as good as the manual verification of the golden snapshot file saved to the repo! Be careful and diligent when updating these files.

Test fixtures

Syft uses a sophisticated test fixture caching system to speed up test execution. Test fixtures include pre-built test images, language-specific package manifests, and other test data. Rather than rebuilding fixtures on every checkout, Syft can download a pre-built cache from GitHub Container Registry.

Common fixture commands:

  • make fixtures - Intelligently download or rebuild fixtures as needed
  • make build-fixtures - Manually build all fixtures from scratch
  • make clean-cache - Remove all cached test fixtures
  • make check-docker-cache - Verify docker cache size is within limits

When to use each command:

  • First time setup: Run make fixtures after cloning the repository. This will download the latest fixture cache.
  • Tests failing unexpectedly: Try make clean-cache followed by make fixtures to ensure you have fresh fixtures.
  • Working offline: Set DOWNLOAD_TEST_FIXTURE_CACHE=false and run make build-fixtures to build fixtures locally without downloading.
  • Modifying test fixtures: After changing fixture source files, run make build-fixtures to rebuild affected fixtures.

The fixture system tracks input fingerprints and only rebuilds fixtures when their source files change. This makes the development cycle faster while ensuring tests always run against the correct fixture data.

Code generation

Syft generates several types of code and data files that need to be kept in sync with external sources or internal structures:

What gets generated:

  • JSON Schema - Generated from Go structs to define the Syft JSON output format
  • SPDX License List - Up-to-date list of license identifiers from the SPDX project
  • CPE Dictionary Index - Index of Common Platform Enumeration identifiers for vulnerability matching

When to regenerate:

Run code generation after:

  • Modifying the pkg.Package struct or related types (requires JSON schema regeneration)
  • SPDX releases a new license list
  • CPE dictionary updates are available

Generation commands:

  • make generate - Run all generation tasks
  • make generate-json-schema - Generate JSON schema from Go types
  • make generate-license-list - Download and generate latest SPDX license list
  • make generate-cpe-dictionary-index - Generate CPE dictionary index

After running generation commands, review the changes carefully and commit them as part of your pull request. The CI pipeline will verify that generated files are up to date.

Adding a new cataloger

Catalogers must fulfill the pkg.Cataloger interface in order to add packages to the SBOM.

All catalogers are registered as tasks in Syft’s task-based cataloging system:

  • Add your cataloger to DefaultPackageTaskFactories() using newSimplePackageTaskFactory or newPackageTaskFactory
  • Tag the task appropriately to indicate when it should run:
    • pkgcataloging.InstalledTag - for packages positively installed
    • pkgcataloging.DeclaredTag - for packages described in manifests (places where we intend to install software, but does not describe installed software)
    • pkgcataloging.ImageTag - should run when scanning container images
    • pkgcataloging.DirectoryTag - should run when scanning directories/filesystems
    • pkgcataloging.LanguageTag - for language-specific packages
    • pkgcataloging.OSTag - for OS-specific packages
    • Ecosystem tags like "java", "python", "alpine", etc.
  • If your cataloger needs configuration, add it to pkgcataloging.Config

The task system orchestrates all catalogers through CreateSBOMConfig, which manages task execution, parallelism, and configuration.

generic.NewCataloger is an abstraction Syft uses to make writing common components easier (see the alpine cataloger for example usage). It takes the following information as input:

  • A catalogerName to identify the cataloger uniquely among all other catalogers.
  • Pairs of file globs as well as parser functions to parse those files. These parser functions return a slice of pkg.Package as well as a slice of artifact.Relationship to describe how the returned packages are related. See the alpine cataloger parser function as an example.

Identified packages share a common pkg.Package struct so be sure that when the new cataloger is constructing a new package it is using the Package struct. If you want to return more information than what is available on the pkg.Package struct then you can do so in the pkg.Package.Metadata field, which accepts any type. Metadata types tend to be unique for each pkg.Type but this is not required. See the pkg package for examples of the different metadata types that are supported today. When encoding to JSON, metadata type names are determined by reflection and mapped according to internal/packagemetadata/names.go.

Finally, the alpine cataloger provides an example of where the package construction is done.

Troubleshooting

Cannot build test fixtures with Artifactory repositories

Some companies have Artifactory set up internally as a solution for sourcing secure dependencies. If the unit tests won’t run because of the error below, this section might be relevant to your use case.

[ERROR] [ERROR] Some problems were encountered while processing the POMs

If you’re dealing with an issue where the unit tests will not pull/build certain java fixtures check some of these settings:

  • a settings.xml file should be available to help you communicate with your internal artifactory deployment
  • this can be moved to syft/pkg/cataloger/java/test-fixtures/java-builds/example-jenkins-plugin/ to help build the unit test-fixtures
  • you’ll also want to modify the build-example-jenkins-plugin.sh to use settings.xml

For more information on this setup and troubleshooting, see issue 1895.

Next Steps

Understanding the Codebase

  • Architecture - Learn about package structure, core library flow, cataloger design patterns, and file searching
  • API Reference - Explore the public Go API, type definitions, and function signatures

Contributing Your Work

Finding Work

Getting Help

5.3 - Grype

Developer guidelines when contributing to Grype

Getting started

In order to test and develop in the Grype repo you will need the following dependencies installed:

  • Golang
  • Docker
  • Python (>= 3.9)
  • make
  • SQLite3 (optional – for database inspection)

Initial setup

Run once after cloning to install development tools:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make format - Auto-format source code
  • make unit - Run unit tests
  • make integration - Run integration tests
  • make cli - Run CLI tests
  • make quality - Run vulnerability matching quality tests
  • make snapshot - Build release snapshot with all binaries and packages

Testing

Levels of testing

  • unit (make unit): The default level of testing, distributed throughout the repo, is unit tests. Any _test.go file that does not reside somewhere within the /test directory is a unit test; other forms of testing should be organized in the /test directory. These tests should focus on the correctness of functionality in depth. Test coverage metrics consider only unit tests, not other forms of testing.

  • integration (make integration): located within test/integration, these tests focus on the behavior surfaced by the Grype library entrypoints and make assertions about vulnerability matching results. The integration tests also update the vulnerability database and run with the race detector enabled to catch concurrency issues.

  • cli (make cli): located within test/cli, these tests verify the correctness of application behavior from a snapshot build. Use this level when a unit or integration test will not do, or when you need in-depth testing of code in the cmd/ package (such as testing the proper behavior of application configuration, CLI switches, and glue code before grype library calls).

  • quality (make quality): located within test/quality, these are tests that verify vulnerability matching quality by comparing Grype’s results against known-good results (quality gates). These tests help ensure that changes to vulnerability matching logic don’t introduce regressions in match quality. The quality tests use a pinned database version to ensure consistent results. See the quality gate architecture documentation for how the system works and the test/quality README for practical development workflows.

  • install (part of acceptance testing): located within test/install, these are smoke-like tests that ensure that application packaging and installation works as expected. For example, during release we provide RPM packages as a download artifact. We also have an accompanying RPM acceptance test that installs the RPM from a snapshot build and ensures the output of a grype invocation matches canned expected output.

Quality Gates

Quality gates validate that code changes don’t cause performance regressions in vulnerability matching. The system compares your PR’s matching results against a baseline using a pinned database to isolate code changes from database volatility.

What quality gates validate:

  • F1 score (combination of true positives, false positives, and false negatives)
  • False negative count (should not increase)
  • Indeterminate matches (should remain below 10%)

Common development workflows:

  • make capture - Download SBOMs and generate match results
  • make validate - Analyze output and evaluate pass/fail
  • yardstick label explore [UUID] - Interactive TUI for labeling matches
  • ./gate.py --image [digest] - Test specific images

Learn more:

Relationship to Syft

Grype uses Syft as a library for all things related to obtaining and parsing the given scan target (pulling container images, parsing container images, indexing directories, cataloging packages, etc.). Releases of Grype should always use released versions of Syft (commits that are tagged and show up in the GitHub releases page). However, continually integrating unreleased Syft changes into Grype incrementally is encouraged (e.g. go get github.com/anchore/syft@main) as long as by the time a release is cut the Syft version is updated to a released version (e.g. go get github.com/anchore/syft@v<semantic-version>).

Inspecting the database

The currently supported database format is SQLite. Install sqlite3 on your system and ensure that the sqlite3 executable is available in your path. Ask grype for the location of the database, which differs by operating system:

$ go run ./cmd/grype db status
Location:  /Users/alfredo/Library/Caches/grype/db
Built:  2020-07-31 08:18:29 +0000 UTC
Current DB Version:  1
Require DB Version:  1
Status: Valid

The database is located within the XDG_CACHE_HOME path. To verify the database filename, list that path:

# OSX-specific path
$ ls -alh  /Users/alfredo/Library/Caches/grype/db
total 445392
drwxr-xr-x  4 alfredo  staff   128B Jul 31 09:27 .
drwxr-xr-x  3 alfredo  staff    96B Jul 31 09:27 ..
-rw-------  1 alfredo  staff   139B Jul 31 09:27 metadata.json
-rw-r--r--  1 alfredo  staff   217M Jul 31 09:27 vulnerability.db

Next, open the vulnerability.db with sqlite3:

sqlite3 /Users/alfredo/Library/Caches/grype/db/vulnerability.db

To make the output from sqlite3 easier to read, enable the following:

sqlite> .mode column
sqlite> .headers on

List the tables:

sqlite> .tables
id                      vulnerability           vulnerability_metadata

This example retrieves a specific vulnerability from the nvd namespace:

sqlite> select * from vulnerability where (namespace="nvd" and package_name="libvncserver") limit 1;
id             record_source  package_name  namespace   version_constraint  version_format  cpes                                                         proxy_vulnerabilities
-------------  -------------  ------------  ----------  ------------------  --------------  -----------------------------------------------------------  ---------------------
CVE-2006-2450                 libvncserver  nvd         = 0.7.1             unknown         ["cpe:2.3:a:libvncserver:libvncserver:0.7.1:*:*:*:*:*:*:*"]  []

Next Steps

Understanding the Codebase

  • Architecture - Learn about package structure, core library flow, and matchers

  • API Reference - Explore the public Go API, type definitions, and function signatures

Contributing Your Work

  • Pull Requests - Guidelines for submitting PRs and working with reviewers

  • Issues and Discussions - Where to get help and report issues

Finding Work

Getting Help

5.4 - Pull Requests

Guidelines for submitting pull requests and working with reviewers

If you’ve made changes and the tests are passing, it’s time to submit a pull request (PR). This guide will help you through the process.

Quick Checklist

Before submitting your PR, make sure you have:

  • ✓ Run the test suite and confirmed tests pass
  • ✓ Signed off all commits (see Sign-off Requirements)
  • ✓ Updated in-repo documentation if your changes affect user-facing behavior
  • ✓ Written a clear PR title that describes the user-facing impact
  • ✓ Followed existing code style and patterns in the project

Each of these items helps maintainers review your contribution more effectively and merge it faster.

PR Title

Your PR title is important—it becomes the changelog entry in release notes. Write titles that are meaningful to end users, not just developers.

Guidelines

  • Start with an action verb: “Add”, “Fix”, “Update”, “Remove”
  • Be specific: “Add support for Alpine 3.19” rather than “Update Alpine”
  • Keep it concise: Under 72 characters when possible
  • Focus on user impact: What changed for users, not implementation details

Examples

Good titles:

  • Add support for Python 3.12 package detection
  • Fix crash when parsing malformed RPM databases
  • Update documentation for custom template usage

Poor titles:

  • Updates (too vague—updates to what?)
  • Fixed bug (which bug?)
  • WIP: trying some things (not ready for review)
  • Refactor parseRPM function (implementation detail, not a user-facing change)
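Although these are guidelines for humans, the mechanical parts can be checked automatically. A minimal, hypothetical title-lint sketch (nothing like this exists in the Anchore repos; the verb list and length limit are taken from the guidelines above):

```python
# Hypothetical helper encoding the guidelines above:
# start with an action verb, stay under 72 characters, no WIP titles.
ACTION_VERBS = ("Add", "Fix", "Update", "Remove")

def title_ok(title: str) -> bool:
    if len(title) > 72:
        return False
    if title.startswith("WIP"):
        return False  # not ready for review
    return title.split(" ", 1)[0] in ACTION_VERBS

print(title_ok("Add support for Python 3.12 package detection"))  # True
print(title_ok("WIP: trying some things"))                        # False
```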

PR Description

A clear description helps reviewers understand your changes quickly. Include these key sections:

What to include

  1. Summary: Briefly describe what changed
  2. Motivation: Explain why this change is needed or what problem it solves
  3. Approach: If your solution isn’t obvious, explain your approach
  4. Testing: Describe how you tested the changes
  5. Related issues: Link to issues or discussions that provide context

Template

## Summary

Brief description of the change.

## Motivation

Why is this change needed? What problem does it solve?

## Changes

- Bullet point list of key changes
- Include any breaking changes or migration steps

## Type of change

<!-- Delete any that are not relevant -->

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (please discuss with the team first; Syft is 1.0 software and we won't accept breaking changes without going to 2.0)
- [ ] Documentation (updates the documentation)
- [ ] Chore (improve the developer experience, fix a test flake, etc, without changing the visible behavior of Syft)
- [ ] Performance (make Syft run faster or use less memory, without changing visible behavior much)

## Checklist

- [ ] I have added unit tests that cover changed behavior
- [ ] I have tested my code in common scenarios and confirmed there are no regressions
- [ ] I have added comments to my code, particularly in hard-to-understand sections

Closes #123

Commit History

We use squash merging for all pull requests, which means:

  • Your entire PR becomes a single commit on the main branch
  • You don’t need to maintain a clean commit history in your PR
  • Merge commits in your feature branch are perfectly fine
  • You can commit as frequently as you like during development
  • The PR title (not individual commit messages) becomes the changelog entry

This approach keeps the main branch clean and linear while reducing friction for contributors. Focus on code quality rather than commit structure—reviewers care about the changes, not how you got there.

Size Matters

Small PRs get reviewed faster. Here’s how to make your PR easier to review:

  • Keep changes focused: Try to address one concern per PR
  • Avoid mixing unrelated changes: Don’t combine bug fixes with new features
  • Split large PRs when possible: If a PR is unavoidably large, provide extra context in the description

Consider breaking work into multiple PRs if you’re making both refactoring changes and feature additions. Reviewers can process smaller, focused changes more quickly.

What to Expect

Review Feedback

It’s normal and expected for reviewers to have questions and suggestions:

  • Questions about your approach: Be prepared to explain your decisions
  • Code style adjustments: You may be asked to match existing project patterns
  • Additional tests: Reviewers might request more test coverage
  • Scope changes: You might be asked to split or narrow the PR

How to respond to feedback

  • Address feedback promptly: Respond when you can, even if just to acknowledge
  • Ask for clarification: If something isn’t clear, ask questions
  • Explain your reasoning: It’s okay to discuss alternatives respectfully
  • Make changes in new commits: This makes incremental review easier
  • Mark conversations as resolved: When you’ve addressed a comment

Remember that review feedback is about the code, not about you. Reviewers want to help make the contribution successful.

After Approval

Once approved, a maintainer will merge your PR. Depending on the project, you might be asked to:

  • Rebase on the latest main branch if there are conflicts
  • Update the PR title or description for clarity
  • Make final adjustments based on last-minute feedback

Common Issues

Watch out for these common pitfalls:

  • Missing sign-off: All commits must be signed off (see Sign-off Requirements)
  • Failing CI checks: Make sure all tests and checks pass before requesting review
  • Merge conflicts: Keep your branch up to date with main to avoid conflicts
  • Formatting-only changes: Submit formatting and refactoring in separate PRs from features
  • Missing documentation: User-facing changes need corresponding documentation updates

Need Help?

If you’re stuck or have questions about the PR process:

  • Ask in the PR comments—maintainers are happy to help
  • Reach out on the project’s Discourse
  • Check the project-specific contributing guide for any additional requirements

Contributing to open source can feel intimidating at first, but the community is here to support you. Don’t hesitate to ask questions.

5.5 - Grype DB

Developer guidelines when contributing to Grype DB

Getting started

This codebase is primarily Go; however, there are also Python scripts critical to the daily DB publishing process and to acceptance testing. You will need the following:

  • Python 3.11+ installed on your system (Python 3.11-3.13 supported). Consider using pyenv if you do not have a preference for managing python interpreter installations.
  • zstd binary utility if you are packaging v6+ DB schemas
  • (optional) xz binary utility if you have specifically overridden the package command options
  • uv installed for Python package and virtualenv management

To download Go tooling used for static analysis, dependent Go modules, and Python dependencies run:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make unit - Run unit tests (Go and Python)
  • make cli - Run CLI tests
  • make db-acceptance schema=<version> - Run DB acceptance tests for a schema version
  • make snapshot - Build release snapshot with all binaries and packages
  • make download-all-provider-cache - Download pre-built vulnerability data cache

Development workflows

Getting vulnerability data

In order to build a grype DB you will need a local cache of vulnerability data:

make download-all-provider-cache

This will populate the ./data directory locally with everything needed to run grype-db build (without needing to run grype-db pull).

The data pulled down is the same data used in the daily DB publishing workflow, so it should be relatively fresh.

Creating a new DB schema

  1. Create a new v# schema package in the grype repo (within pkg/db)
  2. Create a new v# schema package in the grype-db repo (use the bump-schema.py helper script) that uses the new changes from grype
  3. Modify the manager/src/grype_db_manager/data/schema-info.json to pin the last-latest version to a specific version of grype and add the new schema version pinned to the “main” branch of grype (or a development branch)
  4. Update all references in grype to use the new schema
  5. Use the Staging DB Publisher workflow to test your DB changes with grype in a flow similar to the daily DB publisher workflow

Testing with staging databases

While developing a new schema version it may be useful to get a DB built for you by the Staging DB Publisher GitHub Actions workflow. This workflow exercises the same code as the Daily DB Publisher, except that only a single schema is built and it is validated against a given development branch of grype. When these DBs are published you can point grype at the proper listing file like so:

GRYPE_DB_UPDATE_URL=https://toolbox-data.anchore.io/grype/staging-databases/listing.json grype centos:8 ...

Testing

Levels of testing

  • unit (make unit): Unit tests for both Go code in the main codebase and Python scripts in the manager/ directory. These tests focus on correctness of individual functions and components. Coverage metrics track Go test coverage.

  • cli (make cli): CLI tests for both Go and Python components. These validate that command-line interfaces work correctly with various inputs and configurations.

  • db-acceptance (make db-acceptance schema=<version>): Acceptance tests that verify a specific DB schema version works correctly with Grype. These tests build a database, run Grype scans, and validate that vulnerability matches are correct and complete.

Running tests

To run unit tests for Go code and Python scripts:

make unit

To verify that a specific DB schema version interops with Grype:

make db-acceptance schema=<version>
# Note: this may take a while... go make some coffee.

Next Steps

Understanding the Codebase

Related Projects

Getting Help

5.6 - Vunnel

Developer guidelines when contributing to Vunnel

Getting started

This project requires:

  • python (>= 3.11)
  • pip (>= 22.2)
  • uv
  • docker
  • go (>= 1.20)
  • posix shell (bash, zsh, etc… needed for the make dev “development shell”)

Once you have python and uv installed, get the project bootstrapped:

# clone grype and grype-db, which is needed for provider development
git clone git@github.com:anchore/grype.git
git clone git@github.com:anchore/grype-db.git
# note: if you already have these repos cloned, you can skip this step. However, if they
# reside in a different directory than where the vunnel repo is, then you will need to
# set the `GRYPE_PATH` and/or `GRYPE_DB_PATH` environment variables for the development
# shell to function. You can add these to a local .env file in the vunnel repo root.

# clone the vunnel repo
git clone git@github.com:anchore/vunnel.git
cd vunnel

# get basic project tooling
make bootstrap

# install project dependencies
uv sync --all-extras --dev

Pre-commit is used to help enforce static analysis checks with git hooks:

uv run pre-commit install --hook-type pre-push

Developing

Development shell

The easiest way to develop on a provider is to use the development shell, selecting the specific provider(s) you’d like to focus your development workflow on:

# Specify one or more providers you want to develop on.
# Any provider from the output of "vunnel list" is valid.
# Specify multiple as a space-delimited list:
# make dev providers="oracle wolfi nvd"
$ make dev provider="oracle"

Entering vunnel development shell...
• Configuring with providers: oracle ...
• Writing grype config: /Users/wagoodman/code/vunnel/.grype.yaml ...
• Writing grype-db config: /Users/wagoodman/code/vunnel/.grype-db.yaml ...
• Activating virtual env: /Users/wagoodman/code/vunnel/.venv ...
• Installing editable version of vunnel ...
• Building grype ...
• Building grype-db ...

Note: development builds of grype and grype-db are now available in your path.
To update these builds run 'make build-grype' and 'make build-grype-db' respectively.
To run your provider and update the grype database run 'make update-db'.
Type 'exit' to exit the development shell.

You can now run the provider you specified in the make dev command, build an isolated grype DB, and import the DB into grype:

$ make update-db
• Updating vunnel providers ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
...
[0000]  INFO configured providers parallelism=1 providers=1
[0000] DEBUG   └── oracle
[0000] DEBUG all providers started, waiting for graceful completion...
[0000]  INFO running vulnerability provider provider=oracle
[0000] DEBUG oracle:  2023-03-07 15:44:13 [INFO] running oracle provider
[0000] DEBUG oracle:  2023-03-07 15:44:13 [INFO] downloading ELSA from https://linux.oracle.com/security/oval/com.oracle.elsa-all.xml.bz2
[0019] DEBUG oracle:  2023-03-07 15:44:31 [INFO] wrote 6298 entries
[0019] DEBUG oracle:  2023-03-07 15:44:31 [INFO] recording workspace state
• Building grype-db ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
[0000]  INFO reading all provider state
[0000]  INFO building DB build-directory=./build providers=[oracle] schema=5
• Packaging grype-db ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
[0000]  INFO packaging DB from="./build" for="https://toolbox-data.anchore.io/grype/databases"
[0000]  INFO created DB archive path=build/vulnerability-db_v5_2023-03-07T20:44:13Z_405ae93d52ac4cde6606.tar.gz
• Importing DB into grype ...
Vulnerability database imported

You can now run grype that uses the newly created DB:

$ grype oraclelinux:8.4
 ✔ Pulled image
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [195 packages]
 ✔ Scanning image...       [193 vulnerabilities]
   ├── 0 critical, 25 high, 146 medium, 22 low, 0 negligible
   └── 193 fixed

NAME                        INSTALLED                FIXED-IN                    TYPE  VULNERABILITY   SEVERITY
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.26-6.el8            rpm   ELSA-2021-4384  Medium
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.36-3.el8            rpm   ELSA-2022-2092  Medium
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.36-3.el8_6.1        rpm   ELSA-2022-6778  High
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.36-5.el8            rpm   ELSA-2022-7790  Medium

# note that we're using the database we just built...
$ grype db status
Location:  /Users/wagoodman/code/vunnel/.cache/grype/5  # <--- this is the local DB we just built
...

# also note that we're using a development build of grype
$ which grype
/Users/wagoodman/code/vunnel/bin/grype

The development builds of grype and grype-db provided are derived from ../grype and ../grype-db paths relative to the vunnel project. If you want to use a different path, you can set the GRYPE_PATH and GRYPE_DB_PATH environment variables. This can be persisted by adding a .env file to the root of the vunnel project:

# example .env file in the root of the vunnel repo
GRYPE_PATH=~/somewhere/else/grype
GRYPE_DB_PATH=~/also/somewhere/else/grype-db

Rebuilding development tools

To rebuild the grype and grype-db binaries from local source, run:

make build-grype
make build-grype-db

Common commands

This project uses Make for running common development tasks:


make                  # run static analysis and unit testing
make static-analysis  # run static analysis
make unit             # run unit tests
make format           # format the codebase with black
make lint-fix         # attempt to automatically fix linting errors
...

If you want to see all of the things you can do:

make help

If you want to use a locally-editable copy of vunnel while you develop without the custom development shell:

uv pip uninstall vunnel  #... if you already have vunnel installed in this virtual env
uv pip install -e .

Snapshot tests

To ensure that the same feed state from providers produces the same set of vulnerabilities, snapshot testing is used.

Snapshot tests are run as part of ordinary unit tests, and will run during make unit.

To update snapshots, run the following pytest command. (Note that this example is for the debian provider, and the test name and path will be different for other providers):

pytest ./tests/unit/providers/debian/test_debian.py -k test_provider_via_snapshot --snapshot-update
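The core mechanic is simple: render the provider's results deterministically, compare against a stored golden file, and rewrite that file when updating. A minimal sketch of the idea (hypothetical helper; the real tests rely on pytest's snapshot machinery, not this function):

```python
import json
import tempfile
from pathlib import Path

def check_snapshot(results: list, snapshot: Path, update: bool = False) -> bool:
    """Compare results to a stored snapshot; rewrite it when updating."""
    rendered = json.dumps(results, indent=2, sort_keys=True)
    if update or not snapshot.exists():
        snapshot.write_text(rendered)  # (re)record the golden file
        return True
    return snapshot.read_text() == rendered

snap = Path(tempfile.mkdtemp()) / "debian.json"
results = [{"id": "CVE-2023-0001", "package": "openssl"}]
first = check_snapshot(results, snap, update=True)       # records the snapshot
same = check_snapshot(results, snap)                     # unchanged output matches
drift = check_snapshot([{"id": "CVE-2023-0002"}], snap)  # changed output fails
print(first, same, drift)  # True True False
```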

Architecture

For detailed information about Vunnel’s architecture, including:

  • Provider abstraction and design
  • Workspace conventions
  • Vulnerability schemas (OS, NVD, GitHub, OSV)
  • Provider configuration options
  • Integration with Grype DB

See the Vunnel Architecture page.

Adding a new provider

“Vulnerability matching” is the process of taking a list of vulnerabilities and matching them against a list of packages. A provider in this repo is responsible for the “vulnerability” side of this process. The “package” side is handled by Syft. A prerequisite for adding a new provider is that Syft can catalog the package types that the provider is feeding vulnerability data for, so Grype can perform the matching from these two sources.

To add a new provider, you will need to create a new provider class under /src/vunnel/providers/<name> that inherits from provider.Provider and implements:

  • name(): a unique and semantically-useful name for the provider (same as the name of the directory)
  • update(): downloads and processes the raw data, writing all results with self.results_writer()

All results must conform to a particular schema; today there are a few kinds:

  • os: a generic operating system vulnerability (e.g. redhat, debian, ubuntu, alpine, wolfi, etc.)
  • nvd: tailored to describe vulnerabilities from the NVD
  • github-security-advisory: tailored to describe vulnerabilities from GitHub
  • osv: tailored to describe vulnerabilities from the aggregated OSV vulnerability database

Once the provider is implemented, you will need to wire it up into the application in a couple places:

  • add a new entry under the dispatch table in src/vunnel/providers/__init__.py mapping your provider name to the class
  • add the provider configuration to the application configuration under src/vunnel/cli/config.py (specifically the Providers dataclass)

For a more detailed example on the implementation details of a provider see the “example” provider.
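The overall shape of a provider can be sketched in isolation. Everything below is a stand-in: the Provider stub and results list mimic, but are not, Vunnel's real provider.Provider base class and results_writer() API; the “example” provider remains the authoritative template:

```python
# Self-contained sketch only: this Provider stub and ToyProvider stand in
# for vunnel's real provider.Provider base class and results_writer() API.
from abc import ABC, abstractmethod


class Provider(ABC):
    @classmethod
    @abstractmethod
    def name(cls) -> str: ...

    @abstractmethod
    def update(self) -> None: ...


class ToyProvider(Provider):
    """Downloads raw advisories and writes normalized results."""

    def __init__(self) -> None:
        self.results: list = []

    @classmethod
    def name(cls) -> str:
        # matches the directory name, e.g. src/vunnel/providers/toy
        return "toy"

    def _download(self) -> list:
        # a real provider would fetch from the upstream data source here
        return [{"cve": "CVE-2023-0001", "fixed_in": "1.2.3"}]

    def update(self) -> None:
        for raw in self._download():
            # normalize into one of the supported result schemas
            # ("os", "nvd", "github-security-advisory", or "osv")
            self.results.append({"schema": "os", "identifier": raw["cve"]})


p = ToyProvider()
p.update()
print(p.name(), len(p.results))  # toy 1
```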

Validating this provider has different implications depending on what is being added. For example, if the provider is adding a new vulnerability source but is ultimately using an existing schema to express results then there may be very little to do! If you are adding a new schema, then the downstream data pipeline will need to be altered to support reading data in the new schema.

For an existing schema

1. Fork Vunnel and add the new provider.

Take a look at the example provider in the example directory. You are encouraged to copy example/awesome/* into src/vunnel/providers/YOURPROVIDERNAME/ and modify it to fit the needs of your new provider; however, this is not required:

# from the root of the vunnel repo
cp -a example/awesome src/vunnel/providers/YOURPROVIDERNAME

See the “example” provider README as well as the code comments for steps and considerations to take when implementing a new provider.

Once implemented, you should be able to see the new provider in the vunnel list command and run it with vunnel run <name>. The entries should be written to a specific namespace in the downstream DB, as indicated in the record. This namespace is needed when making Grype changes.

While developing the provider, consider using the make dev provider="<your-provider-name>" developer shell to run the provider and manually test the results against grype.

At this point you can optionally open a Vunnel PR with your new provider and a Maintainer can help with the next steps. Or if you’d like to get PR changes merged faster you can continue with the next steps.

2. Fork Grype and map distro type to a specific namespace.

This step might not be needed depending on the provider.

Common reasons for needing Grype changes include:

If you’re using the developer shell (make dev ...) then you can run make build-grype to get a build of grype with your changes.

3. In Vunnel: add a new test case to tests/quality/config.yaml for the new provider.

The configuration maps a provider to test to specific images to test with, for example:

---
- provider: amazon
  images:
    - docker.io/amazonlinux:2@sha256:1301cc9f889f21dc45733df9e58034ac1c318202b4b0f0a08d88b3fdc03004de
    - docker.io/anchore/test_images:vulnerabilities-amazonlinux-2-5c26ce9@sha256:cf742eca189b02902a0a7926ac3fbb423e799937bf4358b0d2acc6cc36ab82aa

These images are used to test the provider on PRs and nightly builds to verify the specific provider is working. Always use both the image tag and digest for all container image entries. Pick an image that has a good representation of the package types that your new provider is adding vulnerability data for.

4. In Vunnel: swap the tools to your Grype branch in tests/quality/config.yaml.

If you want to see PR quality gate checks pass with your specific Grype changes (if you have any), you can update the yardstick.tools[*] entries for grype to use a version that points to your fork (e.g. your-fork-username/grype@main). If you don’t have any Grype changes, you can skip this step.

5. In Vunnel: add new “vulnerability match labels” to annotate True and False positive findings with Grype.

In order to evaluate the quality of the new provider, we need to know what the expected results are. This is done by annotating Grype results with “True Positive” labels (good results) and “False Positive” labels (bad results). We’ll use Yardstick to do this:

$ cd tests/quality

# capture results with the development version of grype (from your fork)
$ make capture provider=<your-provider-name>

# list your results
$ uv run yardstick result list | grep grype

d415064e-2bf3-4a1d-bda6-9c3957f2f71a  docker.io/anc...  grype@v0.58.0             2023-03...
75d1fe75-0890-4d89-a497-b1050826d9f6  docker.io/anc...  grype[custom-db]@bdcefd2  2023-03...

# use the "grype[custom-db]" result UUID and explore the results and add labels to each entry
$ uv run yardstick label explore 75d1fe75-0890-4d89-a497-b1050826d9f6

# You can use the yardstick TUI to label results:
# - use "T" to label a row as a True Positive
# - use "F" to label a row as a False Positive
# - Ctrl-Z to undo a label
# - Ctrl-S to save your labels
# - Ctrl-C to quit when you are done

Later we’ll open a PR in the vulnerability-match-labels repo to persist these labels. In the meantime we can iterate locally with the labels we’ve added.

6. In Vunnel: run the quality gate.

cd tests/quality

# runs your specific provider to gather vulnerability data, builds a DB, and runs grype with the new DB
make capture provider=<your-provider-name>

# evaluate the quality gate
make validate

This uses the latest Grype DB release to build a DB containing only data from the new provider, and runs the specified Grype version against that DB.

You are looking for a passing run before continuing further.

7. Open a vulnerability-match-labels repo PR to persist the new labels.

Vunnel uses the labels in the vulnerability-match-labels repo via a git submodule. We’ve already added labels locally within this submodule in an earlier step. To persist these labels we need to push them to a fork and open a PR:

# fork the github.com/anchore/vulnerability-match-labels repo, but you do not need to clone it...

# from the Vunnel repo...
$ cd tests/quality/vulnerability-match-labels

$ git remote add fork git@github.com:your-fork-name/vulnerability-match-labels.git
$ git checkout -b 'add-labels-for-<your-provider-name>'
$ git status

# you should see changes from the labels/ directory for your provider that you added

$ git add .
$ git commit -m 'add labels for <your-provider-name>'
$ git push fork add-labels-for-<your-provider-name>

At this point you can open a PR in the vulnerability-match-labels repo.

Note: you will not be able to open a Vunnel PR that passes PR checks until the labels are merged into the vulnerability-match-labels repo.

Once the PR is merged in the vulnerability-match-labels repo you can update the submodule in Vunnel to point to the latest commit in the vulnerability-match-labels repo.

cd tests/quality

git submodule update --remote vulnerability-match-labels

8. In Vunnel: open a PR with your new provider.

The PR will also run all of the same quality gate checks that you ran locally.

If you have Grype changes, you should also create a PR for that as well. The Vunnel PR will not pass PR checks until the Grype PR is merged and the tests/quality/config.yaml file is updated to point back to the latest Grype version.

For a new schema

This is the same process as listed above with a few additional steps:

  1. You will need to add the new schema to the Vunnel repo in the schemas directory.
  2. Grype DB will need to be updated to support the new schema in the pkg/provider/unmarshal and pkg/process/v* directories.
  3. The Vunnel tests/quality/config.yaml file will need to be updated to use a development grype-db.version pointing to your fork.
  4. The final Vunnel PR will not be able to be merged until the Grype DB PR is merged and the tests/quality/config.yaml file is updated to point back to the latest Grype DB version.

Contributing improvements

Finding refactoring opportunities

Looking to help out with improving the code quality of Vunnel, but not sure where to start?

The best way is to look for issues with the refactor label.

More general ways would be to use radon to search for complexity and maintainability issues:

$ radon cc src --total-average -nb
src/vunnel/provider.py
    M 115:4 Provider._on_error - B
src/vunnel/providers/alpine/parser.py
    M 73:4 Parser._download - C
    M 178:4 Parser._normalize - C
    M 141:4 Parser._load - B
    C 44:0 Parser - B
src/vunnel/providers/amazon/parser.py
    M 66:4 Parser._parse_rss - C
    C 164:0 JsonifierMixin - C
    M 165:4 JsonifierMixin.json - C
    C 32:0 Parser - B
    M 239:4 PackagesHTMLParser.handle_data - B
...

The output of radon indicates the type (M=method, C=class, F=function), the path/name, and an A-F grade. Anything that’s not an A is worth taking a look at.

Another approach is to use wily:

$ wily build
...
$ wily rank
-----------Rank for Maintainability Index for bdb4983 by Alex Goodman on 2022-12-25.------------
╒═════════════════════════════════════════════════╤═════════════════════════╕
│ File                                            │   Maintainability Index │
╞═════════════════════════════════════════════════╪═════════════════════════╡
│ src/vunnel/providers/rhel/parser.py             │                 21.591  │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ src/vunnel/providers/ubuntu/parser.py           │                 21.6144 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/providers/github/test_github.py      │                 35.3599 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/utils/test_oval_v2.py                │                 36.3388 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ src/vunnel/providers/debian/parser.py           │                 37.3723 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/utils/test_fdb.py                    │                 38.6926 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/providers/sles/test_sles.py          │                 41.6602 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/providers/ubuntu/test_ubuntu.py      │                 43.1323 │
├─────────────────────────────────────────────────┼─────────────────────────┤
...

Ideally we should try to get wily diff output into the CI pipeline and post it as a sticky PR comment to show regressions (and potentially fail the CI run).

Adding type hints

This codebase has been ported from another repo that did not have any type hints. This is OK, though ideally over time this should be corrected as new features are added and bug fixes are made.

We use mypy today for static type checking, however, the ported code has been explicitly ignored (see pyproject.toml).

If you want to make enhancements in this area consider using automated tooling such as pytype to generate types via inference into .pyi files and later merge them into the codebase with merge-pyi.

Alternatively, a tool like MonkeyType can be used to generate static types from runtime data and incorporate them into the codebase.
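Incremental typing mostly means annotating signatures as you touch the code. A small, generic illustration (not from the Vunnel codebase):

```python
# Before: def severity_rank(severity): ...
# After, with explicit hints that mypy can check:
def severity_rank(severity: str) -> int:
    """Map a severity label to a sortable rank (-1 for unknown labels)."""
    order = {"negligible": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}
    return order.get(severity.lower(), -1)

print(severity_rank("High"))  # 3
```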

Next Steps

Understanding the Codebase

Finding Work

Getting Help

5.7 - Grant

Developer guidelines when contributing to Grant

Getting started

In order to test and develop in the Grant repo you will need the following dependencies installed:

  • Golang
  • Docker
  • make

Initial setup

Run once after cloning to install development tools:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make unit - Run unit tests
  • make test - Run all tests
  • make snapshot - Build release snapshot with all binaries and packages (also available as make build)
  • make generate - Generate SPDX license index and license patterns

Testing

Levels of testing

  • unit (make unit): The default level of testing, distributed throughout the repo. Any _test.go file that does not reside somewhere within the /tests directory is a unit test. These tests focus on the correctness of functionality in depth. Test coverage metrics only consider unit tests and no other forms of testing.

  • integration (make test): located in tests/integration_test.go, these tests focus on policy loading, license evaluation, and core library behavior. They test the interaction between different components like policy parsing, license matching with glob patterns, and package evaluation logic.

  • cli (part of make test): located in tests/cli/, these are tests that test the correctness of application behavior from a snapshot build. These tests execute the actual Grant binary and verify command output, exit codes, and behavior of commands like check, list, and version.

Testing conventions

  • Unit tests should focus on correctness of individual functions and components
  • Integration tests validate that core library components work together correctly (policy evaluation, license matching, etc.)
  • CLI tests ensure user-facing commands produce expected output and behavior
  • Current coverage threshold is 8% (see Taskfile.yaml)
  • Use table-driven tests where appropriate to test multiple scenarios

Linting

You can run the linter for the project by running:

make lint

This checks code formatting with gofmt and runs golangci-lint checks.

To automatically fix linting issues:

make lint-fix

Code generation

Grant generates code and data files that need to be kept in sync with external sources:

What gets generated:

  • SPDX License Index - Up-to-date list of license identifiers from the SPDX project for license identification and validation
  • License File Patterns - Generated patterns to identify license files in scanned directories

When to regenerate:

Run code generation after:

  • The SPDX license list has been updated
  • Adding new license file naming patterns
  • Contributing changes to license detection logic

Generation commands:

  • make generate - Run all generation tasks
  • make generate-spdx-licenses - Download and generate latest SPDX license list
  • make generate-license-patterns - Generate license file patterns (depends on SPDX license index)

After running generation commands, review the changes carefully and commit them as part of your pull request.

Package structure

Grant is organized into two main areas: the public library API and the CLI application. For detailed API documentation, see the Grant Go package reference.

grant/ - Public Library API

The top-level grant/ package is the public library that other projects can import and use. This is what you’d reference with import "github.com/anchore/grant/grant".

This package contains the core functionality:

  • License evaluation and matching
  • Policy loading and validation
  • Package analysis and filtering

Most contributions to core Grant functionality belong in this package.

cmd/grant/ - CLI Application

The CLI application is built on top of the grant/ library and contains application-specific code:

cmd/grant/
├── cli/            # Command wiring and application setup
│   ├── command/    # CLI command implementations (list, check, etc.)
│   ├── internal/   # Internal command implementations
│   ├── option/     # Command flags and configuration options
│   └── tui/        # Terminal UI and event handlers
└── main.go         # Application entrypoint

Contributions to CLI features, command behavior, or user interface improvements belong in this package.

Next Steps

Understanding the Codebase

Contributing Your Work

Finding Work

Getting Help

5.8 - Sign-off Commits

How to sign off commits with the Developer’s Certificate of Origin

Sign off your work

All commits require a simple sign-off line to confirm you have the right to contribute your code. This is a standard practice in open source called the Developer Certificate of Origin (DCO).

How to sign off

The easiest way is to use the -s or --signoff flag when committing:

git commit -s -m "your commit message"

This automatically adds a sign-off line to your commit message:

Signed-off-by: Your Name <your.email@example.com>

Tip: Git’s format.signoff setting applies only to patches created with git format-patch, not to regular commits. To sign off routinely without typing -s each time, define an alias instead:

git config --global alias.cs "commit --signoff"

Then git cs -m "your commit message" commits with a sign-off.

Verify your sign-off

To check that your commit includes the sign-off, look at the log output:

git log -1

You should see the Signed-off-by: line at the end of your commit message:

commit 37cec170e4ab283bb73d958f2036ee5a07e7fde7
Author: Your Name <your.email@example.com>
Date:   Sat Aug 1 11:27:13 2020 -0400

    your commit message

    Signed-off-by: Your Name <your.email@example.com>

Why we require sign-off

In plain English: By adding a sign-off line, you’re confirming that:

  • You wrote the code yourself, OR
  • You have permission to submit it, AND
  • You’re okay with it being released under the project’s open source license

This protects both you and the project. It’s a simple legal formality that takes just a few seconds to add to each commit.

All contributions to this project are licensed under the Apache License Version 2.0.

Adding sign-off to existing commits

If you’ve already committed without a sign-off (easy to do!), you can add it retroactively.

For your most recent commit

git commit --amend --signoff

This updates your last commit to include the sign-off line.

For older commits

If you need to add sign-off to commits further back in your history:

git rebase --signoff HEAD~N

Replace N with the number of commits you need to sign. For example, HEAD~3 signs off the last 3 commits.
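A throwaway demonstration of the whole flow: three unsigned commits in a temporary repository, then git rebase --signoff HEAD~2 rewrites the newest two with the trailer (the file name and messages are placeholders).

```shell
# Demo: retroactively sign off the last 2 of 3 commits in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Your Name"
git config user.email "your.email@example.com"
for n in 1 2 3; do
  echo "$n" >> file.txt
  git add file.txt
  git commit -q -m "change $n"
done
git rebase --signoff HEAD~2            # rewrite the last 2 commits with a sign-off
git log -2 | grep -c "Signed-off-by"   # prints 2
```

The oldest commit is untouched, so only the two rebased commits carry the Signed-off-by trailer.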

Note: If you’ve already pushed these commits, you’ll need to force-push after rebasing:

git push --force-with-lease

If you’re new to rebasing

Rebasing rewrites commit history, which can be tricky if you’re not familiar with it. If you run into issues:

  1. Ask for help in the PR comments
  2. Or, create a fresh branch from the latest main and cherry-pick your changes
  3. The maintainers can also help you fix sign-off issues during the review process

What the DCO means (technical details)

The Developer Certificate of Origin (DCO) is a legal attestation that you have the right to submit your contribution under the project’s license. Here’s the full text:

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

   (a) The contribution was created in whole or in part by me and I
       have the right to submit it under the open source license
       indicated in the file; or

   (b) The contribution is based upon previous work that, to the best
       of my knowledge, is covered under an appropriate open source
       license and I have the right under that license to submit that
       work with modifications, whether created in whole or in part
       by me, under the same open source license (unless I am
       permitted to submit under a different license), as indicated
       in the file; or

   (c) The contribution was provided directly to me by some other
       person who certified (a), (b) or (c) and I have not modified
       it.

   (d) I understand and agree that this project and the contribution
       are public and that a record of the contribution (including all
       personal information I submit with it, including my sign-off) is
       maintained indefinitely and may be redistributed consistent with
       this project or the open source license(s) involved.

The DCO protects both contributors and the project by creating a clear record of contribution rights and licensing terms.

5.9 - SBOM Action

Developer guidelines when contributing to sbom-action

Getting started

To test and develop in the sbom-action repo, you need the following dependencies installed:

  • Node.js (>= 20.11.0)
  • npm
  • Docker

Initial setup

Run once after cloning to install dependencies and development tools:

npm install

This command installs all dependencies and sets up Husky git hooks that automatically format code and rebuild the distribution files before commits.

Useful commands

Common commands for ongoing development:

  • npm run build - Check TypeScript compilation (no output files)
  • npm run lint - Check code with ESLint
  • npm run format - Auto-format code with Prettier
  • npm run format-check - Check code formatting without changes
  • npm run package - Build distribution files with ncc (outputs to dist/)
  • npm test - Run Jest tests
  • npm run all - Complete validation suite (build + format + lint + package + test)

Testing

The sbom-action uses Jest for testing. To run the test suite:

npm test

The CI workflow handles any additional setup automatically (like Docker registries). For local development, you just need to install dependencies and run tests.

Test types

The test suite includes two main categories:

  • Unit tests (e.g., tests/GithubClient.test.ts, tests/SyftGithubAction.test.ts): Test individual components in isolation by mocking GitHub Actions context and external dependencies.

  • Integration tests (tests/integration/): Execute the full action workflow with real Syft invocations against test fixtures in tests/fixtures/ (npm-project, yarn-project). These tests use snapshot testing to validate SBOM output and GitHub dependency snapshot uploads.

Snapshot testing

Integration tests extensively use Jest’s snapshot testing to validate SBOM output. When you run integration tests, Jest compares the generated SBOMs against saved snapshots in tests/integration/__snapshots__/.

The tests normalize dynamic values (timestamps, hashes, IDs) before comparison to ensure consistent snapshots across runs.
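The normalization step can be illustrated outside Jest. This sketch masks a volatile field (the "ts" timestamp key is made up for the example; the real tests normalize timestamps, hashes, and IDs inside TypeScript test code) so two runs compare equal:

```shell
# Illustration of snapshot normalization (not the action's actual code):
# mask a volatile timestamp field so two runs of the same scan match.
printf '{"ts":"2024-01-01T00:00:00Z","pkgs":["a","b"]}\n' > run1.json
printf '{"ts":"2025-06-02T09:30:00Z","pkgs":["a","b"]}\n' > run2.json
mask='s/"ts":"[^"]*"/"ts":"<normalized>"/'
sed "$mask" run1.json > run1.norm
sed "$mask" run2.json > run2.norm
diff run1.norm run2.norm && echo "snapshots match"   # prints "snapshots match"
```

Without the masking step, the differing timestamps would make every comparison fail even though the package content is identical.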

Updating snapshots:

When you intentionally change SBOM output format or content, update the snapshots:

npm run test:update-snapshots

Development workflow

Pre-commit hooks

The sbom-action uses Husky to run automated checks before each commit:

  1. Code formatting - Prettier formats staged TypeScript files
  2. Distribution rebuild - Runs npm run package to rebuild dist/ directory
  3. Auto-staging - Automatically stages updated dist/ files

The hook is defined in .husky/pre-commit and runs the precommit npm script.

Code organization

The sbom-action consists of three GitHub Actions, each with its own entry point:

Main action (action.yml):

  • Entry point: src/runSyftAction.ts
  • Compiled to: dist/runSyftAction/index.js
  • Generates SBOMs and uploads as workflow artifacts and release assets

Publish SBOM sub-action (publish-sbom/action.yml):

  • Entry point: src/attachReleaseAssets.ts
  • Compiled to: dist/attachReleaseAssets/index.js
  • Uploads existing SBOMs to GitHub releases

Download Syft sub-action (download-syft/action.yml):

  • Entry point: src/downloadSyft.ts
  • Compiled to: dist/downloadSyft/index.js
  • Downloads and caches Syft binary

Key modules:

  • src/Syft.ts - Wraps Syft execution and configuration
  • src/SyftVersion.ts - Manages Syft version resolution
  • src/github/SyftDownloader.ts - Handles Syft binary downloads
  • src/github/SyftGithubAction.ts - Core action orchestration logic
  • src/github/GithubClient.ts - GitHub API interactions
  • src/github/Executor.ts - Command execution wrapper

GitHub Actions specifics

Debugging Actions

Enable detailed debug logging by setting a repository secret:

  1. Go to your repository Settings → Secrets and variables → Actions
  2. Add a new secret: ACTIONS_STEP_DEBUG = true

This enables debug logging from the @actions/toolkit libraries used throughout the action.

See the GitHub documentation for more details.

Testing Actions locally

CI validation:

The repository includes comprehensive CI workflows in .github/workflows/test.yml that:

  • Test on Ubuntu and Windows
  • Validate distribution files are up-to-date
  • Test scanning directories and container images
  • Verify all SBOM formats
  • Test sub-actions (download-syft, publish-sbom)

Manual testing:

Test changes in your own workflows using the repository name and branch:

- uses: your-username/sbom-action@your-branch
  with:
    path: ./

Or test locally using act if you have it installed.

Action runtime

The sbom-action uses the Node.js 20 runtime (runs.using: node20 in action.yml). This runtime is provided by GitHub Actions and doesn’t require separate installation in workflows.

Next Steps

Understanding the Codebase

Contributing Your Work

Finding Work

Getting Help

5.10 - Security Policy

How to report security vulnerabilities in Anchore OSS projects

Security is a top priority for Anchore’s open source projects. We appreciate the security research community’s efforts in responsibly disclosing vulnerabilities to help keep our users safe.

Supported Versions

Security updates are applied only to the most recent release of each project. We strongly recommend staying up to date with the latest versions to ensure you have the most recent security patches and fixes.

If you’re using an older version and are concerned about a security issue, please upgrade to the latest release. For questions about specific versions, reach out on Discourse.

Reporting a Vulnerability

Found a security vulnerability? Please report security issues privately by emailing security@anchore.com rather than creating a public GitHub issue. This gives us time to fix the problem and protect users before details become public.

What to Include in Your Report

To help us understand and address the issue quickly, please include as much detail as you can:

  • Description: A clear description of the vulnerability and its potential impact
  • Steps to reproduce: Detailed steps to recreate the issue
  • Affected versions: Which versions of the tool are vulnerable
  • Proof of concept: If available, a minimal example demonstrating the issue
  • Suggested mitigation: If you have ideas for how to fix or mitigate the issue
  • Urgency level: Your assessment of the severity (Critical, High, Medium, or Low)

Don’t worry if you can’t provide every detail; partial reports are still valuable and welcome. We’ll work with you to understand the issue.

What to Expect

After you submit a report:

  1. Acknowledgment: You’ll receive an initial response confirming we’ve received your report
  2. Assessment: The security team will investigate and assess the severity and impact
  3. Updates: We’ll keep you informed of our progress and any questions we have
  4. Resolution: Once a fix is developed, if necessary, we’ll coordinate disclosure timing with you
  5. Credit: With your permission, we’ll acknowledge your responsible disclosure in release notes

Disclosure Policy

Anchore follows a coordinated disclosure process:

  1. Security issues are addressed privately until a fix is available
  2. Fixes are released as quickly as possible based on severity
  3. Security advisories are published after fixes are released
  4. Credit is given to security researchers who report responsibly

Thank you for helping keep Anchore’s open source projects and their users secure.

5.11 - Code of Conduct

Community standards and guidelines for respectful collaboration

All Anchore open source projects follow the Contributor Covenant Code of Conduct.

Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

Our Standards

Examples of behavior that contributes to a positive environment for our community include:

  • Demonstrating empathy and kindness toward other people
  • Being respectful of differing opinions, viewpoints, and experiences
  • Giving and gracefully accepting constructive feedback
  • Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
  • Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

  • The use of sexualized language or imagery, and sexual attention or advances of any kind
  • Trolling, insulting or derogatory comments, and personal or political attacks
  • Public or private harassment
  • Publishing others’ private information, such as a physical or email address, without their explicit permission
  • Other conduct which could reasonably be considered inappropriate in a professional setting

Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official email address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at opensource@anchore.com.

All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

1. Warning

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: The original post will be edited or removed and a warning issued to the offender.

2. Temporary Ban

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

3. Permanent Ban

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

5.12 - Scan Action

Developer guidelines when contributing to scan-action

Getting started

To test and develop in the scan-action repo, you need the following dependencies installed:

  • Node.js (>= 20.11.0)
  • npm
  • Docker

Initial setup

Run once after cloning to install dependencies and development tools:

npm install

This command installs all dependencies and sets up Husky git hooks that automatically format code and rebuild the distribution files before commits.

Useful commands

Common commands for ongoing development:

  • npm run build - Bundle with ncc and normalize line endings
  • npm run lint - Check code with ESLint
  • npm run prettier - Auto-format code with Prettier
  • npm test - Complete test suite (lint + install Grype + build + run tests)
  • npm run run-tests - Run Jest tests only
  • npm run test:update-snapshots - Update test expectations (lint + install Grype + run tests with snapshot updates)
  • npm run audit - Run security audit on production dependencies
  • npm run update-deps - Update dependencies with npm-check-updates

Testing

Tests require Grype to be installed locally and a Docker registry for integration tests. Set up your test environment:

Install Grype locally:

npm run install-and-update-grype

Start local Docker registry:

docker run -d -p 5000:5000 --name registry registry:2

Tests automatically disable Grype database auto-update and validation to ensure consistent test results.
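Grype maps its configuration keys to environment variables, so a test shell can pin database behavior with exports along these lines (the variable names follow Grype’s config-to-env convention; confirm against the repo’s actual test setup before relying on them):

```shell
# Hedged sketch: pin Grype DB behavior for reproducible test runs.
export GRYPE_DB_AUTO_UPDATE=false    # config key db.auto-update
export GRYPE_DB_VALIDATE_AGE=false   # config key db.validate-age
echo "auto-update: $GRYPE_DB_AUTO_UPDATE"   # prints "auto-update: false"
```

Freezing the database this way keeps vulnerability counts stable across test runs, which is what makes snapshot-style assertions possible.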

CI environment:

The GitHub Actions test workflow automatically:

  • Starts a Docker registry service on port 5000
  • Tests on Ubuntu, Windows, and macOS
  • Validates across multiple configurations (image/path/sbom sources, output formats)

Test types

The scan-action uses Jest for testing with several categories:

  • Unit tests (e.g., tests/action.test.js, tests/grype_command.test.js): Test individual functions in isolation by mocking GitHub Actions context and external dependencies.

  • Integration tests: Execute the full action workflow with real Grype invocations. These tests validate end-to-end functionality including downloading Grype, running scans, and generating output files.

  • SARIF validation tests (tests/sarif_output.test.js): Validate SARIF report structure and content using the @microsoft/jest-sarif library to ensure consistent output format and compliance with the SARIF specification.

  • Distribution tests (tests/dist.test.js): Verify that the committed dist/ directory is up-to-date with the source code.

Test fixtures:

The tests/fixtures/ directory contains sample projects and files for testing:

  • npm-project/ - Sample npm project for directory scanning
  • yarn-project/ - Sample yarn project for directory scanning
  • test_sbom.spdx.json - Sample SBOM file for SBOM scanning tests

SARIF output testing

The SARIF output tests validate report structure using the @microsoft/jest-sarif library. Tests normalize dynamic values (versions, fully qualified names) before validation to ensure consistent results across test runs.

The tests validate that:

  • Generated SARIF reports are valid according to the SARIF specification
  • Expected vulnerabilities are detected in test fixtures
  • Output structure remains consistent across runs

If you need to update test expectations, run:

npm run test:update-snapshots

Development workflow

Pre-commit hooks

The scan-action uses Husky to run automated checks before each commit:

  1. Code formatting - lint-staged runs Prettier on staged JavaScript files
  2. Distribution rebuild - Runs npm run precommit to rebuild dist/ directory
  3. Auto-staging - Automatically stages updated dist/ files

The hook is defined in .husky/pre-commit and ensures that distribution files are always synchronized with source code.

Code organization

The scan-action has a straightforward single-file architecture:

Main action (action.yml):

  • Entry point: index.js
  • Compiled to: dist/index.js
  • Downloads Grype, runs vulnerability scans, generates reports

Download Grype sub-action (download-grype/action.yml):

  • Entry point: Reuses dist/index.js with run: "download-grype" input
  • Provides standalone Grype download and caching
  • Returns cmd output with path to Grype binary

Key functions in index.js:

  • downloadGrype() - Downloads Grype using install script
  • downloadGrypeWindowsWorkaround() - Windows-specific download logic
  • installGrype() - Installs and caches Grype binary
  • sourceInput() - Validates mutually exclusive inputs (image/path/sbom)
  • run() - Main action execution flow
  • Command construction and output handling

GitHub Actions specifics

Debugging Actions

Enable detailed debug logging by setting a repository secret:

  1. Go to your repository Settings → Secrets and variables → Actions
  2. Add a new secret: ACTIONS_STEP_DEBUG = true

This enables debug logging from the @actions/toolkit libraries used throughout the action.

See the GitHub documentation for more details.

Testing Actions locally

CI validation:

The repository includes comprehensive CI workflows in .github/workflows/test.yml that:

  • Test on Ubuntu, Windows, and macOS
  • Validate distribution files are up-to-date
  • Test scanning images, directories, and SBOM files
  • Verify all output formats (SARIF, JSON, CycloneDX, table)
  • Test download-grype sub-action

Manual testing:

Test changes in your own workflows using the repository name and branch:

- uses: <your-username>/scan-action@<your-branch>
  with:
    image: "alpine:latest"

Or test locally using act if you have it installed.

Action runtime

The scan-action uses the Node.js 20 runtime (runs.using: node20 in action.yml). This runtime is provided by GitHub Actions and doesn’t require separate installation in workflows.

Next Steps

Understanding the Codebase

Contributing Your Work

Finding Work

Getting Help

5.13 - Docs (this site!)

Style guide for writing Anchore OSS documentation

This style guide is for the Anchore OSS documentation. The style guide helps contributors to write documentation that readers can understand quickly and correctly.

The Anchore OSS docs aim for:

  • Consistency in style and terminology, so that readers can expect certain structures and conventions. Readers don’t have to keep re-learning how to use the documentation or questioning whether they’ve understood something correctly.
  • Clear, concise writing so that readers can quickly find and understand the information they need.

Use standard American spelling

Use American spelling rather than Commonwealth or British spelling. Refer to Merriam-Webster’s Collegiate Dictionary, Eleventh Edition.

Use capital letters sparingly

Some hints:

  • Capitalize only the first letter of each heading within the page. (That is, use sentence case.)
  • Capitalize (almost) every word in page titles. (That is, use title case.) The little words like “and”, “in”, etc., don’t get a capital letter.
  • In page content, use capitals only for brand names, like Syft, Anchore, and so on. See more about brand names below.
  • Don’t use capital letters to emphasize words.

Spell out abbreviations and acronyms on first use

Always spell out the full term for every abbreviation or acronym the first time you use it on the page. Don’t assume people know what an abbreviation or acronym means, even if it seems like common knowledge.

Example: “To run Grype locally in a virtual machine (VM)”

Use contractions if you want to

For example, it’s fine to write “it’s” instead of “it is”.

Use full, correct brand names

When referring to a product or brand, use the full name. Capitalize the name as the product owners do in the product documentation. Do not use abbreviations even if they’re in common use, unless the product owner has sanctioned the abbreviation.

Use this | Instead of this
Anchore | anchore
Kubernetes | k8s
GitHub | github

Be consistent with punctuation

Use punctuation consistently within a page. For example, if you use a period (full stop) after every item in a list, then use a period after every item in all other lists on the page.

Check the other pages if you’re unsure about a particular convention.

Examples:

  • Most pages in the Anchore OSS docs use a period at the end of every list item.
  • There is no period at the end of the page subtitle and the subtitle need not be a full sentence. (The subtitle comes from the description in the front matter of each page.)

Use active voice rather than passive voice

Passive voice is often confusing, as it’s not clear who should perform the action.

Use active voice | Instead of passive voice
You can configure Grype to | Grype can be configured to
Add the directory to your path | The directory should be added to your path

Use simple present tense

Avoid future tense (“will”) and complex syntax such as conjunctive mood (“would”, “should”).

Use simple present tense | Instead of future tense or complex syntax
The following command provisions a virtual machine | The following command will provision a virtual machine
If you add this configuration element, the system is open to the Internet | If you added this configuration element, the system would be open to the Internet

Exception: Use future tense if it’s necessary to convey the correct meaning, though this is rarely needed.

Address the audience directly

Using “we” in a sentence can be confusing, because the reader may not know whether they’re part of the “we” you’re describing.

For example, compare the following two statements:

  • “In this release we’ve added many new features.” (Here “we” refers to the project team.)
  • “In this tutorial we build a flying saucer.” (Here “we” is meant to include the reader, but the reader may not realize that.)

The words “the developer” or “the user” can be ambiguous. For example, if the reader is building a product that also has users, then the reader does not know whether you’re referring to the reader or the users of their product.

Address the reader directly | Instead of “we”, “the user”, or “the developer”
Include the directory in your path | The user must make sure that the directory is included in their path
In this tutorial you build a flying saucer | In this tutorial we build a flying saucer

Use short, simple sentences

Keep sentences short. Short sentences are easier to read than long ones. Below are some tips for writing short sentences.

Use fewer words instead of many words that convey the same meaning

Use this | Instead of this
You can use | It is also possible to use
You can | You are able to

Split a single long sentence into two or more shorter ones

Use this | Instead of this
You do not need a running GKE cluster. The deployment process creates a cluster for you | You do not need a running GKE cluster, because the deployment process creates a cluster for you
Use a list instead of a long sentence showing various options

Use this:

To scan a container for vulnerabilities:

  1. Package the software in an OCI container.
  2. Upload the container to an online registry.
  3. Run Grype with the container name as a parameter.

Instead of this:

To scan a container, you must package the software in an OCI container, upload the container to an online registry, and run Grype with the container name as a parameter.

Avoid too much text styling

Use bold text when referring to UI controls or other UI elements.

Use code style for:

  • filenames, directories, and paths
  • inline code and commands
  • object field names

Avoid using bold text or capital letters for emphasis. If a page has too much textual highlighting it becomes confusing and even annoying.

Use angle brackets for placeholders

For example:

  • export SYFT_PARALLELISM=<number>
  • --email <your email address>

Style your images

The Anchore OSS docs recognize Bootstrap classes to style images and other content.

The following code snippet shows the typical styling that makes an image show up nicely on the page:

<!-- for wide images -->
<img src="/images/my-image.png" alt="My image" class="mt-3 mb-3 border rounded" />

<!-- for tall images -->
<img src="/images/my-image.png" alt="My image" class="mt-3 mb-3 border rounded" style="width: 100%; max-width: 30em" />

To see some examples of styled images, take a look at the Kubeflow OAuth setup page.

For more help, see the guide to Bootstrap image styling and the Bootstrap utilities, such as borders.

A detailed style guide

The Google Developer Documentation Style Guide contains detailed information about specific aspects of writing clear, readable, succinct documentation for a developer audience.

Next steps

6 - Reference

Reference for Anchore OSS Tools

6.1 - Syft Command Line Reference

Generate a packaged-based Software Bill Of Materials (SBOM) from container images and filesystems

Usage:
  syft [SOURCE] [flags]
  syft [command]

Examples:
  syft scan alpine:latest                                a summary of discovered packages
  syft scan alpine:latest -o json                        show all possible cataloging details
  syft scan alpine:latest -o cyclonedx                   show a CycloneDX formatted SBOM
  syft scan alpine:latest -o cyclonedx-json              show a CycloneDX JSON formatted SBOM
  syft scan alpine:latest -o spdx                        show a SPDX 2.3 Tag-Value formatted SBOM
  syft scan alpine:latest -o spdx@2.2                    show a SPDX 2.2 Tag-Value formatted SBOM
  syft scan alpine:latest -o spdx-json                   show a SPDX 2.3 JSON formatted SBOM
  syft scan alpine:latest -o spdx-json@2.2               show a SPDX 2.2 JSON formatted SBOM
  syft scan alpine:latest -vv                            show verbose debug information
  syft scan alpine:latest -o template -t my_format.tmpl  show a SBOM formatted according to given template file

  Supports the following image sources:
    syft scan yourrepo/yourimage:tag     defaults to using images from a Docker daemon. If Docker is not present, the image is pulled directly from the registry.
    syft scan path/to/a/file/or/dir      a Docker tar, OCI tar, OCI directory, SIF container, or generic filesystem directory

  You can also explicitly specify the scheme to use:
    syft scan docker:yourrepo/yourimage:tag            explicitly use the Docker daemon
    syft scan podman:yourrepo/yourimage:tag            explicitly use the Podman daemon
    syft scan registry:yourrepo/yourimage:tag          pull image directly from a registry (no container runtime required)
    syft scan docker-archive:path/to/yourimage.tar     use a tarball from disk for archives created from "docker save"
    syft scan oci-archive:path/to/yourimage.tar        use a tarball from disk for OCI archives (from Skopeo or otherwise)
    syft scan oci-dir:path/to/yourimage                read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)
    syft scan singularity:path/to/yourimage.sif        read directly from a Singularity Image Format (SIF) container on disk
    syft scan dir:path/to/yourproject                  read directly from a path on disk (any directory)
    syft scan file:path/to/yourproject/file            read directly from a path on disk (any single file)


Available Commands:
  attest      Generate an SBOM as an attestation for the given [SOURCE] container image
  cataloger   Show available catalogers and configuration
  completion  Generate the autocompletion script for the specified shell
  config      show the syft configuration
  convert     Convert between SBOM formats
  help        Help about any command
  login       Log in to a registry
  scan        Generate an SBOM
  version     show version information

Flags:
      --base-path string                          base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory
  -c, --config stringArray                        syft configuration file(s) to use
      --enrich stringArray                        enable package data enrichment from local and online sources (options: all, golang, java, javascript, python)
      --exclude stringArray                       exclude paths from being scanned using a glob expression
      --file string                               file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)
      --from stringArray                          specify the source behavior to use (e.g. docker, registry, oci-dir, ...)
  -h, --help                                      help for syft
  -o, --output stringArray                        report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])
      --override-default-catalogers stringArray   set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)
      --parallelism int                           number of cataloger workers to run in parallel
      --platform string                           an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')
      --profile stringArray                       configuration profiles to use
  -q, --quiet                                     suppress all logging output
  -s, --scope string                              selection of layers to catalog, options=[squashed all-layers deep-squashed] (default "squashed")
      --select-catalogers stringArray             add, remove, and filter the catalogers to be used
      --source-name string                        set the name of the target being analyzed
      --source-supplier string                    the organization that supplied the component, which often may be the manufacturer, distributor, or repackager
      --source-version string                     set the version of the target being analyzed
  -t, --template string                           specify the path to a Go template file
  -v, --verbose count                             increase verbosity (-v = info, -vv = debug)
      --version                                   version for syft

Use "syft [command] --help" for more information about a command.

syft attest

Generate a package-based Software Bill Of Materials (SBOM) from a container image as the predicate of an in-toto attestation that will be uploaded to the image registry.

Usage:
  syft attest --output [FORMAT] <IMAGE> [flags]

Examples:
  syft attest --output [FORMAT] alpine:latest            defaults to using images from a Docker daemon. If Docker is not present, the image is pulled directly from the registry

  You can also explicitly specify the scheme to use:
    syft attest docker:yourrepo/yourimage:tag            explicitly use the Docker daemon
    syft attest podman:yourrepo/yourimage:tag            explicitly use the Podman daemon
    syft attest registry:yourrepo/yourimage:tag          pull image directly from a registry (no container runtime required)
    syft attest docker-archive:path/to/yourimage.tar     use a tarball from disk for archives created from "docker save"
    syft attest oci-archive:path/to/yourimage.tar        use a tarball from disk for OCI archives (from Skopeo or otherwise)
    syft attest oci-dir:path/to/yourimage                read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)
    syft attest singularity:path/to/yourimage.sif        read directly from a Singularity Image Format (SIF) container on disk


Flags:
      --base-path string                          base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory
      --enrich stringArray                        enable package data enrichment from local and online sources (options: all, golang, java, javascript, python)
      --exclude stringArray                       exclude paths from being scanned using a glob expression
      --from stringArray                          specify the source behavior to use (e.g. docker, registry, oci-dir, ...)
  -h, --help                                      help for attest
  -k, --key string                                the key to use for the attestation
  -o, --output stringArray                        report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-json])
      --override-default-catalogers stringArray   set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)
      --parallelism int                           number of cataloger workers to run in parallel
      --platform string                           an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')
  -s, --scope string                              selection of layers to catalog, options=[squashed all-layers deep-squashed] (default "squashed")
      --select-catalogers stringArray             add, remove, and filter the catalogers to be used
      --source-name string                        set the name of the target being analyzed
      --source-supplier string                    the organization that supplied the component, which often may be the manufacturer, distributor, or repackager
      --source-version string                     set the version of the target being analyzed

syft cataloger list

List available catalogers.

Usage:
  syft cataloger list [OPTIONS] [flags]

Flags:
  -h, --help                                      help for list
  -o, --output string                             format to output the cataloger list (available: table, json)
      --override-default-catalogers stringArray   override the default catalogers with an expression (default [all])
      --select-catalogers stringArray             select catalogers with an expression
  -s, --show-hidden                               show catalogers that have been de-selected

syft config

Show the syft configuration.

Usage:
  syft config [flags]
  syft config [command]

Available Commands:
  locations   shows all locations and the order in which syft will look for a configuration file

Flags:
  -h, --help   help for config
      --load   load and validate the syft configuration

syft convert

[Experimental] Convert SBOM files to and from SPDX, CycloneDX, and Syft’s format. For more info about data loss between formats see https://github.com/anchore/syft/wiki/format-conversion.

Usage:
  syft convert [SOURCE-SBOM] -o [FORMAT] [flags]

Examples:
  syft convert img.syft.json -o spdx-json                      convert a syft SBOM to spdx-json, output goes to stdout
  syft convert img.syft.json -o cyclonedx-json=img.cdx.json    convert a syft SBOM to CycloneDX, output is written to the file "img.cdx.json"
  syft convert - -o spdx-json                                  convert an SBOM from STDIN to spdx-json


Flags:
      --file string          file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)
  -h, --help                 help for convert
  -o, --output stringArray   report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])
  -t, --template string      specify the path to a Go template file

syft login

Log in to a registry.

Usage:
  syft login [OPTIONS] [SERVER] [flags]

Examples:
  # Log in to reg.example.com
  syft login reg.example.com -u AzureDiamond -p hunter2

Flags:
  -h, --help              help for login
  -p, --password string   Password
      --password-stdin    Take the password from stdin
  -u, --username string   Username

syft scan

Generate a package-based Software Bill Of Materials (SBOM) from container images and filesystems.

Usage:
  syft scan [SOURCE] [flags]

Examples:
  syft scan alpine:latest                                a summary of discovered packages
  syft scan alpine:latest -o json                        show all possible cataloging details
  syft scan alpine:latest -o cyclonedx                   show a CycloneDX formatted SBOM
  syft scan alpine:latest -o cyclonedx-json              show a CycloneDX JSON formatted SBOM
  syft scan alpine:latest -o spdx                        show an SPDX 2.3 Tag-Value formatted SBOM
  syft scan alpine:latest -o spdx@2.2                    show an SPDX 2.2 Tag-Value formatted SBOM
  syft scan alpine:latest -o spdx-json                   show an SPDX 2.3 JSON formatted SBOM
  syft scan alpine:latest -o spdx-json@2.2               show an SPDX 2.2 JSON formatted SBOM
  syft scan alpine:latest -vv                            show verbose debug information
  syft scan alpine:latest -o template -t my_format.tmpl  show an SBOM formatted according to a given template file

  Supports the following image sources:
    syft scan yourrepo/yourimage:tag     defaults to using images from a Docker daemon. If Docker is not present, the image is pulled directly from the registry.
    syft scan path/to/a/file/or/dir      a Docker tar, OCI tar, OCI directory, SIF container, or generic filesystem directory

  You can also explicitly specify the scheme to use:
    syft scan docker:yourrepo/yourimage:tag            explicitly use the Docker daemon
    syft scan podman:yourrepo/yourimage:tag            explicitly use the Podman daemon
    syft scan registry:yourrepo/yourimage:tag          pull image directly from a registry (no container runtime required)
    syft scan docker-archive:path/to/yourimage.tar     use a tarball from disk for archives created from "docker save"
    syft scan oci-archive:path/to/yourimage.tar        use a tarball from disk for OCI archives (from Skopeo or otherwise)
    syft scan oci-dir:path/to/yourimage                read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)
    syft scan singularity:path/to/yourimage.sif        read directly from a Singularity Image Format (SIF) container on disk
    syft scan dir:path/to/yourproject                  read directly from a path on disk (any directory)
    syft scan file:path/to/yourproject/file            read directly from a path on disk (any single file)


Flags:
      --base-path string                          base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory
      --enrich stringArray                        enable package data enrichment from local and online sources (options: all, golang, java, javascript, python)
      --exclude stringArray                       exclude paths from being scanned using a glob expression
      --file string                               file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)
      --from stringArray                          specify the source behavior to use (e.g. docker, registry, oci-dir, ...)
  -h, --help                                      help for scan
  -o, --output stringArray                        report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])
      --override-default-catalogers stringArray   set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)
      --parallelism int                           number of cataloger workers to run in parallel
      --platform string                           an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')
  -s, --scope string                              selection of layers to catalog, options=[squashed all-layers deep-squashed] (default "squashed")
      --select-catalogers stringArray             add, remove, and filter the catalogers to be used
      --source-name string                        set the name of the target being analyzed
      --source-supplier string                    the organization that supplied the component, which often may be the manufacturer, distributor, or repackager
      --source-version string                     set the version of the target being analyzed
  -t, --template string                           specify the path to a Go template file
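The template output format (`-o template -t my_format.tmpl`) renders the SBOM through a Go text/template evaluated against the syft-json document. A minimal sketch of such a template file, listing one package per line (the `.artifacts`, `.name`, and `.version` field names assume the syft-json schema):

```
{{- range .artifacts}}
{{.name}} {{.version}}
{{- end}}
```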

syft version

Show version information.

Usage:
  syft version [flags]

Flags:
  -h, --help            help for version
  -o, --output string   the format to show the results (allowable: [text json]) (default "text")

6.2 - Syft Configuration Reference

Syft searches for configuration files in the following locations, in order:

  1. ./.syft.yaml - current working directory
  2. ./.syft/config.yaml - app subdirectory in current working directory
  3. ~/.syft.yaml - home directory
  4. $XDG_CONFIG_HOME/syft/config.yaml - XDG config directory

The configuration file can use either .yaml or .yml extensions. The first configuration file found will be used.

For general information about how config and environment variables are handled, see the Configuration Reference section.
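For example, a minimal `.syft.yaml` combining a few of the options documented below might look like this (the output file names are placeholders):

```yaml
# write two reports in one scan and skip the startup update check
output:
  - "syft-json=sbom.syft.json"
  - "spdx-json=sbom.spdx.json"
check-for-app-update: false
```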

log:
  # suppress all logging output (env: SYFT_LOG_QUIET)
  quiet: false

  # increase verbosity (-v = info, -vv = debug) (env: SYFT_LOG_VERBOSITY)
  verbosity: 0

  # explicitly set the logging level (available: [error warn info debug trace]) (env: SYFT_LOG_LEVEL)
  level: "warn"

  # file path to write logs to (env: SYFT_LOG_FILE)
  file: ""

dev:
  # capture resource profiling data (available: [cpu, mem]) (env: SYFT_DEV_PROFILE)
  profile: ""

# the configuration file(s) used to load application configuration (env: SYFT_CONFIG)
config: ""

# the output format(s) of the SBOM report (options: syft-table, syft-text, syft-json, spdx-json, ...)
# to specify multiple output files in differing formats, use a list:
# output:
#   - "syft-json=<syft-json-output-file>"
#   - "spdx-json=<spdx-json-output-file>" (env: SYFT_OUTPUT)
output:
  - "syft-table"

# file to write the default report output to (default is STDOUT) (env: SYFT_LEGACYFILE)
legacyFile: ""

format:
  # default value for all formats that support the "pretty" option (default is unset) (env: SYFT_FORMAT_PRETTY)
  pretty:

  template:
    # path to the template file to use when rendering the output with the template output format.
    # Note that all template paths are based on the current syft-json schema (env: SYFT_FORMAT_TEMPLATE_PATH)
    path: ""

    # if true, uses the go structs for the syft-json format for templating.
    # if false, uses the syft-json output for templating (which follows the syft JSON schema exactly).
    #
    # Note: long term support for this option is not guaranteed (it may change or break at any time) (env: SYFT_FORMAT_TEMPLATE_LEGACY)
    legacy: false

  json:
    # transform any syft-json output to conform to an approximation of the v11.0.1 schema. This includes:
    # - using the package metadata type names from before v12 of the JSON schema (changed in https://github.com/anchore/syft/pull/1983)
    #
    # Note: this will still include package types and fields that were added at or after json schema v12. This means
    # that output might not strictly be json schema v11 compliant, however, for consumers that require time to port
    # over to the final syft 1.0 json output this option can be used to ease the transition.
    #
    # Note: long term support for this option is not guaranteed (it may change or break at any time) (env: SYFT_FORMAT_JSON_LEGACY)
    legacy: false

    # include space indentation and newlines
    # note: inherits default value from 'format.pretty' or 'false' if parent is unset (env: SYFT_FORMAT_JSON_PRETTY)
    pretty:

  spdx-json:
    # include space indentation and newlines
    # note: inherits default value from 'format.pretty' or 'false' if parent is unset (env: SYFT_FORMAT_SPDX_JSON_PRETTY)
    pretty:

  cyclonedx-json:
    # include space indentation and newlines
    # note: inherits default value from 'format.pretty' or 'false' if parent is unset (env: SYFT_FORMAT_CYCLONEDX_JSON_PRETTY)
    pretty:

  cyclonedx-xml:
    # include space indentation and newlines
    # note: inherits default value from 'format.pretty' or 'false' if parent is unset (env: SYFT_FORMAT_CYCLONEDX_XML_PRETTY)
    pretty:

# whether to check for an application update on start up or not (env: SYFT_CHECK_FOR_APP_UPDATE)
check-for-app-update: true

# enable one or more package catalogers (env: SYFT_CATALOGERS)
catalogers: []

# set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source) (env: SYFT_DEFAULT_CATALOGERS)
default-catalogers: []

# add, remove, and filter the catalogers to be used (env: SYFT_SELECT_CATALOGERS)
select-catalogers: []

package:
  # search within archives that do not contain a file index to search against (tar, tar.gz, tar.bz2, etc)
  # note: enabling this may result in a performance impact since all discovered compressed tars will be decompressed
  # note: for now this only applies to the java package cataloger (env: SYFT_PACKAGE_SEARCH_UNINDEXED_ARCHIVES)
  search-unindexed-archives: false

  # search within archives that do contain a file index to search against (zip)
  # note: for now this only applies to the java package cataloger (env: SYFT_PACKAGE_SEARCH_INDEXED_ARCHIVES)
  search-indexed-archives: true

  # allows users to exclude synthetic binary packages from the SBOM
  # these packages are removed if an overlap with a non-synthetic package is found (env: SYFT_PACKAGE_EXCLUDE_BINARY_OVERLAP_BY_OWNERSHIP)
  exclude-binary-overlap-by-ownership: true

license:
  # include the content of licenses in the SBOM for a given syft scan; valid values are: [all unknown none] (env: SYFT_LICENSE_CONTENT)
  content: "none"

  # the percentage of the total text, in normalized words, that must match a valid
  # license for the given inputs before the license is reported, across all of the licenses matched. (env: SYFT_LICENSE_COVERAGE)
  coverage: 75

file:
  metadata:
    # select which files should be captured by the file-metadata cataloger and included in the SBOM.
    # Options include:
    #  - "all": capture all files from the search space
    #  - "owned-by-package": capture only files owned by packages
    #  - "none", "": do not capture any files (env: SYFT_FILE_METADATA_SELECTION)
    selection: "owned-by-package"

    # the file digest algorithms to use when cataloging files (options: "md5", "sha1", "sha224", "sha256", "sha384", "sha512") (env: SYFT_FILE_METADATA_DIGESTS)
    digests:
      - "sha1"
      - "sha256"

  content:
    # skip searching a file entirely if it is above the given size (default = 1MB; unit = bytes) (env: SYFT_FILE_CONTENT_SKIP_FILES_ABOVE_SIZE)
    skip-files-above-size: 256000

    # file globs for the cataloger to match on (env: SYFT_FILE_CONTENT_GLOBS)
    globs: []

  executable:
    # file globs for the cataloger to match on (env: SYFT_FILE_EXECUTABLE_GLOBS)
    globs: []

# selection of layers to catalog, options=[squashed all-layers deep-squashed] (env: SYFT_SCOPE)
scope: "squashed"

# number of cataloger workers to run in parallel
# when set to 0 (the default), this is based on runtime.NumCPU * 4; if set to less than 0 it is unbounded (env: SYFT_PARALLELISM)
parallelism: 0

relationships:
  # include package-to-file relationships that indicate which files are owned by which packages (env: SYFT_RELATIONSHIPS_PACKAGE_FILE_OWNERSHIP)
  package-file-ownership: true

  # include package-to-package relationships that indicate one package is owned by another because files claimed to be owned by one package are also evidence of another package's existence (env: SYFT_RELATIONSHIPS_PACKAGE_FILE_OWNERSHIP_OVERLAP)
  package-file-ownership-overlap: true

compliance:
  # action to take when a package is missing a name (env: SYFT_COMPLIANCE_MISSING_NAME)
  missing-name: "drop"

  # action to take when a package is missing a version (env: SYFT_COMPLIANCE_MISSING_VERSION)
  missing-version: "stub"

# Enable data enrichment operations, which can utilize services such as Maven Central and NPM.
# By default all enrichment is disabled, use: all to enable everything.
# Available options are: all, golang, java, javascript, python (env: SYFT_ENRICH)
enrich: []

dotnet:
  # only keep deps.json packages for which an executable is found on disk. The package is also included if a DLL is found for any child package, even if the package itself does not have a DLL. (env: SYFT_DOTNET_DEP_PACKAGES_MUST_HAVE_DLL)
  dep-packages-must-have-dll: false

  # only keep deps.json packages which have a runtime/resource DLL claimed in the deps.json targets section (but not necessarily found on disk). The package is also included if any child package claims a DLL, even if the package itself does not claim a DLL. (env: SYFT_DOTNET_DEP_PACKAGES_MUST_CLAIM_DLL)
  dep-packages-must-claim-dll: true

  # treat DLL claims or on-disk evidence for child packages as DLL claims or on-disk evidence for any parent package (env: SYFT_DOTNET_PROPAGATE_DLL_CLAIMS_TO_PARENTS)
  propagate-dll-claims-to-parents: true

  # show all packages from the deps.json if bundling tooling is present as a dependency (e.g. ILRepack) (env: SYFT_DOTNET_RELAX_DLL_CLAIMS_WHEN_BUNDLING_DETECTED)
  relax-dll-claims-when-bundling-detected: true

golang:
  # search for go package licenses in the GOPATH of the system running Syft, note that this is outside the
  # container filesystem and potentially outside the root of a local directory scan (env: SYFT_GOLANG_SEARCH_LOCAL_MOD_CACHE_LICENSES)
  search-local-mod-cache-licenses:

  # specify an explicit go mod cache directory, if unset this defaults to $GOPATH/pkg/mod or $HOME/go/pkg/mod (env: SYFT_GOLANG_LOCAL_MOD_CACHE_DIR)
  local-mod-cache-dir: "~/go/pkg/mod"

  # search for go package licenses in the vendor folder on the system running Syft, note that this is outside the
  # container filesystem and potentially outside the root of a local directory scan (env: SYFT_GOLANG_SEARCH_LOCAL_VENDOR_LICENSES)
  search-local-vendor-licenses:

  # specify an explicit go vendor directory, if unset this defaults to ./vendor (env: SYFT_GOLANG_LOCAL_VENDOR_DIR)
  local-vendor-dir: ""

  # search for go package licenses by retrieving the package from a network proxy (env: SYFT_GOLANG_SEARCH_REMOTE_LICENSES)
  search-remote-licenses:

  # remote proxy to use when retrieving go packages from the network,
  # if unset this defaults to $GOPROXY followed by https://proxy.golang.org (env: SYFT_GOLANG_PROXY)
  proxy: "https://proxy.golang.org,direct"

  # specifies packages which should not be fetched by proxy
  # if unset this defaults to $GONOPROXY (env: SYFT_GOLANG_NO_PROXY)
  no-proxy: ""

  main-module-version:
    # look for LD flags that appear to be setting a version (e.g. -X main.version=1.0.0) (env: SYFT_GOLANG_MAIN_MODULE_VERSION_FROM_LD_FLAGS)
    from-ld-flags: true

    # search for semver-like strings in the binary contents (env: SYFT_GOLANG_MAIN_MODULE_VERSION_FROM_CONTENTS)
    from-contents: false

    # use the build settings (e.g. vcs.version & vcs.time) to craft a v0 pseudo version
    # (e.g. v0.0.0-20220308212642-53e6d0aaf6fb) when a more accurate version cannot be found otherwise (env: SYFT_GOLANG_MAIN_MODULE_VERSION_FROM_BUILD_SETTINGS)
    from-build-settings: true

java:
  # enables Syft to use the network to fetch version and license information for packages when
  # a parent or imported pom file is not found in the local maven repository.
  # the pom files are downloaded from the remote Maven repository at 'maven-url' (env: SYFT_JAVA_USE_NETWORK)
  use-network:

  # use the local Maven repository to retrieve pom files. When Maven is installed and was previously used
  # for building the software that is being scanned, then most pom files will be available in this
  # repository on the local file system. this greatly speeds up scans. when all pom files are available
  # in the local repository, then 'use-network' is not needed.
  # TIP: If you want to download all required pom files to the local repository without running a full
  # build, run 'mvn help:effective-pom' before performing the scan with syft. (env: SYFT_JAVA_USE_MAVEN_LOCAL_REPOSITORY)
  use-maven-local-repository:

  # override the default location of the local Maven repository.
  # the default is the subdirectory '.m2/repository' in your home directory (env: SYFT_JAVA_MAVEN_LOCAL_REPOSITORY_DIR)
  maven-local-repository-dir: "~/.m2/repository"

  # maven repository to use, defaults to Maven central (env: SYFT_JAVA_MAVEN_URL)
  maven-url: "https://repo1.maven.org/maven2"

  # depth to recursively resolve parent POMs, no limit if <= 0 (env: SYFT_JAVA_MAX_PARENT_RECURSIVE_DEPTH)
  max-parent-recursive-depth: 0

  # resolve transitive dependencies such as those defined in a dependency's POM on Maven central (env: SYFT_JAVA_RESOLVE_TRANSITIVE_DEPENDENCIES)
  resolve-transitive-dependencies: false

javascript:
  # enables Syft to use the network to fill in more detailed license information (env: SYFT_JAVASCRIPT_SEARCH_REMOTE_LICENSES)
  search-remote-licenses:

  # base NPM url to use (env: SYFT_JAVASCRIPT_NPM_BASE_URL)
  npm-base-url: ""

  # include development-scoped dependencies (env: SYFT_JAVASCRIPT_INCLUDE_DEV_DEPENDENCIES)
  include-dev-dependencies:

linux-kernel:
  # whether to catalog linux kernel modules found within lib/modules/** directories (env: SYFT_LINUX_KERNEL_CATALOG_MODULES)
  catalog-modules: true

nix:
  # enumerate all files owned by packages found within Nix store paths (env: SYFT_NIX_CAPTURE_OWNED_FILES)
  capture-owned-files: false

python:
  # enables Syft to use the network to fill in more detailed license information (env: SYFT_PYTHON_SEARCH_REMOTE_LICENSES)
  search-remote-licenses:

  # base Pypi url to use (env: SYFT_PYTHON_PYPI_BASE_URL)
  pypi-base-url: ""

  # when running across entries in requirements.txt that do not specify a specific version
  # (e.g. "sqlalchemy >= 1.0.0, <= 2.0.0, != 3.0.0, <= 3.0.0"), attempt to guess what the version could
  # be based on the version requirements specified (e.g. "1.0.0"). When enabled the lowest expressible version
  # when given an arbitrary constraint will be used (even if that version may not be available/published). (env: SYFT_PYTHON_GUESS_UNPINNED_REQUIREMENTS)
  guess-unpinned-requirements:

registry:
  # skip TLS verification when communicating with the registry (env: SYFT_REGISTRY_INSECURE_SKIP_TLS_VERIFY)
  insecure-skip-tls-verify: false

  # use http instead of https when connecting to the registry (env: SYFT_REGISTRY_INSECURE_USE_HTTP)
  insecure-use-http: false

  # Authentication credentials for specific registries. Each entry describes authentication for a specific authority:
  # - authority: the URL to the registry (e.g. "docker.io", "localhost:5000", etc.) (env: SYFT_REGISTRY_AUTH_AUTHORITY)
  #  username: a username if using basic credentials (env: SYFT_REGISTRY_AUTH_USERNAME)
  #  password: a corresponding password (env: SYFT_REGISTRY_AUTH_PASSWORD)
  #  token: a token if using token-based authentication, mutually exclusive with username/password (env: SYFT_REGISTRY_AUTH_TOKEN)
  #  tls-cert: filepath to the client certificate used for TLS authentication to the registry (env: SYFT_REGISTRY_AUTH_TLS_CERT)
  #  tls-key: filepath to the client key used for TLS authentication to the registry (env: SYFT_REGISTRY_AUTH_TLS_KEY)
  auth: []

  # filepath to a CA certificate (or directory containing *.crt, *.cert, *.pem) used to generate the client certificate (env: SYFT_REGISTRY_CA_CERT)
  ca-cert: ""

# specify the source behavior to use (e.g. docker, registry, oci-dir, ...) (env: SYFT_FROM)
from: []

# an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux') (env: SYFT_PLATFORM)
platform: ""

source:
  # set the name of the target being analyzed (env: SYFT_SOURCE_NAME)
  name: ""

  # set the version of the target being analyzed (env: SYFT_SOURCE_VERSION)
  version: ""

  # the organization that supplied the component, which often may be the manufacturer, distributor, or repackager (env: SYFT_SOURCE_SUPPLIER)
  supplier: ""

  # (env: SYFT_SOURCE_SOURCE)
  source: ""

  # base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory (env: SYFT_SOURCE_BASE_PATH)
  base-path: ""

  file:
    # the file digest algorithms to use on the scanned file (options: "md5", "sha1", "sha224", "sha256", "sha384", "sha512") (env: SYFT_SOURCE_FILE_DIGESTS)
    digests:
      - "SHA-256"

  image:
    # allows users to specify which image source should be used to generate the sbom
    # valid values are: registry, docker, podman (env: SYFT_SOURCE_IMAGE_DEFAULT_PULL_SOURCE)
    default-pull-source: ""

    # (env: SYFT_SOURCE_IMAGE_MAX_LAYER_SIZE)
    max-layer-size: ""

# exclude paths from being scanned using a glob expression (env: SYFT_EXCLUDE)
exclude: []

unknowns:
  # remove unknown errors on files with discovered packages (env: SYFT_UNKNOWNS_REMOVE_WHEN_PACKAGES_DEFINED)
  remove-when-packages-defined: true

  # include executables without any identified packages (env: SYFT_UNKNOWNS_EXECUTABLES_WITHOUT_PACKAGES)
  executables-without-packages: true

  # include archives which were not expanded and searched (env: SYFT_UNKNOWNS_UNEXPANDED_ARCHIVES)
  unexpanded-archives: true

cache:
  # root directory to cache any downloaded content; empty string will use an in-memory cache (env: SYFT_CACHE_DIR)
  dir: "~/.cache/syft"

  # time to live for cached data; setting this to 0 will disable caching entirely (env: SYFT_CACHE_TTL)
  ttl: "7d"

# show catalogers that have been de-selected (env: SYFT_SHOW_HIDDEN)
show-hidden: false

attest:
  # the key to use for the attestation (env: SYFT_ATTEST_KEY)
  key: ""

  # password to decrypt the given private key
  # additionally responds to COSIGN_PASSWORD env var (env: SYFT_ATTEST_PASSWORD)
  password: ""

6.3 - JSON Schema

6.3.1 - Syft v16 JSON Schema Reference

Complete reference for Syft JSON schema version 16.1.0

Document

Represents the syft cataloging findings as a JSON document

Field Name              Type
artifacts               Array<Package>
artifactRelationships   Array<Relationship>
files                   Array<File>
source                  Source
distro                  LinuxRelease
descriptor              Descriptor
schema                  Schema
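A downstream consumer can rely on these top-level fields when post-processing a syft-json document. A minimal sketch in Python (the `schema.version` field and the artifact `name` field are assumptions for illustration; their sub-schemas are not shown in this table):

```python
import json

def summarize(doc: dict) -> dict:
    """Summarize a syft-json document using the Document fields above."""
    return {
        "packages": len(doc.get("artifacts", [])),
        "relationships": len(doc.get("artifactRelationships", [])),
        "files": len(doc.get("files", [])),
        "schema": doc.get("schema", {}).get("version", "unknown"),
    }

# Hypothetical minimal document for illustration only:
doc = json.loads(
    '{"artifacts": [{"name": "busybox"}], "artifactRelationships": [],'
    ' "files": [], "schema": {"version": "16.1.0"}}'
)
print(summarize(doc))
```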

Core Types

CPE

Represents a Common Platform Enumeration identifier used for matching packages to known vulnerabilities in security databases.

Field Name   Type   Description
cpe          str    Value is the CPE string identifier.
source       str    Source is the source where this CPE was obtained or generated from.

ClassifierMatch

Represents a single matched value within a binary file and the "class" name the search pattern represents.

Field Name   Type
classifier   str
location     Location

Coordinates

Contains the minimal information needed to describe how to find a file within any possible source object.

Field Name   Type   Description
path         str    RealPath is the canonical absolute form of the path accessed (all symbolic links have been followed and relative path components like '.' and '..' have been removed).
layerID      str    FileSystemID is an ID representing an entire filesystem. For container images, this is a layer digest. For directories or a root filesystem, this is blank.

Descriptor

Identifies the tool that generated this SBOM document, including its name, version, and configuration used during catalog generation.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the name of the tool that generated this SBOM (e.g., "syft"). |
| version | str | Version is the version of the tool that generated this SBOM. |
| configuration | unknown | Configuration contains the tool configuration used during SBOM generation. |

Digest

Represents a cryptographic hash of file contents.

| Field Name | Type | Description |
| --- | --- | --- |
| algorithm | str | Algorithm specifies the hash algorithm used (e.g., "sha256", "md5"). |
| value | str | Value is the hexadecimal string representation of the hash. |
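For illustration, a Digest-shaped record can be reproduced with the standard library; this is a sketch of the record shape, not Syft's internal code:

```python
import hashlib

def digest_entry(data: bytes, algorithm: str = "sha256") -> dict:
    """Build a Digest-shaped record: algorithm name plus hex-encoded hash value."""
    return {"algorithm": algorithm, "value": hashlib.new(algorithm, data).hexdigest()}

entry = digest_entry(b"hello")
print(entry["algorithm"], entry["value"][:8])  # sha256 2cf24dba
```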

ELFSecurityFeatures

Captures security hardening and protection mechanisms in ELF binaries.

| Field Name | Type | Description |
| --- | --- | --- |
| symbolTableStripped | bool | SymbolTableStripped indicates whether debugging symbols have been removed. |
| stackCanary | bool | StackCanary indicates whether stack smashing protection is enabled. |
| nx | bool | NoExecutable indicates whether NX (no-execute) protection is enabled for the stack. |
| relRO | str | RelocationReadOnly indicates the RELRO protection level. |
| pie | bool | PositionIndependentExecutable indicates whether the binary is compiled as PIE. |
| dso | bool | DynamicSharedObject indicates whether the binary is a shared library. |
| safeStack | bool | LlvmSafeStack represents a compiler-based security mechanism that separates the stack into a safe stack for storing return addresses and other critical data, and an unsafe stack for everything else, to mitigate stack-based memory corruption errors; see https://clang.llvm.org/docs/SafeStack.html |
| cfi | bool | ControlFlowIntegrity represents runtime checks to ensure a program's control flow adheres to the legal paths determined at compile time, thus protecting against various types of control-flow hijacking attacks; see https://clang.llvm.org/docs/ControlFlowIntegrity.html |
| fortify | bool | ClangFortifySource is a broad suite of extensions to libc aimed at catching misuses of common library functions; see https://android.googlesource.com/platform//bionic/+/d192dbecf0b2a371eb127c0871f77a9caf81c4d2/docs/clang_fortify_anatomy.md |

Executable

Contains metadata about binary files and their security features.

| Field Name | Type | Description |
| --- | --- | --- |
| format | str | Format denotes either ELF, Mach-O, or PE. |
| hasExports | bool | HasExports indicates whether the binary exports symbols. |
| hasEntrypoint | bool | HasEntrypoint indicates whether the binary has an entry point function. |
| importedLibraries | Array<str> | ImportedLibraries lists the shared libraries required by this executable. |
| elfSecurityFeatures | ELFSecurityFeatures | ELFSecurityFeatures contains ELF-specific security hardening information when Format is ELF. |

File

Represents a file discovered during cataloging with its metadata, content digests, licenses, and relationships to packages.

| Field Name | Type | Description |
| --- | --- | --- |
| id | str | ID is a unique identifier for this file within the SBOM. |
| location | Coordinates | Location is the file path and layer information where this file was found. |
| metadata | FileMetadataEntry | Metadata contains filesystem metadata such as permissions, ownership, and file type. |
| contents | str | Contents is the file contents for small files. |
| digests | Array<Digest> | Digests contains cryptographic hashes of the file contents. |
| licenses | Array<FileLicense> | Licenses contains license information discovered within this file. |
| executable | Executable | Executable contains executable metadata if this file is a binary. |
| unknowns | Array<str> | Unknowns contains unknown fields for forward compatibility. |

FileLicense

Represents license information discovered within a file's contents or metadata, including the matched license text and SPDX expression.

| Field Name | Type | Description |
| --- | --- | --- |
| value | str | Value is the raw license identifier or text as found in the file. |
| spdxExpression | str | SPDXExpression is the parsed SPDX license expression. |
| type | str | Type is the license type classification (e.g., declared, concluded, discovered). |
| evidence | FileLicenseEvidence | Evidence contains supporting evidence for this license detection. |

FileLicenseEvidence

Contains supporting evidence for a license detection in a file, including the byte offset, extent, and confidence level.

| Field Name | Type | Description |
| --- | --- | --- |
| confidence | int | Confidence is the confidence score for this license detection (0-100). |
| offset | int | Offset is the byte offset where the license text starts in the file. |
| extent | int | Extent is the length of the license text in bytes. |

FileMetadataEntry

Contains filesystem-level metadata attributes such as permissions, ownership, type, and size for a cataloged file.

| Field Name | Type | Description |
| --- | --- | --- |
| mode | int | Mode is the Unix file permission mode in octal format. |
| type | str | Type is the file type (e.g., "RegularFile", "Directory", "SymbolicLink"). |
| linkDestination | str | LinkDestination is the target path for symbolic links. |
| userID | int | UserID is the file owner user ID. |
| groupID | int | GroupID is the file owner group ID. |
| mimeType | str | MIMEType is the MIME type of the file contents. |
| size | int | Size is the file size in bytes. |
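Assuming the mode field carries the permission digits as described (e.g. 644 reads as octal rw-r--r--), converting it to conventional rwx notation is a one-liner with the standard library. A sketch, not Syft code:

```python
import stat

def mode_string(mode_field: int) -> str:
    """Interpret a FileMetadataEntry mode (octal digits, e.g. 644) as rwx notation."""
    bits = int(str(mode_field), 8)   # 644 -> 0o644 -> 420
    return stat.filemode(bits)[1:]   # drop the leading file-type character

print(mode_string(644))  # rw-r--r--
```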

KeyValue

Represents a single key-value pair.

| Field Name | Type | Description |
| --- | --- | --- |
| key | str | Key is the key name. |
| value | str | Value is the value associated with the key. |

License

Represents software license information discovered for a package, including SPDX expressions and supporting evidence locations.

| Field Name | Type | Description |
| --- | --- | --- |
| value | str | Value is the raw license identifier or expression as found. |
| spdxExpression | str | SPDXExpression is the parsed SPDX license expression. |
| type | str | Type is the license type classification (e.g., declared, concluded, discovered). |
| urls | Array<str> | URLs are URLs where license text or information can be found. |
| locations | Array<Location> | Locations are file locations where this license was discovered. |
| contents | str | Contents is the full license text content. |

LinuxKernelModuleParameter

Represents a configurable parameter for a kernel module with its type and description.

| Field Name | Type | Description |
| --- | --- | --- |
| type | str | Type is the parameter data type (e.g. int, string, bool, array types). |
| description | str | Description is a human-readable parameter description explaining what the parameter controls. |

LinuxRelease

Contains Linux distribution identification and version information extracted from /etc/os-release or similar system files.

| Field Name | Type | Description |
| --- | --- | --- |
| prettyName | str | PrettyName is a human-readable operating system name with version. |
| name | str | Name is the operating system name without version information. |
| id | str | ID is the lower-case operating system identifier (e.g., "ubuntu", "rhel"). |
| idLike | IDLikes | IDLike is a list of operating system IDs this distribution is similar to or derived from. |
| version | str | Version is the operating system version including codename if available. |
| versionID | str | VersionID is the operating system version number or identifier. |
| versionCodename | str | VersionCodename is the operating system release codename (e.g., "jammy", "bullseye"). |
| buildID | str | BuildID is a build identifier for the operating system. |
| imageID | str | ImageID is an identifier for container or cloud images. |
| imageVersion | str | ImageVersion is the version for container or cloud images. |
| variant | str | Variant is the operating system variant name (e.g., "Server", "Workstation"). |
| variantID | str | VariantID is the lower-case operating system variant identifier. |
| homeURL | str | HomeURL is the homepage URL for the operating system. |
| supportURL | str | SupportURL is the support or help URL for the operating system. |
| bugReportURL | str | BugReportURL is the bug reporting URL for the operating system. |
| privacyPolicyURL | str | PrivacyPolicyURL is the privacy policy URL for the operating system. |
| cpeName | str | CPEName is the Common Platform Enumeration name for the operating system. |
| supportEnd | str | SupportEnd is the end of support date or version identifier. |
| extendedSupport | bool | ExtendedSupport indicates whether extended security or support is available. |
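These fields mirror the keys of /etc/os-release. A minimal parser sketch (illustrative only; Syft's actual implementation differs):

```python
def parse_os_release(text: str) -> dict:
    """Parse KEY=value lines from an os-release file into a dict."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        fields[key] = value.strip('"')
    return fields

sample = 'ID=ubuntu\nVERSION_ID="22.04"\nVERSION_CODENAME=jammy\nPRETTY_NAME="Ubuntu 22.04.4 LTS"\n'
info = parse_os_release(sample)
print(info["ID"], info["VERSION_ID"], info["VERSION_CODENAME"])
```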

Location

Represents a path relative to a particular filesystem resolved to a specific file.Reference.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | RealPath is the canonical absolute form of the path accessed (all symbolic links have been followed and relative path components like '.' and '..' have been removed). |
| layerID | str | FileSystemID is an ID representing an entire filesystem. For container images, this is a layer digest. For directories or a root filesystem, this is blank. |
| accessPath | str | AccessPath is the path used to retrieve file contents (which may or may not have hardlinks / symlinks in the path). |
| annotations | obj | |

Package

Represents a pkg.Package object specialized for JSON marshaling and unmarshalling.

| Field Name | Type |
| --- | --- |
| id | str |
| name | str |
| version | str |
| type | str |
| foundBy | str |
| locations | Array<Location> |
| licenses | licenses |
| language | str |
| cpes | cpes |
| purl | str |
| metadataType | str |
| metadata | see the Ecosystem Specific Types section |
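For example, a document's artifacts can be grouped by package type; the data here is hypothetical:

```python
from collections import defaultdict

# Hypothetical "artifacts" entries from a syft JSON document.
artifacts = [
    {"id": "a", "name": "zlib", "type": "apk"},
    {"id": "b", "name": "requests", "type": "python"},
    {"id": "c", "name": "musl", "type": "apk"},
]

# Group package names by their ecosystem type.
by_type = defaultdict(list)
for pkg in artifacts:
    by_type[pkg["type"]].append(pkg["name"])

print(dict(by_type))  # {'apk': ['zlib', 'musl'], 'python': ['requests']}
```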

PnpmLockResolution

Contains package resolution metadata from pnpm lockfiles, including the integrity hash used for verification.

| Field Name | Type | Description |
| --- | --- | --- |
| integrity | str | Integrity is the Subresource Integrity hash for verification (SRI format). |

Relationship

Represents a directed relationship between two artifacts in the SBOM, such as package-contains-file or package-depends-on-package.

| Field Name | Type | Description |
| --- | --- | --- |
| parent | str | Parent is the ID of the parent artifact in this relationship. |
| child | str | Child is the ID of the child artifact in this relationship. |
| type | str | Type is the relationship type (e.g., "contains", "dependency-of", "ancestor-of"). |
| metadata | unknown | Metadata contains additional relationship-specific metadata. |
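Relationships reference artifacts by ID, so building a lookup table makes traversal straightforward. A sketch with hypothetical IDs:

```python
# Hypothetical "artifactRelationships" entries.
relationships = [
    {"parent": "pkg-zlib", "child": "file-1", "type": "contains"},
    {"parent": "pkg-zlib", "child": "file-2", "type": "contains"},
]

# Collect the children each parent "contains".
contains = {}
for rel in relationships:
    if rel["type"] == "contains":
        contains.setdefault(rel["parent"], []).append(rel["child"])

print(contains)  # {'pkg-zlib': ['file-1', 'file-2']}
```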

Schema

Specifies the JSON schema version and URL reference that defines the structure and validation rules for this document format.

| Field Name | Type | Description |
| --- | --- | --- |
| version | str | Version is the JSON schema version for this document format. |
| url | str | URL is the URL to the JSON schema definition document. |
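Consumers often gate parsing on the schema's major version. A hedged sketch (the version string is from this reference; the comparison logic is illustrative):

```python
def schema_major(version: str) -> int:
    """Extract the major component from a schema version string like '16.1.0'."""
    return int(version.split(".")[0])

# A consumer might accept any 16.x document while rejecting other majors.
print(schema_major("16.1.0"))  # 16
```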

Source

Represents the artifact that was analyzed to generate this SBOM, such as a container image, directory, or file archive.

| Field Name | Type | Description |
| --- | --- | --- |
| id | str | ID is a unique identifier for the analyzed source artifact. |
| name | str | Name is the name of the analyzed artifact (e.g., image name, directory path). |
| version | str | Version is the version of the analyzed artifact (e.g., image tag). |
| supplier | str | Supplier is supplier information, which can be user-provided for NTIA minimum elements compliance. |
| type | str | Type is the source type (e.g., "image", "directory", "file"). |
| metadata | unknown | Metadata contains additional source-specific metadata. |

Ecosystem Specific Types

AlpmDbEntry

Is a struct that represents the package data stored in the pacman flat-file stores for Arch Linux.

| Field Name | Type | Description |
| --- | --- | --- |
| basepackage | str | BasePackage is the base package name this package was built from (source package in Arch build system). |
| package | str | Package is the package name as found in the desc file. |
| version | str | Version is the package version as found in the desc file. |
| description | str | Description is a human-readable package description. |
| architecture | str | Architecture is the target CPU architecture as defined in Arch architecture spec (e.g. x86_64, aarch64, or "any" for arch-independent packages). |
| size | int | Size is the installed size in bytes. |
| packager | str | Packager is the name and email of the person who packaged this (RFC822 format). |
| url | str | URL is the upstream project URL. |
| validation | str | Validation is the validation method used for package integrity (e.g. pgp signature, sha256 checksum). |
| reason | int | Reason is the installation reason tracked by pacman (0=explicitly installed by user, 1=installed as dependency). |
| files | Array<AlpmFileRecord> | Files are the files installed by this package. |
| backup | Array<AlpmFileRecord> | Backup is the list of configuration files that pacman backs up before upgrades. |
| provides | Array<str> | Provides are virtual packages provided by this package (allows other packages to depend on capabilities rather than specific packages). |
| depends | Array<str> | Depends are the runtime dependencies required by this package. |

AlpmFileRecord

Represents a single file entry within an Arch Linux package with its associated metadata tracked by pacman.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is the file path relative to the filesystem root. |
| type | str | Type is the file type (e.g. regular file, directory, symlink). |
| uid | str | UID is the file owner user ID as recorded by pacman. |
| gid | str | GID is the file owner group ID as recorded by pacman. |
| time | str | Time is the file modification timestamp. |
| size | str | Size is the file size in bytes. |
| link | str | Link is the symlink target path if this is a symlink. |
| digest | Array<Digest> | Digests contains file content hashes for integrity verification. |

ApkDbEntry

Represents all captured data for the alpine linux package manager flat-file store.

| Field Name | Type | Description |
| --- | --- | --- |
| package | str | Package is the package name as found in the installed file. |
| originPackage | str | OriginPackage is the original source package name this binary was built from (used to track which aport/source built this). |
| maintainer | str | Maintainer is the package maintainer name and email. |
| version | str | Version is the package version as found in the installed file. |
| architecture | str | Architecture is the target CPU architecture. |
| url | str | URL is the upstream project URL. |
| description | str | Description is a human-readable package description. |
| size | int | Size is the package archive size in bytes (.apk file size). |
| installedSize | int | InstalledSize is the total size of installed files in bytes. |
| pullDependencies | Array<str> | Dependencies are the runtime dependencies required by this package. |
| provides | Array<str> | Provides are virtual packages provided by this package (for capability-based dependencies). |
| pullChecksum | str | Checksum is the package content checksum for integrity verification. |
| gitCommitOfApkPort | str | GitCommit is the git commit hash of the APK port definition in Alpine's aports repository. |
| files | Array<ApkFileRecord> | Files are the files installed by this package. |

ApkFileRecord

Represents a single file listing and metadata from an APK DB entry (which may have many of these file records).

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is the file path relative to the filesystem root. |
| ownerUid | str | OwnerUID is the file owner user ID. |
| ownerGid | str | OwnerGID is the file owner group ID. |
| permissions | str | Permissions is the file permission mode string (e.g. "0755", "0644"). |
| digest | Digest | Digest is the file content hash for integrity verification. |

BinarySignature

Represents a set of matched values within a binary file.

| Field Name | Type |
| --- | --- |
| matches | Array<ClassifierMatch> |

BitnamiSbomEntry

Represents all captured data from Bitnami packages described in Bitnami's SPDX files.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the Bitnami SPDX file. |
| arch | str | Architecture is the target CPU architecture (amd64 or arm64 in Bitnami images). |
| distro | str | Distro is the distribution name this package is for (base OS like debian, ubuntu, etc.). |
| revision | str | Revision is the Bitnami-specific package revision number (incremented for Bitnami rebuilds of same upstream version). |
| version | str | Version is the package version as found in the Bitnami SPDX file. |
| path | str | Path is the installation path in the filesystem where the package is located. |
| files | Array<str> | Files are the file paths owned by this package (tracked via SPDX relationships). |

CConanFileEntry

ConanfileEntry represents a single "Requires" entry from a conanfile.txt.

| Field Name | Type | Description |
| --- | --- | --- |
| ref | str | Ref is the package reference string in format name/version@user/channel. |

CConanInfoEntry

ConaninfoEntry represents a single "full_requires" entry from a conaninfo.txt.

| Field Name | Type | Description |
| --- | --- | --- |
| ref | str | Ref is the package reference string in format name/version@user/channel. |
| package_id | str | PackageID is a unique package variant identifier. |

CConanLockEntry

ConanV1LockEntry represents a single "node" entry from a conan.lock V1 file.

| Field Name | Type | Description |
| --- | --- | --- |
| ref | str | Ref is the package reference string in format name/version@user/channel. |
| package_id | str | PackageID is a unique package variant identifier computed from settings/options (static hash in Conan 1.x, can have collisions with complex dependency graphs). |
| prev | str | Prev is the previous lock entry reference for versioning. |
| requires | Array<str> | Requires are the runtime package dependencies. |
| build_requires | Array<str> | BuildRequires are the build-time dependencies (e.g. cmake, compilers). |
| py_requires | Array<str> | PythonRequires are the Python dependencies needed for Conan recipes. |
| options | KeyValues | Options are package configuration options as key-value pairs (e.g. shared=True, fPIC=True). |
| path | str | Path is the filesystem path to the package in Conan cache. |
| context | str | Context is the build context information. |

CConanLockV2Entry

ConanV2LockEntry represents a single "node" entry from a conan.lock V2 file.

| Field Name | Type | Description |
| --- | --- | --- |
| ref | str | Ref is the package reference string in format name/version@user/channel. |
| packageID | str | PackageID is a unique package variant identifier (dynamic in Conan 2.0, more accurate than V1). |
| username | str | Username is the Conan user/organization name. |
| channel | str | Channel is the Conan channel name indicating stability/purpose (e.g. stable, testing, experimental). |
| recipeRevision | str | RecipeRevision is a git-like revision hash (RREV) of the recipe. |
| packageRevision | str | PackageRevision is a git-like revision hash of the built binary package. |
| timestamp | str | TimeStamp is when this package was built/locked. |

CocoaPodfileLockEntry

Represents a single entry from the "Pods" section of a Podfile.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| checksum | str | Checksum is the SHA-1 hash of the podspec file for integrity verification (generated via `pod ipc spec ... | openssl sha1`), ensuring all team members use the same pod specification version. |

CondaMetadataEntry

CondaMetaPackage represents metadata for a Conda package extracted from the conda-meta/*.json files.

| Field Name | Type | Description |
| --- | --- | --- |
| arch | str | Arch is the target CPU architecture for the package (e.g., "arm64", "x86_64"). |
| name | str | Name is the package name as found in the conda-meta JSON file. |
| version | str | Version is the package version as found in the conda-meta JSON file. |
| build | str | Build is the build string identifier (e.g., "h90dfc92_1014"). |
| build_number | int | BuildNumber is the sequential build number for this version. |
| channel | str | Channel is the Conda channel URL where the package was retrieved from. |
| subdir | str | Subdir is the subdirectory within the channel (e.g., "osx-arm64", "linux-64"). |
| noarch | str | Noarch indicates if the package is platform-independent (e.g., "python", "generic"). |
| license | str | License is the package license identifier. |
| license_family | str | LicenseFamily is the general license category (e.g., "MIT", "Apache", "GPL"). |
| md5 | str | MD5 is the MD5 hash of the package archive. |
| sha256 | str | SHA256 is the SHA-256 hash of the package archive. |
| size | int | Size is the package archive size in bytes. |
| timestamp | int | Timestamp is the Unix timestamp when the package was built. |
| fn | str | Filename is the original package archive filename (e.g., "zlib-1.2.11-h90dfc92_1014.tar.bz2"). |
| url | str | URL is the full download URL for the package archive. |
| extracted_package_dir | str | ExtractedPackageDir is the local cache directory where the package was extracted. |
| depends | Array<str> | Depends is the list of runtime dependencies with version constraints. |
| files | Array<str> | Files is the list of files installed by this package. |
| paths_data | CondaPathsData | PathsData contains detailed file metadata from the paths.json file. |
| link | CondaLink | Link contains installation source metadata from the link.json file. |

CondaLink

Represents link metadata from a Conda package's link.json file describing package installation source.

| Field Name | Type | Description |
| --- | --- | --- |
| source | str | Source is the original path where the package was extracted from cache. |
| type | int | Type indicates the link type (1 for hard link, 2 for soft link, 3 for copy). |

CondaPathData

Represents metadata for a single file within a Conda package from the paths.json file.

| Field Name | Type | Description |
| --- | --- | --- |
| _path | str | Path is the file path relative to the Conda environment root. |
| path_type | str | PathType indicates the link type for the file (e.g., "hardlink", "softlink", "directory"). |
| sha256 | str | SHA256 is the SHA-256 hash of the file contents. |
| sha256_in_prefix | str | SHA256InPrefix is the SHA-256 hash of the file after prefix replacement during installation. |
| size_in_bytes | int | SizeInBytes is the file size in bytes. |

CondaPathsData

Represents the paths.json file structure from a Conda package containing file metadata.

| Field Name | Type | Description |
| --- | --- | --- |
| paths_version | int | PathsVersion is the schema version of the paths data format. |
| paths | Array<CondaPathData> | Paths is the list of file metadata entries for all files in the package. |

DartPubspec

Is a struct that represents a package described in a pubspec.yaml file

| Field Name | Type | Description |
| --- | --- | --- |
| homepage | str | Homepage is the package homepage URL. |
| repository | str | Repository is the source code repository URL. |
| documentation | str | Documentation is the documentation site URL. |
| publish_to | str | PublishTo is the package repository to publish to, or "none" to prevent accidental publishing. |
| environment | DartPubspecEnvironment | Environment is the SDK version constraints for Dart and Flutter. |
| platforms | Array<str> | Platforms are the supported platforms (Android, iOS, web, etc.). |
| ignored_advisories | Array<str> | IgnoredAdvisories are the security advisories to explicitly ignore for this package. |

DartPubspecEnvironment

Represents SDK version constraints from the environment section of pubspec.yaml.

| Field Name | Type | Description |
| --- | --- | --- |
| sdk | str | SDK is the Dart SDK version constraint (e.g. ">=2.12.0 <3.0.0"). |
| flutter | str | Flutter is the Flutter SDK version constraint if this is a Flutter package. |

DartPubspecLockEntry

Is a struct that represents a single entry found in the "packages" section in a Dart pubspec.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the pubspec.lock file. |
| version | str | Version is the package version as found in the pubspec.lock file. |
| hosted_url | str | HostedURL is the URL of the package repository for hosted packages (typically pub.dev, but can be a custom repository identified by hosted-url). When the PUB_HOSTED_URL environment variable changes, the lockfile tracks the source. |
| vcs_url | str | VcsURL is the URL of the VCS repository for git/path dependencies (for packages fetched from version control systems like Git). |

DotnetDepsEntry

Is a struct that represents a single entry found in the "libraries" section in a .NET [*.]deps.json file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the deps.json file. |
| version | str | Version is the package version as found in the deps.json file. |
| path | str | Path is the relative path to the package within the deps structure (e.g. "app.metrics/3.0.0"). |
| sha512 | str | Sha512 is the SHA-512 hash of the NuGet package content WITHOUT the signed content for verification (won't match hash from NuGet API or manual calculation of .nupkg file). |
| hashPath | str | HashPath is the relative path to the .nupkg.sha512 hash file (e.g. "app.metrics.3.0.0.nupkg.sha512"). |
| executables | obj | Executables are the map of .NET Portable Executable files within this package with their version resources. |

DotnetPackagesLockEntry

Is a struct that represents a single entry found in the "dependencies" section in a .NET packages.lock.json file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the packages.lock.json file. |
| version | str | Version is the package version as found in the packages.lock.json file. |
| contentHash | str | ContentHash is the hash of the package content for verification. |
| type | str | Type is the dependency type indicating how this dependency was added (Direct=explicit in project file, Transitive=pulled in by another package, Project=project reference). |

DotnetPortableExecutableEntry

Is a struct that represents a single entry found within "VersionResources" section of a .NET Portable Executable binary file.

| Field Name | Type | Description |
| --- | --- | --- |
| assemblyVersion | str | AssemblyVersion is the .NET assembly version number (strong-named version). |
| legalCopyright | str | LegalCopyright is the copyright notice string. |
| comments | str | Comments are additional comments or description embedded in PE resources. |
| internalName | str | InternalName is the internal name of the file. |
| companyName | str | CompanyName is the company that produced the file. |
| productName | str | ProductName is the name of the product this file is part of. |
| productVersion | str | ProductVersion is the version of the product (may differ from AssemblyVersion). |

DpkgArchiveEntry

Represents package metadata extracted from a .deb archive file.

| Field Name | Type | Description |
| --- | --- | --- |
| package | str | Package is the package name as found in the status file. |
| source | str | Source is the source package name this binary was built from (one source can produce multiple binary packages). |
| version | str | Version is the binary package version as found in the status file. |
| sourceVersion | str | SourceVersion is the source package version (may differ from binary version when binNMU rebuilds occur). |
| architecture | str | Architecture is the target architecture per Debian spec (specific arch like amd64/arm64, wildcard like any, architecture-independent "all", or "source" for source packages). |
| maintainer | str | Maintainer is the package maintainer's name and email in RFC822 format (name must come first, then email in angle brackets). |
| installedSize | int | InstalledSize is the total size of installed files in kilobytes. |
| provides | Array<str> | Provides are the virtual packages provided by this package (allows other packages to depend on capabilities. Can include versioned provides like "libdigest-md5-perl (= 2.55.01)"). |
| depends | Array<str> | Depends are the packages required for this package to function (will not be installed unless these requirements are met, creates strict ordering constraint). |
| preDepends | Array<str> | PreDepends are the packages that must be installed and configured BEFORE even starting installation of this package (stronger than Depends, discouraged unless absolutely necessary as it adds strict constraints for apt). |
| files | Array<DpkgFileRecord> | Files are the files installed by this package. |

DpkgFileRecord

Represents a single file attributed to a debian package.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is the file path relative to the filesystem root. |
| digest | Digest | Digest is the file content hash (typically MD5 for dpkg compatibility with legacy systems). |
| isConfigFile | bool | IsConfigFile is whether this file is marked as a configuration file (dpkg will preserve user modifications during upgrades). |

DpkgDbEntry

Represents all captured data for a Debian package DB entry; available fields are described at http://manpages.ubuntu.com/manpages/xenial/man1/dpkg-query.1.html in the --showformat section.

| Field Name | Type | Description |
| --- | --- | --- |
| package | str | Package is the package name as found in the status file. |
| source | str | Source is the source package name this binary was built from (one source can produce multiple binary packages). |
| version | str | Version is the binary package version as found in the status file. |
| sourceVersion | str | SourceVersion is the source package version (may differ from binary version when binNMU rebuilds occur). |
| architecture | str | Architecture is the target architecture per Debian spec (specific arch like amd64/arm64, wildcard like any, architecture-independent "all", or "source" for source packages). |
| maintainer | str | Maintainer is the package maintainer's name and email in RFC822 format (name must come first, then email in angle brackets). |
| installedSize | int | InstalledSize is the total size of installed files in kilobytes. |
| provides | Array<str> | Provides are the virtual packages provided by this package (allows other packages to depend on capabilities. Can include versioned provides like "libdigest-md5-perl (= 2.55.01)"). |
| depends | Array<str> | Depends are the packages required for this package to function (will not be installed unless these requirements are met, creates strict ordering constraint). |
| preDepends | Array<str> | PreDepends are the packages that must be installed and configured BEFORE even starting installation of this package (stronger than Depends, discouraged unless absolutely necessary as it adds strict constraints for apt). |
| files | Array<DpkgFileRecord> | Files are the files installed by this package. |

ElfBinaryPackageNoteJsonPayload

Represents metadata captured from the .note.package section of an ELF-formatted binary

| Field Name | Type | Description |
| --- | --- | --- |
| type | str | Type is the type of the package (e.g. "rpm", "deb", "apk", etc.). |
| architecture | str | Architecture of the binary package (e.g. "amd64", "arm", etc.). |
| osCPE | str | OSCPE is a CPE name for the OS, typically corresponding to CPE_NAME in os-release (e.g. cpe:/o:fedoraproject:fedora:33). |
| os | str | OS is the OS name, typically corresponding to ID in os-release (e.g. "fedora"). |
| osVersion | str | OSVersion is the version of the OS, typically corresponding to VERSION_ID in os-release (e.g. "33"). |
| system | str | System is a context-specific name for the system that the binary package is intended to run on or be a part of. |
| vendor | str | Vendor is the individual or organization that produced the source code for the binary. |
| sourceRepo | str | SourceRepo is the URL to the source repository from which the binary was built. |
| commit | str | Commit is the commit hash in the source repository from which the binary was built. |

ElixirMixLockEntry

Is a struct that represents a single entry in a mix.lock file

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the mix.lock file. |
| version | str | Version is the package version as found in the mix.lock file. |
| pkgHash | str | PkgHash is the outer checksum (SHA-256) of the entire Hex package tarball for integrity verification (preferred method, replaces deprecated inner checksum). |
| pkgHashExt | str | PkgHashExt is the extended package hash format (inner checksum is deprecated - SHA-256 of concatenated file contents excluding CHECKSUM file, now replaced by outer checksum). |

ErlangRebarLockEntry

Represents a single package entry from the "deps" section within an Erlang rebar.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the rebar.lock file. |
| version | str | Version is the package version as found in the rebar.lock file. |
| pkgHash | str | PkgHash is the outer checksum (SHA-256) of the entire Hex package tarball for integrity verification (preferred method over deprecated inner checksum). |
| pkgHashExt | str | PkgHashExt is the extended package hash format (inner checksum deprecated - was SHA-256 of concatenated file contents). |

GgufFileHeader

Represents metadata extracted from a GGUF (GPT-Generated Unified Format) model file.

| Field Name | Type | Description |
| --- | --- | --- |
| ggufVersion | int | GGUFVersion is the GGUF format version (e.g., 3). |
| fileSize | int | FileSize is the size of the GGUF file in bytes (best-effort if available from resolver). |
| architecture | str | Architecture is the model architecture (from general.architecture, e.g., "qwen3moe", "llama"). |
| quantization | str | Quantization is the quantization type (e.g., "IQ4_NL", "Q4_K_M"). |
| parameters | int | Parameters is the number of model parameters (if present in header). |
| tensorCount | int | TensorCount is the number of tensors in the model. |
| header | obj | RemainingKeyValues contains the remaining key-value pairs from the GGUF header that are not already represented as typed fields above. This preserves additional metadata fields for reference (namespaced with general.*, llama.*, etc.) while avoiding duplication. |
| metadataHash | str | MetadataKeyValuesHash is a xx64 hash of all key-value pairs from the GGUF header metadata. This hash is computed over the complete header metadata (including the fields extracted into typed fields above) and provides a stable identifier for the model configuration across different file locations or remotes. It allows matching identical models even when stored in different repositories or with different filenames. |

GithubActionsUseStatement

Represents a single 'uses' statement in a GitHub Actions workflow file referencing an action or reusable workflow.

| Field Name | Type | Description |
| --- | --- | --- |
| value | str | Value is the action reference (e.g. "actions/checkout@v3") |
| comment | str | Comment is the inline comment associated with this uses statement |

GoModuleBuildinfoEntry

GolangBinaryBuildinfoEntry represents all captured data for a Golang binary

| Field Name | Type | Description |
| --- | --- | --- |
| goBuildSettings | KeyValues | BuildSettings contains the Go build settings and flags used to compile the binary (e.g., GOARCH, GOOS, CGO_ENABLED). |
| goCompiledVersion | str | GoCompiledVersion is the version of Go used to compile the binary. |
| architecture | str | Architecture is the target CPU architecture for the binary (extracted from GOARCH build setting). |
| h1Digest | str | H1Digest is the Go module hash in h1: format for the main module from go.sum. |
| mainModule | str | MainModule is the main module path for the binary (e.g., "github.com/anchore/syft"). |
| goCryptoSettings | Array&lt;str&gt; | GoCryptoSettings contains FIPS and cryptographic configuration settings if present. |
| goExperiments | Array&lt;str&gt; | GoExperiments lists experimental Go features enabled during compilation (e.g., "arenas", "cgocheck2"). |

GoModuleEntry

GolangModuleEntry represents all captured data for a Golang source scan with go.mod/go.sum

| Field Name | Type | Description |
| --- | --- | --- |
| h1Digest | str | H1Digest is the Go module hash in h1: format from go.sum for verifying module contents. |

GoSourceEntry

GolangSourceEntry represents all captured data for a Golang package found through source analysis

| Field Name | Type | Description |
| --- | --- | --- |
| h1Digest | str | H1Digest is the Go module hash in h1: format from go.sum for verifying module contents. |
| os | str | OperatingSystem is the target OS for build constraints (e.g., "linux", "darwin", "windows"). |
| architecture | str | Architecture is the target CPU architecture for build constraints (e.g., "amd64", "arm64"). |
| buildTags | str | BuildTags are the build tags used to conditionally compile code (e.g., "integration,debug"). |
| cgoEnabled | bool | CgoEnabled indicates whether CGO was enabled for this package. |

HaskellHackageStackEntry

HackageStackYamlEntry represents a single entry from the "extra-deps" section of a stack.yaml file.

| Field Name | Type | Description |
| --- | --- | --- |
| pkgHash | str | PkgHash is the package content hash for verification |

HaskellHackageStackLockEntry

HackageStackYamlLockEntry represents a single entry from the "packages" section of a stack.yaml.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| pkgHash | str | PkgHash is the package content hash for verification |
| snapshotURL | str | SnapshotURL is the URL to the Stack snapshot this package came from |

HomebrewFormula

Represents metadata about a Homebrew formula package extracted from formula JSON files.

| Field Name | Type | Description |
| --- | --- | --- |
| tap | str | Tap is the Homebrew tap this formula belongs to (e.g. "homebrew/core") |
| homepage | str | Homepage is the upstream project homepage URL |
| description | str | Description is a human-readable formula description |

JavaArchive

Encapsulates all Java ecosystem metadata for a package as well as an (optional) parent relationship.

| Field Name | Type | Description |
| --- | --- | --- |
| virtualPath | str | VirtualPath is path within the archive hierarchy, where nested entries are delimited with ':' (for nested JARs) |
| manifest | JavaManifest | Manifest is parsed META-INF/MANIFEST.MF contents |
| pomProperties | JavaPomProperties | PomProperties is parsed pom.properties file contents |
| pomProject | JavaPomProject | PomProject is parsed pom.xml file contents |
| digest | Array&lt;Digest&gt; | ArchiveDigests is cryptographic hashes of the archive file |
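A sketch of how these fields might appear for a nested JAR (all values invented; the exact shapes of the nested manifest and digest objects are assumptions based on the JavaManifest and Digest tables):

```json
{
  "virtualPath": "/app.jar:BOOT-INF/lib/example.jar",
  "manifest": {
    "main": [ { "key": "Implementation-Version", "value": "1.0.0" } ]
  },
  "pomProperties": {
    "path": "META-INF/maven/com.example/example/pom.properties",
    "groupId": "com.example",
    "artifactId": "example",
    "version": "1.0.0"
  },
  "digest": [ { "algorithm": "sha1", "value": "PLACEHOLDER" } ]
}
```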

JavaManifest

Represents the fields of interest extracted from a Java archive's META-INF/MANIFEST.MF file.

| Field Name | Type | Description |
| --- | --- | --- |
| main | KeyValues | Main is main manifest attributes as key-value pairs |
| sections | Array&lt;KeyValues&gt; | Sections are the named sections from the manifest (e.g. per-entry attributes) |

JavaPomParent

Contains the fields within the `<parent>` tag in a pom.xml file

| Field Name | Type | Description |
| --- | --- | --- |
| groupId | str | GroupID is the parent Maven group identifier |
| artifactId | str | ArtifactID is the parent Maven artifact identifier |
| version | str | Version is the parent version (child inherits configuration from this specific version of parent POM) |

JavaPomProject

Represents fields of interest extracted from a Java archive's pom.xml file.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is path to the pom.xml file within the archive |
| parent | JavaPomParent | Parent is the parent POM reference for inheritance (child POMs inherit configuration from parent) |
| groupId | str | GroupID is Maven group identifier (reversed domain name like org.apache.maven) |
| artifactId | str | ArtifactID is Maven artifact identifier (project name) |
| version | str | Version is project version (together with groupId and artifactId forms Maven coordinates groupId:artifactId:version) |
| name | str | Name is a human-readable project name (displayed in Maven-generated documentation) |
| description | str | Description is detailed project description |
| url | str | URL is the project URL (typically project website or repository) |

JavaPomProperties

Represents the fields of interest extracted from a Java archive's pom.properties file.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is path to the pom.properties file within the archive |
| name | str | Name is the project name |
| groupId | str | GroupID is Maven group identifier uniquely identifying the project across all projects (follows reversed domain name convention like com.company.project) |
| artifactId | str | ArtifactID is Maven artifact identifier, the name of the jar/artifact (unique within the groupId scope) |
| version | str | Version is artifact version |
| scope | str | Scope is dependency scope determining when dependency is available (compile=default all phases, test=test compilation/execution only, runtime=runtime and test not compile, provided=expected from JDK or container) |
| extraFields | obj | Extra is additional custom properties not in standard Maven coordinates |

JavaJvmInstallation

JavaVMInstallation represents a Java Virtual Machine installation discovered on the system with its release information and file list.

| Field Name | Type | Description |
| --- | --- | --- |
| release | JavaVMRelease | Release is JVM release information and version details |
| files | Array&lt;str&gt; | Files are the list of files that are part of this JVM installation |

JavaVMRelease

Represents JVM version and build information extracted from the release file in a Java installation.

| Field Name | Type | Description |
| --- | --- | --- |
| implementor | str | Implementor is extracted with the `java.vendor` JVM property |
| implementorVersion | str | ImplementorVersion is extracted with the `java.vendor.version` JVM property |
| javaRuntimeVersion | str | JavaRuntimeVersion is extracted from the `java.runtime.version` JVM property |
| javaVersion | str | JavaVersion matches that from `java -version` command output |
| javaVersionDate | str | JavaVersionDate is extracted from the `java.version.date` JVM property |
| libc | str | Libc can either be 'glibc' or 'musl' |
| modules | Array&lt;str&gt; | Modules is a list of JVM modules that are packaged |
| osArch | str | OsArch is the target CPU architecture |
| osName | str | OsName is the name of the target runtime operating system environment |
| osVersion | str | OsVersion is the version of the target runtime operating system environment |
| source | str | Source refers to the origin repository of OpenJDK source |
| buildSource | str | BuildSource is the Git SHA of the build repository |
| buildSourceRepo | str | BuildSourceRepo refers to the repository URL for the build source |
| sourceRepo | str | SourceRepo refers to the OpenJDK repository URL |
| fullVersion | str | FullVersion is extracted from the `java.runtime.version` JVM property |
| semanticVersion | str | SemanticVersion is derived from the OpenJDK version |
| buildInfo | str | BuildInfo contains additional build information |
| jvmVariant | str | JvmVariant specifies the JVM variant (e.g., Hotspot or OpenJ9) |
| jvmVersion | str | JvmVersion is extracted from the `java.vm.version` JVM property |
| imageType | str | ImageType can be 'JDK' or 'JRE' |
| buildType | str | BuildType can be 'commercial' (used in some older Oracle JDK distributions) |

JavascriptNpmPackage

NpmPackage represents the contents of a javascript package.json file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in package.json |
| version | str | Version is the package version as found in package.json |
| author | str | Author is package author name |
| homepage | str | Homepage is project homepage URL |
| description | str | Description is a human-readable package description |
| url | str | URL is repository or project URL |
| private | bool | Private is whether this is a private package |

JavascriptNpmPackageLockEntry

NpmPackageLockEntry represents a single entry within the "packages" section of a package-lock.json file.

| Field Name | Type | Description |
| --- | --- | --- |
| resolved | str | Resolved is URL where this package was downloaded from (registry source) |
| integrity | str | Integrity is Subresource Integrity hash for verification using standard SRI format (sha512-... or sha1-...). npm changed from SHA-1 to SHA-512 in newer versions. For registry sources this is the integrity from registry, for remote tarballs it's SHA-512 of the file. npm verifies tarball matches this hash before unpacking, throwing EINTEGRITY error if mismatch detected. |
| dependencies | obj | Dependencies is a map of dependencies and their version markers, i.e. "lodash": "^1.0.0" |
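For illustration, a package-lock.json entry of this shape might be captured as (the integrity value is a placeholder, not a real hash):

```json
{
  "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
  "integrity": "sha512-PLACEHOLDER",
  "dependencies": { "lodash": "^1.0.0" }
}
```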

JavascriptPnpmLockEntry

PnpmLockEntry represents a single entry in the "packages" section of a pnpm-lock.yaml file.

| Field Name | Type | Description |
| --- | --- | --- |
| resolution | PnpmLockResolution | Resolution is the resolution information for the package |
| dependencies | obj | Dependencies is a map of dependencies and their versions |

JavascriptYarnLockEntry

YarnLockEntry represents a single entry section of a yarn.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| resolved | str | Resolved is URL where this package was downloaded from |
| integrity | str | Integrity is Subresource Integrity hash for verification (SRI format) |
| dependencies | obj | Dependencies is a map of dependencies and their versions |

LinuxKernelArchive

LinuxKernel represents all captured data for a Linux kernel

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is kernel name (typically "Linux") |
| architecture | str | Architecture is the target CPU architecture |
| version | str | Version is kernel version string |
| extendedVersion | str | ExtendedVersion is additional version information |
| buildTime | str | BuildTime is when the kernel was built |
| author | str | Author is who built the kernel |
| format | str | Format is kernel image format (e.g. bzImage, zImage) |
| rwRootFS | bool | RWRootFS is whether root filesystem is mounted read-write |
| swapDevice | int | SwapDevice is swap device number |
| rootDevice | int | RootDevice is root device number |
| videoMode | str | VideoMode is default video mode setting |

LinuxKernelModule

Represents a loadable kernel module (.ko file) with its metadata, parameters, and dependencies.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is module name |
| version | str | Version is module version string |
| sourceVersion | str | SourceVersion is the source code version identifier |
| path | str | Path is the filesystem path to the .ko kernel object file (absolute path) |
| description | str | Description is a human-readable module description |
| author | str | Author is module author name and email |
| license | str | License is module license (e.g. GPL, BSD) which must be compatible with kernel |
| kernelVersion | str | KernelVersion is kernel version this module was built for |
| versionMagic | str | VersionMagic is version magic string for compatibility checking (includes kernel version, SMP status, module loading capabilities like "3.17.4-302.fc21.x86_64 SMP mod_unload modversions"). Module will NOT load if vermagic doesn't match running kernel. |
| parameters | obj | Parameters are the module parameters that can be configured at load time (user-settable values like module options) |

LuarocksPackage

Represents a Lua package managed by the LuaRocks package manager with metadata from .rockspec files.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the .rockspec file |
| version | str | Version is the package version as found in the .rockspec file |
| license | str | License is license identifier |
| homepage | str | Homepage is project homepage URL |
| description | str | Description is a human-readable package description |
| url | str | URL is the source download URL |
| dependencies | obj | Dependencies are the map of dependency names to version constraints |

NixStoreEntry

Represents a package in the Nix store (/nix/store) with its derivation information and metadata.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is full store path for this output (e.g. /nix/store/abc123...-package-1.0) |
| output | str | Output is the specific output name for multi-output packages (empty string for default "out" output, can be "bin", "dev", "doc", etc.) |
| outputHash | str | OutputHash is hash prefix of the store path basename (first part before the dash) |
| derivation | NixDerivation | Derivation is information about the .drv file that describes how this package was built |
| files | Array&lt;str&gt; | Files are the list of files under the nix/store path for this package |
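An illustrative entry might look like the following (store hashes are invented placeholders; the nested derivation shape follows the NixDerivation table below):

```json
{
  "path": "/nix/store/abc123placeholder-hello-2.12",
  "output": "out",
  "outputHash": "abc123placeholder",
  "derivation": {
    "path": "/nix/store/def456placeholder-hello-2.12.drv",
    "system": "x86_64-linux"
  },
  "files": ["/nix/store/abc123placeholder-hello-2.12/bin/hello"]
}
```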

NixDerivation

Represents a Nix .drv file that describes how to build a package including inputs, outputs, and build instructions.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is path to the .drv file in Nix store |
| system | str | System is target system string indicating where derivation can be built (e.g. "x86_64-linux", "aarch64-darwin"). Must match current system for local builds. |
| inputDerivations | Array&lt;NixDerivationReference&gt; | InputDerivations are the list of other derivations that were inputs to this build (dependencies) |
| inputSources | Array&lt;str&gt; | InputSources are the list of source file paths that were inputs to this build |

NixDerivationReference

Represents a reference to another derivation used as a build input or runtime dependency.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is path to the referenced .drv file |
| outputs | Array&lt;str&gt; | Outputs are which outputs of the referenced derivation were used (e.g. ["out"], ["bin", "dev"]) |

OpamPackage

Represents an OCaml package managed by the OPAM package manager with metadata from .opam files.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name as found in the .opam file |
| version | str | Version is the package version as found in the .opam file |
| licenses | Array&lt;str&gt; | Licenses are the list of applicable licenses |
| url | str | URL is download URL for the package source |
| checksum | Array&lt;str&gt; | Checksums are the list of checksums for verification |
| homepage | str | Homepage is project homepage URL |
| dependencies | Array&lt;str&gt; | Dependencies are the list of required dependencies |

PeBinary

Represents metadata captured from a Portable Executable formatted binary (dll, exe, etc.)

| Field Name | Type | Description |
| --- | --- | --- |
| VersionResources | KeyValues | VersionResources contains key-value pairs extracted from the PE file's version resource section (e.g., FileVersion, ProductName, CompanyName). |

PhpComposerInstalledEntry

Represents a single package entry from a Composer v1/v2 "installed.json" file (very similar to composer.lock files).

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is package name in vendor/package format (e.g. symfony/console) |
| version | str | Version is the package version |
| source | PhpComposerExternalReference | Source is the source repository information for development (typically git repo, used when passing --prefer-source). Originates from source code repository. |
| dist | PhpComposerExternalReference | Dist is distribution archive information for production (typically zip/tar, default install method). Packaged version of released code. |
| require | obj | Require is runtime dependencies with version constraints (package will not install unless these requirements can be met) |
| provide | obj | Provide is virtual packages/functionality provided by this package (allows other packages to depend on capabilities) |
| require-dev | obj | RequireDev is development-only dependencies (not installed in production, only when developing this package or running tests) |
| suggest | obj | Suggest is optional but recommended dependencies (suggestions for packages that would extend functionality) |
| license | Array&lt;str&gt; | License is the list of license identifiers (SPDX format) |
| type | str | Type is package type indicating purpose (library=reusable code, project=application, metapackage=aggregates dependencies, etc.) |
| notification-url | str | NotificationURL is the URL to notify when package is installed (for tracking/statistics) |
| bin | Array&lt;str&gt; | Bin is the list of binary/executable files that should be added to PATH |
| authors | Array&lt;PhpComposerAuthors&gt; | Authors are the list of package authors with name/email/homepage |
| description | str | Description is a human-readable package description |
| homepage | str | Homepage is project homepage URL |
| keywords | Array&lt;str&gt; | Keywords are the list of keywords for package discovery/search |
| time | str | Time is timestamp when this package version was released |

PhpComposerAuthors

Represents author information for a PHP Composer package from the authors field in composer.json.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is author's full name |
| email | str | Email is author's email address |
| homepage | str | Homepage is author's personal or company website |

PhpComposerExternalReference

Represents source or distribution information for a PHP package, indicating where the package code is retrieved from.

| Field Name | Type | Description |
| --- | --- | --- |
| type | str | Type is reference type (git for source VCS, zip/tar for dist archives) |
| url | str | URL is the URL to the resource (git repository URL or archive download URL) |
| reference | str | Reference is git commit hash or version tag for source, or archive version for dist |
| shasum | str | Shasum is SHA hash of the archive file for integrity verification (dist only) |

PhpComposerLockEntry

Represents a single package entry found from a composer.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is package name in vendor/package format (e.g. symfony/console) |
| version | str | Version is the package version |
| source | PhpComposerExternalReference | Source is the source repository information for development (typically git repo, used when passing --prefer-source). Originates from source code repository. |
| dist | PhpComposerExternalReference | Dist is distribution archive information for production (typically zip/tar, default install method). Packaged version of released code. |
| require | obj | Require is runtime dependencies with version constraints (package will not install unless these requirements can be met) |
| provide | obj | Provide is virtual packages/functionality provided by this package (allows other packages to depend on capabilities) |
| require-dev | obj | RequireDev is development-only dependencies (not installed in production, only when developing this package or running tests) |
| suggest | obj | Suggest is optional but recommended dependencies (suggestions for packages that would extend functionality) |
| license | Array&lt;str&gt; | License is the list of license identifiers (SPDX format) |
| type | str | Type is package type indicating purpose (library=reusable code, project=application, metapackage=aggregates dependencies, etc.) |
| notification-url | str | NotificationURL is the URL to notify when package is installed (for tracking/statistics) |
| bin | Array&lt;str&gt; | Bin is the list of binary/executable files that should be added to PATH |
| authors | Array&lt;PhpComposerAuthors&gt; | Authors are the list of package authors with name/email/homepage |
| description | str | Description is a human-readable package description |
| homepage | str | Homepage is project homepage URL |
| keywords | Array&lt;str&gt; | Keywords are the list of keywords for package discovery/search |
| time | str | Time is timestamp when this package version was released |

PhpComposerAuthors

Represents author information for a PHP Composer package from the authors field in composer.json.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is author's full name |
| email | str | Email is author's email address |
| homepage | str | Homepage is author's personal or company website |

PhpComposerExternalReference

Represents source or distribution information for a PHP package, indicating where the package code is retrieved from.

| Field Name | Type | Description |
| --- | --- | --- |
| type | str | Type is reference type (git for source VCS, zip/tar for dist archives) |
| url | str | URL is the URL to the resource (git repository URL or archive download URL) |
| reference | str | Reference is git commit hash or version tag for source, or archive version for dist |
| shasum | str | Shasum is SHA hash of the archive file for integrity verification (dist only) |

PhpPearEntry

Represents a single package entry found within php pear metadata files.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name |
| channel | str | Channel is PEAR channel this package is from |
| version | str | Version is the package version |
| license | Array&lt;str&gt; | License is the list of applicable licenses |

PhpPeclEntry

Represents a single package entry found within php pecl metadata files.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name |
| channel | str | Channel is PEAR channel this package is from |
| version | str | Version is the package version |
| license | Array&lt;str&gt; | License is the list of applicable licenses |

PortageDbEntry

PortageEntry represents a single package entry in the portage DB flat-file store.

| Field Name | Type | Description |
| --- | --- | --- |
| installedSize | int | InstalledSize is total size of installed files in bytes |
| licenses | str | Licenses is license string which may be an expression (e.g. "GPL-2 OR Apache-2.0") |
| files | Array&lt;PortageFileRecord&gt; | Files are the files installed by this package (tracked in CONTENTS file) |

PortageFileRecord

Represents a single file attributed to a portage package.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is the file path relative to the filesystem root |
| digest | Digest | Digest is file content hash (MD5 for regular files in CONTENTS format: "obj filename md5hash mtime") |

PythonPackage

Represents all captured data for a python egg or wheel package (specifically as outlined in the PyPA core metadata specification https://packaging.python.org/en/latest/specifications/core-metadata/).

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name from the Name field in PKG-INFO or METADATA. |
| version | str | Version is the package version from the Version field in PKG-INFO or METADATA. |
| author | str | Author is the package author name from the Author field. |
| authorEmail | str | AuthorEmail is the package author's email address from the Author-Email field. |
| platform | str | Platform indicates the target platform for the package (e.g., "any", "linux", "win32"). |
| files | Array&lt;PythonFileRecord&gt; | Files are the installed files listed in the RECORD file for wheels or installed-files.txt for eggs. |
| sitePackagesRootPath | str | SitePackagesRootPath is the root directory path containing the package (e.g., "/usr/lib/python3.9/site-packages"). |
| topLevelPackages | Array&lt;str&gt; | TopLevelPackages are the top-level Python module names from top_level.txt file. |
| directUrlOrigin | PythonDirectURLOriginInfo | DirectURLOrigin contains VCS or direct URL installation information from direct_url.json. |
| requiresPython | str | RequiresPython specifies the Python version requirement (e.g., ">=3.6"). |
| requiresDist | Array&lt;str&gt; | RequiresDist lists the package dependencies with version specifiers from Requires-Dist fields. |
| providesExtra | Array&lt;str&gt; | ProvidesExtra lists optional feature names that can be installed via extras (e.g., "dev", "test"). |
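A hypothetical record for an installed wheel might be captured as follows (package name and values invented for illustration):

```json
{
  "name": "example-pkg",
  "version": "2.3.1",
  "author": "Jane Doe",
  "authorEmail": "jane@example.com",
  "platform": "any",
  "sitePackagesRootPath": "/usr/lib/python3.11/site-packages",
  "topLevelPackages": ["example_pkg"],
  "requiresPython": ">=3.8",
  "requiresDist": ["urllib3 (>=1.21.1)"],
  "providesExtra": ["dev", "test"]
}
```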

PythonDirectURLOriginInfo

Represents installation source metadata from direct_url.json for packages installed from VCS or direct URLs.

| Field Name | Type | Description |
| --- | --- | --- |
| url | str | URL is the source URL from which the package was installed. |
| commitId | str | CommitID is the VCS commit hash if installed from version control. |
| vcs | str | VCS is the version control system type (e.g., "git", "hg"). |

PythonFileDigest

Represents the file metadata for a single file attributed to a python package.

| Field Name | Type | Description |
| --- | --- | --- |
| algorithm | str | Algorithm is the hash algorithm used (e.g., "sha256"). |
| value | str | Value is the hex-encoded hash digest value. |

PythonFileRecord

Represents a single entry within a RECORD file for a python wheel or egg package

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is the installed file path from the RECORD file. |
| digest | PythonFileDigest | Digest contains the hash algorithm and value for file integrity verification. |
| size | str | Size is the file size in bytes as a string. |

PythonPdmLockEntry

Represents a single package entry within a pdm.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| summary | str | Summary provides a description of the package |
| files | Array&lt;PythonPdmFileEntry&gt; | Files are the package files with their paths and hash digests (for the base package without extras) |
| marker | str | Marker is the environment marker, a conditional expression that determines whether the package should be installed based on the runtime environment |
| requiresPython | str | RequiresPython specifies the Python version requirement (e.g., ">=3.6"). |
| dependencies | Array&lt;str&gt; | Dependencies are the dependency specifications for the base package (without extras) |
| extras | Array&lt;PythonPdmLockExtraVariant&gt; | Extras contains variants for different extras combinations (PDM may have multiple entries per package) |

PythonFileDigest

Represents the file metadata for a single file attributed to a python package.

| Field Name | Type | Description |
| --- | --- | --- |
| algorithm | str | Algorithm is the hash algorithm used (e.g., "sha256"). |
| value | str | Value is the hex-encoded hash digest value. |

PythonPdmFileEntry

| Field Name | Type | Description |
| --- | --- | --- |
| url | str | URL is the file download URL |
| digest | PythonFileDigest | Digest is the hash digest of the file hosted at the URL |

PythonPdmLockExtraVariant

Represents a specific extras combination variant within a PDM lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| extras | Array&lt;str&gt; | Extras are the optional extras enabled for this variant (e.g., ["toml"], ["dev"], or ["toml", "dev"]) |
| dependencies | Array&lt;str&gt; | Dependencies are the dependencies specific to this extras variant |
| files | Array&lt;PythonPdmFileEntry&gt; | Files are the package files specific to this variant (only populated if different from base) |
| marker | str | Marker is the environment conditional expression for this variant (e.g., "python_version < \"3.11\"") |

PythonPipRequirementsEntry

PythonRequirementsEntry represents a single entry within a [*-]requirements.txt file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the package name from the requirements file. |
| extras | Array&lt;str&gt; | Extras are the optional features to install from the package (e.g., package[dev,test]). |
| versionConstraint | str | VersionConstraint specifies version requirements (e.g., ">=1.0,<2.0"). |
| url | str | URL is the direct download URL or VCS URL if specified instead of a PyPI package. |
| markers | str | Markers are environment marker expressions for conditional installation (e.g., "python_version >= '3.8'"). |
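To illustrate how a single requirements line maps onto these fields, a hypothetical line such as `example[dev,test]>=1.0,<2.0 ; python_version >= '3.8'` could produce:

```json
{
  "name": "example",
  "extras": ["dev", "test"],
  "versionConstraint": ">=1.0,<2.0",
  "markers": "python_version >= '3.8'"
}
```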

PythonPipfileLockEntry

Represents a single package entry within a Pipfile.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| hashes | Array&lt;str&gt; | Hashes are the package file hash values in the format "algorithm:digest" for integrity verification. |
| index | str | Index is the PyPI index name where the package should be fetched from. |

PythonPoetryLockEntry

Represents a single package entry within a poetry.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| index | str | Index is the package repository name where the package should be fetched from. |
| dependencies | Array&lt;PythonPoetryLockDependencyEntry&gt; | Dependencies are the package's runtime dependencies with version constraints. |
| extras | Array&lt;PythonPoetryLockExtraEntry&gt; | Extras are optional feature groups that include additional dependencies. |

PythonPoetryLockDependencyEntry

Represents a single dependency entry within a Poetry lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the dependency package name. |
| version | str | Version is the locked version or version constraint for the dependency. |
| optional | bool | Optional indicates whether this dependency is optional (only needed for certain extras). |
| markers | str | Markers are environment marker expressions that conditionally enable the dependency (e.g., "python_version >= '3.8'"). |
| extras | Array&lt;str&gt; | Extras are the optional feature names from the dependency that should be installed. |

PythonPoetryLockExtraEntry

Represents an optional feature group in a Poetry lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the optional feature name (e.g., "dev", "test"). |
| dependencies | Array&lt;str&gt; | Dependencies are the package names required when this extra is installed. |

PythonUvLockEntry

Represents a single package entry within a uv.lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| index | str | Index is the package repository name where the package should be fetched from. |
| dependencies | Array&lt;PythonUvLockDependencyEntry&gt; | Dependencies are the package's runtime dependencies with version constraints. |
| extras | Array&lt;PythonUvLockExtraEntry&gt; | Extras are optional feature groups that include additional dependencies. |

PythonUvLockDependencyEntry

Represents a single dependency entry within a uv lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the dependency package name. |
| optional | bool | Optional indicates whether this dependency is optional (only needed for certain extras). |
| markers | str | Markers are environment marker expressions that conditionally enable the dependency (e.g., "python_version >= '3.8'"). |
| extras | Array&lt;str&gt; | Extras are the optional feature names from the dependency that should be installed. |

PythonUvLockExtraEntry

Represents an optional feature group in a uv lock file.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the optional feature name (e.g., "dev", "test"). |
| dependencies | Array&lt;str&gt; | Dependencies are the package names required when this extra is installed. |

RDescription

Represents metadata from an R package DESCRIPTION file containing package information, dependencies, and author details.

| Field Name | Type | Description |
| --- | --- | --- |
| title | str | Title is short one-line package title |
| description | str | Description is detailed package description |
| author | str | Author is package author(s) |
| maintainer | str | Maintainer is current package maintainer |
| url | Array&lt;str&gt; | URL is the list of related URLs |
| repository | str | Repository is CRAN or other repository name |
| built | str | Built is R version and platform this was built with |
| needsCompilation | bool | NeedsCompilation is whether this package requires compilation |
| imports | Array&lt;str&gt; | Imports are the packages imported in the NAMESPACE |
| depends | Array&lt;str&gt; | Depends are the packages this package depends on |
| suggests | Array&lt;str&gt; | Suggests are the optional packages that extend functionality |

RpmArchive

Represents package metadata extracted directly from a .rpm archive file, containing the same information as an RPM database entry.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the RPM package name as found in the RPM database. |
| version | str | Version is the upstream version of the package. |
| epoch | int \| null | |
| architecture | str | Arch is the target CPU architecture (e.g., "x86_64", "aarch64", "noarch"). |
| release | str | Release is the package release number or distribution-specific version suffix. |
| sourceRpm | str | SourceRpm is the source RPM filename that was used to build this package. |
| signatures | Array&lt;RpmSignature&gt; | Signatures contains GPG signature metadata for package verification. |
| size | int | Size is the total installed size of the package in bytes. |
| vendor | str | Vendor is the organization that packaged the software. |
| modularityLabel | str | ModularityLabel identifies the module stream for modular RPM packages (e.g., "nodejs:12:20200101"). |
| provides | Array&lt;str&gt; | Provides lists the virtual packages and capabilities this package provides. |
| requires | Array&lt;str&gt; | Requires lists the dependencies required by this package. |
| files | Array&lt;RpmFileRecord&gt; | Files are the file records for all files owned by this package. |
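A hypothetical record of this shape (package name and values invented; the nested file record follows the RpmFileRecord table below):

```json
{
  "name": "example-libs",
  "version": "1.0.0",
  "epoch": null,
  "architecture": "x86_64",
  "release": "1.el9",
  "sourceRpm": "example-libs-1.0.0-1.el9.src.rpm",
  "size": 123456,
  "vendor": "Example Vendor",
  "provides": ["libexample.so.1()(64bit)"],
  "requires": ["glibc"],
  "files": [
    { "path": "/usr/lib64/libexample.so.1", "mode": 33188, "size": 2048, "userName": "root", "groupName": "root" }
  ]
}
```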

RpmFileRecord

Represents the file metadata for a single file attributed to a RPM package.

| Field Name | Type | Description |
| --- | --- | --- |
| path | str | Path is the absolute file path where the file is installed. |
| mode | int | Mode is the file permission mode bits following Unix stat.h conventions. |
| size | int | Size is the file size in bytes. |
| digest | Digest | Digest contains the hash algorithm and value for file integrity verification. |
| userName | str | UserName is the owner username for the file. |
| groupName | str | GroupName is the group name for the file. |
| flags | str | Flags indicates the file type (e.g., "%config", "%doc", "%ghost"). |

RpmSignature

Represents a GPG signature for an RPM package used for authenticity verification.

| Field Name | Type | Description |
| --- | --- | --- |
| algo | str | PublicKeyAlgorithm is the public key algorithm used for signing (e.g., "RSA"). |
| hash | str | HashAlgorithm is the hash algorithm used for the signature (e.g., "SHA256"). |
| created | str | Created is the timestamp when the signature was created. |
| issuer | str | IssuerKeyID is the GPG key ID that created the signature. |

RpmDbEntry

Represents all captured data from a RPM DB package entry.

| Field Name | Type | Description |
| --- | --- | --- |
| name | str | Name is the RPM package name as found in the RPM database. |
| version | str | Version is the upstream version of the package. |
| epoch | int \| null | |
| architecture | str | Arch is the target CPU architecture (e.g., "x86_64", "aarch64", "noarch"). |
| release | str | Release is the package release number or distribution-specific version suffix. |
| sourceRpm | str | SourceRpm is the source RPM filename that was used to build this package. |
| signatures | Array&lt;RpmSignature&gt; | Signatures contains GPG signature metadata for package verification. |
| size | int | Size is the total installed size of the package in bytes. |
| vendor | str | Vendor is the organization that packaged the software. |
| modularityLabel | str | ModularityLabel identifies the module stream for modular RPM packages (e.g., "nodejs:12:20200101"). |
| provides | Array&lt;str&gt; | Provides lists the virtual packages and capabilities this package provides. |
| requires | Array&lt;str&gt; | Requires lists the dependencies required by this package. |
| files | Array&lt;RpmFileRecord&gt; | Files are the file records for all files owned by this package. |

RubyGemspec

Represents all metadata parsed from the *.gemspec file

Field Name | Type | Description
---|---|---
name | str | Name is gem name as specified in the gemspec
version | str | Version is gem version as specified in the gemspec
files | Array<str> | Files is logical list of files in the gem (NOT directly usable as filesystem paths. Example: bundler gem lists "lib/bundler/vendor/uri/lib/uri/ldap.rb" but actual path is "/usr/local/lib/ruby/3.2.0/bundler/vendor/uri/lib/uri/ldap.rb". Would need gem installation path, ruby version, and env vars like GEM_HOME to resolve actual paths.)
authors | Array<str> | Authors are the list of gem authors (stored as array regardless of using `author` or `authors` method in gemspec)
homepage | str | Homepage is project homepage URL

RustCargoAuditEntry

Represents Rust crate metadata extracted from a compiled binary using the cargo-auditable format.

Field Name | Type | Description
---|---|---
name | str | Name is crate name as specified in audit section of the built binary
version | str | Version is crate version as specified in audit section of the built binary
source | str | Source is the source registry or repository where this crate came from

RustCargoLockEntry

Represents a locked dependency from a Cargo.lock file with precise version and checksum information.

Field Name | Type | Description
---|---|---
name | str | Name is crate name as specified in Cargo.toml
version | str | Version is crate version as specified in Cargo.toml
source | str | Source is the source registry or repository URL in format "registry+https://github.com/rust-lang/crates.io-index" for registry packages
checksum | str | Checksum is content checksum for registry packages only (hexadecimal string). Cargo doesn't require or include checksums for git dependencies. Used to detect MITM attacks by verifying downloaded crate matches lockfile checksum.
dependencies | Array<str> | Dependencies are the list of dependencies with version constraints
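As an illustration of where these fields come from, here is a hypothetical Cargo.lock entry; the crate name, version, and checksum are made up for the example:

```toml
[[package]]
name = "example-crate"     # -> name
version = "1.2.3"          # -> version
# registry source string for crates.io packages -> source
source = "registry+https://github.com/rust-lang/crates.io-index"
# content checksum, present for registry packages only -> checksum
checksum = "9b34f1c2d8a7e6f5041b2c3d4e5f60718293a4b5c6d7e8f90a1b2c3d4e5f6071"
dependencies = [
    "serde",               # -> dependencies
]
```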

SnapEntry

Represents metadata for a Snap package extracted from snap.yaml or snapcraft.yaml files.

Field Name | Type | Description
---|---|---
snapType | str | SnapType indicates the snap type (base, kernel, app, gadget, or snapd).
base | str | Base is the base snap name that this snap depends on (e.g., "core20", "core22").
snapName | str | SnapName is the snap package name.
snapVersion | str | SnapVersion is the snap package version.
architecture | str | Architecture is the target CPU architecture (e.g., "amd64", "arm64").

SwiftPackageManagerLockEntry

Represents a resolved dependency from a Package.resolved file with its locked version and source location.

Field Name | Type | Description
---|---|---
revision | str | Revision is git commit hash of the resolved package

SwiplpackPackage

Represents a SWI-Prolog package from the pack system with metadata about the package and its dependencies.

Field Name | Type | Description
---|---|---
name | str | Name is the package name as found in the .toml file
version | str | Version is the package version as found in the .toml file
author | str | Author is author name
authorEmail | str | AuthorEmail is author email address
packager | str | Packager is packager name (if different from author)
packagerEmail | str | PackagerEmail is packager email address
homepage | str | Homepage is project homepage URL
dependencies | Array<str> | Dependencies are the list of required dependencies

TerraformLockProviderEntry

Represents a single provider entry in a Terraform dependency lock file (.terraform.lock.hcl).

Field Name | Type | Description
---|---|---
url | str | URL is the provider source address (e.g., "registry.terraform.io/hashicorp/aws").
constraints | str | Constraints specifies the version constraints for the provider (e.g., "~> 4.0").
version | str | Version is the locked provider version selected during terraform init.
hashes | Array<str> | Hashes are cryptographic checksums for the provider plugin archives across different platforms.
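These fields map onto a provider block in .terraform.lock.hcl; a hypothetical entry (version and hash values are illustrative) might look like:

```hcl
provider "registry.terraform.io/hashicorp/aws" {   # -> url
  version     = "4.67.0"                           # -> version
  constraints = "~> 4.0"                           # -> constraints
  hashes = [                                       # -> hashes (per-platform checksums)
    "zh:9b34f1c2d8a7e6f5041b2c3d4e5f60718293a4b5c6d7e8f90a1b2c3d4e5f6071",
  ]
}
```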

WordpressPluginEntry

Represents all metadata parsed from the wordpress plugin file

Field Name | Type | Description
---|---|---
pluginInstallDirectory | str | PluginInstallDirectory is directory name where the plugin is installed
author | str | Author is plugin author name
authorUri | str | AuthorURI is author's website URL

6.4 - Grype Command Line Reference

A vulnerability scanner for container images, filesystems, and SBOMs.

Supports the following image sources:
    grype yourrepo/yourimage:tag             defaults to using images from a Docker daemon
    grype path/to/yourproject                a Docker tar, OCI tar, OCI directory, SIF container, or generic filesystem directory

You can also explicitly specify the scheme to use:
    grype podman:yourrepo/yourimage:tag          explicitly use the Podman daemon
    grype docker:yourrepo/yourimage:tag          explicitly use the Docker daemon
    grype docker-archive:path/to/yourimage.tar   use a tarball from disk for archives created from "docker save"
    grype oci-archive:path/to/yourimage.tar      use a tarball from disk for OCI archives (from Podman or otherwise)
    grype oci-dir:path/to/yourimage              read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)
    grype singularity:path/to/yourimage.sif      read directly from a Singularity Image Format (SIF) container on disk
    grype dir:path/to/yourproject                read directly from a path on disk (any directory)
    grype file:path/to/yourfile                  read directly from a file on disk
    grype sbom:path/to/syft.json                 read Syft JSON from path on disk
    grype registry:yourrepo/yourimage:tag        pull image directly from a registry (no container runtime required)
    grype purl:path/to/purl/file                 read a newline separated file of package URLs from a path on disk
    grype PURL                                   read a single package PURL directly (e.g. pkg:apk/openssl@3.2.1?distro=alpine-3.20.3)
    grype CPE                                    read a single CPE directly (e.g. cpe:2.3:a:openssl:openssl:3.0.14:*:*:*:*:*)

You can also pipe in Syft JSON directly:
 syft yourimage:tag -o json | grype

Usage:
  grype [IMAGE] [flags]
  grype [command]

Available Commands:
  completion  Generate a shell completion for Grype (listing local docker images)
  config      show the grype configuration
  db          vulnerability database operations
  explain     Ask grype to explain a set of findings
  help        Help about any command
  version     show version information

Flags:
      --add-cpes-if-none       generate CPEs for packages with no CPE data
      --by-cve                 orient results by CVE instead of the original vulnerability ID when possible
  -c, --config stringArray     grype configuration file(s) to use
      --distro string          distro to match against in the format: <distro>[-:@]<version>
      --exclude stringArray    exclude paths from being scanned using a glob expression
  -f, --fail-on string         set the return code to 1 if a vulnerability is found with a severity >= the given severity, options=[negligible low medium high critical]
      --file string            file to write the default report output to (default is STDOUT)
      --from stringArray       specify the source behavior to use (e.g. docker, registry, podman, oci-dir, ...)
  -h, --help                   help for grype
      --ignore-states string   ignore matches for vulnerabilities with specified comma separated fix states, options=[fixed not-fixed unknown wont-fix]
      --name string            set the name of the target being analyzed
      --only-fixed             ignore matches for vulnerabilities that are not fixed
      --only-notfixed          ignore matches for vulnerabilities that are fixed
  -o, --output stringArray     report output formatter, formats=[json table cyclonedx cyclonedx-json sarif template], deprecated formats=[embedded-cyclonedx-vex-json embedded-cyclonedx-vex-xml]
      --platform string        an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')
      --profile stringArray    configuration profiles to use
  -q, --quiet                  suppress all logging output
  -s, --scope string           selection of layers to analyze, options=[squashed all-layers deep-squashed] (default "squashed")
      --show-suppressed        show suppressed/ignored vulnerabilities in the output (only supported with table output format)
      --sort-by string         sort the match results with the given strategy, options=[package severity epss risk kev vulnerability] (default "risk")
  -t, --template string        specify the path to a Go template file (requires 'template' output to be selected)
  -v, --verbose count          increase verbosity (-v = info, -vv = debug)
      --version                version for grype
      --vex stringArray        a list of VEX documents to consider when producing scanning results

Use "grype [command] --help" for more information about a command.

grype config

Show the grype configuration.

Usage:
  grype config [flags]
  grype config [command]

Available Commands:
  locations   shows all locations and the order in which grype will look for a configuration file

Flags:
  -h, --help   help for config
      --load   load and validate the grype configuration

grype db check

Check to see if there is a database update available.

Usage:
  grype db check [flags]

Flags:
  -h, --help            help for check
  -o, --output string   format to display results (available=[text, json]) (default "text")

grype db delete

Delete the vulnerability database.

Usage:
  grype db delete [flags]

Flags:
  -h, --help   help for delete

grype db import

Import a vulnerability database archive from a local FILE or URL.

DB archives can be obtained from https://grype.anchore.io/databases (or by running "grype db list"). If the URL has a checksum query parameter with a fully qualified digest (e.g. 'sha256:abc728…') then the archive/DB will be verified against this value.

Usage:
  grype db import FILE | URL [flags]

Flags:
  -h, --help   help for import

grype db list

List all DBs available according to the listing URL.

Usage:
  grype db list [flags]

Flags:
  -h, --help            help for list
  -o, --output string   format to display results (available=[text, raw, json]) (default "text")

grype db providers

List vulnerability providers that are in the database.

Usage:
  grype db providers [flags]

Flags:
  -h, --help            help for providers
  -o, --output string   format to display results (available=[table, json]) (default "table")

grype db search

Search the DB for vulnerabilities or affected packages.

Usage:
  grype db search [flags]
  grype db search [command]

Examples:

  Search for affected packages by vulnerability ID:

    $ grype db search --vuln ELSA-2023-12205

  Search for affected packages by package name:

    $ grype db search --pkg log4j

  Search for affected packages by package name, filtering down to a specific vulnerability:

    $ grype db search --pkg log4j --vuln CVE-2021-44228

  Search for affected packages by PURL (note: version is not considered):

    $ grype db search --pkg 'pkg:rpm/redhat/openssl' # or: '--ecosystem rpm --pkg openssl'

  Search for affected packages by CPE (note: version/update is not considered):

    $ grype db search --pkg 'cpe:2.3:a:jetty:jetty_http_server:*:*:*:*:*:*:*:*'
    $ grype db search --pkg 'cpe:/a:jetty:jetty_http_server'

Available Commands:
  vuln        Search for vulnerabilities within the DB (supports DB schema v6+ only)

Flags:
      --broad-cpe-matching        allow for specific package CPE attributes to match with '*' values on the vulnerability
      --distro stringArray        refine to results with the given operating system (format: 'name', 'name[-:@]version', 'name[-:@]maj.min', 'name[-:@]codename')
      --ecosystem string          ecosystem of the package to search within
      --fixed-state stringArray   only show vulnerabilities with the given fix state (fixed, not-fixed, unknown, wont-fix)
  -h, --help                      help for search
      --limit int                 limit the number of results returned, use 0 for no limit (default 5000)
      --modified-after string     only show vulnerabilities originally published or modified since the given date (format: YYYY-MM-DD)
  -o, --output string             format to display results (available=[table, json]) (default "table")
      --pkg stringArray           package name/CPE/PURL to search for
      --provider stringArray      only show vulnerabilities from the given provider
      --published-after string    only show vulnerabilities originally published after the given date (format: YYYY-MM-DD)
      --vuln stringArray          only show results for the given vulnerability ID

grype db status

Display database status and metadata.

Usage:
  grype db status [flags]

Flags:
  -h, --help            help for status
  -o, --output string   format to display results (available=[text, json]) (default "text")

grype db update

Download and install the latest vulnerability database.

Usage:
  grype db update [flags]

Flags:
  -h, --help   help for update

grype explain

Ask grype to explain a set of findings.

Usage:
  grype explain --id [VULNERABILITY ID] [flags]

Flags:
  -h, --help             help for explain
      --id stringArray   CVE IDs to explain

grype version

Show version information.

Usage:
  grype version [flags]

Flags:
  -h, --help            help for version
  -o, --output string   the format to show the results (allowable: [text json]) (default "text")

6.5 - Grype Configuration Reference

Grype searches for configuration files in the following locations, in order:

  1. ./.grype.yaml - current working directory
  2. ./.grype/config.yaml - app subdirectory in current working directory
  3. ~/.grype.yaml - home directory
  4. $XDG_CONFIG_HOME/grype/config.yaml - XDG config directory

The configuration file can use either .yaml or .yml extensions. The first configuration file found will be used.

For general information about how config and environment variables are handled, see the Configuration Reference section.

log:
  # suppress all logging output (env: GRYPE_LOG_QUIET)
  quiet: false

  # explicitly set the logging level (available: [error warn info debug trace]) (env: GRYPE_LOG_LEVEL)
  level: "warn"

  # file path to write logs to (env: GRYPE_LOG_FILE)
  file: ""

dev:
  # capture resource profiling data (available: [cpu, mem]) (env: GRYPE_DEV_PROFILE)
  profile: ""

  db:
    # (env: GRYPE_DEV_DB_DEBUG)
    debug: false

# the output format of the vulnerability report (options: table, template, json, cyclonedx)
# when using template as the output type, you must also provide a value for 'output-template-file' (env: GRYPE_OUTPUT)
output: []

# if using template output, you must provide a path to a Go template file
# see https://github.com/anchore/grype#using-templates for more information on template output
# the default path to the template file is the current working directory
# output-template-file: .grype/html.tmpl
#
# write output report to a file (default is to write to stdout) (env: GRYPE_FILE)
file: ""

# pretty-print output (env: GRYPE_PRETTY)
pretty: false

# distro to match against in the format: <distro>[-:@]<version> (env: GRYPE_DISTRO)
distro: ""

# generate CPEs for packages with no CPE data (env: GRYPE_ADD_CPES_IF_NONE)
add-cpes-if-none: false

# specify the path to a Go template file (requires 'template' output to be selected) (env: GRYPE_OUTPUT_TEMPLATE_FILE)
output-template-file: ""

# enable/disable checking for application updates on startup (env: GRYPE_CHECK_FOR_APP_UPDATE)
check-for-app-update: true

# ignore matches for vulnerabilities that are not fixed (env: GRYPE_ONLY_FIXED)
only-fixed: false

# ignore matches for vulnerabilities that are fixed (env: GRYPE_ONLY_NOTFIXED)
only-notfixed: false

# ignore matches for vulnerabilities with specified comma separated fix states, options=[fixed not-fixed unknown wont-fix] (env: GRYPE_IGNORE_WONTFIX)
ignore-wontfix: ""

# an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux') (env: GRYPE_PLATFORM)
platform: ""

search:
  # selection of layers to analyze, options=[squashed all-layers deep-squashed] (env: GRYPE_SEARCH_SCOPE)
  scope: "squashed"

  # search within archives that do not contain a file index to search against (tar, tar.gz, tar.bz2, etc)
  # note: enabling this may result in a performance impact since all discovered compressed tars will be decompressed
  # note: for now this only applies to the java package cataloger (env: GRYPE_SEARCH_UNINDEXED_ARCHIVES)
  unindexed-archives: false

  # search within archives that do contain a file index to search against (zip)
  # note: for now this only applies to the java package cataloger (env: GRYPE_SEARCH_INDEXED_ARCHIVES)
  indexed-archives: true

# A list of vulnerability ignore rules, one or more property may be specified and all matching vulnerabilities will be ignored.
# This is the full set of supported rule fields:
#   - vulnerability: CVE-2008-4318
#     fix-state: unknown
#     package:
#       name: libcurl
#       version: 1.5.1
#       type: npm
#       location: "/usr/local/lib/node_modules/**"
#
# VEX fields apply when Grype reads vex data:
#   - vex-status: not_affected
#     vex-justification: vulnerable_code_not_present
ignore: []

# a list of globs to exclude from scanning, for example:
#   - '/etc/**'
#   - './out/**/*.json'
# same as --exclude (env: GRYPE_EXCLUDE)
exclude: []

external-sources:
  # enable Grype searching network source for additional information (env: GRYPE_EXTERNAL_SOURCES_ENABLE)
  enable: false

  maven:
    # search for Maven artifacts by SHA1 (env: GRYPE_EXTERNAL_SOURCES_MAVEN_SEARCH_MAVEN_UPSTREAM)
    search-maven-upstream: true

    # base URL of the Maven repository to search (env: GRYPE_EXTERNAL_SOURCES_MAVEN_BASE_URL)
    base-url: "https://search.maven.org/solrsearch/select"

    # (env: GRYPE_EXTERNAL_SOURCES_MAVEN_RATE_LIMIT)
    rate-limit: 300ms

match:
  java:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_JAVA_USING_CPES)
    using-cpes: false

  jvm:
    # (env: GRYPE_MATCH_JVM_USING_CPES)
    using-cpes: true

  dotnet:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_DOTNET_USING_CPES)
    using-cpes: false

  golang:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_GOLANG_USING_CPES)
    using-cpes: false

    # use CPE matching to find vulnerabilities for the Go standard library (env: GRYPE_MATCH_GOLANG_ALWAYS_USE_CPE_FOR_STDLIB)
    always-use-cpe-for-stdlib: true

    # allow comparison between main module pseudo-versions (e.g. v0.0.0-20240413-2b432cf643...) (env: GRYPE_MATCH_GOLANG_ALLOW_MAIN_MODULE_PSEUDO_VERSION_COMPARISON)
    allow-main-module-pseudo-version-comparison: false

  javascript:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_JAVASCRIPT_USING_CPES)
    using-cpes: false

  python:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_PYTHON_USING_CPES)
    using-cpes: false

  ruby:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_RUBY_USING_CPES)
    using-cpes: false

  rust:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_RUST_USING_CPES)
    using-cpes: false

  stock:
    # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_STOCK_USING_CPES)
    using-cpes: true

# upon scanning, if a severity is found at or above the given severity then the return code will be 1
# default is unset which will skip this validation (options: negligible, low, medium, high, critical) (env: GRYPE_FAIL_ON_SEVERITY)
fail-on-severity: ""

registry:
  # skip TLS verification when communicating with the registry (env: GRYPE_REGISTRY_INSECURE_SKIP_TLS_VERIFY)
  insecure-skip-tls-verify: false

  # use http instead of https when connecting to the registry (env: GRYPE_REGISTRY_INSECURE_USE_HTTP)
  insecure-use-http: false

  # Authentication credentials for specific registries. Each entry describes authentication for a specific authority:
  # - authority: the URL to the registry (e.g. "docker.io", "localhost:5000", etc.) (env: GRYPE_REGISTRY_AUTH_AUTHORITY)
  #   username: a username if using basic credentials (env: GRYPE_REGISTRY_AUTH_USERNAME)
  #   password: a corresponding password (env: GRYPE_REGISTRY_AUTH_PASSWORD)
  #   token: a token if using token-based authentication, mutually exclusive with username/password (env: GRYPE_REGISTRY_AUTH_TOKEN)
  #   tls-cert: filepath to the client certificate used for TLS authentication to the registry (env: GRYPE_REGISTRY_AUTH_TLS_CERT)
  #   tls-key: filepath to the client key used for TLS authentication to the registry (env: GRYPE_REGISTRY_AUTH_TLS_KEY)
  auth: []
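  # For example (illustrative values only):
  # auth:
  #   - authority: "registry.example.com:5000"
  #     username: "ci-user"
  #     password: "example-password"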

  # filepath to a CA certificate (or directory containing *.crt, *.cert, *.pem) used to generate the client certificate (env: GRYPE_REGISTRY_CA_CERT)
  ca-cert: ""

# show suppressed/ignored vulnerabilities in the output (only supported with table output format) (env: GRYPE_SHOW_SUPPRESSED)
show-suppressed: false

# orient results by CVE instead of the original vulnerability ID when possible (env: GRYPE_BY_CVE)
by-cve: false

# sort the match results with the given strategy, options=[package severity epss risk kev vulnerability] (env: GRYPE_SORT_BY)
sort-by: "risk"

# same as --name; set the name of the target being analyzed (env: GRYPE_NAME)
name: ""

# allows users to specify which image source should be used to generate the sbom
# valid values are: registry, docker, podman (env: GRYPE_DEFAULT_IMAGE_PULL_SOURCE)
default-image-pull-source: ""

# specify the source behavior to use (e.g. docker, registry, podman, oci-dir, ...) (env: GRYPE_FROM)
from: []

# a list of VEX documents to consider when producing scanning results (env: GRYPE_VEX_DOCUMENTS)
vex-documents: []

# VEX statuses to consider as ignored rules (env: GRYPE_VEX_ADD)
vex-add: []

# match kernel-header packages with upstream kernel as kernel vulnerabilities (env: GRYPE_MATCH_UPSTREAM_KERNEL_HEADERS)
match-upstream-kernel-headers: false

fix-channel:
  redhat-eus:
    # whether fixes from this channel should be considered, options are "never", "always", or "auto" (conditionally applied based on SBOM data) (env: GRYPE_FIX_CHANNEL_REDHAT_EUS_APPLY)
    apply: "auto"

    # (env: GRYPE_FIX_CHANNEL_REDHAT_EUS_VERSIONS)
    versions: ">= 8.0"

# (env: GRYPE_TIMESTAMP)
timestamp: true

db:
  # location to write the vulnerability database cache (env: GRYPE_DB_CACHE_DIR)
  cache-dir: "~/.cache/grype/db"

  # URL of the vulnerability database (env: GRYPE_DB_UPDATE_URL)
  update-url: "https://grype.anchore.io/databases"

  # certificate to trust when downloading the database and listing file (env: GRYPE_DB_CA_CERT)
  ca-cert: ""

  # check for database updates on execution (env: GRYPE_DB_AUTO_UPDATE)
  auto-update: true

  # validate the database matches the known hash each execution (env: GRYPE_DB_VALIDATE_BY_HASH_ON_START)
  validate-by-hash-on-start: true

  # ensure db build is no older than the max-allowed-built-age (env: GRYPE_DB_VALIDATE_AGE)
  validate-age: true

  # Max allowed age for vulnerability database,
  # age being the time since it was built
  # Default max age is 120h (or five days) (env: GRYPE_DB_MAX_ALLOWED_BUILT_AGE)
  max-allowed-built-age: 120h0m0s

  # fail the scan if unable to check for database updates (env: GRYPE_DB_REQUIRE_UPDATE_CHECK)
  require-update-check: false

  # Timeout for downloading GRYPE_DB_UPDATE_URL to see if the database needs to be downloaded
  # This file is ~156KiB as of 2024-04-17 so the download should be quick; adjust as needed (env: GRYPE_DB_UPDATE_AVAILABLE_TIMEOUT)
  update-available-timeout: 30s

  # Timeout for downloading actual vulnerability DB
  # The DB is ~156MB as of 2024-04-17 so slower connections may exceed the default timeout; adjust as needed (env: GRYPE_DB_UPDATE_DOWNLOAD_TIMEOUT)
  update-download-timeout: 5m0s

  # Maximum frequency to check for vulnerability database updates (env: GRYPE_DB_MAX_UPDATE_CHECK_FREQUENCY)
  max-update-check-frequency: 2h0m0s
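  # For air-gapped hosts, a sketch: load the database out-of-band with
  # "grype db import FILE" and disable online update checks (illustrative):
  #   auto-update: false
  #   require-update-check: false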

exp:

6.6 - Grant Command Line Reference

Grant helps you view licenses for container images, SBOM documents, and filesystems. Apply filters and views that can help you build a picture of licenses in your SBOM.

Usage:
  grant [command]

Available Commands:
  check       Check license compliance for one or more targets
  completion  Generate the autocompletion script for the specified shell
  config      Generate a comprehensive configuration file
  help        Help about any command
  list        List licenses found in one or more targets
  version     Show the version information for grant

Flags:
  -c, --config string        path to configuration file
  -h, --help                 help for grant
      --no-output            suppress terminal output when writing to file
  -o, --output string        output format (table, json) (default "table")
  -f, --output-file string   write JSON output to file (sets output format to json)
  -q, --quiet                suppress all non-essential output
  -v, --verbose              enable verbose output
      --version              version for grant

Use "grant [command] --help" for more information about a command.

grant check

Check evaluates license compliance for container images, SBOMs, filesystems, and files.

Targets can be:

  • Container images: alpine:latest, ubuntu:22.04

  • SBOM files: path/to/sbom.json, path/to/sbom.xml

  • Directories: dir:./project, ./my-app

  • Archive files: project.tar.gz, source.zip

  • License files: LICENSE, COPYING

  • Stdin: - (reads SBOM from stdin)

Exit codes:

  • 0: All targets are compliant

  • 1: One or more targets are non-compliant or an error occurred.

Usage:
  grant check [TARGET...] [flags]

Flags:
      --disable-file-search   disable filesystem license file search
      --dry-run               run check without returning non-zero exit code on violations
  -h, --help                  help for check
      --summary               show only summary information
      --unlicensed            show only packages without licenses

grant config

Generate a complete YAML configuration file with all available Grant options.

This command outputs a comprehensive configuration file that includes:

  • License policy options (allow lists, ignore patterns)

  • Command-line options with defaults

  • Detailed comments explaining each option

The generated configuration can be saved to a file and customized as needed.

Usage:
  grant config [flags]

Flags:
  -h, --help            help for config
  -o, --output string   output file path (default: stdout)

grant list

List shows all licenses found in container images, SBOMs, filesystems, and files without applying policy evaluation.

Targets can be:

  • Container images: alpine:latest, ubuntu:22.04

  • SBOM files: path/to/sbom.json, path/to/sbom.xml

  • Directories: dir:./project, ./my-app

  • Archive files: project.tar.gz, source.zip

  • License files: LICENSE, COPYING

  • Stdin: - (reads SBOM from stdin)

When no target is specified and stdin is available (piped input), grant will automatically read from stdin. This allows usage like:

syft -o json dir:. | grant list Apache-2.0

License filtering:

If license names are provided as additional arguments, only packages with those specific licenses will be shown. For example:

grant list dir:. "MIT" "Apache-2.0"

syft -o json dir:. | grant list "MIT" "Apache-2.0"

This command always returns exit code 0 unless there are processing errors.

Usage:
  grant list [TARGET] [LICENSE...] [flags]

Flags:
      --disable-file-search   disable filesystem license file search
      --group-by string       group results by specified field (risk)
  -h, --help                  help for list
      --pkg string            show detailed information for a specific package (requires license filter)
      --unlicensed            show only packages without licenses

grant version

Show the version information for grant.

Usage:
  grant version [flags]

Flags:
  -h, --help   help for version

6.7 - Data sources

Complete list of data sources used by Grype for vulnerability scanning

The following is a list of the data sources used to directly match packages to vulnerabilities in Grype:

Data Source | Vunnel Provider | Ecosystems
---|---|---
AlmaLinux OSV Database | alma | RPM
Alpine SecDB | alpine | APK
Amazon Linux Security Center | amazon | RPM
Microsoft AzureLinux OVAL | mariner | RPM
Bitnami Vulnerability Database | bitnami | Bitnami
Chainguard Security | chainguard | APK
Chainguard Libraries (OpenVEX) | chainguard_libraries | 
Debian Security Tracker | debian | DPKG
ECHO Security | echo | DPKG
GitHub Security Advisories | github | .NET, GitHub Actions, Go, Java, JavaScript, Python, Ruby, Rust
Microsoft CBL-Mariner OVAL | mariner | RPM
MINIMOS Security | minimos | APK
National Vulnerability Database (NVD) | nvd | .NET, APK, Go, Java, JavaScript, Python, Ruby, Rust
Oracle Linux Security | oracle | RPM
Red Hat Security Data API | rhel | RPM
SUSE Security OVAL | sles | RPM
Ubuntu CVE Tracker | ubuntu | DPKG
Wolfi Security | wolfi | APK

Capabilities

Here are the capabilities of each data source as Grype uses them:

Data Source | Advisory ID
---|---
AlmaLinux OSV Database | ALSA
Alpine SecDB | 
Amazon Linux Security Center | ALAS
Microsoft AzureLinux OVAL | 
Bitnami Vulnerability Database | 
Chainguard Security | CGA
Chainguard Libraries (OpenVEX) | CGA
Debian Security Tracker | DSA
ECHO Security | ECHO
GitHub Security Advisories | GHSA
Microsoft CBL-Mariner OVAL | 
MINIMOS Security | 
National Vulnerability Database (NVD) | CVE
Oracle Linux Security | ELSA
Red Hat Security Data API | RHSA
SUSE Security OVAL | SUSE-SU
Ubuntu CVE Tracker | USN
Wolfi Security | CGA

Auxiliary data

We additionally have auxiliary data sources that are used to enhance vulnerability matching in Grype:

Data Source | Vunnel Provider | Description
---|---|---
Exploit Prediction Scoring System | epss | Data-driven effort by FIRST to predict the likelihood that a software vulnerability will be exploited. Provides daily-updated probability scores (0-1) and percentile rankings for CVE prioritization.
CISA Known Exploited Vulnerabilities | kev | CISA's authoritative catalog of vulnerabilities known to be actively exploited in the wild. Provides exploitation status, required remediation actions, due dates, and ransomware campaign associations.

These sources are cross-cutting: they are not tied to a specific distribution or ecosystem, though they primarily enrich information about CVEs.

6.8 - Grant Configuration Reference

Grant searches for configuration files in the following locations, in order:

  1. ./.grant.yaml - current working directory
  2. ./.grant/config.yaml - app subdirectory in current working directory
  3. ~/.grant.yaml - home directory
  4. $XDG_CONFIG_HOME/grant/config.yaml - XDG config directory

The configuration file can use either .yaml or .yml extensions. The first configuration file found will be used.

For general information about how config and environment variables are handled, see the Configuration Reference section.

# Grant License Compliance Configuration
# Complete configuration file with all available options
# See: https://github.com/anchore/grant

format: table # Output format: "table" or "json" (default: "table")
quiet: false # Suppress all non-essential output (default: false)
verbose: false # Enable verbose output (default: false)
# List of allowed license patterns (supports glob matching)
# Default behavior: DENY all licenses except those explicitly permitted
allow:
  - MIT
  - Apache-2.0
  - BSD-3-Clause
# List of package patterns to ignore from license checking
# Supports glob patterns for flexible matching
ignore-packages: []
  # Add package patterns to ignore here
  # Examples:
  # - "github.com/mycompany/*"
  # - "internal/*"
# Policy enforcement options
require-license: true # When true, deny packages with no detected licenses
require-known-license: false # When true, deny non-SPDX / unparsable licenses

# ============================================================================
# COMMAND-SPECIFIC OPTIONS
# ============================================================================
disable-file-search: false # Disable filesystem license file search
summary: false # Show only summary information for check command
# Show only packages without licenses (default: false)
only-unlicensed: false # maps to the --unlicensed flag on grant check and grant list

6.9 - Configuration Rules

Configuration patterns and options used across all Anchore OSS tools

All Anchore open source tools (Syft, Grype, Grant) share the same configuration system. This guide explains how to configure these tools using command-line flags, environment variables, and configuration files.

Configuration precedence

When you configure a tool, settings are applied in a specific order. If the same setting is specified in multiple places, the tool uses the value from the highest-priority source:

  1. Command-line arguments (highest priority)
  2. Environment variables
  3. Explicit config file (-c PATH or --config PATH)
  4. Auto-discovered configuration file
  5. Default values (lowest priority)

For example, if you set the log level using all three methods, the command-line flag overrides the environment variable, which overrides the config file value.
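In sketch form, the merge behaves as "first non-empty source wins" across the ordered list above. The following Go helper is illustrative only, not the actual Viper/Clio implementation:

```go
package main

import "fmt"

// resolve returns the first non-empty value, mirroring the
// flag > env > explicit config > discovered config > default ordering.
// Hypothetical helper for illustration.
func resolve(sources ...string) string {
	for _, v := range sources {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	flag, env, file, def := "", "debug", "info", "warn"
	// no flag was passed, so the environment variable wins
	fmt.Println(resolve(flag, env, file, def)) // debug
}
```

Passing a non-empty flag value would shadow the environment variable, and so on down the list.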

Viewing your configuration

To see available configuration options and current settings:

  • syft --help — shows all command-line flags
  • syft config — prints a complete sample configuration file
  • syft config --load — displays your current active configuration

Replace syft with the tool you’re using (grype, grant, etc.).

Specifying a configuration file

You can explicitly specify a configuration file using the -c or --config flag, which overrides the auto-discovery behavior.

syft alpine:latest -c /path/to/config.yaml
grype alpine:latest --config ~/.grype-custom.yaml
grant check . -c ./grant-config.yaml

Syft and Grype support multiple configuration files by specifying the flag multiple times:

syft alpine:latest -c base.yaml -c overrides.yaml

When multiple files are specified, individual settings from later files override earlier ones.

Using environment variables

Every configuration option can be set via environment variable. The variable name follows the path to the setting in the configuration file.

Example: To enable pretty-printed JSON output, the config file setting is:

format:
  json:
    pretty: true

The path from root to this value is format.json.pretty, so the environment variable is:

export SYFT_FORMAT_JSON_PRETTY=true

The pattern is: <TOOL>_<PATH>_<TO>_<SETTING> where:

  • <TOOL> is the uppercase tool name (SYFT, GRYPE, GRANT)
  • Path segments are joined with underscores
  • All letters are uppercase

More examples:

# Set log level to debug
export SYFT_LOG_LEVEL=debug

# Configure output format
export GRYPE_OUTPUT=json

# Set registry credentials
export SYFT_REGISTRY_AUTH_USERNAME=myuser
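The dotted-path-to-variable mapping can be sketched as a tiny Go helper (a hypothetical function; the tools perform this mapping internally via their configuration framework):

```go
package main

import (
	"fmt"
	"strings"
)

// envVarName derives the environment variable for a config path, following
// the <TOOL>_<PATH>_<TO>_<SETTING> pattern described above.
func envVarName(tool, path string) string {
	segments := strings.Split(path, ".")
	return strings.ToUpper(tool + "_" + strings.Join(segments, "_"))
}

func main() {
	fmt.Println(envVarName("syft", "format.json.pretty")) // SYFT_FORMAT_JSON_PRETTY
	fmt.Println(envVarName("grype", "log.level"))         // GRYPE_LOG_LEVEL
}
```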

Configuration file auto-discovery

When you don’t specify a configuration file with -c, the tool automatically searches for one. Configuration files use YAML format. The tool searches these locations in order and uses the first file it finds:

  1. .syft.yaml (in current directory)
  2. .syft/config.yaml (in current directory)
  3. ~/.syft.yaml (in home directory)
  4. <XDG_CONFIG_HOME>/syft/config.yaml (typically ~/.config/syft/config.yaml)

Replace syft with your tool name (grype, grant, etc.).

7 - Architecture

How all the projects and datasets fit together

Anchore’s open source security tooling consists of several interconnected tools that work together to detect vulnerabilities and ensure license compliance in software packages. This page explains how these tools interact and how data flows through the system.

The Anchore OSS ecosystem includes five main tools that, at the 30,000 ft view, work together as follows:

---
config:
  layout: dagre
  look: handDrawn
  theme: default
  flowchart:
    curve: linear
---
flowchart TD
    vunnel["***Vunnel***<br><small>Downloads and normalizes<br>security feeds</small>"]:::Ash
    grypedb["***Grype DB***<br><small>Converts feeds to<br>SQLite database</small>"]:::Ash
    grype["***Grype***<br><small>Matches vulnerabilities<br>from SBOM + database</small>"]:::Ash
    syft["***Syft***<br><small>Generates SBOMs from<br>scan targets</small>"]:::Ash
    grant["***Grant***<br><small>Analyzes licenses<br>from SBOM</small>"]:::Ash

    vunnel --> grypedb --> grype
    syft --> grype & grant

    vunnel@{ shape: event}
    grypedb@{ shape: event}
    grype@{ shape: event}
    syft@{ shape: event}
    grant@{ shape: event}

    classDef Ash stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#e1ffe1, color:#000000

Zooming in to the 20,000 ft view, here’s how data flows through the same system:

---
config:
  layout: dagre
  look: handDrawn
  theme: default
  flowchart:
    curve: linear
---
flowchart TB

  feed1["NVD Feed"]
  feed2["Alpine Feed"]
  feed3["... (20+ feeds)"]

  subgraph anchore["<b>Anchore Infrastructure</b>"]
    vunnel["Vunnel"]
    grypedb["Grype DB"]
    cache["Daily DB"]
    vunnel --> grypedb --> cache
  end


  subgraph user["<b>User Environment</b>"]
    targets["Image, filesystem,<br>PURLs, directory, ..."]
    local["DB Cache"]

    syft["Syft"]
    sbom["SBOM"]

    targets --> syft --> sbom

    grype["Grype"]
    vulns["Vulnerability+Package<br>Matches"]
    grant["Grant"]
    licenses["License Compliance<br>Report"]

    grype --> vulns
    grant --> licenses

    sbom --> grype
    sbom --> grant
    local --> grype
  end

  feed1 --> vunnel
  feed2 --> vunnel
  feed3 -.-> vunnel

  cache -. "<i>download</i>" .-> local

  feed1:::ExternalSource@{ shape: cloud}
  feed2:::ExternalSource@{ shape: cloud}
  feed3:::ExternalSource@{ shape: cloud}
  vunnel:::Application@{ shape: event}
  grypedb:::Application@{ shape: event}
  grype:::Application@{ shape: event}
  syft:::Application@{ shape: event}
  grant:::Application@{ shape: event}

  targets:::AnalysisInput
  cache:::Database@{ shape: db}
  local:::Database@{ shape: db}
  sbom:::Document@{ shape: doc}
  vulns:::Document@{ shape: doc}
  licenses:::Document@{ shape: doc}

  style anchore fill:none, stroke:#333333, stroke-width:2px, stroke-dasharray:5 5
  style user fill:none, stroke:#333333, stroke-width:2px, stroke-dasharray:5 5

  classDef AnalysisInput stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#f0f8ff, color:#000000
  classDef ExternalSource stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#f0f8ff, color:#000000
  classDef Application stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#e1ffe1, color:#000000
  classDef Document stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#fff9c4, color:#000000
  classDef Database stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#fff9c4, color:#000000

7.1 - Go CLI patterns

All of the common patterns used in our Go-based CLIs

This document explains how the Go-based Anchore OSS tools are organized, covering the package structure, common core architectural concepts, and where key functionality is implemented.

Use this as a reference when trying to familiarize yourself with the overall structure of Syft, Grype, or other applications.

CLI

The cmd package uses the Clio framework (built on top of spf13/cobra and spf13/viper) to manage flag/argument parsing, configuration, and command execution.

All flags, arguments, and configuration values are represented in the application as a struct. Each command tends to get its own struct with all options the command needs to function. Common options or sets of options can be defined independently and reused across commands, being composed within each command struct that needs them.

Options that represent flags are registered with the AddFlags method defined on the command struct (or on each option struct used within the command struct). If any additional processing needs to be done to elements in command structs or option structs before they are used in the application, you can define a PostLoad method on the struct to mutate the elements you need.

In terms of what is executed when: all processing is done within the selected cobra command’s PreRun hook, wrapping any potential user-provided hook. This means that all of this fits nicely into the existing cobra command lifecycle.

See the sign command in Quill for a small example of all of this together.
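In sketch form, a command options struct with PostLoad validation looks roughly like this (hypothetical fields and command; not Clio's actual interfaces):

```go
package main

import (
	"fmt"
	"strings"
)

// ScanOptions is a hypothetical options struct in the Clio style: fields map
// to flags, environment variables, and config keys, and PostLoad validates or
// normalizes them after all sources have been merged.
type ScanOptions struct {
	Output string
	Scope  string
}

// PostLoad runs after configuration loading, before command execution.
func (o *ScanOptions) PostLoad() error {
	o.Scope = strings.ToLower(o.Scope) // normalize user input
	switch o.Scope {
	case "squashed", "all-layers":
		return nil
	default:
		return fmt.Errorf("invalid scope %q", o.Scope)
	}
}

func main() {
	opts := &ScanOptions{Output: "json", Scope: "Squashed"}
	if err := opts.PostLoad(); err != nil {
		panic(err)
	}
	fmt.Println(opts.Scope) // squashed
}
```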

The reason for this approach is to smooth over the rough edges between cobra and viper, which have multiple ways to configure and use functionality, and to provide a single way to specify any input into the application. Being prescriptive about these approaches has allowed us to take many shared concerns that used to require a lot of boilerplate when creating an application and put them into one framework: Clio.

Execution flow

The following diagrams show the execution of a typical Anchore application at different levels of detail, using the scan command in Syft as a representative example:

sequenceDiagram
    actor user as User
    participant syft as Syft Application
    participant cmd as Command Handler (Cobra)
    participant lib as Library

    user->>syft: syft scan alpine:latest
    syft->>cmd: Execute
    cmd->>cmd: Initialize & Load Configuration
    cmd->>lib: Execute Scan Logic
    lib->>cmd: SBOM
    cmd-->>user: Display/Write SBOM

sequenceDiagram
    actor user as User

    box rgba(0,0,0,.1) Syft Application
      participant main as main.go
      participant cliApp as cli.Application()
      participant clio as Clio Framework
    end

    box rgba(0,0,0,.1) Command Handler
      participant cobra as Command PreRunE
      participant opts as Command Options
      participant runE as Command RunE
    end

    participant lib as Library

    user->>main: syft scan alpine:latest

    Note over main,clio: Syft Application (initialization)
    main->>cliApp: Create app with ID
    cliApp->>clio: clio.New(config)
    clio-->>cliApp: app instance

    Note over cliApp,cobra: Build Command Tree
    cliApp->>cliApp: commands.Scan(app)
    cliApp->>clio: app.SetupCommand(&cobra.Command, opts)
    Note over clio: Bind config sources to options struct
    clio-->>cliApp: configured scanCmd

    cliApp->>cliApp: commands.Root(app, scanCmd)
    cliApp->>clio: app.SetupRootCommand(&cobra.Command, opts)
    clio-->>cliApp: rootCmd with scanCmd attached

    main->>clio: app.Run()
    clio->>cobra: rootCmd.Execute()

    Note over cobra,runE: Command Handler (execution)
    cobra->>cobra: Parse args → "scan alpine:latest"
    cobra->>opts: Load config (files/env/flags)
    cobra->>opts: opts.PostLoad() validation
    cobra->>runE: RunE(cmd, args)

    runE->>lib: Execute Scan Logic
    lib-->>runE: SBOM

    Note over runE: Result Output
    runE-->>user: SBOM output

Package structure

Many of the Anchore OSS tools have the following setup (or very similar):

  • /cmd/NAME/ - CLI application layer. This is the entry point for the command-line tool and wires up much of the functionality from the public API.

    ./cmd/NAME/
    │   ├── cli/
    │   │   ├── cli.go          // where all commands are wired up
    │   │   ├── commands/       // all command implementations
    │   │   ├── options/        // all command flags and configuration options
    │   │   └── ui/             // all handlers for events that are shown on the UI
    │   └── main.go             // entrypoint for the application
    ...
    
  • /NAME/ - Public library API. This is how API users interact with the underlying capabilities without coupling to the application configuration, specific presentation on the terminal, or high-level workflows.

The internalization philosophy

Applications extensively use internal/ packages at multiple levels to minimize the public API surface area. The codebase follows the guiding principle "internalize anything you can": expose only what library consumers truly need.

Take, for example, the various internal packages within Syft:

/internal/               # Project-wide internals (bus, log, etc...)
/syft/internal/          # Syft library-specific internals (relationships, evidence)
/cmd/syft/internal/      # CLI-specific internals (options, UI handlers)
/syft/source/internal/   # Package-specific internals (source resolution details)
/syft/pkg/cataloger/<ecosystem>/internal/  # Cataloger-specific internals

This multi-level approach allows Syft to expose a minimal, stable public API while keeping implementation details flexible and changeable. Go’s module system prevents importing internal/ packages from outside their parent directory, which enforces clean separation of concerns.

Core facilities

The bus system

The bus system, under /internal/bus/ within the target application, is an event publishing mechanism that enables progress reporting and UI updates without coupling the library to any specific user interface implementation.

The bus follows a strict one-way communication pattern: the library publishes events but never subscribes to them. The intention is that functionality is NOT fulfilled by listening to events on the bus and taking action. Only the application layer (CLI) subscribes to events for display. This keeps the library completely decoupled from UI concerns.

You can think of the bus as a structured extension of the logger, allowing for publishing not just strings or maps of strings, but enabling publishing objects that can yield additional telemetry on-demand, fueling richer interactions.

This enables library consumers to implement any UI they want (terminal UI, web UI, no UI) by subscribing to events and handling them appropriately. The library has zero knowledge of how events are used, maintaining a clean separation between business logic and presentation.

The bus is implemented as a singleton with a global publisher that can be set by library consumers:

var publisher partybus.Publisher

func Set(p partybus.Publisher) {
    publisher = p
}

func Publish(e partybus.Event) {
    if publisher != nil {
        publisher.Publish(e)
    }
}

The library calls bus.Publish() throughout cataloging operations. If no publisher is set, events are silently discarded. This makes events truly optional.

Event streams

Picking the right “level” for events is key. Libraries should not assume that events can be read “quickly” off the bus. At the same time, to remain lively and useful, we want consumers of the bus to get information at a rate they choose. A common pattern is to publish a “start” event (for example, “cataloging started”) along with a read-only, thread-safe object that the caller can poll to get progress or status information.
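A minimal sketch of such a poll-able progress object (hypothetical types; the actual progress structures used by the tools are richer):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// progress is a read-only, thread-safe status object that a library can
// attach to a "start" event; consumers poll it at their own cadence
// instead of receiving a per-item event firehose.
type progress struct {
	total   int64
	current atomic.Int64
}

func (p *progress) Size() int64    { return p.total }
func (p *progress) Current() int64 { return p.current.Load() }

func main() {
	p := &progress{total: 100}
	// the library increments the counter as it works...
	for i := 0; i < 42; i++ {
		p.current.Add(1)
	}
	// ...while a subscriber polls whenever it wants to refresh the UI
	fmt.Printf("%d/%d\n", p.Current(), p.Size()) // 42/100
}
```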

sequenceDiagram
    participant CMD as cmd/<br/>(CLI Layer)
    participant Bus as internal/bus/<br/>(Event Bus)
    participant Lib as lib/<br/>(Library Layer)
    participant Progress as Progress Object

    CMD->>Bus: Subscribe()
    CMD->>+Lib: PerformOperation()
    Lib->>Progress: Create progress object
    Lib->>Bus: Publish(StartEvent, progress)
    Bus->>CMD: StartEvent

    loop Poll until complete
        CMD->>Progress: Size(), Current(), Stage(), Error()
        Progress-->>CMD: status (Error: nil)
    end

    Lib-->>-CMD: Return result
    CMD->>Progress: Error()
    Progress-->>CMD: ErrCompleted

This prevents the library from accidentally becoming a “firehose” that overwhelms subscribers trying to convey timely information. When subscribers cannot keep up with the volume of events emitted from the library, the information being displayed tends to go stale and become useless anyway. At the same time, there is a lot of value in responding to events instead of polling for all information.

This pattern balances the best of both worlds: an event-driven system with a consumer-driven update cadence.

The logging system

The logging system, under /internal/log/ within the target application, provides structured logging throughout Anchore’s applications with an injectable logger interface. This allows library consumers to integrate the application’s logging into their own logging infrastructure. An adapter for logrus to this interface is implemented, and we’re happy to accept contributions for other concrete logger adapters.

The logging system is implemented as a singleton with global functions (log.Info, log.Debug, etc.). Library consumers inject their logger by calling the public API function syft.SetLogger(yourLoggerHere).

By default, Syft uses a discard logger (no-op) that silently ignores all log messages. This ensures the library produces no output unless a logger is explicitly provided.

All loggers are automatically wrapped with a redaction layer when you call SetLogger(). The wrapping is applied internally by the logging system and removes sensitive information (such as authentication tokens) from log output. This happens transparently within the application CLI; API users, however, need to explicitly register secrets to be redacted.
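A minimal sketch of the injectable-logger and redaction-wrapper pattern (hypothetical types and a hypothetical SetLogger; the real interfaces live in the tools' public APIs and have many more methods):

```go
package main

import (
	"fmt"
	"strings"
)

// Logger is a minimal injectable logger interface, sketched for illustration.
type Logger interface{ Info(msg string) }

type discard struct{}

func (discard) Info(string) {} // default: no output unless a logger is injected

// redact masks every registered secret in a message, mirroring what the
// redaction layer does when it wraps an injected logger.
func redact(msg string, secrets []string) string {
	for _, s := range secrets {
		msg = strings.ReplaceAll(msg, s, "*******")
	}
	return msg
}

type redacting struct {
	next    Logger
	secrets []string
}

func (r redacting) Info(msg string) { r.next.Info(redact(msg, r.secrets)) }

type stdout struct{}

func (stdout) Info(msg string) { fmt.Println(msg) }

var log Logger = discard{}

// SetLogger injects a logger, wrapping it with redaction (sketch of the
// pattern, not the actual API).
func SetLogger(l Logger, secrets ...string) { log = redacting{next: l, secrets: secrets} }

func main() {
	log.Info("dropped silently") // discard logger: nothing printed
	SetLogger(stdout{}, "s3cr3t-token")
	log.Info("authenticating with s3cr3t-token") // printed with the token masked
}
```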

Releasing

Each application uses goreleaser to build and publish releases, as orchestrated by a release workflow.

The release workflow can be triggered with make release from a local checkout of the repository. Chronicle is used to automatically generate release notes based on GitHub issues and PR titles/labels, using the same information to determine the next version for the release.

With each repo, we tend to publish (but some details may vary slightly between repos):

  • a tag with the version (e.g., v0.50.0)
  • binaries for Linux, Mac, and Windows, uploaded as GitHub release assets (note, we sign and notarize Mac binaries with Quill)
  • Docker images, pushed to Docker Hub and ghcr.io registries
  • updates to Homebrew taps

We ensure the same tool versions are used locally and in CI by using Binny, orchestrated with make and task.

7.2 - Syft

Architecture and design of the Syft SBOM tool

Code organization

At a high level, this is the package structure of Syft:

./cmd/syft/                 // main entrypoint
│   └── ...
└── syft/                   // the "core" syft library
    ├── format/             // contains code to encode or decode to and from SBOM formats
    ├── pkg/                // contains code to catalog packages from a source
    ├── sbom/               // contains the definition of an SBOM
    └── source/             // contains code to create a source object for some input type (e.g. container image, directory, etc)

Syft’s core library is implemented in the syft package and subpackages. The major packages work together in a pipeline:

  • The syft/source package produces a source.Source object that can be used to catalog a directory, container, and other source types.
  • The syft package knows how to take a source.Source object and catalog it to produce an sbom.SBOM object.
  • The syft/format package contains the ability to encode an sbom.SBOM object to and from different SBOM formats (such as SPDX and CycloneDX).

This design creates a clear flow: source → catalog → format:

sequenceDiagram
    actor User
    participant CLI
    participant Resolve as Source Resolution
    participant Catalog as SBOM Creation
    participant Format as Format Output

    User->>CLI: syft scan <target>
    CLI->>CLI: Parse configuration

    CLI->>Resolve: Resolve input (image/dir/file)
    Note over Resolve: Tries: File→Directory→OCI→Docker→Podman→Containerd→Registry
    Resolve-->>CLI: source.Source

    CLI->>Catalog: Create SBOM from source
    Note over Catalog: Task-based cataloging engine
    Catalog-->>CLI: sbom.SBOM struct

    CLI->>Format: Write to format(s)
    Note over Format: Parallel: SPDX, CycloneDX, Syft JSON, etc.
    Format-->>User: SBOM file(s)

The next diagram shows the task-based architecture and execution phases. Tasks are selected by tags (image/directory/installed) and organized into serial phases, with parallel execution within each phase:

sequenceDiagram
    participant CLI as scan.go
    participant GetSource as Source Providers
    participant CreateSBOM as syft.CreateSBOM
    participant Config as CreateSBOMConfig
    participant Executor as Task Executor
    participant Builder as sbomsync.Builder
    participant Resolver as file.Resolver

    Note over CLI,GetSource: Source Resolution
    CLI->>GetSource: GetSource(userInput, cfg)
    GetSource->>GetSource: Try providers until success
    GetSource-->>CLI: source.Source + file.Resolver

    Note over CLI,Builder: SBOM Creation (task-based architecture)
    CLI->>CreateSBOM: CreateSBOM(ctx, source, cfg)
    CreateSBOM->>Config: makeTaskGroups(srcMetadata)

    Note over Config: Task Selection & Organization
    Config->>Config: Select catalogers by tags<br/>(image/directory/installed)
    Config->>Config: Organize into execution phases
    Config-->>CreateSBOM: [][]Task (grouped by phase)

    CreateSBOM->>Builder: Initialize thread-safe builder

    Note over CreateSBOM,Executor: Phase 1: Environment Detection
    CreateSBOM->>Executor: Execute environment tasks
    Executor->>Resolver: Read OS release files
    Executor->>Builder: SetLinuxDistribution()

    Note over CreateSBOM,Executor: Phase 2: Package + File Cataloging
    CreateSBOM->>Executor: Execute package & file tasks
    par Parallel Task Execution
        Executor->>Resolver: Read package manifests
        Executor->>Builder: AddPackages()
    and
        Executor->>Resolver: Read file metadata
        Executor->>Builder: Add file artifacts
    end

    Note over CreateSBOM,Executor: Phase 3: Post-Processing
    CreateSBOM->>Executor: Execute relationship tasks
    Executor->>Builder: AddRelationships()
    CreateSBOM->>Executor: Execute cleanup tasks

    CreateSBOM-->>CLI: *sbom.SBOM

    Note over CLI: Format Output
    CLI->>CLI: Write multi-format output

The Package object

The pkg.Package object is a core data structure that represents a software package.

Key fields include:

  • FoundBy: the name of the cataloger that discovered this package (e.g. python-pip-cataloger).
  • Locations: the set of paths and layer IDs that were parsed to discover this package.
  • Language: the language of the package (e.g. python).
  • Type: a high-level categorization of the ecosystem the package resides in. For instance, even if the package is an egg, wheel, or requirements.txt reference, it is still logically a “python” package. Not all package types align with a language (e.g. rpm) but it is common.
  • Metadata: specialized data for specific location(s) parsed. This should contain as much raw information as seems useful, kept as flat as possible using the raw names and values from the underlying source material.

Additional package Metadata

Packages can have specialized metadata that is specific to the package type and source of information. This metadata is stored in the Metadata field of the pkg.Package struct as an any type, allowing for flexibility in the data stored.

When pkg.Package is serialized, an additional MetadataType field is shown to help consumers understand the data shape of the Metadata field.

By convention the MetadataType value follows these rules:

  • Only use lowercase letters, numbers, and hyphens. Use hyphens to separate words.
  • Anchor the name in the ecosystem, language, or packaging tooling. For language ecosystems, prefix with the language/framework/runtime. For instance dart-pubspec-lock is better than pubspec-lock. For OS package managers this is not necessary (e.g. apk-db-entry is good, but alpine-apk-db-entry is redundant).
  • Be as specific as possible to what the data represents. For instance ruby-gem is NOT a good MetadataType value, but ruby-gemspec is, since Ruby gem information can come from a gemspec file or a Gemfile.lock, which are very different.
  • Describe WHAT the data is, NOT HOW it’s used. For instance r-description-installed-file is not good since it’s trying to convey how we use the DESCRIPTION file. Instead simply describe what the DESCRIPTION file is: r-description.
  • Use the lock suffix to distinguish between manifest files that loosely describe package version requirements vs files that strongly specify one and only one version of a package (“lock” files). These should only be used with respect to package managers that have the guide and lock distinction, but would not be appropriate otherwise (e.g. rpm does not have a guide vs lock, so lock should NOT be used to describe a db entry).
  • Use the archive suffix to indicate a package archive (e.g. rpm file, apk file) that describes the contents of the package. For example an RPM file would have a rpm-archive metadata type (not to be confused with an RPM DB record entry which would be rpm-db-entry).
  • Use the entry suffix to indicate information about a package found as a single entry within a file that has multiple package entries. If found within a DB or flat-file store for an OS package manager, use db-entry.
  • Should NOT contain the phrase package, though exceptions are allowed if the canonical name literally has the phrase package in it.
  • Should NOT have a file suffix unless the canonical name has the term “file”, such as a pipfile or gemfile.
  • Should NOT contain the exact filename+extensions. For instance pipfile.lock shouldn’t be in the name; instead describe what the file is: python-pipfile-lock.
  • Should NOT contain the phrase metadata, unless the canonical name has this term.
  • Should represent a single use case. For example, trying to describe Hackage metadata with a single HackageMetadata struct is not allowed since it represents 3 mutually exclusive use cases: stack.yaml, stack.lock, or cabal.project. Each should have its own struct and MetadataType.

The goal is to provide a consistent naming scheme that is easy to understand. If the rules don’t apply in your situation, use your best judgement.
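The lexical part of these rules (lowercase words separated by hyphens) can be captured in a simple check. This validator is a hypothetical sketch, not something Syft ships; the semantic rules (ecosystem anchoring, lock/archive/entry suffixes) still require human judgement:

```go
package main

import (
	"fmt"
	"regexp"
)

// metadataTypeRe encodes the basic lexical convention: one or more
// lowercase/numeric words joined by single hyphens.
var metadataTypeRe = regexp.MustCompile(`^[a-z0-9]+(-[a-z0-9]+)*$`)

func validMetadataType(name string) bool {
	return metadataTypeRe.MatchString(name)
}

func main() {
	fmt.Println(validMetadataType("dart-pubspec-lock")) // true
	fmt.Println(validMetadataType("RubyGem"))           // false: uppercase letters
}
```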

When the underlying parsed data represents multiple files, there are two approaches:

  • Use the primary file to represent all the data. For instance, though the dpkg-cataloger looks at multiple files, it’s the status file that gets represented.
  • Nest each individual file’s data under the Metadata field. For instance, the java-archive-cataloger may find information from pom.xml, pom.properties, and MANIFEST.MF. The metadata is java-metadata with each possibility as a nested optional field.

Package Catalogers

Catalogers are the mechanism by which Syft identifies and constructs packages given a targeted list of files.

For example, a cataloger can ask Syft for all package-lock.json files in order to parse and raise up JavaScript packages (see file globs and file parser functions for examples).

There is a generic cataloger implementation that can be leveraged to quickly create new catalogers by specifying file globs and parser functions (browse the source code for syft catalogers for example usage).

Design principles

From a high level, catalogers have the following properties:

  • They are independent of one another. The Java cataloger has no idea of the processes, assumptions, or results of the Python cataloger, for example.

  • They do not know what source is being analyzed. Are we analyzing a local directory? An image? If so, the squashed representation or all layers? The catalogers do not know the answers to these questions. Only that there is an interface to query for file paths and contents from an underlying “source” being scanned.

  • Packages created by the cataloger should not be mutated after they are created. There is one exception made for adding CPEs to a package after the cataloging phase, but that will most likely be moved back into the cataloger in the future.

Naming conventions

Cataloger names should be unique and named with these rules in mind:

  • Must end with -cataloger
  • Use lowercase letters, numbers, and hyphens only
  • Use hyphens to separate words
  • Catalogers for language ecosystems should start with the language name (e.g. python-)
  • Distinguish between when the cataloger is searching for evidence of installed packages vs declared packages. For example, there are two different gemspec-based catalogers: ruby-gemspec-cataloger and ruby-installed-gemspec-cataloger, where the latter requires that the gemspec is found within a specifications directory (meaning it was installed, not just at the root of a source repo).

File search and selection

All catalogers are provided an instance of the file.Resolver to interface with the image and search for files. The implementations for these abstractions leverage stereoscope to perform searching. Here is a rough outline how that works:

  1. A stereoscope file.Index is searched based on the input given (a path, glob, or MIME type). The index is relatively fast to search, but requires results to be filtered down to the files that exist in the specific layer(s) of interest. This is done automatically by the filetree.Searcher abstraction. This abstraction will fallback to searching directly against the raw filetree.FileTree if the index does not contain the file(s) of interest. Note: the filetree.Searcher is used by the file.Resolver abstraction.

  2. Once the set of files are returned from the filetree.Searcher the results are filtered down further to return the most unique file results. For example, you may have requested files by a glob that returns multiple results. These results are filtered down to deduplicate by real files, so if a result contains two references to the same file (one accessed via symlink and one accessed via the real path), then the real path reference is returned and the symlink reference is filtered out. If both were accessed by symlink then the first (by lexical order) is returned. This is done automatically by the file.Resolver abstraction.

  3. By the time results reach the pkg.Cataloger you are guaranteed to have a set of unique files that exist in the layer(s) of interest (relative to what the resolver supports).
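Step 2's symlink deduplication can be sketched with hypothetical types (fileRef and dedupe are inventions for illustration, not Syft's actual file.Resolver internals):

```go
package main

import (
	"fmt"
	"sort"
)

// fileRef pairs the path a file was accessed by with the real path it
// resolves to after following symlinks.
type fileRef struct {
	accessPath string // e.g. the symlink that matched the glob
	realPath   string // the resolved on-disk path
}

// dedupe keeps one reference per real file: the real-path reference if
// present, otherwise the lexically-first symlink reference.
func dedupe(refs []fileRef) []fileRef {
	byReal := map[string]fileRef{}
	for _, r := range refs {
		best, seen := byReal[r.realPath]
		switch {
		case !seen:
			byReal[r.realPath] = r
		case r.accessPath == r.realPath: // prefer the real path reference
			byReal[r.realPath] = r
		case best.accessPath != best.realPath && r.accessPath < best.accessPath:
			byReal[r.realPath] = r // both are symlinks: lexically first wins
		}
	}
	out := make([]fileRef, 0, len(byReal))
	for _, r := range byReal {
		out = append(out, r)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].realPath < out[j].realPath })
	return out
}

func main() {
	refs := []fileRef{
		{"/usr/bin/python3", "/usr/bin/python3.11"},   // symlink
		{"/usr/bin/python3.11", "/usr/bin/python3.11"}, // real path
	}
	fmt.Println(dedupe(refs)) // one entry, accessed via the real path
}
```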

CLI and core API

The CLI (in the cmd/syft/ package) and the core library API (in the syft/ package) are separate layers with a clear boundary. Application level concerns always reside with the CLI, while the core library focuses on SBOM generation logic. That means that there is an application configuration (e.g. cmd/syft/cli) and a separate library configuration, and when the CLI uses the library API, it must adapt its configuration to the library’s configuration types. In that adapter, the CLI layer defers to API-level defaults as much as possible so there is a single source of truth for default behavior.

See the Syft repository on GitHub for detailed API example usage.

7.3 - Grype

Architecture and design of the Grype vulnerability scanner

Code organization

At a high level, this is the package structure of Grype:

./cmd/grype/                // main entrypoint
│   └── ...
└── grype/                  // the "core" grype library
    ├── db/                 // vulnerability database management, schemas, readers, and writers
    │   ├── v5/             // v5 database schema
    │   └── v6/             // v6 database schema
    ├── match/              // core types for matches and result processing
    ├── matcher/            // vulnerability matching strategies
    │   ├── stock/          // default matcher (ecosystem + CPE)
    │   └── <ecosystem>/    // ecosystem-specific matchers (java, dpkg, rpm, etc.)
    ├── pkg/                // types for package representation (wraps Syft packages)
    ├── search/             // search criteria and strategies
    ├── version/            // version comparison across formats
    ├── vulnerability/      // core types for vulnerabilities and provider interface
    └── presenter/          // output formatters (JSON, table, etc.)

The grype package and subpackages implement Grype’s core library. The major packages work together in a pipeline:

  • The grype/pkg package wraps Syft packages and prepares them as match candidates, augmenting them with upstream package information and CPEs.
  • The grype/matcher package contains matching strategies that search for vulnerabilities matching specific package types.
  • The grype/db package manages the vulnerability database and provides query interfaces for matchers.
  • The grype/vulnerability package defines vulnerability data structures and the Provider interface for database queries.
  • The grype/search package implements search strategies (ecosystem, distro, CPE) and criteria composition.
  • The grype/presenter package formats match results into various output formats.

This design creates a clear flow: SBOM → package preparation → matching → results:

sequenceDiagram
    actor User
    participant CLI
    participant DB as Database
    participant Prep as Package Prep
    participant Match as Matching Engine
    participant Post as Post-Processing
    participant Format as Presenter

    User->>CLI: grype <target>
    CLI->>CLI: Parse configuration

    Note over CLI: Input Phase
    alt SBOM provided
        CLI->>CLI: Load SBOM from file
    else Scan target
        CLI->>CLI: Generate SBOM with Syft
    end

    Note over CLI,Prep: Preparation Phase
    CLI->>DB: Load vulnerability database
    DB-->>CLI: Database provider

    CLI->>Prep: Prepare packages for matching
    Note over Prep: Wrap Syft packages<br/>Add upstream packages<br/>Generate CPEs<br/>Filter overlaps
    Prep-->>CLI: Match candidates

    Note over CLI,Match: Matching Phase
    CLI->>Match: FindMatches(match candidates, provider)
    Note over Match: Group by package type<br/>Select matchers<br/>Execute in parallel
    Match-->>CLI: Raw matches + ignore filters

    Note over CLI,Post: Post-Processing Phase
    CLI->>Post: Process matches
    Note over Post: Apply ignore filters<br/>Apply user ignore rules<br/>Apply VEX statements<br/>Deduplicate results
    Post-->>CLI: Final matches

    Note over CLI,Format: Output Phase
    CLI->>Format: Format results
    Format-->>User: Vulnerability report

This diagram zooms into the Matching Phase from the high-level diagram, showing how the matching engine executes parallel matcher searches against the database. Components are grouped in boxes to show how they map to the high-level participants.

sequenceDiagram
    participant CLI as grype/main

    box rgba(200, 220, 240, 0.3) Matching Engine
        participant Matcher as VulnerabilityMatcher
        participant M as Matcher<br/>(Stock, Java, Dpkg, etc.)
    end

    participant Search as Search Strategies

    box rgba(220, 240, 200, 0.3) Database
        participant Provider as DB Provider
        participant DB as SQLite
    end

    Note over CLI,DB: Matching Phase (expanded from high-level view)
    CLI->>Matcher: FindMatches(match candidates, provider)

    Matcher->>Matcher: Group candidates by package type

    Note over Matcher,M: Each matcher runs in parallel with ecosystem-specific logic

    loop For each package type (stock, java, dpkg, etc.)
        Matcher->>M: Match(packages for this type)
        M->>Search: Build search criteria<br/>(ecosystem, distro, or CPE-based)
        Search->>Provider: SearchForVulnerabilities(criteria)
        Provider->>DB: Query vulnerability_handles
        DB-->>Provider: Matching handles
        Provider->>Provider: Compare versions against constraints
        Provider->>DB: Check unaffected_package_handles
        DB-->>Provider: Unaffected records
        Provider->>DB: Load blobs for confirmed matches
        DB-->>Provider: Vulnerability details
        Provider-->>Search: Confirmed matches
        Search-->>M: Filtered matches
        M-->>Matcher: Matches + ignore filters
    end

    Matcher->>Matcher: Collect matches from all matchers
    Matcher-->>CLI: Raw matches + ignore filters

    Note over CLI: Continues to Post-Processing Phase (see high-level view)

Relationship to Syft

Grype uses Syft’s SBOM generation capabilities rather than reimplementing package cataloging. The integration happens at two levels:

  1. External SBOMs: You can provide an SBOM file generated by Syft (or any SPDX/CycloneDX SBOM), and Grype consumes it directly.
  2. Inline scanning: When you provide a scan target (like a container image or directory), Grype invokes Syft internally to generate an SBOM, then immediately matches it against vulnerabilities.

The grype/pkg package wraps syft/pkg.Package objects and augments them with matching-specific data:

  • Upstream packages: For packages built from source (like Debian or RPM packages), Grype adds references to the source package so it can search both the binary package name and source package name.
  • CPE generation: Grype generates Common Platform Enumeration (CPE) identifiers for packages based on their metadata, enabling CPE-based matching as a fallback strategy.
  • Distro context: Grype preserves the Linux distribution information from Syft to enable distro-specific vulnerability matching.

This wrapping pattern maintains a clear architectural boundary. Syft focuses on finding packages, while Grype focuses on finding vulnerabilities in those packages.

Package representation

The grype/pkg package converts Syft packages into Grype match candidates. The pkg.FromCollection() function performs this conversion:

  1. Wraps each Syft package in a grype.Package that preserves the original package data.
  2. Adds upstream packages for packages that have source package relationships (e.g., a Debian binary package has a source package).
  3. Generates CPEs based on package metadata (name, version, vendor, product).
  4. Filters overlapping packages for comprehensive distros (like Debian or RPM-based distros) where you might have both installed packages and package files, preferring the installed packages.

The grype.Package type maintains a reference to the original syft.Package while augmenting it with:

  • Upstreams []UpstreamPackage: Source packages to search in addition to the binary package.
  • CPEs []syftPkg.CPE: Generated CPE identifiers for fallback matching.

This design preserves the complete SBOM information while preparing packages for the matching process. Matchers receive these augmented packages and decide which attributes to use for searching.
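
The wrapping pattern can be sketched with simplified stand-in types (these are not the real syft/pkg.Package and grype/pkg.Package definitions, just an illustration of the shape):

```go
package main

import "fmt"

// syftPackage is a simplified stand-in for syft/pkg.Package.
type syftPackage struct {
	Name    string
	Version string
}

type upstreamPackage struct {
	Name string
}

// matchPackage preserves the original package data while adding
// matching-specific fields, mirroring grype/pkg.Package.
type matchPackage struct {
	syftPackage                   // original data is preserved via embedding
	Upstreams   []upstreamPackage // source packages to also search
	CPEs        []string          // generated CPE identifiers (simplified to strings)
}

// searchNames returns every name a matcher should query: the binary
// package plus any upstream (source) packages.
func (p matchPackage) searchNames() []string {
	names := []string{p.Name}
	for _, u := range p.Upstreams {
		names = append(names, u.Name)
	}
	return names
}

func main() {
	p := matchPackage{
		syftPackage: syftPackage{Name: "libssl3", Version: "3.0.2-0ubuntu1"},
		Upstreams:   []upstreamPackage{{Name: "openssl"}}, // Debian source package
		CPEs:        []string{"cpe:2.3:a:openssl:openssl:3.0.2:*:*:*:*:*:*:*"},
	}
	fmt.Println(p.searchNames()) // binary name plus source name
}
```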

Data flow

The data flow through Grype follows these steps:

  1. SBOM ingestion: Load an SBOM from a file or generate one by scanning a target.
  2. Package conversion: Convert Syft packages into grype.Package match candidates, adding upstream packages, CPEs, and filtering overlapping packages.
  3. Matcher selection: Group packages by type (e.g., Java, dpkg, npm) and select appropriate matchers.
  4. Parallel matching: Execute matchers in parallel, each querying the database with search criteria specific to their package types.
  5. Result aggregation: Collect matches from all matchers and apply deduplication using ignore filters.
  6. Post-processing: Apply user-configured ignore rules, VEX (Vulnerability Exploitability eXchange) statements, and optional CVE normalization.
  7. Output formatting: Format the final matches using the selected presenter (JSON, table, SARIF, etc.).

The database sits at the center of this flow. All matchers query the same database provider, but they use different search strategies based on their package types.

Vulnerability database

Grype uses a SQLite database to store vulnerability data. The database design prioritizes query performance and storage efficiency.

To allow any DB schema to interoperate with the high-level Grype engine, each schema must implement a Provider interface. This allows DB-specific schemas to be adapted to the core Grype types.

v6 Schema design

The overall design of the v6 database schema is heavily influenced by the OSV schema, so if you are familiar with OSV, many of the entities and concepts will feel familiar.

The database uses a blob + handle pattern:

  • Handles: Small, indexed records containing anything you might want to search by (package name, vulnerability id, provider name, etc.). Grype stores these in tables optimized for fast lookups. These tables point to blobs for full details. See the Grype DB SQL schemas for more details on handle table structures.

  • Blobs: Full JSON documents containing complete vulnerability details. Grype stores these separately and loads them only when a match is made. See the Grype DB blob schemas for more details on blob structures.

This separation allows Grype to quickly query millions of vulnerability records without loading full vulnerability details until necessary.
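
The blob + handle pattern can be illustrated with an in-memory sketch. The real implementation uses SQLite tables and compressed JSON blobs; the types and field names here are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// handle mirrors the small, indexed record: only searchable fields plus a
// pointer to the blob holding full details.
type handle struct {
	Name     string // e.g. "CVE-2024-1234"
	Provider string
	BlobID   int
}

// vulnBlob is the full document, loaded only after a match is confirmed.
type vulnBlob struct {
	Description string   `json:"description"`
	References  []string `json:"references"`
}

// store simulates the two-table layout with in-memory data.
type store struct {
	handles []handle
	blobs   map[int][]byte // compressed JSON in the real DB; raw JSON here
}

// search scans handles only; blob rows are untouched until loadBlob.
func (s store) search(name string) []handle {
	var out []handle
	for _, h := range s.handles {
		if h.Name == name {
			out = append(out, h)
		}
	}
	return out
}

func (s store) loadBlob(id int) (vulnBlob, error) {
	var b vulnBlob
	err := json.Unmarshal(s.blobs[id], &b)
	return b, err
}

func main() {
	s := store{
		handles: []handle{{Name: "CVE-2024-1234", Provider: "nvd", BlobID: 1}},
		blobs:   map[int][]byte{1: []byte(`{"description":"example flaw","references":[]}`)},
	}
	for _, h := range s.search("CVE-2024-1234") {
		b, _ := s.loadBlob(h.BlobID) // full details loaded only for matches
		fmt.Println(h.Name, b.Description)
	}
}
```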

Key tables include:

  • vulnerability_handles: Searchable records for vulnerabilities, indexed by name (CVE/advisory ID), status (active, withdrawn, etc.), published/modified/withdrawn dates, and provider ID. Each references a blob containing full vulnerability details (description, references, aliases, severities).

  • affected_package_handles: Links vulnerabilities, packages, and (optionally) operating systems. The referenced blob contains version constraints (for example, “vulnerable in 1.0.0 to 1.2.5”) and fix information. Used when the package ecosystem is known (npm, python, gem, etc.).

  • unaffected_package_handles: Explicitly marks package versions that are NOT vulnerable. Same structure as affected_package_handles but represents exemptions. These are applied on top of any discovered affected records to remove matches (thus reducing false positives).

  • affected_cpe_handles: Links vulnerabilities and explicit CPEs, useful when a CPE cannot be resolved to a clear package ecosystem.

  • packages: Stores unique ecosystem + name combinations (for example, ecosystem=‘npm’, name=‘lodash’).

  • operating_systems: Stores OS release information with name, major/minor version, codename, and channel (for example, RHEL EUS versus mainline). Provides context for distro-specific package matching.

  • cpes: Stores parsed CPE 2.3 components (part, vendor, product, edition, etc.). Version constraints are stored in blobs, not in this table.

  • blobs: Complete vulnerability, package, and decorator details as compressed JSON. There are 3 blob types:

    • VulnerabilityBlob (full vulnerability data)
    • PackageBlob (version ranges and fixes)
    • KnownExploitedVulnerabilityBlob (KEV catalog data).

Additional decorator tables enhance vulnerability information:

  • known_exploited_vulnerability_handles: Links CVE identifiers to a blob containing CISA KEV catalog data (date added, vendor, product, required action, ransomware campaign use).

  • epss_handles: Stores EPSS (Exploit Prediction Scoring System) data with CVE identifier, EPSS score (0-1 probability), and percentile ranking.

  • cwe_handles: Maps CVE identifiers to CWE (Common Weakness Enumeration) IDs with source and type information.

The schema also includes a package_cpes junction table creating many-to-many relationships between packages and CPEs. When a CPE can be resolved to a package (via this table), vulnerabilities use affected_package_handles. When a CPE cannot be resolved, vulnerabilities use affected_cpe_handles instead.

Grype versions the database schema (currently v6). When the schema changes, users download a new database file that Grype automatically detects and uses.

Data organization

Relationships between tables enable efficient querying:

  1. Matchers create search criteria (package name, version, distro, etc.).
  2. The database provider queries the appropriate handle tables with these criteria.
  3. The grype/version package filters handles by version constraints.
  4. The provider loads the corresponding vulnerability blob for confirmed matches.
  5. The complete vulnerability record returns to the matcher.

Version constraints in the database use multi-version constraint syntax, allowing a single record to express complex version ranges like “affected in 1.0.0 to 1.2.5 and 2.0.0 to 2.1.3”.
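
Checking a version against a multi-range constraint can be illustrated with a toy sketch. The dotted-numeric comparison below is a simplification; Grype's real comparers are format-specific (semver, deb, rpm, etc.):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// cmpVer compares two dotted numeric versions: -1, 0, or 1.
// (A simplification; real comparers handle epochs, pre-releases, etc.)
func cmpVer(a, b string) int {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		ai, bi := 0, 0
		if i < len(as) {
			ai, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bi, _ = strconv.Atoi(bs[i])
		}
		if ai != bi {
			if ai < bi {
				return -1
			}
			return 1
		}
	}
	return 0
}

// vrange is one inclusive range of affected versions.
type vrange struct{ lo, hi string }

// affected reports whether v falls inside any of the ranges, mirroring a
// single record expressing "affected in 1.0.0 to 1.2.5 and 2.0.0 to 2.1.3".
func affected(v string, ranges []vrange) bool {
	for _, r := range ranges {
		if cmpVer(v, r.lo) >= 0 && cmpVer(v, r.hi) <= 0 {
			return true
		}
	}
	return false
}

func main() {
	ranges := []vrange{{"1.0.0", "1.2.5"}, {"2.0.0", "2.1.3"}}
	fmt.Println(affected("1.1.0", ranges)) // inside the first range
	fmt.Println(affected("1.3.0", ranges)) // between the two ranges
}
```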

Matching engine

The matching engine orchestrates vulnerability matching across different package types. The core component is the VulnerabilityMatcher, which:

  1. Groups packages by type: Java packages go to the Java matcher, dpkg packages to the dpkg matcher, etc.
  2. Selects matchers: Each matcher declares which package types it handles.
  3. Executes matching: Matchers run in parallel, querying the database with their specific search strategies.
  4. Collects results: Matches from all matchers are aggregated.
  5. Applies ignore filters: Matchers can mark certain matches to be ignored by other matchers, preventing duplicate reporting.

The ignore filter mechanism is important. For example, the dpkg matcher searches both the binary package name and the source package name. When it finds a match via the source package, it creates an ignore filter so the stock matcher doesn’t report the same vulnerability using a CPE match. This prevents duplicate matches for the same vulnerability.
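
The ignore filter mechanism can be sketched as follows. The types here are hypothetical simplifications of what lives in grype/match; the example drops a duplicate CPE-based stock match for a vulnerability the dpkg matcher already found:

```go
package main

import "fmt"

type match struct {
	VulnID    string
	Package   string
	MatchedBy string // which matcher produced it
}

// ignoreFilter reports whether a match should be dropped.
type ignoreFilter func(match) bool

// applyFilters keeps only matches that no filter rejects.
func applyFilters(ms []match, filters []ignoreFilter) []match {
	var out []match
	for _, m := range ms {
		keep := true
		for _, f := range filters {
			if f(m) {
				keep = false
				break
			}
		}
		if keep {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	// the dpkg matcher found CVE-2023-1 via the source package, so it emits
	// a filter suppressing the stock matcher's CPE-based duplicate
	filters := []ignoreFilter{func(m match) bool {
		return m.VulnID == "CVE-2023-1" && m.MatchedBy == "stock"
	}}
	matches := []match{
		{VulnID: "CVE-2023-1", Package: "libssl3", MatchedBy: "dpkg"},
		{VulnID: "CVE-2023-1", Package: "libssl3", MatchedBy: "stock"}, // duplicate via CPE
	}
	for _, m := range applyFilters(matches, filters) {
		fmt.Println(m.VulnID, m.MatchedBy)
	}
}
```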

Matchers

Each matcher implements the Matcher interface. This allows Grype to support multiple matching strategies for different package ecosystems.

The process of making a match involves several steps:

  1. Candidate creation: Matchers create match candidates when database records meet search criteria.
  2. Version comparison: The grype/version package compares the package version against the vulnerability’s version constraints.
  3. Unaffected check: If the database has an explicit “not affected” record for this version, the match is rejected.
  4. Match creation: Confirmed matches become Match objects with confidence scores (the scores are currently unused).
  5. Ignore filter check: Matches are checked against ignore filters from other matchers.
  6. User ignore rules: Matches are checked against user-configured ignore rules.

Search strategies

Matchers determine what to search for based on package type and available metadata. Grype supports three main search strategies:

  • Ecosystem search: Queries vulnerabilities by package name and version within a specific package ecosystem (npm, pypi, gem, etc.). Search fields include ecosystem, package name, and version. The database returns handles where the package name matches and version constraints include the specified version.

  • Distro search: Queries vulnerabilities by Linux distribution, package name, and version for OS packages managed by apt, yum, or apk. Search fields include distro name and version (for example, debian:10), package name, and version. Also understands distro channels like RHEL EUS versus mainline.

  • CPE matching: Fallback strategy when ecosystem or distro matching isn’t applicable, using CPE identifiers in the format cpe:2.3:a:vendor:product:version:.... Search fields include CPE components (part, vendor, product). Broader and less precise than ecosystem matching, used primarily when ecosystem data isn’t available.

Search criteria system

The grype/search package provides a criteria system that matchers use to express search requirements. Criteria can be combined with AND and OR operators:

  • AND(ecosystem("npm"), packageName("lodash"), version("4.17.20"))
  • OR(distro("debian:10"), distro("debian:11"))

The database provider translates these criteria into SQL queries against the handle tables. This abstraction allows matchers to express complex search requirements without writing SQL directly.

Ideally, matchers orchestrate search criteria at a high level, letting each specific criteria type handle its own needs. It’s the vulnerability provider that ultimately translates criteria into efficient database queries.
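
A minimal in-memory sketch of criteria composition follows. The real provider compiles criteria trees into SQL against the handle tables rather than filtering records in Go, and these function names are illustrative, not the actual grype/search API:

```go
package main

import "fmt"

// record is a simplified handle row the provider would query.
type record struct {
	Ecosystem string
	Package   string
	Distro    string
}

// criterion is a predicate over a record.
type criterion func(record) bool

func and(cs ...criterion) criterion {
	return func(r record) bool {
		for _, c := range cs {
			if !c(r) {
				return false
			}
		}
		return true
	}
}

func or(cs ...criterion) criterion {
	return func(r record) bool {
		for _, c := range cs {
			if c(r) {
				return true
			}
		}
		return false
	}
}

func ecosystem(name string) criterion   { return func(r record) bool { return r.Ecosystem == name } }
func packageName(name string) criterion { return func(r record) bool { return r.Package == name } }
func distro(name string) criterion      { return func(r record) bool { return r.Distro == name } }

func main() {
	// mirrors: AND(ecosystem("npm"), packageName("lodash"), OR(distro ...))
	c := and(
		ecosystem("npm"),
		packageName("lodash"),
		or(distro("debian:10"), distro("debian:11")),
	)
	fmt.Println(c(record{Ecosystem: "npm", Package: "lodash", Distro: "debian:11"}))
}
```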

Version comparison

Grype supports multiple version formats because different ecosystems have different versioning schemes. The grype/version package provides format-specific version comparers, falling back to a “catch all” fuzzy comparer when the format cannot be determined.

Each format has its own constraint parser that understands ecosystem-specific constraint syntax. The version comparison system detects the appropriate format based on the package type, then uses the correct comparer to evaluate version constraints from the database.

The records from the Grype DB specify which version format to use on one side of the comparison, and the package type determines the format on the other side. If no specific format is found, or the two formats are incompatible, the fuzzy comparer is used as a last resort.
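
The format-selection fallback can be sketched like this (the package-type-to-format mapping below is illustrative, not Grype's actual table):

```go
package main

import "fmt"

type format int

const (
	unknown format = iota
	semver
	deb
	rpm
	fuzzy
)

// formatFromPackageType picks the version format from the package type,
// mirroring how the comparer is selected on the package side.
func formatFromPackageType(pkgType string) format {
	switch pkgType {
	case "npm", "gem", "go-module":
		return semver
	case "deb":
		return deb
	case "rpm":
		return rpm
	default:
		return unknown
	}
}

// resolve picks the comparer: both sides must agree on a known format,
// otherwise the fuzzy comparer is the last resort.
func resolve(dbFormat, pkgFormat format) format {
	if dbFormat == pkgFormat && dbFormat != unknown {
		return dbFormat
	}
	return fuzzy
}

func main() {
	fmt.Println(resolve(semver, formatFromPackageType("npm")) == semver)
	fmt.Println(resolve(deb, formatFromPackageType("alpine-apk")) == fuzzy)
}
```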

7.4 - Grype DB

Architecture and design of the Grype vulnerability database build system

Overview

grype-db is essentially an application that extracts information from upstream vulnerability data providers, transforms it into smaller records targeted for Grype consumption, and loads the individual records into a new SQLite DB.

flowchart LR
    subgraph pull["Pull"]
        A[Pull vuln data<br/>from upstream]
    end

    subgraph build["Build"]
        B[Transform entries]
        C[Load entries<br/>into new DB]
    end

    subgraph package["Package"]
        D[Package DB]
    end

    A --> B --> C --> D

    style pull stroke-dasharray: 5 5, fill:none
    style build stroke-dasharray: 5 5, fill:none
    style package stroke-dasharray: 5 5, fill:none

Multi-Schema Support Architecture

What makes grype-db unique compared to a typical ETL job is the extra responsibility of needing to transform the most recent vulnerability data shape (defined in the vunnel repo) to all supported DB schema versions.

From the perspective of the Daily DB Publisher workflow, (abridged) execution looks something like this:

%%{ init: { 'flowchart': { 'curve': 'linear' } } }%%
flowchart LR
    A[Pull vulnerability data]

    B5[Build v5 DB]
    C5[Package v5 DB]
    D5[Publish v5]

    B6[Build v6 DB]
    C6[Package v6 DB]
    D6[Publish v6]

    A --- B5 --> C5 --> D5
    A --- B6 --> C6 --> D6

Core Abstractions

In order to support multiple DB schemas easily from a code-organization perspective, the following abstractions exist:

  • Provider - Responsible for providing raw vulnerability data files that are cached locally for later processing.

  • Processor - Responsible for unmarshalling any entries given by the Provider, passing them into Transformers, and returning any resulting entries. Note: the object definition is schema-agnostic but instances are schema-specific since Transformers are dependency-injected into this object.

  • Transformer (v5, v6) - Takes raw data entries of a specific vunnel-defined schema and transforms the data into schema-specific entries to later be written to the database. Note: the object definition is schema-specific, encapsulating grypeDB/v# specific objects within schema-agnostic Entry objects.

  • Entry - Encapsulates schema-specific database records produced by Processors/Transformers (from the provider data) and accepted by Writers.

  • Writer (v5, v6) - Takes Entry objects and writes them to a backing store (today a SQLite database). Note: the object definition is schema-specific and typically references grypeDB/v# schema-specific writers.

Data Flow

All of the above abstractions are defined in the pkg/data Go package and are commonly used together in the following flow:

%%{ init: { 'flowchart': { 'curve': 'linear' } } }%%
flowchart LR
    A["data.Provider"]

    subgraph processor["data.Processor"]
        direction LR
        B["unmarshaller"]
        C["v# data.Transformer"]
        B --> C
    end

    D["data.Writer"]
    E["grypeDB/v#/writer.Write"]

    A -->|"cache file"| processor
    processor -->|"[]data.Entry"| D --> E

    style processor fill:none

Where there is:

  • A data.Provider for each upstream data source (e.g. canonical, redhat, github, NIST, etc.)
  • A data.Processor for every vunnel-defined data shape (github, os, msrc, nvd, etc… defined in the vunnel repo)
  • A data.Transformer for every processor and DB schema version pairing
  • A data.Writer for every DB schema version
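
These abstractions can be sketched as Go interfaces, with toy in-memory implementations standing in for the real pkg/data types (the definitions below are simplified assumptions, not the actual signatures):

```go
package main

import "fmt"

// Entry wraps a schema-specific record behind a schema-agnostic envelope.
type Entry struct {
	DBSchemaVersion int
	Data            any
}

// Provider yields locally cached raw vulnerability data.
type Provider interface {
	Read() ([]byte, error)
}

// Transformer turns one raw record into schema-specific entries.
type Transformer func(raw []byte) ([]Entry, error)

// Writer persists entries to a backing store (SQLite in the real system).
type Writer interface {
	Write(entries []Entry) error
}

// Processor is schema-agnostic code with a schema-specific Transformer
// dependency-injected, as described above.
type Processor struct {
	transform Transformer
}

func (p Processor) Process(raw []byte) ([]Entry, error) { return p.transform(raw) }

// --- toy implementations wiring the flow together ---

type memProvider struct{ data []byte }

func (m memProvider) Read() ([]byte, error) { return m.data, nil }

type memWriter struct{ entries []Entry }

func (w *memWriter) Write(es []Entry) error { w.entries = append(w.entries, es...); return nil }

func main() {
	p := memProvider{data: []byte("CVE-2024-0001")}
	proc := Processor{transform: func(raw []byte) ([]Entry, error) {
		return []Entry{{DBSchemaVersion: 6, Data: string(raw)}}, nil
	}}
	w := &memWriter{}

	raw, _ := p.Read()          // Provider: cached upstream data
	entries, _ := proc.Process(raw) // Processor + Transformer: schema-specific entries
	_ = w.Write(entries)        // Writer: persist to the backing store
	fmt.Println(len(w.entries), w.entries[0].DBSchemaVersion)
}
```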

Code Organization

From a Go package organization perspective, the above abstractions are organized as follows:

grype-db/
└── pkg
    ├── data                      # common data structures and objects that define the ETL flow
    ├── process
    │    ├── processors           # common data.Processors to call common unmarshallers and pass entries into data.Transformers
    │    ├── v5                   # schema v5 (legacy, active)
    │    │    ├── processors.go   # wires up all common data.Processors to v5-specific data.Transformers
    │    │    ├── writer.go       # v5-specific store writer
    │    │    └── transformers    # v5-specific transformers
    │    └── v6                   # schema v6 (current, active)
    │         ├── processors.go   # wires up all common data.Processors to v6-specific data.Transformers
    │         ├── writer.go       # v6-specific store writer
    │         └── transformers    # v6-specific transformers
    └── provider                  # common code to pull, unmarshal, and cache upstream vuln data into local files
        └── ...

Note: Historical schema versions (v1-v4) have been removed from the codebase.

DB Structure and Definitions

The definitions of what goes into the database and how to access it (both reads and writes) live in the public grype repo under the grype/db package. Responsibilities of grype (not grype-db) include (but are not limited to):

  • What tables are in the database
  • What columns are in each table
  • How each record should be serialized for writing into the database
  • How records should be read/written from/to the database
  • Providing rich objects for dealing with schema-specific data structures
  • The name of the SQLite DB file within an archive
  • The definition of a listing file and listing file entries

The purpose of grype-db is to use the definitions from grype/db and the upstream vulnerability data to create DB archives and make them publicly available for consumption via Grype.

DB Distribution Files

Grype DB currently supports two active schema versions, each with a different distribution mechanism:

  • Schema v5 (legacy): Supports Grype v0.87.0+
  • Schema v6 (current): Supports Grype main branch

Historical schemas (v1-v4) are no longer supported and their code has been removed from the codebase.

Schema v5: listing.json

The listing.json file is a legacy distribution mechanism used for schema v5 (and historically v1-v4):

  • Location: databases/listing.json
  • Structure: Contains URLs to DB archives organized by schema version, ordered by latest-date-first
  • Format: { "available": { "1": [...], "2": [...], "5": [...] } }
  • Update Process: Re-generated daily by the grype-db publisher workflow through a separate listing update step

Schema v6+: latest.json

The latest.json file is the modern distribution mechanism used for schema v6 and future versions:

  • Location: databases/v{major}/latest.json (e.g., v6/latest.json, v7/latest.json)
  • Structure: Contains metadata and URL for the single latest DB archive for that major schema version
  • Format: { "url": "...", "built": "...", "checksum": "...", "schemaVersion": 6 }
  • Update Process: Generated and uploaded atomically with each DB build (no separate update step)

This dual-distribution approach allows Grype to maintain backward compatibility with v5 while providing a more efficient distribution mechanism for v6 and future versions.

Implementation Notes:

  • Distribution file definitions reside in the grype repo, while the grype-db repo is responsible for generating DBs and creating/updating these distribution files
  • As long as Grype has been configured to point to the correct distribution file URL, the DBs can be stored separately, replaced with a service returning the distribution file contents, or mirrored for systems behind an air gap

Daily Workflows

There are two workflows that drive getting a new Grype DB out to OSS users:

  1. The daily data sync workflow, which uses vunnel to pull upstream vulnerability data.
  2. The daily DB publisher workflow, which builds and publishes a Grype DB from the data obtained in the daily data sync workflow.

Daily Data Sync Workflow

This workflow takes the upstream vulnerability data (from canonical, redhat, debian, NVD, etc), processes it, and writes the results to OCI repos.

%%{ init: { 'flowchart': { 'curve': 'linear' } } }%%
flowchart LR
    A1["Pull alpine"] --> B1["Publish to ghcr.io/anchore/grype-db/data/alpine:&lt;date&gt;"]
    A2["Pull amazon"] --> B2["Publish to ghcr.io/anchore/grype-db/data/amazon:&lt;date&gt;"]
    A3["Pull debian"] --> B3["Publish to ghcr.io/anchore/grype-db/data/debian:&lt;date&gt;"]
    A4["Pull github"] --> B4["Publish to ghcr.io/anchore/grype-db/data/github:&lt;date&gt;"]
    A5["Pull nvd"] --> B5["Publish to ghcr.io/anchore/grype-db/data/nvd:&lt;date&gt;"]
    A6["..."] --> B6["... repeat for all upstream providers ..."]

    style A6 fill:none,stroke:none
    style B6 fill:none,stroke:none

Once all providers have been updated, a single vulnerability cache OCI repo is updated with all of the latest vulnerability data at ghcr.io/anchore/grype-db/data:<date>. This repo is used downstream by the DB publisher workflow to create Grype DBs.

The in-repo .grype-db.yaml and .vunnel.yaml configurations are used to define the upstream data sources, how to obtain them, and where to put the results locally.

Daily DB Publishing Workflow

This workflow takes the latest vulnerability data cache, builds a Grype DB, and publishes it for general consumption:

%%{ init: { 'flowchart': { 'curve': 'linear' } } }%%
flowchart LR
    subgraph pull["1. Pull"]
        A["Pull vuln data<br/>(from the daily<br/>sync workflow<br/>output)"]
    end

    subgraph generate["2. Generate Databases"]
        B5["Build v5 DB"]
        C5["Package v5 DB"]
        D5["Upload Archive"]

        B6["Build v6 DB"]
        C6["Package v6 DB<br/>(includes latest.json)"]
        D6["Upload Archive<br/>+ latest.json"]

        B5 --> C5 --> D5
        B6 --> C6 --> D6
    end

    subgraph listing["3. Update Listing (v5 only)"]
        F["Update listing.json"]
    end

    A --- B5
    A --- B6

    D5 --- F
    D6 -.->|"No listing update<br/>needed for v6"| G[Done]

    style pull stroke-dasharray: 5 5, fill:none
    style generate stroke-dasharray: 5 5, fill:none
    style listing stroke-dasharray: 5 5, fill:none
    style G fill:none,stroke:none

The manager/ directory contains all code responsible for driving the Daily DB Publisher workflow, generating DBs for all supported schema versions (currently v5 and v6) and making them available to the public.

1. Pull

Download the latest vulnerability data from various upstream data sources into a local directory. The destination for the provider data is in the data/vunnel directory.

2. Generate

Build databases for all supported schema versions based on the latest vulnerability data and upload them to Cloudflare R2 (S3-compatible storage).

Supported Schemas (see schema-info.json):

  • Schema v5 (legacy)
  • Schema v6 (current)

Build and Upload Process:

Each DB undergoes the following steps:

  1. Build: Transform vulnerability data into the schema-specific format
  2. Package: Create a compressed archive (.tar.zst)
  3. Validate: Smoke test with Grype by comparing against the previous release using vulnerability-match-labels
  4. Upload: Only DBs that pass validation are uploaded

Storage Location:

  • Distribution base URL: https://grype.anchore.io/databases/...
  • Schema-specific paths:
    • v5: databases/<archive-name>.tar.zst
    • v6: databases/v6/<archive-name>.tar.zst + databases/v6/latest.json

Key Difference:

  • v5: Only the DB archive is uploaded; discoverability happens in the next step
  • v6: Both the DB archive AND latest.json are uploaded atomically, making the DB immediately discoverable

3. Update Listing (v5 Only)

This step only applies to schema v5.

Generate and upload a new listing.json file to Cloudflare R2 based on the existing listing file and newly discovered DB archives.

The listing file is tested against installations of Grype to ensure scans can successfully discover and download the DB. The scan must have a non-zero count of matches to pass validation.

Once the listing file has been uploaded to databases/listing.json, user-facing Grype v5 installations can discover and download the new DB.

Note: Schema v6 does not require this step because the latest.json file is generated and uploaded atomically with the DB archive in step 2, with a 5-minute cache TTL for fast updates.

For more details on:

  • How Vunnel processes vulnerability data, see the Vunnel Architecture page
  • How quality gates validate database builds, see the Quality Gates section

7.5 - Vunnel

Architecture and design of the Vunnel vulnerability data processing tool

Overview

Vunnel is a CLI tool that downloads and processes vulnerability data from various sources (in the codebase, these are called “providers”).

flowchart LR
    subgraph input[ ]
        alpine_data(((<b>Alpine Sec DB</b><br/><small>secdb.alpinelinux.org</small>)))
        rhel_data(((<b>RedHat CSAF</b><br/><small>redhat.com/security</small>)))
        nvd_data(((<b>NVD Data</b><br/><small>services.nvd.nist</small>)))
        other_data((("...")))
    end

    subgraph vunnel["<b>Vunnel</b>"]
        alpine_provider[Alpine Provider]
        rhel_provider[RHEL Provider]
        nvd_provider[NVD Provider]
        other_provider[(...)]
    end

    subgraph output[ ]
        alpine_out[./data/alpine/]
        rhel_out[./data/rhel/]
        nvd_out[./data/nvd/]
        other_out[...]
    end

    alpine_data -->|download| alpine_provider
    rhel_data -->|download| rhel_provider
    nvd_data -->|download| nvd_provider

    alpine_provider -->|write| alpine_out
    rhel_provider -->|write| rhel_out
    nvd_provider -->|write| nvd_out


    vunnel:::Application

    style other_data fill:none,stroke:none
    style other_provider fill:none,stroke:none
    style other_out fill:none,stroke:none
    style output fill:none,stroke:none
    style input fill:none,stroke:none

    alpine_data:::ExternalSource@{ shape: cloud }
    rhel_data:::ExternalSource@{ shape: cloud }
    nvd_data:::ExternalSource@{ shape: cloud }

    alpine_provider:::Provider
    rhel_provider:::Provider
    nvd_provider:::Provider

    alpine_out:::Database@{ shape: db }
    rhel_out:::Database@{ shape: db }
    nvd_out:::Database@{ shape: db }

    classDef ExternalSource stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#f0f8ff, color:#000000
    classDef Application fill:#e1ffe1,stroke:#424242,stroke-width:1px
    classDef Provider fill:#none,stroke:#424242,stroke-width:1px
    classDef Database stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#fff9c4, color:#000000

Conceptually, one or more invocations of Vunnel will produce a single data directory which Grype DB uses to create a Grype database:

flowchart LR
    subgraph vunnel_runs[ ]
        vunnel_alpine[<b>vunnel run alpine</b>]
        vunnel_rhel[<b>vunnel run rhel</b>]
        vunnel_nvd[<b>vunnel run nvd</b>]
        vunnel_other[(...)]
    end

    subgraph data[ ]
        alpine_data[./data/alpine/]
        rhel_data[./data/rhel/]
        nvd_data[./data/nvd/]
        other_data[...]
    end

    db_processor[Grype-DB]

    subgraph db_out[ ]
        sqlite_db[vulnerability.db<br/><small>sqlite</small>]
    end

    vunnel_alpine -->|write| alpine_data
    vunnel_rhel -->|write| rhel_data
    vunnel_nvd -->|write| nvd_data

    alpine_data -->|read| db_processor
    rhel_data -->|read| db_processor
    nvd_data -->|read| db_processor

    db_processor -->|write| sqlite_db

    db_processor:::Application
    vunnel_alpine:::Application
    vunnel_rhel:::Application
    vunnel_nvd:::Application
    sqlite_db:::Database@{ shape: db }

    alpine_data:::Database@{ shape: db }
    rhel_data:::Database@{ shape: db }
    nvd_data:::Database@{ shape: db }

    style vunnel_other fill:none,stroke:none
    style other_data fill:none,stroke:none
    style vunnel_runs fill:none,stroke:none
    style data fill:none,stroke:none
    style db_out fill:none,stroke:none

    classDef Application fill:#e1ffe1,stroke:#424242,stroke-width:1px
    classDef Database stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#fff9c4, color:#000000

Integration with Grype DB

The Vunnel CLI tool is designed to run a single provider at a time; it does not orchestrate multiple providers at once. Grype DB is the tool that collates the output of multiple providers into a single database, and it is ultimately responsible for orchestrating the Vunnel invocations that prepare the input data:

grype-db pull

flowchart LR
    config["<code><b># .grype-db.yaml</b></code><br><code>providers:</code><br><code>  - alpine</code><br><code>  - rhel</code><br><code>  - nvd</code><br><code>  - ...</code>"]
    pull[grype-db pull]

    subgraph vunnel_runs[ ]
        vunnel_alpine[<b>vunnel run alpine</b>]
        vunnel_rhel[<b>vunnel run rhel</b>]
        vunnel_nvd[<b>vunnel run nvd</b>]
        vunnel_other[<b>vunnel run ...</b>]
    end

    subgraph data[ ]
        data_out[(./data/)]
    end

    config -->|read| pull
    pull -->|execute| vunnel_alpine
    pull -->|execute| vunnel_rhel
    pull -->|execute| vunnel_nvd
    pull -.->|execute| vunnel_other

    vunnel_alpine -->|write| data_out
    vunnel_rhel -->|write| data_out
    vunnel_nvd -->|write| data_out
    vunnel_other -.->|write| data_out

    pull:::Application
    vunnel_alpine:::Application
    vunnel_rhel:::Application
    vunnel_nvd:::Application
    vunnel_other:::Application

    config:::AnalysisInput@{ shape: document }
    data_out:::Database@{ shape: db }

    style vunnel_runs fill:none,stroke:none
    style data fill:none,stroke:none

    classDef AnalysisInput stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#f0f8ff, color:#000000
    classDef Application fill:#e1ffe1,stroke:#424242,stroke-width:1px
    classDef Database stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#fff9c4, color:#000000

grype-db build

flowchart LR
    subgraph data[ ]
        data_in[(./data/)]
    end

    build[grype-db build]

    subgraph db_out[ ]
        db[(vulnerability.db<br/><small>sqlite</small>)]
    end

    data_in -->|read| build
    build -->|write| db

    build:::Application
    data_in:::Database@{ shape: db }
    db:::Database@{ shape: db }

    style data fill:none,stroke:none
    style db_out fill:none,stroke:none

    classDef Application fill:#e1ffe1,stroke:#424242,stroke-width:1px
    classDef Database stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#fff9c4, color:#000000

grype-db package

flowchart LR
    subgraph db_in[ ]
        db[vulnerability.db<br/><small>sqlite</small>]
    end

    package[grype-db package]

    subgraph archive_out[ ]
        archive[[vulnerability-db-DATE.tar.gz]]
    end

    db -->|read| package
    package -->|write| archive

    package:::Application
    db:::Database@{ shape: db }
    archive:::Database@{ shape: document }

    style db_in fill:none,stroke:none
    style archive_out fill:none,stroke:none

    classDef Application fill:#e1ffe1,stroke:#424242,stroke-width:1px
    classDef Database stroke-width:1px, stroke-dasharray:none, stroke:#424242, fill:#fff9c4, color:#000000

For more information about how Grype DB uses Vunnel see the Grype DB Architecture page.

Provider Architecture

A “Provider” is the core abstraction in Vunnel: each provider represents a single source of vulnerability data, and the Vunnel CLI is a wrapper around the full set of providers.

Provider Requirements

All provider implementations should:

  • Live under src/vunnel/providers in their own directory (e.g. the NVD provider code is under src/vunnel/providers/nvd/...)
  • Have a class that implements the Provider interface
  • Be centrally registered with a unique name under src/vunnel/providers/__init__.py
  • Be independent of other providers’ data — that is, the debian provider CANNOT reach into the NVD provider’s data directory to look up information (such as severity)
  • Follow the workspace conventions for downloaded provider inputs, produced results, and tracking of metadata
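The requirements above can be sketched as a minimal interface. This is illustrative only — the real base class is vunnel.provider.Provider, which also manages the workspace, results, and runtime configuration:

```python
from abc import ABC, abstractmethod

# Illustrative sketch of the provider contract; the real base class
# (vunnel.provider.Provider) differs in detail.
class Provider(ABC):
    @classmethod
    @abstractmethod
    def name(cls) -> str:
        """The unique name the provider is registered under."""

    @abstractmethod
    def update(self) -> None:
        """Download source data into input/ and write records to results/."""

class AlpineProvider(Provider):
    @classmethod
    def name(cls) -> str:
        return "alpine"

    def update(self) -> None:
        # download secdb.alpinelinux.org data, write schema-compliant results
        pass

print(AlpineProvider.name())  # alpine
```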

Workspace Conventions

Each provider has a “workspace” directory within the “vunnel root” directory (defaults to ./data) named after the provider.

data/                       # the "vunnel root" directory
└── alpine/                 # the provider workspace directory
    ├── input/              # any file that needs to be downloaded and referenced should be stored here
    ├── results/            # schema-compliant vulnerability results (1 record per file)
    ├── checksums           # listing of result file checksums (xxh64 algorithm)
    └── metadata.json       # metadata about the input and result files

The metadata.json and checksums are written out after all results are written to results/. An example metadata.json:

{
  "provider": "amazon",
  "urls": ["https://alas.aws.amazon.com/AL2022/alas.rss"],
  "listing": {
    "digest": "dd3bb0f6c21f3936",
    "path": "checksums",
    "algorithm": "xxh64"
  },
  "timestamp": "2023-01-01T21:20:57.504194+00:00",
  "schema": {
    "version": "1.0.0",
    "url": "https://raw.githubusercontent.com/anchore/vunnel/main/schema/provider-workspace-state/schema-1.0.0.json"
  }
}

Where:

  • provider: the name of the provider that generated the results
  • urls: the URLs that were referenced to generate the results
  • listing: the path to the checksums listing file (which records a checksum for every result file), the digest of that listing file, and the checksum algorithm used (the same algorithm is used for all contained checksums)
  • timestamp: the point in time when the results were generated or last updated
  • schema: the data shape that the current file conforms to

Result Format

All results from a provider are written through a common base-class helper (provider.Provider.results_writer()), whose behavior is driven by the application configuration (e.g. JSON flat files or a SQLite database). The data shape of the results is self-describing via an envelope with a schema reference.

For example:

{
  "schema": "https://raw.githubusercontent.com/anchore/vunnel/main/schema/vulnerability/os/schema-1.0.0.json",
  "identifier": "3.3/cve-2015-8366",
  "item": {
    "Vulnerability": {
      "Severity": "Unknown",
      "NamespaceName": "alpine:3.3",
      "FixedIn": [
        {
          "VersionFormat": "apk",
          "NamespaceName": "alpine:3.3",
          "Name": "libraw",
          "Version": "0.17.1-r0"
        }
      ],
      "Link": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8366",
      "Description": "",
      "Metadata": {},
      "Name": "CVE-2015-8366",
      "CVSS": []
    }
  }
}

Where:

  • The schema field is a URL to the schema that describes the data shape of the item field
  • The identifier field should have a unique identifier within the context of the provider results
  • The item field is the actual vulnerability data, and the shape of this field is defined by the schema

Note that the identifier is 3.3/cve-2015-8366 and not just cve-2015-8366 in order to uniquely identify cve-2015-8366 as applied to the alpine 3.3 distro version among other records in the results directory.

Currently only JSON payloads are supported.
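As a concrete illustration, writing one such envelope with the flat-file strategy might look like the following sketch. In a real provider this bookkeeping is handled by results_writer(); the file-naming shown here is an assumption for illustration:

```python
import json
from pathlib import Path

# Sketch: write a single schema-enveloped result record as a flat JSON file.
# Real providers delegate this to provider.Provider.results_writer().
record = {
    "schema": "https://raw.githubusercontent.com/anchore/vunnel/main/schema/vulnerability/os/schema-1.0.0.json",
    "identifier": "3.3/cve-2015-8366",
    "item": {"Vulnerability": {"Name": "CVE-2015-8366", "NamespaceName": "alpine:3.3"}},
}

results = Path("data/alpine/results")
results.mkdir(parents=True, exist_ok=True)

# flat-file strategy: one record per file, named after the identifier
out = results / (record["identifier"].replace("/", "-") + ".json")
out.write_text(json.dumps(record, indent=2))
print(out)  # data/alpine/results/3.3-cve-2015-8366.json
```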

Vulnerability Schemas

The vulnerability schemas supported within the vunnel repo are defined in its schema/ directory.

If at any point a breaking change needs to be made to a provider (while, say, the schema remains the same), you can set the __version__ attribute on the provider class to a new integer value (incrementing from 1). This indicates that the cached input/results are not compatible with the output of the current version of the provider, in which case the next invocation of the provider will delete the previous input and results before running.
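The version check amounts to comparing the class attribute against the version recorded in the cached workspace state. The following is an illustrative sketch only; the real logic lives in the vunnel.provider.Provider base class:

```python
# Illustrative sketch of the cache-invalidation check; names are hypothetical.
class DebianProvider:
    # bump when cached input/results are incompatible with this provider's output
    __version__ = 2

def needs_reset(cached_version: int, provider_cls: type) -> bool:
    """True when the workspace state was written by a different provider version."""
    return cached_version != provider_cls.__version__

print(needs_reset(1, DebianProvider))  # True: delete previous input and results
```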

Provider Configuration

Each provider has a configuration object defined next to the provider class. This object is used in the vunnel application configuration and is passed as input to the provider class. Take the debian provider configuration for example:

from dataclasses import dataclass, field

from vunnel import provider, result

@dataclass
class Config:
    runtime: provider.RuntimeConfig = field(
        default_factory=lambda: provider.RuntimeConfig(
            result_store=result.StoreStrategy.SQLITE,
            existing_results=provider.ResultStatePolicy.DELETE_BEFORE_WRITE,
        ),
    )
    request_timeout: int = 125

Configuration Requirements

Every provider configuration must:

  • Be a dataclass
  • Have a runtime field of type provider.RuntimeConfig

The runtime field configures common behaviors of the provider that are enforced by the vunnel.provider.Provider base class.

Runtime Configuration Options

  • on_error: what to do when the provider fails

    • action: choose to fail, skip, or retry when the failure occurs
    • retry_count: the number of times to retry the provider before failing (only applicable when action is retry)
    • retry_delay: the number of seconds to wait between retries (only applicable when action is retry)
    • input: what to do about the input data directory on failure (such as keep or delete)
    • results: what to do about the results data directory on failure (such as keep or delete)
  • existing_results: what to do when the provider is run again and the results directory already exists

    • delete-before-write: delete the existing results just before writing the first processed (new) result
    • delete: delete existing results before running the provider
    • keep: keep the existing results
  • existing_input: what to do when the provider is run again and the input directory already exists

    • delete: delete the existing input before running the provider
    • keep: keep the existing input
  • result_store: where to store the results

    • sqlite: store results in key-value form in a SQLite database, where keys are the record identifiers and values are the JSON vulnerability records
    • flat-file: store results in JSON files named after the record identifiers

Any provider-specific config options can be added to the configuration object as needed (such as request_timeout, which is a common field).
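Putting these options together, a provider entry in the application configuration might look roughly like this. The key names are inferred from the RuntimeConfig fields described above; the exact layout of your vunnel config file may differ:

```yaml
# sketch of a provider entry in the vunnel application config;
# exact key names and nesting may differ between vunnel versions
providers:
  debian:
    request_timeout: 125          # provider-specific option
    runtime:
      result_store: sqlite        # or flat-file
      existing_results: delete-before-write
      existing_input: keep
      on_error:
        action: retry
        retry_count: 3
        retry_delay: 5
        input: keep
        results: keep
```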

For more details on how Grype DB uses Vunnel output, see the Grype DB Architecture page.

Next Steps

8 - About

About Anchore OSS and its community

8.1 - Events

Anchore OSS Community Events and Meetings

Open Source Live Streams

Almost every Thursday the OSS team holds a “Gardening” live stream on the Anchore YouTube channel. Each week, we announce the stream time in the Announcements category on Discourse.

The streams are recorded and archived in our Live stream playlist.

Community Meetings

We hold open meetings with the community, on alternate Thursdays. These are on Zoom, and are not recorded or streamed. There is an optional agenda which can be filled in. Everyone is welcome. A webcam is not required.

Anchore Events

Anchore has a separate Events page, for announcing industry & corporate events, and webinars.

8.2 - Adopters

Adopters of Anchore Open Source Tools

Our tools are used by organisations and developer teams of all sizes. Below is a small sample of users of our tools, in public GitHub repositories.

  • Docker
  • SAP
  • Grafana
  • OpenTelemetry
  • Wolfi
  • Kubescape

The organisations below are further adopters of our tools in public GitHub repositories.

SBOM Action

Repository | Stars
  n8n-io / n8n136966
  caddyserver / caddy66730
  ultralytics / ultralytics45473
  grafana / k628717
  grafana / loki26408
  SigNoz / signoz23479
  cilium / cilium22428
  jaegertracing / jaeger21847
  getsops / sops19369
  nats-io / nats-server18185
  stackblitz-labs / bolt.diy17813
  goreleaser / goreleaser15102
  App-vNext / Polly13956
  kubescape / kubescape10979
  orhun / git-cliff10777
  anchore / grype10629
  loft-sh / vcluster10585
  dexidp / dex10175
  fission / fission8737
  Workiva / go-datastructures7837
  anchore / syft7598
  fluxcd / flux27409
  k8sgpt-ai / k8sgpt6938
  kubevela / kubevela6834
  gopasspw / gopass6462
  podman-desktop / podman-desktop6450
  external-secrets / external-secrets5817
  open-telemetry / opentelemetry-collector5773
  inventree / InvenTree5721
  apache / nifi5660
  domaindrivendev / Swashbuckle.AspNetCore5407
  fluxcd / flagger5152
  nginx / kubernetes-ingress4817
  grafana / tempo4738
  jenkins-x / jx4664
  openbao / openbao4482
  openfga / openfga4087
  cerbos / cerbos4039
  modelcontextprotocol / registry3589
  version-fox / vfox3529
  orhun / binsider3403
  mpromonet / webrtc-streamer3389
  orhun / kmon2782
  dragonflyoss / dragonfly2772
  akuity / kargo2694
  kube-vip / kube-vip2531
  IBM / mcp-context-forge2412
  goreleaser / nfpm2395
  ory / polis2137
  badtuxx / girus-cli2103
  mpromonet / v4l2rtspserver1969
  projectcapsule / capsule1925
  artifacthub / hub1905
  ublue-os / bluefin1887
  nginx / nginx-prometheus-exporter1855
  stefanprodan / timoni1852
  keptn / keptn1789
  regclient / regclient1632
  kubewall / kubewall1573
  helm / chart-testing1561
  kubeshop / testkube1494
  flux-iac / tofu-controller1486
  project-copacetic / copacetic1424
  guacsec / guac1406
  OWASP / SecurityShepherd1402
  orhun / systeroid1384
  charmbracelet / wishlist1371
  trueforge-org / truecharts1270
  dimonomid / nerdlog1261
  aserto-dev / topaz1257
  containerd / runwasi1208
  abhimanyu003 / sttr1182
  kitops-ml / kitops1180
  stacklok / toolhive1164
  k8gb-io / k8gb1074
  minicli / minicli1063
  lensesio / stream-reactor1038
  sigstore / gitsign1020
  jonrau1 / ElectricEye1010
  gnolang / gno989
  orhun / rustypaste971
  open-cluster-management-io / ocm957
  controlplaneio / simulator956
  cBioPortal / cbioportal853
  percona / pmm838
  intigriti / misconfig-mapper816
  updatecli / updatecli792
  kluctl / kluctl788
  helm / chart-releaser760
  open-feature / flagd758
  orhun / halp750
  poweradmin / poweradmin729
  flux-subsystem-argo / flamingo706
  caarlos0 / svu697
  getprobo / probo686
  opea-project / GenAIExamples682
  nuxeo / nuxeo673
  nginx / nginx-gateway-fabric668
  glasskube / distr647
  falcosecurity / falcosidekick619
  orhun / linuxwave601
  devops-kung-fu / bomber576
  epinio / epinio559
  editorconfig-checker / editorconfig-checker548
  microsoft / call-center-ai531
  clemlesne / scrape-it-now525
  in-toto / witness494
  caioricciuti / ch-ui463
  kubestellar / kubestellar463
  fluxcd / helm-controller457
  anchore / quill455
  kyverno / chainsaw437
  retracedhq / retraced412
  keptn / lifecycle-toolkit400
  open-telemetry / opentelemetry-collector-releases388
  k8sgpt-ai / k8sgpt-operator387
  pcasteran / terraform-graph-beautifier387
  controlplaneio / netassert377
  justeattakeaway / httpclient-interception376
  tbckr / sgpt370
  NLeSC / mcfly365
  controlplaneio-fluxcd / flux-operator355
  mindersec / minder349
  ublue-os / aurora341
  wanghaisheng / tiktoka-studio-uploader331
  jkroepke / openvpn-auth-oauth2329
  ahmetb / gen-crd-api-reference-docs325
  rad-security / kbom318
  Lissy93 / domain-locker313
  caarlos0 / domain_exporter313
  stefanprodan / kustomizer296
  martincostello / xunit-logging293
  avisi-cloud / structurizr-site-generatr293
  home-operations / containers292
  kexa-io / Kexa290
  compozy / compozy290
  ahoy-cli / ahoy281
  notaryproject / ratify278
  udx / wp-stateless268
  gatewayd-io / gatewayd266
  spr-networks / super266
  sgl-project / ome265
  fluxcd / kustomize-controller265
  fluxcd / source-controller257
  nicholas-fedor / watchtower251
  martincostello / sqllocaldb251
  open-feature / open-feature-operator251
  digitalghost-dev / premier-league246
  KWasm / kwasm-operator237
  au2001 / icloud-passwords-firefox236
  mitre / heimdall2235
  FDio / govpp232
  micro-lc / micro-lc221
  Hyperledger-TWGC / tape220
  hazcod / ransomwhere212
  SchwarzIT / go-template200
  mostafa / xk6-kafka193
  snyk / parlay191
  fluxcd / image-automation-controller187
  muhlba91 / pulumi-proxmoxve184
  defenseunicorns / pepr181
  dirien / minectl178
  plgd-dev / hub173
  opea-project / GenAIComps172
  cerberauth / vulnapi172
  roots / trellis-cli169
  rond-authz / rond162
  mitre / saf162
  fluxcd / notification-controller162
  docker / sbom-cli-plugin156
  chainguard-dev / incert156
  soraro / kurt154
  jauderho / dockerfiles154
  elastic / harp152
  stacklok / frizbee150
  jkroepke / access-log-exporter147
  sigstore / policy-controller142
  kaansk / shomon131
  laoshanxi / app-mesh128
  philips-software / amp-devcontainer125
  openimsdk / chat125
  falcosecurity / falcosidekick-ui124
  Workiva / built_redux123
  OpenUnison / openunison-k8s123
  Hive-Academy / Anubis-MCP122
  hemilabs / heminetwork122
  asymmetric-research / solana-exporter119
  holos-run / holos116
  fluxcd / image-reflector-controller116
  dirien / minecraft-prometheus-exporter116
  bomctl / bomctl115
  homeall / caddy-reverse-proxy-cloudflare109
  civiform / civiform109
  html2rss / html2rss-web108
  SAP / terraform-provider-btp106
  descope / descopecli103
  raffis / gitops-zombies103
  sigstore / timestamp-authority102
  IAreKyleW00t / docker-caddy-cloudflare101
  ossf / sbom-everywhere101
  autobrr / mkbrr101
  raffis / mongodb-query-exporter101
  shopware / shopware-cli98
  salrashid123 / gce_metadata_server98
  caarlos0 / twitter-cleaner97
  pteich / elastic-query-export96
  dwisiswant0 / unch94
  martincostello / openapi-extensions92
  actinia-org / actinia-core92
  caddyserver / gateway92
  OpenZeppelin / openzeppelin-relayer89
  AlbrechtL / openwrt-docker89
  OpenZeppelin / openzeppelin-monitor88
  caarlos0 / jsonfmt88
  kyverno / kyverno-json86
  cardinalhq / lakerunner85
  microsoft / terraform-provider-fabric85
  intelops / compage85
  openfga / cli82
  schednex-ai / schednex81
  0x61nas / aarty81
  erfianugrah / revista-380
  some-natalie / kubernoodles80
  Workiva / opentelemetry-dart79
  mitre / vulcan78
  devops-kung-fu / hookz78
  sigstore / cosign-gatekeeper-provider78
  certonid / certonid77
  PurpleBooth / git-mit76
  wimpysworld / stream-sprout76
  cpanato / github_actions_exporter76
  fystack / mpcium74
  Workiva / dart_dev74
  gearnode / privatebin74
  crashappsec / github-analyzer72
  shini4i / argo-watcher69
  alegrey91 / fwdctl69
  phoban01 / cue-flux-controller69
  stackabletech / spark-k8s-operator66
  Workiva / state_machine65
  tektronix / tm_devices65
  mchmarny / vimp65
  sigstore / helm-sigstore65
  saas-factory-labs / Saas-Factory64
  Workiva / dart_codemod64
  SigNoz / signoz-otel-collector64
  ICTU / quality-time63
  anchore / chronicle63
  peak-scale / sops-operator63
  gembaadvantage / uplift63
  yurishkuro / microsim62
  tuannvm / mcp-trino60
  opensearch-project / opensearch-migrations60
  redhat-certification / chart-verifier59
  muhlba91 / external-dns-provider-adguard59
  ilijamt / vault-plugin-secrets-gitlab59
  gopasspw / git-credential-gopass58
  stackabletech / trino-operator58
  goreleaser / example-supply-chain58
  metal-stack / firewall-controller57
  grafana / grafana-opentelemetry-dotnet57
  apigee / apigeecli57
  theparanoids / crypki57
  go-faster / oteldb57
  nginx / nginx-asg-sync57
  ultralytics / thop56
  ublue-os / bluefin-lts55
  theopenlane / core55
  JSchmie / ScrAIbe54
  akuity / kargo-render54
  sapcc / ntp_exporter54
  engity-com / bifroest54
  spinframework / runtime-class-manager54
  dirien / pulumi-fly53
  FalcoSuessgott / vault-kubernetes-kms52
  justeattakeaway / JustSaying51
  teler-sh / teler-proxy51
  xmlking / grpc-starter-kit50
  philips-labs / slsa-provenance-action50

Scan Action

Repository | Stars
  ClickHouse / ClickHouse42822
  airbytehq / airbyte19470
  bitwarden / server17139
  docker-mailserver / docker-mailserver17053
  ory / hydra16505
  goreleaser / goreleaser15102
  elastic / logstash14628
  wazuh / wazuh13461
  Unstructured-IO / unstructured12634
  ory / kratos12294
  bitwarden / clients11220
  cookieY / Yearning8763
  flowable / flowable-engine8706
  gopasspw / gopass6462
  photoview / photoview6128
  fastrepl / hyprnote6082
  apache / nifi5660
  0xERR0R / blocky5601
  nuclio / nuclio5572
  ory / keto5111
  freedomofpress / dangerzone4239
  ory / oathkeeper3426
  chaskiq / chaskiq3400
  determined-ai / determined3187
  deepseek-ai / DreamCraft3D2969
  buildpacks / pack2799
  akuity / kargo2694
  sakai135 / wsl-vpnkit2639
  submariner-io / submariner2568
  Checkmarx / kics2469
  onekey-sec / unblob2354
  11notes / docker-kms2010
  nginx / nginx-prometheus-exporter1855
  cloudfoundry / cli1844
  GIScience / openrouteservice1711
  mlrun / mlrun1588
  openremote / openremote1510
  TheresAFewConors / Sooty1425
  py-pdf / fpdf21363
  wahyd4 / aria2-ariang-docker1093
  jonrau1 / ElectricEye1010
  mixcore / mix.core873
  kanisterio / kanister840
  Unstructured-IO / unstructured-api808
  mendhak / docker-http-https-echo727
  voxpupuli / puppetboard726
  kool-dev / kool708
  Threagile / threagile692
  getprobo / probo686
  hipages / php-fpm_exporter681
  nuxeo / nuxeo673
  nginx / nginx-gateway-fabric668
  estahn / k8s-image-swapper611
  ThomasVitale / cloud-native-spring-in-action538
  shenxn / protonmail-bridge-docker534
  opentracing-contrib / nginx-opentracing507
  bitwarden / self-host506
  grafana / grafana-image-renderer437
  BallAerospace / COSMOS372
  bitwarden / sdk-sm336
  wanghaisheng / tiktoka-studio-uploader331
  rad-security / kbom318
  interledger / rafiki315
  adrianbrad / queue314
  home-operations / containers292
  RAJANAGORI / Nightingale288
  banzaicloud / thanos-operator281
  cnoe-io / idpbuilder275
  udx / wp-stateless268
  waldo-vision / waldo254
  Secure-Compliance-Solutions-LLC / GVM-Docker251
  tarampampam / mikrotik-hosts-parser248
  digitalghost-dev / premier-league246
  sstarcher / helm-exporter246
  istio-ecosystem / authservice236
  signalfx / splunk-otel-collector223
  buildpacks / lifecycle198
  defenseunicorns / pepr181
  righettod / toolbox-pentest-web166
  MustacheCase / zanadir165
  11notes / docker-socket-proxy164
  ilteoood / docker-surfshark161
  OpenC3 / cosmos156
  jauderho / dockerfiles154
  jedisct1 / dnscrypt-server-docker145
  artefactory / NLPretext140
  michelin / kstreamplify132
  submariner-io / lighthouse130
  11notes / docker-adguard122
  11notes / docker-kms-gui119
  submariner-io / submariner-operator119
  11notes / docker-traefik115
  Soneji / docker-chromium115
  alex1989hu / kubelet-serving-cert-approver113
  banzaicloud / jwt-to-rbac113
  DataDog / datadog-lambda-extension109
  azinchen / nordvpn108
  anweiss / cddl106
  cfpb / hmda-platform104
  madereddy / noisy104
  WeblateOrg / docker103
  michelin / ns4kafka94
  Chr157i4n / PyTmcStepper92
  HHS / simpler-grants-gov91
  OpenZeppelin / openzeppelin-relayer89
  tarampampam / tinifier89
  OpenZeppelin / openzeppelin-monitor88
  bitwarden / mcp-server86
  UKHomeOffice / kd85
  HariSekhon / GitHub-Actions81
  some-natalie / kubernoodles80
  ThomasVitale / spring-cloud-gateway-resilience-security-observability77
  wimpysworld / stream-sprout76
  astrolabsoftware / fink-broker72
  werbot / lime72
  Ortus-Solutions / docker-commandbox66
  tektronix / tm_devices65
  ortus-boxlang / BoxLang65
  mchmarny / vimp65
  XGovFormBuilder / digital-form-builder64
  analysys / ans-android-sdk64
  pegasystems / docker-pega-web-ready61
  gopasspw / gopass-jsonapi59
  redhat-certification / chart-verifier59
  Altinity / ClickHouse59
  datagrok-ai / public58
  theparanoids / crypki57
  submariner-io / shipyard56
  kube-tarian / tarian56
  cogini / phoenix_container_example55
  cogini / phoenix_container_example_old55
  singlestore-labs / singlestoredb-dev-image55
  JSchmie / ScrAIbe54
  ryaneorth / k8s-scheduled-volume-snapshotter54
  michelin / suricate52
  SmartTokenLabs / attestation51
  adlnet / CATAPULT50

Generated using github-dependents-info, by Nicolas Vuillamy

8.3 - Discussion

Official Anchore OSS online discussion

Official platforms

Below are platforms maintained and monitored by Anchore OSS Team staff.

Discourse

We have an official community Discourse for discussion of the Anchore OSS tools.

Anchore Community Discourse

Video

We post OSS and Anchore Enterprise related content on our YouTube channel.

Anchore YouTube Channel

Social

Find and engage with us on various social media platforms.

  • Syft: Mastodon @syft@fosstodon.org · BlueSky @syftproject.bsky.social · X @syftproject
  • Grype: Mastodon @grype@fosstodon.org · BlueSky @grypeproject.bsky.social · X @grypeproject
  • Anchore: Mastodon @anchore@mstdn.business · BlueSky @anchore.com · X @anchore

8.4 - Glossary

Definitions of terms used in software security, SBOM generation, and vulnerability scanning

A

Artifact

In Syft’s JSON output format, “artifacts” refers to the array of software packages discovered during scanning. Each artifact represents a single package (library, application, OS package, etc.) with its metadata, version, licenses, locations, and identifiers like CPE and PURL. This is distinct from general software artifacts like binaries or container images.

Related documentation: Working with Syft JSON

Attestation

A cryptographically signed statement about a software artifact that provides verifiable claims about its properties, such as provenance, build process, or security scan results. Attestations establish trust in the software supply chain by allowing you to verify that an SBOM truly represents a specific artifact or that vulnerability scan results are authentic.

Why it matters: Attestations enable you to verify the authenticity and integrity of SBOMs generated by Syft and vulnerability reports from Grype, ensuring they haven’t been tampered with.

C

Cataloger

A cataloger is a component within Syft that specializes in discovering and extracting package information from specific ecosystems or file formats. Each cataloger knows how to find and parse packages for a particular type (e.g., apk-cataloger for Alpine packages, npm-cataloger for Node.js packages). When Syft scans a target, it runs multiple catalogers to comprehensively discover all software components.

Why it matters: The foundBy field in Syft’s JSON output tells you which cataloger discovered each package, which can help debug why certain packages appear in your SBOM or troubleshoot scanning issues.

Related documentation: Working with Syft JSON

Container image

A lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. Container images are built from layers and typically run using container runtimes like Docker or containerd. See also OCI.

Why it matters: Both Syft and Grype can scan container images directly without requiring them to be running. Syft generates SBOMs from container images, and Grype scans them for vulnerabilities.

Related documentation: SBOM Generation, Vulnerability Scanning

CPE

Common Platform Enumeration (CPE) is a standardized method for describing and identifying software applications, operating systems, and hardware devices. CPEs are used in vulnerability databases to match software components with known vulnerabilities.

Formats:

  • URI binding: cpe:/{part}:{vendor}:{product}:{version}:{update}:{edition}:{language}
  • Formatted string: cpe:2.3:{part}:{vendor}:{product}:{version}:{update}:{edition}:{language}:{sw_edition}:{target_sw}:{target_hw}:{other}
  • Well-Formed Name (WFN): cpe:2.3:wfn:[attributes]

Examples:

  • cpe:/a:mozilla:firefox:68.0::~~~en-us~~
  • cpe:2.3:a:microsoft:internet_explorer:8.0.6001:beta:*:*:*:*:*:*
  • wfn:[part="a", vendor="microsoft", product="internet_explorer",version="8\.0\.6001", update="beta", edition=ANY, language=ANY]
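To make the formatted-string binding concrete, a naive split recovers the named attributes. This is an illustration only; a real CPE parser must also handle escaping (e.g. "\:" inside a value), which this sketch ignores:

```python
# Naive illustration of the CPE 2.3 formatted-string attributes;
# does NOT handle escaped characters, unlike a real parser.
CPE23_FIELDS = [
    "part", "vendor", "product", "version", "update", "edition",
    "language", "sw_edition", "target_sw", "target_hw", "other",
]

def parse_cpe23(cpe: str) -> dict:
    prefix, spec_version, *values = cpe.split(":")  # naive split, no escapes
    assert prefix == "cpe" and spec_version == "2.3", "not a CPE 2.3 formatted string"
    return dict(zip(CPE23_FIELDS, values))

attrs = parse_cpe23("cpe:2.3:a:microsoft:internet_explorer:8.0.6001:beta:*:*:*:*:*:*")
print(attrs["vendor"], attrs["version"])  # microsoft 8.0.6001
```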

Why it matters: Syft generates CPEs for discovered packages (from the NVD dictionary or via synthetic generation), which Grype then uses to match packages against vulnerability data. Understanding CPEs helps you troubleshoot why certain vulnerabilities from NVD are or aren’t being detected.

External resources:

Related documentation: Working with Syft JSON

CVE

Common Vulnerabilities and Exposures (CVE) is a standardized identifier for publicly known security vulnerabilities. Each CVE ID uniquely identifies a specific vulnerability and provides a common reference point for discussing and tracking security issues.

Format example: CVE-2024-12345

Why it matters: Grype reports vulnerabilities by their CVE IDs, making it easy to research specific issues, understand their impact, and find remediation guidance. Each match in a Grype scan references one or more CVE IDs.

External resources:

Related documentation: Vulnerability Scanning

CVSS

Common Vulnerability Scoring System (CVSS) is an open framework for communicating the characteristics and severity of software vulnerabilities. CVSS (base) scores range from 0.0 to 10.0, with higher scores indicating more severe vulnerabilities.

Severity ranges:

  • None: 0.0
  • Low: 0.1-3.9
  • Medium: 4.0-6.9
  • High: 7.0-8.9
  • Critical: 9.0-10.0

There are more dimensions to CVSS, including Temporal and Environmental scores, but the Base score is the most commonly used as a way to quickly assess severity.
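The base-score ranges above translate directly into a severity lookup, for example:

```python
# Map a CVSS base score to its qualitative severity rating,
# per the ranges listed above.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # Critical
```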

Why it matters: Grype uses CVSS scores to categorize vulnerability severity, helping you prioritize which issues to fix first. You can filter Grype results by severity level to focus on the most critical vulnerabilities.

External resources:

Related documentation: Vulnerability Scanning

CycloneDX

CycloneDX is an open-source standard for creating Software Bill of Materials (SBOMs), supporting JSON and XML representations.

Why it matters: Syft can generate SBOMs in CycloneDX format (-o cyclonedx-json or -o cyclonedx-xml), which is widely supported by security tools and compliance platforms. Grype can also scan CycloneDX SBOMs for vulnerabilities.

External resources:

Related documentation: SBOM Generation

D

Dependency

A software component that another piece of software relies on to function. Dependencies can be direct (explicitly required by your code) or transitive (required by your dependencies). Understanding and tracking dependencies is crucial for security and license compliance.

Why it matters: Syft catalogs both direct and transitive dependencies in your software, creating a complete inventory. Grype then scans all dependencies for vulnerabilities, not just your direct dependencies—important because transitive dependencies often contain hidden security risks.

Distro

Short for “distribution”, referring to a specific Linux distribution like Alpine, Ubuntu, Debian, or Red Hat. The distro information includes the distribution name and version (e.g., “alpine 3.18”).

Why it matters: Grype uses distro information to match OS packages against the correct vulnerability database. Syft automatically detects the distro from files like /etc/os-release and includes it in the SBOM, ensuring accurate vulnerability matching.

Related documentation: Working with Syft JSON

Docker

Docker is a platform for developing, shipping, and running applications in containers. While Docker is a specific implementation, the term is often used colloquially to refer to container technology in general. See Container image and OCI.

Why it matters: Syft and Grype can pull and scan images directly from Docker registries or analyze images in your local Docker daemon without needing Docker to be installed.

External resources:

E

Ecosystem

In software, an ecosystem refers to a package management system and its associated community, tools, and conventions. Examples include npm (JavaScript), PyPI (Python), Maven Central (Java), and RubyGems (Ruby). Different ecosystems have different package formats, naming conventions, and vulnerability data sources.

Why it matters: Syft supports dozens of package ecosystems, and each uses a different cataloger. The ecosystem determines how packages are identified (PURL type), which metadata is captured, and which vulnerability data sources Grype uses for matching.

Related documentation: SBOM Generation

EPSS

Exploit Prediction Scoring System (EPSS) is a data-driven framework that estimates the probability that a software vulnerability will be exploited in the wild within the next 30 days.

EPSS provides two complementary metrics:

  • Score: A probability value from 0.0 to 1.0 (0% to 100%) indicating the likelihood of exploitation. For example, a score of 0.00034 means a 0.034% probability of exploitation.
  • Percentile: A ranking showing what percentage of all CVEs have a lower EPSS score. For example, a percentile of 0.09274 means this CVE scores higher than 9.274% of all tracked vulnerabilities.

Unlike CVSS, which measures theoretical severity, EPSS predicts actual exploitation probability by analyzing factors like available exploits, social media activity, and observed attacks (among other signals).

Why it matters: EPSS helps you prioritize vulnerabilities more effectively than severity alone. A critical CVSS vulnerability with a low EPSS score might be less urgent than a medium severity issue with a high EPSS score. Grype can display EPSS scores alongside CVSS to help you focus remediation efforts on vulnerabilities that are both severe and likely to be exploited.
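
The prioritization shift EPSS enables can be seen with three hypothetical findings (CVE IDs and scores invented for illustration): sorting by EPSS puts the most-likely-exploited issue first even though it has the lowest CVSS score:

```python
# Hypothetical findings: (CVE ID, CVSS base score, EPSS probability).
findings = [
    ("CVE-A", 9.8, 0.00034),  # critical severity, rarely exploited
    ("CVE-B", 5.3, 0.92),     # medium severity, very likely to be exploited
    ("CVE-C", 7.5, 0.12),
]

# Order by exploitation probability, highest first.
by_epss = sorted(findings, key=lambda f: f[2], reverse=True)
print([cve for cve, _, _ in by_epss])  # → ['CVE-B', 'CVE-C', 'CVE-A']
```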

External resources:

Related documentation: Vulnerability Scanning

F

False positive

In the context of scanning for vulnerabilities, a false positive is a vulnerability-package match reported by a scanner that doesn’t actually affect the software package in question. False positives can occur due to incorrect CPE matching, version misidentification, or when a vulnerability applies to one variant of a package but not another.

Why it matters: When Grype reports a false positive, you can use VEX documents or Grype’s ignore rules to suppress it, preventing alert fatigue and focusing on real security issues. If you believe a match is incorrect, you can report it on GitHub to help improve Grype for everyone.

False negative

In the context of scanning for vulnerabilities, a false negative occurs when a scanner fails to detect a vulnerability that actually affects a software package. False negatives can happen when vulnerability data is incomplete, when a package uses non-standard naming or versioning, when CPE or PURL identifiers don’t match correctly, or when the vulnerability database hasn’t been updated yet.

Why it matters: False negatives are more dangerous than false positives because they create a false sense of security. To minimize false negatives, keep Grype’s vulnerability database updated regularly and understand that no scanner catches 100% of vulnerabilities—defense in depth and multiple security controls are essential.

K

KEV

Known Exploited Vulnerability (KEV) is a designation for vulnerabilities that have been confirmed as actively exploited in real-world attacks. CISA (Cybersecurity and Infrastructure Security Agency) maintains the authoritative KEV catalog, which lists CVEs with evidence of active exploitation and provides binding operational directives for federal agencies.

The CISA KEV catalog includes:

  • CVE identifiers for exploited vulnerabilities
  • The product and vendor affected
  • A brief description of the vulnerability
  • Required remediation actions
  • Due dates for federal agencies to patch

Vulnerabilities are added to the KEV catalog based on reliable evidence of active exploitation, such as public reporting, threat intelligence, or incident response data.

Why it matters: KEV status is a strong signal for prioritization—these vulnerabilities are being actively exploited right now. When Grype identifies a vulnerability that’s on the CISA KEV list, you should treat it as urgent regardless of CVSS score. A medium-severity KEV vulnerability poses more immediate risk than a critical-severity vulnerability that’s never been exploited. Some organizations make KEV remediation mandatory within tight timeframes (e.g., 15 days for critical KEVs).
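
A triage rule like the one described above can be sketched in a few lines. This is an illustrative policy, not anything Grype enforces; the tier names are made up:

```python
def triage(severity: str, on_kev_list: bool) -> str:
    """Toy triage policy: KEV membership trumps severity.
    A medium-severity KEV entry outranks an unexploited critical."""
    if on_kev_list:
        return "urgent"  # actively exploited: patch by the KEV due date or sooner
    if severity in ("Critical", "High"):
        return "high-priority"
    return "scheduled"

print(triage("Medium", on_kev_list=True))     # → urgent
print(triage("Critical", on_kev_list=False))  # → high-priority
```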

External resources:

Related documentation: Vulnerability Scanning

L

Layer

Container images are built as a series of filesystem layers, where each layer represents changes from a Dockerfile instruction. Layers are stacked together to create the final filesystem.

Why it matters: By default, Syft scans only the “squashed” view of an image (what you’d see if the container were running). Use --scope all-layers to scan all layers, which can reveal packages that were installed then deleted, potentially exposing vulnerabilities in build-time dependencies.

Related documentation: SBOM Generation

License

A legal instrument governing the use and distribution of software. Software licenses range from permissive (MIT, Apache) to copyleft (GPL) to proprietary.

Why it matters: Syft extracts license information from packages and includes it in SBOMs, helping you ensure compliance with open source licenses and identify packages with incompatible or restricted licenses.

Related documentation: License Compliance

M

Match

A match is a vulnerability finding in Grype’s output, representing a single package-vulnerability pair. Each match indicates that a specific package version is affected by a particular CVE.

Related documentation: Vulnerability Scanning

Matcher

A matcher is a component within Grype that compares package information against vulnerability data using specific matching strategies. Different matchers handle different package types or ecosystems (e.g., distro matcher for OS packages, language matcher for application dependencies).

Why it matters: Grype uses multiple matchers to ensure comprehensive vulnerability coverage. The matcher used for each finding is included in detailed output, helping you understand how the match was made.

N

NVD

National Vulnerability Database (NVD) is the U.S. government repository of known software vulnerabilities. It provides comprehensive vulnerability information including CVE IDs, CVSS scores, and affected software configurations. The NVD is maintained by NIST.

Why it matters: The NVD is one of the primary vulnerability data sources used by Grype. Syft also uses the NVD’s CPE dictionary to generate CPEs for packages, enabling accurate vulnerability matching.

External resources:

Related documentation: Vulnerability Scanning

O

OCI

Open Container Initiative (OCI) is an open governance structure for creating industry standards around container formats and runtimes. The OCI Image Specification defines the standard format for container images, ensuring interoperability across different container tools and platforms.

Why it matters: Syft and Grype work with OCI-compliant images from any registry (Docker Hub, GitHub Container Registry, Amazon ECR, etc.), not just Docker images. They can read images in OCI layout format directly from disk.

External resources:

P

Package

A bundle of software that can be installed and managed by a package manager. Packages typically include the software itself, metadata (like version and dependencies), and installation instructions. Packages are the fundamental units tracked in an SBOM.

Why it matters: Every entry in a Syft-generated SBOM represents a package. Grype matches packages against vulnerability data to find security issues. Understanding what constitutes a “package” in different ecosystems helps you interpret SBOM contents.

Package manager

A tool that automates the process of installing, upgrading, configuring, and removing software packages. Examples include npm, pip, apt, yum, and Maven. Package managers maintain repositories of available packages and handle dependency resolution.

Why it matters: Syft discovers packages by reading package manager metadata files (like package.json, requirements.txt, or /var/lib/dpkg/status). Each package manager stores information differently, which is why Syft needs ecosystem-specific catalogers.

Provenance

Information about the origin and build process of a software artifact, including who built it, when, from what source code, and using what tools. Build provenance helps verify that software was built as expected and hasn’t been tampered with.

Why it matters: SBOMs generated by Syft during builds can be combined with provenance information to create comprehensive supply chain attestations, enabling you to verify both what’s in your software and how it was built.

External resources:

PURL

Package URL (PURL) is a standardized way to identify and locate software packages across different package managers and ecosystems. PURLs provide a uniform identifier that works across different systems.

Format: pkg:type/namespace/name@version?qualifiers#subpath

Example: pkg:npm/lodash@4.17.21

Why it matters: Syft generates PURLs for all discovered packages, and Grype uses PURLs as one of the primary identifiers for vulnerability matching. PURLs provide a consistent way to refer to packages across different SBOM formats.
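
Pulling the pieces out of a simple PURL is mostly string splitting. The sketch below handles only pkg:type/[namespace/]name@version, ignoring qualifiers and subpaths; the full grammar is defined by the PURL spec, and the purl reference libraries implement it completely:

```python
def parse_simple_purl(purl: str) -> dict:
    """Split a simple PURL into its parts.
    Handles pkg:type/[namespace/]name@version only — a sketch,
    not a full implementation of the spec (no qualifiers/subpath)."""
    scheme, _, rest = purl.partition(":")
    if scheme != "pkg":
        raise ValueError("PURLs start with the 'pkg' scheme")
    path, _, version = rest.partition("@")
    ptype, _, name = path.partition("/")
    namespace, _, short_name = name.rpartition("/")
    return {
        "type": ptype,
        "namespace": namespace or None,
        "name": short_name,
        "version": version or None,
    }

print(parse_simple_purl("pkg:npm/lodash@4.17.21"))
# → {'type': 'npm', 'namespace': None, 'name': 'lodash', 'version': '4.17.21'}
```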

External resources:

Related documentation: Working with Syft JSON

R

Relationship

In Syft’s JSON output, relationships describe connections between artifacts (packages), files, and sources (what was scanned). For example, a relationship might indicate that a file is “contained-by” a package, or that one package “depends-on” another.

Why it matters: Relationships provide the graph structure of your software, showing not just what packages exist but how they’re connected. This is essential for understanding dependency chains and reachability analysis.
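
Walking the relationship graph is a simple filter over the array. The fragment below is hand-written in the shape of Syft's JSON relationships (parent/child/type fields); the IDs are invented, and you should check the schema of your own Syft output for the exact field names and relationship types:

```python
import json

# Trimmed, hand-written fragment in the shape of Syft JSON relationships.
# IDs are made up for illustration.
doc = json.loads("""
{
  "relationships": [
    {"parent": "pkg-a", "child": "file-1", "type": "contains"},
    {"parent": "pkg-a", "child": "file-2", "type": "contains"}
  ]
}
""")

# Collect every file contained by package "pkg-a".
contained = [r["child"] for r in doc["relationships"]
             if r["parent"] == "pkg-a" and r["type"] == "contains"]
print(contained)  # → ['file-1', 'file-2']
```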

Related documentation: Working with Syft JSON

S

SBOM

Software Bill of Materials (SBOM) is a comprehensive inventory of all components, libraries, and modules that make up a piece of software. Like a list of ingredients on food packaging, an SBOM provides transparency into what’s included in your software, enabling security analysis, license compliance, and supply chain risk management.

Why it matters: Syft generates SBOMs that you can use with Grype for vulnerability scanning, share with customers for transparency, or use for license compliance. SBOMs are becoming required by regulations and standards like Executive Order 14028.

External resources:

Related documentation: SBOM Generation

Severity

A classification of how serious a vulnerability is, typically based on CVSS scores. Common severity levels are Critical, High, Medium, Low, and Negligible (or None).

Why it matters: Grype reports vulnerability severity to help you prioritize remediation efforts. You can filter Grype output by severity (e.g., --fail-on high to fail CI builds for high or critical vulnerabilities).
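
A --fail-on style gate is just an ordered comparison over severities. A sketch of replicating one over scanner output; the match structure follows the general shape of Grype's JSON (matches[].vulnerability.severity), with sample data invented here:

```python
# Rank severities so they can be compared against a threshold.
ORDER = ["Negligible", "Low", "Medium", "High", "Critical"]

def fails(matches: list[dict], fail_on: str) -> bool:
    """Return True if any match is at or above the fail_on severity."""
    threshold = ORDER.index(fail_on)
    return any(
        m["vulnerability"]["severity"] in ORDER
        and ORDER.index(m["vulnerability"]["severity"]) >= threshold
        for m in matches
    )

sample = [
    {"vulnerability": {"id": "CVE-X", "severity": "Medium"}},
    {"vulnerability": {"id": "CVE-Y", "severity": "High"}},
]
print(fails(sample, "High"))  # → True: CVE-Y meets the threshold
```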

Related documentation: Vulnerability Scanning

Software supply chain

The software supply chain encompasses all the components, processes, and steps involved in creating, building, and delivering software. This includes source code, dependencies, build tools, CI/CD pipelines, and distribution mechanisms. Securing the software supply chain helps prevent attacks that target the development and delivery process.

Why it matters: Syft and Grype are key tools in supply chain security. Syft provides visibility into what’s in your software (SBOM), and Grype identifies known vulnerabilities, helping you secure each link in the chain.

Source

In Syft’s JSON output, the “source” object describes what was scanned—whether it was a container image, directory, file archive, or other input. It includes details like image name, digest, and tags.

Why it matters: The source information helps you correlate SBOMs with specific artifacts, especially important when tracking multiple image versions or builds.

Related documentation: Working with Syft JSON

SPDX

Software Package Data Exchange (SPDX) is an open standard for communicating software bill of materials information, including components, licenses, copyrights, and security references. SPDX is an ISO/IEC standard (ISO/IEC 5962:2021) and supports multiple formats including JSON, YAML, XML, and tag-value.

Why it matters: Syft can generate SBOMs in SPDX format (-o spdx-json or -o spdx-tag-value), which is widely supported by compliance tools and required by many organizations and regulations. Grype can also scan SPDX SBOMs for vulnerabilities.

External resources:

Related documentation: SBOM Generation

Squash

The “squashed” view of a container image represents the final filesystem that would be visible if you ran the container. It’s the result of applying all image layers in sequence, where later layers can override or delete files from earlier layers.

Why it matters: Syft scans the squashed view by default (what you actually run), but you can use --scope all-layers to also see packages that existed in intermediate layers but were deleted before the final image.

Related documentation: SBOM Generation

V

VEX

Vulnerability Exploitability eXchange (VEX) is a series of formats for communicating information about the exploitability status of vulnerabilities in software products. VEX documents allow software vendors to provide context about whether identified vulnerabilities actually affect their product, helping users prioritize remediation efforts.

Why it matters: Grype can consume VEX documents to suppress false positives or provide additional context about vulnerabilities. When Grype reports a vulnerability that doesn’t actually affect your application, you can create a VEX document explaining why it’s not exploitable.

External resources:

Related documentation: Vulnerability Scanning

Vulnerability

A security weakness, flaw, or defect in software that can be exploited by an attacker to perform unauthorized actions, compromise systems, steal data, or cause harm. Vulnerabilities can arise from coding errors, design flaws, misconfigurations, or outdated dependencies with known security issues.

Not all vulnerabilities affect all users of a package. Whether a vulnerability impacts you depends on:

  • The specific version you’re using
  • Which features or code paths you actually invoke
  • Your deployment configuration and environment
  • Whether compensating security controls are in place

Why it matters: Grype identifies vulnerabilities in the packages discovered by Syft, enabling you to find and fix security issues before they can be exploited. Not all vulnerabilities are equally serious—use severity ratings (CVSS) and exploitation probability (EPSS) to prioritize fixes. Understanding the context of a vulnerability helps you assess real risk rather than just responding to every CVE.

External resources:

Related documentation: Vulnerability Scanning

Vulnerability database

A repository of known security vulnerabilities, their affected software versions, severity scores, and remediation information. Vulnerability databases aggregate data from multiple sources like NVD, security advisories, and vendor bulletins.

Why it matters: Grype downloads and maintains a local vulnerability database that’s updated daily. The database quality directly impacts scan accuracy—Grype uses curated, high-quality data from multiple providers to minimize false positives and false negatives.

Related documentation: Vulnerability Database

Vulnerability scanner

A tool that identifies known security vulnerabilities in software by comparing components against vulnerability databases. Vulnerability scanners like Grype analyze software artifacts (container images, filesystems, or SBOMs) and report potential security issues that should be addressed.

Why it matters: Grype is a vulnerability scanner that works seamlessly with Syft-generated SBOMs. You can scan images directly with Grype, or generate an SBOM with Syft first and scan it separately, enabling workflows where SBOMs are generated once and scanned multiple times as new vulnerabilities are discovered.

Related documentation: Vulnerability Scanning