Guides
1 - SBOM Generation
1.1 - Getting Started
What is an SBOM?
A Software Bill of Materials (SBOM) is a detailed list of all libraries and components that make up software.
For developers, it’s crucial for tracking dependencies, identifying vulnerabilities, and ensuring license compliance.
For organizations, it provides transparency into the software supply chain to assess security risks.
Syft is a CLI tool for generating an SBOM from container images and filesystems.
Installation
Syft is provided as a single compiled executable and requires no external dependencies to run. Run the command for your platform to download the latest release.
curl -sSfL https://get.anchore.io/syft | sudo sh -s -- -b /usr/local/bin
brew install syft
nuget install Anchore.Syft
Check out the installation guide for the full list of official and community-maintained packaging options.
Find packages within a container image
Run syft against a small container image; the output will be a simple human-readable table of the installed packages found:
syft alpine:latest
NAME VERSION TYPE
alpine-baselayout 3.6.8-r1 apk
alpine-baselayout-data 3.6.8-r1 apk
alpine-keys 2.5-r0 apk
alpine-release 3.21.3-r0 apk
apk-tools 2.14.6-r3 apk
busybox 1.37.0-r12 apk
busybox-binsh 1.37.0-r12 apk
...
Learn more
Syft supports more than just containers. Learn more about Supported Scan Targets.
Create an industry-standard SBOM
This command will display the human-readable table and write SBOMs in both SPDX and CycloneDX formats, the two primary industry standards.
# scan alpine:latest and emit three outputs:
#   table          - a human-readable table to stdout
#   spdx-json      - SPDX-JSON formatted SBOM to a file
#   cyclonedx-json - CycloneDX-JSON formatted SBOM to a file
syft alpine:latest \
  -o table \
  -o spdx-json=alpine.spdx.json \
  -o cyclonedx-json=alpine.cdx.json
The same table will be displayed, and two SBOM files will be created in the current directory.
Learn more
Syft supports multiple SBOM output formats; find out more about Output Formats.
Examine the SBOM file contents
We can use jq to extract specific package data from the SBOM files (by default Syft outputs JSON on a single line,
but you can enable pretty-printing with the SYFT_FORMAT_PRETTY=true environment variable).
Both formats structure package information differently:
SPDX format:
jq '.packages[].name' alpine.spdx.json
CycloneDX format:
jq '.components[].name' alpine.cdx.json
Both commands show the packages that Syft found in the container image:
"alpine-baselayout"
"alpine-baselayout-data"
"alpine-keys"
"alpine-release"
"apk-tools"
"busybox"
"busybox-binsh"
...
By default, Syft shows only software visible in the final container image (the “squashed” representation).
To include software from all image layers, regardless of its presence in the final image, use --scope all-layers:
syft <image> --scope all-layers
More JSON examples
For more examples of working with Syft's JSON output using jq, see the jq recipes.
FAQ
Does Syft need internet access?
Only for downloading container images. By default, scanning works offline.
What about private container registries?
Syft supports authentication for private registries. See Private Registries.
Can I use Syft in CI/CD pipelines?
Absolutely! Syft is designed for automation. Generate SBOMs during builds and scan them for vulnerabilities.
What data does Syft send externally?
Nothing. Syft runs entirely locally and doesn’t send any data to external services.
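The CI/CD answer above can be sketched as a minimal pipeline step. This assumes Syft and Grype are installed on the runner; the image reference and severity threshold are illustrative, not prescribed:

```shell
# Gate a build on vulnerabilities found in the image's SBOM.
set -euo pipefail

IMAGE="registry.example.com/myapp:latest"  # hypothetical image reference

# Produce an SPDX SBOM as a build artifact
syft "$IMAGE" -o spdx-json=sbom.spdx.json

# Scan the SBOM with Grype; fail the job on high-severity findings
grype sbom:sbom.spdx.json --fail-on high
```

Keeping the SBOM as an artifact lets you re-scan it later as new vulnerabilities are published, without rebuilding the image.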
Next steps
Continue the guide
Next: Learn about all the different Supported Scan Targets Syft can analyze, from container images to local directories and archives.
Now that you've generated your first SBOM, here are additional resources:
- Scan for vulnerabilities: Use Grype to find security issues in your SBOMs
- Check licenses: Learn about License Scanning to understand dependency licenses
- Customize output: Explore different Output Formats for various tools and workflows
- Query SBOM data: Master Working with Syft JSON for advanced data extraction
1.2 - Supported Scan Targets
TL;DR
- Syft automatically detects the scan target type; simply pass it as an argument: syft <target>
- Supports container images (Docker/Podman/Containerd/registries), directories, files, and archives
- Use --from <type> to explicitly specify the scan target type (e.g., --from registry to bypass local daemons)
Syft can generate an SBOM from a variety of scan targets including container images, directories, files, and archives. In most cases, you can simply point Syft at what you want to analyze and it will automatically detect and catalog it correctly.
Catalog a container image from your local daemon or a remote registry:
syft alpine:latest
Catalog a directory (useful for analyzing source code or installed applications):
syft /path/to/project
Catalog a container image archive:
syft image.tar
To explicitly specify the scan target type, use the --from flag:
| --from ARG | Description |
|---|---|
| docker | Use images from the Docker daemon |
| podman | Use images from the Podman daemon |
| containerd | Use images from the Containerd daemon |
| docker-archive | Use a tarball from disk for archives created from docker save |
| oci-archive | Use a tarball from disk for OCI archives (from Skopeo or otherwise) |
| oci-dir | Read directly from a path on disk for OCI layout directories (from Skopeo or otherwise) |
| singularity | Read directly from a Singularity Image Format (SIF) container file on disk |
| dir | Read directly from a path on disk (any directory) |
| file | Read directly from a path on disk (any single file) |
| registry | Pull the image directly from a registry (bypassing any container runtimes) |
Instead of using the --from flag explicitly, you can:
- provide no hint and let Syft automatically detect the scan target type based on the input provided
- provide the scan target type as a URI scheme in the target argument (e.g., docker:alpine:latest, oci-archive:/path/to/image.tar, dir:/path/to/dir)
Scan Target-Specific Behaviors
Container Image Scan Targets
When working with container images, Syft applies the following defaults and behaviors:
- Registry: If no registry is specified in the image reference (e.g. alpine:latest instead of docker.io/alpine:latest), Syft assumes docker.io
- Platform: For unqualified image references (tags) or multi-arch images pointing to an index (not a manifest), Syft analyzes the linux/amd64 manifest by default. Use the --platform flag to target a different platform.
When you provide an image reference without specifying a scan target type (i.e. no --from flag), Syft attempts to resolve the image using the following scan targets in order:
- Docker daemon
- Podman daemon
- Containerd daemon
- Direct registry access
For example, when you run syft alpine:latest, Syft will first check your local Docker daemon for the image.
If Docker isn’t available, it tries Podman, then Containerd, and finally attempts to pull directly from the registry.
You can override this default behavior with the default-image-pull-source configuration option to always prefer a specific scan target.
See Configuration for more details.
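As a sketch, the probe order above can be overridden for a single run via the environment variable corresponding to the default-image-pull-source option. The SYFT_DEFAULT_IMAGE_PULL_SOURCE name assumes Syft's usual option-to-environment-variable mapping; verify it against your Syft version's configuration reference:

```shell
# Always resolve images via the Docker daemon instead of probing
# Docker -> Podman -> Containerd -> registry
SYFT_DEFAULT_IMAGE_PULL_SOURCE=docker syft alpine:latest

# The equivalent persistent setting in .syft.yaml:
#   default-image-pull-source: docker
```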
Directory Scan Targets
When you provide a directory path as the scan target, Syft recursively scans the directory tree to catalog installed software packages and files.
When you point Syft at a directory (especially system directories like /), it automatically skips certain filesystem types to improve
scan performance and avoid indexing areas that don’t contain installed software packages.
Filesystems always skipped
- proc / procfs - Virtual filesystem for process information
- sysfs - Virtual filesystem for kernel and device information
- devfs / devtmpfs / udev - Device filesystems
Filesystems conditionally skipped
tmpfs filesystems are only skipped when mounted at these specific locations:
- /dev - Device files
- /sys - System information
- /run and /var/run - Runtime data and process IDs
- /var/lock - Lock files
These paths are excluded because they contain virtual or temporary runtime data rather than installed software packages. Skipping them significantly improves scan performance and enables you to catalog entire system root directories without getting stuck scanning thousands of irrelevant entries.
Syft identifies these filesystems by reading your system’s mount table (/proc/self/mountinfo on Linux).
When a directory matches one of these criteria, the entire directory tree under that mount point is skipped.
File types excluded
These file types are never indexed during directory scans:
- Character devices
- Block devices
- Sockets
- FIFOs (named pipes)
- Irregular files
Regular files, directories, and symbolic links are always processed.
Archive Scan Targets
Syft automatically detects and unpacks common archive formats, then catalogs their contents.
If an archive is a container image archive (from docker save or skopeo copy), Syft treats it as a container image.
Supported archive formats:
Standard archives:
- .zip
- .tar (uncompressed)
- .rar (read-only extraction)
Compressed tar variants:
- .tar.gz / .tgz
- .tar.bz2 / .tbz2
- .tar.br / .tbr (brotli)
- .tar.lz4 / .tlz4
- .tar.sz / .tsz (snappy)
- .tar.xz / .txz
- .tar.zst / .tzst (zstandard)
Standalone compression formats (extracted if containing tar):
- .gz (gzip)
- .bz2 (bzip2)
- .br (brotli)
- .lz4
- .sz (snappy)
- .xz
- .zst / .zstd (zstandard)
OCI Archives and Layout Scan Targets
Syft automatically detects OCI archive and directory structures (including OCI layouts and SIF files) and catalogs them accordingly.
OCI archives and layouts are particularly useful for CI/CD pipelines, as they allow you to catalog images, scan for vulnerabilities, or perform other checks without publishing to a registry. This provides a powerful pattern for build-time gating.
Create OCI scan targets without a registry
OCI archive from an image:
skopeo copy \
docker://alpine@sha256:eafc1edb577d2e9b458664a15f23ea1c370214193226069eb22921169fc7e43f \
oci-archive:alpine.tar
OCI layout directory from an image:
skopeo copy \
docker://alpine@sha256:eafc1edb577d2e9b458664a15f23ea1c370214193226069eb22921169fc7e43f \
oci:alpine
Container image archive from an image:
docker save -o alpine.tar alpine:latest
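The artifacts created above can then be cataloged directly in a pipeline, no registry required. The URI schemes match the scan target types listed earlier in this section:

```shell
# Scan the OCI archive produced by skopeo
syft oci-archive:alpine.tar -o spdx-json=alpine.spdx.json

# Scan the OCI layout directory
syft oci-dir:alpine

# Scan the tarball produced by docker save
syft docker-archive:alpine.tar
```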
Container Runtime Configuration
Image Availability and Authentication
When using container runtime scan targets (Docker, Podman, or Containerd):
- Missing images: If an image doesn't exist locally in the container runtime, Syft attempts to pull it from the registry via the runtime
- Private images: You must be logged in to the registry via the container runtime (e.g., docker login) or have credentials configured for direct registry access. See Authentication with Private Registries for more details.
Environment Variables
Syft respects the following environment variables for each container runtime:
| Scan Target | Environment Variables | Description |
|---|---|---|
| Docker | DOCKER_HOST | Docker daemon socket/host address (supports ssh:// for remote connections) |
| | DOCKER_TLS_VERIFY | Enable TLS verification (auto-sets DOCKER_CERT_PATH if not set) |
| | DOCKER_CERT_PATH | Path to TLS certificates (defaults to ~/.docker if DOCKER_TLS_VERIFY is set) |
| | DOCKER_CONFIG | Override default Docker config directory |
| Podman | CONTAINER_HOST | Podman socket/host address (e.g., unix:///run/podman/podman.sock or ssh://user@host/path/to/socket) |
| | CONTAINER_SSHKEY | SSH identity file path for remote Podman connections |
| | CONTAINER_PASSPHRASE | Passphrase for the SSH key |
| Containerd | CONTAINERD_ADDRESS | Containerd socket address (overrides default /run/containerd/containerd.sock) |
| | CONTAINERD_NAMESPACE | Containerd namespace (defaults to default) |
Podman Daemon Requirements
Unlike Docker Desktop, which typically auto-starts, Podman requires explicitly starting the service.
Syft attempts to connect to Podman using the following methods in order:
1. Unix socket (primary)
   - Checks the CONTAINER_HOST environment variable first
   - Falls back to Podman config files
   - Finally tries the default socket locations ($XDG_RUNTIME_DIR/podman/podman.sock and /run/podman/podman.sock)
2. SSH (fallback)
   - Configured via the CONTAINER_HOST, CONTAINER_SSHKEY, and CONTAINER_PASSPHRASE environment variables
   - Used for remote Podman instances
Direct Registry Access
The registry scan target bypasses container runtimes entirely and pulls images directly from the registry.
Credentials are resolved in the following order:
- Syft first attempts to use default Docker credentials from ~/.docker/config.json if they exist
- If default credentials are not available, you can provide credentials via environment variables. See Authentication with Private Registries for more details.
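A sketch of direct registry access follows. The registry host is illustrative, and the SYFT_REGISTRY_AUTH_* variable names should be checked against the Authentication with Private Registries page for your Syft version:

```shell
# Pull directly from the registry, bypassing any local daemons
syft --from registry registry.example.com/team/app:1.0.0

# Credentials can be supplied via environment variables, e.g.:
SYFT_REGISTRY_AUTH_USERNAME=ci-bot \
SYFT_REGISTRY_AUTH_PASSWORD="$REGISTRY_TOKEN" \
syft --from registry registry.example.com/team/app:1.0.0
```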
Troubleshooting
Image not found in local daemon
If Syft reports an image doesn’t exist but you know it’s available:
- Check which daemon has the image: Run docker images, podman images, or nerdctl images to see where the image exists
- Specify the scan target type explicitly: Use --from docker, --from podman, or --from containerd to target the correct daemon
- Pull from the registry: Use --from registry to bypass local daemons and pull directly
Authentication failures with private registries
If you get authentication errors when scanning private images:
- For daemon scan targets: Ensure you're logged in via the daemon (e.g., docker login registry.example.com)
- For the registry scan target: Configure credentials in ~/.docker/config.json or use environment variables (see Private Registries)
- Verify credentials: Check that your credentials haven't expired and have appropriate permissions
Podman connection issues
If Syft can’t connect to Podman:
- Start the service: Run podman system service to start the Podman socket
- Check the socket location: Verify the socket exists at $XDG_RUNTIME_DIR/podman/podman.sock or /run/podman/podman.sock
- Use an environment variable: Set CONTAINER_HOST to point to your Podman socket location
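Putting those steps together (the socket path may differ on your system):

```shell
# Start the Podman API service on the default user socket;
# --time=0 keeps it running until explicitly stopped
podman system service --time=0 &

# Verify the socket exists
ls "${XDG_RUNTIME_DIR}/podman/podman.sock"

# Point Syft at the socket explicitly
CONTAINER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock" syft alpine:latest
```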
Slow directory scans
If scanning a directory takes too long:
- Exclude unnecessary paths: Use file selection options to skip build artifacts, caches, or virtual environments (see File Selection)
- Avoid system directories: Scanning / includes all mounted filesystems; consider scanning specific application directories instead
- Check mount points: Ensure you're not accidentally scanning network mounts or remote filesystems
Next steps
Continue the guide
Next: Learn about Output Formats to understand how to generate SBOMs in different standard formats like SPDX and CycloneDX.
Additional resources:
- Authenticate with registries: Set up Private Registry Authentication for scanning private images
- Control what gets scanned: Use File Selection to include or exclude specific files
- Configure defaults: See Configuration for setting default source preferences
1.3 - Output Formats
TL;DR
- Choose a format with -o <format>: table (default), json (complete data), spdx-json / spdx-tag-value, cyclonedx-json / cyclonedx-xml
- Write to a file: -o json=sbom.json
- Generate multiple formats at once: use multiple -o flags
Syft supports multiple output formats to fit different workflows and requirements by using the -o (or --output) flag:
syft <image> -o <format>
Available formats
| -o ARG | Description |
|---|---|
| table | A columnar summary (default) |
| json | Native output for Syft; use this to get as much information out of Syft as possible! (see the JSON schema) |
| purls | A line-separated list of Package URLs (PURLs) for all discovered packages |
| github-json | A JSON report conforming to GitHub's dependency snapshot format |
| template | Lets you specify a custom output format via Go templates (see Templates for more detail) |
| text | A row-oriented, human-and-machine-friendly output |
CycloneDX
CycloneDX is an OWASP-maintained industry standard SBOM format.
| -o ARG | Description |
|---|---|
| cyclonedx-json | A JSON report conforming to the CycloneDX specification |
| cyclonedx-xml | An XML report conforming to the CycloneDX specification |
SPDX
SPDX (Software Package Data Exchange) is an ISO/IEC 5962:2021 industry standard SBOM format.
| -o ARG | Description |
|---|---|
| spdx-json | A JSON report conforming to the SPDX JSON Schema |
| spdx-tag-value | A tag-value formatted report conforming to the SPDX specification |
Format versions
Some output formats support multiple schema versions. Specify a version by appending @<version> to the format name:
syft <source> -o <format>@<version>
Examples:
# Use CycloneDX JSON version 1.4
syft <source> -o cyclonedx-json@1.4
# Use SPDX JSON version 2.2
syft <source> -o spdx-json@2.2
# Default to latest version if not specified
syft <source> -o cyclonedx-json
Formats with version support:
- cyclonedx-json: 1.2, 1.3, 1.4, 1.5, 1.6
- cyclonedx-xml: 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6
- spdx-json: 2.2, 2.3
- spdx-tag-value: 2.1, 2.2, 2.3
When no version is specified, Syft uses the latest supported version of the format.
Format examples
table:
NAME VERSION TYPE
busybox 1.37.0 binary
json:
{
"artifacts": [
{
"id": "fe44cee3fe279dfa",
"name": "busybox",
"version": "1.37.0",
"type": "binary",
"foundBy": "binary-classifier-cataloger",
"locations": [
{
"path": "/bin/[",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05",
"accessPath": "/bin/busybox",
"annotations": {
"evidence": "primary"
}
}
],
"licenses": [],
"language": "",
"cpes": [
{
"cpe": "cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*",
"source": "nvd-cpe-dictionary"
}
],
"purl": "pkg:generic/busybox@1.37.0",
"metadataType": "binary-signature",
"metadata": {
"matches": [
{
"classifier": "busybox-binary",
"location": {
"path": "/bin/[",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05",
"accessPath": "/bin/busybox",
"annotations": {
"evidence": "primary"
}
}
}
]
}
}
],
"artifactRelationships": [
{
"parent": "396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
"child": "fe44cee3fe279dfa",
"type": "contains"
},
{
"parent": "fe44cee3fe279dfa",
"child": "3a6b3df220691408",
"type": "evident-by",
"metadata": {
"kind": "primary"
}
}
],
"files": [
{
"id": "3a6b3df220691408",
"location": {
"path": "/bin/[",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"metadata": {
"mode": 755,
"type": "RegularFile",
"userID": 0,
"groupID": 0,
"mimeType": "application/x-sharedlib",
"size": 1119808
},
"digests": [
{
"algorithm": "sha1",
"value": "5231d5d79cb52f3581f9c137396e7d9df7aa6d6b"
},
{
"algorithm": "sha256",
"value": "f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5"
}
],
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": true,
"importedLibraries": ["libm.so.6", "libresolv.so.2", "libc.so.6"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": false,
"nx": true,
"relRO": "partial",
"pie": true,
"dso": true,
"safeStack": false
}
}
},
{
"id": "eab1ede6d517d844",
"location": {
"path": "/bin/getconf",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": true,
"importedLibraries": ["libc.so.6"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": false,
"nx": true,
"relRO": "full",
"pie": true,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "9c61e609f3b76f4a",
"location": {
"path": "/lib/ld-linux-aarch64.so.1",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": true,
"importedLibraries": [],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": true,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "456b7910a9499337",
"location": {
"path": "/lib/libc.so.6",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": true,
"importedLibraries": ["ld-linux-aarch64.so.1"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": true,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "9376910c472a1ddd",
"location": {
"path": "/lib/libm.so.6",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": false,
"importedLibraries": ["libc.so.6", "ld-linux-aarch64.so.1"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": true,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "383904be0603bd22",
"location": {
"path": "/lib/libnss_compat.so.2",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": false,
"importedLibraries": ["libc.so.6", "ld-linux-aarch64.so.1"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": true,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "324828ff45e1fc0b",
"location": {
"path": "/lib/libnss_dns.so.2",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": false,
"importedLibraries": ["libc.so.6"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": false,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "9a791682497737bd",
"location": {
"path": "/lib/libnss_files.so.2",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": false,
"importedLibraries": ["libc.so.6"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": false,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "c6f668db34996e30",
"location": {
"path": "/lib/libnss_hesiod.so.2",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": false,
"importedLibraries": ["libresolv.so.2", "libc.so.6", "ld-linux-aarch64.so.1"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": true,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "d5aa00430d994aa8",
"location": {
"path": "/lib/libpthread.so.0",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": false,
"importedLibraries": ["libc.so.6"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": false,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
},
{
"id": "5804ce9e713c7582",
"location": {
"path": "/lib/libresolv.so.2",
"layerID": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"executable": {
"format": "elf",
"hasExports": true,
"hasEntrypoint": false,
"importedLibraries": ["libc.so.6", "ld-linux-aarch64.so.1"],
"elfSecurityFeatures": {
"symbolTableStripped": true,
"stackCanary": true,
"nx": true,
"relRO": "full",
"pie": false,
"dso": true,
"safeStack": false
}
},
"unknowns": ["unknowns-labeler: no package identified in executable file"]
}
],
"source": {
"id": "396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
"name": "busybox",
"version": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
"type": "image",
"metadata": {
"userInput": "busybox:latest",
"imageID": "sha256:eade5be814e817df411f138aa7711c3f81595185eb54b3257fd19f6c4966b285",
"manifestDigest": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"tags": [],
"imageSize": 4170774,
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05",
"size": 4170774
}
],
"manifest": "ewoJInNjaGVtYVZlcnNpb24iOiAyLAoJIm1lZGlhVHlwZSI6ICJhcHBsaWNhdGlvbi92bmQub2NpLmltYWdlLm1hbmlmZXN0LnYxK2pzb24iLAoJImNvbmZpZyI6IHsKCQkibWVkaWFUeXBlIjogImFwcGxpY2F0aW9uL3ZuZC5vY2kuaW1hZ2UuY29uZmlnLnYxK2pzb24iLAoJCSJkaWdlc3QiOiAic2hhMjU2OmVhZGU1YmU4MTRlODE3ZGY0MTFmMTM4YWE3NzExYzNmODE1OTUxODVlYjU0YjMyNTdmZDE5ZjZjNDk2NmIyODUiLAoJCSJzaXplIjogNDc3Cgl9LAoJImxheWVycyI6IFsKCQl7CgkJCSJtZWRpYVR5cGUiOiAiYXBwbGljYXRpb24vdm5kLm9jaS5pbWFnZS5sYXllci52MS50YXIrZ3ppcCIsCgkJCSJkaWdlc3QiOiAic2hhMjU2OjViYzUxYjg3ZDRlY2NlMDYyOWM0ODg2NzRlMjU4MGEzZDU4ZDI5MzdkNzBjODFkNGY2ZDQ4NWQ0M2UwNmViMDYiLAoJCQkic2l6ZSI6IDE5MDI5OTEKCQl9CgldLAoJImFubm90YXRpb25zIjogewoJCSJvcmcub3BlbmNvbnRhaW5lcnMuaW1hZ2UudXJsIjogImh0dHBzOi8vZ2l0aHViLmNvbS9kb2NrZXItbGlicmFyeS9idXN5Ym94IiwKCQkib3JnLm9wZW5jb250YWluZXJzLmltYWdlLnZlcnNpb24iOiAiMS4zNy4wLWdsaWJjIgoJfQp9Cg==",
"config": "ewoJImNvbmZpZyI6IHsKCQkiQ21kIjogWwoJCQkic2giCgkJXSwKCQkiRW52IjogWwoJCQkiUEFUSD0vdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluOi9zYmluOi9iaW4iCgkJXQoJfSwKCSJjcmVhdGVkIjogIjIwMjQtMDktMjZUMjE6MzE6NDJaIiwKCSJoaXN0b3J5IjogWwoJCXsKCQkJImNyZWF0ZWQiOiAiMjAyNC0wOS0yNlQyMTozMTo0MloiLAoJCQkiY3JlYXRlZF9ieSI6ICJCdXN5Qm94IDEuMzcuMCAoZ2xpYmMpLCBEZWJpYW4gMTMiCgkJfQoJXSwKCSJyb290ZnMiOiB7CgkJInR5cGUiOiAibGF5ZXJzIiwKCQkiZGlmZl9pZHMiOiBbCgkJCSJzaGEyNTY6MWEzODI3NDBjNTY0MmU0NjA3NDEyYTM0MWRmMzcxNmMyMjI4N2ZmYTZhZGY5MmVhZmY1NGUwNzlhMTkwMmYwNSIKCQldCgl9LAoJImFyY2hpdGVjdHVyZSI6ICJhcm02NCIsCgkib3MiOiAibGludXgiLAoJInZhcmlhbnQiOiAidjgiCn0K",
"repoDigests": [
"index.docker.io/library/busybox@sha256:e3652a00a2fabd16ce889f0aa32c38eec347b997e73bd09e69c962ec7f8732ee"
],
"architecture": "arm64",
"os": "linux"
}
},
"distro": {
"prettyName": "BusyBox v1.37.0",
"name": "busybox",
"id": "busybox",
"idLike": ["busybox"],
"version": "1.37.0",
"versionID": "1.37.0"
},
"descriptor": {
"name": "syft",
"version": "1.38.0",
"configuration": {
"catalogers": {
"requested": {
"default": ["image", "file"]
},
"used": [
"alpm-db-cataloger",
"apk-db-cataloger",
"binary-classifier-cataloger",
"bitnami-cataloger",
"cargo-auditable-binary-cataloger",
"conan-info-cataloger",
"dotnet-deps-binary-cataloger",
"dotnet-packages-lock-cataloger",
"dpkg-db-cataloger",
"elf-binary-package-cataloger",
"file-content-cataloger",
"file-digest-cataloger",
"file-executable-cataloger",
"file-metadata-cataloger",
"gguf-cataloger",
"go-module-binary-cataloger",
"graalvm-native-image-cataloger",
"homebrew-cataloger",
"java-archive-cataloger",
"java-jvm-cataloger",
"javascript-package-cataloger",
"linux-kernel-cataloger",
"lua-rock-cataloger",
"nix-cataloger",
"pe-binary-package-cataloger",
"php-composer-installed-cataloger",
"php-interpreter-cataloger",
"php-pear-serialized-cataloger",
"portage-cataloger",
"python-installed-package-cataloger",
"r-package-cataloger",
"rpm-db-cataloger",
"ruby-installed-gemspec-cataloger",
"snap-cataloger",
"wordpress-plugins-cataloger"
]
},
"data-generation": {
"generate-cpes": true
},
"files": {
"content": {
"globs": null,
"skip-files-above-size": 0
},
"hashers": ["sha-1", "sha-256"],
"selection": "owned-by-package"
},
"licenses": {
"coverage": 75,
"include-content": "none"
},
"packages": {
"binary": [
"python-binary",
"python-binary-lib",
"pypy-binary-lib",
"go-binary",
"julia-binary",
"helm",
"redis-binary",
"nodejs-binary",
"go-binary-hint",
"busybox-binary",
"util-linux-binary",
"haproxy-binary",
"perl-binary",
"php-composer-binary",
"httpd-binary",
"memcached-binary",
"traefik-binary",
"arangodb-binary",
"postgresql-binary",
"mysql-binary",
"mysql-binary",
"mysql-binary",
"xtrabackup-binary",
"mariadb-binary",
"rust-standard-library-linux",
"rust-standard-library-macos",
"ruby-binary",
"erlang-binary",
"erlang-alpine-binary",
"erlang-library",
"swipl-binary",
"dart-binary",
"haskell-ghc-binary",
"haskell-cabal-binary",
"haskell-stack-binary",
"consul-binary",
"hashicorp-vault-binary",
"nginx-binary",
"bash-binary",
"openssl-binary",
"gcc-binary",
"fluent-bit-binary",
"wordpress-cli-binary",
"curl-binary",
"lighttpd-binary",
"proftpd-binary",
"zstd-binary",
"xz-binary",
"gzip-binary",
"sqlcipher-binary",
"jq-binary",
"chrome-binary",
"ffmpeg-binary",
"ffmpeg-library",
"ffmpeg-library",
"elixir-binary",
"elixir-library",
"java-binary",
"java-jdb-binary"
],
"dotnet": {
"dep-packages-must-claim-dll": true,
"dep-packages-must-have-dll": false,
"propagate-dll-claims-to-parents": true,
"relax-dll-claims-when-bundling-detected": true
},
"golang": {
"local-mod-cache-dir": "/root/go/pkg/mod",
"local-vendor-dir": "",
"main-module-version": {
"from-build-settings": true,
"from-contents": false,
"from-ld-flags": true
},
"proxies": ["https://proxy.golang.org", "direct"],
"search-local-mod-cache-licenses": false,
"search-local-vendor-licenses": false,
"search-remote-licenses": false
},
"java-archive": {
"include-indexed-archives": true,
"include-unindexed-archives": false,
"maven-base-url": "https://repo1.maven.org/maven2",
"maven-localrepository-dir": "/root/.m2/repository",
"max-parent-recursive-depth": 0,
"resolve-transitive-dependencies": false,
"use-maven-localrepository": false,
"use-network": false
},
"javascript": {
"include-dev-dependencies": false,
"npm-base-url": "https://registry.npmjs.org",
"search-remote-licenses": false
},
"linux-kernel": {
"catalog-modules": true
},
"nix": {
"capture-owned-files": false
},
"python": {
"guess-unpinned-requirements": false,
"pypi-base-url": "https://pypi.org/pypi",
"search-remote-licenses": false
}
},
"relationships": {
"exclude-binary-packages-with-file-ownership-overlap": true,
"package-file-ownership": true,
"package-file-ownership-overlap": true
},
"search": {
"scope": "squashed"
}
}
},
"schema": {
"version": "16.1.0",
"url": "https://raw.githubusercontent.com/anchore/syft/main/schema/json/schema-16.1.0.json"
}
}
purls:
pkg:generic/busybox@1.37.0
cyclonedx-json:
{
"$schema": "http://cyclonedx.org/schema/bom-1.6.schema.json",
"bomFormat": "CycloneDX",
"specVersion": "1.6",
"serialNumber": "urn:uuid:8831f243-6dcd-4bdd-a2b0-562480154c9b",
"version": 1,
"metadata": {
"timestamp": "2025-11-21T20:47:28Z",
"tools": {
"components": [
{
"type": "application",
"author": "anchore",
"name": "syft",
"version": "1.38.0"
}
]
},
"component": {
"bom-ref": "e98d5f0296649c51",
"type": "container",
"name": "busybox",
"version": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b"
}
},
"components": [
{
"bom-ref": "pkg:generic/busybox@1.37.0?package-id=fe44cee3fe279dfa",
"type": "application",
"name": "busybox",
"version": "1.37.0",
"cpe": "cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*",
"purl": "pkg:generic/busybox@1.37.0",
"properties": [
{
"name": "syft:package:foundBy",
"value": "binary-classifier-cataloger"
},
{
"name": "syft:package:type",
"value": "binary"
},
{
"name": "syft:package:metadataType",
"value": "binary-signature"
},
{
"name": "syft:location:0:layerID",
"value": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"name": "syft:location:0:path",
"value": "/bin/["
}
]
},
{
"bom-ref": "os:busybox@1.37.0",
"type": "operating-system",
"name": "busybox",
"version": "1.37.0",
"description": "BusyBox v1.37.0",
"swid": {
"tagId": "busybox",
"name": "busybox",
"version": "1.37.0"
},
"properties": [
{
"name": "syft:distro:extendedSupport",
"value": "false"
},
{
"name": "syft:distro:id",
"value": "busybox"
},
{
"name": "syft:distro:idLike:0",
"value": "busybox"
},
{
"name": "syft:distro:prettyName",
"value": "BusyBox v1.37.0"
},
{
"name": "syft:distro:versionID",
"value": "1.37.0"
}
]
},
{
"bom-ref": "3a6b3df220691408",
"type": "file",
"name": "/bin/[",
"hashes": [
{
"alg": "SHA-1",
"content": "5231d5d79cb52f3581f9c137396e7d9df7aa6d6b"
},
{
"alg": "SHA-256",
"content": "f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5"
}
]
}
]
}
<?xml version="1.0" encoding="UTF-8"?>
<bom xmlns="http://cyclonedx.org/schema/bom/1.6" serialNumber="urn:uuid:33ad49e5-992c-4f1e-be05-68f4095b764f" version="1">
<metadata>
<timestamp>2025-11-21T20:47:29Z</timestamp>
<tools>
<components>
<component type="application">
<author>anchore</author>
<name>syft</name>
<version>1.38.0</version>
</component>
</components>
</tools>
<component bom-ref="e98d5f0296649c51" type="container">
<name>busybox</name>
<version>sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b</version>
</component>
</metadata>
<components>
<component bom-ref="pkg:generic/busybox@1.37.0?package-id=fe44cee3fe279dfa" type="application">
<name>busybox</name>
<version>1.37.0</version>
<cpe>cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*</cpe>
<purl>pkg:generic/busybox@1.37.0</purl>
<properties>
<property name="syft:package:foundBy">binary-classifier-cataloger</property>
<property name="syft:package:type">binary</property>
<property name="syft:package:metadataType">binary-signature</property>
<property name="syft:location:0:layerID">sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05</property>
<property name="syft:location:0:path">/bin/[</property>
</properties>
</component>
<component bom-ref="os:busybox@1.37.0" type="operating-system">
<name>busybox</name>
<version>1.37.0</version>
<description>BusyBox v1.37.0</description>
<swid tagId="busybox" name="busybox" version="1.37.0"></swid>
<properties>
<property name="syft:distro:extendedSupport">false</property>
<property name="syft:distro:id">busybox</property>
<property name="syft:distro:idLike:0">busybox</property>
<property name="syft:distro:prettyName">BusyBox v1.37.0</property>
<property name="syft:distro:versionID">1.37.0</property>
</properties>
</component>
<component bom-ref="3a6b3df220691408" type="file">
<name>/bin/[</name>
<hashes>
<hash alg="SHA-1">5231d5d79cb52f3581f9c137396e7d9df7aa6d6b</hash>
<hash alg="SHA-256">f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5</hash>
</hashes>
</component>
</components>
</bom>
{
"spdxVersion": "SPDX-2.3",
"dataLicense": "CC0-1.0",
"SPDXID": "SPDXRef-DOCUMENT",
"name": "busybox",
"documentNamespace": "https://anchore.com/syft/image/busybox-9730898a-4b77-4396-b39c-e08a872ec19f",
"creationInfo": {
"licenseListVersion": "3.27",
"creators": ["Organization: Anchore, Inc", "Tool: syft-1.38.0"],
"created": "2025-11-21T20:47:30Z"
},
"packages": [
{
"name": "busybox",
"SPDXID": "SPDXRef-Package-binary-busybox-fe44cee3fe279dfa",
"versionInfo": "1.37.0",
"supplier": "NOASSERTION",
"downloadLocation": "NOASSERTION",
"filesAnalyzed": false,
"sourceInfo": "acquired package info from the following paths: /bin/[",
"licenseConcluded": "NOASSERTION",
"licenseDeclared": "NOASSERTION",
"copyrightText": "NOASSERTION",
"externalRefs": [
{
"referenceCategory": "SECURITY",
"referenceType": "cpe23Type",
"referenceLocator": "cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*"
},
{
"referenceCategory": "PACKAGE-MANAGER",
"referenceType": "purl",
"referenceLocator": "pkg:generic/busybox@1.37.0"
}
]
},
{
"name": "busybox",
"SPDXID": "SPDXRef-DocumentRoot-Image-busybox",
"versionInfo": "sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b",
"supplier": "NOASSERTION",
"downloadLocation": "NOASSERTION",
"filesAnalyzed": false,
"checksums": [
{
"algorithm": "SHA256",
"checksumValue": "396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b"
}
],
"licenseConcluded": "NOASSERTION",
"licenseDeclared": "NOASSERTION",
"copyrightText": "NOASSERTION",
"externalRefs": [
{
"referenceCategory": "PACKAGE-MANAGER",
"referenceType": "purl",
"referenceLocator": "pkg:oci/busybox@sha256%3A396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b?arch=arm64&tag=latest"
}
],
"primaryPackagePurpose": "CONTAINER"
}
],
"files": [
{
"fileName": "bin/[",
"SPDXID": "SPDXRef-File-bin---3a6b3df220691408",
"fileTypes": ["APPLICATION", "BINARY"],
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "5231d5d79cb52f3581f9c137396e7d9df7aa6d6b"
},
{
"algorithm": "SHA256",
"checksumValue": "f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "bin/getconf",
"SPDXID": "SPDXRef-File-bin-getconf-eab1ede6d517d844",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/ld-linux-aarch64.so.1",
"SPDXID": "SPDXRef-File-lib-ld-linux-aarch64.so.1-9c61e609f3b76f4a",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libc.so.6",
"SPDXID": "SPDXRef-File-lib-libc.so.6-456b7910a9499337",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libm.so.6",
"SPDXID": "SPDXRef-File-lib-libm.so.6-9376910c472a1ddd",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libnss_compat.so.2",
"SPDXID": "SPDXRef-File-lib-libnss-compat.so.2-383904be0603bd22",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libnss_dns.so.2",
"SPDXID": "SPDXRef-File-lib-libnss-dns.so.2-324828ff45e1fc0b",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libnss_files.so.2",
"SPDXID": "SPDXRef-File-lib-libnss-files.so.2-9a791682497737bd",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libnss_hesiod.so.2",
"SPDXID": "SPDXRef-File-lib-libnss-hesiod.so.2-c6f668db34996e30",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libpthread.so.0",
"SPDXID": "SPDXRef-File-lib-libpthread.so.0-d5aa00430d994aa8",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
{
"fileName": "lib/libresolv.so.2",
"SPDXID": "SPDXRef-File-lib-libresolv.so.2-5804ce9e713c7582",
"checksums": [
{
"algorithm": "SHA1",
"checksumValue": "0000000000000000000000000000000000000000"
}
],
"licenseConcluded": "NOASSERTION",
"licenseInfoInFiles": ["NOASSERTION"],
"copyrightText": "NOASSERTION",
"comment": "layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
}
],
"relationships": [
{
"spdxElementId": "SPDXRef-Package-binary-busybox-fe44cee3fe279dfa",
"relatedSpdxElement": "SPDXRef-File-bin---3a6b3df220691408",
"relationshipType": "OTHER",
"comment": "evident-by: indicates the package's existence is evident by the given file"
},
{
"spdxElementId": "SPDXRef-DocumentRoot-Image-busybox",
"relatedSpdxElement": "SPDXRef-Package-binary-busybox-fe44cee3fe279dfa",
"relationshipType": "CONTAINS"
},
{
"spdxElementId": "SPDXRef-DOCUMENT",
"relatedSpdxElement": "SPDXRef-DocumentRoot-Image-busybox",
"relationshipType": "DESCRIBES"
}
]
}
SPDXVersion: SPDX-2.3
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: busybox
DocumentNamespace: https://anchore.com/syft/image/busybox-04c37b1f-d42c-4c7b-847b-747d25fb694c
LicenseListVersion: 3.27
Creator: Organization: Anchore, Inc
Creator: Tool: syft-1.38.0
Created: 2025-11-21T20:47:30Z
##### Unpackaged files
FileName: bin/[
SPDXID: SPDXRef-File-bin---3a6b3df220691408
FileType: APPLICATION
FileType: BINARY
FileChecksum: SHA1: 5231d5d79cb52f3581f9c137396e7d9df7aa6d6b
FileChecksum: SHA256: f19470457088612bc3285404783d9f93533d917e869050aca13a4139b937c0a5
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: bin/getconf
SPDXID: SPDXRef-File-bin-getconf-eab1ede6d517d844
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/ld-linux-aarch64.so.1
SPDXID: SPDXRef-File-lib-ld-linux-aarch64.so.1-9c61e609f3b76f4a
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libc.so.6
SPDXID: SPDXRef-File-lib-libc.so.6-456b7910a9499337
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libm.so.6
SPDXID: SPDXRef-File-lib-libm.so.6-9376910c472a1ddd
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libnss_compat.so.2
SPDXID: SPDXRef-File-lib-libnss-compat.so.2-383904be0603bd22
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libnss_dns.so.2
SPDXID: SPDXRef-File-lib-libnss-dns.so.2-324828ff45e1fc0b
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libnss_files.so.2
SPDXID: SPDXRef-File-lib-libnss-files.so.2-9a791682497737bd
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libnss_hesiod.so.2
SPDXID: SPDXRef-File-lib-libnss-hesiod.so.2-c6f668db34996e30
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libpthread.so.0
SPDXID: SPDXRef-File-lib-libpthread.so.0-d5aa00430d994aa8
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
FileName: lib/libresolv.so.2
SPDXID: SPDXRef-File-lib-libresolv.so.2-5804ce9e713c7582
FileChecksum: SHA1: 0000000000000000000000000000000000000000
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NOASSERTION
FileCopyrightText: NOASSERTION
FileComment: layerID: sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05
##### Package: busybox
PackageName: busybox
SPDXID: SPDXRef-DocumentRoot-Image-busybox
PackageVersion: sha256:396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b
PackageSupplier: NOASSERTION
PackageDownloadLocation: NOASSERTION
PrimaryPackagePurpose: CONTAINER
FilesAnalyzed: false
PackageChecksum: SHA256: 396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b
PackageLicenseConcluded: NOASSERTION
PackageLicenseDeclared: NOASSERTION
PackageCopyrightText: NOASSERTION
ExternalRef: PACKAGE-MANAGER purl pkg:oci/busybox@sha256%3A396fa78f221c72de93053a00e33e3d69b5bdfa80131777e6ea518eb9a1af3f3b?arch=arm64&tag=latest
##### Package: busybox
PackageName: busybox
SPDXID: SPDXRef-Package-binary-busybox-fe44cee3fe279dfa
PackageVersion: 1.37.0
PackageSupplier: NOASSERTION
PackageDownloadLocation: NOASSERTION
FilesAnalyzed: false
PackageSourceInfo: acquired package info from the following paths: /bin/[
PackageLicenseConcluded: NOASSERTION
PackageLicenseDeclared: NOASSERTION
PackageCopyrightText: NOASSERTION
ExternalRef: SECURITY cpe23Type cpe:2.3:a:busybox:busybox:1.37.0:*:*:*:*:*:*:*
ExternalRef: PACKAGE-MANAGER purl pkg:generic/busybox@1.37.0
##### Relationships
Relationship: SPDXRef-Package-binary-busybox-fe44cee3fe279dfa OTHER SPDXRef-File-bin---3a6b3df220691408
RelationshipComment: evident-by: indicates the package's existence is evident by the given file
Relationship: SPDXRef-DocumentRoot-Image-busybox CONTAINS SPDXRef-Package-binary-busybox-fe44cee3fe279dfa
Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-DocumentRoot-Image-busybox
{
"version": 0,
"job": {},
"detector": {
"name": "syft",
"url": "https://github.com/anchore/syft",
"version": "1.38.0"
},
"metadata": {
"syft:distro": "pkg:generic/busybox@1.37.0?like=busybox"
},
"manifests": {
"busybox:latest:/bin/busybox": {
"name": "busybox:latest:/bin/busybox",
"file": {
"source_location": "busybox:latest:/bin/busybox"
},
"metadata": {
"syft:filesystem": "sha256:1a382740c5642e4607412a341df3716c22287ffa6adf92eaff54e079a1902f05"
},
"resolved": {
"pkg:generic/busybox@1.37.0": {
"package_url": "pkg:generic/busybox@1.37.0",
"relationship": "direct",
"scope": "runtime"
}
}
}
},
"scanned": "2025-11-21T20:47:31Z"
}
Writing output to files
Direct Syft output to a file instead of stdout by appending =<file> to the format option:
# Write JSON to a file
syft <source> -o json=sbom.json
# Write to stdout (default behavior)
syft <source> -o json
Multiple outputs
Generate multiple SBOM formats in a single run by specifying multiple -o flags:
syft <source> \
-o json=sbom.json \
-o spdx-json=sbom.spdx.json
You can display a report in the terminal and write to a file at the same time:
syft <source> \
-o table \ # report to stdout
-o json=sbom.json # write to file
FAQ
Which format should I use?
- For human review: Use table (default) for quick package lists
- For automation and queries: Use json to access all Syft data including file details, relationships, and metadata
- For compliance and sharing: Use spdx-json or cyclonedx-json - both are widely supported industry standards
- For custom formats: Use template to create your own output format
Can I convert between formats?
Yes! See the Format Conversion guide to convert existing SBOMs between formats without re-scanning.
Do all formats contain the same information?
No. Syft’s native json format contains the most complete information. Standard formats (SPDX, CycloneDX) contain package data but may not include all file details or Syft-specific metadata. Some data may be omitted or transformed to fit the target schema.
Which version should I use for SPDX or CycloneDX?
Use the latest version (default) unless you need compatibility with specific tools that require older versions. Check your downstream tools’ documentation for version requirements.
Next steps
Continue the guide
Next: Explore Working with Syft JSON to learn how to query and extract specific data from Syft’s native format using jq.
Additional resources:
- Custom formats: Learn about customizing output with templates for specialized formats
- Convert formats: See Format Conversion to convert between different SBOM formats
- Advanced settings: Check configuration options for format-specific settings
1.4 - Working with JSON
Syft’s native JSON format provides the most comprehensive view of discovered software components, capturing all package metadata, file details, relationships, and source information.
Since Syft can convert from its native JSON format to standard SBOM formats, capturing your SBOM in Syft JSON format lets you generate any SBOM format as needed for compliance requirements.
JSON Schema Reference
For the complete, detailed JSON schema specification, see the Syft JSON Schema Reference.
Data Shapes
A Syft JSON output contains these main sections:
{
"artifacts": [], // Package nodes discovered
"artifactRelationships": [], // Edges between packages and files
"files": [], // File nodes discovered
"source": {}, // What was scanned (the image, directory, etc.)
"distro": {}, // Linux distribution discovered
"descriptor": {}, // Syft version and configuration that captured this SBOM
"schema": {} // Schema version
}
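A quick way to confirm which of these sections a given SBOM contains is to list its top-level keys with jq. This sketch runs against a tiny inline stand-in document; in practice you would pipe in a real SBOM, e.g. `syft alpine:latest -o json | jq 'keys'`:

```shell
# List the top-level sections of a Syft JSON SBOM.
# (Inline stand-in document used here for illustration.)
echo '{"artifacts":[],"artifactRelationships":[],"files":[],"source":{},"distro":{},"descriptor":{},"schema":{}}' \
  | jq 'keys'
```

Note that jq's `keys` filter returns the keys sorted alphabetically; use `keys_unsorted` to preserve document order.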
Package (artifacts)
A software package discovered by Syft (library, application, OS package, etc.).
{
"id": "74d9294c42941b37", // Unique identifier for this package that is content addressable
"name": "openssl",
"version": "1.1.1k",
"type": "apk", // Package ecosystem (apk, deb, npm, etc.)
"foundBy": "apk-cataloger",
"locations": [
// Paths used to populate information on this package object
{
"path": "/lib/apk/db/installed", // Always the real-path
"layerID": "sha256:...",
"accessPath": "/lib/apk/db/installed", // How Syft accessed the file (may be a symlink)
"annotations": {
"evidence": "primary" // Qualifies the kind of evidence extracted from this location (primary, supporting)
}
}
],
"licenses": [
{
"value": "Apache-2.0", // Raw value discovered
"spdxExpression": "Apache-2.0", // Normalized SPDX expression of the discovered value
"type": "declared", // "declared", "concluded", or "observed"
"urls": ["https://..."],
"locations": [] // Where license was found
}
],
"language": "c",
"cpes": [
{
"cpe": "cpe:2.3:a:openssl:openssl:1.1.1k:*:*:*:*:*:*:*",
"source": "nvd-dictionary" // Where the CPE was derived from (nvd-dictionary or syft-generated)
}
],
"purl": "pkg:apk/alpine/openssl@1.1.1k",
"metadata": {} // Ecosystem-specific fields (varies by type)
}
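Projecting each artifact down to its identifying fields yields a compact inventory. This is a sketch against an inline single-package document; with a real SBOM you would pipe `syft <source> -o json` into the same filter:

```shell
# Project each artifact down to name, version, type, and purl.
# (Inline stand-in document used here for illustration.)
echo '{"artifacts":[{"id":"74d9294c42941b37","name":"openssl","version":"1.1.1k","type":"apk","purl":"pkg:apk/alpine/openssl@1.1.1k"}]}' \
  | jq '.artifacts[] | {name, version, type, purl}'
```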
File
A file found on disk or referenced in package manager metadata.
{
"id": "def456",
"location": {
"path": "/usr/bin/example",
"layerID": "sha256:..." // For container images
},
"metadata": {
"mode": 493, // File permissions in octal
"type": "RegularFile",
"mimeType": "application/x-executable",
"size": 12345 // Size in bytes
},
"digests": [
{
"algorithm": "sha256",
"value": "abc123..."
}
],
"licenses": [
{
"value": "Apache-2.0", // Raw value discovered
"spdxExpression": "Apache-2.0", // Normalized SPDX expression of the discovered value
"type": "declared", // "declared", "concluded", or "observed"
"evidence": {
"confidence": 100,
"offset": 1234, // Byte offset in file
"extent": 567 // Length of match
}
}
],
"executable": {
"format": "elf", // "elf", "pe", or "macho"
"hasExports": true,
"hasEntrypoint": true,
"importedLibraries": [
// Shared library dependencies
"libc.so.6",
"libssl.so.1.1"
],
"elfSecurityFeatures": {
// ELF binaries only
"symbolTableStripped": false,
"stackCanary": true, // Stack protection
"nx": true, // No-Execute bit
"relRO": "full", // Relocation Read-Only
"pie": true // Position Independent Executable
}
}
}
Relationship
Connects any two nodes (package, file, or source) with a typed relationship.
{
"parent": "package-id", // Package, file, or source ID
"child": "file-id",
"type": "contains" // contains, dependency-of, etc.
}
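Tallying relationships by type is a useful first look at how the graph is wired. A minimal sketch against an inline stand-in document; with a real SBOM, pipe `syft <source> -o json` into the same filter:

```shell
# Count how many edges of each relationship type the SBOM contains.
# (Inline stand-in document used here for illustration.)
echo '{"artifactRelationships":[{"parent":"a","child":"b","type":"contains"},{"parent":"a","child":"c","type":"contains"},{"parent":"b","child":"c","type":"dependency-of"}]}' \
  | jq '[.artifactRelationships[].type] | group_by(.) | map({type: .[0], count: length})'
```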
Source
Information about what was scanned (container image, directory, file, etc.).
{
"id": "sha256:...",
"name": "alpine:3.9.2", // User input
"version": "sha256:...",
"type": "image", // image, directory, file
"metadata": {
"imageID": "sha256:...",
"manifestDigest": "sha256:...",
"mediaType": "application/vnd.docker...",
"tags": ["alpine:3.9.2"],
"repoDigests": []
}
}
Distribution
Linux distribution details from /etc/os-release or similar sources.
{
"name": "alpine",
"version": "3.9.2",
"idLike": ["alpine"] // Related distributions
}
Location
Describes where a package or file was found.
{
"path": "/lib/apk/db/installed",
"layerID": "sha256:...",
"accessPath": "/var/lib/apk/installed",
"annotations": {
"evidence": "primary"
}
}
The path field always contains the real path after resolving symlinks, while accessPath shows how Syft accessed the file (which may be through a symlink).
The evidence annotation indicates whether this location was used to discover the package (primary) or contains only auxiliary information (supporting).
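The path/accessPath distinction can be queried directly: locations where the two differ were reached through a symlink. A minimal sketch against an inline stand-in document (the package name and paths are illustrative):

```shell
# Find package locations that were accessed via a symlink
# (accessPath differs from the resolved real path).
# (Inline stand-in document used here for illustration.)
echo '{"artifacts":[{"name":"apk-tools","locations":[{"path":"/lib/apk/db/installed","accessPath":"/var/lib/apk/installed"}]}]}' \
  | jq '.artifacts[] | {name, symlinked: [.locations[] | select(.accessPath != .path) | .accessPath]}'
```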
Descriptor
Syft version and configuration used to generate this SBOM.
{
"name": "syft",
"version": "1.0.0",
"configuration": {} // Syft configuration used
}
The Syft JSON schema is versioned and available in the Syft repository.
JQ Recipes
jq is a command-line tool for querying and manipulating JSON. The following examples demonstrate practical queries for working with Syft JSON output.
Find packages by name pattern
Uses regex pattern matching to find security-critical packages
.artifacts[] |
select(.name | test("^(openssl|ssl|crypto)")) | # Regex pattern match on package name
{
name,
version,
type # Package type (apk, deb, rpm, etc.)
}
syft alpine:3.9.2 -o json | \
jq '.artifacts[] |
select(.name | test("^(openssl|ssl|crypto)")) |
{
name,
version,
type
}'
{
"name": "ssl_client",
"version": "1.29.3-r10",
"type": "apk"
}
Location of all JARs
Shows Java packages with their primary installation paths
.artifacts[] |
select(.type == "java-archive") | # Filter for JAR packages
{
package: "\(.name)@\(.version)",
path: (.locations[] | select(.annotations.evidence == "primary") | .path) # Primary installation path
}
syft openjdk:11.0.11-jre-slim -o json | \
jq '.artifacts[] |
select(.type == "java-archive") |
{
package: "\(.name)@\(.version)",
path: (.locations[] | select(.annotations.evidence == "primary") | .path)
}'
{
"package": "jrt-fs@11.0.11",
"path": "/usr/local/openjdk-11/lib/jrt-fs.jar"
}
All executable files
Lists all binary files with their format and entry point status
.files[] |
select(.executable != null) | # Filter for executable files
{
path: .location.path,
format: .executable.format, # ELF, Mach-O, PE, etc.
importedLibraries: .executable.importedLibraries # Shared library dependencies
}
syft alpine:3.9.2 -o json | \
jq '.files[] |
select(.executable != null) |
{
path: .location.path,
format: .executable.format,
importedLibraries: .executable.importedLibraries
}'
{
"path": "/bin/busybox",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/lib/ld-musl-aarch64.so.1",
"format": "elf",
"importedLibraries": []
}
{
"path": "/lib/libcrypto.so.1.1",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/lib/libssl.so.1.1",
"format": "elf",
"importedLibraries": [
"libcrypto.so.1.1",
"libc.musl-aarch64.so.1"
]
}
{
"path": "/lib/libz.so.1.2.11",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/sbin/apk",
"format": "elf",
"importedLibraries": [
"libssl.so.1.1",
"libcrypto.so.1.1",
"libz.so.1",
"libc.musl-aarch64.so.1"
]
}
{
"path": "/sbin/mkmntdirs",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/bin/getconf",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/bin/getent",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/bin/iconv",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/bin/scanelf",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/bin/ssl_client",
"format": "elf",
"importedLibraries": [
"libtls-standalone.so.1",
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/lib/engines-1.1/afalg.so",
"format": "elf",
"importedLibraries": [
"libcrypto.so.1.1",
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/lib/engines-1.1/capi.so",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/lib/engines-1.1/padlock.so",
"format": "elf",
"importedLibraries": [
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/lib/libtls-standalone.so.1.0.0",
"format": "elf",
"importedLibraries": [
"libssl.so.1.1",
"libcrypto.so.1.1",
"libc.musl-aarch64.so.1"
]
}
Binaries not owned by packages
Uses set operations on relationships to identify untracked binaries that might indicate supply chain issues
. as $root |
[.files[] | select(.executable != null) | .id] as $binaries | # All binary IDs
[.artifactRelationships[] | select(.type == "contains") | .child] as $owned | # Package-owned files
($binaries - $owned) as $unowned | # Set subtraction to find unowned binaries
$root.files[] |
select(.id as $id | $unowned | index($id)) | # Filter to unowned binaries
{
path: .location.path,
sha256: .digests[] | select(.algorithm == "sha256") | .value # For integrity verification
}
syft httpd:2.4.65 -o json | \
jq '. as $root |
[.files[] | select(.executable != null) | .id] as $binaries |
[.artifactRelationships[] | select(.type == "contains") | .child] as $owned |
($binaries - $owned) as $unowned |
$root.files[] |
select(.id as $id | $unowned | index($id)) |
{
path: .location.path,
sha256: .digests[] | select(.algorithm == "sha256") | .value
}'
Note: this recipe needs digests and metadata for all files, not just package-owned ones, which requires the following Syft configuration:
# .syft.yaml
file:
  metadata:
    selection: all
{
"path": "/usr/local/apache2/bin/ab",
"sha256": "1aa76de1f9eb534fe22d35a01ccbf7ede03e250f6f5d0a00553e687187565d3a"
}
{
"path": "/usr/local/apache2/bin/checkgid",
"sha256": "af3372d60eee3f8132d2bdd10fb8670db8a9965b2e056c267131586184ba11fb"
}
{
"path": "/usr/local/apache2/bin/fcgistarter",
"sha256": "eea2fa75671e7e647692cd0352405ef8a0b17167a05770b9552602a3c720bfdb"
}
{
"path": "/usr/local/apache2/bin/htcacheclean",
"sha256": "94e0fd5f0f5cf6231080177072846a4e99846f1f534224911e3bed17ce27ec38"
}
{
"path": "/usr/local/apache2/bin/htdbm",
"sha256": "e2a41d96c92cb16c98972a043ac380c06f19b5bddbafe0b2d2082ed174f8cfe3"
}
{
"path": "/usr/local/apache2/bin/htdigest",
"sha256": "0881598a4fd15455297c186fa301fdb1656ff26d0f77626d54a15421095e047f"
}
{
"path": "/usr/local/apache2/bin/htpasswd",
"sha256": "871ef0aa4ae0914747a471bf3917405548abf768dd6c94e3e0177c8e87334d9e"
}
{
"path": "/usr/local/apache2/bin/httpd",
"sha256": "2f3b52523394d1f4d4e2c5e1c5565161dcf8a0fc8e957e8d2d741acd3a111563"
}
{
"path": "/usr/local/apache2/bin/httxt2dbm",
"sha256": "1d5eb8e5d910760aa859c45e79b541362a84499f08fb79b8773bf9b8faf7bbdb"
}
{
"path": "/usr/local/apache2/bin/logresolve",
"sha256": "de8ed1fa5184170fca09980025f40c55d9fbf14b47c73b2575bc90ac1c9bf20e"
}
{
"path": "/usr/local/apache2/bin/rotatelogs",
"sha256": "f5ed895712cddcec7f542dee08a1ff74fd00ae3a9b0d92ede429e04ec2b9b8ae"
}
{
"path": "/usr/local/apache2/bin/suexec",
"sha256": "264efc529c09a60fed57fcde9e7a2c36f8bb414ae0e1afc9bb85595113ab4ec2"
}
{
"path": "/usr/local/apache2/modules/mod_access_compat.so",
"sha256": "0d6322b7d7d3d6c459751f8b271f733fa05a8b56eecd75f608100a5dbf464fc2"
}
{
"path": "/usr/local/apache2/modules/mod_actions.so",
"sha256": "6dc5dea7137ec0ae139c545b26efd860c6de7bcc19d2e31db213399c86bf2ead"
}
{
"path": "/usr/local/apache2/modules/mod_alias.so",
"sha256": "bb422c4486600ec349ac9b89acaa3793265d69498c30370e678a362900daea04"
}
{
"path": "/usr/local/apache2/modules/mod_allowmethods.so",
"sha256": "99a9db80c8f18fe3defb315731af3bceef321a98bd52f518f068ca2632596cee"
}
{
"path": "/usr/local/apache2/modules/mod_asis.so",
"sha256": "039014ad5ad3f357e811b570bd9977a772e74f191856981a503e57263b88cc44"
}
{
"path": "/usr/local/apache2/modules/mod_auth_basic.so",
"sha256": "1f9534187df98194fa60259c3d9feca05f1b2564d49b37b49da040232e7a327b"
}
{
"path": "/usr/local/apache2/modules/mod_auth_digest.so",
"sha256": "ad77d0457b773c9d13097adf47bebcd95297466fc9fb6886b7bff85e2acdd99d"
}
{
"path": "/usr/local/apache2/modules/mod_auth_form.so",
"sha256": "ceb56183d83c22ff08853982b0f35f122185cf69d3bcfd948eeb1df32dd12bbb"
}
{
"path": "/usr/local/apache2/modules/mod_authn_anon.so",
"sha256": "44308e1d5a65ab64232d27f24a827aa1afdb2fef580dd1a8454788431ebd639f"
}
{
"path": "/usr/local/apache2/modules/mod_authn_core.so",
"sha256": "9cbf85b1a20da26483ca4a57186161a2876ca296dd1174ed5a5af9f5301fe5e8"
}
{
"path": "/usr/local/apache2/modules/mod_authn_dbd.so",
"sha256": "08dc7b848a67131a091563046e3fc6914e86f248740bd2f23905f2f6df3ce541"
}
{
"path": "/usr/local/apache2/modules/mod_authn_dbm.so",
"sha256": "1e5900c8b41ca227b59ba54738154e04841cef2045d8040747e4b7887526a763"
}
{
"path": "/usr/local/apache2/modules/mod_authn_file.so",
"sha256": "74f83d5717276ae6a37f4a2d0c54f8d23e57ae1c3f73bb2b332c77860b7421ed"
}
{
"path": "/usr/local/apache2/modules/mod_authn_socache.so",
"sha256": "2f51212b62c5bbda54ddec0c1a07f523e96c2b56d987fefa43e0cc42dbf6f5d0"
}
{
"path": "/usr/local/apache2/modules/mod_authnz_fcgi.so",
"sha256": "4fa0fa7d3d4b742b3f73a781d2e8d4625d477c76aa0698aa0d499f87e6985554"
}
{
"path": "/usr/local/apache2/modules/mod_authnz_ldap.so",
"sha256": "dccffc453f46d201ecb1003b372a6ca417ac40a33036500a2215697b2e5ac0af"
}
{
"path": "/usr/local/apache2/modules/mod_authz_core.so",
"sha256": "e2b825ec9e2992b1cc157aef12c4ecd75960604658c3b7aa4a370088e89455b5"
}
{
"path": "/usr/local/apache2/modules/mod_authz_dbd.so",
"sha256": "61b427078b5d11b3fd8693cbfa22cb5871dc9784b08d3182b73ad3e99b8579d9"
}
{
"path": "/usr/local/apache2/modules/mod_authz_dbm.so",
"sha256": "1d99ed703743d9dd2185a0d7e9e351fa38066b3234ae997e87efa6dc1e4513eb"
}
{
"path": "/usr/local/apache2/modules/mod_authz_groupfile.so",
"sha256": "3e9adb775d41a8b01802ff610dda01f8e62a0d282ea0522d297a252207453c4d"
}
{
"path": "/usr/local/apache2/modules/mod_authz_host.so",
"sha256": "c0fcd53dc9596fd6bc280c55d14b61c72dc12470bf5c1bc86e369217af05cb2c"
}
{
"path": "/usr/local/apache2/modules/mod_authz_owner.so",
"sha256": "e8923ef5f11e03c37b4579e18d396758ee085bae4dadc0519374ca63da86c932"
}
{
"path": "/usr/local/apache2/modules/mod_authz_user.so",
"sha256": "3c5674a1e7af6b7d09e8c66f973a3138fed0dde4dfaee98fc132c89730cd9156"
}
{
"path": "/usr/local/apache2/modules/mod_autoindex.so",
"sha256": "2d992f31f40be2c0ec34a29981191c3bfb9e4448a2099f11a4876ba4d394dc2f"
}
{
"path": "/usr/local/apache2/modules/mod_brotli.so",
"sha256": "73bfe5aeff2040a7b56a0bf822bc4069ce3e9954186f81322060697f5cf0546f"
}
{
"path": "/usr/local/apache2/modules/mod_bucketeer.so",
"sha256": "9f146159e928405d2a007dba3690566a45e5793cde87871a30dbfd1dc9114db1"
}
{
"path": "/usr/local/apache2/modules/mod_buffer.so",
"sha256": "710bd1b238a7814963b2857eb92c891bafeff61d9e40f807d68ded700c8c37f2"
}
{
"path": "/usr/local/apache2/modules/mod_cache.so",
"sha256": "976222e2c7ddb317d8804383801b310be33c6b3542f6972edd12c38ddc527e38"
}
{
"path": "/usr/local/apache2/modules/mod_cache_disk.so",
"sha256": "c5359004a563b9b01bf0416cbe856bb50de642bf06649383ffcae26490dc69c8"
}
{
"path": "/usr/local/apache2/modules/mod_cache_socache.so",
"sha256": "94abdf3779a9f7d258b1720021e1e3f10c630e625f5aa13c683c3c811b8dac10"
}
{
"path": "/usr/local/apache2/modules/mod_case_filter.so",
"sha256": "79a0a336c1bacd06c0fc5ca14cfc97223c92f0f5b0c88ec95f7e163e8cdf917d"
}
{
"path": "/usr/local/apache2/modules/mod_case_filter_in.so",
"sha256": "aa5e1c9452e1be3789a8a867a98dab700e4a579c0ea1ff7180adf4e41b8495e3"
}
{
"path": "/usr/local/apache2/modules/mod_cern_meta.so",
"sha256": "1a6da74d768c01b1a96f5c0f0e74686d5b0f51c3d7f1149fa1124cdf10ba842a"
}
{
"path": "/usr/local/apache2/modules/mod_cgi.so",
"sha256": "f2716c663f4f7db8cd78f456e5bd098a62c1b8fde86253ed4617edfe9cdb93b2"
}
{
"path": "/usr/local/apache2/modules/mod_cgid.so",
"sha256": "d5a19aeeb7b9063bac25e4a172ea7578e83bb32da4fe21ecd858409115de166c"
}
{
"path": "/usr/local/apache2/modules/mod_charset_lite.so",
"sha256": "9c4a1b27532c5f47eea7cfc61f65a7cf2f132286e556175ec28e313024641c9d"
}
{
"path": "/usr/local/apache2/modules/mod_data.so",
"sha256": "4dcae9a704c7d9861497e57b15423b9ce3fc7dda6544096ecfff64e4223f3684"
}
{
"path": "/usr/local/apache2/modules/mod_dav.so",
"sha256": "1a33728b16ad05b12fbecf637168608cb10f258ef7a355bd37cef8ce2ed86fd7"
}
...
Binary file digests
Useful for verifying binary integrity and detecting tampering
.files[] |
select(.executable != null) | # Filter for executable files
{
path: .location.path,
digests: [.digests[] | {algorithm, value}] # All available hash algorithms
}
syft alpine:3.9.2 -o json | \
jq '.files[] |
select(.executable != null) |
{
path: .location.path,
digests: [.digests[] | {algorithm, value}]
}'
{
"path": "/bin/busybox",
"digests": [
{
"algorithm": "sha1",
"value": "7423801dfb28659fcaaaa5e8d41051d470b19008"
},
{
"algorithm": "sha256",
"value": "2c1276c3c02ccec8a0e1737d3144cdf03db883f479c86fbd9c7ea4fd9b35eac5"
}
]
}
{
"path": "/lib/ld-musl-aarch64.so.1",
"digests": [
{
"algorithm": "sha1",
"value": "0b83c1eb91d633379e0c17349e7dae821fa36dbb"
},
{
"algorithm": "sha256",
"value": "0132814479f1acc1e264ef59f73fd91563235897e8dc1bd52765f974cde382ca"
}
]
}
{
"path": "/lib/libcrypto.so.1.1",
"digests": [
{
"algorithm": "sha1",
"value": "e9d1540e5bbd9e77b388ab0e6e2f52603eb032a4"
},
{
"algorithm": "sha256",
"value": "6c597c8ad195eeb7a9130ad832dfa4cbf140f42baf96304711b2dbd43ba8e617"
}
]
}
{
"path": "/lib/libssl.so.1.1",
"digests": [
{
"algorithm": "sha1",
"value": "a8d5036010b52a80402b900c626fe862ab06bd8b"
},
{
"algorithm": "sha256",
"value": "fb72f4615fb4574bd6eeabfdb86be47012618b9076d75aeb1510941c585cae64"
}
]
}
{
"path": "/lib/libz.so.1.2.11",
"digests": [
{
"algorithm": "sha1",
"value": "83378fc7a19ff908a7e92a9fd0ca39eee90d0a3c"
},
{
"algorithm": "sha256",
"value": "19e790eb36a09eba397b5af16852f3bea21a242026bbba3da7b16442b8ba305b"
}
]
}
{
"path": "/sbin/apk",
"digests": [
{
"algorithm": "sha1",
"value": "adac7738917adecff81d4a6f9f0c7971b173859a"
},
{
"algorithm": "sha256",
"value": "22d7d85bd24923f1f274ce765d16602191097829e22ac632748302817ce515d8"
}
]
}
{
"path": "/sbin/mkmntdirs",
"digests": [
{
"algorithm": "sha1",
"value": "fff9b110ad6c659a39681e7be3b2a036fbbcca7b"
},
{
"algorithm": "sha256",
"value": "a14a5a28525220224367616ef46d4713ef7bd00d22baa761e058e8bdd4c0af1b"
}
]
}
{
"path": "/usr/bin/getconf",
"digests": [
{
"algorithm": "sha1",
"value": "06ed40070e1c2ad6d4171095eff4a6bdf9c8489b"
},
{
"algorithm": "sha256",
"value": "82bcde66ead19bc3b9ff850f66c2dbf5eaff36d481f1ec154100f73f6265d2ef"
}
]
}
{
"path": "/usr/bin/getent",
"digests": [
{
"algorithm": "sha1",
"value": "c318a3a780fc27ed7dba57827a825191fa7ee8bd"
},
{
"algorithm": "sha256",
"value": "53ffb508150e91838d795831e8ecc71f2bc3a7db036c6d7f9512c3973418bb5e"
}
]
}
{
"path": "/usr/bin/iconv",
"digests": [
{
"algorithm": "sha1",
"value": "eb98f04742e41cfc3ed44109b0e059d13e5523ea"
},
{
"algorithm": "sha256",
"value": "1c99d1f4edcb8da6db1da60958051c413de45a4c15cd3b7f7285ed87f9a250ff"
}
]
}
{
"path": "/usr/bin/scanelf",
"digests": [
{
"algorithm": "sha1",
"value": "cb085d106f35862e44e17849026927bd05845bff"
},
{
"algorithm": "sha256",
"value": "908da485ad2edea35242f8989c7beb9536414782abc94357c72b7d840bb1fda2"
}
]
}
{
"path": "/usr/bin/ssl_client",
"digests": [
{
"algorithm": "sha1",
"value": "7e17cb64c3fce832e5fa52a3b2ed1e1ccd26acd0"
},
{
"algorithm": "sha256",
"value": "67ab7f3a1ba35630f439d1ca4f73c7d95f8b7aa0e6f6db6ea1743f136f074ab4"
}
]
}
{
"path": "/usr/lib/engines-1.1/afalg.so",
"digests": [
{
"algorithm": "sha1",
"value": "6bd2c385e3884109c581659a8b184592c86e7cee"
},
{
"algorithm": "sha256",
"value": "ea7c2f48bc741fd828d79a304dbf713e20e001c0187f3f534d959886af87f4af"
}
]
}
{
"path": "/usr/lib/engines-1.1/capi.so",
"digests": [
{
"algorithm": "sha1",
"value": "41bb990b6f8e2013487980fd430455cc3b59905f"
},
{
"algorithm": "sha256",
"value": "b461ed43f0f244007d872e84760a446023b69b178c970acf10ed2666198942c6"
}
]
}
{
"path": "/usr/lib/engines-1.1/padlock.so",
"digests": [
{
"algorithm": "sha1",
"value": "82d8308700f481884fd77c882e0e9406fb17b317"
},
{
"algorithm": "sha256",
"value": "0ccb04f040afb0216da1cea2c1db7a0b91d990ce061e232782aedbd498483649"
}
]
}
{
"path": "/usr/lib/libtls-standalone.so.1.0.0",
"digests": [
{
"algorithm": "sha1",
...
Binaries with security features
Analyzes ELF security hardening features extracted during SBOM generation
.files[] |
select(.executable != null and .executable.format == "elf") | # ELF binaries only
{
path: .location.path,
pie: .executable.elfSecurityFeatures.pie, # Position Independent Executable
stackCanary: .executable.elfSecurityFeatures.stackCanary, # Stack protection
nx: .executable.elfSecurityFeatures.nx # No-Execute bit
}
syft alpine:3.9.2 -o json | \
jq '.files[] |
select(.executable != null and .executable.format == "elf") |
{
path: .location.path,
pie: .executable.elfSecurityFeatures.pie,
stackCanary: .executable.elfSecurityFeatures.stackCanary,
nx: .executable.elfSecurityFeatures.nx
}'
{
"path": "/bin/busybox",
"pie": true,
"stackCanary": true,
"nx": true
}
{
"path": "/lib/ld-musl-aarch64.so.1",
"pie": false,
"stackCanary": true,
"nx": true
}
{
"path": "/lib/libcrypto.so.1.1",
"pie": false,
"stackCanary": true,
"nx": true
}
{
"path": "/lib/libssl.so.1.1",
"pie": false,
"stackCanary": true,
"nx": true
}
{
"path": "/lib/libz.so.1.2.11",
"pie": false,
"stackCanary": true,
"nx": true
}
{
"path": "/sbin/apk",
"pie": true,
"stackCanary": true,
"nx": true
}
{
"path": "/sbin/mkmntdirs",
"pie": true,
"stackCanary": false,
"nx": true
}
{
"path": "/usr/bin/getconf",
"pie": true,
"stackCanary": false,
"nx": true
}
{
"path": "/usr/bin/getent",
"pie": true,
"stackCanary": true,
"nx": true
}
{
"path": "/usr/bin/iconv",
"pie": true,
"stackCanary": true,
"nx": true
}
{
"path": "/usr/bin/scanelf",
"pie": true,
"stackCanary": true,
"nx": true
}
{
"path": "/usr/bin/ssl_client",
"pie": true,
"stackCanary": true,
"nx": true
}
{
"path": "/usr/lib/engines-1.1/afalg.so",
"pie": false,
"stackCanary": true,
"nx": true
}
{
"path": "/usr/lib/engines-1.1/capi.so",
"pie": false,
"stackCanary": false,
"nx": true
}
{
"path": "/usr/lib/engines-1.1/padlock.so",
"pie": false,
"stackCanary": false,
"nx": true
}
{
"path": "/usr/lib/libtls-standalone.so.1.0.0",
"pie": false,
"stackCanary": true,
"nx": true
}
Binaries importing specific libraries
Identifies which binaries depend on specific shared libraries for security audits
.files[] |
select(.executable != null and .executable.importedLibraries != null) |
select(.executable.importedLibraries[] | contains("libcrypto")) | # Find binaries using libcrypto
{
path: .location.path,
imports: .executable.importedLibraries # Shared library dependencies
}
syft alpine:3.9.2 -o json | \
jq '.files[] |
select(.executable != null and .executable.importedLibraries != null) |
select(.executable.importedLibraries[] | contains("libcrypto")) |
{
path: .location.path,
imports: .executable.importedLibraries
}'
{
"path": "/lib/libssl.so.1.1",
"imports": [
"libcrypto.so.1.1",
"libc.musl-aarch64.so.1"
]
}
{
"path": "/sbin/apk",
"imports": [
"libssl.so.1.1",
"libcrypto.so.1.1",
"libz.so.1",
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/lib/engines-1.1/afalg.so",
"imports": [
"libcrypto.so.1.1",
"libc.musl-aarch64.so.1"
]
}
{
"path": "/usr/lib/libtls-standalone.so.1.0.0",
"imports": [
"libssl.so.1.1",
"libcrypto.so.1.1",
"libc.musl-aarch64.so.1"
]
}
Extract Package URLs (PURLs)
Extracts Package URLs for cross-tool SBOM correlation and vulnerability matching
.artifacts[] |
select(.purl != null and .purl != "") | # Filter packages with PURLs
{
name,
version,
purl # Package URL for cross-tool compatibility
}
syft alpine:3.9.2 -o json | \
jq '.artifacts[] |
select(.purl != null and .purl != "") |
{
name,
version,
purl
}'
{
"name": "alpine-baselayout",
"version": "3.1.0-r3",
"purl": "pkg:apk/alpine/alpine-baselayout@3.1.0-r3?arch=aarch64&distro=alpine-3.9.2"
}
{
"name": "alpine-keys",
"version": "2.1-r1",
"purl": "pkg:apk/alpine/alpine-keys@2.1-r1?arch=aarch64&distro=alpine-3.9.2"
}
{
"name": "apk-tools",
"version": "2.10.3-r1",
"purl": "pkg:apk/alpine/apk-tools@2.10.3-r1?arch=aarch64&distro=alpine-3.9.2"
}
{
"name": "busybox",
"version": "1.29.3-r10",
"purl": "pkg:apk/alpine/busybox@1.29.3-r10?arch=aarch64&distro=alpine-3.9.2"
}
{
"name": "ca-certificates-cacert",
"version": "20190108-r0",
"purl": "pkg:apk/alpine/ca-certificates-cacert@20190108-r0?arch=aarch64&distro=alpine-3.9.2&upstream=ca-certificates"
}
{
"name": "libc-utils",
"version": "0.7.1-r0",
"purl": "pkg:apk/alpine/libc-utils@0.7.1-r0?arch=aarch64&distro=alpine-3.9.2&upstream=libc-dev"
}
{
"name": "libcrypto1.1",
"version": "1.1.1a-r1",
"purl": "pkg:apk/alpine/libcrypto1.1@1.1.1a-r1?arch=aarch64&distro=alpine-3.9.2&upstream=openssl"
}
{
"name": "libssl1.1",
"version": "1.1.1a-r1",
"purl": "pkg:apk/alpine/libssl1.1@1.1.1a-r1?arch=aarch64&distro=alpine-3.9.2&upstream=openssl"
}
{
"name": "libtls-standalone",
"version": "2.7.4-r6",
"purl": "pkg:apk/alpine/libtls-standalone@2.7.4-r6?arch=aarch64&distro=alpine-3.9.2"
}
{
"name": "musl",
"version": "1.1.20-r3",
"purl": "pkg:apk/alpine/musl@1.1.20-r3?arch=aarch64&distro=alpine-3.9.2"
}
{
"name": "musl-utils",
"version": "1.1.20-r3",
"purl": "pkg:apk/alpine/musl-utils@1.1.20-r3?arch=aarch64&distro=alpine-3.9.2&upstream=musl"
}
{
"name": "scanelf",
"version": "1.2.3-r0",
"purl": "pkg:apk/alpine/scanelf@1.2.3-r0?arch=aarch64&distro=alpine-3.9.2&upstream=pax-utils"
}
{
"name": "ssl_client",
"version": "1.29.3-r10",
"purl": "pkg:apk/alpine/ssl_client@1.29.3-r10?arch=aarch64&distro=alpine-3.9.2&upstream=busybox"
}
{
"name": "zlib",
"version": "1.2.11-r1",
"purl": "pkg:apk/alpine/zlib@1.2.11-r1?arch=aarch64&distro=alpine-3.9.2"
}
Group packages by language
Groups and counts packages by programming language
[.artifacts[] | select(.language != null and .language != "")] |
group_by(.language) | # Group by programming language
map({
language: .[0].language,
count: length # Count packages per language
}) |
sort_by(.count) |
reverse # Highest count first
syft node:18-alpine -o json | \
jq '[.artifacts[] | select(.language != null and .language != "")] |
group_by(.language) |
map({
language: .[0].language,
count: length
}) |
sort_by(.count) |
reverse'
[
{
"language": "javascript",
"count": 204
}
]
Count packages by type
Provides a summary count of packages per ecosystem
[.artifacts[]] |
group_by(.type) | # Group packages by ecosystem type
map({
type: .[0].type,
count: length # Count packages in each group
}) |
sort_by(.count) |
reverse # Highest count first
syft node:18-alpine -o json | \
jq '[.artifacts[]] |
group_by(.type) |
map({
type: .[0].type,
count: length
}) |
sort_by(.count) |
reverse'
[
{
"type": "npm",
"count": 204
},
{
"type": "apk",
"count": 17
},
{
"type": "binary",
"count": 1
}
]
Package locations
Maps packages to their filesystem locations
.artifacts[] |
{
name,
version,
type,
locations: [.locations[] | .path] # All filesystem locations
}
syft alpine:3.9.2 -o json | \
jq '.artifacts[] |
{
name,
version,
type,
locations: [.locations[] | .path]
}'
{
"name": "alpine-baselayout",
"version": "3.1.0-r3",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "alpine-keys",
"version": "2.1-r1",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "apk-tools",
"version": "2.10.3-r1",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "busybox",
"version": "1.29.3-r10",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "ca-certificates-cacert",
"version": "20190108-r0",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "libc-utils",
"version": "0.7.1-r0",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "libcrypto1.1",
"version": "1.1.1a-r1",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "libssl1.1",
"version": "1.1.1a-r1",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "libtls-standalone",
"version": "2.7.4-r6",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "musl",
"version": "1.1.20-r3",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "musl-utils",
"version": "1.1.20-r3",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "scanelf",
"version": "1.2.3-r0",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "ssl_client",
"version": "1.29.3-r10",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
{
"name": "zlib",
"version": "1.2.11-r1",
"type": "apk",
"locations": [
"/lib/apk/db/installed"
]
}
Files by MIME type
Filters files by MIME type, useful for finding specific file types
.files[] |
select(.metadata.mimeType == "application/x-sharedlib") | # Filter by MIME type
{
path: .location.path,
mimeType: .metadata.mimeType,
size: .metadata.size # File size in bytes
}
syft alpine:3.9.2 -o json | \
jq '.files[] |
select(.metadata.mimeType == "application/x-sharedlib") |
{
path: .location.path,
mimeType: .metadata.mimeType,
size: .metadata.size
}'
{
"path": "/bin/busybox",
"mimeType": "application/x-sharedlib",
"size": 841320
}
{
"path": "/lib/ld-musl-aarch64.so.1",
"mimeType": "application/x-sharedlib",
"size": 616960
}
{
"path": "/lib/libcrypto.so.1.1",
"mimeType": "application/x-sharedlib",
"size": 2321984
}
{
"path": "/lib/libssl.so.1.1",
"mimeType": "application/x-sharedlib",
"size": 515376
}
{
"path": "/lib/libz.so.1.2.11",
"mimeType": "application/x-sharedlib",
"size": 91888
}
{
"path": "/sbin/apk",
"mimeType": "application/x-sharedlib",
"size": 218928
}
{
"path": "/sbin/mkmntdirs",
"mimeType": "application/x-sharedlib",
"size": 5712
}
{
"path": "/usr/bin/getconf",
"mimeType": "application/x-sharedlib",
"size": 33544
}
{
"path": "/usr/bin/getent",
"mimeType": "application/x-sharedlib",
"size": 48704
}
{
"path": "/usr/bin/iconv",
"mimeType": "application/x-sharedlib",
"size": 21968
}
{
"path": "/usr/bin/scanelf",
"mimeType": "application/x-sharedlib",
"size": 79592
}
{
"path": "/usr/bin/ssl_client",
"mimeType": "application/x-sharedlib",
"size": 9808
}
{
"path": "/usr/lib/engines-1.1/afalg.so",
"mimeType": "application/x-sharedlib",
"size": 18568
}
{
"path": "/usr/lib/engines-1.1/capi.so",
"mimeType": "application/x-sharedlib",
"size": 5672
}
{
"path": "/usr/lib/engines-1.1/padlock.so",
"mimeType": "application/x-sharedlib",
"size": 5672
}
{
"path": "/usr/lib/libtls-standalone.so.1.0.0",
"mimeType": "application/x-sharedlib",
"size": 96032
}
Dependency relationships
Traverses package dependency graph using relationships
. as $root |
.artifactRelationships[] |
select(.type == "dependency-of") | # Filter for dependency relationships
.parent as $parent |
.child as $child |
{
parent: ($root.artifacts[] | select(.id == $parent).name), # Parent package name
child: ($root.artifacts[] | select(.id == $child).name) # Dependency name
}
syft node:18-alpine -o json | \
jq '. as $root |
.artifactRelationships[] |
select(.type == "dependency-of") |
.parent as $parent |
.child as $child |
{
parent: ($root.artifacts[] | select(.id == $parent).name),
child: ($root.artifacts[] | select(.id == $child).name)
}'
{
"parent": "ca-certificates-bundle",
"child": "apk-tools"
}
{
"parent": "alpine-keys",
"child": "alpine-release"
}
{
"parent": "alpine-baselayout-data",
"child": "alpine-baselayout"
}
{
"parent": "musl",
"child": "ssl_client"
}
{
"parent": "musl",
"child": "libgcc"
}
{
"parent": "musl",
"child": "libstdc++"
}
{
"parent": "musl",
"child": "musl-utils"
}
{
"parent": "musl",
"child": "libssl3"
}
{
"parent": "musl",
"child": "busybox"
}
{
"parent": "musl",
"child": "apk-tools"
}
{
"parent": "musl",
"child": "scanelf"
}
{
"parent": "musl",
"child": "libcrypto3"
}
{
"parent": "musl",
"child": "zlib"
}
{
"parent": "libgcc",
"child": "libstdc++"
}
{
"parent": "libssl3",
"child": "ssl_client"
}
{
"parent": "libssl3",
"child": "apk-tools"
}
{
"parent": "busybox",
"child": "busybox-binsh"
}
{
"parent": "scanelf",
"child": "musl-utils"
}
{
"parent": "busybox-binsh",
"child": "alpine-baselayout"
}
{
"parent": "libcrypto3",
"child": "ssl_client"
}
{
"parent": "libcrypto3",
"child": "libssl3"
}
{
"parent": "libcrypto3",
"child": "apk-tools"
}
{
"parent": "zlib",
"child": "apk-tools"
}
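The query above rescans .artifacts for every relationship, which gets slow on large SBOMs. If your jq is 1.6 or newer, a sketch of a faster variant builds an id-to-package lookup table first with the INDEX builtin (same node:18-alpine image as above):

```shell
syft node:18-alpine -o json | \
jq '(INDEX(.artifacts[]; .id)) as $byId |   # one lookup table keyed by package id
    .artifactRelationships[] |
    select(.type == "dependency-of") |
    {
      parent: $byId[.parent].name,          # O(1) lookup instead of a rescan
      child:  $byId[.child].name
    }'
```

The output is the same parent/child pairs; only the lookup strategy changes.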
Files without packages
Finds orphaned files not associated with any package
. as $root |
[.files[].id] as $allFiles | # All file IDs
[.artifactRelationships[] | select(.type == "contains") | .child] as $ownedFiles | # Package-owned files
($allFiles - $ownedFiles) as $orphans | # Set subtraction for unowned files
$root.files[] |
select(.id as $id | $orphans | index($id)) | # Filter to orphaned files
.location.path
syft alpine:3.9.2 -o json | \
jq '. as $root |
[.files[].id] as $allFiles |
[.artifactRelationships[] | select(.type == "contains") | .child] as $ownedFiles |
($allFiles - $ownedFiles) as $orphans |
$root.files[] |
select(.id as $id | $orphans | index($id)) |
.location.path'
"/lib/apk/db/installed"
Largest files
Identifies the top 10 largest files by size
[.files[] |
{
path: .location.path,
size: .metadata.size,
mimeType: .metadata.mimeType
}] |
sort_by(.size) |
reverse | # Largest first
.[0:10] # Top 10 files
syft alpine:3.9.2 -o json | \
jq '[.files[] |
{
path: .location.path,
size: .metadata.size,
mimeType: .metadata.mimeType
}] |
sort_by(.size) |
reverse |
.[0:10]'
[
{
"path": "/lib/libcrypto.so.1.1",
"size": 2321984,
"mimeType": "application/x-sharedlib"
},
{
"path": "/bin/busybox",
"size": 841320,
"mimeType": "application/x-sharedlib"
},
{
"path": "/lib/ld-musl-aarch64.so.1",
"size": 616960,
"mimeType": "application/x-sharedlib"
},
{
"path": "/lib/libssl.so.1.1",
"size": 515376,
"mimeType": "application/x-sharedlib"
},
{
"path": "/etc/ssl/cert.pem",
"size": 232598,
"mimeType": "text/plain"
},
{
"path": "/sbin/apk",
"size": 218928,
"mimeType": "application/x-sharedlib"
},
{
"path": "/usr/lib/libtls-standalone.so.1.0.0",
"size": 96032,
"mimeType": "application/x-sharedlib"
},
{
"path": "/lib/libz.so.1.2.11",
"size": 91888,
"mimeType": "application/x-sharedlib"
},
{
"path": "/usr/bin/scanelf",
"size": 79592,
"mimeType": "application/x-sharedlib"
},
{
"path": "/usr/bin/getent",
"size": 48704,
"mimeType": "application/x-sharedlib"
}
]
Extract CPEs
Lists Common Platform Enumeration identifiers for vulnerability scanning
.artifacts[] |
select(.cpes != null and (.cpes | length) > 0) | # Filter packages with CPEs
{
name,
version,
cpes: [.cpes[].cpe] # Extract CPE strings
}
syft alpine:3.9.2 -o json | \
jq '.artifacts[] |
select(.cpes != null and (.cpes | length) > 0) |
{
name,
version,
cpes: [.cpes[].cpe]
}'
{
"name": "alpine-baselayout",
"version": "3.1.0-r3",
"cpes": [
"cpe:2.3:a:alpine-baselayout:alpine-baselayout:3.1.0-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine-baselayout:alpine_baselayout:3.1.0-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine_baselayout:alpine-baselayout:3.1.0-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine_baselayout:alpine_baselayout:3.1.0-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine:alpine-baselayout:3.1.0-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine:alpine_baselayout:3.1.0-r3:*:*:*:*:*:*:*"
]
}
{
"name": "alpine-keys",
"version": "2.1-r1",
"cpes": [
"cpe:2.3:a:alpine-keys:alpine-keys:2.1-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine-keys:alpine_keys:2.1-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine_keys:alpine-keys:2.1-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine_keys:alpine_keys:2.1-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine:alpine-keys:2.1-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:alpine:alpine_keys:2.1-r1:*:*:*:*:*:*:*"
]
}
{
"name": "apk-tools",
"version": "2.10.3-r1",
"cpes": [
"cpe:2.3:a:apk-tools:apk-tools:2.10.3-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:apk-tools:apk_tools:2.10.3-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:apk_tools:apk-tools:2.10.3-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:apk_tools:apk_tools:2.10.3-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:apk:apk-tools:2.10.3-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:apk:apk_tools:2.10.3-r1:*:*:*:*:*:*:*"
]
}
{
"name": "busybox",
"version": "1.29.3-r10",
"cpes": [
"cpe:2.3:a:busybox:busybox:1.29.3-r10:*:*:*:*:*:*:*"
]
}
{
"name": "ca-certificates-cacert",
"version": "20190108-r0",
"cpes": [
"cpe:2.3:a:ca-certificates-cacert:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca-certificates-cacert:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca_certificates_cacert:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca_certificates_cacert:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca-certificates:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca-certificates:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca_certificates:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca_certificates:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:mozilla:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:mozilla:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca:ca-certificates-cacert:20190108-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:ca:ca_certificates_cacert:20190108-r0:*:*:*:*:*:*:*"
]
}
{
"name": "libc-utils",
"version": "0.7.1-r0",
"cpes": [
"cpe:2.3:a:libc-utils:libc-utils:0.7.1-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:libc-utils:libc_utils:0.7.1-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:libc_utils:libc-utils:0.7.1-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:libc_utils:libc_utils:0.7.1-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:libc:libc-utils:0.7.1-r0:*:*:*:*:*:*:*",
"cpe:2.3:a:libc:libc_utils:0.7.1-r0:*:*:*:*:*:*:*"
]
}
{
"name": "libcrypto1.1",
"version": "1.1.1a-r1",
"cpes": [
"cpe:2.3:a:libcrypto1.1:libcrypto1.1:1.1.1a-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:libcrypto1.1:libcrypto:1.1.1a-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:libcrypto:libcrypto1.1:1.1.1a-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:libcrypto:libcrypto:1.1.1a-r1:*:*:*:*:*:*:*"
]
}
{
"name": "libssl1.1",
"version": "1.1.1a-r1",
"cpes": [
"cpe:2.3:a:libssl1.1:libssl1.1:1.1.1a-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:libssl1.1:libssl:1.1.1a-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:libssl:libssl1.1:1.1.1a-r1:*:*:*:*:*:*:*",
"cpe:2.3:a:libssl:libssl:1.1.1a-r1:*:*:*:*:*:*:*"
]
}
{
"name": "libtls-standalone",
"version": "2.7.4-r6",
"cpes": [
"cpe:2.3:a:libtls-standalone:libtls-standalone:2.7.4-r6:*:*:*:*:*:*:*",
"cpe:2.3:a:libtls-standalone:libtls_standalone:2.7.4-r6:*:*:*:*:*:*:*",
"cpe:2.3:a:libtls_standalone:libtls-standalone:2.7.4-r6:*:*:*:*:*:*:*",
"cpe:2.3:a:libtls_standalone:libtls_standalone:2.7.4-r6:*:*:*:*:*:*:*",
"cpe:2.3:a:libtls:libtls-standalone:2.7.4-r6:*:*:*:*:*:*:*",
"cpe:2.3:a:libtls:libtls_standalone:2.7.4-r6:*:*:*:*:*:*:*"
]
}
{
"name": "musl",
"version": "1.1.20-r3",
"cpes": [
"cpe:2.3:a:musl-libc:musl:1.1.20-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:musl_libc:musl:1.1.20-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:musl:musl:1.1.20-r3:*:*:*:*:*:*:*"
]
}
{
"name": "musl-utils",
"version": "1.1.20-r3",
"cpes": [
"cpe:2.3:a:musl-utils:musl-utils:1.1.20-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:musl-utils:musl_utils:1.1.20-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:musl_utils:musl-utils:1.1.20-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:musl_utils:musl_utils:1.1.20-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:musl:musl-utils:1.1.20-r3:*:*:*:*:*:*:*",
"cpe:2.3:a:musl:musl_utils:1.1.20-r3:*:*:*:*:*:*:*"
]
}
{
"name": "scanelf",
"version": "1.2.3-r0",
"cpes": [
"cpe:2.3:a:scanelf:scanelf:1.2.3-r0:*:*:*:*:*:*:*"
]
}
{
"name": "ssl_client",
"version": "1.29.3-r10",
"cpes": [
"cpe:2.3:a:ssl-client:ssl-client:1.29.3-r10:*:*:*:*:*:*:*",
"cpe:2.3:a:ssl-client:ssl_client:1.29.3-r10:*:*:*:*:*:*:*",
"cpe:2.3:a:ssl_client:ssl-client:1.29.3-r10:*:*:*:*:*:*:*",
"cpe:2.3:a:ssl_client:ssl_client:1.29.3-r10:*:*:*:*:*:*:*",
"cpe:2.3:a:ssl:ssl-client:1.29.3-r10:*:*:*:*:*:*:*",
"cpe:2.3:a:ssl:ssl_client:1.29.3-r10:*:*:*:*:*:*:*"
]
}
{
"name": "zlib",
"version": "1.2.11-r1",
"cpes": [
"cpe:2.3:a:zlib:zlib:1.2.11-r1:*:*:*:*:*:*:*"
]
}
Packages without licenses
Identifies packages missing license information for compliance audits
.artifacts[] |
select(.licenses == null or (.licenses | length) == 0) | # Packages without license info
{
name,
version,
type,
locations: [.locations[].path] # Where package is installed
}
syft httpd:2.4.65 -o json | \
jq '.artifacts[] |
select(.licenses == null or (.licenses | length) == 0) |
{
name,
version,
type,
locations: [.locations[].path]
}'
{
"name": "httpd",
"version": "2.4.65",
"type": "binary",
"locations": ["/usr/local/apache2/bin/httpd"]
}
Packages with CPE identifiers
Lists packages with CPE identifiers, which vulnerability scanners use for CVE matching
.artifacts[] |
select(.cpes != null and (.cpes | length) > 0) | # Packages with CPE identifiers
{
name,
version,
type,
cpeCount: (.cpes | length) # Number of CPE matches
}
syft alpine:3.9.2 -o json | \
jq '.artifacts[] |
select(.cpes != null and (.cpes | length) > 0) |
{
name,
version,
type,
cpeCount: (.cpes | length)
}'
{
"name": "alpine-baselayout",
"version": "3.1.0-r3",
"type": "apk",
"cpeCount": 6
}
{
"name": "alpine-keys",
"version": "2.1-r1",
"type": "apk",
"cpeCount": 6
}
{
"name": "apk-tools",
"version": "2.10.3-r1",
"type": "apk",
"cpeCount": 6
}
{
"name": "busybox",
"version": "1.29.3-r10",
"type": "apk",
"cpeCount": 1
}
{
"name": "ca-certificates-cacert",
"version": "20190108-r0",
"type": "apk",
"cpeCount": 12
}
{
"name": "libc-utils",
"version": "0.7.1-r0",
"type": "apk",
"cpeCount": 6
}
{
"name": "libcrypto1.1",
"version": "1.1.1a-r1",
"type": "apk",
"cpeCount": 4
}
{
"name": "libssl1.1",
"version": "1.1.1a-r1",
"type": "apk",
"cpeCount": 4
}
{
"name": "libtls-standalone",
"version": "2.7.4-r6",
"type": "apk",
"cpeCount": 6
}
{
"name": "musl",
"version": "1.1.20-r3",
"type": "apk",
"cpeCount": 3
}
{
"name": "musl-utils",
"version": "1.1.20-r3",
"type": "apk",
"cpeCount": 6
}
{
"name": "scanelf",
"version": "1.2.3-r0",
"type": "apk",
"cpeCount": 1
}
{
"name": "ssl_client",
"version": "1.29.3-r10",
"type": "apk",
"cpeCount": 6
}
{
"name": "zlib",
"version": "1.2.11-r1",
"type": "apk",
"cpeCount": 1
}
Troubleshooting
jq command not found
Install jq to query JSON output:
- macOS: brew install jq
- Ubuntu/Debian: apt-get install jq
- Fedora/RHEL: dnf install jq
- Windows: Download from jqlang.org
Empty or unexpected query results
Common jq query issues:
- Wrong field path: Use jq 'keys' to list available top-level keys, then explore nested structures
- Missing select filter: Remember to use select() when filtering (e.g., .artifacts[] | select(.type=="apk"))
- String vs array: Some fields like licenses are arrays; use .[0] or iterate with .[]
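A quick way to apply the first tip is to explore the document incrementally before writing a full query (alpine:latest is just an example image):

```shell
# Save the SBOM once so you can iterate on queries without rescanning:
syft alpine:latest -o json > sbom.json

jq 'keys' sbom.json                  # top-level keys, e.g. ["artifacts","files",...]
jq '.artifacts[0] | keys' sbom.json  # fields available on a single package
jq '.files[0]' sbom.json             # full shape of one file entry
```

Note that jq's keys returns keys in sorted order, not document order.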
Query works in terminal but not in scripts
When using jq in shell scripts:
- Quote properly: Single quotes prevent shell variable expansion (e.g., jq '.artifacts' not jq ".artifacts")
- Escape for heredocs: Use different quotes or escape when embedding jq in heredocs
- Pipe errors: Add set -o pipefail to catch jq errors in pipelines
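The pipefail point is easy to verify without syft at all; in the sketch below, false stands in for a failing syft or jq stage:

```shell
#!/usr/bin/env bash

# By default a pipeline's exit status is the LAST command's status,
# so an early failure is silently swallowed:
false | true
echo "without pipefail: $?"   # prints 0

# With pipefail, any failing stage fails the whole pipeline:
set -o pipefail
false | true
echo "with pipefail: $?"      # prints 1
```

In a script, combine it with set -e so the script stops at the first failed pipeline.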
Performance issues with large SBOMs
For very large JSON files:
- Stream processing: Use jq’s --stream flag for memory-efficient processing
- Filter early: Apply filters as early as possible in the pipeline to reduce data volume
- Use specific queries: Avoid .[] on large arrays; be specific about what you need
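A sketch of the streaming approach: --stream turns the document into [path, value] events, so jq never holds the full tree in memory. The filter below extracts package names only (the field paths match the Syft JSON shown earlier; the image name is illustrative):

```shell
syft some-large-image:latest -o json > sbom.json

# Each leaf arrives as [["artifacts", <index>, "name"], <value>];
# length == 2 keeps leaf events and drops path-closing events.
jq -rn --stream '
  inputs
  | select(length == 2 and .[0][0] == "artifacts" and .[0][2] == "name")
  | .[1]' sbom.json
```

This trades query convenience for a flat memory profile, which matters mostly on multi-hundred-megabyte SBOMs.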
Next steps
Continue the guide
Next: Dive into Package Catalogers to understand how Syft discovers different types of software packages.
Additional resources:
- Other formats: Explore output formats to see all available SBOM formats
- Convert formats: Learn about format conversion to generate multiple formats efficiently
- Custom output: Use templates to create custom output formats
- Syft JSON Schema: Review the Syft JSON Schema Reference for detailed field definitions
1.5 - Package Catalogers
TL;DR
- Syft automatically picks the right catalogers for you (recommended for most users)
- Scanning a container image? Finds installed packages (like Python packages in site-packages)
- Scanning a directory? Finds both installed packages and declared dependencies (like requirements.txt)
- Want to customize? Use --select-catalogers to filter, add, or remove catalogers
- Need complete control? Use --override-default-catalogers to replace all defaults
Catalogers are Syft’s detection modules that identify software packages in your projects.
Each cataloger specializes in finding specific types of packages—for example, python-package-cataloger finds Python dependencies declared in requirements.txt,
while python-installed-package-cataloger finds Python packages that have already been installed.
Syft includes dozens of catalogers covering languages like Python, Java, Go, JavaScript, Ruby, Rust, and more, as well as OS packages (APK, RPM, DEB) and binary formats.
Default Behavior
Syft uses different cataloger sets depending on what you’re scanning:
| Scan Type | Default Catalogers | What They Find | Example |
|---|---|---|---|
| Container Image | Image-specific catalogers | Installed packages only | Python packages in site-packages |
| Directory | Directory-specific catalogers | Installed packages + declared dependencies | Python packages in site-packages AND requirements.txt |
This behavior ensures accurate results across different contexts. When you scan an image, Syft assumes installation steps have completed – this way you get results for software that is positively present. When you scan a directory (like a source code repository), Syft looks for both what’s installed and what’s declared as a dependency – this way you get results not only for what’s installed but also for what you intend to install.
Why use different catalogers for different sources?
Most of the time, files that hint at the intent to install software do not have enough information in them to determine the exact version of the package that would be installed.
For example, a requirements.txt file might specify a package without a version, or with a version range.
By looking at installed packages in an image, after any build tooling has been invoked, Syft can provide more accurate version information.
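As a hypothetical illustration, a declared-dependency file often underdetermines what actually gets installed:

```text
# requirements.txt (hypothetical) - declares intent, not a concrete version:
requests>=2.28,<3    # any matching 2.x release could end up installed
flask                # no version constraint at all
```

Only after the install step runs can a cataloger read the exact resolved version, for example from a package's .dist-info metadata in site-packages.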
Example: Python Package Detection
Scanning an image:
syft <container-image> --select-catalogers python
# Uses: python-installed-package-cataloger
# Finds: Packages in site-packages directories
Scanning a directory:
syft <source-directory> --select-catalogers python
# Uses: python-installed-package-cataloger, python-package-cataloger
# Finds: Packages in site-packages + requirements.txt, setup.py, Pipfile, etc.
Viewing Active Catalogers
The most reliable way to see which catalogers Syft used is to check the SBOM itself. Every SBOM captures both the catalogers that were requested and those that actually ran:
syft busybox:latest -o json | jq '.descriptor.configuration.catalogers'
Output:
{
"requested": {
"default": [
"image",
"file"
]
},
"used": [
"alpm-db-cataloger",
"apk-db-cataloger",
"binary-classifier-cataloger",
"bitnami-cataloger",
"cargo-auditable-binary-cataloger",
"conan-info-cataloger",
"dotnet-deps-binary-cataloger",
"dotnet-packages-lock-cataloger",
"dpkg-db-cataloger",
"elf-binary-package-cataloger",
...
]
}
This shows what catalogers were attempted, not just what found packages. The requested field shows your cataloger selection strategy, while used lists every cataloger that ran.
You can also watch cataloger activity in real time using verbose logging, though this is less comprehensive and not as direct.
Exploring Available Catalogers
Use the syft cataloger list command to see all available catalogers and their tags, and to test selection expressions.
List all catalogers
syft cataloger list
Output shows file and package catalogers with their tags:
┌───────────────────────────┬───────────────────────┐
│ FILE CATALOGER │ TAGS │
├───────────────────────────┼───────────────────────┤
│ file-content-cataloger │ content, file │
│ file-digest-cataloger │ digest, file │
│ file-executable-cataloger │ binary-metadata, file │
│ file-metadata-cataloger │ file, file-metadata │
└───────────────────────────┴───────────────────────┘
┌────────────────────────────────────┬────────────────────────────────────────────────────────┐
│ PACKAGE CATALOGER │ TAGS │
├────────────────────────────────────┼────────────────────────────────────────────────────────┤
│ python-installed-package-cataloger │ directory, image, installed, language, package, python │
│ python-package-cataloger │ declared, directory, language, package, python │
│ java-archive-cataloger │ directory, image, installed, java, language, maven │
│ go-module-binary-cataloger │ binary, directory, go, golang, image, installed │
│ ... │ │
└────────────────────────────────────┴────────────────────────────────────────────────────────┘
Test cataloger selection
Preview which catalogers a selection expression would use:
syft cataloger list --select-catalogers python
Default selections: 1
• 'all'
Selection expressions: 1
• 'python' (intersect)
┌────────────────────────────────────┬────────────────────────────────────────────────────────┐
│ PACKAGE CATALOGER │ TAGS │
├────────────────────────────────────┼────────────────────────────────────────────────────────┤
│ python-installed-package-cataloger │ directory, image, installed, language, package, python │
│ python-package-cataloger │ declared, directory, language, package, python │
└────────────────────────────────────┴────────────────────────────────────────────────────────┘
This shows exactly which catalogers your selection expression will use, helping you verify your configuration before running a scan.
Output formats
Get cataloger information in different formats:
# Table format (default)
syft cataloger list
# JSON format (useful for automation)
syft cataloger list -o json
Cataloger References
You can refer to catalogers in two ways:
- By name: The exact cataloger identifier (e.g., java-pom-cataloger, go-module-binary-cataloger)
- By tag: A group label for related catalogers (e.g., java, python, image, directory)
Common tags include:
- Language tags: python, java, go, javascript, ruby, rust, etc.
- Scan type tags: image, directory
- Installation state tags: installed, declared
- Ecosystem tags: maven, npm, cargo, composer, etc.
Customizing Cataloger Selection
Syft provides two flags for controlling catalogers:
--select-catalogers: Modify Defaults
Use this flag to adjust the default cataloger set. This is the recommended approach for most use cases.
Syntax:
| Operation | Syntax | Example | Description |
|---|---|---|---|
| Filter | <tag> | --select-catalogers java | Use only Java catalogers from the defaults |
| Add | +<name> | --select-catalogers +sbom-cataloger | Add a specific cataloger to defaults |
| Remove | -<name-or-tag> | --select-catalogers -rpm | Remove catalogers by name or tag |
| Combine | <tag>,+<name>,-<tag> | --select-catalogers java,+sbom-cataloger,-maven | Multiple operations together |
Selection Logic:
- Start with default catalogers (image or directory based)
- If tags are provided (without + or -), filter to only those tagged catalogers
- Remove any catalogers matching -<name-or-tag>
- Add any catalogers specified with +<name>
Note
Added catalogers (prefixed with +) are always included, regardless of other filters or removals.
--override-default-catalogers: Replace Defaults
Use this flag to completely replace Syft’s default cataloger selection. This bypasses the automatic image vs. directory behavior.
Syntax:
--override-default-catalogers <comma-separated-names-or-tags>
When to use:
- You need catalogers from both image and directory sets
- You want to use catalogers that aren’t in the default set
- You need precise control regardless of scan type
Warning
Overriding defaults can lead to incomplete or inaccurate results if you don’t include all necessary catalogers. Use --select-catalogers for most cases.
Examples by Use Case
Filtering to Specific Languages
Scan for only Python packages using defaults for your scan type:
syft <target> --select-catalogers python
Scan for only Java and Go packages:
syft <target> --select-catalogers java,go
Adding Catalogers
Use defaults and also include the SBOM cataloger (which finds embedded SBOMs):
syft <target> --select-catalogers +sbom-cataloger
Scan with defaults plus both SBOM and binary catalogers:
syft <target> --select-catalogers +sbom-cataloger,+binary-classifier-cataloger
Removing Catalogers
Use defaults but exclude all RPM-related catalogers:
syft <target> --select-catalogers -rpm
Scan with defaults but remove Java JAR cataloger specifically:
syft <target> --select-catalogers -java-archive-cataloger
Combining Operations
Scan for Go packages, always include SBOM cataloger, but exclude binary analysis:
syft <container-image> --select-catalogers go,+sbom-cataloger,-binary
# Result: go-module-binary-cataloger, sbom-cataloger
# (binary cataloger excluded even though it's in go tag)
Filter to Java, add POM cataloger, remove Gradle:
syft <directory> --select-catalogers java,+java-pom-cataloger,-gradle
Complete Override Examples
Use only binary analysis catalogers regardless of scan type:
syft <target> --override-default-catalogers binary
# Result: binary-cataloger, cargo-auditable-binary-cataloger,
# dotnet-portable-executable-cataloger, go-module-binary-cataloger
Use exactly two specific catalogers:
syft <target> --override-default-catalogers go-module-binary-cataloger,go-module-file-cataloger
Use all directory catalogers even when scanning an image:
syft <container-image> --override-default-catalogers directory
Troubleshooting
My language isn’t being detected
Check which catalogers ran and whether they found packages:
# See which catalogers were used
syft <target> -o json | jq '.descriptor.configuration.catalogers.used'
# See which catalogers found packages
syft <target> -o json | jq '.artifacts[].foundBy'
# See packages found by a specific cataloger
syft <target> -o json | jq '.artifacts[] | select(.foundBy == "python-package-cataloger") | .name'
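To see the shape of that foundBy filter in isolation, here it is run against a stubbed artifacts fragment (the package names are invented for illustration; real input would come from `syft <target> -o json`):

```shell
# Filter packages by the cataloger that found them, using a stubbed
# syft-json fragment with only the fields the filter reads.
sbom='{"artifacts":[
  {"name":"flask","foundBy":"python-package-cataloger"},
  {"name":"musl","foundBy":"apk-db-cataloger"}]}'
echo "$sbom" | jq -r '.artifacts[] | select(.foundBy == "python-package-cataloger") | .name'
# prints: flask
```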
If your expected cataloger isn’t in the used list:
- Verify the cataloger exists for your scan type: use syft cataloger list --select-catalogers <tag> to preview
- Check your selection expressions: you may have excluded it with - or not included it in your filter
- Check file locations: some catalogers look for specific paths (e.g., site-packages for Python)
If the cataloger ran but found nothing, check that:
- Package files exist in the scanned source
- Files are properly formatted
- Files are in the expected locations for that cataloger
How do I know if I’m using image or directory defaults?
Check the SBOM’s cataloger configuration:
syft <target> -o json | jq '.descriptor.configuration.catalogers.requested'
This shows the selection strategy used:
"default": ["image", "file"]indicates image defaults"default": ["directory", "file"]indicates directory defaults
What’s the difference between a name and a tag?
- Name: The unique identifier for a single cataloger (e.g., python-package-cataloger)
- Tag: A label that groups multiple catalogers (e.g., python includes both python-package-cataloger and python-installed-package-cataloger)
Use tags when you want to downselect from the default catalogers, and names when you need to target a specific cataloger.
Why use --select-catalogers vs --override-default-catalogers?
- --select-catalogers: Respects Syft’s automatic image/directory behavior; safer for most use cases
- --override-default-catalogers: Ignores scan type, gives complete control, requires more knowledge
When in doubt, use --select-catalogers.
Technical Reference
For reference, here’s the formal logic Syft uses for cataloger selection:
image_catalogers = all_catalogers AND catalogers_tagged("image")
directory_catalogers = all_catalogers AND catalogers_tagged("directory")
default_catalogers = image_catalogers OR directory_catalogers
sub_selected_catalogers = default_catalogers INTERSECT catalogers_tagged(TAG) [ UNION sub_selected_catalogers ... ]
base_catalogers = default_catalogers OR sub_selected_catalogers
final_set = (base_catalogers SUBTRACT removed_catalogers) UNION added_catalogers
This logic applies when using --select-catalogers. The --override-default-catalogers flag bypasses the default cataloger selection entirely and starts with the specified catalogers instead.
Next steps
Continue the guide
Next: Learn about File Selection to control which files and directories Syft scans during cataloging.
Additional resources:
- Reference: See the ecosystem capabilities for detailed information about package detection and vulnerability matching
- Configuration: Check configuration options for persistent cataloger settings
- Filter files: Use File Selection to exclude irrelevant paths before cataloging
1.6 - File Selection
TL;DR
- By default, Syft includes information about files owned by packages in the SBOM
- Select which files to include: file.metadata.selection can be one of all, none, or owned-by-package
- Exclude paths and globs: --exclude '**/node_modules/**'
By default, Syft catalogs file details and digests for files owned by discovered packages. You can change this behavior using the SYFT_FILE_METADATA_SELECTION environment variable or the file.metadata.selection configuration option.
Available options:
- all: capture all files from the search space
- owned-by-package: capture only files owned by packages (default)
- none: disable file information capture
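In configuration-file form, this might look like the following sketch of a .syft.yaml fragment (the same setting is also available via the SYFT_FILE_METADATA_SELECTION environment variable, as noted above):

```yaml
# .syft.yaml — capture file metadata for every file in the search space
file:
  metadata:
    selection: all
```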
Excluding file paths
You can exclude specific files and paths from scanning using glob patterns with the --exclude parameter. Use multiple --exclude flags to specify multiple patterns.
# Exclude a specific directory
syft <source> --exclude /etc
# Exclude files by pattern
syft <source> --exclude './out/**/*.json'
# Combine multiple exclusions
syft <source> --exclude './out/**/*.json' --exclude /etc --exclude '**/*.log'
Tip
Always wrap glob patterns in single quotes to prevent your shell from expanding wildcards:
syft <source> --exclude '**/*.json' # Correct
syft <source> --exclude **/*.json # May not work as expected
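To see the difference the quotes make, compare what actually reaches the command after shell expansion. This is demonstrated with echo in a scratch directory; the same applies to syft’s --exclude argument:

```shell
# The shell expands unquoted globs against the current directory before
# the command ever runs; quoting passes the pattern through literally.
cd "$(mktemp -d)" && touch a.json b.json
echo --exclude *.json     # expanded by the shell: --exclude a.json b.json
echo --exclude '*.json'   # passed literally:      --exclude *.json
```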
Exclusion behavior by source type
How Syft interprets exclusion patterns depends on whether you’re scanning an image or a directory.
Image scanning
When scanning container images, Syft scans the entire filesystem. Use absolute paths for exclusions:
# Exclude system directories
syft alpine:latest --exclude /etc --exclude /var
# Exclude files by pattern across entire filesystem
syft alpine:latest --exclude '/usr/**/*.txt'
Directory scanning
When scanning directories, Syft resolves exclusion patterns relative to the specified directory. All exclusion patterns must begin with ./, */, or **/.
# Scanning /usr/foo
syft /usr/foo --exclude ./package.json # Excludes /usr/foo/package.json
syft /usr/foo --exclude '**/package.json' # Excludes all package.json files under /usr/foo
syft /usr/foo --exclude './out/**' # Excludes everything under /usr/foo/out
Path prefix requirements for directory scans:
| Pattern | Meaning | Example |
|---|---|---|
| ./ | Relative to scan directory root | ./config.json |
| */ | One level of directories | */temp |
| **/ | Any depth of directories | **/node_modules |
Note
When scanning directories, you cannot use absolute paths like /etc or /usr/**/*.txt. The pattern must begin with ./, */, or **/ to be resolved relative to your specified scan directory.
Common exclusion patterns
# Exclude all JSON files
syft <source> --exclude '**/*.json'
# Exclude build output directories
syft <source> --exclude '**/dist/**' --exclude '**/build/**'
# Exclude dependency directories
syft <source> --exclude '**/node_modules/**' --exclude '**/vendor/**'
# Exclude test files
syft <source> --exclude '**/*_test.go' --exclude '**/test/**'
FAQ
Why is my exclusion pattern not working?
Common issues:
- Missing quotes: wrap patterns in single quotes to prevent shell expansion ('**/*.json' not **/*.json)
- Wrong path prefix: directory scans require a ./, */, or **/ prefix; absolute paths like /etc won’t work
- Pattern syntax: use glob syntax, not regex (e.g., **/*.txt not .*\.txt)
What’s the difference between owned-by-package and all file metadata?
- owned-by-package (default): Only catalogs files that belong to discovered packages (e.g., files in an RPM’s file manifest)
- all: Catalogs every file in the scan space, which significantly increases SBOM size and scan time
Use all when you need complete file listings for compliance or audit purposes.
Can I exclude directories based on .gitignore?
Not directly, but you can convert .gitignore patterns to --exclude flags. Note that .gitignore syntax differs from glob patterns, so you may need to adjust patterns (e.g., node_modules/ becomes **/node_modules/**).
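As a rough sketch of that conversion for the simple directory-entry case (real .gitignore syntax has negations, anchored paths, and wildcards that need more handling than this):

```shell
# Turn simple .gitignore directory entries (e.g. "node_modules/") into
# quoted --exclude globs; a sketch only, not a full .gitignore parser.
printf 'node_modules/\nvendor/\n' |
  sed -e 's#/$##' -e "s#^#--exclude '**/#" -e "s#\$#/**'#"
# prints:
# --exclude '**/node_modules/**'
# --exclude '**/vendor/**'
```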
Do exclusions affect package detection?
Yes! If you exclude a file that a cataloger needs (like package.json or requirements.txt), Syft won’t detect packages from that file. Exclude carefully to avoid missing dependencies.
Next steps
Continue the guide
Next: Learn about Using Templates to create custom SBOM output formats tailored to your specific needs.
Additional resources:
- Configure catalogers: See Package Catalogers to control which package types are detected
- Configuration file: Use Configuration to set persistent exclusion patterns
- Scan target types: Review Supported Scan Targets to understand scanning behavior for different scan target types
1.7 - Using Templates
TL;DR
- Create custom formats: syft <image> -o template -t ./template.tmpl
- Templates receive the same data as JSON output (explore with syft <image> -o json)
- Supports Sprig helper functions
Syft lets you define custom output formats using Go templates. This is useful for generating custom reports, integrating with specific tools, or extracting only the data you need.
How to use templates
Set the output format to template and specify the template file path:
syft <image> -o template -t ./path/to/custom.tmpl
You can also configure the template path in your configuration file:
# .syft.yaml
format:
template:
path: "/path/to/template.tmpl"
Available fields
Templates receive the same data structure as the syft-json output format. The Syft JSON schema is the source of truth for all available fields and their structure.
To see what data is available:
# View the full JSON structure
syft <image> -o json
# Explore specific fields
syft <image> -o json | jq '.artifacts[0]'
Key fields commonly used in templates:
- .artifacts: Array of discovered packages
- .files: Array of discovered files
- .source: Information about what was scanned
- .distro: Detected Linux distribution (if applicable)
- .descriptor: Syft version and configuration
Common package (artifact) fields:
- .name, .version, .type: Basic package info
- .licenses: License information (array)
- .purl: Package URL
- .cpes: Common Platform Enumerations
- .locations: Where the package was found
Template functions
Syft templates support:
- Go template built-ins - See the Go template documentation
- Sprig functions - Additional helpers from Sprig
- Syft-specific functions:
| Function | Arguments | Description |
|---|---|---|
| getLastIndex | collection | Returns the last index of a slice (length - 1), useful for comma-separated lists |
| hasField | obj, field | Checks if a field exists on an object, returns boolean |
Examples
The following examples show template source code and the rendered output when run against alpine:3.9.2:
CSV output
"Package","Version","Type","Found by"
{{- range .artifacts}}
"{{.name}}","{{.version}}","{{.type}}","{{.foundBy}}"
{{- end}}
"Package","Version","Type","Found by"
"alpine-baselayout","3.1.0-r3","apk","apk-db-cataloger"
"alpine-keys","2.1-r1","apk","apk-db-cataloger"
"apk-tools","2.10.3-r1","apk","apk-db-cataloger"
"busybox","1.29.3-r10","apk","apk-db-cataloger"
"ca-certificates-cacert","20190108-r0","apk","apk-db-cataloger"
"libc-utils","0.7.1-r0","apk","apk-db-cataloger"
"libcrypto1.1","1.1.1a-r1","apk","apk-db-cataloger"
"libssl1.1","1.1.1a-r1","apk","apk-db-cataloger"
"libtls-standalone","2.7.4-r6","apk","apk-db-cataloger"
"musl","1.1.20-r3","apk","apk-db-cataloger"
"musl-utils","1.1.20-r3","apk","apk-db-cataloger"
"scanelf","1.2.3-r0","apk","apk-db-cataloger"
"ssl_client","1.29.3-r10","apk","apk-db-cataloger"
"zlib","1.2.11-r1","apk","apk-db-cataloger"
Filter by package type
{{range .artifacts}}
{{- if eq .type "apk"}}
{{.name}}@{{.version}}{{end}}
{{- end}}
alpine-baselayout@3.1.0-r3
alpine-keys@2.1-r1
apk-tools@2.10.3-r1
busybox@1.29.3-r10
ca-certificates-cacert@20190108-r0
libc-utils@0.7.1-r0
libcrypto1.1@1.1.1a-r1
libssl1.1@1.1.1a-r1
libtls-standalone@2.7.4-r6
musl@1.1.20-r3
musl-utils@1.1.20-r3
scanelf@1.2.3-r0
ssl_client@1.29.3-r10
zlib@1.2.11-r1
Markdown report
# SBOM Report: {{.source.metadata.userInput}}
Scanned: {{.source.name}}:{{.source.version}} ({{.source.type}})
{{- if .distro}}
Distribution: {{.distro.prettyName}}
{{- end}}
## Packages ({{len .artifacts}})
| Package | Version | Type |
|---------|---------|------|
{{- range .artifacts}}
| {{.name}} | {{.version}} | {{.type}} |
{{- end}}
# SBOM Report: alpine:3.9.2
Scanned: alpine:3.9.2 (image)
Distribution: Alpine Linux v3.9
## Packages (14)
| Package | Version | Type |
| ---------------------- | ----------- | ---- |
| alpine-baselayout | 3.1.0-r3 | apk |
| alpine-keys | 2.1-r1 | apk |
| apk-tools | 2.10.3-r1 | apk |
| busybox | 1.29.3-r10 | apk |
| ca-certificates-cacert | 20190108-r0 | apk |
| libc-utils | 0.7.1-r0 | apk |
| libcrypto1.1 | 1.1.1a-r1 | apk |
| libssl1.1 | 1.1.1a-r1 | apk |
| libtls-standalone | 2.7.4-r6 | apk |
| musl | 1.1.20-r3 | apk |
| musl-utils | 1.1.20-r3 | apk |
| scanelf | 1.2.3-r0 | apk |
| ssl_client | 1.29.3-r10 | apk |
| zlib | 1.2.11-r1 | apk |
License compliance
{{range .artifacts}}
{{- if .licenses}}
{{.name}}: {{range .licenses}}{{.value}} {{end}}{{end}}
{{- end}}
alpine-baselayout: GPL-2.0
alpine-keys: MIT
apk-tools: GPL2
busybox: GPL-2.0
ca-certificates-cacert: GPL-2.0-or-later MPL-2.0
libc-utils: BSD
libcrypto1.1: OpenSSL
libssl1.1: OpenSSL
libtls-standalone: ISC
musl: MIT
musl-utils: BSD GPL2+ MIT
scanelf: GPL-2.0
ssl_client: GPL-2.0
zlib: zlib
Custom JSON subset
{
"scanned": "{{.source.metadata.userInput}}",
"packages": [
{{- $last := sub (len .artifacts) 1}}
{{- range $i, $pkg := .artifacts}}
{"name": "{{$pkg.name}}", "version": "{{$pkg.version}}"}{{if ne $i $last}},{{end}}
{{- end}}
]
}
{
"scanned": "alpine:3.9.2",
"packages": [
{ "name": "alpine-baselayout", "version": "3.1.0-r3" },
{ "name": "alpine-keys", "version": "2.1-r1" },
{ "name": "apk-tools", "version": "2.10.3-r1" },
{ "name": "busybox", "version": "1.29.3-r10" },
{ "name": "ca-certificates-cacert", "version": "20190108-r0" },
{ "name": "libc-utils", "version": "0.7.1-r0" },
{ "name": "libcrypto1.1", "version": "1.1.1a-r1" },
{ "name": "libssl1.1", "version": "1.1.1a-r1" },
{ "name": "libtls-standalone", "version": "2.7.4-r6" },
{ "name": "musl", "version": "1.1.20-r3" },
{ "name": "musl-utils", "version": "1.1.20-r3" },
{ "name": "scanelf", "version": "1.2.3-r0" },
{ "name": "ssl_client", "version": "1.29.3-r10" },
{ "name": "zlib", "version": "1.2.11-r1" }
]
}
Executable file digests
{{range .files -}}
{{- if .executable}}
{{.location.path}}: {{range .digests}}{{if eq .algorithm "sha256"}}{{.value}}{{end}}{{end}}
{{end}}
{{- end}}
/bin/busybox: 2c1276c3c02ccec8a0e1737d3144cdf03db883f479c86fbd9c7ea4fd9b35eac5
/lib/ld-musl-aarch64.so.1: 0132814479f1acc1e264ef59f73fd91563235897e8dc1bd52765f974cde382ca
/lib/libcrypto.so.1.1: 6c597c8ad195eeb7a9130ad832dfa4cbf140f42baf96304711b2dbd43ba8e617
/lib/libssl.so.1.1: fb72f4615fb4574bd6eeabfdb86be47012618b9076d75aeb1510941c585cae64
/lib/libz.so.1.2.11: 19e790eb36a09eba397b5af16852f3bea21a242026bbba3da7b16442b8ba305b
/sbin/apk: 22d7d85bd24923f1f274ce765d16602191097829e22ac632748302817ce515d8
/sbin/mkmntdirs: a14a5a28525220224367616ef46d4713ef7bd00d22baa761e058e8bdd4c0af1b
/usr/bin/getconf: 82bcde66ead19bc3b9ff850f66c2dbf5eaff36d481f1ec154100f73f6265d2ef
/usr/bin/getent: 53ffb508150e91838d795831e8ecc71f2bc3a7db036c6d7f9512c3973418bb5e
/usr/bin/iconv: 1c99d1f4edcb8da6db1da60958051c413de45a4c15cd3b7f7285ed87f9a250ff
/usr/bin/scanelf: 908da485ad2edea35242f8989c7beb9536414782abc94357c72b7d840bb1fda2
/usr/bin/ssl_client: 67ab7f3a1ba35630f439d1ca4f73c7d95f8b7aa0e6f6db6ea1743f136f074ab4
/usr/lib/engines-1.1/afalg.so: ea7c2f48bc741fd828d79a304dbf713e20e001c0187f3f534d959886af87f4af
/usr/lib/engines-1.1/capi.so: b461ed43f0f244007d872e84760a446023b69b178c970acf10ed2666198942c6
/usr/lib/engines-1.1/padlock.so: 0ccb04f040afb0216da1cea2c1db7a0b91d990ce061e232782aedbd498483649
/usr/lib/libtls-standalone.so.1.0.0: 7f4c2ff4010e30a69f588ab4f213fdf9ce61a524a0eecd3f5af31dc760e8006c
Find binaries importing a library
{{range .files -}}
{{- if .executable}}
{{- $path := .location.path}}
{{- range .executable.importedLibraries}}
{{- if eq . "libcrypto.so.1.1"}}
{{$path}}
{{break}}
{{- end}}
{{- end}}
{{- end}}
{{- end}}
/lib/libssl.so.1.1
/sbin/apk
/usr/lib/engines-1.1/afalg.so
/usr/lib/libtls-standalone.so.1.0.0
Troubleshooting
“can’t evaluate field” errors: The field doesn’t exist or is misspelled. Check field names with syft <image> -o json | jq.
Empty output: Verify your field paths are correct. Use syft <image> -o json to see the actual data structure.
Template syntax errors: Refer to the Go template documentation for syntax help.
Note
If you have templates from before Syft v0.102.0 that no longer work, set format.template.legacy: true in your configuration. This uses internal Go structs instead of the JSON output schema.
Long-term support for this legacy option is not guaranteed.
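In configuration-file form, the legacy option from the note above might look like this .syft.yaml fragment:

```yaml
# .syft.yaml — render templates against legacy internal Go structs
# (pre-v0.102.0 behavior)
format:
  template:
    legacy: true
```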
Next steps
Continue the guide
Next: Learn about Format Conversion to convert existing SBOMs between different formats without re-scanning.
Additional resources:
- Template syntax: See Go template documentation for syntax reference
- Helper functions: Browse Sprig function documentation for available helpers
- Query with jq: Check Working with Syft JSON for query examples to use in templates
- Configuration: See Configuration options for persistent template settings
1.8 - Format Conversion
Experimental Feature
This feature is experimental and may change in future releases.
TL;DR
- Convert from Syft JSON to other SBOM formats: syft convert <sbom-file> -o <format>
- Best practice: keep Syft JSON as the source, convert to SPDX/CycloneDX as needed
- Avoid chaining conversions (e.g., SPDX → CycloneDX)
The ability to convert existing SBOMs means you can create SBOMs in different formats quickly, without the need to regenerate the SBOM from scratch, which may take significantly more time.
syft convert <ORIGINAL-SBOM-FILE> -o <NEW-SBOM-FORMAT>[=<NEW-SBOM-FILE>]
We support formats with wide community usage and good encode/decode support in Syft. The supported formats are:
- Syft JSON (-o json)
- SPDX JSON (-o spdx-json)
- SPDX tag-value (-o spdx-tag-value)
- CycloneDX JSON (-o cyclonedx-json)
- CycloneDX XML (-o cyclonedx-xml)
Conversion example:
syft alpine:latest -o syft-json=sbom.syft.json # generate a syft SBOM
syft convert sbom.syft.json -o cyclonedx-json=sbom.cdx.json # convert it to CycloneDX
Best practices
Use Syft JSON as the source format
Generate and keep Syft JSON as your primary SBOM. Convert from it to other formats as needed:
# Generate Syft JSON (native format with complete data)
syft <source> -o json=sbom.json
# Convert to other formats
syft convert sbom.json -o spdx-json=sbom.spdx.json
syft convert sbom.json -o cyclonedx-json=sbom.cdx.json
Converting between non-Syft formats loses data. Syft JSON contains all information Syft extracted, while other formats use different schemas that can’t represent the same fields.
Learn more
Learn more about working with Syft’s native format in the Working with Syft JSON guide.
What gets preserved
Data Loss During Conversion
Converting between formats may lose data. Packages (names, versions, licenses) transfer reliably, while tool metadata, source details, and format-specific fields may not. Use Syft JSON as the source format to minimize data loss.
Conversions from Syft JSON to SPDX or CycloneDX preserve all standard SBOM fields. Converted output matches directly-generated output (only timestamps and IDs differ).
Avoid chaining conversions (e.g., SPDX → CycloneDX). Each step may lose format-specific data.
Reliably preserved across conversions:
- Package names, versions, and PURLs
- License information
- CPEs and external references
- Package relationships
May be lost in conversions:
- Tool configuration and cataloger information
- Source metadata (image manifests, layers, container config)
- File location details and layer attribution
- Package-manager-specific metadata (git commits, checksums, provides/dependencies)
- Distribution details
When to convert vs regenerate
Convert from Syft JSON when:
- You need multiple formats for different tools
- The original source is unavailable
- Scanning takes significant time
Regenerate from source when:
- You need complete format-specific data
- Conversion output is missing critical information
FAQ
Can I convert from SPDX to CycloneDX?
Yes, but it’s not recommended. Converting between non-Syft formats loses data with each conversion. If you have the original Syft JSON or can re-scan the source, that’s a better approach.
Why is some data missing after conversion?
Different SBOM formats have different schemas with different capabilities. SPDX and CycloneDX can’t represent all Syft metadata. Converting from Syft JSON to standard formats works best; converting between standard formats loses more data.
Is conversion faster than re-scanning?
Yes, significantly. Conversion takes milliseconds while scanning can take seconds to minutes depending on source size. This makes conversion ideal for CI/CD pipelines that need multiple formats.
Can I convert back to Syft JSON from SPDX?
Yes, but you’ll lose Syft-specific metadata that doesn’t exist in SPDX (like cataloger information, layer details, and file metadata). The result won’t match the original Syft JSON.
Which format versions are supported?
See the Output Formats guide for supported versions of each format. Syft converts to the latest version by default, but you can specify older versions (e.g., -o spdx-json@2.2).
Next steps
Continue the guide
Next: Explore Attestation to learn how to sign and verify your SBOMs for supply chain security.
Additional resources:
- Source format: See Working with Syft JSON to understand the source format
- Available formats: Check Output Formats for all supported SBOM formats
- Direct generation: Learn about generating formats directly in Getting Started
1.9 - Attestation
Experimental Feature
This feature is experimental and may change in future releases.
TL;DR
- Sign SBOMs: syft attest --output cyclonedx-json <image> (keyless via OIDC)
- Or with keys: syft attest --key cosign.key --output spdx-json <image>
- Requires cosign ≥ v1.12.0 and registry write access
- Verify with: cosign verify-attestation
- Attestations attach to images in OCI registries
Overview
An attestation is cryptographic proof that you created a specific SBOM for a container image. When you publish an image, consumers need to trust that the SBOM accurately describes the image contents. Attestations solve this by letting you sign SBOMs and attach them to images, enabling consumers to verify authenticity.
Syft supports two approaches:
- Keyless attestation: Uses your identity (GitHub, Google, Microsoft) as trust root via Sigstore. Best for CI/CD and teams.
- Local key attestation: Uses cryptographic key pairs you manage. Best for air-gapped environments or specific security requirements.
Prerequisites
Before creating attestations, ensure you have:
- Syft installed
- Cosign ≥ v1.12.0 installed (installation guide)
- Write access to the OCI registry where you’ll publish attestations
- Registry authentication configured (e.g., docker login for Docker Hub)
For local key attestations, you’ll also need a key pair. Generate one with:
cosign generate-key-pair
This creates cosign.key (private key) and cosign.pub (public key). Keep the private key secure.
Keyless attestation
Keyless attestation uses Sigstore to tie your OIDC identity (GitHub, Google, or Microsoft account) to the attestation. This eliminates key management overhead.
Create a keyless attestation
syft attest --output cyclonedx-json <IMAGE>
Replace <IMAGE> with your image reference (e.g., docker.io/myorg/myimage:latest). You must have write access to this image.
What happens:
- Syft opens your browser to authenticate via OIDC (GitHub, Google, or Microsoft)
- After authentication, Syft generates the SBOM
- Sigstore signs the SBOM using your identity
- The attestation is uploaded to the OCI registry alongside your image
Verify a keyless attestation
Anyone can verify the attestation using cosign:
COSIGN_EXPERIMENTAL=1 cosign verify-attestation <IMAGE>
Successful output shows:
- Attestation claims are validated
- Claims exist in the Sigstore transparency log
- Certificates verified against Fulcio (Sigstore’s certificate authority)
- Certificate subject (your identity email)
- Certificate issuer (identity provider URL)
Example:
Certificate subject: user@example.com
Certificate issuer URL: https://accounts.google.com
This proves the attestation was created by the specified identity.
Local key attestation
Local key attestation uses cryptographic key pairs you manage. You sign attestations with your private key, and consumers verify with your public key.
Create a key-based attestation
Generate the attestation and save it locally:
syft attest --output spdx-json --key cosign.key docker.io/myorg/myimage:latest > attestation.json
The output is a DSSE envelope containing an in-toto statement with your SBOM as the predicate.
Attach the attestation to your image
Use cosign to attach the attestation:
cosign attach attestation --attestation attestation.json docker.io/myorg/myimage:latest
You need write access to the image registry for this to succeed.
Verify a key-based attestation
Consumers verify using your public key:
cosign verify-attestation --key cosign.pub --type spdxjson docker.io/myorg/myimage:latest
Successful output shows:
Verification for docker.io/myorg/myimage:latest --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- The signatures were verified against the specified public key
- Any certificates were verified against the Fulcio roots.
To extract and view the SBOM:
cosign verify-attestation --key cosign.pub --type spdxjson docker.io/myorg/myimage:latest | \
jq '.payload | @base64d | fromjson | .predicate'
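The decode step can be exercised on its own with a stubbed envelope, built here with base64. A real envelope comes from cosign verify-attestation and carries an SPDX or CycloneDX document as the predicate; the statement content below is invented:

```shell
# Build a DSSE-style envelope with a base64-encoded in-toto statement,
# then pull a field of the predicate back out.
statement='{"_type":"https://in-toto.io/Statement/v0.1","predicate":{"name":"demo"}}'
payload=$(printf '%s' "$statement" | base64 | tr -d '\n')
printf '{"payload":"%s"}\n' "$payload" |
  jq -r '.payload | @base64d | fromjson | .predicate.name'
# prints: demo
```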
Use with vulnerability scanning
Pipe the verified SBOM directly to Grype for vulnerability analysis:
cosign verify-attestation --key cosign.pub --type spdxjson docker.io/myorg/myimage:latest | \
jq '.payload | @base64d | fromjson | .predicate' | \
grype
This ensures you’re scanning a verified, trusted SBOM.
Troubleshooting
Authentication failures
- Ensure you’re logged into the registry: docker login <registry>
- Verify you have write access to the image repository
Cosign version errors
- Check your installed version with cosign version and update to cosign ≥ v1.12.0 if needed
Verification failures
- For keyless: ensure COSIGN_EXPERIMENTAL=1 is set
- For key-based: verify you’re using the correct public key
- Check that the attestation type matches (--type spdxjson or --type cyclonedx-json)
Permission denied uploading attestations
- Verify write access to the registry
- Check authentication credentials are current
- Ensure the image exists in the registry before attaching attestations
Next steps
Guide complete!
Now let’s put those SBOMs to good use and scan with Grype to understand your exposure to vulnerabilities.
Continue your journey:
- Scan for vulnerabilities: Use Grype to find security issues in your SBOMs
- Check licenses: Analyze open source licenses with Grant
- Reference documentation: Explore Syft CLI reference for all available commands and options
- Configure Syft: See Configuration for advanced settings and persistent configuration
Key pages to revisit:
- Getting Started - Quick start and installation
- Supported Scan Targets - All scanning capabilities
- Output Formats - SBOM format options
- Working with Syft JSON - Query and extract data
2 - Vulnerability Scanning
Vulnerability scanning is the automated process of proactively identifying security weaknesses and known exploits within software and systems. This is crucial because it helps developers and organizations find and fix potential security holes before malicious actors can discover and exploit them, thus protecting data and maintaining system integrity.
Grype is an open-source vulnerability scanner specifically designed to analyze container images and filesystems. It works by comparing the software components it finds against a database of known vulnerabilities, providing a report of potential risks so they can be addressed.
2.1 - Getting Started
What is Vulnerability Scanning?
Vulnerability scanning is the process of identifying known security vulnerabilities in software packages and dependencies.
For developers, it helps catch security issues early in development, before they reach production.
For organizations, it’s essential for maintaining security posture and meeting compliance requirements.
Grype is a CLI tool for scanning container images, filesystems, and SBOMs for known vulnerabilities.
Installation
Grype is provided as a single compiled executable and requires no external dependencies to run. Run the command for your platform to download the latest release.
curl -sSfL https://get.anchore.io/grype | sudo sh -s -- -b /usr/local/bin
brew install grype
nuget install Anchore.Grype
Check out the installation guide for the full list of official and community-maintained packaging options.
Scan a container image for vulnerabilities
Run grype against a small container image. Grype will download the latest vulnerability database
and output a simple human-readable table of the vulnerable packages it finds:
grype alpine:latest
✔ Loaded image alpine:latest
✔ Parsed image sha256:8d591b0b7dea080ea3be9e12ae563eebf9…
✔ Cataloged contents 058c92d86112aa6f641b01ed238a07a3885…
├── ✔ Packages [15 packages]
├── ✔ File metadata [82 locations]
├── ✔ File digests [82 files]
└── ✔ Executables [17 executables]
✔ Scanned for vulnerabilities [6 vulnerability matches]
├── by severity: 0 critical, 0 high, 0 medium, 6 low, 0 negligible
└── by status: 0 fixed, 6 not-fixed, 0 ignored
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
busybox 1.37.0-r12 apk CVE-2024-58251 Low
busybox 1.37.0-r12 apk CVE-2025-46394 Low
busybox-binsh 1.37.0-r12 apk CVE-2024-58251 Low
busybox-binsh 1.37.0-r12 apk CVE-2025-46394 Low
ssl_client 1.37.0-r12 apk CVE-2024-58251 Low
ssl_client 1.37.0-r12 apk CVE-2025-46394 Low
Learn more
Grype supports more than just containers. Learn more about Supported Scan Targets
Scan an existing SBOM for vulnerabilities
Grype can scan container images directly, but it can also scan an existing SBOM document.
Note
This presumes you already created alpine_latest-spdx.json using Syft, or some other tool. If not, go to SBOM Generation Getting Started and create it now.
grype alpine_latest-spdx.json
Create a vulnerability report in JSON format
The JSON-formatted output from Grype can be processed or visualized by other tools.
Create the vulnerability report using the --output flag:
grype alpine:latest --output json | jq . > vuln_report.json
While the JSON is piped to the file, you’ll see progress on stderr:
✔ Pulled image
✔ Loaded image alpine:latest
✔ Parsed image sha256:8d591b0b7dea080ea3be9e12ae563eebf9869168ffced1cb25b2470a3d9fe15e
✔ Cataloged contents 058c92d86112aa6f641b01ed238a07a3885b8c0815de3e423e5c5f789c398b45
├── ✔ Packages [15 packages]
├── ✔ File digests [82 files]
├── ✔ Executables [17 executables]
└── ✔ File metadata [82 locations]
✔ Scanned for vulnerabilities [6 vulnerability matches]
├── by severity: 0 critical, 0 high, 0 medium, 6 low, 0 negligible
└── by status: 0 fixed, 6 not-fixed, 0 ignored
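Grype’s JSON report contains a top-level matches array, where each entry carries a vulnerability object with a severity field (the structure is covered in detail in the Working with JSON section). A minimal Python sketch that tallies a report by severity — the inline report dict is an abbreviated stand-in for vuln_report.json, which you would load with json.load():

```python
import json
from collections import Counter

def count_by_severity(report: dict) -> Counter:
    """Tally the matches in a Grype JSON report by severity."""
    return Counter(m["vulnerability"]["severity"] for m in report["matches"])

# Abbreviated stand-in for vuln_report.json; load the real file with json.load().
report = {"matches": [
    {"vulnerability": {"id": "CVE-2024-58251", "severity": "Low"}},
    {"vulnerability": {"id": "CVE-2025-46394", "severity": "Low"}},
]}

print(count_by_severity(report))  # Counter({'Low': 2})
```

The same tally is also easy to get with jq (group_by over .matches), but a script is convenient when the counts feed into a CI gate.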
FAQ
Does Grype need internet access?
Only for downloading container images and the vulnerability database. After the initial database download, scanning works offline until you update the database.
What about private container registries?
Grype supports authentication for private registries. See Private Registries.
Can I use Grype in CI/CD pipelines?
Absolutely! Grype is designed for automation. Scan images or SBOMs during builds and fail pipelines based on severity thresholds.
What data does Grype send externally?
Nothing. Grype runs entirely locally and doesn’t send any data to external services.
Next steps
Continue the guide
Next: Learn about all the different Supported scan targets Grype can analyze – from container images to local directories and SBOMs.
Now that you’ve scanned for vulnerabilities, here are additional resources:
- Understand results: Learn how to interpret scan output
- Filter vulnerabilities: Use result filtering to focus on actionable findings
- Manage database: Keep your vulnerability database up to date
2.2 - Supported Scan Targets
TL;DR
- Grype automatically detects the scan target type; simply pass it as an argument: grype <target>
- Supports container images (Docker/Podman/Containerd/registries), directories, files, archives, and SBOMs
- Scan individual packages via PURL or CPE identifiers
- Use --from <type> to explicitly specify the scan target type
Grype can scan a variety of scan targets including container images, directories, files, archives, SBOMs, and individual packages. In most cases, you can simply point Grype at what you want to analyze and it will automatically detect and scan it correctly.
Scan a container image from your local daemon or a remote registry:
grype alpine:latest
Scan a directory or file:
grype /path/to/project
Scan an SBOM:
grype sbom.json
To explicitly specify the scan target type, use the --from flag:
| --from ARG | Description |
|---|---|
| docker | Use images from the Docker daemon |
| podman | Use images from the Podman daemon |
| containerd | Use images from the Containerd daemon |
| docker-archive | Use a tarball from disk for archives created from docker save |
| oci-archive | Use a tarball from disk for OCI archives |
| oci-dir | Read directly from a path on disk for OCI layout directories |
| singularity | Read directly from a Singularity Image Format (SIF) container file on disk |
| dir | Read directly from a path on disk (any directory) |
| file | Read directly from a path on disk (any single file) |
| registry | Pull image directly from a registry (bypass any container runtimes) |
| sbom | Read SBOM from file (supports Syft JSON, SPDX, CycloneDX formats) |
| purl | Scan individual packages via Package URL identifiers |
Instead of using the --from flag explicitly, you can:
- provide no hint and let Grype automatically detect the scan target type based on the input provided
- provide the scan target type as a URI scheme in the target argument (e.g., docker:alpine:latest, oci-archive:/path/to/image.tar, dir:/path/to/dir)
Learn more
Grype supports all the same scan targets as Syft (see Syft’s supported scan targets for the complete details).
In addition to Syft’s scan targets, Grype can scan:
- Pre-generated SBOMs in multiple formats (Syft JSON, SPDX, CycloneDX)
- Individual packages via Package URL (PURL) or Common Platform Enumeration (CPE) identifiers
Scan target-specific behaviors
With each kind of scan target, there are specific behaviors and defaults to be aware of.
For scan target capabilities that are inherited from Syft, please see the SBOM scan targets documentation.
For scan targets that are uniquely supported by Grype, see the sections below.
SBOM Scan Targets
You can scan pre-generated SBOMs instead of scanning the scan target directly. This approach offers several benefits:
- Faster scans since package cataloging is already complete
- Ability to cache and reuse SBOMs
- Standardized vulnerability scanning across different tools
Scan an SBOM file
Grype scans SBOM files in multiple formats. You can provide an explicit sbom: prefix or just provide the file path:
Explicit SBOM prefix:
grype sbom:sbom.json
Implicit detection:
grype sbom.json
Grype automatically detects the SBOM format. Supported formats include:
- Syft JSON
- SPDX JSON, XML, and tag-value
- CycloneDX JSON and XML
Use the explicit sbom: prefix when the file path might be ambiguous or when you want to be clear about the input type.
Scan an SBOM from stdin
You can pipe SBOM output directly from Syft or other SBOM generation tools:
Syft → Grype pipeline:
syft alpine:latest -o json | grype
Read SBOM from file via stdin:
Grype detects stdin input automatically when no command-line argument is provided and stdin is piped:
cat sbom.json | grype
Note
Grype will not attempt to read from redirected stdin in interactive terminal sessions when the use of a pipe is not detected.
Thus grype < sbom.json will not work in an interactive terminal session.
Package scan targets
You can scan specific packages without scanning an entire image or directory. This is useful for:
- Testing whether a specific package has vulnerabilities
- Lightweight vulnerability checks
- Compliance scanning for specific dependencies
Grype supports two formats for individual package scanning: Package URLs (PURLs) and Common Platform Enumerations (CPEs). When Grype receives input, it checks for PURL format first, then CPE format, before trying other scan target types.
Scan Package URLs (PURLs)
Package URLs (PURLs) provide a standardized way to identify software packages.
A PURL has this format:
pkg:<type>/<namespace>/<name>@<version>?<qualifiers>#<subpath>
Grype can take PURLs from the CLI or from a file.
For instance, to scan the python library urllib3 (version 1.26.7):
grype pkg:pypi/urllib3@1.26.7
You’ll see vulnerabilities for that specific package:
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
urllib3 1.26.7 1.26.17 python GHSA-v845-jxx5-vc9f High 0.9% (74th) 0.6
urllib3 1.26.7 1.26.19 python GHSA-34jh-p97f-mpxf Medium 0.1% (35th) < 0.1
urllib3 1.26.7 1.26.18 python GHSA-g4mx-q9vg-27p4 Medium < 0.1% (15th) < 0.1
urllib3 1.26.7 2.5.0 python GHSA-pq67-6m6q-mj2v Medium < 0.1% (4th) < 0.1
For operating system packages (apk, deb, rpm), use the distro qualifier to specify the distribution:
grype "pkg:apk/alpine/openssl@3.1.5-r0?distro=alpine-3.19"
grype "pkg:deb/debian/openssl@1.1.1w-0+deb11u1?distro=debian-11"
grype "pkg:rpm/redhat/openssl@1.0.2k-19.el7?distro=rhel-7"
Remember
Always quote PURL arguments to prevent shell expansion of special characters like ? and &.
You can specify distribution information with the --distro flag instead:
grype "pkg:rpm/redhat/openssl@1.0.2k-19.el7?arch=x86_64" --distro rhel:7
Without either the distro qualifier or the --distro flag hint, Grype may not find distribution-specific vulnerabilities.
Other qualifiers include:
- upstream: The upstream package name or version. Vulnerability information tends to be tracked against the source or origin package rather than the installed package itself (e.g., libcrypto might be installed, but the package it was built from is openssl, which is where vulnerabilities are attributed).
- epoch: The epoch value for RPM packages. This is necessary when the package in question has changed its versioning methodology (e.g., switching from date-based versions to semantic versions) and the epoch is used to indicate that change.
You can scan multiple packages from a file. The file contains one PURL per line:
# contents of packages.txt follow, which must be a text file with one PURL per line
pkg:npm/lodash@4.17.20
pkg:pypi/requests@2.25.1
pkg:maven/org.apache.commons/commons-lang3@3.12.0
grype ./packages.txt
Grype scans all the packages in the file:
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY
lodash 4.17.20 4.17.21 npm GHSA-35jh-r3h4-6jhm High
requests 2.25.1 2.31.0 python GHSA-j8r2-6x86-q33q Medium
commons-lang3 3.12.0 3.18.0 java-archive GHSA-j288-q9x7-2f5v Medium
...
Learn more
See the official Package URL types documentation for more details on supported package types.
Scan Common Platform Enumerations (CPEs)
Common Platform Enumeration (CPE) is an older identification format for software and hardware. You can scan using CPE format:
grype "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"
Grype supports multiple CPE formats:
# CPE 2.2 format (WFN URI binding)
grype "cpe:/a:apache:log4j:2.14.1"
# CPE 2.3 format (string binding)
grype "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"
Use CPE when:
- You’re working with legacy systems that use CPE identifiers
- You need to test for vulnerabilities in a specific CVE that references a CPE
- PURL format is not available for your package type
For most modern scanning workflows, PURL format is preferred because it provides better precision and ecosystem-specific information.
Learn more
See the CPE specification for more details on CPE formats and usage.
Next steps
Continue the guide
Next: Explore Supported ecosystems to understand how Grype selects vulnerability data for different package types.
Additional resources:
- Understand results: Learn how to interpret scan output
- Filter vulnerabilities: Use result filtering to focus on specific findings
- Private registries: Set up authentication for private images
2.3 - Supported package ecosystems
TL;DR
- OS packages use distribution-specific security feeds (Alpine, Debian, Ubuntu, RHEL, etc.)
- Language packages use GitHub Security Advisories (npm, PyPI, Maven, Go, etc.)
- Other packages fall back to CPE matching against NVD (may have false positives)
- Grype automatically selects the right data source based on package type
Grype automatically selects the right vulnerability data source based on the package type and distribution information in your SBOM. This guide explains how Grype chooses which vulnerability feed to use and what level of accuracy to expect.
How Grype chooses vulnerability data
Grype selects vulnerability feeds based on package type:
- OS packages (apk, deb, rpm, portage, alpm) use vulnerability data sourced from distribution-specific security feeds.
- Language packages (npm, PyPI, Maven, Go modules, etc.) use GitHub Security Advisories.
- Other packages (binaries, Homebrew, Jenkins plugins, etc.) fall back to CPE matching against the NVD.
OS packages
When Grype scans an OS package, it uses vulnerability data sourced from distribution security feeds. Distribution maintainers curate these feeds and provide authoritative information about vulnerabilities affecting specific distribution versions.
For example, when you scan Debian 10, Grype looks for vulnerabilities affecting Debian 10 packages:
$ grype debian:10
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY
libgcrypt20 1.8.4-5+deb10u1 (won't fix) deb CVE-2021-33560 High
bash 5.0-4 deb CVE-2019-18276 Negligible
libidn2-0 2.0.5-1+deb10u1 (won't fix) deb CVE-2019-12290 High
OS distributions
Grype supports major Linux distributions with dedicated vulnerability feeds, including Alpine, Debian, Ubuntu, RHEL, SUSE, and many others. Some distributions have mature security tracking programs that report both fixed and unfixed vulnerabilities, providing comprehensive coverage.
Derivative distributions automatically use their parent distribution’s vulnerability feed.
Grype maps derivative distributions to their upstream source using the ID_LIKE field from /etc/os-release.
For example, Rocky Linux and AlmaLinux use the RHEL vulnerability feed, while Raspbian uses Debian’s feed.
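The ID_LIKE field lives in the image’s /etc/os-release file as a simple key=value list. A simplified parser (not Grype’s actual implementation; the sample content below is an illustrative abbreviation of a Rocky Linux 9 os-release file):

```python
def parse_os_release(text: str) -> dict:
    """Parse key=value pairs from /etc/os-release content (simplified)."""
    fields = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    return fields

# Abbreviated /etc/os-release from a Rocky Linux 9 image (illustrative).
sample = '''NAME="Rocky Linux"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.3"'''

info = parse_os_release(sample)
print(info["ID_LIKE"].split())  # ['rhel', 'centos', 'fedora']
```

The ID_LIKE value is a space-separated list of parent distributions, which is what lets a scanner map rocky onto the RHEL feed without any user configuration.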
When scanning Rocky Linux, Grype uses Red Hat security data:
$ grype rockylinux:9 -o json | jq '.matches[0].matchDetails[0].searchedBy.distro'
{
"type": "rockylinux",
"version": "9.3"
}
The distro type shows rockylinux, but Grype searches the RHEL vulnerability feed automatically.
You don’t need to configure this mapping – it happens transparently based on the distribution’s ID_LIKE field.
Learn more
For a complete list of supported Linux distributions and their versions, see the OS support reference.
To learn more about data source capabilities, see the data sources reference.
Language packages
Language packages use vulnerability data from GitHub Security Advisories (GHSA). GitHub maintains security advisories for major package ecosystems, sourced from package maintainers, security researchers, and automated scanning.
When you scan a JavaScript package, Grype searches GHSA for npm advisories:
$ grype node:18-alpine
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY
cross-spawn 7.0.3 7.0.5 npm GHSA-3xgq-45jj-v275 High
Supported language ecosystems
Grype supports these language ecosystems through GHSA:
- Python (PyPI) - Python packages
- JavaScript (npm) - Node.js packages
- Java (Maven) - Java archives
- Go (modules) - Go modules
- PHP (Composer) - PHP packages
- .NET (NuGet) - .NET packages
- Dart (Pub) - Dart and Flutter packages
- Ruby (RubyGems) - Ruby gems
- Rust (Crates) - Rust crates
- Swift - Swift packages
- GitHub Actions - GitHub Actions workflow dependencies
For language packages, Grype searches GHSA by package name and version, applying ecosystem-specific version comparison rules to determine if your package version falls within the vulnerable range.
In addition to language packages, Bitnami packages are searched against Bitnami’s vulnerability feed in a similar manner.
Other packages
Packages without dedicated feeds use CPE fallback matching.
Packages using CPE matching
These package types rely on Common Platform Enumeration (CPE) matching against the National Vulnerability Database (NVD):
- Binary executables
- Homebrew packages
- Jenkins plugins
- Conda packages
- WordPress plugins
CPE matching constructs a CPE string from the package name and version, then searches the NVD for matching vulnerability entries.
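Following the CPE 2.3 string binding shown in the previous section, constructing such a string from a package name and version can be sketched as below (a simplified helper, not Grype’s actual implementation — real CPE binding also escapes special characters, and vendor names must often be guessed from package metadata):

```python
def make_cpe23(vendor: str, product: str, version: str) -> str:
    """Build an application CPE 2.3 string; remaining fields are wildcards.

    Simplified sketch: the real binding also escapes special characters.
    """
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

print(make_cpe23("apache", "log4j", "2.14.1"))
# cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*
```

The guessed vendor field is one reason CPE matching is imprecise: two unrelated products can end up with near-identical CPE strings.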
Understanding CPE match accuracy
CPE matching has important limitations:
- May produce false positives - CPEs often do not distinguish between package ecosystems. For example, the PyPI package docker (a Python library for talking to the Docker daemon) can match vulnerabilities for Docker the container runtime because they share similar CPE identifiers.
- May miss vulnerabilities - Not all vulnerabilities have CPE entries in the NVD.
- Requires CPE metadata - Packages must have CPE information for matching to work.
You should verify CPE matches against the actual vulnerability details to confirm they apply to your specific package. Here’s a CPE match example:
{
"matchDetails": [
{
"type": "cpe-match",
"searchedBy": {
"cpes": ["cpe:2.3:a:zlib:zlib:1.2.11:*:*:*:*:*:*:*"]
},
"found": {
"versionConstraint": "<= 1.2.12 (unknown)"
}
}
]
}
Notice the version constraint shows (unknown) format rather than ecosystem-specific semantics, and the match type is cpe-match instead of exact-direct-match.
For more details on interpreting match types, confidence levels, and result reliability, see Understanding Grype results.
Next steps
Continue the guide
Next: Learn how to understand Grype results to read scan output and assess match reliability.
Additional resources:
- Filter results: Use result filtering to focus on specific vulnerabilities
- Data sources: Explore vulnerability data sources for details on each data source and supported operating systems
2.4 - Understanding Grype results
TL;DR
- Default table output shows package, vulnerability, severity, and fix info
- Match types in JSON output indicate reliability: exact-direct-match and exact-indirect-match are high confidence; cpe-match requires verification
- Use --by-cve to normalize vulnerability IDs to CVE format
- Filter results with jq for analysis by match type, severity, or data source
This guide explains how to read and interpret Grype’s vulnerability scan output. You’ll learn what different match types mean, how to assess result reliability, and how to filter results based on confidence levels.
Output formats
Grype supports several output formats for scan results:
- Table (default) - Human-readable columnar output for terminal viewing
- JSON - Complete structured data with all match details
- SARIF - Standard format for tool integration and CI/CD pipelines
- Template - Custom output using Go templates
This guide focuses on table and JSON formats, which you’ll use most often for understanding scan results.
Reading table output
The table format is Grype’s default output. When you run grype <image>, you see a table displaying one row per
unique vulnerability match, with deduplication of identical rows.
Table columns
The table displays eight standard columns, with an optional ninth column for annotations:
- NAME - The package name
- INSTALLED - The version of the package
- FIXED-IN - The version that fixes the vulnerability (shows (won't fix) if the vendor won’t fix it, or empty if no fix is available). See Filter by fix availability to filter results based on fix states
- TYPE - Package type (apk, deb, rpm, npm, python, java-archive, etc.)
- VULNERABILITY - The vulnerability identifier (see below)
- SEVERITY - Vulnerability severity rating (Critical, High, Medium, Low, Negligible, Unknown)
- EPSS - Exploit Prediction Scoring System score and percentile showing the probability of exploitation
- RISK - Calculated risk score combining CVSS, EPSS, and other severity metrics into a single numeric value (0.0 to 10.0)
- Annotations (conditional) - Additional context like KEV (Known Exploited Vulnerability), suppressed status, or distribution version when scanning multi-distro images
Here’s what a typical scan looks like:
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY EPSS RISK
log4j-core 2.4.0 2.12.2 java-archive CVE-2021-44228 Critical 94.4% (99th) 100.0 (kev)
log4j-core 2.4.0 2.12.2 java-archive CVE-2021-45046 Critical 94.3% (99th) 99.0 (kev)
apk-tools 2.10.6-r0 2.10.7-r0 apk CVE-2021-36159 Critical 12% (85th) 8.5
libcrypto1.1 1.1.1k-r0 apk CVE-2021-3711 Critical 9% (78th) 9.1
libcrypto1.1 1.1.1k-r0 (won't fix) apk CVE-2021-3712 High 5% (62nd) 7.2
The Annotations column appears conditionally to provide additional context:
- KEV or (kev) - Indicates the vulnerability is in CISA’s Known Exploited Vulnerabilities catalog
- suppressed or suppressed by VEX - Shown when using the --show-suppressed flag (see View filtered results)
- Distribution version (e.g., ubuntu:20.04) - Shown when scan results include matches from multiple different distributions
Understanding vulnerability IDs
The VULNERABILITY column displays different types of identifiers depending on the data source:
- CVE IDs (e.g., CVE-2024-1234) - Common Vulnerabilities and Exposures identifiers used by most Linux distributions (Alpine, Debian, Ubuntu, RHEL, SUSE) and the NVD
- GHSA IDs (e.g., GHSA-xxxx-xxxx-xxxx) - GitHub Security Advisory identifiers for language ecosystem packages
- ALAS IDs (e.g., ALAS-2023-1234) - Amazon Linux Security Advisory identifiers
- ELSA IDs (e.g., ELSA-2023-12205) - Oracle Enterprise Linux Security Advisory identifiers
By default, Grype displays the vulnerability ID from the original data source. For example, an Alpine package might show CVE-2024-1234
while a GitHub Advisory for the same issue shows GHSA-abcd-1234-efgh. Use the --by-cve flag to normalize results to CVE identifiers:
grype <image> --by-cve
This flag replaces non-CVE vulnerability IDs with their related CVE ID when available, uses CVE metadata instead of the original advisory metadata, and makes it easier to correlate vulnerabilities across different data sources.
Compare the two approaches:
# Default output - shows GitHub Advisory ID
$ grype node:18
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
lodash 4.17.20 4.17.21 npm GHSA-35jh-r3h4-6jhm High
# With --by-cve - converts to CVE
$ grype node:18 --by-cve
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
lodash 4.17.20 4.17.21 npm CVE-2021-23337 High
Sorting results
By default, Grype sorts vulnerability results by risk score, which combines multiple factors to help you prioritize remediation efforts. Understanding how sorting works and when to use alternative methods helps you build effective security workflows.
Why risk-based sorting works best
The default risk score takes a holistic approach by combining:
- Threat (likelihood of exploitation) - Based on EPSS (Exploit Prediction Scoring System) scores or presence in CISA’s Known Exploited Vulnerabilities (KEV) catalog
- Impact (potential damage) - Based on CVSS scores and severity ratings from multiple sources
- Context (exploitation evidence) - Additional weight for vulnerabilities with known ransomware campaigns
This multi-factor approach aligns with security best practices recommended by the EPSS project, which emphasizes that “CVSS is a useful tool for capturing the fundamental properties of a vulnerability, but it needs to be used in combination with data-driven threat information, like EPSS.”
Risk-based sorting helps you focus on vulnerabilities that are both likely to be exploited AND have significant business impact, optimizing your remediation efficiency.
Why single-metric sorting can be misleading
While Grype offers several sorting options via the --sort-by flag, using single metrics can lead to inefficient prioritization:
Severity-only sorting (--sort-by severity) focuses solely on potential impact:
- You may waste effort patching Critical severity vulnerabilities that are unlikely to ever be exploited in the wild
- No consideration for whether attackers are actively targeting the vulnerability
- Ignores real-world threat intelligence
EPSS-only sorting (--sort-by epss) focuses solely on exploitation likelihood:
- You may prioritize vulnerabilities with high exploitation probability but low business impact
- EPSS is not a risk score – it only addresses the threat component, not the complete risk picture
- Missing context like asset criticality, network exposure, or available compensating controls
The EPSS documentation explicitly states that EPSS scores should be combined with severity information to make informed prioritization decisions, which is exactly what Grype’s risk score does.
Understanding EPSS in Grype
EPSS (Exploit Prediction Scoring System) is a data-driven scoring model that estimates the probability a vulnerability will be exploited in the next 30 days.
Grype displays EPSS data in the table output showing both the raw score and percentile, such as 94.4% (99th), which means:
- 94.4% - The raw EPSS score indicating a 94.4% probability of exploitation within 30 days
- 99th - The percentile rank, meaning this score is higher than 99% of all EPSS scores
EPSS percentiles help normalize the heavily skewed distribution of EPSS scores, making it easier to set thresholds. For example, a vulnerability in the 90th percentile is more concerning than one in the 50th percentile, even if the raw likelihood values appear to be similar.
Grype incorporates EPSS as the threat component of its risk calculation. When a vulnerability appears in the KEV catalog, Grype automatically treats it as maximum threat (overriding EPSS) since observed exploitation is more significant than predicted exploitation.
For more details on EPSS methodology and interpretation, see the EPSS model documentation.
When to use alternative sorting methods
While risk-based sorting is recommended for most remediation workflows, alternative sorting methods serve specific use cases:
Sort by KEV status (--sort-by kev):
- When you need to comply with regulatory requirements like CISA BOD 22-01
- For incident response scenarios focusing on actively exploited vulnerabilities
Sort by severity (--sort-by severity):
- When organizational SLAs or compliance frameworks specify severity-based remediation timeframes (e.g., “patch all Critical within 7 days”)
Sort by EPSS (--sort-by epss):
- For threat landscape analysis and security research
Sort by package (--sort-by package):
- When organizing remediation work by team ownership (different teams maintain different packages)
- For coordinating updates across multiple instances of the same package
Sort by vulnerability ID (--sort-by vulnerability):
- When tracking specific CVE campaigns across your environment
- For correlating findings with external threat intelligence reports
For most security and remediation workflows, stick with the default risk-based sorting. It provides the best balance of threat intelligence and impact assessment to help you prioritize effectively.
Next steps
Continue the guide
Next: Learn how to work with the JSON results.
Additional resources:
- Filter results: Use result filtering for severity thresholds and ignore rules
- Supported ecosystems: Understand data source selection for different package types
- Configuration: See Grype configuration reference for customizing behavior
2.5 - Working with JSON
Grype’s native JSON output format provides a comprehensive representation of vulnerability scan results, including detailed information about each vulnerability, how it was matched, and the affected packages. This guide explains the structure of the JSON output and how to interpret its contents effectively.
Data shapes
The JSON output contains a top-level matches array. Each match has this structure:
{
"matches": [
{
"vulnerability": { ... },
"relatedVulnerabilities": [ ... ],
"matchDetails": [ ... ],
"artifact": { ... }
}
]
}
Ultimately, matches are the core results of a Grype scan. Matches are composed of:
- vulnerability - Primary vulnerability information
- matchDetails - How Grype found the match
- artifact - The package/artifact that was matched against the vulnerability
Vulnerability fields
The vulnerability object contains the primary vulnerability information:
- id (string) - The vulnerability identifier (CVE, GHSA, ALAS, ELSA, etc.)
- dataSource (string) - URL to the vulnerability record in the data feed
- namespace (string) - The data source namespace (e.g., alpine:distro:alpine:3.10, debian:distro:debian:10, github:language:javascript, nvd:cpe)
- severity (string) - Severity rating from the data source
- urls (array) - Reference URLs for the vulnerability
- description (string) - Human-readable vulnerability description
- cvss (array) - CVSS score information from various sources
- fix (object) - Fix information including available versions and fix state (fixed, not-fixed, wont-fix, unknown). See Understanding fix states for details
- advisories (array) - Related security advisories (where RHSAs appear)
- risk (float64) - Calculated risk score combining CVSS, EPSS, and other severity metrics
A typical vulnerability object looks like:
{
"vulnerability": {
"id": "CVE-2021-36159",
"dataSource": "https://security.alpinelinux.org/vuln/CVE-2021-36159",
"namespace": "alpine:distro:alpine:3.10",
"severity": "Critical",
"urls": [],
"fix": {
"versions": ["2.10.7-r0"],
"state": "fixed"
},
"advisories": [],
"risk": 0.92
}
}
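The fix object above is handy for building upgrade lists. A minimal sketch that pulls (package, installed version, first fixed version) tuples out of a report, assuming each match’s artifact object carries name and version fields (the inline report dict is an abbreviated stand-in for real Grype output):

```python
def fixed_upgrades(report: dict) -> list:
    """List (package, installed, first fixed version) for matches with a fix."""
    upgrades = []
    for m in report["matches"]:
        fix = m["vulnerability"].get("fix", {})
        if fix.get("state") == "fixed" and fix.get("versions"):
            art = m["artifact"]  # assumes artifact has name/version fields
            upgrades.append((art["name"], art["version"], fix["versions"][0]))
    return upgrades

# Abbreviated stand-in for a real Grype JSON report.
report = {"matches": [{
    "vulnerability": {"id": "CVE-2021-36159",
                      "fix": {"versions": ["2.10.7-r0"], "state": "fixed"}},
    "artifact": {"name": "apk-tools", "version": "2.10.6-r0"},
}]}

print(fixed_upgrades(report))  # [('apk-tools', '2.10.6-r0', '2.10.7-r0')]
```

Filtering on fix.state this way skips wont-fix and not-fixed findings, which have no upgrade to act on.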
Match detail fields
The matchDetails array contains information about how Grype found the match. Each detail object includes:
- type (string) - Match type: exact-direct-match, exact-indirect-match, or cpe-match
- matcher (string) - The matcher that produced this result (e.g., apk-matcher, github-matcher, stock-matcher)
- found (object) - The specific attributes in the vulnerability data that matched
- fix (object) - Fix details specific to this match (may differ from vulnerability-level fix)
Here’s what matchDetails looks like:
{
"matchDetails": [
{
"type": "exact-direct-match",
"matcher": "apk-matcher",
"searchedBy": {
"distro": {
"type": "alpine",
"version": "3.10.9"
},
"package": {
"name": "apk-tools",
"version": "2.10.6-r0"
},
"namespace": "alpine:distro:alpine:3.10"
},
"found": {
"vulnerabilityID": "CVE-2021-36159",
"versionConstraint": "< 2.10.7-r0 (apk)"
}
}
]
}
Understanding match types
Grype determines how it matched a package to a vulnerability based on the available data sources. The match type indicates how the match was made:
exact-direct-match means the package name matched directly in a dedicated vulnerability feed. Grype searched the feed using the package name from your scan and found a matching vulnerability entry.
exact-indirect-match means the source package name matched in a dedicated vulnerability feed. This occurs when you scan a binary package (e.g., libcrypto1.1) but the feed tracks vulnerabilities under the source package (e.g., openssl). Grype searches the feed using the source package name and maps the results to the binary package.
cpe-match means Grype used Common Platform Enumeration (CPE) matching as a fallback when no exact match was found in ecosystem-specific feeds. CPE matching relies on CPE identifiers derived from package metadata and is less precise.
You can loosely think of the match type as a proxy for confidence level in the match, where exact-direct-match has the highest confidence, followed by exact-indirect-match, and finally cpe-match.
A cpe-match means Grype used CPE matching as a fallback.
CPE matching occurs when:
- No exact package match exists in ecosystem-specific feeds
- Grype falls back to the NVD database
- The match is based on CPE identifiers derived from package metadata
This match type has lower confidence because:
- CPE matching is generic and not package-ecosystem aware
- Package naming may not match CPE naming conventions exactly
- Version ranges may be broader or less precise
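If you script triage decisions, the confidence ordering above can be encoded directly. A minimal sketch (the match_confidence helper and its numeric ranks are ours, not part of Grype):

```shell
# Map a Grype match type to a rough confidence rank (3 = highest).
# The helper name and rank values are illustrative, not part of Grype.
match_confidence() {
  case "$1" in
    exact-direct-match)   echo 3 ;;
    exact-indirect-match) echo 2 ;;
    cpe-match)            echo 1 ;;
    *)                    echo 0 ;;  # unrecognized match type
  esac
}

match_confidence exact-direct-match   # prints 3
match_confidence cpe-match            # prints 1
```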
Understanding version constraints
The found.versionConstraint field shows the version range from the vulnerability record that the installed package version falls within (and is therefore affected by).
The format indicates the constraint type and the comparison logic used:
- < 1.2.3 (apk) - Alpine package version constraint using apk version comparison
- < 1.2.3 (deb) - Debian package version constraint using dpkg version comparison
- < 1.2.3 (rpm) - RPM package version constraint using rpm version comparison
- < 1.2.3 (python) - Python package version constraint using PEP 440 comparison
- < 1.2.3 (semantic) - Semantic versioning constraint using semver comparison
- < 1.2.3 (unknown) - Unknown version format (lower reliability)
The constraint type tells you how Grype compared versions. Ecosystem-specific formats (apk, deb, rpm) use that
ecosystem’s version comparison rules, which handle epoch numbers, release tags, and other format-specific details correctly.
Generic formats like unknown may have less precise matching.
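Because the constraint string always ends with the comparison format in parentheses, it can be split apart with plain shell parameter expansion. A small sketch (parse_constraint is our name, not a Grype feature):

```shell
# Split a versionConstraint string like "< 2.10.7-r0 (apk)" into the
# constraint expression and the comparison format. Helper name is ours.
parse_constraint() {
  fmt=${1##*\(}    # strip everything up to the last "(" -> "apk)"
  fmt=${fmt%\)}    # drop the trailing ")"               -> "apk"
  rest=${1%% (*}   # drop " (apk)" from the end          -> "< 2.10.7-r0"
  printf 'format=%s constraint=%s\n' "$fmt" "$rest"
}

parse_constraint '< 2.10.7-r0 (apk)'
# format=apk constraint=< 2.10.7-r0
```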
Tip
When you use filtering flags or ignore rules, filtered vulnerabilities appear in the ignoredMatches array instead of matches.
See View filtered results to learn how to inspect filtered vulnerabilities.
Filtering and querying results
Use jq to filter and analyze JSON output based on match type, severity, or data source.
Filter by match type
Show only high-confidence exact matches:
grype <image> -o json | jq '.matches[] | select(.matchDetails[0].type == "exact-direct-match")'
Exclude CPE matches:
grype <image> -o json | jq '.matches[] | select(.matchDetails[0].type != "cpe-match")'
Filter by data source
Show only matches from Alpine security data:
grype <image> -o json | jq '.matches[] | select(.vulnerability.namespace | startswith("alpine:"))'
Show only GitHub Security Advisories:
grype <image> -o json | jq '.matches[] | select(.vulnerability.namespace | startswith("github:"))'
Filter by severity
Show only Critical and High severity vulnerabilities:
grype <image> -o json | jq '.matches[] | select(.vulnerability.severity == "Critical" or .vulnerability.severity == "High")'
Combine filters
Show Critical/High severity vulnerabilities with exact matches only:
grype <image> -o json | jq '.matches[] | select(
(.vulnerability.severity == "Critical" or .vulnerability.severity == "High") and
(.matchDetails[0].type == "exact-direct-match" or .matchDetails[0].type == "exact-indirect-match")
)'
Count matches by type
grype <image> -o json | jq '[.matches[].matchDetails[0].type] | group_by(.) | map({type: .[0], count: length})'
Understanding a match
Each match in JSON output contains information about how Grype found the vulnerability and links to the original sources. This lets you examine what Grype looked at and verify the match yourself.
Reference URLs
The vulnerability object includes reference URLs from the vulnerability data:
grype <image> -o json | jq '.matches[].vulnerability | {id, dataSource, urls}'
- dataSource - URL to the vulnerability record in Grype’s data feed
- urls - Reference URLs from the original vulnerability disclosure (CVE details, vendor advisories, etc.)
These URLs point to the original vulnerability information that Grype used.
What Grype searched for
The matchDetails[].searchedBy field shows what Grype looked at when searching for vulnerabilities:
grype <image> -o json | jq '.matches[].matchDetails[].searchedBy'
For distro packages, this shows the distro, package name, and version. For CPE matches, this shows the CPE strings Grype constructed. This lets you see exactly what Grype queried.
What Grype found
The matchDetails[].found field shows what matched in the vulnerability data:
grype <image> -o json | jq '.matches[].matchDetails[] | {found, type}'
This shows the vulnerability ID and version constraint that matched, along with the match type. Comparing searchedBy and found shows how Grype connected your package to the vulnerability.
Next steps
Continue the guide
Next: Learn how to filter scan results to control which vulnerabilities Grype reports.
Additional resources:
- Filter results: Use result filtering for severity thresholds and ignore rules
- Supported ecosystems: Understand data source selection for different package types
- Configuration: See Grype configuration reference for customizing behavior
2.6 - Filter scan results
TL;DR
- Use --fail-on <severity> to set exit code thresholds for CI/CD pipelines
- Filter by fix availability with --only-fixed or --only-notfixed
- Create ignore rules in .grype.yaml to exclude specific vulnerabilities or packages
- Use VEX documents with --vex to filter based on exploitability information
Learn how to control which vulnerabilities Grype reports using filtering flags and configuration options.
Set failure thresholds
Use the --fail-on flag to control Grype’s exit code based on vulnerability severity. This can be helpful for integrating Grype into CI/CD pipelines.
The --fail-on flag (alias: -f) sets a severity threshold. When scanning completes, Grype exits with code 2 if it found vulnerabilities at or above the specified severity:
grype alpine:3.10 --fail-on high
You’ll see vulnerabilities at or above the threshold:
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
zlib 1.2.11-r1 apk CVE-2022-37434 Critical 92.7% (99th) 87.1
libcrypto1.1 1.1.1k-r0 apk CVE-2023-0286 High 89.1% (99th) 66.4
libssl1.1 1.1.1k-r0 apk CVE-2023-0286 High 89.1% (99th) 66.4
...
[0026] ERROR discovered vulnerabilities at or above the severity threshold
# Exit code: 2
Valid severity values, from lowest to highest:
negligible < low < medium < high < critical
When you set a threshold, Grype fails if it finds vulnerabilities at that severity or higher. For example, --fail-on high fails on both high and critical vulnerabilities.
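In a CI/CD script you would gate on that exit code. A sketch under the convention stated above (0 = clean, 2 = threshold exceeded; any other code is treated here as a scan error, and myapp:latest is a placeholder image name):

```shell
# Interpret Grype's exit code after a scan with --fail-on.
# gate_result is our helper; the 0/2 convention comes from the text above.
gate_result() {
  case "$1" in
    0) echo "pass" ;;
    2) echo "blocked: vulnerabilities at or above threshold" ;;
    *) echo "scan error (exit $1)" ;;
  esac
}

# In a real pipeline:
#   grype myapp:latest --fail-on high
#   gate_result "$?"
gate_result 2
# blocked: vulnerabilities at or above threshold
```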
Filter by fix availability
Grype provides flags to filter vulnerabilities based on whether fixes are available.
Show only vulnerabilities with fixes available
The --only-fixed flag filters scan results to show only vulnerabilities that have fixes available:
grype alpine:latest --only-fixed
This flag filters out vulnerabilities with these fix states:
- not-fixed - No fix is available yet
- wont-fix - Maintainers won’t fix this vulnerability
- unknown - No fix state information is available
This is useful when you want to focus on actionable vulnerabilities that you can remediate by updating packages.
Note
Do not use --only-fixed and --only-notfixed together. These flags are mutually exclusive and filter out all vulnerabilities.
Show only vulnerabilities without fixes available
The --only-notfixed flag filters scan results to show only vulnerabilities that do not have fixes available:
grype alpine:3.10 --only-notfixed
These vulnerabilities don’t have fixes available yet:
NAME INSTALLED TYPE VULNERABILITY SEVERITY EPSS RISK
zlib 1.2.11-r1 apk CVE-2022-37434 Critical 92.7% (99th) 87.1
libcrypto1.1 1.1.1k-r0 apk CVE-2023-0286 High 89.1% (99th) 66.4
libssl1.1 1.1.1k-r0 apk CVE-2023-0286 High 89.1% (99th) 66.4
libcrypto1.1 1.1.1k-r0 apk CVE-2023-2650 Medium 92.0% (99th) 52.9
libssl1.1 1.1.1k-r0 apk CVE-2023-2650 Medium 92.0% (99th) 52.9
...
This flag filters out vulnerabilities with fix state fixed. Notice the FIXED-IN column is empty for these vulnerabilities.
This is useful when you want to identify vulnerabilities that require alternative mitigation strategies, such as:
- Accepting the risk
- Implementing compensating controls
- Waiting for a fix to become available
- Switching to a different package
Understanding fix states
Grype assigns one of four fix states to each vulnerability based on information from vulnerability data sources:
| Fix State | Description |
|---|---|
| fixed | A fix is available for this vulnerability |
| not-fixed | No fix is available yet, but maintainers may release one |
| wont-fix | Package maintainers have decided not to fix this vulnerability |
| unknown | No fix state information is available |
Vulnerabilities with no fix state information are treated as unknown. This ensures Grype handles missing data consistently.
Ignore specific fix states
The --ignore-states flag gives you fine-grained control over which fix states to filter out. You can ignore one or more fix states by specifying them as a comma-separated list:
# Ignore vulnerabilities with unknown fix states
grype alpine:3.10 --ignore-states unknown
Only vulnerabilities with known fix states appear:
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
apk-tools 2.10.6-r0 2.10.7-r0 apk CVE-2021-36159 Critical 1.0% (76th) 0.9
# Ignore both wont-fix and not-fixed vulnerabilities
grype alpine:3.10 --ignore-states wont-fix,not-fixed
This leaves only fixed vulnerabilities and those with unknown states:
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
zlib 1.2.11-r1 apk CVE-2022-37434 Critical 92.7% (99th) 87.1
libcrypto1.1 1.1.1k-r0 apk CVE-2023-0286 High 89.1% (99th) 66.4
libssl1.1 1.1.1k-r0 apk CVE-2023-0286 High 89.1% (99th) 66.4
apk-tools 2.10.6-r0 2.10.7-r0 apk CVE-2021-36159 Critical 1.0% (76th) 0.9
...
Valid fix state values are: fixed, not-fixed, wont-fix, unknown.
If you specify an invalid fix state, Grype returns an error:
grype alpine:latest --ignore-states invalid-state
# Error: unknown fix state invalid-state was supplied for --ignore-states
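Scripts that build the --ignore-states argument dynamically can avoid that error by validating values first. A sketch (valid_fix_state is our helper, not a Grype feature):

```shell
# Accept only the four fix states Grype recognizes.
valid_fix_state() {
  case "$1" in
    fixed|not-fixed|wont-fix|unknown) return 0 ;;
    *) return 1 ;;
  esac
}

valid_fix_state wont-fix && echo "ok"            # prints ok
valid_fix_state invalid-state || echo "rejected" # prints rejected
```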
Combining severity with fix filtering
You can combine --fail-on with fix state filtering to create sophisticated CI/CD policies:
# Fail only if fixable critical or high vulnerabilities exist
grype alpine:3.10 --fail-on high --only-fixed
Grype now only fails on fixable critical/high vulnerabilities:
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
apk-tools 2.10.6-r0 2.10.7-r0 apk CVE-2021-36159 Critical 1.0% (76th) 0.9
[0026] ERROR discovered vulnerabilities at or above the severity threshold
# Exit code: 2
# Fail on medium or higher, but ignore wont-fix vulnerabilities
grype alpine:latest --fail-on medium --ignore-states wont-fix
The --fail-on check runs after vulnerability matching and filtering. Grype converts all filtering options (--only-fixed, --only-notfixed, --ignore-states, configuration ignore rules, and VEX documents) into ignore rules and applies them during matching. The severity threshold check then evaluates only the remaining vulnerabilities.
View filtered results
By default, Grype hides filtered vulnerabilities from output. You can view them in table output with --show-suppressed or in JSON output by inspecting the ignoredMatches field.
In table output
The --show-suppressed flag displays filtered vulnerabilities in table output with a (suppressed) label:
grype alpine:3.10 --only-fixed --show-suppressed
Filtered vulnerabilities now appear with a (suppressed) label:
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
apk-tools 2.10.6-r0 2.10.7-r0 apk CVE-2021-36159 Critical 1.0% (76th) 0.9
zlib 1.2.11-r1 apk CVE-2018-25032 High < 0.1% (26th) < 0.1 (suppressed)
libcrypto1.1 1.1.1k-r0 apk CVE-2021-3711 Critical 2.7% (85th) 2.4 (suppressed)
libssl1.1 1.1.1k-r0 apk CVE-2021-3711 Critical 2.7% (85th) 2.4 (suppressed)
libcrypto1.1 1.1.1k-r0 apk CVE-2021-3712 High 0.5% (66th) 0.4 (suppressed)
libssl1.1 1.1.1k-r0 apk CVE-2021-3712 High 0.5% (66th) 0.4 (suppressed)
...
Note
The --show-suppressed flag only applies to table output format. It has no effect on JSON, SARIF, or other output formats.
In JSON output
When you use JSON output (-o json), Grype places filtered vulnerabilities in the ignoredMatches array. Non-filtered vulnerabilities appear in the matches array.
For details on the complete JSON structure and all fields, see Reading JSON output.
View the structure:
grype alpine:3.10 --only-fixed -o json | jq '{matches, ignoredMatches}'
The structure separates matched from ignored vulnerabilities:
{
"matches": [
{
"vulnerability": {...},
"artifact": {...},
...
}
],
"ignoredMatches": [
{
"vulnerability": {...},
"artifact": {...},
...
},
...
]
}
Inspect a specific ignored vulnerability:
grype alpine:3.10 --only-fixed -o json | jq '.ignoredMatches[0] | {vulnerability: .vulnerability.id, package: .artifact.name, reason: .appliedIgnoreRules}'
Each ignored match shows why it was filtered:
{
"vulnerability": "CVE-2018-25032",
"package": "zlib",
"reason": [
{
"namespace": "",
"fix-state": "unknown"
}
]
}
The appliedIgnoreRules field shows why each vulnerability was filtered.
Ignore specific vulnerabilities or packages
You can create ignore rules in your .grype.yaml configuration file to exclude specific vulnerabilities or packages from scan results.
Use ignore rules
Create a .grype.yaml file with ignore rules:
ignore:
# Ignore specific CVEs
- vulnerability: CVE-2008-4318
- vulnerability: GHSA-1234-5678-90ab
# Ignore all vulnerabilities in a package
- package:
name: libcurl
# Ignore vulnerabilities in a specific version
- package:
name: openssl
version: 1.1.1g
# Ignore by package type
- package:
type: npm
name: lodash
# Ignore by package location (supports glob patterns)
- package:
location: "/usr/local/lib/node_modules/**"
# Ignore by fix state
- vulnerability: CVE-2020-1234
fix-state: not-fixed
# Combine multiple criteria
- vulnerability: CVE-2008-4318
fix-state: unknown
package:
name: libcurl
version: 1.5.1
Valid fix-state values are: fixed, not-fixed, wont-fix, unknown.
When you combine multiple criteria in a rule, all criteria must match for the rule to apply.
Use VEX documents
Grype supports Vulnerability Exploitability eXchange (VEX) documents to provide information about which vulnerabilities affect your software. VEX allows you to communicate vulnerability status in a machine-readable format that follows CISA minimum requirements.
Grype supports two VEX formats as input:
- OpenVEX - Compact JSON format with minimal required fields
- CSAF VEX - Comprehensive format with rich advisory metadata (OASIS standard)
VEX-filtered vulnerabilities behave like other filtered results:
- Table output: Hidden by default, shown with the --show-suppressed flag and marked as (suppressed by VEX)
- JSON output: Moved to the ignoredMatches array with VEX rules listed in appliedIgnoreRules
This guide uses OpenVEX examples for simplicity, but both formats work identically with Grype. The core concepts (status values, product identification, filtering behavior) apply to both formats.
Basic usage
Use the --vex flag to provide one or more VEX documents:
# Single VEX document
grype alpine:latest --vex vex-report.json
# Multiple VEX documents
grype alpine:latest --vex vex-1.json,vex-2.json
You can also specify VEX documents in your configuration file:
# .grype.yaml file
vex-documents:
- vex-report.json
- vex-findings.json
VEX status values
VEX documents use four standard status values:
Filtering statuses (automatically applied):
- not_affected - Product is not affected by the vulnerability
- fixed - Vulnerability has been remediated
Augmenting statuses (require explicit configuration):
- affected - Product is affected by the vulnerability
- under_investigation - Impact is still being assessed
By default, Grype moves vulnerabilities with not_affected or fixed status to the ignored list.
Vulnerabilities with affected or under_investigation status are only added to results when you enable augmentation:
vex-add: ["affected", "under_investigation"]
Creating VEX documents with vexctl
The easiest way to create OpenVEX documents is with vexctl:
# Create a VEX statement marking a CVE as not affecting your image
vexctl create \
--product="pkg:oci/alpine@sha256:4b7ce07002c69e8f3d704a9c5d6fd3053be500b7f1c69fc0d80990c2ad8dd412" \
--subcomponents="pkg:apk/alpine/busybox@1.37.0-r19" \
--vuln="CVE-2024-58251" \
--status="not_affected" \
--justification="vulnerable_code_not_present" \
--file="vex.json"
# Use the VEX document with Grype
grype alpine:3.22.2 --vex vex.json
Note
vexctl creates OpenVEX documents only.
For CSAF VEX, you’ll need to create documents manually or use CSAF-specific tooling. Both formats work the same way with Grype.
You can also create VEX documents manually. Here’s an OpenVEX example:
{
"@context": "https://openvex.dev/ns/v0.2.0",
"@id": "https://openvex.dev/docs/public/vex-07f09249682f6d9d2924be146078475538731fa0ee6a50ad3c9f33617e4a0be4",
"author": "Alex Goodman",
"version": 1,
"statements": [
{
"vulnerability": {
"name": "CVE-2024-58251"
},
"products": [
{
"@id": "pkg:oci/alpine@sha256:4b7ce07002c69e8f3d704a9c5d6fd3053be500b7f1c69fc0d80990c2ad8dd412",
"subcomponents": [
{
"@id": "pkg:apk/alpine/busybox@1.37.0-r19"
}
]
}
],
"status": "not_affected",
"justification": "vulnerable_code_not_present",
"timestamp": "2025-11-21T20:30:11.725672Z"
}
],
"timestamp": "2025-11-21T20:30:11Z"
}
CSAF VEX documents have a more complex structure with product trees, branches, and vulnerability arrays. See the CSAF specification for complete structure details.
Justifications for not_affected
OpenVEX provides standardized justification values when marking vulnerabilities as not_affected:
- component_not_present - The component is not included in the product
- vulnerable_code_not_present - The vulnerable code is not present
- vulnerable_code_not_in_execute_path - The vulnerable code cannot be executed
- vulnerable_code_cannot_be_controlled_by_adversary - The vulnerability cannot be exploited
- inline_mitigations_already_exist - Mitigations prevent exploitation
CSAF VEX uses a richer product status model with categories like known_not_affected that Grype maps to the standard VEX statuses. See the CSAF specification for details on CSAF-specific fields.
These justifications help security teams understand the rationale behind VEX statements.
Product identification
Grype matches VEX statements to scan results using several identification methods:
Container images (most reliable):
"products": [
{ "@id": "pkg:oci/alpine@sha256:124c7d2707a0ee..." }
]
Image tags (less reliable, can change):
"products": [
{ "@id": "alpine:3.17" }
]
Individual packages via PURLs:
"products": [
{
"@id": "pkg:oci/alpine@sha256:124c7d...",
"subcomponents": [
{ "@id": "pkg:apk/alpine/libssl3@3.0.8-r3" }
]
}
]
Use container digests for the most reliable matching, as tags can move to different images over time.
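The pkg:oci product IDs used above follow a simple name@digest shape, so they are easy to assemble in a script. A sketch reusing the digest from the earlier VEX examples (oci_product_id is our helper name):

```shell
# Build a pkg:oci product ID from an image name and digest.
# Helper name is ours; the digest is the one from the VEX examples above.
oci_product_id() {
  printf 'pkg:oci/%s@%s\n' "$1" "$2"
}

oci_product_id alpine sha256:4b7ce07002c69e8f3d704a9c5d6fd3053be500b7f1c69fc0d80990c2ad8dd412
# pkg:oci/alpine@sha256:4b7ce07002c69e8f3d704a9c5d6fd3053be500b7f1c69fc0d80990c2ad8dd412
```

To find the digest of an image you have pulled locally, one option is docker inspect --format '{{index .RepoDigests 0}}' <image>, which prints repo@digest; split on the @.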
Next steps
Continue the guide
Next: Learn about the vulnerability database to understand how Grype keeps vulnerability data up to date.
Additional resources:
- Interpret results: Learn how to understand scan output and assess match reliability
- Configuration: See Grype configuration reference for all configuration options
- Data sources: Explore vulnerability data sources for details on each feed
2.7 - Vulnerability Database
TL;DR
- Grype uses a locally cached database of known vulnerabilities
- Database auto-updates on each Grype launch when a newer version is available
- Manage manually with grype db check and grype db update
- Database published by Anchore at no cost from multiple upstream feeds
Grype uses a locally cached database of known vulnerabilities when searching a container, directory, or SBOM for security vulnerabilities. Anchore collates vulnerability data from common feeds, and publishes that data online, at no cost to users.
Learn more
Find out more about the vulnerability data sources at Vulnerability Data Sources.
Updating the local database
When Grype is launched, it checks for an existing vulnerability database, and looks for an updated one online. If available, Grype will automatically download the new database.
Database age validation
Grype will automatically fail scans if the vulnerability database is more than 5 days old.
You can disable this behavior or adjust the age threshold in your configuration:
- Set db.validate-age: false to disable age validation
- Adjust db.max-allowed-built-age to change the threshold (e.g., 168h for 7 days)
To update the database manually, use the following command:
grype db update
If instead, you would like to simply check if a new DB is available without actually updating, use:
grype db check
This will return 0 if the database is up to date, and 1 if an update is available.
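That exit code makes the check easy to script. Here is a sketch that only downloads when an update is reported (update_if_stale and fake_grype are our names; fake_grype stands in for the real binary so the example runs without grype installed):

```shell
# Only download a new database when "grype db check" reports one is
# available, using the exit codes documented above (0 = current,
# 1 = update available).
update_if_stale() {
  "$@" db check >/dev/null 2>&1
  rc=$?
  if [ "$rc" -eq 1 ]; then
    echo "update available, downloading"
    "$@" db update
  elif [ "$rc" -eq 0 ]; then
    echo "database is up to date"
  else
    echo "db check failed (exit $rc)" >&2
    return 1
  fi
}

# Stand-in for the real binary so the sketch is self-contained:
fake_grype() {
  if [ "$2" = "check" ]; then return 1; fi  # pretend an update is available
  echo "db updated"
}

update_if_stale fake_grype
```

In a real script you would call update_if_stale grype instead of the stand-in.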
Or, you can delete the local database entirely:
grype db delete
Searching the database
The Grype vulnerability database contains detailed information about vulnerabilities and affected packages across all supported ecosystems.
While you can examine the raw SQLite database directly (use grype db status to find the local storage path),
the grype db search commands provide a much easier way to explore what’s in the database.
Search tips
For both affected package and vulnerability searches, keep these tips in mind:
- Result limit: By default, searches return up to 5,000 results. Use --limit 0 for unlimited results.
- JSON output: Add --output json for programmatic processing of results.
Search for affected packages
Use grype db search to find packages affected by vulnerabilities. This is useful when you want to understand
what packages are impacted by a specific CVE, or when you want to see all vulnerabilities affecting a particular package.
For example, to find all packages affected by Log4Shell across all ecosystems:
grype db search --vuln CVE-2021-44228
To find all vulnerable versions of the log4j package:
grype db search --pkg log4j
To search by PURL or CPE formats:
grype db search --pkg 'pkg:rpm/redhat/openssl'
grype db search --pkg 'cpe:2.3:a:jetty:jetty_http_server:*:*:*:*:*:*:*:*'
Any version value provided will be ignored entirely.
You can also use these options in combination to filter results further (finding the common intersection); in this example, finding packages named “openssl” in Alpine Linux 3.18 that have fixes available:
grype db search --pkg openssl --distro alpine-3.18 --fixed-state fixed
Search for vulnerabilities
Use grype db search vuln to look up vulnerability details directly, including descriptions, severity ratings, and data sources.
This is subtly different from searching for affected packages, as it focuses on the vulnerabilities themselves, so you can find information about vulnerabilities that may not affect any packages (there are a few reasons why this could happen).
To view full metadata for a specific CVE:
grype db search vuln CVE-2021-44228
To filter by data provider:
grype db search vuln CVE-2021-44228 --provider nvd
Next steps
Explore more
Generate SBOMs with Syft to enable faster vulnerability scanning workflows.
Now that you understand how Grype’s vulnerability database works, here are additional resources:
- Scan targets: Learn about all supported scan targets Grype can analyze
- Filter results: Use result filtering to focus on actionable findings
- Data sources: Explore vulnerability data sources for details on each feed
- License scanning: Check dependency licenses with Grant
3 - License Scanning
License scanning involves automatically identifying and analyzing the licenses associated with the various software components used in a project.
This is important because most software relies on third-party and open-source components, each with its own licensing terms that dictate how the software can be used, modified, and distributed, and failing to comply can lead to legal issues.
Grant is an open-source command-line tool designed to discover and report on the software licenses present in container images, SBOM documents, or filesystems. It helps users understand the licenses of their software dependencies and can check them against user-defined policies to ensure compliance.
3.1 - Getting Started
Introduction
Grant searches SBOMs for licenses and the packages they belong to.
Install the latest Grant release
Grant is provided as a single compiled executable. Issue the command for your platform to download the latest release of Grant. The full list of official and community maintained packages can be found on the installation page.
Note
Grant is not currently available for Windows
curl -sSfL https://get.anchore.io/grant | sudo sh -s -- -b /usr/local/bin
brew install grant
- Scan a container for all the licenses used
grant alpine:latest
Grant will produce a list of licenses.
* alpine:latest
* license matches for rule: default-deny-all; matched with pattern *
* Apache-2.0
* BSD-2-Clause
* GPL-2.0-only
* GPL-2.0-or-later
* MIT
* MPL-2.0
* Zlib
- Scan a container for OSI compliant licenses
Now we scan a different container, that contains some software that is distributed under non-OSI-compliant licenses.
Note
The image used here is quite large (over 3GB) so may take a while to download and analyze
grant check pytorch/pytorch:latest --osi-approved
Read more in our License Auditing User Guide.
Next steps
- Try running Grant against other containers, or an application directory on your workstation.
- Find out more about Supported Scan Targets and Output Formats.
- Learn about Vulnerability Scanning and License Scanning your SBOMs.
4 - Private Registries
The Anchore OSS tools analyze container images from private registries using multiple authentication methods. When a container runtime isn’t available, the tools use the go-containerregistry library to handle authentication directly with registries.
When using a container runtime explicitly (for instance, with the --from docker flag) the tools defer to the runtime’s authentication mechanisms.
However, if the registry source is used, the tools use the Docker configuration file and any configured credential helpers to authenticate with the registry.
Registry tokens and personal access tokens
Many registries support personal access tokens (PATs) or registry tokens for authentication. Use docker login with your token, then the tools can use the cached credentials:
# GitHub Container Registry - create token at https://github.com/settings/tokens (needs read:packages scope)
docker login ghcr.io -u <username> -p <token>
syft ghcr.io/username/private-image:latest
# GitLab Container Registry - use deploy token or personal access token
docker login registry.gitlab.com -u <username> -p <token>
syft registry.gitlab.com/group/project/image:latest
The tools read credentials from ~/.docker/config.json, the same file Docker uses when you run docker login. This file can contain either basic authentication credentials or credential helper configurations.
Here are examples of what the config looks like if you are crafting it manually:
Basic authentication example:
{
"auths": {
"registry.example.com": {
"username": "AzureDiamond",
"password": "hunter2"
}
}
}
Token authentication example:
// token auth, where credentials are base64-encoded
{
"auths": {
"ghcr.io": {
"auth": "dXNlcm5hb...m5h=="
}
}
}
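The auth value is simply the base64 encoding of username:password joined by a colon. If you are crafting the file by hand, you can generate it like this, using the sample credentials from the basic-auth example above (not real credentials):

```shell
# base64("username:password") produces the "auth" field value.
# Sample credentials from the basic-auth example, not real ones.
printf '%s' 'AzureDiamond:hunter2' | base64
# QXp1cmVEaWFtb25kOmh1bnRlcjI=
```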
Security Warning
Storing plaintext passwords in config.json is insecure. Use credential helpers where possible.
By default, the tools look for credentials in ~/.docker/config.json. You can override this location using the DOCKER_CONFIG environment variable:
export DOCKER_CONFIG=/path/to/custom/config
syft registry.example.com/private/image:latest
You can also use this in a container:
docker run -v ./config.json:/auth/config.json -e "DOCKER_CONFIG=/auth" anchore/syft:latest <private_image>
Docker credential helpers
Docker credential helpers are specialized programs that securely store and retrieve registry credentials. They’re particularly useful for cloud provider registries that use dynamic, short-lived tokens.
Instead of storing passwords as plaintext in config.json, you configure helpers that generate credentials on-demand. This is facilitated by the google/go-containerregistry library.
Configuring credential helpers
Add credential helpers to your config.json:
{
"credHelpers": {
// using the docker-credential-gcr for Google Container Registry and Artifact Registry
"gcr.io": "gcr",
"us-docker.pkg.dev": "gcloud",
// using the amazon-ecr-credential-helper for AWS Elastic Container Registry
"123456789012.dkr.ecr.us-west-2.amazonaws.com": "ecr-login",
// using the docker-credential-acr for Azure Container Registry
"myregistry.azurecr.io": "acr"
}
}
When the tools access these registries, they execute the corresponding helper program (for example, docker-credential-gcr, or more generically docker-credential-NAME where NAME is the config value) to obtain credentials.
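Helpers follow the docker-credential-helpers protocol: the get subcommand reads a registry host on stdin and prints JSON credentials on stdout. A sketch with a stand-in helper (fake_helper mimics the protocol so the example runs without any real helper installed):

```shell
# Mimic the credential-helper "get" protocol: registry host on stdin,
# JSON credentials on stdout. fake_helper is illustrative only.
fake_helper() {
  if [ "$1" = "get" ]; then
    read -r registry
    printf '{"Username":"user","Secret":"token-for-%s"}\n' "$registry"
  fi
}

printf 'gcr.io\n' | fake_helper get
# {"Username":"user","Secret":"token-for-gcr.io"}
```

You can probe a real helper the same way, e.g. printf 'gcr.io\n' | docker-credential-gcr get, assuming that helper is installed on your PATH.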
Note
If bothcredHelpers and auths are configured for the same registry, credHelpers takes precedence.For more information about Docker credential helpers for various cloud providers:
- ECR authentication documentation.
- Artifact Registry authentication documentation.
- ACR authentication documentation.
Within Kubernetes
When running the tools in Kubernetes and you need access to private registries, mount Docker credentials as a secret.
Create secret
Create a Kubernetes secret containing your Docker credentials. The key config.json is important—it becomes the filename when mounted into the pod.
For more information about the credential file format, see the go-containerregistry config docs.
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: registry-config
namespace: syft
data:
config.json: <base64-encoded-config.json>
Create the secret:
# Base64 encode your config.json
cat ~/.docker/config.json | base64
# Apply the secret
kubectl apply -f secret.yaml
Configure pod
Configure your pod to use the credential secret. The DOCKER_CONFIG environment variable tells the tools where to look for credentials.
Setting DOCKER_CONFIG=/config means the tools look for credentials at /config/config.json.
This matches the secret key config.json we created above—when Kubernetes mounts secrets, each key becomes a file with that name.
The volumeMounts section mounts the secret to /config, and the volumes section references the secret created in the previous step.
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: syft-k8s-usage
spec:
containers:
- image: anchore/syft:latest
name: syft-private-registry-demo
env:
- name: DOCKER_CONFIG
value: /config
volumeMounts:
- mountPath: /config
name: registry-config
readOnly: true
args:
- <private-image>
volumes:
- name: registry-config
secret:
secretName: registry-config
Apply and check logs:
kubectl apply -f pod.yaml
kubectl logs syft-private-registry-demo