Contributing

Guidelines for developing & contributing to Anchore Open Source projects

Welcome! We appreciate all contributions to Anchore’s open source projects. Whether you’re fixing a bug, adding a feature, or improving documentation, your help makes these tools better for everyone.

Getting Help

The Anchore open source community is here to help. Use Discourse for questions, discussions, and troubleshooting. Use GitHub for reporting bugs, requesting features, and submitting code contributions. See Issues vs Discussions for guidance on which channel to use.

For security vulnerabilities, email security@anchore.com - do not create public issues. See our Security Policy for details.

1 - Issues and Discussions

When to use GitHub Issues versus Discourse Discussions

Understanding where to post helps you get faster, more relevant responses.

GitHub Issues

Use GitHub issues for:

  • Bug reports: Something isn’t working as documented
  • Feature requests: Proposals for new functionality
  • Enhancement requests: Improvements to existing features
  • Security vulnerabilities: Please follow our Security Policy (reported privately)

Creating a good issue

  • Write a clear title: Issue titles become changelog entries in release notes, so make them descriptive and user-focused
  • Search existing issues first: This helps avoid duplicates and keeps discussions in one place
  • Use issue templates: Templates guide you through providing the right information
  • Include version information: Specify which version you’re using
  • Provide reproduction steps: For bugs, describe how to recreate the issue
  • Describe expected vs actual behavior: Explain what you expected to happen and what actually happened
  • Add supporting details: Include relevant logs, error messages, or screenshots

Discourse Discussions

Use the Anchore Discourse for:

  • Questions: “How do I…?” or “Why does…?”
  • Clarifications: Understanding how features work
  • General discussion: Ideas, use cases, and community chat
  • Help requests: Troubleshooting your specific setup
  • Best practices: Sharing knowledge and experiences

Why separate channels?

GitHub issues track work items that require code changes. Each issue represents a potential task for the development team. Discourse provides a better format for conversations, questions, and community support without cluttering the issue tracker.

If you’re unsure which to use, start with Discourse. The community can help identify if an issue should be created.

Security Issues

If you discover a security vulnerability, please report it privately rather than creating a public issue. See our Security Policy for details on how to report security issues responsibly. This gives us time to fix the problem and protect users before details become public.

2 - Syft

Developer guidelines when contributing to Syft

Getting started

In order to test and develop in the Syft repo you will need the following dependencies installed:

  • Golang
  • Docker
  • Python (>= 3.9)
  • make

Initial setup

Run once after cloning to install development tools:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make unit - Run unit tests
  • make integration - Run integration tests
  • make cli - Run CLI tests
  • make snapshot - Build release snapshot with all binaries and packages

Testing

Levels of testing

  • unit (make unit): The default level of testing, distributed throughout the repo. Any _test.go file that does not reside somewhere within the /test directory is a unit test. Other forms of testing should be organized in the /test directory. These tests should focus on the correctness of functionality in depth. Test coverage metrics only consider unit tests, not other forms of testing.

  • integration (make integration): located within cmd/syft/internal/test/integration, these tests focus on the behavior surfaced by the common library entrypoints from the syft package and make light assertions about the results surfaced. Additionally, these tests tend to make diversity assertions for enum-like objects, ensuring that as enum values are added to a definition, integration tests automatically fail if no test exercises the new value. For more details see the “Data diversity and freshness assertions” section below.

  • cli (make cli): located within test/cli, these tests verify the correctness of application behavior from a snapshot build. This should be used in cases where a unit or integration test will not do, or if you are looking for in-depth testing of code in the cmd/ package (such as testing the proper behavior of application configuration, CLI switches, and glue code before syft library calls).

  • acceptance (make install-test): located within test/compare and test/install, these are smoke-like tests that ensure that application packaging and installation works as expected. For example, during release we provide RPM packages as a download artifact. We also have an accompanying RPM acceptance test that installs the RPM from a snapshot build and ensures the output of a syft invocation matches canned expected output. New acceptance tests should be added for each release artifact and architecture supported (when possible).

Data diversity and freshness assertions

It is important that tests against the codebase are flexible enough to begin failing when they do not cover “enough” of the objects under test. “Cover” in this case does not mean that some percentage of the code has been executed during testing, but instead that there is enough diversity of data input reflected in testing relative to the definitions available.

For instance, consider an enum-like value like so:

type Language string

const (
  Java            Language = "java"
  JavaScript      Language = "javascript"
  Python          Language = "python"
  Ruby            Language = "ruby"
  Go              Language = "go"
)

Say we have a test that exercises all the languages defined today:

func TestCatalogPackages(t *testing.T) {
  testTable := []struct {
    // ... fields for each test case, including name and inputFixturePath
  }{
    // ... the set of test cases that test all languages
  }
  for _, test := range testTable {
    t.Run(test.name, func(t *testing.T) {
      // use inputFixturePath and assert that syft.CatalogPackages() returns the set of expected Package objects
      // ...
    })
  }
}

Where each test case has an inputFixturePath that results in packages from a particular language. This test is brittle since it does not directly assert that all languages were exercised, so future modifications (such as adding a new language) won’t be covered by any test case.

To address this, the enum-like object should have a definition of all objects that can be used in testing:

type Language string

// const( Java Language = ..., ... )

var AllLanguages = []Language{
 Java,
 JavaScript,
 Python,
 Ruby,
 Go,
 Rust, // a newly added language (assume the const block above now defines it too)
}

Allowing testing to automatically fail when adding a new language:

func TestCatalogPackages(t *testing.T) {
  testTable := []struct {
    // ... the set of test cases that (hopefully) covers all languages
  }{
    // ...
  }

  // new stuff...
  observedLanguages := strset.New()

  for _, test := range testTable {
    t.Run(test.name, func(t *testing.T) {
      // use inputFixturePath and assert that syft.CatalogPackages() returns the set of expected Package objects
      // ...

      // new stuff...
      for _, actualPkg := range actual {
        observedLanguages.Add(string(actualPkg.Language))
      }
    })
  }

  // new stuff...
  for _, expectedLanguage := range pkg.AllLanguages {
    if !observedLanguages.Contains(string(expectedLanguage)) {
      t.Errorf("failed to test language=%q", expectedLanguage)
    }
  }
}

This is a better test since it will fail when someone adds a new language but fails to write a test case that should exercise that new language. This method is ideal for integration-level testing, where testing correctness in depth is not needed (that is what unit tests are for) but instead testing in breadth to ensure that units are well integrated.

A similar case can be made for data freshness; if the quality of the results will be diminished if the input data is not kept up to date then a test should be written (when possible) to assert any input data is not stale.

An example of this is the static list of licenses that is stored in internal/spdxlicense for use by the SPDX presenters. This list is updated and published periodically by an external group and syft can grab and update this list by running go generate ./... from the root of the repo.

An integration test has been written that grabs the latest license list version externally and compares it with the version generated in the codebase. If they differ, the test fails, indicating that action is needed to update it.
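A minimal sketch of that kind of freshness check follows (the fetchLatestSPDXListVersion helper and the spdxlicense.Version constant are hypothetical stand-ins, not the exact names used in the repo):

func TestSPDXLicenseListIsCurrent(t *testing.T) {
  // hypothetical helper that queries the published SPDX license list
  // release metadata over the network
  latest := fetchLatestSPDXListVersion(t)

  // assumed constant holding the version captured by the last
  // "go generate ./..." run
  if latest != spdxlicense.Version {
    t.Fatalf("SPDX license list is stale (have %q, latest %q): run go generate ./...", spdxlicense.Version, latest)
  }
}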

Snapshot tests

The format objects make heavy use of “snapshot” testing: the expected output bytes from a call are saved in the git repository, and during testing the actual bytes from the subject under test are compared against that golden copy. The “golden” files are stored in the test-fixtures/snapshot directory relative to the Go package under test and should always be updated by invoking go test on the specific test file with a specific CLI update flag provided.

Many of the Format tests make use of this approach, where the raw SBOM report is saved in the repo and the test compares that SBOM with what is generated from the latest presenter code. The following command can be used to update the golden files for the various snapshot tests:

make update-format-golden-files

These flags are defined at the top of the test files that have tests that use the snapshot files.

Snapshot testing is only as good as the manual verification of the golden snapshot file saved to the repo! Be careful and diligent when updating these files.
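For reference, the general shape of such a test is roughly the following (a simplified sketch using the standard flag package and testify; the flag name, paths, and encodeSBOM helper are illustrative rather than syft’s actual helpers):

var updateSnapshot = flag.Bool("update", false, "update the golden files for this test")

func TestEncodeFormat(t *testing.T) {
  // hypothetical helper that produces the bytes under test
  actual := encodeSBOM(t)

  goldenPath := filepath.Join("test-fixtures", "snapshot", t.Name()+".golden")
  if *updateSnapshot {
    // regenerate the golden copy when the update flag is provided
    require.NoError(t, os.WriteFile(goldenPath, actual, 0o600))
  }

  expected, err := os.ReadFile(goldenPath)
  require.NoError(t, err)
  assert.Equal(t, string(expected), string(actual))
}

Note how running the test with the update flag rewrites the golden file, which is why the manual review mentioned above matters.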

Test fixtures

Syft uses a sophisticated test fixture caching system to speed up test execution. Test fixtures include pre-built test images, language-specific package manifests, and other test data. Rather than rebuilding fixtures on every checkout, Syft can download a pre-built cache from GitHub Container Registry.

Common fixture commands:

  • make fixtures - Intelligently download or rebuild fixtures as needed
  • make build-fixtures - Manually build all fixtures from scratch
  • make clean-cache - Remove all cached test fixtures
  • make check-docker-cache - Verify docker cache size is within limits

When to use each command:

  • First time setup: Run make fixtures after cloning the repository. This will download the latest fixture cache.
  • Tests failing unexpectedly: Try make clean-cache followed by make fixtures to ensure you have fresh fixtures.
  • Working offline: Set DOWNLOAD_TEST_FIXTURE_CACHE=false and run make build-fixtures to build fixtures locally without downloading.
  • Modifying test fixtures: After changing fixture source files, run make build-fixtures to rebuild affected fixtures.

The fixture system tracks input fingerprints and only rebuilds fixtures when their source files change. This makes the development cycle faster while ensuring tests always run against the correct fixture data.

Code generation

Syft generates several types of code and data files that need to be kept in sync with external sources or internal structures:

What gets generated:

  • JSON Schema - Generated from Go structs to define the Syft JSON output format
  • SPDX License List - Up-to-date list of license identifiers from the SPDX project
  • CPE Dictionary Index - Index of Common Platform Enumeration identifiers for vulnerability matching

When to regenerate:

Run code generation after:

  • You modify the pkg.Package struct or related types (requires JSON schema regeneration)
  • SPDX releases a new license list
  • CPE dictionary updates become available

Generation commands:

  • make generate - Run all generation tasks
  • make generate-json-schema - Generate JSON schema from Go types
  • make generate-license-list - Download and generate latest SPDX license list
  • make generate-cpe-dictionary-index - Generate CPE dictionary index

After running generation commands, review the changes carefully and commit them as part of your pull request. The CI pipeline will verify that generated files are up to date.

Adding a new cataloger

Catalogers must fulfill the pkg.Cataloger interface in order to add packages to the SBOM.
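At the time of writing the interface looks roughly like the following (recent versions include a context argument; check the pkg package for the authoritative definition):

type Cataloger interface {
  // Name returns a string that uniquely identifies this cataloger
  Name() string

  // Catalog inspects the files presented by the resolver and returns any
  // discovered packages along with relationships between them
  Catalog(ctx context.Context, resolver file.Resolver) ([]pkg.Package, []artifact.Relationship, error)
}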

All catalogers are registered as tasks in Syft’s task-based cataloging system:

  • Add your cataloger to DefaultPackageTaskFactories() using newSimplePackageTaskFactory or newPackageTaskFactory
  • Tag the task appropriately to indicate when it should run:
    • pkgcataloging.InstalledTag - for packages positively installed
    • pkgcataloging.DeclaredTag - for packages described in manifests (places where we intend to install software, not software that is already installed)
    • pkgcataloging.ImageTag - should run when scanning container images
    • pkgcataloging.DirectoryTag - should run when scanning directories/filesystems
    • pkgcataloging.LanguageTag - for language-specific packages
    • pkgcataloging.OSTag - for OS-specific packages
    • Ecosystem tags like "java", "python", "alpine", etc.
  • If your cataloger needs configuration, add it to pkgcataloging.Config

The task system orchestrates all catalogers through CreateSBOMConfig, which manages task execution, parallelism, and configuration.
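For instance, a new entry in the factory list might look roughly like this (the example package and constructor are hypothetical; see internal/task in the syft repo for the real list):

// within DefaultPackageTaskFactories() in internal/task:
newSimplePackageTaskFactory(
  example.NewExampleCataloger, // hypothetical constructor returning a pkg.Cataloger
  pkgcataloging.DirectoryTag,
  pkgcataloging.ImageTag,
  pkgcataloging.DeclaredTag,
  pkgcataloging.LanguageTag,
  "example",
),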

generic.NewCataloger is an abstraction syft uses to make writing common cataloger components easier (see the alpine cataloger for example usage). It takes the following information as input:

  • A catalogerName to identify the cataloger uniquely among all other catalogers.
  • Pairs of file globs as well as parser functions to parse those files. These parser functions return a slice of pkg.Package as well as a slice of artifact.Relationship to describe how the returned packages are related. See the alpine cataloger parser function as an example.

Identified packages share a common pkg.Package struct so be sure that when the new cataloger is constructing a new package it is using the Package struct. If you want to return more information than what is available on the pkg.Package struct then you can do so in the pkg.Package.Metadata field, which accepts any type. Metadata types tend to be unique for each pkg.Type but this is not required. See the pkg package for examples of the different metadata types that are supported today. When encoding to JSON, metadata type names are determined by reflection and mapped according to internal/packagemetadata/names.go.

Finally, the alpine cataloger is a good real-world reference for where package construction is done.
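As a rough sketch, a parser function constructing a package looks something like this (field values are illustrative; recent syft parser signatures include a context argument, and the metadata shown is a placeholder for whatever cataloger-specific type you define):

func parseExampleDB(_ context.Context, _ file.Resolver, _ *generic.Environment, reader file.LocationReadCloser) ([]pkg.Package, []artifact.Relationship, error) {
  // ... read and parse entries from reader ...

  p := pkg.Package{
    Name:      "musl",     // taken from the parsed entry
    Version:   "1.2.4-r2", // taken from the parsed entry
    Type:      pkg.ApkPkg, // the package type this cataloger produces
    Locations: file.NewLocationSet(reader.Location),
    Metadata:  nil, // attach your cataloger-specific metadata struct here (any type is accepted)
  }
  p.SetID()

  return []pkg.Package{p}, nil, nil
}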

Troubleshooting

Cannot build test fixtures with Artifactory repositories

Some companies have Artifactory set up internally as a solution for sourcing secure dependencies. If you’re seeing an issue where the unit tests won’t run because of the below error, then this section might be relevant for your use case.

[ERROR] [ERROR] Some problems were encountered while processing the POMs

If you’re dealing with an issue where the unit tests will not pull/build certain java fixtures check some of these settings:

  • a settings.xml file should be available to help you communicate with your internal artifactory deployment
  • this can be moved to syft/pkg/cataloger/java/test-fixtures/java-builds/example-jenkins-plugin/ to help build the unit test-fixtures
  • you’ll also want to modify the build-example-jenkins-plugin.sh to use settings.xml

For more information on this setup and troubleshooting see issue 1895

Next Steps

Understanding the Codebase

  • Architecture - Learn about package structure, core library flow, cataloger design patterns, and file searching
  • API Reference - Explore the public Go API, type definitions, and function signatures


3 - Grype

Developer guidelines when contributing to Grype

Getting started

In order to test and develop in the Grype repo you will need the following dependencies installed:

  • Golang
  • Docker
  • Python (>= 3.9)
  • make
  • SQLite3 (optional – for database inspection)

Initial setup

Run once after cloning to install development tools:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make format - Auto-format source code
  • make unit - Run unit tests
  • make integration - Run integration tests
  • make cli - Run CLI tests
  • make quality - Run vulnerability matching quality tests
  • make snapshot - Build release snapshot with all binaries and packages

Testing

Levels of testing

  • unit (make unit): The default level of testing, distributed throughout the repo. Any _test.go file that does not reside somewhere within the /test directory is a unit test. Other forms of testing should be organized in the /test directory. These tests should focus on the correctness of functionality in depth. Test coverage metrics only consider unit tests, not other forms of testing.

  • integration (make integration): located within test/integration, these tests focus on the behavior surfaced by the Grype library entrypoints and make assertions about vulnerability matching results. The integration tests also update the vulnerability database and run with the race detector enabled to catch concurrency issues.

  • cli (make cli): located within test/cli, these are tests that test the correctness of application behavior from a snapshot build. This should be used in cases where a unit or integration test will not do or if you are looking for in-depth testing of code in the cmd/ package (such as testing the proper behavior of application configuration, CLI switches, and glue code before grype library calls).

  • quality (make quality): located within test/quality, these are tests that verify vulnerability matching quality by comparing Grype’s results against known-good results (quality gates). These tests help ensure that changes to vulnerability matching logic don’t introduce regressions in match quality. The quality tests use a pinned database version to ensure consistent results. See the quality gate architecture documentation for how the system works and the test/quality README for practical development workflows.

  • install (part of acceptance testing): located within test/install, these are smoke-like tests that ensure that application packaging and installation works as expected. For example, during release we provide RPM packages as a download artifact. We also have an accompanying RPM acceptance test that installs the RPM from a snapshot build and ensures the output of a grype invocation matches canned expected output.

Quality Gates

Quality gates validate that code changes don’t cause performance regressions in vulnerability matching. The system compares your PR’s matching results against a baseline using a pinned database to isolate code changes from database volatility.

What quality gates validate:

  • F1 score (combination of true positives, false positives, and false negatives)
  • False negative count (should not increase)
  • Indeterminate matches (should remain below 10%)

Common development workflows:

  • make capture - Download SBOMs and generate match results
  • make validate - Analyze output and evaluate pass/fail
  • yardstick label explore [UUID] - Interactive TUI for labeling matches
  • ./gate.py --image [digest] - Test specific images


Relationship to Syft

Grype uses Syft as a library for all things related to obtaining and parsing the given scan target (pulling container images, parsing container images, indexing directories, cataloging packages, etc). Releases of Grype should always use released versions of Syft (commits that are tagged and show up in the GitHub releases page). However, continually integrating unreleased Syft changes into Grype incrementally is encouraged (e.g. go get github.com/anchore/syft@main) as long as by the time a release is cut the Syft version is updated to a released version (e.g. go get github.com/anchore/syft@v<semantic-version>).

Inspecting the database

The currently supported database format is SQLite3. Install sqlite3 on your system and ensure that the sqlite3 executable is available in your path. Ask grype for the location of the database, which differs depending on the operating system:

$ go run ./cmd/grype db status
Location:  /Users/alfredo/Library/Caches/grype/db
Built:  2020-07-31 08:18:29 +0000 UTC
Current DB Version:  1
Require DB Version:  1
Status: Valid

The database is located within the XDG_CACHE_HOME path. To verify the database filename, list that path:

# OSX-specific path
$ ls -alh  /Users/alfredo/Library/Caches/grype/db
total 445392
drwxr-xr-x  4 alfredo  staff   128B Jul 31 09:27 .
drwxr-xr-x  3 alfredo  staff    96B Jul 31 09:27 ..
-rw-------  1 alfredo  staff   139B Jul 31 09:27 metadata.json
-rw-r--r--  1 alfredo  staff   217M Jul 31 09:27 vulnerability.db

Next, open the vulnerability.db with sqlite3:

sqlite3 /Users/alfredo/Library/Caches/grype/db/vulnerability.db

To make the reporting from SQLite3 easier to read, enable the following:

sqlite> .mode column
sqlite> .headers on

List the tables:

sqlite> .tables
id                      vulnerability           vulnerability_metadata

In this example you retrieve a specific vulnerability from the nvd namespace:

sqlite> select * from vulnerability where (namespace="nvd" and package_name="libvncserver") limit 1;
id             record_source  package_name  namespace   version_constraint  version_format  cpes                                                         proxy_vulnerabilities
-------------  -------------  ------------  ----------  ------------------  --------------  -----------------------------------------------------------  ---------------------
CVE-2006-2450                 libvncserver  nvd         = 0.7.1             unknown         ["cpe:2.3:a:libvncserver:libvncserver:0.7.1:*:*:*:*:*:*:*"]  []

Next Steps

Understanding the Codebase

  • Architecture - Learn about package structure, core library flow, and matchers

  • API Reference - Explore the public Go API, type definitions, and function signatures

Contributing Your Work

  • Pull Requests - Guidelines for submitting PRs and working with reviewers

  • Issues and Discussions - Where to get help and report issues


4 - Pull Requests

Guidelines for submitting pull requests and working with reviewers

If you’ve made changes and the tests are passing, it’s time to submit a pull request (PR). This guide will help you through the process.

Quick Checklist

Before submitting your PR, make sure you have:

  • ✓ Run the test suite and confirmed tests pass
  • ✓ Signed off all commits (see Sign-off Requirements)
  • ✓ Updated in-repo documentation if your changes affect user-facing behavior
  • ✓ Written a clear PR title that describes the user-facing impact
  • ✓ Followed existing code style and patterns in the project

Each of these items helps maintainers review your contribution more effectively and merge it faster.

PR Title

Your PR title is important—it becomes the changelog entry in release notes. Write titles that are meaningful to end users, not just developers.

Guidelines

  • Start with an action verb: “Add”, “Fix”, “Update”, “Remove”
  • Be specific: “Add support for Alpine 3.19” rather than “Update Alpine”
  • Keep it concise: Under 72 characters when possible
  • Focus on user impact: What changed for users, not implementation details

Examples

Good titles:

  • Add support for Python 3.12 package detection
  • Fix crash when parsing malformed RPM databases
  • Update documentation for custom template usage

Poor titles:

  • Updates (too vague—updates to what?)
  • Fixed bug (which bug?)
  • WIP: trying some things (not ready for review)
  • Refactor parseRPM function (implementation detail, not a user-facing change)

PR Description

A clear description helps reviewers understand your changes quickly. Include these key sections:

What to include

  1. Summary: Briefly describe what changed
  2. Motivation: Explain why this change is needed or what problem it solves
  3. Approach: If your solution isn’t obvious, explain your approach
  4. Testing: Describe how you tested the changes
  5. Related issues: Link to issues or discussions that provide context

Template

## Summary

Brief description of the change.

## Motivation

Why is this change needed? What problem does it solve?

## Changes

- Bullet point list of key changes
- Include any breaking changes or migration steps

## Type of change

<!-- Delete any that are not relevant -->

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (please discuss with the team first; Syft is 1.0 software and we won't accept breaking changes without going to 2.0)
- [ ] Documentation (updates the documentation)
- [ ] Chore (improve the developer experience, fix a test flake, etc, without changing the visible behavior of Syft)
- [ ] Performance (make Syft run faster or use less memory, without changing visible behavior much)

## Checklist

- [ ] I have added unit tests that cover changed behavior
- [ ] I have tested my code in common scenarios and confirmed there are no regressions
- [ ] I have added comments to my code, particularly in hard-to-understand sections

Closes #123

Commit History

We use squash merging for all pull requests, which means:

  • Your entire PR becomes a single commit on the main branch
  • You don’t need to maintain a clean commit history in your PR
  • Merge commits in your feature branch are perfectly fine
  • You can commit as frequently as you like during development
  • The PR title (not individual commit messages) becomes the changelog entry

This approach keeps the main branch clean and linear while reducing friction for contributors. Focus on code quality rather than commit structure—reviewers care about the changes, not how you got there.

Size Matters

Small PRs get reviewed faster. Here’s how to make your PR easier to review:

  • Keep changes focused: Try to address one concern per PR
  • Avoid mixing unrelated changes: Don’t combine bug fixes with new features
  • Split large PRs when possible: If a PR is unavoidably large, provide extra context in the description

Consider breaking work into multiple PRs if you’re making both refactoring changes and feature additions. Reviewers can process smaller, focused changes more quickly.

What to Expect

Review Feedback

It’s normal and expected for reviewers to have questions and suggestions:

  • Questions about your approach: Be prepared to explain your decisions
  • Code style adjustments: You may be asked to match existing project patterns
  • Additional tests: Reviewers might request more test coverage
  • Scope changes: You might be asked to split or narrow the PR

How to respond to feedback

  • Address feedback promptly: Respond when you can, even if just to acknowledge
  • Ask for clarification: If something isn’t clear, ask questions
  • Explain your reasoning: It’s okay to discuss alternatives respectfully
  • Make changes in new commits: This makes incremental review easier
  • Mark conversations as resolved: When you’ve addressed a comment

Remember that review feedback is about the code, not about you. Reviewers want to help make the contribution successful.

After Approval

Once approved, a maintainer will merge your PR. Depending on the project, you might be asked to:

  • Rebase on the latest main branch if there are conflicts
  • Update the PR title or description for clarity
  • Make final adjustments based on last-minute feedback

Common Issues

Watch out for these common pitfalls:

  • Missing sign-off: All commits must be signed off (see Sign-off Requirements)
  • Failing CI checks: Make sure all tests and checks pass before requesting review
  • Merge conflicts: Keep your branch up to date with main to avoid conflicts
  • Formatting-only changes: Submit formatting and refactoring in separate PRs from features
  • Missing documentation: User-facing changes need corresponding documentation updates

Need Help?

If you’re stuck or have questions about the PR process:

  • Ask in the PR comments—maintainers are happy to help
  • Reach out on the project’s Discourse
  • Check the project-specific contributing guide for any additional requirements

Contributing to open source can feel intimidating at first, but the community is here to support you. Don’t hesitate to ask questions.

5 - Grype DB

Developer guidelines when contributing to Grype DB

Getting started

This codebase is primarily Go, however, there are also Python scripts critical to the daily DB publishing process as well as acceptance testing. You will require the following:

  • Python 3.11+ installed on your system (Python 3.11-3.13 supported). Consider using pyenv if you do not have a preference for managing python interpreter installations.
  • zstd binary utility if you are packaging v6+ DB schemas
  • (optional) xz binary utility if you have specifically overridden the package command options
  • uv installed for Python package and virtualenv management

To download Go tooling used for static analysis, dependent Go modules, and Python dependencies, run:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make unit - Run unit tests (Go and Python)
  • make cli - Run CLI tests
  • make db-acceptance schema=<version> - Run DB acceptance tests for a schema version
  • make snapshot - Build release snapshot with all binaries and packages
  • make download-all-provider-cache - Download pre-built vulnerability data cache

Development workflows

Getting vulnerability data

In order to build a grype DB you will need a local cache of vulnerability data:

make download-all-provider-cache

This will populate the ./data directory locally with everything needed to run grype-db build (without needing to run grype-db pull).

The data being pulled down is the same data used in the daily DB publishing workflow, so it should be relatively fresh.

Creating a new DB schema

  1. Create a new v# schema package in the grype repo (within pkg/db)
  2. Create a new v# schema package in the grype-db repo (use the bump-schema.py helper script) that uses the new changes from grype
  3. Modify the manager/src/grype_db_manager/data/schema-info.json to pin the previously-latest schema version to a specific released version of grype, and add the new schema version pinned to the “main” branch of grype (or a development branch)
  4. Update all references in grype to use the new schema
  5. Use the Staging DB Publisher workflow to test your DB changes with grype in a flow similar to the daily DB publisher workflow

Testing with staging databases

While developing a new schema version it may be useful to get a DB built for you by the Staging DB Publisher GitHub Actions workflow. This workflow exercises the same code as the Daily DB Publisher, with the exception that only a single schema is built and it is validated against a given development branch of grype. When these DBs are published you can point grype at the proper listing file like so:

GRYPE_DB_UPDATE_URL=https://toolbox-data.anchore.io/grype/staging-databases/listing.json grype centos:8 ...

Testing

Levels of testing

  • unit (make unit): Unit tests for both Go code in the main codebase and Python scripts in the manager/ directory. These tests focus on correctness of individual functions and components. Coverage metrics track Go test coverage.

  • cli (make cli): CLI tests for both Go and Python components. These validate that command-line interfaces work correctly with various inputs and configurations.

  • db-acceptance (make db-acceptance schema=<version>): Acceptance tests that verify a specific DB schema version works correctly with Grype. These tests build a database, run Grype scans, and validate that vulnerability matches are correct and complete.

Running tests

To run unit tests for Go code and Python scripts:

make unit

To verify that a specific DB schema version interoperates with Grype:

make db-acceptance schema=<version>
# Note: this may take a while... go make some coffee.


6 - Vunnel

Developer guidelines when contributing to Vunnel

Getting started

This project requires:

  • python (>= 3.11)
  • pip (>= 22.2)
  • uv
  • docker
  • go (>= 1.20)
  • a POSIX shell (bash, zsh, etc.), needed for the make dev “development shell”

Once you have python and uv installed, get the project bootstrapped:

# clone grype and grype-db, which is needed for provider development
git clone git@github.com:anchore/grype.git
git clone git@github.com:anchore/grype-db.git
# note: if you already have these repos cloned, you can skip this step. However, if they
# reside in a different directory than where the vunnel repo is, then you will need to
# set the `GRYPE_PATH` and/or `GRYPE_DB_PATH` environment variables for the development
# shell to function. You can add these to a local .env file in the vunnel repo root.

# clone the vunnel repo
git clone git@github.com:anchore/vunnel.git
cd vunnel

# get basic project tooling
make bootstrap

# install project dependencies
uv sync --all-extras --dev

Pre-commit is used to help enforce static analysis checks with git hooks:

uv run pre-commit install --hook-type pre-push

Developing

Development shell

The easiest way to develop on a provider is to use the development shell, selecting the specific provider(s) you’d like to focus your development workflow on:

# Specify one or more providers you want to develop on.
# Any provider from the output of "vunnel list" is valid.
# Specify multiple as a space-delimited list:
# make dev providers="oracle wolfi nvd"
$ make dev provider="oracle"

Entering vunnel development shell...
• Configuring with providers: oracle ...
• Writing grype config: /Users/wagoodman/code/vunnel/.grype.yaml ...
• Writing grype-db config: /Users/wagoodman/code/vunnel/.grype-db.yaml ...
• Activating virtual env: /Users/wagoodman/code/vunnel/.venv ...
• Installing editable version of vunnel ...
• Building grype ...
• Building grype-db ...

Note: development builds grype and grype-db are now available in your path.
To update these builds run 'make build-grype' and 'make build-grype-db' respectively.
To run your provider and update the grype database run 'make update-db'.
Type 'exit' to exit the development shell.

You can now run the provider you specified in the make dev command, build an isolated grype DB, and import the DB into grype:

$ make update-db
• Updating vunnel providers ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
...
[0000]  INFO configured providers parallelism=1 providers=1
[0000] DEBUG   └── oracle
[0000] DEBUG all providers started, waiting for graceful completion...
[0000]  INFO running vulnerability provider provider=oracle
[0000] DEBUG oracle:  2023-03-07 15:44:13 [INFO] running oracle provider
[0000] DEBUG oracle:  2023-03-07 15:44:13 [INFO] downloading ELSA from https://linux.oracle.com/security/oval/com.oracle.elsa-all.xml.bz2
[0019] DEBUG oracle:  2023-03-07 15:44:31 [INFO] wrote 6298 entries
[0019] DEBUG oracle:  2023-03-07 15:44:31 [INFO] recording workspace state
• Building grype-db ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
[0000]  INFO reading all provider state
[0000]  INFO building DB build-directory=./build providers=[oracle] schema=5
• Packaging grype-db ...
[0000]  INFO grype-db version: ede464c2def9c085325e18ed319b36424d71180d-adhoc-build
[0000]  INFO packaging DB from="./build" for="https://toolbox-data.anchore.io/grype/databases"
[0000]  INFO created DB archive path=build/vulnerability-db_v5_2023-03-07T20:44:13Z_405ae93d52ac4cde6606.tar.gz
• Importing DB into grype ...
Vulnerability database imported

You can now run grype that uses the newly created DB:

$ grype oraclelinux:8.4
 ✔ Pulled image
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [195 packages]
 ✔ Scanning image...       [193 vulnerabilities]
   ├── 0 critical, 25 high, 146 medium, 22 low, 0 negligible
   └── 193 fixed

NAME                        INSTALLED                FIXED-IN                    TYPE  VULNERABILITY   SEVERITY
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.26-6.el8            rpm   ELSA-2021-4384  Medium
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.36-3.el8            rpm   ELSA-2022-2092  Medium
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.36-3.el8_6.1        rpm   ELSA-2022-6778  High
bind-export-libs            32:9.11.26-4.el8_4       32:9.11.36-5.el8            rpm   ELSA-2022-7790  Medium

# note that we're using the database we just built...
$ grype db status
Location:  /Users/wagoodman/code/vunnel/.cache/grype/5  # <--- this is the local DB we just built
...

# also note that we're using a development build of grype
$ which grype
/Users/wagoodman/code/vunnel/bin/grype

The development builds of grype and grype-db provided are derived from ../grype and ../grype-db paths relative to the vunnel project. If you want to use a different path, you can set the GRYPE_PATH and GRYPE_DB_PATH environment variables. This can be persisted by adding a .env file to the root of the vunnel project:

# example .env file in the root of the vunnel repo
GRYPE_PATH=~/somewhere/else/grype
GRYPE_DB_PATH=~/also/somewhere/else/grype-db

Rebuilding development tools

To rebuild the grype and grype-db binaries from local source, run:

make build-grype
make build-grype-db

Common commands

This project uses Make for running common development tasks:


make                  # run static analysis and unit testing
make static-analysis  # run static analysis
make unit             # run unit tests
make format           # format the codebase with black
make lint-fix         # attempt to automatically fix linting errors
...

If you want to see all of the things you can do:

make help

If you want to use a locally-editable copy of vunnel while you develop without the custom development shell:

uv pip uninstall vunnel  #... if you already have vunnel installed in this virtual env
uv pip install -e .

Snapshot tests

In order to ensure that the same feed state from providers produces the same set of vulnerabilities, snapshot testing is used.

Snapshot tests are run as part of ordinary unit tests, and will run during make unit.

To update snapshots, run the following pytest command. (Note that this example is for the debian provider, and the test name and path will be different for other providers):

pytest ./tests/unit/providers/debian/test_debian.py -k test_provider_via_snapshot --snapshot-update

Architecture

For detailed information about Vunnel’s architecture, including:

  • Provider abstraction and design
  • Workspace conventions
  • Vulnerability schemas (OS, NVD, GitHub, OSV)
  • Provider configuration options
  • Integration with Grype DB

See the Vunnel Architecture page.

Adding a new provider

“Vulnerability matching” is the process of taking a list of vulnerabilities and matching them against a list of packages. A provider in this repo is responsible for the “vulnerability” side of this process. The “package” side is handled by Syft. A prerequisite for adding a new provider is that Syft can catalog the package types that the provider is feeding vulnerability data for, so Grype can perform the matching from these two sources.

To add a new provider, you will need to create a new provider class under /src/vunnel/providers/<name> that inherits from provider.Provider and implements:

  • name(): a unique and semantically-useful name for the provider (same as the name of the directory)
  • update(): downloads and processes the raw data, writing all results with self.results_writer()

All results must conform to a particular schema; today there are a few kinds:

  • os: a generic operating system vulnerability (e.g. redhat, debian, ubuntu, alpine, wolfi, etc.)
  • nvd: tailored to describe vulnerabilities from the NVD
  • github-security-advisory: tailored to describe vulnerabilities from GitHub
  • osv: tailored to describe vulnerabilities from the aggregated OSV vulnerability database

Once the provider is implemented, you will need to wire it up into the application in a couple of places:

  • add a new entry under the dispatch table in src/vunnel/providers/__init__.py mapping your provider name to the class
  • add the provider configuration to the application configuration under src/vunnel/cli/config.py (specifically the Providers dataclass)

For a more detailed example on the implementation details of a provider see the “example” provider.

Validating this provider has different implications depending on what is being added. For example, if the provider is adding a new vulnerability source but is ultimately using an existing schema to express results then there may be very little to do! If you are adding a new schema, then the downstream data pipeline will need to be altered to support reading data in the new schema.

For an existing schema

1. Fork Vunnel and add the new provider.

Take a look at the example provider in the example directory. You are encouraged to copy example/awesome/* into src/vunnel/providers/YOURPROVIDERNAME/ and modify it to fit the needs of your new provider; however, this is not required:

# from the root of the vunnel repo
cp -a example/awesome src/vunnel/providers/YOURPROVIDERNAME

See the “example” provider README as well as the code comments for steps and considerations to take when implementing a new provider.

Once implemented, you should be able to see the new provider in the vunnel list command and run it with vunnel run <name>. The entries written should land in a specific namespace in the downstream DB, as indicated in the record. This namespace is needed when making Grype changes.

While developing the provider consider using the make dev provider="<your-provider-name>" developer shell to run the provider and manually test the results against grype.

At this point you can optionally open a Vunnel PR with your new provider and a Maintainer can help with the next steps. Or if you’d like to get PR changes merged faster you can continue with the next steps.

2. Fork Grype and map distro type to a specific namespace.

This step might not be needed depending on the provider.

A common reason for needing Grype changes is to map a distro or package type to the specific namespace your provider writes entries to.

If you’re using the developer shell (make dev ...) then you can run make build-grype to get a build of grype with your changes.

3. In Vunnel: add a new test case to tests/quality/config.yaml for the new provider.

The configuration maps a provider under test to specific images to test with, for example:

---
- provider: amazon
  images:
    - docker.io/amazonlinux:2@sha256:1301cc9f889f21dc45733df9e58034ac1c318202b4b0f0a08d88b3fdc03004de
    - docker.io/anchore/test_images:vulnerabilities-amazonlinux-2-5c26ce9@sha256:cf742eca189b02902a0a7926ac3fbb423e799937bf4358b0d2acc6cc36ab82aa

These images are used to test the provider on PRs and nightly builds to verify the specific provider is working. Always use both the image tag and digest for all container image entries. Pick an image that has a good representation of the package types that your new provider is adding vulnerability data for.

4. In Vunnel: swap the tools to your Grype branch in tests/quality/config.yaml.

If you want to see PR quality gate checks pass with your specific Grype changes (if you have any) then you can update the yardstick.tools[*] entries for grype to use a version that points to your fork (e.g. your-fork-username/grype@main). If you don’t have any Grype changes then you can skip this step.

5. In Vunnel: add new “vulnerability match labels” to annotate True and False positive findings with Grype.

In order to evaluate the quality of the new provider, we need to know what the expected results are. This is done by annotating Grype results with “True Positive” labels (good results) and “False Positive” labels (bad results). We’ll use Yardstick to do this:

$ cd tests/quality

# capture results with the development version of grype (from your fork)
$ make capture provider=<your-provider-name>

# list your results
$ uv run yardstick result list | grep grype

d415064e-2bf3-4a1d-bda6-9c3957f2f71a  docker.io/anc...  grype@v0.58.0             2023-03...
75d1fe75-0890-4d89-a497-b1050826d9f6  docker.io/anc...  grype[custom-db]@bdcefd2  2023-03...

# use the "grype[custom-db]" result UUID and explore the results and add labels to each entry
$ uv run yardstick label explore 75d1fe75-0890-4d89-a497-b1050826d9f6

# You can use the yardstick TUI to label results:
# - use "T" to label a row as a True Positive
# - use "F" to label a row as a False Positive
# - Ctrl-Z to undo a label
# - Ctrl-S to save your labels
# - Ctrl-C to quit when you are done

Later we’ll open a PR in the vulnerability-match-labels repo to persist these labels. In the meantime we can iterate locally with the labels we’ve added.

6. In Vunnel: run the quality gate.

cd tests/quality

# runs your specific provider to gather vulnerability data, builds a DB, and runs grype with the new DB
make capture provider=<your-provider-name>

# evaluate the quality gate
make validate

This uses the latest Grype DB release to build a DB containing only data from the new provider, then runs the specified Grype version against that DB.

You are looking for a passing run before continuing further.

7. Open a vulnerability-match-labels repo PR to persist the new labels.

Vunnel uses the labels in the vulnerability-match-labels repo via a git submodule. We’ve already added labels locally within this submodule in an earlier step. To persist these labels we need to push them to a fork and open a PR:

# fork the github.com/anchore/vulnerability-match-labels repo, but you do not need to clone it...

# from the Vunnel repo...
$ cd tests/quality/vulnerability-match-labels

$ git remote add fork git@github.com:your-fork-name/vulnerability-match-labels.git
$ git checkout -b 'add-labels-for-<your-provider-name>'
$ git status

# you should see changes from the labels/ directory for your provider that you added

$ git add .
$ git commit -m 'add labels for <your-provider-name>'
$ git push fork add-labels-for-<your-provider-name>

At this point you can open a PR against the vulnerability-match-labels repo.

Note: you will not be able to open a Vunnel PR that passes PR checks until the labels are merged into the vulnerability-match-labels repo.

Once the PR is merged in the vulnerability-match-labels repo you can update the submodule in Vunnel to point to the latest commit in the vulnerability-match-labels repo.

cd tests/quality

git submodule update --remote vulnerability-match-labels

8. In Vunnel: open a PR with your new provider.

The PR will also run all of the same quality gate checks that you ran locally.

If you have Grype changes, you should also create a PR for those as well. The Vunnel PR will not pass PR checks until the Grype PR is merged and the tests/quality/config.yaml file is updated to point back to the latest Grype version.

For a new schema

This is the same process as listed above with a few additional steps:

  1. You will need to add the new schema to the Vunnel repo in the schemas directory.
  2. Grype DB will need to be updated to support the new schema in the pkg/provider/unmarshal and pkg/process/v* directories.
  3. The Vunnel tests/quality/config.yaml file will need to be updated to use a development grype-db.version, pointing to your fork.
  4. The final Vunnel PR will not be able to be merged until the Grype DB PR is merged and the tests/quality/config.yaml file is updated to point back to the latest Grype DB version.

Contributing improvements

Finding refactoring opportunities

Looking to help out with improving the code quality of Vunnel, but not sure where to start?

The best way is to look for issues with the refactor label.

A more general way is to use radon to search for complexity and maintainability issues:

$ radon cc src --total-average -nb
src/vunnel/provider.py
    M 115:4 Provider._on_error - B
src/vunnel/providers/alpine/parser.py
    M 73:4 Parser._download - C
    M 178:4 Parser._normalize - C
    M 141:4 Parser._load - B
    C 44:0 Parser - B
src/vunnel/providers/amazon/parser.py
    M 66:4 Parser._parse_rss - C
    C 164:0 JsonifierMixin - C
    M 165:4 JsonifierMixin.json - C
    C 32:0 Parser - B
    M 239:4 PackagesHTMLParser.handle_data - B
...

The output of radon indicates the type (M=method, C=class, F=function), the path/name, and an A-F grade. Anything that’s not an A is worth taking a look at.

Another approach is to use wily:

$ wily build
...
$ wily rank
-----------Rank for Maintainability Index for bdb4983 by Alex Goodman on 2022-12-25.------------
╒═════════════════════════════════════════════════╤═════════════════════════╕
│ File                                            │   Maintainability Index │
╞═════════════════════════════════════════════════╪═════════════════════════╡
│ src/vunnel/providers/rhel/parser.py             │                 21.591  │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ src/vunnel/providers/ubuntu/parser.py           │                 21.6144 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/providers/github/test_github.py      │                 35.3599 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/utils/test_oval_v2.py                │                 36.3388 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ src/vunnel/providers/debian/parser.py           │                 37.3723 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/utils/test_fdb.py                    │                 38.6926 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/providers/sles/test_sles.py          │                 41.6602 │
├─────────────────────────────────────────────────┼─────────────────────────┤
│ tests/unit/providers/ubuntu/test_ubuntu.py      │                 43.1323 │
├─────────────────────────────────────────────────┼─────────────────────────┤
...

Ideally we should try to get wily diff output into the CI pipeline and post it as a sticky PR comment to show regressions (and potentially fail the CI run).

Adding type hints

This codebase has been ported from another repo that did not have any type hints. This is OK, though ideally over time this should be corrected as new features are added and bug fixes made.

We use mypy today for static type checking, however, the ported code has been explicitly ignored (see pyproject.toml).

If you want to make enhancements in this area consider using automated tooling such as pytype to generate types via inference into .pyi files and later merge them into the codebase with merge-pyi.

Alternatively, a tool like MonkeyType can be used to generate static types from runtime data and incorporate them into the code.


7 - Grant

Developer guidelines when contributing to Grant

Getting started

In order to test and develop in the Grant repo you will need the following dependencies installed:

  • Golang
  • Docker
  • make

Initial setup

Run once after cloning to install development tools:

make bootstrap

Useful commands

Common commands for ongoing development:

  • make help - List all available commands
  • make lint - Check code formatting and linting
  • make lint-fix - Auto-fix formatting issues
  • make unit - Run unit tests
  • make test - Run all tests
  • make snapshot - Build release snapshot with all binaries and packages (also available as make build)
  • make generate - Generate SPDX license index and license patterns

Testing

Levels of testing

  • unit (make unit): The default level of testing, distributed throughout the repo. Any _test.go file that does not reside somewhere within the /tests directory is a unit test. These tests focus on the correctness of functionality in depth. Test coverage metrics only consider unit tests, not other forms of testing.

  • integration (make test): located in tests/integration_test.go, these tests focus on policy loading, license evaluation, and core library behavior. They test the interaction between different components like policy parsing, license matching with glob patterns, and package evaluation logic.

  • cli (part of make test): located in tests/cli/, these are tests that test the correctness of application behavior from a snapshot build. These tests execute the actual Grant binary and verify command output, exit codes, and behavior of commands like check, list, and version.

Testing conventions

  • Unit tests should focus on correctness of individual functions and components
  • Integration tests validate that core library components work together correctly (policy evaluation, license matching, etc.)
  • CLI tests ensure user-facing commands produce expected output and behavior
  • Current coverage threshold is 8% (see Taskfile.yaml)
  • Use table-driven tests where appropriate to test multiple scenarios (see the sketch below)
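For example, a table-driven test generally takes this shape (a generic sketch; isDenied is a hypothetical stand-in for the unit under test, not a Grant function):

func TestIsDenied(t *testing.T) {
  tests := []struct {
    name    string
    license string
    want    bool
  }{
    {name: "denied license", license: "GPL-3.0-only", want: true},
    {name: "allowed license", license: "MIT", want: false},
  }
  for _, tt := range tests {
    t.Run(tt.name, func(t *testing.T) {
      // exercise the unit under test with this case's inputs
      if got := isDenied(tt.license); got != tt.want {
        t.Errorf("isDenied(%q) = %v, want %v", tt.license, got, tt.want)
      }
    })
  }
}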

Linting

You can run the linter for the project by running:

make lint

This checks code formatting with gofmt and runs golangci-lint checks.

To automatically fix linting issues:

make lint-fix

Code generation

Grant generates code and data files that need to be kept in sync with external sources:

What gets generated:

  • SPDX License Index - Up-to-date list of license identifiers from the SPDX project for license identification and validation
  • License File Patterns - Generated patterns to identify license files in scanned directories

When to regenerate:

Run code generation after:

  • The SPDX license list has been updated
  • You add new license file naming patterns
  • You change license detection logic

Generation commands:

  • make generate - Run all generation tasks
  • make generate-spdx-licenses - Download and generate latest SPDX license list
  • make generate-license-patterns - Generate license file patterns (depends on SPDX license index)

After running generation commands, review the changes carefully and commit them as part of your pull request.

Package structure

Grant is organized into two main areas: the public library API and the CLI application. For detailed API documentation, see the Grant Go package reference.

grant/ - Public Library API

The top-level grant/ package is the public library that other projects can import and use. This is what you’d reference with import "github.com/anchore/grant/grant".

This package contains the core functionality:

  • License evaluation and matching
  • Policy loading and validation
  • Package analysis and filtering

Most contributions to core Grant functionality belong in this package.

cmd/grant/ - CLI Application

The CLI application is built on top of the grant/ library and contains application-specific code:

cmd/grant/
├── cli/            # Command wiring and application setup
│   ├── command/    # CLI command implementations (list, check, etc.)
│   ├── internal/   # Internal command implementations
│   ├── option/     # Command flags and configuration options
│   └── tui/        # Terminal UI and event handlers
└── main.go         # Application entrypoint

Contributions to CLI features, command behavior, or user interface improvements belong in this package.


8 - Sign-off Commits

How to sign off commits with the Developer Certificate of Origin

Sign off your work

All commits require a simple sign-off line to confirm you have the right to contribute your code. This is a standard practice in open source called the Developer Certificate of Origin (DCO).

How to sign off

The easiest way is to use the -s or --signoff flag when committing:

git commit -s -m "your commit message"

This automatically adds a sign-off line to your commit message:

Signed-off-by: Your Name <your.email@example.com>

Tip: You can configure Git to always sign off commits automatically:

git config --global format.signoff true

Verify your sign-off

To check that your commit includes the sign-off, look at the log output:

git log -1

You should see the Signed-off-by: line at the end of your commit message:

commit 37cea170e4ab283bb73d958f2036ee5c07e7fde7
Author: Your Name <your.email@example.com>
Date:   Sat Aug 1 11:27:13 2020 -0400

    your commit message

    Signed-off-by: Your Name <your.email@example.com>

Why we require sign-off

In plain English: By adding a sign-off line, you’re confirming that:

  • You wrote the code yourself, OR
  • You have permission to submit it, AND
  • You’re okay with it being released under the project’s open source license

This protects both you and the project. It’s a simple legal formality that takes just a few seconds to add to each commit.

All contributions to this project are licensed under the Apache License Version 2.0.

Adding sign-off to existing commits

If you’ve already committed without a sign-off (easy to do!), you can add it retroactively.

For your most recent commit

git commit --amend --signoff

This updates your last commit to include the sign-off line.

For older commits

If you need to add sign-off to commits further back in your history:

git rebase --signoff HEAD~N

Replace N with the number of commits you need to sign. For example, HEAD~3 signs off the last 3 commits.

Note: If you’ve already pushed these commits, you’ll need to force-push after rebasing:

git push --force-with-lease
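
Putting it together for three unsigned commits (use git log -3 afterwards to confirm each message now ends with a Signed-off-by: line):

git rebase --signoff HEAD~3
git log -3
git push --force-with-lease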

If you’re new to rebasing

Rebasing rewrites commit history, which can be tricky if you’re not familiar with it. If you run into issues:

  1. Ask for help in the PR comments
  2. Or, create a fresh branch from the latest main and cherry-pick your changes
  3. The maintainers can also help you fix sign-off issues during the review process

What the DCO means (technical details)

The Developer Certificate of Origin (DCO) is a legal attestation that you have the right to submit your contribution under the project’s license. Here’s the full text:

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

   (a) The contribution was created in whole or in part by me and I
       have the right to submit it under the open source license
       indicated in the file; or

   (b) The contribution is based upon previous work that, to the best
       of my knowledge, is covered under an appropriate open source
       license and I have the right under that license to submit that
       work with modifications, whether created in whole or in part
       by me, under the same open source license (unless I am
       permitted to submit under a different license), as indicated
       in the file; or

   (c) The contribution was provided directly to me by some other
       person who certified (a), (b) or (c) and I have not modified
       it.

   (d) I understand and agree that this project and the contribution
       are public and that a record of the contribution (including all
       personal information I submit with it, including my sign-off) is
       maintained indefinitely and may be redistributed consistent with
       this project or the open source license(s) involved.

The DCO protects both contributors and the project by creating a clear record of contribution rights and licensing terms.

9 - SBOM Action

Developer guidelines when contributing to sbom-action

Getting started

In order to test and develop in the sbom-action repo you will need the following dependencies installed:

  • Node.js (>= 20.11.0)
  • npm
  • Docker

Initial setup

Run once after cloning to install dependencies and development tools:

npm install

This command installs all dependencies and sets up Husky git hooks that automatically format code and rebuild the distribution files before commits.

Useful commands

Common commands for ongoing development:

  • npm run build - Check TypeScript compilation (no output files)
  • npm run lint - Check code with ESLint
  • npm run format - Auto-format code with Prettier
  • npm run format-check - Check code formatting without changes
  • npm run package - Build distribution files with ncc (outputs to dist/)
  • npm test - Run Jest tests
  • npm run all - Complete validation suite (build + format + lint + package + test)

Testing

The sbom-action uses Jest for testing. To run the test suite:

npm test

The CI workflow handles any additional setup automatically (like Docker registries). For local development, you just need to install dependencies and run tests.
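
In other words, the minimal local loop is:

npm install
npm test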

Test types

The test suite includes two main categories:

  • Unit tests (e.g., tests/GithubClient.test.ts, tests/SyftGithubAction.test.ts): Test individual components in isolation by mocking GitHub Actions context and external dependencies.

  • Integration tests (tests/integration/): Execute the full action workflow with real Syft invocations against test fixtures in tests/fixtures/ (npm-project, yarn-project). These tests use snapshot testing to validate SBOM output and GitHub dependency snapshot uploads.

Snapshot testing

Integration tests extensively use Jest’s snapshot testing to validate SBOM output. When you run integration tests, Jest compares the generated SBOMs against saved snapshots in tests/integration/__snapshots__/.

The tests normalize dynamic values (timestamps, hashes, IDs) before comparison to ensure consistent snapshots across runs.

Updating snapshots:

When you intentionally change SBOM output format or content, update the snapshots:

npm run test:update-snapshots
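
Then review the updated snapshots before committing:

git diff tests/integration/__snapshots__/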

Development workflow

Pre-commit hooks

The sbom-action uses Husky to run automated checks before each commit:

  1. Code formatting - Prettier formats staged TypeScript files
  2. Distribution rebuild - Runs npm run package to rebuild dist/ directory
  3. Auto-staging - Automatically stages updated dist/ files

The hook is defined in .husky/pre-commit and runs the precommit npm script.
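
You can also run the same checks manually at any time, without committing:

npm run precommit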

Code organization

The sbom-action consists of three GitHub Actions, each with its own entry point:

Main action (action.yml):

  • Entry point: src/runSyftAction.ts
  • Compiled to: dist/runSyftAction/index.js
  • Generates SBOMs and uploads as workflow artifacts and release assets

Publish SBOM sub-action (publish-sbom/action.yml):

  • Entry point: src/attachReleaseAssets.ts
  • Compiled to: dist/attachReleaseAssets/index.js
  • Uploads existing SBOMs to GitHub releases

Download Syft sub-action (download-syft/action.yml):

  • Entry point: src/downloadSyft.ts
  • Compiled to: dist/downloadSyft/index.js
  • Downloads and caches Syft binary

Key modules:

  • src/Syft.ts - Wraps Syft execution and configuration
  • src/SyftVersion.ts - Manages Syft version resolution
  • src/github/SyftDownloader.ts - Handles Syft binary downloads
  • src/github/SyftGithubAction.ts - Core action orchestration logic
  • src/github/GithubClient.ts - GitHub API interactions
  • src/github/Executor.ts - Command execution wrapper

GitHub Actions specifics

Debugging Actions

Enable detailed debug logging by setting a repository secret:

  1. Go to your repository Settings → Secrets and variables → Actions
  2. Add a new secret: ACTIONS_STEP_DEBUG = true

This enables debug logging from the @actions/toolkit libraries used throughout the action.
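
If you use the GitHub CLI, you can set the same secret from a terminal (assuming gh is installed and authenticated for the repository):

gh secret set ACTIONS_STEP_DEBUG --body true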

See the GitHub documentation for more details.

Testing Actions locally

CI validation:

The repository includes comprehensive CI workflows in .github/workflows/test.yml that:

  • Test on Ubuntu and Windows
  • Validate distribution files are up-to-date
  • Test scanning directories and container images
  • Verify all SBOM formats
  • Test sub-actions (download-syft, publish-sbom)

Manual testing:

Test changes in your own workflows using the repository name and branch:

- uses: <your-username>/sbom-action@<your-branch>
  with:
    path: ./

Or test locally using act if you have it installed.
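
For example, with act installed you can run a single workflow job locally (the job name here is an assumption; check .github/workflows/test.yml for the actual names):

act -j test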

Action runtime

The sbom-action uses the Node.js 20 runtime (runs.using: node20 in action.yml). This runtime is provided by GitHub Actions and doesn’t require separate installation in workflows.
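
Reconstructed from the entry points listed above, the runs block for the main action looks roughly like this:

runs:
  using: node20
  main: dist/runSyftAction/index.js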

Next Steps

Understanding the Codebase

Contributing Your Work

Finding Work

Getting Help

10 - Security Policy

How to report security vulnerabilities in Anchore OSS projects

Security is a top priority for Anchore’s open source projects. We appreciate the security research community’s efforts in responsibly disclosing vulnerabilities to help keep our users safe.

Supported Versions

Security updates are applied only to the most recent release of each project. We strongly recommend staying up to date with the latest versions to ensure you have the most recent security patches and fixes.

If you’re using an older version and concerned about a security issue, please upgrade to the latest release. For questions about specific versions, reach out on Discourse.

Reporting a Vulnerability

Found a security vulnerability? Please report security issues privately by emailing security@anchore.com rather than creating a public GitHub issue. This gives us time to fix the problem and protect users before details become public.

What to Include in Your Report

To help us understand and address the issue quickly, please include as much detail as you can:

  • Description: A clear description of the vulnerability and its potential impact
  • Steps to reproduce: Detailed steps to recreate the issue
  • Affected versions: Which versions of the tool are vulnerable
  • Proof of concept: If available, a minimal example demonstrating the issue
  • Suggested mitigation: If you have ideas for how to fix or mitigate the issue
  • Urgency level: Your assessment of the severity (Critical, High, Medium, or Low)

Don’t worry if you can’t provide every detail; partial reports are still valuable and welcome. We’ll work with you to understand the issue.

What to Expect

After you submit a report:

  1. Acknowledgment: You’ll receive an initial response confirming we’ve received your report
  2. Assessment: The security team will investigate and assess the severity and impact
  3. Updates: We’ll keep you informed of our progress and any questions we have
  4. Resolution: Once a fix is developed, we’ll coordinate disclosure timing with you if necessary
  5. Credit: With your permission, we’ll acknowledge your responsible disclosure in release notes

Disclosure Policy

Anchore follows a coordinated disclosure process:

  1. Security issues are addressed privately until a fix is available
  2. Fixes are released as quickly as possible based on severity
  3. Security advisories are published after fixes are released
  4. Credit is given to security researchers who report responsibly

Thank you for helping keep Anchore’s open source projects and their users secure.

11 - Code of Conduct

Community standards and guidelines for respectful collaboration

All Anchore open source projects follow the Contributor Covenant Code of Conduct.

Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

Our Standards

Examples of behavior that contributes to a positive environment for our community include:

  • Demonstrating empathy and kindness toward other people
  • Being respectful of differing opinions, viewpoints, and experiences
  • Giving and gracefully accepting constructive feedback
  • Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
  • Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

  • The use of sexualized language or imagery, and sexual attention or advances of any kind
  • Trolling, insulting or derogatory comments, and personal or political attacks
  • Public or private harassment
  • Publishing others’ private information, such as a physical or email address, without their explicit permission
  • Other conduct which could reasonably be considered inappropriate in a professional setting

Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official email address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at opensource@anchore.com.

All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

1. Warning

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: The original post will be edited or removed and a warning issued to the offender.

2. Temporary Ban

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

3. Permanent Ban

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

12 - Scan Action

Developer guidelines when contributing to scan-action

Getting started

In order to test and develop in the scan-action repo you will need the following dependencies installed:

  • Node.js (>= 20.11.0)
  • npm
  • Docker

Initial setup

Run once after cloning to install dependencies and development tools:

npm install

This command installs all dependencies and sets up Husky git hooks that automatically format code and rebuild the distribution files before commits.

Useful commands

Common commands for ongoing development:

  • npm run build - Bundle with ncc and normalize line endings
  • npm run lint - Check code with ESLint
  • npm run prettier - Auto-format code with Prettier
  • npm test - Complete test suite (lint + install Grype + build + run tests)
  • npm run run-tests - Run Jest tests only
  • npm run test:update-snapshots - Update test expectations (lint + install Grype + run tests with snapshot updates)
  • npm run audit - Run security audit on production dependencies
  • npm run update-deps - Update dependencies with npm-check-updates

Testing

Tests require Grype to be installed locally and a Docker registry for integration tests. Set up your test environment:

Install Grype locally:

npm run install-and-update-grype

Start local Docker registry:

docker run -d -p 5000:5000 --name registry registry:2
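
You can confirm the registry is reachable by pushing any small image to it (alpine here is just an example):

docker pull alpine:latest
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest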

Tests automatically disable Grype database auto-update and validation to ensure consistent test results.

CI environment:

The GitHub Actions test workflow automatically:

  • Starts a Docker registry service on port 5000
  • Tests on Ubuntu, Windows, and macOS
  • Validates across multiple configurations (image/path/sbom sources, output formats)

Test types

The scan-action uses Jest for testing with several categories:

  • Unit tests (e.g., tests/action.test.js, tests/grype_command.test.js): Test individual functions in isolation by mocking GitHub Actions context and external dependencies.

  • Integration tests: Execute the full action workflow with real Grype invocations. These tests validate end-to-end functionality including downloading Grype, running scans, and generating output files.

  • SARIF validation tests (tests/sarif_output.test.js): Validate SARIF report structure and content using the @microsoft/jest-sarif library to ensure consistent output format and compliance with the SARIF specification.

  • Distribution tests (tests/dist.test.js): Verify that the committed dist/ directory is up-to-date with the source code.

Test fixtures:

The tests/fixtures/ directory contains sample projects and files for testing:

  • npm-project/ - Sample npm project for directory scanning
  • yarn-project/ - Sample yarn project for directory scanning
  • test_sbom.spdx.json - Sample SBOM file for SBOM scanning tests

SARIF output testing

The SARIF output tests validate report structure using the @microsoft/jest-sarif library. Tests normalize dynamic values (versions, fully qualified names) before validation to ensure consistent results across test runs.

The tests validate that:

  • Generated SARIF reports are valid according to the SARIF specification
  • Expected vulnerabilities are detected in test fixtures
  • Output structure remains consistent across runs

If you need to update test expectations, run:

npm run test:update-snapshots

Development workflow

Pre-commit hooks

The scan-action uses Husky to run automated checks before each commit:

  1. Code formatting - lint-staged runs Prettier on staged JavaScript files
  2. Distribution rebuild - Runs npm run precommit to rebuild dist/ directory
  3. Auto-staging - Automatically stages updated dist/ files

The hook is defined in .husky/pre-commit and ensures that distribution files are always synchronized with source code.

Code organization

The scan-action has a straightforward single-file architecture:

Main action (action.yml):

  • Entry point: index.js
  • Compiled to: dist/index.js
  • Downloads Grype, runs vulnerability scans, generates reports

Download Grype sub-action (download-grype/action.yml):

  • Entry point: Reuses dist/index.js with run: "download-grype" input
  • Provides standalone Grype download and caching
  • Returns the cmd output, containing the path to the Grype binary

Key functions in index.js:

  • downloadGrype() - Downloads Grype using install script
  • downloadGrypeWindowsWorkaround() - Windows-specific download logic
  • installGrype() - Installs and caches Grype binary
  • sourceInput() - Validates mutually exclusive inputs (image/path/sbom)
  • run() - Main action execution flow
  • Command construction and output handling

GitHub Actions specifics

Debugging Actions

Enable detailed debug logging by setting a repository secret:

  1. Go to your repository Settings → Secrets and variables → Actions
  2. Add a new secret: ACTIONS_STEP_DEBUG = true

This enables debug logging from the @actions/toolkit libraries used throughout the action.

See the GitHub documentation for more details.

Testing Actions locally

CI validation:

The repository includes comprehensive CI workflows in .github/workflows/test.yml that:

  • Test on Ubuntu, Windows, and macOS
  • Validate distribution files are up-to-date
  • Test scanning images, directories, and SBOM files
  • Verify all output formats (SARIF, JSON, CycloneDX, table)
  • Test download-grype sub-action

Manual testing:

Test changes in your own workflows using the repository name and branch:

- uses: <your-username>/scan-action@<your-branch>
  with:
    image: "alpine:latest"

Or test locally using act if you have it installed.

Action runtime

The scan-action uses the Node.js 20 runtime (runs.using: node20 in action.yml). This runtime is provided by GitHub Actions and doesn’t require separate installation in workflows.

Next Steps

Understanding the Codebase

Contributing Your Work

Finding Work

Getting Help

13 - Docs (this site!)

Style guide for writing Anchore OSS documentation

This style guide is for the Anchore OSS documentation. The style guide helps contributors to write documentation that readers can understand quickly and correctly.

The Anchore OSS docs aim for:

  • Consistency in style and terminology, so that readers can expect certain structures and conventions. Readers don’t have to keep re-learning how to use the documentation or questioning whether they’ve understood something correctly.
  • Clear, concise writing so that readers can quickly find and understand the information they need.

Use standard American spelling

Use American spelling rather than Commonwealth or British spelling. Refer to Merriam-Webster’s Collegiate Dictionary, Eleventh Edition.

Use capital letters sparingly

Some hints:

  • Capitalize only the first letter of each heading within the page. (That is, use sentence case.)
  • Capitalize (almost) every word in page titles. (That is, use title case.) The little words like “and”, “in”, etc., don’t get a capital letter.
  • In page content, use capitals only for brand names, like Syft, Anchore, and so on. See more about brand names below.
  • Don’t use capital letters to emphasize words.

Spell out abbreviations and acronyms on first use

Always spell out the full term for every abbreviation or acronym the first time you use it on the page. Don’t assume people know what an abbreviation or acronym means, even if it seems like common knowledge.

Example: “To run Grype locally in a virtual machine (VM)”

Use contractions if you want to

For example, it’s fine to write “it’s” instead of “it is”.

Use full, correct brand names

When referring to a product or brand, use the full name. Capitalize the name as the product owners do in the product documentation. Do not use abbreviations even if they’re in common use, unless the product owner has sanctioned the abbreviation.

  • Use “Anchore”, not “anchore”.
  • Use “Kubernetes”, not “k8s”.
  • Use “GitHub”, not “github”.

Be consistent with punctuation

Use punctuation consistently within a page. For example, if you use a period (full stop) after every item in a list, then use a period after the items in all other lists on the page.

Check the other pages if you’re unsure about a particular convention.

Examples:

  • Most pages in the Anchore OSS docs use a period at the end of every list item.
  • There is no period at the end of the page subtitle and the subtitle need not be a full sentence. (The subtitle comes from the description in the front matter of each page.)

Use active voice rather than passive voice

Passive voice is often confusing, as it’s not clear who should perform the action.

  • Use “You can configure Grype to”, not “Grype can be configured to”.
  • Use “Add the directory to your path”, not “The directory should be added to your path”.

Use simple present tense

Avoid future tense (“will”) and complex syntax such as conjunctive mood (“would”, “should”).

  • Use “The following command provisions a virtual machine”, not “The following command will provision a virtual machine”.
  • Use “If you add this configuration element, the system is open to the Internet”, not “If you added this configuration element, the system would be open to the Internet”.

Exception: Use future tense if it’s necessary to convey the correct meaning. This requirement is rare.

Address the audience directly

Using “we” in a sentence can be confusing, because the reader may not know whether they’re part of the “we” you’re describing.

For example, compare the following two statements:

  • “In this release we’ve added many new features.”
  • “In this tutorial we build a flying saucer.”

The words “the developer” or “the user” can be ambiguous. For example, if the reader is building a product that also has users, then the reader does not know whether you’re referring to the reader or the users of their product.

  • Use “Include the directory in your path”, not “The user must make sure that the directory is included in their path”.
  • Use “In this tutorial you build a flying saucer”, not “In this tutorial we build a flying saucer”.

Use short, simple sentences

Keep sentences short. Short sentences are easier to read than long ones. Below are some tips for writing short sentences.

Use fewer words instead of many words that convey the same meaning:

  • Use “You can use”, not “It is also possible to use”.
  • Use “You can”, not “You are able to”.

Split a single long sentence into two or more shorter ones:

  • Use “You do not need a running GKE cluster. The deployment process creates a cluster for you.”, not “You do not need a running GKE cluster, because the deployment process creates a cluster for you.”

Use a list instead of a long sentence showing various options. For example, instead of “To scan a container, you must package the software in an OCI container, upload the container to an online registry, and run Grype with the container name as a parameter”, write:

To scan a container for vulnerabilities:

  1. Package the software in an OCI container.
  2. Upload the container to an online registry.
  3. Run Grype with the container name as a parameter.

Avoid too much text styling

Use bold text when referring to UI controls or other UI elements.

Use code style for:

  • filenames, directories, and paths
  • inline code and commands
  • object field names

Avoid using bold text or capital letters for emphasis. If a page has too much textual highlighting it becomes confusing and even annoying.

Use angle brackets for placeholders

For example:

  • export SYFT_PARALLELISM=<number>
  • --email <your email address>

Style your images

The Anchore OSS docs support Bootstrap classes for styling images and other content.

The following code snippet shows the typical styling that makes an image show up nicely on the page:

<!-- for wide images -->
<img src="/images/my-image.png" alt="My image" class="mt-3 mb-3 border rounded" />

<!-- for tall images -->
<img src="/images/my-image.png" alt="My image" class="mt-3 mb-3 border rounded" style="width: 100%; max-width: 30em" />

To see some examples of styled images, take a look at the Kubeflow OAuth setup page.

For more help, see the guide to Bootstrap image styling and the Bootstrap utilities, such as borders.

A detailed style guide

The Google Developer Documentation Style Guide contains detailed information about specific aspects of writing clear, readable, succinct documentation for a developer audience.

Next steps