Installing the OpenHound CLI
OpenHound is typically installed and executed inside a dedicated Python virtual environment. We recommend using a Python package and project manager such as uv to manage the virtual environment and dependencies. With uv, you can install OpenHound directly from your terminal.
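A minimal install with uv might look like the following sketch (the package name `openhound` and the use of `uv tool install` are assumptions; adjust to match how the project is actually distributed):

```shell
# Install OpenHound as an isolated, uv-managed tool
# (package name is an assumption; check the project's install docs)
uv tool install openhound

# Confirm the CLI is available on your PATH
openhound --help
```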
OpenHound includes collectors for the following platforms:

- Okta
- Jamf
- GitHub
CLI Commands
The openhound command supports several subcommands for running collectors and converting collected data to the OpenGraph format.
The following sections provide an overview of the commonly used subcommands. For a full list of available commands and options, refer to the built-in help documentation by running openhound --help or openhound <subcommand> --help.
Collect
The collect subcommand starts the collection process. You start collecting data by specifying the collector name and an output directory. For example, you can run the Okta collector and save its output in the ./output/okta directory. The collected resources are saved in a compressed JSONL format and can be processed and converted to OpenGraph using the preproc and convert subcommands.
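An Okta collection run might look like the following sketch (the positional argument and the `--output` flag name are assumptions; see `openhound collect --help` for the real options):

```shell
# Run the Okta collector and write results to ./output/okta
# (flag names are assumptions; consult `openhound collect --help`)
openhound collect okta --output ./output/okta
```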
Preprocess
The preproc subcommand preprocesses collected resources before they are converted into OpenGraph data. During this phase, OpenHound imports raw data into a DuckDB database and creates materialized tables to support efficient lookups for transformations.
This phase is optional and is only required for collectors that depend on preprocessed data. For example, you can preprocess Okta resources from the ./output/okta directory into a DuckDB lookup database at ./lookup.duckdb.
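A preprocessing run might look like the following sketch (the `--input` and `--database` flag names are assumptions; see `openhound preproc --help` for the real options):

```shell
# Build a DuckDB lookup database from collected Okta resources
# (flag names are assumptions; consult `openhound preproc --help`)
openhound preproc okta --input ./output/okta --database ./lookup.duckdb
```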
Convert
The convert subcommand transforms collected resources into the OpenGraph format. This phase converts the raw data into nodes and edges and saves them as OpenGraph-compatible JSON files that you can upload to BloodHound. For example, you can convert the collected Okta resources from the ./output/okta directory and save the OpenGraph files in the ./graph/okta directory.
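A conversion run might look like the following sketch (the `--input` and `--output` flag names are assumptions; see `openhound convert --help` for the real options):

```shell
# Convert collected Okta resources into OpenGraph JSON files
# (flag names are assumptions; consult `openhound convert --help`)
openhound convert okta --input ./output/okta --output ./graph/okta
```

The resulting JSON files in ./graph/okta can then be uploaded to BloodHound.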
OpenHound will automatically split each graph into multiple files based on source node and chunk size to optimize file uploads for BloodHound.