Applies to BloodHound Enterprise
This is a SpecterOps-managed feature. If it is not enabled in your environment, contact your account team for assistance.
OpenHound for BloodHound Enterprise is deployed as a container and can be run in different environments, such as:
  • Bare-metal or virtual Linux server using Docker or Podman (e.g., Ubuntu, RHEL)
  • Cloud-native container service such as AWS Fargate
  • Self-managed Kubernetes cluster on-premises or in the cloud
  • Managed Kubernetes service such as Amazon EKS, Google GKE, or Azure AKS
The OpenHound container image is available on Docker Hub and includes the collectors for GitHub, Jamf, and Okta by default.

Deployment process

To deploy OpenHound and run a collector:

1. Provision the environment
   Provision a server with container (Docker/Podman) support, a Kubernetes cluster, or a cloud-native container service to run OpenHound. Ensure that the environment meets the system requirements listed below.

2. Create an OpenHound collector client
   Register an OpenHound collector client on your BloodHound Enterprise instance to obtain the API credentials OpenHound needs to upload data.

3. Deploy OpenHound
   Deploy the OpenHound container and configure the required parameters using the configuration file or environment variables. For more details on the required parameters, visit the configuration page as well as the collector-specific configuration pages in the collectors section.

4. Run the collector
   Run an On Demand scan or create a data collection schedule to start the collector and collect data from your environment.

Enterprise collector configuration

Minimum configuration parameters

The following parameters must be set in the [destination.bloodhoundenterprise] section of the configuration file or via environment variables to run OpenHound and have it upload data to BloodHound Enterprise.
Runtime option | Environment variable | Description
token_key | DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_KEY | The API token key for authenticating with BloodHound Enterprise.
token_id | DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_ID | The API token ID for authenticating with BloodHound Enterprise.
url | DESTINATION__BLOODHOUNDENTERPRISE__URL | The URL of the BloodHound Enterprise instance.
interval | DESTINATION__BLOODHOUNDENTERPRISE__INTERVAL | The interval at which OpenHound will check for available jobs.
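As the variable names above suggest, each configuration key maps to an environment variable by uppercasing the section path and joining it with double underscores. As an illustrative sketch, the destination could be configured entirely through the environment (all values here are placeholders, not working credentials):

```shell
# Placeholder values -- substitute your instance URL and API token pair
export DESTINATION__BLOODHOUNDENTERPRISE__URL="https://example.bloodhoundenterprise.io"
export DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_ID="client_token_id"
export DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_KEY="client_token_key"
export DESTINATION__BLOODHOUNDENTERPRISE__INTERVAL="300"  # job-check interval
```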

Collector-specific configuration parameters

Each collector requires additional configuration parameters that are unique to its data source. These settings can be defined in the configuration file or supplied via environment variables. For details on the available parameters, refer to the collector-specific configuration documentation using the links below.

GitHub

View the required configuration parameters for the GitHub collector.

Jamf

View the required configuration parameters for the Jamf collector.

Okta

View the required configuration parameters for the Okta collector.

Full configuration example

The following example shows the full configuration for OpenHound with the required parameters for BloodHound Enterprise as well as example parameters for the Jamf collector. The configuration is split into two files: config.toml for the non-sensitive OpenHound/DLT configuration parameters and secrets.toml for the sensitive credentials and secrets. The configuration files can be provided through Compose secrets, Kubernetes Secret and ConfigMap objects, or supplied via environment variables.
config.toml
[runtime]
http_show_error_body = true
log_cli_level = "WARNING"
log_format = "JSON"
log_rotate_when = "midnight"

[extract]
workers = 8

[normalize]
workers = 3

[load]
delete_completed_jobs = true
truncate_staging_dataset = true
secrets.toml
[sources.source.jamf]
username = "myusername"
host = "https://tenant.jamfcloud.com"
password = "mypassword"

[destination.bloodhoundenterprise]
interval = "300"
token_key = "client_token_key"
token_id = "client_token_id"
url = "https://test.bloodhoundenterprise.io"

Deployment examples

Kubernetes

An example Helm chart is available in the OpenHound GitHub repository for deploying OpenHound in a Kubernetes cluster. The chart can be used as a reference implementation for how OpenHound can be run on Kubernetes. The repository also includes a full README with instructions for deploying OpenHound and configuring the collector parameters and secrets required to connect to BloodHound Enterprise.

Prerequisites

  • Kubernetes cluster v1.33+
  • Helm v4.0+
  • The OpenHound repository cloned locally or access to the Helm chart files in the repository
  • A namespace created for OpenHound (e.g., openhound)

ConfigMap / Secret

The chart requires a Kubernetes ConfigMap and Secret to be pre-created and referenced in config.existingConfigMap and config.existingSecret. The ConfigMap should contain the OpenHound configuration file with the non-sensitive DLT configuration parameters while the Secret should contain the secrets for authenticating with the target services and BloodHound Enterprise. The contents of the ConfigMap and Secret are mounted as /app/.dlt/config.toml and /app/.dlt/secrets.toml. Create the necessary config.toml and secrets.toml files locally and deploy these resources using kubectl.
# Using jamf as an example collector, but the same pattern applies for any collector
kubectl create -n openhound configmap openhound-jamf-config \
  --from-file=config.toml=<path_to_local_config>.toml

kubectl create -n openhound secret generic openhound-jamf-secrets \
  --from-file=secrets.toml=<path_to_local_secrets>.toml
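With those resources in place, the chart values can reference them by name. A hypothetical values.yaml excerpt, using the config.existingConfigMap and config.existingSecret keys described above and the resource names from the kubectl commands:

```yaml
# Hypothetical values.yaml excerpt referencing the pre-created resources
config:
  existingConfigMap: openhound-jamf-config
  existingSecret: openhound-jamf-secrets
```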

Optional: Additional secret files

Collectors may require extra secret files such as a GitHub app PEM or an Okta JSON key file. Use collector.extraSecretMounts to mount files from pre-existing Kubernetes Secret objects.
collector:
  extraSecretMounts:
    github-pem:
      secretName: github-app-credentials
      secretKey: github.pem
      mountPath: /app/.dlt/github.pem
    okta-json:
      secretName: okta-client-secret
      secretKey: okta.json
      mountPath: /app/.dlt/okta.json
Each item references an existing Secret, the key containing the file data, and the target mount path. As with the previously created secrets.toml, you can deploy any additional secrets using kubectl.
kubectl create -n openhound secret generic github-app-credentials \
  --from-file=github.pem=./github.pem

kubectl create -n openhound secret generic okta-client-secret \
  --from-file=okta.json=./okta.json

Installation

Create a values.yaml file with your collector configuration using values.example.yaml as a reference and install the chart using:
helm install -f values.yaml -n openhound openhound-[name] [openhound_path]/deployments/helm/openhound

Verification

Inspect the logs of the deployment to verify that the OpenHound collector is running correctly.
kubectl logs -n openhound -l app.kubernetes.io/name=openhound -f

Compose

The OpenHound container image can also be deployed using Docker Compose or Podman Compose on a Linux server. This is an alternative option for testing or environments where Kubernetes is not necessary.

Prerequisites

  • A Linux server with Docker or Podman installed
  • The compose plugin installed (e.g., docker compose or podman compose)
  • A compose file such as docker-compose.bhe.yml

Installation

To get started, create a config.toml and secrets.toml file with the necessary configuration parameters on the host machine. Use the docker-compose.bhe.yml file as a reference for how to structure the Compose file and mount the configuration files and secrets. Then run the following command to start the OpenHound container:
docker compose up -d
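For reference, the Compose file could look roughly like the sketch below. The image name is an assumption (check Docker Hub for the actual repository and tag); the /app/.dlt mount paths match those used by the Helm chart described above:

```yaml
# Sketch of a minimal Compose file; image name is assumed
services:
  openhound:
    image: specterops/openhound:latest  # assumption -- verify on Docker Hub
    restart: unless-stopped
    volumes:
      - ./config.toml:/app/.dlt/config.toml:ro
      - ./secrets.toml:/app/.dlt/secrets.toml:ro
```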

Verification

Inspect the logs of the container(s) to verify that the OpenHound collector is running correctly.
docker compose logs -f

Server/container requirements

Hardware

Component | Minimum requirement | Recommended requirement
CPU | 2 cores | 6 cores or more
Memory | 4 GB | 8 GB or more
Storage* | 50 GB | 80 GB or more
*The requirements vary depending on the size of the environment being collected/converted. Container storage persistence is not required; however, persistent storage for the logs is recommended for troubleshooting. Logs are stored in the ~/.local/share/openhound/logs directory and can be configured to rotate based on time or file size.