This is a SpecterOps-managed feature. If it is not enabled in your environment, contact your account team for assistance.
- Bare-metal or virtual Linux server using Docker or Podman (e.g., Ubuntu, RHEL)
- Cloud-native container service such as AWS Fargate
- Self-managed Kubernetes cluster on-premises or in the cloud
- Managed Kubernetes service such as Amazon EKS, Google GKE, or Azure AKS
Deployment process
To deploy OpenHound and run a collector:
Provision the environment
Provision a server with container (Docker/Podman) support, a Kubernetes cluster, or a cloud-native container service to run OpenHound. Ensure that the environment meets the system requirements listed below.
Create an OpenHound collector client
Register an OpenHound collector client on your BloodHound Enterprise instance to obtain the necessary API credentials for OpenHound to upload data.
Deploy OpenHound
Deploy the OpenHound container and configure the required parameters using the configuration file or environment variables. For more details on the required parameters, visit the configuration page as well as the collector-specific configuration pages in the collectors section.
Enterprise collector configuration
Minimum configuration parameters
The following parameters must be set in the [destination.bloodhoundenterprise] section of the configuration file or via environment variables to run OpenHound and have it upload data to BloodHound Enterprise.
| Runtime option | Environment Variable | Description |
|---|---|---|
| token_key | DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_KEY | The API token key for authenticating with BloodHound Enterprise. |
| token_id | DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_ID | The API token ID for authenticating with BloodHound Enterprise. |
| url | DESTINATION__BLOODHOUNDENTERPRISE__URL | The URL of the BloodHound Enterprise instance. |
| interval | DESTINATION__BLOODHOUNDENTERPRISE__INTERVAL | The interval at which OpenHound checks for available jobs. |
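The environment-variable names follow the convention of joining the upper-cased section path and key with double underscores. A small illustrative helper (not part of OpenHound, shown only to make the mapping explicit) for the four required parameters:

```python
def dlt_env_var(section: str, key: str) -> str:
    """Build the environment-variable name for a config entry by
    upper-casing the section path segments and the key, then joining
    them with double underscores."""
    return "__".join(part.upper() for part in [*section.split("."), key])

# The four required BloodHound Enterprise parameters from the table above:
for key in ("token_key", "token_id", "url", "interval"):
    print(dlt_env_var("destination.bloodhoundenterprise", key))
# DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_KEY
# DESTINATION__BLOODHOUNDENTERPRISE__TOKEN_ID
# DESTINATION__BLOODHOUNDENTERPRISE__URL
# DESTINATION__BLOODHOUNDENTERPRISE__INTERVAL
```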
Collector-specific configuration parameters
Each collector requires additional configuration parameters that are unique to its data source. These settings can be defined in the configuration file or supplied via environment variables. For details on the available parameters, refer to the collector-specific configuration documentation using the links below.
GitHub
View the required configuration parameters for the GitHub collector.
Jamf
View the required configuration parameters for the Jamf collector.
Okta
View the required configuration parameters for the Okta collector.
Full configuration example
The following example shows the full configuration for OpenHound with the required parameters for BloodHound Enterprise as well as example parameters for the Jamf collector. The configuration is split into two files: config.toml for the non-sensitive OpenHound/DLT configuration parameters and secrets.toml for the sensitive credentials and secrets. The configuration files can be provided through Compose secrets, Kubernetes Secret and ConfigMap objects, or supplied via environment variables.
config.toml
secrets.toml
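The full file contents are not reproduced in this extract. As an illustrative sketch covering only the parameters documented above (the split of keys between the two files and the interval format are assumptions, and all values are placeholders):

```toml
# config.toml - non-sensitive OpenHound/DLT parameters (sketch; values are placeholders)
[destination.bloodhoundenterprise]
url = "https://example.bloodhoundenterprise.io"  # your BloodHound Enterprise instance URL
interval = 60                                    # job-polling interval; units assumed

# Collector-specific sections (e.g., for Jamf) go here; see the collector docs.
```

```toml
# secrets.toml - sensitive credentials (sketch; values are placeholders)
[destination.bloodhoundenterprise]
token_id = "00000000-0000-0000-0000-000000000000"
token_key = "replace-with-your-api-token-key"
```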
Deployment examples
Kubernetes
An example Helm chart is available in the OpenHound GitHub repository for deploying OpenHound in a Kubernetes cluster. The chart can be used as a reference implementation for how OpenHound can be run on Kubernetes. The repository also includes the full README with instructions for deploying OpenHound and configuring the collector parameters and secrets required to connect to BloodHound Enterprise.
Prerequisites
- Kubernetes cluster v1.33+
- Helm v4.0+
- The OpenHound repository cloned locally or access to the Helm chart files in the repository
- A namespace created for OpenHound (e.g., openhound)
ConfigMap / Secret
The chart requires a Kubernetes ConfigMap and Secret to be pre-created and referenced in config.existingConfigMap and config.existingSecret. The ConfigMap should contain the OpenHound configuration file with the non-sensitive DLT configuration parameters, while the Secret should contain the secrets for authenticating with the target services and BloodHound Enterprise.
The contents of the ConfigMap and Secret are mounted as /app/.dlt/config.toml and /app/.dlt/secrets.toml. Create the necessary config.toml and secrets.toml files locally and deploy these resources using kubectl.
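For example, assuming the namespace openhound and illustrative resource names (the actual names must match whatever your values reference in config.existingConfigMap and config.existingSecret):

```shell
kubectl create configmap openhound-config --from-file=config.toml -n openhound
kubectl create secret generic openhound-secrets --from-file=secrets.toml -n openhound
```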
Optional: Additional secret files
Collectors may require extra secret files such as a GitHub App PEM or an Okta JSON key file. Use collector.extraSecretMounts to mount files from pre-existing Kubernetes Secret objects, specifying the name of the Secret, the key containing the data/file, and the target mount path. Similarly to the previously created secrets.toml, you can deploy any additional secrets using kubectl.
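As a sketch, a GitHub App private key could be stored in a Secret and then referenced from the chart values. The extraSecretMounts field names below are assumptions; check values.example.yaml in the repository for the authoritative schema:

```shell
kubectl create secret generic openhound-github-app --from-file=app.pem -n openhound
```

```yaml
# values.yaml fragment (sketch; field names and paths are illustrative)
collector:
  extraSecretMounts:
    - name: github-app-pem            # mount name
      secretName: openhound-github-app
      key: app.pem                    # key inside the Secret
      mountPath: /app/secrets/app.pem # target path in the container
```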
Installation
Create a values.yaml file with your collector configuration using values.example.yaml as a reference and install the chart using:
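With the ConfigMap, Secret, and values file in place, the install is typically a single command (the release name, chart path, and namespace here are illustrative):

```shell
helm install openhound ./charts/openhound -n openhound -f values.yaml
```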
Verification
Inspect the logs of the deployment to verify that the OpenHound collector is running correctly.
Compose
The OpenHound container image can also be deployed using Docker Compose or Podman Compose on a Linux server. This is an alternative option for testing or for environments where Kubernetes is not necessary.
Prerequisites
- A Linux server with Docker or Podman installed
- The compose plugin installed (e.g., docker compose or podman compose)
- A compose file such as docker-compose.bhe.yml
Installation
To get started, create a config.toml and secrets.toml file with the necessary configuration parameters on the host machine. Use the docker-compose.bhe.yml file as a reference for how to structure the Compose file and mount the configuration files and secrets.
Then run the following command to start the OpenHound container:
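Assuming the reference compose file from the prerequisites is in the current directory, a typical invocation is:

```shell
docker compose -f docker-compose.bhe.yml up -d
# or, with Podman:
podman compose -f docker-compose.bhe.yml up -d
```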
Verification
Inspect the logs of the container(s) to verify that the OpenHound collector is running correctly.
Server/container requirements
Hardware
| Component | Minimum Requirement | Recommended Requirement |
|---|---|---|
| CPU | 2 cores | 6 cores or more |
| Memory | 4 GB | 8 GB or more |
| Storage* | 50 GB | 80 GB or more |
The requirements vary depending on the size of the environment being collected/converted. Container storage persistence is not required; however, persistent storage for the logs is recommended for troubleshooting. The logs are stored in the ~/.local/share/openhound/logs directory and can be configured to rotate based on time or file size.