# External Deployers

When `workload_mode = "external"` (or when using k3s), Terraform provisions infrastructure only and outputs a structured `workload_handoff`. External deployers consume this handoff to deploy workloads independently.
## When to Use External Mode

Set `workload_mode = "external"` when:
- CI/CD pipelines deploy your containers (GitHub Actions, GitLab CI, Jenkins)
- GitOps tools manage workloads (ArgoCD, Flux)
- Different deploy cadences -- infrastructure changes monthly, application deploys daily
- Team separation -- infra team manages Terraform, app team manages Helm or ArgoCD
- Testing -- you want to iterate on workload configs without touching infrastructure
```hcl
module "evm_cloud" {
  source         = "github.com/evm-cloud/evm-cloud"
  workload_mode  = "external"
  compute_engine = "eks"
  # ... other variables
}
```

> **Note:** `k3s` always operates in external mode regardless of the `workload_mode` setting. See Two-Phase Deployment for why.
## The Workload Handoff v1 Contract

The `workload_handoff` output is a versioned JSON contract between Terraform (infrastructure) and deployers (workloads). The schema is stable within a major version.
```json
{
  "version": "v1",
  "mode": "external",
  "compute_engine": "k3s",
  "project_name": "my-indexer",
  "identity": {
    "iam_role_arn": "arn:aws:iam::123456789012:role/evm-cloud-my-indexer",
    "instance_profile": "evm-cloud-my-indexer-profile"
  },
  "network": {
    "vpc_id": "vpc-0abc123def456",
    "private_subnet_ids": ["subnet-0aaa", "subnet-0bbb"],
    "security_group_ids": ["sg-0ccc"]
  },
  "runtime": {
    "ec2": {
      "ssh_command": "ssh -i key.pem ubuntu@203.0.113.10",
      "instance_ip": "203.0.113.10",
      "config_path": "/opt/evm-cloud/config/"
    },
    "eks": {
      "cluster_name": "evm-cloud-prod",
      "cluster_endpoint": "https://ABCDEF.gr7.us-east-1.eks.amazonaws.com",
      "oidc_provider_arn": "arn:aws:iam::123456789012:oidc-provider/..."
    },
    "k3s": {
      "kubeconfig_base64": "YXBpVmVyc2lvbjogdjEK...",
      "cluster_endpoint": "https://203.0.113.10:6443",
      "host_ip": "203.0.113.10"
    }
  },
  "services": {
    "erpc": { "name": "erpc", "port": 4000, "internal_url": "http://erpc:4000" },
    "rindexer": { "name": "rindexer", "port": 3001, "internal_url": "http://rindexer:3001" }
  },
  "data": {
    "database_type": "postgresql",
    "host": "evm-cloud-db.abc123.us-east-1.rds.amazonaws.com",
    "port": 5432,
    "username": "rindexer",
    "password": "generated-password",
    "database": "rindexer"
  },
  "artifacts": {
    "config_channel": "helm",
    "cloudwatch_log_group": "/evm-cloud/my-indexer"
  }
}
```

Only the `runtime` block matching the active `compute_engine` is populated. The others are `null`.
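Because the handoff is versioned, a custom consumer can fail fast before touching any infrastructure. A minimal guard, assuming `jq` is available (the bundled deployer scripts may do their own validation):

```shell
# check_handoff: read the handoff JSON on stdin and refuse to proceed on an
# unexpected contract version or mode. Returns non-zero on mismatch.
check_handoff() {
  handoff=$(cat)
  version=$(printf '%s' "$handoff" | jq -r '.version')
  mode=$(printf '%s' "$handoff" | jq -r '.mode')
  if [ "$version" != "v1" ]; then
    echo "unsupported handoff version: $version" >&2
    return 1
  fi
  if [ "$mode" != "external" ]; then
    echo "handoff mode is $mode, expected external" >&2
    return 1
  fi
  echo "handoff OK: version=$version mode=$mode"
}
```

Usage: `terraform output -json workload_handoff | check_handoff` before invoking a deployer.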
## Handoff Consumption Pattern

### Pipe directly (recommended)

Avoids writing credentials to disk:

```shell
terraform output -json workload_handoff | ./deployers/k3s/deploy.sh /dev/stdin --config-dir ./config
```

### Save to file
When the deployer will run from a different machine or CI step:

```shell
terraform output -json workload_handoff > handoff.json
chmod 0600 handoff.json  # contains credentials!
```

Then on the deploy machine:

```shell
./deployers/k3s/deploy.sh handoff.json --config-dir ./config
```

> **Warning:** `handoff.json` contains database passwords and potentially a full kubeconfig. Treat it as a credential. Do not commit it to version control. Delete it after use. See Secrets Management for production handling.
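If a script must write the file, a bash `trap` guarantees cleanup even when the deploy fails part-way. A sketch (the final `echo` is a placeholder standing in for the `terraform output` and `deploy.sh` calls above):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Keep the handoff in a 0600 temp file that cannot outlive the script:
# the EXIT trap fires on success, failure, or Ctrl-C.
handoff_file=$(mktemp)
chmod 0600 "$handoff_file"
trap 'rm -f "$handoff_file"' EXIT

# In a real run:
#   terraform output -json workload_handoff > "$handoff_file"
#   ./deployers/k3s/deploy.sh "$handoff_file" --config-dir ./config
echo '{"version":"v1"}' > "$handoff_file"   # placeholder for terraform output
```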
### Extract individual fields

For custom pipelines, extract specific values with `jq`:

```shell
# Get kubeconfig for kubectl
terraform output -json workload_handoff | jq -r '.runtime.k3s.kubeconfig_base64' | base64 -d > /tmp/kubeconfig

# Get database connection parameters
DB_HOST=$(terraform output -json workload_handoff | jq -r '.data.host')
DB_PASS=$(terraform output -json workload_handoff | jq -r '.data.password')
DB_PORT=$(terraform output -json workload_handoff | jq -r '.data.port')
```

## Three Deployer Paths
evm-cloud ships with deployer scripts for each compute engine. All deployers live in the `deployers/` directory and use shared Helm charts from `deployers/charts/`.
### EC2 SSH Deployer (`deployers/ec2/`)

For `compute_engine = "ec2"` with `workload_mode = "external"`.
- Reads SSH connection details from the handoff (`runtime.ec2.ssh_command`)
- Copies config files to the instance at `/opt/evm-cloud/config/`
- Updates the `.env` file with current credentials
- Restarts Docker Compose services
```shell
terraform output -json workload_handoff | ./deployers/ec2/deploy.sh /dev/stdin --config-dir ./config
```

```shell
# Edit config locally, then redeploy
vim config/rindexer.yaml
terraform output -json workload_handoff | ./deployers/ec2/deploy.sh /dev/stdin --config-dir ./config
```

The deployer uses `scp` to copy files and `ssh` to restart services. No Helm or Kubernetes involved.
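The mechanics can be sketched as a dry run that prints the operations a deploy would perform rather than executing them; the host, key file, and remote paths are illustrative values taken from the sample handoff, not the script's actual internals:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative dry run of an EC2 deploy: copy configs with scp, then
# restart the compose stack over ssh. Values mirror the handoff fields.
host="ubuntu@203.0.113.10"             # from runtime.ec2.instance_ip
remote_config="/opt/evm-cloud/config"  # from runtime.ec2.config_path

echo "scp -i key.pem -r ./config/. $host:$remote_config/"
echo "ssh -i key.pem $host 'docker compose up -d'"
```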
### EKS Helm Deployer (`deployers/eks/`)

For `compute_engine = "eks"` with `workload_mode = "external"`.
- Configures kubectl via `aws eks update-kubeconfig` using the cluster name from the handoff
- Renders Helm values from handoff data (database connection, service configuration)
- Injects config files from `--config-dir`
- Runs `helm upgrade --install` for eRPC and rindexer using shared charts
```shell
terraform output -json workload_handoff | ./deployers/eks/deploy.sh /dev/stdin --config-dir ./config
```

The EKS deployer also includes ArgoCD Application manifests at `deployers/eks/argocd/`. Point ArgoCD at your config repository and it will sync workloads automatically:

```shell
# Apply ArgoCD Application resources
kubectl apply -f deployers/eks/argocd/
```

### k3s Helm Deployer (`deployers/k3s/`)
For `compute_engine = "k3s"` (always external mode).

- Extracts kubeconfig from `runtime.k3s.kubeconfig_base64` and decodes it
- Renders Helm values from handoff data
- Injects real configs from `--config-dir`
- Runs `helm upgrade --install` for eRPC and rindexer
```shell
terraform output -json workload_handoff | ./deployers/k3s/deploy.sh /dev/stdin --config-dir ./config
```

```shell
terraform output -json workload_handoff | ./deployers/k3s/teardown.sh /dev/stdin
```

See Two-Phase Deployment for the full k3s workflow.
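After a deploy, the cluster can be inspected directly by pulling the kubeconfig out of the handoff. A small helper, assuming `jq` and `base64` are available:

```shell
# extract_kubeconfig: read the handoff JSON on stdin and write the decoded
# k3s kubeconfig to the given path with private permissions.
extract_kubeconfig() {
  jq -r '.runtime.k3s.kubeconfig_base64' | base64 -d > "$1"
  chmod 0600 "$1"
}

# Usage:
#   terraform output -json workload_handoff | extract_kubeconfig ./kubeconfig
#   KUBECONFIG=./kubeconfig kubectl get pods
#   KUBECONFIG=./kubeconfig helm list
```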
## Shared Helm Charts

All Kubernetes deployers (k3s and EKS) use the same Helm charts at `deployers/charts/`:

| Chart | Purpose |
|---|---|
| `deployers/charts/rpc-proxy/` | eRPC proxy deployment, service, and ConfigMap |
| `deployers/charts/indexer/` | rindexer deployment, service, ConfigMap, and ABI volume |

The charts are designed to be generic. Deployer scripts render values from the handoff and config files, then pass them to `helm upgrade --install`.
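That value-rendering step can be sketched with `jq`. The `db.*` keys below are hypothetical placeholders, not the charts' actual values schema:

```shell
# render_db_values: read the handoff JSON on stdin and emit a Helm values
# fragment for the database connection. Key names are illustrative only.
render_db_values() {
  jq -r '["db:",
          "  host: \(.data.host)",
          "  port: \(.data.port)",
          "  name: \(.data.database)",
          "  user: \(.data.username)"] | join("\n")'
}

# Usage:
#   terraform output -json workload_handoff | render_db_values > db-values.yaml
#   helm upgrade --install rindexer deployers/charts/indexer -f db-values.yaml
```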
## CI/CD Examples

### GitHub Actions
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy
          aws-region: us-east-1
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
      - name: Get handoff
        run: |
          cd examples/minimal_aws_k3s_byo_clickhouse
          terraform init
          terraform output -json workload_handoff > /tmp/handoff.json
          chmod 0600 /tmp/handoff.json
      - name: Deploy workloads
        run: |
          ./deployers/k3s/deploy.sh /tmp/handoff.json --config-dir ./config
```

### GitOps (ArgoCD)
For teams using ArgoCD, the infra pipeline outputs the handoff and the app pipeline reads it:

1. Terraform provisions infrastructure and writes the handoff to an S3 bucket (encrypted)
2. ArgoCD Application resources reference the shared Helm charts
3. ArgoCD syncs workloads using values derived from the handoff
## Related Pages
- Core Concepts -- Workload Modes -- terraform vs external mode
- Two-Phase Deployment -- Full k3s walkthrough
- Secrets Management -- Securing the handoff
- Updating Configuration -- Post-deploy config changes per engine
- Architecture -- Module map and dependency graph