
Architecture

How evm-cloud's Terraform modules fit together to deploy a complete EVM indexing stack.

High-Level Overview

CLI
- evm-cloud CLI: init · deploy · status · logs · scale

Config
- Chain Presets: ABIs · RPC endpoints · BYO Node
- Sizing Profiles
- Secrets
- YAML Config: rindexer.yaml · erpc.yaml

Platform
- Providers: AWS, GCP, Bare Metal
- Compute: EC2 + Docker, EKS, k3s, Docker Compose (provider and compute engine are selected per deployment)
- Networking: VPC, subnets, security groups, bastion host, DNS (provisioned per provider; isolated network per deployment)

Data Pipeline
- EVM Node (Reth / Erigon) → eRPC Proxy (failover · caching) → rindexer (no-code or Rust) → Database (ClickHouse · PostgreSQL)
- Streaming: Kafka, SNS/SQS, CDC, Webhooks (fan-out from rindexer indexed events)

Ops
- Observability: Grafana, Prometheus, Alerting (monitors all pipeline services)
- Ingress: TLS via Caddy / ALB, routing for GraphQL · Admin (routes external traffic)

Data Flow at Runtime

Upstream RPCs (Infura · Alchemy · LlamaRPC · public endpoints · BYO node)
        │  JSON-RPC
        ↓
eRPC Proxy
  - Upstream health checks
  - Automatic failover
  - Response caching
  - Rate-limit management
  - Hedged requests
        │  Polling + WebSockets
        ↓
rindexer
  - Block polling
  - Event log filtering
  - ABI decoding
  - Reorg handling
  - Batch writes
        │
        ├─ SQL inserts → Database: PostgreSQL (RDS managed) or ClickHouse (BYODB) — events, transactions, logs
        └─ Events      → Streaming: Kafka, SNS/SQS, Webhooks, CDC
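Once the pipeline is up, clients talk to the eRPC proxy instead of an upstream directly. A hedged sketch of a raw JSON-RPC call through the proxy; the hostname, port, and `/main/evm/1` project path are assumptions specific to your erpc.yaml, not fixed values:

```shell
# Query the latest block number through the eRPC proxy rather than an
# upstream RPC directly. Host, port, and the /main/evm/1 project path
# are assumptions -- adjust to match your erpc.yaml.
curl -s -X POST http://erpc.internal:4000/main/evm/1 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}'
```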

Layer Model

evm-cloud separates concerns into two layers. The `workload_mode` variable controls how much of the stack Terraform manages.

Layer 1: Infrastructure

Everything needed before workloads can run.

| Component | What Terraform Creates |
|---|---|
| Networking | VPC, public/private subnets, security groups, NAT gateway, VPC endpoints |
| Compute | EC2 instances, EKS cluster + node groups, or k3s host |
| Database | RDS PostgreSQL instance (or outputs for BYODB) |
| IAM | Instance profiles, roles, policies for Secrets Manager access |
| Secrets | AWS Secrets Manager entries for DB credentials, RPC keys |
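The infrastructure layer is driven entirely by root-module variables. A minimal tfvars sketch for this layer; the variable names follow the docs, but the exact accepted values are assumptions and may differ per release:

```hcl
# terraform.tfvars -- illustrative values only
infrastructure_provider = "aws"        # selects the provider adapter
compute_engine          = "ec2"        # EC2 + Docker Compose
database_engine         = "postgres"   # RDS PostgreSQL (or BYODB)
instance_type           = "medium"     # generic size, mapped per provider
workload_mode           = "terraform"  # Terraform deploys Layer 2 as well
```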

Layer 2: Workloads

The actual containers and configuration that run on top of infrastructure.

| Component | What Gets Deployed |
|---|---|
| eRPC | RPC proxy container with your erpc.yaml config |
| rindexer | Indexer container with your rindexer.yaml + ABI files |
| Config injection | YAML/ABI files delivered to containers via bind mounts, ConfigMaps, or Secrets Manager |
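On the Docker Compose engines, config injection amounts to bind-mounting the rendered files into the containers. A sketch of that shape; the service names, image references, and host paths here are assumptions for illustration, not the module's actual output:

```yaml
# docker-compose.yml fragment -- illustrative paths and image refs only
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest          # assumed image reference
    volumes:
      - ./config/erpc.yaml:/root/erpc.yaml:ro
  rindexer:
    image: ghcr.io/joshstevens19/rindexer:latest  # assumed image reference
    volumes:
      - ./config/rindexer.yaml:/app/rindexer.yaml:ro
      - ./abis:/app/abis:ro
```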

Workload Modes

```hcl
variable "workload_mode" {
  type    = string
  default = "terraform"  # or "external"
}
```
| Mode | Behavior | Use Case |
|---|---|---|
| `terraform` | Terraform manages both Layer 1 and Layer 2. Containers are deployed via cloud-init (EC2), Helm (k3s/EKS), or SSH provisioner (bare metal). | Simple deployments, dev/staging, teams that want one `terraform apply` for everything. |
| `external` | Terraform manages Layer 1 only. Outputs a `workload_handoff` object with connection details, endpoints, and credentials for external tools to deploy workloads. | Teams with existing CI/CD, GitOps (ArgoCD/Flux), or Helm-based deploy pipelines. Separates infra and app lifecycles. |

`workload_handoff` output (available when `workload_mode = "external"`):

```hcl
# Schematic -- attribute values are elided; only the object shape is shown.
output "workload_handoff" {
  value = {
    runtime = {
      ec2 = { public_ip, private_ip, ssh_user, ssh_key_name }
      eks = { cluster_name, cluster_endpoint, kubeconfig_cmd }
      k3s = { kubeconfig, node_ip }
    }
    database = {
      host, port, username, password_secret_arn, database_name
    }
    networking = {
      vpc_id, private_subnet_ids, security_group_ids
    }
    config = {
      erpc_config_path, rindexer_config_path, abi_dir_path
    }
  }
}
```
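In external mode, downstream tooling typically reads the handoff from Terraform's machine-readable output. A sketch; the jq paths mirror the object shape above, and the secret lookup assumes the caller has AWS CLI credentials:

```shell
# Pull connection details out of the handoff object for an external deployer.
HANDOFF=$(terraform output -json workload_handoff)
DB_HOST=$(echo "$HANDOFF" | jq -r '.database.host')
DB_SECRET_ARN=$(echo "$HANDOFF" | jq -r '.database.password_secret_arn')

# Resolve the DB password from Secrets Manager (assumes AWS credentials).
DB_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id "$DB_SECRET_ARN" --query SecretString --output text)
```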

Module Map

| Module | Path | Purpose |
|---|---|---|
| core/capabilities | modules/core/capabilities/ | Provider-neutral capability contracts. Defines the interface every provider adapter must implement. |
| core/k8s/k3s-bootstrap | modules/core/k8s/k3s-bootstrap/ | Installs k3s on any host via SSH. Used by both aws/k3s-host and bare_metal providers. |
| aws/networking | modules/providers/aws/networking/ | VPC, subnets, security groups, NAT gateway, VPC endpoints. |
| aws/ec2 | modules/providers/aws/ec2/ | EC2 instance with Docker Compose workloads. Cloud-init for setup, bind mounts for config. |
| aws/eks_cluster | modules/providers/aws/eks/ | EKS cluster, managed node groups, OIDC provider, CoreDNS/kube-proxy addons. |
| aws/k3s-host | modules/providers/aws/k3s-host/ | EC2 instance bootstrapped with k3s via the core/k8s/k3s-bootstrap module. |
| aws/postgres | modules/providers/aws/postgres/ | RDS PostgreSQL with automated backups, security group rules, and Secrets Manager integration. |
| bare_metal | modules/providers/bare_metal/ | SSH + Docker Compose on any VPS. No cloud provider required. |

Module Dependency Graph

root module (main.tf)
  │
  ├── modules/core/capabilities/          # Capability contracts
  │
  ├── modules/providers/aws/              # AWS adapter
  │   ├── networking/                     # VPC, subnets, SGs
  │   ├── ec2/                            # EC2 + Docker Compose
  │   ├── eks/                            # EKS cluster
  │   ├── k3s-host/                       # EC2 + k3s
  │   │   └── modules/core/k8s/k3s-bootstrap/
  │   └── postgres/                       # RDS PostgreSQL
  │
  └── modules/providers/bare_metal/       # Bare metal adapter
      └── modules/core/k8s/k3s-bootstrap/ (optional)

Compute Engine Comparison

| | EC2 + Docker Compose | EKS | k3s | Bare Metal |
|---|---|---|---|---|
| Provider | AWS | AWS | AWS or bare_metal | bare_metal |
| Kubernetes | No | Yes (managed) | Yes (lightweight) | No |
| Control plane cost | $0 | ~$73/mo | $0 | $0 |
| Workload management | Docker Compose | K8s deployments | K8s deployments | Docker Compose |
| Config delivery | cloud-init + bind mounts | ConfigMap/Secret | Helm values → ConfigMap | SSH + file provisioner |
| Scaling | Vertical only | HPA, node autoscaler | Manual / limited HPA | Vertical only |
| Best for | Dev/staging, simple setups | Production K8s teams | Cost-conscious K8s, single-node prod | Self-hosted, no cloud vendor |
Decision guide:
  • Just want it to work -- EC2 + Docker Compose. Simplest path, lowest cost.
  • Need Kubernetes but not the EKS bill -- k3s. Same K8s API, $0 control plane.
  • Production K8s with managed upgrades -- EKS. AWS handles the control plane.
  • No AWS account / own hardware -- Bare Metal. SSH to any box with Docker installed.

Data Flow

See Data Flow at Runtime above for the full pipeline diagram.

Provider Abstraction

The root module is provider-neutral. The infrastructure_provider variable routes to the correct adapter.

variables.tf (root)
  │
  │  infrastructure_provider = "aws"        ──→  modules/providers/aws/
  │  infrastructure_provider = "bare_metal" ──→  modules/providers/bare_metal/
  │
  └── Every adapter implements the same output contract:
        - networking: { vpc_id, subnet_ids, security_group_ids }
        - compute:    { host_ip, ssh_user, connection_type }
        - database:   { host, port, credentials_arn }
        - workload:   { deploy_method, config_paths }
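Inside an adapter, the contract is just a set of conventionally named outputs. A hedged sketch of how the AWS adapter might satisfy the networking portion of the contract; the resource names (`aws_vpc.main`, etc.) are assumptions for illustration:

```hcl
# modules/providers/aws/networking/outputs.tf -- illustrative shape only
output "networking" {
  value = {
    vpc_id             = aws_vpc.main.id              # assumed resource name
    subnet_ids         = aws_subnet.private[*].id     # assumed resource name
    security_group_ids = [aws_security_group.app.id]  # assumed resource name
  }
}
```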

How It Works

  1. Root module declares provider-neutral variables (instance_type, database_engine, compute_engine, etc.)
  2. infrastructure_provider selects which adapter directory to use
  3. Adapter module translates generic variables into provider-specific resources (e.g., instance_type = "medium" becomes t3.medium on AWS)
  4. Adapter outputs conform to the capability contract defined in modules/core/capabilities/
  5. Workload deployment reads adapter outputs to place containers on the provisioned infrastructure
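The translation in step 3 is typically a lookup table inside the adapter. A minimal sketch, assuming generic sizes small/medium/large; the actual mapping lives in the adapter module and may differ:

```hcl
# Hypothetical generic-size -> AWS instance type mapping
locals {
  size_map = {
    small  = "t3.small"
    medium = "t3.medium"  # instance_type = "medium" becomes t3.medium on AWS
    large  = "t3.large"
  }
  aws_instance_type = local.size_map[var.instance_type]
}
```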

Adding a New Provider

Adding support for a new cloud provider (GCP, Azure, Hetzner, etc.) requires:

  1. Create modules/providers/<provider>/ with networking, compute, and database sub-modules
  2. Implement the output contract from modules/core/capabilities/
  3. Add the provider option to infrastructure_provider validation in variables.tf
  4. No changes to root module logic, workload deployers, or existing providers
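Step 3 above is a one-line change to the variable's validation block. A sketch, assuming a new `gcp` adapter is being added; the existing allowed values follow the providers documented above:

```hcl
variable "infrastructure_provider" {
  type    = string
  default = "aws"

  validation {
    condition     = contains(["aws", "bare_metal", "gcp"], var.infrastructure_provider)
    error_message = "infrastructure_provider must be one of: aws, bare_metal, gcp."
  }
}
```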

Related Pages