Securing HashiCorp Vault Access with Internal NLB and VPN

7 October 2025

Kilian Niemegeerts

HashiCorp Vault contains every secret in your infrastructure. Database passwords, API keys, certificates – everything. Exposing it to the internet, even with authentication, creates an attack surface we couldn't accept. But here's the thing: engineers need access. They need to configure policies, debug issues, and manage secrets. The challenge is to provide secure access without any public endpoints. We built a solution that keeps Vault completely private while staying accessible to authorized users. It isn't rocket science – just three practical layers of defense that actually work.

Series overview:

  1. Production Kubernetes Architecture with HashiCorp Vault 
  2. Terraform Infrastructure for HashiCorp Vault on EKS 
  3. External Secrets Operator: GitOps for Kubernetes Secrets 
  4. Dynamic PostgreSQL Credentials with HashiCorp Vault 
  5. Vault Agent vs Secrets Operator vs CSI Provider 
  6. Securing Vault Access with Internal NLB and VPN

Layer 1: Internal Network Load Balancer

The first layer ensures Vault never gets a public IP address. We expose Vault through an AWS Network Load Balancer configured as internal-only:

 

annotations:
  # "external" hands NLB provisioning to the AWS Load Balancer Controller instead of the legacy in-tree controller
  service.beta.kubernetes.io/aws-load-balancer-type: "external"
  # "internal" places the NLB on private subnets – it never gets a public IP
  service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
  # register the Vault pod IPs directly as targets instead of routing through NodePorts
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
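
For context, here is a minimal sketch of the complete Service these annotations sit on, assuming the AWS Load Balancer Controller is installed and Vault serves TLS on 8200. The name, namespace, and selector labels are illustrative – in practice you would likely set the annotations through your Vault Helm chart values rather than a hand-written manifest.

apiVersion: v1
kind: Service
metadata:
  name: vault-internal              # illustrative name
  namespace: vault                  # illustrative namespace
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vault   # adjust to the labels on your Vault pods
    component: server
  ports:
    - name: https
      port: 8200
      targetPort: 8200
      protocol: TCP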

Why NLB Instead of ALB?

Vault's API listens on TCP port 8200 and, in our setup, terminates its own TLS. An Application Load Balancer (ALB) works at Layer 7 (HTTP/HTTPS), so it would have to terminate TLS itself and bring listener rules and configuration we don't need. A Network Load Balancer (NLB) operates at Layer 4 (TCP/UDP) and passes the encrypted traffic straight through to the Vault pods – a much better fit.

Health Check Configuration

The NLB has one listener on port 8200, linked to a target group containing the IPs of the Vault pods. For health checks, we use Vault’s /v1/sys/health endpoint.

Important detail: In high-availability setups, standby nodes return HTTP 429 (Too Many Requests) on the health endpoint. We configure the target group to consider 429 as healthy – otherwise, only the active node would be in the target group, defeating the purpose of HA.
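
To make that concrete, the health check can be expressed on the same Service – a sketch assuming a recent AWS Load Balancer Controller (which drives target group health checks through the annotations below) and TLS enabled on Vault's listener; verify the exact annotation names against your controller version.

  # /v1/sys/health returns 200 on the active node, 429 on standbys,
  # 501 when uninitialized and 503 when sealed – so "200,429" keeps every
  # unsealed node in the target group while sealed or broken nodes drop out.
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "HTTPS"
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8200"
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/v1/sys/health"
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: "200,429"

Vault's health endpoint also accepts a standbyok=true query parameter that makes standbys answer 200; whether your load balancer lets you put a query string in the health-check path varies, which is why matching on 429 is the simpler route.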

Layer 2: Private DNS with Route 53

AWS generates DNS names for load balancers like: vault-nlb-internal-1234567890.elb.eu-west-1.amazonaws.com

Not exactly user-friendly. And if the load balancer ever gets recreated, that DNS name changes.

We solved this with Route 53 private hosted zones:

  1. Created a private hosted zone: tooling.internal
  2. Added a CNAME record: vault.tooling.internal → NLB DNS name
  3. Associated the zone with both our tooling and application cluster VPCs via VPC peering

Now everyone uses https://vault.tooling.internal:8200. Clean, consistent, and it works across Kubernetes clusters.
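
For example, the External Secrets Operator from part 3 only ever sees this private name. A minimal SecretStore pointing at it might look like the sketch below – the namespace, mount paths, role, and service account are placeholders, and the apiVersion depends on your ESO release:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: my-app                 # placeholder namespace
spec:
  provider:
    vault:
      server: "https://vault.tooling.internal:8200"   # the Route 53 name, not the raw NLB DNS
      path: "secret"                # placeholder KV mount
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"   # placeholder auth mount
          role: "my-app"            # placeholder Vault role
          serviceAccountRef:
            name: "my-app"          # placeholder service account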

Layer 3: VPN Access for Engineers

With Vault completely private, how do engineers access it from their workstations? AWS Client VPN provides the secure tunnel we need.

Certificate-Based Authentication

We chose certificate authentication over Active Directory for simplicity:

  • No dependency on AD infrastructure
  • Easy to revoke individual certificates
  • Scriptable certificate generation

The setup uses a simple PKI hierarchy:

  • Root CA for the VPN endpoint
  • Server certificate for the VPN service
  • Individual client certificates for each engineer
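
If you manage this with infrastructure as code (the series uses Terraform for its infrastructure), a certificate-authenticated Client VPN endpoint – sketched here as CloudFormation-style YAML purely for illustration – boils down to roughly the following; every ARN, CIDR, and ID is a placeholder:

Resources:
  VaultClientVpn:
    Type: AWS::EC2::ClientVpnEndpoint
    Properties:
      Description: "Engineer access to the tooling VPC"
      ClientCidrBlock: "10.100.0.0/22"        # placeholder, must not overlap the VPC CIDR
      ServerCertificateArn: "arn:aws:acm:eu-west-1:111111111111:certificate/SERVER-CERT-ID"
      AuthenticationOptions:
        - Type: certificate-authentication
          MutualAuthentication:
            ClientRootCertificateChainArn: "arn:aws:acm:eu-west-1:111111111111:certificate/CLIENT-CA-ID"
      ConnectionLogOptions:
        Enabled: false
      SplitTunnel: true                       # only send private-network traffic through the tunnel
      VpcId: "vpc-0123456789abcdef0"          # placeholder tooling VPC
      SecurityGroupIds:
        - "sg-0123456789abcdef0"              # placeholder
      DnsServers:
        - "10.0.0.2"                          # placeholder VPC resolver so vault.tooling.internal resolves over the VPN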

Client Certificate Generation

We created a generate-client.sh script that:

  1. Generates a unique client certificate
  2. Signs it with our CA
  3. Packages everything into a .ovpn configuration file

Engineers run the script, get their personal VPN config, and import it into their OpenVPN client. Connection established, Vault accessible.

Vault vs AWS Native Solutions

Many teams ask about AWS Secrets Manager or AWS KMS for Kubernetes. We chose Vault because:

  • Native Kubernetes integration via service accounts
  • Dynamic secret generation
  • Better suited for multi-cluster setups
  • Works across cloud providers
  • More granular access policies

The Complete Security Picture

This three-layer approach means:

  1. No public IP exists – You can’t attack what you can’t reach
  2. Clean internal DNS – No hardcoded IPs in configurations
  3. Controlled VPN access – Certificate-based entry

Applications in the Kubernetes cluster access Vault directly through the internal load balancer. Engineers access it through VPN. Nobody accesses it from the public internet.
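
As one illustration, a workload using the Vault Agent injector pattern from part 5 needs nothing more than pod annotations like these – the role and secret path are placeholders, and the address override is only needed if your injector's default differs from the internal name:

  # pod template metadata
  metadata:
    annotations:
      vault.hashicorp.com/agent-inject: "true"
      vault.hashicorp.com/service: "https://vault.tooling.internal:8200"
      vault.hashicorp.com/role: "my-app"                                        # placeholder Vault role
      vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/my-app" # placeholder secret path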

Get the code: Our Vault configuration on GitHub

That’s a wrap! You’ve now seen our complete HashiCorp Vault production architecture – from multi-cluster setup to dynamic credentials to zero-trust access. Ready to implement this in your environment?

 

FAQ

Q: How does this Vault integration work with Kubernetes? A: The internal NLB allows pods to access Vault using Kubernetes service accounts for authentication, while engineers use the VPN for external access.

Q: Can I use the External Secrets Operator with this setup? A: Yes, External Secrets Operator works perfectly with this architecture. See part 3 of our series for the detailed implementation.

Q: Why not use AWS Secrets Manager for Kubernetes secrets? A: Vault offers superior Kubernetes integration, dynamic credentials, and works across cloud providers. It also provides better audit logs and more flexible policies.

Q: Why use a Network Load Balancer instead of an Application Load Balancer for Vault? A: Vault serves its API over TLS on port 8200, and we pass that traffic through untouched. NLBs operate at Layer 4 (TCP/UDP), making them a natural fit for TLS passthrough. ALBs work at Layer 7 (HTTP/HTTPS) and would add unnecessary termination and configuration overhead.

Q: What happens if the internal NLB gets recreated? A: That’s why we use Route 53 private DNS. The CNAME record vault.tooling.internal always points to the current NLB DNS name. If the NLB changes, we only update one DNS record – all configurations keep working.

Q: Is VPN the only way to access Vault externally? A: For our zero-trust approach, yes. Alternatives like bastion hosts or transit gateways exist, but VPN with certificate-based authentication provides the best balance of security and manageability.

Q: Why mark HTTP 429 as healthy in the health checks? A: In Vault’s high-availability setup, standby nodes return 429 (Too Many Requests) on health checks. They’re actually healthy and ready to take over if needed. Only accepting 200 would remove standby nodes from the pool.

Q: Can this setup work with multiple AWS accounts? A: Yes! You can share the Route 53 private hosted zone across accounts and VPCs. The VPN can also be configured to access resources in multiple accounts through proper networking setup.

FAQ

Q: What is the default TTL for database credentials in the setup?

A: We use a default TTL of 1 hour (default_ttl="1h") with a maximum TTL of 24 hours (max_ttl="24h"). This balances security with performance in production environments.

Q: Which PostgreSQL statement does Vault use to create temporary users?

A: Vault executes CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' to create temporary database users with an automatic expiration time.

Q: How does Vault connect to the PostgreSQL database?

A: Through a connection URL in the format postgresql://{{username}}:{{password}}@db.example.com:5432/mydb using a dedicated vaultadmin user with appropriate privileges.

Q: What is a lease in the context of Vault database credentials?

A: A lease is Vault’s mechanism for tracking temporary credentials. When an application requests database credentials, it receives them with a lease that determines how long they’re valid and when they’ll be automatically revoked.

Q: Does the database secrets engine store passwords permanently?

A: No, Vault never stores passwords permanently. Credentials are generated on-demand when requested and automatically expire after their TTL period.

Q: Why use 1-hour TTL instead of shorter or longer periods?

A: The 1-hour TTL is our sweet spot in production – short enough to limit exposure if credentials leak, but long enough to avoid performance issues from constant user creation/deletion operations.
