I Built A Digital Safe With Multiple Keys After A Few Too Many Bike Concussions
Introduction
A sudden crash on a mountain bike trail left me staring at the sky with a familiar dread - not about broken bones, but about broken access. As a DevOps engineer managing dozens of self-hosted services, I realized my entire digital existence relied on one fragile biological container: my brain. After three concussions, I needed a better solution than just sharing my 1Password vault with my partner.
This isn’t just about personal disaster recovery. Enterprise environments face similar challenges with secret management and access redundancy. How do you ensure continuity when:
- Key personnel become unavailable
- Credentials get lost or compromised
- Systems need to survive organizational changes
In this guide, I’ll walk through building ReMemory - a self-hosted, distributed secret management system using proven cryptographic principles. You’ll learn how to implement:
- Shamir’s Secret Sharing for distributed access control
- Hardened Vault Operations with automatic failover
- Multi-jurisdiction secret distribution
- Dead man’s switch functionality
This isn’t theoretical - we’ll use battle-tested open-source tools like HashiCorp Vault, ssss, and GPG wrapped in automation workflows. The solution complies with NIST SP 800-57 key management recommendations while remaining practical for homelab environments.
Understanding Distributed Secret Management
The Cryptographic Foundation
At ReMemory’s core lies Shamir’s Secret Sharing (SSS), an algorithm created by Adi Shamir (of RSA fame) in 1979. This cryptographic scheme:
- Splits a secret S into n parts
- Requires k parts to reconstruct (k ≤ n)
- Provides information-theoretic security - possessing k-1 shares reveals zero information about S
Unlike password managers that create single points of failure, SSS gives us:
| Approach | Availability | Security | Recovery Complexity |
|---|---|---|---|
| Single Password | Low | Medium | Impossible |
| Full Replication | High | Low | Easy |
| SSS (5-of-8) | High | High | Moderate |
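Before wiring up tooling, it helps to see the scheme itself. Below is a toy Python sketch of SSS over a prime field (illustrative only — real deployments should use ssss or another vetted implementation; the field choice and function names here are my own):

```python
# Toy Shamir's Secret Sharing: split/combine over a prime field.
# Illustration only -- not constant-time, not audited.
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than demo secrets


def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def eval_poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, eval_poly(x)) for x in range(1, n + 1)]


def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    demo = split(42, k=5, n=8)
    print(combine(demo[:5]))  # prints 42
```

Any 5 of the 8 shares recover the secret exactly; with only 4 shares, interpolation yields an essentially uniform field element — the information-theoretic guarantee in the table above.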
Why Not Just Use a Password Manager?
Traditional password managers excel at convenience but fail at:
- Recovery Autonomy: Most require human intervention for emergency access
- Geographic Redundancy: Cloud providers have regional failure modes
- Legal Survivability: Heirs may face legal hurdles accessing accounts
The Architecture
ReMemory combines multiple technologies:
```
                +----------------+
                |   Secret (S)   |
                +-------+--------+
                        |
        +---------------v----------------+
        | Shamir's Secret Sharing (SSS)  |
        | Split into n shares (k needed) |
        +---------------+----------------+
                        |
        +---------------+---------------+
        |               |               |
+-------v------+ +------v------+ +------v------+
|  USB Drives  | |    Paper    | |    Cloud    |
| (Geo-locked) | |   (Safe)    | | (Encrypted) |
+--------------+ +-------------+ +-------------+
```
Each share storage location has independent:
- Access Controls: Biometric, physical, cryptographic
- Geographic Separation: Different jurisdictions
- Medium Types: Digital, analog, cloud-based
Real-World Applications
Beyond personal use, this pattern solves enterprise challenges:
- Board-Level Access: Requiring 3-of-5 C-suite members to access root certificates
- Disaster Recovery: Storing API keys across multiple cloud regions
- Regulatory Compliance: Meeting FINRA 4370 backup access requirements
Prerequisites
Hardware Requirements
- Minimum: Raspberry Pi 4 (4GB RAM) or equivalent
- Recommended: x86_64 server with TPM 2.0 chip
- Storage: 20GB SSD (LUKS-encrypted)
Software Dependencies
```bash
# Core components (vault comes from HashiCorp's apt repository, not Debian's)
sudo apt install ssss gnupg2 vault jq openssl

# Version requirements
ssss-split -h 2>&1 | head -n 1            # ssss >= 0.5
gpg --version | grep 'gpg (GnuPG) 2.2.'   # >= 2.2.27
vault --version | grep 'Vault v1.12.'     # >= 1.12.0
```
Security Considerations
Before installation:
- Air-Gapped Environment: Generate master secrets on a disconnected machine
- GPG Key Setup: Create a dedicated 4096-bit RSA keypair
- Network Isolation: Place vault servers in separate VLANs
- Physical Security: Plan storage locations for secret shares
Installation & Setup
Step 1: Secret Generation
On an air-gapped machine:
```bash
# Generate a 256-bit random secret (hex-encoded)
SECRET=$(openssl rand -hex 32)

# Sanity-check the RNG on a large sample; 32 bytes is far too small for a
# meaningful entropy estimate, and hex text caps out at 4 bits/byte anyway
openssl rand 1000000 | ent | grep "Entropy"   # should report close to 8 bits/byte
```
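A side note on that entropy check: `ent` (and any Shannon-entropy measure) must see the raw bytes, because hex encoding caps entropy at 4 bits per byte no matter how random the secret is. A quick Python illustration (the 100 kB sample size is arbitrary):

```python
# Shannon entropy of raw random bytes vs. their hex encoding.
import math
import os
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte of `data`."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


raw = os.urandom(100_000)       # raw random bytes, like `openssl rand`
hexed = raw.hex().encode()      # hex text, like `openssl rand -hex`

print(f"raw bytes: {shannon_entropy(raw):.2f} bits/byte")   # close to 8.0
print(f"hex text:  {shannon_entropy(hexed):.2f} bits/byte") # close to 4.0
```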
Step 2: Share Distribution
Split the secret using ssss-split:
```bash
# Require 5 of the 8 generated shares; -Q suppresses the interactive prompts
echo "$SECRET" | ssss-split -t 5 -n 8 -Q
```

Sample output (one share per line; digits truncated here for readability):

```
1-797842b76a70c4e5...
2-28caa137b3a1e34d...
3-c3037c3d5ecc3b2a...
4-9e8f5d417a3d8b0e...
5-4a2b1c8d9e0f1a2b...
6-...
```
Step 3: Vault Initialization
Deploy HashiCorp Vault in Docker:
```bash
# Create secure volume
docker volume create vault_data

# Start container (TLS disabled here for initial setup only -- enable it
# before storing real secrets)
docker run -d --name=vault \
  -p 8200:8200 \
  --cap-add=IPC_LOCK \
  -v vault_data:/vault/data \
  -e 'VAULT_LOCAL_CONFIG={"storage": {"file": {"path": "/vault/data"}}, "listener": [{"tcp": {"address": "0.0.0.0:8200", "tls_disable": true}}], "ui": true}' \
  vault:1.12.0 server
```
Verify status:
```bash
docker exec vault vault status -format=json | jq .initialized
# Returns false for a fresh, uninitialized instance
```
Step 4: Sealed Storage
Initialize Vault, generating unseal key shares that mirror our SSS scheme:
```bash
# Initialize with 8 key shares, 5 required to unseal
docker exec -e VAULT_ADDR=http://127.0.0.1:8200 vault \
  vault operator init \
    -key-shares=8 \
    -key-threshold=5 \
    -format=json > vault-init.json

# Extract the root token (vault-init.json is highly sensitive -- protect it
# until the shares below are encrypted and distributed)
ROOT_TOKEN=$(jq -r '.root_token' vault-init.json)
```
Step 5: Share Protection
Encrypt each share separately:
```bash
# jq arrays are 0-indexed; the unseal keys live in .unseal_keys_b64
for i in {0..7}; do
  SHARE=$(jq -r ".unseal_keys_b64[$i]" vault-init.json)
  echo "$SHARE" | gpg --encrypt --recipient your@email.com > "share_$((i + 1)).gpg"
done
```
Step 6: Distributed Storage
Store shares across mediums:
| Share | Location | Access Method | Jurisdiction |
|---|---|---|---|
| 1 | Bank Safe Deposit | Physical | Country A |
| 2 | Partner’s Phone | VeraCrypt Container | Country B |
| 3 | AWS S3 | SSE-KMS Encrypted | Region C |
| 4 | Paper Copy | UV-Reactive Ink | Home |
| 5-8 | Trusted Contacts | Encrypted Email | Multiple |
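It is worth sanity-checking a placement plan against single-location loss: with 5-of-8, no single loss domain may hold more than 3 shares. A small Python sketch (location names and counts are hypothetical, mirroring the table above):

```python
# Check that losing any single storage location still leaves enough shares
# to meet the reconstruction threshold.
placement = {            # hypothetical loss domains and their share counts
    "bank_safe_deposit": 1,
    "partner_phone": 1,
    "aws_s3": 1,
    "paper_home": 1,
    "trusted_contacts": 4,   # shares 5-8, modeled as one domain
}
THRESHOLD = 5
TOTAL = sum(placement.values())


def survives_loss_of(location: str) -> bool:
    """True if the threshold is still met after losing `location` entirely."""
    return TOTAL - placement[location] >= THRESHOLD


for loc in placement:
    left = TOTAL - placement[loc]
    print(f"lose {loc}: {left} shares left -> "
          f"{'OK' if survives_loss_of(loc) else 'AT RISK'}")
```

Modeled this way, the plan fails if all four trusted contacts become unreachable at once; if the contacts are genuinely independent, model each as its own location and the check passes everywhere.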
Configuration & Optimization
Security Hardening
- TPM/HSM Auto-Unseal (requires hardware; Vault's PKCS#11 seal is an Enterprise feature):
```hcl
# vault.hcl -- auto-unseal via a TPM/HSM exposed through a PKCS#11 library
seal "pkcs11" {
  lib       = "/usr/lib/softhsm/libsofthsm2.so"
  slot      = "0"
  pin       = "vault-pin"
  key_label = "vault-unseal"
}
```
- Seal Migration: when moving from Shamir unseal keys to an auto-unseal seal, each key holder runs:
```bash
# -migrate converts existing Shamir key shares to the new seal mechanism
vault operator unseal -migrate
```
- Network Policies:
```yaml
# Example Calico network policy
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: vault-access
spec:
  selector: app == 'vault'
  ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports: [8200]
      source:
        selector: app == 'bastion'
```
Performance Tuning
Adjust Vault’s resource limits:
```hcl
# vault.hcl
storage "raft" {
  path    = "/vault/data"
  node_id = "vault-node-1"
  performance_multiplier = 2
}

listener "tcp" {
  tls_disable = 0
  # ...
}

api_addr     = "https://vault.example.com:8200"
cluster_addr = "https://10.0.0.1:8201"   # replace with this node's IP
```
Backup Strategy
Automated share verification:
```bash
#!/bin/bash
# verify_shares.sh -- alert if the number of reachable shares drops below threshold
MIN_SHARES=5
VAULT_SHARES=$(find /shares -name "share_*.gpg" | wc -l)

if [ "$VAULT_SHARES" -lt "$MIN_SHARES" ]; then
  echo "CRITICAL: Only $VAULT_SHARES shares available!" | \
    mail -s "Share Alert" admin@example.com
fi
```
Schedule with cron:
```
0 3 * * * /usr/local/bin/verify_shares.sh
```
Usage & Operations
Daily Operations
- Secret Storage:
```bash
# Store an SSH key (@file reads the value from disk)
vault kv put secret/ssh_keys/id_rsa type=rsa private=@"$HOME/.ssh/id_rsa"

# Retrieve when needed, then restore strict permissions
vault kv get -field=private secret/ssh_keys/id_rsa > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
```
- Access Monitoring:
```bash
# Audit log review
vault audit list -detailed
```
- Share Rotation:
```bash
# Start a rekey to generate new shares; existing holders must then supply
# the current threshold of keys to complete it
vault operator rekey -init -key-shares=8 -key-threshold=5

# Distribute the new shares as before
```
Recovery Scenario
When needed, collect 5+ shares:
```bash
# Decrypt the collected shares
gpg --decrypt share_1.gpg > share1.txt
# ... repeat for the required number of shares

# Reconstruct the master secret (ssss-combine reads one share per line)
cat share1.txt share2.txt share3.txt share4.txt share5.txt | ssss-combine -t 5

# Unseal Vault by supplying five recovered unseal keys, one per invocation
vault operator unseal
```
Backup Procedures
- Share Integrity Checking:
```bash
sha256sum /shares/*.gpg > share_checksums.txt
gpg --clearsign share_checksums.txt
```
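The clearsigned manifest can later be re-verified programmatically. A sketch equivalent to `sha256sum -c` (the `verify_manifest` helper is mine; the `hash  path` line format follows `sha256sum`'s output):

```python
# Recompute SHA-256 digests of share files and compare against a manifest.
import hashlib
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(manifest: Path) -> list[str]:
    """Return paths whose current digest no longer matches the manifest."""
    mismatched = []
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        if sha256_file(Path(name)) != expected:
            mismatched.append(name)
    return mismatched
```

An empty return value means every share file still matches its recorded digest.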
- Geographic Redistribution:
```bash
# Copy via an rclone crypt remote (configure it first with `rclone config`;
# "encrypted-remote" stands in for whatever name you gave that remote)
rclone copy /shares encrypted-remote:backup-bucket \
  --include "*.gpg" \
  --progress
```
Troubleshooting
Common Issues
Problem: ssss-combine fails with invalid shares
Solution:
```bash
# Attempt a combine and surface ssss's error output
cat share1.txt share2.txt share3.txt share4.txt share5.txt | \
  ssss-combine -t 5 2>&1 | grep -i "invalid"

# Regenerate and redistribute shares from backup if any are corrupt
```
Problem: Vault becomes sealed unexpectedly
Debug:
```bash
docker logs vault 2>&1 | grep -i seal
vault status -format=json | jq .sealed
```
Problem: Network connectivity issues
Diagnosis:
```bash
# Check cluster status (raft storage only)
vault operator raft list-peers

# Verify the listener port
netstat -tulpn | grep 8200
```
Security Audits
Monthly checklist:
- Verify share locations physically
- Rotate root tokens
- Review access logs:
```bash
vault audit list -format=json | jq '.[].options'
```
Performance Issues
If response times degrade:
```bash
# Inspect raft/storage health (raft storage only)
vault operator raft autopilot state

# Monitor API performance counters
vault read -format=json sys/metrics | jq '.data.Counters'
```
Conclusion
Building ReMemory transformed how I approach secret management - both professionally and personally. By implementing this decentralized approach, you’ve created a system that:
- Survives individual component failures
- Maintains availability across geographic regions
- Provides cryptographic proof of access control
- Complies with enterprise security standards
Next steps for enhancement:
- Hardware Integration: Add YubiKey or Ledger device support
- Temporal Constraints: Implement time-locked secrets using drand
- Blockchain Verification: Store share checksums on Ethereum or IPFS
In our field, resilience isn’t just about server uptime - it’s about ensuring access survives beyond individual humans. Ride safe, but build safer.