The problem
AI workflows depend on multiple API keys — model providers, transcription services, cloud storage, version control. Hardcoding them is a security risk. Leaking one key can compromise your entire setup.
The real challenge isn't just "don't put keys in code." It's managing keys across different contexts: your shell, background services, cron jobs, and web applications — each with different environment scoping rules.
A typical AI workflow stack
A personal AI setup might use 4-6 services, each requiring separate credentials:
| Service type | Purpose | Credential type |
|---|---|---|
| AI model provider | Reasoning, generation, analysis | API key |
| Transcription service | Audio-to-text conversion | API key |
| Search API | Web research | API key |
| Version control | Code management | OAuth token |
| Cloud storage | File sync and backup | OAuth2 refresh token |
Each service needs its own key, stored securely and accessible from the right execution context.
Environment variables: the basics
The simplest approach is environment variables. No secrets in files that get committed. No hardcoded values.
For interactive shells
Add keys to your shell profile (e.g., `~/.bashrc` or `~/.zshrc`):

```bash
export MY_SERVICE_API_KEY="your-key-here"
export ANOTHER_SERVICE_KEY="your-key-here"
```

Reload with `source ~/.bashrc`.
Limitation: This only covers interactive shells. Background services and cron jobs don't load your shell profile, so keys defined here are invisible to them.
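Because a missing key usually surfaces later as a cryptic 401, it helps to fail fast with a clear message. A minimal POSIX-sh sketch — the function name `require_key` is ours, not a standard tool:

```bash
#!/bin/sh
# require_key: abort early if the named environment variable is empty.
# Drop this at the top of any script that calls a paid API.
require_key() {
  # POSIX sh has no ${!var}, so use eval for indirect expansion
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "error: $1 is not set in this environment" >&2
    return 1
  fi
  echo "$1 is set (length ${#val})"
}
```

Call `require_key MY_SERVICE_API_KEY` before the first API call; the error message tells you exactly which variable is missing from which context.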
For systemd services
If your AI agent runs as a systemd service, the service needs its own environment:
```ini
[Service]
Environment="MY_SERVICE_API_KEY=your-key-here"
```
Or use an `EnvironmentFile` for cleaner separation:

```ini
[Service]
EnvironmentFile=/etc/your-service/env
```
Where the env file contains your keys, one per line.
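For example, `/etc/your-service/env` might look like this (placeholder names from above). Note that systemd's `EnvironmentFile` format is plain `KEY=value` assignments — no `export` keyword, and values containing spaces must be quoted:

```
MY_SERVICE_API_KEY=your-key-here
ANOTHER_SERVICE_KEY=your-key-here
```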
Important: Set restrictive permissions on the env file:
```bash
sudo chmod 600 /etc/your-service/env
sudo chown root:root /etc/your-service/env
```
After changes, reload systemd:
```bash
sudo systemctl daemon-reload
sudo systemctl restart your-service
```
Experiment: the environment scoping issue
When first setting up an AI workflow with a transcription service, a common mistake surfaces: the API key works in your interactive shell but fails when called from a background service.
What happens:
- Add the API key to `~/.bashrc` ✅
- Run the service manually — works ✅
- The background agent tries to call the same service — `401 Unauthorized` ❌
Root cause: systemd services don't inherit the user's shell environment. They run in an isolated process context.
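You can reproduce this isolation without systemd: `env -i` starts a child process with an empty environment, a rough stand-in for a service's clean context.

```bash
#!/bin/sh
# A key exported in the shell is visible to normal child processes,
# but not to a process started with an empty environment (env -i),
# which approximates systemd's isolated context.
export MY_SERVICE_API_KEY="set-in-shell"
shell_view=$(sh -c 'echo "${MY_SERVICE_API_KEY:-<unset>}"')
service_view=$(env -i sh -c 'echo "${MY_SERVICE_API_KEY:-<unset>}"')
echo "normal child process sees: $shell_view"
echo "isolated process sees:     $service_view"
```

The first line prints `set-in-shell`; the second prints `<unset>` — the same key, visible or invisible depending purely on how the process was launched.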
Fix: Add the key to both your shell profile (for manual use) AND the systemd service environment. After daemon-reload and restart, the service works from both contexts.
Lesson learned: Always test your keys from the same context they'll actually run in. A key that works in your terminal may not exist in your service's environment. This is one of the most common gotchas in personal AI infrastructure.
OAuth-based authentication
Not everything uses API keys. Some services use OAuth flows that require browser-based authentication.
Version control CLIs (like GitHub's gh CLI) typically use an OAuth browser flow and store the token locally. The CLI handles token refresh automatically.
Cloud storage sync tools (like rclone) walk through an OAuth2 flow and store refresh tokens in a local config file. These tokens grant ongoing access without re-authentication.
Best practices for OAuth credentials:
- Set restrictive file permissions on any config files storing tokens (`chmod 600`)
- Scope access to the minimum necessary (e.g., limit cloud storage access to a single folder rather than granting full drive access)
- Access scoping is sometimes a policy decision enforced by discipline rather than a technical control. Document your boundaries.
What NOT to do
- Never commit keys to git. Audit all repos before making them public, and search for common patterns: `grep -rn "API_KEY\|SECRET\|TOKEN" .`
- Never put keys in scripts. Scripts should read from environment variables, not contain keys directly.
- Never share keys across services unnecessarily. Each service gets only the keys it needs.
- Never use the same key for dev and production. Separate environments limit blast radius.
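In practice, "read from the environment" looks like the sketch below — `call_api`, the variable name, and the endpoint are all illustrative placeholders, not a real service:

```bash
#!/bin/sh
# call_api: illustrative wrapper that pulls the key from the environment
# instead of hardcoding it. The URL argument is a placeholder.
call_api() {
  key="${MY_SERVICE_API_KEY:?set MY_SERVICE_API_KEY first}"
  # A real call would be something like:
  #   curl -sS -H "Authorization: Bearer $key" "$1"
  echo "would call $1 with a key of length ${#key}"
}
```

Because the key never appears in the file, the script itself is safe to commit.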
Where to improve from here
Common areas where personal AI setups can level up their secrets management:
- Key rotation. Rotating API keys and tokens on a schedule (monthly or quarterly) limits the blast radius if a key is ever exposed.
- Dedicated secrets managers. HashiCorp Vault, AWS Secrets Manager, or even encrypted env files offer better protection than plain-text environment variables. For single-machine setups, env files are pragmatic; for anything larger, a secrets manager pays for itself.
- Environment separation. Using different keys for development and production prevents a dev mistake from compromising production access.
- Audit trails. Logging which key was used when helps detect unauthorized access. Most API providers offer usage dashboards — check them periodically.
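Rotation is easier to keep up when something nags you. A reminder-check sketch, assuming GNU coreutils (`stat -c`; BSD/macOS uses `stat -f %m` instead):

```bash
#!/bin/sh
# check_age: warn when a credentials file hasn't been modified (i.e., the
# key likely hasn't been rotated) in more than max_days.
check_age() {
  file="$1"; max_days="${2:-90}"
  mtime=$(stat -c %Y "$file" 2>/dev/null) || { echo "missing: $file"; return 1; }
  age_days=$(( ($(date +%s) - mtime) / 86400 ))
  if [ "$age_days" -gt "$max_days" ]; then
    echo "ROTATE: $file is ${age_days} days old"
  else
    echo "OK: $file is ${age_days} days old"
  fi
}
```

Running this weekly from cron against something like `/etc/your-service/env` turns "rotate quarterly" from an intention into a nudge.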
Checklist
Before adding a new service to your AI workflow:
- ☐ Generate a dedicated API key (don't reuse across services)
- ☐ Add to your shell profile for interactive use
- ☐ Add to systemd service environment if running as a daemon
- ☐ Test from the actual execution context (shell, service, cron)
- ☐ Set restrictive file permissions on any credential files (600)
- ☐ Verify credentials are NOT in any committed files
- ☐ Document what each key is for in a private reference file
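Several of these checklist items are scriptable. A pre-flight sketch for the permissions check (again GNU `stat`; macOS uses `stat -f %Lp`):

```bash
#!/bin/sh
# check_mode: confirm a credential file is exactly mode 600
# (readable and writable by the owner only).
check_mode() {
  mode=$(stat -c %a "$1" 2>/dev/null) || { echo "missing: $1"; return 1; }
  if [ "$mode" = "600" ]; then
    echo "ok: $1 is $mode"
  else
    echo "FIX: $1 is $mode, expected 600"
  fi
}
```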
Sources
- systemd Environment directives — official docs on service environment variables
- 12-Factor App: Config — environment-based configuration principles