Infrastructure Engineer (DevOps & Data)
datma, an early-stage healthcare technology company focused on extracting value from complex, heterogeneous healthcare data through a specialized platform, is hiring a versatile Infrastructure Engineer. The role is a critical bridge between DevOps and Data Engineering, responsible for building scalable data infrastructure and ensuring reliable, secure deployments in regulated environments.
This is a full-time, remote position in the United States.
Role Summary
The Infrastructure Engineer will own the full lifecycle of the cloud infrastructure that supports datma's data ingestion, harmonization, visualization, and AI/ML capabilities. The role requires deep expertise in Kubernetes, Infrastructure-as-Code, and security compliance (HIPAA/HITRUST).
Key Responsibilities
DevOps Functions (Infrastructure Focus)
- Container Orchestration: Architect, deploy, and manage Kubernetes clusters running in customer cloud tenancies (AWS, Azure, GCP).
- Infrastructure-as-Code: Create robust templates using tools like Terraform and Helm for repeatable, automated deployments.
- Security & Compliance: Implement and maintain security controls aligned with HIPAA and HITRUST frameworks. Configure secure networking, IAM, encryption, and audit logging.
- Reliability & Observability: Implement scaling, monitoring, disaster recovery, and observability solutions (metrics, logging, tracing).
- Automation: Automate deployment processes for data pipelines, ML models, and analytics applications, including automated testing (a minimal deployment-verification sketch follows this list).
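To give a flavor of the automation work, here is a minimal, hedged sketch of a post-deploy verification step using the official `kubernetes` Python client. The namespace and deployment name are illustrative placeholders, not part of datma's actual platform.

```python
# Minimal sketch: verify a Kubernetes Deployment finished rolling out.
# Assumes the official `kubernetes` Python client and a configured kubeconfig;
# the namespace and deployment name below are illustrative placeholders.
from kubernetes import client, config

def deployment_is_ready(namespace: str, name: str) -> bool:
    """Return True if every desired replica is available."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    available = dep.status.available_replicas or 0
    return desired > 0 and available == desired

if __name__ == "__main__":
    print(deployment_is_ready("analytics", "ingest-api"))
```

In practice a check like this would run as a CI/CD gate after `helm upgrade` or `terraform apply`, failing the pipeline if the rollout never converges.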
Data Engineering Functions (Service Focus)
- AI Infrastructure: Build infrastructure to host in-house AI models and integrate with external AI services (e.g., GPT-5). Optimize data pipelines and storage to support GPU-based compute for ML workloads.
- API Management: Design and manage scalable API gateways and authentication mechanisms for external data consumers, ensuring high-throughput, low-latency access to sensitive healthcare datasets.
- Pipeline Collaboration: Collaborate with the data/applications team to optimize data processing pipelines using tools like Prefect or cloud-native solutions, supporting diverse client integrations (a minimal Prefect-style sketch follows this list).
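For orientation only, the following sketch shows what a small Prefect 2.x flow with retries might look like. The task names, source feed, and harmonization step are hypothetical stand-ins, not datma's pipeline.

```python
# Rough sketch of a Prefect-style pipeline (Prefect 2.x API).
# The task names, source, and schema step are hypothetical placeholders.
from prefect import flow, task

@task(retries=2)
def extract(source: str) -> list[dict]:
    # Placeholder for pulling raw records from a source system.
    return [{"id": 1, "payload": "raw"}]

@task
def harmonize(records: list[dict]) -> list[dict]:
    # Placeholder for mapping records onto a common schema.
    return [{**r, "schema_version": "v1"} for r in records]

@flow(log_prints=True)
def ingest(source: str = "example-feed"):
    records = extract(source)
    print(f"harmonized {len(harmonize(records))} records")

if __name__ == "__main__":
    ingest()
```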
Required Experience and Technical Qualifications
The ideal candidate possesses deep, hands-on experience in cloud infrastructure and security, with a strong understanding of the specialized needs of data and machine learning workloads.
- Experience: 3+ years of experience in cloud infrastructure engineering, preferably in a regulated data environment.
- Core DevOps: Deep expertise with Kubernetes and container orchestration in production. Strong proficiency in Infrastructure-as-Code tools (Terraform, Helm, Ansible, etc.).
- Security & Compliance: Experience with cloud security best practices and regulatory frameworks (HIPAA, SOC 2, or HITRUST).
- Monitoring & CI/CD: Hands-on experience with CI/CD pipelines and monitoring tools (e.g., Prometheus, Grafana, ELK); a minimal metrics-exposure sketch follows this list.
- Programming: Proficiency in Python and/or Go, SQL, and bash scripting.
- Data Knowledge: Understanding of data modeling, warehousing concepts, and data pipeline orchestration tools.
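As a hedged illustration of the Prometheus experience listed above, this sketch exposes a custom counter for scraping with the `prometheus_client` library. The metric name and port are assumptions for the example, not an existing deployment.

```python
# Minimal sketch: exposing a custom metric for Prometheus to scrape.
# The metric name and port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, start_http_server

RECORDS_INGESTED = Counter(
    "records_ingested_total",
    "Number of records accepted by the ingestion service.",
)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        RECORDS_INGESTED.inc(random.randint(1, 10))  # stand-in for real work
        time.sleep(5)
```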
Preferred Qualifications
- Experience deploying in customer-owned cloud environments (multi-tenant architecture design).
- Knowledge of machine learning infrastructure and MLOps practices.
- Background in healthcare data and interoperability standards (FHIR, HL7).
- Familiarity with secure API design and management (OAuth2, JWT, API gateways); a minimal token-validation sketch follows this list.
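As a rough sketch of the JWT familiarity mentioned above, the following validates an RS256-signed bearer token with PyJWT. The key, audience, and issuer are placeholder values, not real endpoints.

```python
# Hedged sketch: validating a bearer token at an API gateway boundary.
# Uses PyJWT; the key, audience, and issuer below are placeholders.
import jwt

PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def validate_token(token: str) -> dict:
    """Decode and verify an RS256-signed JWT, returning its claims."""
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],  # pin the algorithm; never trust the token header
        audience="https://api.example.com",
        issuer="https://auth.example.com",
    )
```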
Job Features
| Feature | Detail |
| --- | --- |
| Job Category | Data, DevOps |