
​SAIC is seeking a results-driven Java Backend Developer to support a high-priority IT modernization effort for a large Federal Agency. This role focuses on breaking down monolithic legacy services into modular, cloud-native Spring Boot applications. You will be a key player in migrating on-premise systems to AWS, utilizing DevSecOps and Lean best practices to ensure the new architecture is scalable and fault-tolerant.

  • Location: Remote (HQ in Alexandria, VA)
  • Experience: 5+ years in Java Development.
  • Clearance: Must be able to obtain a Public Trust clearance.
  • Core Tech: Java, Spring Boot, Hibernate, AWS, Oracle, REST.
  • Focus: Cloud Migration, Monolith Decomposition, and Agile Delivery.

​Cloud-Native Java Development

​You will design and code microservices using the Spring Boot framework and Hibernate/JPA for ORM. The role requires a strong grasp of object-oriented principles to ensure code is maintainable and secure. You will transition legacy logic into RESTful web services, utilizing JSON and XML for data exchange and ensuring that all new modules are optimized for a cloud environment.

​AWS Migration & Serverless

​A primary responsibility is migrating on-premise applications to AWS. You will utilize a wide array of AWS resources, including ECS Fargate for containerized workloads, Lambda for serverless functions, and API Gateway for service orchestration. You will also manage data persistence using RDS (Oracle) and handle asynchronous messaging through SQS and SNS.

​DevSecOps & Observability

​Working in a fast-paced Agile environment, you will use GitLab for source control and Maven for build automation. To ensure high availability and performance, you will analyze logs using Splunk and monitor data flow performance with Instana. You will participate in the full Agile lifecycle—from story elaboration in JIRA/Rally to sprint reviews and retrospectives—ensuring that every deployment meets federal quality standards.

Summary: You are the architect of modularity for federal IT systems. By decomposing monolithic services into agile, cloud-native Java applications on AWS, you provide the government with the high-performance, secure, and maintainable software needed to serve the public effectively.

Job Features

Job Category: DevOps, Information Technology


​Peraton is a major national security partner providing mission-critical IT solutions across the federal government. In this role, you will support a federal financial customer by migrating complex data flows into a secure AWS environment. A unique aspect of this position is the integration of legacy file transfer and messaging middleware—such as IBM MQ and Connect:Direct—into modern, cloud-native architectures.

  • Salary Range: $86,000 - $138,000 USD
  • Location: Remote (Home based) with Hybrid flexibility/travel as required.
  • Experience: 5+ years (with a degree) or 9+ years (with a high school diploma).
  • Clearance: Must be U.S. Citizen / Public Trust.
  • Core Tech: AWS (Lambda, Glue, EKS), Python, Ansible, CDK/CloudFormation.

​Data Migration & Legacy Integration

​You will migrate large-scale data flows from on-premise systems to AWS. This includes managing IBM MQ and Connect:Direct services, ensuring that high-volume file transfers for financial systems remain reliable and secure. You will utilize AWS data services like Glue for ETL processes and S3 for durable storage, bridging the gap between traditional enterprise messaging and cloud-native serverless logic.

​Infrastructure as Code (IaC) & Serverless

​You will build the migration target environments using AWS CDK, CloudFormation, or Terraform. The role involves writing logic for Lambda and Step Functions to automate complex workflows. By applying Ansible and Python scripting, you will ensure that the infrastructure is version-controlled and deployed through robust CI/CD pipelines, adhering to federal security best practices.
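
To make the serverless piece concrete, here is a minimal sketch (in Python, using an invented event shape rather than the customer's real schema) of a Lambda handler that validates a transfer notification so a downstream Step Functions Choice state could route accepted and rejected files differently:

```python
import json

def lambda_handler(event, context):
    """Validate an inbound transfer notification for a Step Functions workflow.

    The event shape (bucket/key/size fields) is illustrative only, not the
    actual schema used by the financial customer.
    """
    record = event.get("transfer", {})
    required = ("bucket", "key", "size_bytes")
    missing = [f for f in required if f not in record]
    if missing:
        # A structured error lets a Step Functions Choice state route the
        # workflow to a remediation branch instead of failing outright.
        return {"status": "REJECTED", "missing_fields": missing}
    return {
        "status": "ACCEPTED",
        "object_uri": f"s3://{record['bucket']}/{record['key']}",
        "size_bytes": record["size_bytes"],
    }

if __name__ == "__main__":
    event = {"transfer": {"bucket": "finance-inbound", "key": "eod/batch.dat", "size_bytes": 1024}}
    print(json.dumps(lambda_handler(event, None)))
```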

​Containerization & Orchestration

​The project leverages modern container patterns, requiring proficiency in Docker and orchestration via Amazon EKS (Kubernetes) or ECS. You will be responsible for containerizing legacy components where possible and managing their lifecycle, performance tuning, and observability using CloudWatch and CloudTrail to meet strict federal compliance standards.

Summary: You are the bridge between legacy financial systems and the future of cloud computing. By mastering the integration of enterprise MQ services with modern AWS serverless and container technologies, you ensure that vital national financial data flows are secure, resilient, and highly automated.

Job Features

Job Category: Information Technology, Software Engineering


​GovCIO is a prominent government IT contractor focused on digital transformation and cloud modernization. In this role, you will lead the migration and architectural evolution of a critical federal application into the AWS cloud. This is a senior-level position requiring a deep blend of Java development and DevSecOps engineering to ensure high-scale government systems remain secure, compliant, and highly available.

  • Location: Fully Remote (HQ in Alexandria, VA)
  • Experience: 8–12+ years in solutions design and engineering.
  • Clearance: Must be able to obtain a Public Trust clearance.
  • Core Tech: AWS, Java, GitLab CI/CD, Terraform, Ansible, Docker.
  • Focus: Cloud Migration, Disaster Recovery, and Blue-Green Deployments.

​Cloud Migration & Hybrid Architecture

​You will be responsible for the end-to-end cloud transformation of legacy government systems. This involves designing "blueprints" for hybrid cloud infrastructures, focusing on VPC networking, storage, and security topologies. You will advise federal clients on architectural decisions, ensuring that the move to AWS follows industry best practices for service decomposition and microservices.

​Automated CI/CD & Infrastructure as Code

​You will manage the Infrastructure as Code (IaC) baseline using Terraform and Ansible. Your goal is to build a fully automated software delivery lifecycle—from code commit to production—using GitLab CI/CD. By implementing Blue-Green deployment environments, you ensure that updates can be released with zero downtime and minimal risk to critical government operations.
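
A Blue-Green pipeline of this kind might be wired up in GitLab CI roughly as follows; the stage names, Terraform variable, and traffic-shift script are illustrative assumptions, not GovCIO's actual configuration:

```yaml
# Illustrative .gitlab-ci.yml fragment; names and the shift-traffic helper are assumptions.
stages:
  - build
  - deploy-green
  - cutover

build:
  stage: build
  script:
    - mvn -B -DskipTests package

deploy green:
  stage: deploy-green
  environment:
    name: green
  script:
    - terraform apply -auto-approve -var="active_stack=green"

cutover to green:
  stage: cutover
  when: manual              # human approval gates the traffic shift
  script:
    - ./scripts/shift-traffic.sh green   # hypothetical script that repoints the load balancer
```

Because the green stack is fully deployed before the manual cutover, a failed release never touches the live (blue) environment.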

​DevSecOps & Compliance

​Given the government context, security is integrated into every stage of the project. You will embed security controls into the CI/CD pipelines and work closely with Java and Angular developers to ensure the application meets federal compliance standards. Your responsibilities also include designing Disaster Recovery protocols and maintaining a "secure-by-default" IAM and network posture.

Summary: You are the technical lead for a major federal cloud modernization effort. By combining expert Java coding skills with advanced AWS orchestration and DevSecOps automation, you provide the resilient and secure foundation necessary for government applications to thrive in the cloud era.

Job Features

Job Category: DevOps, Information Technology


​This role represents the cutting edge of SRE, moving beyond traditional scripting toward Agentic Workflows and Autonomous Infrastructure. You will be responsible for building self-sustaining systems that use AI to eliminate operational toil. This involves integrating Large Language Models (LLMs) and orchestration frameworks directly into the production lifecycle to automate incident response and system scaling.

  • Focus: AI Operations (AIOps), Autonomous Agents, and Predictive Observability.
  • Core Frameworks: LangChain, LangGraph, n8n, CrewAI, AutoGPT.
  • Automation Tools: Airplane.dev, Custom AI Flow Builders.
  • Key Metric: Elimination of toil through self-healing systems.

​Autonomous Agent Orchestration

​You will design and deploy agentic workflows using frameworks like LangGraph or CrewAI. Unlike standard linear automation, these autonomous agents can reason through complex infrastructure alerts, interact with APIs, and execute remediation steps independently. You will be tasked with integrating these "AI Copilots" into production systems to handle routine maintenance and complex multi-step recoveries.
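
Stripped of any particular framework, the alert-to-remediation loop such agents implement can be sketched in plain Python. In production the rule table below would be replaced by an LLM call through LangGraph or CrewAI; the alert names and actions here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Framework-agnostic sketch of an alert -> reason -> act remediation loop."""
    playbook: dict
    log: list = field(default_factory=list)

    def choose_action(self, alert: str) -> str:
        # An LLM-backed agent would reason over the alert text and its tool
        # specs; this deterministic lookup is a stand-in for that step.
        return self.playbook.get(alert, "escalate_to_human")

    def handle(self, alert: str) -> str:
        action = self.choose_action(alert)
        self.log.append((alert, action))   # audit trail for human operators
        return action

agent = Agent(playbook={
    "disk_full": "expand_volume",
    "pod_crashloop": "rollback_deployment",
})
print(agent.handle("pod_crashloop"))   # prints: rollback_deployment
```

The fallback to `escalate_to_human` reflects the safety posture such roles demand: the agent only acts autonomously on alerts it recognizes.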

​AI-Driven Observability & Predictive SLOs

​A major component of this role is evolving traditional monitoring into Predictive Observability. You will build LLM-based assistants that help engineers query system state using natural language and design dashboards that predict Service Level Objective (SLO) breaches before they occur. By measuring "everything," you will create the data loops necessary for AI to understand and maintain system health.
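
One common building block for predictive SLOs is an error-budget burn-rate projection. The following sketch assumes a simplified two-number interface (remaining budget fraction and hourly burn rate) rather than any specific monitoring product:

```python
def hours_to_slo_breach(error_budget_remaining: float, burn_rate_per_hour: float) -> float:
    """Project how long until the remaining error budget is exhausted.

    error_budget_remaining: fraction of the period's budget still unspent (0..1)
    burn_rate_per_hour: fraction of the total budget consumed per hour,
        derived from recent error rates
    """
    if burn_rate_per_hour <= 0:
        return float("inf")   # no ongoing burn: no projected breach
    return error_budget_remaining / burn_rate_per_hour

# 40% of the budget left, burning 5% of the budget per hour: about 8 hours to breach
print(hours_to_slo_breach(0.40, 0.05))
```

Alerting on the projection (rather than the raw error rate) is what turns a dashboard into a predictive signal.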

​Documentation & Communication

​Clarity is critical when automating high-stakes infrastructure. You will be responsible for documenting complex AI flow logic and communicating technical resolutions to partners and customers. This ensures that even as the systems become more autonomous, the human operators maintain full visibility and "precision of understanding" regarding how the AI is managing the platform.

Summary: You are at the forefront of the "SRE 2.0" movement. By replacing manual toil with intelligent, agent-driven automation and predictive analytics, you ensure that enterprise-scale systems are not just reliable, but inherently self-sustaining.

Job Features

Job Category: AI (Artificial Intelligence)


​Tempo is the leading time management and resource planning provider within the Atlassian ecosystem, serving over 30,000 customers and a third of the Fortune 500. As a Senior SRE, you will join the team responsible for the stable foundation upon which all other engineering departments build. This is a "Remote First" role focused on scaling a high-traffic SaaS platform on AWS, championing DevOps culture, and ensuring enterprise-grade availability.

  • Location: Remote (United States or Canada)
  • Experience: 5+ years in a SaaS environment.
  • Core Tech: AWS, Kubernetes, CI/CD, Infrastructure as Code.
  • Focus: Observability, Database Administration, and Platform Scalability.

​Cloud-Native Infrastructure & Kubernetes

​You will own the evolution of Tempo's AWS-based platform, ensuring it scales alongside a rapidly growing customer base. A primary focus is working with Kubernetes and modern cloud-native tools to improve platform extensibility. You will act as a "build champion," implementing architectural changes that increase deployment speed while maintaining the high quality expected by enterprise clients.

​Observability & Performance Metrics

​A critical part of this role involves deep-diving into system performance. You will implement and manage Real User Monitoring (RUM), distributed tracing, and advanced monitoring pipelines. By analyzing these metrics, you will create automated alerting and recovery systems that minimize downtime and improve the end-user experience across Tempo's suite of integrated solutions.

​Database Administration & Automation

​Beyond standard infrastructure, you will be responsible for the health of Tempo's data layer. This includes database administration tasks such as provisioning, performance tuning, and troubleshooting complex storage issues. You will automate these key processes—alongside build and release cycles—to ensure that manual intervention is minimized and recovery is predictable.

Summary: You are the architect of stability for Tempo’s global infrastructure. By combining deep AWS and Kubernetes expertise with a mentor-led approach to DevOps, you empower modern teams worldwide to deliver value through highly available and cost-efficient tech solutions.

Job Features

Job Category: DevOps, Software Engineering


Remote
United States
Posted 3 weeks ago

Vultr is the world’s largest privately held cloud infrastructure company, recently valued at $3.5 billion. Unlike typical DevOps roles that manage a company's internal apps, this position involves building the actual cloud products that Vultr's customers use. You will work in the "engine room" of the cloud, developing and operating services such as Vultr Kubernetes Engine (VKE), Load Balancers (VLB), and AI Inference platforms.

  • Compensation: $75,000 – $100,000 USD
  • Location: 100% Remote (United States)
  • Experience: 3–5+ years in DevOps, SRE, or Cloud Engineering.
  • Core Tech: Go (Golang), Kubernetes Internals, Terraform, Ansible.
  • Focus: Cloud Provider Infrastructure and Container Runtimes.

​Cloud Product Engineering & Go Development

​This is a Go-first engineering role. You won't just be using cloud tools; you will be building them. You will contribute directly to Vultr’s open-source ecosystem, including their Terraform Provider, Crossplane integrations, and the vultr-cli. Your work will involve writing code to manage Vultr’s global footprint of Cloud GPUs, Bare Metal, and Cloud Storage.

​Deep Kubernetes & Container Internals

​You will move beyond high-level orchestration into the internals of the Kubernetes ecosystem. This includes working with the kubelet, custom controllers, and CRDs. A key part of the role involves designing integrations for container runtimes like containerd and runc, ensuring that OCI images deploy reliably and securely across Vultr's 32 global data center locations.

​Networking & Load Balancing

​You will help develop and enhance Vultr Load Balancers (VLB) and NAT Gateways. This requires a solid understanding of HAProxy, Envoy, or NGINX, and the ability to troubleshoot complex distributed systems. You'll work on the networking layer (CNI) to ensure high-performance connectivity for thousands of active customers worldwide.

Summary: You are building the cloud itself. By mastering Go and the deep internals of Kubernetes and container runtimes, you provide the high-performance infrastructure that powers the next generation of AI innovators and global enterprises.

Job Features

Job Category: Cloud Engineering, DevOps


This role is with a fast-growing SaaS provider specializing in high-volume data processing. As a Senior DevOps Engineer, you will be the primary architect of a reliable and scalable Google Cloud Platform (GCP) environment. You will bridge the gap between development and operations by leading initiatives in infrastructure automation, cost optimization, and observability, ensuring that the global platform remains secure and performant.

  • Compensation: $100,000 – $140,000 USD
  • Location: Remote (local to the Boston, MA area)
  • Experience: 5+ years in DevOps, SRE, or Platform Engineering.
  • Core Tech: Google Cloud Platform (GCP), Terraform, Ansible, Docker, Jenkins.
  • Focus: Cloud-Native Scalability, CI/CD, and Observability.

​Google Cloud Infrastructure & Automation

​You will be responsible for the design and operation of a highly available GCP environment. Using Terraform, you will manage the infrastructure-as-code (IaC) lifecycle, ensuring that all cloud resources are versioned and reproducible. You will also utilize Ansible for configuration management across your Linux fleet, maintaining a standardized and secure server environment for the company’s containerized workloads.

​Release Engineering & CI/CD

​You will lead the development and maintenance of automated deployment pipelines using Jenkins. Your goal is to enable "paved roads" for software engineers, allowing for fast and reliable deployments of Docker containers. This includes integrating automated testing and security guardrails into the CI/CD flow, reducing the manual effort required for high-frequency SaaS releases.

​Observability & Incident Response

​To support a global customer base, you will implement and optimize a modern observability stack using tools like Prometheus and Grafana (or GCP Cloud Monitoring). You will define critical alerts and dashboards to monitor system health, participate in on-call rotations, and lead post-incident reviews to drive long-term reliability improvements. You will also mentor junior engineers in these SRE practices to foster a culture of technical excellence.
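
As a hedged example of the alerting side, a Prometheus rule for paging on an elevated 5xx ratio might look like the following; the metric name, threshold, and durations are assumptions:

```yaml
# Illustrative Prometheus alerting rule; metric names and thresholds are assumptions.
groups:
  - name: api-availability
    rules:
      - alert: HighErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[5m]))
              / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error ratio above 5% for 10 minutes"
```

The `for: 10m` clause suppresses pages for transient spikes, a standard way to keep on-call load sustainable.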

Summary: You are the technical leader ensuring the stability of a high-growth SaaS platform. By mastering GCP internals and driving automation through Terraform and Ansible, you provide the resilient foundation necessary for global data processing at scale.

Job Features

Job Category: Cloud Engineering, DevOps



​Cyera is a fast-growing startup reinventing data security for the cloud era. As a DevOps Engineer, you will be a high-impact contributor responsible for the infrastructure and automation that powers their data security platform. You will work across multi-cloud environments, focusing on "DevSecOps" to ensure that as the company scales its Fortune 1000 client base, the platform remains resilient, automated, and compliant with global security standards.

  • Location: R&D US Remote
  • Experience: 3–5+ years in DevOps, SRE, or Platform Engineering.
  • Core Tech: Kubernetes, Docker, Terraform, CI/CD (GitHub Actions/GitLab).
  • Cloud Platforms: AWS, GCP, and Azure.
  • Focus: Data Security, Infrastructure as Code, and Observability.

​Multi-Cloud Infrastructure & Kubernetes

​You will design and maintain highly available infrastructure across AWS, GCP, and Azure. Central to this is Kubernetes orchestration, where you will manage containerized workloads to ensure they scale dynamically. By using Terraform for Infrastructure as Code (IaC), you will automate the provisioning of these multi-cloud environments, ensuring consistency across development, staging, and production.

​DevSecOps & CI/CD Pipelines

​As a security company, Cyera requires security to be "shifted left." You will build CI/CD pipelines using tools like GitHub Actions or GitLab CI, embedding automated security scanning and compliance checks (SAST/DAST) directly into the deployment workflow. Your goal is to increase release velocity without compromising the rigorous requirements of SOC2 or ISO 27001.

​Reliability & Observability

​You will own the uptime and performance of production services. This involves implementing comprehensive observability stacks using tools like Prometheus, Grafana, and Datadog. You will be responsible for defining alerting thresholds, conducting root cause analysis (RCA) for incidents, and creating automated runbooks to ensure the platform can handle the data security needs of modern enterprises.

Summary: You are the architect of the automated systems that protect enterprise data. By bridging the gap between high-speed software delivery and iron-clad cloud security, you enable Cyera to remain the leader in the next generation of cybersecurity.

Job Features

Job Category: Cloud Engineering, DevOps, Security


​eSimplicity is a digital services firm that partners with federal agencies to modernize public health systems. This Senior DevOps Engineer role is specifically focused on supporting the Centers for Medicare and Medicaid Services (CMS). You will operate in a large-scale AWS environment, integrating DevSecOps principles into pipelines that handle massive healthcare data sets and critical government reporting tools.

  • Salary Range: $106,300 – $136,600 USD
  • Location: Fully Remote (Operating on Eastern Time)
  • Experience: 8+ years in DevOps, DevSecOps, or Security Engineering.
  • Clearance: Must be able to obtain a Public Trust clearance.
  • Core Tech: AWS, Terraform, Terragrunt, GitHub Actions, Docker.
  • Data Stack: Databricks, Redshift.

​Federal DevSecOps & Compliance

​A primary focus of this role is ensuring that all systems meet strict federal compliance standards, including FISMA and NIST. You will embed security controls—such as SAST, DAST, and SCA—directly into the software development lifecycle. Working with Java and Python/Django teams, you will automate the remediation of vulnerabilities to protect the data of millions of Americans.

​Infrastructure as Code (IaC) with Terragrunt

​You will manage cloud infrastructure using Terraform, specifically utilizing Terragrunt to keep your code DRY (Don't Repeat Yourself) and manage remote state across multiple AWS accounts. This ensures that the infrastructure supporting Databricks and Redshift clusters is repeatable, auditable, and secure-by-default, adhering to strict IAM and network security policies.
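
A typical Terragrunt layout keeps per-environment files tiny by pulling shared remote-state and provider settings from a parent folder. The fragment below is illustrative; the module path and inputs are assumptions:

```hcl
# Illustrative terragrunt.hcl for one environment/account.
include "root" {
  path = find_in_parent_folders()   # inherits shared remote-state and provider config
}

terraform {
  source = "../../../modules//redshift-cluster"
}

inputs = {
  environment = "prod"
  node_count  = 4
}
```

Because every environment includes the same root configuration, state backends and IAM settings are defined once and audited in one place.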

​Big Data & Container Security

​Supporting CMS involves managing high-performance data platforms. You will be responsible for the security and scaling of Databricks and Redshift clusters, ensuring that big data processing remains performant and compliant. Additionally, you will manage Docker container security, implementing hardened images and secure registry workflows to protect the application layer.

Summary: You are a critical guardian of federal healthcare infrastructure. By combining deep AWS expertise with advanced DevSecOps automation and a firm understanding of government compliance, you ensure that CMS can process vital data securely and efficiently to serve the public good.

Job Features

Job Category: Data, DevOps, Healthcare


​This role is a unique hybrid position within a lean, growing startup environment. Despite the "DevOps" title, the responsibilities lean heavily toward Full Stack Development with a strong emphasis on functional programming and distributed systems. You will be a "generalist" engineer, moving seamlessly between frontend features, backend logic, and the Microsoft Azure infrastructure that supports it.

  • Location: Remote - Work from Home
  • Experience: 5+ years in software development.
  • Core Tech: F#, .NET Ecosystem, Microsoft Orleans, Microsoft Azure.
  • Focus: Full-stack development, Distributed Systems, and Architecture.

​Functional Programming & The .NET Stack

​The most distinct aspect of this role is the use of F# across the entire stack. You will be writing functional-first code within the .NET ecosystem. This approach prioritizes immutability and type safety, which is critical for the complex logic required by clinical product leaders and business stakeholders.

​Distributed Systems with Microsoft Orleans

​You will work with Microsoft Orleans, a "virtual actor" framework designed for building massive-scale distributed systems. This allows the team to handle complex state management and concurrency without the usual overhead of distributed locking. You will be responsible for ensuring these distributed components are performant and resilient within a cloud-native architecture.

​Cloud Infrastructure & Architecture

​While the majority of your time is spent on hands-on development, you will own the Microsoft Azure infrastructure. This includes designing the architecture for new services, conducting code reviews with a focus on cloud-best practices, and providing technical support for third-party vendor integrations. Your goal is to ensure the infrastructure scales alongside the growing business needs.

Summary: You are a key technical pillar in a collaborative startup. By leveraging the power of F# and the scalability of Microsoft Orleans on Azure, you bridge the gap between high-level clinical requirements and robust, distributed software architecture.

Job Features

Job Category: Full Stack Developer


​Evidation is a digital health platform that translates real-world data into personalized health guidance. This Senior DevOps Engineer role is a high-level position focused on building secure, product-oriented cloud infrastructure within a highly regulated environment (HIPAA, SOC 2, ISO 27001). You will be responsible for the "operational excellence" of a platform that connects with millions of users, requiring a "security-first" approach to AWS and Kubernetes.

  • Location: Remote (Preferred: Santa Barbara or Southern California)
  • Experience: 8+ years in DevOps, SRE, or Platform Engineering.
  • Core Tech: AWS, EKS, Terraform, Pulumi, GitHub Actions, Docker, Helm.
  • Stack Focus: Ruby/Puma, Snowflake, Postgres, Redis, RabbitMQ, Bottlerocket OS.

​Expert Kubernetes & Cluster Operations

​You will design and operate multi-tenant Kubernetes environments using Amazon EKS. Your focus will be on the complete cluster lifecycle, including workload management, KEDA for event-driven autoscaling, and cost-optimized configurations. You will specifically work with Bottlerocket OS—a Linux-based open-source operating system purpose-built by Amazon to run containers securely and efficiently.
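
Event-driven autoscaling with KEDA is usually declared as a ScaledObject. The fragment below is a sketch, with a RabbitMQ trigger chosen to match the stack listed above; the queue name, replica bounds, and threshold are assumptions:

```yaml
# Illustrative KEDA ScaledObject; names and thresholds are assumptions.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker-deployment     # Deployment whose replicas KEDA manages
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        queueName: jobs
        mode: QueueLength
        value: "50"             # target backlog per replica
```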

​Infrastructure as Code & CI/CD

​You will drive best practices in Infrastructure-as-Code (IaC) by utilizing both Terraform and Pulumi. This includes creating modular, versioned, and tested deployment patterns. You will also mature the CI/CD ecosystem using GitHub Actions, leveraging OIDC authentication, reusable workflows, and secure secrets management to ensure a traceable and resilient software delivery pipeline.
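
The OIDC pattern mentioned above lets a GitHub Actions job assume a cloud role without long-lived secrets. This fragment is illustrative; the role ARN, region, and deploy command are placeholders:

```yaml
# Illustrative workflow using GitHub's OIDC federation with AWS.
name: deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write    # lets the job request a short-lived OIDC token
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-role   # placeholder ARN
          aws-region: us-east-1
      - run: terraform apply -auto-approve
```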

​Observability & Incident Response

​Leveraging Datadog, you will define and improve monitoring, logging, and tracing (APM) to create a highly observable system. As a senior leader, you will provide advanced support for major incidents, performing deep-dive root cause analysis and writing postmortems that lead to long-term corrective actions. You will also use AI-assisted development tools like GitHub Copilot to accelerate the creation of automation scripts in Python, Ruby, or Go.

Summary: You are the architect of reliability for a platform that measures human health. By combining expert-level Kubernetes orchestration with a rigorous adherence to healthcare regulatory controls, you enable Evidation's data scientists and engineers to deploy high-impact health tools with speed and security.

Job Features

Job Category: Cloud Engineering, DevOps, Security



​Sharetec is a provider of innovative core banking and lending software for credit unions. This Senior DevOps Engineer role within the Platform Engineering organization is a unique blend of traditional Windows-based hosting and modern mobile deployment. You will be responsible for the end-to-end reliability of financial applications, moving the needle from manual processes to Infrastructure-as-Code (IaC) while managing the complexities of app store publishing and IIS optimization.

  • Compensation: $105,000 – $120,000 USD
  • Location: Fully Remote (USA, excluding California)
  • Experience: 5–8 years in DevOps or Platform Engineering.
  • Core Tech: Ansible, Terraform, IIS (Windows), Docker, Kubernetes.
  • Specialty: Mobile Deployment (iOS/Android) and CI/CD Automation.

​Mobile Deployment & App Store Management

​A key differentiator for this role is the ownership of the mobile build and release lifecycle. You will manage the end-to-end publishing process for iOS and Android applications, navigating App Store Connect and Google Play Console. This includes handling certificate signing, provisioning profiles, and automating compliance checks to ensure credit union members have seamless access to their mobile banking tools.

​Infrastructure-as-Code & Ansible Automation

​You will lead the transition toward a "source of truth" infrastructure. Using Ansible, you will automate IIS configurations, API deployments, and certificate rotations to eliminate configuration drift. Simultaneously, you will use Terraform to provision the underlying infrastructure, ensuring that environment standardization is maintained across the entire production SaaS platform.
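
Ansible's Windows collections make IIS configuration declarative rather than drift-prone. The tasks below are a sketch using modules from `community.windows`; the site name, path, and runtime setting are assumptions:

```yaml
# Illustrative Ansible tasks for IIS; names and paths are assumptions.
- name: Ensure the banking API site exists and is running
  community.windows.win_iis_website:
    name: BankingAPI
    state: started
    port: 443
    physical_path: C:\inetpub\bankingapi

- name: Ensure the app pool targets .NET 4.x
  community.windows.win_iis_webapppool:
    name: BankingAPI
    state: started
    attributes:
      managedRuntimeVersion: v4.0
```

Running these tasks on every deploy means a hand-edited IIS setting is corrected automatically instead of lingering as drift.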

​Platform Modernization & Windows Hosting

​While the current environment relies heavily on IIS and Windows-based production, you will be a key driver in the shift toward Docker and Kubernetes. You will tune and optimize legacy web servers while building the "future state" of the platform. Your scripting skills in PowerShell, C#, or Python will be essential for troubleshooting the application and infrastructure layers of the core banking suite.

Summary: You are the bridge between stable core banking and modern deployment practices. By mastering both the nuances of Windows/IIS hosting and the automation of mobile app stores, you ensure that Sharetec’s financial technology remains reliable, predictable, and ready for the next generation of containerized scaling.

Job Features

Job Category: DevOps


​Anaconda is the foundational platform for modern data science and AI, serving over 50 million users. As a Platform Engineer on the Platform Core team, you will build the infrastructure that powers AI at scale. Your role is to bridge the gap between complex open-source ecosystems and enterprise-grade reliability, ensuring that both SaaS and self-hosted products remain scalable and developer-friendly.

  • Compensation: $115,000 – $170,000 + Bonus & Equity
  • Location: Fully Remote (Distributed Team)
  • Experience: 2–5 years in Infrastructure, DevOps, or Platform Engineering.
  • Core Tech: Kubernetes, Terraform, AWS/Azure/GCP, Python, Go.
  • Deadline: February 19, 2026.

​Cloud-Native Infrastructure & Kubernetes

You will own the infrastructure components that support Anaconda’s core offerings. This involves advanced Kubernetes orchestration across multi-cloud environments, including AWS, Azure, and Google Cloud. You will design enduring solutions that handle high-throughput demands while ensuring that data science workloads, including GPU-accelerated ones, remain stable.

​Developer Experience & Automation

​A primary goal is to empower developers through self-service capabilities. You will use Terraform or CDK to build infrastructure-as-code blueprints, allowing engineers to deploy software faster without manual intervention. By automating CI/CD workflows and creating robust internal tooling in Python or Go, you directly increase the velocity of the entire engineering organization.

​Observability & Incident Response

​As a member of the Platform Core team, you will support production systems through an on-call rotation. You will utilize observability tools to monitor Linux-based systems, perform root-cause analysis on complex infrastructure failures, and implement permanent fixes. Your success is measured by gains in infrastructure stability and the reduction of incident response times.

Summary: You are the architect of the environment where data science thrives. By combining deep Kubernetes expertise with a passion for developer experience and automated infrastructure, you ensure that Anaconda remains the most trusted platform for securing and deploying AI at scale.

Job Features

Job Category: AI (Artificial Intelligence), Data


Remote
United States
Posted 3 weeks ago

​Bayer’s digital farming arm is at the forefront of regenerative agriculture, using data science and engineering to power platforms like Climate FieldView. This Senior Cloud Engineer role is part of the Platform Engineering team, where you will treat the cloud infrastructure as a product. Your goal is to build "paved roads" (golden paths) that enable hundreds of engineers to ship software to AWS securely and efficiently.

  • Location: Remote (US)
  • Experience: 5+ years hands-on AWS production experience.
  • Core Tech: Terraform, AWS Service Catalog, Kubernetes (EKS/ECS), GitLab CI.
  • Focus: Internal Developer Platform (IDP), DevSecOps, and FinOps.

​Platform Engineering & Developer Enablement

​You will own the internal toolchain, including Backstage (IDP) and Terraform modules, to simplify the path from idea to production. This isn't just about building infrastructure; it's about productizing it. You will run office hours, build "exemplar" templates, and establish SLAs/SLOs to ensure internal engineering teams have a seamless experience.

​Paved Roads & Policy-as-Code

​You will maintain opinionated Infrastructure as Code (IaC) modules that are secure by default. Using Policy-as-Code (OPA/Conftest), you will implement guardrails across VPCs, IAM, and KMS. This ensures that every deployment—whether using Lambda, ECS, or EKS—automatically adheres to Bayer’s networking and security standards without manual intervention.
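A Conftest guardrail like the ones described is written in Rego and evaluated against a Terraform plan rendered as JSON (`terraform show -json`). A minimal sketch (for brevity it handles only the case where `Action` is a single string, and the rule text is illustrative, not Bayer's actual policy):

```rego
package main

# Hypothetical guardrail: reject any Terraform plan that creates an IAM
# policy granting wildcard actions.
deny[msg] {
  rc := input.resource_changes[_]
  rc.type == "aws_iam_policy"

  # The policy document is stored as a JSON string in the plan.
  doc := json.unmarshal(rc.change.after.policy)
  stmt := doc.Statement[_]
  stmt.Action == "*"

  msg := sprintf("%s grants wildcard IAM actions", [rc.address])
}
```

Wired into CI, `conftest test plan.json` fails the pipeline before the offending change ever reaches an AWS account, which is what makes the guardrail automatic rather than a review-time convention.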

​Reliability, Observability & FinOps

As a steward of the platform, you will manage Datadog and CloudWatch to reduce mean time to recovery (MTTR) through actionable runbooks and strong observability. You will also lead FinOps initiatives, implementing cost guardrails, budgets, and rightsizing strategies to keep cloud spend optimized across Bayer's massive agronomic data footprint.
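One common form of the cost guardrails mentioned above is a budget alert managed in Terraform. A minimal sketch using the AWS provider's `aws_budgets_budget` resource (the dollar amount and notification address are placeholders):

```hcl
# Hypothetical FinOps guardrail: alert when actual monthly spend crosses
# 80% of the budgeted amount.
resource "aws_budgets_budget" "platform_monthly" {
  name         = "platform-monthly"
  budget_type  = "COST"
  limit_amount = "50000"        # placeholder figure, in USD
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["platform-finops@example.com"]  # placeholder
  }
}
```

Because the budget itself is code, it lives in the same review and deployment flow as the infrastructure it watches.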

Summary: You are the architect of the developer experience at Bayer. By building standardized, secure-by-default AWS patterns and fostering a DevSecOps culture, you empower engineering teams to focus on agricultural innovation while you handle the complexity of the underlying cloud-native distributed systems.

Job Features

Job Category: Cloud Engineering, Software Engineering


Remote
Posted 3 weeks ago

​Mindex is a leading software development firm specializing in agile cloud services and innovative product development. This DevOps Engineer role is specifically designed to support and scale AI-powered platforms, including AI/ML services and Virtual Agent systems. You will work in a multi-cloud environment focusing on building resilient CI/CD pipelines and managing containerized, distributed systems.

  • Location: Remote (Rochester, NY HQ)
  • Experience: 3+ years in DevOps, SRE, or Cloud Engineering.
  • Core Tech: Kubernetes, Terraform, Docker, GitHub Actions, Jenkins.
  • Cloud Platforms: Google Cloud Platform (GCP) and Microsoft Azure.
  • Focus: AI/ML Infrastructure, MLOps, and Multi-Cloud Automation.

​Multi-Cloud Infrastructure as Code

​You will design and automate infrastructure across GCP and Azure using Terraform. This involves managing container orchestration platforms such as Google Kubernetes Engine and Azure Kubernetes Service, as well as serverless options like Cloud Run. Your goal is to ensure high availability and scalability for cloud-native applications that power intelligent virtual agents.

​Advanced CI/CD & Automation

​The role requires building sophisticated delivery pipelines using GitHub Actions, GitOps, and Jenkins. You will use Python, Shell, and Groovy to script automation for the full software development lifecycle. Beyond standard deployments, you will integrate APIs and tools specifically aimed at streamlining MLOps workflows and AI/ML service ecosystems.
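A delivery pipeline of this kind in GitHub Actions is a short workflow file. A minimal sketch (workflow name, registry, and build target are hypothetical, not Mindex's actual pipeline):

```yaml
# Hypothetical workflow: test, build, and push a service image on every
# merge to main.
name: deploy-virtual-agent
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test                      # placeholder build target
      - name: Build image
        run: docker build -t registry.example.com/virtual-agent:${{ github.sha }} .
      - name: Push image
        run: docker push registry.example.com/virtual-agent:${{ github.sha }}
```

Tagging the image with `github.sha` keeps every deployment traceable to an exact commit, which is the hook that GitOps tooling then uses to promote releases across environments.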

​Monitoring & Reliability Engineering

​As a guardian of platform health, you will implement proactive alerting and monitoring solutions. You will troubleshoot complex networking and deployment issues across the application stack, ensuring that the AI-powered services meet strict performance and security standards. You will also collaborate with data scientists to optimize the release processes for machine learning models.

Summary: You are the engine behind Mindex’s most advanced AI platforms. By mastering multi-cloud orchestration and MLOps automation, you provide the scalable, secure foundation necessary for modern organizations to deploy and manage intelligent, high-performing applications.

Job Features

Job Category: AI (Artificial Intelligence), Cloud Engineering, DevOps
