An opportunity is available for an Azure Engineer at Cyclotron, a leading Microsoft partner committed to delivering comprehensive Azure solutions across IaaS, PaaS, and SaaS. This mid-level role is crucial for designing, implementing, and managing cutting-edge cloud infrastructure and application services.
This is a full-time, remote position, available anywhere in the U.S.
Role Summary and Azure Full Stack Mandate
This engineer will be a hands-on expert responsible for the entire Azure solution lifecycle, from foundational networking and infrastructure-as-code to managing microservices and security compliance. The role requires a broad understanding of the Azure ecosystem to deploy and maintain secure, scalable, and modern enterprise platforms.
Key Responsibilities
- Infrastructure & Platform: Design, implement, and manage Azure solutions across IaaS, PaaS, and SaaS, including VMs, App Services, and Networking. Deploy and configure key services like Azure Landing Zones, Azure Virtual Desktop, and Defender for Cloud.
- Networking & Security: Architect and deploy secure network solutions, including VNETs, VPNs, and ExpressRoute. Ensure security best practices are implemented across all services using Azure Security Center.
- Automation & DevOps: Utilize Azure DevOps for CI/CD pipelines, monitoring, and automation. Implement Infrastructure as Code (IaC) using tools like ARM templates or Terraform.
- Containerization & Applications: Implement and manage containerized applications using Azure Kubernetes Service (AKS). Develop and deploy applications using Azure App Services and integrate with other Azure services.
- Storage: Implement and manage Azure storage solutions, including Blob, Queue, and Table Storage.
- Industry Expertise: Stay abreast of the latest Azure services and technologies.
Required Experience and Technical Qualifications
The ideal candidate is a certified Azure professional with a minimum of five years of broad, hands-on experience across Azure's core services, focusing on networking, security, and modern DevOps practices.
- Experience: A minimum of 5 years of hands-on experience with a broad range of Azure services.
- Education: Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Certifications: Relevant Azure certifications such as Microsoft Certified: Azure Solutions Architect Expert or Azure Developer Associate.
- Core Azure Knowledge: Strong understanding of Azure networking, security, storage, and application services.
- DevOps & IaC: Experience with Azure DevOps and Infrastructure as Code (IaC) practices (ARM or Terraform).
- Soft Skills: Excellent problem-solving skills and strong communication abilities.
Desirable Skills
- Familiarity with other cloud platforms (AWS or Google Cloud).
- Experience with Azure Bicep for IaC.
- Experience with Azure Monitor and Azure Application Insights.
- Knowledge of Azure governance and best practices, including Azure Active Directory, B2C, and B2B.
Job Features
| Job Category | Cloud Engineering |
An opportunity is available for a Cloud Developer on the Core Platform team at Yahoo, a company committed to building beloved brands in News, Sports, and Finance. This role is central to building the industry-leading, next-generation AI-powered cloud platforms, services, and tools that support hundreds of millions of users and billions of daily pageviews.
This is a full-time, remote position in the United States.
Role Summary and AI-Powered Cloud Automation
This developer will be a key contributor to achieving developer self-sufficiency and driving DevOps ownership models by focusing on automation, standardization, and deploying containerized applications at massive scale across multiple cloud providers. The role has a strong focus on leveraging AI to enhance platform operations, security, and developer onboarding.
You Will:
- Infrastructure as Code (IaC): Develop and enhance Terraform modules to standardize cloud features across the Yahoo environment.
- Kubernetes & Deployment: Configure and automate Helm deployments for services hosted in Kubernetes.
- AI-Driven Operations: Design and deploy AI-driven troubleshooting systems and cost optimization features across all managed Kubernetes clusters.
- Developer Productivity: Create intelligent onboarding agents that guide developers to configure and deploy applications seamlessly. Automate setup, security, and scaling best practices through AI-driven workflows.
- Production Reliability: Diagnose, resolve, and prevent issues in production application infrastructure using critical knowledge of Kubernetes, UNIX tools, and cloud technologies.
- Monitoring & Documentation: Implement and maintain monitoring for complete transparency to the application and system state. Write clear and concise engineering documentation for internal users.
Required Experience and Technical Qualifications
The ideal candidate is a talented, self-driven developer with a passion for automation, deep knowledge of Kubernetes, and a desire to leverage AI to solve complex, large-scale cloud challenges.
- Experience: 3+ years with a BS/MS in Computer Science or equivalent.
- Cloud Fundamentals: Strong knowledge of cloud fundamentals, including automation workflows, application deployments, monitoring, networking, and identity/access management. Experience working with AWS or GCP public cloud.
- Kubernetes Expertise: Strong proficiency with Kubernetes (cluster management, operators, CRDs, Helm, GitOps).
- Programming & Scripting: Expertise in at least one scripting language. Programming experience in Go is preferred.
- Observability & AI: Knowledge of observability tooling (Prometheus, Grafana, OpenTelemetry) for collecting data to feed AI models.
- Troubleshooting: Skilled in identifying performance bottlenecks, anomalous system behavior, and determining the root cause of incidents in the cloud.
- Preferred Knowledge: Experience working with Terraform is preferred. Knowledge of Service mesh technologies such as Istio, IP networking, DNS, load balancing, and CDNs is also preferred.
Job Features
| Job Category | Cloud Engineering |
An opportunity is available for a Site Reliability / GitOps Engineer to join the Information Systems (IS) team at Canonical, the leading provider of Ubuntu and open-source software to global enterprise and technology markets. This role is a unique opportunity for an "automation-first" technologist to manage and evolve the core IT production services used by over 60 million Ubuntu users worldwide.
This is a full-time, remote position, available globally in any timezone.
Role Summary and Automation Leadership Mandate
This SRE & GitOps Engineer will drive operations automation to the next level across Canonical's private and public clouds. The role combines deep hands-on expertise with infrastructure as code (IaC) and software development practices to ensure the reliability and scalability of Canonical’s services and products.
As a Site Reliability / GitOps Engineer, you will:
- IaC & Automation: Apply your IaC experience to develop the infrastructure-as-code practice within IS, constantly increasing automation and improving IaC processes. Automate software operations for re-usability and consistency across private and public clouds.
- Resilience & Development: Develop new features and improve the resilience and scalability of the existing cloud and container portfolio. You'll be given uninterrupted development time to focus on large-scale projects and automation of manual tasks.
- Operational Responsibility: Maintain operational responsibility for all of Canonical’s core services, networks, and infrastructure. Carry final responsibility for time-critical escalations.
- Observability & Troubleshooting: Develop skills in troubleshooting, capacity planning, and performance investigation. Set up, maintain, and use observability tools such as Prometheus, Grafana, and Elasticsearch.
- Collaboration & Improvement: Collaborate with development teams to design service architecture, documentation, playbooks, and operational procedures. You will also improve Canonical products and the open-source technologies by providing critical feedback (submitting bugs and sometimes pull requests).
- GitOps Practice: Utilize version control, peer review, and CI/CD to roll out changes to both applications and infrastructure, defining operations entirely in code.
Required Experience and Technical Qualifications
The ideal candidate is a Linux and automation expert with a strong modern engineering background, capable of operating distributed systems and solving complex, full-stack problems.
- IaC & GitOps Expertise: Deep experience defining operations in code, using version control, peer review, and CI/CD to roll out changes.
- Engineering Background: Strong modern engineering background (peer-review, unit testing, SCM, CI/CD, Agile).
- Programming: Python software development experience, particularly with large projects.
- Linux & Networking: Practical knowledge of Linux networking, routing, and firewalls. Hands-on experience administering enterprise Linux servers.
- Systems Knowledge: Affinity with various forms of Linux storage (from Ceph to databases). Proficiency with cloud computing concepts and technologies.
- Education: Bachelor's degree or greater, preferably in computer science or a related engineering field.
- Attributes: Motivated and able to troubleshoot from kernel to web. Passionate and familiar with open-source, especially Ubuntu or Debian.
Job Features
| Job Category | Information Technology, Product Management, Software Engineering |
An opportunity is available for an Infrastructure Software Engineer at Dropbox, joining the team responsible for shaping the robust technological backbone that supports their flagship products and future innovations. This role is crucial for building and maintaining the massive-scale systems that define the Dropbox platform.
This is a full-time, remote position hiring in specific US zones. The annual salary ranges are $177,500 to $240,100 (Zone 2) and $157,800 to $213,400 (Zone 3), plus corporate bonus and stock (RSUs).
Role Summary and Global Infrastructure Mandate
The Infrastructure Engineer will be at the forefront of tackling challenges related to system scalability, data integrity, and cross-ecosystem interoperability. Your contributions will directly impact millions of users through work on systems handling petabytes of data and millions of concurrent connections.
Key Responsibilities
- Massive Scale Infrastructure: Build infrastructure capable of managing metadata for hundreds of billions of files, handling hundreds of petabytes of user data, and facilitating millions of concurrent connections.
- Data Fabric Expansion: Lead the expansion of Dropbox's function as the data fabric, connecting hundreds of millions of applications, devices, and services globally, while driving initiatives to enhance interoperability and adaptability.
- Performance & Optimization: Measure and optimize Dropbox's analytics platform to maintain its status as one of the most advanced in the industry for extracting meaningful insights.
- Collaboration & Innovation: Collaborate with cross-functional teams to innovate and implement solutions that enhance the performance, reliability, and security of the infrastructure.
- Mentorship & On-Call: Mentor and guide junior team members. Participate in an on-call rotation (availability during both core and non-core business hours) as required by the team.
Required Experience and Technical Qualifications
The ideal candidate is a highly experienced software development professional with a proven track record in building expansive, distributed backend systems and deep expertise in core programming and operating system fundamentals.
- Experience: 5+ years of professional software development experience.
- Distributed Systems: Proven track record in constructing and managing expansive, multi-threaded, geographically dispersed backend systems.
- Programming Proficiency: Proficient in programming and debugging across a range of languages such as Python, Go, C/C++, or Java.
- Systems Fundamentals: Proficiency with operating system internals, filesystems, databases, networks, and compilers.
- Project Delivery: Proven track record of defining & delivering well-scoped milestones/projects.
- Problem Solving: Ability to independently define the right solutions for ambiguous, open-ended problems.
- Education: BS, MS, or PhD in Computer Science or related technical field involving coding (or equivalent technical experience).
Preferred Qualifications
- Familiarity with semaphores and mutexes.
Job Features
| Job Category | Software Engineering |
An opportunity is available for a Cloud Infrastructure Engineer at Guidewire, the market leader in providing critical platform solutions for over 400 insurance companies. This engineer will be responsible for designing, architecting, building, and maintaining the infrastructure surrounding a suite of geospatial data microservices known as NextGen HazardHub, which provides comprehensive property and hazard risk scoring for the P&C insurance industry.
This is a full-time, remote position in the United States.
Role Summary and Microservices Infrastructure Mandate
Joining a team of API, Infra, and Data Engineers, this role will be pivotal in maturing the HazardHub platform from its startup roots to an enterprise-grade product. The Engineer's core mission is to create a reliable, secure, and toil-free deployment environment, emphasizing Infrastructure as Code and self-service for API teams.
Key Responsibilities
- Infrastructure as Code (IaC): Write Terraform modules, Helm charts, and other IaC to create and maintain cloud resources primarily in AWS (EKS).
- Kubernetes & Self-Service: Manage an EKS deploy environment where API engineers can develop and deploy GIS microservices in a primarily self-service model.
- API Gateway Design: Help design and implement HazardHub's next-generation API gateway for network ingress, authentication integrations, and API management.
- Security & Compliance: Maintain secure deploy environments and assist with evidence gathering for SOC2-style audits.
- CI/CD & Automation: Develop robust CI/CD pipelines in GitHub Actions for your code and assist other engineers in their efforts.
- Architecture Collaboration: Assist API and Data Engineers to architect new GIS services and select the appropriate cloud resources. Collaborate with Data Engineers to build a robust ETL infrastructure.
- Monitoring & Alerting: Implement infrastructure monitoring and alerting (Datadog preferred).
Required Experience and Technical Qualifications
The ideal candidate is a proficient Infrastructure Engineer with expertise in AWS, Kubernetes, modern IaC tools, and a strong understanding of API best practices and deployment automation.
- Cloud & Orchestration: Strong knowledge of AWS and Kubernetes; preference for EKS experience.
- Infrastructure as Code: Strong knowledge of common IaC tools; preference for Terraform and Helm.
- API Expertise: Broad knowledge of current API best practices, architectures, integration styles, technologies, and platforms. Proficiency with API contracts (REST, OpenAPI, GraphQL).
- CI/CD: Proficiency with git and CI/CD; preference for GitHub Actions experience.
- Networking & Containers: Proficiency with basic cloud networking (ALBs, VPCs, DNS) and container image creation (Docker).
- Data Skills: Proficiency with SQL; PostGIS experience is a plus.
- Programming & Domain (Plus): Golang experience is a plus. Familiarity with GIS concepts and risk models in the insurance industry is a strong plus.
Job Features
| Job Category | Cloud Engineering |
A position is open for a highly skilled and motivated DevOps Engineer to join the technology team at CallMiner, the global leader in conversation intelligence powered by AI and ML. This engineer will be crucial in building, maintaining, and optimizing CI/CD pipelines and cloud infrastructure to deliver scalable and reliable software releases.
This is a full-time, remote position.
Role Summary and CI/CD Optimization Mandate
The DevOps Engineer is a hands-on technical expert focused on automation, continuous integration/delivery, and orchestration across hybrid cloud environments. The role involves deep collaboration with development, QA, and operations teams to implement best practices, particularly utilizing the GitOps methodology.
Key Responsibilities
- CI/CD Pipeline Management: Design, implement, and maintain CI/CD pipelines using GitLab and other automation tooling.
- Hybrid Cloud Infrastructure: Manage and optimize infrastructure across AWS and Azure environments.
- Orchestration & Containerization: Deploy, manage, and troubleshoot applications using Docker and Kubernetes.
- GitOps & Automation: Implement GitOps workflows utilizing ArgoCD. Automate provisioning and configuration management with Ansible and orchestration with AWX.
- System Administration: Administer and maintain Linux and Windows Server systems in hybrid environments.
- Tooling & Scripting: Develop automation, tooling, and scripts using Bash, Python, or PowerShell.
- Release Engineering: Collaborate on release engineering processes, including versioning, packaging, testing, and automated deployment.
Required Experience and Technical Qualifications
The ideal candidate is a proven DevOps professional with deep expertise in multi-cloud infrastructure, container orchestration, and leading automation efforts via modern GitOps and configuration management tools.
- Experience: At least three (3) years of proven experience as a DevOps Engineer or in a similar role.
- Cloud Expertise (Required): Expertise with AWS and Azure cloud infrastructure and services.
- Containerization & Orchestration: Strong background in Docker and Kubernetes.
- Automation Tooling: Configuration management experience using Ansible and orchestration with AWX.
- CI/CD & GitOps: Strong understanding of CI/CD using GitLab and proficiency with GitOps principles and ArgoCD.
- Scripting & OS: Proficiency with Linux administration and strong scripting skills (Bash, Python, PowerShell, or similar).
- Skills: Strong troubleshooting skills across software, infrastructure, and network layers. Knowledge of release engineering concepts.
Preferred Qualifications
- Experience managing Windows Server environments.
- Familiarity with Infrastructure as Code tools (Terraform, CloudFormation).
- Experience with observability and monitoring tools (Prometheus, Grafana, ELK Stack, etc.).
- AWS or Azure DevOps-related certifications.
Job Features
| Job Category | DevOps, Software Engineering |
An opportunity is available for a Site Reliability Engineer (SRE) to join the infrastructure team of a product and engineering organization. This key technical role is responsible for ensuring the scalability, reliability, and performance of the company's cloud-based services through automation and continuous improvement.
This is a full-time, remote position.
Role Summary and Reliability Mandate
The SRE will work closely with IT, Engineering, and Security teams to design, secure, and maintain highly available, cost-efficient, and observable systems. A strong emphasis is placed on Infrastructure as Code (IaC), incident management, and modern cloud practices.
Key Responsibilities
- Cloud Reliability & Performance: Responsible for the overall scalability, reliability, and performance of the cloud-based services.
- Automation & IaC: Strong emphasis on automation and continuous improvement. Design and maintain systems using Infrastructure as Code (Terraform preferred).
- Observability: Implement and manage log aggregation and observability tools (e.g., Sumo Logic, Datadog, ELK) for monitoring and proactive management.
- Container Orchestration: Work with Kubernetes (EKS), Helm, and container orchestration to manage services.
- Security & Compliance: Design and maintain secure systems, with familiarity in compliance frameworks (SOC2, HIPAA, etc.).
- Incident Management: Utilize incident management practices and SRE principles (SLAs, SLOs, error budgets) to ensure operational excellence.
- Collaboration: Work closely with cross-functional teams to design systems that are secure, observable, and cost-efficient.
Required Experience and Technical Qualifications
The ideal candidate is a hands-on SRE or DevOps professional with deep expertise in AWS, Kubernetes, and leveraging IaC and observability tools in a fast-paced environment.
- Experience: 5+ years of experience in Site Reliability Engineering, DevOps, or Cloud Infrastructure roles.
- AWS Expertise (Strong Proficiency): Hands-on experience with core AWS services, including IAM, EC2, ECS/Fargate, S3, RDS, CloudFormation, or Terraform.
- DevOps Tooling: Experience with Infrastructure as Code (Terraform preferred) and GitHub (workflow automation, PR workflows, secrets management).
- Observability: Hands-on experience with log aggregation and observability tools (Sumo Logic or equivalents like Datadog, ELK).
- Containerization: Experience with Kubernetes (EKS), Helm, and container orchestration.
- Environment: Prior experience in fast-paced SaaS or startup environments is highly valued.
- Principles: Familiarity with incident management practices and SRE principles (SLAs, SLOs, error budgets).
Job Features
| Job Category | Cloud Engineering, Product Management |
An opportunity is available for a versatile Infrastructure Engineer at datma, an early-stage healthcare technology company focused on extracting value from complex, heterogeneous healthcare data through a specialized platform. This role is a critical bridge between DevOps and Data Engineering, responsible for building scalable data infrastructure and ensuring reliable, secure deployment in regulated environments.
This is a full-time, remote position in the United States.
Role Summary and Healthcare Data Infrastructure
The Infrastructure Engineer will be responsible for the full lifecycle of cloud infrastructure that supports datma's data ingestion, harmonization, visualization, and AI/ML capabilities. The role requires deep expertise in Kubernetes, Infrastructure-as-Code, and security compliance (HIPAA/HITRUST).
Key Responsibilities
DevOps Functions (Infrastructure Focus)
- Container Orchestration: Architect, deploy, and manage Kubernetes clusters running in customer cloud tenancies (AWS, Azure, GCP).
- Infrastructure-as-Code: Create robust templates using tools like Terraform and Helm for repeatable, automated deployments.
- Security & Compliance: Implement and maintain security controls aligned with HIPAA and HITRUST frameworks. Configure secure networking, IAM, encryption, and audit logging.
- Observability: Implement scaling, monitoring, disaster recovery, and observability solutions (metrics, logging, tracing).
- Automation: Automate deployment processes for data pipelines, ML models, and analytics applications, including automated testing.
Data Engineering Functions (Service Focus)
- AI Infrastructure: Build infrastructure to host in-house AI models and integrate with external AI services (e.g., GPT-5). Optimize data pipelines and storage to support GPU-based compute for ML workloads.
- API Management: Design and manage scalable API gateways and authentication mechanisms for external data consumers, ensuring high-throughput, low-latency access to sensitive healthcare datasets.
- Pipeline Collaboration: Collaborate with the data/applications team to optimize data processing pipelines using tools like Prefect or cloud-native solutions, supporting diverse client integrations.
Required Experience and Technical Qualifications
The ideal candidate possesses deep, hands-on experience in cloud infrastructure and security, with a strong understanding of the specialized needs of data and machine learning workloads.
- Experience: 3+ years of experience in cloud infrastructure engineering, preferably in a regulated data environment.
- Core DevOps: Deep expertise with Kubernetes and container orchestration in production. Strong proficiency in Infrastructure as Code tools (Terraform, Helm, Ansible, etc.).
- Security & Compliance: Experience with cloud security best practices and regulatory frameworks (HIPAA, SOC 2, or HITRUST).
- Monitoring & CI/CD: Hands-on experience with CI/CD pipelines and monitoring tools (e.g., Prometheus, Grafana, ELK).
- Programming: Proficiency in Python and/or Go, SQL, and bash scripting.
- Data Knowledge: Understanding of data modeling, warehousing concepts, and data pipeline orchestration tools.
Preferred Qualifications
- Experience deploying in customer-owned cloud environments (multi-tenant architecture design).
- Knowledge of machine learning infrastructure and MLOps practices.
- Background involving healthcare data and interoperability standards (FHIR, HL7).
- Familiarity with secure API design and management (OAuth2, JWT, API gateways).
Job Features
| Job Category | Data, DevOps |
An opportunity is available for an IT Director - Risk Assessment (Information Security) at Signet Jewelers, the world's largest retailer of diamond jewelry, operating iconic brands like Kay Jewelers and Zales. This motivated leader will be responsible for executive leadership of third-party security matters and driving transformational initiatives.
This is a full-time, remote position.
Role Summary and Vendor Risk Mandate
This Director role is central to managing Signet's cybersecurity risk across its global supply chain. The primary focus is building, evolving, and governing the vendor risk assessment program, ensuring due diligence, implementing mitigation strategies, and maintaining security compliance.
Key Responsibilities
- Vendor Risk Program Ownership: Manage and evolve the vendor risk assessment program. Design the due diligence process and implement risk mitigation strategies.
- Framework Implementation: Manage vendor cybersecurity risk across the global supply chain, implementing frameworks such as NIST CSF and developing risk scores based on vendor impact and criticality.
- Due Diligence & Compliance: Work with procurement and legal to ensure contractual security clauses are enforced. Serve as the primary contact for vendor security discussions and due diligence support.
- Monitoring & Incident Response: Conduct continuous monitoring and lead incident response coordination for vendor-related breaches.
- Reporting & Governance: Report regularly to senior leadership, including the CISO, on the state of third-party security risk. Maintain a risk register of critical vendor findings and track SLAs for timely remediation.
- Guidance: Provide guidance to business units and project teams during vendor selection and procurement processes. Review Data Protection Impact Assessments (DPIAs) as needed.
Required Experience and Qualifications
The ideal candidate is a seasoned Information Security professional with extensive experience managing vendor risk, leading large-scale projects, and overseeing the security of large IT environments.
- Experience: 10+ years of related experience.
- Leadership & Project Management: Experience in project management, from conception to delivery. Experience in managing large, complex projects and large teams. Experience managing consultants/contractors at scale.
- Security Expertise: Extensive experience with a variety of security control tools and processes. Past experience overseeing the security of large IT environments through the entire program lifecycle.
- Communication: Strong communication and interpersonal skills, with the ability to independently set direction and own resolution.
- Education: Bachelor’s degree or equivalent experience; Certifications are a plus.
Job Features
| Job Category | Product, Strategy and Ops |
An opportunity is available for a Director, AI Enablement at BusPatrol, reporting to the Director of AI Enablement. This critical leadership role is tasked with defining, strategizing, and driving the adoption and business impact of AI across the entire organization.
This is a full-time, remote position within the USA. The posted salary range is $200,000 to $220,000 per year.
Role Summary and AI Transformation Mandate
The Director will act as the organization's internal thought leader, translating core operational processes into high-impact, value-generating AI use cases. This role requires a blend of high-level strategy, hands-on development (MVPs), and robust governance to successfully embed AI into the company's DNA.
Key Responsibilities
- AI Strategy & Execution: Define and execute the company-wide AI operational transformation strategy from scratch, including the overall AI Transformation roadmap. Lead executive reviews of AI outcomes to inform strategy and investment.
- Use Case Identification & Prioritization: Partner with senior leaders to surface, evaluate, and prioritize high-impact AI use cases (both existing and emerging) aligned with strategic goals. Conduct proactive value stream mapping to uncover opportunities for efficiency and better quality output.
- Prototyping & Deployment: Use rapid experimentation through MVPs or low-code prototypes (e.g., N8n) to quickly test and scale successful use cases. Collaborate closely with the AI Solutions Architect to design, deploy, and operationalize solutions enterprise-wide, embedding tools like LLMs and Generative AI.
- Performance & Accountability Models: Design and implement robust frameworks and KPIs to measure business value (ROI) from AI initiatives. Define the AI Governance Framework and lead the working group to enable safe, high-impact AI adoption.
- Change Management & Cultural Adoption: Define the AI Cultural framework and champion the AI mindset. Drive change management through tailored education, demos, and coaching sessions, setting up a network of cross-departmental AI Champions to scale adoption.
Required Experience and Qualifications
The ideal candidate is an experienced leader in digital transformation with a strong track record of defining and deploying enterprise-wide AI strategies and deep knowledge of AI technologies and cloud platforms.
- Experience: 5+ years of experience in consulting, technology enablement, digital transformation, or a related field.
- Transformation Leadership: Experience leading enterprise-wide transformations and defining the strategy & deployment from scratch.
- AI Domain Expertise: Strong understanding of AI/ML technologies, data architecture, and cloud platforms (e.g., Azure, AWS, GCP).
- Collaboration: Experience partnering with business, product, marketing, customer service, and/or operations teams to deliver AI-powered solutions.
- Process Improvement: Experience leading automation, process reengineering, or digital innovation initiatives using AI or analytics.
- Education: Bachelor’s degree in Computer Science, Information Systems, Business Administration, or a related field, or equivalent practical experience.
Job Features
| Job Category | AI (Artificial Intelligence) |
A position is available for a Senior Manager, Retail Technology at The ODP Corporation (Office Depot and OfficeMax), a leading provider of products and services through a business-to-business (B2B) distribution platform and omnichannel presence. This role is responsible for the strategic planning, execution, and customer experience of product and service offerings on the company's business platforms.
This is a full-time, fully remote position, but candidates must be available to work during Eastern Time (ET) hours. The salary range is $116,100/year to $175,000/year, with eligibility for an incentive program.
Role Summary and Platform Product Leadership
The Senior Manager serves as a strategic and operational leader, driving customer-facing business initiatives from conception to delivery. The role requires strong analytical and people leadership skills to manage a team and execute a strategic technology roadmap in a matrixed environment.
Primary Responsibilities
- Strategy & Roadmap: Implement a strategic roadmap for product and service offerings, outlining key functional capabilities to maximize sales and customer experience.
- Technology Leadership: Guide the team in overall **technology design activities**. Serve as the domain expert and point person for escalations, which is essential for defining product specifications and managing on-call situations.
- Project Management & Execution: Manage project timelines and budgets; own application/system scopes, dependencies, and deliverables. Assign and manage resource allocation across projects.
- Cross-Functional Partnership: Work closely with business partners, product managers, UX designers, technical architects, and core business teams to understand needs and deliver appropriate solutions.
- People Management: Oversee all people management responsibilities, including hiring, goal setting, performance management, coaching, training, and development. Foster team growth and skill acquisition.
Required Experience and Qualifications
The ideal candidate is an experienced manager with a proven track record of successful technology delivery, superior analytical skills, and strong people leadership capabilities in a dynamic, customer-facing retail technology environment.
- Experience (Required):
- Minimum of 6 years of overall experience.
- 3 years in a managerial role.
- Proven experience delivering functionality from concept to deployment, leveraging mobile, social, and other rich media applications to build web solutions.
- Education: Bachelor's degree or equivalent experience (Computer Science, Information Systems, or Business preferred); an advanced degree is a plus.
- Analytical Skills: Superior analytical skills to refine strategic, technical roadmaps. Must rely on evidence-based findings in decision-making, establish relevant KPIs, and routinely track them.
- Leadership Qualities: Ability to continuously drive results, inspire and motivate team performance, and demonstrate the ability to identify, attract, and develop team talent.
- Communication Skills: Excellent verbal and written communication skills; regularly required to make presentations to stakeholders and clearly defend strategies.
- Time Zone Requirement: Must be available to work during Eastern Time hours.
Job Features
| Job Category | Product Management, Sales & Customer Success |
An opportunity is available for a Senior Director of Business Systems at IDC, reporting to Business Operations. This strategic leadership role is responsible for the design, implementation, and optimization of technology systems that drive organizational transformation, process improvements, and greater efficiency.
This is a full-time, remote position in the U.S.
Role Summary and Tech-Enabled Transformation Mandate
The Senior Director acts as the critical bridge between core business operations and the technology team, serving as the primary business lead for major initiatives. This role requires a holistic understanding of internal processes coupled with the technical fluency to drive the adoption of scalable, forward-looking solutions, including AI-powered tools.
Key Responsibilities
- Strategic Leadership: Serve as the primary business lead for critical, tech-enabled initiatives that drive internal processes, delivery, and operational transformation.
- Solution Architecture & Alignment: Collaborate closely with technology counterparts to shape end-to-end solution architectures that meet business needs. Drive **business requirements definition**, process design, and solution alignment across all stakeholders.
- Technology Adoption: Drive the adoption of technology solutions—including workflow and project management systems, as well as emerging AI-powered tools—to enhance productivity and effectiveness.
- Vendor & Implementation Management: Evaluate and select technology vendors in partnership with Technology, Procurement, and business stakeholders. Lead business system implementations and process transformation initiatives.
- Translator & Champion: Act as a translator between business and technology teams, ensuring mutual understanding, shared goals, and joint accountability. Champion a culture of partnership, agility, and continuous improvement.
- Program Oversight: Track program performance, manage risks, and ensure delivery against scope, timeline, and budget.
Required Experience and Qualifications
The ideal candidate is a seasoned leader with extensive experience in technology-enabled programs, possessing the ability to balance strategic vision with hands-on execution across complex initiatives.
- Experience: 10+ years of experience in business systems, product, or technology-enabled programs.
- Skills: Strong understanding of operational processes, technology solutions, and solution architecture.
- Implementation Expertise: Experience leading business system implementations and process transformation initiatives. Experience with vendor selection, RFPs, and technology implementation.
- Communication: Excellent communication skills—able to engage with senior leaders, technology owners, architects, and external vendors.
- Problem-Solving: Adept at managing ambiguity and bringing structure to complex challenges.
- Education: Bachelor’s degree required; MBA or relevant technical/engineering degree preferred.
- Domain Knowledge: Knowledge of enterprise systems, with workflow and project management experience being a plus.
Job Features
| Job Category | Information Technology |
An opportunity is available for a Principal Services Engineer at teamLFG, a studio focused on creating a new franchise at PlayStation centered on deep social games. This senior engineering role will drive the development, architecture, and scaling of the backend services for a new live-service game.
This is a full-time, remote position in the United States.
Role Summary and Live-Service Architecture Mandate
The Principal Services Engineer is a technical leader responsible for the entire lifecycle of the game's backend services, from initial requirements and architecture to production implementation and on-call support. The role requires expertise in building high-scale distributed systems and strong cross-disciplinary communication.
Key Responsibilities
- Service Development: Build and maintain production-quality backend game services from prototype to production.
- Architecture & Design: Design a comprehensive, pragmatic services architecture covering all aspects of a live-services game, including sessions, matchmaking, player data, and service partitioning.
- Collaboration & Communication: Collaborate daily with a cross-disciplinary team to design service features. Write clear, concise technical documentation and effectively summarize complex topics to achieve alignment across multiple teams.
- Production Operations: Respond calmly to production incidents and participate in an on-call rotation after the game launches.
- System Diagnosis: Diagnose complex system failures using logging and metrics.
Required Experience and Technical Qualifications
The ideal candidate is an expert in distributed systems, proficient in the Rust programming language, and skilled at navigating the complexities of high-scale, live-service environments.
- Systems Expertise: Production experience with distributed systems (or other non-deterministic software architectures) in a high-scale, high-latency environment.
- Programming (Required): Ability to write, debug, and maintain code in Rust.
- Communication: Ability to communicate with customers (internal and external) of different technical and non-technical backgrounds.
- Technical Troubleshooting: Ability to diagnose complex system failures using logging and metrics.
- Culture: Embrace a "we" culture, demonstrating outstanding collaboration and communication skills.
Nice-to-Have Qualifications
- Experience with "games as a service" online game development.
- Experience leading (direct management, mentoring, or guiding) engineers.
- Ability to write, debug, and maintain code in C++.
Job Features
| Job Category | Product Management, Technical Services |
An opportunity is available for a Director of Product Management to join teamLFG, a core leadership team within Sony focused on building one of its next great Intellectual Properties (IPs). This director will lead the product strategy for a new, deep social live service game from pre-production through launch and ongoing operations.
This is a full-time, remote position in the United States.
Role Summary and Live Service Ownership
The Director is the business owner of the game's performance, responsible for defining and driving the monetization and retention strategy, building the Product Management team, and ensuring the product meets all financial and KPI targets.
Key Responsibilities
- Product Performance & KPIs: Define and be responsible for the product’s performance against business KPIs. Build and evolve business models across multiple platforms.
- Monetization & Retention: Determine the core retention and monetization loop, partnering closely with design, user testing, and industry best practices.
- Financial Strategy & Modeling: Build out the revenue assumptions to drive financial modeling in partnership with studio leadership and finance, and guide the team to meet those targets.
- Launch & Live Service: Define and refine the soft launch and performance measurement strategy. Prioritize game backlogs in collaboration with the Game Leadership Team to maximize revenue opportunities in the live service environment.
- Team & Data Culture: Build out the Product Management team and help define the Insights Strategy with Analytics and Research, ensuring a data-informed culture of experimentation and shared learning.
- Market Analysis: Lead industry analysis to define the competitive space and understand game systems and features that drive genre performance.
Required Experience and Qualifications
The ideal candidate is a seasoned product leader from the gaming industry with deep expertise in live service models, financial modeling, and leading teams through the full lifecycle of a game's development.
- Experience (Required): Prior experience successfully taking games from pre-production to full launch and running them as a live service with rapid iteration loops.
- Leadership: Seasoned people leader with an ability to manage, lead, and inspire a highly skilled team.
- Business Acumen: Proven business acumen with strong strategic and analytical capabilities, using data to drive strategy and business decisions.
- Industry Expertise: Expertise and a wealth of knowledge of industry trends and the competitor landscape, with a passion for competitive live service games.
- Skills: Strong communication and collaboration skills to partner with multiple teams (development, marketing, leadership). Possesses the vision to see the bigger picture and translate it into innovative initiatives.
Job Features
| Job Category | Product Management |
An incredible opportunity is available for a Vice President of IT to join a highly esteemed and growing Quick Service Restaurant (QSR) group. This executive leader will drive the strategic vision and direct all IT operations for the organization.
This is a full-time, remote position, but candidates must be located in the 12 Midwest states or anywhere in the Eastern Time Zone (EST). The salary range is $135,000 to $165,000, along with a bonus program.
Role Summary and QSR Technology Leadership
This VP will have direct oversight of all IT staff and systems, setting the strategic vision for the entire technology infrastructure. The role requires a hands-on leader capable of blending strategic planning with direct operational accountability in a fast-paced hospitality environment.
Key Responsibilities
- Strategic Vision: Develop a strategic vision for the organization's entire IT infrastructure, encompassing computer and information systems, security, and communication systems.
- Operational Oversight: Direct oversight of all IT operations, including supervision of IT staff and systems. Vigilantly monitor system performance to ensure seamless delivery and operation of IT services.
- Budget Management: Manage and monitor the IT department's annual budget.
- Cross-Functional Collaboration: Collaborate with senior-level stakeholders across the organization to identify business and technology needs and optimize IT usage.
- Standards & Guidance: Establish processes and standards for the selection, implementation, and support of systems. Provide direction, guidance, and training to IT personnel.
Required Experience and Qualifications
The ideal candidate possesses significant IT leadership experience, specifically within the Quick Service Restaurant (QSR) or hospitality sector, coupled with deep technical expertise in systems, security, and specialized QSR technology.
- Experience (Required): A minimum of 4 years of experience in QSR IT operations, including supervision of technology teams and oversight of substantial IT projects.
- Education: A Bachelor's degree in Information Technology, Computer Science, Information Systems, or a related field.
- Technical Knowledge: Profound knowledge of computer systems, security, network and systems administration, databases, data storage systems, and telecommunications within the hospitality industry.
- QSR Specific Skills (Preferred): Preferably, experience with C# and other .NET programming languages, along with familiarity with Point of Sale Extended Markup Language (POSXML) and installing Kitchen Display Units (KDUs).
- Location Requirement: Must be located in the 12 Midwest states or anywhere in EST.
Job Features
| Job Category | Information Technology |