Hadoop Developer – Chandler, AZ (Hybrid) (Contract)


An opportunity has come through our network for a Hadoop Developer at a leading firm in the financial services industry. This is a contract position based in Chandler, Arizona, with a hybrid schedule requiring three days a week in the office. The successful candidate will join the Data Engineering Department and contribute to innovative data solutions. The ideal candidate is a proactive problem-solver with a strong background in data engineering and a passion for building robust data pipelines.


The Role: Building Data Pipelines and Enhancing Data Solutions

As a Hadoop Developer, your primary responsibility will be to build and maintain data pipelines across the big-data stack. This is a hands-on role that requires a deep understanding of data engineering principles and a knack for building efficient, reliable data solutions. You will use technologies such as Hadoop, Hive, and PySpark to manage and process large volumes of data.
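For illustration only, a minimal PySpark job of the kind this role involves might look like the sketch below. It assumes a Hive-enabled cluster; the database, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive-enabled session; assumes the cluster's Hive metastore is configured.
spark = (
    SparkSession.builder
    .appName("daily-transactions-pipeline")
    .enableHiveSupport()
    .getOrCreate()
)

# Read a raw Hive table (database, table, and column names are illustrative).
raw = spark.table("raw_db.transactions")

# A typical pipeline step: filter, derive a column, aggregate.
daily_totals = (
    raw.filter(F.col("status") == "SETTLED")
       .withColumn("txn_date", F.to_date("txn_ts"))
       .groupBy("txn_date", "account_id")
       .agg(F.sum("amount").alias("total_amount"))
)

# Write the curated result back to Hive, partitioned by date.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .saveAsTable("curated_db.daily_account_totals"))
```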

A key part of the job involves using Amazon S3 (AWS) for object storage and data service integration. As part of the data architecture team, you will engage in data modeling and database design to strengthen the overall data platform. You will also implement job scheduling with Autosys to keep data processing efficient and timely, and you will leverage tools such as Power BI and Dremio for data visualization and analysis, turning raw data into actionable insights.
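As a rough sketch of that S3 integration (the bucket names, paths, and schema here are hypothetical, and credentials are assumed to come from the cluster's IAM role or Hadoop configuration), a PySpark job reading and writing object storage might look like:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-ingest").getOrCreate()

# Read Parquet data landed in S3 via the s3a connector
# (bucket and prefix are illustrative).
events = spark.read.parquet("s3a://example-landing-bucket/events/2024-01-01/")

# Light validation before publishing downstream.
clean = events.dropDuplicates(["event_id"]).na.drop(subset=["event_ts"])

# Write back to a curated S3 prefix for tools such as Dremio
# or Power BI to query.
clean.write.mode("overwrite").parquet(
    "s3a://example-curated-bucket/events/2024-01-01/"
)

# In production, a script like this would typically be wrapped in a
# spark-submit command and triggered by an Autosys job definition
# rather than run interactively.
```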


Required Skills and Experience

To succeed in this role, you must have a minimum of 4 years of hands-on experience in data engineering and pipeline development; this is a core requirement for the position. You must also be proficient in Unix/shell scripting and have experience with CI/CD pipeline practices.

A strong understanding of database design principles, preferably with MySQL or an equivalent, is essential. Experience automating processes using Spark, Python, and Hadoop/Hive is also a key requirement. Exposure to GCP cloud data engineering is not required but is considered a strong plus.

This position offers the chance to work in a dynamic, innovative environment on challenging projects that will sharpen your technical skills. You will collaborate with experienced engineers and industry professionals and be part of a pilot program that may lead to future opportunities. It is an excellent fit for a professional eager to take on a new challenge at a leading firm in the financial services industry.


Job Features

Job Category: Data, Finance, Banking, & Accounting
