Senior Data Engineer
We are looking for Senior Data Engineers who are experienced, self‑driven, analytical, and strategic. In this role you will work with clients across large, complex data lake and data warehouse environments, bringing disparate datasets together to answer business questions. Your deep expertise in creating and managing datasets, and your proven ability to translate data into meaningful insights through collaboration with product managers, data engineers, business intelligence developers, operations managers, and client leaders, will be integral to strategic decision‑making and complex problem solving.
Responsibilities
- Interface with project managers, business customers, data architects and data modelers to understand requirements and implement solutions.
- Design, develop, and operate highly scalable, high‑performance, low‑cost, and accurate data pipelines in distributed data processing platforms providing ad hoc access to large datasets and computing power.
- Explore and learn the latest big data technologies; evaluate and make decisions around the use of new or existing software products to design data architectures.
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation.
Qualifications
- Bachelor’s degree in Computer Science or a related technical field, or equivalent work experience.
- 5+ years of work experience with ETL, data modeling, and data architecture.
- 5+ years of experience with SQL and large datasets, data modeling, ETL development, and data warehousing, or similar skills.
- 2+ years of experience with the AWS or MS Azure technology stacks. AWS should include Redshift, RDS, S3, EMR, or similar solutions built around Hive / Spark, etc.; MS Azure should include ADF, Azure Blob Storage, Azure Synapse, etc.
- Expertise in ETL optimization: designing, coding, and tuning big data processes using Apache Spark or similar technologies.
- Experience operating very large data warehouses or data lakes.
- Experience building data pipelines and applications to stream and process datasets at low latency.
- Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving the discoverability of data.
- Knowledge of distributed systems and data architecture: designing and implementing batch and stream data processing pipelines, and optimizing the distribution, partitioning, and MPP execution of high‑level data structures.
- Experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing.
Seniority Level
Mid‑Senior level
Employment Type
Full‑time
Job Function
Information Technology, Engineering, and Other
Industries
IT System Data Services and IT System Operations & Maintenance