You
You are an innovative technology enthusiast who enjoys building software products and quickly seeing them work in the real world.
You like to build deeply collaborative teams and guide passionate, cross-functional technologists as they solve new problems.
Above all, you drive results, holding yourself and your teammates to high standards of software craftsmanship and professional quality while keeping up with new web application tools and technologies.
You can agree to disagree with a smile and still deliver results.
Us
We drive valuable Digital Experiences for established enterprises, emerging startups, and other companies through our Data Engineering, Analytics, and Application Development services.
Our customized, enterprise-grade solutions enable our partners to improve operational efficiency and deliver better business outcomes.
Egen's Data Engineering team builds scalable data pipelines in Python, Java, or Scala on AWS. The pipelines we build typically integrate with technologies such as Kafka, Storm, and Elasticsearch.
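For a flavor of what such a pipeline looks like, here is a minimal sketch, assuming a kafka-python consumer and the official Elasticsearch Python client; the broker, topic, and index names are illustrative, not Egen's actual configuration.

    # Minimal sketch: consume JSON events from Kafka and index them into
    # Elasticsearch. Topic, broker, and index names are hypothetical.
    import json

    from kafka import KafkaConsumer          # pip install kafka-python
    from elasticsearch import Elasticsearch  # pip install elasticsearch

    consumer = KafkaConsumer(
        "events",                                  # hypothetical topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    es = Elasticsearch("http://localhost:9200")    # 8.x-style client

    for message in consumer:
        # message.value is already a dict thanks to the deserializer above.
        es.index(index="events", document=message.value)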
We run a continuous deployment pipeline that enables rapid, on-demand releases. Our developers work in an agile process to deliver high-value applications and product packages efficiently.
Your Day
As a Sr. Data Platform Engineer at Egen, you will architect and implement cloud-native data pipelines and infrastructure to enable analytics and machine learning on Egen's rich datasets.
Why we’re looking for you:
- A minimum of 3-5 years of experience in a production-level Data Engineering role building pipelines in Python.
- You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise data warehouse.
- You have implemented analytics applications using multiple database technologies, such as relational, multidimensional (OLAP), key-value, document, or graph.
- You understand the importance of defining data contracts and have experience writing specifications, including REST APIs.
- You write code to transform data between data models and formats, preferably in Python or PySpark (bonus points); a short sketch of this kind of transformation follows this list.
- You've worked in agile environments and are comfortable iterating quickly.
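As a concrete flavor of the transformation work mentioned above, here is a minimal sketch in plain Python; the schema and field names are hypothetical, not a real Egen data model.

    # Minimal sketch: reshape flat CSV-style rows into a nested JSON
    # document model. Field names are hypothetical.
    import csv
    import io
    import json

    RAW = "user_id,city,country,amount\n42,Chicago,US,19.99\n"

    def to_document(row: dict) -> dict:
        """Reshape a flat row into a nested document model."""
        return {
            "user": {"id": int(row["user_id"])},
            "location": {"city": row["city"], "country": row["country"]},
            "amount_cents": round(float(row["amount"]) * 100),
        }

    for row in csv.DictReader(io.StringIO(RAW)):
        print(json.dumps(to_document(row)))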
Bonus points for:
- Experience moving trained machine learning models into production data pipelines (a minimal sketch of one such pattern follows this list).
- Expert knowledge of relational database modeling concepts, strong SQL skills, proficiency in query performance tuning, and a desire to share that knowledge with others.
- Experience building cloud-native applications and supporting technologies/patterns/practices including: AWS, Docker, CI/CD, DevOps, and microservices.
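On the first bonus point, here is a minimal sketch of one common pattern, assuming a scikit-learn binary classifier serialized with joblib; the model path and feature columns are hypothetical.

    # Minimal sketch: load a serialized model inside a batch pipeline step
    # and score incoming rows. Path and feature columns are hypothetical.
    import joblib         # pip install joblib scikit-learn
    import pandas as pd   # pip install pandas

    model = joblib.load("models/churn_model.joblib")  # hypothetical path

    def score_batch(rows: pd.DataFrame) -> pd.DataFrame:
        """Append model predictions to a batch of feature rows."""
        features = rows[["tenure_days", "monthly_spend"]]
        rows = rows.copy()
        # Probability of the positive class for a binary classifier.
        rows["churn_score"] = model.predict_proba(features)[:, 1]
        return rows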