We are seeking a highly skilled Big Data Developer with strong expertise in Spark, Scala, and Java.
The ideal candidate will have a solid background in big data technologies, relational databases, and scripting in Unix environments.
This role involves developing scalable data processing solutions and contributing to the design and optimization of big data systems.
Key Responsibilities
- Design, develop, and maintain big data applications using Spark and Scala.
- Write efficient and complex SQL queries for data extraction and transformation.
- Develop and maintain backend components using Java.
- Work in Unix/Linux environments and create shell scripts for automation and data processing tasks.
- Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
- Ensure high performance and reliability of big data pipelines and applications.
- Utilize version control systems and CI/CD tools for streamlined development and deployment.
Required Qualifications
- 6–10 years of experience in big data technologies, with a strong focus on Spark and Scala.
- Proficiency in Java programming.
- Strong experience with SQL and relational databases.
- Hands-on experience with Unix/Linux environments and shell scripting.
- Familiarity with the Hadoop ecosystem, including Hive and other distributed systems.
- Experience with CI/CD tools and Git.
- Excellent problem-solving and communication skills.
Preferred Qualifications
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Knowledge of data governance and compliance standards.
- Exposure to Agile methodologies and tools like JIRA.
Certifications
- Oracle Certified Professional (OCP) certification is highly desirable.