Job Description:
We are looking for a skilled Senior Java Spark Developer with hands-on experience in Java, Spark, Impala, Hive, and Hadoop. The ideal candidate will have a strong background in developing and leading Spark-based big data solutions. In this lead role, you will design, implement, and optimize high-performance data processing applications using Java, Spark, and related big data technologies. If you are passionate about big data, distributed computing, and Java development, and you have the skills and experience outlined below, we would love to hear from you. Join our innovative team and help build cutting-edge data processing and analytics solutions using Java and Apache Spark.
Responsibilities:
- Technical Leadership: Provide technical leadership and guidance in the design and development of Java and Spark-based data processing applications. Mentor and lead a team of developers to deliver high-quality solutions.
- Solution Design: Architect and design scalable and efficient data processing solutions using Java, Spark, Impala, Hive, and Hadoop, addressing complex business requirements and performance considerations.
- Hands-on Development: Lead by example through hands-on development, coding, and debugging of Java and Spark applications, ensuring adherence to best practices and coding standards.
- Performance Optimization: Identify and implement performance optimizations for data processing workflows, leveraging Spark, Impala, and Hive to deliver efficient processing and analytics.
- Data Integration: Collaborate with data engineering and integration teams to ensure seamless integration of data processing applications with Hadoop-based data platforms and ecosystem tools.
- Testing and Quality Assurance: Implement testing strategies and quality assurance processes for Java and Spark applications, ensuring robustness, reliability, and scalability.
- Stakeholder Collaboration: Work closely with business stakeholders, data analysts, and other technical teams to understand requirements, provide technical insights, and deliver solutions aligned with business needs.
- Documentation and Best Practices: Document technical designs, architecture, and development best practices for Java and Spark-based applications, promoting knowledge sharing and standardization.
Requirements:
- A minimum of 8 years of software development experience with a focus on Java, Spark, and big data technologies.
- Proven hands-on experience in Java programming, Spark development, and working with Hadoop ecosystem tools including Impala and Hive.
- Strong understanding of distributed computing principles and big data processing concepts.
- Experience in performance tuning, optimization, and troubleshooting of Spark and Hadoop-based applications.
- Proficiency in SQL and data querying using Impala, Hive, or similar tools.
- Excellent leadership and communication skills, with the ability to lead a technical team and collaborate effectively with stakeholders.
- Familiarity with agile software development methodologies and DevOps practices.