Spark/Scala Developer
Posted today
Job Description
Greetings from Maneva!
Job Title - Spark/Scala Developer
Experience - 7-15 Years
Location - Bangalore
Notice - Immediate to 15 days
Requirements:
- Excellent knowledge of Spark; a thorough understanding of the Spark framework, performance tuning, etc. is required.
- Excellent knowledge of, and at least 4+ years of hands-on experience with, Spark and Scala.
- Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory.
- Strong Unix and shell scripting skills.
- Excellent interpersonal skills and, for experienced candidates, excellent leadership skills.
- Good knowledge of at least one CSP (Azure, AWS, or GCP) is mandatory; Azure certifications are an additional plus.
If you are excited to grab this opportunity, please apply directly or share your CV.
Big Data Spark
Posted today
Job Description
• Should have extensive working experience in Hive and other components of the Hadoop ecosystem (HBase, Zookeeper, Kafka, and Flume)
• Should be able to understand complex transformation logic and translate it into Spark-SQL queries.
• Unix Shell Scripting and setting up CRON
• Familiar with Data Warehouse concepts and Change Data Capture (CDC and SCD Types)
• Should have worked on the Cloudera distribution, Oozie workflows (or any scheduler), and Jenkins (or any CI tool).
• Prior Experience in Consumer Banking Domain is an advantage.
• Prior Experience in the agile delivery method is an advantage.
• Excellent understanding of technology life cycles and the concepts and practices required to build big data solutions
• Core Java skillset is an added advantage. Airflow is an added advantage.
• Good Knowledge and Experience in any database (Teradata or Data Lake or Oracle or SQL Server) is a plus.
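For candidates unfamiliar with the SCD terminology above: an SCD Type 2 dimension preserves history by expiring the old row and appending a new version. The sketch below illustrates the idea in plain Python; the table layout and the `city` attribute are hypothetical, and a real pipeline would express this as Spark joins or a MERGE statement rather than dict manipulation:

```python
from datetime import date

def scd2_apply(dimension, incoming, today):
    """Apply SCD Type 2 logic: expire changed rows, append new versions.

    dimension: list of dicts with keys id, city, valid_from, valid_to
               (valid_to is None for the current version of a key)
    incoming:  list of dicts with keys id, city (today's snapshot)
    """
    # Index the currently-open version of each key.
    current = {row["id"]: row for row in dimension if row["valid_to"] is None}
    out = list(dimension)
    for rec in incoming:
        cur = current.get(rec["id"])
        if cur is not None and cur["city"] == rec["city"]:
            continue                      # unchanged: keep the open version
        if cur is not None:
            cur["valid_to"] = today       # expire the old version
        out.append({"id": rec["id"], "city": rec["city"],
                    "valid_from": today, "valid_to": None})
    return out
```

The same pattern underlies CDC-driven loads: detect the change, close the old validity window, open a new one.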
Scala Spark
Posted 13 days ago
Job Description
Scala Spark
Exp: 5 to 9 yrs
Location: Bengaluru, Pune, Mumbai & Chennai
Notice period: Less than 30 days
Thanks & Regards,
Indumati N
Snowflake, Spark
Posted today
Job Description
Talent Worx is thrilled to announce an exciting opportunity for the roles of Snowflake and Spark Developers! Join us in revolutionizing the data analytics landscape as we partner with one of the Big 4 firms in India.
What impact will you make?
Your contributions will play a vital role in shaping our clients' success stories by utilizing innovative technologies and frameworks. Envision a dynamic culture that supports inclusion, collaboration, and exceptional performance. With us, you will discover unrivaled opportunities to accelerate your career and achieve your goals.
The Team
In our Analytics & Cognitive (A&C) practice, you will find a dedicated team committed to unlocking the value hidden within large datasets. Our globally-connected network ensures that our clients gain actionable insights that support fact-driven decision-making, leveraging advanced techniques including big data, cloud computing, cognitive capabilities, and machine learning.
Work you will do
As a key player in our organization, you will contribute directly to enhancing our clients’ competitive positioning and performance with innovative and sustainable solutions. We expect you to collaborate closely with our teams and clients to deliver outstanding results across various projects.
Requirements
- 5+ years of relevant experience in Spark and Snowflake, with practical experience in at least one project implementation.
- Strong experience in developing ETL pipelines and data processing workflows using Spark.
- Expertise in Snowflake architecture, including data loading and unloading processes, table structures, and virtual warehouses.
- Proficiency in writing complex SQL queries in Snowflake for data transformation and analysis.
- Experience with data integration tools and techniques, ensuring the seamless ingestion of data.
- Familiarity with building and monitoring data pipelines in a cloud environment.
- Exposure to Agile methodology and tools like Jira and Confluence.
- Strong analytical and problem-solving skills, with meticulous attention to detail.
- Excellent communication and interpersonal skills to foster collaborations with clients and team members.
- Ability to travel as required by project demands.
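As one illustration of the kind of "complex SQL for data transformation" the role calls for, the snippet below deduplicates a raw load by keeping the latest row per key with ROW_NUMBER(). It runs against SQLite purely so the example is self-contained; in Snowflake the same pattern is often written more compactly with QUALIFY. All table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INT, customer_id INT, loaded_at TEXT, amount REAL);
    INSERT INTO raw_orders VALUES
        (1, 10, '2024-01-01', 99.0),
        (1, 10, '2024-01-02', 101.0),   -- later reload of the same order
        (2, 11, '2024-01-01', 50.0);
""")

# Keep only the most recently loaded version of each order_id.
latest = conn.execute("""
    SELECT order_id, customer_id, amount FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY order_id
                                  ORDER BY loaded_at DESC) AS rn
        FROM raw_orders
    ) WHERE rn = 1
    ORDER BY order_id
""").fetchall()
print(latest)  # [(1, 10, 101.0), (2, 11, 50.0)]
```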
Qualifications
- Snowflake certification or equivalent qualification is a plus.
- Prior experience working with both Snowflake and Spark in a corporate setting.
- Formal education in Computer Science, Information Technology, or a related field.
- Proven track record of working with cross-functional teams.
Benefits
Work with one of the Big 4 firms in India
Healthy work environment
Work-life balance
Java Developer(with Spark SQL)
Posted 17 days ago
Job Description
Experience - 4-9 years
Work Location: India (Remote); Bengaluru preferred.
Work Timings: 1:00pm to 10:00pm IST
We are seeking experienced Java Developers with strong Spark SQL skills to join a fast-paced project for a global travel technology client. The role focuses on building API integrations to connect with external data vendors and creating high-performance Spark jobs to process and land raw data into target systems.
You will work closely with distributed teams, including US-based stakeholders, and must be able to deliver quality output in a short timeframe.
Key Responsibilities:
- Design, develop, and optimize Java-based backend services (Spring Boot / Microservices) for API integrations.
- Develop and maintain Spark SQL queries and data processing pipelines for large-scale data ingestion.
- Build Spark batch and streaming jobs to land raw data from multiple vendor APIs into data lakes or warehouses.
- Implement robust error handling, logging, and monitoring for data pipelines.
- Collaborate with cross-functional teams across geographies to define integration requirements and deliverables.
- Troubleshoot and optimize Spark SQL for performance and cost efficiency.
- Participate in Agile ceremonies, daily standups, and client discussions.
Required Skills:
- 4 to 8 years of relevant experience.
- Core Java (Java 8 or above) with proven API development experience.
- Apache Spark (Core, SQL, DataFrame APIs) for large-scale data processing.
- Spark SQL – strong ability to write and optimize queries for complex joins, aggregations, and transformations.
- Experience with API integration (RESTful APIs, authentication, payload handling, and rate limiting).
- Hands-on with data ingestion frameworks and ETL concepts.
- Experience with MySQL or other RDBMS for relational data management.
- Proficiency in Git for version control.
- Strong debugging, performance tuning, and problem-solving skills.
- Ability to work with minimal supervision in a short-term, delivery-focused engagement.
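One of the skills listed above, rate limiting for vendor API calls, is commonly handled client-side with a token bucket. The sketch below is a minimal illustration, not the client's actual implementation; the `rate` and `capacity` parameters are hypothetical, and production code would typically block or retry rather than simply refuse:

```python
import time

class TokenBucket:
    """Minimal client-side rate limiter for outbound API calls.

    rate:     tokens refilled per second
    capacity: maximum burst size
    now:      clock function (injectable for testing)
    """
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)
        self.now = now
        self.last = now()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at capacity.
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A caller would check `bucket.allow()` before each vendor request and back off (or queue the request) when it returns False.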