Senior Backend Developer W/Spark
Posted 1 day ago
Job Description
About Us
LiveRamp is the data collaboration platform of choice for the world’s most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched clarity and context while protecting precious brand and consumer trust. LiveRamp offers complete flexibility to collaborate wherever data lives to support the widest range of data collaboration use cases—within organizations, between brands, and across its premier global network of top-quality partners.
Hundreds of global innovators, from iconic consumer brands and tech giants to banks, retailers, and healthcare leaders, turn to LiveRamp to build enduring brand and business value by deepening customer engagement and loyalty, activating new partnerships, and maximizing the value of their first-party data while staying at the forefront of rapidly evolving compliance and privacy requirements.
About the Role
LiveRamp is looking for a strong Backend Engineer with deep expertise in Big Data technologies to build and scale high-performance distributed systems.
You will:
- Build and maintain large-scale, distributed backend systems.
- Design and optimize Big Data ecosystems including Spark, Hadoop/MR, and Kafka (see the sketch after this list).
- Leverage cloud-based platforms (GCP, AWS, Azure) for development and deployment.
- Implement observability practices including distributed tracing, SLOs, and SLIs.
- Write maintainable, extensible, scalable, and high-performance backend code.
- Collaborate with a global, cross-functional team to deliver projects end-to-end.
- Ensure reliability, scalability, and performance of backend infrastructure.
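For illustration only, here is a minimal PySpark Structured Streaming sketch of the Spark-plus-Kafka work described above; the broker address, topic name, and window size are hypothetical, and the Spark Kafka connector package is assumed to be on the classpath:

```python
# Illustrative sketch only: a Spark Structured Streaming job that consumes
# events from Kafka and aggregates them. The broker, topic, and window
# size are hypothetical, not taken from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("event-aggregation-sketch")
    .getOrCreate()
)

# Read a stream of raw events from a (hypothetical) Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder address
    .option("subscribe", "monitoring-events")          # placeholder topic
    .load()
)

# Count events per key in 5-minute windows; Kafka delivers bytes, so cast first.
counts = (
    events
    .select(F.col("key").cast("string"), F.col("timestamp"))
    .groupBy(F.window("timestamp", "5 minutes"), "key")
    .count()
)

# Write the running aggregates to the console for demonstration purposes.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```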
Your team:
You will join the White Box Monitoring Development Team, which consists of one Team Lead Manager (TLM), two Backend Engineers (Level 5), and one Data Analyst. The team builds robust backend systems for monitoring, data processing, and event-driven architectures.
About you:
- 6+ years of experience in software engineering (backend).
- 3+ years of experience with cloud platforms (GCP, AWS, Azure).
- 2+ years of hands-on experience managing/optimizing Big Data ecosystems (Spark, Hadoop/MR, Kafka).
- Proficiency in compiled languages: Java, Scala, or Go.
- 1+ year of experience in observability practices (distributed tracing, SLIs, SLOs, SLAs).
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- Strong knowledge of Object-Oriented Design (OOD) and Object-Oriented Analysis (OOA).
- Proven track record of delivering large-scale, cross-functional projects.
- Strong communication and collaboration skills, especially with remote teams.
- Passion for building reliable, scalable distributed systems.
Preferred Skills:
- Experience with real-time distributed databases (e.g., SingleStore).
- Familiarity with GCP products such as Bigtable, BigQuery, Dataproc, and Pub/Sub.
- Knowledge of event systems design and implementation.
- Experience with infrastructure/deployment tools like Terraform, Kubernetes, Helm, Gradle.
- Experience designing and implementing RESTful APIs at scale.
- Strong technical knowledge of monitoring and reliability practices.
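To make the observability and reliability bullets above concrete, here is a toy example of the SLI/SLO arithmetic behind an error budget; every number in it is invented:

```python
# Minimal sketch of the SLI/SLO arithmetic behind reliability practices.
# The request counts and the 99.9% target are invented for illustration.

SLO_TARGET = 0.999          # e.g., "99.9% of requests succeed" over the window
total_requests = 1_000_000  # hypothetical traffic over a 30-day window
failed_requests = 700       # hypothetical failures in the same window

sli = (total_requests - failed_requests) / total_requests  # measured SLI
error_budget = (1 - SLO_TARGET) * total_requests           # allowed failures
budget_consumed = failed_requests / error_budget           # fraction burned

print(f"SLI: {sli:.5f} (target {SLO_TARGET})")
print(f"Error budget: {error_budget:.0f} requests; consumed: {budget_consumed:.0%}")
```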
Benefits:
- People: Work with talented, collaborative, and friendly people who love what they do.
- Work/Life Harmony: Flexible paid time off, paid holidays, options for working from home, and paid parental leave.
More about us:
LiveRampers are empowered to live our values of committing to shared goals and operational excellence. Connecting LiveRampers to new ideas and to one another is one of our guiding principles, one that informs how we hire, train, and grow our global teams across nine countries and four continents. By continually building inclusive, high-belonging teams, LiveRampers can deliver exceptional work, champion innovative ideas, and be their best selves. Click here to learn more about Diversity, Inclusion, & Belonging (DIB) at LiveRamp.
Spark / Scala Data Engineer
Posted today
Job Description
Role - Spark / Scala Data Engineer
Experience - 8 to 10 years
Location - Bangalore/Chennai/Hyderabad/Delhi/Pune
Must have:
- Solid Big Data experience with Hadoop, Hive, and Spark/Scala.
- Advanced SQL knowledge; able to test changes and issues properly by replicating the code functionality in SQL (see the sketch after this list).
- Experience with code repositories such as Git and Maven.
- DevOps knowledge (Jenkins, scripting, etc.) and tools used for deploying software into environments; use of Jira.
Good to have:
- Analyst skills: able to translate technical requirements for non-technical partners, deliver clear solutions, and create test case scenarios.
- Solid Control-M experience: able to create jobs and modify parameters.
- Documentation: experience carrying out data and process analysis to create specification documents.
- Finance knowledge: experience working in a Financial Services / Banking organization, with an understanding of Retail, Business, and Corporate Banking.
- AWS knowledge.
- Unix / Linux.
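As a rough sketch of the SQL cross-checking requirement above (shown in PySpark for brevity, though the role itself is Spark/Scala; table and column names are hypothetical):

```python
# Sketch of "replicating code functionality into SQL" to test a change:
# compute the same aggregate via the DataFrame API and via Spark SQL,
# then compare. Table and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-crosscheck-sketch").getOrCreate()

df = spark.createDataFrame(
    [("ACC1", 100.0), ("ACC1", 50.0), ("ACC2", 75.0)],
    ["account_id", "amount"],
)
df.createOrReplaceTempView("transactions")

# DataFrame version (stands in for the production Spark logic).
df_result = df.groupBy("account_id").agg(F.sum("amount").alias("total"))

# SQL replica of the same logic, used to validate the code path.
sql_result = spark.sql(
    "SELECT account_id, SUM(amount) AS total FROM transactions GROUP BY account_id"
)

# The two result sets should match row for row.
assert df_result.exceptAll(sql_result).count() == 0
assert sql_result.exceptAll(df_result).count() == 0
```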
Big Data Engineer
Posted today
Job Description
Work Location: Pan India
Experience: 6+ Years
Notice Period: Immediate - 30 days
Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud
JD and Required Skills & Responsibilities:
Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
Solve complex business problems by utilizing a disciplined development methodology.
Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
Analyze the source and target system data, and map the transformations that meet the requirements.
Interact with the client and onsite coordinators during different phases of a project.
Design and implement product features in collaboration with business and Technology stakeholders.
Anticipate, identify, and solve issues concerning data management to improve data quality.
Clean, prepare, and optimize data at scale for ingestion and consumption (a brief sketch follows this list).
Support the implementation of new data management projects and re-structure the current data architecture.
Implement automated workflows and routines using workflow scheduling tools.
Understand and use continuous integration, test-driven development, and production deployment frameworks.
Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
Analyze and profile data for the purpose of designing scalable solutions.
Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.
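A minimal, purely illustrative PySpark batch job for the clean-and-prepare responsibility above; the S3 paths and column names are placeholders, not taken from the posting:

```python
# Illustrative batch job: read raw CSV, drop bad rows, and write
# partitioned Parquet. All paths, columns, and the S3 bucket are
# hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-ingest-sketch").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-bucket/raw/orders/")  # placeholder location
)

cleaned = (
    raw
    .dropna(subset=["order_id", "order_date"])       # drop incomplete rows
    .withColumn("order_date", F.to_date("order_date"))
    .dropDuplicates(["order_id"])
)

# Partitioning by date keeps downstream scans cheap.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")  # placeholder location
)
```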
Required Skills:
5+ years of relevant experience developing data and analytics solutions.
Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive, and PySpark.
Experience with relational SQL.
Experience with scripting languages such as Python.
Experience with source control tools such as GitHub and the related dev process.
Experience with workflow scheduling tools such as Airflow (see the sketch after this list).
In-depth knowledge of AWS Cloud (S3, EMR, Databricks).
Has a passion for data solutions.
Has a strong problem-solving and analytical mindset.
Working experience in the design, development, and testing of data pipelines.
Experience working with Agile teams.
Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
Able to quickly pick up new programming languages, technologies, and frameworks.
Bachelor’s degree in Computer Science.
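For the workflow-scheduling skill above, a minimal Airflow sketch with two dependent tasks; the DAG id, schedule, and task logic are invented:

```python
# Minimal Airflow sketch: a daily DAG with two dependent tasks.
# DAG id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling raw data")  # placeholder for the real extract step


def load():
    print("loading curated data")  # placeholder for the real load step


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```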
Big Data Engineer
Posted today
Job Description
We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.
Key Responsibilities:
Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).
Model and structure data for performance, scalability, and usability.
Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.
Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.
Build and manage data lake/lakehouse architectures in alignment with best practices.
Optimize ETL performance and manage cost-effective data operations.
Collaborate closely with cross-functional teams including data science, analytics, and software engineering.
Ensure data quality, integrity, and security across all stages of the data lifecycle.
Required Skills & Qualifications:
7 to 10 years of relevant experience in big data engineering.
Advanced proficiency in Python.
Strong skills in SQL for complex data manipulation and analysis.
Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.
Proven track record of handling large-scale datasets (TBs) in production environments.
Cloud development experience with Azure (preferred), AWS, or GCP.
Solid understanding of data lake and data lakehouse architectures.
Expertise in ETL performance tuning and cost optimization techniques (see the sketch after this list).
Knowledge of data structures, algorithms, and modern software engineering practices.
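Two common ETL tuning moves hinted at above, sketched in PySpark; the data sizes and the shuffle-partition setting are purely illustrative:

```python
# Sketch of two common ETL tuning moves: broadcasting a small dimension
# table to avoid a shuffle, and sizing shuffle partitions. The sizes and
# the 200 -> 64 choice are purely illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Fewer shuffle partitions than the 200 default can cut overhead for
# modest data volumes; the right number depends on cluster and data size.
spark.conf.set("spark.sql.shuffle.partitions", "64")

facts = spark.range(10_000_000).withColumn("dim_id", F.col("id") % 100)
dims = spark.range(100).withColumnRenamed("id", "dim_id")

# Broadcast join: ships the small table to every executor instead of
# shuffling the large one.
joined = facts.join(F.broadcast(dims), on="dim_id")
print(joined.count())
```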
Soft Skills:
Strong communication skills with the ability to explain complex technical concepts clearly and concisely.
Self-starter who learns quickly and takes ownership.
High attention to detail with a strong sense of data quality and reliability.
Comfortable working in an agile, fast-changing environment with incomplete requirements.
Preferred Qualifications:
Experience with tools like Apache Airflow, Azure Data Factory, or similar.
Familiarity with CI/CD and DevOps in the context of data engineering.
Knowledge of data governance, cataloging, and access control principles.
Skills: Python, SQL, AWS, Azure, Hadoop
Senior Big Data Engineer
Posted today
Job Description
With a focus on innovation and acceleration, Veltris empowers clients to build, modernize, and scale intelligent products that deliver connected, AI-powered experiences. Our experience-centric approach, agile methodologies, and exceptional talent enable us to streamline product development, maximize platform ROI, and drive meaningful business outcomes across both digital and physical ecosystems.
In a strategic move to strengthen our healthcare offerings and expand industry capabilities, Veltris has acquired BPK Technologies. This acquisition enhances our domain expertise, broadens our go-to-market strategy, and positions us to deliver even greater value to enterprise and mid-market clients in healthcare and beyond.
Position - Senior Big Data Engineer
Must have Big Data analytics platform experience.
• Key stacks: Spark, Druid, Drill, ClickHouse.
• 8+ years of experience in Python/Java, CI/CD, infrastructure & cloud, and Terraform, plus depth in:
o Big Data pipelines: Spark, Kafka, Glue, EMR, Hudi, Schema Registry, Data Lineage (see the sketch after this list).
o Graph DBs: Neo4j, Neptune, JanusGraph, Dgraph.
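As a rough illustration of the Kafka leg of such a pipeline, a minimal sketch using the kafka-python client; the broker, topic, and payload are invented:

```python
# Rough sketch of producing pipeline events to Kafka with kafka-python.
# Broker address, topic name, and payload shape are all placeholders.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",  # placeholder address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Emit one event; a real pipeline would stream these from upstream systems.
producer.send("pipeline-events", {"entity_id": 42, "action": "updated"})
producer.flush()
```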
Preferred Qualifications:
• Master’s degree (M.Tech/MS) or Ph.D. in Computer Science, Information Technology, Data Science, Artificial Intelligence, Machine Learning, Software Engineering, or a related technical field.
• Candidates with an equivalent combination of education and relevant industry experience will also be considered.
Disclaimer:
The information provided herein is for general informational purposes only and reflects the current strategic direction and service offerings of Veltris. While we strive for accuracy, Veltris makes no representations or warranties regarding the completeness, reliability, or suitability of the information for any specific purpose. Any statements related to business growth, acquisitions, or future plans, including the acquisition of BPK Technologies, are subject to change without notice and do not constitute a binding commitment. Veltris reserves the right to modify its strategies, services, or business relationships at its sole discretion. For the most up-to-date and detailed information, please contact Veltris directly.
GCP Big Data Engineer
Posted 11 days ago
Job Description
We are seeking an experienced GCP Big Data Engineer with 8–10 years of expertise in designing, developing, and optimizing large-scale data processing solutions. The ideal candidate will bring strong leadership capabilities, technical depth, and a proven track record of delivering end-to-end big data solutions in cloud environments.
Key Responsibilities:
- Lead and mentor teams in designing scalable and efficient ETL pipelines on Google Cloud Platform (GCP).
- Drive best practices for data modeling, data integration, and data quality management.
- Collaborate with stakeholders to define data engineering strategies aligned with business goals.
- Ensure high performance, scalability, and reliability in data systems using SQL and PySpark.
Must-Have Skills:
- GCP expertise in data engineering services (BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage); a brief sketch follows this list.
- Strong programming skills in SQL and PySpark.
- Hands-on experience in ETL pipeline design, development, and optimization.
- Strong problem-solving and leadership skills with experience guiding data engineering teams.
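A hedged sketch of querying BigQuery from Python, one of the GCP services named above; the project, dataset, and table are placeholders, and the google-cloud-bigquery package plus ambient credentials are assumed:

```python
# Sketch of a BigQuery query from Python. The project, dataset, and
# table names are placeholders; credentials come from the environment.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date
"""

# Run the query and iterate over the result rows.
for row in client.query(query).result():
    print(row.event_date, row.events)
```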
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in GCP Data Engineering preferred.