38,620 Cloud Data Engineering jobs in India
Cloud Data Engineering (VP)
Posted 1 day ago
Job Description
Experience - 15 to 19 years (hands-on role)
Reporting to - Director
Location - Pune
Skills - Java/Scala/Python, GCP/AWS/Azure, BigQuery, Data Engineering, Apache Spark or Beam
In this role, you will:
- Engineer data transformations and analysis.
- Act as the technology SME for the real-time stream processing paradigm.
- Bring your experience in low-latency, high-throughput, auto-scaling platform design and implementation.
- Implement an end-to-end platform service, clearly assessing operational and non-functional needs.
- Mentor and coach engineering and SME talent to realize their potential and build a high-performance team.
- Manage complex end-to-end functional transformation modules, from planning and estimation to execution.
- Improve platform standards by bringing new ideas and solutions to the table.
Qualifications
To be successful in this role, you should meet the following requirements:
- 15+ years of experience in data engineering technology and tools.
- Must have experience with Java/Scala-based implementations for enterprise-wide platforms.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events, with partitioning for higher throughput (see the sketch after this list).
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience optimizing, querying, and managing analytical data stores.
- Experience with alternative data engineering tools (Apache Beam, Apache Flink, Apache Spark, etc.).
- Ability to reason about your decisions and convince stakeholders and the wider technology team.
- Set the highest standards of integrity and ethics, and lead by example on technology implementations.
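For a concrete picture of the keyed, stateful stream processing this listing asks about, here is a minimal sketch using the Apache Beam Python SDK. The per-key counting logic and the toy input are illustrative assumptions, not part of the posting.

```python
import apache_beam as beam
from apache_beam.transforms.userstate import CombiningValueStateSpec

class CountPerKey(beam.DoFn):
    """Keeps a running count per key; because state is partitioned by key,
    the runner can scale this out across workers for higher throughput."""
    COUNT = CombiningValueStateSpec('count', sum)

    def process(self, element, count=beam.DoFn.StateParam(COUNT)):
        key, _ = element
        count.add(1)
        yield key, count.read()

# Toy bounded input; in production this would be a Kafka or Pub/Sub source.
with beam.Pipeline() as p:
    (p
     | beam.Create([('user-a', 1), ('user-b', 1), ('user-a', 1)])
     | beam.ParDo(CountPerKey())
     | beam.Map(print))
```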
Cloud Data Engineering Role
Posted today
Job Description
Cloud Data Engineer
This role involves designing and developing end-to-end ETL pipelines to ingest, transform, and load data from diverse sources, including Excel files, SAS datasets, and cloud-native services. The target architecture is fully hosted on Google Cloud Platform.
Key Responsibilities:
- Develop scalable and secure data ingestion frameworks using GCP services such as Cloud Storage, BigQuery, Cloud Functions, and Dataflow.
- Collaborate with data scientists and analysts to enable model monitoring, including drift detection, performance tracking, and alerting.
- Implement data validation, quality checks, and audit trails across the pipeline.
- Optimize pipeline performance and cost using GCP-native tools and best practices.
- Automate workflows using Cloud Composer or Cloud Scheduler (see the sketch after this list).
- Maintain compliance with data governance, security, and privacy standards.
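As a sketch of the workflow automation mentioned above, the following minimal Cloud Composer (managed Airflow 2.x) DAG loads files from Cloud Storage into BigQuery on a daily schedule. The bucket, dataset, and table names are placeholder assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # Composer triggers one run per day
    catchup=False,
) as dag:
    # Load newly landed CSV files from Cloud Storage into BigQuery.
    load_to_bq = GCSToBigQueryOperator(
        task_id="gcs_to_bq",
        bucket="example-landing-bucket",  # assumed bucket name
        source_objects=["ingest/*.csv"],
        destination_project_dataset_table="analytics.raw_events",  # assumed
        source_format="CSV",
        write_disposition="WRITE_APPEND",
    )
```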
Required Skills & Qualifications:
- 7+ years of experience in cloud engineering, with at least 5 years on GCP and in banking.
- Strong proficiency in Python, SQL, and data engineering frameworks.
- Hands-on experience with BigQuery, Cloud Storage, Pub/Sub, Dataflow, and Cloud Functions.
- Experience integrating with SAS and handling Excel-based data ingestion.
- Familiarity with model monitoring concepts and ML lifecycle management.
- Knowledge of CI/CD pipelines, Terraform, or Deployment Manager is a plus.
- Excellent problem-solving and communication skills.
Cloud Data Engineering Specialist
Posted today
Job Description
We are actively seeking a skilled professional to fill the role of GCP Data Engineer.
This position involves onboarding new data sources, designing and building cloud data ingest pipelines, and developing automated tests to validate ETL pipelines.
- Data enrichment, standardization, cleansing, and aggregation, and implementing incremental and full data-loading strategies for structured and semi-structured data of medium-to-high volume, velocity, and variety.
- Developing procedures and scripts for data migration, back-population, and feed-to-warehouse initialization, and ensuring pipeline health, performance, and data delivery.
Key qualifications include 4+ years of experience in GCP data engineering.
Ideal Candidate:
The ideal candidate will have excellent communication skills, be able to work collaboratively in a team environment, and demonstrate strong problem-solving abilities.
Requirements:
A bachelor's degree in Computer Science or related field is required. The candidate should have hands-on experience with GCP services, including data ingestion, processing, and storage.
Why this job?
This is an exciting opportunity to join a dynamic team and contribute to the development of innovative data solutions.
Cloud Data Engineering Specialist
Posted today
Job Description
We are seeking a seasoned Cloud Data Engineering Specialist to design, develop, and deploy scalable data pipelines on AWS cloud infrastructure.
Main Responsibilities:
- Data Pipeline Development: Utilize Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift to create efficient data processing workflows.
- Data Transformation and Processing: Implement Apache Spark and SQL for analytics and reporting requirements (see the sketch after this list).
- Orchestration and Automation: Build and maintain workflows using Amazon AppFlow, EventBridge, and Lambda to automate data pipeline execution, scheduling, and monitoring.
- Collaboration and Communication: Work closely with analysts and stakeholders to deliver tailored data solutions.
- Pipeline Optimization: Leverage AWS best practices and cloud-native technologies to ensure performance, reliability, and cost-effectiveness.
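To ground the Spark-and-SQL transformation step referenced above, here is a minimal PySpark sketch of the kind of job AWS Glue would run. The S3 paths and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Read raw order events landed in S3 (assumed path).
orders = spark.read.parquet("s3://example-raw-bucket/orders/")

# Aggregate with the DataFrame API, then expose the result for plain SQL too.
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("customers")))

daily.createOrReplaceTempView("daily_revenue")
spark.sql("SELECT * FROM daily_revenue ORDER BY order_date DESC LIMIT 7").show()

# Write results back for Redshift/BI consumption (assumed path).
daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_revenue/")
```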
Required Skills and Qualifications:
- 8+ Years of Experience: Proven track record of building large-scale data processing pipelines in production environments.
- AWS Expertise: Hands-on experience with Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
- Spark Proficiency: Strong experience with Apache Spark for data processing and analytics.
- Orchestration Skills: Hands-on experience with Amazon AppFlow, EventBridge, and Lambda.
- Data Modeling: Solid understanding of data modeling, database design principles, and SQL and Spark SQL.
Benefits of This Role:
This is an exceptional opportunity to leverage your technical expertise and contribute to the growth and success of our organization. If you're a motivated and skilled professional looking for a new challenge, we encourage you to apply.
Cloud Data Engineering Lead
Posted today
Job Description
We are seeking an experienced Data Engineering Leader to oversee the implementation of full lifecycle data solutions. This includes setting up secure cloud platforms, integrating with various systems, and developing high-performance data pipelines.
This role presents an opportunity to drive enterprise data transformation as part of our analytics projects.
Key responsibilities include:
- Setting up and managing Snowflake platforms
- Designing scalable infrastructure-as-code using Terraform or Pulumi
- Creating robust CI/CD pipelines and integrating with GitHub Actions or Azure DevOps
- Developing ETL/ELT pipelines using dbt, Airflow, or custom Python solutions (see the sketch after this list)
- Ingesting and transforming data from APIs, SaaS apps, and event streams
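As one shape a "custom Python solution" from the list above could take, here is a minimal sketch that loads staged files into Snowflake with the snowflake-connector-python client. The account, credentials, and object names are all placeholder assumptions.

```python
import snowflake.connector

# Connection parameters would normally come from a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",  # assumed account identifier
    user="ETL_USER",
    password="...",             # placeholder; never hard-code in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
try:
    # Stage-to-table load: COPY INTO picks up files already PUT to the stage.
    cur.execute("""
        COPY INTO RAW.EVENTS
        FROM @RAW.EVENTS_STAGE
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results
finally:
    cur.close()
    conn.close()
```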
The ideal candidate will have a strong background in data engineering, experience with Snowflake, and knowledge of cloud providers such as AWS, Azure, or GCP.
Bonus points include:
- SnowPro Advanced certification
- Experience with dbt, Fivetran, Informatica, or Matillion
- Familiarity with Kafka, Delta Lake, Iceberg, or Databricks
Requirements:
- 5+ years of experience in data engineering or platform operations
- 3+ years of deep, hands-on Snowflake experience
- Strong SQL, Python, Terraform/YAML, and Git skills
- Proven CI/CD and DevOps knowledge
- Experience with at least one major cloud provider
Benefits:
You will enjoy working in a remote environment with:
- Competitive pay
- Equity options
- Professional development support
- Health, dental, and wellness programs
Cloud Data Engineering Expert
Posted today
Job Description
Job Summary:
We are seeking a highly skilled Cloud Data Engineering Expert to design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure.
Responsibilities:
- Design and build large-scale data processing pipelines in a production environment using AWS services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
- Implement data processing and transformation workflows using Apache Spark and SQL to support analytics and reporting requirements.
- Build and maintain orchestration workflows to automate data pipeline execution, scheduling, and monitoring.
- Collaborate with analysts and business stakeholders to understand data requirements and deliver scalable data solutions.
- Optimize data pipelines for performance, reliability, and cost-effectiveness by leveraging AWS best practices and cloud-native technologies.
Qualifications:
- 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
- Hands-on experience in designing and building data pipelines on AWS cloud infrastructure.
- Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
- Strong experience with Apache Spark for data processing and analytics.
- Hands-on experience in orchestrating and scheduling data pipelines using Amazon AppFlow, EventBridge, and Lambda.
- Solid understanding of data modeling, database design principles, and SQL and Spark SQL.
Cloud Data Engineering Specialist
Posted today
Job Description
We are seeking a skilled Cloud Data Engineer to join our team.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using cloud-native tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow (see the sketch after this section).
- Architect and implement data lakes and data warehouses on cloud platforms such as AWS, Azure, and GCP.
- Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery, and Azure Synapse.
- Implement data integration processes using tools like Informatica and SAP Data Intelligence.
Data Integration and Management:
- Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake.
- Ensure data quality and integrity through rigorous testing and validation.
Performance Optimization:
- Monitor and optimize the performance of data pipelines and data integration processes.
Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
Documentation and Maintenance:
- Document technical solutions, processes, and workflows.
- Maintain and troubleshoot existing data pipelines and data integrations.
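As a sketch of the Kafka leg of such an ingestion pipeline, the following uses the confluent-kafka client to consume, lightly standardize, and hand off events. The broker address, topic, and group id are illustrative assumptions.

```python
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker
    "group.id": "ingest-pipeline",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["raw-events"])          # assumed topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        event = json.loads(msg.value())
        # Standardize/cleanse before handing off to the lake (placeholder).
        event["ingested"] = True
        print(event)
except KeyboardInterrupt:
    pass
finally:
    consumer.close()
```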
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 7+ years of experience as a Data Engineer or in a similar role.
- Proven experience with cloud platforms: AWS, Azure, and GCP.
- Hands-on experience with cloud-native data integration tools like AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow.
- Strong programming skills in Python, Java, or Scala.
- Proficient in SQL and query optimization techniques.
Benefits:
- Remote work options.
- Flexible working hours.
- Opportunities for professional growth and development.
- A collaborative team environment and state-of-the-art technology.
Cloud Data Engineering Professional
Posted today
Job Description
We are seeking a skilled Data Engineer to join our team and drive the design, development, and maintenance of complex data assets and products. This role involves collaborating with various stakeholders to understand data requirements and create effective solutions.
The ideal candidate will have strong knowledge of Python and PySpark, as well as proficiency in SQL, Hadoop, Hive, Azure, Databricks, and Greenplum. Experience in writing SQL queries to retrieve metadata and tables from various data management systems is also highly valued.
Key Responsibilities:
- Design, develop, and maintain ETL processes that meet business requirements.
- Create detailed documentation of data integration and diagram processes.
- Lead data validation, UAT, and regression testing for new data asset creation.
- Develop and maintain data models, including schema design and optimization.
- Design and manage data pipelines that automate data flow, ensuring data quality and consistency.
Required Skills:
- Strong programming skills in Python, PySpark, and SQL.
- Proficiency in data engineering tools such as Azure Data Factory, Azure Databricks, and Hadoop.
- Experience in big data technologies like Spark and distributed computing frameworks.
- Excellent communication and collaboration skills when working with stakeholders and business teams.
- Strong problem-solving and troubleshooting abilities.
- Ability to establish comprehensive data quality test cases and procedures (see the sketch after this list).
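A minimal sketch of the kind of data quality test cases mentioned above, written in PySpark; the source table and the three rules are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.table("raw.customers")  # assumed source table

checks = {
    "no_null_ids": df.filter(F.col("customer_id").isNull()).count() == 0,
    "unique_ids": df.count() == df.select("customer_id").distinct().count(),
    "valid_country": df.filter(~F.col("country").isin("IN", "US", "GB")).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Fail the run so bad data never reaches downstream assets.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```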
As a member of our team, you will have the opportunity to work on challenging projects, collaborate with experienced professionals, and contribute to the growth and success of our organization.
We offer a competitive salary and benefits package, as well as opportunities for professional development and advancement.
Cloud Data Engineering Specialist
Posted today
Job Description
We are currently seeking experienced professionals to join our global data engineering team. Primary responsibilities include onboarding new data sources, designing and building cloud-based data ingest pipelines, developing custom plugins, handling incremental and full data-loading strategies, performing data enrichment, standardization, cleansing, and aggregation, and carrying out data operations activities.
Key tasks include negotiating IT data contracts and defining effective data validation rules, error scenarios, and retry mechanisms (see the sketch after this paragraph). You must also ensure data integrity, consistency, and compliance with business standards. Additionally, you will review and refine technical requirements, implement data migration procedures, and provide data masking and lineage capabilities as needed.
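As a sketch of the validation-and-retry pattern this paragraph describes, here is a minimal Python example that filters rows through a validation rule and retries the load with exponential backoff. All names and the sample rows are illustrative assumptions.

```python
import time

MAX_RETRIES = 3

def validate(row: dict) -> bool:
    # Example rule: an event must carry a non-empty id and a timestamp.
    return bool(row.get("id")) and "ts" in row

def load_with_retry(rows, sink):
    valid = [r for r in rows if validate(r)]
    rejected = [r for r in rows if not validate(r)]
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            sink(valid)         # e.g. a warehouse bulk insert
            return rejected     # hand rejects to an error queue
        except ConnectionError:
            if attempt == MAX_RETRIES:
                raise
            time.sleep(2 ** attempt)  # exponential backoff

rejects = load_with_retry(
    [{"id": "1", "ts": 1700000000}, {"id": ""}],
    sink=lambda batch: print(f"loaded {len(batch)} rows"),
)
print(f"{len(rejects)} rows rejected by validation")
```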
This role requires a deep understanding of data management principles, including data warehousing, ETL processes, and data governance. The ideal candidate will have experience working with cloud-based platforms, such as GCP, and be proficient in programming languages like Python or Java.
To succeed in this position, you should possess excellent problem-solving skills, be able to work independently, and have strong communication and collaboration abilities. You will be part of a dynamic team that values innovation, teamwork, and continuous learning.