1162 Senior Data Engineer jobs in Hyderabad
Big Data Engineer
Posted 1 day ago
Job Description
Experience: 5–9 Years
Location: Hyderabad (Hybrid)
Employment Type: Full-Time
Job Summary:
We are seeking a skilled Big Data Engineer with 5–9 years of experience in building and managing scalable data pipelines and analytics solutions. The ideal candidate will have strong expertise in Hadoop, Apache Spark, SQL, and Data Lake/Data Warehouse architectures. Experience with any major cloud platform (AWS, Azure, or GCP) is preferred.
Required Skills:
- 5–9 years of hands-on experience as a Big Data Engineer.
- Strong proficiency in Apache Spark (PySpark or Scala).
- Solid understanding and experience with SQL and database optimization.
- Experience with data lake or data warehouse environments and architecture patterns.
- Good understanding of data modeling, performance tuning, and partitioning strategies.
- Experience in working with large-scale distributed systems and batch/stream data processing.
Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.
Education:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
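Since the skills list above calls out partitioning strategies, here is a minimal, framework-agnostic sketch of the most common one, hash partitioning. This is plain Python for illustration only; the function and variable names are invented, not taken from Spark or any other engine.

```python
# A minimal sketch of hash partitioning -- the strategy engines like Spark
# and Kafka use to spread keyed records across workers or buckets.
# All names here are illustrative, not from any specific framework.
from collections import defaultdict

def hash_partition(records, num_partitions):
    """Route each (key, value) record to a bucket by hashing its key.

    Records sharing a key always land in the same partition, which is what
    lets per-key aggregations and joins run without cross-partition traffic.
    """
    partitions = defaultdict(list)
    for key, value in records:
        partitions[hash(key) % num_partitions].append((key, value))
    return dict(partitions)

events = [("user1", 10), ("user2", 5), ("user1", 7), ("user3", 2)]
parts = hash_partition(events, num_partitions=4)
# Every record lands in exactly one bucket, and repeats of a key co-locate.
print({pid: len(bucket) for pid, bucket in parts.items()})
```

In practice the number of partitions, and whether hot keys need salting to avoid skew, is exactly the kind of tuning decision the listing's "partitioning strategies" bullet refers to.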
Big Data Engineer
Posted today
Job Description
Specification:
- 2–7 years of recent experience in data engineering.
- Bachelor's Degree or more in Computer Science or a related field.
- A solid track record of data management showing your flawless execution and attention to detail.
- Strong knowledge of and experience with statistics.
- Programming experience, ideally in Spark, Kafka, or similar frameworks, and a willingness to learn new programming languages to meet goals and objectives.
- Experience in C, Perl, JavaScript, or other programming languages is a plus.
- Knowledge of data cleaning, wrangling, visualization, and reporting, with an understanding of the best, most efficient use of associated tools and applications to complete these tasks.
- In-depth knowledge of data mining, machine learning, natural language processing, or information retrieval.
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
- Experience with machine learning toolkits including H2O, Spark MLlib, or Mahout.
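The data cleaning and wrangling bullet above can be made concrete with a small stand-alone sketch. This is plain stdlib Python, and the function name is invented for illustration; at the scale these roles describe, the same step would run in Spark or pandas.

```python
import re

def clean_records(rows):
    """Normalize whitespace, drop blanks, and de-duplicate, keeping order.

    A toy version of the cleaning/wrangling step the role describes.
    """
    seen = set()
    out = []
    for row in rows:
        normalized = re.sub(r"\s+", " ", row).strip().lower()
        if not normalized or normalized in seen:
            continue  # skip empty rows and duplicates
        seen.add(normalized)
        out.append(normalized)
    return out

raw = ["  Hyderabad ", "hyderabad", "", "Chennai\t\n", "Bengaluru"]
print(clean_records(raw))  # → ['hyderabad', 'chennai', 'bengaluru']
```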
Big Data Engineer
Posted today
Job Description
Core Responsibilities
- Design and optimize batch/streaming data pipelines using Scala, Spark, and Kafka
- Implement real-time tokenization/cleansing microservices in Java
- Manage production workflows via Apache Airflow (batch scheduling)
- Conduct root-cause analysis of data incidents using Spark/Dynatrace logs
- Monitor EMR clusters and optimize performance via YARN/Dynatrace metrics
- Ensure data security through HashiCorp Vault (Transform Secrets Engine)
- Validate data integrity and configure alerting systems
Requirements
- Programming: Scala (Spark batch/streaming), Java (real-time microservices)
- Big Data Systems: Apache Spark, EMR, HDFS, YARN resource management
- Cloud & Storage: Amazon S3, EKS
- Security: HashiCorp Vault, tokenization vs. encryption (FPE)
- Orchestration: Apache Airflow (batch scheduling)
- Operational Excellence: Spark log analysis, Dynatrace monitoring, incident handling, data validation
Mandatory Competencies
- Expertise in distributed data processing (Spark on EMR/Hadoop)
- Proficiency in shell scripting and YARN job management
- Ability to implement format-preserving encryption (tokenization solutions)
- Experience with production troubleshooting (executor logs, metrics, RCA)
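The tokenization-vs-encryption requirement above can be illustrated with a toy, vault-style sketch in plain Python. To be clear, this is not HashiCorp Vault's Transform Secrets Engine (the real engine uses format-preserving encryption rather than a lookup table); it only shows the core idea: replace a sensitive value with a same-shaped token and keep the real value inside the vault.

```python
# Toy tokenization store: swaps a sensitive value for a random token of the
# same shape, keeping the real value only inside the "vault". Illustrative
# only -- real systems use FPE (e.g. Vault's Transform engine), not a table.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    @staticmethod
    def _same_shape_token(value):
        # Format-preserving shape: digits become random digits,
        # separators and letters pass through unchanged.
        return "".join(secrets.choice("0123456789") if ch.isdigit() else ch
                       for ch in value)

    def tokenize(self, value):
        if value in self._value_to_token:      # stable token per value
            return self._value_to_token[value]
        token = self._same_shape_token(value)
        while token == value or token in self._token_to_value:
            token = self._same_shape_token(value)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token):
        return self._token_to_value[token]

vault = TokenVault()
card = "4111-1111-1111-1111"
token = vault.tokenize(card)
# Token keeps the card's shape but is not the card number itself.
print(len(token) == len(card), vault.detokenize(token) == card)  # → True True
```

Because the token preserves length and punctuation, downstream systems that validate field formats keep working, which is the practical appeal of FPE-style tokenization.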
Benefits
Insurance - Family
Term Insurance
PF
Paid Time Off - 20 days
Holidays - 10 days
Flexi timing
Competitive Salary
Diverse & Inclusive workspace
Big Data Engineer
Posted today
Job Description
Hiring for Big Data:
LOCATION: Chennai, Bengaluru, Hyderabad.
EXPERIENCE: 7–10 Years
Notice Period: Immediate Joiner or 30 Days
Key Skills
- Hands-on experience with technologies like Python, SQL, Snowflake, HDFS, Hive, Scala, Spark, AWS, HBase, and Cassandra.
- Good knowledge of Data Warehousing concepts.
- Proficient in Hadoop distributions such as Cloudera and Hortonworks.
- Good working experience with technologies like Python, Scala, SQL, and PL/SQL.
- Developers design and build the foundational architecture to manage massive-scale data storage, processing, and analysis using distributed, cloud-based systems and platforms.
- Coding Big Data Pipelines.
- Managing Big Data Infrastructure and Pipelines.
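The "coding Big Data pipelines" bullet above reduces to an extract, transform, load sequence. A minimal generator-based sketch in plain Python (all function names and the toy CSV format are invented for illustration; a production pipeline would use Spark or a similar engine):

```python
def extract(rows):
    """Extract: parse raw CSV-ish lines into records."""
    for line in rows:
        user, amount = line.split(",")
        yield {"user": user, "amount": int(amount)}

def transform(records):
    """Transform: keep positive amounts only (a stand-in for business rules)."""
    for rec in records:
        if rec["amount"] > 0:
            yield rec

def load(records):
    """Load: aggregate per user, as a warehouse table would."""
    totals = {}
    for rec in records:
        totals[rec["user"]] = totals.get(rec["user"], 0) + rec["amount"]
    return totals

raw = ["a,10", "b,-3", "a,5", "c,7"]
print(load(transform(extract(raw))))  # → {'a': 15, 'c': 7}
```

The generator chaining mirrors how distributed engines stream records between stages instead of materializing each intermediate result.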
Requirements
1. Proven experience as a Java Tech Lead with expertise in microservices architecture.
2. Strong proficiency in Spring Boot for API development and Java programming.
3. Extensive experience with Kafka for building scalable, event-driven systems.
4. Solid understanding of containerization and orchestration tools such as Docker and Kubernetes.
5. Hands-on experience implementing and maintaining CI/CD pipelines.
6. Excellent communication skills and the ability to collaborate effectively with diverse teams.
7. Strong problem-solving skills and a proactive attitude toward challenges.
8. Familiarity with cloud platforms (e.g., AWS, Azure) for deploying and managing applications.
Education and Certifications:
1. Bachelor’s or Master’s degree in Computer Science or a related field.
2. Relevant certifications in Java, Spring, or Kafka are a plus.
This job description outlines the key responsibilities and requirements for a Java Tech Lead specializing in microservices, Spring Boot, API development, and Kafka integration.
Big Data Engineer
Posted today
Job Description
Identify roles/access needed for data migration from the federated bucket to the managed bucket, and build APIs for the same
Integrate the CDMS framework with the Lake and Data Bridge APIs
Data migration from managed S3 to on-premises Hadoop
Jobs for daily and bulk loads
Test support for AVRO to test lake features
Test support for compression types like LZO and .ENC to test lake features
Ab Initio integration: build a feature to create an operation trigger for the ABI pipeline
Move to the new datacenter: SQL Server migration
Carlstadt to Ashburn (DR switchover)
Develop and maintain data platforms using Python.
Work with AWS and Big Data, design and implement data pipelines, and ensure data quality and integrity.
Collaborate with cross functional teams to understand data requirements and design solutions that meet business needs .
Implement and manage agents for monitoring, logging, and automation within AWS environments.
Handling migration from PySpark to AWS.
(Secondary) Resource must have hands-on development experience with various Ab Initio components such as Rollup, Scan, Join, Partition by Key, Partition by Round-robin, Gather, Merge, Interleave, Lookup, etc.
Must have experience with SQL database programming, SQL performance tuning, and relational model analysis.
Good knowledge of developing UNIX scripts and Oracle SQL/PL-SQL.
Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with our internal Archival Service Platform for efficient data purging.
Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management.
Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs.
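The archival and purging responsibilities above can be sketched as a retention-policy filter. This is a hedged, stdlib-only illustration: the function name, listing shape, and 90-day cutoff are all assumptions, and a real implementation would page through S3 via boto3 (or S3 lifecycle rules) and hand the selected keys to the Archival Service Platform.

```python
from datetime import datetime, timedelta, timezone

def select_for_purge(objects, retention_days=90, now=None):
    """Return keys older than the retention window.

    `objects` mimics an S3 listing as (key, last_modified) pairs.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [key for key, last_modified in objects if last_modified < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
listing = [
    ("logs/2024-01-01.avro", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("logs/2024-05-20.avro", datetime(2024, 5, 20, tzinfo=timezone.utc)),
]
print(select_for_purge(listing, retention_days=90, now=now))
# → ['logs/2024-01-01.avro']
```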
GCP Big Data Engineer
Posted today
Job Description
About the role:
We are looking for a Senior Data Engineer to be based out of our Chennai office. This role involves a combination of hands-on contribution, customer engagement, and technical team management.
As a Senior Data Engineer, you will:
- Design and build solutions for near real-time stream processing as well as batch processing on the Big Data platform.
- Set up and run Hadoop development frameworks.
- Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution.
- Explore and learn new technologies for creative business problem-solving.
Job Requirement
Required Experience, Skills & Competencies:
- Ability to develop and manage scalable Hadoop cluster environments
- Ability to design solutions for Big Data applications
- Experience in Big Data technologies like HDFS, Hadoop, Hive, YARN, Pig, HBase, Sqoop, Flume, etc.
- Working experience with Big Data services in any cloud-based environment
- Experience in Spark, PySpark, Python or Scala, Kafka, Akka, core or advanced Java, and Databricks
- Knowledge of how to create and debug Hadoop and Spark jobs
- Experience in NoSQL technologies like HBase, Cassandra, or MongoDB, and the Cloudera or Hortonworks Hadoop distributions
- Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL
- Familiarity with data visualization tools like Tableau
- Good communication and interpersonal skills
- Minimum 6–8 years of professional experience with 3+ years of Big Data project experience
- B.Tech/B.E. from a reputed institute preferred
AWS Big Data Engineer
Posted today
Job Description
We are looking for a Senior Data Engineer to be based out of our Chennai, Bangalore & Hyderabad offices. This role involves a combination of hands-on contribution, customer engagement, and technical team management.
As a Senior Data Engineer, you will:
- Design and build solutions for near real-time stream processing as well as batch processing on the Big Data platform.
- Set up and run Hadoop development frameworks.
- Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution.
- Explore and learn new technologies for creative business problem-solving.
Job Requirement
- Ability to develop and manage scalable Hadoop cluster environments
- Ability to design solutions for Big Data applications
- Experience in Big Data technologies like HDFS, Hadoop, Hive, YARN, Pig, HBase, Sqoop, Flume, etc.
- Working experience with Big Data services in any cloud-based environment
- Experience in Spark, PySpark, Python or Scala, Kafka, Akka, core or advanced Java, and Databricks
- Knowledge of how to create and debug Hadoop and Spark jobs
- Experience in NoSQL technologies like HBase, Cassandra, or MongoDB, and the Cloudera or Hortonworks Hadoop distributions
- Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL
- Familiarity with data visualization tools like Tableau
- Good communication and interpersonal skills
- Minimum 6+ years of professional experience with 3+ years of Big Data project experience
- B.Tech/B.E. from a reputed institute preferred
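The near real-time versus batch distinction these listings emphasize usually comes down to windowed aggregation over an event stream. A tumbling-window sketch in plain Python (no Spark; the function name and toy events are invented for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count keys per window -- the core of a streaming groupBy-window
    aggregation, reduced to plain Python for illustration.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[window_start][key] += 1
    return {w: dict(kv) for w, kv in sorted(counts.items())}

events = [(3, "click"), (61, "click"), (65, "view"), (119, "click")]
print(tumbling_window_counts(events, window_seconds=60))
# → {0: {'click': 1}, 60: {'click': 2, 'view': 1}}
```

A batch job would compute the same aggregation over a closed dataset; the streaming version differs mainly in that windows close incrementally as event time advances.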