1063 Senior Data Engineer jobs in Hyderabad
Big Data Engineer
Posted today
Job Viewed
Job Description
Specification:
• 2 – 7 years of recent experience in data engineering.
• Bachelor's degree or higher in Computer Science or a related field.
• A solid track record of data management showing flawless execution and attention to detail.
• Strong knowledge of and experience with statistics.
• Programming experience, ideally with Spark, Kafka, or similar technologies, and a willingness to learn new programming languages to meet goals and objectives.
• Experience in C, Perl, JavaScript, or other programming languages is a plus.
• Knowledge of data cleaning, wrangling, visualization, and reporting, with an understanding of the best, most efficient use of the associated tools and applications to complete these tasks.
• In-depth knowledge of data mining, machine learning, natural language processing, or information retrieval.
• Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
• Experience with machine learning toolkits such as H2O, Spark ML, or Mahout (see the example sketch after this list).
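For illustration only, here is a minimal PySpark sketch of the kind of Spark ML work referenced above; the dataset, column names, and model choice are all hypothetical.

```python
# Minimal, illustrative Spark ML example: assemble features and fit a
# logistic regression model. All data and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("spark-ml-sketch").getOrCreate()

# Hypothetical training data: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0), (1.5, 0.3, 1), (2.2, 0.8, 1), (0.4, 1.9, 0)],
    ["f1", "f2", "label"],
)

# Spark ML estimators expect a single vector column of features.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
train = assembler.transform(df)

model = LogisticRegression(maxIter=10).fit(train)
model.transform(train).select("features", "label", "prediction").show()

spark.stop()
```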
Big data engineer
Posted today
Job Viewed
Job Description
Identify the roles/access needed for data migration from the federated bucket to the managed bucket, and build APIs for the same.
Integrate the CDMS framework with the Lake and Data Bridge API.
Data migration from S3 Managed to Hadoop on-prem (see the sketch after this listing).
Jobs for daily and bulk loads.
Test support for AVRO to test lake features.
Test support for compression types like LZO and .ENC to test lake features.
Ab Initio integration: build a feature to create an operation trigger for the ABI pipeline.
Movement to a new data center: SQL Server migration.
Carlstadt to Ashburn (DR switchover).
Develop and maintain data platforms using Python.
Work with AWS and Big Data, design and implement data pipelines, and ensure data quality and integrity.
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
Implement and manage agents for monitoring, logging, and automation within AWS environments.
Handle migration from PySpark to AWS.
(Secondary) The resource must have hands-on development experience with various Ab Initio components such as Rollup, Scan, Join, Partition by Key, Partition by Round Robin, Gather, Merge, Interleave, Lookup, etc.
Must have experience with SQL database programming, SQL performance tuning, and relational model analysis.
Good knowledge of developing UNIX scripts and Oracle SQL/PL-SQL.
Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with our internal Archival Service Platform for efficient data purging.
Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management.
Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs.
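As a hedged illustration of the S3-to-Hadoop migration and daily-load items above (not the actual pipeline), a PySpark batch job of this shape might look as follows; the bucket, paths, and partition column are assumptions.

```python
# Hypothetical daily batch job: copy one load-date partition from a managed
# S3 bucket into on-prem HDFS. Bucket, paths, and column names are illustrative.
import sys
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

def migrate_partition(run_date: str) -> None:
    spark = SparkSession.builder.appName("s3-to-hdfs-migration").getOrCreate()
    # Overwrite only the partition being (re)loaded, not the whole table.
    spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

    # Read the day's data from S3 (assumes hadoop-aws and credentials are configured).
    src = f"s3a://managed-bucket/events/load_date={run_date}/"
    df = spark.read.parquet(src).withColumn("load_date", lit(run_date))

    # Write to the on-prem lake, partitioned by load date so the same job
    # can serve both daily loads and bulk backfills.
    df.write.mode("overwrite").partitionBy("load_date").parquet("hdfs:///data/lake/events")

    spark.stop()

if __name__ == "__main__":
    migrate_partition(sys.argv[1])  # e.g. 2024-01-31
```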
Big Data Engineer
Posted today
Job Viewed
Job Description
Hiring for Big Data:
LOCATION: Chennai, Bengaluru, Hyderabad.
EXPERIENCE: 7–10 years
Notice Period: Immediate joiner or 30 days
Key Skills
- Hands-on experience with technologies like Python, SQL, Snowflake, HDFS, Hive, Scala, Spark, AWS, HBase, and Cassandra.
- Good knowledge of Data Warehousing concepts.
- Proficient in Hadoop distributions such as Cloudera and Hortonworks.
- Good working experience with technologies like Python, Scala, SQL & PL/SQL.
- Developers design and build the foundational architecture to manage massive-scale data storage, processing, and analysis using distributed, cloud-based systems and platforms.
- Coding Big Data pipelines (see the example sketch after this list).
- Managing Big Data infrastructure and pipelines.
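Purely as an illustration of "coding Big Data pipelines", a small Spark-on-Hive rollup might look like the sketch below; the database, table, and column names are assumed.

```python
# Illustrative batch pipeline: aggregate a Hive table with Spark SQL and
# persist the rollup. The sales.orders schema and target table are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("daily-orders-rollup")
         .enableHiveSupport()   # use the Hive metastore for table access
         .getOrCreate())

daily_totals = spark.sql("""
    SELECT order_date, country,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM sales.orders
    WHERE order_date = current_date()
    GROUP BY order_date, country
""")

# Persist the result as a managed table for downstream reporting
# (assumes the analytics database already exists).
daily_totals.write.mode("overwrite").saveAsTable("analytics.daily_order_totals")

spark.stop()
```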
Data Engineer (Big Data)
Posted today
Job Viewed
Job Description
The role: As a senior engineer on the Data team, you will build and run production-grade data and machine learning pipelines and products at scale in an agile setup. You will work closely with our data engineers, architects, and product managers to create the technology that generates and transforms data into applications, insights, and experiences for our users.
Responsibilities:
Design, productionize, and own end-to-end solutions that solve our customers' problems.
Define, plan, and execute strategic projects.
Communicate and align with peers and cross-functional stakeholders.
Drive for technical excellence and pick the right balance between quality and speed of delivery.
Consistently steer the target architecture by identifying areas of critical need based on future growth.
Ensure code quality and maintainability by tackling tech debt, conducting code reviews, initiating refactoring and improving build and test systems.
What we are looking for:
You have expertise in Python and Java/Scala programming languages.
You have experience designing and productionizing large-scale distributed systems built around big data.
You have experience with batch and streaming technologies, e.g., Apache Flink, Apache Spark, Apache Beam, and Google Dataflow (see the streaming sketch after this listing).
You have expertise with distributed data stores (Cassandra, Google Bigtable, Redis, ClickHouse, Elasticsearch) and messaging systems (Kafka, Google Pub/Sub) at scale.
You have experience with Linux, Docker, and public cloud (GCP, AWS, Azure).
You have a strong focus on execution, delivery and customer impact and craft code that is understandable, simple and clean.
You are an excellent communicator who can explain complex problems in clear and concise language to both business and technical audiences.
Job Requirements:
Kafka (2 - 3 yrs) - required
Data Engineering (4 - 6 yrs) - required
SQL (4 - 6 yrs) - required
Apache Spark (2 - 3 yrs) - required
Big Data (2 - 3 yrs) - required
Scala (2 - 3 yrs) - optional
AWS/GCP (2 - 3 yrs) - required
Time zone requirements:
FLEXIBLE WORKING HOURS
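As a hedged sketch of the Kafka and streaming experience listed above, a minimal Spark Structured Streaming consumer might look like this; the topic, brokers, and message schema are assumptions, and the spark-sql-kafka connector must be on the classpath.

```python
# Illustrative Kafka -> Spark Structured Streaming job. Broker address, topic,
# and JSON schema are hypothetical; a real job would sink to a durable store.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          # Kafka delivers the payload as bytes; decode and parse the JSON body.
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("console")                               # console sink for the sketch
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .start())
query.awaitTermination()
```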
AWS Big Data Engineer
Posted today
Job Viewed
Job Description
We are looking for a Senior Data Engineer to be based out of our Chennai, Bangalore & Hyderabad offices. This role involves a combination of hands-on contribution, customer engagement, and technical team management. As a Senior Data Engineer, you will:
- Design and build solutions for near real-time stream processing as well as batch processing on the Big Data platform.
- Set up and run Hadoop development frameworks.
- Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution.
- Explore and learn new technologies for creative business problem-solving.
Job Requirement
- Ability to develop and manage scalable Hadoop cluster environments.
- Ability to design solutions for Big Data applications.
- Experience in Big Data technologies like HDFS, Hadoop, Hive, YARN, Pig, HBase, Sqoop, Flume, etc.
- Working experience with Big Data services in any cloud-based environment.
- Experience in Spark, PySpark, Python or Scala, Kafka, Akka, core or advanced Java, and Databricks.
- Knowledge of how to create and debug Hadoop and Spark jobs.
- Experience in NoSQL technologies like HBase, Cassandra, MongoDB, and the Cloudera or Hortonworks Hadoop distributions.
- Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL.
- Familiarity with data visualization tools like Tableau.
- Good communication and interpersonal skills.
- Minimum 6+ years of professional experience with 3+ years of Big Data project experience.
- B.Tech/B.E. from a reputed institute preferred.
GCP Big Data Engineer
Posted today
Job Viewed
Job Description
About the role: We are looking for a Senior Data Engineer to be based out of our Chennai office. This role involves a combination of hands-on contribution, customer engagement, and technical team management. As a Senior Data Engineer, you will:
- Design and build solutions for near real-time stream processing as well as batch processing on the Big Data platform.
- Set up and run Hadoop development frameworks.
- Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution.
- Explore and learn new technologies for creative business problem-solving.
Job Requirement
Required Experience, Skills & Competencies:
- Ability to develop and manage scalable Hadoop cluster environments.
- Ability to design solutions for Big Data applications.
- Experience in Big Data technologies like HDFS, Hadoop, Hive, YARN, Pig, HBase, Sqoop, Flume, etc.
- Working experience with Big Data services in any cloud-based environment.
- Experience in Spark, PySpark, Python or Scala, Kafka, Akka, core or advanced Java, and Databricks.
- Knowledge of how to create and debug Hadoop and Spark jobs.
- Experience in NoSQL technologies like HBase, Cassandra, MongoDB, and the Cloudera or Hortonworks Hadoop distributions.
- Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL.
- Familiarity with data visualization tools like Tableau.
- Good communication and interpersonal skills.
- Minimum 6-8 years of professional experience with 3+ years of Big Data project experience.
- B.Tech/B.E. from a reputed institute preferred.
Big Data Engineer - Scala
Posted today
Job Viewed
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.
Experience: 7–10 Years (Minimum 3+ years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow.
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
Mandatory Skills
- Scala – minimum 3+ years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration (see the Airflow sketch after this listing).
- Strong experience in distributed data systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.
Preferred Skills
- Exposure to messaging technologies like Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
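As an illustrative orchestration sketch (assuming Airflow rather than NiFi; the DAG name, schedule, and spark-submit scripts are hypothetical), two Spark jobs could be chained like this:

```python
# Hypothetical Airflow DAG chaining two Spark jobs; names and scripts are
# placeholders, not an actual pipeline from this posting.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_big_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_from_kafka",
        bash_command="spark-submit ingest_job.py {{ ds }}",   # {{ ds }} = run date
    )
    transform = BashOperator(
        task_id="transform_and_load",
        bash_command="spark-submit transform_job.py {{ ds }}",
    )
    ingest >> transform   # run transform only after ingestion succeeds
```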
Big Data Engineer, Data Modeling
Posted today
Job Viewed
Job Description
What can you tell your friends when they ask you what you do?
We're looking for an experienced Big Data Engineer who can create innovative new products in the analytics and data space. You will participate in the development of the world's #1 mobile app analytics service. Together with the team, you will build out new product features and applications using agile methodologies and open-source technologies. You will work directly with Data Scientists, Data Engineers, Product Managers, and Software Architects, and will be on the front lines of coding new and exciting analytics and data mining products. You should be passionate about what you do and excited to join an entrepreneurial start-up.
To ensure we execute on our values, we are looking for someone who has a passion for:
As a Big Data Engineer, we will need you to be in charge of model implementation and maintenance, and to build a clean, robust, and maintainable data processing program that can support these projects on huge amounts of data. This includes:
You should recognize yourself in the following…
This position is located in Hyderabad, India.
We are hiring for our engineering team at our data.ai India subsidiary entity, which is in the process of being established. While we await approval from the Indian government, new hires will be interim employees of Innova Solutions, our Global Employer of Record.