1063 Data Engineer jobs in Hyderabad
Big Data Engineer
Posted today
Job Description
Who we are
Alef Education began with a bold idea: that every learner deserves a personalised and meaningful education experience. What started in 2016 as a small pilot programme in Abu Dhabi has evolved into one of the world’s most dynamic EdTech companies—reshaping how millions of students engage with learning across the globe.
Today, Alef is proudly headquartered in the UAE, working hand-in-hand with ministries of education, schools, and teachers to bring smart, data-powered platforms into classrooms in over 14,000 schools.
Supporting over 1.1 million students and 50,000 teachers across the UAE, Indonesia, and Morocco, our AI-driven platforms generate 16+ million data points every day, helping drive smarter learning decisions. Whether it’s improving national exam results, boosting classroom engagement, or supporting educators with world-class tools, Alef is committed to impact at scale.
In 2024, Alef made history as the first EdTech company to list on the Abu Dhabi Securities Exchange (ADX), cementing our role as a regional innovator with global reach.
About The Role
As an Alef Big Data Engineer, you will bring a strong understanding of big data technologies and an exceptional ability to code. You will provide technical leadership, working closely with the wider team to ensure high-quality code is delivered in line with project goals and delivery cycles. You will work closely with other teams to deliver rapid prototypes as well as production code, for which you will ensure high accessibility standards are upheld. We expect familiarity with modern frameworks and languages, as well as working practices such as Clean Code, TDD, BDD, continuous integration, continuous delivery, and DevOps.
Key Responsibilities
Defining and developing services and solutions
- Define, design, and develop services and solutions around large-scale data ingestion, storage, and management, e.g. with RDBMS, NoSQL databases, log files, and events.
- Define, design, and run robust data pipelines/batch jobs in a production environment.
- Architect highly scalable, highly concurrent, low-latency systems.
Maintaining, supporting, and enhancing current systems
- Contribute to paying down technical debt and use development approaches that minimize the growth of new technical debt.
- Contribute feedback to improve the quality, readability, and testability of the code base within your team.
- Mentor and train other developers in a non-line management capacity.
- Build tools (One of SBT, Gradle, Maven).
- Ensure all software built is robust and scalable.
Collaborating with internal and external stakeholders
- Participate in sprint planning, working with developers and project teams to ensure projects are deployable and monitorable from the outside.
- Work with third-party and other internal providers to support a variety of integrations.
- As part of the team, you may be expected to participate in some of the second-line in-house support and out-of-hours support rotas.
- Proactively advise on best practices.
To Be The Right Fit, You'll Need
- A degree in Computer Science, Software Engineering, or a related field (preferred).
- A minimum of 5 years' experience in Big Data.
- Follow Clean Code/SOLID principles.
- Adhere to and use TDD/BDD.
- An outstanding ability to develop efficient, readable, highly optimized, maintainable, and clear code.
- Highly proficient in at least one of functional Java, Scala, or Python.
- Knowledge of Azure Big Data/Analytics services – ADLS (Azure Data Lake Storage), HDInsight, Azure Data Factory, Azure Synapse Analytics, Azure Fabric, Azure Event Hubs, Azure Stream Analytics, Azure Databricks
- Experience storing data in systems such as Hadoop HDFS, ADLS, and Event Hubs.
- Experience designing, setting up, and running big data tech stacks such as Hadoop, Azure Databricks, and Spark, and distributed datastores such as Cassandra, DocumentDB, MongoDB, and Event Hubs.
- In-depth knowledge of the Hadoop technology ecosystem: HDFS, Spark, Hive, HBase, Event Hubs, Flume, Sqoop, Oozie, Avro, Parquet.
- Experience debugging a complex multi-server service.
- In-depth knowledge of and experience with IaaS/PaaS solutions (e.g. AWS infrastructure hosting and managed services).
- Familiarity with network protocols - TCP/IP, HTTP, SSL, etc.
- Knowledge of relational and non-relational database systems
- An understanding of continuous integration and delivery.
- Mocking (any of Mockito, ScalaTest, Spock, Jasmine, Mocha).
- An IDE: IntelliJ or Eclipse.
- Build tools (one of SBT, Gradle, Maven).
- Ensure all software built is robust and scalable.
- An ability to communicate technical concepts to a non-technical audience.
- Working knowledge of Unix-like operating systems such as Linux and/or Mac OS X.
- Knowledge of the Git version control system.
- Ability to quickly research and learn new programming tools and techniques.
Big Data Engineer
Posted today
Job Description
Role - Spark / Scala Data Engineer
Experience - 8 to 10 yrs
Location - Bangalore/Chennai/Hyderabad/Delhi/Pune
Must Have:
- Big Data Hadoop: solid Hive and Spark/Scala experience.
- Advanced SQL knowledge; able to test changes and issues properly, replicating the code functionality into SQL.
- Experience with code repositories such as Git and Maven.
- DevOps knowledge (Jenkins, scripts, etc.) and tools used for deploying software into environments; use of Jira.
Good to have:
- Analyst skills: able to translate technical requirements for non-technical partners, deliver clear solutions, and create test-case scenarios.
- Solid Control-M experience: able to create jobs and modify parameters.
- Documentation: experience carrying out data and process analysis to create specification documents.
- Finance knowledge: experience working in a Financial Services / Banking organization with an understanding of Financial Services / Retail, Business, and Corporate Banking.
- AWS knowledge.
- Unix / Linux.
Big Data Engineer - GCP
Posted today
Job Description
Job Title : GCP Data Engineer
Job Location – Chennai / Hyderabad / Bangalore / Pune / Gurgaon / Noida / NCR
Experience: 5 to 10 years of experience in the IT industry in planning, deploying, and configuring GCP-based solutions.
Requirement:
- Mandatory to have knowledge of big data architecture patterns and experience in the delivery of Big Data and Hadoop ecosystems.
- Strong experience in GCP is required; must have delivered multiple large projects with GCP BigQuery and ETL.
- Experience working on GCP-based Big Data deployments (batch/real-time) leveraging components such as BigQuery, Airflow, Google Cloud Storage, Data Fusion, Dataflow, Dataproc, etc.
- Should have experience in SQL/data warehousing.
- Expert in programming languages such as Java and Scala, and in Hadoop.
- Expert in at least one distributed data processing framework, such as Spark (Core, Streaming, SQL), Storm, or Flink.
- Should have worked on orchestration tools such as Oozie, Airflow, Control-M or similar, and Kubernetes.
- Worked on performance tuning, optimization, and data security.
Preferred Experience and Knowledge:
- Excellent understanding of the data technologies landscape/ecosystem.
- Good exposure to development with CI/CD pipelines; knowledge of containerization, orchestration, and Kubernetes Engine would be an added advantage.
- Well versed in the pros and cons of various database technologies such as relational, BigQuery, columnar databases, and NoSQL.
- Exposure to data governance, catalog, lineage, and associated tools would be an added advantage.
- Well versed in SaaS, PaaS, and IaaS concepts and able to drive clients to a decision.
- Good skills in Python and PySpark.
Keywords: GCP, BigQuery, Python, PySpark
Big Data Engineer (GCP Focus)
Posted today
Job Description
We await your innovation at TCS: Hiring a GCP Data Engineer.
Greetings from TCS!
Required Total Experience: 3-10 years
Work location: Hyderabad, Bangalore.
Required Technical Skill Set:
- 3+ years of hands-on development experience on Google Cloud Platform
- Strong in BigQuery
- The following cloud services: Cloud Composer, Pub/Sub, Dataproc, Dataflow, CDAP, Bigtable, GCS
- Hands-on development experience in Airflow DAG creation
- Hands-on development experience in data migration pipeline creation on Pub/Sub with Dataproc and Dataflow
- Python scripting knowledge
- Linux basics
MUST HAVE:
- Hands-on development experience on Google Cloud Platform, covering the following cloud services: Cloud Composer, BigQuery, Pub/Sub, Dataproc, Dataflow, CDAP, Bigtable, GCS
- Hands-on development experience in Airflow DAG creation (a minimal sketch follows this list)
- Hands-on development experience in data migration pipeline creation on Pub/Sub with Dataproc and Dataflow
- Hands-on development experience in Cloud Function creation
- Hands-on development experience in shell scripting and the PySpark and Scala programming languages
- ETL job development using Spark and Python
- Python / Java / Scala programming
- Debugging/troubleshooting of Spark jobs
- Performance-tuning experience for Hadoop/Spark jobs
- Good understanding of data warehouse concepts and data modelling
- Hands-on development experience in BigQuery and performance tuning of BQ queries and BQ data load jobs
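For illustration only (not part of the posting): a minimal Airflow DAG sketch of the kind the "Airflow DAG creation" requirement above refers to, assuming Airflow 2.x. The DAG id, schedule, and the extract/load callables are hypothetical placeholders.

```python
# Illustrative sketch: a minimal Airflow 2.x DAG with two chained PythonOperator tasks.
# The DAG id, schedule, and callables below are hypothetical, not from the job description.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder extract step; a real task might pull a batch from Pub/Sub or GCS.
    print("extracting batch for", context["ds"])


def load(**context):
    # Placeholder load step; a real task might write the batch to BigQuery.
    print("loading batch for", context["ds"])


with DAG(
    dag_id="example_daily_ingest",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run load only after extract succeeds.
    extract_task >> load_task
```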
GOOD TO HAVE:
- Good Telecom domain knowledge
Thanks & Regards
Hari Chandana
Big Data Platform Engineer
Posted today
Job Description
Hadoop Administrator JD
5-7 years' experience in Hadoop engineering, with working experience in Python, Ansible, and DevOps methodologies.
Extensive experience with CDP/HDP cluster and server builds, including control nodes, worker nodes, and edge nodes.
Primary skills: Hadoop, Hortonworks Data Platform (HDP), Cloudera Distribution of Hadoop (CDP), Linux, Python, Ansible, YAML scripting, and Kubernetes.
Secondary skills: other DevOps tools
Senior Data Engineer – Big Data & AWS (Python + PySpark)
Posted 12 days ago
Job Description
We are hiring for the position of Senior Data Engineer – Big Data & AWS (Python + PySpark) at Coforge Ltd.
Job Location: Greater Noida & Hyderabad Only.
Experience: 5 to 7 Years
Employment Type: Full-Time
Send your updated CV to
For queries, WhatsApp:
If you are a data enthusiast with a passion for building scalable, high-performance data solutions, consider joining us.
This role offers the opportunity to work with cutting-edge technologies in a collaborative and innovative environment.
Key Responsibilities:
- Design and develop high-volume, mission-critical data engineering solutions.
- Enhance applications to meet business and audit requirements.
- Optimize Spark jobs for large-scale data processing.
- Collaborate with onshore and offshore teams to ensure platform availability.
- Participate in the full development lifecycle: coding, testing, and release.
- Support weekend release activities and resolve application issues.
Must-Have Skills:
- Strong hands-on experience with Python and PySpark.
- Proficiency in Spark DataFrames, Jupyter Notebook, and PyCharm.
- Experience with AWS services: EMR, Athena, Glue, Lambda, EC2, S3, SNS.
- Familiarity with ETL processes, handling various file formats (CSV, JSON, XML, etc.); a minimal sketch follows this list.
- Knowledge of data warehousing concepts and columnar storage formats (Parquet, Avro, ORC).
- Version control using Git and CI/CD with Jenkins.
- Awareness of DevOps and automated release pipelines.
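For illustration only (not part of the posting): a small PySpark ETL step of the kind the must-have skills above describe, reading a CSV, transforming it with the DataFrame API, and writing Parquet. The S3 paths and column names (amount, status, order_date) are hypothetical assumptions.

```python
# Illustrative sketch: read CSV, transform with DataFrames, write columnar Parquet.
# Paths and column names are hypothetical, not from the job description.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read a raw CSV extract (header row assumed).
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")

# Cast, filter, and aggregate using DataFrame transformations.
daily_totals = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Persist in a columnar format (Parquet), partitioned by date for efficient scans.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)

spark.stop()
```

On EMR the paths would typically point at S3 as shown; run locally, ordinary file paths work the same way.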
Good to Have:
- Experience with AWS databases (Aurora, RDS, Redshift, DynamoDB).
- Exposure to BFSI domain and API Gateway platforms.
- Knowledge of front-end frameworks.
- Experience working with US clients.