1,636 Data Engineering jobs in India

Big Data Engineering Specialist

Alappuzha, Kerala beBeeBigData

Posted today

Job Description

Job Title: Big Data Engineering Specialist

We are seeking a skilled Big Data Engineering Specialist to join our team.


About the Role:
  • This is a challenging opportunity for an experienced professional to lead data engineering initiatives and contribute to driving business growth through data-driven insights.

Key Responsibilities:
  1. Design, develop, and deploy large-scale data processing pipelines using Apache Spark, StreamSets, Apache NiFi, or similar frameworks.
  2. Collaborate with cross-functional teams to integrate data engineering solutions into existing systems.
  3. Analyze complex data sets to identify trends and opportunities for improvement.
  4. Stay up-to-date with industry developments and emerging technologies in big data engineering.

Requirements:
  • At least 7 years of development experience on data-specific projects.
  • Mandatory skills: knowledge of the Kafka streaming framework (kSQL, MirrorMaker, etc.) and strong programming skills in Groovy/Java.
  • Good knowledge of data structures, ETL design, and storage.
  • Experience developing near-real-time/streaming data pipelines using Apache Spark, StreamSets, Apache NiFi, or similar frameworks.

Benefits:
  • Opportunity to work on cutting-edge technology projects.
  • Collaborative and dynamic work environment.
  • Professional growth and development opportunities.

Sr Associate Big Data Engineering

Hyderabad, Andhra Pradesh AT&T

Posted today

Job Description

Key Responsibilities:

Data Engineering & Architecture:

  • Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
  • Build and manage scalable data ingestion frameworks for batch and real-time data processing.
  • Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads.
  • Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.

Cloud-Based Data Solutions:

  • Architect and implement modern data lakehouses combining the best of data lakes and data warehouses.
  • Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows.
  • Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.

ETL/ELT Development:

  • Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark.
  • Perform data transformations, cleansing, and validation to prepare datasets for analysis.
  • Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.

Performance Optimization:

  • Optimize Spark jobs and SQL queries for large-scale data processing.
  • Implement partitioning, caching, and indexing strategies to improve performance and scalability of big data workloads.
  • Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.

Collaboration & Stakeholder Management:

  • Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions.
  • Participate in cross-functional design sessions to translate business needs into technical specifications.
  • Provide thought leadership on best practices in data engineering and cloud computing.

Documentation & Knowledge Sharing:

  • Create detailed documentation for data workflows, pipelines, and architectural decisions.
  • Mentor junior team members and promote a culture of learning and innovation.

Required Qualifications:

  • Experience: 7+ years of experience in data engineering, big data, or cloud-based data solutions. Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.
  • Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control.
  • Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure. Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming.
  • Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking. Knowledge of Azure security best practices, such as encryption and key management.

Preferred Skills (Nice to Have):

  • Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning.
  • Knowledge of data visualization tools such as Power BI for creating dashboards and reports.
  • Familiarity with Terraform or ARM templates for infrastructure as code (IaC).
  • Exposure to NoSQL databases like Cosmos DB or MongoDB.
  • Experience with data governance

    Weekly Hours: 40

    Time Type: Regular

    Location: Hyderabad, Andhra Pradesh, India

    It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

    Data Engineering

    New Delhi, Delhi Generis Tek Inc.

    Posted today

    Job Description

    Please contact: To discuss this amazing opportunity, reach out to our Talent Acquisition Specialist Rushi Panchal at email address   or on # .
     
    We have a contract role, Data Engineer (Remote), for our client in New Delhi. Please let me know if you or any of your friends would be interested in this position.
     
    Position Details:
    Data Engineer-Remote-New Delhi
    Location: Remote
    Project Duration: 6-month contract
     
    Job Description:
    We are seeking a skilled Data Engineer who is knowledgeable about and loves working with modern data integration frameworks, big data, and cloud technologies. Candidates must also be proficient with data programming languages (e.g., Python and SQL). The Yum! data engineer will build a variety of data pipelines and models to support advanced AI/ML analytics projects, with the intent of elevating the customer experience and driving revenue and profit growth in our restaurants globally. The candidate will work in our office in Gurgaon, India.
     
    Key Responsibilities 
    As a data engineer, you will:
    •    Partner with KFC, Pizza Hut, Taco Bell & Habit Burger to build data pipelines to enable best-in-class restaurant technology solutions.
    •    Play a key role in our Data Operations team—developing data solutions responsible for driving Yum! growth.
    •    Design and develop data pipelines—streaming and batch—to move data from point-of-sale, back of house, operational platforms, and more to our Global Data Hub
    •    Contribute to standardizing and developing a framework to extend these pipelines across brands and markets
    •    Develop on the Yum! data platform by building applications using a mix of open-source frameworks (PySpark, Kubernetes, Airflow, etc.) and best-of-breed SaaS tools (Informatica Cloud, Snowflake, Domo, etc.).
    •    Implement and manage production support processes around data lifecycle, data quality, coding utilities, storage, reporting, and other data integration points.

    Skills and Qualifications:
    •    Vast background in all things data-related
    •    AWS platform development experience (EKS, S3, API Gateway, Lambda, etc.)
    •    Experience with modern ETL tools such as Informatica, Matillion, or DBT; Informatica CDI is a plus
    •    High level of proficiency with SQL (Snowflake a big plus)
    •    Proficiency with Python for transforming data and automating tasks
    •    Experience with Kafka, Pulsar, or other streaming technologies
    •    Experience orchestrating complex task flows across a variety of technologies
    •    Bachelor’s degree from an accredited institution or relevant experience
     
     
     
     
    About Generis Tek: Generis Tek is a boutique IT/professional staffing firm based in Chicagoland. We offer both contingent labor and permanent placement services to several Fortune 500 clients nationwide. Our philosophy is based on delivering long-term value and building lasting relationships with our clients, consultants, and employees. Our fundamental success lies in understanding our clients' specific needs and working very closely with our consultants to create the right fit for both sides. We aspire to be our clients' most trusted business partner.
     

    Data Engineering

    Mumbai, Maharashtra NR Consulting - India

    Posted today

    Job Description

    Title: Data Engineering
    Location: Mumbai

    Job Description: Data Engineering


    Data Engineering

    Bengaluru, Karnataka ScaleneWorks

    Posted today

    Job Description

    Job Title: Middleware Engineer
    Position: Data Engineer
    Experience: 5-6 years
    Category: IT Infrastructure
    Main location: India, Karnataka, Bangalore
    Employment Type: Full Time
    Qualification: Bachelor's degree in Computer Science or related field or higher.
    Roles and Responsibilities


    Data Engineer - 5-6 years of experience.
    Responsibilities
    ===
    Design, develop, and maintain data architectures, pipelines, and workflows for the collection, processing, storage, and retrieval of large volumes of structured and unstructured data from multiple sources.
    Collaborate with cross-functional teams to identify and prioritize data engineering requirements and to develop and deploy data-driven solutions to address business challenges.
    Build and maintain scalable, fault-tolerant, and high-performance data storage and retrieval systems (e.g., data lakes, data warehouses, databases) on cloud infrastructure such as AWS, Azure, or Google Cloud Platform.
    Develop and maintain ETL workflows, data pipelines, and data transformation processes to prepare data for machine learning and AI applications.
    Implement and optimize distributed computing frameworks such as Hadoop, Spark, or Flink to support high-performance and scalable processing of large data sets.
    Build and maintain monitoring, alerting, and logging systems to ensure the availability, reliability, and performance of data pipelines and data platforms.
    Collaborate with Data Scientists and Machine Learning Engineers to deploy models on production environments and ensure their scalability, reliability, and accuracy.
    Requirements:
    ===
    Bachelor's or master's degree in computer science, engineering, or a related field.
    At least 5-6 years of experience in data engineering, with a strong background in machine learning, cloud computing and big data technologies.
    Experience with at least one major cloud platform (AWS, Azure, GCP).
    Proficiency in programming languages like Python, Java, and SQL.
    Experience with distributed computing technologies such as Hadoop, Spark, and Kafka.
    Familiarity with database technologies such as SQL, NoSQL, NewSQL.
    Experience with data warehousing and ETL tools such as Redshift, Snowflake, or Airflow.
    Strong problem-solving and analytical skills.
    Excellent communication and teamwork skills.
    Preferred qualification:
    ===
    Experience with DevOps practices and tools such as Docker, Kubernetes, Ansible, or Terraform.
    Experience with data visualization tools such as Tableau, Superset, Power BI, Plotly, or D3.js.
    Experience with stream processing frameworks such as Kafka, Pulsar, or Kinesis.
    Experience with data governance, data security, and compliance.
    Experience with software engineering best practices and methodologies such as Agile or Scrum.
    Must Have Skills
    ===
    Data engineer with expertise in machine learning, cloud computing, and big data technologies
    Data engineering experience on multiple clouds, one of them preferably GCP
    Data lakes, data warehouses, databases
    ETL workflows, data pipelines, data platforms
    Hadoop, Spark, or Flink
    Hadoop, Spark, and Kafka
    SQL, NoSQL, NewSQL
    Redshift, Snowflake, or Airflow


    Data Engineering Manager

    Mumbai, Maharashtra UnitedHealth Group

    Posted today

    Job Description

    Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
    We are looking for a skilled Data Engineer to design, build, and maintain scalable, secure, and high-performance data solutions. This role spans the full data engineering lifecycle - from research and architecture to deployment and support- within cloud-native environments, with a strong focus on AWS and Kubernetes (EKS).
    **Primary Responsibilities:**
    + **Data Engineering Lifecycle:** Lead research, proof of concept, architecture, development, testing, deployment, and ongoing maintenance of data solutions
    + **Data Solutions:** Design and implement modular, flexible, secure, and reliable data systems that scale with business needs
    + **Instrumentation and Monitoring:** Integrate pipeline observability to detect and resolve issues proactively
    + **Troubleshooting and Optimization:** Develop tools and processes to debug, optimize, and maintain production systems
    + **Tech Debt Reduction:** Identify and address legacy inefficiencies to improve performance and maintainability
    + **Debugging and Troubleshooting:** Quickly diagnose and resolve unknown issues across complex systems
    + **Documentation and Governance:** Maintain clear documentation of data models, transformations, and pipelines to ensure security and governance compliance
    + **Cloud Expertise:** Leverage advanced skills in AWS and EKS to build, deploy, and scale cloud-native data platforms
    + **Cross-Functional Support:** Collaborate with analytics, application development, and business teams to enable data-driven solutions
    + **Team Leadership:** Lead and mentor engineering teams to ensure operational efficiency and innovation
    + Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
    **Required Qualifications:**
    + Bachelor's degree in Computer Science or related field
    + 5+ years of experience in data engineering or related roles
    + Proven experience designing and deploying scalable, secure, high-quality data solutions
    + Solid expertise in full Data Engineering lifecycle (research to maintenance)
    + Advanced AWS and EKS knowledge
    + Proficient in CI/CD, IaC, and addressing tech debt
    + Proven skill in monitoring and instrumentation of data pipelines
    + Proven advanced troubleshooting and performance optimization abilities
    + Proven ownership mindset with ability to manage multiple components
    + Proven effective cross-functional collaborator (DS, SMEs, and external teams).
    + Proven exceptional debugging and problem-solving skills
    + Proven solid individual contributor with a team-first approach
    _At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

    Data Engineering Analyst

    Chennai, Tamil Nadu UnitedHealth Group

    Posted today

    Job Description

    Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
    The Data Engineering Analyst uses technical and analytical skills to support Optum members on, but not limited to, ongoing data refreshes and implementations that are delivered on time and with the utmost quality, and to take an issue from complete analysis through to its final solution, including creative problem solving and technical decision making.
    **Primary Responsibility:**
    + Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
    **Required Qualifications:**
    + Bachelor's degree in Computer Science or any engineering discipline
    + 2+ years of experience in Data analysis and functional QC
    + Basic knowledge of Cloud (AWS)
    + Basic knowledge of Spark SQL
    + Basic knowledge of Python
    + Basic US Healthcare knowledge
    + Fair knowledge of Cloud (AWS)
    + Technical aptitude for learning new technologies
    + Solid SQL skills
    + Proven solid analytical and problem-solving skills
    + Proven passion for working with large amounts of data in a challenging environment
    _At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

    Data Engineering Consultant

    Noida, Uttar Pradesh UnitedHealth Group

    Posted today

    Job Description

    Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
    **Primary Responsibilities:**
    + Ingest data from multiple on-prem and cloud data sources using various tools & capabilities in Azure
    + Design and develop Azure Databricks processes using PySpark/Spark-SQL
    + Design and develop orchestration jobs using ADF, Databricks Workflow
    + Analyze data engineering processes under development and act as an SME to troubleshoot performance issues and suggest solutions to improve them
    + Build a test framework for Databricks notebook jobs for automated testing before code deployment
    + Design and build POCs to validate new ideas, tools, and architectures in Azure
    + Continuously explore new Azure services and capabilities; assess their applicability to business needs
    + Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
    + Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
    + Contribute to full lifecycle project implementations, from design and development to deployment and monitoring
    + Ensure solutions adhere to security, compliance, and governance standards
    + Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
    + Identify solutions to non-standard requests and problems
    + Mentor and support existing on-prem developers in the cloud environment
    + Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
    **Required Qualifications:**
    + Undergraduate degree or equivalent experience
    + 7+ years of overall experience in Data & Analytics engineering
    + 5+ years of experience working with Azure, Databricks, and ADF, Data Lake
    + 5+ years of experience working with data platform or product using PySpark and Spark-SQL
    + Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
    + In-depth understanding of Azure architecture & ability to come up with efficient design & solutions
    + Highly proficient in Python and SQL
    + Proven excellent communication skills
    + **Key Skill:** Azure Data Engineer - Azure Databricks, Azure Data Factory, Python/PySpark, Terraform
    _At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._