260 Data Engineer jobs in Delhi

Data Engineer

Delhi, Delhi Deloitte

Posted 4 days ago


Job Description

Your potential, unleashed.


India’s impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond.


At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.


The team

As a member of the Operations Transformations team, you will embark on an exciting and fulfilling journey with a group of intelligent, innovative, and globally aware individuals.

We work in conjunction with various institutions, solving key business problems across a broad spectrum of roles and functions, all set against the backdrop of constant industry change.


Your work profile


Job Title: Database Engineer

Experience: 3+ Years

Skills

  • Design, develop, and maintain efficient and scalable ETL/ELT data pipelines using Python or PySpark (a minimal sketch follows this list).
  • Collaborate with data engineers, analysts, and stakeholders to understand data requirements and translate them into technical solutions.
  • Perform data cleansing, transformation, and validation to ensure data quality and integrity.
  • Optimize and troubleshoot performance issues in data processing jobs.
  • Implement data integration solutions for various sources including databases, APIs, and file systems.
  • Participate in code reviews, testing, and deployment processes.
  • Maintain proper documentation for data workflows, systems, and best practices.
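
For orientation only, here is a minimal PySpark sketch of the kind of ETL/ELT pipeline this list describes; it is not Deloitte's code, and the paths, column names, and validation rules are hypothetical.

```python
# A minimal ETL sketch in PySpark. The paths, column names, and validation
# rules below are hypothetical placeholders, not taken from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV (header row assumed).
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cleanse, validate, and standardize types.
clean = (
    raw.dropDuplicates(["order_id"])                         # de-duplicate
       .filter(F.col("order_id").isNotNull())                # reject null keys
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") >= 0)                         # basic validity rule
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Load: write partitioned Parquet for downstream analytics.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/orders"
)
```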


Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 3 to 5 years of hands-on experience as a Data Developer.
  • Proficient in Python and/or PySpark for data processing.
  • Experience working with big data platforms such as Hadoop, Spark, or Databricks.
  • Strong understanding of relational databases and SQL.
  • Familiarity with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery) is a plus.
  • Knowledge of cloud platforms (AWS, Azure, or GCP) is an advantage.


How you’ll grow


Connect for impact


Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.


Empower to lead


You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.


Inclusion for all


At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.





Drive your career


At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.



Everyone’s welcome… entrust your happiness to us

Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you.


Interview tips


We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research and learn some background about the organisation and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.


Data Engineer

Delhi, Delhi Incept Labs

Posted today


Job Description

Permanent

Position: Software Engineer (Data)
Location: Remote, India

At Incept Labs, we believe the future of education and research lies in humans and AI working together side by side. AI brings the ability to process knowledge at scale, while people contribute imagination, values, and lived experience. When combined, they create a partnership where each strengthens the other, opening new ways to discover, adapt, and grow.

We are a small team of scientists, engineers, and builders who are passionate about building domain-specific, next-generation AI solutions to enhance education and research.

About This Role

We’re looking for a Software Engineer with deep expertise in large-scale data processing for LLM development. Data engineering is critical to successful model training and evaluation. You’ll work directly with researchers to accelerate experiments, develop new datasets, improve infrastructure efficiency, and enable key insights across our data assets.

You’ll join a high-impact, compact team responsible for both the architecture and the scaling of Incept’s data and model development infrastructure, and work with highly complex, multi-modal data.

Responsibilities

  • Design, build, and operate scalable, fault-tolerant data infrastructure to support distributed computing and data orchestration for LLM research
  • Develop and maintain high-throughput systems for data ingestion, processing, and transformation to support LLM model development (a minimal sketch follows this listing)
  • Develop synthetic datasets using state-of-the-art solutions
  • Collaborate with research teams to deliver critical data assets for model development and evaluation
  • Implement and maintain monitoring and alerting to support platform reliability and performance
  • Build systems for traceability, reproducibility, and robust quality control to ensure adherence to industry compliance standards

Required Qualifications

  • 5+ years of experience in data infrastructure, ideally supporting high-scale applications or research platforms
  • Fluent in distributed computing frameworks
  • Deeply familiar with cloud infrastructure, data storage architectures, and batch and streaming pipelines
  • Experience with specialized-hardware (GPU, TPU) computing and GPU clusters
  • Strong knowledge of databases, storage systems, and how architecture choices impact performance at scale
  • Familiar with microservices architectures, containerization and orchestration, and both synchronous and asynchronous processing
  • Extensive experience with performance optimization and memory management in high-volume data systems
  • Proactive about documentation, automation, testing, and empowering your teammates with good tooling

This role is fully remote, India-based. Compensation and benefits will vary based on background, skills, and experience levels.
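
As one small, concrete illustration of the ingestion and quality-control work described above, here is a self-contained sketch of JSONL corpus cleaning; it is an assumption-laden toy, not Incept Labs' pipeline.

```python
# Sketch: JSONL corpus cleaning with exact-duplicate removal, one small piece
# of the quality-control work described above. The file names and the "text"
# field are illustrative assumptions, not Incept Labs' actual schema; a real
# pipeline would run distributed and use fuzzy dedup (e.g., MinHash) as well.
import hashlib
import json

def clean_records(in_path: str, out_path: str) -> None:
    seen = set()  # SHA-256 digests of normalized text, for exact dedup
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue  # drop malformed rows; production code would log them
            text = rec.get("text", "").strip()
            if len(text) < 32:  # arbitrary minimum-length quality gate
                continue
            digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
            if digest in seen:
                continue  # exact duplicate already emitted
            seen.add(digest)
            dst.write(json.dumps({"text": text}, ensure_ascii=False) + "\n")

clean_records("raw_corpus.jsonl", "clean_corpus.jsonl")
```

At real corpus scale this step would run on a distributed engine; the single-process version just makes the control flow visible.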

Data Engineer

Delhi, Delhi Astreya

Posted today


Job Description

Permanent

Data Engineer

Astreya offers comprehensive IT support and managed services. These services include Data Center and Network Management, Digital Workplace Services (like Service Desk, Audio Visual, and IT Asset Management), as well as Next-Gen Digital Engineering services encompassing Software Engineering, Data Engineering, and cybersecurity solutions. Astreya’s expertise lies in creating seamless interactions between people and technology to help organizations achieve operational excellence and growth.

Job Description

We are seeking an experienced Data Engineer to join our analytics division. You will be aligned with our Data Analytics and BI vertical, and you will conceptualize and own the build-out of problem-solving data marts for consumption by data science and BI teams, evaluating design and operational tradeoffs within systems.

  • Design, develop, and maintain robust data pipelines and ETL processes using data platforms for the organization’s centralized data warehouse.
  • Create or contribute to frameworks that improve the efficacy of logging data, while working with the Engineering team to triage and resolve issues.
  • Validate data integrity throughout the collection process, performing data profiling to identify and comprehend data anomalies (see the sketch below).
  • Influence product and cross-functional (engineering, data science, operations, strategy) teams to identify data opportunities to drive impact.

Requirements

Experience & Education

  • Bachelor’s degree in Computer Science, Mathematics, a related field, or equivalent practical experience.
  • 5 years of experience coding with SQL or one or more programming languages (e.g., Python, Java, R) for data manipulation, analysis, and automation.
  • 5 years of experience designing data pipelines (ETL) and dimensional data modeling for synchronous and asynchronous system integration and implementation.
  • Experience managing and troubleshooting technical issues, and working with Engineering and Sales Services teams.

Preferred Qualifications

  • Master’s degree in Engineering, Computer Science, Business, or a related field.
  • Experience with cloud-based services relevant to data engineering: data storage, data processing, data warehousing, real-time streaming, and serverless computing.
  • Experience with experimentation infrastructure and measurement approaches in a technology platform.
  • Experience with data processing software (e.g., Hadoop, Spark) and algorithms (e.g., MapReduce, Flume).
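
To make the data-profiling bullet concrete, a pass like the following pandas sketch could run before data lands in a mart; the columns and toy rows are invented.

```python
# Sketch of a data-profiling pass of the kind the posting mentions: surface
# duplicate keys, nulls, and basic numeric stats before data lands in a mart.
# The columns and toy rows are invented for illustration.
import numpy as np
import pandas as pd

def profile(df: pd.DataFrame, key: str) -> dict:
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
        "numeric_summary": df.describe(include=[np.number]).to_dict(),
    }

orders = pd.DataFrame(
    {"order_id": [1, 2, 2, 4], "amount": [10.0, None, 5.5, -3.0]}
)
report = profile(orders, key="order_id")
print(report["duplicate_keys"])  # 1 duplicate key -> an anomaly to triage
```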

Data Engineer

Delhi, Delhi Bahwan CyberTek

Posted today


Job Description

Permanent

Job Title: Data Engineer – Google Cloud Platform (GCP)

Job Summary

We are seeking a skilled and motivated Data Engineer with hands-on experience in building scalable data pipelines and cloud-native data solutions on Google Cloud Platform. The ideal candidate will be proficient in GCP services like Pub/Sub, Dataflow, Cloud Storage, and BigQuery, with a foundational understanding of AI/ML workflows using Vertex AI.

Key Responsibilities

  • Design, develop, and optimize robust data ingestion pipelines using GCP services such as Pub/Sub, Dataflow, and Cloud Storage (see the sketch at the end of this listing).
  • Architect and manage scalable BigQuery data warehouses to support analytics, reporting, and business intelligence needs.
  • Collaborate with data scientists and ML engineers to support AI/ML workflows using Vertex AI (AO Vertex), including model training and deployment.
  • Ensure data quality, reliability, and performance across all pipeline components.
  • Work closely with cross-functional teams to understand data requirements and deliver efficient solutions.
  • Maintain documentation and contribute to best practices in cloud data engineering.

Required Skills & Qualifications

  • 3–6 years of experience in data engineering, with strong exposure to GCP.
  • Proficiency in GCP services: Pub/Sub, Dataflow, Cloud Storage, and BigQuery.
  • Solid understanding of data modeling, ETL/ELT processes, and performance optimization.
  • Experience with Python, SQL, and cloud-native development practices.
  • Familiarity with CI/CD pipelines and version control (e.g., Git).
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

Secondary Skills (Interview-Ready Knowledge)

  • Basic understanding of AI/ML workflows and tools within Vertex AI.
  • Ability to discuss model lifecycle, deployment strategies, and integration with data pipelines.
  • Awareness of MLOps principles and cloud-based ML orchestration.
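
As a rough illustration of the Pub/Sub-to-BigQuery path named in the responsibilities, here is a minimal sketch using the official google-cloud client libraries; the project, subscription, and table IDs are placeholders, not resources from the posting.

```python
# Minimal ingestion sketch: pull JSON messages from Pub/Sub and stream them
# into BigQuery. The project, subscription, and table IDs are placeholders.
import json
from concurrent import futures

from google.cloud import bigquery, pubsub_v1

PROJECT = "my-project"                 # hypothetical
SUBSCRIPTION = "events-sub"            # hypothetical
TABLE = "my-project.analytics.events"  # hypothetical

bq = bigquery.Client(project=PROJECT)
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    row = json.loads(message.data.decode("utf-8"))
    errors = bq.insert_rows_json(TABLE, [row])  # streaming insert
    if errors:
        message.nack()  # let Pub/Sub redeliver on failure
    else:
        message.ack()

streaming_pull = subscriber.subscribe(sub_path, callback=handle)
try:
    streaming_pull.result(timeout=60)  # serve callbacks for one minute
except futures.TimeoutError:
    streaming_pull.cancel()
    streaming_pull.result()  # block until the shutdown completes
```

At production scale this path would more likely run as a Dataflow job; the direct client calls above simply make the message flow explicit.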

Data Engineer

Delhi, Delhi Firstsource

Posted today


Job Description

Permanent

Key Skills and Responsibilities:

We are seeking a Senior Data Engineer with a strong background in cloud-native data engineering, primarily on Microsoft Azure, and familiarity with AWS and GCP. The ideal candidate will have deep expertise in building scalable data pipelines and in implementing enterprise-grade data governance, security, and AI-powered engineering automation. This role will play a pivotal part in designing, developing, and optimizing data ingestion, transformation, and governance frameworks, enabling real-time and batch data analytics across our data platform.

Kindly click on the below link to apply: Uh P5s PEJ

Key Responsibilities:

● Design and implement scalable, robust, and secure data pipelines using Azure Data Factory, Databricks, Synapse Analytics, Azure Data Lake Gen2, Event Hub, and Azure Functions
● Develop and maintain ETL/ELT processes for structured and unstructured data
● Implement Change Data Capture (CDC), streaming pipelines, and batch ingestion
● Work with AWS Glue, S3, Redshift or GCP BigQuery, Dataflow as needed
● Optimize performance and cost across data workloads and cloud environments
● Establish and maintain data quality frameworks, observability tools, and monitoring dashboards (see the sketch at the end of this listing)
● Define and enforce data rules, validations, anomaly detection, and reconciliation
● Implement data lineage and metadata tracking using Azure Purview
● Drive adoption of data cataloguing, profiling, and classification tools
● Collaborate with data stewards and compliance teams to ensure governance alignment
● Implement role-based access control (RBAC), data masking, encryption, and tokenization
● Take responsibility for technical design, coding, unit testing, test case documentation, and walkthroughs for all assigned Azure-related projects to support company business and operational needs
● Ensure software developed follows the defined programming standards and the code/design review processes
● Critically evaluate information gathered from multiple sources, reconcile conflicts, decompose high-level information into details, abstract up from low-level information to a general understanding, and distinguish user requests from underlying true needs
● Collaborate with developers and subject matter experts to establish the technical vision and analyse trade-offs between usability and performance needs
● Mentor junior data engineers and contribute to technical best practices

Qualification & Experience

Technical Skills:

● 4+ years in data engineering, with at least 2+ years on the Azure cloud platform
● Hands-on with Azure Data Factory, Azure Data Lake Gen2, Databricks, Synapse, and Purview
● Proficiency in SQL, PySpark, Python, and data orchestration tools
● Strong understanding of data architecture patterns: lakehouse, medallion, delta architecture
● Familiarity with Snowflake, BigQuery, Redshift, or AWS Glue
● Experience with data versioning and GitOps for data
● Working knowledge of data observability, lineage, cataloguing, and data quality
● Exposure to privacy-enhancing techniques, access control, and security auditing
● Exposure to machine learning use cases in data engineering pipelines: data quality, anomaly detection, and schema change detection
● Exposure to GenAI or agentic AI to automate data cataloguing, metadata enrichment, etc.
● Experience with NLP or LLMs in metadata extraction or data classification is a plus

Soft Skills:

● Strong problem-solving, communication, and stakeholder collaboration skills
● Ability to lead data initiatives and mentor team members
● Proactive in learning and adopting emerging technologies in data & AI

Qualification:

● Candidate must be BE/BTech/MCA with 4 to 8 years’ experience in Data Engineering.

Personal Attributes/Traits

● Consultative ● Socially confident ● Achievement oriented ● Decisive and action oriented ● Creative ● Eager to learn ● Resilient

Competencies

● Business Foresight ● Influencing Others ● Fostering Partnerships With Customers ● Managing Transformation ● Driving Excellence ● Leading Teams ● Working Across Boundaries
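
To ground the data-quality and reconciliation bullets, here is a minimal PySpark sketch of a row-count and key reconciliation gate; the lake paths, key column, and rules are assumptions for illustration.

```python
# Sketch of a lightweight data-quality gate like the one the responsibilities
# describe: row-count and key reconciliation between two lake tables, plus a
# simple validation rule. Lake paths, the key column, and the thresholds are
# assumptions for illustration only.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_gate").getOrCreate()

def reconcile(source: DataFrame, target: DataFrame, key: str) -> None:
    src_n, tgt_n = source.count(), target.count()
    assert src_n == tgt_n, f"row-count mismatch: source={src_n} target={tgt_n}"
    # Keys present in the source but missing from the target.
    missing = source.select(key).subtract(target.select(key)).count()
    assert missing == 0, f"{missing} keys missing from target"

def validate(df: DataFrame, key: str) -> None:
    nulls = df.filter(F.col(key).isNull()).count()
    assert nulls == 0, f"{nulls} rows with a null {key}"

source = spark.read.parquet("/lake/bronze/customers")  # hypothetical path
target = spark.read.parquet("/lake/silver/customers")  # hypothetical path
reconcile(source, target, key="customer_id")
validate(target, key="customer_id")
```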

Data Engineer

New Delhi, Delhi Randstad Enterprise

Posted today


Job Description

Role: Data Engineer

Location: Remote

Shift Timing: 2:00 PM – 11:00 PM

Experience: 2–4 years relevant only (this is a junior position with us)


Must-have skillset:

GCP: 2 years minimum working experience

Python and PySpark: 2 years

SQL: 2 years

Excellent communication

Worked with global stakeholders


Who we are:

Randstad Sourceright’s global talent solutions provide instant access to experienced recruitment and contingent workforce management support by combining technology, analytics, and deep global and local expertise. Our operations consist of client-aligned service delivery teams operating across RPO, MSP, and Blended Workforce Solutions. We have been certified as a “great place to work” for the last three consecutive years and are recognized as a best place to work by Glassdoor.

Group Objective

The mission of the business intelligence team is to create a data-driven culture that empowers leaders to integrate data into daily decisions and strategic planning. We aim to provide visibility, transparency, and guidance on the quantity and quality of results, activities, financial KPIs, and leading indicators, making it easier to spot trends and support data-based decision-making.

Position Objective

As a Senior Data Engineer, you will be responsible for designing, architecting, and implementing robust data solutions in a cloud-based environment (GCP). You will partner with other data engineers and technical teams to ensure the availability, reliability, and performance of our data systems.

Position Summary

Programming & Code Writing

  • Architect and build complex data pipelines using advanced cloud data technologies (a minimal sketch follows this list)
  • Lead efforts to optimize data pipelines for performance, scalability, and cost-efficiency
  • Define industry best practices for building data pipelines
  • Ensure data security, compliance, and governance standards are met
  • Partner with the leadership team to define and implement agile and DevOps methodologies
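
For context, two of the optimization levers this list alludes to, partition pruning and broadcast joins, can be sketched in PySpark as follows; the paths and columns are hypothetical.

```python
# Sketch of two optimization levers the list above alludes to: partition
# pruning at read time and a broadcast join that avoids a shuffle. The lake
# paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline_opt").getOrCreate()

# Partition pruning: filtering on the partition column lets Spark skip files.
events = (
    spark.read.parquet("/lake/events")  # assumed partitioned by event_date
         .filter(F.col("event_date") == "2024-01-01")
)

# Broadcast join: ship the small dimension table to every executor.
products = spark.read.parquet("/lake/dim_products")
enriched = events.join(F.broadcast(products), on="product_id", how="left")

# Align output layout with downstream reads before writing.
(enriched.repartition("event_date")
         .write.mode("append")
         .partitionBy("event_date")
         .parquet("/lake/enriched_events"))
```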


Consulting & Partnership

  • Serve as subject matter expert and define data architecture and infrastructure requirements
  • Partner with business analysts to plan project execution including appropriate product and technical specifications, direction, resources, and establishing realistic completion times
  • Understand data technology trends and identify opportunities to implement new technologies and provide forward-thinking recommendations
  • Proactively partner with internal stakeholders to bridge gaps, provide historical references, and design the appropriate processes


Troubleshooting & Continuous Improvement

  • Design and implement a robust data observability process
  • Resolve escalated reporting requests and communicate proactively and in a timely manner
  • Troubleshoot and provide technical guidance to resolve issues related to misaligned or inaccurate data, data fields, or new customer requirements
  • Maintain new release, migration, and sprint schedules for software upgrades, enhancements, and fixes to aid product evolution
  • Write QA/QC scripts to conduct a first round of testing, and partner with the BA team on test validation for new developments prior to moving to production (see the sketch after this list)
  • Use industry knowledge and feedback to aid in the development of the technology roadmap and future product vision
  • Document standard ways of working via QRGs, intranet pages, and video series
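
A first-round QA/QC script of the kind described might compare aggregate fingerprints of new output against production before BA validation; a minimal sketch, with hypothetical table paths and measures:

```python
# Sketch of a first-round QA/QC script: compare aggregate "fingerprints" of a
# new pipeline's output against the current production table before handing
# off to the BA team. Table paths and measures are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("qaqc").getOrCreate()

def fingerprint(path: str) -> dict:
    df = spark.read.parquet(path)
    row = df.agg(
        F.count("*").alias("rows"),
        F.sum("revenue").alias("revenue_sum"),          # hypothetical measure
        F.countDistinct("account_id").alias("accounts"),
    ).first()
    return row.asDict()

dev, prod = fingerprint("/lake/dev/sales"), fingerprint("/lake/prod/sales")
diffs = {k: (dev[k], prod[k]) for k in dev if dev[k] != prod[k]}
assert not diffs, f"QA failed, mismatched metrics: {diffs}"
print("QA passed: dev output matches production fingerprints")
```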


Senior activities

  • Drive day-to-day development activities of the development team in close collaboration with on-site and offshore resources, scrum masters, and product owners
  • Bootstrap a data engineering team at an early stage in the team’s evolution
  • Provide technical leadership in difficult situations, facilitate contentious discussions, and report up when necessary
  • Guide, mentor, and coach offshore resources
  • Provide input in forming a long-term data strategy


Education

  • Master’s degree in Computer Science / Information Technology or related field, highly preferred


Experience

  • Extensive knowledge of BI concepts and related technologies that help drive sustainable technical solutions
  • Extensive experience with data lakes, ETL, and data warehouses
  • Advanced experience building data pipelines
  • Passion for building quality BI software
  • Project management and/or process improvement experience highly preferred

Knowledge, Skills, and Abilities

  • Polyglot coder with expert-level skills in multiple languages and tools, including Python, R, Java, SQL, relational databases, ERP systems, and DOMO or other data visualization tools (e.g., Tableau)
  • Advanced, proven experience with Google Cloud Platform (GCP) is preferred, but experience with Microsoft Azure or AWS will be considered
  • Any exposure to Kafka, Spark, and Scala is an added advantage
  • Strong understanding of OOP concepts and methodologies
  • Expert-level understanding of data engineering
  • Intrinsic motivation and problem-solving ability
  • Proactive leadership, project management, time management, and problem-solving skills
  • Demonstrated continuous improvement, process documentation, and workflow skills
  • Extensive experience with data analysis, modeling, and data pipelining, including data cleaning, standardizing, scaling, tuning, scheduling, and deployment
  • Experience composing detailed technical documentation and procedures for data models
  • Ability to prioritize and manage multiple projects and tasks, meeting deadlines while maintaining quality
  • Strong drive and commitment to delivering outstanding results
  • Strong follow-up and service orientation


Supervisory Responsibility

Provides guidance, leadership, or training to junior employees

Directly responsible for supervising non-exempt, clerical, or office administrative personnel

Directly responsible for supervising exempt, professional, or technical employees

Directly responsible for supervising supervisory/managerial employees


Organizational Structure:

Job title this position reports to: Manager, Data Engineering




Data Engineer

New Delhi, Delhi Trigyn Technologies

Posted today


Job Description

Job Description:

Experience: Minimum 5 years.
Work Location: Client office in Delhi. Remote working options are not available.

We are seeking a skilled Data Engineer with at least 5 years of experience to join our data analytics team, focusing on building robust data pipelines and systems to support the creation of dynamic dashboards. The role involves designing, building, and optimizing data architecture, enabling real-time data flow for visualization and analytics. The Data Engineer will be responsible for managing ETL processes, ensuring data quality, and supporting the scalable integration of various data sources into our analytics platform.
The ideal candidate should have extensive experience in working with complex data architectures, managing ETL workflows, and ensuring seamless data integration across platforms. They should also have a deep understanding of cloud technologies and database management.

Key Responsibilities:
•Data Pipeline Development
o Design, build, and maintain scalable ETL (Extract, Transform, Load) processes for collecting, storing, and processing structured and unstructured data from multiple sources.
o Develop workflows to automate data extraction from APIs, databases, and external sources.
o Ensure data pipelines are optimized for performance and handle large data volumes with minimal latency.
•Data Integration and Management
o Integrate data from various sources (e.g., databases, APIs, cloud storage) into the centralized data warehouse or data lake to support real-time dashboards.
o Ensure smooth data flow and seamless integration with analytics tools like Power BI and Tableau.
o Manage and maintain data storage solutions, including relational (SQL-based) and NoSQL databases.

•Data Quality and Governance
o Implement data validation checks and quality assurance processes to ensure data accuracy, consistency, and integrity.
o Develop monitoring systems to identify and troubleshoot data inconsistencies, duplications, or errors during ingestion and processing.
o Ensure compliance with data governance policies and standards, including data protection regulations such as the Digital Personal Data Protection (DPDP) Act.
•Database Management and Optimization
o Design and manage both relational and NoSQL databases, ensuring efficient storage, query performance, and reliability.
o Optimize database performance, ensuring fast query execution times and efficient data retrieval for dashboard visualization.
o Implement data partitioning, indexing, and replication strategies to support large-scale data operations (a minimal sketch follows the responsibilities list).
•Data Security and Compliance
o Ensure that all data processes adhere to security best practices, including encryption, authentication, and access control.
o Implement mechanisms for secure data storage and transmission, especially for sensitive government or public sector data.
o Conduct regular audits of data pipelines and storage systems to ensure compliance with relevant data protection regulations.
•Cloud Infrastructure and Deployment
o Deploy and manage cloud-based data solutions using AWS, Azure, or GCP, including data lakes, data warehouses, and cloud-native ETL tools.
o Set up cloud infrastructure to support high availability, fault tolerance, and scalability of data systems.
o Monitor cloud usage and optimize costs for data storage, processing, and retrieval.
•Performance Monitoring and Troubleshooting
o Continuously monitor data pipeline performance and data ingestion times to identify bottlenecks and areas for improvement.
o Troubleshoot and resolve any data flow issues, ensuring high availability and reliability of data for dashboards and analytics.
o Implement logging and alerting mechanisms to detect and address any operational issues proactively.
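
As one concrete reading of the partitioning and indexing item above, the following minimal sketch creates a date-partitioned, clustered BigQuery table with the official Python client; the dataset, table, and schema are invented for illustration.

```python
# Sketch for the partitioning/indexing point above: a date-partitioned,
# clustered BigQuery table so dashboard queries scan only the slices they
# need. Dataset, table, and schema names are invented; clustering plays the
# role an index would in a conventional RDBMS.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("source_id", "STRING"),
    bigquery.SchemaField("metric", "FLOAT"),
]

table = bigquery.Table("my-project.analytics.events", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts"
)
table.clustering_fields = ["source_id"]

client.create_table(table, exists_ok=True)
```
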
Qualifications:
•Education: Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or a related field. A Master’s degree is a plus.
•Experience: At least 5 years of hands-on experience as a Data Engineer, preferably in a data analytics or dashboarding environment.


Data Engineer

Delhi, Delhi Insight Global

Posted today


Job Description

Permanent

100% Remote Data Engineer

Required Skills & Experience

  • 5+ years of experience as a Data Engineer
  • Expertise in Python, PySpark, and SQL
  • Hands-on Python/PySpark coding experience
  • Strong big data experience, GCP preferred
  • Telemetry experience
  • Streaming experience with analytics and data processing
  • Business analytics experience, working directly with business partners to implement feedback

Nice to Have Skills & Experience

  • Strong understanding of IoT systems, especially refrigeration units, HVAC systems, controllers, and sensors
  • Familiarity with equipment and systems from Emerson, Honeywell, or similar
  • Ability to explain how data flows through IoT units and how sensors are designed and monitored

Job Description

We are seeking a Senior Data Engineer with hands-on coding expertise in Python and PySpark, skilled in telemetry, big data, and backend engineering, to help implement an application for a large retailer. Domain experience in IoT systems, HVAC/refrigeration equipment, and other mechanical solutions is highly desired, both to understand the project and to communicate with business stakeholders. A minimal streaming sketch follows below.
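
To ground the streaming-analytics requirement, here is a minimal PySpark Structured Streaming sketch of windowed telemetry aggregation; the rate source stands in for a real IoT feed such as Kafka, and the sensor fields are invented.

```python
# Sketch of windowed telemetry analytics with Structured Streaming. The
# built-in "rate" source stands in for a real IoT feed (e.g., Kafka); the
# unit_id and temp_c columns are invented sensor fields.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("telemetry_stream").getOrCreate()

# Synthetic stream: the rate source emits (timestamp, value) rows.
readings = (
    spark.readStream.format("rate").option("rowsPerSecond", 10).load()
         .withColumn("unit_id", F.col("value") % 5)   # fake sensor id
         .withColumn("temp_c", 2.0 + F.rand() * 8.0)  # fake reading
)

# One-minute tumbling-window average per unit, bounded for late data.
averages = (
    readings.withWatermark("timestamp", "2 minutes")
            .groupBy(F.window("timestamp", "1 minute"), "unit_id")
            .agg(F.avg("temp_c").alias("avg_temp_c"))
)

query = (
    averages.writeStream.outputMode("update")
            .format("console").option("truncate", False)
            .start()
)
query.awaitTermination(60)  # run for a minute in this demo
```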