62 ETL Processes Jobs in Delhi

Data Engineering Manager

Delhi, Delhi ₹1,200,000 - ₹3,600,000 YipitData

Posted today

Job Description

About Us:

YipitData is the leading market research and analytics firm for the disruptive economy and most recently raised $475M from The Carlyle Group at a valuation of over $1B. Every day, our proprietary technology analyzes billions of alternative data points to uncover actionable insights across sectors like software, AI, cloud, e-commerce, ridesharing, and payments.

Our data and research teams transform raw data into strategic intelligence, delivering accurate, timely, and deeply contextualized analysis that our customers—ranging from the world's top investment funds to Fortune 500 companies—depend on to drive high-stakes decisions. From sourcing and licensing novel datasets to rigorous analysis and expert narrative framing, our teams ensure clients get not just data, but clarity and confidence.

We operate globally with offices in the US (NYC, Austin, Miami, Mountain View), APAC (Hong Kong, Shanghai, Beijing, Guangzhou, Singapore), and India. Our award-winning, people-centric culture—recognized by Inc. as a Best Workplace for three consecutive years—emphasizes transparency, ownership, and continuous mastery.

What It's Like to Work at YipitData:

YipitData isn't a place for coasting—it's a launchpad for ambitious, impact-driven professionals. From day one, you'll take the lead on meaningful work, accelerate your growth, and gain exposure that shapes careers.

Why Top Talent Chooses YipitData:

  • Ownership That Matters: You'll lead high-impact projects with real business outcomes
  • Rapid Growth: We compress years of learning into months
  • Merit Over Titles: Trust and responsibility are earned through execution, not tenure
  • Velocity with Purpose: We move fast, support each other, and aim high—always with purpose and intention

If your ambition is matched by your work ethic—and you're hungry for a place where growth, impact, and ownership are the norm—YipitData might be the opportunity you've been waiting for.

This is a remote opportunity based in India.

  • Standard IST working hours apply, except on the 2-3 days per week when you will join meetings with the US and LatAm teams. On those days, work hours will be 2:30-10:30 pm IST. (Please note that we allow flexibility on the following day to make up for the previous day's late schedule.)

Why You Should Apply NOW:

We're scaling fast and need a hands-on Data Engineering Manager to join our dynamic Data Engineering team, someone who can both lead people and shape data architecture. The ideal candidate has 3+ years of experience managing data engineers and 5+ years of hands-on experience with PySpark and Python (both are a must), along with Databricks, Snowflake, Apache Iceberg, Apache Flink, various orchestration tools, ETL pipelines, and data modeling.

As our Data Engineering Manager, you will own the data-orchestration strategy end-to-end. You'll lead and mentor a team of engineers while researching, planning, and institutionalizing best practices that boost our pipeline performance, reliability, and cost-efficiency. This is a hands-on leadership role for someone who thrives on deep technical challenges, enjoys rolling up their sleeves to debug or design, and can chart a clear, forward-looking roadmap for various data engineering projects.

As Our Data Engineering Manager, You Will:

  • Report directly to the Director of Data Engineering, who will provide significant, hands-on training on cutting-edge data tools and techniques.
  • Hire, onboard, and develop a high-performing team—1-on-1s, growth plans, and performance reviews.
  • Manage a team of 3-5 Data Engineers.
  • Serve as the team's technical north star—review PRs, pair program, and set engineering standards.
  • Architect and evolve our data platform (batch & streaming) for scale, cost, and reliability.
  • Own the end-to-end vision and strategic roadmap for various projects.
  • Create documentation, architecture diagrams, and other training materials.
  • Translate product and analytics needs into a clear data engineering roadmap and OKRs.

You Are Likely To Succeed If:

  • You hold a Bachelor's or Master's degree in Computer Science, STEM, or a related technical discipline.
  • 7+ years in data engineering (or adjacent), including 2-3+ years formally managing 1-3 engineers.
  • Experience with PySpark and Python is a must (a brief illustration follows this list).
  • Experience with Databricks, Snowflake, Apache Iceberg, Apache Flink, or Microsoft Fabric.
  • Proven experience designing and operating large-scale orchestration and ETL/ELT pipelines.
  • A track record of mentoring engineers, elevating team productivity, and hiring bar-raising talent.
  • The ability to distill complex technical topics into crisp updates for non-technical partners.
  • You are eager to constantly learn new technologies.
  • You are a self-starter who enjoys working with both internal and external stakeholders.
  • You have exceptional verbal and written communication skills.

  • Nice to have: Experience with Airflow, Docker, or equivalent.
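For flavor only (this sketch is not part of the posting), the hands-on pipeline work described above might look like the following minimal PySpark batch transform; the bucket paths, table, and column names are hypothetical:

```python
# Minimal PySpark batch transform: aggregate raw transaction events into
# daily merchant totals. All paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_merchant_totals").getOrCreate()

# Extract: read one day's partition of raw events (hypothetical location).
events = spark.read.parquet("s3://example-bucket/raw/transactions/date=2025-01-01/")

# Transform: drop invalid rows, then aggregate spend per merchant per day.
daily_totals = (
    events
    .filter(F.col("amount") > 0)  # keep only positive-amount transactions
    .groupBy("merchant_id", F.to_date("event_ts").alias("event_date"))
    .agg(
        F.sum("amount").alias("total_spend"),
        F.countDistinct("user_id").alias("unique_buyers"),
    )
)

# Load: overwrite the curated partition consumed by downstream analytics.
daily_totals.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_merchant_totals/date=2025-01-01/"
)
spark.stop()
```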

What We Offer:

Our compensation package includes comprehensive benefits, perks, and a competitive salary:

  • We care about your personal life, and we mean it. We offer flexible work hours, flexible vacation, a generous 401K match, parental leave, team events, a wellness budget, learning reimbursement, and more.
  • Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. See more on our high-impact, high-opportunity work environment above.
  • The final offer may be determined by a number of factors, including, but not limited to, the applicant's experience, knowledge, skills, abilities, as well as internal team benchmarks.

We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer.



Data Engineering Internship

Delhi, Delhi ₹90,000 - ₹2,800,000 NB Freight Logistics

Posted today

Job Description

Data Engineering Intern – NB Freight Logistics

Location: Remote

Duration: 3 Months

Stipend: ₹9,000/month (starting from the 2nd month, subject to satisfactory performance, punctuality, and dedication during the initial 3–4 week evaluation period).

About Us

NB Freight Logistics is a global logistics company specializing in freight forwarding, customs clearance, warehousing, and supply chain solutions. We leverage technology and data to optimize global trade operations.

Role Overview

We are seeking a Data Engineering Intern to help us design and develop scalable data pipelines and funnels for logistics and supply chain operations. This role is ideal for candidates who want to apply their technical skills to real-world logistics challenges while learning how data flows power efficiency in global supply chains.

Responsibilities

  • Assist in designing and building data pipelines for logistics data integration
  • Develop and optimize data funnels to support operational workflows
  • Work with structured and unstructured datasets across freight, customs, and warehousing
  • Ensure data quality, cleaning, and transformation for downstream analytics
  • Collaborate with analysts and operations teams to support decision-making

Requirements

  • Knowledge of Python, SQL, or Java for data handling
  • Familiarity with ETL processes, APIs, or cloud data platforms (AWS/GCP/Azure preferred); a minimal sketch follows this list
  • Understanding of data modeling and pipeline automation
  • Problem-solving skills and attention to detail
  • Interest in applying data engineering concepts to logistics and supply chain
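As a minimal, hedged sketch of the ETL work named above (illustrative only; the file, columns, and table are hypothetical):

```python
# Minimal ETL sketch: extract raw shipment records, clean them, and load
# the result into a local SQLite table. All names are illustrative only.
import sqlite3

import pandas as pd

# Extract: read a hypothetical CSV export of shipment records.
raw = pd.read_csv("shipments_raw.csv")

# Transform: drop rows missing key fields and normalize column types.
clean = raw.dropna(subset=["shipment_id", "origin_port", "destination_port"]).copy()
clean["weight_kg"] = pd.to_numeric(clean["weight_kg"], errors="coerce").fillna(0.0)
clean["etd"] = pd.to_datetime(clean["etd"], errors="coerce")

# Load: write the cleaned table for downstream analytics.
with sqlite3.connect("logistics.db") as conn:
    clean.to_sql("shipments", conn, if_exists="replace", index=False)
```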

What You'll Gain

  • Hands-on experience in data engineering for logistics
  • Practical exposure to pipeline development and funnel optimization
  • Mentorship from industry experts
  • Internship certificate & experience letter

Data Engineering Role

Delhi, Delhi 100x.inc

Posted today

Job Description

Minimum Requirements:

- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable; see the sketch after this list)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting
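For a hedged illustration of those non-negotiable AWS skills (this sketch is not from the posting; the region, database, bucket, and query names are hypothetical):

```python
# Minimal boto3 sketch: start an Athena query and poll until it completes.
# A function like this could form the body of a Lambda task inside a Step
# Functions workflow. All resource names are illustrative only.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

def run_athena_query(sql: str, database: str, output_s3: str) -> str:
    """Start a query, wait for a terminal state, and return the execution id."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)  # simple polling; production code would back off
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return qid

# Hypothetical usage: daily event counts from a raw events table.
run_athena_query(
    "SELECT event_date, COUNT(*) AS events FROM raw_events GROUP BY event_date",
    database="analytics_db",
    output_s3="s3://example-athena-results/",
)
```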

Highly Preferred Skills:

- Experience with orchestration tools such as Airflow or similar
- Hands-on with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)

Ideal Candidate Profile:

The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact—not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they’ve read this section in full.

Key Responsibilities:

- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end: from configuration to optimization and debugging
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes—manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions

Data Engineering Azure Databricks

Delhi, Delhi EXL

Posted today

Job Description

The Data Engineer (DE) Consultant is responsible for designing, developing, and maintaining data assets and data-related products by liaising with multiple stakeholders.

Responsibilities:

- Work with stakeholders to understand data requirements and design, develop, and maintain complex ETL processes.
- Create data integration and data diagram documentation.
- Lead data validation, UAT, and regression testing for new data asset creation.
- Create and maintain data models, including schema design and optimization.
- Create and manage data pipelines that automate the flow of data, ensuring data quality and consistency.

Qualifications and Skills:

- Strong knowledge of Python and PySpark, including the ability to write PySpark scripts for developing data workflows (a brief sketch follows this list).
- Strong knowledge of SQL, Hadoop, Hive, Azure, Databricks, and Greenplum, including writing SQL to query metadata and tables in data management systems such as Oracle, Hive, Databricks, and Greenplum.
- Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks.
- Experience using Hue to run Hive SQL queries and scheduling Apache Oozie jobs to automate data workflows.
- Experience communicating with stakeholders and collaborating effectively with business teams on data testing.
- Strong problem-solving and troubleshooting skills.
- Ability to establish comprehensive data quality test cases and procedures and to implement automated data validation processes.
- Degree in Data Science, Statistics, Computer Science, or a related field, or an equivalent combination of education and experience.
- 4-7 years of experience in data engineering.
- Proficiency in programming languages commonly used in data engineering, such as Python, PySpark, and SQL.
- Experience with the Azure cloud platform, such as developing ETL processes using Azure Data Factory and big data processing and analytics with Azure Databricks.
- Strong communication, problem-solving, and analytical skills, with the ability to manage time and multitask with attention to detail and accuracy.
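As a hedged sketch of the PySpark and SQL work described above (illustrative only; the databases, tables, and columns are hypothetical):

```python
# Minimal PySpark workflow against a Hive-style catalog (as on Databricks):
# inspect metadata, query a table with SQL, and persist a curated copy.
# Database, table, and column names are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("claims_curation")
    .enableHiveSupport()
    .getOrCreate()
)

# Query catalog metadata, as the role requires.
spark.sql("SHOW TABLES IN raw_db").show()

# Transform with SQL: keep recent, positive-amount claims (hypothetical schema).
curated = spark.sql(
    """
    SELECT claim_id, member_id, claim_amount, service_date
    FROM raw_db.claims
    WHERE claim_amount > 0
      AND service_date >= DATE '2024-01-01'
    """
)

# Load: persist the curated table for downstream consumers.
curated.write.mode("overwrite").saveAsTable("curated_db.claims_recent")
spark.stop()
```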