Data Engineering Role

Delhi, Delhi 100x.inc

Posted today


Job Description

Minimum Requirements:

- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting
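The SQL side of these requirements can be sketched with a minimal aggregation step. This is an illustrative example, not part of the posting: the `orders` table and its columns are invented, and Python's built-in sqlite3 stands in for MySQL here, though the SQL itself is portable.

```python
import sqlite3

# Hypothetical mini-ETL step (table and column names invented for illustration).
# sqlite3 stands in for the MySQL proficiency the listing asks for.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "north", 10.0), (2, "south", 20.0), (3, "north", 5.0)],
)
# Aggregate per region, the kind of transform an ETL pipeline step performs.
totals = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
```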

Highly Preferred Skills:

- Experience with orchestration tools such as Airflow or similar
- Hands-on with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)

Ideal Candidate Profile:

The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact—not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they’ve read this section in full.

Key Responsibilities:

- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end: from configuration to optimization and debugging
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes—manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions

Compensation range: ₹8.4–12 LPA (fixed base), excluding equity, performance bonus, and revenue share components.

Data Engineering Leader

Ghaziabad, Uttar Pradesh beBeeDataEngineering

Posted today


Job Description

Job Title: Data Engineering Leader

We are seeking a highly skilled Data Engineering Leader to join our team. The ideal candidate will have a strong background in data engineering, cloud computing, and software development.

The successful candidate will be responsible for designing, building, and maintaining large-scale data pipelines and architectures that support business intelligence and analytics. They will also collaborate with cross-functional teams to develop and implement data-driven solutions that drive business outcomes.

Key Responsibilities:
  • Design and implement large-scale data pipelines using cloud-based technologies such as AWS EMR, Lambda, and S3
  • Develop and maintain scalable data architectures that support high-volume data ingestion and processing
  • Collaborate with data scientists and analysts to develop and implement data-driven solutions that drive business outcomes
  • Work with engineering teams to design and implement data integration layers that connect disparate data sources
  • Develop and maintain data quality and governance processes to ensure data accuracy and consistency

Requirements:
  • Bachelor's degree in Computer Science or a related field
  • 7–10 years of experience in data engineering, cloud computing, and software development
  • Experience leading and delivering data warehousing and analytics projects
  • Strong knowledge of big data tools and technologies such as Hadoop, Hive, Spark, and Presto
  • Hands-on experience with SQL, Python, Java, and Scala
  • Experience with cloud platforms such as AWS, GCP, or Azure
  • Strong understanding of data modeling, data architecture, and data governance
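The data-quality responsibility above can be illustrated with a minimal validation gate. The function and field names here are hypothetical, not taken from the posting; the point is the pattern of splitting rows into accepted and rejected sets before load.

```python
# Illustrative data-quality gate (names invented): reject rows missing required fields.
def validate_rows(rows, required=("id", "amount")):
    """Split rows into (valid, rejected) based on required non-null fields."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(k) is not None for k in required):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

good, bad = validate_rows([{"id": 1, "amount": 9.5}, {"id": 2, "amount": None}])
```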

Data Engineering Expert

Ghaziabad, Uttar Pradesh beBeeEngineering

Posted today


Job Description

As a data engineering professional, you will play a key role in supporting strategic data initiatives. The ideal candidate will have hands-on expertise in Databricks, SQL, and Python, and a strong understanding of life sciences data.

Key Responsibilities:
  • Designing and optimizing scalable data pipelines
  • Transforming complex datasets to support business intelligence efforts

You will be comfortable working in a fast-paced environment and collaborating with cross-functional teams to ensure data quality, accessibility, and performance.


Data Engineering Role

Ghaziabad, Uttar Pradesh 100x

Posted today


Data Engineering Role

Ghaziabad, Uttar Pradesh 100x.inc

Posted today


Data Engineering Role

New Delhi, Delhi 100x.inc

Posted 6 days ago


Data Engineering Role

Faridabad, Haryana 100x.inc

Posted 6 days ago


Data Engineering Role

Delhi, Delhi 100x.inc

Posted 6 days ago


Data Engineering Role

Noida, Uttar Pradesh 100x.inc

Posted 6 days ago


Data Engineering Role

Ghaziabad, Uttar Pradesh 100x.inc

Posted 6 days ago

 
