120 Big Data Technologies jobs in Noida
Data Engineering Role
Posted today
Job Description
Minimum Requirements:
- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting
Highly Preferred Skills:
- Experience with orchestration tools such as Airflow or similar (see the sketch after this list)
- Hands-on with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)
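To make the preferred stack concrete, below is a minimal sketch of an orchestrated ETL step, assuming Airflow 2.4+ (TaskFlow API) and the duckdb package; the paths, table names, and DAG name are illustrative placeholders, not references to any actual system.

```python
# Minimal Airflow + DuckDB ETL sketch (all names hypothetical).
from datetime import datetime

import duckdb
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_etl():
    @task
    def extract() -> str:
        # Stub extract: in this role the source would more likely be
        # MySQL or Athena; a local CSV keeps the sketch self-contained.
        path = "/tmp/raw_orders.csv"
        with open(path, "w") as f:
            f.write("order_id,amount\n1,100\n2,250\n")
        return path

    @task
    def transform_load(path: str) -> None:
        # DuckDB reads the CSV directly and materialises an aggregate table.
        con = duckdb.connect("/tmp/warehouse.duckdb")
        con.execute(
            "CREATE OR REPLACE TABLE daily_totals AS "
            "SELECT count(*) AS orders, sum(amount) AS revenue "
            f"FROM read_csv_auto('{path}')"
        )
        con.close()

    transform_load(extract())


orders_etl()
```

The same two-task shape carries over when the extract step reads from a production source; only the connection logic changes.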
Ideal Candidate Profile:
The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact—not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they’ve read this section in full.
Key Responsibilities:
- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end: from configuration to optimization and debugging (see the sketch after this list)
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes—manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions
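As one concrete illustration of the AWS work above, here is a minimal sketch that triggers and monitors a Step Functions execution with boto3; the state machine ARN, region, and input payload are hypothetical placeholders.

```python
# Trigger a Step Functions run and poll it to completion (boto3).
import json
import time

import boto3

sfn = boto3.client("stepfunctions")

# Start one execution of a (hypothetical) pipeline state machine.
resp = sfn.start_execution(
    stateMachineArn=(
        "arn:aws:states:ap-south-1:123456789012:stateMachine:etl-pipeline"
    ),
    input=json.dumps({"run_date": "2024-01-01"}),
)

# Poll until the execution leaves the RUNNING state.
status = "RUNNING"
while status == "RUNNING":
    time.sleep(10)
    status = sfn.describe_execution(executionArn=resp["executionArn"])["status"]

print(f"Execution finished with status: {status}")
```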
Compensation range: ₹8.4–12 LPA (fixed base), excluding equity, performance bonus, and revenue share components.
Data Engineering Leader
Posted today
Job Description
We are seeking a highly skilled Data Engineering Leader to join our team. The ideal candidate will have a strong background in data engineering, cloud computing, and software development.
The successful candidate will be responsible for designing, building, and maintaining large-scale data pipelines and architectures that support business intelligence and analytics. They will also collaborate with cross-functional teams to develop and implement data-driven solutions that drive business outcomes.
Key Responsibilities:
- Design and implement large-scale data pipelines using cloud-based technologies such as AWS EMR, Lambda, and S3
- Develop and maintain scalable data architectures that support high-volume data ingestion and processing
- Collaborate with data scientists and analysts to develop and implement data-driven solutions that drive business outcomes
- Work with engineering teams to design and implement data integration layers that connect disparate data sources
- Develop and maintain data quality and governance processes to ensure data accuracy and consistency
Requirements:
- Bachelor's degree in Computer Science or related field
- 7-10 years of experience in data engineering, cloud computing, and software development
- Experience leading and delivering data warehousing and analytics projects
- Strong knowledge of big data tools and technologies such as Hadoop, Hive, Spark, and Presto (see the sketch after this list)
- Hands-on experience with SQL, Python, Java, and Scala programming languages
- Experience working with cloud computing platforms such as AWS, GCP, and Azure
- Strong understanding of data modeling, data architecture, and data governance
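To ground the Spark requirement, the sketch below shows a typical batch rollup in PySpark, of the kind that might run on EMR; the input path, column names, and output location are illustrative only.

```python
# Minimal PySpark batch rollup (all paths and columns hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Read raw events, e.g. landed on object storage by an ingestion job.
events = spark.read.json("/tmp/raw_events")

# Aggregate to one row per (date, event type) and write partitioned Parquet.
daily = events.groupBy("event_date", "event_type").agg(
    F.count("*").alias("events"),
    F.countDistinct("user_id").alias("users"),
)
daily.write.mode("overwrite").partitionBy("event_date").parquet("/tmp/daily_rollup")
```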
Data Engineering Expert
Posted today
Job Description
As a data engineering professional, you will play a key role in supporting strategic data initiatives. The ideal candidate will have hands-on expertise in Databricks, SQL, and Python, and a strong understanding of life sciences data.
Key Responsibilities:
- Designing and optimizing scalable data pipelines
- Transforming complex datasets to support business intelligence efforts
You will be comfortable working in a fast-paced environment and collaborating with cross-functional teams to ensure data quality, accessibility, and performance.
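Since the posting emphasises data quality, here is a minimal sketch of a quality gate in PySpark, the kind of check a Databricks pipeline might run before publishing a table; the table path and column names are hypothetical.

```python
# Minimal data-quality gate in PySpark (names hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()

# A curated table produced upstream; the path is a placeholder.
df = spark.read.parquet("/tmp/curated/patients")

# Fail fast if required identifiers are missing or duplicated.
null_ids = df.filter(F.col("patient_id").isNull()).count()
dupe_ids = df.groupBy("patient_id").count().filter("count > 1").count()

assert null_ids == 0, f"{null_ids} rows have a NULL patient_id"
assert dupe_ids == 0, f"{dupe_ids} patient_id values are duplicated"
```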