689 Senior Data Engineer jobs in Noida
Data Engineer
Posted today
Job Description
Job Description Summary
We are seeking a highly skilled Data Engineer to join our growing team. The ideal candidate has strong experience building and maintaining robust, scalable, cloud-native data pipelines and data warehouses using tools such as Snowflake, Fivetran, Airflow, and dbt. You will work closely with data analysts, scientists, and engineering teams to ensure reliable, timely, and secure data delivery.
Key Responsibilities
- Design, develop, and maintain batch and streaming data pipelines to load data marts.
- Implement scalable data transformations using Snowflake stored procedures and orchestrate workflows via Airflow or equivalent tools (a minimal sketch follows this list).
- Integrate with data platforms such as Snowflake, ensuring efficient data storage and retrieval.
- Write optimized SQL and Python scripts for data manipulation and ETL processes.
- Maintain data quality, observability, and pipeline reliability through monitoring and alerting.
- Collaborate with analytics and business teams to deliver high-impact data solutions.
- Adhere to best practices for version control, documentation, and CI/CD in a collaborative environment.
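As referenced above, here is a minimal sketch of the orchestration pattern, assuming the apache-airflow-providers-snowflake package, a configured snowflake_default connection, and a hypothetical analytics.load_sales_datamart() stored procedure; the DAG name and schedule are illustrative only.

```python
# Minimal sketch: an Airflow DAG that calls a Snowflake stored procedure.
# The connection id, procedure, and schedule are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="load_sales_datamart",       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    # Run a transformation implemented as a Snowflake stored procedure.
    transform = SnowflakeOperator(
        task_id="run_transformation",
        snowflake_conn_id="snowflake_default",
        sql="CALL analytics.load_sales_datamart();",  # hypothetical procedure
    )
```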
Qualifications
- Bachelor’s degree in information technology or a related field, or an equivalent combination of education and experience sufficient to perform the key accountabilities of the job.
- Experience with data ingestion and orchestration tools such as Fivetran and Airflow, plus Python.
- Exposure to and a good understanding of D365 ERP data.
- Prior experience working in fast-paced product or analytics teams.
Experience
- 5+ years of hands-on experience in data engineering.
- Strong experience with:
- Snowflake or similar cloud data warehouses.
- Airflow or other orchestration tools.
- SQL and Python.
- Strong hands-on experience building transformation pipelines using Python, Airflow, and Snowflake stored procedures.
- Experience writing optimized SQL and Python scripts for data manipulation and ETL processes.
- Experience maintaining data quality, observability, and pipeline reliability through monitoring and alerting.
- Hands-on experience with AWS, Azure, or GCP services.
- Good understanding of data architecture, security, and performance tuning.
- Familiarity with version control (e.g., Git), CI/CD tools, and agile workflows.
Data Engineer
Posted today
Job Description
We are seeking skilled and motivated Spark & Databricks developers to join our dynamic team for a long-term project. The ideal candidate will have strong hands-on experience in Apache Spark, Databricks, and GitHub-based development workflows.
Key Responsibilities:
- Design, develop, and optimize big data pipelines using Apache Spark (see the PySpark sketch after this list).
- Build and maintain scalable data solutions on Databricks.
- Collaborate with cross-functional teams for data integration and transformation.
- Manage version control and code collaboration using GitHub.
- Ensure data quality, performance tuning, and job optimization.
- Participate in code reviews, testing, and documentation activities.
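For illustration, a minimal PySpark batch sketch of the kind of pipeline described above; the input path and output table name are hypothetical, and on Databricks the builder simply reuses the cluster's existing session.

```python
# Minimal sketch of a Spark batch transformation on Databricks.
# Paths and table names are illustrative stand-ins.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Read raw events, keep valid rows, and aggregate per customer per day.
daily_orders = (
    spark.read.format("json").load("/mnt/raw/orders")   # hypothetical path
    .where(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("customer_id", "order_date")
    .agg(
        F.sum("amount").alias("daily_total"),
        F.count("*").alias("order_count"),
    )
)

# Write the result as a Delta table for downstream consumers.
daily_orders.write.format("delta").mode("overwrite").saveAsTable(
    "analytics.daily_orders"                            # hypothetical table
)
```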
Must-Have Skills:
- 5–8 years of experience in Data Engineering or related roles
- Strong hands-on expertise in Apache Spark (batch & streaming)
- Proficiency in Databricks for developing and managing data workflows
- Experience with GitHub (version control, pull requests, branching strategies)
- Good understanding of Data Lake and Data Warehouse architectures
- Strong SQL and Python scripting skills
- In-depth knowledge of Python programming
Good-to-Have Skills:
- Experience with Azure Data Lake, AWS S3, or GCP BigQuery
- Familiarity with Delta Lake and Databricks SQL
- Exposure to CI/CD pipelines and DevOps practices
- Experience with ETL tools or data modeling
- Understanding of data governance, security, and performance tuning best practices
Data Engineer
Posted today
Job Description
Job Description Summary
We are seeking a highly skilled Data Engineer to join our growing team. The ideal candidate has strong experience building and maintaining robust, scalable, cloud-native data pipelines and data warehouses using tools such as Snowflake, Fivetran, Airflow, and dbt. You will work closely with data analysts, scientists, and engineering teams to ensure reliable, timely, and secure data delivery.
Experience
- 5+ years of hands-on experience in data engineering.
- Strong experience with:
- Snowflake or similar cloud data warehouses.
- Airflow or other orchestration tools.
- SQL and Python.
- Strong hands-on experience building transformation pipelines using Python, Airflow, and Snowflake stored procedures.
- Experience writing optimized SQL and Python scripts for data manipulation and ETL processes.
- Experience maintaining data quality, observability, and pipeline reliability through monitoring and alerting.
- Hands-on experience with AWS, Azure, or GCP services.
- Good understanding of data architecture, security, and performance tuning.
- Familiarity with version control (e.g., Git), CI/CD tools, and agile workflows.
Key Responsibilities
- Design, develop, and maintain batch and streaming data pipelines to load data marts.
- Implement scalable data transformations using Snowflake stored procedures and orchestrate workflows via Airflow or equivalent tools.
- Integrate with data platforms such as Snowflake, ensuring efficient data storage and retrieval.
- Write optimized SQL and Python scripts for data manipulation and ETL processes.
- Maintain data quality, observability, and pipeline reliability through monitoring and alerting.
- Collaborate with analytics and business teams to deliver high-impact data solutions.
- Adhere to best practices for version control, documentation, and CI/CD in a collaborative environment.
Qualifications
- Bachelor’s degree in information technology or a related field, or an equivalent combination of education and experience sufficient to perform the key accountabilities of the job.
- Experience with data ingestion and orchestration tools such as Fivetran and Airflow, plus Python.
- Exposure to and a good understanding of D365 ERP data.
- Prior experience working in fast-paced product or analytics teams.
Data Engineer
Posted today
Job Description
Required Information
- Role: Microsoft Azure Data Engineer
- Required Technical Skill Set: SQL, ADF, ADB, ETL/Data background
- Desired Experience Range: 4 years
- Location of Requirement: India
Desired Competencies (Technical/Behavioral Competency)
- Must-Have (ideally no more than 3–5): Strong hands-on experience with Azure Data Factory (ADF), Azure Databricks, ADLS, and SQL; building, orchestrating, and optimizing ETL/ELT pipelines; DevOps and version control (Git).
- Good-to-Have: Water industry domain knowledge
Responsibilities of / Expectations from the Role
1. Deliver clean, reliable, and scalable data pipelines
2. Ensure data availability and quality
3. Excellent communication and documentation abilities
4. Strong analytical skills
Data Engineer
Posted today
Job Description
Sikich is a global company specializing in technology-enabled professional services. Sikich draws on a diverse portfolio of technology solutions to deliver transformative digital strategies and ranks as one of the largest CPA firms in the United States. Our dynamic environment attracts top-notch employees who enjoy being at the cutting edge and seeing every day how their work makes a difference.
Key Responsibilities
Data Engineering & Architecture
- Design and implement data solutions using ADF, Microsoft Fabric, Synapse, Databricks.
- Build data models for scalable data processing, endpoint processing, and integration.
- Develop and optimize data pipelines (real-time, batch, and BYOD).
Data Modeling & Analytics
- Create semantic models and dimensional models in Power BI.
- Build enterprise dashboards, paginated reports, and self-service analytics solutions.
- Implement row-level security, complex DAX measures, and model optimizations.
Platform Operations & Governance
- Apply DataOps practices, monitoring, and performance tuning.
- Ensure data security, quality, and governance.
- Integrate predictive analytics with Azure ML where required.
Required Skills & Experience
- 5–7 years of data engineering experience (minimum 3 years in Microsoft technologies).
- Strong expertise in Microsoft Fabric, Azure Synapse Analytics, Databricks, Power BI.
- Proficiency in SQL, T-SQL, Spark SQL, Python/Scala, DAX, Power Query (M).
- Experience with real-time streaming (Event Hubs/Kafka) and batch processing (a consumer sketch follows this list).
- Proven track record in implementing medallion architecture and enterprise BI solutions.
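As noted in the streaming bullet above, here is a minimal consumer sketch using the confluent-kafka package; the broker, topic, and group id are hypothetical, and Azure Event Hubs can be consumed the same way through its Kafka-compatible endpoint.

```python
# Minimal sketch of a streaming consumer with confluent-kafka.
# Broker, topic, and group id are illustrative stand-ins.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # hypothetical broker
    "group.id": "telemetry-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["device-telemetry"])  # hypothetical topic

try:
    while True:
        msg = consumer.poll(1.0)          # block up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # ... transform and land the event in the bronze layer ...
        print(event.get("device_id"))
finally:
    consumer.close()
```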
Preferred Certifications (nice to have)
- Microsoft Certified: Azure Data Engineer Associate / Fabric Analytics Engineer Associate / Power BI Data Analyst Associate
- Databricks Certified Data Engineer Associate
Qualifications
- Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field (master’s preferred).
- Strong problem-solving, communication, and client-facing skills.
Why Join Us?
At Sikich, you will work on cutting-edge Microsoft data projects, collaborate with global teams, and contribute to solutions that shape the future of data-driven business decisions. We value innovation, collaboration, and continuous learning.
Data Engineer
Posted today
Job Description
This opportunity is ideal for a determined and proactive individual with a wide range of skills across database administration, reporting, and dashboarding disciplines. This role requires an analytical thinker who pays significant attention to detail.
We are looking for an experienced data engineer with extensive experience in T-SQL (stored procedures, functions, triggers, ad hoc queries); SSRS (design/development, subscriptions, query performance tuning, and BAU support); ETL; Azure data storage; Azure Data Factory and pipelines; Azure Synapse and data warehousing; and some exposure to Power BI and Azure analytics.
Role Responsibilities
- Understanding business needs, designing and building solutions that meet those needs.
- Capable of architecture, design and implementing warehouse and reporting solutions.
- Extracting data from bespoke applications and third-party products.
- Consolidating multiple data sources into a data warehouse and/or data lake.
- Ensuring data is accessible and structured to facilitate reporting needs.
- Setting up dashboards and reports in data visualization and reporting tools.
- Building resilient, reliable, secure and cost-effective solutions.
- Staying ahead of issues with proactive monitoring and alerting.
- Implementing best practices.
- Collaborate as part of an agile team to design, develop, and maintain your team’s software.
- Collaborate with product management to prioritize and plan necessary technical-debt remediation and refactoring of our data structures and stored procedures.
- Design and implement repeatable data pull requests and execute ad-hoc requests as needed.
Knowledge, Skills and Experience Requirements
- Bachelor’s degree in computer science or a related field, or equivalent work experience.
- 10+ years of strong experience with SQL Servers and relational databases.
- Strong experience with querying and optimizing complex data sets.
- Strong experience with creating and optimizing complex stored procedures, functions and triggers.
- Strong experience working with older legacy code and systems.
- Strong experience in data engineering or related roles.
- End-to-end knowledge from product databases to information visualization.
- Experience in Azure cloud technologies, including ADF, Functions, DLS, Synapse, etc.
- Experience in relational database design and architecture.
- Expert in data visualization and reporting using SSRS and Power BI.
- Experience with SSIS, ADF or any other ETL tools.
- Experience with data warehousing and data modeling (single-layer data warehouse / Kimball).
- Experience with .NET is a plus.
- Experience with Azure is a plus.
- Experience working in an agile environment is a plus.
- Experience in AWS cloud data technologies is beneficial.
- Teamwork and collaboration – be willing to share your thoughts and seek ideas and feedback from others in return.
- Excellent verbal and written communication skills.
- Comfortable working with staff at all levels.
- Ability to self-organize, prioritize and track tasks.
- Can work on their own or in a team.
Data Engineer
Posted today
Job Description
About RevX
RevX helps app businesses acquire and re-engage users via programmatic advertising to retain, monetize, and accelerate revenue. We're all about taking your app business to a new level of growth. We rely on data science, innovative technology, AI, and a skilled team to create and deliver seamless ad experiences that delight your app users. That’s why RevX is the ideal partner for app marketers that demand trustworthy insights, a hands-on team, and a commitment to growth. We help you build sound mobile strategies, combining programmatic UA, app re-engagement, and performance branding to drive real and verifiable results so you can scale your business: with real users, high retention, and incremental revenue.
About the Role
We are seeking a forward-thinking Data Engineer who can bridge the gap between traditional data pipelines and modern Generative AI (GenAI)-enabled analytics tools. You'll design intelligent internal analytics systems using SQL, automation platforms like n8n, BI tools like Looker, and GenAI interfaces such as ChatGPT, Gemini, or LangChain.
This is a unique opportunity to innovate at the intersection of data engineering, AI, and product analytics.
Key Responsibilities
- Design, build, and maintain analytics workflows and tools leveraging GenAI platforms (e.g., ChatGPT, Gemini) and automation tools (e.g., n8n, Looker).
- Collaborate with product, marketing, and engineering teams to identify and deliver data-driven insights.
- Use SQL to query data from data warehouses (BigQuery, Redshift, Snowflake, etc.) and transform it for analysis or reporting.
- Build automated reporting and insight generation systems using visual dashboards and GenAI-based interfaces.
- Evaluate GenAI tools and APIs for applicability in data analytics workflows.
- Explore use cases where GenAI can assist in natural language querying, automated summarization, and explanatory analytics (see the sketch after this list).
- Work closely with business teams to enable self-service analytics via intuitive GenAI-powered interfaces.
- Design and maintain robust data pipelines to ensure timely and accurate ingestion, transformation, and availability of data across systems.
- Implement best practices in data modeling, testing, and monitoring to ensure data quality and reliability in analytics workflows.
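A minimal sketch of the natural-language-querying idea referenced above, assuming the openai package and an OPENAI_API_KEY in the environment; the model name, table schema, and run_query() warehouse helper are all hypothetical stand-ins (Gemini, LangChain, or an n8n workflow could fill the same roles).

```python
# Minimal sketch of natural-language querying: an LLM drafts SQL from a
# question, the warehouse runs it, and the LLM summarizes the result.
# Model name, schema, and run_query() are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA = "installs(app_id, country, install_date, installs)"  # hypothetical

def ask(question: str) -> str:
    # 1) Have the model translate the question into SQL.
    sql = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Write one SQL query for schema: {SCHEMA}. Reply with SQL only."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    rows = run_query(sql)  # hypothetical warehouse helper (BigQuery, etc.)

    # 2) Have the model explain the result for a non-technical reader.
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user",
             "content": f"Question: {question}\nRows: {rows}\nSummarize in two sentences."},
        ],
    ).choices[0].message.content
```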
Requirements
- 3+ years of experience in data analysis or a related field.
- Strong proficiency in SQL with the ability to work across large datasets.
- Hands-on experience building data tools/workflows using any of the following: n8n, Looker/LookML, ChatGPT API, Gemini, LangChain, or similar.
- Familiarity with GenAI concepts, LLMs, prompt engineering, and their practical application in data querying and summarization.
- Excellent problem-solving skills and a mindset to automate and optimize wherever possible.
- Strong communication skills with the ability to translate complex data into actionable insights for non-technical stakeholders.
Nice to Have
- Prior experience in AdTech (ad operations, performance marketing, attribution, audience insights, etc.).
- Experience with Python, Jupyter Notebooks, or scripting for data manipulation.
- Familiarity with cloud platforms like Google Cloud Platform (GCP) or AWS.
- Knowledge of data visualization tools such as Tableau, Power BI, or Looker.
Why Join Us?
- Work on the cutting edge of GenAI and data analytics innovation.
- Contribute to building scalable analytics tools that empower entire teams.
- Be part of a fast-moving, experimentation-driven culture where your ideas matter.
For more information visit
Data Engineer
Posted today
Job Description
Join us on our mission to elevate customer experiences for people around the world. As a member of the Everise family, you will be part of a global experience company that believes in being people-first, celebrating diversity and incubating innovation. Our dedication to our purpose and people is being recognized by our employees and the industry. Our 4.6/5 rating on Glassdoor and our shiny, growing wall of Best Place to Work awards is a testament to our investment in our culture. Through the power of diversity, we celebrate all cultures for their uniqueness and strengths. With 13 centers around the world and a robust work at home program, we believe great things happen when we work with people who think differently from us. Find a job you’ll love today!
We are looking for a skilled and experienced Data Engineer to design, build, and optimize scalable data pipelines and architectures that power data-driven decision-making across the organization. The ideal candidate has a proven track record of writing complex stored procedures and optimizing query performance on large datasets.
Requirements:
- Architect, develop, and maintain scalable and secure data pipelines to process structured and unstructured data from diverse sources.
- Collaborate with data scientists, BI analysts and business stakeholders to understand data requirements.
- Optimize data workflows and processing for performance; ensure data quality, reliability, and governance.
- Hands-on experience with modern data platforms such as Snowflake, Redshift, BigQuery, or Databricks.
- Strong knowledge of T-SQL and SQL Server Management Studio (SSMS)
- Experience writing complex stored procedures and views and tuning query performance on large datasets (a brief sketch follows this list).
- Strong understanding of database management systems (SQL, NoSQL) and data warehousing concepts.
- Good knowledge of and hands-on experience with memory-level database tuning, with the ability to tweak SQL queries.
- In-depth knowledge of data modeling principles and methodologies (e.g., relational, dimensional, NoSQL).
- Excellent analytical and problem-solving skills with a meticulous attention to detail.
- Hands-on experience with data transformation techniques, including data mapping, cleansing, and validation.
- Proven ability to work independently and manage multiple priorities in a fast-paced environment.
- Work closely with cross-functional teams to gather and analyze requirements, develop database solutions, and support application development efforts.
- Knowledge of cloud database solutions (e.g., Azure SQL Database, AWS RDS).
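As referenced in the stored-procedure bullet above, here is a minimal sketch of defining and calling a parameterized SQL Server procedure from Python via pyodbc; the connection string, table, and procedure name are hypothetical.

```python
# Minimal sketch: define and call a parameterized SQL Server stored
# procedure via pyodbc. Connection details and object names are illustrative.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "DATABASE=sales;Trusted_Connection=yes;"          # hypothetical
)
cur = conn.cursor()

# A set-based procedure: aggregate orders for one customer in a date range.
cur.execute("""
CREATE OR ALTER PROCEDURE dbo.usp_CustomerTotals
    @CustomerId INT, @FromDate DATE, @ToDate DATE
AS
BEGIN
    SET NOCOUNT ON;
    SELECT o.CustomerId, SUM(o.Amount) AS Total, COUNT(*) AS Orders
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
      AND o.OrderDate >= @FromDate AND o.OrderDate < @ToDate
    GROUP BY o.CustomerId;
END
""")
conn.commit()

# Call it with bound parameters (avoids injection and plan-cache churn).
cur.execute("EXEC dbo.usp_CustomerTotals ?, ?, ?", 42, "2024-01-01", "2024-07-01")
print(cur.fetchall())
```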
If you’ve got the skills to succeed and the motivation to make it happen, we look forward to hearing from you.
Data Engineer
Posted today
Job Description
Location: Any Xebia location (Hybrid, 3 days in office per week)
Experience: 6+ years
⏳ Notice Period: Immediate to 2 weeks – only apply if you can join early
We are seeking an experienced Data Engineer with strong expertise in Databricks, Python, SQL, and Postgres. The ideal candidate will also have hands-on experience with vector databases (pgvector, Qdrant, Pinecone, etc.) and exposure to Generative AI use cases such as RAG pipelines and embedding-based search (a minimal sketch follows the skills list below).
Key Skills & Expertise
- Proficiency in Python for data engineering (PySpark, pandas, APIs).
- Strong SQL skills for data modeling, optimization, and analytics.
- Hands-on expertise in Databricks (workflows, notebooks, Delta Lake).
- Experience with Azure Data Factory, Azure Data Lake Storage Gen2, and Azure Synapse Analytics.
- Solid understanding of Postgres (schema design, indexing, query tuning).
- Exposure to Generative AI concepts and integration with Azure OpenAI or similar services.
- Familiarity with CI/CD and Git-based workflows.
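As mentioned above, here is a minimal embedding-search sketch against Postgres with pgvector, assuming the extension is installed and a documents table with a vector column exists; the DSN, schema, and toy 3-dimensional vectors are hypothetical (real embeddings would come from a model such as Azure OpenAI).

```python
# Minimal sketch of embedding-based search on Postgres with pgvector.
# DSN, table, and the toy 3-d query vector are illustrative stand-ins.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
cur = conn.cursor()

query_embedding = [0.1, 0.2, 0.3]               # stand-in for model output
vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

# `<=>` is pgvector's cosine-distance operator; smaller means more similar.
cur.execute(
    """
    SELECT id, content, embedding <=> %s::vector AS distance
    FROM documents
    ORDER BY distance
    LIMIT 5;
    """,
    (vec_literal,),
)
for doc_id, content, distance in cur.fetchall():
    print(doc_id, round(distance, 4), content[:60])
```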
How to Apply
Interested candidates, please share the following details:
- Total Experience
- Relevant Experience
- Current CTC
- Expected CTC
- Notice Period (Immediate to 2 weeks only)
- Current Location
- Preferred Location
- LinkedIn Profile URL
Data Engineer
Posted today
Job Description
Experience: 5+ years
Location: Remote
Senior Data Engineer
The Senior Data Engineer will design, develop, monitor, and maintain a robust and scalable data platform used by other data analyst and engineering teams to deliver powerful insights to both internal and external stakeholders.
Responsibilities
- Design, build, and maintain data infrastructure that powers both batch and real-time processing of billions of records a day.
- Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection.
- Design, build, and maintain a central data cataloging system to ease integration and discovery of datasets.
- Develop data pipelines that provide fast, optimized, and robust end-to-end analytical solutions.
- Automate manual processes and create a platform in favor of self-service data consumption.
- Deploy and configure components to production environments.
- Participate in an on-call schedule to provide emergency incident support.
- Mentor and train teammates on the design and operation of the data platform.
- Stay current with industry trends, making recommendations as needed to help the company excel.
- Other job-related duties as assigned.
Skills, Experience and Qualifications
- Bachelor’s degree in Computer Science or Engineering a plus
- 5+ years of relevant industry experience in Data Engineering working with large-scale data-driven systems
- Deep knowledge of dimensional modeling and designing schemas and data sets optimized for an OLAP environment
- Experience fine-tuning queries on large, complex data sets
- Extensive experience working with an MPP data platform (e.g., Snowflake, BigQuery, or Databricks)
- Experience designing and implementing workflows on a modern orchestration framework (e.g., Airflow, Prefect)
- Advanced dbt (Data Build Tool) experience (e.g., macro design, generalizing tests, using different incremental strategies)
- Deep understanding of SQL and data warehouse systems, especially Snowflake (SnowPro certification preferred)
- Experience working with and administering a BI tool (Tableau preferred)
- Expertise in object-oriented and/or functional programming languages (Python preferred)
- Strong programming skills; able to write modular, maintainable code
- Understanding of DevOps principles such as automation of CI/CD pipelines and infrastructure as code
- Understanding of polyglot data persistence (relational, key/value, document, column)
- Excellent problem-solving skills and the ability to proactively solve issues
- Excellent communication and organizational skills and a proven ability to complete tasks and meet deadlines
- Flexibility to work in tandem with a team of engineers or alone, as required