86 Snowflake Azure Data Engineer Adf Snowflake jobs in Mount
Senior Data Engineer
Posted 2 days ago
Job Viewed
Job Description
Overview
As a Senior Data Engineer, you'll play a crucial role in storing, processing, modelling, and applying data science to make data and insights available for analytics and business intelligence (BI) systems. This position offers a unique opportunity to work with cutting-edge products and world-class clients in a remote-first environment.
Key Responsibilities
- Architect and build robust data systems and pipelines
- Analyse, organise, and prepare raw data for modelling and analytics
- Evaluate business needs and objectives
- Combine raw information from diverse sources
- Enhance data quality and reliability
- Identify opportunities for data acquisition
- Develop analytical reports using data science techniques
Required Skills
Data Engineering Expertise
- Strong data modelling and SQL/database design skills
- Proficiency in ETL/ELT processes
- Expert-level SQL and Python programming
- Understanding of different data modelling techniques (e.g., Kimball, star and snowflake schemas)
- Data quality techniques
- Data normalisation
Cloud and Big Data Technologies
- Familiarity with cloud data warehouses (AWS, Azure, GCP, or Snowflake)
MLOps
- Proficiency in MLOps practices, including model deployment, monitoring, and management
- Familiarity with tools and frameworks for MLOps, such as MLflow or similar.
Additional Desirable Skills
- CI/CD knowledge
- Experience with JIRA/Asana for project management
- Familiarity with Airflow, Fivetran, Matillion
- Agile working methodology
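The Kimball-style star schema named in the skills above can be sketched in a few lines: one fact table surrounded by dimension tables, queried with joins. This is an illustrative example, not part of any posting; all table and column names are invented, and sqlite3 stands in for a cloud warehouse.

```python
import sqlite3

# A minimal star schema: a fact table referencing two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT);
    CREATE TABLE fact_sales  (
        product_id INTEGER REFERENCES dim_product(product_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        amount     REAL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [(1, "widget"), (2, "gadget")])
conn.executemany("INSERT INTO dim_date VALUES (?, ?)",
                 [(10, "2024-01-01"), (11, "2024-01-02")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 10, 9.50), (1, 11, 4.25), (2, 10, 7.00)])

# The typical star-schema query shape: aggregate the fact table
# by an attribute pulled in from a dimension.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 7.0), ('widget', 13.75)]
```

A snowflake schema differs only in that dimensions are further normalised into sub-dimension tables.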
Senior Data Engineer
Posted 3 days ago
Job Description
Required Skills & Experience:
- 4+ years of hands-on experience with Azure Databricks using PySpark.
- 2+ years of experience in Databricks Workflows and Unity Catalog.
- 3+ years working with Azure Data Factory (ADF).
- 3+ years of experience in Azure Data Lake Storage Gen2 (ADLS Gen2).
- 3+ years of experience in Azure SQL development and optimization.
- 5+ years overall experience on the Azure Cloud platform.
- 2+ years of hands-on Python programming, including package/module development.
Lead Data Engineer
Posted 3 days ago
Job Description
Data Engineer - WFH
Posted 3 days ago
Job Description
We are seeking a Data Engineer with 2-3 years of experience to join a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.
Key Responsibilities
- Design and develop scalable Data Pipelines to support batch and real-time processing
- Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt
- Develop and optimize queries using SQL for data analysis and warehousing
- Build and maintain Data Warehousing solutions using platforms like Snowflake or BigQuery
- Collaborate with business and technical teams to gather requirements and create accurate Data Models
- Write reusable and maintainable code in Python for data ingestion, processing, and automation
- Ensure end-to-end Data Processing integrity, scalability, and performance
- Follow best practices for data governance, security, and compliance
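The extract-transform-load pattern running through these responsibilities can be sketched in plain Python, leaving Spark/dbt aside so the shape of each stage is clear. The source rows, field names, and cleaning rules below are invented for illustration.

```python
def extract():
    """Pretend source: rows as dicts, as an API or file reader might yield them."""
    return [
        {"order_id": "1", "amount": " 19.99 ", "country": "us"},
        {"order_id": "2", "amount": "5.00",    "country": "GB"},
        {"order_id": "3", "amount": "",        "country": "us"},  # dirty row
    ]

def transform(rows):
    """Clean and normalise: drop rows missing an amount, standardise types."""
    cleaned = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # data-quality rule: skip rows with no amount
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": float(amount),
            "country": row["country"].upper(),
        })
    return cleaned

def load(rows, target):
    """Load into the target store (a list standing in for a warehouse table)."""
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0])  # 2 {'order_id': 1, 'amount': 19.99, 'country': 'US'}
```

An ELT variant would swap the last two stages: load raw rows first, then transform inside the warehouse (as dbt does).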
Required Skills & Experience
- 3–4 years of experience in Data Engineering or a similar role
- Strong proficiency in SQL and Python
- Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
- Solid understanding of Data Warehousing concepts and architecture
- Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
- Proven experience in Data Modeling and data schema design
- Exposure to Data Processing frameworks and performance optimization techniques
- Familiarity with cloud platforms like AWS, GCP, or Azure
⭐ Nice to Have
- Experience with streaming data pipelines (e.g., Kafka, Kinesis)
- Exposure to CI/CD practices in data development
- Prior work in consulting or multi-client environments
- Understanding of data quality frameworks and monitoring strategies
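The "data quality frameworks and monitoring strategies" mentioned above usually boil down to a set of rule-based checks over tabular data. The sketch below shows three common ones (null checks, key uniqueness, range validation); the rules and column names are made up for illustration.

```python
def run_quality_checks(rows, key_column, required_columns):
    """Return a dict of check name -> list of offending row indexes."""
    failures = {"missing_value": [], "duplicate_key": [], "negative_amount": []}
    seen_keys = set()
    for i, row in enumerate(rows):
        # Null check: every required column must have a non-empty value.
        if any(row.get(col) in (None, "") for col in required_columns):
            failures["missing_value"].append(i)
        # Uniqueness check on the key column.
        key = row.get(key_column)
        if key in seen_keys:
            failures["duplicate_key"].append(i)
        seen_keys.add(key)
        # Range check: amounts must be non-negative.
        if isinstance(row.get("amount"), (int, float)) and row["amount"] < 0:
            failures["negative_amount"].append(i)
    return failures

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 5.0},    # duplicate key
    {"id": 2, "amount": -3.0},   # negative amount
    {"id": 3, "amount": None},   # missing value
]
report = run_quality_checks(rows, key_column="id", required_columns=["id", "amount"])
print(report)  # {'missing_value': [3], 'duplicate_key': [1], 'negative_amount': [2]}
```

Frameworks like Great Expectations or Unity Catalog constraints express the same idea declaratively, with scheduling and alerting layered on top.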
Senior Data Engineer
Posted 8 days ago
Job Description
* Key Responsibilities:
Design, build, and maintain scalable data pipelines using DBT and Airflow.
Develop and optimize SQL queries and data models in Snowflake.
Implement ETL/ELT workflows, ensuring data quality, performance, and reliability.
Work with Python for data processing, automation, and integration tasks.
Handle JSON data structures for data ingestion, transformation, and APIs.
Leverage AWS services (e.g., S3, Lambda, Glue, Redshift) for cloud-based data solutions.
Ensure compliance with data security and privacy regulations such as GLBA, PCI-DSS, GDPR, CCPA, and CPRA by implementing proper data encryption, access controls, and data retention policies.
Collaborate with data analysts, engineers, and business teams to deliver high-quality data products.
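The JSON-handling responsibility above typically means flattening nested API payloads into warehouse-friendly columns. A small standard-library sketch, with an invented payload shape:

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts into dot-separated keys; keep lists as JSON strings."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=name + "."))
        elif isinstance(value, list):
            flat[name] = json.dumps(value)  # defer list handling to a later stage
        else:
            flat[name] = value
    return flat

payload = json.loads(
    '{"user": {"id": 7, "address": {"city": "Austin"}}, "tags": ["a", "b"]}'
)
row = flatten(payload)
print(row)  # {'user.id': 7, 'user.address.city': 'Austin', 'tags': '["a", "b"]'}
```

Snowflake's VARIANT type and `LATERAL FLATTEN` perform the equivalent unnesting inside the warehouse.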
* Requirements:
Strong expertise in SQL, Snowflake, and DBT for data modeling and transformation.
Proficiency in Python and Airflow for workflow automation.
Experience working with AWS cloud services.
Ability to handle JSON data formats and integrate APIs.
Understanding of data governance, security, and compliance frameworks related to financial and personal data regulations (GLBA, PCI-DSS, GDPR, CCPA, CPRA).
Strong problem-solving skills and experience in optimizing data pipelines
AWS Data Engineer
Posted 8 days ago
Job Description
About Holcim
Holcim is the leading partner for sustainable construction, creating value across the built environment from infrastructure and industry to buildings.
We offer high-value end-to-end Building Materials and Building Solutions - from foundations and flooring to roofing and walling - powered by premium brands including ECOPlanet, ECOPact and ECOCycle®.
More than 45,000 talented Holcim employees in 45 attractive markets - across Europe, Latin America and Asia, Middle East & Africa - are driven by our purpose to build progress for people and the planet, with sustainability and innovation at the core of everything we do.
About The Role
The Data Engineer will play an important role in enabling data-driven operations and decision-making in an Agile, product-centric IT environment.
Education / Qualification
- BE / B. Tech from IIT or Tier I / II colleges
- Certification in Cloud Platforms AWS or GCP
Experience
- Total experience of 4–8 years
- Hands-on experience in Python coding is a must.
- Experience in data engineering
- Hands-on experience with big data cloud platforms such as AWS (Redshift, Glue, Lambda), data lakes, data warehouses, data integration, and data pipelines.
- Experience in SQL and in writing Spark code using Python/PySpark.
- Experience with data pipeline and workflow management tools (such as Azkaban, Luigi, Airflow, etc.)
Key Personal Attributes
- Business focused, Customer & Service minded
- Strong Consultative and Management skills
- Good Communication and Interpersonal skills
Senior Data Engineer
Posted 9 days ago
Job Description
Primary Responsibilities -
• Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
• Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure
• Create data models for analytics purposes
• Utilizing Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations
• Use Azure Data Factory and Databricks to assemble large, complex data sets
• Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data
• Ensure data security and compliance
• Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures
Required skills:
• Blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
• Azure DevOps
• Apache Spark, Python
• SQL proficiency
• Azure Databricks knowledge
• Big data technologies
Lead Data Engineer
Posted 9 days ago
Job Description
Job Summary:
Lead Data Engineer to design, develop, and maintain data pipelines and ETL workflows for processing large-scale structured/unstructured data. The ideal candidate will have expertise in AWS data services (S3, Workflows, Databricks, SQL) along with big data processing, real-time analytics, cloud data integration, and team-leading experience.
Key Responsibilities:
- Redesign ETL pipelines for scalability and performance using Spark, Python, SQL, and UDFs.
- Implement ETL/ELT databricks workflows for structured data processing.
- Analyze and resolve issues quickly.
- Create Data quality check using Unity Catalog.
- Create DataStream in Adverity.
- Drive daily status calls and sprint planning meetings.
- Ensure security, quality, and compliance of data pipelines.
- Contribute to CI/CD integration, observability, and documentation.
- Collaborate with data architects and analysts to meet business requirements.
Qualifications:
- 8+ years of experience in data engineering; 2+ years working on AWS services.
- Hands-on with tools like S3, Databricks, or Workflows.
- Good to have: knowledge of Adverity.
- Good to have: experience with a ticketing tool such as Asana or JIRA.
- Good to have: data analysis experience.
- Strong SQL and data processing skills (e.g., PySpark, Python).
Cloud Data Engineer
Posted 9 days ago
Job Description
About Lemongrass
Lemongrass is a software-enabled services provider, synonymous with SAP on Cloud, focused on delivering superior, highly automated Managed Services to Enterprise customers. Our customers span multiple verticals and geographies across the Americas, EMEA and APAC. We partner with AWS, SAP, Microsoft and other global technology leaders.
We are seeking an experienced Cloud Data Engineer with a strong background in AWS, Azure, and GCP. The ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, and other ETL tools like Informatica, SAP Data Intelligence, etc. You will be responsible for designing, implementing, and maintaining robust data pipelines and building scalable data lakes. Experience with various data platforms like Redshift, Snowflake, Databricks, and Synapse is essential. Familiarity with data extraction from SAP or ERP systems is a plus.
Key Responsibilities:
Design and Development:
- Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.).
- Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP).
- Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery and Azure Synapse.
- Implement ETL processes using tools like Informatica, SAP Data Intelligence, and others.
- Develop and optimize data processing jobs using Spark Scala.
Data Integration and Management:
- Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems into the data lake.
- Ensure data quality and integrity through rigorous testing and validation.
- Perform data extraction from SAP or ERP systems when necessary.
Performance Optimization:
- Monitor and optimize the performance of data pipelines and ETL processes.
- Implement best practices for data management, including data governance, security, and compliance.
Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Collaborate with cross-functional teams to design and implement data solutions that meet business needs.
Documentation and Maintenance:
- Document technical solutions, processes, and workflows.
- Maintain and troubleshoot existing ETL pipelines and data integrations.
Qualifications
Education:
- Bachelor’s degree in Computer Science, Information Technology, or a related field. Advanced degrees are a plus.
Experience:
- 7+ years of experience as a Data Engineer or in a similar role.
- Proven experience with cloud platforms: AWS, Azure, and GCP.
- Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.
- Experience with other ETL tools like Informatica, SAP Data Intelligence, etc.
- Experience in building and managing data lakes and data warehouses.
- Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse.
- Experience with data extraction from SAP or ERP systems is a plus.
- Strong experience with Spark and Scala for data processing.
Skills:
- Strong programming skills in Python, Java, or Scala.
- Proficient in SQL and query optimization techniques.
- Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts.
- Knowledge of data governance, security, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with other data tools and technologies such as Apache Spark, or Hadoop.
- Certifications in cloud platforms (AWS Certified Data Analytics – Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate).
- Experience with CI/CD pipelines and DevOps practices for data engineering
- Selected applicant will be subject to a background investigation, which will be conducted and the results of which will be used in compliance with applicable law.
What we offer in return:
- Remote Working: Lemongrass always has been and always will offer 100% remote work
- Flexibility: Work where and when you like most of the time
- Training: A subscription to A Cloud Guru and generous budget for taking certifications and other resources you’ll find helpful
- State of the art tech: An opportunity to learn and run the latest industry standard tools
- Team: Colleagues who will challenge you giving the chance to learn from them and them from you
Lemongrass Consulting is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, religion, color, national origin, religious creed, gender, sexual orientation, gender identity, gender expression, age, genetic information, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics
Lead Data Engineer
Posted 11 days ago
Job Description
Job Summary
We are looking for a Data Engineer with strong experience in cloud platforms (AWS & Azure) , Scala programming , and a solid understanding of data architecture and governance frameworks . You will play a key role in building, optimizing, and maintaining scalable data pipelines and systems while ensuring data quality, security, and compliance across the organization.
Key Responsibilities
Data Engineering & Development
- Design and develop reliable, scalable ETL/ELT data pipelines using Scala , SQL , and orchestration tools.
- Integrate and process structured, semi-structured, and unstructured data from various sources (APIs, databases, flat files, etc.).
- Develop solutions on AWS (e.g., S3, Glue, Redshift, EMR) and Azure (e.g., Data Factory, Synapse, Blob Storage).
Cloud & Infrastructure
- Build cloud-native data solutions that align with enterprise architecture standards.
- Leverage IaC tools (Terraform, CloudFormation, ARM templates) to deploy and manage infrastructure.
- Monitor performance, cost, and security posture of data environments in both AWS and Azure.
Data Architecture & Governance
- Collaborate with data architects to define and implement logical and physical data models.
- Apply data governance principles including data cataloging , lineage tracking , data privacy , and compliance (e.g., GDPR) .
- Support the enforcement of data policies and data quality standards across data domains.
Collaboration & Communication
- Work cross-functionally with data analysts, scientists, architects, and business stakeholders to support data needs.
- Participate in Agile ceremonies and contribute to sprint planning and reviews.
- Maintain clear documentation of pipelines, data models, and data flows.
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3–6 years of experience in data engineering or data platform development.
- Hands-on experience with AWS and Azure data services.
- Proficient in Scala for data processing (e.g., Spark, Kafka Streams).
- Strong SQL skills and familiarity with distributed systems.
- Experience with orchestration tools (e.g., Apache Airflow, Azure Data Factory).
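Orchestration tools such as Apache Airflow, mentioned throughout these postings, execute tasks as a DAG: each task runs only after its upstream dependencies finish. The core idea can be sketched with the standard library's `TopologicalSorter`; the task names below are illustrative, not from any posting.

```python
from graphlib import TopologicalSorter

# Pipeline as a DAG: each task maps to the set of tasks it depends on,
# mirroring how Airflow wires upstream/downstream tasks.
pipeline = {
    "extract_orders": set(),
    "extract_users": set(),
    "transform": {"extract_orders", "extract_users"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

# A valid execution order: both extracts first (in either order),
# then transform -> load -> refresh.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

Airflow adds scheduling, retries, and parallel execution of independent tasks (here, the two extracts) on top of exactly this dependency ordering.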