2,106 Python Data Engineer jobs in India
Python Data Engineer
Job Description
HARMAN’s engineers and designers are creative, purposeful and agile. As part of this team, you’ll combine your technical expertise with innovative ideas to help drive cutting-edge solutions in the car, enterprise and connected ecosystem. Every day, you will push the boundaries of creative design, and HARMAN is committed to providing you with the opportunities, innovative technologies and resources to build a successful career.
A Career at HARMAN
As a technology leader that is rapidly on the move, HARMAN is filled with people who are focused on making life better. Innovation, inclusivity and teamwork are a part of our DNA. When you add that to the challenges we take on and solve together, you’ll discover that at HARMAN you can grow, make a difference and be proud of the work you do every day.
Introduction: A Career at HARMAN Automotive
We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN Automotive, we give you the keys to fast-track your career.
About the Role
We're seeking an experienced Cloud Platform and Data Engineering Specialist with expertise in GCP (Google Cloud Platform) or Azure to join our team. The ideal candidate will have a strong background in cloud computing, data engineering, and DevOps.
What you will do
1. Cloud Platform Management: Manage and optimize cloud infrastructure (GCP), ensuring scalability, security, and performance.
2. Data Engineering: Design and implement data pipelines, data warehousing, and data processing solutions (see the pipeline sketch after this list).
3. Kubernetes and GKE: Develop and deploy applications using Kubernetes and Google Kubernetes Engine (GKE).
4. Python Development: Develop and maintain scripts and applications using Python.
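For a sense of what the data-pipeline item can look like in practice, here is a minimal sketch using the Apache Beam Python SDK (the SDK behind Dataflow). The bucket, project, dataset, table, and field names are hypothetical placeholders, not details from this role.

```python
# Minimal Apache Beam sketch: read CSV lines from Cloud Storage, parse
# them, and append rows to a BigQuery table. All resource names below
# (bucket, project, dataset, table, fields) are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line: str) -> dict:
    """Parse a 'device,reading' CSV line into a BigQuery-ready row."""
    device, reading = line.split(",")
    return {"device": device, "reading": float(reading)}


if __name__ == "__main__":
    # Pass --runner=DataflowRunner --project=... on the CLI to run on GCP.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/readings.csv")
            | "Parse" >> beam.Map(parse_line)
            | "Write" >> beam.io.WriteToBigQuery(
                "example-project:telemetry.readings",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )
```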
What You Need to Be Successful
1. Experience: 3-6 years of experience in cloud computing, data engineering, and DevOps.
2. Technical Skills:
1. Strong understanding of GCP (Google Cloud Platform) or Azure.
2. Experience with Kubernetes and GKE (see the client sketch after this section).
3. Strong proficiency in the Python programming language.
4. Basic understanding of data engineering and DevOps practices.
3. Soft Skills:
1. Excellent problem-solving skills and attention to detail.
2. Strong communication and collaboration skills.
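To give the Kubernetes/GKE requirement some shape, below is a minimal, hypothetical sketch using the official Kubernetes Python client to inspect deployments; it assumes `gcloud container clusters get-credentials` has already written a kubeconfig for the target GKE cluster.

```python
# Minimal sketch: list deployments in a cluster with the official
# Kubernetes Python client (pip install kubernetes). The namespace is
# an assumption; adjust for your cluster.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
apps = client.AppsV1Api()
for dep in apps.list_namespaced_deployment(namespace="default").items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")
```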
Bonus Points if You Have
1. GCP: Experience with GCP services, including Compute Engine, Storage, and BigQuery (see the query sketch after this list).
2. Data Engineering: Experience with data engineering tools, such as Apache Beam, Dataflow, or BigQuery.
3. DevOps: Experience with DevOps tools, such as Jenkins, GitLab CI/CD, or Cloud Build.
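As a small illustration of the BigQuery item above, a query via the google-cloud-bigquery client might look like the following; the project, dataset, table, and columns are hypothetical.

```python
# Minimal sketch: run an aggregation query against BigQuery. Uses
# application-default credentials; all table/column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT device, AVG(reading) AS avg_reading
    FROM `example-project.telemetry.readings`
    GROUP BY device
"""
for row in client.query(query).result():
    print(row.device, row.avg_reading)
```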
What Makes You Eligible
1. GCP Expertise: Strong expertise in GCP is preferred; substantial Azure experience will also be considered.
2. Python Proficiency: Proficiency in Python programming language is essential.
3. Kubernetes and GKE: Experience with Kubernetes and GKE is required.
What We Offer
- Competitive salary and benefits package
- Opportunities for professional growth and development
- Collaborative and dynamic work environment
- Access to cutting-edge technologies and tools
- Recognition and rewards for outstanding performance through BeBrilliant
- Chance to work with a renowned German OEM
- Note: a five-day work week is expected.
You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.
About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected.
Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other.
If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!
HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Job No Longer Available
This position is no longer listed on WhatJobs. The employer may be reviewing applications, filled the role, or has removed the listing.
However, we have similar jobs available for you below.
Python Data Engineer
Posted today
Job Description
Talent Worx is a growing services and recruitment consulting firm. We are hiring for our client, a leading Big 4 consulting firm and provider of financial intelligence, data analytics, and AI-driven solutions, empowering businesses worldwide with insights for confident decision-making. Join us to work on cutting-edge technologies, drive digital transformation, and shape the future of global markets. We are looking for a Python Data Engineer with deep expertise in API development, big data processing, and cloud deployment. The ideal candidate will have experience with FastAPI, PySpark, and DevOps pipelines, and be capable of leading a team while delivering high-performance, scalable data-driven applications.
Key Responsibilities
- API Development: Design, build, and maintain scalable APIs using FastAPI and RESTful principles (see the endpoint sketch after this list).
- Big Data Processing: Develop efficient data pipelines using PySpark to process and analyze large-scale datasets.
- Full-Stack Integration: Collaborate with frontend teams to implement end-to-end feature integration and ensure seamless user experiences.
- CI/CD Pipelines: Create and manage CI/CD workflows using GitHub Actions and Azure DevOps to support reliable and automated deployments.
- Containerization: Build and deploy containerized applications using Docker for both development and production environments.
- Team Leadership: Lead and mentor a team of engineers; conduct code reviews and provide technical guidance to ensure best practices and quality standards.
- Code Optimization: Write clean, maintainable, and high-performance Python code with a focus on scalability and reusability.
- Cloud Deployment: Deploy, monitor, and maintain applications in Azure or other cloud platforms, ensuring high availability and resilience.
- Cross-Functional Collaboration: Work with product managers, designers, and other stakeholders to transform business requirements into technical solutions.
- Documentation: Maintain clear and comprehensive documentation for APIs, systems, and workflows to support ongoing development and maintenance.
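As a rough illustration of the API development work above, a minimal FastAPI endpoint might be sketched as follows; the model, route, and module names are hypothetical, not part of the actual codebase.

```python
# Minimal FastAPI sketch: a typed POST endpoint with request validation.
# Run with: uvicorn main:app --reload  (module name is hypothetical).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Reading(BaseModel):
    device: str
    value: float


@app.post("/readings")
async def create_reading(reading: Reading) -> dict:
    # A real handler would persist the record or hand it to a pipeline.
    return {"status": "accepted", "device": reading.device}
```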
Requirements
- Programming: Advanced proficiency in Python, with hands-on experience in FastAPI and REST API development.
- Big Data: Expertise in PySpark and large-scale data processing techniques (see the sketch after this list).
- DevOps Tools: Strong knowledge of GitHub Actions, Azure DevOps, and Docker.
- Cloud Platforms: Experience with Azure or similar cloud environments for deployment and scaling.
- System Integration: Demonstrated experience in backend-to-frontend integration.
- Leadership: Proven track record of leading and mentoring software development teams.
- Collaboration: Excellent communication skills.
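For a sense of the PySpark requirement, here is a minimal, hypothetical aggregation sketch; the storage paths and column names (event_ts, user_id) are assumptions for illustration only.

```python
# Minimal PySpark sketch: daily event counts from a Parquet dataset.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-summary").getOrCreate()

events = spark.read.parquet("s3a://example-bucket/events/")
daily = (
    events
    .groupBy(F.to_date("event_ts").alias("day"))
    .agg(
        F.count("*").alias("event_count"),
        F.approx_count_distinct("user_id").alias("unique_users"),
    )
)
daily.write.mode("overwrite").parquet("s3a://example-bucket/daily-summary/")
```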
Note: We are looking for immediate joiners only, i.e., candidates who can join in less than a month.
Python + Data Engineer
Posted today
Job Description
Wissen Technology is Hiring for Python + Data Engineer
About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.
Experience: 5-9 Years
Location: Mumbai
Key Responsibilities
Required Skills:
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, Quality Assurance & Test Automation.
Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have been certified as a Great Place to Work company for two consecutive years and voted one of the Top 20 AI/ML vendors by CIO Insider. Great Place to Work Certification is recognized the world over by employees and employers alike and is considered the ‘Gold Standard’. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.
Python Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Job Summary:
We are looking for a proficient Data Engineer with expertise in Amazon Redshift, Python, Apache Airflow, dbt (Data Build Tool), API integration, and AWS. This role will be responsible for developing and maintaining scalable data pipelines, integrating data from multiple sources, and ensuring that our data architecture supports business intelligence, reporting, and analytics requirements. You will collaborate with cross-functional teams to build and optimize our data infrastructure and provide clean, high-quality data to the business.
Key Responsibilities:
- Data Pipeline Development: Build and maintain robust ETL/ELT pipelines using Python, Apache Airflow, and dbt to extract, transform, and load data from various sources into Amazon Redshift (see the DAG sketch after this description).
- Amazon Redshift Management: Design, optimize, and maintain Amazon Redshift clusters, ensuring the warehouse is capable of handling large-scale data efficiently.
- API Integration: Develop solutions to integrate external APIs for data ingestion, ensuring proper data extraction, transformation, and integration into our data infrastructure.
- Data Modeling: Create and maintain scalable data models in Redshift that support analytics and reporting needs, including designing star and snowflake schemas for optimized querying.
- AWS Infrastructure Management: Leverage AWS services such as S3, Lambda, EC2, and CloudWatch to build and maintain a scalable and cost-efficient data architecture.
- dbt (Data Build Tool): Use dbt to manage and automate SQL transformations, ensuring modular, reusable, and well-documented data transformation logic.
- Workflow Orchestration: Utilize Apache Airflow to orchestrate and automate data workflows, ensuring reliable data pipelines and scheduled jobs.
- Data Quality & Testing: Implement and maintain data validation checks and testing frameworks to ensure data integrity, accuracy, and compliance across all data pipelines.
- Collaboration: Work closely with data scientists, analysts, and product teams to understand data needs and provide technical solutions that meet business objectives.
- Performance Optimization: Tune SQL queries and manage the performance of Redshift clusters to ensure fast, efficient data access and analysis.
- Data Governance: Enforce data governance policies to ensure compliance with security, privacy, and data quality standards throughout the data lifecycle.
Key Skills & Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, Data Science, or a related field.
- 3+ years of experience in data engineering with expertise in Amazon Redshift, Python, and AWS.
- Strong experience with Apache Airflow for workflow scheduling and orchestration.
- Hands-on experience with dbt (Data Build Tool) for managing SQL transformations and data models.
- Proficiency in API development and integration, including the use of RESTful APIs for data ingestion.
- Extensive experience with AWS services such as S3, Lambda, EC2, RDS, and CloudWatch.
- Expertise in data modeling concepts and designing efficient data structures (e.g., star schemas, snowflake schemas) in a data warehouse environment.
- Advanced knowledge of SQL for querying and optimizing large datasets in Redshift.
- Experience building ETL/ELT pipelines and integrating data from multiple sources, including structured and unstructured data.
- Familiarity with version control systems like Git and best practices for code management and deployment automation.
- Knowledge of data governance principles, including data security, privacy, and quality control.
Preferred Qualifications:
- Experience with real-time data processing tools such as Kafka or Kinesis.
- Familiarity with data visualization tools like Tableau, Looker, or Power BI.
- Knowledge of other data warehousing solutions like Snowflake or Google BigQuery.
- Experience with DevOps practices for managing infrastructure and CI/CD pipelines (Docker, Kubernetes).
- Understanding of machine learning pipelines and how data engineering supports AI/ML initiatives.
Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work independently and as part of a cross-functional team.
- Strong written and verbal communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Detail-oriented, proactive, and self-motivated with a focus on continuous improvement.
- Strong organizational and project management skills to handle multiple tasks.
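To make the orchestration responsibilities concrete, here is a minimal Apache Airflow (2.x) DAG sketch; the DAG id, schedule, and the S3/Redshift names in the comments are hypothetical placeholders rather than the actual stack.

```python
# Minimal Airflow DAG sketch: one daily task standing in for an
# S3 -> Redshift load. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_to_redshift() -> None:
    # A real task would issue a COPY through a Redshift/Postgres hook, e.g.
    # COPY staging.events FROM 's3://example-bucket/events/' IAM_ROLE '...';
    print("loading staged files into Redshift")


with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
```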
Job Type
Payroll
Categories
Data Engineer (Software and Web Development)
Cloud Architects (Software and Web Development)
DevOps Engineers (Software and Web Development)
Database Administrator (Software and Web Development)
Software Engineer (Software and Web Development)
Must-have Skills
- Python - 3 Years
- Amazon Redshift - 3 Years
- AWS - 3 Years
- Apache Airflow - 1 Year
- ETL (Extract, Transform, Load) - 1 Year
- Kubernetes - 1 Year
Python + Data Engineer
Posted today
Job Description
Wissen Technology is Hiring for Python + Data Engineer
About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.
Experience: 5-9 Years
Location: Bangalore
Key Responsibilities
Required Skills:
Python Data Engineer
Posted 2 days ago
Job Description
- 7+ years of experience in application development using Python and backend technologies.
- Strong ability to understand and interpret business requirements into well-defined technical solutions.
Backend Development:
- Expertise in Python backend development with frameworks like Tornado and FastAPI (see the handler sketch after this section).
- Strong understanding of RESTful APIs, token authentication, and data compression.
- Experience in working with scalable application architectures.
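To illustrate the framework experience above, here is a minimal Tornado (6.x, async style) handler sketch; the route and port are hypothetical.

```python
# Minimal Tornado sketch: one JSON health endpoint.
import asyncio

import tornado.web


class HealthHandler(tornado.web.RequestHandler):
    def get(self) -> None:
        self.write({"status": "ok"})  # dict payloads are serialized as JSON


async def main() -> None:
    app = tornado.web.Application([(r"/health", HealthHandler)])
    app.listen(8888)
    await asyncio.Event().wait()  # keep the server running


if __name__ == "__main__":
    asyncio.run(main())
```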
Database & Cloud:
- Experience working with RDBMS models in cloud environments.
- Understanding of databases like ClickHouse, MS SQL, PostgreSQL, and Snowflake (added advantage).
- Basic knowledge of cloud technologies such as Databricks, ADLS, and OAuth/Secrets.
Software Development & DevOps:
- Experience with version control systems (Bitbucket, Git CLI) and issue tracking systems (JIRA).
- Hands-on experience with DevOps tools for CI/CD, build, and deployment.
- Familiarity with private package registries like JFrog.
Soft Skills & Methodologies:
- Strong problem-solving, debugging, and investigative skills.
- Ability to analyze log files, error messages, and performance issues to find root causes.
- Experience working in Agile environments at scale.
- Excellent verbal and written communication skills.
Python - Data Engineer/Consultant Specialist
Posted today
Job Description
Some careers shine brighter than others.
If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.
In this role, you will:
To be successful in this role, you should meet the following requirements: