2,106 Python Data Engineer jobs in India

Python Data Engineer

Bengaluru, Karnataka HARMAN International

Job Description

HARMAN’s engineers and designers are creative, purposeful and agile. As part of this team, you’ll combine your technical expertise with innovative ideas to help drive cutting-edge solutions in the car, enterprise and connected ecosystem. Every day, you will push the boundaries of creative design, and HARMAN is committed to providing you with the opportunities, innovative technologies and resources to build a successful career.

A Career at HARMAN

As a technology leader that is rapidly on the move, HARMAN is filled with people who are focused on making life better. Innovation, inclusivity and teamwork are a part of our DNA. When you add that to the challenges we take on and solve together, you’ll discover that at HARMAN you can grow, make a difference and be proud of the work you do every day.

Introduction: A Career at HARMAN Automotive

We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN Automotive, we give you the keys to fast-track your career.

  • Engineer audio systems and integrated technology platforms that augment the driving experience
  • Combine ingenuity, in-depth research, and a spirit of collaboration with design and engineering excellence
  • Advance in-vehicle infotainment, safety, efficiency, and enjoyment

About the Role

We're seeking an experienced Cloud Platform and Data Engineering Specialist with expertise in GCP (Google Cloud Platform) or Azure to join our team. The ideal candidate will have a strong background in cloud computing, data engineering, and DevOps.

What You Will Do

1. Cloud Platform Management: Manage and optimize cloud infrastructure (GCP), ensuring scalability, security, and performance.

2. Data Engineering: Design and implement data pipelines, data warehousing, and data processing solutions.

3. Kubernetes and GKE: Develop and deploy applications using Kubernetes and Google Kubernetes Engine (GKE).

4. Python Development: Develop and maintain scripts and applications using Python.
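
For a concrete, purely illustrative flavor of the Python-plus-GCP scripting described above, here is a minimal sketch that loads a CSV from Cloud Storage into BigQuery and runs a sanity-check query. The project, dataset, table, and bucket names are hypothetical, not HARMAN's.

```python
# Illustrative sketch only: load a CSV from Cloud Storage into BigQuery and
# verify the row count. Project, table, and bucket names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")           # hypothetical project
table_id = "example-project.analytics.vehicle_telemetry"       # hypothetical table

# Load a CSV from GCS, letting BigQuery infer the schema.
load_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
load_job = client.load_table_from_uri(
    "gs://example-bucket/telemetry/2024-01-01.csv", table_id, job_config=load_config
)
load_job.result()  # block until the load job finishes

# Simple validation query on the loaded table.
for row in client.query(f"SELECT COUNT(*) AS rows_loaded FROM `{table_id}`").result():
    print(f"rows_loaded={row.rows_loaded}")
```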

    What You Need to Be Successful

    1. Experience: 3-6 years of experience in cloud computing, data engineering, and DevOps.

    2. Technical Skills:

    1. Strong understanding of GCP (Google Cloud Platform) or Azure.

    2. Experience with Kubernetes and GKE.

3. Proficiency in the Python programming language (8/10).

    4. Basic understanding of data engineering and DevOps practices.

    3. Soft Skills:

    1. Excellent problem-solving skills and attention to detail.

    2. Strong communication and collaboration skills.

    Bonus Points if You Have

    1. GCP: Experience with GCP services, including Compute Engine, Storage, and BigQuery.

    2. Data Engineering: Experience with data engineering tools, such as Apache Beam, Dataflow, or BigQuery.

    3. DevOps: Experience with DevOps tools, such as Jenkins, GitLab CI/CD, or Cloud Build.
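
Likewise, a minimal Apache Beam pipeline of the kind that could run on Dataflow (mentioned in the bonus skills above). The input and output paths are hypothetical; without extra options it runs locally on the DirectRunner.

```python
# Illustrative Apache Beam pipeline: count events per user from a JSON-lines file.
# Input and output paths are hypothetical.
import json

import apache_beam as beam

with beam.Pipeline() as pipeline:  # DirectRunner by default; Dataflow via PipelineOptions
    (
        pipeline
        | "ReadEvents" >> beam.io.ReadFromText("events.jsonl")
        | "ParseJson" >> beam.Map(json.loads)
        | "KeyByUser" >> beam.Map(lambda event: (event["user_id"], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "FormatCsv" >> beam.MapTuple(lambda user, count: f"{user},{count}")
        | "WriteCounts" >> beam.io.WriteToText("user_counts")
    )
```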

    What Makes You Eligible

1. GCP Expertise: Strong expertise in GCP is preferred; Azure experience will also be considered.

    2. Python Proficiency: Proficiency in Python programming language is essential.

    3. Kubernetes and GKE: Experience with Kubernetes and GKE is required.

     What We Offer

     - Competitive salary and benefits package

    - Opportunities for professional growth and development

    - Collaborative and dynamic work environment

    - Access to cutting-edge technologies and tools

    - Recognition and rewards for outstanding performance through BeBrilliant

    - Chance to work with a renowned German OEM

- This role requires working five days a week

    You Belong Here

    HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

    About HARMAN: Where Innovation Unleashes Next-Level Technology

    Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected.

    Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other.

    If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!

HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.


    Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, may have filled the role, or may have removed the listing.

    However, we have similar jobs available for you below.


    Python Data Engineer

    Hyderabad, Andhra Pradesh Talent Worx

    Posted today

    Job Description


Talent Worx is a growing services and recruitment consulting firm. We are hiring for our client, a leading Big 4 consulting firm and provider of financial intelligence, data analytics, and AI-driven solutions, empowering businesses worldwide with insights for confident decision-making. Join us to work on cutting-edge technologies, drive digital transformation, and shape the future of global markets. We are looking for a Python Data Engineer with deep expertise in API development, big data processing, and cloud deployment. The ideal candidate will have experience with FastAPI, PySpark, and DevOps pipelines, and be capable of leading a team while delivering high-performance, scalable, data-driven applications.

    Key Responsibilities
    • API Development: Design, build, and maintain scalable APIs using FastAPI and RESTful principles.
    • Big Data Processing: Develop efficient data pipelines using PySpark to process and analyze large-scale datasets.
    • Full-Stack Integration: Collaborate with frontend teams to implement end-to-end feature integration and ensure seamless user experiences.
    • CI/CD Pipelines: Create and manage CI/CD workflows using GitHub Actions and Azure DevOps to support reliable and automated deployments.
    • Containerization: Build and deploy containerized applications using Docker for both development and production environments.
    • Team Leadership: Lead and mentor a team of engineers; conduct code reviews and provide technical guidance to ensure best practices and quality standards.
    • Code Optimization: Write clean, maintainable, and high-performance Python code with a focus on scalability and reusability.
    • Cloud Deployment: Deploy, monitor, and maintain applications in Azure or other cloud platforms ensuring high availability and resilience.
    • Cross-Functional Collaboration: Work with product managers, designers, and other stakeholders to transform business requirements into technical solutions.
    • Documentation: Maintain clear and comprehensive documentation for APIs, systems, and workflows to support ongoing development and maintenance.
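
As a purely illustrative sketch of the FastAPI work described in the responsibilities above (the routes and models are hypothetical, not the client's actual API):

```python
# Minimal FastAPI service sketch; routes and models are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example Data Service")


class Metric(BaseModel):
    name: str
    value: float


_store: dict[str, Metric] = {}  # in-memory stand-in for a real datastore


@app.post("/metrics", status_code=201)
def create_metric(metric: Metric) -> Metric:
    _store[metric.name] = metric
    return metric


@app.get("/metrics/{name}")
def read_metric(name: str) -> Metric:
    if name not in _store:
        raise HTTPException(status_code=404, detail="metric not found")
    return _store[name]
```

Served locally, a sketch like this runs with `uvicorn main:app --reload`.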

    Requirements

    • Programming: Advanced proficiency in Python, with hands-on experience in FastAPI and REST API development.
    • Big Data: Expertise in PySpark and large-scale data processing techniques.
    • DevOps Tools: Strong knowledge of GitHub Actions, Azure DevOps, and Docker.
    • Cloud Platforms: Experience with Azure or similar cloud environments for deployment and scaling.
    • System Integration: Demonstrated experience in backend-to-frontend integration.
    • Leadership: Proven track record of leading and mentoring software development teams.
    • Collaboration: Excellent communication skills.

Note: We are looking for immediate joiners only, i.e., candidates who can join us in less than a month.

    Python + Data Engineer

    Mumbai, Maharashtra Wissen

    Posted today

    Job Description

Wissen Technology is hiring for Python + Data Engineer

    About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview: We are seeking a skilled and innovative Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
    Experience: 5-9 Years

    Location: Mumbai

    Key Responsibilities

  • Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
  • Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
  • Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
  • Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
  • Ensure data quality and consistency by implementing validation and governance practices.
  • Work on data security best practices in compliance with organizational policies and regulations.
  • Automate repetitive data engineering tasks using Python scripts and frameworks.
  • Leverage CI/CD pipelines for deployment of data workflows on AWS.
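
A minimal, purely illustrative sketch of the kind of Python/boto3 scripting implied by the responsibilities above. The bucket, keys, and columns are hypothetical, and writing Parquet this way assumes pyarrow is installed.

```python
# Illustrative only: pull a CSV from S3, apply a small pandas transform,
# and write the result back to a curated prefix. Names are hypothetical.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
BUCKET = "example-data-lake"  # hypothetical bucket

raw = s3.get_object(Bucket=BUCKET, Key="raw/orders/2024-01-01.csv")
orders = pd.read_csv(io.BytesIO(raw["Body"].read()))

# Basic cleanup and aggregation.
orders["order_date"] = pd.to_datetime(orders["order_date"])
daily = orders.groupby("order_date", as_index=False)["amount"].sum()

# Write Parquet back to S3 (requires pyarrow).
buffer = io.BytesIO()
daily.to_parquet(buffer, index=False)
s3.put_object(Bucket=BUCKET, Key="curated/orders/daily_amounts.parquet", Body=buffer.getvalue())
```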

Required Skills:

  • Professional Experience: 5+ years of experience in data engineering or a related field.
  • Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
  • AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
      - AWS Glue for ETL/ELT.
      - S3 for storage.
      - Redshift or Athena for data warehousing and querying.
      - Lambda for serverless compute.
      - Kinesis or SNS/SQS for data streaming.
      - IAM roles for security.
  • Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
  • Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
  • DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
  • Version Control: Proficient with Git-based workflows.
  • Problem Solving: Excellent analytical and debugging skills.
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation.

    Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. 

We have been certified as a Great Place to Work company for two consecutive years and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work Certification is recognized the world over by employees and employers alike and is considered the ‘Gold Standard’. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.



    Python Data Engineer

    Bengaluru, Karnataka Crazy Solutions

    Posted today

    Job Description

    Python Data Engineer

Job Title: Data Engineer

Job Summary: We are looking for a proficient Data Engineer with expertise in Amazon Redshift, Python, Apache Airflow, dbt (Data Build Tool), API integration, and AWS. This role will be responsible for developing and maintaining scalable data pipelines, integrating data from multiple sources, and ensuring that our data architecture supports business intelligence, reporting, and analytics requirements. You will collaborate with cross-functional teams to build and optimize our data infrastructure and provide clean, high-quality data to the business.

Key Responsibilities:
• Data Pipeline Development: Build and maintain robust ETL/ELT pipelines using Python, Apache Airflow, and dbt to extract, transform, and load data from various sources into Amazon Redshift.
• Amazon Redshift Management: Design, optimize, and maintain Amazon Redshift clusters, ensuring the warehouse is capable of handling large-scale data efficiently.
• API Integration: Develop solutions to integrate external APIs for data ingestion, ensuring proper data extraction, transformation, and integration into our data infrastructure.
• Data Modeling: Create and maintain scalable data models in Redshift that support analytics and reporting needs, including designing star and snowflake schemas for optimized querying.
• AWS Infrastructure Management: Leverage AWS services such as S3, Lambda, EC2, and CloudWatch to build and maintain a scalable and cost-efficient data architecture.
• dbt (Data Build Tool): Use dbt to manage and automate SQL transformations, ensuring modular, reusable, and well-documented data transformation logic.
• Workflow Orchestration: Utilize Apache Airflow to orchestrate and automate data workflows, ensuring reliable data pipelines and scheduled jobs.
• Data Quality & Testing: Implement and maintain data validation checks and testing frameworks to ensure data integrity, accuracy, and compliance across all data pipelines.
• Collaboration: Work closely with data scientists, analysts, and product teams to understand data needs and provide technical solutions that meet business objectives.
• Performance Optimization: Tune SQL queries and manage the performance of Redshift clusters to ensure fast, efficient data access and analysis.
• Data Governance: Enforce data governance policies to ensure compliance with security, privacy, and data quality standards throughout the data lifecycle.

Key Skills & Qualifications:
• Bachelor's/Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in data engineering with expertise in Amazon Redshift, Python, and AWS.
• Strong experience with Apache Airflow for workflow scheduling and orchestration.
• Hands-on experience with dbt (Data Build Tool) for managing SQL transformations and data models.
• Proficiency in API development and integration, including the use of RESTful APIs for data ingestion.
• Extensive experience with AWS services such as S3, Lambda, EC2, RDS, and CloudWatch.
• Expertise in data modeling concepts and designing efficient data structures (e.g., star schemas, snowflake schemas) in a data warehouse environment.
• Advanced knowledge of SQL for querying and optimizing large datasets in Redshift.
• Experience building ETL/ELT pipelines and integrating data from multiple sources, including structured and unstructured data.
• Familiarity with version control systems like Git and best practices for code management and deployment automation.
• Knowledge of data governance principles, including data security, privacy, and quality control.

Preferred Qualifications:
• Experience with real-time data processing tools such as Kafka or Kinesis.
• Familiarity with data visualization tools like Tableau, Looker, or Power BI.
• Knowledge of other data warehousing solutions like Snowflake or Google BigQuery.
• Experience with DevOps practices for managing infrastructure and CI/CD pipelines (Docker, Kubernetes).
• Understanding of machine learning pipelines and how data engineering supports AI/ML initiatives.

Soft Skills:
• Strong analytical and problem-solving skills.
• Ability to work independently and as part of a cross-functional team.
• Strong written and verbal communication skills, with the ability to explain technical concepts to non-technical stakeholders.
• Detail-oriented, proactive, and self-motivated with a focus on continuous improvement.
• Strong organizational and project management skills to handle multiple tasks.
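
For illustration only, a minimal Apache Airflow DAG in the spirit of the orchestration and dbt responsibilities above. The DAG id, schedule, task logic, and dbt selector are hypothetical, not this employer's actual pipeline.

```python
# Illustrative Airflow DAG sketch: ingest from an API, then run dbt models.
# All names (dag_id, task ids, dbt selector) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_from_api() -> None:
    # Placeholder for requests/boto3 logic that lands raw data in a staging area.
    print("extracting from source API...")


with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_api", python_callable=extract_from_api)
    run_dbt = BashOperator(task_id="run_dbt_models", bash_command="dbt run --select staging")

    extract >> run_dbt
```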

    Job Type


    Payroll

    Categories

    Data Engineer (Software and Web Development)

    Cloud Architects (Software and Web Development)

    DevOps Engineers (Software and Web Development)

    Database Administrator (Software and Web Development)

    Software Engineer (Software and Web Development)

    Must have Skills
    • Python - 3 Years
    • Amazon Redshift - 3 Years
    • AWS - 3 Years
    • Apache Airflow - 1 Year
    • ETL (Extract, Transform, Load) - 1 Year
    • Kubernetes - 1 Year



    Python + Data Engineer

    Bengaluru, Karnataka Wissen

    Posted today

    Job Description

Wissen Technology is hiring for Python + Data Engineer

About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview: We are seeking a skilled and innovative Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Experience: 5-9 Years

Location: Bangalore

    Key Responsibilities

  • Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
  • Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
  • Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
  • Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
  • Ensure data quality and consistency by implementing validation and governance practices.
  • Work on data security best practices in compliance with organizational policies and regulations.
  • Automate repetitive data engineering tasks using Python scripts and frameworks.
  • Leverage CI/CD pipelines for deployment of data workflows on AWS.
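
A minimal, purely illustrative PySpark sketch of the pipeline work described above; the S3 paths and column names are hypothetical.

```python
# Illustrative PySpark job: read raw events, aggregate, and write partitioned Parquet.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

events = spark.read.json("s3a://example-raw/events/")  # hypothetical input

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("events"))
)

daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-curated/daily_event_counts/"  # hypothetical output
)
spark.stop()
```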

Required Skills:

  • Professional Experience: 5+ years of experience in data engineering or a related field.
  • Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
  • AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
      - AWS Glue for ETL/ELT.
      - S3 for storage.
      - Redshift or Athena for data warehousing and querying.
      - Lambda for serverless compute.
      - Kinesis or SNS/SQS for data streaming.
      - IAM roles for security.
  • Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
  • Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
  • DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
  • Version Control: Proficient with Git-based workflows.
  • Problem Solving: Excellent analytical and debugging skills.
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation.

    Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. 

We have been certified as a Great Place to Work company for two consecutive years and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work Certification is recognized the world over by employees and employers alike and is considered the ‘Gold Standard’. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.



    Python Data Engineer

    Ahvi Infotech

    Posted 2 days ago

    Job Description

Full-time
Python Backend Developer


    Experience:
    • 7+ years of experience in application development using Python and backend technologies.
    • Strong ability to understand business requirements and translate them into well-defined technical solutions.
    Key Skills & Requirements:

    Backend Development:

    • Expertise in Python backend development with frameworks like Tornado and FastAPI.
    • Strong understanding of RESTful APIs, token authentication, and data compression.
    • Experience in working with scalable application architectures.
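
To illustrate the token-authentication point above, a minimal FastAPI bearer-token guard; the static token and route are hypothetical, and a real service would validate a signed JWT or an issued API key instead.

```python
# Minimal bearer-token guard for a FastAPI route; token and route are hypothetical.
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer_scheme = HTTPBearer()
EXPECTED_TOKEN = "example-secret-token"  # placeholder; validate a real JWT in practice


def require_token(credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme)) -> None:
    # Reject requests whose bearer token does not match the expected value.
    if credentials.credentials != EXPECTED_TOKEN:
        raise HTTPException(status_code=401, detail="invalid token")


@app.get("/reports/latest", dependencies=[Depends(require_token)])
def latest_report() -> dict:
    return {"status": "ok", "rows": 1024}
```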

    Database & Cloud:

    • Experience working with RDBMS models in cloud environments.
    • Understanding of databases like ClickHouse, MS SQL, PostgreSQL, and Snowflake (added advantage).
    • Basic knowledge of cloud technologies such as Databricks, ADLS, and OAuth/Secrets.

    Software Development & DevOps:

    • Experience with version control systems (Bitbucket, Git CLI) and issue tracking systems (JIRA).
    • Hands-on experience with DevOps tools for CI/CD, build, and deployment.
    • Familiarity with private package registries like JFrog.

    Soft Skills & Methodologies:

    • Strong problem-solving, debugging, and investigative skills.
    • Ability to analyze log files, error messages, and performance issues to find root causes.
    • Experience working in Agile environments at scale.
    • Excellent verbal and written communication skills.





    Python - Data Engineer/Consultant Specialist

    Hyderabad, Andhra Pradesh HSBC

    Posted today

    Job Description

    Some careers shine brighter than others.

    If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

    HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

    We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

    In this role, you will:

  • Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophesy.
  • Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
  • Develop and optimize complex SQL queries and Python-based data transformation logic.
  • Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
  • Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
  • Ensure data quality, security, and compliance with best practices.
  • Monitor and troubleshoot performance issues in data pipelines.
  • Collaborate with cross-functional teams to define data requirements and strategies.
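
A minimal, purely illustrative sketch of the PySpark-plus-SQL transformation work described above, as it might look in an Azure Databricks job; the table and column names are hypothetical.

```python
# Illustrative transformation: keep the latest record per account with a window function.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("example-transform").getOrCreate()

trades = spark.table("raw.trades")  # hypothetical source table

# Rank trades per account by timestamp and keep only the most recent one.
w = Window.partitionBy("account_id").orderBy(F.col("trade_ts").desc())
latest = (
    trades
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

latest.write.mode("overwrite").saveAsTable("curated.latest_trades")  # hypothetical target
```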

Requirements

    To be successful in this role, you should meet the following requirements:

  • 6+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
  • Hands-on experience with Prophesy for data pipeline development.
  • Proficiency in Python for data processing and transformation.
  • Experience with Apache Airflow for workflow orchestration.
  • Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
  • Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
  • Solid understanding of data modelling, warehousing, and performance optimization.
  • Ability to work in an agile environment and manage multiple priorities effectively.
  • Excellent problem-solving skills and attention to detail.
  • Experience with Delta Lake and Lakehouse architecture.
  • Hands-on experience with Terraform or Infrastructure as Code (IaC).
  • Understanding of machine learning workflows in a data engineering context.

    Python - Data Engineer/Consultant Specialist

    Hyderabad, Andhra Pradesh HSBC

    Posted today

    Job Description

    Some careers shine brighter than others.

    If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

    HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

    We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

    In this role, you will:

  • Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophesy.
  • Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
  • Develop and optimize complex SQL queries and Python-based data transformation logic.
  • Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
  • Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
  • Ensure data quality, security, and compliance with best practices.
  • Monitor and troubleshoot performance issues in data pipelines.
  • Collaborate with cross-functional teams to define data requirements and strategies.

Requirements

    To be successful in this role, you should meet the following requirements:

  • 5+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
  • Hands-on experience with Prophesy for data pipeline development.
  • Proficiency in Python for data processing and transformation.
  • Experience with Apache Airflow for workflow orchestration.
  • Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
  • Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
  • Solid understanding of data modelling, warehousing, and performance optimization.
  • Ability to work in an agile environment and manage multiple priorities effectively.
  • Excellent problem-solving skills and attention to detail.
  • Experience with Delta Lake and Lakehouse architecture.
  • Hands-on experience with Terraform or Infrastructure as Code (IaC).
  • Understanding of machine learning workflows in a data engineering context.
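
Because the requirements above mention Delta Lake, here is a minimal, purely illustrative PySpark upsert (MERGE) into a Delta table; it assumes the delta-spark package is available, and the storage paths and column names are hypothetical.

```python
# Illustrative Delta Lake upsert (MERGE) with PySpark; requires the delta-spark package.
# Storage paths and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("example-delta-merge")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("abfss://landing@example.dfs.core.windows.net/customers/")

target = DeltaTable.forPath(spark, "abfss://lake@example.dfs.core.windows.net/dim_customers")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```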
