Data Engineering

Coimbatore, Tamil Nadu EXL

Posted 1 day ago

Job Description

Responsibilities:

  • Work with stakeholders to understand data requirements and to design, develop, and maintain complex ETL processes.
  • Create data integration and data-diagram documentation.
  • Lead data validation, UAT, and regression testing for new data asset creation.
  • Create and maintain data models, including schema design and optimization.
  • Create and manage data pipelines that automate the flow of data, ensuring data quality and consistency.

Qualifications and Skills:

  • Strong knowledge of Python and PySpark
  • Ability to write PySpark scripts to develop data workflows.
  • Strong knowledge of SQL, Hadoop, Hive, Azure, Databricks, and Greenplum
  • Ability to write SQL to query metadata and tables from different data management systems such as Oracle, Hive, Databricks, and Greenplum.
  • Familiarity with big data technologies such as Hadoop, Spark, and distributed computing frameworks.
  • Ability to use Hue to run Hive SQL queries and to schedule Apache Oozie jobs that automate data workflows.
  • Solid experience communicating with stakeholders and collaborating effectively with business teams on data testing.
  • Strong problem-solving and troubleshooting skills.
  • Ability to establish comprehensive data quality test cases and procedures and to implement automated data validation processes.
  • Degree in Data Science, Statistics, Computer Science, or a related field, or an equivalent combination of education and experience.
  • 3-7 years of experience as a Data Engineer.
  • Proficiency in programming languages commonly used in data engineering, such as Python, PySpark, and SQL.
  • Experience with the Azure cloud platform, such as developing ETL processes using Azure Data Factory and big data processing and analytics with Azure Databricks.
  • Strong communication, problem-solving, and analytical skills, with time management and multitasking ability and attention to detail and accuracy.
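The automated data-validation work listed above typically boils down to a handful of repeatable checks. A minimal, dependency-free sketch, using Python's built-in sqlite3 as a local stand-in for a Hive or Greenplum table (the table name, columns, and checks are hypothetical):

```python
import sqlite3

# Hypothetical staging table standing in for a Hive/Greenplum table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, None), (2, 5.0)])

def run_quality_checks(conn, table, key_col, not_null_col):
    """Automated data-quality checks: row count, duplicate keys, null count."""
    rows = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    dupes = conn.execute(
        f"SELECT COUNT(*) FROM (SELECT {key_col} FROM {table} "
        f"GROUP BY {key_col} HAVING COUNT(*) > 1)").fetchone()[0]
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {not_null_col} IS NULL"
    ).fetchone()[0]
    return {"row_count": rows, "duplicate_keys": dupes, "null_amounts": nulls}

report = run_quality_checks(conn, "orders", "order_id", "amount")
print(report)  # {'row_count': 3, 'duplicate_keys': 1, 'null_amounts': 1}
```

In practice the same checks would run as scheduled PySpark or Hive SQL jobs against production tables, with failures routed to the data testing team.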
This advertiser has chosen not to accept applicants from your region.

Azure Data Engineering

Coimbatore, Tamil Nadu LTIMindtree

Posted 4 days ago

Job Description

About the job


Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client – providing an engaging product experience through a best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make informed purchase decisions – will surely be a fulfilling experience.


Location: Coimbatore


Please apply through the link below.

https://forms.office.com/r/0y3W38SkZH


Please share your resume on email:


Responsibilities

Develop scalable pipelines to efficiently process and transform data using Spark

Design and develop a scalable and robust framework for generating PDF reports using Python and Spark

Use Snowflake and Spark SQL to perform aggregations on high volumes of data

Develop stored procedures, views, indexes, triggers, and functions in the Snowflake database to maintain data and share it with downstream applications in the form of APIs

Use Snowflake features (Streams, Tasks, Snowpipe, etc.) wherever needed in the development flow

Leverage Azure Databricks and Data Lake for data processing and storage

Develop APIs using Python's Flask framework to support front-end applications

Collaborate with architects and business stakeholders to understand reporting requirements

Maintain and improve existing reporting pipelines and infrastructure
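The aggregation step described above can be illustrated locally. A minimal sketch using Python's sqlite3 as a dependency-free stand-in for a Snowflake/Spark SQL aggregation (the table and columns are hypothetical):

```python
import sqlite3

# In-memory database standing in for a Snowflake warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# Aggregate revenue per region, as a Snowflake or Spark SQL query would.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 150.0), ('south', 75.0)]
```

On the real platform the same GROUP BY would run distributed over high data volumes, and the result would typically be exposed to downstream applications via a view or an API.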

Qualifications

Proven experience as a Data Engineer with a strong understanding of data pipelines and ETL processes

Proficiency in Python with experience in data manipulation libraries such as Pandas and Numpy

Experience with SQL Snowflake Spark for data querying and aggregations

Familiarity with Azure cloud services such as Data Factory Databricks and Datalake

Experience developing APIs using frameworks like Flask is a plus

Excellent communication and collaboration skills

Ability to work independently and manage multiple tasks effectively


Mandatory Skills: Python, SQL, Spark, Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Service Bus, and Azure Event Hubs


Why join us?

  • Work in industry leading implementations for Tier-1 clients
  • Accelerated career growth and global exposure
  • Collaborative, inclusive work environment rooted in innovation
  • Exposure to best-in-class automation framework
  • Innovation first culture: We embrace automation, AI insights and clean data


Know someone who fits this perfectly? Tag them – let’s connect the


Data Engineering - Lead

Coimbatore, Tamil Nadu Verdantas

Posted today

Job Description

Join Verdantas – a Top ENR #81 Firm Driving Sustainable Progress

We are seeking an Engineering Lead in Pune to architect and implement scalable, secure AI/ML data solutions aligned with business objectives. The role involves designing robust batch and real-time data architectures; leading ETL/ELT pipeline development; managing data lakes and warehouses; ensuring data quality and governance; mentoring data engineers; collaborating with stakeholders; and deploying cloud-based data infrastructure with CI/CD and cost optimization. Candidates should have 8+ years in data engineering with leadership experience and strong skills in SQL, Python, distributed processing frameworks, and cloud-native data services. Preferred certifications include Google Professional Data Engineer, Azure Data Engineer Associate, and AWS Certified Data Analytics – Specialty. The position requires in-office work in Pune, Maharashtra, India.


Key Responsibilities :

Architecture & Strategy

  • Design and implement robust data architectures (batch and real-time) using modern data platforms.
  • Define data engineering standards, best practices, and governance policies.
  • Evaluate and recommend tools and technologies for data ingestion, storage, processing, and orchestration.

Development & Operations

  • Lead the development of ETL/ELT pipelines using tools like Apache Spark, Airflow, dbt, or Azure Data Factory.
  • Build and manage data lakes, data warehouses, and data marts.
  • Ensure data quality, lineage, and observability across pipelines.
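An ETL/ELT pipeline of the kind described above is, at bottom, an ordered chain of dependent tasks. A minimal, dependency-free sketch of that extract-transform-load pattern (the task bodies are hypothetical; in practice an orchestrator such as Airflow or dbt manages the dependency graph, retries, and scheduling):

```python
# Minimal ETL pipeline: each stage is a function, and run_pipeline() wires
# them in dependency order, mimicking the extract -> transform -> load
# chain an orchestrator would manage.

def extract():
    # Stand-in for reading from a source system (API, file, database).
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "4.5"}]

def transform(records):
    # Cast types and derive fields; skip records that fail validation.
    out = []
    for r in records:
        try:
            out.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine bad records
    return out

def load(records, target):
    # Stand-in for writing to a warehouse or data mart.
    target.extend(records)
    return len(records)

def run_pipeline():
    warehouse = []
    loaded = load(transform(extract()), warehouse)
    return loaded, warehouse

print(run_pipeline())  # (2, [{'id': 1, 'amount': 10.5}, {'id': 2, 'amount': 4.5}])
```

Keeping each stage a pure function with explicit inputs and outputs is what makes pipelines like this observable and testable, which is the point of the data-quality and lineage requirements above.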

Team Leadership

  • Mentor and guide data engineers, fostering a culture of technical excellence and continuous improvement.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions.
  • Manage project timelines, resource allocation, and delivery milestones.

Cloud & DevOps Integration

  • Deploy and manage data infrastructure on cloud platforms (AWS, Azure, GCP).
  • Implement CI/CD pipelines for data workflows and infrastructure as code (IaC).
  • Optimize performance and cost of data systems.

Skills & Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 8+ years of experience in data engineering, with 2+ years in a leadership role.
  • Strong expertise in SQL, Python, and distributed data processing frameworks (Spark, Flink).
  • Experience with cloud-native data services (e.g., AWS Glue, Azure Synapse, BigQuery).
  • Familiarity with data modeling, data governance, and metadata management.
  • Excellent problem-solving, communication, and stakeholder management skills.

Preferred Certifications:

  • Google Professional Data Engineer
  • Microsoft Certified: Azure Data Engineer Associate
  • AWS Certified Data Analytics – Specialty


Why Join Us?

At our Pune office, you’ll be part of a vibrant, innovative environment that fuses local excellence with global impact. We foster a people-first culture and empower our employees with tools, support, and opportunities to thrive.


What We Offer:

  • Be part of a global vision with the agility of a local team.
  • Work on high-impact projects that shape industries and communities.
  • Thrive in a collaborative and dynamic office culture.
  • Access continuous learning and professional development programs.
  • Grow with clear paths for career progression and recognition.
  • An employee-centric approach that values your well-being and ideas.


Ready to Build the Future with Us?

Join us at Verdantas, and make a meaningful impact—professionally and environmentally. Be part of a visionary team driving innovation, sustainability, and transformative solutions that shape the future.


Data Engineering with GCP

Coimbatore, Tamil Nadu People Prime Worldwide

Posted today

Job Description

Client: LTIMindtree

Job Type: Contract

Role: Data Engineering with GCP


Experience: 6 to 12 years

Work Location: Bengaluru, Gurugram


Payroll on: People Prime Worldwide

Notice period: 0 to 15 days


Job description:


We are looking for 4 consultants and require only immediate joiners.

We cater to business needs, and in order to comply there is a set platform, ODL, where all the variables are derived; using those variables we enable thresholds, and referrals get generated at the Lucy end. Along with TM, we are creating data layers for different portfolios such as Sanctions, Cadence, and Payments. Hence we are looking only for tenured candidates who can understand the technical needs and comply with regulators. Below is the detailed JD:

1. Good experience with SQL queries

2. Understanding of databases, data flows, and data architecture

3. Experience with Python and the GCP platform

4. Willingness to collaborate with cross-functional teams to drive validation and project execution

5. Review and evaluate policies and procedures

6. Develop, implement, monitor, and manage compliance needs as per the AEMP70 policy
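The threshold-and-referral flow described above can be sketched in outline. A minimal illustration in pure Python (the variable derivation, threshold value, and referral rule are all hypothetical stand-ins for the platform's derived variables):

```python
# Derive monitoring variables from raw transactions, then flag referrals
# whenever a derived variable breaches its configured threshold.

transactions = [
    {"account": "A1", "amount": 4000},
    {"account": "A1", "amount": 7000},
    {"account": "A2", "amount": 1500},
]

# Derived variable: total amount per account (a stand-in for the
# platform-derived variables mentioned in the description).
totals = {}
for txn in transactions:
    totals[txn["account"]] = totals.get(txn["account"], 0) + txn["amount"]

THRESHOLD = 10_000  # hypothetical referral threshold

# Any account whose derived total breaches the threshold is referred.
referrals = [acct for acct, total in totals.items() if total > THRESHOLD]
print(referrals)  # ['A1']
```

On the actual platform the derivation would be SQL over GCP-hosted tables and the referral generation would happen downstream, but the derive-then-compare shape is the same.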

Skills

Mandatory Skills: Apache Spark, Java, Python, Scala, Spark SQL, Databricks


Associate Architect - Data Engineering

Coimbatore, Tamil Nadu Response Informatics

Posted 3 days ago

Job Description

About the Role:

We are seeking an experienced Data Architect to lead the transformation of enterprise data solutions, with a strong focus on migrating Alteryx workflows into Azure Databricks. The ideal candidate will have deep expertise in the Microsoft Azure ecosystem, including Azure Data Factory, Databricks, Synapse Analytics, and Microsoft Fabric, and a strong background in data architecture, governance, and distributed computing. This role requires both strategic thinking and hands-on architectural leadership to ensure scalable, secure, and high-performance data solutions.


Key Responsibilities:

  • Define the overall migration strategy for transforming Alteryx workflows into scalable, cloud-native data solutions on Azure Databricks.
  • Architect end-to-end data frameworks leveraging Databricks, Delta Lake, Azure Data Lake, and Synapse.
  • Establish best practices, standards, and governance frameworks for pipeline design, orchestration, and data lifecycle management.
  • Guide engineering teams in re-engineering Alteryx workflows into distributed Spark-based architectures.
  • Collaborate with business stakeholders to ensure solutions align with analytics, reporting, and advanced AI/ML initiatives.
  • Oversee data quality, lineage, and security compliance across the data ecosystem.
  • Drive CI/CD adoption, automation, and DevOps practices for Azure Databricks and related services.
  • Provide architectural leadership, design reviews, and mentorship to engineering and analytics teams.
  • Optimize solutions for performance, scalability, and cost-efficiency within Azure.
  • Participate in enterprise architecture forums and influence data strategy across the organization.


Required Skills and Qualifications:

  • 10+ years of experience in data architecture, engineering, or solution design.
  • Proven expertise in Alteryx workflows and their modernization into Azure Databricks (Spark, PySpark, SQL, Delta Lake).
  • Deep knowledge of the Microsoft Azure data ecosystem:
    o Azure Data Factory (ADF)
    o Azure Synapse Analytics
    o Microsoft Fabric
    o Azure Databricks
  • Strong background in data governance, lineage, security, and compliance frameworks.
  • Demonstrated experience in architecting data lakes, data warehouses, and analytics platforms.
  • Proficiency in Python, SQL, and Apache Spark for prototyping and design validation.
  • Excellent leadership, communication, and stakeholder management skills.


Preferred Qualifications:

  • Microsoft Azure certifications (e.g., Azure Solutions Architect Expert, Azure Data Engineer Associate).
  • Experience leading large-scale migration programs or modernization initiatives.
  • Familiarity with enterprise architecture frameworks (TOGAF, Zachman).
  • Exposure to machine learning enablement on Azure Databricks.
  • Strong understanding of Agile delivery and working in multi-disciplinary teams.

Senior Manager - Data Engineering Lead

Coimbatore, Tamil Nadu DIAGEO India

Posted 22 days ago

Job Description

Job Title: Senior Manager - Data Engineering Lead


Qualification: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.


Required skillset:

  • Experience in data engineering.
  • Proven experience with cloud platforms (AWS, Azure, or GCP) and data services (Glue, Synapse, BigQuery, Databricks, etc.).
  • Hands-on experience with tools like Apache Spark, Kafka, Airflow, dbt, and modern orchestration platforms.
  • Technical Skills
  • Proficient in SQL and Python/Scala/Java.
  • Strong understanding of modern cloud data warehouse concepts (e.g., Snowflake, Redshift, BigQuery).
  • Familiarity with CI/CD, Infrastructure as Code (e.g., Terraform), and DevOps for data.

Nice to Have:

  • Prior experience working in a regulated industry (alcohol, pharma, tobacco, etc.).
  • Exposure to demand forecasting, route-to-market analytics, or distributor performance management.
  • Knowledge of CRM, ERP, or supply chain systems (e.g., Salesforce, SAP, Oracle).
  • Familiarity with marketing attribution models and campaign performance tracking.


Preferred Attributes:

  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder engagement abilities.
  • Passion for data-driven innovation and delivering business impact.
  • Certification in cloud platforms or data engineering (e.g., Google Cloud Professional Data Engineer).

Key Accountabilities:

  • Design and implement scalable, high-performance data architecture solutions aligned with enterprise strategy.
  • Define standards and best practices for data modelling, metadata management, and data governance.
  • Collaborate with business stakeholders, data scientists, and application architects to align data infrastructure with business needs.
  • Guide the selection of technologies, including cloud-native and hybrid data architecture patterns (e.g., Lambda/Kappa architectures).
  • Lead the development, deployment, and maintenance of end-to-end data pipelines using ETL/ELT frameworks.
  • Manage ingestion from structured and unstructured data sources (APIs, files, databases, streaming sources).
  • Optimize data workflows for performance, reliability, and cost efficiency.
  • Ensure data quality, lineage, cataloging, and security through automated validation and monitoring.


  • Oversee data lake design, implementation, and daily operations (e.g., Azure Data Lake, AWS S3, GCP BigLake).
  • Implement access controls, data lifecycle management, and partitioning strategies.
  • Monitor and manage performance, storage costs, and data availability in real time.
  • Ensure compliance with enterprise data policies and regulatory requirements (e.g., GDPR, CCPA).
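Partitioning strategies like those mentioned above most commonly lay data out by date. A minimal sketch of a Hive-style, date-partitioned lake layout using only the standard library (the paths, field names, and records are hypothetical):

```python
import csv
import tempfile
from pathlib import Path

# Records to land in the lake, each tagged with its event date.
records = [
    {"event_date": "2024-05-01", "user": "u1", "value": "10"},
    {"event_date": "2024-05-01", "user": "u2", "value": "7"},
    {"event_date": "2024-05-02", "user": "u1", "value": "3"},
]

lake_root = Path(tempfile.mkdtemp())  # stand-in for a data lake container

# Hive-style partitioning: one directory per event_date value, so queries
# filtered on event_date only touch the matching directories.
for rec in records:
    part_dir = lake_root / f"event_date={rec['event_date']}"
    part_dir.mkdir(exist_ok=True)
    out_file = part_dir / "part-0000.csv"
    is_new = not out_file.exists()
    with out_file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["user", "value"])
        if is_new:
            writer.writeheader()
        writer.writerow({"user": rec["user"], "value": rec["value"]})

partitions = sorted(p.name for p in lake_root.iterdir())
print(partitions)  # ['event_date=2024-05-01', 'event_date=2024-05-02']
```

The same `key=value` directory convention is what Azure Data Lake, S3, and BigLake engines use for partition pruning, and it also gives lifecycle policies a natural unit (e.g., expire partitions older than N days).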


  • Lead and mentor a team of data engineers and architects.
  • Establish a culture of continuous improvement, innovation, and operational excellence.
  • Work closely with IT, DevOps, and InfoSec teams to ensure secure and scalable infrastructure.


Flexible Working Statement: Flexibility is key to our success. From part-time and compressed hours to different locations, our people work flexibly in ways to suit them. Talk to us about what flexibility means to you so that you’re supported from day one.


Diversity statement: Our purpose is to celebrate life, every day, everywhere. And creating an inclusive culture, where everyone feels valued and that they can belong, is a crucial part of this.

We embrace diversity in the broadest possible sense. This means that you’ll be welcomed and celebrated for who you are just by being you. You’ll be part of and help build and champion an inclusive culture that celebrates people of different gender, ethnicity, ability, age, sexual orientation, social class, educational backgrounds, experiences, mindsets, and more.

Our ambition is to create the best performing, most trusted and respected consumer products companies in the world. Join us and help transform our business as we take our brands to the next level and build new ones as part of shaping the next generation of celebrations for consumers around the world.

This advertiser has chosen not to accept applicants from your region.

Learning Support Specialist (AI, ML, Data Science, Data Engineering)

Coimbatore, Tamil Nadu Emeritus

Posted today

Job Description

About Emeritus:


Emeritus is committed to teaching the skills of the future by making high-quality education accessible and affordable to individuals, companies, and governments around the world. It does this by collaborating with more than 50 top-tier universities across the United States, Europe, Latin America, Southeast Asia, India and China.

Emeritus’ short courses, degree programs, professional certificates, and senior executive programs help individuals learn new skills and transform their lives, companies and organizations. Its unique model of state-of-the-art technology, curriculum innovation, and hands-on instruction from senior faculty, mentors and coaches has educated more than 250,000 individuals across 80+ countries.

Founded in 2015, Emeritus, part of Eruditus Group, has more than 2,000 employees globally and offices in Mumbai, New Delhi, Shanghai, Singapore, Palo Alto, Mexico City, New York, Boston, London, and Dubai. Following its $650 million Series E funding round in August 2021, the Company is valued at $3.2 billion, and is backed by Accel, SoftBank Vision Fund 2, the Chan Zuckerberg Initiative, Leeds Illuminate, Prosus Ventures, Sequoia Capital India, and Bertelsmann.



About the Role:


The Learning Support Specialist serves as both a subject matter expert and mentor, playing a pivotal role in the learning experience. You will guide learners through their educational journey in programs focused on one or more areas including machine learning, artificial intelligence, data engineering, data science, and data analytics, supporting learners from beginners to career-advancing professionals.


Day-to-day, you will:

  • Respond to learner questions with clear, actionable guidance
  • Provide constructive feedback on assignments and projects
  • Break down complex technical concepts into digestible explanations
  • Mentor learners at varying experience levels, ensuring each feels supported and motivated
  • Collaborate with internal teams to identify course improvements
  • Help resolve delivery challenges and escalations


What we’re looking for: A professional who combines deep technical expertise with strong interpersonal skills and genuine passion for education. The ideal candidate seamlessly blends technical knowledge with empathy and exceptional communication abilities.


This is a full-time, remote position in a dynamic edtech environment where learner success is a top priority.


Skills and Qualifications:


  • We’re looking for candidates with ANY ONE of these backgrounds:
  • Professional experience: 2+ years in data engineering, data science, or data analytics, OR
  • Academic background: PhD (or pursuing a PhD) in computer science with a focus on data-related specialties, OR
  • Teaching experience: teaching, tutoring, or teaching-assistant experience in data, mathematics, or ML/AI, OR
  • Support experience: learning support in data-related technical bootcamps or higher education
  • Strong background in mathematics (statistics, calculus, linear algebra).
  • Strong academic or professional grounding in machine learning and artificial intelligence.
  • Proficiency in Python and libraries such as NumPy, Pandas (JavaScript experience is a plus).
  • Familiarity with GitHub and version control workflows.
  • Experience with at least one data visualization tool (e.g., Tableau, Power BI).
  • Comfort with cloud platforms (Azure preferred) is optional but advantageous.
  • Strong written and verbal communication skills for working with a diverse learner base.
  • Experience with learning management systems (e.g., Canvas) is a plus.


Preferred Qualifications:


  • Familiarity with Slack, Teams, or similar collaboration tools.
  • Experience with support/service software or ticketing systems.
  • Exposure to bug-tracking and feedback tools.


Emeritus provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.


In press:


Senior Full Stack SDE with Data Engineering for Analytics

Coimbatore, Tamil Nadu Truckmentum

Posted today

Job Description

Summary

Truckmentum is seeking a Senior Full Stack Software Development Engineer (SDE) with deep data engineering experience to help us build cutting-edge software and data infrastructure for our AI-driven Trucking Science-as-a-Service platform. We’re creating breakthrough data science to transform trucking — and we’re looking for engineers who share our obsession with solving complex, real-world problems with software, data, and intelligent systems.


You’ll be part of a team responsible for the development of dynamic web applications, scalable data pipelines, and high-performance backend services that drive better decision-making across the $4 trillion global trucking industry. This is a hands-on role focused on building solutions by combining Python-based full stack development with scalable, modern data engineering.


About Truckmentum

Just about every sector of the global economy depends on trucking. In the US alone, trucks move 70%+ of all freight by weight (90%+ by value) and account for $40 billion in annual spending (globally $4+ trillion per year). Despite this, almost all key decisions in trucking are made manually by people with limited decision support. This results in significant waste and lost opportunities. We view this as a great opportunity.


Truckmentum is a self-funded seed stage venture. We are now validating our key data science breakthroughs with customer data and our MVP product launch to confirm product-market fit. We will raise $4-6 million in funding this year to scale our Data Science-as-a-Service platform and bring our vision to market at scale.


Our Vision and Approach to Technology

The back of our business cards reads “Moneyball for Trucking”, which means quantifying hard-to-quantify hidden insights, and then using those insights to make much better business decisions. If you don’t want “Moneyball for Trucking” on the back of your business card, then Truckmentum isn’t a good fit.


Great technology begins with customer obsession. We are obsessed with trucking companies' needs, opportunities, and processes, and with building our solutions into the rhythm of their businesses. We prioritize rapid development and iteration on large-scale, complex data science problems, backed by actionable, dynamic data visualizations. We believe in an Agile, lean approach to software engineering, backed by a structured CI/CD approach, professional engineering practices, clean architecture, clean code, and testing.


Our technology stack includes AWS Cloud, MySQL, Snowflake, Python, SQLAlchemy, Pandas, Streamlit and AGGrid to accelerate development of web visualization and interfaces.


About the Role

As a Senior Full Stack SDE with Data Engineering for Analytics, you will be responsible for designing and building the software systems, user interfaces, and data infrastructure that power Truckmentum’s analytics, data science, and decision support platform. This is a true full stack role — you’ll work across frontend, backend, and data layers using Python, Streamlit, Snowflake, and modern DevOps practices. You’ll help architect and implement a clean, extensible system that supports complex machine learning models, large-scale data processing, and intuitive business-facing applications.


You will report to the CEO (Will Payson), a transportation science expert with 25 years in trucking, who has delivered $1B+ in annual savings for FedEx and Amazon. You will also work closely with the CMO/Head of Product, Tim Liu, who has 20+ years of experience in building and commercializing customer-focused digital platforms, including in logistics.


Responsibilities and Goals

- Design and build full stack applications using Python, Streamlit, and modern web frameworks to power internal tools, analytics dashboards, and customer-facing products.

- Develop scalable data pipelines to ingest, clean, transform, and serve data from diverse sources into Snowflake and other cloud-native databases.

- Implement low-latency, high-availability backend services to support data science, decision intelligence, and interactive visualizations.

- Integrate front-end components with backend systems and ensure seamless interaction between UI, APIs, and data layers.

- Collaborate with data scientists / ML engineers to deploy models, support experimentation, and enable rapid iteration on analytics use cases.

- Define and evolve our data strategy and architecture, including schemas, governance, versioning, and access patterns across business units and use cases.

- Implement DevOps best practices, including testing, CI/CD automation, and observability, to improve reliability and reduce technical debt.

- Ensure data integrity and privacy through validation, error handling, and secure design.

- Contribute to product planning and roadmaps by working with cross-functional teams to estimate scope, propose solutions, and deliver value iteratively.
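To make the data-integrity responsibility above concrete, a lightweight validation step of the kind this role would formalize might look like the following (an illustrative sketch only; `validate_loads` and its rules are hypothetical, not Truckmentum's actual checks):

```python
import pandas as pd

def validate_loads(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found in a loads table."""
    issues = []

    # Schema check: fail fast if required columns are absent.
    required = {"load_id", "miles", "revenue"}
    missing = required - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues

    # Row-level checks: uniqueness, range, and completeness.
    if df["load_id"].duplicated().any():
        issues.append("duplicate load_id values")
    if (df["miles"] <= 0).any():
        issues.append("non-positive miles")
    if df["revenue"].isna().any():
        issues.append("null revenue")
    return issues
```

In a pipeline, a non-empty result would typically quarantine the batch and alert the on-call engineer rather than let bad rows reach downstream models.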


Required Qualifications

- 5+ years of professional software development experience, with a proven track record of building enterprise-grade, production-ready software applications for businesses or consumers, working in an integrated development team using Agile and Git / GitHub.

- Required experience with the following technologies in a business context:

  • Python as primary programming language (5+ years’ experience)
  • Pandas, Numpy, SQL
  • AWS and/or GCP cloud configuration / deployment
  • Git / GitHub
  • Snowflake, Redshift, and/or BigQuery
  • Docker
  • Airflow, Prefect or other DAG orchestration technology
  • Front end engineering (e.g., HTML/CSS, JavaScript, and component-based frameworks)

- Hands-on experience with modern front-end technologies — HTML/CSS, JavaScript, and component-based frameworks (e.g., Streamlit, React, or similar).

- Experience designing and managing scalable data pipelines, data processing jobs, and ETL/ELT processes.

- Experience in defining Data Architecture and Data Engineering Architecture, including robust pipelines, and in building and using cloud services (AWS and/or GCP)

- Experience building and maintaining well-structured APIs and microservices in a cloud environment.

- Working knowledge of, and experience applying, data validation, privacy, and governance practices.

- Comfort working in a fast-paced, startup environment with evolving priorities and an Agile mindset.

- Strong communication and collaboration skills — able to explain technical tradeoffs to both technical and non-technical stakeholders.


Desirable Experience (i.e., great but not required)

- Desired experience with the following technologies in a business context:

  • Snowflake
  • Streamlit
  • Folium, Plotly, AG Grid
  • Kubernetes
  • JavaScript, CSS
  • Flask, FastAPI, and SQLAlchemy

- Exposure to machine learning workflows and collaboration with data scientists or MLOps teams.

- Experience building or scaling analytics tools, business intelligence systems, or SaaS data products.

- Familiarity with geospatial data and visualization libraries (e.g., Folium, Plotly, AG Grid).

- Knowledge of CI/CD tools (e.g., GitHub Actions, Docker, Terraform) and modern DevOps practices.

- Contributions to early-stage product development — especially at high-growth startups.

- Passion for transportation and logistics, and for applying technology to operational systems.


Why Join Truckmentum

At Truckmentum, we’re not just building software — we’re rewriting the rules for one of the largest and most essential industries in the world. If you’re excited by real-world impact, data-driven decision making, and being part of a company where you’ll see your work shape the product and the business, this is your kind of team.


Some of the factors that make this a great opportunity include:

- Massive market opportunity: Trucking is a $4T+ global industry, with strong customer interest in our solution.

- Real business impact: Our tech has already shown a 5% operating margin gain at pilot customers.

- Builder’s culture: You’ll help define architecture, shape best practices, and influence our direction.

- Tight feedback loop: We work directly with real customers and iterate fast.

- Tech stack you’ll love: Python, Streamlit, Snowflake, Pandas, AWS — clean, modern, focused.

- Mission-driven team: We’re obsessed with bringing "Moneyball for Trucks" to life — combining science, strategy, and empathy to make the complex simple, and the invisible visible.


We value intelligence, curiosity, humility, clean code, measurable impact, clear thinking, hard work and a focus on delivering results. If that sounds like your kind of team, we’d love to meet you.


  • PS. If you read this far, we assume you are focused and detail-oriented. If you think this job sounds interesting, please fill in a free personality profile on and email a link to the outcome to to move your application to the top of the pile.