1,652 Data Pipelines jobs in India
Pioneering Data Pipelines
Posted today
Job Description
We are seeking a skilled Data Engineering Professional to design and implement production-grade pipelines that our autonomous agents can learn from and operate.
The Role
As a Data Engineer, you will be responsible for building and optimizing complex data pipelines using dbt, Airflow, and Spark. You will own the modeling layer in dbt, orchestrate workflows in Airflow, and develop high-performance Spark jobs for large-scale batch and incremental workloads.
Your Responsibilities
- Model complex schemas in dbt across hundreds of tables
- Build advanced Airflow DAGs with sophisticated dependency and failure handling
- Author high-performance Spark jobs for large-scale batch and incremental workloads
- Codify lineage, testing, and metadata so agents can reason about pipeline state
- Profile and tune query performance across warehouses and lakehouse engines
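The dependency and failure handling mentioned above is the core of what an orchestrator like Airflow encodes. As a rough, plain-Python sketch of that pattern (not Airflow's actual API; task names and the `run_dag` helper are hypothetical), tasks run in topological order, failed tasks are retried, and downstream tasks are skipped when an upstream dependency fails:

```python
from collections import deque

def run_dag(tasks, deps, retries=2):
    """Run tasks in dependency order; retry failures up to `retries` extra times.

    tasks: dict name -> zero-arg callable
    deps:  dict name -> list of upstream task names
    Returns dict name -> "success" | "failed" | "upstream_failed".
    """
    # Kahn's algorithm for a topological order
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    children = {t: [] for t in tasks}
    for t, ups in deps.items():
        for u in ups:
            children[u].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)

    state = {}
    for t in order:
        # Skip tasks whose upstreams did not succeed (like Airflow's
        # default "all_success" trigger rule).
        if any(state.get(u) != "success" for u in deps.get(t, [])):
            state[t] = "upstream_failed"
            continue
        for attempt in range(retries + 1):
            try:
                tasks[t]()
                state[t] = "success"
                break
            except Exception:
                state[t] = "failed"
    return state

# Usage: extract succeeds, transform always fails, load is skipped.
calls = []
state = run_dag(
    tasks={"extract": lambda: calls.append("e"),
           "transform": lambda: 1 / 0,
           "load": lambda: calls.append("l")},
    deps={"transform": ["extract"], "load": ["transform"]},
)
```

Real Airflow DAGs add scheduling, sensors, and per-task trigger rules on top of this basic ordering-and-retry skeleton.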
About You
- You have 4+ years of experience in data engineering or analytics engineering, shipping pipelines at scale
- You have deep experience with dbt, including macros, custom tests, and refactoring legacy models
- You have a track record of building and debugging complex Airflow DAGs (Sensors, TaskGroups, SubDAG patterns)
- You are a Spark power-user capable of distributed joins, window functions, and memory tuning
- You have solid Python, Git, and CI discipline
- Bonus: experience with Iceberg, Delta, or DataFusion; prior RL or agent work
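The window-function fluency called for above is the same regardless of engine. As an illustrative sketch (SQLite stands in here for a warehouse or Spark SQL engine; the `orders` table and its values are hypothetical), a per-customer running total uses `SUM(...) OVER (PARTITION BY ... ORDER BY ...)`, the same semantics Spark exposes via `Window.partitionBy(...).orderBy(...)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('a', '2024-01-01', 10.0),
        ('a', '2024-01-02', 20.0),
        ('b', '2024-01-01', 5.0);
""")

# Running total per customer, ordered by date within each partition.
rows = conn.execute("""
    SELECT customer, order_date, amount,
           SUM(amount) OVER (
               PARTITION BY customer ORDER BY order_date
           ) AS running_total
    FROM orders
    ORDER BY customer, order_date
""").fetchall()
```

Window functions require SQLite 3.25+; on a distributed engine the partitioning clause also determines how rows are shuffled, which is where the memory-tuning experience comes in.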
What We Offer
- A competitive salary and meaningful equity
- A remote-first work environment with optional office space
- An opportunity to work with a senior team that values clean code and measurable impact
Why This Role is Ideal for You
- You enjoy working on complex data engineering problems and implementing scalable solutions
- You have strong communication skills and can collaborate effectively with cross-functional teams
- You are passionate about staying up-to-date with industry trends and best practices in data engineering
Creating Scalable Data Pipelines
Posted today
Job Description
Data Engineer Role Overview
At TensorStax, we're building the next generation of autonomous agents for data engineering. These agents will learn from and operate on production-grade pipelines designed by talented engineers like you.
The Job Description
- Create advanced, scalable data pipelines that our agents can learn from and operate efficiently.
- Own the modeling layer in dbt, orchestration in Airflow, and heavy-lift workloads in Spark, building complex systems that drive business outcomes.
- Codify lineage, testing, and metadata to enable agents to reason about pipeline state and make informed decisions.
- Partner with the agent research team to expose realistic failure modes, data drift, and SLA violations for RL training.
About This Opportunity
- 4+ years of experience in data engineering or analytics engineering, with a track record of shipping high-quality pipelines at scale.
- Deep expertise in dbt, including macros, custom tests, and refactoring legacy models.
- Strong background in building and debugging complex Airflow DAGs (Sensors, TaskGroups, SubDAG patterns).
- Proficiency in Spark, with experience in distributed joins, window functions, and memory tuning.
- Solid understanding of Python, Git, and CI principles.
- Bonus: experience with Iceberg, Delta, or DataFusion; prior RL or agent work.
Why Join Us
- Work in a tight-knit team that values clean code, measurable impact, and collaboration.
- Competitive salary and meaningful equity opportunities.
- Remote-first work environment with optional office presence.
- Dedicated hardware budget for personal development.
The Ideal Candidate
- Able to design and implement complex data pipelines that meet business requirements.
- Familiar with containerization and deployment on Kubernetes-backed infrastructure.
- Possesses strong problem-solving skills, with the ability to debug complex systems.
- Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
- A self-motivated individual who takes ownership of projects and delivers high-quality results.
Action Steps
- We encourage qualified applicants to apply with their resume and cover letter.
- Please describe your relevant experience and how you can contribute to our mission.
Key Skills
- dbt, Airflow, Spark, Python, Git, CI/CD, Containerization, Kubernetes.
- Iceberg, Delta, DataFusion, RL, Agent Development.
Keyword
Data Engineering
Software Developer - Data Pipelines (Python)
Posted today
Job Description
Team: Development - Alpha Data
Position Overview:
We are seeking an experienced Python developer to join our Alpha Data team, responsible for delivering a vast quantity of data served to users worldwide. You will be a cornerstone of a growing Data team, becoming a technical subject matter expert and developing strong working relationships with quant researchers, traders, and colleagues across our Technology organisation.
Alpha Data teams are able to deploy valuable data to the rest of the Squarepoint business at speed. Ingestion pipelines and data transformation jobs are resilient and highly maintainable, while the data models are carefully designed in close collaboration with our researchers for efficient query construction and alpha generation.
We achieve an economy of scale through building new frameworks, libraries, and services used to increase the team's quality of life, throughput, and code quality. Teamwork and collaboration are encouraged, excellence is rewarded and diversity of thought and creative solutions are valued. Our emphasis is on a culture of learning, development, and growth.
Required Qualifications:
Nice to haves
Software Developer - Risk Data Pipelines
Posted today
Job Description
Department: Risk Technology
Position Overview:
Risk Technology develops core services and systems required by Squarepoint’s systematic and quantitative trading strategies, such as real-time risk controls, position/inventory/P&L monitoring, internal order routing, and various pre and post-trading services.
Risk Data Pipelines develops software on top of the core Risk Technology platform to handle market or asset-class specific processing, including:
This role provides an opportunity to learn many aspects of the way hedge funds operate through close collaboration with trading and quantitative teams, and as a developer you will:
Required Qualifications:
Nice to Have:
Data Engineering Lead - Scalable Data Pipelines
Posted today
Job Description
Data Engineer - Permissionless Growth Team
Join a top 10 sports media platform in the U.S. that generates over a billion pageviews a year and 30m+ monthly active users.
Building Data Pipelines for Global Scale
Posted today
Job Description
EssentiallySports is the home for the underserved fan, delivering storytelling that goes beyond the headlines. As a media platform, we combine deep audience insights with cultural trends to meet fandom where it lives and where it goes next.
Our Mission:
- We focus on the user and let everything else follow.
- We hire for intent, not experience.
- We give you the freedom to serve the customer and the team instead of investors.
- We leverage technology to tap into niche markets.
- We prioritize action, integrity, freedom, strong communication, and responsibility.
We're a top 10 sports media platform in the US, generating over a billion pageviews a year and 30m+ monthly active users. This massive traffic fuels our data-driven culture, allowing us to build owned audiences at scale through organic growth—a model we take pride in, with zero Customer Acquisition Cost (CAC).
The next phase of ES growth is around newsletter initiatives. In less than nine months, we've built a robust newsletter brand with 700,000+ highly engaged readers and impressive performance metrics:
- Five newsletter brands
- 700k+ subscribers
- Open rates of 40-46%
This role is for a data engineer with growth and business acumen on the 'permissionless growth' team: someone who can connect pipelines for millions of users while telling the story of how and why.
Responsibilities:
- Owning data pipelines from web to Athena to email, end-to-end
- Making key decisions and seeing them through to successful user sign-ups
- Using data science to find real insights, translating to user engagement
- Pushing changes every weekday
- Personalization at scale: leveraging fan behavior data to tailor content and improve lifetime value
Who are we looking for?
- At least two years of professional data engineering experience
- A self-starter who drives initiatives and thinks about business insights as much as engineering
- Excited to pick up AI and integrate it at various touchpoints
- Strong experience in data analysis, growth marketing, or audience development (media or newsletters)
- Awareness of Athena, Glue, Jupyter, or intent to pick them up
- Comfortable working with tools like Google Analytics, SQL, email marketing platforms, and data visualization tools
- Collaborative and proactive mindset to spot opportunities and translate into real growth
- Ability to thrive in startups with fast-paced environments and take ownership of working through ambiguity
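Much of the growth analysis described above starts with funnel metrics on email events. As a minimal sketch (the event schema, newsletter names, and the `open_rates` helper are all hypothetical), per-newsletter open rates can be computed from raw send/open records:

```python
from collections import defaultdict

def open_rates(events):
    """Compute per-newsletter open rates from send/open event records.

    events: iterable of (newsletter, event_type) pairs, where event_type
    is "send" or "open". Returns {newsletter: open_rate}.
    """
    sends = defaultdict(int)
    opens = defaultdict(int)
    for newsletter, kind in events:
        if kind == "send":
            sends[newsletter] += 1
        elif kind == "open":
            opens[newsletter] += 1
    # Open rate = opens / sends, for newsletters with at least one send.
    return {n: opens[n] / sends[n] for n in sends if sends[n]}

rates = open_rates([
    ("gridiron", "send"), ("gridiron", "open"),
    ("gridiron", "send"),
    ("hoops", "send"), ("hoops", "open"),
    ("hoops", "send"), ("hoops", "send"), ("hoops", "open"),
])
```

In practice the same aggregation would run as a SQL query against event tables in Athena rather than in application code, with Glue maintaining the table catalog.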
Benefits:
- Fully remote job
- Flexible working hours
- Freedom to own problem statements and make your own solutions
- Working directly with the founding team
- Releasing features at scale to millions of users on day one
- Bi-annual offsites coined 'epic' by the team
What sets us apart?
We're a small, lean team with huge impact on company success. You'll work with experienced leaders/founders who have built and led multiple product, tech, data, and design teams, and grown EssentiallySports to a global scale.
Staff Software Engineer, Data Pipelines & APIs (India)
Posted today
Job Description
About Mixpanel
Mixpanel is an event analytics platform for builders who need answers from their data at their fingertips—no SQL required. When everyone in the organization can see and learn from the impact of their work on product, marketing, and company revenue metrics, they are poised to make better decisions.
Over 9,000 paid customers, including companies like Netflix, Pinterest, Sweetgreen, and Samsara, use Mixpanel to understand their customers and measure progress. Our commitment is to provide the most comprehensive and reliable analytics platform accessible and trusted by all.
Mixpanel is powered by a custom distributed database. This system ingests more than 1 Trillion user-generated events every month while ensuring end-to-end latencies of under a minute and queries typically scan more than 1 Quadrillion events over the span of a month. Over the last year, our inbound traffic has doubled. As our existing customers grow in volume and we add new ones, we expect this growth in traffic to continue. The Distributed Systems engineering teams are responsible for adding new capabilities and ensuring the smooth operation of the underlying systems.
About the Team
The Data Pipeline & API team is responsible for the Data APIs, Pipelines, and Integrations that power real-time movement of customer data and support trillions of requests each month. These systems are critical to Mixpanel’s product and business, enabling core workflows for external customers. This includes data APIs, which are foundational to customer onboarding and must be highly reliable and scalable; data export APIs, which help us maintain feature parity and support the needs of scaling customers; and integrations like warehouse connectors and cohort exports, which reduce friction, drive adoption, and expand our partner ecosystem. This role ensures these systems are robust, efficient, and aligned with a product strategy that emphasizes impact, growth, and long-term reliability.
Responsibilities
As our first engineer in India, you'll be responsible for:
A typical project requires a thorough understanding of how not just your service works but also how it interacts with other components. Here are some projects we've worked on in the past to give you an idea of what to expect.
If projects like the ones listed above excite you, the Distributed Systems engineering team will be a great fit.
We're Looking For Someone Who Has
#LI-Hybrid
Benefits and Perks
*please note that benefits and perks for contract positions will vary*
Culture Values
Senior Informatica Data Engineer (Data Pipelines & Cloud Solutions)
Posted today
Job Description
Job Summary:
Synechron is seeking a versatile and experienced Senior Informatica Data Engineer to support and contribute to the development of innovative software solutions for our clients. This role offers the opportunity to work with a broad range of modern, open-source, cloud-based, and enterprise database technologies, helping organizations optimize their data management and integration processes. The ideal candidate will be involved in supporting complex systems, troubleshooting, and continuously enhancing technological capabilities in line with industry best practices.
Software Requirements:
Required:
Preferred:
Overall Responsibilities:
Technical Skills (By Category):
Programming Languages:
Database/Data Management:
Cloud Technologies:
Frameworks and Libraries:
Development Tools and Methodologies:
Security Protocols:
Experience Requirements:
Day-to-Day Activities:
Qualifications:
Professional Competencies:
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Senior Group Data Engineering Manager(Data Pipelines, ADF, ADB, Python, SQL)
Posted today
Job Description
We’re AtkinsRéalis, a world-leading Design, Engineering and Project Management organization. Created by the integration of long-standing organizations dating back to 1911, we are a world-leading professional services and project management company dedicated to engineering a better future for our planet and its people. We create sustainable solutions that connect people, data and technology to transform the world's infrastructure and energy systems. We deploy global capabilities locally to our clients and deliver unique end-to-end services across the whole life cycle of an asset including consulting, advisory & environmental services, intelligent networks & cybersecurity, design & engineering, procurement, project & construction management, operations & maintenance, decommissioning and capital. The breadth and depth of our capabilities are delivered to clients in key strategic sectors such as Engineering Services, Nuclear, Operations & Maintenance and Capital.
News and information are available at or follow us on LinkedIn.
Our teams are proud to deliver on some of the most prestigious projects across the world. It's thanks to our talented people and their diverse thinking, expertise, and knowledge. Join us and you'll be part of our genuinely collaborative environment, where everyone is supported to make the most of their talents and expertise.
When it comes to work-life balance, AtkinsRéalis is a great place to be. So, let's discuss how our flexible and remote working policies can support your priorities. We're passionate about our work while valuing each other equally. So, ask us about some of our recent pledges for Women's Equality and being a 'Disability Confident' and 'Inclusive Employer'.
EAI-AtkinsRéalis is a vibrant and continuously growing team. It is an important part of GTC-AtkinsRéalis and widely recognized for its high-quality project deliveries. This is a vital role to take EAI one step forward in providing data solutions to our business and clients. The role works on multiple projects simultaneously, providing planning, design, and delivery of data-driven projects. Effective communication and teamwork are important characteristics for this role.
Key Activities for This Role
Experience & Skills Required:
What We Can Offer You
Why work for AtkinsRéalis?
We at AtkinsRéalis are committed to developing our people both personally and professionally. Our colleagues have the advantage of access to a wide-ranging training portfolio and development activities designed to help them make the best of their abilities and talents. We also actively support staff in achieving corporate membership of relevant institutions.
Meeting Your Needs
To help you get the most out of life in and outside of work, we offer employees ‘Total Reward’.
Making sure you're supported is important to us. So, if you identify as having a disability, tell us ahead of your interview, and we’ll discuss any adjustments you might need.
Additional Information
We are an equal opportunity, drug-free employer committed to promoting a diverse and inclusive community - a place where we can all be ourselves, thrive and develop. To help embed inclusion for all, from day one, we offer a range of family friendly, inclusive employment policies, flexible working arrangements and employee networks to support staff from different backgrounds. As an Equal Opportunities Employer, we value applications from all backgrounds, cultures and ability.
We care about your privacy and are committed to protecting your privacy. Please consult our Privacy Notice on our Careers site to know more about how we collect, use and transfer your Personal Data.
Link: Equality, diversity & inclusion | Atkins India (atkinsrealis.com)
Worker Type
Employee
Job Type
Regular
At AtkinsRéalis, we seek to hire individuals with diverse characteristics, backgrounds and perspectives. We strongly believe that world-class talent makes no distinctions based on gender, ethnic or national origin, sexual identity and orientation, age, religion or disability, but enriches itself through these differences.
Data Scientist (Cloud Management, SQL, Building cloud data pipelines, Python, Power BI, GCP)
Posted today
Job Description
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.
Job Description:
Job Summary
UPS Marketing team is looking for a talented and driven Data Scientist to drive its strategic objectives in the areas of pricing, revenue management, market analysis and evidence/data-based decision making. This role will work across multiple channels and teams to drive tangible results in the organization. You will focus on developing metrics for multiple channels and markets, applying advanced statistical modeling where appropriate and pioneering new analytical methods in a variety of fast paced and rapidly evolving consumer channels. This high visibility position will work with multiple levels of the organization, including senior leadership to bring analytical capabilities to the forefront of pricing, rate setting, and optimization of our go-to-market offers. You will contribute to rapidly evolving UPS Marketing analytical capabilities by working amongst a collaborative team of Data Scientists, Analysts and multiple business stakeholders.
Responsibilities:
Become a subject matter expert on UPS business processes, data and analytical capabilities to help define and solve business needs using data and advanced statistical methods
Analyze and extract insights from large-scale structured and unstructured data utilizing multiple platforms and tools.
Understand and apply appropriate methods for cleaning and transforming data
Work across multiple stakeholders to develop, maintain, and improve models in production
Take the initiative to create and execute analyses in a proactive manner
Deliver complex analyses and visualizations to broader audiences, including upper management and executives
Deliver analytics and insights to support strategic decision making
Understand the application of AI/ML when appropriate to solve complex business problems
Qualifications
Expertise in R, SQL, Python.
Strong analytical skills and attention to detail.
Able to engage key business and executive-level stakeholders to translate business problems to high level analytics solution approach.
Expertise with statistical techniques, machine learning or operations research and their application in business applications.
Deep understanding of data management pipelines and experience in launching moderate scale advanced analytics projects in production at scale.
Proficient in Azure and Google Cloud environments
Experience implementing open-source technologies and cloud services, with or without the use of enterprise data science platforms.
Solid oral and written communication skills, especially around analytical concepts and methods.
Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
Master’s Degree in a quantitative field of mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), international equivalent, or equivalent job experience.
Bonus Qualifications
Experience with pricing methodologies and revenue management
Experience using PySpark, Azure Databricks, Google BigQuery and Vertex AI
Creating and implementing NLP/LLM projects
Experience utilizing and applying neural networks and other AI methodologies
Familiarity with Data architecture and engineering
Employee Type:
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.