25,332 Senior Data Engineer jobs in India
Data Engineer- Lead Data Engineer
Posted today
Job Description
Role Overview
We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.
As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.
This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.
Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments ecosystem.
Responsibilities
Data Stream Architecture & Development
- Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls
- Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics
- Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities
- Implement comprehensive data quality frameworks covering the 4 C's (Completeness, Consistency, Conformity, and Correctness) to ensure data reliability that supports sound business decisions (see the sketch after this section)
- Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities
Infrastructure & Platform Management
- Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery
- Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance
- Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures
- Ensure high availability and disaster recovery for critical data systems to maintain business continuity
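As a hedged illustration of what automated 4 C's checks can look like in practice, here is a minimal Python sketch using pandas; the transaction schema (txn_id, amount, timestamp) is a hypothetical example, not Paytm's actual data model.

```python
import pandas as pd

def check_four_cs(df: pd.DataFrame) -> dict:
    """Toy 4 C's validation over a hypothetical transactions DataFrame."""
    results = {}
    # Completeness: mandatory fields must not be null.
    results["completeness"] = bool(df[["txn_id", "amount", "timestamp"]].notna().all().all())
    # Consistency: amounts must be non-negative.
    results["consistency"] = bool((df["amount"] >= 0).all())
    # Conformity: timestamps must parse in the expected format.
    parsed = pd.to_datetime(df["timestamp"], errors="coerce")
    results["conformity"] = bool(parsed.notna().all())
    # Correctness: transaction IDs must be unique.
    results["correctness"] = not df["txn_id"].duplicated().any()
    return results

df = pd.DataFrame({
    "txn_id": ["t1", "t2", "t3"],
    "amount": [250.0, 99.5, 1200.0],
    "timestamp": ["2024-01-01T10:00:00", "2024-01-01T10:01:30", "2024-01-01T10:02:05"],
})
print(check_four_cs(df))  # all four checks pass for this sample
```

In a production stream, checks of this kind would run per micro-batch and feed the proactive alerting described above.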
Performance & Optimization
- Monitor and optimize streaming performance with a focus on latency reduction and operational efficiency
- Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations (a sketch follows this section)
- Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols
- Conduct proactive capacity planning and performance tuning to support business growth and data volume increases
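As a rough sketch of the compression-and-partitioning strategies mentioned above, the PySpark snippet below rewrites a hypothetical events table as date-partitioned, Snappy-compressed Parquet; the paths and column names are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("storage-optimization-sketch").getOrCreate()

# Hypothetical events table; in practice this would come from an upstream stream or lake.
events = spark.read.parquet("s3://example-bucket/raw/events/")

(events
    .withColumn("event_date", F.to_date("event_time"))
    .repartition("event_date")                    # co-locate rows per partition value
    .write
    .mode("overwrite")
    .partitionBy("event_date")                    # enables directory-level partition pruning
    .option("compression", "snappy")              # cheap decompression for hot data
    .parquet("s3://example-bucket/curated/events/"))
```

Partitioning by date lets date-bounded queries prune whole directories, while Snappy keeps decompression cheap for frequently read data.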
Collaboration & Leadership
- Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations
- Mentor junior data engineers with emphasis on data quality best practices and a customer-focused approach
- Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy
- Document technical designs, processes, and operational procedures with a focus on maintainability and knowledge sharing
Required Qualifications
Experience & Education
- Bachelor's or Master's degree in Computer Science, Engineering, or related technical field
- 7+ years (Senior) of hands-on data engineering experience
- Proven experience with large-scale data processing systems (preferably in the fintech/payments domain)
- Experience building and maintaining production data streams processing TB/PB-scale data with strong performance and reliability standards
Technical Skills & Requirements
Programming Languages: Expert-level proficiency in both Python and Java; experience with Scala preferred
Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow
Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services
Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)
Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness
Data Governance: Experience with data lineage tracking, metadata management, and data cataloging
Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL
Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience
Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools
Data Modeling: Dimensional modeling, data vault, or similar methodologies
Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus
Infrastructure as Code: Terraform, CloudFormation (preferred)
Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services
Preferred Qualifications
Domain Expertise
- Previous experience in the fintech, payments, or banking industry with a solid understanding of regulatory compliance and financial data requirements
- Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations
- Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection
Advanced Technical Skills (Preferred)
- Experience building automated data quality frameworks covering all 4 C's dimensions
- Knowledge of machine learning stream orchestration (MLflow, Kubeflow)
- Familiarity with data mesh or federated data architecture patterns
- Experience with change data capture (CDC) tools and techniques
Leadership & Soft Skills
- Strong problem-solving abilities with experience debugging complex distributed systems in production environments
- Excellent communication skills with the ability to explain technical concepts to diverse stakeholders while highlighting business value
- Experience mentoring team members and leading technical initiatives with a focus on building a quality-oriented culture
- Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments
Data Engineer- Senior Data Engineer
Posted today
Job Description
The Role
We're looking for a senior AI engineer who can build production-grade agentic AI systems. You'll be working at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.
What We Need
Agentic AI & LLM Engineering
You should have hands-on experience with:
Multi-agent systems: Building agents that coordinate, communicate, and work together on complex workflows
Agent orchestration: Designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions
LLMOps Experience:
- End-to-end LLM lifecycle management: hands-on experience managing the complete LLM workflow from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment. This includes versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines.
- Production LLM infrastructure: experience building and maintaining production LLM serving infrastructure, including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real time.
AI Observability: Experience implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.
Evaluation frameworks: Creating comprehensive testing for agent performance, safety, and goal achievement
LLM inference optimization: Scaling model serving with techniques like batching, caching, and efficient frameworks (vLLM, TensorRT-LLM)
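As a minimal, non-authoritative sketch of batched inference, the snippet below uses vLLM's offline API; the model name and prompts are placeholders.

```python
from vllm import LLM, SamplingParams

# vLLM applies continuous batching internally: passing a list of prompts
# lets the engine pack requests into shared forward passes.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # example model, an assumption
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = [
    "Summarize the order status for user 42.",
    "Draft a retry plan for a failed payment webhook.",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```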
Systems Engineering
Strong backend development skills including:
Python expertise: FastAPI, Django, or Flask for building robust APIs that handle agent workflows (see the sketch after this list)
Distributed systems: Microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination
Database strategy: Vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management
Web-scale design: Systems handling millions of requests with proper load balancing and fault tolerance
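A minimal sketch of such an API, assuming FastAPI; the endpoint shape, models, and queueing comment are illustrative only.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TaskRequest(BaseModel):
    goal: str
    max_steps: int = 5

class TaskResponse(BaseModel):
    task_id: str
    status: str

@app.post("/tasks", response_model=TaskResponse)
async def submit_task(req: TaskRequest) -> TaskResponse:
    # A real service would enqueue the goal for an agent worker
    # (e.g. via Kafka or RabbitMQ) and persist state for later polling.
    return TaskResponse(task_id="task-123", status="queued")  # placeholder ID
```

Run locally with `uvicorn app:app` (assuming the file is named app.py).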
DevOps (Non-negotiable)
Kubernetes: Working knowledge required - deployments, services, cluster management
Containerization: Docker with production optimization and security best practices
CI/CD: Automated testing and deployment pipelines
Infrastructure as Code: Terraform, Helm charts
Monitoring: Prometheus, Grafana for tracking complex agent behaviors
Programming Languages: Java, Python
What You'll Build
You'll architect the infrastructure that powers our autonomous AI systems:
Agent Orchestration Platform: Multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.
Evaluation Infrastructure: Comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.
Production AI Services: High-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.
Training Systems: Scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.
Who You Are
You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI - managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.
You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, or designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.
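To make the graceful-retry point concrete, here is one minimal sketch of exponential backoff with jitter in Python; the helper and its defaults are illustrative, not a prescribed pattern.

```python
import random
import time

def call_with_retry(fn, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted: surface the failure to the caller
            # Sleep 0.5s, 1s, 2s, ... plus jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

# Example: wrap a hypothetical tool call an agent depends on.
print(call_with_retry(lambda: "ok"))
```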
You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.
Data Engineer - Data & AI
Posted today
Job Description
Summary: The Data Engineer in the Data & AI division is responsible for designing, developing, and maintaining robust data pipelines, ensuring the efficient and secure movement, transformation, and storage of data across business systems. The ideal candidate will support analytics and AI initiatives, enabling data-driven decision-making within the organisation.
Role: Data & AI Data Engineer
Location: Bangalore
Shift timings: General Shift
Roles & Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines to support analytics, reporting, and AI-driven solutions.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver appropriate data solutions.
- Optimise data extraction, transformation, and loading (ETL) processes for performance, scalability, and data quality.
- Implement data models, build and maintain data warehouses and lakes, and ensure data security and compliance.
- Monitor data pipeline performance and troubleshoot issues in a timely manner.
- Document data processes, pipelines, and architecture for knowledge sharing and audit purposes.
- Stay updated with industry trends and recommend best practices in data engineering and AI integration.
Must-Have Skills:
- Demonstrated proficiency in SQL and at least one programming language (Python, Java, or Scala).
- Experience with cloud platforms such as Azure, AWS, or Google Cloud (Data Factory, Databricks, Glue, BigQuery, etc.).
- Expertise in building and managing ETL pipelines and workflows.
- Strong understanding of relational and non-relational databases.
- Knowledge of data modelling, data warehousing, and data lake architectures.
- Experience with version control systems (e.g., Git) and CI/CD principles.
- Excellent problem-solving and communication skills.
Preferred skills:
- Experience with big data frameworks (Spark, Hadoop, Kafka, etc.).
- Familiarity with containerisation and orchestration tools (Docker, Kubernetes, Airflow).
- Understanding of data privacy regulations (GDPR, etc.) and data governance practices.
- Exposure to machine learning or AI model deployment pipelines.
- Hands-on experience with reporting and visualisation tools (Power BI, Tableau, etc.).
We are Navigators in the Age of Transformation: We use sophisticated technology to transform clients into the digital age, but our top priority is our positive impact on human experience. We ease anxiety and fear around digital transformation and replace it with opportunity.
Launch IT is an equal opportunity employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Launch IT is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation.
About Company: Launch IT India is a wholly owned subsidiary of The Planet Group, a US company, and offers attractive compensation and a good work environment for prospective employees. Launch is an entrepreneurial business and technology consultancy. We help businesses and people navigate from current state to future state. Technology, tenacity, and creativity fuel our solutions, with offices in Bellevue, Sacramento, Dallas, San Francisco, Hyderabad & Washington D.C.
Data Warehousing Engineer
Posted today
Job Description
Who are we looking for?
- We are looking for candidates with 7+ years of database development experience, including a minimum of 5+ years of relevant experience, and strong SQL experience in creating database objects like Tables, Stored Procedures, DDL/DML Triggers, Views, Indexes, Cursors, Functions & User-defined data types.
Technical Skills:
- 7+ years of database development experience, with a minimum of 5+ years of relevant experience.
- Strong PLSQL experience in creating database objects like Tables, Stored Procedures, DDL/DML Triggers, Views, Indexes, Cursors, Functions & User defined data types.
- Expertise in Oracle performance tuning concepts, including Oracle hints and the EXPLAIN PLAN tool
- Strong experience using SQL and PL/SQL features such as built-in functions, analytical functions, cursors, cursor variables, native dynamic SQL, bulk binding techniques, and packages/procedures/functions wherever applicable to process data efficiently
- Strong understanding of Data Warehousing and Extraction Transformation Loading (ETL)
- Sound understanding of RDBMS (Oracle)
- Should have used Oracle SQL Loader/External File Utilities to load files
- Good to have experience with Snowflake cloud data platform including Snowflake utilities like SnowSQL, SnowPipe, data loading within cloud (AWS or Azure)
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Department: Business Intelligence
Location: Chennai
Position Overview
Develops effective business intelligence solutions for OEC by designing, developing, and deploying systems to support business intelligence, reporting, data warehousing, and integration with enterprise applications. Implements user access controls and data security measures. Assists in the evaluation of the business to support delivery of effective business intelligence and reporting solutions. Maintains documentation of processes, reports, applications, and procedures.
Key Responsibilities
Develops and deploys systems to support business intelligence, reporting, and data warehouse solutions using both Azure and AWS cloud platforms; implements robust user access controls and data security measures across cloud environments.
Owns the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards using tools like Amazon QuickSight and Microsoft Azure BI solutions to drive key business decisions.
Design, build, and maintain scalable data pipelines and ETL processes using Azure and AWS services such as Azure Data Factory, AWS Glue, Amazon Redshift, and S3, ensuring efficient data extraction, transformation, and loading across cloud environments (see the sketch after this list).
Anticipates opportunities for improvement in cross-cloud analytics solutions; outlines and identifies alternate solutions leveraging the strengths of both AWS and Azure platforms and presents to management for final approval and implementation.
Develops and performs system testing across Azure and AWS environments, fixes defects identified during testing; re-executes unit tests to validate results and ensure compatibility between cloud platforms.
Assists in coordination efforts with management and end-users to evaluate business needs and support delivery of effective business intelligence and reporting solutions that leverage the cost-optimization benefits of both Microsoft workloads on AWS and native Azure services.
Develops and tests database scripts, stored procedures, triggers, functions, Azure Data Factory pipelines, AWS Glue jobs, and other back-end processes to support seamless system integration across cloud environments.
Assists with tabular and multi-dimensional modelling in addition to the development of the Enterprise Data Warehouse using both AWS and Azure cloud data warehouse solutions.
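As one hedged illustration of driving the Glue side of such a pipeline from Python, the sketch below starts a Glue job with boto3 and polls for completion; the job name, arguments, and region are hypothetical.

```python
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # example region

# Start a hypothetical Glue ETL job and poll until it reaches a terminal state.
run = glue.start_job_run(
    JobName="curate-sales-facts",               # hypothetical job name
    Arguments={"--target_date": "2024-01-01"},  # example job parameter
)
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="curate-sales-facts", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print("final state:", state)
        break
    time.sleep(30)  # Glue jobs take minutes to spin up; poll sparingly
```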
Skills & Qualifications
- At least 3 years of experience in business intelligence, data engineering and reporting required.
- A bachelor's degree from an accredited college or university is required, with a focus in Information Technology, Computer Science, or related discipline.
- Strong proven track record of developing and testing database scripts, stored procedures, triggers, functions, SSIS Packages, AWS, Azure cloud services and other back-end processes to support system integration.
- Experience using BI/Analytics/Statistical tools like Power BI, Tableau, Business Objects, SSRS, SSAS, and Excel.
- Strong SQL skills, business intelligence tool expertise, AWS Cloud services, and report development skills.
- Knowledge of data modelling, data quality, ETL/SQL server, and SSAS (DAX/MDX) database queries.
- Understanding of configuration, deployment, and database servers.
- Strong writing and verbal communication skills.
- Ability to work collaboratively in a functional team environment.
- Refined analytical thinking and problem-solving skills.
Data Engineer
Posted today
Job Description
We at BEACON Consulting are expanding our core tech team with 2 engineers to build and scale data-driven platforms that process, analyze, and visualize large-scale digital data streams. You'll work at the intersection of data engineering, machine learning, and real-time analytics, creating systems that transform raw digital data into structured, actionable intelligence.
Please note: Immediate joiners are preferred for this role.
Responsibilities
-Build and maintain data pipelines to fetch and process data from multiple APIs (Twitter, YouTube, Meta, etc.) (see the sketch after this list)
-Implement and integrate NLP/ML models (Transformers, VADER, OpenAI, Hugging Face, etc.)
-Design and manage interactive dashboards (Looker Studio, Power BI, Streamlit, Tableau)
-Automate reporting and analysis workflows using Python, SQL, and Excel/Google Sheets
-Conduct data validation, anomaly detection, and trend analysis
-Collaborate with internal teams to convert outputs into strategic insights
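As a rough illustration of the first two responsibilities, the sketch below fetches posts from a hypothetical REST endpoint with basic rate-limit handling and scores them with VADER (via the vaderSentiment package); the URL, token, and response shape are assumptions.

```python
import time
import requests
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def fetch_posts(url: str, token: str) -> list:
    """Fetch one page of posts, backing off when the API rate-limits us."""
    while True:
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        if resp.status_code == 429:  # rate-limited: honor Retry-After if present
            time.sleep(int(resp.headers.get("Retry-After", "60")))
            continue
        resp.raise_for_status()
        return resp.json()["data"]  # response shape is an assumption

posts = fetch_posts("https://api.example.com/v1/posts", token="YOUR_TOKEN")
for post in posts:
    # compound ranges from -1 (most negative) to +1 (most positive)
    score = analyzer.polarity_scores(post["text"])["compound"]
    print(post["id"], score)
```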
Required Skills
-Strong in Python (pandas, numpy, requests, matplotlib/plotly)
-Proficient with APIs (REST, OAuth, rate-limit handling)
-Familiar with NLP & ML libraries
-Experience in SQL and database management (BigQuery, MySQL, Postgres)
-Comfortable with Excel/Google Sheets automation
-Skilled in data visualization & dashboarding (Looker Studio, Power BI, Tableau, or Streamlit)
-Good grasp of statistics and ability to interpret trends
Eligibility
-Minimum 1 year of relevant experience in data engineering, analytics, or a similar role
-Hands-on experience with APIs (fetching, cleaning, structuring data) is mandatory
-Proficiency in Excel and dashboarding (Looker Studio / Power BI / Tableau) is a must
-Prior exposure to NLP or social media data will be preferred
-Proficiency in Telugu is mandatory
Why Join Us
-Best-in-class pay with performance-linked growth opportunities
-Work on high-impact, large-scale data products in a fast-moving environment
-Continuous learning across data engineering, AI, and applied analytics
-Ownership & visibility: work closely with leadership and see your ideas implemented at scale
-Exposure to cutting-edge tech stacks and real-time problem-solving
Data Engineer
Posted today
Job Description
About the Role
- Your days are dynamic and impactful. You will spearhead GTM programs aimed at driving significant pipeline and revenue growth. Collaborating closely with the Front End, Inside Sales, and Demand Gen teams, you'll harness extensive knowledge of regional execution performance to identify trends and craft strategies.
- Your expertise will support the sales organization in smashing their quarterly and yearly pipeline targets, through meticulous project management and strategy execution.
Data Engineering & Warehousing
- Design, build, and optimize ETL/ELT pipelines leveraging Snowflake, Python/SQL, dbt, and Airflow (see the sketch after this list).
- Develop and maintain dimensional data models with an emphasis on quality, governance, and time-series performance tracking.
- Implement real-time monitoring and observability tools to ensure system reliability and alerting for mission-critical data pipelines.
Salesforce & Platform Integrations
- Architect and manage data integrations with Salesforce (SFDC), Jira, HRIS, and various third-party APIs to centralize and operationalize data across platforms.
- Enable efficient data exchange and automation across core operational tools to support reporting, compliance, and analytics needs.
AI Workflows & Agent Platform Engineering
- Design and implement AI-driven workflows using micro-agent platforms such as n8n, Relevance AI, or similar.
- Integrate these platforms with internal systems for automated task execution, decision support, and self-service AI capabilities across operational teams.
- Support development and deployment of AI co-pilots, compliance automation, and intelligent alerting systems.
Collaboration, Enablement & Best Practices
- Collaborate closely with Central Ops, Legal, IT, and Engineering teams to drive automation, compliance, and cross-functional enablement
- Champion documentation, self-service data tools, and training resources to empower internal teams with easy access to data and automation solutions.
- Establish and maintain best practices for scalable, maintainable, and secure data and AI workflow engineering.
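To make the Airflow-plus-dbt pipeline idea concrete, here is a minimal daily DAG sketch (Airflow 2.x API) that runs a hypothetical extraction script and then dbt; the DAG name, script, and project path are illustrative, not a prescribed setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A nightly ELT sketch: extract raw data, then build dbt models on the warehouse.
with DAG(
    dag_id="nightly_elt_sketch",     # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    extract = BashOperator(
        task_id="extract_salesforce",
        bash_command="python extract_sfdc.py",  # hypothetical extraction script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",  # hypothetical path
    )
    extract >> transform
```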
What You Need
- 3-5 years of hands-on experience in technical roles involving system integration, automation, or data engineering in SaaS/B2B environments.
- Proven experience with Salesforce (SFDC), including data integration, workflow automation, and API-based solutions.
- Strong proficiency in Python, with practical experience in developing automation scripts, data workflows, and operational tooling.
- Familiarity with data platforms and databases (e.g., Snowflake, Redshift, BigQuery) to support reliable data flow and integration.
- Experience designing or deploying AI workflows using micro-agent platforms such as n8n, Relevance AI, or similar tools.
- Solid understanding of REST APIs, and experience with real-time data orchestration and system integrations.
- Bonus: Exposure to SuperAGI, Slack integrations, Jira, or observability and alerting tools is a plus.
- A proactive, problem-solving mindset, with the ability to work effectively in fast-paced, cross-functional environments.
Data Engineer
Posted today
Job Description
Your range of tasks:
- Further development and performance optimization of the existing ETL solutions (ODBC extractions from SAP towards Infor d/EPM via MSSQL/SSIS)
- Contribution to technical and content-related further development of reporting solutions (including monthly reporting, planning tools, ad-hoc analysis)
- Development, implementation and testing of new data-loads, reports & dashboards in the team mainly for Finance & Controlling users
- Collaboration with business units in case of improvements, new requirements, or new projects as well as training for planning and reporting tools
Your profile:
- You have a degree in technical or business studies or comparable professional experience
- You have practical experience in working with relational databases, advanced programming skills in SQL and in Integration Services (SSIS) as well as scripting in C#
- Therefore, you are a professional user of Microsoft SQL Server Management Studio as well as Microsoft Visual Studio.
- You have know-how in dealing with BI cubes, the implementation of BI processes (together with IT experts) or ideally experience with Infor BI tools (d/EPM; Application Studio; Application Engine)
- Having Finance & Controlling business knowledge and experience in creating reports / visualization would be beneficial
- You apply your good methodological and problem-solving skills in a goal-oriented manner
- Your personal strengths include the ability to work in a team, to work independently and to think innovatively.
- You have an above-average commitment and a hands-on mentality
- Business-fluent English and very good project management skills are required; SAP knowledge is an advantage, and German is beneficial.
- Reliability, willingness to learn, structure and accuracy complete your competence profile.
Soft Skills
- Intercultural Skills
- Assuming Personal Responsibility and Demonstrating Reliability
- Ability to Work Under Pressure/Stress Resistance/Resiliency
- Customer Orientation
- Ability to Cooperate
- Empathy and Communication Tailored to the Target Group
- Conceptual Skills
- Time Management
Professional Expertise
- Bachelor's degree in Computer Science, Information Systems or related field
- Minimum 3 years of professional experience in SSIS, SQL, Visual Studio, etc.
- Strong written and oral English communication skills
- Knowledge and experience with agile methodology is an added advantage
Data Engineer
Posted today
Job Description
We're Hiring:
Data Engineer
Locations: Gurugram, Bengaluru, Pune, Chennai, Hyderabad, Mumbai, Bhopal
Work Mode: Hybrid (12 days/month onsite)
Contract: 6 Months (Extendable)
Experience: 6–12 Years
PF & BGV Mandatory
We're looking for someone with deep expertise in:
Databricks – workflows, notebooks, Delta Lake
Python – PySpark, pandas, API integration
SQL & Postgres – schema design, indexing, query optimization
Azure Stack – Data Factory, Data Lake Gen2, Synapse Analytics
Vector Databases – pgvector, Qdrant, Pinecone
Generative AI – RAG pipelines, embedding-based search (see the sketch after this list)
DevOps – CI/CD, Git workflows
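As a hedged sketch of the retrieval step in such a RAG pipeline, the snippet below runs a nearest-neighbor query against a hypothetical pgvector table with psycopg2; the table, columns, and connection string are assumptions.

```python
import psycopg2

# Hypothetical table: documents(id, content, embedding vector(1536)),
# created with the pgvector extension and populated by an embedding job.
conn = psycopg2.connect("dbname=rag user=app")  # example connection string

def top_k_similar(query_embedding, k=5):
    """Return the k documents nearest to the query embedding (L2 distance)."""
    vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, content
            FROM documents
            ORDER BY embedding <-> %s::vector  -- pgvector's L2 distance operator
            LIMIT %s
            """,
            (vector_literal, k),
        )
        return cur.fetchall()
```

An embedding model would produce query_embedding from the user's question; that step is omitted here.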
Job Types: Contractual / Temporary, Freelance
Contract length: 6 months
Pay: ₹70,000.00–₹80,000.00 per month
Work Location: Remote
Data Engineer
Posted today
Job Description
Company Description
Optimum Solutions is an enterprise Digital & IT solutions and services company that engineers digital transformation for enterprises characterized by agility, efficiency, and innovation. Headquartered in Singapore, Optimum Solutions has a talented workforce of over 4500 employees across offices and global delivery centers in 9 countries.
Role Description
This is a full-time, on-site role for a Data Engineer located in Chennai. The Data Engineer will be responsible for data modeling, implementing Extract Transform Load (ETL) processes, managing data warehousing, and performing data analytics. Day-to-day tasks will include designing and building scalable data pipelines, optimizing data architectures, and collaborating with cross-functional teams to develop data-driven solutions.
Qualifications
- Experience with Data Engineering and Data Modeling
- Proficiency in Extract Transform Load (ETL) processes
- Knowledge of Data Warehousing and Data Analytics
- Strong problem-solving and analytical skills
- Ability to work collaboratively in a team environment
- Bachelor's degree in Computer Science, Information Technology, or related field
- Experience in cloud-based data platforms is a plus