20,393 Senior Data Engineer jobs in India
Senior Data Engineer / Data Engineer
Posted today
Job Description
Desired Experience: 3-8 years
Salary: Best-in-industry
Location: Gurgaon (5 days onsite)
Overview:
You will act as a key member of the Data consulting team, working directly with clients' partners and senior stakeholders to design and implement big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.
What is in it for you:
Opportunity to work with a world-class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques
Fast-track career growth in a highly entrepreneurial work environment
Best-in-industry remuneration package
Essential Technical Skills:
Technical expertise with emerging Big Data technologies such as Python, Spark, Hadoop, Clojure, Git, SQL and Databricks, and with visualization tools such as Tableau and Power BI
Experience with cloud, container and microservice infrastructures
Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams
Hands-on experience with data modelling, query techniques and complexity analysis
Desirable Skills:
Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum
Experience of working with development teams and product owners to understand their requirements
Certifications in any of the above areas will be preferred.
Your duties will include:
Develop data solutions within Big Data Azure and/or other cloud environments
Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams
Build and design data architectures using Azure Data Factory, Databricks, Data Lake, Synapse
Liaising with the CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions
Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur
Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau
Data integration, transformation and modelling
Maintaining all relevant documentation and knowledge bases
Research and suggest new database products, services and protocols
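The data mapping duty listed above (describing source fields, target fields and their transformations) can be sketched as a small field-level mapping table. All field names and transforms here are invented for illustration; they are not from the posting:

```python
# Illustrative source-to-target mapping; field names and transforms are
# hypothetical, of the kind a data mapping document would describe.
MAPPING = [
    # (source_field, target_field, transform)
    ("cust_nm", "customer_name", str.strip),
    ("ord_amt", "order_amount", lambda v: round(float(v), 2)),
    ("ord_dt",  "order_date",   lambda v: v),  # pass-through
]

def apply_mapping(source_row: dict) -> dict:
    """Build a target row by renaming and transforming each mapped field."""
    return {target: transform(source_row[source])
            for source, target, transform in MAPPING}

row = {"cust_nm": "  Asha Rao ", "ord_amt": "1249.5", "ord_dt": "2024-03-01"}
print(apply_mapping(row))
# {'customer_name': 'Asha Rao', 'order_amount': 1249.5, 'order_date': '2024-03-01'}
```

In practice such a mapping would typically live in a metadata store or ETL tool (e.g. Azure Data Factory mapping data flows) rather than in code, but the structure is the same: source, target, transformation.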
Essential Personal Traits:
You should be able to work independently and communicate effectively with remote teams.
Timely communication/escalation of issues/dependencies to higher management.
Curiosity to learn and apply emerging technologies to solve business problems
** Interested candidates please send their resume on - and **
Data Engineer- Lead Data Engineer
Posted today
Job Description
Role Overview
We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.
As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.
This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.
Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments infrastructure.
Key Responsibilities
Data Stream Architecture & Development
- Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls
- Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics
- Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities
- Implement comprehensive data quality frameworks covering the 4 C's (Completeness, Consistency, Conformity, and Correctness) to ensure data reliability that supports sound business decisions
- Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities
Infrastructure & Platform Management
- Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery
- Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance
- Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures
- Ensure high availability and disaster recovery for critical data systems to maintain business continuity
Performance & Optimization
- Monitor and optimize streaming performance with a focus on latency reduction and operational efficiency
- Implement efficient data storage strategies, including compression, partitioning, and lifecycle management with cost considerations
- Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols
- Conduct proactive capacity planning and performance tuning to support business growth and data volume increases
Collaboration & Leadership
- Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations
- Mentor junior data engineers with emphasis on data quality best practices and a customer-focused approach
- Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy
- Document technical designs, processes, and operational procedures with a focus on maintainability and knowledge sharing
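As a rough illustration of the 4 C's data quality checks named in the responsibilities above, here is a minimal validation sketch. The required fields, currency codes, and rules are invented for the example; a production framework would run such checks continuously against streaming data, with profiling and alerting:

```python
# Hypothetical single-record checks, one per quality dimension.
REQUIRED = {"txn_id", "amount", "currency"}
VALID_CURRENCIES = {"INR", "USD"}

def check_record(rec: dict) -> dict:
    """Return one boolean per quality dimension for a single record."""
    amount = rec.get("amount")
    return {
        # Completeness: every required field is present and non-null
        "completeness": all(rec.get(f) is not None for f in REQUIRED),
        # Conformity: values match expected types and reference data
        "conformity": isinstance(amount, (int, float))
                      and rec.get("currency") in VALID_CURRENCIES,
        # Consistency: related fields agree (gross amount = net + fee)
        "consistency": amount == rec.get("net", 0) + rec.get("fee", 0),
        # Correctness: a business rule holds (payment amounts are positive)
        "correctness": isinstance(amount, (int, float)) and amount > 0,
    }

rec = {"txn_id": "T1", "amount": 105.0, "net": 100.0, "fee": 5.0, "currency": "INR"}
print(check_record(rec))  # every dimension passes for this record
```

Records failing any dimension would be routed to a quarantine stream and surfaced through monitoring rather than silently dropped.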
Required Qualifications
Experience & Education
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
- 7+ years (Senior) of hands-on data engineering experience
- Proven experience with large-scale data processing systems (preferably in the fintech/payments domain)
- Experience building and maintaining production data streams processing TB/PB-scale data with strong performance and reliability standards
Technical Skills & Requirements
Programming Languages: Expert-level proficiency in both Python and Java; experience with Scala preferred
Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow
Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services
Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)
Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness
Data Governance: Experience with data lineage tracking, metadata management, and data cataloging
Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL
Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience
Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools
Data Modeling: Dimensional modeling, data vault, or similar methodologies
Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus
Infrastructure as Code: Terraform, CloudFormation (preferred)
Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services
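The dimensional modeling skill listed above can be illustrated with a toy star schema: a fact table of payments joined to a merchant dimension for analytic rollups. Table, column, and merchant names are invented for the example:

```python
import sqlite3

# Toy star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_merchant (merchant_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE fact_payment (payment_id INTEGER, merchant_id INTEGER, amount REAL);
INSERT INTO dim_merchant VALUES (1, 'Chai Point', 'Gurgaon'), (2, 'BookWorm', 'Pune');
INSERT INTO fact_payment VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# Typical analytic query: aggregate the fact table by a dimension attribute.
rows = conn.execute("""
    SELECT d.city, SUM(f.amount)
    FROM fact_payment f JOIN dim_merchant d USING (merchant_id)
    GROUP BY d.city ORDER BY d.city
""").fetchall()
print(rows)  # [('Gurgaon', 350.0), ('Pune', 75.0)]
```

The same shape scales up in a warehouse such as Redshift or Synapse: narrow, additive facts; wide, descriptive dimensions; queries that slice facts by dimension attributes.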
Preferred Qualifications
Domain Expertise
- Previous experience in the fintech, payments, or banking industry with a solid understanding of regulatory compliance and financial data requirements
- Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations
- Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection
Advanced Technical Skills (Preferred)
- Experience building automated data quality frameworks covering all 4 C's dimensions
- Knowledge of machine learning pipeline orchestration (MLflow, Kubeflow)
- Familiarity with data mesh or federated data architecture patterns
- Experience with change data capture (CDC) tools and techniques
Leadership & Soft Skills
- Strong problem-solving abilities with experience debugging complex distributed systems in production environments
- Excellent communication skills with the ability to explain technical concepts to diverse stakeholders while highlighting business value
- Experience mentoring team members and leading technical initiatives with a focus on building a quality-oriented culture
- Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments
What We Offer
- Opportunity to work with cutting-edge technology at scale
- Competitive salary and equity compensation
- Comprehensive health and wellness benefits
- Professional development opportunities and conference attendance
- Flexible working arrangements
- Chance to impact millions of users across India's digital payments ecosystem
Application Process
Interested candidates should submit:
Updated resume highlighting relevant data engineering experience with emphasis on real-time systems and data quality
Portfolio or GitHub profile showcasing data engineering projects, particularly those involving high-throughput streaming systems
Cover letter explaining interest in fintech/payments domain and understanding of data criticality in financial services
References from previous technical managers or senior colleagues who can attest to your data quality standards
Senior Data Engineer / Data Engineer
Posted today
Job Description
LOOKING FOR IMMEDIATE JOINERS OR 15 DAYS NOTICE PERIODS AND THIS IS WORK FROM HOME OPPORTUNITY
Position: Senior Data Engineer / Data Engineer
Desired Experience: 3-8 years
Salary: Best-in-industry
You will act as a key member of the Data consulting team, working directly with clients' partners and senior stakeholders to design and implement big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.
What is in it for you:
Opportunity to work with a world-class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques
Fast-track career growth in a highly entrepreneurial work environment
Best-in-industry remuneration package
Essential Technical Skills:
Technical expertise with emerging Big Data technologies such as Python, Spark, Hadoop, Clojure, Git, SQL and Databricks, and with visualization tools such as Tableau and Power BI
Experience with cloud, container and microservice infrastructures
Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams
Hands-on experience with data modelling, query techniques and complexity analysis
Desirable Skills:
Experience/knowledge of working in an agile environment and experience with agile methodologies such as Scrum
Experience of working with development teams and product owners to understand their requirements
Certifications in any of the above areas will be preferred.
Your duties will include:
Develop data solutions within Big Data Azure and/or other cloud environments
Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams
Build and design data architectures using Azure Data Factory, Databricks, Data Lake, Synapse
Liaising with the CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions
Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur
Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau
Data integration, transformation and modelling
Maintaining all relevant documentation and knowledge bases
Research and suggest new database products, services and protocols
Essential Personal Traits:
You should be able to work independently and communicate effectively with remote teams.
Timely communication/escalation of issues/dependencies to higher management.
Curiosity to learn and apply emerging technologies to solve business problems
Data Engineer/ Senior Data Engineer
Posted today
Job Description
| Job Title | Data Engineer |
| Local Job Title | Data Engineer |
| Reports To | BIA Manager |
Position Summary:
The BIA Data Engineer designs, implements, and maintains complex data engineering solutions in the Business Intelligence and Analytics team.
Responsible for design, development, implementation, testing, documentation, and support of analytical and data solutions/projects requiring data aggregation, data pipelines, and ETL/ELT from multiple sources into an efficient reporting mechanism, database, or data warehouse, using appropriate tools such as Informatica, Azure Data Factory, or SSIS. This includes interacting with the business to gather requirements, analysis, creation of functional and technical specs, testing, training, escalation, and follow-up.
Support of the applications includes resolving issues reported by users. Issues could be caused by application bugs, user errors, or programming errors. The resolution process includes, but is not limited to, investigating known bugs on the software vendor's support website, creating tickets or service requests with the vendor, developing scripts to fix data issues, making program changes, testing fixes, and applying the changes to production.
These tasks and activities will be completed with the help and under the guidance of the supervisor. Participation in team and / or project meetings, to schedule work and discuss status, will be required.
The position also requires staying abreast with changes in technology, programming languages, and software development tools.
Duties
Data Pipeline/ETL (40%): Designs and implements data stores, ETL data flows and data pipelines to connect and prepare operational systems data for analytics and business intelligence (BI) systems.
Support & Operations (10%): Manages production deployments and automation, monitoring, job control and production support. Works with business users to test programs in Development and Quality. Investigates issues using vendor support website(s).
Data Modeling/Designing Datasets (10%): Reviews and understands business requirements for assigned development tasks and applies standard data modelling and design techniques based upon a detailed understanding of requirements.
Data Architecture and Technical Infrastructure (10%): Plans and drives the development of data engineering solutions, ensuring that solutions balance functional and non-functional requirements. Monitors application of data standards and architectures, including security and compliance.
SDLC Methodology & Project Management (5%): Contributes to technical transitions between development, testing, and production phases of the solution lifecycle, and facilitates the change control, problem management, and communication processes.
Data Governance and Data Quality (5%): Identifies and investigates data quality/integrity problems, determines impact and provides solutions.
Metadata Management & Documentation (5%): Documents all processes and mappings related to Data Pipelines work and follows development best practices as adopted by the BIA team.
End-User Support, Education and Enablement (5%): Contributes to training and Data Literacy initiatives within the team and end-user community.
Innovation, Continuous Improvement & Optimization (5%): Continuously improves and optimizes existing Data Engineering assets/processes.
Partnership and Community Building (5%): Collaborates with other IT teams, the business community, data scientists and other architects to meet business requirements. Interacts with DBAs on data designs optimal for data engineering solution performance.
VDart: Digital Consulting & Staffing Solutions
VDart is a leading digital consulting and staffing company founded in 2007 and headquartered in Alpharetta, Georgia. One of the top staffing companies in the USA, VDart also provides technology solutions, supporting businesses in their digital transformation journey.
Core Services:
- Digital consulting and transformation solutions
- Comprehensive staffing services (contract, temporary, permanent placements)
- Talent management and workforce solutions
- Technology implementation services
Key Industries: VDart primarily serves industries such as automotive and mobility, banking and finance, healthcare and life sciences, and energy and utilities.
Notable Partnership: VDart Digital has been a consistently strong supplier for Toyota since 2017, demonstrating their capability in large-scale digital transformation projects.
With over 17 years of experience, VDart combines staffing expertise with technology solutions to help organizations modernize their operations and build skilled workforce capabilities across multiple sectors.
Data Engineer
Posted today
Job Description
Entity:
Technology
Job Description:
About bp/team
bp's Technology organization is the central organization for all software and platform development. We build all the technology that powers bp's businesses, from upstream energy production to downstream energy delivery to our customers. We have a variety of teams depending on your areas of interest, including infrastructure and backend services through to customer-facing web and native applications. We encourage our teams to adapt quickly by using native AWS and Azure services, including serverless, and enable them to pick the best technology for a given problem. This is meant to empower our software and platform engineers while allowing them to learn and develop themselves.
Responsibilities
- Part of a cross-disciplinary team, working closely with other data engineers, software engineers, data scientists, data managers and business partners.
- Architects, designs, implements and maintains reliable and scalable data infrastructure to move, process and serve data.
- Writes, deploys and maintains software to build, integrate, manage, maintain, and quality-assure data at bp.
- Adheres to and advocates for software engineering standard methodologies (e.g. technical design, technical design review, unit testing, monitoring & alerting, checking in code, code review, documentation)
- Responsible for deploying secure and well-tested software that meets privacy and compliance requirements; develops, maintains and improves CI / CD pipeline.
- Responsible for service reliability and following site-reliability engineering best practices: on-call rotations for services they maintain, responsible for defining and maintaining SLAs. Design, build, deploy and maintain infrastructure as code.
- Containerizes server deployments. Actively contributes to improve developer velocity. Mentors others.
Qualifications
- BS degree or equivalent experience in computer science or related field.
- Deep and hands-on experience designing, planning, building, productionizing, maintaining and documenting reliable and scalable data infrastructure and data products in complex environments.
- Development experience in one or more object-oriented programming languages (e.g. Python, Scala, Java, C#)
- Sophisticated database and SQL knowledge
- Experience designing and implementing large-scale distributed data systems
- Deep knowledge and hands-on experience in technologies across all data lifecycle stages
- Strong stakeholder management and ability to lead initiatives through technical influence
- Continuous learning and improvement mindset
Desired
- No prior experience in the energy industry required
Legal Disclaimer:
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us .
If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Data Engineer
Posted today
Job Description
Entity:
Technology
Job Description:
You will work with the team responsible for driving application simplification across the organization, focusing on reducing operational complexity and technical debt. The team works closely with the wider digital delivery and digital core teams to identify and pursue simplification opportunities, and collaborates with business partners to align simplification efforts with broader transformation goals and drive measurable improvements in operational efficiency.
Let me tell you about the role
A Data Engineer designs and builds scalable data management systems that support application simplification efforts. They develop and maintain databases and large-scale processing systems to enable efficient data collection, analysis, and integration. Key responsibilities include ensuring data accuracy for modeling and analytics, optimizing data pipelines for scalability, and collaborating with data scientists to drive data-driven decision-making. They play a crucial role in supporting the team’s goals by enabling seamless access to reliable data for simplification initiatives.
What you will deliver
As part of a cross-disciplinary team, you will collaborate with data engineers, software engineers, data scientists, data managers, and business partners to architect, design, implement, and maintain reliable and scalable data infrastructure for moving, processing, and serving data. You will write, deploy, and maintain software to build, integrate, manage, and assure the quality of data. Adhering to software engineering best practices, you will ensure technical design, unit testing, monitoring, code reviews, and documentation are followed. You will also ensure the deployment of secure, well-tested software that meets privacy and compliance standards, and improve the CI/CD pipeline. Additionally, you will be responsible for service reliability, on-call rotations, SLAs, and maintaining infrastructure as code. By containerizing server deployments and mentoring others, you'll actively contribute to improving developer velocity.
What you will need to be successful (experience and qualifications)
Essential
- Deep, hands-on expertise in designing, building, and maintaining scalable data infrastructure and products in complex environments
- Development experience in object-oriented programming languages (e.g., Python, Scala, Java, C#)
- Advanced knowledge of databases and SQL
- Experience designing and implementing large-scale distributed data systems
- Strong understanding of technologies across all stages of the data lifecycle
- Ability to manage stakeholders effectively and lead initiatives through technical influence
- Continuous learning approach
- BS degree in computer science or a related field (or equivalent experience)
About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Data Engineer
Posted 2 days ago
Job Description
+ Develops high performance distributed data warehouses, distributed analytic systems and cloud architectures
+ Participates in developing relational and non-relational data models designed for optimal storage and retrieval
+ Develops, tests, and debugs batch and streaming data pipelines (ETL/ELT) to populate databases and object stores from multiple data sources using a variety of scripting languages; provides recommendations to improve data reliability, efficiency and quality
+ Works alongside data scientists, supporting the development of high-performance algorithms, models and prototypes
+ Implements data quality metrics, standards, guidelines; automates data quality checks / routines as part of data processing frameworks; validates flow of information
+ Ensures that Data Warehousing and Big Data systems meet business requirements and industry practices including but not limited to automation of system builds, security requirements, performance requirements and logging/monitoring requirements
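The batch pipeline duties above follow a common extract-transform-load shape, which can be sketched minimally in Python with an in-memory SQLite target. The source rows, schema, and table name are invented for the example:

```python
import sqlite3

# Minimal batch ETL sketch: extract rows from a source, transform them,
# load into a warehouse table, then verify with a query.
source_rows = [("  widget ", "3"), ("gadget", "7")]

def transform(row):
    """Normalise the product name and cast the quantity to an integer."""
    name, qty = row
    return (name.strip().upper(), int(qty))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO inventory VALUES (?, ?)", map(transform, source_rows))
total = conn.execute("SELECT SUM(qty) FROM inventory").fetchone()[0]
print(total)  # 10
```

A production pipeline adds the concerns the posting lists around this core: scheduling, retries, data quality checks between transform and load, and monitoring of row counts and freshness.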
**Knowledge, Skills, and Abilities**
- Ability to translate a logical data model into a relational or non-relational solution
- Expert in one or more of the following ETL tools: SSIS, Azure Data Factory, AWS Glue, Matillion, Talend, Informatica, Fivetran
- Hands-on experience in setting up end-to-end cloud-based data lakes
- Hands-on experience in database development using views, SQL scripts and transformations
- Ability to translate complex business problems into data-driven solutions
- Working knowledge of reporting tools like Power BI, Tableau, etc.
- Ability to identify data quality issues that could affect business outcomes
- Flexibility in working across different database technologies and propensity to learn new platforms on-the-fly
- Strong interpersonal skills
- Team player prepared to lead or support depending on the situation
WWT will consider for employment, without regard to disability, a disabled applicant who satisfies the requisite skill, experience, education, and other job-related requirements of the job and is capable of performing the essential requirements of the job with or without reasonable accommodation. World Wide Technology is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, sex (including pregnancy), sexual orientation, gender identity, national origin, age, disability, veteran status, genetic information, or other characteristics protected by law. We are committed to working with and providing reasonable accommodations to individuals with disabilities. If you have a disability and you believe you need a reasonable accommodation in order to search for a job opening or to submit an online application, please call and ask for Human Resources.
Data Engineer
Posted 2 days ago
Job Description
Texas Instruments manufactures tens of billions of analog and embedded processing semiconductors annually, across more than 80,000 different products, and delivers them to more than 100,000 customers around the globe. A core element of our strategy is to invest in our internal manufacturing capacity, both in wafer fabs and assembly-test (AT) sites. Our AT manufacturing is undergoing significant expansion, modernization and automation to meet increasing customer demand for decades ahead, and we are growing our internal manufacturing and operations to more than 90%. In addition to owning our manufacturing capacity, we also own our process, packaging and test technology development, enabling us to introduce new product designs with the highest quality and efficiency.
**About the job:**
At the Manufacturing Solutions IT organization in TI, we are investing in forward-looking manufacturing solutions to deliver differentiated analog and embedded processing semiconductor products with the highest quality and efficiency. Our charter involves building AT manufacturing solutions on a compelling roadmap by investing in futuristic, scalable and high-performing data technologies while maintaining good support for existing tools and technologies. We are on the lookout for a highly motivated AT Data Solutions Engineer to drive the next generation of AT manufacturing technology through smart, connected systems. The candidate will be part of the Data Solutions team with emphasis on data quality, ETL, data pipelines, operations, stability, and scalability of our critical database and replication platforms. You will be responsible for designing, developing and maintaining ETL solutions for AT manufacturing data.
**Role Responsibilities:**
+ Software development experience including development, requirements gathering and documentation; well-versed with code reviews, testing, and deployment to production
+ Collaborate with stakeholders to develop requirements, create level of effort estimates and develop effective solutions following appropriate security measures with focus on data accuracy.
+ Have a good understanding of the hardware provisioned in the TI landscape
+ Perform root cause analysis to answer specific business questions and environment issues, identify opportunities for improvement or corrective action, and guide the managed services team in executing the solution
+ Work with regional and global IT teams, with a focus on a culture built on teamwork and collaboration
+ Balance work across multiple projects and priorities with overlapping schedules; work with a sense of urgency on issues and needs, and think strategically to provide recommendations that align with our organization's goals
**Technical Skills:**
+ Extensive knowledge and understanding of Oracle databases and performance tuning, plus Unix, Linux, and Windows OS; NoSQL and ClickHouse experience is a plus
+ Expert-level experience in DB and ETL design, SQL, PL/SQL, Python, and Unix shell scripts; working knowledge of Java is a plus
+ Expertise in streaming technologies such as Kafka and Flink, or similar
+ Experience with Spark, containerization, and Kubernetes deployments
+ Experience with collaboration tools like Jira, Confluence and BitBucket
+ Excellent debugging, problem-solving skills and root cause analysis
+ Experience in testing tools or testing methodologies is a plus
+ Technical expertise with data models, data mining, and segmentation techniques
**Why TI?**
+ Engineer your future. We empower our employees to truly own their career and development. Come collaborate with some of the smartest people in the world to shape the future of electronics.
+ We're different by design. Diverse backgrounds and perspectives are what push innovation forward and what make TI stronger. We value each and every voice, and look forward to hearing yours. Meet the people of TI.
+ Benefits that benefit you. We offer competitive pay and benefits designed to help you and your family live your best life. Your well-being is important to us.
**About Texas Instruments**
Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, communications equipment and enterprise systems. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com.
Texas Instruments is an equal opportunity employer and supports a diverse, inclusive work environment.
If you are interested in this position, please apply to this requisition.
**Minimum Requirements:**
+ Bachelor's degree in Electrical Engineering, Computer Engineering or related field of study
+ Up to 2 years of experience in Data Engineering: building data products, working on integrations and ETLs, and developing analytical solutions and dashboards
**Soft Skills :**
+ Exhibits strong written and verbal communication skills; able to communicate with vendors, stakeholders, and management at all levels.
+ Desire to grow skills and responsibilities
+ Fosters strong relationships within functional and business areas.
+ High degree of judgment and initiative in resolving highly complex problems and developing recommendations and standards
+ Working effectively with cross functional teams located globally
+ Ability to lead change by effectively building commitment and winning support for initiatives
+ Self-motivated and able to provide technical leadership
+ Critical thinking and problem-solving skills
+ Demonstrating TI's Ambitions and Values and complying with our Code of Conduct.
**ECL/GTC Required:** Yes
Data Engineer
Posted 2 days ago
Job Description
**Who is USP?**
The U.S. Pharmacopeial Convention (USP) is an independent scientific organization that collaborates with the world's leading health and science experts to develop rigorous quality standards for medicines, dietary supplements, and food ingredients. At USP, we believe that scientific excellence is driven by a commitment to fairness, integrity, and global collaboration. This belief is embedded in our core value of Passion for Quality and is demonstrated through the contributions of more than 1,300 professionals across twenty global locations, working to strengthen the supply of safe, high-quality medicines worldwide.
At USP, we value inclusive scientific collaboration and recognize that attracting diverse expertise strengthens our ability to develop trusted public health standards. We foster an organizational culture that supports equitable access to mentorship, professional development, and leadership opportunities. Our partnerships, standards, and research reflect our belief that ensuring broad participation in scientific leadership results in stronger, more impactful outcomes for global health.
USP is proud to be an equal employment opportunity employer (EEOE) and is committed to ensuring fair, merit-based selection processes that enable the best scientific minds-regardless of background-to contribute to advancing public health solutions worldwide. We provide reasonable accommodations to individuals with disabilities and uphold policies that create an inclusive and collaborative work environment.
**Brief Job Overview**
The Data Engineer position is a hands-on non-supervisory role within the Data Strategy & Analytics team. The position is an individual contributor role that will serve a critical function by designing, building, and executing efficient and governed data pipelines. This role is integral to transitioning our development of data assets into production environments. The data engineer will collaborate with the data strategy program manager and data scientists by establishing robust data processing practices and tools in support of a data science capability. The position requires some time overlap for meetings with team members based in other time zones, primarily Eastern Standard Time (EST).
**How will YOU create impact here at USP?**
As part of our mission to advance scientific rigor and public health standards, you will play a vital role in increasing global access to high-quality medicines through public standards and related programs. USP prioritizes scientific integrity, regulatory excellence, and evidence-based decision-making to ensure health systems worldwide can rely on strong, tested, and globally relevant quality standards.
Additionally, USP's People and Culture division, in partnership with the Equity Office, invests in leadership and workforce development to equip all employees with the skills to create high-performing, inclusive teams. This includes training in equitable management practices and tools to promote engaged, collaborative, and results-driven work environments.
The Data Engineer has the following responsibilities:
+ Design, build, and implement data pipelines to support advanced use cases for product grade data science products
+ Manage data ingestion, processing, and production processes, as well as the supporting tools and platforms
+ Integrate external datasets with USP data
+ Interface with internal and external teams of data scientists and data engineers on process development
+ Collaborate with IT to integrate data warehouse, analytics applications, and data science platforms
+ Support transition to agile approaches for data asset development and deployment
+ Support transition to an SDLC approach to data asset and data science product development and deployment
**Who is USP Looking For?**
The successful candidate will have a demonstrated understanding of our mission, commitment to excellence through inclusive and equitable behaviors and practices, ability to quickly build credibility with stakeholders, along with the following competencies and experience:
+ Bachelor's Degree in Computer Science, Engineering, Mathematics, or related technical area
+ Minimum 3 years of data engineering specific experience designing, building, and supporting data pipelines
+ Experience in at least one modern relational database
+ Strong core SQL development experience and writing efficient SQL
+ Very strong development ability in Python, including experience working with pandas, NumPy, scikit-learn, Selenium, Beautiful Soup, and regular expressions
+ Hands-on experience with ETL/ELT tools and concepts
+ Strong communication and collaboration skills with strength in working in teams
+ Strong problem solving and time management skills
+ Very strong learning agility: the ability to pick up new tools and capabilities quickly
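As an illustration of the kind of pipeline work the requirements above describe, here is a minimal, hypothetical extract-transform-load sketch in Python with pandas. All table and column names are invented for illustration, and an in-memory SQLite database stands in for whatever relational source and warehouse the role actually uses:

```python
import sqlite3

import pandas as pd

# Extract: pull raw records from a relational source (an in-memory SQLite
# DB here; the table and columns are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_samples (sample_id TEXT, assay TEXT, value REAL);
    INSERT INTO raw_samples VALUES
        ('S1', 'purity', 99.2),
        ('S1', 'purity', 99.2),
        ('S2', 'purity', NULL);
""")
df = pd.read_sql_query("SELECT * FROM raw_samples", conn)

# Transform: drop duplicate rows and rows with missing measurements.
clean = df.drop_duplicates().dropna(subset=["value"])

# Load: write the cleaned table back to a "warehouse" table.
clean.to_sql("clean_samples", conn, index=False, if_exists="replace")
```

Real pipelines add orchestration, logging, and governance around these three steps, but the extract/transform/load separation shown here is the core concept the bullets refer to.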
**Additional Desired Preferences**
+ Experience with SDLC and Agile work environments
+ Experience working in cloud platforms (AWS, Azure, Google Cloud Platform, etc.)
+ Proven ability to clearly define priorities and focus on delivering work products
+ Ability to handle multiple priorities and complex projects in a fast-paced environment.
+ Substantial experience with Git for version control
+ Core data modeling ability
+ Ability to drive data modeling decisions and data engineering strategy to meet data strategy objectives
+ Direct experience supporting Data Science teams in moving from development to production environments
+ Prior experience in a scientific based industry or related content area a plus (pharma, life sciences, public health, research, etc.)
+ Experience with Apache Spark and/or PySpark
+ Experience with Continuous Integration and Continuous Delivery (Deployment) software development approach
**Supervisory Responsibilities**
None, this is an individual contributor role.
**Benefits**
USP provides benefits to protect you and your family today and tomorrow. From company-paid time off and comprehensive healthcare options to retirement savings, you can have peace of mind that your personal and financial well-being is protected.
Note: USP does not accept unsolicited resumes from 3rd party recruitment agencies and is not responsible for fees from recruiters or other agencies except under specific written agreement with USP.
**Job Category** Information Technology
**Job Type** Full-Time