2,836 NoSQL Database jobs in India

Data Engineer

Hyderabad, Andhra Pradesh Amgen

Posted 2 days ago

Job Description

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do.
Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives.
Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
**What you will do**
Let's do this. Let's change the world. In this vital role you will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.
+ Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
+ Understand the biotech/pharma or related domains & build highly efficient data pipelines to migrate and deploy complex data across systems
+ Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments
+ Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
+ Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
+ Apply expertise in data quality, data validation, and verification frameworks
+ Innovate, explore, and implement new tools and technologies to enhance data-processing efficiency
+ Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
+ Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
+ Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
+ Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
+ Collaborate and communicate effectively with product and cross-functional teams to understand business requirements and translate them into technical solutions
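The quality-check responsibility above can be sketched as a small rule-driven validator. This is an illustrative pattern only, not Amgen's actual framework; the record shape and rule names are invented for the example.

```python
# Minimal rule-driven data-quality validator (illustrative sketch).
# Each rule is a (name, predicate) pair applied to every record; failures
# are collected per rule so a pipeline can decide whether to quarantine a batch.

def validate(records, rules):
    """Return {rule_name: [indices of failing records]}."""
    failures = {name: [] for name, _ in rules}
    for i, rec in enumerate(records):
        for name, predicate in rules:
            if not predicate(rec):
                failures[name].append(i)
    return failures

# Hypothetical rules for a hypothetical record shape.
rules = [
    ("id_not_null", lambda r: r.get("id") is not None),
    ("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
]

batch = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 3, "amount": -2.5},
]

report = validate(batch, rules)
print(report)  # {'id_not_null': [1], 'amount_non_negative': [2]}
```

In a real pipeline the predicates would typically run as Spark column expressions or expectations rather than per-record Python, but the rule-catalog structure is the same.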
**What we expect of you**
We are all different, yet we all use our unique contributions to serve patients. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks.
**Basic Qualifications:**
+ Master's degree and 1 to 3 years of experience in Computer Science, IT, or a related field OR
+ Bachelor's degree and 3 to 5 years of experience in Computer Science, IT, or a related field OR
+ Diploma and 7 to 9 years of experience in Computer Science, IT, or a related field
+ Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
+ Proficiency in workflow orchestration and performance tuning for big data processing
+ Strong understanding of AWS services
+ Ability to quickly learn, adapt and apply new technologies
+ Strong problem-solving and analytical skills
+ Excellent communication and teamwork skills
+ Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
**Preferred Qualifications:**
+ AWS Certified Data Engineer certification
+ Databricks certification
+ Scaled Agile (SAFe) certification
+ Data Engineering experience in Biotechnology or pharma industry
+ Experience in writing APIs to make the data available to the consumers
+ Experience with SQL/NoSQL databases and vector databases for large language models
+ Experienced with data modeling and performance tuning for both OLAP and OLTP databases
+ Experienced with software engineering best-practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven etc.), automated unit testing, and DevOps
**Soft Skills:**
+ Excellent analytical and troubleshooting skills
+ Strong verbal and written communication skills
+ Ability to work effectively with global, virtual teams
+ High degree of initiative and self-motivation
+ Ability to manage multiple priorities successfully
+ Team-oriented, with a focus on achieving team goals
+ Ability to learn quickly; organized and detail-oriented
+ Strong presentation and public speaking skills
**What you can expect of us**
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
**Apply now and make a lasting impact with the Amgen team.**
**careers.amgen.com**
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Data Engineer

Chennai, Tamil Nadu UnitedHealth Group

Posted today

Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
The Data Engineering Analyst, using technical and analytical skills, is responsible for supporting members on, but not limited to, ongoing data refreshes, support requests, troubleshooting, and implementations, which are delivered on time and with the utmost quality. Data Analysts must own the complete analysis and implementation of an issue through to its final solution, including creative problem solving and technical decision making.
**Primary Responsibilities:**
+ Implementation
+ Completing file mapping based on layouts and requirements provided by Business Analysts/Project Managers; writing business logic (in SQL or other languages as appropriate to the particular business intelligence technology) for transforming data into product specifications; running and interpreting quality checks against loaded data; spinning up the product front-end; completing quality testing and articulating potential issues back to the BA/PM
+ Production:
+ End-to-end management of the ongoing data load process, including monitoring of feeds and outreach when data is delayed; creating customized reports based on specifications provided by Business Analysts; completing configuration, mapping, or logic changes as described and prioritized by BAs; troubleshooting issues raised by BAs from root cause identification to resolution
+ In all phases of the client life cycle, the Data Engineering Analyst is responsible for writing complex SQL queries to fulfill business requirements. They must have the ability to analyze and understand data to support decision making, and they are also responsible for working on continuous improvement projects to increase the team's effectiveness
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
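The "business logic in SQL for transforming data into product specifications" described above can be illustrated with a toy transformation. The table and column names are invented for the example, and SQLite stands in for whatever warehouse the product actually uses.

```python
import sqlite3

# Toy example: reshape raw claims-feed rows into a per-member summary layout.
# Schema and values are invented purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_claims (member_id TEXT, svc_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_claims VALUES (?, ?, ?)",
    [("M1", "2024-01-05", 120.0), ("M1", "2024-01-20", 80.0), ("M2", "2024-01-11", 45.5)],
)

# The "business logic": one output row per member with claim count and total spend.
rows = conn.execute(
    """
    SELECT member_id, COUNT(*) AS claim_count, SUM(amount) AS total_amount
    FROM raw_claims
    GROUP BY member_id
    ORDER BY member_id
    """
).fetchall()
print(rows)  # [('M1', 2, 200.0), ('M2', 1, 45.5)]
```

A quality check against the loaded data, as the bullet above describes, would then compare these aggregates back to control totals from the source feed.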
**Required Qualifications:**
+ Graduate degree or equivalent experience
+ Bachelor's degree in Computer Science or any engineering
+ 2+ years of experience working with ETL & Data warehousing services
+ Experience with or an interest in learning to work with complex health care databases
+ Solid analytical and problem solving skills
+ Passion for working with large volumes of data in a challenging environment
+ Technical aptitude for learning new technologies
+ Intermediate to expert knowledge of Azure, Databricks, Scala, Spark SQL, and Python (good to have)
+ Knowledge of relational databases such as Oracle, SQL Server, and MySQL
+ Knowledge of SQL and procedural scripting
+ Ability to contribute to technical documentation of specifications and processes
+ Ability to participate in the ongoing invention, testing and use of tools used by the team to improve processes and bring scale
+ Intermediate skills using Microsoft Excel and Microsoft Word
**Preferred Qualifications:**
+ Ability to query large data sets using SQL, with query optimization skills
+ Working knowledge of health information systems
+ Clinical or medical data analysis experience highly preferred
+ Understanding of clinical processes and vocabulary
**Soft Skills:**
+ Highly analytical, curious and creative
+ Solid organization skills, very detail oriented, with careful attention to work processes
+ Takes ownership of responsibilities and follows through hand-offs to other groups
+ Competence with PCs and Windows Office applications
+ Enjoys a fast paced environment and the opportunity to quickly learn new skills
+ Ability to work creatively and flexibly, both independently and as part of a team
+ Good written and oral communications skills
+ Ability to analyze user requests, define requirements, investigate and report conclusions
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineer

Pune, Maharashtra Ensono

Posted today

Job Description

Data Engineer
Pune, India
**Data Engineer**
At Ensono, our Purpose is to be a relentless ally, disrupting the status quo and unleashing our clients to Do Great Things! We enable our clients to achieve key business outcomes that reshape how our world runs. As an expert technology adviser and managed service provider with cross-platform certifications, Ensono empowers our clients to keep up with continuous change and embrace innovation.
We can Do Great Things because we have great Associates. The Ensono Core Values unify our diverse talents and are woven into how we do business. These five traits are the key to achieving our purpose.
HONESTY, RELIABILITY, COLLABORATION, CURIOSITY, PASSION
At Ensono, we are evolving into a **software-first Managed Services Provider**, a place where **AI, automation, and human expertise work together** to deliver **10x productivity** for our clients. Our **Envision Operating System** is the backbone of this transformation, orchestrating operations across mainframe, distributed, and cloud environments.
The **Data Engineer** plays a pivotal role in this vision. You won't just be moving data from point A to point B; you'll be **designing the pipelines and platforms that power predictive services, anomaly detection, and intelligent automation**. From ServiceNow tickets to mainframe telemetry, you'll turn raw, messy signals into **high-quality, AI/ML-ready datasets** that fuel real-time insights and proactive operations.
This is not a back-office role; it's a **frontline enabler of transformation**. The work you do directly impacts uptime, cost optimization, and Ensono's ability to move from manual, reactive support to a **zero-touch, predictive model**.
We are looking for engineers who don't just architect pipelines but **get stuff done**: builders who can deliver working solutions, iterate quickly, and collaborate with data scientists, ML engineers, and ops teams to make sure models don't just run in notebooks but actually change how work gets done.
If you want to be part of the team that's **rewiring managed services for the AI era**, this is your role.
**What You Will Do:**
+ **Data Pipeline Development** - Build, optimize, and maintain ELT/ETL pipelines that move, clean, and organize data from ServiceNow, mainframe, distributed, and cloud systems.
+ **Integration with ServiceNow** - Develop robust data extraction, transformation, and ingestion patterns tailored for operational data (incidents, alerts, changes, requests) to make it AI/ML-ready.
+ **Data Infrastructure & Architecture** - Design scalable data models, storage frameworks, and integration layers in Snowflake and other modern platforms.
+ **Data Quality & Governance** - Implement standards, monitoring, and validation frameworks to ensure clean, trustworthy data across all pipelines.
+ **Collaboration with AI/ML Teams** - Partner with Data Scientists, ML Engineers, and MLOps to deliver production-grade datasets powering predictive models, anomaly detection, and intelligent runbooks.
+ **Automation & Optimization** - Identify opportunities to streamline data workflows, reduce manual intervention, and lower costs while improving reliability.
+ **Cross-functional Enablement** - Work with Finance, Procurement, Cloud Ops, Mainframe Ops, and Service Operations teams to ensure data is aligned to high-value business outcomes.
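The ServiceNow-ingestion responsibility above, turning operational records into AI/ML-ready data, can be sketched as a flattening step. The field names here are assumptions for illustration, not the real ServiceNow schema or Ensono's pipeline.

```python
from datetime import datetime

# Sketch: turn a raw ServiceNow-style incident record into a flat,
# ML-ready feature row. Field names are invented for the example.

def to_feature_row(incident):
    opened = datetime.fromisoformat(incident["opened_at"])
    resolved = datetime.fromisoformat(incident["resolved_at"])
    return {
        "priority": int(incident["priority"]),          # cast string code to int
        "is_mainframe": incident["ci_class"] == "mainframe",
        "resolve_minutes": (resolved - opened).total_seconds() / 60,
    }

raw = {
    "number": "INC0012345",
    "priority": "2",
    "ci_class": "mainframe",
    "opened_at": "2024-03-01T10:00:00",
    "resolved_at": "2024-03-01T11:30:00",
}
print(to_feature_row(raw))
# {'priority': 2, 'is_mainframe': True, 'resolve_minutes': 90.0}
```

In production this transformation would run inside the ELT pipeline (e.g., as a Snowflake-bound transform) with validation and lineage attached, but the raw-signal-to-feature mapping is the core idea.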
**We want all new Associates to succeed in their roles at Ensono. That's why we've outlined the job requirements below. To be considered for this role, it's important that you meet all Required Qualifications. If you do not meet all of the Preferred Qualifications, we still encourage you to apply.**   
**Required Skills & Experience**
+ **Strong SQL & Data Modeling** skills.
+ Expertise in **ELT/ETL pipeline development** and orchestration.
+ **Python** (must-have) plus at least one of Java, Scala, or C++.
+ Hands-on experience with **Snowflake** or equivalent cloud data warehouse platforms.
+ Proven experience **extracting, transforming, and operationalizing data from ServiceNow and common monitoring platforms / other enterprise systems (Workday, Concur, etc.)**.
+ Familiarity with observability tooling and distributed data systems.
+ Knowledge of enterprise data governance, compliance, and lineage.
+ 4+ years of experience preferred.
+ Bonus: experience working directly with AI/ML feature pipelines.
**Mindset & Values**
+ **Get Stuff Done** - You're biased toward execution and results, not endless design cycles.
+ **Business Impact Driven** - You build pipelines that _move the needle_ on uptime, cost reduction, and predictive operations.
+ **Collaborative Partner** - You thrive in a cross-functional environment, sitting at the intersection of Ops, AI/ML, and business stakeholders.
+ **Continuous Learner** - Always looking for ways to apply new tools and technologies to accelerate delivery.
**Success Looks Like**
+ Reliable pipelines that pull ServiceNow data into Snowflake for **real-time incident prediction**.
+ Faster transition of AI/ML proofs of concept into production pipelines.
+ Demonstrated cost savings through **automated workload optimization** and **capacity forecasting**.
+ Enabling predictive services to scale seamlessly across mainframe, distributed, and cloud environments.
**Why Ensono?**
Ensono is a place where we unleash Associates to Do Great Things - for our clients and for your career. This could mean achieving a professional goal, collaborating with your team on an innovative idea, learning a new skill, reaching a wellness milestone, or engaging in your community through volunteer programs. Whatever it means to you, we want Ensono to be the place where you can do great things.
We value flexibility and work-life balance. Positions that are not required to be onsite to support a client may offer the ability to work remotely or hybrid at an Ensono office location.

Data Engineer

Chennai, Tamil Nadu UnitedHealth Group

Posted 1 day ago

Job Description

**Primary Responsibilities:**
+ In this role, the resource will "own" data-source maintenance, monitoring, and improvement projects from the data analytics perspective, including:
+ Bachelor's degree, or work equivalent, in health sciences, health information systems, computer science, mathematics, engineering or related field
+ Conduct gap analysis to compare the client's data structure and content to that of the standard expected for that EMR, billing, or adjudicated claims data source
+ Look through the client's data to locate data elements of interest to our application. Complex cases involve reconciliation of data in multiple locations, clever ways of imputing information from sub-par data, and de-duplication strategies
+ Work closely with engineering and operations teams as the project is implemented, to answer questions and resolve data issues that arise
+ Contribute to technical documentation of specifications and processes
+ Participate in the ongoing invention, testing and use of tools used by the team to improve processes and bring scale
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
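One of the de-duplication strategies mentioned in the responsibilities above can be sketched as "keep the most recent record per natural key". The key and field names are illustrative, not taken from any real client data source.

```python
# Sketch of a common de-duplication strategy: for each natural key,
# keep only the most recently updated record. Field names are invented.

def dedupe_latest(records, key="patient_id", ts="updated_at"):
    latest = {}
    for rec in records:
        k = rec[key]
        # ISO-format date strings compare correctly as plain strings.
        if k not in latest or rec[ts] > latest[k][ts]:
            latest[k] = rec
    return sorted(latest.values(), key=lambda r: r[key])

records = [
    {"patient_id": "P1", "updated_at": "2024-01-01", "status": "old"},
    {"patient_id": "P1", "updated_at": "2024-02-01", "status": "new"},
    {"patient_id": "P2", "updated_at": "2024-01-15", "status": "only"},
]
print(dedupe_latest(records))
```

Real EMR or claims feeds complicate this with conflicting timestamps and partial records, which is where the "clever ways of imputing information from sub-par data" in the bullet above come in.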
**Required Qualifications:**
+ 3+ years of experience working with data, analyzing data and understanding data
+ 3+ years of experience working with SQL (SQL Server or Oracle)
+ 3+ years of experience working with scripting (Python, PowerShell, or Batch scripting)
+ Working knowledge of cloud (AWS or Azure) and Power BI preferred
+ Understanding of relational databases and their principles of operation
+ Intermediate skills using Microsoft Excel
+ Proven communication skills for client-facing calls and emails
**Preferred Qualifications:**
+ Knowledge of UNIX and any ETL tool is an added advantage

Data Engineer

Bengaluru, Karnataka Takeda Pharmaceuticals

Posted 1 day ago

Job Description

By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
**Job Description**
**The Future Begins Here**
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet.
Bangalore, India's epicenter of innovation, has been selected as home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.
**At Takeda's ICC we Unite in Diversity**
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
**Job Summary:**
As a Data Engineer, you will build and maintain data systems and construct datasets that are easy to analyze and that support Business Intelligence requirements as well as downstream systems.
+ Develops and maintains scalable data pipelines and builds out new integrations using AWS native technologies to support continuing increases in data source, volume, and complexity.
+ Collaborates with analytics and business teams to improve data models that feed business intelligence tools and dashboards, increasing data accessibility and fostering data-driven decision making across the organization.
+ Implements processes and systems to drive data reconciliation, monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes that depend on it.
+ Writes unit/integration/performance test scripts, contributes to engineering wiki, and documents work.
+ Performs data analysis required to troubleshoot data related issues and assist in the resolution of data issues.
+ Works closely with a team of frontend and backend engineers, product managers, and analysts.
+ Works with DevOps and Cloud Center of Excellence to deploy data pipeline solutions in Takeda AWS environments meeting security and performance requirements.
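The data-reconciliation responsibility above can be illustrated with a simple row-count comparison between a source extract and its loaded target. The table names and tolerance value are invented for the sketch; a real implementation would compare checksums or column aggregates as well.

```python
# Sketch of a row-count reconciliation check between source and target.
# A tolerance (as a fraction of the source count) allows for in-flight rows.

def reconcile(source_counts, target_counts, tolerance=0.0):
    """Return {table: (source_count, target_count)} for tables out of tolerance."""
    mismatches = {}
    for table, src in source_counts.items():
        tgt = target_counts.get(table, 0)
        allowed = src * tolerance
        if abs(src - tgt) > allowed:
            mismatches[table] = (src, tgt)
    return mismatches

# Hypothetical counts from an extract manifest vs. the loaded warehouse.
src = {"orders": 1000, "customers": 250}
tgt = {"orders": 998, "customers": 250}
print(reconcile(src, tgt, tolerance=0.001))  # {'orders': (1000, 998)}
```

A monitoring job would run a check like this after each load and alert stakeholders on any mismatch, which is how "production data is always accurate and available" gets enforced in practice.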
**Skills and Qualifications**
+ Bachelor's degree, from an accredited institution, in Engineering, Computer Science, or a related field.
+ 3+ years of experience in software, data, data warehouse, data lake, and analytics reporting development.
+ Strong experience in data/Big Data, data integration, data modeling, modern database (graph, SQL, NoSQL, etc.) query languages, and AWS cloud technologies including DMS, Lambda, Databricks, SQS, Step Functions, Data Streaming, Visualization, etc.
+ Solid experience in database administration, dimensional modeling, and SQL optimization - Aurora is preferred.
+ Experience designing, building, and maintaining data integrations using SOAP/REST web services/APIs, as well as schema design and dimensional data modeling.
+ Excellent written and verbal communication skills.
**What Takeda Can Offer You**
+ Takeda is certified as a Top Employer, not only in India, but also globally. No investment we make pays greater dividends than taking good care of our people.
+ At Takeda, you take the lead on building and shaping your own career.
+ Joining the ICC in Bangalore will give you access to high-end technology, continuous training and a diverse and inclusive network of colleagues who will support your career growth.
**Benefits**
It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are:
+ Competitive Salary + Annual Performance Bonus
+ Flexible work environment, including hybrid working
+ Comprehensive Healthcare Insurance Plans for self, spouse, and children
+ Group Term Life Insurance and Group Accident Insurance programs
+ Health & Wellness programs including annual health screening, weekly health sessions for employees.
+ Employee Assistance Program
+ Broad Variety of learning platforms
+ Diversity, Equity, and Inclusion Programs
+ No Meeting Days
+ Reimbursements - Home Internet & Mobile Phone
+ Employee Referral Program
+ Leaves - Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 days)
**About ICC in Takeda**
+ Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day.
+ As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.
#Li-Hybrid
**Locations**
IND - Bengaluru
**Worker Type**
Employee
**Worker Sub-Type**
Regular
**Time Type**
Full time

Data Engineer

Pune, Maharashtra Citigroup

Posted 2 days ago

Job Description

**The Role**
The Data Engineer is accountable for developing high quality data products to support the Bank's regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.
**Responsibilities**
+ Developing and supporting scalable, extensible, and highly available data solutions
+ Deliver on critical business priorities while ensuring alignment with the wider architectural vision
+ Identify and help address potential risks in the data supply chain
+ Follow and contribute to technical standards
+ Design and develop analytical data models
**Required Qualifications & Work Experience**
+ First Class Degree in Engineering/Technology (4-year graduate course)
+ 3 to 4 years' experience implementing data-intensive solutions using agile methodologies
+ Experience of relational databases and using SQL for data querying, transformation and manipulation
+ Experience of modelling data for analytical consumers
+ Ability to automate and streamline the build, test and deployment of data pipelines
+ Experience in cloud native technologies and patterns
+ A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
+ Excellent communication and problem-solving skills
**Technical Skills (Must Have)**
+ **ETL:** Hands-on experience building data pipelines. Proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend, or Informatica
+ **Big Data**: Exposure to 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
+ **Data Warehousing & Database Management** : Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
+ **Data Modeling & Design** : Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
+ **Languages** : Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
+ **DevOps** : Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
**Technical Skills (Valuable)**
+ **Ab Initio** : Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
+ **Cloud**: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
+ **Data Quality & Controls** : Exposure to data validation, cleansing, enrichment and data controls
+ **Containerization** : Fair understanding of containerization platforms like Docker, Kubernetes
+ **File Formats** : Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta
+ **Others** : Basics of Job scheduler like Autosys. Basics of Entitlement management
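The data warehousing and dimensional modeling skills listed above come down to structuring data as fact and dimension tables for analytical rollups. A tiny star-schema illustration, with an invented schema and SQLite standing in for the warehouse:

```python
import sqlite3

# Tiny star schema: one fact table joined to one dimension table for a
# category-level rollup. Schema and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO fact_sales VALUES (1, 10.0), (1, 15.0), (2, 30.0);
""")

# Analytical query: aggregate facts by a dimension attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 25.0), ('games', 30.0)]
```

The same shape scales up: narrow, additive fact tables keyed to descriptive dimensions is what makes these rollups cheap in Snowflake, Hive, or any of the other platforms the role names.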
Certification on any of the above topics would be an advantage.
---
**Job Family Group:**
Technology
---
**Job Family:**
Digital Software Engineering
---
**Time Type:**
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.

Data Engineer

Hyderabad, Andhra Pradesh Amgen

Posted 2 days ago


Job Description

**Data Engineer**
**What you will do**
Let's do this. Let's change the world. In this vital role as a Data Engineer at Amgen, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
+ Design, develop, and optimize data pipelines/workflows using Databricks (Spark, Delta Lake) for ingestion, transformation, and processing of large-scale data. Knowledge of the Medallion Architecture is an added advantage.
+ Build ETL pipelines with Informatica or other ETL tools.
+ Support data governance and metadata management.
+ Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast paced business needs across geographic regions
+ Identify and resolve complex data-related challenges
+ Adhere to best practices for coding, testing, and designing reusable code/component
+ Analyze business and technical requirements and begin translating them into simple development tasks
+ Execute unit and integration tests, and contribute to maintaining software quality
+ Identify and fix bugs and defects during development or testing phases
+ Contribute to the maintenance and support of applications by monitoring performance and reporting issues
+ Use CI/CD pipelines as part of DevOps practices and assist in the release process
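The pipeline responsibilities above follow a familiar extract-transform-load shape. As a rough illustration only — a pure-Python stand-in with hypothetical record and field names, where a real Databricks pipeline would use Spark DataFrames and Delta tables — the staged flow might be sketched as:

```python
# Minimal sketch of the staged ETL pattern described above. Field names
# ("id", "value") and the source tag are illustrative assumptions.

def extract(raw_rows):
    """Bronze stage: keep raw records as-is, tagging their source."""
    return [dict(row, _source="raw_feed") for row in raw_rows]

def transform(rows):
    """Silver stage: drop malformed records and normalize types."""
    clean = []
    for row in rows:
        if row.get("id") is None:
            continue  # quarantine records missing a key
        clean.append({"id": int(row["id"]), "value": float(row.get("value", 0.0))})
    return clean

def load(rows):
    """Gold stage: aggregate into a reporting-ready summary."""
    return {"row_count": len(rows), "total": sum(r["value"] for r in rows)}

raw = [{"id": "1", "value": "2.5"}, {"id": None}, {"id": "2", "value": "1.5"}]
summary = load(transform(extract(raw)))
print(summary)  # {'row_count': 2, 'total': 4.0}
```

The same bronze/silver/gold layering is what the Medallion Architecture mentioned above formalizes at table level.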
**What we expect of you**
We are all different, yet we all use our unique contributions to serve patients.
**Basic Qualifications:**
+ Master's/Bachelor's degree and 4 to 8 years of experience in Computer Science, IT, or a related field
**Preferred Qualifications:**
+ Experience with Software engineering best-practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
+ Knowledge of Python/R, Databricks, cloud data platforms
+ Strong understanding of data governance frameworks, tools, and best practices.
+ Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
**Must-Have Skills:**
+ Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
+ Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), Python for workflow orchestration, and performance tuning of big data processing
+ Proficiency in data analysis tools (e.g., SQL)
+ Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
+ Strong programming skills in Python, PySpark, and SQL.
+ Familiarity with Informatica and/or other ETL tools.
+ Experience working with cloud data services (Azure, AWS, or GCP).
+ Strong understanding of data modeling, entity relationships
**Professional Certifications**
+ AWS Certified Data Engineer (preferred)
+ Databricks Certificate (preferred)
**Soft Skills:**
+ Excellent problem-solving and analytical skills
+ Strong communication and interpersonal abilities
+ High attention to detail and commitment to quality
+ Ability to prioritize tasks and work under pressure
+ Team-oriented with a proactive and collaborative mindset
+ Willingness to mentor junior developers and promote best practices
+ Adaptable to changing project requirements and evolving technology

Data Engineer

Mumbai, Maharashtra Mondelez International

Posted 2 days ago


Job Description

**Job Description**
**Are You Ready to Make It Happen at Mondelēz International?**
**Join our Mission to Lead the Future of Snacking. Make It With Pride.**
You will provide technical contributions to the data science process. In this role, you are the internally recognized expert in data, building infrastructure and data pipelines/retrieval mechanisms to support our data needs
**How you will contribute**
You will:
+ Operationalize and automate activities for efficiency and timely production of data visuals
+ Assist in providing accessibility, retrievability, security and protection of data in an ethical manner
+ Search for ways to get new data sources and assess their accuracy
+ Build and maintain the transports/data pipelines and retrieve applicable data sets for specific use cases
+ Understand data and metadata to support consistency of information retrieval, combination, analysis, pattern recognition and interpretation
+ Validate information from multiple sources.
+ Assess issues that might prevent the organization from making maximum use of its information assets
**What you will bring**
A desire to drive your future and accelerate your career and the following experience and knowledge:
+ Extensive experience in data engineering in a large, complex business with multiple systems such as SAP, internal and external data, etc. and experience setting up, testing and maintaining new systems
+ Experience of a wide variety of languages and tools (e.g. script languages) to retrieve, merge and combine data
+ Ability to simplify complex problems and communicate to a broad audience
Are You Ready to Make It Happen at Mondelēz International?
Join our Mission to Lead the Future of Snacking. Make It with Pride
**In This Role**
As a DaaS Data Engineer, you will have the opportunity to design and build scalable, secure, and cost-effective cloud-based data solutions. You will develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes, ensuring data quality and validation processes to maintain data accuracy and integrity. You will ensure efficient data storage and retrieval for optimal performance, and collaborate closely with data teams, product owners, and other stakeholders to stay updated with the latest cloud technologies and best practices.
**Role & Responsibilities:**
+ **Design and Build:** Develop and implement scalable, secure, and cost-effective cloud-based data solutions.
+ **Manage Data Pipelines:** Develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes.
+ **Ensure Data Quality:** Implement data quality and validation processes to ensure data accuracy and integrity.
+ **Optimize Data Storage:** Ensure efficient data storage and retrieval for optimal performance.
+ **Collaborate and Innovate:** Work closely with data teams, product owners, and stay updated with the latest cloud technologies and best practices.
**Technical Requirements:**
+ **Programming:** Python
+ **Database:** SQL, PL/SQL, PostgreSQL, BigQuery, Stored Procedures/Routines.
+ **ETL & Integration:** AecorSoft, Talend, DBT, Databricks (Optional), Fivetran.
+ **Data Warehousing:** SCD, Schema Types, Data Mart.
+ **Visualization:** PowerBI (Optional), Tableau (Optional), Looker.
+ **GCP Cloud Services:** BigQuery, GCS.
+ **Supply Chain:** IMS + Shipment functional knowledge good to have.
+ **Supporting Technologies:** Erwin, Collibra, Data Governance, Airflow.
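The "Data Warehousing: SCD" bullet above refers to slowly changing dimensions. A hedged sketch of a Type 2 SCD update — column names and the in-memory dimension table are illustrative assumptions, not an actual warehouse schema — looks like this:

```python
# Type 2 slowly changing dimension (SCD) update: expire the superseded
# row and append a new current version. "key"/"attr" names are hypothetical.

def scd2_upsert(dimension, incoming, today):
    """Close out changed rows and append new current versions."""
    current = {r["key"]: r for r in dimension if r["end_date"] is None}
    for row in incoming:
        old = current.get(row["key"])
        if old is not None and old["attr"] == row["attr"]:
            continue  # unchanged: nothing to do
        if old is not None:
            old["end_date"] = today  # expire the superseded version
        dimension.append({"key": row["key"], "attr": row["attr"],
                          "start_date": today, "end_date": None})
    return dimension

dim = [{"key": "C1", "attr": "Mumbai", "start_date": "2023-01-01", "end_date": None}]
dim = scd2_upsert(dim, [{"key": "C1", "attr": "Pune"}], "2024-06-01")
print(len(dim))  # 2 rows: one expired, one current
```

In a warehouse such as BigQuery this logic would typically be a `MERGE` statement rather than application code; the sketch only shows the versioning idea.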
**Soft Skills:**
+ **Problem-Solving:** The ability to identify and solve complex data-related challenges.
+ **Communication:** Effective communication skills to collaborate with Product Owners, analysts, and stakeholders.
+ **Analytical Thinking:** The capacity to analyze data and draw meaningful insights.
+ **Attention to Detail:** Meticulousness in data preparation and pipeline development.
+ **Adaptability:** The ability to stay updated with emerging technologies and trends in the data engineering field.
Within Country Relocation support available and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy
**Business Unit Summary**
**At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about.**
**We have a rich portfolio of strong brands globally and locally including many household names such as** **_Oreo_** **,** **_belVita_** **and** **_LU_** **biscuits;** **_Cadbury Dairy Milk_** **,** **_Milka_** **and** **_Toblerone_** **chocolate;** **_Sour Patch Kids_** **candy and** **_Trident_** **gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum.**
**Our 80,000 makers and bakers are located in more** **than 80 countries** **and we sell our products in** **over 150 countries** **around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen-and happen fast.**
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
**Job Type**
Regular
Data Science
Analytics & Data Science
Join us and Make It An Opportunity!
Mondelez Global LLC is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected Veteran status, sexual orientation, gender identity, gender expression, genetic information, or any other characteristic protected by law. Applicants who require accommodation to participate in the job application process may contact for assistance.

Data Engineer

CAI

Posted today


Job Description

Data Engineer
**Req number:**
R6413
**Employment type:**
Full time
**Worksite flexibility:**
Remote
**Who we are**
CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right-whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.
**Job Summary**
We are seeking a motivated Data Engineer to join our dynamic team. As a Data Engineer, you will play a crucial role in building cloud-based data lake and analytics architectures using AWS and Databricks, with proficiency in Python programming for data processing and automation. This is a full-time, remote position.
**Job Description**
We are looking for a Data Engineer with experience in building data products using Databricks and related technologies. This is a full-time, remote position.
**What You'll Do**
+ Design, develop, and maintain data lakes and data pipelines on AWS using ETL frameworks and Databricks
+ Integrate and transform large-scale data from multiple heterogeneous sources into a centralized data lake environment
+ Implement and manage Delta Lake architecture using Databricks Delta or Apache Hudi
+ Develop end-to-end data workflows using PySpark, Databricks Notebooks, and Python scripts for ingestion, transformation, and enrichment
+ Design and develop data warehouses and data marts for analytical workloads using Snowflake, Redshift, or similar systems
+ Design and evaluate data models (Star, Snowflake, Flattened) for analytical and transactional systems
+ Optimize data storage, query performance, and cost across the AWS and Databricks ecosystem
+ Build and maintain CI/CD pipelines for Databricks notebooks, jobs, and Python-based data processing scripts
+ Collaborate with data scientists, analysts, and stakeholders to deliver high-performance, reusable data assets
+ Maintain and manage code repositories (Git) and promote best practices in version control, testing, and deployment
+ Participate in making major technical and architectural decisions for data engineering initiatives
+ Monitor and troubleshoot Databricks clusters, Spark jobs, and ETL processes for performance and reliability
+ Coordinate with business and technical teams through all phases of the software development life cycle
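The "Star, Snowflake, Flattened" data-model bullet above describes organizing analytical data as fact tables keyed to dimension tables. A toy illustration — table contents and names are hypothetical, and production versions would live in Snowflake or Redshift rather than Python dicts — of a star-schema join and rollup:

```python
# Star schema in miniature: one fact table, two dimension lookups.
# All names and values here are invented for illustration.

dim_product = {1: {"name": "widget", "category": "tools"}}
dim_region = {10: {"region": "south"}}
fact_sales = [{"product_id": 1, "region_id": 10, "amount": 250.0},
              {"product_id": 1, "region_id": 10, "amount": 100.0}]

# Denormalize by joining each fact row to its dimensions, then roll up,
# as a star-schema query engine would for GROUP BY category, region.
report = {}
for row in fact_sales:
    key = (dim_product[row["product_id"]]["category"],
           dim_region[row["region_id"]]["region"])
    report[key] = report.get(key, 0.0) + row["amount"]

print(report)  # {('tools', 'south'): 350.0}
```

The design choice a star schema encodes is that facts stay narrow and numeric while descriptive attributes live once in small dimension tables.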
**What You'll Need**
**Required**
+ 5+ years of experience building and managing Data Lake Architecture on AWS Cloud
+ 3+ years of experience with AWS Data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, and Redshift
+ 3+ years of experience building Data Warehouses on Snowflake, Redshift, HANA, Teradata, or Exasol
+ 3+ years of hands-on experience working with Apache Spark or PySpark, on Databricks
+ 3+ years of experience implementing Delta Lakes using Databricks Delta or Apache Hudi
+ 3+ years of experience in ETL development using Databricks, AWS Glue, or other modern frameworks
+ Proficiency in Python for data engineering, automation, and API integrations.
+ Experience in Databricks Jobs, Workflows, and Cluster Management
+ Experience with CI/CD pipelines and Infrastructure as Code (IaC) tools like Terraform or CloudFormation is a plus
+ Bachelor's degree in computer science, Information Technology, Data Science, or related field
**Physical Demands**
+ This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc.
+ Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor
**Reasonable accommodation statement**
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to or (888) 824-8111.

Data Engineer

Bangalore, Karnataka NTT America, Inc.

Posted 2 days ago


Job Description

**Make an impact with NTT DATA**
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion - it's a place where you can grow, belong and thrive.
**Your day at NTT DATA**
The Data Engineer is a seasoned subject matter expert, responsible for the transformation of data into a structured format that can be easily analyzed in a query or report.
This role is responsible for developing structured data sets that can be reused or complemented by other data sets and reports.
This role analyzes the data sources and data structure and designs and develops data models to support the analytics requirements of the business which includes management / operational / predictive / data science capabilities.
**Key responsibilities:**
+ Creates data models in a structured data format to enable analysis thereof.
+ Designs and develops scalable extract, transform, and load (ETL) packages from the business source systems, and develops ETL routines to populate data from sources.
+ Participates in the transformation of object and data models into appropriate database schemas within design constraints.
+ Interprets installation standards to meet project needs and produce database components as required.
+ Creates test scenarios and is responsible for thorough testing and validation to support the accuracy of data transformations.
+ Accountable for running data migrations across different databases and applications, for example MS Dynamics, Oracle, SAP and other ERP systems.
+ Works across multiple IT and business teams to define and implement data table structures and data models based on requirements.
+ Accountable for analysis, and development of ETL and migration documentation.
+ Collaborates with various stakeholders to evaluate potential data requirements.
+ Accountable for the definition and management of scoping, requirements, definition, and prioritization activities for small-scale changes and assist with more complex change initiatives.
+ Collaborates with various stakeholders, contributing to the recommendation of improvements in automated and non-automated components of the data tables, data queries and data models.
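The testing-and-validation responsibility in the list above often reduces to reconciliation checks between source and target after a transformation. A small sketch, under assumed check types and field names (a real harness would compare database tables, not lists):

```python
# Toy validation harness: verify a transformation preserved row counts
# and key totals. The checks and "amount" field are illustrative assumptions.

def validate_transform(source_rows, target_rows, amount_field="amount"):
    """Return a list of human-readable failures (empty list == pass)."""
    failures = []
    if len(source_rows) != len(target_rows):
        failures.append("row count mismatch")
    src_total = sum(r[amount_field] for r in source_rows)
    tgt_total = sum(r[amount_field] for r in target_rows)
    if abs(src_total - tgt_total) > 1e-9:
        failures.append("amount total drifted")
    return failures

src = [{"amount": 10.0}, {"amount": 5.0}]
tgt = [{"amount": 10.0}, {"amount": 5.0}]
print(validate_transform(src, tgt))  # [] -> validation passed
```

Checks like these are what make data migrations across ERP systems (MS Dynamics, Oracle, SAP) auditable rather than best-effort.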
**To thrive in this role, you need to have:**
+ Seasoned knowledge of the definition and management of scoping, requirements, definition, and prioritization activities.
+ Seasoned understanding of database concepts, object and data modelling techniques and design principles and conceptual knowledge of building and maintaining physical and logical data models.
+ Seasoned expertise in Microsoft Azure Data Factory, SQL Analysis Server, SAP Data Services, SAP BTP.
+ Seasoned understanding of data architecture landscape between physical and logical data models
+ Analytical mindset with excellent business acumen skills.
+ Problem-solving aptitude with the ability to communicate effectively, both written and verbal.
+ Ability to build effective relationships at all levels within the organization.
+ Seasoned expertise in programming languages (Perl, Bash, shell scripting, Python, etc.).
**Academic qualifications and certifications:**
+ Bachelor's degree or equivalent in computer science, software engineering, information technology, or a related field.
+ Relevant certifications preferred such as SAP, Microsoft Azure etc.
+ Certified Data Engineer or similar professional certification preferred.
**Required experience:**
+ Seasoned experience in data engineering and data mining within a fast-paced environment.
+ Proficient in building modern data analytics solutions that deliver insights from large and complex data sets at multi-terabyte scale.
+ Seasoned experience with architecture and design of secure, highly available and scalable systems.
+ Seasoned proficiency in automation, scripting and proven examples of successful implementation.
+ Seasoned proficiency using scripting languages (Perl, Bash, shell scripting, Python, etc.).
+ Seasoned experience with big data tools like Hadoop, Cassandra, Storm etc.
+ Seasoned experience in any applicable language, preferably .NET.
+ Seasoned proficiency in working with SAP, SQL, MySQL databases and Microsoft SQL.
+ Seasoned experience working with data sets and ordering data through MS Excel functions, e.g. macros, pivots.
**Workplace type** **:**
Remote Working
**About NTT DATA**
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.
**Equal Opportunity Employer**
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
**Third parties fraudulently posing as NTT DATA recruiters**
NTT DATA recruiters will never ask job seekers or candidates for payment or banking information during the recruitment process, for any reason. Please remain vigilant of third parties who may attempt to impersonate NTT DATA recruiters-whether in writing or by phone-in order to deceptively obtain personal data or money from you. All email communications from an NTT DATA recruiter will come from an **@nttdata.com** email address. If you suspect any fraudulent activity, please contact us.
 
