1,384 Data Manipulation jobs in India
Data Transformation Architect
Posted today
Job Viewed
Job Description
Talworx is an emerging recruitment consulting and services firm. We are hiring for our client, which provides governments, businesses, and individuals with market data, expertise, and technology solutions for confident decision-making. Our services span from global energy solutions to sustainable finance solutions. From helping our customers perform investment analysis to guiding them through sustainability and the energy transition across supply chains, our solutions help unlock new opportunities and solve challenges.
Location: Hybrid, working from the nearest client office; preferably Hyderabad, Bangalore, or Gurgaon.
The Team: You will be part of a rapidly growing organization, joining a team of highly motivated and professional Data Scientists and Machine Learning Engineers within the Market Intelligence division. Market Intelligence provides financial and industry data, research, news, and analytics to investment professionals, government agencies, corporations, and universities worldwide. We integrate news, comprehensive market and sector-specific data, and analytics into a variety of tools to help clients track performance, generate alpha, identify investment ideas, understand competitive and industry dynamics, perform valuations, and assess credit risk.
The Impact: We are looking for candidates who are passionate about architecting, building, scaling, and deploying machine learning models and their associated software technology components that provide timely and essential intelligence to our customers across the globe.
What's in it for you: We provide a highly inclusive work environment wherein you can bring your whole self to work and help us achieve our mission of being one of the leading providers of the highest-quality risk evaluations and analytical information to the world's financial markets. As an integral part of our team, you will work on a cutting-edge, state-of-the-art technology stack. In this role, you will work on multiple data science projects in collaboration with internal and external project owners on the product, commercial, and data teams. You will be responsible for supporting machine learning engineers, creating data pipelines for modeling, scaling models, and developing APIs to help move machine learning models into production. You will collaborate with data scientists and production-oriented software engineers.
Responsibilities:
- Build machine learning lifecycle management, including data collection, normalization, and standardization, as part of data pipeline construction
- Architect technology components and build applications and interfaces for customers' consumption
- Experiment with, develop, and productionize high-quality machine learning services and platforms that deliver significant technology and business impact
- Develop hosting platforms for machine learning models
- Create pipelines to query, retrieve, and update data so that existing applications stay current
- Supervise the scaling and management of the machine learning modeling ecosystem
- Work alongside data scientists and product owners to improve aspects of their lines of business through machine learning
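The normalization and standardization steps named in the responsibilities above can be sketched in plain Python. This is a hedged illustration only; a production pipeline for this role would more likely use PySpark or pandas, and the function names here are mine, not from the posting:

```python
from statistics import mean, stdev

def standardize(values):
    """Z-score standardization: (x - mean) / sample stdev."""
    m, s = mean(values), stdev(values)
    return [(x - m) / s for x in values]

def min_max_normalize(values):
    """Rescale values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

# Illustrative feature column passing through a pipeline step.
raw = [10.0, 20.0, 30.0, 40.0]
scaled = min_max_normalize(raw)   # [0.0, 0.333..., 0.666..., 1.0]
zscores = standardize(raw)        # centered on 0
```

In a PySpark pipeline the same idea would typically be expressed with `MinMaxScaler` or `StandardScaler` from `pyspark.ml.feature` rather than hand-rolled functions.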
What We're Looking For:
- Extensive problem-solving ability in designing complex data and cloud architectures
- Strong hands-on experience with Spark, AWS, big data (Hadoop), Python, and PySpark
- Possess excellent verbal & written communication skills
- Expertise in application, data, and infrastructure architecture disciplines
- Advanced knowledge of architecture and design across all systems
- Proficiency in machine learning programming languages and frameworks, including Python, PySpark, or Scala
- 5+ years of experience with big data technologies such as Hadoop, Spark, and SparkML
- Able to understand various data structures and common methods in data transformation
- Knowledge of industry-wide technology trends and best practices
- Ability to work in large, collaborative teams to achieve organizational goals
- Passionate about building an innovative culture
- Familiarity with MLOps and ModelOps
- Experience with Docker, Kubernetes, AWS, and Python
- Familiarity with, or a clear interest in learning about, financial markets
Data Transformation Specialist
Posted today
Job Viewed
Job Description
About the Role
We are seeking a highly skilled data transformation specialist to support our organization's data integration and business intelligence initiatives. The ideal candidate will be responsible for designing, developing, and maintaining data transformation processes, as well as data warehouse design and modeling.
Responsibilities:
- Design and develop data transformation processes to support data integration and business intelligence initiatives.
- Utilize SQL to write complex queries and stored procedures for data extraction and transformation.
- Implement and manage data transformation processes using ETL tools such as SSIS.
- Design and model data warehouses to support reporting and analytics needs.
- Ensure data accuracy, quality, and integrity through effective testing and validation procedures.
- Collaborate with stakeholders to understand data requirements and deliver solutions that meet their needs.
- Monitor and troubleshoot data transformation processes to ensure optimal performance and resolve any issues promptly.
- Document data transformation processes, workflows, and data mappings to ensure clarity and maintainability.
- Stay current with industry trends and best practices in data transformation, data integration, and data warehousing.
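The transform-and-validate loop described in the responsibilities above can be sketched with Python's built-in sqlite3. The table and column names are invented for illustration; in this role the same pattern would target SQL Server via SSIS or Azure Data Factory:

```python
import sqlite3

# Hypothetical staging -> warehouse transform; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (id INTEGER, amount TEXT, region TEXT);
    INSERT INTO staging_orders VALUES (1, '100.50', 'north'),
                                      (2, '200.00', 'south'),
                                      (3, NULL,     'north');
    CREATE TABLE fact_orders (id INTEGER PRIMARY KEY, amount REAL, region TEXT);
""")

# Transform: cast text amounts to REAL, default NULLs to 0, uppercase region.
conn.execute("""
    INSERT INTO fact_orders
    SELECT id, CAST(COALESCE(amount, '0') AS REAL), UPPER(region)
    FROM staging_orders
""")

# Validation: row counts must match and no NULL amounts may survive.
src = conn.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]
dst = conn.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
nulls = conn.execute(
    "SELECT COUNT(*) FROM fact_orders WHERE amount IS NULL").fetchone()[0]
assert src == dst and nulls == 0
```

The validation queries at the end mirror the "testing and validation procedures" bullet: counts and null checks run as part of the load, not as an afterthought.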
Requirements:
- 4+ years of experience as a data transformation specialist or in a similar role.
- Proficiency in SQL for writing complex queries and stored procedures.
- Experience with ETL tools such as SSIS for developing and managing data transformation processes.
- Knowledge of Azure Data Factory and its application in data transformation processes.
- Experience in data warehouse design and modeling.
- Knowledge of Microsoft's Azure cloud suite.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
- Strong attention to detail and commitment to data quality.
- Bachelor's degree in Computer Science, Information Technology, or a related field is preferred.
Data Transformation Specialist
Posted today
Job Viewed
Job Description
Hi folks,
Please check the JD below, share your updated resume to my email, and ping me on WhatsApp ( ) along with your resume.
Databricks Engineer
Location: REMOTE
Duration: 12 months with extensions
Openings: 3
REQUIRED SKILLS AND EXPERIENCE
- 3–5 years of experience in data engineering roles
- Strong hands-on experience with Databricks for data processing and pipeline development.
- Proficiency in SQL for data querying, transformation, and troubleshooting.
- Solid programming skills in Python for data manipulation and automation.
- Proven experience working with pharmaceutical or life sciences data, including familiarity with industry-specific data structures and compliance considerations.
JOB DESCRIPTION
We are seeking 3 skilled and motivated Databricks Data Engineers with 3–5 years of experience to support data engineering initiatives in the pharmaceutical domain. The ideal candidate will have hands-on expertise in Databricks, SQL, and Python, and a strong understanding of pharma/life sciences data. This role involves building and optimizing data pipelines, transforming complex datasets, and enabling scalable data solutions that support analytics and business intelligence efforts. The candidate should be comfortable working in a fast-paced environment and collaborating with cross-functional teams to ensure data quality, accessibility, and performance.
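The dataset cleaning this role describes can be sketched with plain Python dicts standing in for Databricks/PySpark DataFrames. This is an illustration under assumptions: the record fields are invented, and a real Databricks pipeline would express the same steps as DataFrame transformations:

```python
# Bronze -> silver cleaning step: trim strings, cast doses, drop
# incomplete records, deduplicate. Field names are illustrative only.
bronze = [
    {"patient_id": "P001", "dose_mg": "50", "site": " site-a "},
    {"patient_id": "P001", "dose_mg": "50", "site": " site-a "},  # duplicate
    {"patient_id": "P002", "dose_mg": None, "site": "site-b"},    # incomplete
    {"patient_id": "P003", "dose_mg": "75", "site": "site-a"},
]

def to_silver(rows):
    """Return cleaned, deduplicated records."""
    seen, silver = set(), []
    for r in rows:
        if r["dose_mg"] is None:
            continue  # incomplete; a real pipeline would quarantine it
        key = (r["patient_id"], r["dose_mg"])
        if key in seen:
            continue  # drop exact logical duplicates
        seen.add(key)
        silver.append({"patient_id": r["patient_id"],
                       "dose_mg": float(r["dose_mg"]),
                       "site": r["site"].strip()})
    return silver
```

In Databricks the equivalent would typically be `dropDuplicates`, `filter`, and `withColumn` calls on a Spark DataFrame, often writing the result to a silver Delta table.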
Data Transformation Specialist
Posted today
Job Viewed
Job Description
About Position:
We are seeking a Data Engineer with hands-on experience in Python, PySpark, Azure Databricks, ADF, etc.
- Role: Data Engineer
- Location: All Persistent Locations
- Experience: 4 to 6 years
- Job Type: Full Time Employment
What You'll Do:
- Design and implement client requirements
- Build data pipelines
- Perform transformations using PySpark in Databricks
- Create automated workflows using triggers and scheduled jobs in Airflow
- Design and develop Airflow DAGs to orchestrate data processing jobs
- Design and develop email alerting to ensure timely notification of job failures or critical system issues
- Develop and implement code for data transformation logic
- Interact directly with customers to understand and gather requirements
- Provide technical input to customers for analysis and design work
- Support customer queries
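The failure-alerting responsibility above can be sketched as a small wrapper. The `send_alert` callable is a hypothetical stand-in for whatever channel is used; Airflow offers the same idea natively via a task's `on_failure_callback` or `email_on_failure`:

```python
def notify_on_failure(job_name, run_job, send_alert):
    """Run a job callable; on failure, emit an alert and re-raise.

    send_alert(subject, body) abstracts the delivery channel (SMTP,
    Airflow callback, chat webhook, ...). Names here are illustrative.
    """
    try:
        return run_job()
    except Exception as exc:
        send_alert(f"[ALERT] job '{job_name}' failed", repr(exc))
        raise  # re-raise so the orchestrator still marks the run failed

# Usage: collect alerts in a list instead of sending real email.
alerts = []

def failing_job():
    raise RuntimeError("upstream table missing")

try:
    notify_on_failure("daily_load", failing_job,
                      lambda subject, body: alerts.append((subject, body)))
except RuntimeError:
    pass  # the orchestrator would handle the retry
```

Re-raising after alerting is the important design choice: the scheduler still sees the failure, so retries and SLA tracking keep working.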
Expertise You'll Bring:
- Programming Languages: Python, SQL
- Big Data Frameworks: Apache Spark / PySpark, Delta Lake
- Workflow Orchestration: Apache Airflow, Azure Data Factory (ADF)
- Cloud Platforms: Azure (Data Lake Gen2, Blob Storage, ADF)
- Databricks Ecosystem: Databricks Notebooks, Unity Catalog, Databricks Jobs API
- Data Storage & Databases: ADLS Gen2
- Data Governance & Security: Unity Catalog
- CI/CD & DevOps: Git, Azure DevOps
- Data Formats: Parquet, JSON, CSV
Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly growth opportunities and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Values-Driven, People-Centric & Inclusive Work Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
- We support hybrid work and flexible hours to fit diverse lifestyles.
- Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities.
- If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.
Let’s unleash your full potential at Persistent - persistent.com/careers.
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
Data Transformation Engineer
Posted today
Job Viewed
Job Description
Position: Database Migration Engineer (MongoDB)
Experience: 6+
Location: Remote (India)
We are looking for a highly skilled Database Migration Engineer to lead the transformation of large-scale SQL environments. This role involves splitting monolithic databases, managing release pipelines, and overseeing data migration, including moving data from SQL Server to MongoDB.
Key Responsibilities:
- Split a large MS SQL database into multiple smaller databases while maintaining data integrity
- Develop and manage SQL scripting and versioning processes for releases
- Migrate data from East DB to West DB and decommission the old instance
- Lead SQL-to-MongoDB migration efforts in collaboration with development teams
- Optimize performance, scalability, and security of database systems
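The SQL-to-MongoDB step listed above usually hinges on reshaping normalized parent/child rows into embedded documents. A hedged sketch in plain Python follows; the table and field names are illustrative, not from the posting:

```python
# Relational rows as they might come back from SQL Server queries.
orders = [
    {"order_id": 1, "customer": "Acme"},
    {"order_id": 2, "customer": "Globex"},
]
order_lines = [
    {"order_id": 1, "sku": "A-100", "qty": 2},
    {"order_id": 1, "sku": "B-200", "qty": 1},
    {"order_id": 2, "sku": "A-100", "qty": 5},
]

def to_documents(parents, children, key):
    """Embed child rows as an array inside each parent document."""
    by_key = {}
    for c in children:
        # Drop the join key from the embedded copy; it becomes implicit.
        by_key.setdefault(c[key], []).append(
            {k: v for k, v in c.items() if k != key})
    return [dict(p, lines=by_key.get(p[key], [])) for p in parents]

docs = to_documents(orders, order_lines, "order_id")
# Each doc is now insert-ready for, e.g., pymongo's collection.insert_many(docs)
```

Choosing what to embed versus what to keep as separate collections is the core modeling decision in such a migration; this sketch shows only the embed case.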
Required Skills:
- Deep expertise in MS SQL Server and T-SQL
- Strong knowledge of data modeling and relational database design
- Ability to write and manage complex SQL scripts for querying and transformation
- Experience with CI/CD pipelines for databases using Git or similar tools
- Familiarity with MongoDB and NoSQL migration strategies
- Ability to create detailed documentation, migration plans, and risk assessments
- Strong collaboration skills across technical and business teams
Soft Skills:
- Strong analytical and troubleshooting skills
- Clear and structured communication style
- Experience working with Dev teams, DBAs, architects, and BAs
- Leadership qualities and the ability to mentor junior team members
Data Transformation Strategist
Posted today
Job Viewed
Job Description
Data Migration Strategy Expert
We’re looking for a Data Migration Strategy Expert to join our transformation initiative and help shape the future.
What You’ll Do
- Support cross-release data migration execution teams on all aspects of the migration process, including templates, trackers, data verification, testing, and tooling.
- Drive alignment and consistency across migration activities to ensure adherence, governance, and best practices.
- Collaborate closely with PMO, PQM, and Data Migration Execution Teams to coordinate and execute the overall data migration strategy.
- Manage and enhance data migration tools and documentation, ensuring accuracy, transparency, and traceability throughout the migration lifecycle.
- Contribute to data verification testing and ensure data integrity post-migration.
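Post-migration data verification, mentioned in the last bullet above, often starts with row counts plus an order-insensitive checksum so source and target can be compared without sorting. A minimal sketch, assuming dict-shaped rows; the function and field names are mine:

```python
import hashlib

def table_checksum(rows):
    """Order-insensitive checksum: XOR of per-row digests.

    Good enough for a verification sketch; note that XOR lets pairs of
    identical rows cancel out, so production checks would also compare
    counts and key coverage.
    """
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(sorted(row.items())).encode()).digest()
        acc ^= int.from_bytes(digest[:8], "big")
    return acc

# Target loaded in a different order than the source was extracted.
source = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}]
target = [{"id": 2, "val": "b"}, {"id": 1, "val": "a"}]

assert len(source) == len(target)
assert table_checksum(source) == table_checksum(target)
```

In an SAP-scale program the same comparison would run per table per release, with the results recorded in the migration trackers the role maintains.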
What we need from you:
- Proven experience with data migration execution in large-scale SAP programs.
- Strong working knowledge of HP ALM, JIRA, and SAP tools such as Change Management (ChaRM), ITSM, Document Management, and Testing tools.
- Deep understanding of data migration methodologies, quality assurance, and compliance standards.
Director, Clinical Data Transformation
Posted today
Job Viewed
Job Description
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.
The role
Lilly is reimagining how clinical data is designed, transformed, and delivered—from our digital trial foundation to modern statistical compute. We're seeking a Director‑level leader who brings deep clinical data and technology expertise and a passion for industry standards. You will inform enterprise strategy and project roadmaps with best practices from across the industry and actively help shape the standards themselves (e.g., USDM/Digital Data Flow (DDF), CDISC SDTM/ADaM/Define‑XML, ICH M11, HL7/FHIR).
Role Overview:
The Director, Clinical Data Transformation & Delivery will drive enterprise-wide transformation of Lilly's clinical data strategy, partnering with senior leaders to align global functions and deliver measurable business outcomes. This leader will champion talent development, diversity, and inclusion, and serve as a change agent for digital innovation. Representing Lilly externally, the Director will shape industry standards and best practices, ensuring Lilly remains at the forefront of clinical data excellence.
Roadmap influence & standards assurance
- Continuously scan the landscape (regulators, standards bodies, peer sponsors, vendors) and translate best practices into clear recommendations that shape Lilly's clinical tech strategy and roadmaps.
- Define and socialize standards‑first reference patterns (protocol digitization, data flow, metadata‑driven transformations, quality-by-design) for use by delivery teams and partners.
- Establish advisory guardrails (conformance principles, decision frameworks, interoperability guidance) to keep programs aligned with enterprise architecture and inspection readiness.
- Drive enterprise-wide transformation in clinical data strategy, ensuring alignment across global functions and geographies.
Shape industry standards
- Represent Lilly in relevant working groups/communities (e.g., CDISC, TransCelerate DDF/USDM, HL7/FHIR, DIA, SCDM) and co‑create standards: contribute use cases, position papers, pilots, implementation guides, and public comments.
- Bring back clear adoption guidance (what's ready now vs. emerging), identify ways to de-risk adoption, and propose pragmatic transition plans for programs and platforms.
- Partner with senior leaders to shape and execute Lilly's vision for digital clinical development.
- Represent Lilly as a thought leader in global industry forums, shaping the future of clinical data standards and practices.
Technical & regulatory advisory
- Advise on metadata‑driven SDTM/ADaM pipelines, document/analysis automation, and responsible AI patterns that improve quality, transparency, and cycle time.
- Recommend inspection‑ready controls (GxP, 21 CFR Part 11, Computer Software Assurance) and data protection practices appropriate for clinical data and documents.
- Develop, support and implement talent strategies to build a robust pipeline of future leaders in clinical data and technology.
- Translate technical innovation into measurable improvements in cycle time, data quality, regulatory compliance, and patient outcomes.
- Establish KPIs and metrics to track impact of transformation initiatives.
- Serve as a change agent, driving adoption of new standards, technologies, and ways of working across Lilly.
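The metadata-driven SDTM/ADaM pipelines advised on above can be sketched as a mapping spec (the metadata) that drives renames and constant derivations, so transformation code never hardcodes study-specific names. The SDTM-style variable names below are illustrative only, not a conformant implementation:

```python
# Mapping spec acting as metadata: raw field -> SDTM-style variable,
# plus constants derived for the target domain. Names are illustrative.
spec = {
    "rename": {"subj": "USUBJID", "weight_kg": "VSORRES"},
    "constants": {"DOMAIN": "VS", "VSTESTCD": "WEIGHT"},
}

def apply_spec(record, spec):
    """Apply a mapping spec to one raw record."""
    out = {spec["rename"].get(k, k): v for k, v in record.items()}
    out.update(spec["constants"])
    return out

raw = {"subj": "CDISC01-001", "weight_kg": 72.5}
mapped = apply_spec(raw, spec)
# {'USUBJID': 'CDISC01-001', 'VSORRES': 72.5, 'DOMAIN': 'VS', 'VSTESTCD': 'WEIGHT'}
```

The payoff of the metadata-driven pattern is that the spec, not the code, is what gets versioned, reviewed, and reused across studies, which is exactly what makes lineage and inspection readiness tractable.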
Hyderabad site delivery lead
- Serve as the Hyderabad clinical tech delivery lead, shaping ways of working with scaled delivery partners and suppliers; influence quality, budget discipline, and timelines through standards and guidance.
- Mentor and upskill engineers, analysts, and partner teams on standards‑first design, interoperability, and responsible AI.
What you'll bring
Must‑have qualifications
- 10+ years across clinical data/biometrics technology or standards—spanning data collection, curation, transformation, and analysis.
- Deep, hands‑on knowledge of CDISC (SDTM, ADaM, Define‑XML) and working familiarity with USDM/DDF and ICH M11.
- Demonstrated success advising strategy and shaping roadmaps in a global, regulated environment—translating standards into actionable architecture and guardrails.
- Working understanding of cloud data platforms (e.g., AWS), APIs/integration, and metadata management; strong grasp of GxP/21 CFR Part 11/CSA expectations.
- Exceptional influence without authority, stakeholder engagement, and written/oral communication skills across technical and non‑technical audiences.
Preferred qualifications
- Experience participating in or leading workstreams with CDISC, TransCelerate DDF/USDM, HL7/FHIR, DIA, SCDM, or similar communities.
- Familiarity with Veeva Vault (CDMS/CTMS/eTMF), Medidata Rave/CDS, eCOA, and clinical labs data standards/terminology.
- Background with metadata‑driven SDTM/ADaM automation, code/lineage controls, and validation approaches.
- Practical insight into AI/ML and GenAI for clinical data quality, transformation, and content automation—implemented with appropriate guardrails.
- Prior advisory engagement with scaled delivery centers or suppliers in India.
Work model & travel
- Hybrid work from Hyderabad, India, collaborating across global time zones.
- Travel ~10–20% (domestic + international), based on business needs.
Lilly is dedicated to helping individuals with disabilities actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form for further assistance. Please note this form is for individuals requesting an accommodation as part of the application process; any other correspondence will not receive a response.
Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.
#WeAreLilly