2,667 Cloud Data jobs in India
Informatica Cloud Data Management Administrator
Posted today
Job Description
We are seeking an experienced Informatica Cloud Data Management Administrator to manage, maintain, and support Informatica Cloud environments. The candidate will ensure the smooth operation of cloud data integration processes, monitor system performance, troubleshoot issues, and collaborate with cross-functional teams to implement data management solutions.
Key Responsibilities:
- Administer and monitor Informatica Intelligent Cloud Services (IICS) environments to ensure optimal performance and availability.
- Manage user roles, permissions, and security configurations within the Informatica Cloud platform.
- Schedule, monitor, and troubleshoot data integration jobs and workflows (see the monitoring sketch after this list).
- Collaborate with developers and data architects to support the design and deployment of cloud data integration solutions.
- Maintain and optimize data mappings, transformations, and workflows in Informatica Cloud.
- Implement and enforce best practices for cloud data management, data quality, and metadata management.
- Handle incident management and provide timely resolution for production issues.
- Perform platform upgrades, patches, and configuration changes as needed.
- Maintain documentation for system configurations, processes, and procedures.
- Provide support and training to end-users and stakeholders.
- Stay current with Informatica Cloud platform updates, features, and industry trends.
Requirements:
- 4 to 6 years of experience working with Informatica Cloud Data Management or Informatica Intelligent Cloud Services (IICS) administration.
- Strong knowledge of cloud data integration concepts, ETL/ELT processes, and best practices.
- Experience with Informatica Cloud user and security management.
- Ability to monitor and troubleshoot data workflows and performance issues.
- Familiarity with scheduling tools, job monitoring, and alert management.
- Experience working with REST APIs, cloud connectors, and data sources such as databases, SaaS applications, and cloud storage.
- Good understanding of data quality and metadata management principles.
- Strong problem-solving and communication skills.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
- Knowledge of scripting or automation tools is an advantage.
- Informatica Cloud certifications.
- Experience with hybrid cloud environments (on-premise + cloud).
- Knowledge of data governance and compliance requirements.
- Familiarity with Agile development methodologies.
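Much of the routine IICS administration described above (job monitoring, activity review, alerting) can be scripted against Informatica's REST API. Below is a minimal sketch in Python that logs in and polls the activity log; the regional login URL, the placeholder credentials, and the state-code mapping in the comments are assumptions that vary by org, so verify the endpoints and fields against your Informatica documentation before relying on this.

```python
# Minimal sketch: poll recent IICS job runs via the Informatica Cloud REST API v2.
# The regional login URL, credentials, and state-code mapping are assumptions;
# confirm endpoints and response fields against your org's documentation.
import requests

LOGIN_URL = "https://dm-us.informaticacloud.com/ma/api/v2/user/login"  # region-specific

def get_session(username: str, password: str):
    """Log in and return the org-specific server URL and session id."""
    resp = requests.post(
        LOGIN_URL,
        json={"@type": "login", "username": username, "password": password},
        timeout=30,
    )
    resp.raise_for_status()
    body = resp.json()
    return body["serverUrl"], body["icSessionId"]

def recent_activity(server_url: str, session_id: str, row_limit: int = 20):
    """Fetch the most recent entries from the activity log."""
    resp = requests.get(
        f"{server_url}/api/v2/activity/activityLog",
        headers={"icSessionId": session_id},
        params={"rowLimit": row_limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    server, sid = get_session("admin@example.com", "secret")  # placeholder credentials
    for run in recent_activity(server, sid):
        # state 1 = success, 2 = warning, 3 = failure (assumed v2 convention)
        print(run.get("objectName"), run.get("state"), run.get("startTime"))
```

A scheduled version of a script like this, wired into the team's alerting channel, would cover the job-monitoring and alert-management duties listed above.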
Skills Required
AWS, Azure, Google Cloud, REST APIs, ETL, ELT
Cloud Data Engineer

Posted 3 days ago
Job Description
**Req number:** R5934
**Employment type:** Full time
**Worksite flexibility:** Remote
**Who we are**
CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.
**Job Summary**
We are seeking a motivated Cloud Data Engineer who has experience building data products using Databricks and related technologies. This is a full-time, remote position.
**Job Description**
**What You'll Do**
+ Analyze and understand existing data warehouse implementations to support migration and consolidation efforts.
+ Reverse-engineer legacy stored procedures (PL/SQL, SQL) and translate business logic into scalable Spark SQL code within Databricks notebooks (see the sketch after this list).
+ Design and develop data lake solutions on AWS using S3 and Delta Lake architecture, leveraging Databricks for processing and transformation.
+ Build and maintain robust data pipelines using ETL tools with ingestion into S3 and processing in Databricks.
+ Collaborate with data architects to implement ingestion and transformation frameworks aligned with enterprise standards.
+ Evaluate and optimize data models (star, snowflake, and flattened schemas) for performance and scalability in the new platform.
+ Document ETL processes, data flows, and transformation logic to ensure transparency and maintainability.
+ Perform foundational data administration tasks including job scheduling, error troubleshooting, performance tuning, and backup coordination.
+ Work closely with cross-functional teams to ensure smooth transition and integration of data sources into the unified platform.
+ Participate in Agile ceremonies and contribute to sprint planning, retrospectives, and backlog grooming.
+ Triage, debug, and fix technical issues related to data lakes.
+ Maintain and manage code repositories such as Git.
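To make the stored-procedure translation concrete, here is a hypothetical sketch: a simplified fragment of legacy PL/SQL aggregation logic re-expressed as Spark SQL in a Databricks notebook, with the result persisted as a Delta table on S3. The table names, columns, and bucket path are invented for illustration.

```python
# Hypothetical PL/SQL-to-Spark-SQL translation in a Databricks notebook,
# where `spark` is the notebook's pre-defined SparkSession. Table names,
# columns, and the S3 path are illustrative placeholders.
#
# Legacy PL/SQL (simplified):
#   INSERT INTO sales_summary
#   SELECT region, SUM(amount)
#   FROM sales
#   WHERE sale_date >= ADD_MONTHS(SYSDATE, -1)
#   GROUP BY region;

monthly_summary = spark.sql("""
    SELECT region,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM sales
    WHERE sale_date >= add_months(current_date(), -1)
    GROUP BY region
""")

# Persist the result as a Delta table in the S3-backed data lake.
(monthly_summary.write
    .format("delta")
    .mode("overwrite")
    .save("s3://example-datalake/curated/sales_summary"))
```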
**What You'll Need**
+ 5+ years of experience working with **Databricks**, including Spark SQL and Delta Lake implementations.
+ 3+ years of experience in designing and implementing data lake architectures on Databricks.
+ Strong SQL and PL/SQL skills with the ability to interpret and refactor legacy stored procedures.
+ Hands-on experience with data modeling and warehouse design principles.
+ Proficiency in at least one programming language (Python, Scala, Java).
+ Bachelor's degree in Computer Science, Information Technology, Data Engineering, or related field.
+ Experience working in Agile environments and contributing to iterative development cycles.
+ Databricks cloud certification is a big plus.
+ Exposure to enterprise data governance and metadata management practices.
**Physical Demands**
+ This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc.
+ Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.
**Reasonable accommodation statement**
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to or (888) 824 - 8111.
Cloud Data Engineer
Posted 4 days ago
Job Description
Key Responsibilities:
• Design, develop, and maintain cloud-based solutions on Azure or AWS.
• Implement and manage real-time data streaming and messaging systems using Kafka (see the sketch after this list).
• Develop scalable applications and services using Java and Python.
• Deploy, manage, and monitor containerized applications using Kubernetes.
• Build and optimize big data processing pipelines using Databricks.
• Manage and maintain databases, including SQL Server and Snowflake, and write complex SQL scripts.
• Work with Unix/Linux commands to manage and monitor system operations.
• Collaborate with cross-functional teams to ensure seamless integration of cloud-based solutions.
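As a concrete illustration of the streaming item above, here is a minimal Kafka round trip using the kafka-python client; the broker address and topic are placeholders, and a production setup would add schema management, retries, and TLS/SASL security configuration.

```python
# Minimal Kafka produce/consume round trip (pip install kafka-python).
# Broker list and topic name are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["localhost:9092"]  # placeholder broker list
TOPIC = "orders"              # placeholder topic

# Produce one JSON-encoded event.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "amount": 99.5})
producer.flush()

# Consume from the beginning of the topic; stop after 5s of inactivity.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.value)
```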
Key Skills:
• Expertise in Azure or AWS cloud platforms.
• Proficiency in Kafka, Java, Python, and Kubernetes.
• Hands-on experience with Databricks for big data processing.
• Strong database management skills with SQL Server, Snowflake, and advanced SQL scripting.
• Solid understanding of Unix/Linux commands.
General Requirements for Off-Shore Roles:
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
• 5+ years of experience in cloud and data engineering roles.
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.
• Proven ability to work in a fast-paced, agile environment.
Cloud Data Architect
Posted 4 days ago
Job Description
Job Overview
We are seeking a highly skilled and motivated Data Architect with 5–7 years of experience in data architecture, modeling, and enterprise data solutions. The ideal candidate will be familiar with Microsoft Fabric or equivalent modern data platforms, and possess hands-on expertise in designing scalable data models and systems. Prior consulting experience and a strong understanding of data integration, governance, and analytics design are highly desirable.
Key Responsibilities
- Design, implement, and maintain enterprise data architectures for analytics and operational use.
- Develop and optimize conceptual, logical, and physical data models that support business reporting and analytics needs (see the star-schema sketch after this list).
- Lead end-to-end data solution designs using Microsoft Fabric, Azure Synapse, or similar platforms.
- Translate business and functional requirements into technical data architecture specifications.
- Collaborate with stakeholders, data engineers, and business teams to design scalable, secure, and high-performance data environments.
- Provide data integration strategies for structured data across multiple systems.
- Ensure adherence to data governance, data quality, and security standards in all architecture designs.
- Participate in architecture reviews, technical design sessions, and consulting engagements with internal or external clients.
- Support modernization efforts by guiding migration to cloud-native data platforms.
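To ground the modeling responsibilities above, here is a tiny self-contained star-schema sketch (one fact table, two dimensions) with a typical reporting query. The tables and columns are invented, and SQLite stands in for whichever warehouse platform (Fabric, Synapse, Snowflake, Databricks SQL) an engagement actually targets; the DDL pattern is the same.

```python
# Miniature star schema: one fact table joined to two dimensions.
# SQLite is used only so the sketch runs anywhere; the same pattern
# applies to Fabric, Synapse, Snowflake, or Databricks SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (
        date_key   INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240115
        full_date  TEXT,
        month_name TEXT
    );
    CREATE TABLE dim_customer (
        customer_key  INTEGER PRIMARY KEY,
        customer_name TEXT,
        segment       TEXT
    );
    CREATE TABLE fact_sales (
        date_key     INTEGER REFERENCES dim_date(date_key),
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount       REAL
    );
""")

# Illustrative rows.
conn.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15', 'January')")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Acme Ltd', 'Enterprise')")
conn.execute("INSERT INTO fact_sales VALUES (20240115, 1, 1250.0)")

# Typical reporting query: slice the fact table by dimension attributes.
for row in conn.execute("""
    SELECT d.month_name, c.segment, SUM(f.amount) AS total_sales
    FROM fact_sales f
    JOIN dim_date d     ON f.date_key = d.date_key
    JOIN dim_customer c ON f.customer_key = c.customer_key
    GROUP BY d.month_name, c.segment
"""):
    print(row)  # ('January', 'Enterprise', 1250.0)
```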
Required Qualifications
- 5–7 years in data architecture, data engineering, or analytics solution delivery.
- Hands-on experience with Microsoft Fabric, Azure Data Services (Data Lake, Synapse, Data Factory), or equivalent platforms such as Snowflake or Databricks.
- Strong understanding of data modeling (dimensional and normalized), metadata management, and data lineage.
- Proficiency in Python or R, SQL, DAX, Power BI, and data pipeline tools.
- Solid understanding of modern data warehouse/lakehouse architectures and enterprise data integration patterns.
- Ability to manage stakeholder expectations, translate business needs into technical requirements, and deliver value through data solutions.
- Strong communication and documentation skills.
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
Preferred Skills
- Experience with tools such as Microsoft Fabric, Azure Data Factory, Databricks, or similar platforms.
- Knowledge of Python or R for data manipulation or ML integration.
- Familiarity with DevOps or CI/CD for BI deployments.
- Exposure to data governance tools (e.g., Purview) and practices.
- Experience working with Agile/Scrum methodologies.