254 GCP Engineer jobs in India
System Administration
Posted 25 days ago
Job Description
Education
B.E./B.Tech/MCA in Computer Science
Experience
Must have 3-5 years of experience in Linux administration.
Mandatory Skills/Knowledge
Red Hat:
1. Should have good experience in Linux administration (OS installation, virtualization, performance monitoring/optimization, kernel tuning, LVM management, file-system management, security management)
2. Should have very good experience in shell scripting and configuration management (Ansible); a minimal automation sketch follows this list
3. Must have experience installing and configuring Pacemaker-based high-availability clusters
4. Must have experience troubleshooting common cluster issues
5. Should have worked with shared storage and multipathing
6. Should have experience in repository creation and management
7. Should have experience in OS upgrades and patch management
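To ground the automation emphasis in item 2, here is a minimal, hypothetical sketch of the kind of routine health check such a role automates. The role itself calls for shell and Ansible equivalents; this Python version, with its invented mount points and alert threshold, is purely an illustration.

```python
#!/usr/bin/env python3
"""Hypothetical file-system usage check, the kind of routine task item 2
asks candidates to automate. Mount points and threshold are invented."""
import shutil

MOUNTS = ["/", "/var", "/home"]  # assumed mount points
THRESHOLD = 0.85                 # alert when a file system is over 85% full

def check_filesystems() -> None:
    for mount in MOUNTS:
        try:
            usage = shutil.disk_usage(mount)
        except FileNotFoundError:
            print(f"[skip]  {mount}: not mounted")
            continue
        used = usage.used / usage.total
        state = "ALERT" if used >= THRESHOLD else "ok"
        print(f"[{state}] {mount}: {used:.0%} used")

if __name__ == "__main__":
    check_filesystems()
```

In practice this kind of check would be scheduled via cron or wrapped in an Ansible playbook, per the skills listed above.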
Preferred Skills/Knowledge
1. Should have worked on automating tasks using shell scripts and Ansible
2. Basic understanding of public clouds (AWS/Azure/GCP/OpenStack)
3. Fundamental understanding of Ceph Storage Solution
Desired Certifications
1. RHEL / Red Hat Certified Specialist
Soft Skills
1. Must have good troubleshooting skills
2. Must be ready to learn new technologies and acquire new skills
3. Must be a team player
4. Should be proficient in spoken and written English
System Administration
Posted 23 days ago
Job Description
Company Overview
Repplen Projects Private Limited is a leading entity in the construction industry, headquartered in Erode, Tamil Nadu. We pride ourselves on delivering exceptional construction services characterized by integrity, trust, and innovation. As we handle expansive infrastructure projects and commercial ventures, we are committed to precision and excellence in every endeavor, building structures that defy time. For more information, visit the Repplen Projects Private Limited website.
Job Overview
We are seeking a skilled System Administrator to join our team in Erode on a full-time basis. This mid-level position requires a candidate with 4 to 6 years of relevant work experience. The role focuses on managing and optimizing our IT infrastructure, ensuring seamless network and server operations to support our construction projects effectively.
Qualifications and Skills
- Strong expertise in network configuration for efficient and reliable connectivity solutions. (Mandatory skill)
- Proficiency in managing LAN and WAN networks to ensure optimal performance and security. (Mandatory skill)
- Experience in configuring and maintaining firewalls to protect organizational data. (Mandatory skill)
- Solid experience in Windows Server Management for smooth operation and maintenance of server systems.
- Knowledge of cloud platforms such as AWS, Azure, or GCP for scalable and secure cloud-based services.
- Ability to set up and maintain IP CCTV systems for security and monitoring purposes.
- Strong analytical skills to troubleshoot and resolve technical issues effectively and efficiently.
- Excellent communication skills to collaborate with cross-functional teams and provide training as needed.
Roles and Responsibilities
- Manage and maintain all network hardware and software, ensuring optimal performance and stability.
- Configure, deploy, and troubleshoot routers, switches, and firewalls across the organization.
- Oversee server infrastructure, including Windows Server installations and cloud platforms.
- Implement and monitor network security measures to protect data from unauthorized access.
- Handle the setup, management, and operation of IP CCTV systems for organizational security.
- Collaborate closely with IT and operational teams to support construction project needs efficiently.
- Ensure high availability and reliability of network services, swiftly addressing any issues that arise.
- Regularly update systems and applications to keep up with the latest technology advancements.
GCP Infrastructure Engineer
Posted 1 day ago
Job Description
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
We are currently seeking a GCP Infrastructure Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).
Job Summary:
We are seeking a highly skilled GCP Infrastructure Engineer to design, build, and manage the cloud infrastructure that powers Generative AI (GenAI) applications at scale. In this role, you will leverage Google Cloud Platform (GCP) Vertex AI, IBM Watsonx, and containerization technologies such as Docker and Kubernetes (GKE) to deliver secure, scalable, and high-performance AI solutions. You will own the end-to-end infrastructure lifecycle - from design and provisioning to automation, monitoring, and optimization - while enabling data scientists and ML engineers to seamlessly deploy and operate GenAI workloads.
Key Responsibilities:
Cloud Infrastructure & Platform Engineering
+ Design, provision, and maintain scalable, secure, and cost-efficient infrastructure for GenAI applications on GCP.
+ Deploy and manage containerized workloads using Docker and Kubernetes (GKE); a minimal deployment sketch follows this list.
+ Configure and optimize Vertex AI and IBM Watsonx platforms for training, fine-tuning, and serving LLMs and other generative models.
+ Implement high-performance GPU/TPU clusters to support distributed training and large-scale inference.
+ Ensure business continuity through backup, disaster recovery, and multi-region deployments.
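As a concrete illustration of the containerized-workload bullet above, here is a minimal, hypothetical sketch using the official Kubernetes Python client to create a Deployment on a GKE cluster. The names, image, and replica count are invented for illustration, not taken from the posting.

```python
"""Hypothetical sketch: creating a Deployment on a GKE cluster with the
official Kubernetes Python client. Names, image, and replica count are
illustrative assumptions."""
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl is already authenticated to the cluster
apps = client.AppsV1Api()

labels = {"app": "genai-serving"}  # hypothetical app label
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="genai-serving"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="server",
                    # hypothetical Artifact Registry image
                    image="us-docker.pkg.dev/example-project/serving/llm-server:0.1",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```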
Automation & Reliability
+ Develop and maintain Infrastructure as Code (IaC) templates with Terraform or Cloud Deployment Manager.
+ Adopt GitOps practices (Flux) for infrastructure lifecycle management.
+ Build and optimize CI/CD pipelines for data pipelines, model workflows, and GenAI applications.
+ Apply SRE principles (SLIs, SLOs, SLAs) to guarantee platform reliability and uptime; an error-budget sketch follows this list.
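To make the SRE bullet concrete, here is a small, hypothetical calculation showing how an availability SLO translates into an error budget; the SLO target and request counts are invented.

```python
"""Hypothetical sketch: turning an availability SLI into an error budget,
per the SRE bullet above. All numbers are invented examples."""

SLO_TARGET = 0.999           # assumed 99.9% availability SLO
total_requests = 10_000_000  # example request count for the window
failed_requests = 4_200      # example failure count for the window

sli = 1 - failed_requests / total_requests           # measured availability
allowed_failures = (1 - SLO_TARGET) * total_requests  # the error budget
budget_consumed = failed_requests / allowed_failures

print(f"SLI: {sli:.4%} (target {SLO_TARGET:.1%})")
print(f"Error budget consumed: {budget_consumed:.0%}")
```

With these example numbers, 42% of the window's error budget is spent, which is the kind of signal an SRE uses to gate risky releases.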
Security, Governance & Compliance
+ Embed DevSecOps best practices across the infrastructure lifecycle, including policy-as-code, vulnerability scanning, and secrets management.
+ Enforce identity and access management (IAM), network segmentation, and data encryption in compliance with standards (HIPAA, SOX, GDPR, FedRAMP).
+ Collaborate with enterprise security and compliance teams to implement governance frameworks for GenAI platforms.
Monitoring, Observability & Cost Optimization
+ Implement observability stacks (Prometheus, Grafana, Cloud Monitoring, Datadog) for both infra health and ML-specific metrics (model drift, data anomalies); a minimal metric-export sketch follows this list.
+ Define KPIs to monitor system health, performance, and adoption across AI workloads.
+ Optimize cloud cost efficiency for GPU/TPU-intensive workloads using autoscaling, preemptible instances, and utilization monitoring.
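For the observability bullet above, here is a minimal, hypothetical sketch exposing a custom metric with the prometheus_client library. The metric name and the random stand-in for a real utilization reading are assumptions for illustration.

```python
"""Hypothetical sketch: exposing a custom GPU-utilisation gauge for
Prometheus to scrape. Metric name and values are illustrative."""
import random
import time

from prometheus_client import Gauge, start_http_server

gpu_util = Gauge("gpu_utilization_ratio", "GPU utilisation (0-1), illustrative")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        gpu_util.set(random.random())  # stand-in for a real NVML/DCGM reading
        time.sleep(15)
```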
Collaboration & Enablement
+ Partner with data scientists, ML engineers, and software teams to streamline GenAI application development and deployment.
+ Provide onboarding, documentation, and reusable templates to enable faster adoption of AI infrastructure.
+ Stay current with the latest advancements in GenAI, cloud-native infrastructure, and container orchestration.
Required Education
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Required Experience
+ **5+ years** of experience in cloud infrastructure engineering, **DevOps**, or platform engineering.
+ Experience with GenAI use cases (chatbots, content generation, code assistants, etc.).
+ Strong hands-on expertise with **Google Cloud Platform (GCP)**, especially **Vertex AI**.
+ Experience with **IBM Watsonx** for AI application deployment and management.
+ Proven skills in **Docker**, **Kubernetes (GKE)**, and container orchestration at scale.
+ Proficiency in **Python**, **Bash**, or other relevant scripting languages.
+ Strong understanding of cloud networking, IAM, and security best practices.
+ Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins) and IaC tools (Terraform, Pulumi, Ansible, Deployment Manager).
+ Familiarity with data pipelines and integration tools (Dataflow, Apache Beam, Pub/Sub, Kafka).
+ Excellent problem-solving, debugging, and communication skills.
Preferred Experience
+ Experience in MLOps practices for model deployment, monitoring, and retraining.
+ Exposure to multi-cloud or hybrid cloud environments (GCP, AWS, Azure, on-prem).
+ Hands-on experience with feature stores (Vertex AI Feature Store, Feast) and ML observability tools (EvidentlyAI, Fiddler).
+ Knowledge of distributed training frameworks (Horovod, DeepSpeed, PyTorch Distributed).
+ Contributions to open-source projects in infrastructure, MLOps, or GenAI.
+ Experience managing infrastructure in regulated industries.
Preferred Certifications:
+ Google Cloud Certified - Professional Cloud Architect
+ Google Cloud Certified - Machine Learning Engineer
+ Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD)
+ IBM Certified Watsonx Generative AI Engineer - Associate
+ IBM Certified Solution Architect - Cloud Pak for Data
+ Other relevant certifications in AI, Machine Learning, or Cloud-Native technologies.
**About NTT DATA**
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact-us form.
_NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us; this contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status._
GCP Cloud Engineer
Posted 1 day ago
Job Description
Google Cloud Platform DevOps Engineer
ParadigmIT is seeking a seasoned GCP DevOps Engineer to design, implement, and manage scalable, secure, and resilient infrastructure on Google Cloud Platform (GCP).
The ideal candidate will have deep expertise in DevOps, CI/CD, Infrastructure as Code (IaC), container orchestration, and cloud security.
Key Responsibilities
- Build scalable, secure, and highly available infrastructure using GCP services such as Compute Engine, Kubernetes Engine, Cloud Functions, BigQuery, and Cloud Storage (a Cloud Functions sketch follows this list)
- Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build
- Automate infrastructure provisioning using Terraform, Google Cloud Deployment Manager, or similar IaC tools
- Design and deploy AI/ML pipelines leveraging Google AI services, including model training, evaluation, deployment, and monitoring with platforms like Vertex AI
- Manage containerized applications using Docker and Kubernetes (GKE)
- Monitor system performance and ensure high availability using tools like Prometheus, Grafana, and Google Cloud Observability
- Implement and enforce cloud security best practices, including IAM, encryption, and secure coding
- Collaborate with cross-functional teams to streamline deployments and troubleshoot issues
- Optimize cloud costs and resource utilization
- Document infrastructure deployment processes and operational procedures
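As a small illustration of the Cloud Functions item above, here is a hypothetical HTTP-triggered function written with the Functions Framework for Python; the function name and response payload are invented.

```python
"""Hypothetical sketch: an HTTP-triggered Cloud Function using the
Functions Framework for Python. Name and payload are illustrative."""
import functions_framework

@functions_framework.http
def healthcheck(request):
    # `request` is a Flask Request; return a trivial status payload.
    return {"status": "ok", "path": request.path}

# Local test (assumes `pip install functions-framework`):
#   functions-framework --target=healthcheck --port=8080
# Deploy (assumes gcloud is configured for your project):
#   gcloud functions deploy healthcheck --gen2 --runtime=python312 \
#       --trigger-http --region=us-central1
```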
Required Skills & Qualifications
- Minimum 5 years of hands-on experience with GCP including compute, storage, networking, database, security, and AI services
- Skilled in Python, Go, or Bash; proficient in containerization (Kubernetes, Docker); adept at monitoring and troubleshooting across the stack
- Strong knowledge of cloud architecture is a big plus
- Experience architecting and deploying AI/ML solutions; deep understanding of the AI lifecycle, tools, and Google AI service offerings.
- Strong understanding of cloud networking, security, and DevOps methodologies
- Experience with monitoring and logging tools (Google Cloud Observability, Prometheus, Grafana)
- Familiarity with version control systems like Git
- Strong leadership, communication, and problem-solving skills, with a proactive and collaborative approach to continuous learning
Preferred Certifications
- Google Cloud Professional Cloud DevOps Engineer
- Google Cloud Professional Cloud Architect
GCP Data Engineer
Posted 2 days ago
Job Description
Job title: GCP Data Engineer
Years of experience: 4-15 years
Walk-in drive date: 25 October 2025 (Saturday)
Time: 9:00 AM to 1:00 PM
Drive locations:
- Hyderabad: TCS Synergy Park Phase 1, Premises No 2-56/1/36, Gachibowli, opposite IIIT Hyderabad Campus, Serilingampally, RR District, Hyderabad, Telangana
- Bangalore: Tata Consultancy Services, Think Campus, JRD Auditorium Cafeteria, Electronic City, Bangalore
- Chennai: TCS Siruseri GS-4-2F Building- 1/G1, SIPCOT IT Park Navalur, Siruseri, Tamil Nadu
Role:
- Hands-on development experience on Google Cloud Platform: Cloud Composer, BigQuery, Pub/Sub, Dataproc, Dataflow, CDAP, Bigtable, GCS
- Hands-on development experience in Airflow DAG creation (a minimal DAG sketch follows this list)
- Hands-on development experience in data migration pipeline creation on Pub/Sub with Dataproc and Dataflow
- Hands-on development experience in Cloud Function creation
- Hands-on development experience in shell scripting and the PySpark and Scala programming languages
- ETL job development using Spark and Python
- Python / Java / Scala programming
- Debugging/troubleshooting of Spark jobs
- Performance tuning experience for Hadoop/Spark jobs
- Good understanding of data warehousing and data modelling
- Hands-on development experience in BigQuery and performance tuning of BQ queries and BQ data load jobs
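For the Airflow DAG item above, here is a minimal, hypothetical DAG using the standard BashOperator; the DAG id, schedule, and commands are invented for illustration.

```python
"""Hypothetical minimal Airflow DAG (uses the Airflow 2.4+ `schedule`
argument). DAG id, schedule, and commands are illustrative assumptions."""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest",            # invented DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    load = BashOperator(task_id="load", bash_command="echo load")
    extract >> load                   # run extract, then load
```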
GCP Data Engineer
Posted 2 days ago
Job Description
Job Title: GCP Data Engineer
Experience: 4–7 Years
Location: Bangalore / Gurgaon
Employment Type: Full-Time
About the Role
We are looking for an experienced GCP Data Engineer with a strong background in Big Data, PySpark, and Python, and hands-on experience with core Google Cloud Platform (GCP) services. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and analytics solutions that drive business insights.
Key Responsibilities
- Design, develop, and maintain scalable and reliable data pipelines and ETL workflows using PySpark, Python, and GCP-native tools (a minimal PySpark sketch follows this list).
- Work with large-scale datasets on BigQuery, Dataproc, Dataflow, Pub/Sub, and Cloud Storage.
- Collaborate with data architects, analysts, and business stakeholders to define data models, transformation logic, and performance optimization strategies.
- Implement best practices for data quality, data governance, and security on GCP.
- Monitor and troubleshoot production data pipelines to ensure reliability and performance.
- Optimize data workflows for cost efficiency and scalability in a cloud environment.
- Integrate data from multiple sources, both batch and streaming, into centralized analytical platforms.
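As a sketch of the PySpark responsibility above, here is a hypothetical batch aggregation job. The bucket paths and column names are invented, and reading `gs://` paths assumes the GCS connector is available on the cluster (as it is on Dataproc).

```python
"""Hypothetical PySpark batch job: aggregate raw orders into daily revenue.
Paths and columns are invented; gs:// access assumes the GCS connector."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue_etl").getOrCreate()

orders = spark.read.json("gs://example-bucket/raw/orders/*.json")  # hypothetical path

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))  # assumed timestamp column
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),                # assumed amount column
        F.count("*").alias("order_count"),
    )
)

daily.write.mode("overwrite").parquet("gs://example-bucket/curated/daily_revenue/")
spark.stop()
```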
Required Skills & Experience
- 4–7 years of hands-on experience as a Data Engineer in large-scale data environments.
- Strong expertise in Google Cloud Platform (GCP), including BigQuery, Dataproc, Dataflow, Pub/Sub, Cloud Storage, Composer, etc.
- Proven experience with Big Data technologies and distributed data processing using PySpark and Spark SQL.
- Strong programming skills in Python for data processing and automation.
- Solid understanding of ETL design patterns, data warehousing, and data modeling concepts.
- Experience with Airflow or other workflow orchestration tools.
- Strong debugging, performance tuning, and problem-solving skills.
GCP MLOps Engineer
Posted 3 days ago
Job Description
About the Role:
We are seeking a hands-on Machine Learning Engineer who will build, deploy and operationalise ML solutions using GCP and Vertex AI. You will partner with data scientists, data engineers, product teams, and cloud infrastructure teams to deliver scalable, production-ready ML systems that drive business impact.
Key Responsibilities:
- Design and build ML pipelines: data ingestion, feature engineering, model training, evaluation, deployment, monitoring.
- Use Vertex AI services such as Custom Training, AutoML, Vertex AI Pipelines, Model Registry, and Feature Store for the end-to-end ML lifecycle.
- Leverage GCP services like BigQuery, Cloud Storage (GCS), Dataflow, Pub/Sub, Cloud Functions, etc., to support ML workflows.
- Deploy trained models as endpoints (online/batch inference) and set up monitoring for model drift, performance metrics, and version control (a minimal deployment sketch follows this list).
- Apply best-practices in MLOps: CI/CD for ML, containerisation (Docker), orchestration (Kubernetes/GKE or equivalent), infrastructure as code (Terraform/Deployment Manager).
- Work with cross-functional teams to translate business requirements into ML solutions, document architectures and decisions, ensure security/governance/compliance.
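To illustrate the deployment bullet above, here is a hypothetical sketch using the google-cloud-aiplatform SDK: upload a model artifact, deploy it to an online endpoint, and call it. The project, bucket path, display name, and container image tag are all invented assumptions.

```python
"""Hypothetical Vertex AI deployment sketch using the google-cloud-aiplatform
SDK. Project, bucket, display name, and container image are invented."""
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")  # assumed

model = aiplatform.Model.upload(
    display_name="demand-forecast",                      # invented name
    artifact_uri="gs://example-bucket/models/demand/",   # invented path
    serving_container_image_uri=(
        # A prebuilt sklearn serving image; the exact tag is an assumption.
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)

endpoint = model.deploy(machine_type="n1-standard-4")    # online endpoint
print(endpoint.predict(instances=[[1.0, 2.0, 3.0]]))     # toy feature vector
```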
Required Skills & Experience:
- 5-10 years of experience, depending on seniority, in ML engineering and building production software/ML systems.
- Hands-on experience with GCP services and especially Vertex AI (training, pipelines, deployment, monitoring).
- Strong programming ability in Python; experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
- Experience in building data pipelines, feature engineering, working with large scale data.
- Familiarity with model deployment/service architecture (online vs batch), containerisation, orchestration.
- Good understanding of ML lifecycle, versioning, model governance, monitoring and operational issues.
- Comfortable collaborating in agile teams and working with product/engineering stakeholders.
Nice to Have:
- GCP certification such as Google Cloud Professional Machine Learning Engineer.
- Experience with GenAI / LLMs and Vertex AI Model Garden or RAG workflows.
- Exposure to infrastructure as code (Terraform), Kubernetes, and cloud security/governance frameworks.
GCP Data Engineer
Posted 5 days ago
Job Description
About Position:
We are seeking a GCP Data Engineer with hands-on experience in data engineering, Python, Java, and related technologies.
- Role: GCP Data Engineer
- Job Location: Pune
- Experience: 5 to 15 Years
- Job Type: Full Time Employment
What You'll Do:
- Design, build, and manage data pipelines on Google Cloud Platform (GCP).
- Work with BigQuery, Dataflow, Pub/Sub, and Cloud Storage for data processing (a small BigQuery sketch follows this list).
- Develop and optimize ETL workflows for structured and unstructured data.
- Collaborate with data analysts and scientists to ensure data accuracy and availability.
- Monitor data jobs, troubleshoot issues, and improve performance.
- Ensure data security, reliability, and best practices in all projects.
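As a small illustration of the BigQuery work above, here is a hypothetical query via the google-cloud-bigquery client; the project id and table are invented.

```python
"""Hypothetical BigQuery read using the google-cloud-bigquery client.
Project id and table are invented for illustration."""
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumed project id

query = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `example-project.sales.orders`   -- hypothetical table
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 7
"""

for row in client.query(query).result():  # runs the job and waits for rows
    print(row.order_date, row.revenue)
```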
Expertise You'll Bring:
- Strong experience with GCP services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Solid understanding of ETL concepts and data pipeline development.
- Proficiency in SQL and Python for data transformation and analysis.
- Experience in handling large datasets and optimizing performance.
- Knowledge of data warehousing, data modeling, and cloud best practices.
- Good problem-solving and collaboration skills.
Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly growth opportunities and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Values-Driven, People-Centric & Inclusive Work Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
- We support hybrid work and flexible hours to fit diverse lifestyles.
- Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities.
- If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.
Let’s unleash your full potential at Persistent - persistent.com/careers
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
GCP Data Engineer
Posted 5 days ago
Job Description
About Position:
We are seeking a GCP Data Engineer with hands-on experience in data engineering, Python, Java, and related technologies.
- Role: GCP Data Engineer
- Job Location: Pune
- Experience: 5 to 10 Years
- Job Type: Full Time Employment
What You'll Do:
- Design, build, and manage data pipelines on Google Cloud Platform (GCP).
- Work with BigQuery, Dataflow, Pub/Sub, and Cloud Storage for data processing.
- Develop and optimize ETL workflows for structured and unstructured data.
- Collaborate with data analysts and scientists to ensure data accuracy and availability.
- Monitor data jobs, troubleshoot issues, and improve performance.
- Ensure data security, reliability, and best practices in all projects.
Expertise You'll Bring:
- Strong experience with GCP services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Solid understanding of ETL concepts and data pipeline development.
- Proficiency in SQL and Python for data transformation and analysis.
- Experience in handling large datasets and optimizing performance.
- Knowledge of data warehousing, data modeling, and cloud best practices.
- Good problem-solving and collaboration skills.
Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly growth opportunities and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Values-Driven, People-Centric & Inclusive Work Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
- We support hybrid work and flexible hours to fit diverse lifestyles.
- Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities.
- If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.
Let’s unleash your full potential at Persistent - persistent.com/careers
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”