Infrastructure Specialist-AWS DevOps

Posted 1 day ago
Job Description
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
* Responsible for IT infrastructure across cross-platform technology areas, demonstrating design and build expertise.
* Responsible for developing, architecting, and building AWS Cloud services following best practices, blueprints, and patterns, including high availability and multi-region disaster recovery.
* Strong communication and collaboration skills
**Required technical and professional expertise**
* B.E./B.Tech (any stream), M.Sc. (Computer Science/IT), or M.C.A., with a minimum of 8-10 years of experience
* Must have 8+ years of relevant experience in Python/Java, AWS, and Terraform (IaC)
* Experience with Kubernetes, Docker, and shell scripting
* Proficient in Python as a scripting language, well beyond writing small scripts
**Preferred technical and professional experience**
* Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube
* Experience installing and configuring different application servers such as JBoss, Tomcat, and WebLogic
* Experience using monitoring solutions like CloudWatch, ELK Stack, and Prometheus
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
AWS Data Engineer (Pyspark)
Posted 2 days ago
Job Description
Experience: 5-10 years
Location: Bangalore, Chennai, Hyderabad, Pune, Kochi, Bhubaneswar, Kolkata
Key Skills
AWS Lambda, Python, Boto3, PySpark, Glue
Must have Skills
- Strong experience in Python to package, deploy and monitor data science apps
- Knowledge in Python based automation
- Knowledge of Boto3 and related Python packages
- Working experience in AWS and AWS Lambda
Good to have (Knowledge)
- Bash scripting and Unix
- Data science models testing, validation and tests automation
- Knowledge of AWS SageMaker
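The must-have combination of Python, Boto3, and AWS Lambda above usually comes down to writing and exercising handler functions. A minimal sketch, assuming an S3-style trigger event (the event shape and names here are illustrative, not from the posting):

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch: collects object keys from an
    S3-style event and returns a JSON summary. The event layout mirrors
    the standard S3 notification format."""
    keys = [rec["s3"]["object"]["key"] for rec in event.get("Records", [])]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(keys), "keys": keys}),
    }

# Local invocation with a sample event (no AWS call is made):
sample_event = {"Records": [{"s3": {"object": {"key": "data/part-0001.csv"}}}]}
result = lambda_handler(sample_event, None)
```

Because the handler is plain Python, it can be unit-tested locally exactly as shown before being packaged and deployed, which is the package/deploy/monitor workflow the posting describes.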
TCS Hiring for AWS Senior Data Engineer with Pyspark, AWS, Glue_kochi
Posted 2 days ago
Job Description
Job Title: AWS Senior Data Engineer with PySpark, AWS, Glue
Location: Kochi
Experience: 6 to 10 Years
Notice Period: 30-45 days
Job Description:
Must have: PySpark, AWS (ETL concepts, S3, Glue, EMR, Redshift, DMS, AppFlow), Qlik Replicate, data testing
Nice to have: Hadoop, Teradata background, IaC (CloudFormation/Terraform), Git
Kind Regards,
Priyankha M
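The data-testing skill listed alongside Qlik Replicate above typically means reconciling source and target row sets after replication. A minimal pure-Python sketch of that check (the function name and record layout are assumptions; a real Glue/PySpark job would express the same comparison with DataFrame joins):

```python
def reconcile(source_rows, target_rows, key="id"):
    """Compare source and target record sets by primary key and report
    keys missing from the target and keys whose rows differ."""
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    missing = sorted(k for k in src if k not in tgt)
    mismatched = sorted(k for k in src if k in tgt and src[k] != tgt[k])
    return {"missing": missing, "mismatched": mismatched}

source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 25}]
report = reconcile(source, target)
# report → {'missing': [3], 'mismatched': [2]}
```

The same key-level reconciliation scales to replicated tables by swapping the dict comprehensions for keyed DataFrame joins.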
Data Engineer-Data Platforms-AWS

Posted 1 day ago
Job Description
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
* Responsibilities:
* Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases, processing the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS
* Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies
* Experience in developing streaming pipelines
* Experience working with Hadoop/AWS ecosystem components (Apache Spark, Kafka, and other big data/cloud technologies) to implement scalable solutions that meet ever-increasing data volumes
**Required technical and professional expertise**
* 3-5+ years of total experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills
* Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala
* Minimum 3 years of experience on Cloud Data Platforms on AWS
* Experience with AWS EMR, AWS Glue, or Databricks, plus AWS Redshift and DynamoDB
* Good to excellent SQL skills
**Preferred technical and professional experience**
* Certification in AWS and Databricks, or Cloudera Spark developer certification
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Data Engineer-Data Platforms-AWS

Posted 1 day ago
Job Description
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
* As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
* Responsibilities:
* Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases, processing the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS
* Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies
* Experience in developing streaming pipelines
* Experience working with Hadoop/AWS ecosystem components (Apache Spark, Kafka, and other big data/cloud technologies) to implement scalable solutions that meet ever-increasing data volumes.
**Required technical and professional expertise**
* Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala
* Minimum 3 years of experience on Cloud Data Platforms on AWS
* Experience with AWS EMR, AWS Glue, or Databricks, plus AWS Redshift and DynamoDB
* Good to excellent SQL skills
* Exposure to streaming solutions and message brokers such as Kafka
**Preferred technical and professional experience**
* Certification in AWS and Databricks, or Cloudera Spark developer certification
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Senior Software Architect - Cloud Infrastructure
Posted 4 days ago
Job Description
Responsibilities:
- Architect, design, and implement highly scalable, resilient, and performant cloud infrastructure solutions.
- Lead the design and development of microservices-based architectures.
- Define and enforce coding standards, design patterns, and best practices for cloud development.
- Oversee the implementation of robust CI/CD pipelines for automated build, test, and deployment processes.
- Evaluate and select appropriate cloud services, technologies, and tools to meet business requirements.
- Provide technical leadership and mentorship to engineering teams.
- Conduct code reviews and architectural design sessions.
- Ensure the security, reliability, and cost-effectiveness of cloud infrastructure.
- Collaborate with product managers, engineers, and stakeholders to define technical roadmaps.
- Troubleshoot and resolve complex technical issues related to cloud infrastructure and application performance.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 10 years of experience in software development, with at least 5 years in a Software Architect or Lead Engineer role focusing on cloud infrastructure.
- Extensive experience with major cloud platforms (AWS, Azure, or GCP), including services like EC2, S3, Lambda, Kubernetes, Docker, etc.
- Deep understanding of microservices architecture, distributed systems, and API design.
- Proficiency in multiple programming languages (e.g., Java, Python, Go).
- Experience with infrastructure as code (IaC) tools such as Terraform or CloudFormation.
- Strong knowledge of CI/CD principles and tools (e.g., Jenkins, GitLab CI, CircleCI).
- Excellent understanding of database technologies (SQL and NoSQL).
- Proven ability to lead technical discussions and influence architectural decisions.
- Strong communication and collaboration skills, with the ability to articulate complex technical concepts.