1,526 AWS Data jobs in India
AWS Data Engineer
Posted 2 days ago
Job Description
**Skill:** AWS Data Engineer
**Experience**: 6 - 13 years
**Location**: AIA Kochi
We are seeking a highly skilled Sr. Support Analyst with 6 to 13 years of experience to join our team. The ideal candidate will have expertise in AWS Lambda, AWS IAM, Apache Spark, Python, DB performance optimization, data modelling, Amazon S3, and TSQL. Experience in Property & Casualty Insurance is a plus. This is a hybrid role with rotational shifts and no travel required. Candidates should have strong experience supporting existing environments in the above technologies.
**Responsibilities**
+ Develop and maintain scalable applications using AWS Lambda and other AWS services.
+ Implement and manage AWS IAM policies to ensure secure access control.
+ Provide support for the already-developed framework and ensure application continuity during support work.
+ Work with the customer and onsite/offshore team members to resolve failures and arrive at permanent fixes.
+ Oversee the development and maintenance of data pipelines and workflows.
+ Provide automation and improvement recommendations and track their implementation to closure.
+ Prepare timelines for L3 development and track them to closure.
+ Utilize Apache Spark for large-scale data processing and analytics.
+ Write efficient and maintainable code in Python for various applications.
+ Optimize database performance to ensure fast and reliable data retrieval.
+ Design and implement data models to support business requirements.
+ Manage and maintain data storage solutions using Amazon S3.
+ Write complex TSQL queries for data manipulation and reporting.
+ Collaborate with cross-functional teams to gather and analyze requirements.
+ Provide technical guidance and mentorship to junior developers.
+ Ensure code quality through code reviews and automated testing.
+ Troubleshoot and resolve issues in a timely manner.
+ Stay updated with the latest industry trends and technologies.
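As a rough illustration of the AWS Lambda and S3 responsibilities listed above, here is a minimal handler sketch, invoked locally with a sample S3 event. The bucket and key names are hypothetical; a real handler would fetch each object via boto3 (e.g. `s3.get_object`) rather than just summarising the event.

```python
import json

def handler(event, context):
    """Minimal Lambda handler sketch: summarise the S3 objects referenced
    in an S3 event notification. A production handler would use boto3 to
    read and process each object; this sketch only extracts the references."""
    keys = [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]
    return {"statusCode": 200, "body": json.dumps({"objects": keys})}

# Local invocation with a sample S3 event (hypothetical bucket and key):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "claims-data"},
                "object": {"key": "2024/01/claims.csv"}}}
    ]
}
result = handler(sample_event, None)
print(result["statusCode"])  # 200
```

In a deployed function, IAM policies (as mentioned above) would scope the handler's role to read-only access on the specific bucket.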
**Qualifications**
+ Must have 8 to 10 years of experience in software development.
+ Must have strong experience with AWS Lambda and AWS IAM.
+ Must have expertise in Apache Spark and Python.
+ Must have experience in DB Performance Optimization and Data Modelling.
+ Must have experience with Amazon S3 and TSQL.
+ Nice to have experience in Property & Casualty Insurance.
+ Must be able to work in a hybrid model with rotational shifts.
+ Must have excellent problem-solving and analytical skills.
+ Must have strong communication and collaboration skills.
+ Must be able to work independently and as part of a team.
+ Must be detail-oriented and able to manage multiple tasks.
+ Must be committed to continuous learning and improvement.
+ Must have a proactive and positive attitude.
**Certifications Required**
AWS Certified Developer, Apache Spark Certification, Python Certification
Cognizant is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.
AWS Data Engineer
Posted today
Job Description
Role
AWS Data Engineer
Experience
8+ years
Location
Remote
Time Zone
UK
Duration
2 months (Extendable)
Job Description
- Design, develop, and implement performant ETL pipelines using the Python API (pySpark) of Apache Spark on AWS EMR.
- Write reusable, testable, and efficient code.
- Integrate data storage solutions in Spark, especially AWS S3 object storage; tune the performance of pySpark scripts.
- Ensure overall build quality and on-time delivery at all times.
- Handle customer meetings with ease.
- Excellent communication skills to interact with the customer.
- Be a team player willing to work in an onsite-offshore model, and mentor other team members (onsite as well as offshore).
*5+ years of experience in Python programming; strong proficiency in Python
*Familiarity with functional programming concepts
*3+ years of hands-on experience in developing ETL data pipelines using pySpark on AWS EMR
*Experience in building pipelines and data lake for large enterprises on AWS
*Good understanding of Spark's DataFrame API
*Experience in configuring EMR clusters on AWS
*Experience in dealing with AWS S3 object storage from Spark.
*Experience in troubleshooting spark jobs. Knowledge of monitoring spark jobs using Spark UI
*Performance tuning of Spark jobs.
*Understanding fundamental design principles behind business processes
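The "reusable, testable" pipeline structure the requirements above describe can be sketched as a plain extract/transform/load skeleton. This is a minimal pure-Python illustration, not the actual pipeline: in a real EMR job each step would be a pySpark DataFrame operation (indicated in comments), and the S3 paths named there are hypothetical.

```python
def extract(rows):
    # In pySpark: spark.read.parquet("s3://bucket/input/") (hypothetical path).
    return list(rows)

def transform(rows):
    # In pySpark: df.filter(...).withColumn(...). Here: drop rows with a
    # missing amount and normalise the currency code to upper case.
    return [
        {**r, "currency": r["currency"].upper()}
        for r in rows
        if r.get("amount") is not None
    ]

def load(rows, sink):
    # In pySpark: df.write.mode("overwrite").parquet("s3://bucket/output/").
    sink.extend(rows)

raw = [
    {"amount": 10.0, "currency": "usd"},
    {"amount": None, "currency": "eur"},
]
out = []
load(transform(extract(raw)), out)
print(out)  # [{'amount': 10.0, 'currency': 'USD'}]
```

Keeping each stage a small, pure function is what makes the pipeline unit-testable without a running cluster.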
Process Knowledge and Expertise:
- Demonstrated experience in change management processes, including understanding of governance frameworks and preparation of supporting artefacts required for approvals.
- Strong clarity on the path to production, with hands-on involvement in deployments, testing cycles, and obtaining business sign-offs.
- Proven track record in technical solution design, with the ability to provide architectural guidance and support implementation strategies.
Databricks-Specific Skills:
- Experience developing and delivering at least one end-to-end Proof of Concept (POC) solution covering the below:
- Basic proficiency in Databricks, including creating jobs and configuring clusters.
- Exposure to connecting external data sources (e.g., Amazon S3) to Databricks.
- Understanding of Unity Catalog and its role in data governance.
- Familiarity with notebook orchestration and implementing modular code structures to enhance scalability and maintainability.
Important Pointers:
- Candidates must have actual hands-on work experience, not just home projects or academic exercises.
- Profiles should clearly state how much experience they have in each skill area, as this helps streamline the interview process.
- Candidates must know their CV/profile inside out, including all projects and responsibilities listed. Any ambiguity or lack of clarity on the candidate's part can lead to immediate rejection, as we value accuracy and ownership.
- They should be able to confidently explain their past experience, challenges handled, and technical contributions.
AWS Data Engineer
Posted 2 days ago
Job Description
Role - AWS Data Engineer
Required Technical Skill Set - AWS, Snowflake, ETL, Python, PySpark, DBT
Experience Range - 6+ Years
Technical/Behavioral Competency:
• 5+ years' experience with Snowflake development and integration
• 3-7 years' experience with AWS cloud and AWS services such as S3 buckets, Lambda, Glue, API Gateway, SQS queues, RDS, and Redshift;
• Experience in developing and supporting web applications, including familiarity with web technologies and frameworks (AngularJS, React.js).
• 3-5 years' experience with automation of DevOps builds using GitLab/Bitbucket/Jenkins/Maven;
• Experience in integration technologies like Fivetran and dbt
• Experience working directly with technical and business teams.
• Familiarity with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and shell scripting.
• Ability to learn quickly, be organized and detail oriented.
• Understanding of database schema design.
• Proficient in one of the coding languages (Python, Java, Scala);
Automated testing experience
AWS Data Engineer
Posted 6 days ago
Job Description
AWS Data Engineer
Job Location: Bengaluru
Experience Required: 5+ Years
Mandatory Skills: AWS Services, ETL, ETL Integration, CodePipeline, Jenkins, Glue, EMR, Athena, ECS, EKS, Kubernetes, CloudWatch, Prometheus, Grafana, Python, Shell, or PowerShell
Job Description:
We are looking for an experienced AWS Engineer with around 5-8 years of hands-on experience in designing, deploying, and managing applications on the AWS cloud platform. The ideal candidate should have strong expertise in AWS services, automation, and cloud security, along with the ability to collaborate with cross-functional teams to deliver scalable and secure cloud solutions.
Key Responsibilities
- 5-8 years of professional work experience in a relevant field
- Design, develop, and maintain scalable data pipelines and ETL workflows on AWS.
- Strong SQL skills and experience with structured/unstructured data processing.
- Ingest, process, and transform large datasets using AWS services (Glue, EMR, Athena, Redshift, Lambda).
- Build and optimize data lake and data warehouse solutions leveraging S3, Lake Formation, and Redshift Spectrum.
- Collaborate with stakeholders to design and integrate APIs/microservices for data access and consumption.
- Develop reusable, testable, and efficient code in Python/Java/Scala/Node.js for data engineering solutions.
- Work with DevOps teams to automate deployments, manage CI/CD pipelines, and ensure reliability (CodePipeline, Jenkins, GitHub Actions).
- Build serverless solutions using AWS Lambda, API Gateway, and DynamoDB.
- Ensure best practices for data security, governance, monitoring, and cost optimization on AWS.
- Troubleshoot, optimize, and improve the performance of data workflows and applications.
Required Skills & Qualifications
- 5+ years of experience in data engineering with strong AWS expertise.
- Hands-on with AWS services such as Glue, EMR, Athena, Redshift, Lambda, S3, RDS, EC2, and VPC.
- Familiarity with CI/CD pipelines and DevOps practices for data deployments.
- Exposure to container orchestration (ECS, EKS, Kubernetes) and monitoring tools (CloudWatch, Prometheus, Grafana).
- Strong scripting skills in Python, Shell, or PowerShell.
Education / Qualification :
- BE / B.Tech / BCA / B.Sc. / MCA / M.Tech / Any Graduate
Please share your resume at
AWS Data Engineer
Posted 6 days ago
Job Description
Dear Candidate
Greetings from TATA Consultancy Services
Job Openings at TCS
Skill : AWS Data Engineer
Exp range: 5 yrs to 8 yrs
Location: HYD
Notice period – 30 days
Pls find the Job Description below.
- Good hands-on experience in Python programming and Pyspark/Scala
- Data Engineering experience using AWS core services (Lambda, Glue, EMR, S3 and RedShift)
- Good knowledge of any of the ETL tools and SQL
Must-Have Skills:
- Primary: AWS, PySpark/Python/Hive
- Secondary: AWS Glue, Lambda, Athena
- AWS, Python, PySpark, Glue, Lambda, CI/CD
If you are Interested in the above opportunity kindly share your updated resume to immediately with the details below (Mandatory)
Name:
Contact No.
Email id:
Skillset:
Total exp:
Relevant Exp:
Fulltime highest qualification (Year of completion with percentage scored):
Current organization details (Payroll company):
Current CTC:
Expected CTC:
Notice period:
Current location:
Any gaps between your education or career (If yes pls specify the duration):
Will you be able to join within 30 days? (Yes/NO)
Is HYD location fine? (Yes/No)
AWS Data Engineer
Posted 6 days ago
Job Description
Greetings From TCS!
TCS is hiring Candidate for:
Redshift, Lambda, SQL, AWS Bedrock services
Interview Date: Weekend In-Person Drive, 18-Oct-25
Mode of interview: in person
Experience Range: 6 to 9 yrs
Location of Requirement- Hyderabad
Last Date of form submission: Today/ASAP
Follow me for more Job updates!
AWS Data Engineer
Posted 6 days ago
Job Description
We are Hiring AWS Data Engineers at Coforge Ltd.
Job Location: Bangalore
Experience Required: 5 to 7 Years.
Availability: Immediate joiners preferred
Send your CV to
WhatsApp: for any queries
Role Overview:-
Coforge Ltd. is seeking a skilled AWS Engineer with 5–7 years of hands-on experience in designing, deploying, and managing cloud-native applications on AWS. This is a high-priority project, and we are looking for someone who can hit the ground running and contribute to building scalable, secure, and efficient cloud solutions.
Key Responsibilities:-
- Design and maintain scalable data pipelines and ETL workflows on AWS.
- Work with structured and unstructured data using SQL and AWS services.
- Utilize AWS Glue, EMR, Athena, Redshift, Lambda for data processing.
- Build and optimize data lakes and warehouses using S3, Lake Formation, Redshift Spectrum.
- Develop APIs/microservices for data access and integration.
- Write efficient, reusable code in Python, Java, Scala, or Node.js.
- Collaborate with DevOps teams to manage CI/CD pipelines (CodePipeline, Jenkins, GitHub Actions).
- Build serverless solutions using Lambda, API Gateway, DynamoDB.
- Ensure best practices in data security, governance, and cost optimization.
- Troubleshoot and enhance performance of data workflows and applications.
Required Skills & Qualifications:-
- 5+ years of experience in data engineering with strong AWS expertise.
- Proficiency in AWS services: Glue, EMR, Athena, Redshift, Lambda, S3, RDS, EC2, VPC.
- Experience with CI/CD and DevOps practices.
- Familiarity with container orchestration (ECS, EKS, Kubernetes).
- Knowledge of monitoring tools like CloudWatch, Prometheus, Grafana.
- Strong scripting skills in Python, Shell, or PowerShell.
AWS Data Engineer
Posted 6 days ago
Job Description
We are seeking a skilled Data Engineer (AWS; Scala mandatory) to join our dynamic team. This role involves building and maintaining robust data pipelines, designing scalable databases, and collaborating with stakeholders to deliver high-quality data solutions.
Responsibilities
- Build and maintain data pipelines using AWS Glue, Python, EMR/Spark, and Kafka/Kinesis streaming.
- Design and develop relational databases, preferably using Aurora Postgres.
- Write complex SQL queries and stored procedures.
- Troubleshoot and resolve data quality issues.
- Collaborate with stakeholders to understand data requirements and develop effective solutions.
- Document data pipelines, workflows, and procedures.
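As a small illustration of the "troubleshoot and resolve data quality issues" responsibility above, a pipeline might run a validation pass like the following before loading. This is a hedged sketch, not the team's actual tooling; the record structure and field names are hypothetical.

```python
def find_quality_issues(records, required_fields, key_field):
    """Flag records with missing required fields and duplicate keys --
    two common causes of data-quality failures in ingestion pipelines."""
    issues = []
    seen_keys = set()
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
        key = rec.get(key_field)
        if key in seen_keys:
            issues.append((i, f"duplicate key: {key}"))
        seen_keys.add(key)
    return issues

records = [
    {"id": 1, "amount": 5.0},
    {"id": 1, "amount": 7.5},   # duplicate id
    {"id": 2, "amount": None},  # missing amount
]
problems = find_quality_issues(records, ["id", "amount"], "id")
print(len(problems))  # 2
```

At scale the same checks would typically be expressed as Spark aggregations or SQL constraints rather than a Python loop, but the logic is the same.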
Required Skills
- Strong understanding of cloud technologies (AWS preferred), including services like Aurora/RDS, Glue, Lambda, EC2, S3, EMR, Step Functions, CloudTrail, and CloudWatch.
- Hands-on experience building data pipelines using Glue, Python, EMR/Spark, and Kafka/Kinesis.
- Extensive experience in relational database design and development, especially with Aurora Postgres.
- Proficiency in Postgres SQL, including writing complex queries and stored procedures.
- Excellent analytical and problem-solving skills.
AWS Data Engineer
Posted 6 days ago
Job Description
Job Title: AWS Data Engineer
Exp Range: 5-14 Years
Job Location: Bangalore or Bhubaneswar or Hyderabad
Job Description:
•Bachelor's degree in Engineering, Technology, Computer Science, Statistics, Biostatistics, Mathematics, Biology, or another health-related field, or equivalent experience that provides the skills and knowledge necessary to perform the job.
•Experience in data engineering, building data pipelines to manage heterogeneous data ingestion, or similar data integration across multiple sources, including collected data.
•Experience with Python/R, SQL, NoSQL
•Cloud experience (e.g., AWS, Azure, or GCP)
•Experience with GitLab, GitHub
•Experience with Jenkins, GitLab
•Experience deploying data pipelines in the cloud
•Experience with Apache Spark (Databricks)
•Experience setting up and working with data warehouses and data lakes (e.g., Snowflake, Amazon Redshift)
•Experience setting up ELT and ETL
AWS Data Engineer
Posted 6 days ago
Job Description
Dear Candidate
Greetings from TATA Consultancy Services
Job Openings at TCS
Skill: AWS Data Engineer
Exp range: 6 yrs to 10 yrs
location : Bangalore
Notice period – 30 – 45 days
Pls find the Job Description below.
- Hands-on experience in PySpark and Glue
- Experience with EMR, S3, IAM, Lambda, CloudFormation, and Python
- Knowledge of AMI rehydration, Python, ELB, and other AWS components
- AWS monitoring setup, building traffic dashboards, and handling SNOW (ServiceNow) incidents for AWS-related traffic and services
- CI/CD build and deployment support
- Ensure reliability of data pipelines, ETL processes, and data transformations
- Identify areas of improvement in data quality processes; propose solutions to enhance data reliability and accuracy
- Implement data quality best practices and optimize data workflows
Must-Have Skills - Python, PySpark, Hive, Snowflake, AWS (S3, Lambda, Glue, EMR, and Redshift)
If you are Interested in the above opportunity kindly share your updated resume to immediately with the details below (Mandatory)
Name:
Contact No.
Email id:
Skillset:
Total exp:
Relevant Exp:
Fulltime highest qualification (Year of completion with percentage scored):
Current organization details (Payroll company):
Current CTC:
Expected CTC:
Notice period:
Current location:
Any gaps between your education or career (If yes pls specify the duration):
Will you be able to join within 30/45 days? (Yes/NO)
Is Bangalore location fine? (Yes/No)