Hadoop Developer
Posted 13 days ago
Job Description
Dear Applicants,
Greetings from TCS!
I hope your day is going well!
At Tata Consultancy Services, we're looking for someone like you for an Associate Consultant position. This opportunity aligns nicely with your experience in creating immersive technologies and innovative solutions.
TCS has an urgent requirement for a "Hadoop Developer".
Education: Minimum 15 years of full-time education (10th, 12th and Graduation)
Skills: HDFS, Hive, Spark, Sqoop, Flume, Oozie, Unix Script, Autosys
Role: Hadoop Developer
Experience: Less than 5 years
Location: Mumbai / Chennai / Hyderabad / Bangalore
---
JD:
Strong hands-on skills required in:
- Good understanding of Hadoop concepts, including the file system (HDFS) and MapReduce.
- Hands-on experience with the Spark framework, Unix scripting, Hive queries, and writing UDFs in Hive (see the illustrative sketch after this list); theoretical knowledge and POC work alone will not suffice.
- Good knowledge of the Software Development Life Cycle and the project development lifecycle.
- Ability to work independently, with strong debugging skills in both Hive and Spark; experience developing large-scale systems; experience with debugging and performance tuning; excellent software design skills; strong communication skills; and the ability to work with client partners.
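For illustration, a minimal custom Hive UDF of the kind referenced above might look like the Scala sketch below, written against the classic org.apache.hadoop.hive.ql.exec.UDF API; the class name and behaviour are hypothetical, not part of this requirement.

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// Hypothetical example: a Hive UDF that trims and lower-cases a string column.
// Hive resolves the evaluate() method by reflection at query time.
class NormalizeText extends UDF {
  def evaluate(input: Text): Text =
    if (input == null) null
    else new Text(input.toString.trim.toLowerCase)
}
```

Packaged as a JAR and added to the Hive session, such a function would typically be registered with something like CREATE TEMPORARY FUNCTION normalize_text AS 'NormalizeText'; (names assumed here) before being used in queries.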
---
Let's connect soon!
Hadoop Data Engineer
Posted today
Job Description
Position: Data Engineer
Location: Pune, Chennai, Bangalore, Gurgaon
Experience: 5 to 9 years
Notice Period: 0 to 45 days, or currently serving notice period
Skills: Spark and Scala (must have), Hadoop, Hive, Impala, advanced SQL, OS (Unix), CI/CD tools (Git, Jenkins, Nexus), Agile (Jira, Confluence), relational databases
Job Description:
- Big Data/Hadoop experience, particularly in ingesting data and implementing data ingestion pipelines with Sqoop, Hadoop, HDFS, Hive, Impala, Java, Scala, and Spark (a minimal pipeline sketch follows these requirements).
- Scala is mandatory.
Data Engineer with the responsibilities below:
- Lead Data Engineer to build data pipelines that support the implementation of data science and analytics use cases.
- Candidate needs to be able to develop code in the corresponding language (see technical skills), test it, and follow the implementation through to the production environment.
- Candidate will also take part in the solution design phase, so experience in requirements analysis is desirable.
- Technical expert leading a squad of engineers for a Data Product and implementing data transformation projects in Hadoop.
- Candidate will lead a team of 3-5 data engineers with minimum supervision and integrate with wider IT and business project teams.
- Good work experience on big data platforms such as Hadoop, Spark, Scala, Hive, Impala, and SQL.
- Experience working on data engineering projects.
- Good understanding of SQL.
- Good understanding of Unix and HDFS commands.
- Strong communication skills.
- Experience working on data analytics and PySpark.
- Exposure to Tableau and SAS.
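As a rough sketch of the kind of Spark/Scala ingestion pipeline described above, a batch job might look like the following; the HDFS path, column names, and Hive table name are placeholders rather than project specifics.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Minimal batch ingestion sketch: read raw files landed on HDFS (e.g. by Sqoop or
// Flume), apply light cleansing, and publish a partitioned Hive table for
// downstream Hive/Impala queries. Paths and table names are placeholders.
object IngestEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingest-events")
      .enableHiveSupport()
      .getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("hdfs:///data/raw/events/")               // hypothetical landing path

    val cleaned = raw
      .filter(F.col("event_id").isNotNull)            // drop malformed rows
      .withColumn("event_date", F.to_date(F.col("event_ts")))

    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("staging.events")                  // hypothetical Hive table

    spark.stop()
  }
}
```

In practice the cleansing rules, partitioning scheme, and write mode would follow the project's own data model and load strategy.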
Note: Candidates who have previously worked at TCS are not eligible to apply.
Only candidates with relevant experience should apply.
Hadoop Support Engineer
Posted 13 days ago
Job Description
Hadoop Support Engineer
Bengaluru, KA
WFO (work from office)
We are seeking a dedicated Hadoop Support Engineer to provide comprehensive support and on-call services for our Hadoop infrastructure. This role is pivotal in ensuring the continuous operation and stability of our Hadoop clusters, addressing incidents promptly, and supporting end-users with technical queries. The ideal candidate will possess strong Hadoop administration skills, effective troubleshooting capabilities, and excellent communication skills to interact with various stakeholders.
Key Responsibilities:
- On-Call Support: Serve as a primary point of contact in a rotating on-call schedule, providing 24/7 support to swiftly address and resolve critical incidents affecting Hadoop operations.
- Incident Management and Resolution: Take ownership of incident management processes by diagnosing issues, implementing fixes, and documenting solutions in a ticketing system. Conduct post-incident reviews to identify root causes and prevent recurrence.
- Monitoring and Alerts: Establish and maintain robust monitoring and alerting systems using tools like Nagios, Grafana, or Prometheus to proactively detect and mitigate potential issues before they escalate.
- User and Developer Support: Assist end-users and developers with technical queries related to Hadoop operations, providing guidance and support to optimize their use of the system. Educate users on best practices and system capabilities.
- System Maintenance: Conduct routine maintenance tasks including software patching, upgrades, and configuration changes to ensure system reliability and security. Schedule maintenance activities to minimize business disruption.
Qualifications:
- 3 years of experience in Hadoop administration and support, with a strong focus on operational stability and incident resolution.
- Proficiency in Hadoop ecosystem components such as HDFS, YARN, Hive, and Spark.
- Experience with Linux system administration and scripting (e.g., Bash, Python).
- Experience with configuration management tools such as Ansible.
Apache Hadoop Developer
Posted today
Job Description
At BairesDev, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley. Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works on roles that drive significant impact worldwide.
When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.
Apache Hadoop Developer at BairesDev
We are seeking an Apache Hadoop Developer with expertise in the big data ecosystem, HDFS architecture, the MapReduce paradigm, and YARN resource management. The technology stack is built on popular open-source big data technologies, Apache Hadoop and related ecosystem tools, so there is ample scope for collaboration with and contribution to open source. The ideal candidate will develop and optimize large-scale distributed data processing systems, contribute to open-source projects, and collaborate with engineering teams to deliver innovative big data solutions.
What You'll Do:
- Contribute actively to Apache Hadoop and related open source projects, spending 20% of your time on upstream development.
- Develop and optimize big data ecosystem solutions for large-scale data processing.
- Implement HDFS architecture components to ensure reliable distributed data storage.
- Build MapReduce paradigm applications for scalable data processing workflows (see the sketch after this list).
- Design and optimize YARN resource management systems for cluster efficiency.
- Collaborate with engineering teams on system integration and deployment.
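As an illustrative sketch of a MapReduce-paradigm application, the classic word-count job below is written in Scala against the Hadoop MapReduce Java API (Scala 2.13 assumed for scala.jdk.CollectionConverters); class names and paths are arbitrary.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
import scala.jdk.CollectionConverters._

// Mapper: emit (token, 1) for every whitespace-separated token in a line.
class TokenMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   ctx: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { token =>
      word.set(token)
      ctx.write(word, one)
    }
}

// Reducer (also usable as a combiner): sum the counts for each token.
class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      ctx: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit =
    ctx.write(key, new IntWritable(values.asScala.map(_.get).sum))
}

object WordCount {
  def main(args: Array[String]): Unit = {
    val job = Job.getInstance(new Configuration(), "word-count")
    job.setJarByClass(classOf[TokenMapper])
    job.setMapperClass(classOf[TokenMapper])
    job.setCombinerClass(classOf[SumReducer])
    job.setReducerClass(classOf[SumReducer])
    job.setOutputKeyClass(classOf[Text])
    job.setOutputValueClass(classOf[IntWritable])
    FileInputFormat.addInputPath(job, new Path(args(0)))   // input directory on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args(1))) // output directory on HDFS
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}
```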
What we are looking for:
- 10+ years of experience in software development.
- Strong expertise in the big data ecosystem and distributed data processing systems.
- Deep understanding of HDFS architecture and distributed file system concepts.
- Proficiency in MapReduce paradigm and parallel processing techniques.
- Experience with YARN resource management and cluster orchestration.
- Core contributions to Apache open source projects (Hadoop, HDFS, Spark, or similar).
- Advanced level of English.
How we make your work (and your life) easier:
- Excellent compensation in USD or your local currency if preferred.
- Paid parental leave, vacation, & national holidays.
- Innovative and multicultural work environment.
- Collaborate and learn from the global Top 1% of talent in each area.
- Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.
Join a global team where your unique talents can truly thrive and make a significant impact.
Apply now