23,411 Hadoop jobs in India
Hadoop Admin
Posted 2 days ago
Job Description
Role: Cloudera Admin
Experience: 6 to 8 Years
Job Location: Bangalore, Hyderabad, Chennai, Mumbai, Pune
Hybrid Mode
FTE with LTIMindtree
Notice Period: Immediate to 15 days
Please do not apply if your notice period is more than 15 days.
Mandatory Skills: Cloudera cluster build, data migration, performance tuning, pre/post platform support for application migration, Linux commands
Thanks & Regards,
Prabal Pandey
Hadoop Administrator
Posted today
Job Description
Work Location: Pune (Work-from-office)
Notice Period: Immediate to 30 Days
Experience: 2+ Years
Must Have Skills:
- Strong Hadoop administration (Cloudera CDP/CDH, Hortonworks HDP)
- Big Data ecosystem tools: YARN, HDFS, Zookeeper, Hive, Spark, HBase, and Atlas
- Linux/Unix system administration: RedHat, CentOS, Ubuntu, with deep knowledge of OS internals
- Cluster security & encryption: Kerberos, data-at-rest/data-in-transit encryption, Ranger
- Scripting & automation: Shell/Bash/Python
If you believe your skills align with this role, we encourage you to apply directly. If you know someone who would be a strong fit, please feel free to refer or share this opportunity with them.
Hadoop Administrator
Posted today
Job Description
Level - L3
- Administer and support Linux systems in large-scale production container environments.
- Automate infrastructure using Ansible; manage Hadoop clusters and containers.
- Monitor performance via Grafana/Prometheus; ensure system configuration compliance.
- Collaborate cross-functionally; strong communication and scripting skills are essential.
- Support DR/BCP, maintain HIPAA/PHI compliance, ensure infrastructure security.
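As a rough, hypothetical illustration of the monitoring duty above (not part of the posting): a small cron-driven Bash script can surface basic HDFS health figures to Prometheus through the node_exporter textfile collector. The output path and metric names below are assumptions made for the sketch.

#!/usr/bin/env bash
# Sketch: publish basic HDFS capacity metrics in Prometheus text format
# for the node_exporter textfile collector (output path is an assumed default).
set -euo pipefail
OUT="/var/lib/node_exporter/textfile_collector/hdfs_health.prom"

# `hdfs dfsadmin -report` prints cluster-wide capacity plus one block per DataNode.
report=$(hdfs dfsadmin -report 2>/dev/null)
configured=$(awk -F': ' '/^Configured Capacity:/ {print $2; exit}' <<<"$report" | awk '{print $1}')
used=$(awk -F': ' '/^DFS Used:/ {print $2; exit}' <<<"$report" | awk '{print $1}')
live=$(grep -c '^Name:' <<<"$report" || true)

{
  echo "hdfs_configured_capacity_bytes ${configured:-0}"
  echo "hdfs_used_bytes ${used:-0}"
  echo "hdfs_live_datanodes ${live:-0}"
} > "$OUT"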
Hadoop Admin
Posted today
Job Description
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
As a Data Scientist at Kyndryl, you are the bridge between business problems and innovative solutions, using a powerful blend of well-defined methodologies, statistics, mathematics, domain expertise, consulting, and software engineering. You'll wear many hats, and each day will present a new puzzle to solve, a new challenge to conquer.
You will dive deep into the heart of our business, understanding its objectives and requirements – viewing them through the lens of business acumen, and converting this knowledge into a data problem. You'll collect and explore data, seeking underlying patterns and initial insights that will guide the creation of hypotheses.
In this role, you will embark on a transformative process of business understanding, data understanding, and data preparation. Utilizing statistical and mathematical modeling techniques, you'll have the opportunity to create models that defy convention – models that hold the key to solving intricate business challenges. With an acute eye for accuracy and generalization, you'll evaluate these models to ensure they not only solve business problems but do so optimally.
Cluster management and maintenance
- Deployment: Install, configure, and deploy multi-node Hadoop clusters and all related ecosystem components on Linux environments.
- Node management: Commission and decommission data nodes to scale the cluster based on data and resource needs.
- Configuration: Manage and update service configurations for components like HDFS, YARN, Hive, and Spark.
- Upgrades and patches: Plan and execute upgrades, including rolling out patches and version upgrades for the Hadoop ecosystem.
- Automation: Use configuration management tools like Ansible, Puppet, or Chef to automate cluster deployment and management tasks.
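As a hedged example of the node-management task above (not Kyndryl's actual procedure), decommissioning a worker on a plain Apache Hadoop cluster typically goes through the exclude files and a refresh; the hostname and file paths below are placeholders.

# Add the host to the files referenced by dfs.hosts.exclude and
# yarn.resourcemanager.nodes.exclude-path (paths are placeholders).
NODE="dn07.example.internal"
echo "$NODE" >> /etc/hadoop/conf/dfs.exclude
echo "$NODE" >> /etc/hadoop/conf/yarn.exclude

# Ask the NameNode and ResourceManager to re-read their host lists.
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes

# Wait until the node reports "Decommissioned" before powering it off.
hdfs dfsadmin -report | grep -A 2 "$NODE"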
Monitoring and performance tuning
- System health: Monitor the overall health and performance of the Hadoop cluster using tools like Cloudera Manager or Ambari.
- Job tracking: Keep a close eye on running jobs and resource utilization to ensure they are performing efficiently.
- Performance optimization: Tune the cluster, including Hadoop MapReduce routines and YARN resource allocation, to improve performance.
- Troubleshooting: Diagnose and resolve cluster issues, including application errors, system failures, and configuration problems.
Security and access control
- Security setup: Implement and maintain security protocols, such as Kerberos authentication, across the Hadoop environment.
- User administration: Create and manage users and groups within Hadoop and the underlying Linux operating system.
- Access control: Manage permissions to ensure data governance policies are enforced.
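As a hedged sketch of the Kerberos setup mentioned above, creating a service principal and keytab on an MIT KDC usually looks like the following; the realm, host, and keytab path are invented for the example.

# Create a service principal and export its keytab (all names are placeholders).
kadmin.local -q "addprinc -randkey hdfs/dn07.example.internal@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.service.keytab hdfs/dn07.example.internal@EXAMPLE.COM"

# Verify the keytab and obtain a ticket as the service principal.
klist -kt /etc/security/keytabs/hdfs.service.keytab
kinit -kt /etc/security/keytabs/hdfs.service.keytab hdfs/dn07.example.internal@EXAMPLE.COM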
Data management and administration
- Data migration: Manage the movement of data into and out of HDFS using tools like Sqoop and Flume.
- Capacity planning: Monitor disk space usage and forecast future storage needs to ensure sufficient capacity.
- Backup and recovery: Design and implement backup and disaster recovery procedures to protect cluster data.
- Log management: Collect, review, and manage log files from the Hadoop daemons for auditing and troubleshooting purposes.
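To make the data-migration and capacity-planning duties above concrete, a minimal Sqoop import plus a quick usage check might look like this; the JDBC URL, table, and target directory are placeholders, not details from the posting.

# Import a relational table into HDFS with Sqoop (connection details are placeholders).
sqoop import \
  --connect jdbc:mysql://db01.example.internal/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --num-mappers 4

# Spot-check space usage on the landing area and the cluster overall.
hdfs dfs -du -h /data/raw
hdfs dfsadmin -report | head -n 8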
Collaboration and support
- Inter-team coordination: Work closely with data engineers, developers, and business intelligence teams to support their big data application needs.
- Escalation point: Serve as the point of contact for vendor escalation to resolve critical issues.
- Documentation: Maintain comprehensive documentation of cluster configurations, procedures, and best practices.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
Required qualifications and skills
- 10 years of experience as a Hadoop Admin
- Technical expertise: Strong knowledge of the Hadoop ecosystem, including HDFS, YARN, MapReduce, Hive, and Spark.
- Linux proficiency: Excellent command of Linux/UNIX commands and shell scripting, as Hadoop is primarily run on Linux.
- Problem-solving: Exceptional analytical and troubleshooting skills to diagnose complex cluster issues.
- Communication: Strong interpersonal and communication skills to collaborate with various technical and business teams.
- Automation skills: Hands-on experience with automation tools like Ansible or Puppet.
- Cloud experience (optional): Experience with cloud-based Hadoop services like Amazon EMR or Azure HDInsight is a plus.
- Database knowledge: Familiarity with database concepts and SQL can be beneficial, particularly for managing Hive and HBase databases.
Preferred Skills and Experience
- Degree in a quantitative discipline, such as mathematics, statistics, computer science, or mechanical engineering
- Professional certification, e.g., Open Certified Data Scientist
- Cloud platform certification, e.g., AWS Certified Machine Learning – Specialty, Google Cloud Professional Machine Learning Engineer, or Microsoft Certified: Azure Data Scientist Associate
- Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio
- Experience in at least one domain, e.g., cybersecurity, IT service management, financial services, or health care
Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed.
Get Referred
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Hadoop Developer
Posted today
Job Description
1. Extensive experience with design and implementation of Big Data solutions, preferably using the Cloudera distribution
2. Working knowledge of Cloudera and Apache tools/utilities
3. Extensive hands-on implementation experience with Spark, Scala, Impala, Hive, Kafka, and Sqoop
4. Extensive knowledge of DataFrames, Datasets, and RDDs
Hadoop Developer
Posted today
Job Description
Job role - Hadoop developer
Experience - 2 to 10 years
Location - PAN INDIA
This job opportunity is a permanent role with top leading MNCs.
Role: Hadoop Developer / Module Lead
Technical skills: Hadoop, Spark, Scala, Impala, Hive, Kafka
Must-Have:
- Extensive hands-on implementation experience with Spark, Scala, Impala, Hive, Kafka, and Sqoop
Good-to-Have:
- Extensive knowledge of DataFrames, Datasets, and RDDs.
Hadoop Admin
Posted today
Job Description
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolve technical escalations and develop technical capability within the Production Specialists.
Do
- Oversee and support process by reviewing daily transactions on performance parameters
- Review performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of process/ product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot most occurring trends to prevent future problems
- Identify red flags and escalate serious client issues to Team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/ escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels of the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate the trainings to be conducted as triages to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (Triages) within products for production specialist as per target
- Inform client about the triages being conducted
- Undertake product trainings to stay current with product features, changes and updates
- Enroll in product specific and any other trainings per client requirements/recommendations
- Identify and document most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self learning opportunities and maintaining personal networks
Deliver
Performance parameters and measures:
1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team Management: Productivity, efficiency, absenteeism
3. Capability Development: Triages completed, Technical Test performance
Experience: 5-8 Years
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Hadoop Admin
Posted today
Job Description
Experience: 4 to 7 Years
Job Description: -
- 4+ years of hands-on Hadoop and system administration experience, with sound knowledge of Unix-based operating system internals.
- Working experience with Cloudera CDP/CDH and Hortonworks HDP distributions.
- Linux experience (RedHat, CentOS, Ubuntu).
- Experience in setting up and supporting Hadoop environments (cloud and on-premises).
- Ability to set up, configure, and implement security for Hadoop clusters using Kerberos.
- Ability to implement data-at-rest encryption (required) and data-in-transit encryption (optional).
- Ability to set up and troubleshoot data replication peers and policies.
- Experience in setting up services like YARN, HDFS, Zookeeper, Hive, Spark, HBase, etc.
- Willing to work in 24x7 rotating shifts, including weekends and public holidays.
- Knows the Hadoop command-line interface.
- Scripting background (shell/bash, Python, etc.) for automation and configuration management.
- Knowledge of Ranger, SSL, Atlas, etc.
- Knowledge of Hadoop data-related auditing methods.
- Excellent communication and interpersonal skills.
- Ability to work closely with the infrastructure, networking, and development teams.
- Setting up the platform using AWS cloud-native services.
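As a hedged sketch of the data-at-rest encryption item in the list above (the key name and zone path are examples only, and a Hadoop KMS must already be configured), HDFS Transparent Encryption is typically enabled per directory like this:

# Create an encryption key in the KMS, then turn a directory into an encryption zone.
hadoop key create finance_key -size 256
hdfs dfs -mkdir -p /secure/finance
hdfs crypto -createZone -keyName finance_key -path /secure/finance

# Confirm the zone; files written under it are now encrypted at rest.
hdfs crypto -listZones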
Job Type: Full-time
Pay: ₹100,000.00 - ₹900,000.00 per year
Benefits:
- Provident Fund
Work Location: In person