5,793 Senior Data jobs in India
Big Data Engineer
Posted 2 days ago
Job Description
**Responsibilities:**
+ Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements
+ Identify and analyze issues, make recommendations, and implement solutions
+ Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
+ Analyze information and make evaluative judgements to recommend solutions and improvements
+ Conduct testing and debugging, utilize script tools, and write basic code for design specifications
+ Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
+ Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 3 to 5 years of relevant experience
+ Experience in programming/debugging used in business applications
+ Working knowledge of industry practice and standards
+ Comprehensive knowledge of specific business area for application development
+ Working knowledge of program languages
+ Consistently demonstrates clear and concise written and verbal communication
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Additional Job Description
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing very large data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Responsibilities
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Implementing data wrangling, scraping, and cleaning using Java or Python
- Strong experience with data structures
Skills and Qualifications
- Proficient understanding of distributed computing principles
- Proficient in Java or Python, with some exposure to machine learning
- Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, and Spark
- Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks
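The MapReduce model named in the skills list above can be sketched, purely for illustration, in framework-free Python. This is a conceptual toy of the map/shuffle/reduce phases, not Hadoop itself, and all names are illustrative:

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs from each document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle step: group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

In a real cluster, the map and reduce phases run in parallel across nodes and the shuffle moves data over the network; the logical contract, however, is exactly this three-step shape.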
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.
Big Data Engineer
Posted 2 days ago
Job Description
**Responsibilities:**
+ Design and develop Big Data applications/pipelines using Spark, Scala, SQL, PySpark, Python, and Java
+ Consult with users, clients, and other technology groups on issues, and recommend programming solutions, install, and support customer exposure systems
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 4-8 years of experience in software development, building large scale distributed data processing systems or large-scale applications
+ Designing & developing Big Data solutions with at least one end to end implementation.
+ Strong hands-on experience with the following technical skills: Apache Spark, Scala/Java, XML/JSON/Parquet/Avro, SQL, Linux, the Hadoop ecosystem (HDFS, Spark, Impala, Hive, HBase, etc.), and Kafka.
+ Performance analysis, troubleshooting, and issue resolution, plus exposure to the latest Cloudera offerings such as Ozone and Iceberg.
+ Intermediate level experience in Applications Development role
+ Consistently demonstrates clear and concise written and verbal communication
+ Demonstrated problem-solving and decision-making skills
+ Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
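As a rough illustration of the JSON-to-SQL work the skills above describe, here is a minimal Python sketch using the standard library's SQLite in place of a Big Data engine such as Hive or Impala; the records and column names are hypothetical:

```python
import json
import sqlite3

# Hypothetical sample records, standing in for a JSON feed an application might ingest.
raw = '[{"id": 1, "amount": 120.5}, {"id": 2, "amount": 75.0}, {"id": 3, "amount": 120.5}]'
records = json.loads(raw)

# Load the parsed records into a relational table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (:id, :amount)", records)

# An aggregate query of the kind such pipelines run at far larger scale.
total, = conn.execute("SELECT SUM(amount) FROM txns").fetchone()
```

The same parse-load-query shape holds whether the engine is SQLite, Hive, or Impala; only the scale and the dialect change.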
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
---
Big Data Developer
Posted today
Job Description
Role: Lead/Architect
Required Technical Skill Set: Big Data, Hadoop or Data Warehousing Tools, and Cloud Computing
Desired Experience Range: 4 to 10 Years
Location: Chennai and Bengaluru
Must-Have:
- Working experience with Hadoop, Hive SQL, Spark, and Big Data ecosystem tools.
- Should be able to tweak queries and work on performance enhancement.
- The candidate will be responsible for delivering code, setting up environment, connectivity, deploying the code in production after testing.
- The candidate should have strong functional and technical knowledge to deliver what is required and should be well acquainted with banking terminology.
- Occasionally, the candidate may have to be responsible as a primary contact and/or driver for small to medium size projects.
- The candidate should have strong DevOps and Agile Development Framework knowledge.
Good-to-Have:
- Preferable to have good technical knowledge on Cloud computing, AWS or Azure Cloud Services.
- Strong conceptual and creative problem-solving skills, ability to work with considerable ambiguity, ability to learn new and complex concepts quickly.
- Experience in working with teams in a complex organization involving multiple reporting lines
- Knowledge of BI tools such as MSTR and Tableau is an added advantage.
Big Data Developer
Posted 2 days ago
Job Description
Greetings from TCS!
TCS is hiring for- Big Data
Required Skill Set: Synapse Analytics with PySpark, Cosmos DB, Scala, Spark
Desired Experience Range: 6 to 10 Years
Job Location: Chennai, Bengaluru, Pune and Mumbai
Required Skills: Synapse Analytics with PySpark, Cosmos DB, Scala
Good to have: Agile
Responsibility of / Expectations from the Role
1. Transfer and transform data with Azure Synapse Analytics pipelines
2. Build a data pipeline in Azure Synapse Analytics
3. Work with hybrid transactional and analytical processing (HTAP) solutions using Azure Synapse Analytics
4. Plan hybrid transactional and analytical processing
5. Implement Azure Synapse Link with Azure Cosmos DB
6. Do impact analysis and come up with estimates
7. Take responsibility for end-to-end deliverables
8. Create the project plan and work on the implementation strategy
9. Handle customer communications and management reporting
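The "transfer and transform" step of a pipeline usually boils down to parsing, type-casting, and filtering incoming data. A framework-free Python sketch of such a transform, with hypothetical column names, standing in for what a Synapse pipeline activity would do at scale:

```python
import csv
import io

def transform(csv_text):
    """Toy transform step: parse CSV, cast column types, and drop malformed rows."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = []
    for row in reader:
        try:
            rows.append({"id": int(row["id"]), "amount": float(row["amount"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail type casting, as a pipeline activity might
    return rows

# One malformed row ("oops" is not a number) is silently filtered out.
sample = "id,amount\n1,10.5\n2,oops\n3,4.5\n"
clean = transform(sample)
```

In a real Synapse pipeline the same logic would typically live in a mapping data flow or a Spark notebook activity rather than hand-written Python.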
Big Data Engineer
Posted 2 days ago
Job Description
Greetings from TCS!
We are hiring for Bigdata Developer
Job location: Chennai, Bangalore, Hyderabad, Mumbai, Kolkata, Pune, Indore, Ahmedabad, Kochi, Gurgaon
Desired experience: 5 to 10 years
Required Technical Skill Set: HDFS, Hive, Spark, Sqoop, Flume, Oozie, Unix Script, Autosys
Must-Have:
- Hands-on experience in Big data, Hadoop
- Hands-on experience in Hive, Sqoop, and Spark
- Hands-on experience with data transformations, data structures, metadata, SQL, and workload management
- Coding experience in SQL
- Good communication and teamwork skills
- Ability to work independently
Responsibilities:
- As a Developer, must be able to perform requirements gathering and data analysis for the multiple business processes involved
- Development of Data ingestion framework using Spark, Hive, Sqoop tech stack
- Good analytical ability
- Good communication skills
- Good understanding of ETL and BI concepts
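One common piece of a data ingestion framework, as named in the responsibilities above, is Sqoop-style incremental import: pull only rows newer than a checkpoint column, then advance the checkpoint. A minimal Python sketch of the pattern (the `id` check column is an assumption for illustration):

```python
def incremental_ingest(source_rows, last_seen_id):
    """Sqoop-style incremental import: fetch only rows past a checkpoint, return new checkpoint."""
    new_rows = [r for r in source_rows if r["id"] > last_seen_id]
    new_checkpoint = max((r["id"] for r in new_rows), default=last_seen_id)
    return new_rows, new_checkpoint

# Simulate a source table; a previous run already ingested up to id 2.
source = [{"id": 1}, {"id": 2}, {"id": 3}, {"id": 4}]
batch, checkpoint = incremental_ingest(source, last_seen_id=2)
```

Sqoop's `--incremental append --check-column` mode implements the same idea against a real RDBMS; the checkpoint would be persisted between runs rather than passed in.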
Regards
Bodhisatwa Ray
Big Data Developer
Posted 2 days ago
Job Description
Dear Candidates,
Greetings from Tata Consultancy Services!
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there’s nothing that can stop us from growing together.
Kindly attach your updated resumes
What are we looking for?
- Role: Big Data Developer
- Experience: 4 to 10 years
- Location: Bangalore Preferably
- Notice Period: 0 to 90 days
Skills required:
- Hands-on experience with Hadoop, Python, PySpark, Hive, and Big Data ecosystem tools.
- Should be able to develop, tweak queries and work on performance enhancement.
- Solid understanding of object-oriented programming and HDFS concepts
- The candidate will be responsible for delivering code, setting up environment, connectivity, deploying the code in production after testing.
- Preferable to have good DWH/ Data Lake knowledge.
- Conceptual and creative problem-solving skills, ability to work with considerable ambiguity, ability to learn new and complex concepts quickly.
- Experience in working with teams in a complex organization involving multiple reporting lines
- The candidate should have good DevOps and Agile Development Framework knowledge.
Thanks & Regards
Malleswari M
BFSI TAG Team
Big Data Developer
Posted 5 days ago
Job Description
Greetings from TCS!
TCS is hiring for Big Data
Location: - Chennai/Mumbai/Pune
Desired Experience Range: 5 to 12 years
Must-Have:
- Working experience with Hadoop, Hive SQL, Spark, and Big Data ecosystem tools.
- Should be able to tweak queries and work on performance enhancement.
- The candidate will be responsible for delivering code, setting up environment, connectivity, deploying the code in production after testing.
- The candidate should have strong functional and technical knowledge to deliver what is required and should be well acquainted with banking terminology. Occasionally, the candidate may have to serve as the primary contact and/or driver for small to medium-size projects.
- The candidate should have strong DevOps and Agile Development Framework knowledge.
Good-to-Have
- Preferable to have good technical knowledge on Cloud computing, AWS or Azure Cloud Services.
- Strong conceptual and creative problem-solving skills, ability to work with considerable ambiguity, ability to learn new and complex concepts quickly.
- Experience in working with teams in a complex organization involving multiple reporting lines
- Solid understanding of object-oriented programming and HDFS concepts
Thanks
Anshika
Big Data Developer
Posted 5 days ago
Job Description
Job Description:
We are seeking a highly skilled Full Stack Big Data Engineer to join our team. The ideal candidate will have strong expertise in big data technologies, cloud platforms, microservices, and system design, with the ability to build scalable and efficient data-driven applications. This role requires hands-on experience across data engineering, backend development, and cloud deployment, along with a strong foundation in modern DevOps and monitoring practices.
Key Responsibilities:
- Design, build, and optimize big data pipelines using Scala, PySpark, Spark SQL, Spark Streaming, and Databricks.
- Develop and maintain real-time data processing solutions using Kafka Streams or similar event-driven platforms.
- Implement cloud-based solutions on Azure, leveraging services such as Azure Data Factory (ADF) and Azure Functions.
- Build scalable microservices with Core Java (8+) and Spring Boot.
- Collaborate on system design, including API development and event-driven architecture.
- Contribute to front-end development (JavaScript, React) as needed.
- Ensure application reliability through monitoring tools such as Grafana, New Relic, or similar.
- Utilize modern CI/CD tools (Git, Jenkins, Kubernetes, ArgoCD, etc.) for deployment and version control.
- Work cross-functionally with data engineers, software developers, and architects to deliver high-quality solutions.
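The event-driven, Kafka-Streams-style processing mentioned in the responsibilities above can be sketched with nothing but Python's standard library: a producer puts events on a queue and a consumer aggregates them per key. This is a single-process toy to show the shape of the pattern, not a substitute for Kafka:

```python
import queue
import threading

events = queue.Queue()
totals = {}

def consumer():
    """Consume events until a sentinel arrives, aggregating a running total per key."""
    while True:
        event = events.get()
        if event is None:
            break  # sentinel: no more events
        key, value = event
        totals[key] = totals.get(key, 0) + value

# Start the consumer, produce a few events, then signal shutdown.
t = threading.Thread(target=consumer)
t.start()
for e in [("clicks", 1), ("clicks", 2), ("views", 5)]:
    events.put(e)
events.put(None)
t.join()
```

In Kafka the queue becomes a partitioned, durable topic and the per-key aggregation becomes a keyed state store, but the produce/consume/aggregate shape is the same.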
Qualifications:
- 5+ years of professional experience as a Software/Data Engineer or Full Stack Engineer.
- Strong programming skills in Scala, Python, and Java.
- Experience with Databricks, Spark SQL, Spark Streaming, and PySpark.
- Hands-on experience with Azure cloud services and data engineering tools.
- Solid knowledge of microservices development with Spring Boot.
- Familiarity with event-driven platforms such as Kafka.
- Experience with CI/CD pipelines and containerization/orchestration tools.
- Strong problem-solving and communication skills.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (preferred).
Nice to Have:
- Experience with API design and event-driven architecture.
- Frontend development experience with React and JavaScript.
Big Data Engineer
Posted 5 days ago
Job Description
Coforge Ltd is Hiring for Big Data Engineer – AWS, Spark & Scala.
Must-Have Skills: AWS, Spark & Scala.
Experience Required: 3 to 6 Years
Job Locations: Pune, Hyderabad, Greater Noida Only.
Send your CV to:
For queries, contact via WhatsApp:
Key Responsibilities:-
• Design, develop, and optimize Big Data architectures leveraging AWS services for large-scale, complex data processing.
• Build and maintain data pipelines using Spark (Scala) for both structured and unstructured datasets.
• Architect and operationalize data engineering and analytics platforms (AWS preferred; Hortonworks, Cloudera, or MapR experience a plus).
• Implement and manage AWS services including EMR, Glue, Kinesis, DynamoDB, Athena, CloudFormation, API Gateway, and S3.
• Work on real-time streaming solutions using Kafka and AWS Kinesis.
• Support ML model operationalization on AWS (deployment, scheduling, and monitoring).
• Analyze source system data and data flows to ensure high-quality, reliable data delivery for business needs.
• Write highly efficient SQL queries and support data warehouse initiatives using Apache NiFi, Airflow, and Kylo.
• Collaborate with cross-functional teams to provide technical leadership, mentor team members, and strengthen the data engineering capability.
• Troubleshoot and resolve complex technical issues, ensuring scalability, performance, and security of data solutions.
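Orchestrators such as Airflow and NiFi, named in the responsibilities above, ultimately run pipeline tasks in dependency order. A minimal Python sketch of that DAG ordering (the task names are hypothetical, and the toy assumes the graph is acyclic):

```python
def topological_order(tasks):
    """Order pipeline tasks so each runs only after its dependencies (Airflow-DAG-like toy)."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in tasks[name]:  # recurse into dependencies first
            visit(dep)
        order.append(name)

    for name in tasks:
        visit(name)
    return order

# Hypothetical pipeline: extract -> transform -> validate -> load.
dag = {"extract": [], "transform": ["extract"], "validate": ["transform"], "load": ["validate"]}
run_order = topological_order(dag)
```

Airflow adds scheduling, retries, and backfills on top, but the scheduler's core contract is exactly this: never start a task before its upstream tasks have finished.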
Mandatory Skills & Qualifications:-
• Solid hands-on experience in Big Data Technologies (AWS, Scala, Hadoop, and Spark Mandatory)
• Proven expertise in Spark with Scala
• Hands-on experience with: AWS services (EMR, Glue, Lambda, S3, CloudFormation, API Gateway, Athena, Lake Formation)
Big Data Developer
Posted 5 days ago
Job Description
We are seeking a highly skilled and experienced Senior Big Data Engineer with strong expertise in streaming data and Big Data technologies (AWS and Scala mandatory). The ideal candidate will have hands-on experience in architecting and implementing scalable data solutions using modern cloud-native tools and frameworks.
Key Responsibilities:
- Architect and implement scalable data pipelines for real-time, batch, structured, and unstructured data.
- Design and develop solutions using AWS services such as EMR, Kinesis, Glue, Athena, S3, CloudFormation, and API Gateway.
- Work extensively with streaming platforms like Kafka, Flink, and Spark Streaming.
- Develop and optimize data ingestion workflows using Apache NiFi, Airflow, Sqoop, and Oozie.
- Build and maintain data lakes and analytics platforms using AWS Lake Formation.
- Hands-on development using Scala with Spark for distributed data processing.
- Work with NoSQL databases including DynamoDB and HBase, and Hadoop ecosystem tools like MapReduce, Hive, and HDFS.
- Collaborate with cross-functional teams to deliver high-performance data solutions.
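Stream processors such as Flink and Spark Streaming, listed above, commonly aggregate events into fixed-size (tumbling) time windows. A framework-free Python sketch of that idea, with illustrative event names and integer timestamps:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Assign each (timestamp, key) event to a fixed-size tumbling window and count per window."""
    windows = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size  # floor to the window boundary
        windows[(window_start, key)] += 1
    return dict(windows)

# Events as (timestamp, key); windows of 5 time units: [0,5), [5,10), [10,15).
events = [(0, "login"), (3, "login"), (7, "click"), (12, "login")]
counts = tumbling_window_counts(events, window_size=5)
```

Real engines additionally handle out-of-order events with watermarks and keep the window state distributed and fault-tolerant; the bucketing arithmetic, though, is the same.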
Technical Skills Required:
- Mandatory: Spark, Scala, AWS, Hadoop
- Big Data Tools: EMR, Glue, Hive, HDFS, HBase, MapReduce
- Streaming & Messaging: Kafka, Kinesis, Flink, Spark Streaming
- Data Ingestion: Apache NiFi, Airflow, Sqoop, Oozie
- Cloud Platforms: AWS (preferred), Hortonworks, Cloudera, MapR
- Programming: Scala (with Spark), Python (optional)
- Databases: DynamoDB, NoSQL, HBase
- Others: AWS Athena, Lake Formation, CloudFormation