23,936 Senior Data Engineer jobs in India

Big Data Engineer

Pune, Maharashtra Citigroup

Posted 3 days ago

Job Description

The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
**Responsibilities:**
+ Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements
+ Identify and analyze issues, make recommendations, and implement solutions
+ Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
+ Analyze information and make evaluative judgements to recommend solutions and improvements
+ Conduct testing and debugging, utilize script tools, and write basic code for design specifications
+ Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
+ Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 3 to 5 years of relevant experience
+ Experience in programming/debugging used in business applications
+ Working knowledge of industry practice and standards
+ Comprehensive knowledge of specific business area for application development
+ Working knowledge of program languages
+ Consistently demonstrates clear and concise written and verbal communication
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Additional Job Description
We are looking for a Big Data Engineer to work on collecting, storing, processing, and analyzing huge data sets. The primary focus will be choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Responsibilities
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Implementing data wrangling, scraping, and cleaning using Java or Python
- Strong experience with data structures
Skills and Qualifications
- Proficient understanding of distributed computing principles
- Proficient in Java or Python, with some exposure to machine learning
- Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, Spark
- Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks
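The distributed-computing fundamentals this role asks for (MapReduce in particular) can be pictured with a minimal single-machine sketch. This is a conceptual model only, not Hadoop code, and the sample documents are invented for illustration:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(mapped_pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Simulate running the job over a small "cluster" of documents.
documents = ["big data big pipelines", "data pipelines at scale"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
result = reduce_phase(shuffle_phase(mapped))
print(result["big"], result["data"], result["pipelines"])  # 2 2 2
```

In a real Hadoop or Spark job the map and reduce functions look much the same; the framework supplies the distributed shuffle, fault tolerance, and data locality.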
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.

Big Data Engineer

Pune, Maharashtra Nice Software Solutions Pvt. Ltd.

Posted 4 days ago

Job Description

Big Data Engineer (PySpark)

Location: Pune/Nagpur (WFO)

Experience: 8 - 12 Years

Employment Type: Full-time


Job Overview

We are looking for an experienced Big Data Engineer with strong expertise in PySpark and Big Data ecosystems. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines while ensuring high performance and reliability.


Key Responsibilities

  • Design, develop, and maintain data pipelines using PySpark and related Big Data technologies.
  • Work with HDFS, Hive, Sqoop, and other tools in the Hadoop ecosystem.
  • Write efficient HiveQL and SQL queries to handle large-scale datasets.
  • Perform performance tuning and optimization of distributed data systems.
  • Collaborate with cross-functional teams in an Agile environment to deliver high-quality solutions.
  • Manage and schedule workflows using Apache Airflow or Oozie.
  • Troubleshoot and resolve issues in data pipelines to ensure reliability and accuracy.
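Airflow and Oozie, mentioned above, both model a pipeline as a DAG of dependent tasks. As a rough conceptual sketch (this is not Airflow's API; the task names are hypothetical), dependency-ordered execution looks like this:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline stages; in Airflow these would be operators in a DAG.
def extract():
    return "raw rows"

def clean():
    return "clean rows"

def load():
    return "loaded"

# Each entry reads "task: set of tasks it depends on".
dependencies = {
    "extract": set(),
    "clean": {"extract"},
    "load": {"clean"},
}
tasks = {"extract": extract, "clean": clean, "load": load}

# The scheduler's core job: run tasks only after their upstreams finish.
order = list(TopologicalSorter(dependencies).static_order())
results = {name: tasks[name]() for name in order}
print(order)  # ['extract', 'clean', 'load']
```

An orchestrator adds retries, scheduling intervals, and monitoring on top of exactly this ordering idea.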


Required Skills

  • Proven experience in Big Data Engineering with a focus on PySpark.
  • Strong knowledge of HDFS, Hive, Sqoop, and related tools.
  • Proficiency in SQL/HiveQL for large datasets.
  • Expertise in performance tuning and optimization of distributed systems.
  • Familiarity with Agile methodology and collaborative team practices.
  • Experience with workflow orchestration tools (Airflow/Oozie).
  • Strong problem-solving, analytical, and communication skills.


Good to Have

  • Knowledge of data modeling and data warehousing concepts.
  • Exposure to DevOps practices and CI/CD pipelines for data engineering.
  • Experience with other Big Data frameworks such as Spark Streaming or Kafka .
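Spark Streaming, listed above, processes unbounded event data in fixed windows. The core tumbling-window idea can be sketched without Spark (event data and column meanings here are invented for illustration):

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count keys per window -- the same idea Spark Streaming applies
    to micro-batches, shown here on a plain list."""
    windows = {}
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        windows.setdefault(window_start, Counter())[key] += 1
    return windows

# Hypothetical click events: (epoch seconds, page)
events = [(0, "home"), (3, "home"), (7, "cart"), (11, "home")]
counts = tumbling_window_counts(events, window_seconds=5)
print(counts)  # {0: Counter({'home': 2}), 5: Counter({'cart': 1}), 10: Counter({'home': 1})}
```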


Big Data Engineer

Bengaluru, Karnataka Benison Technologies

Posted today

Job Description

Job Roles and Responsibilities: 
What You’ll Do:
You are an experienced Software Engineer, Data/Data Engineer who's passionate about developing products that are simple, intuitive, and beautiful. You will make an impact on HG's core data platform and processing engine using modern Big Data techniques, particularly in enterprise B2B applications. We are a small, engineering-focused team delivering innovative features in a fast-growing market. We build Big Data systems using cutting-edge technologies and solutions that allow our developers to learn and grow while shipping amazing products.
You will be involved in building data pipelines at a large scale to enable business teams to work with data and analyze metrics that support and drive the business. You will partner with cross functional teams to identify opportunities and continuously develop and improve processes for efficiency.
What You'll Be Responsible For:
  • Collaborating with Product Development Teams to build the most effective solutions
  • Developing and enhancing features in our databases, backend apps, front end UI, and Data as a Service (DAAS) product
  • Building data pipelines to support our data ingesting, cleansing, enriching, and presentation efforts in support of our flagship SaaS applications
  • Authoring and scheduling workflows monitored by Airflow
  • Responsible for the design and architecture of data collection
  • Collaborating with cross-functional teams to design and implement impactful solutions to department and business problems
  • Support end users and teammates on code-related questions and issues
  • Attend daily stand-up meetings, planning sessions, encourage others, and collaborate at a rapid pace
What You’ll Need:
  • BS or MS in Computer Science or a related technical discipline
  • 2+ years of experience in data engineering, or an equivalent combination of education and experience
  • Experience using Scala/Java, Spark, Airflow, and Hadoop is essential for success in this position
  • 2 or more years of designing and programming in a work setting
  • Advanced experience in SQL of any flavor (MySQL, Postgres, Snowflake, SQL Server, etc.)
  • Proficient in Java or Scala (understand and have real-world experience with design patterns)
  • Experience with Amazon Web Services (EC2, S3, RDS, EMR, ELB etc.) or similar cloud platforms
  • Experience with web services using REST
  • Comfortable working with CI/CD and automation environments such as Docker, Kubernetes, Terraform or similar
  • Understand pragmatic agile development practices
  • Proven track record of successful project delivery

    Big Data Engineer

    Chennai, Tamil Nadu ADCI MAA 12 SEZ

    Posted today

    Job Description

    Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer in Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and are able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one of the programming languages such as Java, C++, Spark/Scala, Python, etc.

    Major Responsibilities:
    - Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support business
    - Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines on distributed data processing platforms
    - Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
    - Keep up to date with big data technologies, evaluate and make decisions around the use of new or existing software products to design the data architecture
    - Design, build and own all the components of a high-volume data warehouse end to end.
    - Provide end-to-end data engineering support for project lifecycle execution (design, execution and risk assessment)
    - Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
    - Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
    - Own the functional and nonfunctional scaling of software systems in your ownership area.
    - Implement big data solutions for distributed computing.
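As a toy model of the extract-transform-load responsibilities above (the CSV feed, schema, and table name are invented for illustration, with SQLite standing in for a warehouse such as Redshift), the three stages can be sketched with the standard library:

```python
import csv
import io
import sqlite3

# Extract: parse a CSV feed (an in-memory string stands in for a source system).
raw = "order_id,amount\n1,19.99\n2,5.00\n3,\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: drop incomplete records and cast types.
clean = [(int(r["order_id"]), float(r["amount"])) for r in rows if r["amount"]]

# Load: write into a warehouse table and verify with an aggregate query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", clean)
total = round(db.execute("SELECT SUM(amount) FROM orders").fetchone()[0], 2)
print(total)  # 24.99
```

Production pipelines replace each stage with distributed equivalents (S3 reads, Spark transforms, Redshift loads) but keep the same shape, which is why data-quality checks like the row filter above belong in the transform step.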



    Key job responsibilities
    As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, driving the database design, and spearheading the best practices to deliver high-quality products.

    About the team
    Profit Intelligence systems measure and predict the true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day?

    We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers, who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks.

    We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

    BASIC QUALIFICATIONS

    - 3+ years of data engineering experience
    - Experience with data modeling, warehousing and building ETL pipelines
    - Experience with SQL

    PREFERRED QUALIFICATIONS

    - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
    - Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

    Our inclusive culture empowers Amazonians to deliver the best results for our customers.

    Big Data Engineer

    Chennai, Tamil Nadu Saaki Argus & Averil Consulting

    Posted today

    Job Description

    Hiring for Big Data:


    LOCATION: Chennai, Bengaluru, Hyderabad.


    EXPERIENCE: 7-10 years


    Notice Period: Immediate Joiner or 30 Days


    Key Skills


    • Hands-on experience with technologies like Python, SQL, Snowflake, HDFS, Hive, Scala, Spark, AWS, HBase, and Cassandra.
    • Good knowledge of Data Warehousing concepts.
    • Proficient in Hadoop distributions such as Cloudera and Hortonworks.
    • Good working experience with technologies like Python, Scala, SQL & PL/SQL.
    • Design and build the foundational architecture to manage massive-scale data storage, processing, and analysis using distributed, cloud-based systems and platforms.
    • Coding Big Data pipelines.
    • Managing Big Data infrastructure and pipelines.


    Big Data Engineer

    Mumbai, Maharashtra Confidential

    Posted today

    Job Description

    Job Summary

    Exciting Opportunity Alert!

    We're on the hunt for passionate individuals to join our dynamic team as Data Engineers.

    Job Profile: Data Engineers

    Experience: Minimum 5 to maximum 8 years

    Location: Chennai / Pune

    Mandatory Skills: Big Data | Hadoop | PySpark | Spark | SparkSQL | Hive


    Skills Required
    Big Data, Hadoop, Pyspark, Spark, Sparksql, Hive

    Big Data Engineer

    Navi Mumbai, Maharashtra Confidential

    Posted today

    Job Description

    Description

    We are seeking a skilled Big Data Engineer to join our team in India. The ideal candidate will be responsible for designing and implementing robust big data solutions that enable data-driven decision-making. You will work collaboratively with other teams to analyze vast amounts of data, ensuring high-quality data processing and storage.

    Responsibilities
    • Design, develop, and maintain scalable big data solutions.
    • Analyze large datasets to derive actionable insights.
    • Implement data pipelines to process and transform data from various sources.
    • Collaborate with data scientists and analysts to optimize data usage.
    • Ensure data integrity and quality through rigorous testing and validation.
    • Monitor and troubleshoot data systems for performance issues.
    • Stay up to date with emerging technologies and best practices in big data.
    Skills and Qualifications
    • Bachelor's degree in Computer Science, Engineering, or a related field.
    • 3-7 years of experience in big data technologies such as Hadoop, Spark, and Kafka.
    • Proficiency in programming languages such as Java, Python, or Scala.
    • Strong knowledge of SQL and NoSQL databases (e.g., MongoDB, Cassandra).
    • Experience with data warehousing solutions and ETL processes.
    • Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
    • Ability to work in a fast-paced environment and manage multiple priorities.

    Education
    Bachelor Of Computer Application (B.C.A), Master in Computer Application (M.C.A), Post Graduate Diploma in Computer Applications (PGDCA), Bachelor Of Technology (B.Tech/B.E)
    Skills Required
    Hadoop, Spark, Scala, Python, Sql, Nosql, Data Modeling, Etl, Cloud Services, Data Warehousing

    Big Data Engineer

    Delhi, Delhi Confidential

    Posted today

    Job Description

    Description

    We are seeking a skilled Big Data Engineer to join our dynamic team in India. The ideal candidate will be responsible for developing and maintaining our big data infrastructure, ensuring efficient data processing and analytics. The role requires a strong understanding of big data technologies, programming skills, and the ability to work collaboratively in a fast-paced environment.

    Responsibilities
    • Design, develop, and maintain scalable data pipelines and architectures to handle large volumes of data.
    • Collaborate with data scientists and analysts to understand data requirements and provide data solutions.
    • Implement ETL processes to extract, transform, and load data from various sources into data warehouses and data lakes.
    • Optimize and tune existing data systems for performance and efficiency.
    • Ensure data quality and integrity by implementing appropriate validation and monitoring processes.
    • Stay updated with emerging technologies in big data and recommend best practices to enhance data processes.
    Skills and Qualifications
    • 4-8 years of experience in big data technologies and data engineering roles.
    • Proficiency in Hadoop ecosystem (HDFS, MapReduce, Hive, Pig) and distributed computing frameworks.
    • Strong programming skills in languages such as Java, Scala, or Python.
    • Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
    • Familiarity with NoSQL databases like MongoDB, Cassandra, or HBase.
    • Knowledge of data modeling techniques and data architecture principles.
    • Experience with data pipeline orchestration tools such as Apache Airflow or Apache NiFi.
    • Strong analytical and problem-solving skills, with a focus on data-driven decision-making.

    Education
    Master in Computer Application (M.C.A), Post Graduate Diploma in Computer Applications (PGDCA), Bachelor Of Technology (B.Tech/B.E), Bachelor Of Computer Application (B.C.A)
    Skills Required
    Hadoop, Spark, Kafka, Nosql, Etl, Sql, Python, Data Warehousing, Aws, Docker
