Big Data Analytics Developer
Posted today
Job Description
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
- Develop and support scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models
Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 4+ years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
Technical Skills (Must Have)
- ETL: Hands-on experience building data pipelines. Proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend, or Informatica
- Big Data: Exposure to 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of the underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
- File Formats: Experience working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Basics of job schedulers such as Autosys; basics of entitlement management
Certification on any of the above topics would be an advantage.
Job Family Group:
Technology
Job Family:
Applications Development
Time Type:
Full time
Most Relevant Skills
Please see the requirements listed above.
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
View Citi's EEO Policy Statement and the Know Your Rights poster.
Big Data Engineer
Posted 4 days ago
Job Description
**Responsibilities:**
+ Design and develop Big Data applications/pipelines using Spark, Scala, SQL, PySpark, Python, and Java
+ Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 4-8 years of experience in software development, building large scale distributed data processing systems or large-scale applications
+ Designing and developing Big Data solutions, with at least one end-to-end implementation
+ Strong hands-on experience with the following technical skills: Apache Spark, Scala/Java, XML/JSON/Parquet/Avro, SQL, Linux, the Hadoop ecosystem (HDFS, Spark, Impala, Hive, HBase, etc.), and Kafka
+ Performance analysis, troubleshooting, and issue resolution; exposure to the latest Cloudera offerings such as Ozone and Iceberg
+ Intermediate level experience in Applications Development role
+ Consistently demonstrates clear and concise written and verbal communication
+ Demonstrated problem-solving and decision-making skills
+ Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
Big Data Developer
Posted 1 day ago
Job Description
Job Title: Senior Big Data Engineer (Python)
Experience: 5+ Years
Locations: Chennai, Bangalore, Gurugram
Employment Type: Full-Time
Notice Period: Immediate to 30 Days Preferred
Job Summary
We are seeking an experienced Big Data Engineer with strong expertise in Python to design, build, and manage large-scale data pipelines and analytics solutions. The ideal candidate will have hands-on experience working with Big Data technologies and cloud platforms, and a passion for writing efficient, scalable, and maintainable code.
Key Responsibilities
- Design, develop, and maintain scalable big data pipelines using Python and other data processing tools.
- Work with distributed data processing frameworks like Spark, Hadoop, Hive, or similar.
- Implement ETL processes for structured and unstructured data from various sources.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs.
- Optimize data workflows and ensure data quality, integrity, and security.
- Deploy solutions on cloud platforms (AWS/GCP/Azure) and automate data workflows.
- Monitor data pipelines, troubleshoot issues, and ensure high availability and performance.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 5 years of hands-on experience in Big Data engineering roles.
- Strong proficiency in Python for data processing and scripting.
- Experience with Apache Spark, the Hadoop ecosystem (Hive, HDFS, HBase), and Kafka.
- Solid understanding of data warehousing concepts and data modeling.
- Hands-on experience with SQL and NoSQL databases.
- Familiarity with cloud data platforms such as AWS (EMR, S3, Glue), Azure, or Google Cloud.
- Good understanding of CI/CD, containerization (Docker, Kubernetes), and version control (Git).
Preferred Skills (Nice to Have)
- Experience with Airflow or other workflow orchestration tools.
- Knowledge of real-time data processing (e.g., Spark Streaming, Flink).
- Exposure to data governance, data lineage, and catalog tools.
Why Join Us?
- Opportunity to work on cutting-edge Big Data solutions with global clients.
- Collaborative and innovation-driven work culture.
- Competitive compensation and career growth opportunities.
- Flexible work environment with hybrid options (based on role).
Big Data Developer
Posted 2 days ago
Job Description
Job Title: Big Data Developer (Java/Python)
Location: Chennai, Bangalore, Gurugram
Experience Required: 5+ years (5 to 15 yrs)
Joining: Immediate or Early Joiners Preferred
Employment Type: Full-time
Job Summary:
We are looking for a passionate and experienced Big Data Developer with expertise in either Java or Python to join our dynamic team. The ideal candidate will have a strong background in designing and implementing large-scale data processing systems and a solid understanding of modern data technologies. Candidates who are available to join immediately or at short notice will be given preference.
Key Responsibilities:
- Design, develop, and maintain scalable Big Data solutions using Hadoop ecosystem, Spark, and other distributed frameworks.
- Build and optimize data pipelines for batch and real-time data processing.
- Collaborate with data scientists, analysts, and other developers to integrate data-driven solutions into production.
- Write efficient, testable, and reusable code using Java or Python.
- Work closely with DevOps teams to deploy and monitor applications on cloud/on-prem infrastructure.
- Ensure data integrity, security, and performance tuning of large-scale data systems.
Technical Skills:
- Strong programming skills in Java or Python (both a plus).
- Hands-on experience with Big Data technologies such as Hadoop, Hive, HDFS, Spark, Kafka, etc.
- Familiarity with data modeling, ETL pipelines, and data warehousing concepts.
- Good understanding of SQL and NoSQL databases.
- Experience with cloud platforms such as AWS, GCP, or Azure is a plus.
- Knowledge of CI/CD tools and containerization (Docker/Kubernetes) is desirable.
Required Qualifications:
- Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 5 years of experience in Big Data development with strong coding background in Java or Python.
- Strong problem-solving skills and ability to work independently or in a team.
- Excellent communication and collaboration skills.
Nice to Have:
- Experience with data lakes, lakehouses, or real-time analytics.
- Exposure to tools like Airflow, NiFi, or similar workflow orchestration tools.
Why Join Us?
- Opportunity to work with cutting-edge Big Data technologies.
- Collaborative and innovative work environment.
- Competitive compensation and benefits.
- Immediate onboarding for early joiners.
Big Data Developer
Posted 2 days ago
Job Description
- 4+ years of hands-on development experience in programming languages such as Java and Scala, using Maven, Apache Spark frameworks, and Unix shell scripting
- Should be comfortable with the Unix file system as well as HDFS commands
- Should have worked with query languages and stores such as Oracle SQL, Hive SQL, Spark SQL, Impala, and HBase; should be flexible
- Should have good communication and customer management skills
- Should have knowledge of big data ingestion tools such as Sqoop and Kafka, and be aware of the components of the Big Data ecosystem. Should have worked on building projects using Eclipse IDE, Tectia Client, and Oracle SQL Developer
Big Data Developer
Posted today
Job Description
Job Title: Developer
Work Location: Chennai TN
Skill Required: Digital: BigData and Hadoop Ecosystems
Experience Range in Required Skills: 4-6 years
Job Description: BigData and Hadoop
Essential Skills:
BigData and Hadoop
Big Data Engineer_C
Posted today
Job Description
Hi All,
Skill: Bigdata Engineer
Exp: 6-9 Years
Location: Pune, Chennai
F2F interview on 19th Jul 2025. Interested candidates, please send your updated resume.
Mandatory Skills: PySpark, Spark, Python, GCP, Scala, SQL, Hadoop, Hive, AWS
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL workflows using PySpark, Hadoop, and Hive.
- Deploy and manage big data workloads on cloud platforms like GCP and AWS.
- Work closely with cross-functional teams to understand data requirements and deliver high-quality solutions.
- Optimize data processing jobs for performance and cost-efficiency on cloud infrastructure.
- Implement automation and CI/CD pipelines to streamline deployment and monitoring of data workflows.
- Ensure data security, governance, and compliance in cloud environments.
- Troubleshoot and resolve data issues, monitoring job executions and system health.
Mandatory Skills:
- PySpark: Strong experience in developing data processing jobs and ETL pipelines.
- Google Cloud Platform (GCP): Hands-on experience with BigQuery, Dataflow, Dataproc, or similar services.
- Hadoop Ecosystem: Expertise with Hadoop, Hive, and related big data tools.
- AWS: Familiarity with AWS data services like S3, EMR, Glue, or Redshift.
- Strong SQL and data modeling skills.
Good to Have:
- Experience with CI/CD tools and DevOps practices (Jenkins, GitLab, Terraform, etc.).
- Containerization and orchestration knowledge (Docker, Kubernetes).
- Experience with Infrastructure as Code (IaC).
- Knowledge of data governance and data security best practices.
Big Data Lead
Posted today
Job Description
Investor Services, a leading business line within Citi Services, offers the full spectrum of capabilities to clients, including Custody, Fund Accounting, Investment Accounting, Fund Administration, Middle Office Services, Performance and Risk Analytics, Transfer Agency, and Securities Lending across multiple jurisdictions.
Investor Services has embarked on significant business growth through digital transformation and technology investments. To meet the objectives of the business, Investor Services Technology is undergoing an exciting platform modernization journey focused on improving agility and scalability and simplifying the architecture. Program execution has been underway since 2022, with transformational leaders dedicated to this multiyear program and a move to a next-generation architecture on a cloud-native platform.
The Applications Development Technology Senior Data Lead Analyst is a senior-level position responsible for establishing and implementing a canonical data architecture and driving data governance in coordination with the Technology Team. The overall objective of this role is to lead data analysis and programming activities for a suite of applications across Investor Services, and to drive standardization and modernization of data strategy and architecture across Services.
Responsibilities:
- Design and implement Spark, Hive, and Scala pipelines using the Medallion model
- Architect data integration across custody platforms
- Embed automation in metadata, data dictionary, data catalogues, and quality checks
- Ensure reliability, cost optimization, and data governance
- Mentor engineers and stay hands-on with architecture reviews
- Code Quality and Standards: Ensure application design adheres to the overall architecture blueprint. Develop and enforce standards for coding, testing, debugging, and implementation. Conduct code reviews to ensure code quality and compliance with standards.
- Collaboration and Communication: Collaborate with cross-functional teams, including architects, infrastructure engineers, and business analysts, to deliver integrated solutions. Consistently demonstrate clear and concise written and verbal communication.
- Mentoring and Coaching: Serve as an advisor or coach to mid-level developers and analysts, allocating work as necessary and providing guidance on technical best practices.
- Risk Management: Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets. Drive compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.
- DevOps Practices: Implement and promote DevOps practices, including continuous integration, continuous delivery, and automated testing.
- Containerization and Orchestration: Utilize Openshift for container orchestration, ensuring applications are scalable, resilient, and easily deployed.
- Version Control: Manage source code using GitHub, following established branching strategies and code review processes.
Skills:
- Big Data Engineering: Hive, Spark, Scala, Delta Lake, performance tuning
- Data Architecture: Medallion, Data Mesh, multi-zone Data Lake
- GenAI for Data: metadata, test data, code gen, lineage
- Cloud & DevOps: AWS/Azure, GitHub Actions, Airflow
- Data Governance: schema evolution, contracts, observability
Education:
- Bachelor's or Master's in CS, Data Engineering, or related field
- 12+ years in data engineering, 5+ in financial services (Custody preferred)
-
Job Family Group:
Technology
-
Job Family:
Applications Development
-
Time Type:
Full time
Big Data Engineer
Posted today
Job Description
The Applications Development Intermediate Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
Responsibilities:
- Design and develop Big Data applications/pipelines using Spark, Scala, SQL, PySpark, Python, and Java
- Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications:
- 4-8 years of experience in software development, building large scale distributed data processing systems or large-scale applications
- Designing and developing Big Data solutions, with at least one end-to-end implementation
- Strong hands-on experience with the following technical skills: Apache Spark, Scala/Java, XML/JSON/Parquet/Avro, SQL, Linux, the Hadoop ecosystem (HDFS, Spark, Impala, Hive, HBase, etc.), and Kafka
- Performance analysis, troubleshooting, and issue resolution; exposure to the latest Cloudera offerings such as Ozone and Iceberg
- Intermediate level experience in Applications Development role
- Consistently demonstrates clear and concise written and verbal communication
- Demonstrated problem-solving and decision-making skills
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
Education:
- Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
-
Job Family Group:
Technology
-
Job Family:
Applications Development
-
Time Type:
Full time