363 Data Scientist jobs in Delhi
Big Data Developer - Java, Big Data, Spring
Posted 1 day ago
Job Description
Analyzes and investigates; provides explanations and interpretations within areas of expertise
Participate in the scrum process and deliver stories/features according to schedule
Collaborate with the team, architects, and product stakeholders to understand the scope and design of a deliverable
Participate in product support activities as needed by the team.
Understand the product architecture and features being built, and propose product improvement ideas and POCs
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
Undergraduate degree or equivalent experience
Proven experience using the Big Data tech stack
Sound knowledge of Java and the Spring framework, with good exposure to Spring Batch, Spring Data, Spring Web Services, and Python
Proficient with the Big Data ecosystem (Sqoop, Spark, Hadoop, Hive, HBase)
Proficient with Unix/Linux ecosystems and shell scripting
Proven Java, Kafka, Spark, Big Data, and Azure skills, with strong analytical and problem-solving ability
Solid communication skills
Big Data Developer
Posted today
Job Description
Position: Big Data Engineer
Experience: 4+ years
Location: All India (Remote) / Hyderabad (Hybrid)
Notice Period: Immediate or 7-day joiners only
Job Overview:
Must-have skills: Big Data, Scala, AWS, and Python or Java
Big Data Developer
Posted today
Job Description
Must have Skills:
Kotlin/Scala/Java
Spark
SQL
Spark Streaming
Any cloud (AWS preferable)
Kafka/Kinesis/any streaming service
Object-Oriented Programming
Hive, ETL/ELT design experience
CI/CD experience (ETL pipeline deployment)
Data Modeling experience
Good to Have Skills:
Git/similar version control tool
Knowledge in CI/CD, Microservices
Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that an optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
Good understanding of object-oriented concepts and hands-on experience in Kotlin/Scala/Java, with excellent programming logic and technique.
Good grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
Good experience in SQL.
Manage a team of Associates and Senior Associates and ensure utilization is maintained across the project.
Able to mentor new members during onboarding to the project.
Understand client requirements and be able to design, develop from scratch, and deliver.
AWS cloud experience would be preferable.
Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to cloud data platforms (AWS preferred).
Lead client calls to flag any delays, blockers, and escalations, and collate all requirements.
Manage project timelines and client expectations, and meet deadlines.
Should have played project and team management roles.
Facilitate meetings within the team on a regular basis.
Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
Optimization, maintenance, and support of pipelines.
Strong analytical and logical skills.
Ability to comfortably tackle new challenges and learn.
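The Spark core functions called out above (map, filter, reduce on RDDs) follow a functional style; as a rough single-process sketch in plain Python (no Spark cluster assumed, values purely illustrative):

```python
from functools import reduce

# Local, single-process analogy of the Spark RDD core functions named above.
# A real RDD distributes these operations across a cluster; the functional
# style (immutable input, chained transformations) is the same.
records = [3, 7, 1, 9, 4]

doubled = list(map(lambda x: x * 2, records))   # RDD.map
large = list(filter(lambda x: x > 5, doubled))  # RDD.filter
total = reduce(lambda a, b: a + b, large)       # RDD.reduce

print(large)  # [6, 14, 18, 8]
print(total)  # 46
```

In Spark the same chain would be written against an RDD or DataFrame and evaluated lazily across executors; the local version only mirrors the shape of the logic.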
Big Data Architect
Posted 1 day ago
Job Description
Location - Bangalore/ Pune
Experience: 10 to 16 years
Experienced profile with strong skills in integration data architecture, data modeling, and database design; proficient in SQL and familiar with at least one cloud platform. Good understanding of data integration and management tools (MuleSoft/IBM Sterling Integrator/Talend/Informatica). Knowledge of ETL, data warehousing, and Big Data technologies.
Skills Requirements:
Strong organizational and communication skills.
Work with client architects; drive data architecture-related client workshops, internal meetings, proposals, etc.
Strong understanding of NiFi architecture and components
Experience with data formats like JSON, XML, and Avro
Knowledge of data protocols like HTTP, TCP, and Kafka
Coach the larger team, create a data strategy and vision, and provide subject-matter training
Knowledge of data governance principles and data quality, including database design, data modeling, and cloud architecture
Familiarity with data governance and security best practices
Knowledge of containerization and orchestration (Docker and Kubernetes)
Responsibilities :
Produce high-level designs, data architecture, and data pipelines for Apache NiFi and the AI-NEXT platform
Ensure database performance, data quality, integrity, and security
Guide team for solution implementation
Partner with Internal Product architect team, engineering team, security team etc.
Support pre-sales team for Data Solution
Optimize and troubleshoot NiFi workflows for performance, scalability, and reliability
Collaborate with cross-functional teams to integrate NiFi with other systems, including databases, APIs, cloud services, and other backend apps
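As a rough illustration of the attribute-based routing that NiFi flows typically perform (in the spirit of NiFi's RouteOnAttribute processor; the function name, attribute names, and threshold below are hypothetical, not from any posting):

```python
import json

# Sketch of NiFi-style routing: each "flow file" is a JSON record routed to a
# named relationship based on its attributes. Names/thresholds are made up.
def route(flow_file: dict) -> str:
    attrs = flow_file.get("attributes", {})
    if attrs.get("schema") == "avro":
        return "avro_records"
    if int(attrs.get("size", 0)) > 1024:
        return "large_files"
    return "unmatched"

raw = '{"attributes": {"schema": "avro", "size": "2048"}}'
print(route(json.loads(raw)))  # avro_records
```

In an actual NiFi flow this decision would be expressed declaratively in processor properties rather than in code, but the routing logic is the same shape.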
Big Data Developer
Posted today
Job Description
Key Responsibilities:
- Big Data Architecture: Design, develop, and maintain scalable and distributed data architectures capable of processing large volumes of data.
- Data Storage Solutions: Implement and optimize data storage solutions using technologies such as Hadoop, Spark, and PySpark.
- PySpark Development: Develop and implement efficient ETL processes using PySpark to extract, transform, and load large datasets.
- Performance Optimization: Optimize PySpark applications for better performance, scalability, and resource management.
Qualifications:
- Proven experience as a Big Data Engineer with a strong focus on PySpark.
- Deep understanding of Big Data processing frameworks and technologies.
- Strong proficiency in PySpark for developing and optimizing ETL processes and data transformations.
- Experience with distributed computing and parallel processing.
- Ability to collaborate in a fast-paced, innovative environment.
Skills
PySpark, Big Data, Python.
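The extract-transform-load flow described in these responsibilities can be sketched in plain Python; a production version would presumably express the same steps as PySpark DataFrame operations so they run distributed (field names below are made up):

```python
# Minimal local sketch of an ETL pipeline: extract raw records, filter and
# normalise them, then load them into a target store. Illustrative only.
def extract(rows):
    # Extract: pull raw records from a source (here, an in-memory list).
    return list(rows)

def transform(rows):
    # Transform: drop records with an empty name and normalise the field.
    return [
        {**r, "name": r["name"].strip().lower()}
        for r in rows
        if r.get("name")
    ]

def load(rows, sink):
    # Load: append cleaned records to the target store; return count loaded.
    sink.extend(rows)
    return len(rows)

sink = []
raw = [{"name": "  Alice "}, {"name": ""}, {"name": "BOB"}]
n = load(transform(extract(raw)), sink)
print(n, sink)  # 2 [{'name': 'alice'}, {'name': 'bob'}]
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable and optimizable, which is the point of the "Performance Optimization" responsibility above.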
Big Data Developer
Posted 1 day ago
Job Description
Experience: 5 to 9 years
Must have Skills:
- Kotlin/Scala/Java
- Spark
- SQL
- Spark Streaming
- Any cloud (AWS preferable)
- Kafka/Kinesis/any streaming service
- Object-Oriented Programming
- Hive, ETL/ELT design experience
- CI/CD experience (ETL pipeline deployment)
- Data Modeling experience
Good to Have Skills:
- Git/similar version control tool
- Knowledge in CI/CD, Microservices
Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that an optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
- Good understanding of object-oriented concepts and hands-on experience in Kotlin/Scala/Java, with excellent programming logic and technique.
- Good grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
- Good experience in SQL.
- Manage a team of Associates and Senior Associates and ensure utilization is maintained across the project.
- Able to mentor new members during onboarding to the project.
- Understand client requirements and be able to design, develop from scratch, and deliver.
- AWS cloud experience would be preferable.
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to cloud data platforms (AWS preferred).
- Lead client calls to flag any delays, blockers, and escalations, and collate all requirements.
- Manage project timelines and client expectations, and meet deadlines.
- Should have played project and team management roles.
- Facilitate meetings within the team on a regular basis.
- Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
- Optimization, maintenance, and support of pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn.
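The Spark Streaming skill this posting asks for rests on the micro-batch idea: events are grouped into fixed time windows and each window is aggregated independently. A minimal local sketch (window size, keys, and event times are invented for illustration):

```python
from collections import defaultdict

# Single-process sketch of windowed aggregation, the core pattern behind
# Spark Streaming micro-batches. A real job would do this over an unbounded
# stream with watermarks; here a finite list stands in for the stream.
WINDOW_SECONDS = 10

def window_counts(events):
    """events: iterable of (epoch_seconds, key) pairs -> per-window counts."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % WINDOW_SECONDS)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1, "click"), (4, "click"), (11, "view"), (12, "click")]
print(window_counts(events))
# {(0, 'click'): 2, (10, 'view'): 1, (10, 'click'): 1}
```

In Spark the equivalent would be a `groupBy` over a window function on an event-time column; the local version only shows the bucketing arithmetic.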