240 Data Scientist jobs in Chandigarh
Big Data Developer - Java, Big Data, Spring
Posted today
Job Description
Primary Responsibilities:
- Analyze and investigate issues within the area of expertise
- Provide explanations and interpretations within the area of expertise
- Participate in the Scrum process and deliver stories/features according to the schedule
- Collaborate with the team, architects, and product stakeholders to understand the scope and design of a deliverable
- Participate in product support activities as needed by the team
- Understand the product architecture and the features being built, and propose product improvement ideas and proofs of concept (POCs)
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Undergraduate degree or equivalent experience
- Proven experience with the big data tech stack
- Sound knowledge of Java and the Spring framework, with good exposure to Spring Batch, Spring Data, Spring Web Services, and Python
- Proficient with the big data ecosystem (Sqoop, Spark, Hadoop, Hive, HBase); see the sketch after this list
- Proficient with Unix/Linux ecosystems and shell scripting
- Proven skills in Java, Kafka, Spark, big data, and Azure
- Solid analytical, problem-solving, and communication skills
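To illustrate the kind of Spark-on-Hive work the qualifications above describe, here is a minimal PySpark sketch (written in Python, which the posting also lists, rather than Java). It is not taken from the role itself; the database, table, and column names are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal PySpark sketch: read a Hive table, aggregate, and write back.
# "claims_db.claims", "member_id", and "claim_amount" are hypothetical names.
spark = (
    SparkSession.builder
    .appName("claims-aggregation")
    .enableHiveSupport()  # assumes a Hive-enabled Spark cluster
    .getOrCreate()
)

claims = spark.table("claims_db.claims")

totals = (
    claims
    .groupBy("member_id")
    .agg(F.sum("claim_amount").alias("total_claim_amount"))
)

# Persist the result as a new Hive table; overwrite keeps re-runs idempotent.
totals.write.mode("overwrite").saveAsTable("claims_db.member_claim_totals")

spark.stop()
```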
Big Data Developer
Posted today
Job Description
Experience: 5 to 9 years
Must have Skills:
- Kotlin/Scala/Java
- Spark
- SQL
- Spark Streaming
- Any cloud (AWS preferable)
- Kafka/Kinesis/any streaming service
- Object-Oriented Programming
- Hive, ETL/ELT design experience
- CI/CD experience (ETL pipeline deployment)
- Data Modeling experience
Good to Have Skills:
- Git/similar version control tool
- Knowledge of CI/CD and microservices
Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that the optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark Streaming
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning (see the sketch after this list)
- Good understanding of object-oriented and functional programming concepts, with hands-on Kotlin/Scala/Java experience and excellent programming logic and technique
- Good experience with SQL
- Manage a team of Associates and Senior Associates and ensure utilization is maintained across the project
- Mentor new members during onboarding to the project
- Understand client requirements and be able to design, develop from scratch, and deliver
- AWS cloud experience is preferable
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to cloud data platforms (AWS preferred)
- Lead client calls to flag delays, blockers, and escalations, and to collate requirements
- Manage project timelines and client expectations, and meet deadlines
- Prior experience in project and team management roles
- Facilitate regular meetings within the team
- Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project
- Optimize, maintain, and support pipelines
- Strong analytical and logical skills
- Ability to comfortably tackle new challenges and learn
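As a rough illustration of the Spark Streaming and DataFrame skills listed above (the posting accepts Kotlin/Scala/Java; this sketch uses PySpark for brevity), here is a minimal Structured Streaming job that consumes JSON events from Kafka and aggregates them. The broker address, topic name, and event schema are hypothetical placeholders, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Minimal Structured Streaming sketch: Kafka -> parsed DataFrame -> aggregation.
# Requires the spark-sql-kafka connector on the classpath (e.g., via --packages).
spark = SparkSession.builder.appName("orders-stream").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("region", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
)

# The Kafka "value" column is binary; cast to string and parse the JSON payload.
orders = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

totals = orders.groupBy("region").agg(F.sum("amount").alias("total_amount"))

query = (
    totals.writeStream
    .outputMode("complete")  # emit the full aggregate table on each trigger
    .format("console")
    .start()
)
query.awaitTermination()
```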
Big Data Architect
Posted 1 day ago
Job Description
Job Title: Big Data Architect
Location: Bangalore/Pune
Experience: 10 to 16 years
Experienced profile with strong skills in integration data architecture, data modeling, and database design; proficient in SQL and familiar with at least one cloud platform. Good understanding of data integration and management tools (MuleSoft/IBM Sterling Integrator/Talend/Informatica). Knowledge of ETL, data warehousing, and big data technologies.
Skills Requirements:
- Strong organizational and communication skills.
- Work with the client architect; drive data architecture-related client workshops, internal meetings, proposals, etc.
- Strong understanding of NiFi architecture and components
- Experience with data formats like JSON, XML, and Avro (see the sketch after the responsibilities list)
- Knowledge of data protocols like HTTP, TCP, and Kafka
- Coach the larger team, create a data strategy and vision, and provide subject matter training
- Data governance principles and data quality, including database design, data modeling, and cloud architecture
- Familiarity with data governance and security best practices
- Knowledge of containerization and orchestration (Docker and Kubernetes)
Responsibilities:
- High-level designs, data architecture, and data pipelines for Apache NiFi and the AI-NEXT platform
- Ensure database performance, data quality, integrity, and security
- Guide the team through solution implementation
- Partner with the internal product architect, engineering, and security teams, etc.
- Support the pre-sales team on data solutions
- Optimize and troubleshoot NiFi workflows for performance, scalability, and reliability
- Collaborate with cross-functional teams to integrate NiFi with other systems, including databases, APIs, cloud services, and other backend apps
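As a brief illustration of the Avro format called out in the skills list above (not a requirement taken from the role itself), here is a minimal Python sketch using the fastavro library. The schema and records are hypothetical placeholders; in a NiFi flow, the serialized bytes would typically travel as flow file content between processors.

```python
import io
import json

import fastavro

# Hypothetical Avro schema for the sketch.
schema = fastavro.parse_schema({
    "type": "record",
    "name": "SensorReading",
    "fields": [
        {"name": "sensor_id", "type": "string"},
        {"name": "value", "type": "double"},
    ],
})

records = [
    {"sensor_id": "s-1", "value": 21.5},
    {"sensor_id": "s-2", "value": 19.8},
]

# Serialize to an in-memory Avro container file.
buf = io.BytesIO()
fastavro.writer(buf, schema, records)

# Deserialize and print as JSON to show the round trip.
buf.seek(0)
for record in fastavro.reader(buf):
    print(json.dumps(record))
```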
Sr. Data Scientists - AI/ML - Gen AI - Work location: Across India | Experience: 4-12 years
Posted 14 days ago
Job Description
Data Scientists - AI/ML - Gen AI - Across India | Experience: 4-10 years
Data scientists with around 4-10 years of total experience and at least 4-10 years of relevant experience in data science, analytics, and AI/ML. Keywords: Python; data science; AI/ML; Gen AI
Primary Skills :
- Excellent understanding of and hands-on experience with data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference and testing, etc.
- Excellent understanding of and hands-on experience with building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.)
- Proficient in coding in common data science languages and tools such as R and Python (see the sketch after this list)
- Experience with common data science toolkits, such as NumPy, pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS)
- Experience deploying models in production
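To make the toolkit expectations above concrete, here is a minimal supervised-learning sketch in Python with scikit-learn. It is a generic example using a dataset bundled with the library, not part of the posting.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal supervised-learning workflow: scale features, fit a logistic
# regression classifier, and evaluate on a held-out test split.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))
```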