222 Data Scientists jobs in Chandigarh
Big Data Specialist
Posted 2 days ago
Job Description
Role Overview
We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience in building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
- Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
- Write efficient, high-performance SQL queries to extract, transform, and load data.
- Develop reusable data frameworks and utilities in Python.
- Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
- Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.
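The ETL responsibilities above can be sketched in plain Python, with sqlite3 standing in for a warehouse such as Hive. Table and column names are invented for illustration:

```python
import sqlite3

# Minimal extract-transform-load sketch. sqlite3 stands in for Hive/warehouse;
# the raw_events schema and sample rows are illustrative, not from the posting.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [(1, 10.0, "ok"), (1, 5.0, "ok"), (2, 7.5, "failed"), (2, 3.0, "ok")],
)

# Transform: keep successful events and aggregate per user.
# Load: materialize the result into a summary table.
conn.execute(
    """CREATE TABLE user_totals AS
       SELECT user_id, SUM(amount) AS total
       FROM raw_events
       WHERE status = 'ok'
       GROUP BY user_id"""
)

totals = dict(conn.execute("SELECT user_id, total FROM user_totals"))
print(totals)  # {1: 15.0, 2: 3.0}
```

The same filter-aggregate-materialize shape carries over directly to HiveQL or Spark SQL at scale.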
Must-Have Skills
- Strong hands-on experience with Hive and SQL for querying and data transformation.
- Proficiency in Python for data manipulation and automation.
- Expertise in Apache Spark (batch and streaming).
- Experience working with Kafka for streaming data pipelines.
Good-to-Have Skills
- Experience with workflow orchestration tools (e.g., Airflow).
- Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
- Familiarity with CI/CD pipelines and version control (Git).
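Orchestration tools like Airflow model a pipeline as a DAG of tasks and run them in dependency order. The same idea can be sketched with the standard library; the task names below are invented:

```python
from graphlib import TopologicalSorter

# Toy stand-in for an orchestrator such as Airflow: each task maps to the set
# of tasks it depends on, and the scheduler resolves a valid execution order.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```

A real Airflow DAG adds scheduling, retries, and backfills on top of exactly this dependency resolution.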
Big Data Engineer - Scala
Posted today
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.
Experience: 7–10 Years (Minimum 3+ years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools such as NiFi/Airflow.
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
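The streaming responsibilities above boil down to windowed aggregation over an event stream. A plain-Python sketch of a tumbling-window count, the same shape a Spark Structured Streaming job would run over a Kafka topic (events and timestamps are invented):

```python
from collections import defaultdict

# Tumbling-window count over a toy event stream. Each event is a
# (timestamp_seconds, key) pair; real pipelines read these from Kafka.
events = [(0, "a"), (1, "b"), (4, "a"), (5, "a"), (9, "b"), (11, "a")]
WINDOW = 5  # seconds per tumbling window

counts = defaultdict(int)
for ts, key in events:
    window_start = (ts // WINDOW) * WINDOW  # bucket the event into its window
    counts[(window_start, key)] += 1

print(dict(counts))
# {(0, 'a'): 2, (0, 'b'): 1, (5, 'a'): 1, (5, 'b'): 1, (10, 'a'): 1}
```

Spark adds distribution, checkpointing, and late-data handling (watermarks) around this same bucketing logic.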
Mandatory Skills
- Scala – minimum 3 years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration.
- Strong experience (5+ years) in distributed data systems.
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.
Preferred Skills
- Exposure to messaging technologies such as Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
Senior Database Administrator (Big Data)
Posted today
Job Description
POSITION SUMMARY: Performs and plans the management and administration of technology in an area of specialty. Works with team members to ensure technology is effectively supporting enterprise systems. Shares knowledge and provides mentoring to other IT associates.
ESSENTIAL FUNCTIONS:
- Analyze existing database systems and recommend improvements
- Work with the IT Manager to establish database security structures and SOX compliance
- Manage and maintain Cloudera Data Platform (CDP) clusters, ensuring high availability and performance.
- Administer and monitor Apache Kafka (open-source) clusters, including topic management, performance tuning, and security.
- Perform database administration for Oracle, MySQL, and PostgreSQL systems, including backup, recovery, and performance tuning.
- Develop and maintain automation scripts using Shell scripting and Ansible for deployment, monitoring, and configuration management.
- Collaborate with data engineers and developers to optimize data pipelines and ensure data integrity across platforms.
- Implement and maintain security best practices across all data platforms.
- Troubleshoot and resolve issues across the data stack in a timely manner.
- Fix ongoing issues, act as an individual contributor, and take full ownership of the environment.
- Create and maintain database access systems including tables, indexes, views, stored procedures, and triggers
- Administer database replication between servers, across Local Area Network and Wide Area Network
- Tune and optimize database objects
- Provide 24x7 on-call support
- Provides technical input to solution development plans and concept documents
- Demonstrates understanding of business impacts of issues and how they relate to IT owned solutions
- Communicates effectively in written and verbal form in a cross-functional setting with business partners
- Shares knowledge and actively mentors less experienced IT Administrators (Administrator 1 & 2)
- Performs and plans major version upgrades
- Design and implement effective monitoring of enterprise systems
- Tunes systems using experience and deep knowledge of area
- Foresees risks and communicates and mitigates those before problems arise
- Documents technical designs and procedures for team library
- Plans and manages technical projects involving other teams
- Communicates technology effectively to non-technical associates and management
- Recognizes and understands technology impacts within the business
- Contributes to technical research on new technologies, processes or procedures with a desire to continuously learn new technologies
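The index-tuning duties above ("create and maintain indexes", "tune and optimize database objects") can be illustrated with sqlite3 as a stand-in for Oracle/MySQL/PostgreSQL: compare the query plan before and after adding an index. The schema is invented:

```python
import sqlite3

# Index-tuning sketch: the same query goes from a full table scan to an
# index search once a covering index exists. Schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite will execute the statement;
    # the last column of each row is the human-readable plan detail.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)  # e.g. 'SCAN orders' (full table scan)
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # e.g. 'SEARCH orders USING INDEX idx_orders_customer ...'
```

The exact plan wording varies by SQLite version, but the SCAN-to-SEARCH shift is the pattern a DBA verifies on any engine, e.g. via `EXPLAIN` in MySQL/PostgreSQL or `EXPLAIN PLAN` in Oracle.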
OTHER RESPONSIBILITIES:
- Participates in project definition activities including estimation of delivery
- Perform other duties as necessary
EDUCATION, EXPERIENCE, AND SKILLS REQUIRED:
- Bachelor of Science Degree in Computer Science, Information Technology, Management Information Systems, Business or another relevant field AND a minimum of 6 years relevant experience OR equivalent combination of education and relevant experience
- Outstanding academics with the demonstrated ability to apply learned knowledge
- Fluency in English is required
- Experience with Relational or NoSQL database technologies
- Demonstrated ability to implement new technologies effectively
- Demonstrated strong and effective verbal, written, and interpersonal communication skills in a small team setting
- Consistently demonstrates quality and effectiveness in work documentation and organization
- Experience with Big Data Technologies supporting a broad variety of solutions including HDFS, Kubernetes, Kafka, Elastic, SQL Server
DESIRABLE QUALIFICATIONS:
- Superior academics
- Previous experience working in a team environment
Scala Big Data Lead Engineer - 7 YoE - Immediate Joiner - Any UST Location
Posted 13 days ago
Job Description
If you are highly interested and available immediately, please submit your resume along with your total experience, current CTC, notice period, and current location details to
Key Responsibilities:
- Design, develop, and optimize data pipelines and ETL workflows.
- Work with Apache Hadoop, Airflow, Kubernetes, and containers to streamline data processing.
- Implement data analytics and mining techniques to drive business insights.
- Manage cloud-based big data solutions on GCP and Azure.
- Troubleshoot Hadoop log files and work with multiple data processing engines for scalable data solutions.
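Troubleshooting Hadoop log files usually starts with pulling level and message out of log4j-formatted lines. A small sketch with the standard library; the sample lines are invented for illustration:

```python
import re

# Parse log4j-style lines ("timestamp LEVEL logger: message") and collect
# the ERROR messages, as a first pass at triaging a Hadoop daemon log.
LOG_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"(?P<level>[A-Z]+) (?P<logger>\S+): (?P<msg>.*)$"
)

lines = [
    "2024-05-01 10:00:00,123 INFO org.apache.hadoop.mapreduce.Job: map 100%",
    "2024-05-01 10:00:01,456 ERROR org.apache.hadoop.hdfs.DFSClient: lease expired",
]

errors = [
    m.group("msg")
    for line in lines
    if (m := LOG_RE.match(line)) and m.group("level") == "ERROR"
]
print(errors)  # ['lease expired']
```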
Required Skills & Qualifications:
- Proficiency in Scala, Spark, PySpark, Python, and SQL.
- Strong hands-on experience with the Hadoop ecosystem, Hive, Pig, and MapReduce.
- Experience in ETL, data warehouse design, and data cleansing.
- Familiarity with data pipeline orchestration tools such as Apache Airflow.
- Knowledge of Kubernetes, containers, and cloud platforms such as GCP and Azure.
If you are a seasoned big data engineer with a passion for Scala and cloud technologies, we invite you to apply for this exciting opportunity!