239 Data Scientists jobs in Kochi
Big Data Developer
Posted today
Job Description
We are seeking a highly skilled Big Data Engineer to join our growing team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines that handle high-volume financial data, including stocks, cryptocurrencies, and third-party data sources. You will play a critical role in ensuring data integrity, scalability, and real-time availability across our platforms.
Key Responsibilities:
- Design, develop, and manage end-to-end data pipelines for stocks, crypto, and other financial datasets.
- Integrate third-party APIs and data feeds into internal systems.
- Build and optimize data ingestion, storage, and transformation workflows, both batch and real-time (a minimal ingestion sketch follows this list).
- Ensure data quality, consistency, and reliability across all pipelines.
- Collaborate with data scientists, analysts, and backend engineers to provide clean, structured, and scalable datasets.
- Monitor, troubleshoot, and optimize pipeline performance.
- Implement ETL/ELT best practices, data governance, and security protocols.
- Contribute to the scalability and automation of our data infrastructure.
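For illustration only, here is a minimal sketch of the kind of real-time ingestion step such a pipeline might begin with: a consumer that validates incoming crypto trade ticks before they reach storage. It assumes the kafka-python client, and the topic name, broker address, and tick fields are invented for the example, not taken from this posting.

    from kafka import KafkaConsumer  # pip install kafka-python
    import json

    # Hypothetical topic, broker, and schema for the sake of the example.
    consumer = KafkaConsumer(
        "crypto-trades",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    REQUIRED_FIELDS = {"symbol", "price", "volume", "timestamp"}

    def is_valid(tick: dict) -> bool:
        # A basic data-quality gate: all fields present and a positive price.
        return REQUIRED_FIELDS <= tick.keys() and tick["price"] > 0

    for message in consumer:
        tick = message.value
        if is_valid(tick):
            print(f"accepted {tick['symbol']} @ {tick['price']}")
        else:
            # In a real pipeline this would go to a dead-letter queue.
            print(f"rejected malformed tick: {tick}")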
Requirements:
- Proven experience as a Big Data Engineer / Data Engineer (preferably in financial or crypto domains).
- Strong expertise in Python, SQL, and distributed data systems.
- Hands-on experience with data pipeline tools (e.g., Apache Spark, Kafka, Airflow, Flink, Prefect).
- Experience with cloud platforms (AWS, GCP, or Azure) and data warehousing (Snowflake, BigQuery, Redshift, etc.).
- Knowledge of API integrations and handling real-time streaming data.
- Familiarity with databases (relational and NoSQL) and data modeling.
- Solid understanding of stocks, cryptocurrencies, and financial data structures (preferred).
- Strong problem-solving skills with the ability to handle large-scale data challenges.
Big Data Specialist
Posted 5 days ago
Job Description
Role Overview
We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience in building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
- Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming (see the streaming sketch after this list).
- Write efficient, high-performance SQL queries to extract, transform, and load data.
- Develop reusable data frameworks and utilities in Python.
- Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
- Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.
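As a hedged sketch of the Kafka-plus-Spark-Streaming responsibility above, the example below uses PySpark Structured Streaming to parse JSON events from a Kafka topic. The topic, schema, and broker address are invented for illustration, and running it requires the spark-sql-kafka connector package at submit time.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    # Submit with the Kafka connector, e.g.
    # spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 stream.py
    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Invented event schema for the example.
    schema = StructType([
        StructField("symbol", StringType()),
        StructField("price", DoubleType()),
    ])

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "localhost:9092")
           .option("subscribe", "events")          # hypothetical topic
           .load())

    # Kafka delivers bytes; decode the value column, then parse the JSON payload.
    parsed = (raw
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Console sink for demonstration; a real job would write to a table or topic.
    query = parsed.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()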
Must-Have Skills
- Strong hands-on experience with Hive and SQL for querying and data transformation.
- Proficiency in Python for data manipulation and automation.
- Expertise in Apache Spark (batch and streaming).
- Experience working with Kafka for streaming data pipelines.
Good-to-Have Skills
- Experience with workflow orchestration tools (e.g., Airflow).
- Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
- Familiarity with CI/CD pipelines and version control (Git).
Big Data Engineer - Scala
Posted today
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.
Experience: 7–10 years (minimum 3 years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow (an illustrative DAG follows this list).
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
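To picture the NiFi/Airflow orchestration mentioned above, here is a minimal Airflow 2.x DAG sketch chaining an ingest step to a Spark job. The DAG id, schedule, and both shell commands are assumptions made purely for illustration.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily pipeline: ingest raw data, then transform it with Spark.
    with DAG(
        dag_id="daily_market_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest",
            bash_command="python ingest.py",  # placeholder ingest script
        )
        transform = BashOperator(
            task_id="transform",
            bash_command="spark-submit transform.py",  # placeholder Spark job
        )
        ingest >> transform  # transform runs only after ingest succeeds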
Mandatory Skills
- Scala – minimum 3 years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration.
- Strong experience in distributed data systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.
Preferred Skills
- Exposure to messaging technologies like Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
Scala Big Data Lead Engineer - 7 YoE - Immediate Joiner - Any UST Location
Posted 16 days ago
Job Description
If you are highly interested and available immediately, please submit your resume along with your total experience, current CTC, notice period, and current location details to
Key Responsibilities:
- Design, develop, and optimize data pipelines and ETL workflows (a cleansing sketch follows this list).
- Work with Apache Hadoop, Airflow, Kubernetes, and containers to streamline data processing.
- Implement data analytics and mining techniques to drive business insights.
- Manage cloud-based big data solutions on GCP and Azure.
- Troubleshoot Hadoop log files and work with multiple data processing engines for scalable data solutions.
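A rough sketch of the ETL-and-cleansing flavour of these responsibilities, using PySpark with Hive support. The input path, table names, and cleansing rules are invented for illustration, not part of this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, trim

    spark = (SparkSession.builder
             .appName("orders-cleanse")
             .enableHiveSupport()   # lets the job write to Hive-managed tables
             .getOrCreate())

    # Hypothetical raw input location.
    orders = spark.read.parquet("/data/raw/orders")

    # Typical cleansing steps: dedupe on the key, drop bad rows, normalise strings.
    cleansed = (orders
                .dropDuplicates(["order_id"])
                .filter(col("amount") > 0)
                .withColumn("customer_name", trim(col("customer_name"))))

    # Hypothetical warehouse target table.
    cleansed.write.mode("overwrite").saveAsTable("analytics.orders_clean")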
Required Skills & Qualifications:
- Proficiency in Scala, Spark, PySpark, Python, and SQL.
- Strong hands-on experience with the Hadoop ecosystem, Hive, Pig, and MapReduce.
- Experience in ETL, data warehouse design, and data cleansing.
- Familiarity with data pipeline orchestration tools like Apache Airflow.
- Knowledge of Kubernetes, containers, and cloud platforms such as GCP and Azure.
If you are a seasoned big data engineer with a passion for Scala and cloud technologies, we invite you to apply for this exciting opportunity!