360 Data Scientist jobs in Delhi
Big Data Engineer
Posted today
Job Description
Work Location : Pan India
Experience : 6+ Years
Notice Period : Immediate - 30 days
Mandatory Skills : Big Data, Python, SQL, Spark/Pyspark, AWS Cloud
JD and required Skills & Responsibilities :
Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
Solve complex business problems by utilizing a disciplined development methodology.
Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
Analyse source and target system data, and map the transformations that meet the requirements.
Interact with the client and onsite coordinators during different phases of a project.
Design and implement product features in collaboration with business and technology stakeholders.
Anticipate, identify, and solve issues concerning data management to improve data quality.
Clean, prepare, and optimize data at scale for ingestion and consumption.
Support the implementation of new data management projects and re-structure the current data architecture.
Implement automated workflows and routines using workflow scheduling tools.
Understand and use continuous integration, test-driven development, and production deployment frameworks.
Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
Analyze and profile data for the purpose of designing scalable solutions.
Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.
Required Skills :
5+ years of relevant experience developing data and analytics solutions.
Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive, and PySpark.
Experience with relational SQL.
Experience with scripting languages such as Python.
Experience with source control tools such as GitHub and related development processes.
Experience with workflow scheduling tools such as Airflow.
In-depth knowledge of AWS Cloud (S3, EMR, Databricks).
Has a passion for data solutions.
Has a strong problem-solving and analytical mindset.
Working experience in the design, development, and testing of data pipelines.
Experience working with Agile teams.
Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
Able to quickly pick up new programming languages, technologies, and frameworks.
Bachelor's degree in Computer Science.
Big Data Engineer
Posted today
Job Description
We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.
Key Responsibilities :
Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).
Model and structure data for performance, scalability, and usability.
Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.
Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.
Build and manage data lake/lakehouse architectures in alignment with best practices.
Optimize ETL performance and manage cost-effective data operations.
Collaborate closely with cross-functional teams including data science, analytics, and software engineering.
Ensure data quality, integrity, and security across all stages of the data lifecycle.
Required Skills & Qualifications :
7 to 10 years of relevant experience in big data engineering.
Advanced proficiency in Python.
Strong skills in SQL for complex data manipulation and analysis.
Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.
Proven track record of handling large-scale datasets (TBs) in production environments.
Cloud development experience with Azure (preferred), AWS, or GCP.
Solid understanding of data lake and data lakehouse architectures.
Expertise in ETL performance tuning and cost optimization techniques.
Knowledge of data structures, algorithms, and modern software engineering practices.
Soft Skills :
Strong communication skills with the ability to explain complex technical concepts clearly and concisely.
Self-starter who learns quickly and takes ownership.
High attention to detail with a strong sense of data quality and reliability.
Comfortable working in an agile, fast-changing environment with incomplete requirements.
Preferred Qualifications :
Experience with tools like Apache Airflow, Azure Data Factory, or similar.
Familiarity with CI/CD and DevOps in the context of data engineering.
Knowledge of data governance, cataloging, and access control principles.
Skills: Python, SQL, AWS, Azure, Hadoop
Sr. Data Scientists - AI/ML - Gen AI - Work Location: Across India | Exp: 4-12 years
Posted 25 days ago
Job Description
Data Scientists - AI/ML - Gen AI - Across India | Exp: 4-10 years
Data scientists with a total of around 4-10 years of experience, with relevant experience in data science, analytics, and AI/ML. Key skills: Python; data science; AI/ML; Gen AI.
Primary Skills :
- Excellent understanding and hands-on experience of data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference, and testing.
- Excellent understanding and hands-on experience building deep-learning models for text and image analytics (ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
- Proficient in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to cloud platforms (Azure/AWS).
- Experience deploying models in production.
Senior Big Data Engineer
Posted 4 days ago
Job Description
Veltris is a Digital Product Engineering Services partner committed to driving technology-enabled transformation across enterprises, businesses, and industries. We specialize in delivering next-generation solutions for sectors including healthcare, technology, communications, manufacturing, and finance.
With a focus on innovation and acceleration, Veltris empowers clients to build, modernize, and scale intelligent products that deliver connected, AI-powered experiences. Our experience-centric approach, agile methodologies, and exceptional talent enable us to streamline product development, maximize platform ROI, and drive meaningful business outcomes across both digital and physical ecosystems.
In a strategic move to strengthen our healthcare offerings and expand industry capabilities, Veltris has acquired BPK Technologies. This acquisition enhances our domain expertise, broadens our go-to-market strategy, and positions us to deliver even greater value to enterprise and mid-market clients in healthcare and beyond.
Position: Senior Big Data Engineer
Must have Big Data analytics platform experience.
• Key stacks: Spark, Druid, Drill, ClickHouse.
• 8+ years of experience in Python/Java, CI/CD, infrastructure and cloud, and Terraform, plus depth in:
o Big Data pipelines: Spark, Kafka, Glue, EMR, Hudi, Schema Registry, Data Lineage.
o Graph DBs: Neo4j, Neptune, JanusGraph, Dgraph.
Preferred Qualifications:
• Master’s degree (M.Tech/MS) or Ph.D. in Computer Science, Information Technology, Data Science, Artificial Intelligence, Machine Learning, Software Engineering, or a related technical field.
• Candidates with an equivalent combination of education and relevant industry experience will also be considered.
Disclaimer:
The information provided herein is for general informational purposes only and reflects the current strategic direction and service offerings of Veltris. While we strive for accuracy, Veltris makes no representations or warranties regarding the completeness, reliability, or suitability of the information for any specific purpose. Any statements related to business growth, acquisitions, or future plans, including the acquisition of BPK Technologies, are subject to change without notice and do not constitute a binding commitment. Veltris reserves the right to modify its strategies, services, or business relationships at its sole discretion. For the most up-to-date and detailed information, please contact Veltris directly.
GCP Big Data Engineer
Posted 11 days ago
Job Description
We are seeking an experienced GCP Big Data Engineer with 8–10 years of expertise in designing, developing, and optimizing large-scale data processing solutions. The ideal candidate will bring strong leadership capabilities, technical depth, and a proven track record of delivering end-to-end big data solutions in cloud environments.
Key Responsibilities:
- Lead and mentor teams in designing scalable and efficient ETL pipelines on Google Cloud Platform (GCP).
- Drive best practices for data modeling, data integration, and data quality management.
- Collaborate with stakeholders to define data engineering strategies aligned with business goals.
- Ensure high performance, scalability, and reliability in data systems using SQL and PySpark.
Must-Have Skills:
- GCP expertise in data engineering services (BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage).
- Strong programming skills in SQL and PySpark.
- Hands-on experience in ETL pipeline design, development, and optimization.
- Strong problem-solving and leadership skills with experience guiding data engineering teams.
Qualification:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in GCP Data Engineering preferred.