343 Data Scientist jobs in Delhi
Data Scientist
Posted 3 days ago
Job Description
The data specialist is responsible for managing Customer Care EMEA data analysis and reporting to measure and track customer care performance in the direct and indirect markets. In addition, this individual will use high-level critical thinking and experience to shape and deliver improvements to current data capture and analysis (both within and beyond CRM and Power BI) to ensure continuous improvement and increased visibility and accuracy in KPI management.
KEY RESPONSIBILITIES
+ Proactively analyse data, identify trends to anticipate potential risks and opportunities, and provide actionable insights to department leaders.
+ Data Management: Collect, organize, and maintain large datasets from various sources, ensuring data accuracy and integrity.
+ Data Analysis: Analyze data using statistical methods and data mining techniques to extract valuable insights and trends.
+ Reporting: Create and present data reports and visualizations to communicate findings to both technical and non-technical stakeholders.
+ Collaboration: Work closely with cross-functional teams, including IT, marketing, and finance, to ensure effective data utilization and support business objectives.
+ Data Quality Assurance: Conduct regular audits and validations of data to maintain high-quality standards and compliance with data governance policies.
REQUIREMENTS FOR THIS POSITION
+ Expert in Excel and Power BI (with programming skills) and a strong affinity for systems.
+ Proficiency with a CRM system at the Administrator level or above, demonstrated by either a company-issued certificate or a minimum of 3 years' hands-on experience.
+ Ability to plan, multi-task and manage time effectively
+ Must be self-motivated and creative, with a sense of urgency, a positive outlook, and a focus on quality work; collaborative and able to meet the demands of a fast-paced organization.
+ Maintains attention to detail and accuracy.
+ Data-driven and analytical: must enjoy data mining and digging into numbers, and be capable of translating customer-focused problems into data-driven solutions.
+ Possesses strong analytical skills, pays close attention to detail, and is a strong team player (no job is too small or too large) with an exceptional work ethic.
+ Excellent problem-resolution skills, strong written and verbal communication skills, and a consistently high level of professional conduct.
+ Great data visualization and presentation skills to provide recommendations based on data insights.
+ Communication Skills: Ability to effectively communicate complex data insights to diverse audiences, including non-technical stakeholders.
Education
+ A bachelor's degree in business, computer science, data analytics, statistics, or a related field
Language
+ Fluent English
IT and Office Technology
+ Advanced proficiency with Microsoft Excel, Word, Outlook, and PowerPoint
+ Proficient with online communication platforms (MS Teams, Zoom)
+ Experience using MS Whiteboard or Miro
Join our winning team today. Together, we'll accelerate the real-life impact of tomorrow's science and technology. We partner with customers across the globe to help them solve their most complex challenges, architecting solutions that bring the power of science to life.
For more information, visit .
Data Scientist
Posted today
Job Description
JD- DATA SCIENTIST
Work experience: Juniors 0-2 years; Associates 2-4 years
Qualification: Master's/Bachelor's in Computer Science, Data Science, Mathematics, Statistics, or related disciplines.
Job Brief:
In this role, you should be highly analytical, with a knack for analysis, math, and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Must have Skills:
● Prior experience working with Flask-based APIs for backend development
● Extensive knowledge of ML frameworks, libraries, data structures, data modeling, and software
architecture
● Outstanding communication and organizational skills
● Strong project management skills
● High proficiency in using Python/Spark, Hadoop platforms, Tableau, SQL, NoSQL, SAS or R
● Experience in using services like Azure, Databricks, AWS, Snowflake
● Proficient with Numpy, Pandas, Matplotlib, Scipy, Web Scraping, Data Structures, Sklearn, NLP, Deep
Learning
● Strong mathematical and statistical skills
● Experience deploying scalable ML solutions on AWS or Azure.
● Knowledge of PyTorch/TensorFlow, Kubernetes, Docker.
● Experience with the LLM stack (LangChain, LangSmith, UpTrain, LlamaIndex, Haystack) and vector databases (FAISS, Pinecone, OpenSearch)
● Experience taking NLP and image-analysis solutions into production.
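The Flask API requirement above can be illustrated with a minimal inference endpoint. This is a hedged sketch: the route name, payload shape, and placeholder predictor are illustrative assumptions, not any employer's actual service.

```python
# Minimal Flask-based inference API sketch. The /predict route, the JSON
# payload shape, and predict_one() are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_one(features):
    """Stand-in for model.predict(); a real service would load a trained model."""
    return sum(features)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload.get("features", [])
    return jsonify({"prediction": predict_one(features)})
```

A real service would load a serialized model once at startup (e.g., with joblib) and call its `predict` method inside the route; run locally with `flask --app app run` if the file is saved as `app.py`.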
Good to have skills:
● Selecting features, building and optimizing classifiers using machine learning techniques
● Data mining using state-of-the-art methods
● Extending company’s data with third party sources of information when needed
● Enhancing data collection procedures to include information that is relevant for building analytic
systems
● Processing, cleansing, and verifying the integrity of data used for analysis
● Ability to work on time-bound projects
Data Scientist
Posted today
Job Description
Role: Data Scientist, Delhi (Gurugram), India - Salary Band P2
Fiddlehead Technology is a Canadian leader in advanced analytics and AI-driven solutions, helping global companies unlock value from their data. We specialize in applying machine learning, predictive forecasting, and Generative AI to solve complex business problems and empower smarter decision-making.
Our culture thrives on innovation, collaboration, and continuous learning. We invest in our people by offering structured opportunities for professional development, a healthy work-life balance, and exposure to cutting-edge AI/ML projects across industries. At Fiddlehead, employees are encouraged to explore, create, and grow while contributing to high-impact solutions.
Fiddlehead Technology is a data science company with over 10 years of experience helping
consumer-packaged goods (CPG) companies harness the power of machine learning and
AI. We transform data into actionable insights, building predictive models that drive
efficiency, growth, and competitive advantage. With increasing demand for our solutions,
we’re expanding our global team.
We are seeking Data Scientists to collaborate with our Canadian team in developing
advanced forecasting models and optimization algorithms for leading CPG manufacturers
and service providers. In this role, you’ll monitor model performance in production,
addressing challenges like data drift and concept drift, while delivering data-driven insights
that shape business decisions.
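Monitoring production models for data drift, as described above, often starts with a two-sample statistical test comparing a training-window feature against live data. A hedged sketch using SciPy's Kolmogorov-Smirnov test (the synthetic data and alert threshold are illustrative assumptions, not Fiddlehead's actual monitoring setup):

```python
# Sketch: flag data drift on one feature with a two-sample KS test.
# The distributions and the 0.01 alert threshold are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # reference window
live_feature = rng.normal(loc=0.5, scale=1.0, size=5_000)   # shifted production data

statistic, p_value = stats.ks_2samp(train_feature, live_feature)
drift_detected = p_value < 0.01  # tunable threshold, an assumption here

print(f"KS statistic={statistic:.3f}, p={p_value:.3g}, drift={drift_detected}")
```

In practice this would run per feature on a schedule, with concept drift tracked separately via live prediction-error metrics.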
What You’ll Bring
• Education and/or professional experience in data science and forecasting
• Proficiency with forecasting tools and libraries, ideally in Python
• Knowledge of machine learning and statistical concepts
• Strong analytical and problem-solving abilities
• Ability to communicate complex findings to non-technical stakeholders
• High attention to detail and data accuracy
• Degree in Statistics, Data Science, Computer Science, Engineering, or related field
(Bachelor’s, Master’s, or PhD)
At Fiddlehead, you’ll work on meaningful projects that advance predictive forecasting and
sustainability in the CPG industry. We offer a collaborative, inclusive, and supportive
environment that prioritizes professional development, work-life balance, and continuous
learning. Our team members enjoy dedicated time to expand their skills while contributing
to innovative solutions with real-world impact.
We carefully review every application and are committed to providing a response. Candidates selected will be invited to an in-person or virtual interview. To ensure equal access, we provide accommodations during the recruitment process for applicants with disabilities. If you require accommodations, please reach out to our team through the contact page on our website. At Fiddlehead, we are dedicated to fostering an inclusive and accessible environment where every employee and customer is respected, valued, and supported. We welcome applications from women, Indigenous peoples, persons with disabilities, ethnic and visible minorities, members of the LGBT+ community, and others who can help enrich the diversity of our workforce.
We offer a competitive compensation package with performance-based incentives and opportunities to contribute to impactful projects. Employees benefit from mentorship, training, and active participation in AI communities, all within a collaborative culture that values innovation, creativity, and professional growth.
Data Scientist
Posted today
Job Description
Company Description
VOSMOS is a part of Kestone Integrated Marketing Services and promoted by CL Educate, the parent company of Career Launcher. We aim to build virtual worlds that are simplified, connected, and immersive. These worlds will be self-sufficient, unified, and constantly evolving to suit the mindset of the occupants of the Virtual Cosmos. VOSMOS has 3 distinct products under the brand – Metaverse, Virtual Events, and Customized technology solutions. From education to business to art, VOSMOS is making it easy for everyone to remain connected in the virtual world.
Role Description
As a Senior Data Scientist – AI & Gen AI, you will be a hands-on builder responsible for designing, developing, and deploying intelligent solutions across the marketing and customer experience stack. From writing production-ready ML code and training models to prompt engineering and fine-tuning LLMs, this role is for someone who enjoys working end-to-end — not just leading, but doing. You’ll work closely with our product, marketing, and engineering teams to embed intelligence across everything we do — from personalization engines to Gen AI content workflows.
What You’ll Do
- Build, train, and deploy machine learning and deep learning models for use cases like personalization, recommendation, segmentation, and content generation
- Fine-tune pre-trained LLMs (e.g., GPT, LLaMA, Claude) and build RAG-based systems
- Design intelligent workflows using LangChain, vector databases, and prompt engineering
- Write clean, scalable code in Python using ML/AI libraries like PyTorch, scikit-learn, Hugging Face, etc.
- Work with APIs from OpenAI, Anthropic, Cohere, or open-source equivalents
- Build prototypes quickly and iterate based on results and feedback
- Work with structured and unstructured data (text, behavioral, campaign, content, customer data)
- Clean, preprocess, and build ML-ready data pipelines
- Implement feedback loops and performance tracking mechanisms
- Translate marketing and business requirements into actionable ML use cases
- Work with product and design teams to bring AI-first features to life
- Stay updated with the latest advancements in Gen AI, LLMs, and model optimization
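The RAG-style retrieval step mentioned above can be sketched without any LLM framework: embed document chunks, embed the query, and rank by cosine similarity. The toy character-trigram "embedding" below stands in for a real encoder (e.g., a sentence-transformer or API embedding model); the documents and query are illustrative.

```python
# Retrieval core of a RAG pipeline, stripped to its essentials.
# embed() is a toy stand-in for a real embedding model.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic toy embedding: character-trigram hash buckets, unit-normalized."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Campaign click-through rates by customer segment",
    "Quarterly revenue forecast for the EMEA region",
    "Prompt templates for product description generation",
]
doc_matrix = np.stack([embed(d) for d in documents])

query = "revenue forecast EMEA"
scores = doc_matrix @ embed(query)  # cosine similarity, since vectors are unit-length
best = documents[int(np.argmax(scores))]
print(best)
```

A production system would swap in a real encoder and a vector database (e.g., FAISS) for the brute-force matrix product, then pass the retrieved chunk into the LLM prompt.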
What We’re Looking For
Experience in technology, with at least 5 years in data science / AI roles
Degree in Computer Science, Data Science, or related field (preferably from a premier tech institution – IITs, IISc, BITS, etc.)
Deep hands-on experience in:
- Machine Learning (classification, regression, clustering, etc.)
- Deep Learning (CNNs, RNNs, Transformers)
- Gen AI (LLMs, prompt engineering, LangChain, RAG pipelines)
- Python, PyTorch/TensorFlow, scikit-learn, Hugging Face
- API integration, OpenAI, Anthropic, or similar platforms
- Cloud platforms (AWS/GCP/Azure) and basic DevOps for deployment
Strong problem-solving and debugging skills
Comfortable working in an agile, evolving startup-like environment
Nice to Have
Experience with marketing data, campaign optimization, content analytics
Exposure to MLOps, vector databases (e.g., Pinecone, FAISS), or streaming data
Understanding of personalization algorithms, recommender systems, or chat-based AI agents
What You Get
An opportunity to build AI solutions from scratch and own the outcomes
Work on real-world problems with real business impact
A collaborative team of technologists, marketers, and creatives
Freedom to experiment with the latest tools and models
Competitive compensation with performance-linked incentives
Data Scientist
Posted today
Job Description
Role Overview: Data Scientist
Location: Remote/ Indore/ Mumbai/ Chennai/ Gurugram
Experience: Min 5 Years
Work Mode: Remote
Notice Period: Max. 30 Days (45 for Notice Serving)
Interview Process: 2 Rounds
Interview Mode: Virtual Face-to-Face
Interview Timeline: 1 Week
Industry: Must be from a BPO/KPO/Shared Services or Healthcare organization.
Key Responsibilities:
- AI/ML Development & Research
- Design, develop, and deploy advanced machine learning and deep learning models to solve complex business problems.
- Implement and optimize Large Language Models (LLMs) and Generative AI solutions for real-world applications.
- Build agent-based AI systems with autonomous decision-making capabilities.
- Conduct cutting-edge research on emerging AI technologies and explore their practical applications.
- Perform model evaluation, validation, and continuous optimization to ensure high performance.
- Cloud Infrastructure & Full-Stack Development:
- Architect and implement scalable, cloud-native ML/AI solutions using AWS, Azure, or GCP.
- Develop full-stack applications that seamlessly integrate AI models with modern web technologies.
- Build and maintain robust ML pipelines using cloud services (e.g., SageMaker, ML Engine).
- Implement CI/CD pipelines to streamline ML model deployment and monitoring processes.
- Design and optimize cloud infrastructure to support high-performance computing workloads.
- Data Engineering & Database Management
- Design and implement data pipelines to enable large-scale data processing and real-time analytics.
- Work with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) to manage structured and unstructured data.
- Optimize database performance to support machine learning workloads and real-time applications.
- Implement robust data governance frameworks and ensure data quality assurance practices.
- Manage and process streaming data to enable real-time decision-making.
- Leadership & Collaboration
- Mentor junior data scientists and assist in technical decision-making to drive innovation.
- Collaborate with cross-functional teams, including product, engineering, and business stakeholders, to develop solutions that align with organizational goals.
- Present findings and insights to both technical and non-technical audiences in a clear and actionable manner.
- Lead proof-of-concept projects and innovation initiatives to push the boundaries of AI/ML applications.
Required Qualifications:
- Education & Experience
- Master’s or PhD in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- 5+ years of hands-on experience in data science and machine learning, with a focus on real-world applications.
- 3+ years of experience working with deep learning frameworks and neural networks.
- 2+ years of experience with cloud platforms and full-stack development.
- Technical Skills - Core AI/ML
- Machine Learning: Proficient in Scikit-learn, XGBoost, LightGBM, and advanced ML algorithms.
- Deep Learning: Expertise in TensorFlow, PyTorch, Keras, CNNs, RNNs, LSTMs, and Transformers.
- Large Language Models: Experience with GPT, BERT, T5, fine-tuning, and prompt engineering.
- Generative AI: Hands-on experience with Stable Diffusion, DALL-E, text-to-image, and text generation models.
- Agentic AI: Knowledge of multi-agent systems, reinforcement learning, and autonomous agents.
- Technical Skills - Development & Infrastructure
- Programming: Expertise in Python, with proficiency in R, Java/Scala, JavaScript/TypeScript.
- Cloud Platforms: Proficient with AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI.
- Databases: Proficiency with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB).
- Full-Stack Development: Experience with React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes.
- MLOps: Experience with MLflow, Kubeflow, model versioning, and A/B testing frameworks.
- Big Data: Expertise in Spark, Hadoop, Kafka, and streaming data processing.
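As a concrete baseline for the scikit-learn portion of the stack above, a minimal tabular classification pipeline; the synthetic dataset and default hyperparameters are placeholders, not the employer's actual workload.

```python
# Baseline tabular pipeline: scaling + gradient boosting on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), GradientBoostingClassifier(random_state=0))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.3f}")
```

The same pipeline object can be versioned and logged with MLflow, which is where the MLOps requirements above come in.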
Non-Negotiables:
- Cloud Infrastructure - ML/AI solutions on AWS, Azure, or GCP
- Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
- Implement CI/CD pipelines for ML model deployment and monitoring
- Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
- Machine Learning: Scikit-learn
- Deep Learning: TensorFlow
- Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
- Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda)
- Vector databases and embeddings (Pinecone, Weaviate, Chroma)
- Knowledge of LangChain, LlamaIndex, or similar LLM frameworks.
- Industry: Must be a BPO or Healthcare Org.
Data Scientist
Posted today
Job Description
Key Responsibilities
Apply a fundamental background in RF/RAN/wireless NPI, design, and optimization across 4G/5G.
Perform advanced analytics on diverse telecom datasets (PM, CM, FM, SDK, call trace, probe data, crowdsourced, and GIS data)
Data Scientist
Posted today
Job Description
Overview
Esri is the world leader in geographic information systems (GIS) and developer of ArcGIS, the leading mapping and analytics software used in 75 percent of Fortune 500 companies. At the Esri R&D Center-New Delhi, we are applying cutting-edge AI and deep learning techniques to revolutionize geospatial analysis and derive insight from imagery and location data. We are passionate about applying data science and artificial intelligence to solve some of the world's biggest challenges.
Our team develops tools, APIs, and AI models for geospatial analysts and data scientists, enabling them to leverage the latest research in spatial data science, AI and geospatial deep learning.
As a Data Scientist, you will develop deep learning models using libraries such as PyTorch and create APIs and tools for training and deploying them on satellite imagery. If you are passionate about deep learning applied to remote sensing and GIS, developing AI and deep learning models, and love maps or geospatial datasets/imagery, this is the place to be.
Responsibilities
- Develop tools, APIs and pretrained models for geospatial AI
- Integrate ArcGIS with popular deep learning libraries such as PyTorch
- Develop APIs and model architectures for computer vision and deep learning applied to geospatial imagery
- Author and maintain geospatial data science samples using ArcGIS and machine learning/deep learning libraries
- Curate and pre/post-process data for deep learning models and transform it into geospatial information
- Perform comparative studies of various deep learning model architectures
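Since the role centers on computer vision for geospatial imagery, here is a purely didactic illustration of the convolution operation underlying CNNs: applying a Sobel-style edge kernel to a synthetic image with SciPy. Everything here is illustrative; Esri's actual models are PyTorch-based and far larger.

```python
# Didactic sketch: convolve a synthetic image with a Sobel-style kernel.
import numpy as np
from scipy.signal import convolve2d

image = np.zeros((8, 8))
image[:, 4:] = 1.0  # vertical step edge in the middle of the image

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(image, sobel_x, mode="valid")
# The response is largest in magnitude along the columns where intensity jumps,
# and zero in flat regions.
print(edges)
```

A CNN learns many such kernels from data instead of hand-specifying them; satellite-imagery models stack hundreds of these layers.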
Requirements
- 2 to 6 years of experience with Python, in data science and deep learning
- Self-learner with coursework in and extensive knowledge of machine learning and deep learning
- Experience with Python machine learning and deep learning libraries such as PyTorch, Scikit-learn, NumPy, Pandas
- Expertise in one or more of the following areas:
- Traditional and deep learning-based computer vision techniques with the ability to develop deep learning models for computer vision tasks (image classification, object detection, semantic and instance segmentation, GANs, super-resolution, image inpainting, and more)
- Convolutional neural networks such as VGG, ResNet, Faster R-CNN, Mask R-CNN, and others
- Transformer models applied to computer vision
- Expertise in 3D deep learning with Point Clouds, meshes, or Voxels with the ability to develop 3D geospatial deep learning models, such as PointCNN, MeshCNN, and more
- Experience in data visualization in Jupyter Notebooks using matplotlib and other libraries
- Experience with hyperparameter-tuning and training models to a high level of accuracy
- Bachelor's in computer science, engineering, or related disciplines from IITs and other top-tier engineering colleges
- Existing work authorization for India
Recommended Qualifications
- Experience applying deep learning to satellite or medical imagery or geospatial datasets
- Familiarity with ArcGIS suite of products and concepts of GIS
The Company
At Esri, diversity is more than just a word on a map. When employees of different experiences, perspectives, backgrounds, and cultures come together, we are more innovative and ultimately a better place to work. We believe in having a diverse workforce that is unified under our mission of creating positive global change. We understand that diversity, equity, and inclusion is not a destination but an ongoing process. We are committed to the continuation of learning, growing, and changing our workplace so every employee can contribute to their life's best work. Our commitment to these principles extends to the global communities we serve by creating positive change with GIS technology. For more information on Esri's Racial Equity and Social Justice initiatives, please visit our website here.
If you don't meet all of the preferred qualifications for this position, we encourage you to still apply.
Esri is an equal opportunity employer (EOE) and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need reasonable accommodation for any part of the employment process, please email and let us know the nature of your request and your contact information. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to from this e-mail address.
Esri Privacy
Esri takes our responsibility to protect your privacy seriously. We are committed to respecting your privacy by providing transparency in how we acquire and use your information, giving you control of your information and preferences, and holding ourselves to the highest national and international standards, including CCPA and GDPR compliance.
Data Scientist
Posted today
Job Description
is building the future of startup intelligence with products like Vencore AI Engine and Vencore GOV Engine. Our mission is to revolutionize how investors, VCs, and governments evaluate and support startups.
As a Data Scientist, you will work at the intersection of AI, machine learning, and venture capital analytics. Your role will involve developing predictive models, building scoring systems, and transforming raw data into insights that shape billion-dollar decisions.
What you will do
Design, build, and optimize ML models for startup evaluation
Work on large datasets including financial, government, and startup ecosystem data
Collaborate with AI engineers and product teams to refine scoring algorithms
Research and implement state-of-the-art techniques in AI and data science
Present insights that directly influence investment decisions and policies
What we are looking for
Strong foundation in statistics, machine learning, and Python or R
Experience with libraries like TensorFlow, PyTorch, Scikit-learn, or similar
Hands-on experience with NLP, data cleaning, and predictive modeling
Curiosity to explore financial and startup ecosystem data
Ability to work in a fast-paced, early-stage startup environment
Bonus points if you
Have experience in fintech, govtech, or investment analytics
Are comfortable with cloud platforms (AWS, GCP, or Azure)
Have worked with generative AI or agent-based systems
What we offer
Opportunity to build AI that directly impacts the future of venture capital and governance
Fast-paced growth environment with exposure to cutting-edge AI and ML work
Competitive compensation and performance-based incentives
A chance to be part of an early-stage company with global ambitions
Location: Gurgaon, Noida, Delhi NCR – in-office preferred, hybrid flexibility available
Data Scientist
Posted today
Job Description
- Responsibilities
- Sensor Data Understanding & Preprocessing
- Clean, denoise, and preprocess high-frequency time-series data from edge devices.
- Handle missing, corrupted, or delayed telemetry from IoT sources.
- Develop domain knowledge of physical sensors and their behaviour (e.g., vibration patterns, strain profiles).
- Exploratory & Statistical Analysis
- Perform statistical and exploratory data analysis (EDA) on structured/unstructured sensor data.
- Identify anomalies, patterns, and correlations in multi-sensor environments.
- Feature Engineering
- Generate meaningful time-domain and frequency-domain features (e.g., FFT, wavelets).
- Implement scalable feature extraction pipelines.
- Model Development
- Build and validate ML models for:
- Anomaly detection (e.g., vibration spikes)
- Event classification (e.g., tilt angle breaches)
- Predictive maintenance (e.g., time-to-failure)
- Leverage traditional ML, deep learning, and LLMs
- Deployment & Integration
- Work with Data Engineers to integrate models into real-time data pipelines and edge/cloud platforms.
- Package and containerize models (e.g., with Docker) for scalable deployment.
- Monitoring & Feedback
- Track model performance post-deployment and retrain/update as needed.
- Design feedback loops using human-in-the-loop or rule-based corrections.
- Collaboration & Communication
- Collaborate with hardware, firmware, and data engineering teams.
- Translate physical phenomena into data problems and insights.
- Document approaches, models, and assumptions for reproducibility.
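The frequency-domain feature engineering described above can be sketched end-to-end: generate a vibration-style signal, take its FFT, and extract a dominant-frequency and band-energy feature. The sampling rate, fault band, and synthetic signal are illustrative assumptions, not real sensor parameters.

```python
# FFT-based feature extraction for a synthetic vibration signal.
import numpy as np

fs = 1_000  # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
# Synthetic vibration: 50 Hz fundamental plus a 120 Hz fault harmonic and noise.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

dominant_hz = freqs[int(np.argmax(spectrum))]          # strongest component
band_energy = spectrum[(freqs >= 100) & (freqs <= 150)].sum()  # assumed fault band
print(f"dominant frequency: {dominant_hz:.1f} Hz, 100-150 Hz energy: {band_energy:.1f}")
```

Features like these feed the anomaly-detection and predictive-maintenance models listed above; libraries such as tsfresh automate extraction at scale.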
Key Deliverables
- Reusable preprocessing and feature extraction modules for sensor data.
- Accurate and explainable ML models for anomaly/event detection.
- Model deployment artifacts (Docker images, APIs) for cloud or edge execution.
- Jupyter notebooks and dashboards (Streamlit) for diagnostics, visualization, and insight generation.
- Model monitoring reports and performance metrics with retraining pipelines.
- Domain-specific data dictionaries and technical knowledge bases.
- Contribution to internal documentation and research discussions.
- Build deep understanding and documentation of sensor behavior and characteristics.
Technologies
Languages & Libraries
- Python (NumPy, Pandas, SciPy, Scikit-learn, PyTorch/TensorFlow)
- Bash (for data ops & batch jobs)
Signal Processing & Feature Extraction
- FFT, DWT, STFT (via SciPy, Librosa, tsfresh)
- Time-series modeling (sktime, statsmodels, Prophet)
Machine Learning & Deep Learning
- Scikit-learn (traditional ML)
- PyTorch / TensorFlow / Keras (deep learning)
- XGBoost / LightGBM (tabular modeling)
Data Analysis & Visualization
- Jupyter, Matplotlib, Seaborn, Plotly, Grafana (for dashboards)
Model Deployment
- Docker (for containerizing ML models)
- FastAPI / Flask (for ML inference APIs)
- GitHub Actions (CI/CD for models)
- ONNX / TorchScript (for lightweight deployment)
Data Engineering Integration
- Kafka (real-time data ingestion)
- S3 (model/data storage)
- Trino / Athena (querying raw and processed data)
- Argo Workflows / Airflow (model training pipelines)
Monitoring & Observability
- Prometheus / Grafana (model & system monitoring)