26 Big Data Technologies jobs in Indore
Learning Support Specialist - ML&AI, Data Science and Data Engineering
Posted 16 days ago
Job Description
About Emeritus:
Emeritus is committed to teaching the skills of the future by making high-quality education accessible and affordable to individuals, companies, and governments around the world. It does this by collaborating with more than 50 top-tier universities across the United States, Europe, Latin America, Southeast Asia, India and China.
Emeritus’ short courses, degree programs, professional certificates, and senior executive programs help individuals learn new skills and transform their lives, companies and organizations. Its unique model of state-of-the-art technology, curriculum innovation, and hands-on instruction from senior faculty, mentors and coaches has educated more than 250,000 individuals across 80+ countries.
Founded in 2015, Emeritus, part of Eruditus Group, has more than 2,000 employees globally and offices in Mumbai, New Delhi, Shanghai, Singapore, Palo Alto, Mexico City, New York, Boston, London, and Dubai. Following its $650 million Series E funding round in August 2021, the Company is valued at $3.2 billion, and is backed by Accel, SoftBank Vision Fund 2, the Chan Zuckerberg Initiative, Leeds Illuminate, Prosus Ventures, Sequoia Capital India, and Bertelsmann.
About the Role:
The Learning Support Specialist is a subject matter expert who directly shapes the student experience by guiding students through their learning journey.
Candidates must have industry experience, demonstrated knowledge of the course subject area, and strong interpersonal skills. This is a full-time, remote role.
The purpose of this position is to assist and support learners enrolled in programs in the fields of machine learning and AI, data engineering, data science, and data analytics. The successful candidate will have proven experience as a data engineer and/or data scientist. The candidate must have excellent time management skills and the desire to work in a fast-paced educational tech environment where supplementing the educational journey of learners is the priority. We are looking for a professional who does not mind a busy schedule and wants to provide excellent internal and external customer service.
Roles and Responsibilities:
- Monitor and respond to all student inquiries via the learning management system and learner support software within 24 hours of submission, with compassion and understanding for the student learning journey
- Rely on industry knowledge to quickly and clearly guide students through complex assignments and questions
- Provide prompt feedback that includes pointed questions to guide students towards correct answers and help them build problem-solving skills
- Work with students who are new to the subject matter as well as those with experience or who are further along in their careers
- Suggest course improvements to assigned Program Delivery Manager (PDM) and Designer to ensure content is communicated clearly, accurately, and effectively in videos, assignments and collateral materials
- Communicate and collaborate with the Emeritus team on a regular basis to enhance course delivery and to solve unexpected course challenges
Skills and Qualifications:
- 5+ years of work experience in the field of data engineering, data science, or data analytics
- Professional and/or academic experience in machine learning and artificial intelligence
- Proficiency in JavaScript, Python, and the use of GitHub
- Experience using data visualization tools (e.g., Tableau)
- Experience using Microsoft Azure
- Advanced academic background in mathematics, particularly statistics, calculus, and linear algebra
- Excellent verbal and written communication and interpersonal skills with an ability to listen effectively, respond appropriately, and maintain a mutual comfort level while working with a diverse student population
- Experience using Canvas or a similar learning management system
Preferred Qualifications:
- Experience using Slack and Teams as communication tools
- Experience working with support/service software or a ticketing system
- Experience using bug tracking and feedback tools
Emeritus provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Associate Architect - Data Engineering
Posted 5 days ago
Job Description
About the Role:
We are seeking an experienced Data Architect to lead the transformation of enterprise data solutions, with a strong focus on migrating Alteryx workflows into Azure Databricks. The ideal candidate will have deep expertise in the Microsoft Azure ecosystem, including Azure Data Factory, Databricks, Synapse Analytics, and Microsoft Fabric, and a strong background in data architecture, governance, and distributed computing. This role requires both strategic thinking and hands-on architectural leadership to ensure scalable, secure, and high-performance data solutions.
Key Responsibilities:
- Define the overall migration strategy for transforming Alteryx workflows into scalable, cloud-native data solutions on Azure Databricks.
- Architect end-to-end data frameworks leveraging Databricks, Delta Lake, Azure Data Lake, and Synapse.
- Establish best practices, standards, and governance frameworks for pipeline design, orchestration, and data lifecycle management.
- Guide engineering teams in re-engineering Alteryx workflows into distributed Spark-based architectures (see the sketch after this list).
- Collaborate with business stakeholders to ensure solutions align with analytics, reporting, and advanced AI/ML initiatives.
- Oversee data quality, lineage, and security compliance across the data ecosystem.
- Drive CI/CD adoption, automation, and DevOps practices for Azure Databricks and related services.
- Provide architectural leadership, design reviews, and mentorship to engineering and analytics teams.
- Optimize solutions for performance, scalability, and cost-efficiency within Azure.
- Participate in enterprise architecture forums and influence data strategy across the organization.
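For illustration only, here is a minimal sketch of how a typical Alteryx "input, clean, join, output" workflow might be re-expressed as a PySpark job writing to Delta Lake on Databricks. The paths, table names, and columns are hypothetical, not part of any actual codebase referenced by this role.

```python
# Minimal sketch: an Alteryx-style "input -> clean -> join -> output" workflow
# re-expressed as a PySpark job writing to Delta Lake. Paths and column names
# are hypothetical; Databricks notebooks provide a `spark` session already.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("alteryx_migration_sketch").getOrCreate()

# Input Data tool -> spark.read
orders = spark.read.format("csv").option("header", "true").load("/mnt/raw/orders.csv")
customers = spark.read.format("delta").load("/mnt/curated/customers")

# Data Cleansing / Filter tools -> DataFrame transformations
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("order_amount").cast("double") > 0)
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Join tool -> DataFrame join
enriched = cleaned.join(customers, on="customer_id", how="left")

# Output Data tool -> Delta write, partitioned for downstream performance
(
    enriched.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders_enriched")
)
```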
Required Skills and Qualifications:
- 10+ years of experience in data architecture, engineering, or solution design.
- Proven expertise in Alteryx workflows and their modernization into Azure Databricks (Spark, PySpark, SQL, Delta Lake).
- Deep knowledge of the Microsoft Azure data ecosystem:
  - Azure Data Factory (ADF)
  - Azure Synapse Analytics
  - Microsoft Fabric
  - Azure Databricks
- Strong background in data governance, lineage, security, and compliance frameworks.
- Demonstrated experience in architecting data lakes, data warehouses, and analytics platforms.
- Proficiency in Python, SQL, and Apache Spark for prototyping and design validation.
- Excellent leadership, communication, and stakeholder management skills.
Preferred Qualifications:
- Microsoft Azure certifications (e.g., Azure Solutions Architect Expert, Azure Data Engineer Associate).
- Experience leading large-scale migration programs or modernization initiatives.
- Familiarity with enterprise architecture frameworks (TOGAF, Zachman).
- Exposure to machine learning enablement on Azure Databricks.
- Strong understanding of Agile delivery and working in multi-disciplinary teams.
Senior Manager - Data Engineering Lead
Posted 5 days ago
Job Description
Job Title: Senior Manager - Data Engineering Lead
Qualification: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
Required skillset:
- Experience in data engineering.
- Proven experience with cloud platforms (AWS, Azure, or GCP) and data services (Glue, Synapse, BigQuery, Databricks, etc.).
- Hands-on experience with tools like Apache Spark, Kafka, Airflow, dbt, and modern orchestration platforms.
Technical Skills:
- Proficient in SQL and Python/Scala/Java.
- Strong understanding of modern data warehouse and lakehouse concepts (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with CI/CD, Infrastructure as Code (e.g., Terraform), and DevOps for data.
Nice to Have:
- Prior experience working in a regulated industry (alcohol, pharma, tobacco, etc.).
- Exposure to demand forecasting, route-to-market analytics, or distributor performance management.
- Knowledge of CRM, ERP, or supply chain systems (e.g., Salesforce, SAP, Oracle).
- Familiarity with marketing attribution models and campaign performance tracking.
Preferred Attributes:
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement abilities.
- Passion for data-driven innovation and delivering business impact.
- Certification in cloud platforms or data engineering (e.g., Google Cloud Professional Data Engineer).
Key Accountabilities:
- Design and implement scalable, high-performance data architecture solutions aligned with enterprise strategy.
- Define standards and best practices for data modelling, metadata management, and data governance.
- Collaborate with business stakeholders, data scientists, and application architects to align data infrastructure with business needs.
- Guide the selection of technologies, including cloud-native and hybrid data architecture patterns (e.g., Lambda/Kappa architectures).
- Lead the development, deployment, and maintenance of end-to-end data pipelines using ETL/ELT frameworks (a minimal orchestration sketch follows this list).
- Manage ingestion from structured and unstructured data sources (APIs, files, databases, streaming sources).
- Optimize data workflows for performance, reliability, and cost efficiency.
- Ensure data quality, lineage, cataloging, and security through automated validation and monitoring.
- Oversee data lake design, implementation, and daily operations (e.g., Azure Data Lake, AWS S3, GCP BigLake).
- Implement access controls, data lifecycle management, and partitioning strategies.
- Monitor and manage performance, storage costs, and data availability in real time.
- Ensure compliance with enterprise data policies and regulatory requirements (e.g., GDPR, CCPA).
- Lead and mentor a team of data engineers and architects.
- Establish a culture of continuous improvement, innovation, and operational excellence.
- Work closely with IT, DevOps, and InfoSec teams to ensure secure and scalable infrastructure.
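For a flavour of the pipeline orchestration referenced above, here is a minimal Airflow DAG sketch for a daily ETL run. The task bodies are hypothetical placeholders; a real pipeline would call Spark jobs, dbt models, or warehouse loaders instead of printing.

```python
# Minimal Airflow DAG sketch for a daily ETL pipeline. Task bodies are
# hypothetical placeholders; only the core Airflow API is used.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull raw files / API data into the landing zone")

def transform(**context):
    print("clean and model the data (e.g., via Spark or dbt)")

def load(**context):
    print("load curated tables into the warehouse")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```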
Flexible Working Statement: Flexibility is key to our success. From part-time and compressed hours to different locations, our people work flexibly in ways to suit them. Talk to us about what flexibility means to you so that you’re supported from day one.
Diversity statement: Our purpose is to celebrate life, every day, everywhere. And creating an inclusive culture, where everyone feels valued and that they can belong, is a crucial part of this.
We embrace diversity in the broadest possible sense. This means that you’ll be welcomed and celebrated for who you are just by being you. You’ll be part of and help build and champion an inclusive culture that celebrates people of different gender, ethnicity, ability, age, sexual orientation, social class, educational backgrounds, experiences, mindsets, and more.
Our ambition is to create the best performing, most trusted and respected consumer products company in the world. Join us and help transform our business as we take our brands to the next level and build new ones as part of shaping the next generation of celebrations for consumers around the world.
Data Science Specialist
Posted 16 days ago
Job Description
We're hiring Data Science Instructors (100% teaching role, night shift only) for a full-time, permanent remote opportunity!
Vacancies: 5
A minimum of 4 years of full-time experience (teaching or non-teaching) in Data Science is a MUST
Budget: 50-75k per month (gross, fixed) + incentives as per class performance
SkillArbitrage is looking for passionate educators to empower aspiring data professionals
What we’re looking for: #Python, #Excel, #SQL, #Tableau, #PowerBI, #ML, and #datavisualization
Substantial Experience in IT (Non Teaching Domain)
Ability to simplify complex concepts with clarity
Strong judgment, creativity, and content development skills
Strong proficiency in English communication
Responsibilities:
Develop and deliver engaging curriculum.
Provide constructive feedback and assess student progress.
Collaborate with students and team members effectively.
If you’re excited to inspire and shape the future of data science, send updated CVs to
You can also directly reach me on (WhatsApp only)
MLOps Data Science
Posted 16 days ago
Job Description
Key Responsibilities:
- Develop, train, and validate predictive and analytical models using machine learning techniques.
- Collaborate with data engineers and business teams to define data requirements and success metrics.
- Deploy machine learning models into production using MLOps best practices.
- Build automated pipelines for model training, testing, monitoring, and retraining (a minimal tracking sketch follows this list).
- Optimize model performance and ensure scalability and reliability.
- Monitor model drift and performance degradation, and drive continuous improvement.
- Implement CI/CD practices for machine learning workflows.
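As one concrete example of the practices listed above, here is a minimal MLflow experiment-tracking sketch: train a model, log its parameters and metrics, and log the model artifact for later deployment. The dataset is synthetic and the parameters are placeholders.

```python
# Minimal MLflow sketch: train a model, log parameters/metrics, and log the
# fitted model as an artifact so it can be reproduced and deployed later.
# The synthetic dataset stands in for real training data.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf_baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")
```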
Technical Skills Required:
- Strong proficiency in Python (Pandas, NumPy, Scikit-learn, PyTorch/TensorFlow).
- Experience with MLOps tools/frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, or Azure ML).
- Good understanding of data engineering concepts (ETL, data pipelines, APIs).
- Hands-on with containerization and orchestration (Docker, Kubernetes).
- Knowledge of cloud platforms (AWS, GCP, or Azure) for ML model deployment.
- Strong background in statistics, machine learning algorithms, and model evaluation techniques.
- Familiarity with version control (Git) and CI/CD pipelines.
Preferred Qualifications:
- Bachelor’s/Master’s degree in Computer Science, Data Science, Statistics, or related field.
- Experience in handling large-scale data pipelines and real-time model serving.
- Exposure to feature stores and automated model monitoring.
- Strong problem-solving, analytical, and communication skills.
Data Science Engineer
Posted 16 days ago
Job Description
Winpra is revolutionizing the real estate industry through advanced AI-powered analytics. We're building cutting-edge solutions that combine sophisticated data science and machine learning with real estate expertise to deliver unprecedented insights and automation capabilities through our innovative platform.
Position Overview
We're seeking a Data Science Engineer to join our innovative team and help shape the future of real estate analytics. In this role, you'll develop and implement advanced statistical models, machine learning algorithms, and data pipelines that power our platform's predictive analytics and market insights.
Key Responsibilities
- Design and implement machine learning models for real estate market analysis and prediction
- Develop and maintain scalable data processing pipelines
- Create sophisticated statistical models for property valuation and market trends
- Design and implement reinforcement learning solutions for dynamic pricing and decision optimization (see the bandit sketch after this list)
- Build and optimize data collection and validation processes
- Implement feature engineering techniques to improve model performance
- Develop automated reporting systems and interactive dashboards
- Collaborate with full-stack and AI engineers to integrate models into production
- Conduct A/B tests and experiments to validate model improvements
- Document methodologies, algorithms, and technical specifications
- Monitor and optimize model performance in production
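To make the reinforcement-learning bullet concrete, here is a toy epsilon-greedy bandit over candidate price points, the simplest possible shape of a dynamic-pricing learner. The demand simulator is fabricated purely for illustration; in production the reward would come from observed bookings or revenue.

```python
# Toy epsilon-greedy bandit for dynamic pricing. The demand simulator is
# fabricated for illustration; in practice the "reward" would be observed
# revenue at each offered price.
import random

PRICES = [95.0, 100.0, 105.0, 110.0]   # candidate price points (hypothetical)
EPSILON = 0.1                           # exploration rate

counts = {p: 0 for p in PRICES}
value = {p: 0.0 for p in PRICES}        # running mean revenue per price

def simulated_revenue(price: float) -> float:
    # Stand-in environment: acceptance probability falls as price rises.
    accept_prob = max(0.0, 1.2 - price / 100.0)
    return price if random.random() < accept_prob else 0.0

for _ in range(10_000):
    if random.random() < EPSILON:
        price = random.choice(PRICES)          # explore a random price
    else:
        price = max(PRICES, key=value.get)     # exploit best estimate
    reward = simulated_revenue(price)
    counts[price] += 1
    value[price] += (reward - value[price]) / counts[price]  # incremental mean

print({p: round(v, 2) for p, v in value.items()})
```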
Required Qualifications
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or related field
- Minimum of 3 years of experience in data science or a related analytics domain
- Strong programming skills in Python and experience with data science libraries (NumPy, Pandas, Scikit-learn, PyTorch/TensorFlow)
- Proven experience in developing and deploying machine learning models
- Demonstrated experience in reinforcement learning and its practical applications
- Expertise in statistical analysis and mathematical modeling
- Experience with SQL and database management
- Proficiency in data visualization tools (Matplotlib, Seaborn, D3 or similar)
- Strong understanding of machine learning algorithms and their applications
- Experience with version control systems (Git)
- Excellent problem-solving and analytical skills
- Strong communication skills to present technical findings to non-technical stakeholders
Preferred Qualifications
- Experience in real estate analytics or related domain
- Knowledge of deep learning frameworks (TensorFlow, PyTorch)
- Familiarity with cloud computing platforms (AWS/GCP/Azure)
- Knowledge of MLOps practices and tools
- Experience with time series analysis and forecasting
- Background in geospatial data analysis
- Contributions to open-source data science projects
What We Offer
- Opportunity to work on cutting-edge AI technology in the real estate sector
- Collaborative environment with a team of experienced engineers
- Professional development and growth opportunities
- Competitive salary and benefits package
- Remote-friendly work environment
Data Science Intern
Posted 16 days ago
Job Description
NLP Data Science Intern
Did you notice a shortage of food at supermarkets during COVID? Have you heard about the recent issues in the global shipping industry? Or perhaps you’ve heard about the shortages of microchips? These problems are called supply chain disruptions. They have been increasing in frequency and severity. Supply chain disruptions are threatening our very way of life.
Our vision is to advance society’s capacity to withstand shocks and stresses. Kavida.ai believes the only way to ensure security is through supply chain resiliency. We are on a mission to help companies proactively manage supply chain disruption risks using integrated data.
Our Story
In March 2020, over 35 academics, data scientists, students, and software engineering volunteers came together to address the food shortage issues caused by the pandemic - Covid19foodsupply.com. A core team of 9 was formed and spun off into a startup, and the rest is history.
Our investors include one of the world's largest supply chain quality & compliance monitoring companies, a £1.25bn apparel manufacturer, and some very impressive angel investors.
Social Impact:
Social impact is in our DNA. We believe private sector innovation is the only way to address social problems at scale. If we achieve our mission, humanity will always have access to its essential goods for sustenance. No more shortages of food, PPE, medicine, etc.
Our Culture :
Idea Meritocracy:
The best ideas win. We only care about what is right, not who is right. We know arriving at the best answer requires constructive tension. Sometimes it can get heated but it's never personal. Everyone contributes to better ideas knowing they will be heard but also challenged.
Drivers Not Passengers:
We think as owners who drive the bus, not as passengers. We are self-starters and never wait for instructions. We are hungry for autonomy, trust, and responsibility. Everyone is a leader because we know leadership is a trait, not a title. Leaders drive growth and navigate the chaos.
We Figure Out The Answers:
We trust our ability to figure stuff out. We do not need all the information to start answering the question. We can connect the dots and answer difficult questions with logic.
Customer & Mission Obsessed:
Our customers are our heroes and we are obsessed with helping them: understanding their supply chains better, resolving their biggest headaches, and advancing their competitiveness.
Learning and growth
We all take personal responsibility for becoming smarter, wiser, more skilled, and happier. We are obsessed with learning about our industry and improving our own skills. We are obsessed with our personal growth: to become more.
Job Description:
As a member of our Research team, you will be responsible for researching, developing, and coding agents using state-of-the-art LLMs with automated pipelines.
- Write code for the development of our ML engines and micro-services pipelines.
- Use, optimize, train, and evaluate state-of-the-art GPT models.
- Research and develop agentic pipelines using LLMs.
- Research and develop RAG-based pipelines using vector DBs (a toy retrieval sketch follows this list).
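For orientation, here is a toy sketch of the retrieve-then-generate shape of a RAG pipeline, using cosine similarity over a tiny in-memory "vector DB". The `embed` and `llm` functions are hypothetical stand-ins for a real embedding model and a GPT completion call, not any specific framework's API.

```python
# Toy RAG pipeline: embed documents, retrieve nearest neighbours by cosine
# similarity, and stuff them into a prompt. `embed` and `llm` are hypothetical
# stand-ins for a real embedding model and an LLM call.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real pipeline would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def llm(prompt: str) -> str:
    # Placeholder for a GPT completion call.
    return f"[model answer conditioned on a prompt of {len(prompt)} chars]"

DOCS = [
    "Port congestion in Shanghai is delaying electronics shipments.",
    "A chip shortage is constraining automotive production.",
    "Heavy rainfall has disrupted wheat harvests in the region.",
]
INDEX = np.stack([embed(d) for d in DOCS])  # the in-memory "vector DB"

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    sims = INDEX @ q / (np.linalg.norm(INDEX, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(-sims)[:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

print(answer("What is affecting car manufacturing?"))
```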
Essential Requirements:
- Prompt engineering and agentic LLM frameworks like LangChain/LlamaIndex
- Good understanding of vectors/tensors and RAG pipelines
- Knowledge of building NLP systems using transfer learning or building custom NLP systems from scratch using TensorFlow or PyTorch.
- In-depth knowledge of DSA, async programming, Python, and containers.
- Knowledge of transformers and NLP techniques is essential, and deployment experience is a significant advantage.
Salary Range: ₹15000 - ₹25000
We are offering a full-time internship position to final-year students. The internship will last for an initial period of 6-12 months before converting to a full-time job, depending on suitability for both parties. If the applicant is a student who needs to return to university, they can continue with the program on a part-time basis.
Data Science Analyst
Posted 16 days ago
Job Description
Are you passionate about transforming data into actionable insights? Do you thrive in a fast-paced, innovative environment? We’re looking for a Data Science Analyst to join our team and help us drive data-informed decisions that shape the future of our business.
What You’ll Do
- Analyze complex datasets to uncover trends, patterns, and actionable insights.
- Develop predictive models and statistical analyses to support business strategies (a minimal modeling sketch follows this list).
- Collaborate with cross-functional teams to understand data needs and deliver solutions.
- Build dashboards and visualizations to communicate findings effectively to stakeholders.
- Stay ahead of data trends and implement best practices in analytics and machine learning.
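For candidates wondering what the modeling work above looks like at its simplest, here is a minimal scikit-learn sketch with preprocessing, a train/test split, and a holdout evaluation. The dataset is synthetic, standing in for real business data.

```python
# Minimal predictive-modelling sketch: a scikit-learn pipeline combining
# preprocessing and a regularized linear model, evaluated on a holdout set.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=10, noise=15.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling and model fitting are bundled so the same transform applies at predict time.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X_train, y_train)
print(f"holdout R^2: {r2_score(y_test, model.predict(X_test)):.3f}")
```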
What We’re Looking For
- Bachelor's or Master’s degree in Data Science, Statistics, Computer Science, or a related field.
- Strong programming skills in Python, R, or SQL.
- Experience with data visualization tools (Tableau, Power BI, etc.).
- Solid understanding of machine learning algorithms and statistical methods.
- Excellent problem-solving and communication skills.
- A curious mindset with a passion for exploring data.
Why Join Us?
- Opportunity to work on exciting, high-impact projects.
- Collaborative and supportive work environment.
- Resources for professional development and growth.
- A chance to shape the future of data-driven decision-making.
Senior Full Stack SDE with Data Engineering for Analytics
Posted 16 days ago
Job Description
Summary
Truckmentum is seeking a Senior Full Stack Software Development Engineer (SDE) with deep data engineering experience to help us build cutting-edge software and data infrastructure for our AI-driven Trucking Science-as-a-Service platform. We’re creating breakthrough data science to transform trucking — and we’re looking for engineers who share our obsession with solving complex, real-world problems with software, data, and intelligent systems.
You’ll be part of a team responsible for the development of dynamic web applications, scalable data pipelines, and high-performance backend services that drive better decision-making across the $4 trillion global trucking industry. This is a hands-on role focused on building solutions by combining Python-based full stack development with scalable, modern data engineering.
About Truckmentum
Just about every sector of the global economy depends on trucking. In the US alone, trucks move 70%+ of all freight by weight (90%+ by value) and account for $40 billion in annual spending (globally $4+ trillion per year). Despite this, almost all key decisions in trucking are made manually by people with limited decision support. This results in significant waste and lost opportunities. We view this as a great opportunity.
Truckmentum is a self-funded seed stage venture. We are now validating our key data science breakthroughs with customer data and our MVP product launch to confirm product-market fit. We will raise $4-6 million in funding this year to scale our Data Science-as-a-Service platform and bring our vision to market at scale.
Our Vision and Approach to Technology
The back of our business cards reads “Moneyball for Trucking”, which means quantifying hard-to-quantify hidden insights, and then using those insights to make much better business decisions. If you don’t want “Moneyball for Trucking” on the back of your business card, then Truckmentum isn’t a good fit.
Great technology begins with customer obsession. We are obsessed with trucking companies' needs, opportunities, and processes, and with building our solutions into the rhythm of their businesses. We prioritize rapid development and iteration on large-scale, complex data science problems, backed by actionable, dynamic data visualizations. We believe in an Agile, lean approach to software engineering, backed by a structured CI/CD approach, professional engineering practices, clean architecture, clean code, and testing.
Our technology stack includes AWS Cloud, MySQL, Snowflake, Python, SQLAlchemy, Pandas, Streamlit and AG Grid to accelerate development of web visualization and interfaces.
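As a minimal flavour of that stack, the sketch below renders a small interactive dashboard with Streamlit and Pandas. The CSV file and its columns are hypothetical; a production app would query Snowflake (e.g., via SQLAlchemy) instead.

```python
# Minimal Streamlit sketch in the spirit of this stack: load a frame with
# Pandas, filter it interactively, and chart it. `lane_costs.csv` and its
# columns are hypothetical placeholders for a Snowflake query.
import pandas as pd
import streamlit as st

st.title("Lane Cost Explorer (sketch)")

@st.cache_data
def load_data() -> pd.DataFrame:
    return pd.read_csv("lane_costs.csv", parse_dates=["week"])  # hypothetical file

df = load_data()
lane = st.selectbox("Lane", sorted(df["lane"].unique()))
subset = df[df["lane"] == lane]

st.metric("Avg cost per mile", f"${subset['cost_per_mile'].mean():.2f}")
st.line_chart(subset.set_index("week")["cost_per_mile"])
st.dataframe(subset)
```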
About the Role
As a Senior Full Stack SDE with Data Engineering for Analytics, you will be responsible for designing and building the software systems, user interfaces, and data infrastructure that power Truckmentum’s analytics, data science, and decision support platform. This is a true full stack role — you’ll work across frontend, backend, and data layers using Python, Streamlit, Snowflake, and modern DevOps practices. You’ll help architect and implement a clean, extensible system that supports complex machine learning models, large-scale data processing, and intuitive business-facing applications.
You will report to the CEO (Will Payson), a transportation science expert with 25 years in trucking, who has delivered $1B+ in annual savings for FedEx and Amazon. You will also work closely with the CMO/Head of Product, Tim Liu, who has 20+ years of experience in building and commercializing customer-focused digital platforms, including in logistics.
Responsibilities and Goals
- Design and build full stack applications using Python, Streamlit, and modern web frameworks to power internal tools, analytics dashboards, and customer-facing products.
- Develop scalable data pipelines to ingest, clean, transform, and serve data from diverse sources into Snowflake and other cloud-native databases.
- Implement low-latency, high-availability backend services to support data science, decision intelligence, and interactive visualizations.
- Integrate front-end components with backend systems and ensure seamless interaction between UI, APIs, and data layers.
- Collaborate with data scientists / ML engineers to deploy models, support experimentation, and enable rapid iteration on analytics use cases.
- Define and evolve our data strategy and architecture, including schemas, governance, versioning, and access patterns across business units and use cases.
- Implement DevOps best practices, including testing, CI/CD automation, and observability, to improve reliability and reduce technical debt.
- Ensure data integrity and privacy through validation, error handling, and secure design.
- Contribute to product planning and roadmaps by working with cross-functional teams to estimate scope, propose solutions, and deliver value iteratively.
Required Qualifications
- 5+ years of professional software development experience, with a proven track record of building enterprise-grade, production-ready software applications for businesses or consumers, working in an integrated development team using Agile and Git / GitHub.
- Required technology experience with the following technologies in a business context:
- Python as primary programming language (5+ years’ experience)
- Pandas, Numpy, SQL
- AWS and/or GCP cloud configuration / deployment
- Git / GitHub
- Snowflake, and/or Redshift or BigQuery
- Docker
- Airflow, Prefect or other DAG orchestration technology
- Hands-on experience with modern front-end technologies — HTML/CSS, JavaScript, and component-based frameworks (e.g., Streamlit, React, or similar).
- Experience designing and managing scalable data pipelines, data processing jobs, and ETL/ELT
- Experience in defining Data Architecture and Data Engineering Architecture, including robust pipelines, and building and using cloud services (AWS and/or GCP)
- Experience building and maintaining well-structured APIs and microservices in a cloud environment.
- Working knowledge of, and experience applying, data validation, privacy, and governance
- Comfort working in a fast-paced, startup environment with evolving priorities and an Agile mindset.
- Strong communication and collaboration skills — able to explain technical tradeoffs to both technical and non-technical stakeholders.
Desirable Experience (i.e., great but not required)
- Desired technology experience with the following technologies in a business context:
- Snowflake
- Streamlit
- Folium, Plotly, AG Grid
- Kubernetes
- JavaScript, CSS
- Flask, FastAPI and SQLAlchemy
- Exposure to machine learning workflows and collaboration with data scientists or MLOps teams.
- Experience building or scaling analytics tools, business intelligence systems, or SaaS data products.
- Familiarity with geospatial data and visualization libraries (e.g., Folium, Plotly, AG Grid).
- Knowledge of CI/CD tools (e.g., GitHub Actions, Docker, Terraform) and modern DevOps practices.
- Contributions to early-stage product development — especially at high-growth startups.
- Passion for transportation and logistics, and for applying technology to operational systems.
Why Join Truckmentum
At Truckmentum, we’re not just building software — we’re rewriting the rules for one of the largest and most essential industries in the world. If you’re excited by real-world impact, data-driven decision making, and being part of a company where you’ll see your work shape the product and the business, this is your kind of team.
Some of the factors that make this a great opportunity include:
- Massive market opportunity: Trucking is a $4T+ global industry, with strong customer interest in our solution.
- Real business impact: Our tech has already shown a 5% operating margin gain at pilot customers.
- Builder’s culture: You’ll help define architecture, shape best practices, and influence our direction.
- Tight feedback loop: We work directly with real customers and iterate fast.
- Tech stack you’ll love: Python, Streamlit, Snowflake, Pandas, AWS — clean, modern, focused.
- Mission-driven team: We’re obsessed with bringing "Moneyball for Trucks" to life — combining science, strategy, and empathy to make the complex simple, and the invisible visible
We value intelligence, curiosity, humility, clean code, measurable impact, clear thinking, hard work and a focus on delivering results. If that sounds like your kind of team, we’d love to meet you.
- PS. If you read this far, we assume you are focused and detail oriented. If you think this job sounds interesting, please fill in a free personality profile on and email a link to the outcome to to move your application to the top of the pile.
Associate Director- Data Science
Posted 10 days ago
Job Description
Job description
Company Description
Live Connections is a search and recruitment organization that specializes in finding and placing professionals across all sectors. With over 25 years of cumulative recruitment experience, Live Connections has placed over 20,000 professionals across 350+ clients in multiple sectors and functions. The company has a global presence in 4 countries and is known for its professional, well-trained, and customer-centric team of engineers and MBAs who are relationship-driven and handle clients and candidates with a human quotient.
Position: Director
Experience: 16+ Years
Budget: 50–55 LPA
Mode: Remote
Key Responsibilities:
- Data Analysis and Preprocessing: Analyze and preprocess diverse datasets relevant to the mortgage industry, ensuring data quality and relevance for model training.
- Model Development and Fine-Tuning: Research and implement state-of-the-art NLP models, focusing on pre-training as well as instruction tuning of pre-trained LLMs for mortgage-specific applications. Utilize techniques like RLHF to improve model alignment with human preferences and enhance decision-making capabilities (a minimal fine-tuning sketch follows this list).
- Algorithm Implementation: Develop and optimize machine learning algorithms to enhance model performance, accuracy, and efficiency. Experiment with different architectures and open-source models to identify the best fit for project requirements.
- Collaboration: Work with domain experts to incorporate industry knowledge into model development, ensuring outputs are relevant and actionable.
- Experimentation: Conduct experiments to validate model hypotheses, analyze results, and iterate on model improvements.
- Documentation: Maintain comprehensive documentation of methodologies, experiments, and results to support transparency and reproducibility.
- Ethics and Bias Mitigation: Ensure responsible AI practices are followed by identifying potential biases in data and models and implementing strategies to mitigate them.
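To ground the fine-tuning responsibility above, here is a minimal instruction-tuning sketch using Hugging Face Transformers, with a tiny placeholder model and a two-example dataset. Real RLHF would follow a supervised-tuned model like this with preference optimization; nothing here reflects any client's actual setup.

```python
# Minimal instruction-tuning sketch with Hugging Face Transformers: fine-tune
# a small causal LM on prompt/response pairs. The model choice and the tiny
# "dataset" are placeholders for illustration only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # placeholder; a real project would use a larger LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

pairs = [
    {"text": "Instruction: Summarize the loan terms.\nResponse: 30-year fixed at 6.5%."},
    {"text": "Instruction: Is PMI required?\nResponse: Yes, below a 20% down payment."},
]
ds = Dataset.from_list(pairs).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-lm", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=ds,
    # Collator pads batches and sets labels for causal (non-masked) LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```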
Required Skills:
- Technical Expertise: Strong background in machine learning, deep learning, and NLP. Proficiency in Python and experience with ML frameworks such as TensorFlow or PyTorch.
- NLP Knowledge: Experience with NLP frameworks and libraries (e.g., Hugging Face Transformers) for developing language models.
- Data Handling: Proficiency in handling large datasets, feature engineering, and statistical analysis.
- Problem Solving: Strong analytical skills with the ability to solve complex problems using data-driven approaches.
- Communication: Excellent communication skills to effectively collaborate with technical teams and non-technical stakeholders.
Preferred Qualifications:
- Educational Background: Master’s or Ph.D. in Data Science, Computer Science, Statistics, or a related field.
- Cloud Computing: Familiarity with cloud platforms (e.g., AWS, Azure) for scalable computing solutions.
- Ethics Awareness: Understanding of ethical considerations in AI development, including bias detection and mitigation.
Interested candidates, please drop your updated resumes at OR contact us at