5,413 Statistical Software jobs in India
Data Science
Posted 1 day ago
Job Description
Data Science professional with a proven track record in training Engineering, IT, Diploma, Polytechnic and Technical candidates.
With over 7 years of experience in Artificial Intelligence, Machine Learning, Big Data, and Cloud Computing, the trainer specialises in delivering industry-oriented, hands-on training that equips candidates with the technical proficiency required in today's data-driven world.
Data Science
Posted 1 day ago
Job Description
Company Description
Teks Academy creates an excellent environment for career-focused students, offering real-world training to prepare them for future success. We provide both offline and online learning facilities, allowing students to choose the mode of learning that best fits their schedule and convenience.
Role Description
This is a full-time, on-site role located in Hyderabad for a Data Science Trainer. The trainer will be responsible for delivering lessons, developing curriculum, and mentoring students in the areas of data science and full stack Python development. Day-to-day tasks include preparing instructional materials, conducting classes, and evaluating students' progress.
1) Data Science Trainer
Qualifications:
- Strong knowledge of Data Science, Data Analytics, Data Analysis, and Generative AI.
- Proficiency in analytical skills, statistics, and machine learning concepts.
- Hands-on expertise with tools/libraries such as Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch), SQL, and data visualization (Matplotlib/Power BI/Tableau).
- Excellent communication and instructional skills to explain complex topics clearly.
- Ability to engage and motivate students with practical examples.
- Previous teaching/training experience in Data Science is a plus.
- Bachelor's degree in a relevant field; advanced degrees/certifications are advantageous.
Key Responsibilities:
- Deliver interactive sessions on Data Science, Machine Learning, and AI (a minimal example of such a hands-on demo appears after this section).
- Design curriculum and keep course materials updated with industry trends.
- Guide students through projects, assignments, and real-world datasets.
- Assess and mentor students to ensure understanding and career readiness.
- Conduct workshops/webinars to enhance learning outcomes.
- Support placement teams with technical interviews/mock sessions.
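As a concrete illustration of the kind of hands-on session described above, here is a minimal scikit-learn example a trainer might walk through; the dataset and model choice are illustrative assumptions, not requirements from the posting.

```python
# A minimal classroom-style sketch: a scikit-learn pipeline trained on a
# built-in dataset (dataset and hyperparameters are illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scaling + classifier in one pipeline keeps preprocessing and model together.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```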
2) Full Stack Python Trainer
Qualifications:
- Strong knowledge in Python programming, front-end (HTML, CSS, JavaScript), and back-end frameworks (Django/Flask/FastAPI).
- Proficiency in REST API development, databases (MySQL, PostgreSQL, MongoDB), and version control (Git/GitHub).
- Knowledge of software design patterns, OOP, MVC architecture, and CI/CD pipelines.
- Excellent problem-solving and debugging skills.
- Strong communication and instructional skills.
- Previous teaching/training experience in full-stack development is a plus.
- Bachelor's degree in Computer Science/IT; certifications in Python or web development are advantageous.
Key Responsibilities:
- Deliver comprehensive full-stack development training (front-end + back-end); a minimal Flask REST example appears after this section.
- Develop real-world projects for students to apply their learning.
- Teach best practices in coding, deployment, and security.
- Mentor students on technical queries and career preparation.
- Update curriculum based on industry trends and frameworks.
- Evaluate and provide feedback on student performance.
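For the REST API portion of the curriculum, a minimal Flask sketch of the sort a trainer might demonstrate is shown below; the route names and in-memory data are illustrative assumptions only.

```python
# Minimal REST API demo with Flask: list and create "courses" in memory.
from flask import Flask, jsonify, request

app = Flask(__name__)
courses = [{"id": 1, "name": "Python Basics"}]  # in-memory store for the demo


@app.get("/courses")
def list_courses():
    return jsonify(courses)


@app.post("/courses")
def add_course():
    payload = request.get_json()
    course = {"id": len(courses) + 1, "name": payload["name"]}
    courses.append(course)
    return jsonify(course), 201


if __name__ == "__main__":
    app.run(debug=True)
```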
Data Science
Posted 1 day ago
Job Description
We are seeking an experienced and passionate Data Science & Full Stack Developer Mentor cum Faculty to train and mentor students. The candidate should have strong industry knowledge, hands-on coding expertise, and the ability to simplify concepts for learners.
Key Responsibilities:
Deliver interactive classes in Data Science and Full Stack Development (Python, Django/Flask, JavaScript, React, SQL/NoSQL).
Teach key Data Science modules: Python for Data Science, Machine Learning, AI Concepts, Data Visualization (Power BI/Tableau), and Statistics.
Guide students with real-world projects, coding practices, and career readiness.
Develop & update curriculum, training material, and assignments.
Conduct assessments, mock interviews, and workshops.
Support placement team by preparing students for technical interviews.
Requirements:
Bachelor's/Master's in Computer Science, Engineering, Data Science, or related field.
0-2 years of hands-on experience in Data Science & Full Stack Development.
Proficiency in Python, SQL, JavaScript, React, Django/Flask.
Strong understanding of Machine Learning & Data Analytics tools.
Excellent communication and mentoring skills.
Prior teaching/training experience preferred.
Job Types: Full-time, Fresher
Pay: ₹321, ₹1,435,412.59 per year
Benefits:
- Cell phone reimbursement
- Internet reimbursement
Application Question(s):
- What are your salary expectations for this role?
- If offered the position, how soon can you join us?
- How long do you plan to continue with us if offered this role?
Language:
- English (Preferred)
Work Location: In person
Data Science
Posted 1 day ago
Job Description
Tableau, SQL database, and Python (all are mandatory)
Experience: 5+ years
CTC: Max 25 LPA
Location: Bangalore
Notice Period: 0 to 30 days
Data Science
Posted 1 day ago
Job Description
About the Internship:
We are offering an exciting opportunity for freshers and recent graduates to work on a live AI-driven project. If you are passionate about Data Science, AI, and Full-Stack Development, this internship will give you practical exposure to building and scaling an AI product from the ground up.
Internship Details:
- Duration: 4 Months
- Type: Internship (Unpaid – Experience & Mentorship only)
- Location: Remote / Mumbai preferred
- Eligibility: Freshers or recently passed-out candidates
Responsibilities:
- Build and fine-tune AI/ML models (time-series forecasting, regression, prediction).
- Develop and maintain data pipelines to handle real-world business datasets.
- Contribute to front-end dashboards for visualizing AI insights.
- Work on back-end development (APIs, data integration, model deployment).
- Create impactful data visualizations that translate AI outputs into business insights.
- Collaborate with mentors and product team to improve AI features.
Requirements:
- Knowledge of Python and data libraries (pandas, NumPy, scikit-learn, statsmodels, Prophet); a minimal forecasting sketch follows this list.
- Understanding of AI/ML concepts (forecasting, predictive modeling, regression).
- Basic knowledge of front-end frameworks (React/Angular/Vue).
- Basic knowledge of back-end frameworks (Flask, Django).
- Familiarity with databases (SQL, NoSQL).
- Problem-solving skills, curiosity, and eagerness to learn.
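As a rough illustration of the time-series forecasting work mentioned in the responsibilities, here is a minimal Prophet sketch; the CSV file and column names are hypothetical placeholders, not part of the internship brief.

```python
# Minimal time-series forecasting sketch with Prophet on a hypothetical
# daily-sales CSV (file name and column names are assumptions).
import pandas as pd
from prophet import Prophet

# Prophet expects two columns: ds (date) and y (value to forecast).
df = pd.read_csv("daily_sales.csv", parse_dates=["date"])
df = df.rename(columns={"date": "ds", "sales": "y"})

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(df)

# Forecast the next 30 days and inspect predictions with uncertainty bounds.
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```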
What You'll Gain:
- Hands-on experience in AI development with real datasets.
- Learn to integrate AI models into full-stack applications.
- Mentorship from professionals working on AI-driven projects.
- Opportunity to work on a live AI product and see your work make an impact.
- Certificate of Internship & Letter of Recommendation on successful completion.
How to Apply:
Apply via Indeed with your resume and a short note on why you're excited to work on an AI-related project, or send your resume at
If you are looking to kickstart your career in AI, Data Science, and Full-Stack Development, this internship is a chance to work on something real, impactful, and career-shaping.
Job Types: Part-time, Fresher, Internship
Contract length: 4 months
Pay: From ₹3,000.00 per month
Expected hours: 48 per week
Work Location: Remote
Data Science
Posted 1 day ago
Job Description
- Experience in Python (should be > 4 years).
- Experience in NLP (should be > 4 years).
- Hands-on experience with a deep learning framework (should be > 4 years); at least one of PyTorch or TensorFlow is mandatory.
- Experience with handling text or speech datasets.
- Experience using Transformer-based models such as BERT, T5, and RoBERTa (should be > 2 projects).
Skills:
- Proven experience as an NLP and ML Engineer or similar role
- Have worked on open-source LLMs (such as Falcon and LLaMA) for various text generation tasks
- Have done SFT or PEFT on decoder-based LLMs for a specific text generation task (a minimal LoRA sketch follows this list)
- Understanding of NLP techniques for text representation, semantic extraction, data structures and modeling
- Have worked on BERT models for intent classification and entity extraction
- Ability to write robust and testable code
- Experience with machine learning frameworks (PyTorch)
- Knowledge of Python, Groovy Script, BPMN
- An analytical mind with problem-solving abilities
- Degree in Computer Science, Mathematics, Computational Linguistics or similar field
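To illustrate what PEFT on a decoder-based LLM typically involves, here is a minimal LoRA sketch using Hugging Face transformers and peft; the model name, rank, and target modules are illustrative assumptions, not requirements from this posting.

```python
# Minimal sketch: parameter-efficient fine-tuning (LoRA) of a decoder-only LLM
# for text generation. Model name and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Attach low-rank adapters; only these small matrices are trained during SFT.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (model-specific)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# From here, the adapted model is trained on the task-specific SFT dataset,
# e.g. with transformers.Trainer or trl's SFTTrainer.
```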
Responsibilities:
- Study and transform data science prototypes
- Design NLP applications
- Use effective text representations to transform natural language into useful features
- Find and implement the right algorithms and tools for NLP tasks
- Develop NLP systems according to requirements
- Train the developed model and run evaluation experiments
- Perform statistical analysis of results and refine models
- Extend ML libraries and frameworks to apply in NLP tasks
- Remain updated in the rapidly changing field of machine learning
Mandatory Skills Required - Data Science, GenAI, Vertex AI, Python, NLP, Deep Learning Frameworks
Qualification
BE/BTech/MCA/Ph.D. from tier I/II institute in CS or related field
Location
Noida / Gurgaon / Pune / Bangalore ( Hybrid Model )
Data Science
Posted 1 day ago
Job Description
Job Role- Data Scientist (Sr. Consultant)
At Deloitte, we do not offer you just a job, but a career in the highly sought-after Risk Management field. We are the business leader in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Our clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security.
By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of risk domains. In the process, you will gain exposure to the risk-centric challenges faced by organizations across a range of industry sectors today, become a subject matter expert in those areas, and develop into a well-rounded professional with not only depth in a few risk domains but also breadth of exposure to a wide variety of risk domains.
So, if you are someone who believes in disrupting through innovation and execution of ideas, Deloitte Risk and Financial Advisory is the place to be.
Work you'll do
The key job responsibilities will be to:
- Develop database schemas, tables and dictionaries
- Develop, implement and optimize stored procedures, functions and indexes
- Ensure the data quality and integrity in databases
- Create complex functions, scripts, stored procedures and triggers to support application development
- Fix any issues related to database performance and ensure stability, reliability and security
- Design, create, and implement database systems based on the end user's requirements
- Prepare documentation for database applications
- Memory management for database systems
- Develop best practices for database design and development activities
The Team
Our Financial Technology practice develops and licenses a growing family of proprietary software products to assist financial institutions with a number of complex topics, such as accounting for credit-deteriorated assets and the administration of investments in leveraged loans.
We are looking to add dedicated software engineers to our team. In addition to competitive compensation and benefits, we provide excellent opportunities for growth and learning and invest in our talent development.
Qualifications
Required:
- Bachelor's degree in computer science or related field
- At least 5 to 7 years of experience as a SQL developer, with strong understanding of Microsoft SQL Server database
- Strong experience with Python coding and libraries (Pandas, NumPy, PySpark etc.)
- Hands-on experience with machine learning algorithms and frameworks
- Understanding and implementation of AI and generative AI solutions
- Proficiency in data visualization & data analytics
- Knowledge of best practices when dealing with relational databases
- Capable of troubleshooting common database issues
- Familiar with tools that can aid with profiling server resource usage and optimizing it
- Knowledge in performance optimization techniques
- Excellent verbal and written communication
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code:
Data Science
Posted 1 day ago
Job Description
Please note: this is a 3-month FULL-TIME INTERNSHIP.
Role & responsibilities
- Train and fine-tune ML/LLM models
- Build scalable APIs with FastAPI
- Work on backend apps using Django
- Analyze datasets using pandas, NumPy, scikit-learn
- Collaborate with Full Stack Developers to deliver AI/ML features
- Eagerness to learn emerging AI/ML technologies and apply them to real-world problems
- Understand and support fine-tuning of pretrained models (e.g., GPT, LLaMA, Mistral, BERT) for specific use cases.
- Learn to implement RAG pipelines with vector databases (e.g., FAISS, Pinecone) for context-aware AI/ML solutions; a minimal retrieval sketch follows this list.
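As a rough sketch of the retrieval step in such a RAG pipeline, the following example indexes a few documents with FAISS and sentence-transformers; the model name and documents are illustrative assumptions, not part of the internship scope.

```python
# Minimal retrieval step for a RAG pipeline: embed documents, index them in
# FAISS, and fetch the passages most relevant to a query.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval above 10,000 INR.",
    "Support tickets are triaged every morning.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

# Inner-product search over normalized vectors = cosine similarity.
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(np.asarray(doc_vectors, dtype="float32"))

query = "How long does invoice processing take?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)

# The retrieved passages would then be concatenated into the LLM prompt as context.
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[i]}")
```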
Preferred candidate profile
- Python with strong OOP understanding
- Familiarity with AI frameworks like PyTorch, TensorFlow (Preferred)
- Exposure to ML tools (pandas, scikit-learn, NumPy)
- Strong understanding of back-end frameworks (FastAPI/Django)
- Interest in LLMs, APIs, and production-ready ML
- Good problem-solving, logical reasoning, and analytical skills
- Basic knowledge of front-end (ReactJS/Next.js) and dashboards
- Academic project experience or previous work experience in AI, ML, NLP, or data engineering
Data Science
Posted 1 day ago
Job Description
Why Join Iris?
Are you ready to do the best work of your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to grow in an award-winning culture that truly values your talent and ambitions?
Join Iris Software — one of the fastest-growing IT services companies — where you own and shape your success story.
About Us
At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential.
With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services.
Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.
Working with Us
At Iris, every role is more than a job — it's a launchpad for growth.
Our Employee Value Proposition, "Build Your Future. Own Your Journey.", reflects our belief that people thrive when they have ownership of their career and the right opportunities to shape it.
We foster a culture where your potential is valued, your voice matters, and your work creates real impact. With cutting-edge projects, personalized career development, continuous learning and mentorship, we support you to grow and become your best — both personally and professionally.
Curious what it's like to work at Iris? Head to this video for an inside look at the people, the passion, and the possibilities. Watch it here.
Job Description
We are seeking a Data Science Engineer to design, build, and optimize scalable data and machine learning systems. This role requires strong software engineering skills, a deep understanding of data science workflows, and the ability to work cross-functionally to translate business problems into production-level data solutions.
Key Responsibilities
- Design, implement, and maintain data science pipelines from data ingestion to model deployment.
- Collaborate with data scientists to operationalize ML models and algorithms in production environments.
- Develop robust APIs and services for ML model inference and integration (see the sketch after this list).
- Build and optimize large-scale data processing systems using Spark, Pandas, or similar tools.
- Ensure data quality and pipeline reliability through rigorous testing, validation, and monitoring.
- Work with cloud infrastructure (AWS) for scalable ML deployment.
- Manage model versioning, feature engineering workflows, and experiment tracking.
- Optimize performance of models and pipelines for latency, cost, and throughput.
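As an illustration of an ML inference API of the kind mentioned above, here is a minimal FastAPI sketch serving a scikit-learn model; the model path and payload shape are assumptions for illustration, not Iris-specific details.

```python
# Minimal ML inference service: FastAPI endpoint wrapping a scikit-learn model.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-inference")
model = joblib.load("model.pkl")  # assumed: a fitted scikit-learn estimator


class Features(BaseModel):
    values: list[float]  # flat feature vector, in training column order


@app.post("/predict")
def predict(features: Features) -> dict:
    # scikit-learn expects a 2D array: one row per sample.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```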
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of experience in a data science, ML engineering, or software engineering role.
- Proficiency in Python (preferred) and SQL; knowledge of Java, Scala, or C++ is a plus.
- Experience with data science libraries like Scikit-learn, XGBoost, TensorFlow, or PyTorch.
- Familiarity with ML deployment tools such as MLflow, SageMaker, or Vertex AI.
- Solid understanding of data structures, algorithms, and software engineering best practices.
- Experience working with databases (SQL, NoSQL) and data lakes (e.g., Delta Lake, BigQuery).
Preferred Qualifications
- Experience with containerization and orchestration (Docker, Kubernetes).
- Experience working in Agile or cross-functional teams.
- Familiarity with streaming data platforms (Kafka, Spark Streaming, Flink).
Soft Skills
- Strong communication skills to bridge technical and business teams.
- Excellent problem-solving and analytical thinking.
- Self-motivated and capable of working independently or within a team.
- Passion for data and a curiosity-driven mindset.
Mandatory Competencies
Data Science and Machine Learning - Data Science and Machine Learning - AI/ML
Data Science and Machine Learning - Data Science and Machine Learning - Python
Database - Database Programming - SQL
Cloud - AWS - Tensorflow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
Data Science and Machine Learning - Data Science and Machine Learning - Pytorch
Data Science and Machine Learning - Data Science and Machine Learning - AWS Sagemaker
Tech - Data Structure and Algorithms
Programming Language - Java - Core Java (java 8+)
Programming Language - Scala - Scala
DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes)
Agile - Agile - Extreme Programming
Middleware - Message Oriented Middleware - Messaging (JMS, ActiveMQ, RabbitMQ, Kafka, SQS, ASB etc.)
Beh - Communication and collaboration
Perks And Benefits For Irisians
Iris provides world-class benefits for a personalized employee experience. These benefits are designed to support financial, health and well-being needs of Irisians for a holistic professional and personal growth. Click here to view the benefits.
Data Science
Posted 1 day ago
Job Description
By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
Job Description
The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet.
Bengaluru, the city, which is India's epicenter of Innovation, has been selected to be home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.
At Takeda's ICC we Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
The Opportunity
As a Data Scientist, you will have the opportunity to apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are currently seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process.
Objectives/Purpose: Emphasize the design, development, and deployment of AI/Gen AI models in production environments, along with a strong focus on data engineering and end-to-end architectural understanding.
Accountabilities:
- Data Engineering: Design and implement robust data pipelines and ETL processes to support AI/Gen AI model development and deployment (a minimal pipeline sketch follows this list).
- AI/Gen AI Model Deployment: Lead the deployment of AI/Gen AI models in production environments, ensuring scalability, reliability, and maintainability.
- End-to-End Architecture: Develop and maintain comprehensive architectural documentation, ensuring alignment with enterprise architecture principles.
- Compliance and Standards: Ensure all solutions comply with architectural, security, and privacy standards.
- Collaboration: Work closely with data scientists, data engineers, and other stakeholders to deliver high-quality AI solutions.
- Continuous Improvement: Establish and institutionalize regular reviews for solution adoption and continuous improvement.
- Continuously improve model performance by analyzing and refining model architectures and processes.
- Familiarity with containerization tools (e.g., Docker, Kubernetes) for deploying models.
- Experience with model monitoring and performance tracking in production environments.
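As a rough sketch of the kind of pipeline described in the data engineering accountability, here is a minimal Airflow DAG; the task logic, file paths, and schedule are illustrative assumptions, not Takeda's actual stack.

```python
# Minimal daily ETL pipeline sketched as an Airflow DAG: stage a raw CSV,
# then clean and publish it as curated Parquet.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Assumed raw source; in practice this would read from a warehouse or API.
    pd.read_csv("/data/raw/events.csv").to_parquet("/data/staged/events.parquet")


def transform():
    df = pd.read_parquet("/data/staged/events.parquet")
    df = df.dropna().assign(loaded_at=datetime.utcnow().isoformat())
    df.to_parquet("/data/curated/events.parquet")


with DAG(
    dag_id="events_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```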
Education, Behavioral Competencies, and Skills
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or related fields.
- Experience: 5+ years of experience in data engineering, AI/Gen AI model deployment, and end-to-end architectural design.
- Technical Skills: Proficiency in SQL, Databricks, Python, R, and cloud platforms (e.g., AWS, Azure, GCP). Experience with MLOps tools and frameworks.
- Architectural Skills: Strong understanding of data architecture, data warehousing, and data governance.
- Soft Skills: Excellent communication and leadership skills, with the ability to mentor and guide development teams.
Additional Sections
- AI/Gen AI Focus: Highlight specific experience with AI/Gen AI technologies and frameworks (e.g., TensorFlow, PyTorch, Hugging Face).
- Data Engineering Focus: Emphasize experience with data engineering tools and technologies (e.g., Apache Spark, Kafka, Airflow).
- Production Environment: Detail experience in deploying and managing AI models in production environments, including monitoring and optimization.
WHAT TAKEDA CAN OFFER YOU:
- Takeda is certified as a Top Employer, not only in India, but also globally. No investment we make pays greater dividends than taking good care of our people.
- At Takeda, you take the lead on building and shaping your own career.
- Joining the ICC in Bengaluru will give you access to high-end technology, continuous training and a diverse and inclusive network of colleagues who will support your career growth.
BENEFITS:
It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are:
- Competitive Salary + Performance Annual Bonus
- Flexible work environment, including hybrid working
- Comprehensive Healthcare Insurance Plans for self, spouse, and children
- Group Term Life Insurance and Group Accident Insurance programs
- Health & Wellness programs including annual health screening, weekly health sessions for employees.
- Employee Assistance Program
- 3 days of leave every year for Voluntary Service, in addition to Humanitarian Leaves
- Broad Variety of learning platforms
- Diversity, Equity, and Inclusion Programs
- Reimbursements – Home Internet & Mobile Phone
- Employee Referral Program
- Leaves – Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days)
ABOUT ICC IN TAKEDA:
- Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day.
- As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.
Locations
IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time