1089 Data Mining jobs in Bengaluru
Data Mining Executive
Posted today
Job Viewed
Job Description
Roles and Responsibilities
- Identify new business opportunities through data mining techniques to drive revenue growth.
- Develop and implement effective strategies for extracting valuable insights from large datasets.
- Design and maintain databases, dashboards, and reports using Excel and other tools.
- Collaborate with cross-functional teams to integrate data analysis into business decision-making processes.
Desired Candidate Profile
- 0-3 years of experience in data extraction, analysis, or a related field.
- Proficiency in working with large datasets using various software tools (e.g., Excel).
- Excellent communication skills with the ability to present complex technical information simply.
Professional, Statistical Analysis
Posted today
Job Viewed
Job Description
Calling all innovators – find your future at Fiserv.
We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.
Job Title
Professional, Statistical Analysis
Requirements:
- Required: understanding of the cash advance/banking/lending business.
- Required: knowledge of market base sizing and portfolio analytics.
- Data analytics and univariate analysis using Excel and Python.
- A/B testing of prices and strategies, and price elasticity of demand (a minimal sketch follows this list).
- Good to have: understanding of marketing campaign frameworks and execution.
- Eagerness to learn new software (e.g., Palantir, Snowflake).
- Excellent communication and interpersonal skills.
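The A/B price testing called out above usually comes down to comparing conversion rates between a control price and a test price. As a minimal, hedged sketch (the counts, group sizes, and variable names below are illustrative assumptions, not Fiserv data), a two-proportion z-test in Python could look like this:

```python
# Minimal sketch of an A/B price test: compare conversion rates between a
# control price and a test price using a two-proportion z-test.
# Counts and group labels are illustrative assumptions.
from statsmodels.stats.proportion import proportions_ztest

conversions = [340, 395]   # [control, test] conversions
offers = [5000, 5000]      # offers shown in each variant

stat, p_value = proportions_ztest(count=conversions, nobs=offers)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# A small p-value (e.g., < 0.05) suggests the test price changes conversion;
# elasticity can then be estimated from the relative change in demand vs. price.
```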
Role & Responsibilities:
- Portfolio monitoring and reporting.
- Total addressable market (TAM) and scope analysis.
- Enhancing the business rule engine and optimizing the credit risk pipeline.
- Ad hoc campaign analysis and execution as needed.
- Conduct A/B price testing.
- Maintain detailed analysis and process documentation, and keep trackers up to date.
- Work closely with business/product teams to understand their needs and provide solutions.
- Oversee technical pipeline upgrades by communicating requirements to the DevOps team and validating outcomes.
- Publish a weekly MIS for stakeholders summarizing the current status of capabilities.
Technical Skills:
- MS Excel (Must Have)
- MS PowerPoint
- SQL
- Python
- PySpark (Good to have)
- Power BI (Good to have)
Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).
Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.
Note to agencies:
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.
Warning about fake job posts:
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Professional, Statistical Analysis

Posted 3 days ago
Job Viewed
Job Description
We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day - quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.
**Job Title**
Professional, Statistical Analysis
**What Does a great Analytics Professional do at Fiserv?**
Fiserv is looking for a talented analytics professional who extracts valuable insights from data, enabling data-driven decision-making and contributing to the organization's success. Using product knowledge, subject matter expertise, and technical skills, you will provide the highest level of service to resolve client and customer issues.
**What will you do:**
+ Deep-dive data analysis: detect patterns in data and present findings to the business
+ Work with the business to gather requirements and build analytical solutions
+ Understand business drivers and how data is used to inform and drive decisions and behaviors
+ Conduct complex analysis using statistical and other quantitative techniques
+ Work with large volumes of data and apply ML techniques to solve business problems (a minimal sketch follows this list)
+ Apply data manipulation, feature engineering, and data preprocessing techniques
+ Apply statistical methods and techniques to problem solving
+ Ensure timely delivery of projects
+ Be results-oriented, with excellent communication and interpersonal skills
+ Communicate validation findings and recommendations to stakeholders clearly and concisely, both verbally and through written reports or presentations
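As a hedged illustration of the workflow these bullets describe (preprocessing, model training, evaluation), the sketch below uses scikit-learn on synthetic data; the column names, target, and numbers are assumptions for demonstration only, not part of the role:

```python
# Minimal sketch: preprocessing + model training + evaluation with scikit-learn.
# The synthetic data, feature names, and target are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "txn_amount": rng.gamma(2.0, 50.0, 1000),
    "channel": rng.choice(["card", "app", "atm"], 1000),
    "chargeback": rng.integers(0, 2, 1000),   # illustrative binary target
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["txn_amount", "channel"]], df["chargeback"], test_size=0.2, random_state=0
)

model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), ["txn_amount"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel"]),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```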
**What you will need to have:**
+ Bachelor's degree in Statistics/mathematics or equivalent qualification
+ 3-5 years of experience in solving business problems by applying advanced statistics.
+ Hands-on experience with statistical tools such as Python and R.
+ Expertise in Python with relevant libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
+ Hands-on experience with machine learning frameworks along with solid understanding of statistical modeling, hypothesis testing, and experimental design.
**What would be nice to have:**
+ Experience with analysis using cloud data platforms such as Snowflake and Azure.
+ Knowledge of GenAI and its applications.
+ Experience in the banking and payments domain.
Thank you for considering employment with Fiserv. Please:
+ Apply using your legal name
+ Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).
**Our commitment to Diversity and Inclusion:**
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.
**Note to agencies:**
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.
**Warning about fake job posts:**
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Statistical Analysis Programmer
Posted today
Job Viewed
Job Description
Role: Statistical Programmer
Skill: SDTM/ADAM/TLF
Mode: Remote
Experience: 7+ years
Job Location: PAN India
Roles & Responsibilities:
- Plan, coordinate, and implement the following for complex studies: (i) the programming, testing, and documentation of statistical programs used to create statistical tables, figures, and listings; (ii) the programming of analysis datasets (derived datasets) and transfer files for internal and external clients (see the sketch after this list); and (iii) programming quality-control checks on the source data and reporting data issues periodically.
- Ability to interpret project level requirements and develop programming specifications, as appropriate, for complex studies.
- Provide advanced technical expertise in conjunction with internal and external clients, and independently bring project solutions to SP teams and Statistical Programming department, for complex studies.
- Fulfill project responsibilities at the level of technical team lead for single complex studies or group of studies.
- Directly communicate with internal and client statisticians and clinical team members to ensure appropriate understanding of requirements and project timelines.
- Estimate programming scope of work, manage resource assignments, communicate project status and negotiate/re-negotiate project timelines for deliverables.
- Use and promote the use of established standards, SOP and best practices.
- Provide training and mentoring to SP team members and Statistical Programming department staff.
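Analysis-dataset programming of the kind described in item (ii) is normally done in SAS for these studies; purely as a hedged illustration of the derivation involved, the Python sketch below builds a minimal ADSL-like dataset from SDTM-style DM and EX records (variable names follow CDISC conventions, but the data and derivation rules are assumptions for demonstration):

```python
# Hedged illustration only: deriving a minimal ADSL-like analysis dataset
# from SDTM-style DM (demographics) and EX (exposure) domains with pandas.
# In practice this work is done in SAS per the study's SAP and specifications.
import pandas as pd

dm = pd.DataFrame({
    "USUBJID": ["S-001", "S-002", "S-003"],
    "AGE": [54, 61, 47],
    "SEX": ["F", "M", "F"],
    "ARM": ["DRUG A", "PLACEBO", "DRUG A"],
})
ex = pd.DataFrame({
    "USUBJID": ["S-001", "S-001", "S-002"],
    "EXSTDTC": ["2024-01-10", "2024-02-10", "2024-01-12"],
    "EXENDTC": ["2024-02-09", "2024-03-11", "2024-02-11"],
})

# Derive first treatment start and last treatment end per subject, then merge.
trt = (ex.assign(EXSTDTC=pd.to_datetime(ex["EXSTDTC"]),
                 EXENDTC=pd.to_datetime(ex["EXENDTC"]))
         .groupby("USUBJID")
         .agg(TRTSDT=("EXSTDTC", "min"), TRTEDT=("EXENDTC", "max"))
         .reset_index())

adsl = dm.merge(trt, on="USUBJID", how="left")
adsl["SAFFL"] = adsl["TRTSDT"].notna().map({True: "Y", False: "N"})  # safety population flag
print(adsl)
```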
Qualifications:
- Bachelor's degree in Mathematics, Computer Science, Statistics, or a related field with 4 years of relevant experience (required), or
- Master's degree in Mathematics, Computer Science, Statistics, or a related field with 3 years of relevant experience (preferred)
- Typically requires 7+ years of prior relevant experience
- An equivalent combination of education, training, and experience may be accepted in lieu of a degree
- Requires advanced knowledge of the job area and broad knowledge of other related job areas, typically obtained through advanced education combined with experience
- Advanced knowledge of statistics, programming, and/or the clinical drug development process
- Advanced knowledge of computing applications such as Base SAS, SAS Graph and SAS Macro Language, where applicable
- Excellent organizational, interpersonal, leadership and communication skills
- Excellent accuracy and attention to detail
- Aptitude for mathematical calculations and problem solving
- Advanced knowledge of relevant Data Standards (such as CDISC/ADaM/SDTM)
- Ability to establish and maintain effective working relationships with coworkers, managers and clients
Statistical Analysis Programmer
Posted today
Job Viewed
Job Description
TCS is hiring Statistical Programmer
Skill - Statistical Programmer
Job Location – Mumbai, Pune, Bangalore
Experience Range – 3 to 6 Years
Educational Qualification(s) Required – Graduate/Postgraduate (any life-science/engineering discipline)
Required Skillsets:
- Proficiency in SAS/R to program and validate ADaM datasets, TLFs, and statistical analyses
- Proficiency in using statistical software and programming tools
- Familiarity with clinical study protocols, statistical analysis plans (SAPs), tables, listings, and figures (TLFs), and statistical programming documentation.
- Knowledge of CDISC standards and therapeutic areas (Oncology, Immunology, Neuroscience, etc.)
- Experience in generating TLFs, programming macros, and data manipulation using SAS (a brief TLF-style sketch follows this list).
- Good to have: experience in other statistical programming languages (R).
- Ability to interact professionally with statisticians, study teams, and external partners
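As a hedged illustration of the TLF-style outputs mentioned above (in practice produced in SAS or R against the study's SAP and shells), the sketch below tabulates a simple demographics summary by treatment arm; the data and variable names are assumptions for demonstration only:

```python
# Hedged illustration: a simple demographics summary table by treatment arm,
# of the kind a TLF shell might specify. Real deliverables would follow the
# study SAP and be produced/validated in SAS or R per project standards.
import pandas as pd

adsl = pd.DataFrame({
    "USUBJID": ["S-001", "S-002", "S-003", "S-004"],
    "TRT01P": ["DRUG A", "PLACEBO", "DRUG A", "PLACEBO"],
    "AGE": [54, 61, 47, 58],
    "SEX": ["F", "M", "F", "M"],
})

# Continuous variable: n, mean, SD of AGE by planned treatment.
age_summary = adsl.groupby("TRT01P")["AGE"].agg(n="count", mean="mean", sd="std").round(1)

# Categorical variable: counts and percentages of SEX by planned treatment.
sex_counts = adsl.groupby(["TRT01P", "SEX"]).size().unstack(fill_value=0)
sex_pct = sex_counts.div(sex_counts.sum(axis=1), axis=0).mul(100).round(1)

print(age_summary)
print(sex_counts)
print(sex_pct)
```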
Key Responsibilities:
- Collaborate with statisticians to develop, review, and approve Statistical Programming Plans (SPPs). Implement Statistical Analysis Plans (SAPs) and SPPs to create ADaM data specifications and related programming requirements.
- Develop and maintain programming documentation, such as annotated program code, programming specifications, and validation plans.
- Perform quality control checks on statistical programming deliverables to ensure accuracy, consistency, and adherence to programming standards.
- Assist in the development and implementation of standard programming macros, utilities, and tools to improve efficiency and consistency in programming tasks
Interested candidates can email -
Regards,
Laharika-TCS HR
Process Mining Data Specialist
Posted today
Job Viewed
Job Description
At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signaling and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars.
Could you be the full-time Process Mining Data Specialist we're looking for in our dynamic IS&T organization?
Your future role
Step into a pivotal, dynamic role and apply your data engineering expertise in the data-driven field of process mining. You'll be part of a forward-thinking, collaborative team driving digital transformation across Alstom. As a Process Mining Data Specialist, you will be instrumental in enabling data-driven insights and execution management through advanced data integration and modeling. You'll work closely with cross-functional teams, including Manufacturing, Procurement, Supply Chain, Finance, and Engineering, leveraging Celonis to unlock process adoption and improvement.
Your responsibilities will include designing and maintaining robust data pipelines, ensuring high-quality data availability, and supporting process transparency through technical enablement. Specifically, we'll look to you for:
- Connecting and transforming data from enterprise systems into Celonis (a minimal event-log sketch follows this list)
- Designing scalable ETL/ELT pipelines and data models that reflect business processes
- Monitoring data load performance, completeness, and consistency
- Ensuring compliance with data governance and security standards
- Supporting execution applications and automation flows
- Collaborating with analysts and process owners to align technical delivery with business needs
- Documenting and troubleshooting data integration and quality issues
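As a hedged illustration of the first bullet (the table and column names are assumptions, not Alstom's actual SAP extracts), the sketch below shows how ERP-style records might be reshaped into the case/activity/timestamp event-log format that process-mining platforms such as Celonis consume:

```python
# Hedged sketch: shaping ERP-style records into a process-mining event log
# (one row per case/activity/timestamp). Table and column names are assumed.
import pandas as pd

# Purchase-order creations and goods receipts, as two raw extracts.
po_created = pd.DataFrame({
    "po_number": ["4500001", "4500002"],
    "created_at": ["2024-03-01 09:15", "2024-03-02 11:40"],
})
goods_receipt = pd.DataFrame({
    "po_number": ["4500001", "4500002"],
    "posted_at": ["2024-03-07 14:02", "2024-03-09 08:25"],
})

# Normalize each extract to the event-log schema: case id, activity, timestamp.
events = pd.concat([
    po_created.rename(columns={"created_at": "timestamp"}).assign(activity="Create Purchase Order"),
    goods_receipt.rename(columns={"posted_at": "timestamp"}).assign(activity="Record Goods Receipt"),
])
events["timestamp"] = pd.to_datetime(events["timestamp"])
event_log = (events.rename(columns={"po_number": "case_id"})
                   .sort_values(["case_id", "timestamp"])
                   .reset_index(drop=True))
print(event_log)
```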
All about you
We value passion and attitude over experience. While we don't expect you to have every single skill, we've listed some qualifications that will help you succeed and grow in this role:
- Degree in Data Engineering, Computer Science, Information Systems, or a related field
- 5+ years of experience in data engineering, preferably in a CoE or enterprise context.
- 3+ years of hands-on experience with Celonis EMS and process mining
- Strong experience in SAP data structures and integration
- Proficiency in SQL and data transformation techniques
- Familiarity with Celonis or other process mining platforms
- Knowledge of Python or other scripting languages is a plus
- Understanding of data privacy and governance principles
- Excellent analytical and problem-solving skills
- Strong communication skills and ability to work with diverse stakeholders
- Curiosity and a continuous learning mindset
Things you'll enjoy
Join us on a lifelong transformative journey – the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career. You'll also:
- Enjoy stability, challenges, and a long-term career free from boring daily routines
- Collaborate with transverse teams and helpful colleagues
- Contribute to innovative and visible projects
- Utilize our flexible working environment
- Steer your career in whatever direction you choose across functions and countries
- Benefit from our investment in your development through award-winning learning
- Progress towards leadership roles within our IS&T function
- Benefit from a fair and dynamic reward package that recognizes your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension)
You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you
Important to note
As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
Data Science & Machine Learning Engineer
Posted today
Job Viewed
Job Description
Total Experience: 4 years and above
Location: Bangalore/Chennai/Hyderabad
Notice Period: 15-30 days max
JOB DESCRIPTION
Join our fast-growing team to build a unified platform for data analytics, machine learning, and generative AI. You'll integrate the AI/ML toolkit, real-time streaming into a feature store, and dashboards, turning raw events into reliable features, insights, and user-facing analytics at scale.
What you’ll do
- Design and build streaming data pipelines (exactly-once or effectively-once) from event sources into low-latency feature serving and NRT and OLAP queries.
- Develop an AI/ML toolkit: reusable libraries, SDKs, and CLIs for data ingestion, feature engineering, model training, evaluation, and deployment.
- Stand up and optimize a production feature store (schemas, SCD handling, point-in-time correctness, TTL/compaction, backfills); see the sketch after this list.
- Expose features and analytics via well-designed APIs/services; integrate with model serving and retrieval for ML/GenAI use cases.
- Build and operationalize Superset dashboards for monitoring data quality, pipeline health, feature drift, model performance, and business KPIs.
- Implement governance and reliability: data contracts, schema evolution, lineage, observability, alerting, and cost controls.
- Partner with UI/UX, data science, and backend teams to ship end-to-end workflows from data capture to real-time inference and decisioning.
- Drive performance: benchmark and tune the distributed DB (partitions, indexes, compression, merge settings), streaming frameworks, and query patterns.
- Automate with CI/CD, infrastructure-as-code, and reproducible environments for quick, safe releases.
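Point-in-time correctness, mentioned above, means a training row may only see feature values as of that row's timestamp, never later ones. As a minimal, hedged sketch with made-up entity and column names, pandas' as-of join captures the idea:

```python
# Hedged sketch of a point-in-time-correct feature join: each label row is
# matched to the latest feature value at or before its timestamp, never after.
# Entity/column names and values are illustrative assumptions.
import pandas as pd

features = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],
    "event_time": pd.to_datetime(["2024-05-01 10:00", "2024-05-03 09:00", "2024-05-02 12:00"]),
    "txn_count_7d": [3, 5, 1],
}).sort_values("event_time")

labels = pd.DataFrame({
    "user_id": ["u1", "u2"],
    "label_time": pd.to_datetime(["2024-05-02 08:00", "2024-05-04 00:00"]),
    "churned": [0, 1],
}).sort_values("label_time")

# direction="backward" keeps only feature rows with event_time <= label_time
# for the same user, which is what point-in-time correctness requires.
training = pd.merge_asof(
    labels, features,
    left_on="label_time", right_on="event_time",
    by="user_id", direction="backward",
)
print(training)
```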
Tech you may use
Languages: Python, Java/Scala, SQL
Streaming/Compute: Kafka (or Pulsar), Spark, Flink, Beam
Storage/OLAP: ClickHouse (primary), object storage (S3/GCS), Parquet/Iceberg/Delta
Orchestration/Workflow: Airflow, dbt (for transformations), Makefiles/Poetry/pipenv
ML/MLOps: MLflow/Weights & Biases, KServe/Seldon, Feast/custom feature store patterns, vector stores (optional)
Dashboards/BI: Superset (plugins, theming), Grafana for ops
Platform: Kubernetes, Docker, Terraform, GitHub Actions/GitLab CI, Prometheus/OpenTelemetry
Cloud: AWS/GCP/Azure
What we’re looking for
- 4+ years building production data/ML or streaming systems with high TPS and large data volumes.
- Strong coding skills in Python and one of Java/Scala; solid SQL and data modeling.
- Hands-on experience with Kafka (or similar), Spark/Flink, and OLAP stores, ideally ClickHouse.
- GenAI pipelines: retrieval-augmented generation (RAG), embeddings, prompt/tooling workflows, model evaluation at scale.
- Proven experience designing feature pipelines with point-in-time correctness and backfills; understanding of online/offline consistency.
- Experience instrumenting Superset dashboards tied to ClickHouse for operational and product analytics.
- Fluency with CI/CD, containerization, Kubernetes, and infrastructure-as-code.
- Solid grasp of distributed systems and architecture fundamentals: partitioning, consistency, idempotency, retries, batching vs. streaming, and cost/perf trade-offs.
- Excellent collaboration skills; ability to work cross-functionally with DS/ML, product, and UI/UX.
- Ability to pass a CodeSignal prescreen coding test.
Grid Dynamics (Nasdaq:GDYN) is a digital-native technology services provider that accelerates growth and bolsters competitive advantage for Fortune 1000 companies. Grid Dynamics provides digital transformation consulting and implementation services in omnichannel customer experience, big data analytics, search, artificial intelligence, cloud migration, and application modernization. Grid Dynamics achieves high speed-to-market, quality, and efficiency by using technology accelerators, an agile delivery culture, and its pool of global engineering talent. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the US, UK, Netherlands, Mexico, India, Central and Eastern Europe.
To learn more about Grid Dynamics, please visit . Follow us on Facebook, Twitter, and LinkedIn.
Trainee Intern Data Science
Posted 12 days ago
Job Viewed
Job Description
Company Overview – WhatJobs Ltd
WhatJobs is a global job search engine and career platform operating in over 50 countries. We leverage advanced technology and AI-driven tools to connect millions of job seekers with opportunities, helping businesses and individuals achieve their goals.
Position: Data Science Trainee/Intern
Location: Commercial Street
Duration: 3 Months
Type: Internship/Traineeship (with potential for full-time opportunities)
Role Overview
We are looking for enthusiastic Data Science trainees/interns eager to explore the world of data analytics, machine learning, and business insights. You will work on real-world datasets, apply statistical and computational techniques, and contribute to data-driven decision-making at WhatJobs.
Key Responsibilities
- Collect, clean, and analyze datasets to derive meaningful insights (a small pandas sketch follows this list).
- Assist in building and evaluating machine learning models.
- Work with visualization tools to present analytical results.
- Support the team in developing data pipelines and automation scripts.
- Research new tools, techniques, and best practices in data science.
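As a small, hedged illustration of the collect-clean-analyze loop above (the file name and columns are made up for demonstration), a first pass in pandas might look like this:

```python
# Minimal sketch of a clean-and-summarize pass with pandas.
# The file name and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("job_applications.csv")          # hypothetical input file

# Basic cleaning: drop exact duplicates, normalize text, handle missing values.
df = df.drop_duplicates()
df["city"] = df["city"].str.strip().str.title()
df["salary"] = pd.to_numeric(df["salary"], errors="coerce")
df = df.dropna(subset=["city", "salary"])

# Simple insight: application count and median salary per city.
summary = (df.groupby("city")["salary"]
             .agg(applications="count", median_salary="median")
             .sort_values("median_salary", ascending=False))
print(summary.head(10))
```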
Requirements
- Basic knowledge of Python and data science libraries (Pandas, NumPy, Matplotlib, Scikit-learn).
- Understanding of statistics, probability, and data analysis techniques.
- Familiarity with machine learning concepts.
- Knowledge of Google Data Studio and BigQuery for reporting and data management.
- Strong analytical skills and eagerness to learn.
- Good communication and teamwork abilities.
What We Offer
- Hands-on experience with real-world data science projects.
- Guidance and mentorship from experienced data professionals.
- Opportunity to work with a global technology platform.
- Certificate of completion and potential for full-time role.