134 Evaluation Officer jobs in India
Monitoring Evaluation Officer- Nashik
Posted today
Job Description
Monitoring Evaluation Officer
Diploma/BE Mechanical
Freshers/Experienced
Salary: ₹15K
Ambad
- Experience: 0 - 1 years
- Salary: ₹1,00,000 - ₹1,50,000 P.A.
- Industry: Manufacturing / Production / Quality
- Qualification: Diploma, B.Tech/B.E.
- Key Skills: Checking Officer, Monitoring Evaluation Officer

About Company
- Contact Person:
- Address: Office - 1, First Floor, Sai Shilp, Above Chaska Maska Hotel, Tapovan Link Road, Kathe Lane
- Mobile:
Monitoring and Evaluation Officer
Posted today
Job Description
**Responsibilities**:
- Ensuring the effective implementation of the Teach Program.
- The 2 key aspects of your work would be Center Management & Volunteer Management.
- Manage the operational aspects of 2-3 Learning Centers and the volunteers in Bangalore as well as other cities across India.
- Be responsible for troubleshooting and addressing issues with the volunteers.
- Build leadership capacity among volunteers.
- Excellent communication skills (written and oral).
- Excellent project management skills with the ability to lead and drive work streams from initiation to completion.
- Self-starter with an excellent ability to work autonomously, identifying and serving needs with minimal supervision.
- Ability to organise and complete multiple tasks by establishing priorities while taking into consideration special assignments, frequent interruptions, deadlines, available resources and multiple reporting relationships.
- Excellent data analysis skills, in both qualitative and quantitative analysis.
- Report writing, including impact measurement and reporting.
- Knowledge of software such as Tableau, Google Data Analytics, and Stata.
**Must Haves**:
- Excellent relational and leadership skills to manage the volunteers.
- Ability to lead a team of volunteers and conduct meetings and trainings.
- Excellent communication and presentation skills.
- Willingness to travel once a quarter.
- Knowledge of data analysis and data visualization tools, and advanced Excel.
Pay: ₹25,500.00 - ₹33,000.00 per month
**Benefits**:
- Health insurance
- Internet reimbursement
- Paid sick time
- Paid time off
- Work from home
Schedule:
- Day shift
- Monday to Friday
- Weekend availability
Supplemental Pay:
- Quarterly bonus
Work Location: Remote
Monitoring and Evaluation Officer (NGO)
Posted today
Job Description
Implement Data Collection Activities: Coordinate and oversee data collection activities, ensuring that data is collected accurately, consistently, and in a timely manner.
Monitor and Maintain Data Collection System: Continuously monitor and maintain the data collection system, troubleshooting any issues that arise and implementing improvements as needed.
Data Analysis: Analyze collected data to assess program impact, efficiency, and effectiveness. Utilize various analytical techniques to derive meaningful insights and make evidence-based recommendations for program improvement.
Reporting: Prepare regular reports on program performance, synthesizing data and findings into clear and concise formats for internal and external stakeholders.
Capacity Building: Provide training and support to program staff and partners on data collection methods, tools, and techniques to ensure data quality and consistency.
Evaluation Design: Contribute to the design of program evaluations, including the development of evaluation frameworks, indicators, and methodologies.
Quality Assurance: Conduct regular data quality assessments and audits to ensure the accuracy, reliability, and integrity of collected data.
Knowledge Management: Document best practices, lessons learned, and success stories related to monitoring and evaluation activities, and share findings with relevant stakeholders.
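As an illustration of the data quality checks this role describes, a minimal sketch in Python might flag missing fields, duplicate respondent IDs, and out-of-range values before data enters analysis. The field names here are hypothetical, not from the posting:

```python
# Minimal data-quality check sketch (hypothetical survey field names).
# Flags missing values, duplicate respondent IDs, and out-of-range ages.
def quality_report(records):
    required = ("respondent_id", "age", "district")
    issues = []
    seen_ids = set()
    for i, rec in enumerate(records):
        for field in required:
            if rec.get(field) in (None, ""):
                issues.append((i, f"missing {field}"))
        rid = rec.get("respondent_id")
        if rid in seen_ids:
            issues.append((i, "duplicate respondent_id"))
        seen_ids.add(rid)
        age = rec.get("age")
        if isinstance(age, (int, float)) and not (0 <= age <= 120):
            issues.append((i, "age out of range"))
    return issues

records = [
    {"respondent_id": "R1", "age": 34, "district": "Nashik"},
    {"respondent_id": "R1", "age": 150, "district": ""},
]
print(quality_report(records))
# → [(1, 'missing district'), (1, 'duplicate respondent_id'), (1, 'age out of range')]
```

In practice such checks would run routinely as part of the data quality assessments and audits mentioned above, with discrepancies fed back to field teams for correction.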
**Requirements**:
- Bachelor's or Master's degree in Monitoring and Evaluation, Statistics, or a related field.
- Proven experience in designing and implementing monitoring and evaluation systems, preferably in the context of development programs.
- Strong analytical skills and proficiency in quantitative and qualitative data analysis techniques.
- Proficiency in statistical software such as SPSS, Stata, or R, and experience with data visualization tools.
- Excellent communication and interpersonal skills, with the ability to effectively convey complex information to diverse audiences.
Pay: ₹650,000.00 - ₹800,000.00 per year
Work Location: In person
Monitoring & Evaluation (M&E) Officer - Asia
Posted today
Job Description
The Monitoring, Evaluation, Learning and Quality Assurance (MELQA) team is searching for an M&E Officer to provide technical MEL support to the Asia portfolio during project design and implementation. With Orbis's regional presence in Africa, Asia, and Latin America, it is critical to ensure that all M&E systems are nested within the Orbis Global M&E framework. The M&E Officer will play an integral role in these efforts by supporting country offices in adapting the Global M&E framework to their country-specific contexts.
The M&E Officer works within a matrixed team of global colleagues, united by a common vision of a world where no one is needlessly blind - and we think data is a key part of achieving this vision.
**LOCATION**
Working remotely (Based in Asia: Preferred location would be in one of the countries where Orbis has long term projects - Bangladesh, India, Vietnam).
**REPORTING & WORKING RELATIONSHIPS**:
The M&E Officer will report directly to the Monitoring, Evaluation, Quality Assurance (MEQA) Specialist, work closely with all members of the MELQA team and the Global Program team, and liaise closely with Country Offices in Asia, Information Technology (IT), Development, and other relevant departments.
**ESSENTIAL JOB FUNCTIONS / KEY AREAS OF RESPONSIBILITY**
- Assist country program/project managers to build projects in Indicata, the project management and monitoring tool of Orbis International (where Orbis does not have a country M&E Manager).
- Develop data collection and monitoring tools as needed by each project.
- Provide training to country teams on M&E technical skills.
- Provide support to program/project managers to develop Data Quality Assessment (DQA) tools and methodology, as well as the use of DQA tools for their respective projects.
- Support the compilation and analysis of quarterly and annual numeric data.
- Take part in proposal development, particularly in Theory of Change (TOC) and logframe development, and write up the M&E section.
- Work closely with the country teams to ensure alignment and operationalization of the MEL tools (baseline data collection, data flow, indicator reference sheets, performance targets, etc.).
- Any other M&E activities as they arise.
**QUALIFICATIONS & EXPERIENCE**
- Bachelor’s degree or equivalent in public health, statistics, economics, international affairs, social science, or international development
- Knowledge and demonstrated proficiency in quantitative and qualitative methods, M&E planning, M&E system improvement, evaluation methods and processes, data use and data visualization.
- Experience working in Asia
- Working knowledge of basic Excel and statistical analysis software (Epi Info, SPSS, Stata, etc.) is desirable.
- Strong skills in the development of simple project databases (e.g., Excel or Access) and in data management
- Knowledge of/ experience in gender mainstreaming in development projects and/or government programs
- Experience in participating in institutional strengthening and cohesiveness initiatives preferred
**SKILLS & ABILITIES**
- Ability to communicate complex data in a simple, actionable way
- Excellent organizational skills: ability to manage multiple assignments and competing priorities
- Excellent quantitative analysis and data visualization skills
- Excellent oral communication and training skills to present information and respond to questions effectively
- Exceptional writing skills to effectively create clear and concise reports, strategies and/or guidelines
- Multilingualism is a plus
- Flexible, pro-active, and open-minded work style: the ability to work productively both independently and in a team-based environment, in-office and remotely.
- Ability to travel internationally 20% of the time for extended periods
AI Research Engineer (Model Evaluation)
Posted today
Job Description
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.
Innovate with Tether
Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT , relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services.
But that’s just the beginning:
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET , our flagship app that redefines secure and private data sharing.
Tether Education : Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution : At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.
Why Join Us?
Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry.
If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you.
Are you ready to be part of the future?
About the job:
As a member of our AI model team, you will drive innovation across the entire AI lifecycle by developing and implementing rigorous evaluation frameworks and benchmark methodologies for pre-training, post-training, and inference. Your work will focus on designing metrics and assessment strategies that ensure our models are highly responsive, efficient, and reliable across real-world applications. You will work on a wide spectrum of systems, from resource-efficient models designed for limited hardware environments to complex, multi-modal architectures that integrate text, images, and audio.
We expect you to have deep expertise in advanced model architectures, pre-training and post-training practices, and inference evaluation frameworks. Adopting a hands-on, research-driven approach, you will develop, test, and implement novel evaluation strategies that rigorously track key performance indicators such as accuracy, latency, throughput, and memory footprint. Your evaluations will not only benchmark model performance at each stage, from the foundational pre-training phase to targeted post-training refinements and final inference but will also provide actionable insights.
A key element of this role is collaborating with cross-functional teams including product management, engineering, and operations to share your evaluation findings and integrate stakeholder feedback. You will engineer robust evaluation pipelines and performance dashboards that serve as a common reference point for all stakeholders, ensuring that the insights drive continuous improvement in model deployment strategies. The ultimate goal is to set industry-leading standards for AI model quality and reliability, delivering scalable performance and tangible value in dynamic, real-world scenarios.
Responsibilities :
Develop, test, and deploy integrated frameworks that rigorously assess models during pre-training, post-training, and inference. Define and track key performance indicators such as accuracy, loss metrics, latency, throughput, and memory footprint across diverse deployment scenarios.
Curate high-quality evaluation datasets and design standardized benchmarks to reliably measure model quality and robustness. Ensure that these benchmarks accurately reflect improvements achieved through both pre-training and post-training processes, and drive consistency in evaluation practices.
Engage with product management, engineering, data science, and operations teams to align evaluation metrics with business objectives. Present evaluation findings, actionable insights, and recommendations through comprehensive dashboards and reports that support decision-making across functions.
Systematically analyze evaluation data to identify and resolve bottlenecks across the model lifecycle. Propose and implement optimizations that enhance model performance, scalability, and resource utilization on resource-constrained platforms, ensuring efficient pre-training, post-training, and inference.
Conduct iterative experiments and empirical research to refine evaluation methodologies, staying abreast of emerging techniques and trends. Leverage insights to continuously enhance benchmarking practices and improve overall model reliability, ensuring that all stages of the model lifecycle deliver measurable value in real-world applications.
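The performance indicators named above (latency, throughput, memory footprint) can be measured with a very small harness. The sketch below is a minimal illustration only, with a stand-in function in place of a real model; a production pipeline would add memory and accuracy tracking:

```python
import time
import statistics

# Minimal inference-benchmark sketch (hypothetical model stand-in).
# Measures per-call latency and derives throughput; a real harness would
# also track memory footprint and accuracy against a labelled dataset.
def benchmark(model_fn, inputs, warmup=2):
    for x in inputs[:warmup]:          # warm-up calls, excluded from timing
        model_fn(x)
    latencies = []
    for x in inputs:
        t0 = time.perf_counter()
        model_fn(x)
        latencies.append(time.perf_counter() - t0)
    total = sum(latencies)
    return {
        "p50_latency_s": statistics.median(latencies),
        "throughput_qps": len(inputs) / total if total else float("inf"),
        "calls": len(inputs),
    }

fake_model = lambda x: x * 2           # stand-in for a real inference call
report = benchmark(fake_model, list(range(100)))
print(sorted(report))
```

Running the same harness across pre-training checkpoints, post-training refinements, and deployment targets is what makes the numbers comparable across the lifecycle stages the role describes.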
A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (with strong publications at A* conferences).
Demonstrated experience in designing and evaluating AI models at multiple stages from pre-training, post-training, and inference. You should be proficient in developing evaluation frameworks that rigorously assess accuracy, convergence, loss improvements, and overall model robustness, ensuring each stage of the AI lifecycle delivers measurable real-world value.
Strong programming skills and hands-on expertise in evaluation benchmarks and frameworks are essential. Familiarity with building, automating, and scaling complex evaluation and benchmarking pipelines, and experience with performance metrics: latency, throughput, and memory footprint.
Proven ability to conduct iterative experiments and empirical research that drive the continuous refinement of evaluation methodologies. You should be adept at staying abreast of emerging trends and techniques, leveraging insights to enhance benchmarking practices and model reliability.
Demonstrated experience collaborating with diverse teams such as product, engineering, and operations in order to align evaluation strategies with organizational goals. You must be skilled at translating technical findings into actionable insights for stakeholders and driving process improvements across the model development lifecycle.
Associate, Research Monitoring and Evaluation
Posted today
Job Description
Position Overview:
The Associate, RM&E will report to the Project Lead and will work closely with the RM&E team of the India Country Office to ensure alignment with RM&E processes and quality standards. The role will focus on strengthening government monitoring processes, supporting data quality mechanisms, assisting with assessments and capacity building of key stakeholders, and regularly analysing data.
Roles and Responsibilities:
- Develop program monitoring tools and formats, including reviewing existing tools and refining them as needed.
- Regularly analyse monitoring and assessment data to generate key insights.
- Support the creation of quarterly presentations highlighting key programmatic trends and findings.
- Contribute to the refinement, translation, and formatting of assessment tools.
- Assist in the preparation and administration of government-led and internal assessments.
- Ensure data is complete, accurate, and aligned with program monitoring indicators and reporting formats.
- Conduct data quality checks, identify discrepancies, and make necessary corrections in coordination with the RM&E team.
- Identify key issues around program quality and links between monitoring and evaluation and quality assurance and improvement.
- Develop program monitoring dashboards to enable real-time tracking and support mid-course corrective actions.
- Work closely with program and field teams across project locations to train them on monitoring processes and strengthen the use of program monitoring data.
- Provide on-ground support during trainings and follow-up sessions for improved understanding and usage of tools.
- Travel to project locations, as required, to oversee the implementation of RM&E work including program monitoring and related training & review meetings.
- Carry out any other duties as assigned, aligned with RM&E priorities.
Qualifications:
Required:
- At least a Post-Graduate degree in Social Sciences, Education, Public Policy, Development Studies, Statistics, Economics, Data Science and Survey Research, or a related field.
- At least three (3) years of professional experience in monitoring and evaluation.
- Experience with Foundational Literacy and Numeracy (FLN) is desirable.
- Strong data analysis and data visualization skills, including dashboard creation.
- Proficiency in Microsoft Excel, Power BI and SurveyCTO.
- Ability to multitask effectively, manage multiple priorities and meet deadlines in a fast-paced environment.
- Strong verbal and written communication skills in English and Hindi.
Compensation:
Room to Read offers a competitive salary with excellent benefits. Benefits include a thirteenth-month bonus, health insurance and a retirement plan. The non-monetary compensation includes a unique opportunity to be part of an innovative, meaningful, and rapidly growing organization that is transforming the lives of millions of children in developing countries through literacy and gender equality in education.
_Room to Read is a child-safe organization._
Location(s)
India - Madhya Pradesh
To be successful at Room to Read, you will also:
- Have passion for our mission and a strong desire to impact a dynamic nonprofit organization
- Be a proactive and innovative thinker who achieves results and creates positive change
- Have a very high level of personal and professional integrity and trustworthiness
- Embrace diversity and a commitment to collaboration
- Thrive in a fast-paced and fun environment
_Room to Read is proud to be an equal opportunity employer committed to identifying and developing the skills and leadership of people from diverse backgrounds. EOE/M/F/Vet/Disabled_
About Room to Read:
Founded in 2000 on the belief that World Change Starts with Educated Children®, Room to Read is creating a world free from illiteracy and gender inequality through education.
We are achieving this goal by helping children in historically low-income communities develop literacy skills and a habit of reading, and by supporting girls as they build life skills to succeed in school and negotiate key life decisions.
We collaborate with governments and other partner organizations to deliver positive outcomes for children at scale.
Room to Read has benefited more than 45 million children and has worked in 24 countries and in more than 213,000 communities, providing additional support through remote solutions that facilitate learning beyond the classroom.
Research Associate (Mathematical Epidemiology & Evaluation Research Group)- The Kirby Institute
Posted today
Job Description
This Job is based in Australia
Employment Type : Full Time, 35 hours per week
Duration : 6 months
Remuneration : Academic Level A $91K- $121 K (based on experience) + 17% Super + Annual Leave Loading
Location : Kensington, Sydney, New South Wales
Visa sponsorship is not available for this position. Candidates must hold unrestricted work rights to be considered for this position.
About The Kirby Institute
The Kirby Institute is a world-leading health research institute at UNSW Sydney. We work to eliminate infectious diseases, globally. Our specialisation is in developing health solutions for the most at-risk communities. Putting communities at the heart of our research, we develop tests, treatments, cures and prevention strategies that have the greatest chance of success.
Why Your Role Matters
The Research Associate (Level A) will support the research efforts of UNSW through a defined project evaluating HIV pre-exposure prophylaxis (PrEP) strategies using a mathematical modelling framework. The role will involve estimating the population-level impact of oral and long-acting PrEP interventions, working in collaboration with health departments, community organisations, and other key stakeholders.
This position sits within the Mathematical Epidemiology and Evaluation Research Group, part of the Surveillance and Evaluation Research Program; it reports to the Associate Professor and has no direct reports.
Skills And Experience
- A PhD or master's degree in computational modelling of infectious diseases or in a related discipline, and/or relevant work experience.
- Experience in scientific software development and the Python programming language will be highly regarded.
- Demonstrated experience working with individual-based infectious disease models.
- Proven ability to conduct high quality academic research with strong attention to detail.
- Experience in software development and version control systems will be highly regarded.
- Proven commitment to proactively keeping up to date with discipline knowledge and developments.
- Demonstrated track record of publications in peer-reviewed journals and conference presentations relative to opportunity.
- Demonstrated ability to work in a team, collaborate across disciplines and build effective relationships.
- Evidence of highly developed interpersonal skills.
- Demonstrated ability to communicate and interact with a diverse range of stakeholders and students.
- An understanding of and commitment to UNSW's aims, objectives and values in action, together with relevant policies and guidelines.
- Knowledge of health & safety (psychosocial and physical) responsibilities and commitment to attending relevant health and safety training.
UNSW offers a competitive salary and access to a range of UNSW perks including:
- 17% Superannuation and leave loading
- Flexible working
- An additional 3 days of leave over the Christmas Period
- Access to lifelong learning and career development
- Progressive HR practices
How To Apply
To be considered for this role, your application must include a document addressing the Selection Criteria, which are outlined in the "Skills and Experience" section of the position description. Applications that do not address these selection criteria will not be considered.
Please click Apply now to submit your application online. Applications submitted via email will not be accepted.
Submit your application online before Sunday 10th August at 11:30pm.
Get in Touch (for job related queries only – applications will not be accepted if sent to the contact listed):
Aarti: Talent Acquisition Associate
E: (HIDDEN TEXT)
UNSW is committed to evolving a culture that embraces equity and supports a diverse and inclusive community where everyone can participate fairly, in a safe and respectful environment. We welcome candidates from all backgrounds and encourage applications from people of diverse gender, sexual orientation, cultural and linguistic backgrounds, Aboriginal and Torres Strait Islander background, people with disability and those with caring and family responsibilities. UNSW provides workplace adjustments for people with disability, and access to flexible work options for eligible staff.
The University reserves the right not to proceed with any appointment.
Skills Required
Software Development, Version Control Systems
Research Associate - Smart Metering Evaluation - J-PAL South Asia
Posted today
Job Description
Project Title: Smart Metering Evaluation, Assam
Country: India
Location: Guwahati
Start Date: Nov 1, 2024
Length of commitment: One year
Education: Bachelor’s in Economics/Engineering
Organization: J-PAL South Asia at IFMR
Company Description:
The Abdul Latif Jameel Poverty Action Lab (J-PAL) seeks qualified applicants for positions as Research Associates for projects on agriculture, education, environment, finance, health, labor, and governance in locations around the world. The projects study a variety of topics within the previously listed fields. The positions offer an opportunity to gain first-hand field management experience in an organization undertaking cutting-edge development research. These positions are located primarily in developing countries around the world and the principal investigators are J-PAL affiliated professors.
Project Description:
Almost a billion people in developing countries are not connected to the electricity grid. Those with power are still subject to unreliable supply and frequent outages. One cause is the widespread inability of utilities to collect payments, forcing them to ration supply. This project will evaluate whether smart metering technology can break this cycle of low payments and low quality, which in turn leads consumers to feel justified in making incomplete payments. Smart metering, and pre-payment in particular, has several beneficial features, such as lowering transaction and monitoring costs and perhaps easing liquidity constraints. There is also a growing literature on consumer responses to real-time pricing, which smart metering enables. However, there are no credible evaluations on the crucial point: can a technological intervention change norms, incentives, and payments, and thereby improve reliability and access, even in a high-theft environment?
While there have been great strides in smart metering, it is not clear that metering alone can reduce theft if the fundamental problem is collusion between collection agents and utility customers at the expense of power suppliers. Our study (PIs: Prof. Michael Greenstone, Prof. Robin Burgess, Prof. Nicholas Ryan, and Prof. Anant Sudarshan) will conduct a large-scale, neighbourhood-level randomized controlled trial to answer this question in a critical developing-country setting, working jointly on implementation with the state distribution utility, Assam Power Distribution Company Limited (APDCL).
Research Associate Roles and Responsibilities:
The RA will work closely with academic researchers and other field staff to perform a variety of tasks including, but not limited to:
· Overseeing implementation of the evaluation in accordance with the research design, in association with our partner organization APDCL.
· Designing survey questionnaires, conducting qualitative research, conducting quantitative analysis of real time incoming consumer billing data and refining surveying instruments.
· Managing field teams: Recruit, train, and supervise both field-based and data operations teams consisting of project assistants, field managers, field-based surveyors, data entry operators and other field and office staff.
· Supervising data collection and ensuring data quality and productivity.
· Maintaining relationships with partner organizations at both headquarters and field levels.
· Assisting with data cleaning, preliminary data analysis, and preparation of documents and presentations for dissemination.
· Reporting to PIs on all of the above-mentioned activities.
Desired Qualifications and Experience:
Required:
A Bachelor’s degree in Economics (or related field) or Engineering
At least 1 year of work experience is necessary
Prior experience working with government partners is desired but not necessary
Prior experience with field data collection is desired but not necessary
Familiarity with impact evaluations and randomized controlled trials is required
Excellent management and organizational skills along with strong quantitative skills
Fluency in English and strong communication skills are required. Spoken fluency in Assamese is desirable.
Flexible, self-motivating, able to manage multiple tasks efficiently, and a team player
Willingness to live in Assam and travel extensively within the region
Intermediate knowledge of Stata or other data analysis tools
Demonstrated ability to manage high-level relationships with partner organizations
Desired:
Master’s degree in economics, engineering, or a related discipline is preferred
Experience living in a developing country and/or liaising with government officials is strongly desired but not necessary
Proficiency in Stata (or other tools such as R or SPSS) is desired
We are looking for a commitment period of one year for this position.
Note on Work Authorization:
Candidates must have authorization to work in India. This covers citizens of India, Nepal, or Bhutan, Persons of Indian Origin (PIO), and Overseas Citizens of India (OCI).
Data Analysis
Job Description
**Responsibilities**:
- Enhances the organization's human resources by planning, implementing, and evaluating employee relations and human resources policies, programs, and practices.
- Maintains the work structure by updating job requirements and job descriptions for all positions.
- Prepares employees for assignments by establishing and conducting orientation and training programs.
- Manages a pay plan by conducting periodic pay surveys; scheduling and conducting job evaluations; preparing pay budgets; monitoring and scheduling individual pay actions; and recommending, planning, and implementing pay structure revisions.
- Ensures planning, monitoring, and appraisal of employee work results by training managers to coach and discipline employees; scheduling management conferences with employees; hearing and resolving employee grievances; and counselling employees and supervisors.
- Implements employee benefits programs and informs employees of benefits by studying and assessing benefit needs and trends; recommending benefit programs to management; directing the processing of benefit claims; obtaining and evaluating benefit contract bids; awarding benefit contracts; and designing and conducting educational programs on benefit programs.
- Ensures legal compliance by monitoring and implementing applicable human resource federal and state requirements, conducting investigations, maintaining records, and representing the organization at hearings.