January 12, 2017

Monitoring and Evaluation: Address Ethical, Tech Considerations Early

Elaine Chang

The increased use of mobile technology to collect and help analyze field data for monitoring and evaluation (M&E) has raised ethical questions and prompted technical discussion among ICT4D practitioners.

When it comes to monitoring, evaluation, research and learning (MERL), issues being grappled with in what ICT4D consultant Linda Raftree calls “equity-focused evaluation” include:

  • Methodology: Are representative populations being captured using mobile technology?
  • Organization: Are the upfront collection and analysis costs too steep for some organizations?
  • Ethics: How is informed consent being managed given privacy risks presented by digital data?

We’ve learned through our work with TaroWorks customers and our parent organization Grameen Foundation that early attention to these issues, along with others such as maintaining data quality, pays dividends over the life of a project.

An article published in April 2016 in the journal Evaluation (“The role of new information and communication technologies in equity-focused evaluation: Opportunities and challenges”) highlights the call to place more emphasis on the safety, consent and confidentiality of sensitive data collected using ICT in MERL projects.

“The nature of digital data, that it is created and stored on local devices, transferred to central storage and easily duplicated and shared exponentially, opens up new risks for vulnerable populations whose data is collected by evaluators,” the authors explained. “There is a need for better guidelines and codes of conduct and greater awareness of the technical challenges. This, along with other major areas that may be ripe for negative unintended consequences, requires further attention.”

Raftree, one of the Evaluation contributors, makes an additional point in an article summarizing a session at the 2016 American Evaluation Association conference: while mobile tools have allowed field staff to reach previously isolated or unreachable groups of people, researchers must look critically at the design and application of their current M&E systems to avoid excluding populations.

“The way we design evaluations and how we apply ICT tools can make all the difference between including new voices and feedback loops or reinforcing existing exclusions or even creating new gaps and exclusions,” Raftree wrote.

[Image: Salesforce monitoring and evaluation dashboard]

How can you design a successful monitoring and evaluation system?

Niamh Barry, the former global head of monitoring and evaluation at Grameen Foundation, has seven principles for successful M&E design:

  1. Involving everyone.
  2. Knowing the project’s depth.
  3. Avoiding confusing lingo.
  4. Keeping the plan simple.
  5. Building slowly and into existing structures.
  6. Having minimal tolerance for bad data.
  7. Proving your value.

Regarding principle two, Barry said monitoring and evaluation practitioners must know their projects inside and out. Otherwise, they will not be able to anticipate how and when interactions between field staff and survey respondents might affect results.

“If you have a field team who are going to be interacting with your target population for your project, you’ll need to know their schedule, so you know their interaction points and the purpose of each interaction point in order to integrate this within your design,” Barry said in a TaroWorks webinar. “You cannot build a system without this deep knowledge. This breaks down the silos that tend to exist.”

Additionally, Barry emphasized the importance of keeping data quality standards high for any monitoring and evaluation system. After all, Barry said:

“If you don’t have data you can trust, then you don’t have a monitoring system.”

[Image credit: Luis Llerena]

Ensuring data quality within your monitoring and evaluation system

To trust your data, put systems in place to recognize bad data or prevent it from occurring in the first place. Many organizations with data-driven operations struggle with human-entry errors and confusing surveys, both of which lead to poor data quality. This is why Kevin Zeigler, special projects manager at TaroWorks customer TechnoServe, offered several tips during a recent TaroWorks webinar for how nonprofits and social enterprises can improve their data quality.

Before implementing TaroWorks mobile data collection and analysis tools, Zeigler said TechnoServe’s Latin American operation was using multiple M&E systems, which was not only expensive and time-consuming but also prone to error. To solve common reporting error problems, Zeigler recommends using templates for speedy cleaning, negative reporting and tracking errors over time.
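To make the idea concrete, here is a minimal sketch of what recognizing bad records and tracking errors over time can look like in code. This is not TechnoServe’s actual template; the field names (`farmer_id`, `survey_date`, `yield_kg`) and the plausible-range check are hypothetical examples of the kinds of rules a team might define:

```python
from collections import defaultdict

REQUIRED_FIELDS = ("farmer_id", "survey_date", "yield_kg")  # hypothetical schema

def record_errors(record):
    """Return the data-quality problems found in one survey record."""
    errors = [f"missing:{field}" for field in REQUIRED_FIELDS
              if record.get(field) in (None, "")]
    yield_kg = record.get("yield_kg")
    if isinstance(yield_kg, (int, float)) and not 0 <= yield_kg <= 10_000:
        errors.append("out_of_range:yield_kg")  # outside a plausible range
    return errors

def error_rate_by_month(records):
    """Share of records with at least one problem, keyed by YYYY-MM."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for record in records:
        month = str(record.get("survey_date", ""))[:7] or "unknown"
        totals[month] += 1
        if record_errors(record):
            flagged[month] += 1
    return {month: flagged[month] / totals[month] for month in totals}

records = [
    {"farmer_id": "F001", "survey_date": "2016-11-03", "yield_kg": 420},
    {"farmer_id": "",     "survey_date": "2016-11-10", "yield_kg": 380},  # missing ID
    {"farmer_id": "F003", "survey_date": "2016-12-01", "yield_kg": -5},   # impossible yield
]
print(error_rate_by_month(records))  # {'2016-11': 0.5, '2016-12': 1.0}
```

Running checks like these on every upload, rather than at report time, lets a team spot which months (or which enumerators) are producing bad records while there is still time to retrain or re-collect.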

“Data quality is everyone’s responsibility. Putting data quality on solely your M&E manager’s shoulders is just not sustainable,” Zeigler said. “It’s really a team effort to make sure we’re getting the best data we can from the field. If you invest all your time and money into implementing these mobile technology solutions, they are only as good as the people that you trust with the technology itself. So spend a lot of time training the project teams, your field staff, or anybody who is going to touch the data at all.”

About the author...

Elaine Chang

Director, Market Development and Customer Success

Elaine’s background in marketing research and data analytics has shaped her goal of helping organizations use insights from better data to create positive change in communities. She has a BS in Marketing and Finance from New York University, and an MBA from the University of Michigan. Elaine is based in the San Francisco Bay Area.

Ⓒ 2016 TaroWorks All Rights Reserved
TaroWorks, a Grameen Foundation company.