Evaluation Squad Meetings

Expectations for OpenMRS Evaluation Squad 

We use the OpenMRS Wiki - Edit Mode for real-time meeting notes. This means that everyone attending a meeting monitors and contributes to the note-taking!


Notes Directory


To Address Next Meeting

  • Topics we're thinking about:
    • Usability:
      • Could we set up a semi-structured guide for this work?
      • Could we add a focus group for the templates themselves to identify the challenges?
      • Add methods for a think-aloud approach
      • Standardized tasks? Likely not, but could we create a spreadsheet of tasks as a guide?
    • System use: Indicators/methods
    • Monitoring Indicators
  • Standing meeting agenda:
    • 40 minutes of presentations (20-min slots with a max 15-min presentation) | Folks can sign up here
    • 15 minutes open ended conversation
    • Review Next Steps from previous and current meeting
    • Last 5 minutes to share new resources


Grounding Priorities

In progress // to be supported by a survey of the community to help sculpt this section

Implementer | Top Priorities










Roadmap

Done | Now | Next


System Use Indicators

What are easy-to-use system data and indicators to support system monitoring (e.g., missingness, uptime)? A sketch follows below.

Subject: SYSTEM 

Person/Org: PIH / DEBBIE, DIGI / BETH

Details: System Use Indicators
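
A minimal sketch of how a missingness indicator like this could be computed, assuming an obs-level CSV export; the file name, column names, and expected-concept list are illustrative assumptions, not a standard OpenMRS export format:

    # Minimal sketch: compute per-concept missingness from an obs-level export.
    # File name, column names, and the expected-concept list are illustrative.
    import pandas as pd

    EXPECTED_CONCEPTS = ["WEIGHT (KG)", "HEIGHT (CM)", "SYSTOLIC BLOOD PRESSURE"]

    obs = pd.read_csv("obs_export.csv", parse_dates=["obs_datetime"])

    # Keep non-voided observations for the concepts expected at every encounter.
    obs = obs[(obs["voided"] == 0) & (obs["concept_name"].isin(EXPECTED_CONCEPTS))]

    # Count recorded values per (encounter, concept) pair.
    recorded = obs.groupby(["encounter_id", "concept_name"]).size().unstack(fill_value=0)

    # Missingness per concept = share of encounters with no recorded value.
    # (Encounters with none of the expected concepts won't appear here; a real
    # version would first join against the full encounter list.)
    missingness = 1 - (recorded > 0).mean()
    print(missingness.sort_values(ascending=False))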


Usability Studies Tools

Create tools/templates to support teams in conducting various types of usability studies

Subject: SYSTEM

Person/Org: DIGI / Beth Dunbar, PIH / Debbie Munson, Hamish Fraser

Details: Usability Study Tools

EMR Readiness Tools

Strengthen site assessment tools

Subject: IMPLEMENTATION  

Person/Org: DIGI / BETH

Details: EMR Readiness Tools


EMR Implementation Indicators

Build a library of monitoring indicators to support evaluations (see the schema sketch below)

Subject: IMPLEMENTATION  

Details:  

Person/Org: TBA      
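
To make this card concrete, here is one possible shape for an entry in such an indicator bank, sketched as a Python dataclass; the field names are illustrative, not an agreed standard. The collected_by_system and disaggregations fields echo the prioritization criteria in the 2022-11-01 notes below:

    # Sketch of a possible schema for indicator bank entries (illustrative).
    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        name: str                  # short label, e.g. "Weight missingness"
        definition: str            # numerator/denominator in plain language
        subject: str               # SYSTEM, IMPLEMENTATION, CLINICIANS, LEADERSHIP
        collected_by_system: bool  # can the EMR itself produce the data?
        disaggregations: list[str] = field(default_factory=list)

    # Example entry (illustrative values):
    weight_missingness = Indicator(
        name="Weight missingness",
        definition="Share of clinical encounters with no weight observation recorded",
        subject="SYSTEM",
        collected_by_system=True,
        disaggregations=["clinical area", "facility"],
    )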


evaluation (with lowercase e) questions

What are some specific, focused evaluation questions on EMR success that can be answered relatively easily with the provided tools? These should give any team insight into how well the EMR is meeting its intended needs, and be usable in an ongoing way to determine whether system adoption and value are improving (and whether successes are sustained).

Subject: SYSTEM  

Person/Org: TBA      

Cost-effectiveness Tools

How do we establish a counterfactual to the EHR? How do we measure cost? How can this information be customized for various audiences (e.g., medication errors for a clinical head vs. clinical care decision support for a nurse)?

Subject: SYSTEM  

Person/Org: TBA      

Value Proposition Tools

How do we measure value propositions? 

Subject: SYSTEM  

Person/Org: TBA      

Clinician Interview Tools

Create tools/surveys to support clinician interviews assessing satisfaction with and use of OpenMRS, including data use. Create a radar plot across different cadres (see the sketch below).

Subject: CLINICIANS

Person/Org: TBA      
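
A sketch of the radar plot idea, assuming mean interview scores per domain for each cadre; the domains, cadres, and scores below are placeholders, not real data:

    # Sketch: radar (spider) plot of interview scores across cadres.
    # Domains, cadres, and scores are placeholder values.
    import numpy as np
    import matplotlib.pyplot as plt

    domains = ["Satisfaction", "Ease of use", "Data use", "Trust in data", "Training"]
    scores = {  # mean score per domain on a 1-5 scale (illustrative)
        "Nurses": [3.8, 3.2, 2.9, 3.5, 2.7],
        "Clinicians": [3.1, 2.8, 3.4, 3.0, 3.2],
    }

    angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
    angles += angles[:1]  # close the polygon

    ax = plt.subplot(polar=True)
    for cadre, vals in scores.items():
        vals = vals + vals[:1]
        ax.plot(angles, vals, label=cadre)
        ax.fill(angles, vals, alpha=0.1)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(domains)
    ax.set_ylim(0, 5)
    ax.legend(loc="upper right")
    plt.show()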

Analysing Planned EHR versus Actual EHR Tools

Create a tool to help compare the system in use with what was planned, to identify lessons learned. How does this align with the value propositions of EHRs, and what are these indicators? Identify indicators of implementation success.

Subject: SYSTEM  

Person/Org: TBA      

Leadership Survey

Create a tool/survey to support assessing use of and priorities for OpenMRS, including data use. Create a radar plot across different cadres.

Subject:  LEADERSHIP

Person/Org: TBA      

Program Manager Survey

Create a tool/survey to support assessing OpenMRS program managers' use, including data use.

Subject:  LEADERSHIP 

Person/Org: TBA      

Definition of EHRs

What is our definition/guideline of EHRs? How does this tie to long-term outcomes in EHRs?

Subject: SYSTEM 

Person/Org: TBA      

Clinical Workflow Analysis Tools

Create a toolkit that supports teams in analyzing clinical workflows and how they interact with OpenMRS use.

Subject: SYSTEM, CLINICIANS

Person/Org: TBA      





Meeting Notes

2022-11-01

Attendees: Debbie, Dagim, Erica, Ian, Johnblack, Beth, Dr. Jemal, Joshua Ssebaana Suubi

System Use Indicator Discussion

This is a first pass at the indicators.

  • Central level use?
  • System downtime? Uptime?
    • Might want to add uptime because it allows us to interpret some of these indicators more easily.

How do we collect this data?

  • Uptime monitoring needs to be done by an external system (see the sketch below)
  • Try to keep these indicators to data that can be collected by the system. 
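
A minimal sketch of such an external probe, run from a machine other than the EMR server; the URL and 60-second interval are assumptions, and /ws/rest/v1/session is polled only because the OpenMRS REST module answers it without authentication:

    # Sketch: external uptime probe for an OpenMRS server. Appends one row
    # per minute; uptime % over any window is the mean of the "up" column.
    import csv, time
    from datetime import datetime, timezone
    import requests

    URL = "https://emr.example.org/openmrs/ws/rest/v1/session"  # hypothetical host

    with open("uptime_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            ts = datetime.now(timezone.utc).isoformat()
            try:
                up = requests.get(URL, timeout=10).status_code == 200
            except requests.RequestException:
                up = False
            writer.writerow([ts, int(up)])
            f.flush()
            time.sleep(60)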

How do we prioritize indicators?

  • Easy to collect?
  • Can be collected by the system?
  • Data is usually used by the clinical team, not at the facility level. What indicators can be disaggregated by clinical area?


2022-10-04

Attendees: Debbie, Khaled, Jen

  • Readiness Assessment - Debbie started fleshing out a tool that might complement existing readiness assessment tools. Jen to reach out to key contacts for feedback.
  • System Use Indicators - Debbie posted an initial draft & would like feedback and collaborators. Jen to share on Talk and with others who expressed interest in these types of indicators. Plan to have a focused conversation on October 18.

2022-07-26

Attendees: Linda, Jen, Hamish, Beth, Khaled

  • Reviewed action items from previous meetings
  • Jen shared plans for more of a design conference workshop
    • Hamish could share studies on requirements of EHRs in LMICs
    • Sarah could share more of her dissertation work on implementers' perspectives with the community
  • OpenMRS 2022 in Nigeria


2022-07-12

Attendees: Sarah, Linda, Ian

  • Sarah presented on phenomenographic analysis and her phenomenographic study of OpenMRS implementation

2022-06-28

Attendees: Debbie, Beth, Sarah, Nancy, Steve, Jen

  • Jen gave an overview of the idea for an OHRI (OpenMRS HIV Reference Implementation) M&E toolkit
    • Conversation around whether we should create a list of monitoring indicators for this toolkit
      • Debbie: has developed a draft list of indicators of successful EMR use
        • Sarah suggested grounding indicators by context (e.g., facility type) to support the indicator list; this is the direction PIH is trying to go
      • Nancy: Steve and Nancy's work in Kenya on an EMR maturity model and indicators around system use and institutionalization, with repeated facility-level assessments. Paper not yet published, but they were able to show maturation of implementations.
  • Nancy shared plans for EHR evaluation in Ethiopia
    • Context: Ethiopia is transitioning its EHR from SmartCare to OpenMRS OHRI
    • Rapid evaluation underway, using a just-approved protocol
    • Curious for input from the group about what would be worthwhile to know in an EHR transition situation:
      • Focus on functionality of SmartCare?
        • Seems to be a usability and use focus
      • Sustainability focus?
        • Would the Global Good Maturity Model work?
      • Beth to chat with a classmate about the Veterans Administration's recent transition; she'll reach out
      • Look at simple choices that ease development or get in the way (e.g., shared or private repo, reuse current)
  • Review of 14 June 2022 next steps:
    • Usability:
      • Could we set up a semi-structured guide for this work?
      • Could we add a focus group for the templates themselves to identify the challenges?
      • Add methods for a think-aloud approach
      • Standardized tasks? Likely not, but could we create a spreadsheet of tasks as a guide?
    • System use: Indicators/methods for strange entries, missingness dashboards / Debbie
    • Monitoring data entry trends / Hamish
    • Workflow analysis tips/tricks / look at 3.0 squad, workflow 
    • Ask team member to write up data entry monitoring dashboard and resulting action / Debbie
    • Review HIS Evaluation Toolkit
  • Topics we're thinking about:
    • Usability:
      • Could we set up a semi-structured guide for this work?
      • Could we add a focus group for the templates themselves to identify the challenges?
      • Add methods for a think-aloud approach
      • Standardized tasks? Likely not, but could we create a spreadsheet of tasks as a guide?
    • System use: Indicators/methods
    • Monitoring Indicators
  • Next steps (28 June 2022)
    • Next meeting
      • Sarah to present "Research Methods: Adopting a Phenomenographic Analysis Approach to Explore the Variation in Implementers Understandings of OpenMRS Implementation" to better understand OpenMRS implementers' view of the world – how they make sense of implementation.
    • Reach out to implementers to attend next meeting // Jen 
    • Begin developing "indicator bank" of monitoring indicator
      • Share or present draft list of indicators and indicator dashboard on 26 July // Debbie
    • Share usability example from OpenELIS // Beth
    • Open Actions:
      • System use: Indicators/methods for strange entries, missingness dashboards // Debbie
      • Monitoring data entry trends // Hamish
      • Workflow analysis tips/tricks / look at 3.0 squad, workflow // Beth
      • Ask team member to write up data entry monitoring dashboard and resulting action // Debbie

2022-06-14

Attendees: Ian, Sasha, Debbie, Beth, Hamish, Sarah, Nancy, Steve

  • Usability studies discussion // Debbie Munson shared a usability test template
    • Debbie put together a template for field implementers and designers for new features/forms
      • Task-oriented template to get people to think about tasks (there are other materials within PIH for how to define task), user instruction, outcomes, process notes
        • Debbie noted that the "user instruction" instinct is that people often want to conduct a training before the usability study
        • If a task was especially difficult, that identified a priority training area
      • How to select the tasks for the usability test?
        • EMR tasks (e.g., should lab results be entered in a batch vs. one at a time) vs. the workflow to find a form
        • Teams often think the tools are there, but don't go through the process of entering data to see how the tools are accessed
      • What's outside scope of usability testing?
        • Often you've already done the work to figure out what you'd like to usability test
        • Workflow analysis mapping is another tool. Example: a missed step in the medication-entry workflow led to twice as many visits (one visit for clinical care and a new visit to dispense medicine) because the team didn't know two people were involved
      • How do we generalize this to make it useful?
      • In the outcomes section, could there be a closed-ended set of outcomes (e.g., user couldn't finish the task)?
    • Workflow analysis: this is part of the Evaluation Squad's scope; there is usually a reason staff won't use some feature (too busy, poor design, etc.)
      • A focus group could pick up parts of broader issues
      • Tools or guidance for workflow analysis; often folks just share which software they used
      • How to document workflow? How to show the nuance of workflow? 
      • No standardized documents and this could be something our squad works on
    • Often when we look at EHR data, we see strange entries:
      • Looking at timestamps and page loads to see where page loads aren't working out
      • Useful to track data entry rates to see workflows (see the monitoring sketch after these notes)
      • PIH has a new collaboration with DataKind to look at EHR log analysis (system use analytics)
        • Maternal health module: a data quality dashboard with missingness measures led to process improvements; "why did this work so well?" and "how do we make it more generalizable?"
    • Usability studies: standardized tasks depend on the system (point of care vs. retrospective)
      • Even across PIH there are many different stakeholders and tasks; continuing to strengthen focus group/survey tools seems important to reach confidence in scenarios and tasks
      • Could we create a spreadsheet of tasks that could guide other projects? 
    • Think-aloud protocol: Beth shared that there's some work from the OpenELIS review
    • HIS Evaluation Toolkit:
      • Evaluation for cost or health outcomes
    • Next steps
      • Usability:
        • Could we set up a semi-structured guide for this work?
        • Could we add a focus group for the templates themselves to identify the challenges?
        • Add methods for a think-aloud approach
        • Standardized tasks? Likely not, but could we create a spreadsheet of tasks as a guide?
      • System use: Indicators/methods for strange entries, missingness dashboards / Debbie
      • Monitoring data entry trends / Hamish
      • Workflow analysis tips/tricks / look at 3.0 squad, workflow 
      • Debbie to ask team member to write up data entry monitoring dashboard and resulting action
      • Review HIS Evaluation Toolkit
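
Referenced from the data-entry-rate bullet above, a minimal sketch of monitoring daily data entry trends by flagging days that fall well below the recent baseline; the input file, 7-day window, and 50% threshold are illustrative assumptions:

    # Sketch: flag unusual drops in daily data entry (e.g., new encounters
    # per day exported from the EMR). Window and threshold are illustrative.
    import pandas as pd

    daily = pd.read_csv("daily_encounters.csv", parse_dates=["day"]).set_index("day")

    # Compare each day to the trailing 7-day median; a day under half the
    # recent baseline is flagged for follow-up (downtime? workflow change?).
    baseline = daily["encounters"].rolling("7D").median().shift(1)
    daily["flagged"] = daily["encounters"] < 0.5 * baseline
    print(daily[daily["flagged"]])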

2022-05-17

Attendees: Hamish, Debbie, Jen, Beth, Benjamin, Steve

  • Evaluation Squad showcase review for the OpenMRS mini-meeting
    • 10 minutes of showcase to share what we've done and pitch what we're doing
  • Regrouping on squad priorities and schedule 
    • Use the OpenMRS showcase to invite folks to join 
    • Need to prioritize cards 
    • Usability discussion and importance
  • Next meeting: review the priority cards and begin breaking them up into deliverables


2022-04-19

Attendees: Hamish, Debbie, Johnblack, Ian, Jen, Steve

Agenda:

  • Focus on squad's purpose, what we're sharing & creating
  • Roadmap Review
    • Need to give more context on value of resources shared on Eval Resource page (Category/Tags, Why?)
    • No additional responses to the survey.
      • Johnblack suggested finding ways to administer the survey in-person.
      • Still have option of sharing it at the May Mini-Community Meeting
    • May Mini-Community Meeting
      • Squad Showcase: start preparing the presentation, draft by May 3
      • Hamish suggested having a block of time to present a study, paper, findings. Jen to share idea with Global Events & Marketing Team to see if this can be done.
  • M&E Key Contact List
    • Discussed idea of creating a list of M&E or evaluation cheerleaders at different organizations

2022-04-05

Attendees: Debbie, Johnblack, Beth, Hamish, Ian, Neranga, Steve, Jen, Nancy

  • Presentation by Johnblack on Scoping Survey publication on Cancer Care EHRs
    • Discussion on definitions of EHRs
      • Definition from Kenya. Steve shared that vantage point on definitions matters and should tie to long-term outcomes
      • Definition from the US (based on the late 2000s). Hamish shared that this was surveyed from hospitals and the definition was tricky; recent publications show this definition may not be accurate. "Comprehensive" required the system to be used in all departments.
    • Discussion on how Johnblack defined basic versus comprehensive EHRs
      • Debbie: Is this a tool that could be used?
      • Johnblack: yes! Part of Johnblack's supplementary files in the paper, to be shared
    • Discussion on taxonomy: what's the boundary between an EHR and a lab information system?
      • Nancy asked if anyone is working on taxonomy like WHO or OpenHIE? 
      • Steve noted the Health Data Collaborative formed to evolve the Digital Health Atlas and expand the taxonomy, including simple definitions of things
  • Presentation by Beth on new squad direction to create project cards
  • Action items
    • Squad to come up with our own definition/guideline
      • should tie to long-term outcomes in EHRs
    • Evaluation squad cards:
      • What other organizations are working on these projects? 
      • Can we invite them to our squad so they can support work on it? 

2022-03-22

Attendees: Debbie, Benjamin, Beth, Hamish, Ian, Neranga, Lalitha, Sarah, Steve, Jen

Presentation by Debbie Munson: Partners in Health work in progress to develop M&E framework for our OpenMRS implementations

Materials shared

 

2022-03-08

Attendees: Sarah, Khaled, Benjamin, Beth, Debbie, Hamish, Johnblack, Lalitha, Jen

2022-02-22

Attendees: Hamish, Debbie, Nancy, Neranga, Benjamin, Lalitha, Jen, Beth

  • Hamish's presentation on an overview of informatics evaluation frameworks (link to Hamish's updated slides)
    • Nancy: what is our squad working towards in relation to these frameworks?
      • Hamish: many ways we can apply frameworks to evaluating OpenMRS (logframe, stage & relevant evaluation questions), but one idea is for us to populate examples into the frameworks. Suggests using just two dimensions to start.
    • Debbie: progression towards impact; we often focus on impact on patient outcomes but often have to step back to look at intermediate steps (e.g., was it implemented properly)
    • Neranga: frequent changes in the system impact the evaluation model; how can we do formative and summative evaluation when the implementation changes quickly? There's complexity in the environment we're working in. Exploring other approaches, such as applying Developmental Evaluation, which can accommodate changes over time.
      • Hamish: systems don't stay stationary, and with a longer-term evaluation the results can become out of date (less relevant). Evaluation needs to span the lifecycle of the system and aim at a broader point: not just "does it work in this specific environment and configuration" but "does this improve quality of care," using multiple techniques that can be generalizable (e.g., decision support in pediatric HIV in Kenya). Working on a paper in Rwanda showing that certain kinds of decision support are not likely to be helpful, informing how to design newer versions.
      • Nancy: the framework of complexity underlies Developmental Evaluation; often we're not just talking about software and hardware setup but the broader policy environment, leadership, and user levels – the "what is it" of the intervention often has fuzzy boundaries
  • Survey Update:
    • Sent out on 2/5; 138 people opened the email with the link; 4 responses as of 22 Feb
  • Action items:
    • Map studies to the frameworks 

2022-02-08

Attendees: Hamish, Debbie, Jen, Johnblack, Namanya

  • Should we reschedule or skip the 8 Feb meeting? PEPFAR Data Use Community (DUC) on 2/8 at the same time (thanks, Nancy Puttkammer)
  • Evaluation Presentations x 2: 20-min slots with a max 15-min presentation
  • Open-ended conversation:
    • UAT
    • Survey Update - sent out on 2/5, 129 people opened the email with the link
    • Progress on Eval Resource Wiki page - Google Season of Docs announced; we'll likely see more technical writers coming to the community who could help us with our documentation.

2022-01-25

Attendees: Beth, Hamish, Lalitha, Sarah, Ian, Benjamin, Steve, Johnblack, Jen

Agenda:

  • Recap of the January 25th Shiriki Webinar on EHR Evaluation, which was relevant. Recording, presentation, and papers will become available
  • Presentations? - none formally prepared but a few discussions
    • Johnblack: conducted review of digital health interventions used for cancer care; findings:
      • often teams publish on clinical outcomes but not on digital health interventions
      • difficulty with either/or studies: either clinical outcomes or usability, not both
      • publication venues matter: IEEE is more on algorithms & models, though it may not accept other types of publications
    • Benjamin: uncovered tools for assessment that we can review
    • Developing presentation themes will be helpful to guide presentation development
    • Sarah: how to apply qualitative methods and then the findings from the studies
  • Eval Resources
    • Should uncover more examples including reports and publications on OpenMRS
    • Publication venues: where should we publish?
    • Outline of resources for Eval Resource Wiki page.
  • Survey
    • Evaluation Squad Kickoff meeting: might be interesting once we have the Evaluation survey completed
    • Jen is sending an email blast at the end of this week; due noon PST on Friday
    • Should we engage OpenMRS leadership to help set the agenda for research?
  • Standing meeting agenda
    • 40 minutes of presentations (2 slots per meeting with a max 15-min presentation)
    • 15 minutes open ended conversation
    • Last 5 minutes to share new resources
  • Action Items for next meeting:
    • Evaluation Presentation: 20 min slots with max 15 min presentation
    • Draft Survey: 
      • Beth to send draft today (25 Jan)
      • Jen to circulate by Friday
      • Steve to coordinate with the OpenMRS board to fill out the survey
    • Begin building out Eval Resource Wiki page
      • Folks feel free to add resources here
    • We'll set next meeting agenda asynchronously
    • Follow up on slack with presentations for next week
  • Meeting announcements
    • Opportunity for papers in JAMIA special global health edition // submissions due 1 June 2022 


2022-01-11

Attendees: Beth, Hamish, Johnblack, Benjamin, Ronald, Nancy, Debbie, Steve, Ian, Neranga, Jen

Agenda:

  • Review survey of implementers' needs - for the next meeting, use Google doc
    • Structure
    • Suggestions:
      • Make this light, then get into more detail later
      • Section 4: Clarify what we mean by "evaluation," implementation stages, stakeholders
      • Gap: Clinician experience (M&E can't always prioritize this work)
      • Are we using the survey to build out the squad & its purpose OR is it to generate interest & get a sense of direction?
  • Review Wiki page with resources: Evaluation Resources
    • Curate resources to high standards
  • Discuss:
    • Creating a Squad Page
    • Planning Next Meetings?
      • Have people present studies at a squad meeting (Johnblack?, Steve, Nancy)? Four people for an hour, at a squad meeting (25th?)
      • Evaluation Showcase at a virtual Spring/Summer mini-meeting
    • Creating a Repository of studies
    • Studies (a couple by the end of the year, publishable, internal?)
      • PIH: data warehouse
      • Costing
      • OpenMRS 3.x | OHRI
      • Digital health maturity models