Using Digital Dashboards to Implement and Evaluate Programs

their families. The NexGen comparative analysis team’s objective is to accelerate the creation of this supportive network across the NCR.

FRAMEWORK

The NCFRSACi’s program evaluators in the NCR identified challenges in collecting and reporting impacts and outcomes, resulting from insufficient processes and difficulty accessing Mental Health First Aid participant data through their online learning portal. Additionally, the Program included a number of multi-state, multi-disciplinary teams and a program with numerous instructional deliverables.

In response to these complex program evaluation challenges, the Missouri Program evaluation team focused on establishing and streamlining a data collection, data reporting, and data processing system.

The Missouri Program was carefully assessed to identify the program evaluation approach actively in use: how data was being collected, how participant data was being accessed, where collected data was stored, and which data collection methods were used by instructors and the online registration platforms. This assessment of the program evaluation approach identified the following challenges:

• The suite of mental health and suicide prevention education and training curricula was evaluated using different software platforms and different evaluation tools.

• The number of different program offerings, as well as two funding sources running simultaneously, created the need to track participant scholarships and funding-specific deliverables across multiple programs and timelines.

• The instructors delivering the training were in different disciplines, working under specific guidance from education directors. Each discipline collected participant and evaluation data differently.

• Access to program data, including participant demographics, evaluations of instructors, and evaluation tools, was limited by the individual's role. For example, instructors had access to evaluations of their own teaching performance but frequently did not have access to the surveys completed by participants.

• Impact measures, both quantitative and qualitative, were being collected in different locations across disciplines, within university reporting systems, and across multiple online platforms that were not readily accessible to the program administration team. This required the program evaluation team to identify multiple points of data collection spread across multiple software platforms.
