This was a summative evaluation undertaken to measure the program's outcomes and overall success (Spaulding, 2008). It gathered quantitative outcome data and used statistical analysis to determine that participants reported a decrease in their own substance use as well as in their perceptions of friends' alcohol and marijuana use.
Several objectives exist in a typical evaluation: documenting activities, documenting program implementation, documenting outputs of activities, and documenting end outcomes (Spaulding, 2008). The evaluation of this program will help determine how best to proceed with similar programs, how to make this one more effective, and how it might serve as a benchmark for other programs. Further, this evaluation may contribute to the generation of a new theory on voluntary-participation interventions for this age group (Glasgow & Linnan, 2008). Because the program being evaluated is novel in its reliance on voluntary participation rather than on an established theory, this evaluation is likely to generate new thinking about this type of program (Glasgow & Linnan, 2008).
Through an evaluation of this intervention for adolescents, the program might be made more effective by changing the amount of time the adolescents spend in it. For example, if a significant number of students dropped out because individual sessions ran too long, or because the program as a whole was too lengthy, these aspects could be altered to keep participants enrolled (Glasgow & Linnan, 2008). Additionally, the program could be improved by identifying effective ways of eliciting a better response to it, such as getting more students to participate (Glasgow & Linnan, 2008). Glasgow and Linnan (2008) further suggest that evaluating theory-based programs or interventions can make them more effective for more individuals and can create new knowledge or theories.
Although D'Amico et al. (2006) found that interventions designed with input from the intended population are more diverse and generalizable, I would like to have seen results broken out by participants' ethnicity and gender. For example, if a higher percentage of girls were affected by the program, I would want to discover how to improve the program to affect the boys more effectively. Similarly, if Latinos were less affected than Whites, or vice versa, I would want to improve the program so it would be effective cross-culturally. Further, I would like to know whether the program would be effective in a Latino or other diverse population, so these demographic breakdowns are valuable. A program that is effective in a population that is 41% White and 30% Latino is not necessarily effective in a population that is 100% Latino.
If the overall makeup of the volunteer participants (46% male; 41% White, 30% Latino, 5% Pacific Islander/Asian American, 4% African American, and 15% mixed ethnicity) was not reflective of the general population (D'Amico & Edelen, 2007), I would want to modify the program so it would attract relatively equal shares of ethnic groups and both genders.
D’Amico, E. J., & Edelen, M. O. (2007). Pilot test of Project CHOICE: A voluntary afterschool intervention for middle school youth. Psychology of Addictive Behaviors, 21(4), 592–598.
American Psychological Association. (2009). Criteria for the evaluation of quality improvement programs and the use of quality improvement data. American Psychologist, 64(6), 551–557.
D’Amico, E. J., Anderson, K. G., Metrik, J., Frissell, K. C., Ellingstad, T., & Brown, S. A. (2006). Adolescent self-selection of service formats: Implications for secondary interventions targeting alcohol use. American Journal on Addictions, 15, 58–66.
Glasgow, R. E., & Linnan, L. A. (2008). Evaluation of theory-based interventions. In Health behavior and health education: Theory, research, and practice (4th ed., pp. 487–508). San Francisco, CA: Jossey-Bass.
Spaulding, D. T. (2008). Foundations of program evaluation. In Program evaluation in practice: Core concepts and examples for discussion and analysis (pp. 3–35). San Francisco, CA: Jossey-Bass.