The following is an outline of an early-stage community school evaluation. There is a deep focus on implementation, with the goal of collecting sufficient data on quality, fidelity, dosage, and variation to i) develop a clear model of the program and ii) inform the collection of outcome and impact data.
1. Implementation data
Primary reasons to collect implementation data:
• Improve the program (i.e., seeing what’s working and what’s not working; building on the former and addressing the latter);
• Develop a description/roadmap of the program that will inform future iterations and contribute to fidelity of implementation;
• Tie inputs to outputs and outcomes (when you can fully describe what the program looks like and what was done, it becomes easier to trace whether programs have had an impact).
Key implementation questions, by area:
• Schools and Students:
o What does the school look like as the work starts (and as it takes place)?
o How are programs selected and set up in schools? Why are they chosen?
o What needs do students have?
- How are student needs identified/assessed (e.g., anecdotally, by examining school data, etc.)?
o Once this picture of need is created, how does the school help direct students to services? How does it determine which services students need?
o How does school leadership/staff help to integrate service providers into the building? (e.g., Are programs aligned to needs? Does school leadership have goals for the work that will take place? Have systems been developed to support program work?)
o What are school community impressions of service providers/the community school model that is developing?
• Programs/Service Providers:
o What programs are in school? Are there a variety of programs that qualify the school as a hub for community services, with an array of options for kids?
- What’s their focus? What’s their capacity?
o How do programs deliver services?
- With how much fidelity are they implementing their model, and why?
- How are they reaching out to students and helping them persist in services?
- How are they reaching out to families and the wider community?
o How do programs track services, and what data do they collect?
- How do they use the data (e.g., to improve services, report to funders, or share with schools)?
o How do programs feel about the work they’re doing in the school (e.g., Do they feel adequately supported by the school? Do they have the resources they need to reach high-need students and help them persist in services?)
• Community and Family:
o How are family/community needs identified/assessed?
o What services do families use/not use? Which families use services? What is their persistence in services?
o What is the role of family/community in this particular iteration of the community school model?
- How do schools/programs work with families/communities to help achieve this?
- What are family/community impressions of their school becoming a community school?
o What is the job description/role of the Resource Coordinator? How is s/he contributing to the work? How is s/he embedded in the school and her/his work supported?
o How was the School Advisory Board constituted and defined? What is its scope of governance? What is its contribution to selecting and managing school-based programs?
2. Outcome data
Outcomes will fall along a continuum from proximal to distal (i.e., from those that can be seen immediately or during the program to those visible only after completion of the program).
Student achievement outcomes – whether aggregate or individual – tend to fall into one of a few categories:
• Health/mental health;
• Academic preparedness (including readiness for various stages of K-12 schooling as well as post-secondary life);
• Academic performance.
Which outcome areas are selected, where they fall along the proximal/distal continuum, and which measures are used to collect data on specific outcomes depend on i) the specific goals of the project and ii) the work being done. Outcomes and measures are chosen through the development of a logic model that involves key stakeholders.
A logic model exercise may also unearth additional outcomes, not focused specifically on student achievement, that could be important to measure (e.g., school, community or program outcomes, including the relationship between service provision and school/community/family well-being).
Want outcomes to cut across programs, relate to program work, and consist of indicators that programs can conceivably change over time. They should also be measured year-to-year across schools as consistently as possible.
3. Scope of work
Scope of work depends, of course, on the evaluation budget. Other factors include the evaluation time frame, focus, and goals.
Want to collect outcome data – and particularly student achievement data – across all schools. Hence, want to be parsimonious in the number of outcome variables selected.
Implementation data could be collected in a number of ways. For example:
• Pick 1-3 schools as case studies and provide in-depth portraits of how the work is taking place;
• Collect implementation data across most/all schools via surveys/questionnaires, then do more in-depth data collection (e.g., interviews, focus groups, service data) in a few representative schools.
4. Factors to consider
• Is it possible to collect data during Year 1 (2013-14)? If work is happening, we don’t want to miss collecting baseline/early implementation data.
• What are your thoughts on:
o Purpose of evaluation
o Audience for evaluation
o Timing of work
o Products to be produced/distributed
• What decisions have been made to date about what data programs will track?
• What decisions have been made to date about what data schools will provide programs?
• Is the role of the Resource Coordinator being defined?
• How is the role of the School Advisory Board being developed?
September 6, 2013