Introduction

Adolescence is a critical time to intervene with youth at risk of serious harmful behaviors, such as violence, and to ensure that key developmental needs are met. There are a number of research-based assessment tools and programs shown to reduce violence and support healthy development, but real-world implementation presents significant feasibility challenges (Backer & Guerra, 2011; Saul et al., 2008). Cost, lack of implementation infrastructure, and concern about local fit and buy-in all contribute to the limited use of research-based tools in community settings. These challenges are particularly prominent in community-wide initiatives which coordinate efforts across multiple systems. Successful examples of efforts and strategies to overcome these challenges are needed. This paper presents a case study of a community-based participatory research (CBPR) process to develop a research-based assessment and service referral system within the comprehensive, youth violence prevention service network in Seattle, WA. It details the phases of this process and demonstrates the feasibility and usefulness of a CBPR approach, while highlighting implications and future directions for research and practice.

Community violence prevention efforts typically include a range of prevention and intervention strategies to address violence and delinquency, and increasingly, to promote positive youth development (Allison, Edmonds, Wilson, Pope, & Farrell, 2011). The literature highlights several strategies communities use to achieve these goals, including community policing, street outreach, school-based programs, mentoring, counseling, substance abuse treatment, and youth development programs (e.g., Cunningham & Henggeler, 2001; Frattaroli et al., 2010; National Youth Violence Prevention Update: 2010–2016, 2016; Rollin, Kaiser-Ulrey, Potts, & Creason, 2003). A key feature of such initiatives is ensuring youth are matched with programs and services that fit their needs in order to promote positive outcomes (Nation et al., 2003).

The individuals and agencies responsible for assessment and referral vary by initiative, as do the professional roles and training of those charged with triaging youth into services, programs, and interventions. This diversity is part of what makes multisite initiatives unique, but it also complicates the standardization of practice. In community violence prevention, for instance, structured assessment is used sparingly, and where assessment practices exist, they tend to vary across sites. Referrals to services might rely on the stated interests of the youth, or on recommendations from case managers or other community service providers (e.g., Frattaroli et al., 2010; Wilson, Chermak, & McGarrell, 2010). For example, in Pittsburgh's One Vision One Life program, high-risk status and service referrals are assigned based on activities or known statuses of the clients (e.g., drug dealers, gang members), as well as personal judgment drawn from the coordinators' lived experience in the community (Wilson et al., 2010).

Research in other youth-serving settings, however, has shown that structured assessment adds value beyond personal judgment for service planning. The use of structured assessment can help ensure youth receive services appropriate to their individual needs in order to promote positive outcomes (Vincent, Paiva-Salisbury, Cook, Guy, & Perrault, 2012; Vincent, Guy, Gershenson, & McCabe, 2012). Successful use of structured assessment in community settings is only possible if these tools are translated to the sites where programming takes place. Research-based approaches that align with local values, consider organizational capacity, and draw from (or are rooted in) community knowledge are more likely to be successfully implemented, supported, and sustained (Feinberg, Bontempo, & Greenberg, 2008; Shelton, Cooper, & Stirman, 2018; Wallerstein & Duran, 2010). While structured tools appear to have potential to assist in a broad array of community prevention goals, little is known about how these tools are perceived and how they function across community practice settings.

Community-based participatory research (CBPR) is one approach that can support the successful translation of research-based tools to community settings. CBPR recognizes the importance of involving community members as active and equal participants in all phases of the research process (Israel, Eng, Schulz, & Parker, 2005). CBPR builds on the strengths and resources of the community, including local knowledge and perceptions of current practice. As a result, community members often view the research as more trustworthy, which ultimately enhances its usefulness by aligning the goals of science and the community (Israel, Schulz, Parker, & Becker, 2001).

Core CBPR principles include equitable partnership in all phases of research; acknowledgment of community identity, strengths and resources; co-learning and capacity building; achieving balance between research and application; cyclical and iterative development processes; dissemination of results to all partners, and involving them in wider dissemination efforts; and a commitment to sustainability (Israel et al., 2005; Minkler & Wallerstein, 2008). In the context of youth violence prevention, CBPR principles have been applied to the development and evaluation of new programs (Leff et al., 2010; Watson-Thompson et al., 2013), and have supported measurement and instrument development efforts (Hausman et al., 2013; McDonald et al., 2012; Medina et al., 2016).

The following is a case study of a partnership between the Seattle Youth Violence Prevention Initiative and researchers at the University of Washington, which applies a CBPR approach to develop, refine, and implement a research-based assessment system for use across multiple service settings.

Current Project

The Seattle Youth Violence Prevention Initiative (SYVPI) involves multiple independent youth service providers in Seattle, WA. The initiative serves youth through three primary components: 1) neighborhood-specific "network hub" agencies, responsible for youth intake, provision of youth development programs, and referral to additional services; 2) case management agencies, responsible for a moderate level of service provision, typically for youth with identified mental health, substance use, school, and/or family treatment needs; and 3) the street outreach team, responsible for providing the most intensive outreach services to gang-involved or justice-involved youth and youth at the highest risk for violence. Youth are referred to SYVPI services by school staff, police, the juvenile court, parents/guardians, community centers, providers from youth-serving agencies, or peers.

From its inception, SYVPI was charged with using a validated measure of violence risk to guide service decisions for youth referred into the initiative. However, community members felt that existing validated tools for youth violence and recidivism (e.g., Positive Achievement Change Tool, PACT: Barnoski, 2004; Structured Assessment of Violence Risk in Youth, SAVRY: Borum, Bartel, & Forth, 2003) were not sufficiently strengths-based and would be off-putting to providers and youth in non-clinical, community settings. Consequently, a design team composed of SYVPI partners and violence prevention experts developed a tool that included items from validated assessments as well as new items developed by initiative partners. The initiative then recruited prevention and youth development researchers (authors) to assess the functioning of this tool. A participatory qualitative study was developed to assess 1) provider perceptions of the value of the tool, and 2) how the tool was being used by providers for decision-making (phase 1). Study findings led to substantive modifications of the initial tool and the development of an entirely new form for follow-up and progress monitoring; this adaptation occurred within a research-community collaboration (phase 2) and was followed by a preliminary validation study (phase 3). All research activities for this project were determined exempt from human subjects review by the University of Washington Institutional Review Board.

Phase 1: Qualitative Study

Sample and procedures. Interviews were conducted with 28 providers from three neighborhood network agencies, five case management agencies, and the street outreach team. Participants included program managers and supervisors, program coordinators, intake and referral specialists, case managers, and outreach workers. All participants had been using the assessment tool or were overseeing the use of the tool by other providers in their organization.

Interview questions were co-developed by the university researchers (one MSW and one PhD) and the SYVPI management team (director, data manager, and policy analyst). The interview included 15 questions with follow-up prompts, organized within four major topic areas: tool use; assessment practices and processes; perceptions of tool usefulness; and perceptions of tool acceptability. The interviews lasted approximately 60 to 90 minutes and were conducted as nine focus groups in which multiple staff participated concurrently.

Participant responses were coded in three rounds of review. The first round of coding occurred in group discussions between two of the researchers (first and second author). Interview responses were reviewed and initial codes were assigned using elemental (descriptive, in vivo, process) and affective (values, evaluation) methods (Miles, Huberman, & Saldana, 2014). Groups of codes were then condensed into themes in an iterative process. Two other members of the research team (fourth author and a student intern) provided feedback on the initial set of themes based on their interpretation of the data (researcher triangulation), and themes were adjusted accordingly.

Next, themes were compared to several formal and informal sources of information (i.e., data source triangulation), including: 1) notes taken by a member of the research team (first author) during pre-interview meetings with the SYVPI director regarding the purpose and use of the tool within the initiative; 2) informal observations by two of the researchers (first and second author) who attended a pre-interview staff meeting where providers discussed challenges they faced with the tool, including how assessment information should be used; and 3) rates of missing or incomplete assessment data collected prior to the interviews. In line with sound qualitative methods and CBPR principles, the set of themes was then disseminated to all providers who participated in the interviews to obtain feedback regarding the accuracy and framing of key findings. Themes were further revised to reflect feedback from the SYVPI director, policy analyst, and three of the participant groups.

These analytic procedures followed a consensus coding process (Hill et al., 2005), using triangulation and member checking techniques to establish credibility and trustworthiness (Graneheim & Lundman, 2004; Miles et al., 2014). We checked the consistency of our themes by corroborating findings across multiple data sources and perspectives (i.e., researcher and data source triangulation), and increased the credibility of our findings by ensuring themes accurately reflected perceptions and experiences of the assessment process across different providers/agencies (i.e., member checking). After consensus was reached, codes were quantified into response rates within the broader set of themes following a quantitative content analysis approach (Bengtsson, 2016; Krippendorff, 2004; Neuendorf, 2017). Theme titles were developed to describe the focus of each set of responses, and codes were ranked within each category according to frequency of endorsement by analysis unit (agency/group).
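
To illustrate the quantification step, the following is a minimal sketch of how code endorsements could be tallied by analysis unit; the group labels, codes, and counts are hypothetical and do not reproduce the study data.

```python
# Illustrative sketch of quantifying qualitative codes by analysis unit (toy data).
from collections import Counter

# Hypothetical code endorsements, one entry per (analysis unit, code) pair.
endorsements = [
    ("network_1", "tool administered differentially"),
    ("network_2", "tool administered differentially"),
    ("case_mgmt_1", "tool administered differentially"),
    ("network_1", "assessment completed after case planning"),
]

# Count how many analysis units endorsed each code (a unit counts once per code).
units_per_code = Counter(code for unit, code in set(endorsements))
for code, n in units_per_code.most_common():
    print(f"{code}: endorsed by {n} group(s)")
```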

Results

Four themes emerged related to providers’ perceptions of the acceptability and appropriateness of the assessment tool as initially implemented. Results are based on consensus counts across n = 9 focus groups.

Theme 1: Providers Lacked a Shared Understanding About How To Use The Tool

The interviews with initiative providers suggested a significant lack of standardized assessment practices. At the time of the interviews, the tool was used in diverse ways, which were interpreted as functions of providers' capacity and personal preferences around conducting assessment interviews (n = 9; Table 1).

Table 1

Results from a content analysis of provider interview data.

Codes (grouped by theme) | Network Agencies (n = 3) | Case Management (n = 5) | Street Outreach Team (n = 1) | Total Sample (n = 9)

Acceptability

Providers lack a shared understanding about how to use the tool
  Tool administered differentially based on provider capacity and personal preferences | 3 | 5 | 1 | 9
  Differences in administration style vary by staff member and youth characteristics | 2 | 4 | 1 | 7
  Lack of uniformity in information collection procedures | 3 | 4 | 0 | 7
  Calls to referral source (i.e., parents/schools) to obtain more information if necessary | 3 | 2 | 0 | 5
  Assessment completed after decision-making/case planning | 2 | 2 | 0 | 4
  Helpful when one designated person completes all initial risk assessments | 1 | 1 | 0 | 2
  Uncertainty exists around the level of training received by all providers | 3 | 2 | 1 | 5

Providers found some items difficult to administer
  Item and domain helpfulness/usefulness varies by agency as well as staff roles | 3 | 5 | 1 | 9
  Translating items from risk assessment into youth-friendly language can be difficult | 3 | 3 | 1 | 7
  Providers do not use the tool to answer culturally sensitive questions | 0 | 1 | 0 | 1
  Tool has the potential to be useful if item-level revisions are made | 3 | 5 | 1 | 9

Appropriateness

The tool needs a clear purpose and relevance to case planning
  Tool does not serve a clear purpose in the initiative | 3 | 5 | 1 | 9
  Tool not considered useful for case planning | 2 | 2 | 1 | 5
  Providers use information from the referral form to determine youth needs | 3 | 4 | 0 | 7
  Referral form is the most practical way of determining youth needs | 3 | 3 | 0 | 6
  Qualitative section on referral form most helpful for assessing youth needs | 0 | 2 | 1 | 3
  Information collection practices to develop service plans vary widely | 3 | 5 | 1 | 9
  Tool does not guide decision-making but has potential to be effective and applicable | 3 | 4 | 1 | 8
  Assessment of risk/need should be a well-defined and supported process | 3 | 4 | 1 | 8
  Developing a scoring system would improve usability | 3 | 1 | 1 | 5

Tool format and assessment procedures should be compatible with the service setting
  Timeframe for completing initial assessment is too short | 3 | 4 | 1 | 8
  Uncertainty about the reliability of information collected with the tool because of time constraints and limited rapport building before assessments | 3 | 3 | 1 | 7
  Tool is too long and wordy | 3 | 3 | 0 | 6
  Tool has the potential to be useful if ordering of items is reformatted | 1 | 1 | 0 | 2
  Tool does not target the goals and missions of different agencies | 1 | 1 | 0 | 2

I prefer to have the youth fill it out, especially if the youth characteristics indicate that more honest answers might come from having them fill it out first.

Sometimes risk assessments are completed without the youth present, via referral source or some other source. It would be better if this process was standardized. It’s a problem when indicators and outcomes can be collected without even talking to the youth.

I never give the assessment to the youth because I do not want them [the youth] to think they are “bad” or “violent” kids. I always have to clarify that you don’t have to be a violent person to be in the initiative, and just because you receive services through the initiative, it doesn’t mean that’s who you are.

Theme 2: Providers Found Some Items Difficult To Administer

Providers also indicated that the tool needed item-level and general process revisions in order to fit their needs as community-based service providers (n = 9). For example, one agency noted that the mental health items were especially difficult to interpret because assessing a youth's mental health fell outside their scope of work and/or area of expertise.

We are case managers, not psychiatrists or counselors, how are we supposed to know this [mental health status]?

Theme 3: The Tool Lacked Clear Purpose And Relevance To Case Planning

A consistent theme across all providers was that the tool, in its current form, did not serve a clear purpose in the initiative (n = 9). This resulted in the development of service plans that were guided by unstandardized and informal practices that varied widely within and across the various initiative components and sites (n = 9).

The tool feels useless as is, but if scoring is incorporated and the tool actually serves a purpose, then it could be helpful. It might be helpful for determining when the youth is supposed to be done with the initiative. It needs to guide decision-making.

Theme 4: Tool Format And Protocol For Repeated Assessments Did Not Fit With The Demands Of The Service Environment

While providers opposed the idea of replacing or significantly changing the current tool, there was widespread agreement that the tool was too long and overly wordy (n = 6). As a result, the tool was seen as a burden to service providers.

We often end up talking about one issue for a long time which means the rest of the assessment is not getting filled out. We either do follow-ups or sometimes just put “don’t know.”

Additionally, the tool was formatted in a way that did not allow for the natural progression of a therapeutic relationship, making it difficult for providers to obtain more sensitive information. One suggestion to mitigate this issue was to re-order items so providers had more time to build rapport with the youth prior to asking sensitive questions (n = 2).

It would be helpful if the tool was re-formatted so that less sensitive domains/items were located at the beginning of the assessment.

Phase 2: Tool Adaptation Process

Multiple strategies, across three project steps, were used to engage and collaborate with community providers in the tool adaptation and re-implementation process. This work was accomplished using a CBPR approach, with core CBPR principles guiding the strategies used at each step of the decision-making process. Table 2 provides an overview of key project activities and partner roles during phase 2.

Table 2

Overview of project activities and partner roles during phase 2.

Step 1: Convene a Collaborative Workgroup
  CBPR principles
  • Disseminate results to all partners
  • Facilitate a collaborative, equitable partnership in all phases of the research
  • Build on strengths and resources within the community
  • Balance research and on-the-ground application
  Academic researcher role
  • Provide overview of qualitative interview findings and recommendations for building a collaborative partnership
  • Discuss benefits of academic-community partnership and co-design process for promoting tool ownership and sustainability
  • Facilitate discussion of partner roles and workgroup goals
  • Coordinate workgroup scheduling
  Community provider role
  • Provide suggestions for frequency of meetings and workgroup goals
  • Describe preferences for provider roles and responsibilities in revision process
  • Elicit perspectives from other (non-workgroup) initiative providers and present this information to the workgroup
  Joint decision making
  • Decide that a collaborative workgroup is an appropriate strategy for the tool revision process
  • Agree on partner roles and responsibilities
  • Identify key purpose of structured assessment across initiative components

Step 2: Tool Adaptation and Assessment Protocol Standardization
  CBPR principles
  • Facilitate a collaborative, equitable partnership in all phases of the research
  • Build on strengths and resources within the community
  • Foster co-learning and capacity building
  • Balance research and on-the-ground application
  • Involve systems development in a cyclical and iterative process
  Academic researcher role
  • Facilitate discussion of domain and item-level revisions
  • Synthesize provider feedback and make initial item-level revisions accordingly
  • Translate results from youth focus groups into initial youth self-report tool draft
  • Conduct systematic literature review to develop item weighting
  • Develop and beta test tool scoring procedures; work with initiative database vendor to develop a graphical representation of assessment data to inform case planning
  Community provider role
  • Provide tool revision suggestions, including adding/removing items, wording, format, and structure
  • Provide feedback and suggestions on how to best implement the tool across diverse agencies/provider roles
  • Co-facilitate youth focus groups to support development of self-report short form
  • Provide feedback on scoring system and challenges with data management system
  • Review practice protocols, provide feedback and revision suggestions
  Joint decision making
  • Decide on questions, wording, format, and structure of final tool
  • Engage in iterative tool revision process, including development of youth self-report form
  • Agree on standardized practice protocols for tool use
  • Agree on scoring strategy and use of data

Step 3: Quality Assurance and Tool Re-Implementation
  CBPR principles
  • Facilitate a collaborative, equitable partnership in all phases of the research
  • Build on strengths and resources within the community
  • Integrate co-learning and capacity building
  • Disseminate results to all partners and involve them in wider dissemination
  Academic researcher role
  • Review evidence-based principles of structured assessment and components of quality assurance (QA) systems
  • Discuss common challenges that may affect systematic implementation efforts
  • Develop training on new scoring and use of data in real time
  • Hire and work with motivational interviewing (MI) trainer to develop new tool training
  • Develop reports and wider dissemination products; write initial resource drafts
  Community provider role
  • Provide feedback about the type of QA activities that are realistic for service settings
  • Provide formal input during hiring process for QA manager
  • Provide feedback on QA and dissemination products
  • Develop case studies for MI training that are based on professional experience and knowledge
  Joint decision making
  • Agree on key quality assurance components
  • Hire a full-time quality assurance manager
  • Co-author technical manual, training webinar, assessment protocol resources
  • Develop and attend all-staff training on MI and revised assessment tool/practices

Note: Partner roles often did not map onto a single CBPR principle; as such, roles are not listed in order of principle.

Step 1: Convening a Collaborative Workgroup

In keeping with the CBPR principle of building equitable partnerships, a workgroup was developed to respond to the concerns and suggestions raised in the phase 1 interviews. All initiative providers were invited to participate in the workgroup; ultimately, a core team of 21 members convened and embarked on the tool adaptation process. This included 12 community-based providers who served as representatives from the various initiative components (networks, case management, street outreach), three members of the SYVPI management team (director, database administrator, policy analyst), and six members of the research team (including two student interns).

At the first meeting, the research partners facilitated a discussion of the phase 1 qualitative findings, including a review of the recommendations for tool improvement that were made by the community partners during phase 1. Building on these findings, the workgroup agreed that the tool had the potential to guide service planning with the caveat that a formal revision process was needed to clarify the purpose of the tool and modify it to fit the local prevention context. Over the next two meetings, the workgroup collectively agreed on the purpose of assessment for the initiative, set goals for the workgroup process, and established partner roles. To balance research objectives with practice goals, it was decided that the revised tool and related assessment practices should 1) guide service and case planning, 2) aid in progress monitoring, and 3) collect information needed for evaluation purposes and for reporting to stakeholders and external funders (e.g., aggregate trends of who is being served and key program milestones).

With the charge of the workgroup established, subsequent meetings focused on tool revision (step 2) and re-implementation (step 3) activities, all of which occurred in the context of the workgroup, which convened monthly (year 1, step 2) and then bi-monthly (year 2, step 3).

Step 2: Tool Adaptation and Assessment Protocol Standardization

In line with several CBPR principles, the tool adaptation occurred through a cyclical and iterative process that integrated co-learning and built on the strengths and resources of all partners. Several workgroup sessions were dedicated to domain and item-level revisions, with research partners and initiative management co-facilitating these discussions. The research team synthesized provider feedback and made initial revisions accordingly. Revisions included: adding and removing items; re-wording items for clarity, to support case management practices (e.g., employment, trauma), and to reflect language that was less clinical and more generalizable to the different provider settings in the initiative (e.g., case management vs. youth development programs); and reorganizing tool domains to place more sensitive topics (e.g., mental health and domestic violence) at the end of the interview. The revised tool was brought to the workgroup for review and sent to all providers electronically to ensure feedback was elicited from those who could not attend the workgroup sessions. The revision process continued across several iterations of the revise-feedback-revise cycle until the workgroup was satisfied with the final product.

In addition to wording, format, and structural changes, the tool was also adapted to fit the local context via the development of an entirely new follow-up assessment form. To help mitigate issues related to large caseloads and time constraints (challenges identified in the interviews), the follow-up form was a shortened version of the tool, worded at a level appropriate for youth to complete themselves. This youth self-report form contained the subset of items most predictive of violence/delinquency risk, and the workgroup decided that it would help providers track and monitor progress. A draft short form was developed by research partners with feedback about item inclusion and wording from community partners. Research and community partners co-facilitated two youth focus groups to ensure the form was 1) worded appropriately for the youth demographic served by the initiative, and 2) visually appealing, to increase the likelihood that youth would want to fill it out. The development of the final short form followed the same revise-feedback-revise process, all of which occurred in the context of the workgroup.

Next, the assessment practice protocols were standardized across the initiative components to aid consistency in reporting. This included: 1) establishing timeframes for conducting the initial (at enrollment), follow-up (every 6 months), and closure (exit) assessments; 2) clarifying that assessments were to be completed by providers at the neighborhood network agencies, with the exception of non-traditional referrals (e.g., youth self-referral); and 3) developing a triage system for youth referrals based on the assessment scores (e.g., youth with indicated need at the high risk level are referred to the most intensive services, such as street outreach).

Following structured assessment development standards for item weighting as recommended in the literature (e.g., Barnoski, 2004), research partners conducted an extensive literature review on the risk and protective factors for youth violence to develop item weights for the tool. Based on this review, items were weighted to allow for the calculation of a low, medium, or high risk score for each domain, which informed a youth’s overall score. The scoring system was beta tested by a member of the research team and content validated in a discussion with a community partner who had significant experience conducting risk assessments, developing service plans, and working with initiative youth to meet program goals. To facilitate the use of the assessment data to inform case planning (a key tool purpose as identified by the workgroup), a strength-to-high risk/needs bar graph template was created so that providers could quickly and visually identify areas for targeted service referrals based on specific domain scores. Community partners provided feedback on the draft schematic, including perceived usefulness of the graph and potential issues associated with interpretability of results in the absence of training. Research partners worked with the database vendor to ensure the graph was built into the initiative database and made accessible to all providers who entered assessment data.
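
As a rough illustration of this kind of weighted scoring, the following is a minimal sketch of how item weights and domain cut points could be combined into low/medium/high classifications; the items, weights, and cut points shown are hypothetical placeholders, not the values adopted by the workgroup.

```python
# Illustrative sketch of weighted domain scoring (hypothetical items, weights, cut points).
from typing import Dict

# Hypothetical item weights, in the spirit of weights derived from a literature review.
DOMAIN_WEIGHTS = {
    "school": {"attendance": 2, "academic_status": 1, "suspension": 2},
    "criminal_history": {"any_violent_offense": 3, "recent_violent_offense": 3},
}

# Hypothetical cut points separating low / medium / high risk for each domain.
DOMAIN_CUTPOINTS = {"school": (2, 4), "criminal_history": (2, 4)}


def score_domain(domain: str, responses: Dict[str, int]) -> str:
    """Sum weighted item endorsements and map the total to a risk category."""
    weights = DOMAIN_WEIGHTS[domain]
    total = sum(weights[item] * responses.get(item, 0) for item in weights)
    low_cut, high_cut = DOMAIN_CUTPOINTS[domain]
    if total < low_cut:
        return "low"
    if total < high_cut:
        return "medium"
    return "high"


def overall_risk(domain_scores: Dict[str, str]) -> str:
    """Overall classification driven by the highest domain-level classification."""
    order = {"low": 0, "medium": 1, "high": 2}
    return max(domain_scores.values(), key=order.get)


if __name__ == "__main__":
    school = score_domain("school", {"attendance": 1, "academic_status": 0, "suspension": 1})
    crim = score_domain("criminal_history", {"any_violent_offense": 1})
    print(school, crim, overall_risk({"school": school, "criminal_history": crim}))
```

Domain-level scores such as these could then feed a strength-to-high-need display of the kind described above, with each domain's category plotted as one bar.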

Step 3: Quality Assurance and Tool Re-Implementation

In line with evidence-based principles of assessment practices in structured settings (e.g., Vincent, Guy, & Grisso, 2012), the workgroup developed a quality assurance (QA) infrastructure and strategy for the re-implementation of the adapted tool. Research partners provided an overview of common evidence-based principles of structured assessment and quality assurance systems. Workgroup members discussed the potential benefits and challenges of various strategies, and ultimately, the community partners collectively decided on several components that seemed most relevant for the prevention context in which they operated. This included hiring a full-time quality assurance manager to provide staff with regular, dedicated support and technical assistance around assessment practices. In line with maintaining an equitable partnership in which all partners share power in the decision-making process, all community providers were invited to participate in the hiring process, including attending job talks and providing formal input (written feedback, voting) on candidates. In addition to hiring a QA manager, the workgroup also agreed on co-developing products and resources to support the ongoing fidelity of assessment practices across the initiative components (detailed below).

The provider interviews (phase 1) highlighted a clear need for more training on motivational interviewing (MI). The workgroup decided that an all-staff training would ensure that providers across the initiative components had a common set of clinical skills, regardless of job roles or responsibilities. The MI training, with its focus on eliciting information, was a natural setting in which to roll out the adapted tool. To prepare for the training, a member of the research team (second author) and the initiative director (third author) worked together to hire an MI trainer and co-designed a training manual to ensure the training met the identified needs of all initiative providers. Other parallel efforts were made to ensure the training activities facilitated a successful roll-out of the revised tool. For example, a member of the research team (first author) worked collaboratively with several community partners to develop case studies showing how a completed tool was scored, including how scores were translated into risk/need profiles for service planning. To foster continued co-learning and capacity building, all members of the workgroup, including the research partners, attended and actively participated in the MI and tool re-implementation training.

The workgroup developed several additional quality assurance products and resources, all intended to facilitate successful and sustainable tool re-implementation. These included a technical manual, co-authored by all members of the workgroup; a training webinar, intended to serve as a booster training for those less familiar with risk/needs assessment practices and for the onboarding of new providers; a frequently asked questions (FAQ) document; and a visual schematic of the standardized practice protocols (developed in step 2), intended as a quick-guide resource for providers who indicated a preference for visual over written resources.

Phase 3: Validation Study

The validation study addressed whether adaptations to the tool improved the assessment process by yielding more complete and accurate data (concurrent validation) and whether the tool accurately classified youth with a higher propensity to be arrested for a violent offense (predictive validation). We also examined the degree to which youth were referred to services that aligned with their risk/need classification (an evaluation goal identified by the workgroup).

Sample and Procedures

Two independent samples of youth were used in this study: 1) Youth assessed using the original tool between September 2012 and October 2013 (n = 174); and 2) Youth assessed using the adapted tool between June 2014 and October 2015 (n = 454). Network providers completed approximately half (52%) of the original assessments, followed by street outreach workers (24%) and case managers (19%). For the adapted assessments, network providers completed the majority (76%), followed by case managers (15%) and street outreach workers (9%).

Data for this phase come from three sources: the SYVPI administrative database, Seattle Public Schools, and the Washington State Administrative Office of the Courts. Information on youth and provider characteristics, assessment data, and service referrals come from the SYVPI database. Three items from the school domain (failed class history, academic status, attendance) and two items from the court history domain (violent offenses: any history, and within the past 6 months) were used for the concurrent validation. Corresponding data from official public-school and juvenile court records were linked to assessment data for the matching time periods. Any adjudicated violent offense subsequent to the initial assessment was used as the criterion for predictive validity. Service referrals that occurred after the initial assessment were also matched to the assessment data. These included case management, street outreach, employment, mentoring, Aggression Replacement Training (ART), and youth development programming.
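
For readers interested in the mechanics of this linkage, the sketch below shows one way assessment records could be joined to court records and flagged for a violent offense within 6 months of the initial assessment; the column names and records are hypothetical and do not reflect the initiative's actual data structures.

```python
# Illustrative record-linkage sketch (hypothetical column names and toy records).
import pandas as pd

assessments = pd.DataFrame({
    "youth_id": [101, 102, 103],
    "assessment_date": pd.to_datetime(["2014-07-01", "2014-08-15", "2015-01-10"]),
})
court_records = pd.DataFrame({
    "youth_id": [101, 103, 103],
    "offense_date": pd.to_datetime(["2013-05-02", "2014-11-20", "2015-04-01"]),
    "violent_offense": [1, 1, 1],
})

# Link court records to assessments, then flag any violent offense in the
# 6 months following the initial assessment (the predictive validity criterion).
linked = assessments.merge(court_records, on="youth_id", how="left")
in_window = (linked["offense_date"] > linked["assessment_date"]) & (
    linked["offense_date"] <= linked["assessment_date"] + pd.DateOffset(months=6)
)
linked["offense_within_6mo"] = in_window & (linked["violent_offense"] == 1)
print(linked.groupby("youth_id")["offense_within_6mo"].max())
```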

The gamma statistic was used as a symmetrical measure of association between the data collected by SYVPI providers compared to official records (concurrent validation). Gamma values closer to 1 indicate a strong relationship between the two measures, and values close to 0 indicate little or no relationship (range –1 to 1). Risk classification proportions and Area under the Curve (AUC) calculations were used to assess tool sensitivity and specificity for predicting violent offenses by gender at 6-months post initial assessment (predictive validity). Descriptive statistics were used to assess match rates between assessment classification and service referrals.
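
To make the analytic approach concrete, the sketch below computes the two main statistics on toy data: a Goodman and Kruskal's gamma between a provider-reported indicator and the corresponding official-record indicator, and an AUC relating an ordinal risk classification to subsequent violent offending. All values shown are illustrative, not study results.

```python
# Illustrative analysis sketch (toy data; the study used initiative, school, and court records).
from itertools import combinations
from sklearn.metrics import roc_auc_score


def goodman_kruskal_gamma(x, y):
    """Gamma = (concordant - discordant) / (concordant + discordant), ignoring ties."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        product = (x1 - x2) * (y1 - y2)
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)


# Provider-reported vs. official-record violent offense history (0/1, toy values).
provider_report = [1, 0, 0, 1, 0, 1, 0, 0]
official_record = [1, 0, 0, 0, 0, 1, 1, 0]
print("gamma:", goodman_kruskal_gamma(provider_report, official_record))

# Risk classification (0 = low, 1 = medium, 2 = high) vs. any violent offense
# within 6 months of the initial assessment (toy values).
risk_class = [0, 1, 2, 2, 1, 0, 2, 1]
violent_offense = [0, 0, 1, 1, 0, 0, 1, 0]
print("AUC:", roc_auc_score(violent_offense, risk_class))
```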

Complete data. The adapted tool had substantially lower rates of missing data on every domain with the exception of two relationship subdomains: peer relationships (8% vs. 7%) and gang affiliation (9% vs. 7%; Table 3). The largest differences across tool versions were found in the attitudes (26% for attitudes towards school), family (22% for domestic violence), and mental health (19% for trauma) subdomains.

Table 3

Missing data item analysis for original and adapted assessment tools.

Domain / subdomain | Original tool (n = 174), missing n (%) | Adapted tool (n = 454), missing n (%) | % difference (c)

School
  Academic status | 4 (2%) | 12 (3%) | –1%
  Attendance | 15 (9%) | 16 (4%) | 5%
  Suspension/expulsion | 14 (9%) | 14 (3%) | 6%
Relationships
  Peer | 12 (7%) | 35 (8%) | –1%
  Romantic, intimate or sexual | 18 (11%) | 12 (3%) | 8%
  Gang affiliation | 12 (7%) | 43 (9%) | –2%
Use of Free Time | 6 (3%) | 7 (2%) | 1%
Employment | 11 (6%) | 4 (1%) | 5%
Aggression | 17 (10%) | 18 (4%) | 6%
Criminal History | 23 (13%) | 26 (6%) | 7%
  Criminal status (a) | – | 113 (25%) | –
Substance Use
  Alcohol use | 30 (18%) | 17 (4%) | 14%
  Historical alcohol use (a) | – | 18 (4%) | –
  Drug use | 17 (10%) | 16 (4%) | 6%
  Historical drug use (a) | – | 22 (5%) | –
Family
  Family issues | 10 (6%) | 9 (2%) | 4%
  Domestic violence | 44 (26%) | 19 (4%) | 22%
  Housing status | 9 (5%) | 13 (3%) | 2%
  Teen parenting and pregnancy (b) | 161 (93%) | 21 (5%) | –
  Family member incarceration (b) | 120 (69%) | 28 (6%) | –
Attitudes
  Towards anti-social behavior | 26 (15%) | 23 (5%) | 10%
  Towards fighting and physical aggression to resolve conflict | 18 (11%) | 9 (2%) | 9%
  Towards school | 47 (27%) | 4 (1%) | 26%
Mental Health
  Mental health issues | 30 (18%) | 25 (6%) | 12%
  Support network | 13 (7%) | 13 (3%) | 4%
  Trauma | 44 (25%) | 29 (6%) | 19%

Notes: Percentages are rounded to the nearest whole percent. Domain names appear as headings or standalone rows; indented rows are subdomains of the preceding domain. (a) The adapted tool separated historical and current use into separate subdomains. (b) The original tool only provided a check box for "yes" responses, so no distinction could be made between missing data and an endorsement of "not applicable." (c) A negative value indicates the adapted tool had higher missingness.

Concurrent validity. The adapted tool had acceptable concurrent validity for the criminal history domain and adequate concurrent validity for the school domain. Although offense rates were relatively low for the sample (14% prior offense rate), initiative providers were able to obtain fairly accurate information regarding a youth's criminal history, as indicated by high match rates (83% and 84%) and acceptable gamma values (.32 and .33, both p < .001). For these items, there was substantial variation across provider types: network providers had higher match rates than case management (87% and 90% vs. 83% and 72%), and considerably higher rates than street outreach (87–90% vs. 52% and 72%). For the school domain, ever failing a class had the highest match rate (65%), followed by attendance (52%) and academic status (50%). Gamma statistics for failed class history and academic status were equivalent (.29, p < .01), suggesting fair agreement for both items, while the gamma for attendance was low (.15, p < .001), suggesting little agreement.

Predictive validity. Criminal offense data were available for n = 183 (88 males, 95 females) of the adapted tool sample. Classification results suggested a fairly even distribution across risk categories: 22% of males were classified as low, 43% as medium, and 35% as high risk, while 29% of females were classified as low, 31% as medium, and 40% as high risk. The adapted tool showed good discrimination for males (AUC = 0.77) and fair discrimination for females (AUC = 0.64), suggesting the tool outperformed chance despite the low incidence of violence in the sample (6 males, 4 females; 5% of the sample). Five of the six males (83%; gamma = 0.84, p < .05) and three of the four females (75%; gamma = 0.45, not significant) who had violent offenses 6 months post assessment were correctly classified as high risk.

Service matching. Service referral data were available for n = 404 of the adapted tool sample. Overall, youth were relatively well matched to services that aligned with their indicated need. As would be expected, high risk youth received the majority of the referrals to street outreach (61%), followed by the medium risk group (28%), χ2 = 25.20, df = 2, p < .001. For mentoring, the low risk group received the highest proportion of referrals (40%), followed by the medium risk group (36%), χ2 = 21.11, df = 2, p < .001. No group differences were found for the other service types, though trends largely aligned with the specified referral protocols.
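
As an illustration of the group-difference test reported above, the sketch below runs a chi-square test on a hypothetical cross-tabulation of street outreach referrals by risk classification; the counts are invented for demonstration only.

```python
# Illustrative chi-square test of service referrals by risk classification (toy counts).
from scipy.stats import chi2_contingency

# Rows: low / medium / high risk; columns: referred vs. not referred to street outreach.
referral_table = [
    [5, 60],    # low risk
    [20, 100],  # medium risk
    [40, 80],   # high risk
]
chi2, p, dof, expected = chi2_contingency(referral_table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```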

Discussion

This study illustrates the feasibility and usefulness of a CBPR approach for engaging community providers in adapting research-based strategies to fit local contexts. Prior to engaging in a CBPR process, SYVPI experienced challenges in implementing a tool initially developed to support providers in service planning. Through the CBPR process, the tool was revised and re-introduced in a modified form, which ultimately improved usability and buy-in. Further, the revised tool showed promising psychometric characteristics in the preliminary validation analyses. Our experience demonstrates that applying CBPR principles can help overcome challenges commonly associated with the direct translation of science to practice, and highlights the particular value of building community-research partnerships to ensure translation efforts balance research and practice needs.

Implications

Research-based tools need to be fitted to the local context. Results from this study highlight multiple ways in which tools developed for structured settings (e.g., the PACT for juvenile court or SAVRY for juvenile probation) may not fit well within the community-based prevention context. Interview themes and high rates of missing data reflected a lack of shared understanding about the purpose and utility of the tool as originally implemented. These misalignments were due to provider variations in professional training and expertise, time demands, and insufficient connection to decision-making needs. These issues are not uncommon, and have been documented in other service settings. Research suggests that the use of structured tools in community settings can be impacted by limited knowledge among providers about risk and protective factors and by local beliefs about positive youth development that conflict in some ways with research-based assessment approaches (Yi, Martyn, Salerno, & Darling-Fisher, 2009). Ultimately, we found that engaging in a CBPR approach to tool adaptation can assist in overcoming challenges common to research-to-practice translation, including 1) variation in provider professional background and orientation across sites, 2) unfamiliarity with structured assessment in youth development contexts, and 3) misalignment with the workflow and decision-making needs of providers.

In the case of SYVPI, engaging in a CBPR approach to tool adaptation appeared to foster local ownership and increase buy-in for the tool. Specifically, we saw improvements in data collection practices (tool use) and data quality. We found an increase in provider motivation to follow the standardized practice protocols, as indicated by an increase in network providers conducting the enrollment assessments: 76% compared to 52% for the original tool. There was also substantially less missing data on the majority of the tool domains for the adapted tool, and referrals to services using this tool were well aligned with the co-designed practice protocol.

Assessment tools adapted through a CBPR process may need further adaptation for use in specific practice contexts. We found that, despite the application of a CBPR approach, the capacity to use structured assessment continued to vary across sites. For instance, there was a substantial discrepancy in tool usage between street outreach workers and the other initiative providers. Unlike case managers who typically work in office settings, outreach workers engage with youth and families, quite literally, on the “streets,” providing outreach services such as crisis management and violence de-escalation (e.g., Frattaroli et al., 2010). The SYVPI outreach workers were able to obtain only moderately accurate information even after participating collaboratively in the revision process. While engaging in a CBPR process may have increased buy-in for structured assessment with this group, it may not have produced a tool that fit their real-world practice needs. Further efforts may be needed to modify the tool content or format specifically for street outreach workers, or those working in unstructured practice contexts.

Building equitable partnerships, a cornerstone of CBPR, is a time-, resource-, and labor-intensive process, and it can be a challenging undertaking (Horowitz, Robinson, & Seifer, 2009). In the current project, community partners had to carve out time from their already overburdened schedules to participate in the tool revision process, despite uncertainty about the ultimate benefits of the tool; research partners had to be patient with the timing (a 2-year contract evolved into a 5-year commitment) and ultimately contributed unpaid time and labor to ensure the technical manual and dissemination products were finished. The project required a willingness to be flexible regarding timelines, resources, and intended outcomes. Researchers and communities engaging in translation efforts in multisite community initiatives should consider strategies for navigating these challenges to ensure project and partner needs and objectives are met.

Future Directions

While much progress was made to fit the assessment tool to SYVPI’s multisite practice context, additional efforts to evaluate the effectiveness of engaging in a CBPR approach to tool adaptation did not come to fruition. In the case of SYVPI or similar initiatives, future work is needed to assess provider perceptions of the benefits and challenges of the CBPR process for local adaptations of research-based products, and perceptions of the revised products at various stages of the translation process. More research is also needed to understand 1) what tools and strategies would best support the needs of less traditional providers, like street outreach workers, given the unique role they play in ensuring that the most vulnerable youth are safe and supported in the community, and 2) whether a CBPR approach can help facilitate these efforts in unique and highly malleable practice contexts, such as outreach settings.

Study Limitations

The findings presented here should be considered in light of several study limitations. We were unable to account for staff turnover between the tool iterations; therefore, it is not entirely clear what proportion of the missing data from the original and adapted tools was attributable to staff differences rather than to the tool adaptation itself. It is possible that other social or contextual factors, such as provider demographics (e.g., race/ethnicity, gender, educational background), influenced perceptions and use of the tool and levels of engagement in the CBPR process. This information was not tracked in the current study in order to 1) protect confidentiality during the interviews so participants felt comfortable being open and honest about the tool and assessment practices, and 2) facilitate the development of trust and relationship building between the researchers and community partners. Concurrent validity was only established for a subset of items, and the findings reported here may not extend to other assessment items. The predictive validity analyses were limited by the fact that only 10 youth committed violent offenses during the study timeframe. However, our preliminary results suggest that some association exists between the risk classification and violent offending (e.g., for the female sample), and these estimates would likely improve if the tool were revalidated with a larger sample.

Conclusion

This paper aimed to highlight the ways in which a CBPR approach can facilitate adaptation and translation efforts to ensure research-based approaches fit local contexts. Our study demonstrates the value of community-research partnerships, the usefulness of CBPR principles for adaptation efforts, and ultimately, that translating research-based approaches is feasible for comprehensive community initiatives. We find that a CBPR approach can balance provider needs with research-based assessment practices to ensure youth receive well-targeted services across multiple service settings.