HSSSE and MGSSE are proven tools with time-tested results. Evidence of the credibility and trustworthiness of HSSSE and MGSSE includes the following:
Grounded in the research
HSSSE and MGSSE are strongly grounded in the research and literature on student engagement, and in particular research related to the engagement of high school and middle grade students. Research describes student engagement as a multidimensional construct that includes:
- behaviors (e.g., persistence, effort, attention, taking challenging classes)
- emotions (e.g., interest, pride in success), and
- cognitive aspects (e.g., solving problems, using metacognitive strategies).
Designed to satisfy the conditions needed for self-reported data to be reliable
These conditions are:
- information is known to respondents;
- questions are phrased clearly and unambiguously;
- questions refer to recent activities;
- respondents think the questions merit a serious and thoughtful response; and
- answering questions does not threaten or embarrass students, violate their privacy, or prompt them to respond in socially desirable ways (i.e., by conceding to peer pressure).
Validity and reliability of the instruments
Researchers and educators often discuss trustworthiness in terms of the validity and reliability of the instruments. These concepts are multifaceted and have diverse definitions; there are multiple methods for examining reliability and validity. However, as a general concept, reliability refers to the degree to which an instrument produces consistent results across administrations.
For example, a measure would not be reliable if it measured an object’s length at 14 inches one day and at 13 inches the next. As a general concept, validity refers to whether the results obtained from an instrument actually measure what was intended and not something else. Evidence that supports the validity and reliability of HSSSE includes the following:
- Content Validity (Face Validity). Content validity addresses the question, do the survey questions cover all possible facets of the scale or construct? This form of validity refers to the extent to which a measure represents all facets of a given construct. There are no statistical tests for this type of validity; instead, experts judge whether or not the instrument measures the construct well. To establish content validity, the Center for Evaluation, Policy, & Research (CEPR) at Indiana University convened an external Technical Advisory Panel in 2012-13 consisting of national academic experts in student engagement, K-12 practitioners and psychometricians. The Technical Advisory Panel examined the content validity of the HSSSE categories (i.e., dimensions of engagement), subcategories and items to assess the extent to which the constructs aligned with current research and literature on student engagement. Items were revised, refined or dropped based on the panel’s recommendations. The content validity of HSSSE is therefore supported by the panel’s integral involvement in the development and refinement of the instrument.
- Construct Validity. Construct validity is the degree to which an instrument measures the characteristics (or constructs) it is supposed to measure. It addresses the question, does the theoretical concept match up with a specific measurement or scale? The three dimensions of student engagement measured by HSSSE and MGSSE (i.e., cognitive engagement, emotional engagement and behavioral/social engagement) are commonly regarded in the research and literature as the key dimensions of high school and middle school student engagement (Fredricks, Blumenfeld and Paris, 2004; Fredricks, McColskey, Meli, Mordica, Montrosse and Mooney, 2011). Confirmatory factor analyses of HSSSE data support the construct validity of the subscales for these three dimensions (a sketch of how such an analysis can be specified follows this list).
- Response Process Validity. Response process validity addresses the question, do respondents understand the questions to mean what they are intended to mean? This form of validity refers to the extent to which respondents understand the construct(s) in the same way the researchers define them. There are no statistical tests for this type of validity; instead, data are gathered via respondent observation, interviews and feedback. To establish response process validity, CEPR conducted focus groups and cognitive interviews with students at seven high schools using both paper and online versions of the instrument, and survey items were refined based on respondents’ feedback.
- Reliability. CEPR specifically examined internal consistency reliability, which addresses the question, do the items within a scale correlate well with each other? Internal consistency is the extent to which a group of items measures the same construct, as evidenced by how well the items vary together, or intercorrelate. It is commonly measured with Cronbach’s alpha; a Cronbach’s alpha coefficient greater than or equal to 0.70 is traditionally considered reliable in social science research (Thorndike & Thorndike-Christ, 2010). For HSSSE, the Cronbach’s alpha coefficient was calculated for each of the three dimensions of student engagement using 2013-2015 data from 64,911 students. Cronbach’s alpha ranged from .71 to .91 for the subscales of cognitive engagement, ranged from .73 to .89 for the subscales of emotional engagement, and was .70 for behavioral/social engagement (a worked computation of Cronbach’s alpha follows this list).
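The confirmatory factor analyses cited under construct validity can be illustrated in code. The sketch below is only an assumption-laden illustration: it specifies a three-factor measurement model using the open-source semopy library and hypothetical item names (q1 through q9); the actual HSSSE items, sample and estimation choices are not reproduced here.

```python
# Minimal CFA sketch (not the actual HSSSE analysis): three latent
# engagement dimensions, each indicated by hypothetical items q1..q9.
import pandas as pd
import semopy

# lavaan-style measurement model: "latent =~ indicator items"
MODEL_DESC = """
cognitive  =~ q1 + q2 + q3
emotional  =~ q4 + q5 + q6
behavioral =~ q7 + q8 + q9
"""

def run_cfa(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the model to item responses (one column per item, q1..q9)
    and return global fit statistics such as CFI and RMSEA."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)
    return semopy.calc_stats(model)
```

In an analysis like the one described, construct validity is supported when the hypothesized three-factor structure fits the data well (e.g., acceptable CFI and RMSEA) and each item loads strongly on its intended dimension (inspectable via model.inspect()).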
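Because Cronbach’s alpha carries the reliability evidence above, a worked computation may help. The following sketch uses hypothetical 4-point Likert responses rather than actual HSSSE data and implements the standard formula: alpha = k/(k-1) × (1 − sum of item variances / variance of the total score).

```python
# Worked Cronbach's alpha computation on toy data (not HSSSE data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering a three-item subscale on a 1-4 scale.
scores = np.array([
    [3, 4, 3],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 4],
])
print(round(cronbach_alpha(scores), 2))  # ~0.92 here; >= 0.70 is the conventional bar
```

Items that vary together (respondents who score high on one item tend to score high on the others) push alpha toward 1, which is exactly the intercorrelation property described above.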