The advantage of using a measure composed of multiple questions, rather than a single question, is that it allows us to gather information on a particular concept using a multi-dimensional approach. Measures are also more reliable when they include a number of well-formulated questions, as this allows us to differentiate more accurately between students.
For example, a reliable measure will allow us to differentiate between students with high versus low levels of anxiety. Using a reliable measure also ensures that if we were to ask students to respond to the same questions at a later date, their responses would be similar. At a school or district level, reliability gives the client confidence in the results they are seeing, as those results are based on responses to rigorously tested survey content.
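The internal consistency described above is commonly quantified with a statistic such as Cronbach's alpha, which rises when a measure's items vary together across students. As a minimal sketch (the function name and the toy anxiety-scale data are invented for illustration, not taken from the actual survey):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha: internal consistency of a multi-item measure.

    `responses` is a list of per-student lists, one score per item.
    """
    k = len(responses[0])                 # number of items in the measure
    items = list(zip(*responses))         # transpose: scores grouped by item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy data: four students answering a three-item anxiety scale (1-5).
data = [[4, 5, 4], [2, 1, 2], [5, 4, 5], [1, 2, 1]]
alpha = cronbach_alpha(data)   # close to 1.0, since the items agree
```

Because the three items move together for each student, alpha here lands near 1.0; uncorrelated or noisy items would pull it down, which is why a single question cannot demonstrate this kind of reliability.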
Although there may always be a few students who do not take the survey seriously or who 'mess around' with their responses, those responses will have a relatively small impact on the survey results, or none at all, depending on the size of the school. In our experience, when students indicate that they may have 'messed around' with the survey, we generally do not see that reflected in the results or the OEQ responses.
Survey results should always be viewed as a starting point, or one piece of the puzzle. If a client feels their data is inaccurate or skewed, it should be viewed as an opportunity to discuss the results with students.
Additional articles located on the Knowledge Base: Practical Tips for Generating Survey Excitement with Students
The survey also includes functionality that filters the questions that students are asked. In this way, students are never asked follow-up questions about behaviour or situations they have not experienced. For example, only students who report being a victim of bullying are asked questions about where and when the bullying occurred.
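This filtering behaves like branching (skip) logic: a screening answer determines which follow-up questions a student ever sees. A minimal sketch of the idea, assuming invented question IDs that are not the survey's actual identifiers:

```python
# Hypothetical skip-logic sketch: follow-ups are only queued when the
# screening answer makes them relevant. Question IDs are illustrative.
def next_questions(answers):
    """Return the questions still to be asked, given answers so far."""
    questions = ["Q1_bullied"]            # screening question, always asked
    if answers.get("Q1_bullied") == "yes":
        questions += ["Q2_where", "Q3_when"]
    # Only present questions the student has not yet answered.
    return [q for q in questions if q not in answers]

assert next_questions({"Q1_bullied": "no"}) == []
assert next_questions({"Q1_bullied": "yes"}) == ["Q2_where", "Q3_when"]
```

The design choice matters for data quality: students who answered "no" never see the follow-ups, so the bullying-detail questions contain responses only from students who reported the experience.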
Finally, the concepts and terminology that we use are based on standard definitions found in the research, are age-appropriate, and reflect common language that students would hear or use. Students are already aware of these concepts; the survey is about making these topics more accessible. For example, with our Anxiety measure, the survey can help remove the stigma around mental health conversations and give students a safe environment in which to respond to these questions and have their voice heard. Mental health is also an important component of student well-being and academic success.
It could be argued that all items or questions on perception surveys cause students to reflect on the topics under evaluation (e.g., being a victim of bullying, not having positive friendships, their relationships with students).
It is important to evaluate the definitions of anxiety and depression against the actual survey items to provide context and help interpret the results.
It can be difficult to articulate what reflects a meaningful change as it is always important to take the context of the results into consideration, as well as the level/type of interventions that have been put into place.
Results at the system level tend to remain relatively stable, and can appear stagnant from year to year, because of the large sample sizes of bigger cohorts of data. Drill-downs become a valuable tool because they ensure data is analyzed beyond the aggregate level. Looking at drill-downs can reveal patterns (e.g., the dips always occur at grade 9, or the dips appear to be associated with the same cohort).
It is also important to go beyond the results at the district or system level and look at the individual school level results, including drill-downs. This is where you are more likely to see marked change, as well as gain information about which schools are moving the needle to engage in further discussion. Schools and districts should also be setting their own benchmarks and setting targets for how much they want to “move the needle” as part of their goals.
Additional articles located on the Knowledge Base: Long term data points
Answered by #2. Perception data will always have some level of error. Student preparation is very important for shaping how students view the survey process, and should be followed by demonstrating how their responses are actioned to support change.
Additional articles from Knowledge Base: Communication and Survey Preparation
An aggregate reflects the calculated score for a school or district based on all the data collected during a given survey window. The aggregate is specific to the client and reflects the unique responses of the participants surveyed (e.g., parents, teachers, or students).
The OurSCHOOL norms (one-click, interactive, or replica) reflect static aggregate values that are based on a larger composition of respondents and are meant to provide a broader comparison point (e.g., Canadian students). Norms remain static for five years to ensure consistency in benchmark comparisons for school improvement planning purposes. Keeping the norm unchanged for a meaningful period ensures these comparisons can be made and tracked accurately over time (imagine having a different comparison value each year!).
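The distinction can be illustrated in a small sketch: the aggregate is recomputed each survey window from that window's responses, while the norm is a fixed benchmark held constant. All values below (the norm constant, the scale, and the sample scores) are invented for illustration and are not OurSCHOOL data:

```python
from statistics import mean

# Hypothetical static norm: held constant for five years so year-over-year
# comparisons are made against the same benchmark.
NORM_ANXIETY = 6.3

def aggregate(scores):
    """Current-window aggregate: mean of all participant scores."""
    return round(mean(scores), 1)

# One school's responses for this survey window (illustrative values).
window_scores = [5.8, 6.1, 7.0, 6.4, 5.9]
agg = aggregate(window_scores)            # recomputed every window
diff = round(agg - NORM_ANXIETY, 1)       # gap versus the static norm
```

Next year the school's aggregate would be recomputed from new responses, but it would still be compared against the same `NORM_ANXIETY` value, which is what makes the benchmark comparison meaningful over time.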