
Research Frequently Asked Questions

 

You say that your questions have a high level of reliability. What does that mean at a district level?
Reliability refers to the survey content and the original survey construction process, and is therefore not specific to schools or districts. A reliable measure ensures that the items selected for inclusion are appropriate for, and reflective of, the construct as we have defined it (e.g., sense of belonging, anxiety).


The advantage of using a measure composed of multiple questions, rather than a single question, is that it allows us to gather information on a particular concept using a multi-dimensional approach. We also know that measures are more reliable when they include a number of well-formulated questions, as this allows us to differentiate between students more accurately.

For example, a reliable measure allows us to differentiate between students with high versus low levels of anxiety. Using a reliable measure also ensures that if we were to ask students the same questions at a later date, their responses would be similar. At a school or district level, reliability allows the client to have confidence in the results they are seeing, as they are based on responses to rigorously tested survey content.
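
Internal consistency of the kind described above is commonly summarized with Cronbach's alpha. The source does not say which statistic OurSCHOOL uses, so this is only an illustrative sketch, with made-up response data:

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a multi-item measure.

    responses: one list per student, one score per survey item.
    Alpha approaches 1.0 when the items move together across students,
    i.e., when the measure reliably separates high from low scorers.
    """
    k = len(responses[0])                                  # items in the measure
    totals = [sum(student) for student in responses]       # per-student totals
    item_vars = [pvariance([s[i] for s in responses]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

# Hypothetical data: five students answering three anxiety items consistently
consistent = [[4, 4, 5], [1, 2, 1], [5, 5, 4], [2, 1, 2], [3, 3, 3]]
print(round(cronbach_alpha(consistent), 2))  # → 0.95: items agree strongly
```

A measure with this kind of alpha separates high-anxiety from low-anxiety students cleanly; items that students answer inconsistently would drag alpha down.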

Additional articles located on the Knowledge Base: Accuracy of OurSCHOOL measures; How measures are developed

 

How do you make sure that the results are valid and not skewed?
We do not ‘clean’ or ‘modify’ the data in any way, so the results always reflect the true responses of the students who participate. We know that when appropriate survey preparation is undertaken, and students understand the purpose of the data and see their results being actioned, their level of participation improves. Survey length can also play a role.


There may always be a few students who do not take the survey seriously or who ‘mess around’ with their responses, but their responses will have a relatively small impact on the survey results, or none at all, depending on the size of the school. From our experience, even when students indicate that they ‘messed around’ with the survey, we generally do not see that reflected in the results or the OEQ responses.
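
The size effect mentioned above is simple arithmetic: a handful of arbitrary responses barely shifts the mean in a large school but matters more in a small one. The numbers below are hypothetical:

```python
def mean_with_noise(n_genuine, genuine_mean, noisy_scores):
    """Overall mean when a few random ('messed around') responses are mixed
    into a body of genuine responses averaging genuine_mean on a 10-point scale."""
    total = n_genuine * genuine_mean + sum(noisy_scores)
    return total / (n_genuine + len(noisy_scores))

random_answers = [0, 10, 3, 9, 1]                  # five students answering arbitrarily

big = mean_with_noise(500, 7.0, random_answers)    # large school
small = mean_with_noise(30, 7.0, random_answers)   # small school

print(round(big, 2), round(small, 2))  # → 6.98 6.66: shift of ~0.02 vs ~0.34 points
```

In the 500-student school the five noisy responses move the mean by about two hundredths of a point; in the 30-student school the same five responses move it by a third of a point, which is why context matters more in small cohorts.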

Survey results should always be viewed as a starting point, or one piece of the puzzle. If a client feels their data is inaccurate or skewed, it should be viewed as an opportunity to discuss the results with students.

Additional articles located on the Knowledge Base: Practical Tips for Generating Survey Excitement with Students

Are you worried that by actually asking questions, you are leading students to believe that they indeed have a low level of self-esteem, or anxiety?
The survey is constructed in such a way as to minimize leading questions; however, as a perception survey, at times students need to be asked directly about situations or behaviour they have experienced. To avoid prompting students to respond in a certain way, students never see the measure titles themselves, only the survey items or questions.


The survey also includes functionality that filters the questions that students are asked. In this way, students are never asked follow-up questions about behaviour or situations they have not experienced. For example, only students who report being a victim of bullying are asked questions about where and when the bullying occurred.
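
That filtering behaviour amounts to a simple branch. The function name and question wording below are illustrative, not the actual survey text:

```python
def follow_up_questions(reported_bullying: bool) -> list:
    """Follow-up items are shown only to students who reported the experience;
    everyone else skips them entirely, as described above."""
    if not reported_bullying:
        return []  # never prompted about situations they have not experienced
    return [
        "Where did the bullying occur?",
        "When did the bullying occur?",
    ]
```

A student who answers ‘no’ to the screening item simply never sees the follow-ups, so the survey cannot suggest an experience the student did not report.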

Finally, the concepts and terminology that we use are based on standard definitions that are found in the research, are age-appropriate, and reflect common language that students would hear or use. Students are aware of these concepts, and it is about making these topics more accessible. For example, with our Anxiety measure, the survey can help remove the stigma around mental health conversations and give students a safe environment in which to respond to these questions and have their voice heard. Mental health is also an important component of student well-being and academic success.

It could be argued that all items or questions on perception surveys cause students to reflect on the topics under evaluation (e.g., being a victim of bullying, not having positive friendships, their relationships with students).

Naming measures, like level of Anxiety or Depression, uses actual diagnostic terms, and therefore categorises students under a psychological condition. What is your response?
Our content is not meant for diagnostic purposes, and we do not make reference to diagnosis within any of our resources or supporting documentation. The measures of anxiety and depression were developed with the assistance of a child and adolescent psychiatrist and are designed to focus on the key markers or early indicators of mental health. Both primary and secondary students respond to questions regarding the extent to which they experience feelings or display symptoms related to anxiety or depression.


It is important to evaluate the definitions of anxiety and depression against the actual survey items to provide context and help understand the results.

Additional articles located on the Knowledge Base: Anxiety; Depression

You mentioned that a meaningful change is equivalent to 0.5 on a 10-point scale, or 5%. How did you reach this conclusion? Why can’t 2% be a meaningful change? Are these the same at a district (system) level?

It can be difficult to articulate what reflects a meaningful change, as it is always important to take the context of the results into consideration, as well as the level and type of interventions that have been put in place.

Results at the system level tend to remain relatively stable, and can appear stagnant from year to year, because of the large sample sizes involved when cohorts of data are aggregated. Drill-downs become a valuable tool, as they help ensure data is analyzed beyond the aggregate level. Looking at drill-downs can reveal patterns (e.g., the dips always occur at grade 9, or the dips appear to be associated with the same cohort).

It is also important to go beyond the results at the district or system level and look at the individual school-level results, including drill-downs. This is where you are more likely to see marked change, and to learn which schools are moving the needle so you can engage them in further discussion. Schools and districts should also be setting their own benchmarks and targets for how much they want to “move the needle” as part of their goals.
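
The 0.5-point / 5% heuristic raised in the question translates directly into a threshold check. The function below simply restates that heuristic; it is not a fixed rule, and local benchmarks may differ:

```python
def is_meaningful(before, after, scale_max=10.0, threshold=0.05):
    """Flag a year-over-year change as 'meaningful' when it moves at least
    5% of the scale, i.e., 0.5 points on a 10-point scale."""
    return abs(after - before) / scale_max >= threshold

print(is_meaningful(7.0, 7.5))  # 0.5-point shift (5% of scale): True
print(is_meaningful(7.0, 7.2))  # 0.2-point shift (2% of scale): False
```

A school that sets its own target, say 3%, would simply pass a different `threshold`; the point of the heuristic is to have an agreed benchmark before comparing windows.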

Additional articles located on the Knowledge Base: Long term data points

What if students were randomly answering the survey? How is this an effective way of measuring student wellbeing?

See the answer to the second question above, on validity and skew. Perception data will always have some level of error. Student preparation is very important in shaping how students view the survey process, and should be followed by demonstrating how their responses are actioned to support change.

Additional articles located on the Knowledge Base: Communication and Survey Preparation

What is the difference between the OurSCHOOL norm and an Aggregate?

An aggregate reflects the calculated score for a school or district based on all the data collected during a given survey window. The aggregate is specific to the client, and reflects the unique responses of the participants being surveyed (e.g., parents, teachers, or students).

The OurSCHOOL norm, whether one-click, interactive, or replica, reflects static aggregate values that are based on a larger composition of respondents, and is meant to provide a broader comparison point (e.g., Canadian students). Norms remain static for five years to ensure consistency in benchmark comparisons for school improvement planning purposes; keeping the norm unchanged for a meaningful period allows these comparisons to be made and tracked accurately over time (imagine having a different value to compare to each year!).
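
The distinction can be made concrete: the aggregate is recomputed from each survey window's responses, while the norm stays fixed as the comparison point. All values below are hypothetical, not actual OurSCHOOL norms:

```python
OURSCHOOL_NORM = 7.2   # hypothetical static norm, held fixed for five years

def aggregate(scores):
    """A client's aggregate: computed fresh from the responses collected
    in one survey window."""
    return sum(scores) / len(scores)

window_2023 = [8, 7, 6, 9, 7, 8]   # one window's responses (made up)
window_2024 = [8, 8, 7, 9, 8, 8]   # the next window's responses (made up)

# The aggregate changes with each window; the norm does not,
# so year-over-year gaps are measured against the same yardstick.
gap_2023 = aggregate(window_2023) - OURSCHOOL_NORM
gap_2024 = aggregate(window_2024) - OURSCHOOL_NORM
print(round(gap_2023, 1), round(gap_2024, 1))  # → 0.3 0.8
```

Because the norm is the same in both years, the movement from +0.3 to +0.8 reflects the school's own change rather than a moving benchmark.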

Additional articles located on the Knowledge Base: Replica Norms; National & Comparison Lines