So for over a year now, I have been working on this project in conjunction with trying to finish my school work and several other research studies. This project has been a great experience and is nearing completion.
This project started off with a crowdfunding campaign to raise funds for one of the assessments used to gauge each participant's autism severity. If you haven't heard of Experiment.com, I suggest going over and exploring; be careful, though, as you could get sucked in for hours looking at all the amazing projects people are doing around the world. It's like Kickstarter for research.
Anyway, thanks to some very amazing donors, I was able to purchase the Social Communication Questionnaire, allowing me to understand how my groups compared on severity. It is not the gold standard for assessing severity, nor is it the only way I could have done it, but the nice thing about the assessment is that it gives a score based on the number of stereotypic behaviors the participant displays, as reported by the parents; the higher the score, the more behaviors.
After securing the assessment, I finalized my participant pool. I did not get as many participants as I had originally hoped. Ultimately, 13 parents consented to have their children participate in my study, and of those, 10 completed it. One participant was too old and did not fit within the inclusion criteria. Another participant, who was verbal, did not give assent to participate. The child was very slow to make decisions and was not inclined toward physical activity. I made two attempts in consecutive weeks to invite the child to participate, but on both occasions the child did not agree, so they were dropped. The last child, who was non-verbal, had very intense behaviors (i.e., crying, self-injury) during the first session, which was ended early due to the child's agitation, as we did not want to cause them any further discomfort. During the second session, similar behavior was demonstrated, so I took this as a sign of non-assent and this child was also dropped from the study.
In this study, I was looking to understand how visual modifications, like a task/picture card or video model, could be used to improve performance on the assessment and increase understanding. To test this, the participants were split into three groups: a control group, a task card group, and a video model group. In the control group, participants were given a demonstration as prescribed by the assessment manual, which consisted of a verbal prompt, a visual demonstration, and a final verbal prompt. Each participant was given two trials for each of the assessed skills (hop and throw). After each trial, participants were asked, "What skill did you just perform/do?" and asked to point to a picture of what they did.
In the task card group, participants were given the same instruction as the control group, with the addition of a task card before the final verbal prompt. The video model group received the same instruction as well, except with a video in place of the task card.
I am currently finishing the data analysis, and because there were so few participants, inferential statistics are difficult to interpret. However, there are some interesting trends within the data. First, there was no statistically significant difference in performance between the groups. However, the video model group performed, on average, one more component of the throwing skill than the other groups.
Also, the video model and task card groups, while the difference was not significant, had much shorter overall assessment times than the participants in the control group.
Lastly, the video model group had a 75% success rate on the validity check after each trial, compared to the task card group (50%) and the control group (33%).
I hope to complete the manuscript and send it out for publication within the month. I will keep you all posted here; if you have any suggestions for a journal that would be a good fit for this project, please leave them in the comments.