Richardson Study Review

In the paper by John Richardson, 'Face-to-Face Versus Online Tutoring Support in Humanities Courses in Distance Education' (Arts and Humanities in Higher Education, 2009; 8: 69-85), we've been asked to comment on Richardson's conclusion that in this study (in contrast to the other one we commented on) there appears to be no difference in the perceived quality of tutoring, whether it was delivered online (technology-based) or face to face (f2f).

I don't know whether Richardson, who wrote this section of the course, thought there was a possible conflict of interest, since the three studies we've been asked to comment on all have him as an author - either that, or he has an ironic sense of humour.

As before, the research appears to use the wrong tests to compare differences (probably a one-way ANOVA - the paper does not actually say, but results for 'F' are reported). The overall conclusion is that there are no significant differences between online and f2f tutoring; however, I am NOT convinced by this argument. Firstly, the postal questionnaires were self-selected. Secondly, the choice between attending f2f and online tutoring was itself self-selected. The author acknowledges this as a possible problem for interpretation, but nevertheless continues to make the comparison. The conclusion also reached is that, compared with the 2005 paper he co-wrote (see above), the OU context shows the difference between the two modes to be negligible, perhaps because the OU is working hard to deliver online education following best possible practice.
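To make the statistical point concrete, here is a minimal sketch using entirely made-up ratings (the paper does not publish raw data or name its test, so this is only my guess at the kind of comparison behind the reported F values). It contrasts a one-way ANOVA with the non-parametric Mann-Whitney U test I would prefer for ordinal Likert-type responses:

```python
# A minimal sketch with HYPOTHETICAL data: what the reported F values suggest was done,
# versus a non-parametric alternative for ordinal ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-5 ratings of perceived tutoring quality from two self-selected groups.
f2f_ratings = rng.integers(1, 6, size=120)      # face-to-face tutoring group
online_ratings = rng.integers(1, 6, size=95)    # online tutoring group

# One-way ANOVA (equivalent to a t-test with two groups) treats the ordinal
# ratings as interval data.
f_stat, p_anova = stats.f_oneway(f2f_ratings, online_ratings)

# A Mann-Whitney U test only assumes the ratings can be ranked.
u_stat, p_mwu = stats.mannwhitneyu(f2f_ratings, online_ratings, alternative="two-sided")

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_mwu:.3f}")
# Note: neither test does anything about the self-selection problem;
# it only changes the distributional assumptions.
```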

There was also an acknowledgement that there might be some confusion because many 'on-campus' courses are today also delivered with an online component - so-called 'blended' education. The researcher did pairwise matched comparisons between the groups, but it's not clear how the pairs were constructed other than being matched on the course that they took (the conditions being the mode of tutoring undertaken). I also have a problem with the fact that the f2f tutoring allowed for email and telephone support as well, which was also allowed in the online version through email and, I presume, audio conferencing (Elluminate?). There could be some real blurring of the boundary between the two conditions here, i.e. email is considered 'online' in either case.

This is how I think I would have done this study:
  • Firstly, I would have set up a learning task [something that would be interesting anyway] - perhaps a micro-learning task - that OU students could volunteer to take. In other words, it's not a 60-point course but a smaller task that would run over, say, 8 weeks in total. It would not be part of their actual assessment, but I would have paid them with [substantial] credit towards a paid course that they have yet to take.
  • Then I would have split the groups randomly in their tutoring support (a minimal allocation sketch follows these bullets) - this WOULD be ethical because it's an acknowledged experiment and part of what they agree to.
  • Thirdly, I would have ensured that the tutoring was strictly 'f2f' or 'online'. The former means you have to be able to see the person without the aid of technology; online means any other means, including telephone, email, video conference and instant messaging.
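For what it's worth, here is a minimal sketch of the kind of random allocation I mean, using made-up student IDs and course codes; stratifying the shuffle by course would also make the later matched-pairs analysis straightforward:

```python
# A minimal sketch with HYPOTHETICAL volunteers: shuffle within each course, then
# alternate assignment so the two tutoring arms stay balanced per course.
import random
from collections import defaultdict

random.seed(42)

# Hypothetical volunteers: (student id, course they are taking the micro task alongside)
volunteers = [(f"student_{i:03d}", random.choice(["A103", "A210", "AA306"]))
              for i in range(1, 61)]

by_course = defaultdict(list)
for sid, course in volunteers:
    by_course[course].append(sid)

allocation = {}
for course, students in by_course.items():
    random.shuffle(students)
    for idx, sid in enumerate(students):
        allocation[sid] = "f2f" if idx % 2 == 0 else "online"

for arm in ("f2f", "online"):
    print(arm, sum(1 for a in allocation.values() if a == arm))
```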

Of course, such an experiment could measure more than just perceptions of tutor support, so it would be well worth the investment (i.e. the credit paid for participating in the study).

  • Finally, of course, if I'd used the same survey scales that he used (which seem, superficially, to be well constructed), I would have conducted simple non-parametric statistical tests, doing pairwise matched comparisons (which would have higher statistical power than unmatched group comparisons).
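Again purely as an illustration with made-up ratings, the pairwise matched comparison I have in mind would look something like a Wilcoxon signed-rank test on the paired scores:

```python
# A minimal sketch with HYPOTHETICAL ratings: each pair is one f2f-tutored and one
# online-tutored student matched on course, compared with a Wilcoxon signed-rank test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_pairs = 40
f2f_scores = rng.integers(1, 6, size=n_pairs)     # matched f2f partner's rating
online_scores = rng.integers(1, 6, size=n_pairs)  # matched online partner's rating

# Wilcoxon signed-rank test on the paired differences; zero differences are common
# with 1-5 scales, so the default policy of dropping them is made explicit here.
stat, p_value = stats.wilcoxon(f2f_scores, online_scores, zero_method="wilcox")

print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_value:.3f}")
```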