I recently completed a wonderful two-part peer teaching observation with Dr Sara Lawrence. Unlike the teaching observations I have done in the past, this one was for an online class, something I had never done before. We had a standard checklist for peer review, but its questions did little to guide us, so we made our way via a somewhat organic discussion, facilitated by the course materials stored in the completed Blackboard shells for two separate courses. Here's our process, boiled down to five parts that emerged organically over two ninety-minute peer reviews, the first held in November and the second after classes ended in December 2014.
- We began with the person whose teaching was being reviewed expressing their values and goals, before we set out to look at materials.
- Then the person whose teaching was being reviewed discussed how the design of the course reflected those goals, or didn’t, and what struggles (design tensions) they had already discovered.
- Then we moved through the course materials and some of the discussions that had taken place in the course. Looking at student work branched into a lot of discussion and ideation. I think we both found it difficult to refrain from explaining details while the other looked through the online interactions.
- We then discussed nuances of interpretation that students might gather, and divergent strategies we might be taking in our different approaches. These talks meandered quite a bit across the relationships among teaching content, learner context, institutional resources, materials, and past experiences with the design.
- We concluded with itemized lists of revisions that might improve our instructional designs; we each added some plausible new instructional tactics that might specifically serve the unique groups of students in these courses.
As this was my first peer observation of an online class, I didn't know how our process stacked up against others'. I have always taught online while also teaching other courses face to face, but had never had an online class formally reviewed. The institution where I teach is going through a process of implementing review standards for online classes. I think standards are a good thing, if they promote learning in the end. So, after these reviews were completed, I read some of the literature surrounding the commercial materials being implemented by Quality Matters, the organization contracted to ensure a process of quality improvement for online course design. I found these materials on their website and focused on one document (Introduction to the Quality Matters Program) in particular, because I thought it might give me decent insight into how our ad hoc process compared to the initiative. That document provided an overview of the process and the focus areas, and I supplemented it with a white paper literature review published by Quality Matters.
What struck me was that our organic process and the areas of focus as stated in the Quality Matters literature seemed to outline identical discussions. The following table maps out the organic discussion as organized by the QM rubric.
| Quality Matters Rubric Item | Organic discussion among two peers |
| --- | --- |
| 1. The Course Overview and Introduction | We both focused on getting to those materials, but I remember a fair bit of verbal explaining. This led to a rather elaborate discussion about the role of the syllabus in online education, just how long it should and could be, and what value sub-documents might have as linked components of an online syllabus. |
| 2. Learning Objectives and Competencies | Both Sara and I related learning objectives when explaining materials we had added to our courses; competencies were part of that discussion. What I found curious about this discussion is that each of us had more holistic goals that go beyond performance objectives. In one case, the competencies were state-mandated, which introduced a design tension: at the graduate level, the mandated competency is vaguely stated as required content of the syllabus. Is the instructor free to elaborate on that? Does that break the rules? |
| 3. Assessment and Measurement | Both teachers elaborated on the rationale behind the point systems used, the rationale behind the tasks, and the values expressed by the design of the assessment systems as a whole. What struck me as curious about this discussion was the relationship between context and grading, which revealed a design tension in one case. If a course is only offered once every two years, then with each instantiation the materials must change, requiring changes in assessment practices and making the design an ongoing process rather than a fixed entity that can be decided prior to the launch of the course. How would this context stack up against a rubric designed for courses with fixed and permanent content? The design tension thus lies in the assessment of the assessment, rather than in the assessment of students' performances in the course. |
| 4. Instructional Materials | I gathered from these segments of the discussion that discussions of materials are pervasive within any discussion of online teaching. Our discussion led to a recognition that all of the course is effectively course material. For example, in the discussion of navigation (an access issue, I assume), where content items are stored and how they are linked up (inside or outside of the LMS) is in fact teaching material in its own right. The design of how materials are delivered sends a message. A video linked outside the shell carries a message of "don't trust the system." Thus the design itself is a component of the course materials, a nuance I had not been cognizant of when I designed the course. |
| 5. Learner Interaction and Engagement | Both teachers iterated on the importance of interaction and the struggles with supporting it online. This led to discussions of timing, e.g. when tasks are due during the week and throughout the semester. Views on interaction also led to a discussion of teaching values. One instructor placed value on catering to learners' convenience by making everything phone-enabled. The other expressed a value of conveying the gravity of the task by pushing learners to experience the course on a full-sized screen, where they can interact more fully, with less distraction and more depth. The contrasting values in this regard were among my most valuable takeaways from the experience. Again, the configuration of learner interaction, with both other learners and the instructor, was at once an expression of teaching values and a materials issue. |
| 6. Course Technology | Discussions of technological choices were organic and pervasive through both talks. A number of persistent obstacles emerged. For example, Blackboard does not transfer links to purchased content (Atomic Learning modules): the icons for the modules do transfer over, but with outdated links, resulting in dead links for students despite repeated proofreads of the course. Thus each commercial content link, and there could be several in a design, must be revisited. In proofing a course before its launch, catching all of them is difficult because the check is not automated as it might be in a web authoring tool. Another point of concern was that Blackboard does not support a traditional wiki environment, in the sense that learners normally define the term. Supporting collaborative writing is then challenging despite the assumption that this technology supports that type of interaction. |
| 7. Learner Support | I found it curious that our discussion of learner support actually hinged on a definition of our learners, who they are, and the broader goals behind what is meant to be taught in the course. Again, the topic itself cannot be disentangled from the discussion of materials. For example, one instructor felt that a course requirement asking learners to search out answers to technical topics prior to submitting questions was a move toward empowerment, while the other focused on curating materials because of the large amount of misinformation on the web. Both perspectives present a logical and reasoned position from a teaching standpoint. This discussion revealed an important aspect of understanding how a whole course fits together, something we would hope a reviewer would know prior to evaluating an online course. It struck me that the "correct" way to empower learners in an environment that places so much emphasis on self-access learning hinges on each teacher's perspective and knowledge of their specific population of learners. |
| 8. Accessibility | This term in the QM literature would need to be interrogated to be fully understood, because accessibility can range from providing multimodal access to learning materials in the traditional sense (captioned videos, for example) to values related to the design of technology-enhanced learning more broadly. For example, I had not previously noticed the cluttered nature of my course's landing page, nor had I discovered that some of the tools on it could be removed or hidden. This is actually an access issue. With so much on a single screen, is the instructor aware of how much of a burden they are putting on learners' ability to find what they are looking for? Also, I had not included redundant links in convenient locations within the course. In retrospect, these are as much accessibility issues as providing captions on videos. We discovered this through the course of our talk. The same goes for providing free alternatives to software that I had assumed learners would have, an assumption I now question. |
The literature on observation in teacher education makes some very clear points about how teachers are best supported. Some key considerations are that teachers are best supported by non-judgmental observations (Fanselow, Bullock, Van Es); that observation of larger constructs such as course design and lesson plans reveals very different aspects of teaching expertise and perspective than close inspection of specific teacher moves (Preston, Rosean); and that "effective" practice cannot be defined without the input of the teacher (Owston). I wonder if some of these ideas do not get lost when we try to standardize peer review of online teaching. The Quality Matters materials do seem quite tied to the current research in online learning, which is a good thing, so it strikes me as a beneficial aid to a novice teacher. But I have to wonder whether the rubric wasn't created from discussions like the one I had with Sara. The mapping of the discussion onto the rubric strikes me as too precise to be coincidental. Perhaps the rubric only tells you what *should* be in a professional discussion; things competent professionals would talk about anyway.
Perhaps the QM initiative is an attempt to combat the issue of teacher obsolescence in the face of new technology. What our conversation suggests to me is that teaching online is less about being online than it is about teaching. The QM website goes to great lengths to make clear that it has a rigorous approach to development, informed by the most current research, and it includes a literature review of some of the trends in online learning research. What the approach may be missing is a key recognition that teaching and learning are in fact two different, albeit interrelated, things. Research into online learning can only take us so far into learning how to teach online. Once someone grasps the concepts behind the available tools, the discussion goes much deeper into values, perspectives, and deeply held notions about learning a specific set of content items. The relationship between strategy and content is not one to one, but rather stochastic and deeply tied to the values the teacher holds (Reigeluth). We have a comparatively meager body of research on online teaching strategies compared to what we have on online learning. I came away from these talks with the notion that peer-review observation of an online course is less about technology than about what it means to teach, less about the machine and more about the people we are trying to connect with and nurture.
This experience left me somewhat skeptical of decontextualized observations from someone who is not in touch with our learners. There comes a point where the context of the learner reigns supreme over any instructional strategy a teacher is assuming, and those unique contexts surface through discussion with a colleague about how the materials and design came to be as they are. I welcome any peer review with a colleague from my institution, but I wonder how much common ground I might find with someone from somewhere else. Perhaps this is related to why we grant tenure based on teaching at the institution granting tenure and not elsewhere. The differences in foci among teachers at different institutions may parallel differences among teachers within one institution, but they may not. What one teacher is trying to accomplish may or may not be what the rubric supposes the teacher is trying to accomplish, and may or may not be what these particular students need given the circumstances of the program as a whole. I remember more than once coming upon the phrase, "no, that's already in another course." This may be why MJ Bishop (University System of Maryland) says we don't do courses as much as we do degrees; the courses and the teachers have to fit together. The conversation about what should be taught is as important as the conversation about how it should be taught, and I am not sure the two can be extricated from each other in a meaningful way, as we found in the multiple places in our talk where the discussion of materials could not be torn from the other topics our meandering conversation unearthed. I would feel more comfortable discussing curriculum choices with a teacher whose approach I know than with one whose approach is still a black box. Professional familiarity has an important value in the context of a school, a value the QM perspective seems to overlook.
The value of professional knowledge of a colleague hints at something more profound, something related to improving teaching on a personal level as well as on a program or school level: the importance of trust and camaraderie. The importance of trust is gaining attention in many fields of late, including medicine. All of the components of teacher observation that I found in my in-person observations were present in this discussion of our teaching, and I don't know how comfortable I would feel having intimate discussions of my teaching rationale with someone who may operate on a very different belief system about what teaching is, is not, and should be. Nor do I see an incentive to want a third party to see my course through my eyes. Sharing that with a colleague I foresee working with for a long time does hold value. Working with people you know and trust is neither intimidating nor trivial; it is pleasurable and feels like time well invested in a colleague and in one's own teaching. It strikes me that this will pay off for students as well; teachers who know each other and how each other teaches can make better decisions about what policies to implement. Also, accomplishments can be shared, and setbacks can be supported. A peer review with a trusted colleague makes for a better working environment.
I came out of this experience with the conclusion that a peer observation of an online class is not wholly different from a peer observation of classroom teaching: the most rewarding parts are the discussion that happens after looking at teaching choices, discovering why a trusted colleague teaches the way they do, sharing why I do what I do, and reflecting on how this might be done better for the both of us, and for our learners.
One thought on “Deconstructing the Quality Matters Rubric”
Enjoyed your post. I found the organic nature of our discussion very fulfilling. Yesterday, I participated in a Quality Matters workshop reviewing standards for peer reviews. I too found many alignments between our process and the QM process.
In the workshop, much time was spent on how to speak to and provide feedback to a colleague in a "sensitive" tone, among other concepts. Perhaps this is needed if you don't know the peer participating in the review process. The one thing I really enjoyed about our process, even more than the collegiality, was the sense of trust. I trusted that your intentions were in my best interest and that you knew your subject area. To me, this added exponentially to the organic nature of our process.
Without trust, the QM framework is probably very helpful to instill a level of objectivity (artificial trust) in the process. However, I prefer the organic process with a colleague of shared respect and trust.