On Q: an appropriate methodology for researching autonomy?
Lucy Cooker, University of Nottingham, UK (email@example.com), interviewed by Mike Nix, Chuo University, Japan
Published in Learner Autonomy in Language Learning (https://ailarenla.org/lall),
August 2012. First published in two parts, 2010–11, in Learning Learning 17/2: 24-30 and 18/1: 31-38.
Mike: Lucy, I know that your research is about the assessment of language learner autonomy and that you’ve surveyed teachers online about their approach to this. I believe you’re now using something called Q methodology to analyse learners’ views of this issue. Like many other readers, I imagine, I’m not very familiar with Q, or with why it might be useful for researching autonomy. But before we get into that, could you tell us a bit more about the purpose of your research?
Lucy: I hope to develop a learner-generated tool for the assessment of autonomy which can be used by learners themselves and others. I started by creating an online survey to investigate educators’ practices regarding the assessment of language learner autonomy. I wanted to know if language learner autonomy is assessed and, if so, how it is assessed and by whom. The results of this survey indicated that it is indeed assessed, but that there is no systematic approach.
One finding which surprised me was that self-assessment was used relatively infrequently. This seemed paradoxical: learner autonomy is assessed but where is the autonomy in that process? For me self-assessment is an integral aspect of learner autonomy. When I was teaching full time, I frequently encouraged my learners to engage in self-assessment practice and, in fact, for one course I used to ask my students to write their own test. Certainly now, having explored self-assessment so thoroughly, I would always attempt to incorporate self-assessment into my classes as I believe that good self-assessment empowers both the students and the teacher.
Mike: I agree there’s something very contradictory about the use of teacher assessment when the development of learners’ autonomy is a pedagogical goal, but it doesn’t really surprise me that learners’ self-assessment of their own autonomy is quite rare. I know from my own experience that giving up control over assessment as a teacher can be quite an emotional and ethical wrench. And, during assessment, teachers often feel their own autonomy limited by the normalising gaze of their institutions and by expectations that they exercise their authority as “subject experts”. So developing frameworks and processes for thoughtful and informed learner self-assessment – and being able to demonstrate to our institutions that these have as much “reliability” as other forms of assessment – seems very useful work indeed. But why did you decide to use Q methodology for this?
Lucy: Q is a research methodology which allows for the systematic investigation of subjectivity. In other words it is designed to research viewpoints, perceptions and understandings, using techniques which are both quantitative and qualitative. By taking a more systematic approach to the investigation of this area of subjectivity, I hope we can go beyond the “reflection and reasoning” (Benson, 2001, p. 182) which has characterised research into learner autonomy to date. My research can be seen as a response to the calls from Benson (2001, 2007) and Ushioda (2008) for a more systematic approach to the analysis of data in learner autonomy research.
Mike: That’s an interesting point. As you say, a lot of the research on learner autonomy – certainly much of the research (including my own) associated with the Learner Development SIG in Japan – has used qualitative research techniques and case studies. Indeed, writing in the Learner Development SIG anthology, Autonomy You Ask, in 2003, Benson himself acknowledged “a tension between two apparently contradictory feelings: the feeling that we need more data-based research and the feeling that, at the end of the day, data-based research is not going to be enough to tell us quite what we want to know about autonomy”. So it’s significant that you have decided to use Q to go for a more “systematic investigation of subjectivity”, as you put it. Could you explain a little more how the Q research process combines qualitative and quantitative approaches?
Lucy: In total, there are eight stages in Q. These are shown in Table 1:
First, the researcher decides on the topic for investigation and then collects as many viewpoints as possible relevant to that area. This is the concourse, the collection of discourse around the particular area under investigation. In my study, I collected the viewpoints of language learners and teachers about the non-linguistic outcomes of learning a language in an autonomous learning environment. Next, the researcher needs to create a Q set by selecting between 40 and 60 of these viewpoints, each phrased as a single propositional statement and printed on a card. When the researcher has selected the participants for the study, each participant does a Q sort: they sort the cards into a particular pattern expressing their personal response to the viewpoints on the cards. For example, they might be asked to sort cards along a cline from “most like me” to “least like me”.
The researcher then interviews the participant about their card placement. Then the researcher statistically analyses the card patterns using a form of factor analysis to generate different factors or “pictures” about the viewpoints on this issue. In my research, six factors were generated. These six factors represent six sets of beliefs, each set held by one or more participant, about the non-linguistic outcomes of autonomous language learning.
Lastly, each factor is interpreted using the collection of statements used in the study and the interview data analysis. To give an example from my research, one factor I have tentatively named ‘Oozing confidence’. Learners who share this view hold an optimistic outlook in general and feel they will have a successful life; they hold positive beliefs regarding future language learning and use and believe they are more likely to use their language well in the future; and they believe they will be able to continue their language learning when they leave education. One of the more negative characteristics held by learners who share this view is that they are not able to organise their learning time very effectively.
Mike: The development of the concourse and the Q set are clearly important stages in the Q research process, and ones that distinguish it from other kinds of research. Could you explain a bit more about what these stages involve and what they contribute to a “systematic investigation of subjectivity”?
Lucy: To develop my concourse I scoured the literature to find statements relating to the non-linguistic outcomes of autonomous learning. In addition to trawling through the literature, I used written statements from second language learners about what autonomy means to them, written statements from English teachers about what they perceived to be the non-linguistic outcomes of learner autonomy, oral statements from students in pilot interviews, and comments that were generated from a posting I made on the AUTO-L email discussion list. These statements were not always used in their ‘raw’ form, but were edited to be understandable to typical non-native English speakers at university in Hong Kong and Japan. I used my own judgement to assess the linguistic difficulty of the statements. In keeping with Q theory, each statement contained only one proposition (Watts & Stenner, 2005). As in questionnaire writing, double-barrelled or overlong statements containing two or more ideas should be avoided, as they may create confusion during the analysis of the Q-sorts. The final number of statements in my concourse was 124. I sent these to three experts in the field of learner autonomy for their face validity to be assessed. This stage is not essential to a successful Q-study, but it provided me with some reassurance that my concourse was as well-defined as possible and that the statements did indeed reflect the full range of discourse surrounding the non-linguistic outcomes of language learner autonomy. Statements judged by the experts not to be relevant to the non-linguistic outcomes of learner autonomy, or whose meaning was duplicated or ambiguous, were discarded, leaving a total of 76 statements.
Mike: Okay, so what happens next?
Lucy: The third stage in Q is to develop the ‘Q-set’. This is the collection of statements selected from the concourse to be used by research participants. In Q, the theoretically optimal number of statements in the Q-set is between 40 and 80. I categorised my 76 statements according both to the theoretical model of learner autonomy I used for this study and to a model of ‘generic learning outcomes’ developed by the Museums, Libraries and Archives Council (MLA) to assess learning outcomes in those environments. The theoretical model of learner autonomy I developed operationalises learner autonomy into 7 main categories which are sub-divided into a total of 34 constitutive elements. The MLA model of generic learning outcomes comprises 5 categories and was the only example of a non-content-specific learning outcomes model I could find in the literature. Once I had categorised the statements according to these two models, I then chose statements in proportion to the number in each category. My final Q-set comprised 52 statements. Two examples of the statements in my Q-set and the way in which they were categorised are shown in Table 2:
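To make that proportional selection step concrete, the sketch below scales a set of hypothetical category counts down to a 52-statement Q-set using largest-remainder rounding. The category names, their sizes, and the rounding rule are my own illustrative assumptions, not the actual figures or procedure from this study.

```python
import math

def proportional_sample_sizes(category_sizes, target_total):
    """Scale each category's count so the selected totals sum to target_total,
    using largest-remainder rounding."""
    total = sum(category_sizes.values())
    quotas = {c: n * target_total / total for c, n in category_sizes.items()}
    sizes = {c: math.floor(q) for c, q in quotas.items()}
    remainder = target_total - sum(sizes.values())
    # hand out leftover slots to the categories with the largest fractional remainders
    for c in sorted(quotas, key=lambda c: quotas[c] - sizes[c], reverse=True)[:remainder]:
        sizes[c] += 1
    return sizes

# Hypothetical spread of the 76 concourse statements over invented categories
categories = {"metacognitive": 20, "affective": 16, "social": 14,
              "behavioural": 14, "knowledge": 12}
chosen = proportional_sample_sizes(categories, 52)
```

Any rounding scheme that preserves the proportions would serve; largest-remainder is used here simply because it guarantees the selected counts sum exactly to the target.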
Mike: Can I ask about the process of creating the concourse a bit more? You said you scoured the literature for statements about the non-linguistic outcomes of language learning, as well as taking statements from teachers and learners themselves. So is it important to try to gather as much as possible of the discourse on your issue out there, to find divergent views, and also to draw from different kinds of sources and different communities (e.g. teachers and learners)? Others might be interested to do research using a concourse but perhaps on a smaller scale or adapted to local circumstances – is this possible?
Lucy: The diversity of the concourse is important. As I mentioned above, the analysis of the final data is a form of factor analysis, and so the researcher is required to meet certain requirements of quantitative research. Typically, in quantitative research, generalisability is a concern, and therefore much time is devoted to generating a representative sample from a larger population. In Q-methodology, the ‘population’ is the concourse, and the Q-set, also sometimes called the Q-sample, is the representative sample of that population. The Q researcher, therefore, is required to find statements about as many aspects of the issue as possible and different perspectives on those. Thus, depending on the research question, the concourse may be very large or very small. Clearly, if one is investigating Class 3C’s views on the best strategies to use for learning vocabulary, the concourse will be smaller than if one is investigating Japanese students’ beliefs about the usefulness of the TOEIC test in preparing them for the workplace, or the views of black American voters on the success of President Obama’s first year in office. In some studies, a concourse can be over 1000 items but, more typically, other studies define concourses of between 400 and 700 items (e.g. Stenner & Stainton-Rogers, 1998; Bryant, Green & Hewison, 2006). Developing the concourse is one of the most time-consuming aspects of doing a Q study and the work involved in collecting the complete discourse surrounding the focus of the study should not be underestimated. One of the interesting aspects about Q is that it is not just textual statements that can be sorted – studies have been done with pictures, objects, audio-visual data and even smells.
Mike: For someone like myself, whose research has usually been focused on small groups of my own students, it is encouraging that a Q concourse can be developed with different sizes and types of “discourse communities”, including a single class of students, and can also be multi-modal, with visual, material, or even smelly elements! You mentioned that generalisability is a concern in Q and time needs to be devoted to ensuring this. Generalisability has not been an issue, I think, for much of the qualitative research done on learner autonomy, including that done in Japan and in LD SIG projects. We’ve been more concerned with investigating the dynamics of autonomy in the specific institutional and local conditions we work in. And we have hoped that insights from this research will “resonate” in some way with people in other situations. But we haven’t really considered the key conditions that affect the development of autonomy in our own contexts or specified those in our writing so that readers in other places can relate our research to the conditions that they face. So perhaps issues of generalisability are useful to consider after all. What then do you see, Lucy, as some of the benefits for research on autonomy of the emphasis on generalisability in Q methodology?
Lucy: Perhaps I should have used the word ‘generalisability’ above with caution. I was using it in the sense of statistical generalisability to describe the purpose behind sampling in quantitative research within a positivist paradigm and to explain the theory behind the sampling of the concourse in Q methodology. Arguably, this is one of the ways in which Q methodology has a quantitative element but perhaps this particular quantitative element is more concerned with process than epistemology. I see my work rooted very much in the qualitative tradition and make no statistical generalisability claims – although I can see a case for claiming generalisability to theory rather than populations (Bryman, 2004), or as Bryman puts it, “it is the quality of the theoretical inferences that are made out of qualitative data that is crucial to the assessment of generalization” (p. 285). There is also possibly a case to be made for ‘moderatum generalisability’ (Williams, 2000). Williams defines moderatum generalisability as when “aspects of [the research focus] can be seen to be instances of a broader recognisable set of features” (p. 215). Williams argues that in interpretivist research generalisability is “inevitable, desirable and possible” (p. 209) and that “everyday moderatum generalisations are what it is that the researcher wants to understand, and of course if she can understand them then she will know something of the cultural consistency within which they reside and is then able to make her own generalisations about that cultural consistency” (p. 220). To bring this back to my work, I suggest that there is a certain “cultural consistency” in environments where language learners are learning autonomously. I hope that through examining my Q-set and the description of my research, colleagues in the learner autonomy field will find something familiar, and therefore of interest and use to them in their local contexts.
Mike: So far then, Lucy, you’ve collected a concourse of different viewpoints from students and teachers about the non-linguistic outcomes of language learning in an autonomous environment. These were then turned into a Q-set of statements to be arranged in a Q-sort by the participants in the research. Can you tell us now about what happens in this Q-sort?
Lucy: In brief, the participants sort the set of statements into a pre-set pattern along a scale labelled, in this case, from most like me to least like me. Let me explain how that is done in a little more detail.
Whilst developing the concourse and the resulting Q-set, I was concurrently thinking about the “condition of instruction”. This is the statement given to participants to help them sort the Q-set. The condition of instruction I used was: “Think about the ways you have developed since studying [your language] outside the classroom without the direct support of a teacher (e.g. in a self-access centre or using the Internet). Sort the statements according to most like me ↔ least like me.”
At the start of the Q-sort process, I asked each participant first to divide the statements in the Q-set into 3 piles: a most like me pile, a sort of like me pile, and a least like me pile. Then the participants arranged the cards using a sorting grid shaped like an inverted bell curve, shown in Figure 1. First they were asked to take the most like me pile, to choose the two cards which represented their views most strongly, and to place them in the +5 section of the grid. Following this, three statements were chosen for the +4 grid section, and so on. When all the most like me cards had been sorted, the participants were asked to sort the least like me pile at the negative side of the grid. Finally, the participants sorted the sort of like me cards into the centre sections of the grid. The number of cards in each pile was dependent on each participant’s views. Some participants placed most of their cards in the most like me or least like me piles. Others had a more equal distribution of statements across all three categories.
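The forced distribution described here can be written down concretely. In the sketch below, the ±5 columns hold two cards and the ±4 columns three, as Lucy describes, and the ±3 columns hold four (implied by the nine slots from -5 to -3 she mentions later); the heights of the middle columns are my own assumptions, chosen so that the grid holds all 52 statements in a symmetric quasi-normal shape.

```python
# Column heights for an 11-column sorting grid (-5 "least like me" to +5
# "most like me") over 52 statements. Middle columns are illustrative guesses.
GRID = {
    -5: 2, -4: 3, -3: 4, -2: 6, -1: 7,
     0: 8,
     1: 7,  2: 6,  3: 4,  4: 3,  5: 2,
}

def check_grid(grid, n_statements):
    """Confirm the forced distribution is symmetric and holds every card."""
    assert sum(grid.values()) == n_statements
    for col, height in grid.items():
        assert grid[-col] == height  # mirror-symmetric around the 0 column
    return True

def sort_scores(grid):
    """Expand the grid into the list of scores a completed sort assigns."""
    return [col for col, height in sorted(grid.items()) for _ in range(height)]

check_grid(GRID, 52)
scores = sort_scores(GRID)
```

Because every participant fills the same grid, every completed sort assigns the same multiset of scores, which is what later makes the sorts directly comparable in the factor analysis.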
When the participants were happy with their distributions, I made a note of the place of each card and asked the participant to explain to me their reasoning behind the card placements. This resulted in an in-depth interview. As a researcher I found it remarkable how many participants made unprompted comments such as “It was very interesting” or “I really enjoyed this activity”. In my experience, it’s unusual for participants in, say, survey research, to comment so favourably on the research methodology.
Mike: So the participants obviously felt they were getting something out of the research themselves. What do you think using the Q-set contributed to the ways that they understood their own learning in this research? I’m thinking that most qualitative research on autonomy focuses on learners’ reflections on and evaluations of their own learning under specific conditions, but in your research learners are reflecting on and evaluating decontextualised statements about the outcomes of learning. How does this help learners to think more insightfully about their own learning? Does, for example, the range of statements in the concourse suggest to them new, different or more “meta” ways of making sense of their own learning and help them expand or transform their understandings of it? Does the (rather game-like) task of distributing the statements help them engage with their own learning or does it lead to rather predictable, formulaic patterns of interpretation? I guess what I’m really interested in here is whether and how Q can help learners develop more theoretically perceptive understandings of their own specific learning practices. Can it help them “ascend from the abstract to the concrete”, as Marx put it, in their understanding of their learning – and in ours, too, as researchers? That seems to me a key aim of useful research about learner autonomy.
Lucy: What impact doing the Q-sort has had on learners, in terms of helping them think about their own learning, is an interesting question for me to consider. These were not my own students and my interaction with them was limited to the 60-90 minutes it took on average to complete the Q-sort and interview, so I’m not able to comment on the effect of their participation in terms of engagement with their own learning after their involvement in this study. I only have my perceptions of their reactions to doing the Q-sorts and the interview data to consider when making a judgement about this. Certainly, from their comments, it would seem as if participating in this study did help them think about their own learning. As they explained the positioning of the statements to me, they used examples from their own learning to illustrate the points they were making. This suggested to me that although the statements were “decontextualised”, as you pointed out, the participants were laying their own contextual meanings onto the statements. Depending on the length of the interview and the number of statements they talked about, some participants volunteered numerous illustrative points from their own learning experiences. I wouldn’t feel comfortable saying it made them more insightful because I didn’t have a baseline perspective from which to make this comparison. However, certainly anecdotally, it seems to be the case that the Q-sort and interview gave learners the chance to think about, and reflect on, their own learning. As I mentioned above, several participants commented on how enjoyable the activity was, and several commented explicitly on how interesting it was and how much they had learned from the experience. One participant said:
This is very meaningful…. I don’t know my pattern of learning languages and this interview helped me to understand myself….I don’t know why I do these thing but now I know. Because it is relaxing, this is interesting.
And another commented:
It made me think a lot about how I work. I hadn’t really thought about it before. It kind of made me think a lot more. I knew WHY I did it, but I never really thought about it.
One of the statements that learners were required to sort was “Reflecting on my learning makes me feel bored”. Out of 30 participants, 16 sorted this statement in one of the 9 slots ranked from -5 to -3, thus indicating it was least like them. This seems a particularly noticeable proportion, especially as no participants sorted the same statement in one of the nine slots ranked from +5 to +3, which would have indicated it was most like them. It is possible, therefore, that my study attracted participants who were particularly keen on self-reflection and thus found the Q-sort and interview a positive learning process.
Mike: It’s great that participants felt the Q-sort and interview had been an opportunity to reflect on their learning, and that they’d been able to learn from the whole process too. You were hesitant about saying it had made learners more insightful. But the fact that one participant said they had developed a new understanding of their “pattern of learning languages”, and the other felt they were “really” thinking about their way of working for the first time, suggests that the process can help students make their thinking about learning explicit, and perhaps even contribute to them developing a theory of their practice of learning. It’s interesting that the second student hadn’t previously thought much about their learning but that, on the whole, you think the research may have attracted participants who were keen on self-reflection. This made me wonder if the most significant breakthroughs in understanding in Q are likely to be so-called “aha!” moments that come almost out of the blue (or out of the Q, if you like!) for participants who haven’t done much thinking about their learning before. Or is the awareness developed in Q likely to be better if participants are already used to reflecting explicitly on their learning and to articulating their thinking? A bit of both, I imagine.
I was also interested in the effect of categorising the statements you collected in the concourse against models of learner outcomes (your own and the Museums, Libraries and Archives Council model) before you decided on the Q-set. Does this mean that the participants’ own thinking about their learning, when they do the Q-sort, is influenced – either constrained or enabled – by these models?
Lucy: In your questions above, you asked whether the selection of statements according to a particular framework or model affects the participants’ own thinking about their learning and whether the Q-sort is thus constrained or enabled by the models. As Kramer, de Hegedus and Gravina (2003) point out, “although the Q researcher may choose to identify a particular statement with a specific […] category, this a-priori ‘labeling’ makes little difference to the subsequent interpretation of the data.” The factor analysis carried out on the Q-sort data is designed to reveal underlying connections, similarities and differences between participants’ views about the statements in the Q-set, so in this sense the data analysis and interpretation are much more nuanced than the statements themselves.
Mike: One other question about the Q-set and how that helps participants think about their learning: Do the participants get to see who each statement came from so they themselves get a sense of those different perspectives?
Lucy: In my study, the participants did not see where each statement came from. Although I did give them the opportunity to contribute their own statements at the end of the Q-sort, none of them did. It is interesting to ponder whether it would have been useful for them to know the provenance of the statements. Given that many of the statements were derived from the literature, I think this may have been irrelevant, overwhelming or simply uninteresting for them and would have been an extra cognitive load in a task which was already quite cognitively demanding. Nevertheless, it is interesting to consider whether, had the source of the statements been made available to participants, they would have sorted the cards in a different way.
Mike: You said earlier that the final stage of the Q process is a statistical factor analysis. For those interested in how the quantitative side of Q works, could you just explain this a bit more? (Those who, like me, come over all funny at the mention of statistical analysis can skip to the next question if they like!)
Lucy: The data generated in a Q-sort is analysed using a by-person factor analysis technique. Traditional factor analyses are by-item and look for correlations between observed variables (e.g. items in a questionnaire) across a sample of subjects to generate one or more unobserved variables. These are the “factors” which explain the existence of the observed variables. In by-person factor analysis, the researcher is looking for correlations amongst participants across a sample of variables (the Q-set). The viewpoints of individuals, made evident through the Q-sort, are correlated to generate underlying factors or viewpoints. The benefit of this analysis is in its data reduction technique: similar patterns (viewpoints) are chunked together, so rather than identifying 30 different viewpoints (each Q-sort is one viewpoint) I am able to draw on the similarities between the viewpoints in a systematic way. A further benefit is that although Q is a data reduction technique, the resulting analysis is nevertheless much richer and more nuanced than would be possible with a questionnaire or survey study.
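For readers who find code clearer than prose, here is a minimal sketch of the by-person idea using only NumPy. It is not what PQMethod does internally (which offers centroid extraction and varimax rotation, among other options); it simply extracts unrotated principal components from the person-by-person correlation matrix, and the toy data and function names are my own illustrative assumptions. The essential point it demonstrates is that people, not items, are the variables being factored.

```python
import numpy as np

def q_factor_loadings(sorts, n_factors=2):
    """sorts: array of shape (n_participants, n_statements), one completed
    Q-sort per row. Returns each participant's loadings on the first
    n_factors unrotated principal-component factors."""
    r = np.corrcoef(sorts)              # correlate persons, not items
    eigvals, eigvecs = np.linalg.eigh(r)
    order = np.argsort(eigvals)[::-1]   # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # loading = eigenvector component scaled by sqrt(eigenvalue)
    return eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

# Toy data: participants 0-2 share one viewpoint, participants 3-4 another.
rng = np.random.default_rng(0)
base_a = rng.normal(size=30)            # a shared "viewpoint" over 30 statements
base_b = rng.normal(size=30)
sorts = np.vstack([base_a + 0.3 * rng.normal(size=30) for _ in range(3)] +
                  [base_b + 0.3 * rng.normal(size=30) for _ in range(2)])
loadings = q_factor_loadings(sorts)
```

In a sketch like this, participants who sorted the cards in similar patterns load strongly on the same factor, which is exactly the "chunking together of similar viewpoints" described above.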
When analysing the Q-sort data, conventional software for statistical analysis can be used, but bespoke software packages also exist. In my research, I used PQMethod. This DOS-based programme is available freely online from http://www.lrz-muenchen.de/~schmolck/qmethod/. The data from each Q-sort is entered into PQMethod and the programme then generates a researcher-specified number of factors. These factors are interpreted by the researcher using the statements in the Q-set and the in-depth interview data.
Mike: The full Q experience – from making the concourse to doing the factor analysis at the end – sounds very time consuming! You said in the first part of this dialogue that a concourse could be made from the views of one class or group of learners. Do you have any other suggestions about how teacher-researchers could adapt Q to investigate issues with their own learners and without a huge investment of time?
Keeping in mind issues of local specificity and generalisability, and thinking about the way that “can-do” assessments of language competence, based on the Common European Framework of Reference for Languages (CEFR), seem to be spreading like wildfire, do you see this as an assessment framework that can be used or adapted for different contexts, with different students, and for self-study, self-access learning and classroom-based learning? Or is there a need for teachers to develop other frameworks for learner assessment based on local conditions, and the concerns of their learners, and perhaps even based on some Q research of their own?
Lucy: Whilst Q methodology does require an investment of time this can be tempered somewhat by involving learners themselves in the research stages – the fact that Q methodology allows the participant to retain a sense of control over the process and the content of a Q study is one of the reasons why I think it’s particularly suitable for learner autonomy research. For example, if the focus of the study is well delineated, learners themselves can generate the concourse through written responses to questions or by recording their own discussions around the area of subjectivity. For example, in a study on Japanese learners’ motivation for learning English, the study might begin with a free-writing exercise in which learners write a paragraph or two on a question such as “The reasons why I am learning English”. Then, on a practical level, learners could be involved in categorising statements for sampling. In the example used above, the researcher might collate all the unique statements which were generated out of the free writing exercise and ask learners to divide them up into themes. The themes could be ones the learners decide on themselves, or they could be given an existing model to follow. Once the Q-set is created and the statements are printed on card, learners can even cut up their own set of statements. After the learners have carried out the Q-sort they can record their own responses by writing the number of each statement onto a copy of the sorting grid. In lieu of the post-sort interview, learners can even be guided to record orally, or in writing, the reasoning behind their placement of the statements on the grid; or using a technique I first learned about from Andy Barfield, they could interview each other about why they placed the cards in that particular pattern. In many ways, this is a methodology which offers lots of possibilities for learner involvement and control.
Mike: Finally, Lucy, I’d like to return to your focus on researching the non-linguistic outcomes of language learning that you described in part 1 of our dialogue. This raises all sorts of interesting questions for me about what autonomy is. Can it be separated from language learning practices and outcomes? Does it exist in its own right (autonomously, as it were) or is it always situated within specific processes and conditions of learning? Is it our responsibility as teachers to help students develop both their language ability and autonomy, or to develop their language ability more autonomously, or to develop their autonomy through language learning? Are there times when there is a tension between linguistic development and development as an autonomous learner? Can non-linguistic outcomes of language learning include the development of content knowledge and understanding too?
Lucy: They are great questions Mike, and my answer is a rather circular one emanating from my personal experience working in tertiary level language education in Japan. During my time working as the Self-Access Learning Centre (SALC) founder and supervisor at Kanda University of International Studies in Chiba, I was frequently asked by the university administration and guest visitors how we knew that the SALC benefited students. On the one hand I resented having to justify something that was clearly so successful, but on the other hand I was frustrated that, at that time, no tool existed for being able to prove the effect that using the SALC had on learners. We could have looked at various measures of language proficiency such as the KEPT (the university in-house proficiency test with a ground-breaking group oral component) or TOEIC, and compared the scores of those students who used the SALC and those who didn’t, but of course, as has been well documented in the literature, there are difficulties in separating the language learning which takes place autonomously and that which doesn’t. That led me towards the desire to be able to measure the level of autonomy of our SALC users, and this was where I started when thinking about the focus of my PhD. For the record, and in answer to your questions above, I believe that successful language learning has to be considered a lifelong project. As teachers we see our students in class for, usually, a maximum of about 6 hours per week for any one course and over the time span of a semester, or even a year, this is relatively little time in which to “make a difference”. What we can do however, in that time, is ensure that our students have the knowledge and awareness to further their own learning of languages once they have left our classrooms – for the day and at the end of the course. 
I believe that as language teachers, more than any other kind of teachers, we have a responsibility to develop our learners’ autonomy at least in line with their language proficiency.
To close the circle, although I started by wanting to measure autonomy, my PhD has changed considerably, and now I am quite certain that the most appropriate way forward for our field is not to measure autonomy, but to think about the formative assessment of autonomy. In other words, creating a means of assessment which aids the development of autonomy – assessment for autonomy, not the assessment of autonomy. I also believe that it’s important to keep within the tenets of learner autonomy and not to impose assessment practices on learners but to provide a means for them to (formatively) assess their own autonomy if they so desire. Finally, and quite simply, if we are looking at autonomy assessment then we need to know what we are assessing – hence my focus on non-linguistic outcomes of autonomous language learning.
References

Benson, P. (2001). Teaching and Researching Autonomy in Language Learning. Harlow: Pearson Education.
Benson, P. (2003). ‘A Bacardi by the pool.’ In A. Barfield & M. Nix (eds), Autonomy You Ask! Tokyo: Japan Association for Language Teaching.
Benson, P. (2007). ‘Autonomy in language teaching and learning’. Language Teaching 40/1: 21-40.
Bryman, A. (2004). Social Research Methods (2nd ed.). Oxford: Oxford University Press.
Kramer, B., de Hegedus, P. & Gravina, V. (2003). ‘Evaluating a dairy herd improvement project in Uruguay to test and explain Q methodology’. Paper presented at the AIAEE 2003 19th Annual Conference ‘Going Forward in Agricultural and Extension Education: Trends, Policies, and Designs Worldwide’, Raleigh, North Carolina, USA.
Ushioda, E. (2008). ‘Researching autonomy: A bandwagon in need of more instruments?’. Paper presented at the IATEFL LA SIG/SWAN one day event: Autonomy in Language Learning: Beyond the Bandwagon? University of Nottingham, UK.
Watts, S. & Stenner, P. (2005). ‘Doing Q methodology: Theory, method and interpretation.’ Qualitative Research in Psychology 2: 67-91.
Williams, M. (2000). ‘Interpretivism and generalisation’. Sociology 34/2: 209-224.