- Conceptual Design of Methodology
4.1 Methods and Instruments Applied for Main Study
Although a range of methodologies could be used for this research, such as interviews, focus groups, and observations, the volume of learners undertaking open courses makes an online survey the most suitable, as a survey can collate both quantitative and qualitative data at scale for analysis. This became evident in the execution of the initial study, given the length of time it took to contact learners, arrange interview times, carry out interviews, and transcribe notes. A survey offers a number of attractions: one-shot data gathering, wide population representation, the ability to ascertain correlations, accurate data capture, and statistical data processing (Morrison, 1993: 38-40).
The self-completion survey is to be hosted online, as the sample population of learners enrol on and study open courses online, so the required demographic is suitably targeted. In a change to the original strategy for the main study, learners who have completed start-of-course surveys for courses hosted on both FutureLearn and OpenLearn will be targeted. The response data can then be analysed for individual courses and also collated for collective comparison of the open online courses hosted on OpenLearn and FutureLearn. As the platforms’ functionality and features differ, this will aid in developing an understanding of whether the presence or absence of such features impacts engagement and learning design.
The survey includes a combination of nominal data (for comparison with ‘traditional’ MOOC data (Jordan, 2014) to ascertain whether this open course community of informal learners is different) and scaled questions to establish participants’ attitudes towards course engagement and learning design. Capturing large-scale data through an online survey will help to determine factual information: preferences, attitudes, behaviour, experiences and beliefs (Weisberg et al., 1996).
The design of the survey has taken into consideration Hoinville and Jowell’s (1978) three prerequisites of survey design: purpose of inquiry, population specification, and resources available. The survey questions directly address the research questions of engagement, disengagement and learning design. Three populations of learner strategically aligned to the JIFL journey have been identified (addressed in the Participants and Samples Chosen for Main Study section below), and the survey is to be hosted in those three locations online within the research timescales. Concern for participants (Sapsford, 1999: 34-40) has been taken into consideration by ensuring their anonymity, and the fourteen-stage process identified by Cohen et al. (2009: 209) was followed.
The main study will also include the development of interview questions to be conducted on a 1:1 basis or as part of a focus group. For this purpose, and to confirm, clarify, and question any commonalities, trends and anomalies, the final survey question allows participants to submit their personal details for further contact.
The evaluation report data from the first ten MOOCs presented by The OU on FutureLearn will also be reviewed, in conjunction with the demographic data collected by the survey, as historical documentary research to ascertain whether non-MOOC (and therefore prospective JIFL) learners are being successfully targeted.
4.2 Participants Chosen
Due to the range of research being undertaken by various academics at The Open University with regard to FutureLearn MOOC data, careful consideration and discussion was required as to how the participants for this study were to be selected. It was agreed that a random sample of 500 participants from each of the courses currently hosted on FutureLearn would be selected. Where multiple presentations existed, the random sampling would incorporate all cohorts so that the level of repeat contact by other research studies would be kept to a minimum. As data is continually collected in new and repeating presentations, the pool of participants would continue to expand, further reducing the likelihood of repeat contact.
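The sampling step described above can be sketched as follows. This is a minimal illustration, not the study's actual procedure: the learner identifiers and cohort sizes are invented, and the real sample would be drawn from course registration data.

```python
import random

def sample_learners(cohorts, n=500, seed=42):
    """Pool every presentation cohort of a course (deduplicated),
    then draw a single random sample of up to n learners."""
    pool = sorted({learner for cohort in cohorts for learner in cohort})
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(pool, min(n, len(pool)))

# Two illustrative presentation cohorts of the same course
# (hypothetical IDs; note the overlap between presentations).
cohort_1 = [f"learner_{i}" for i in range(0, 800)]
cohort_2 = [f"learner_{i}" for i in range(600, 1500)]

sample = sample_learners([cohort_1, cohort_2])
print(len(sample))  # 500 unique learners drawn from the pooled cohorts
```

Pooling all presentations before sampling is what keeps any one learner's chance of selection (and hence of repeat contact) low as the pool grows.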
Within the main study survey there is a further opportunity for participants to submit their personal contact information for further research. It is from this that participants will be selected for follow-up interviews or focus groups.
4.3 Data Collection
The data from the survey will be collected via Qualtrics and held securely within the account for analysis via SPSS (Statistical Package for the Social Sciences); both have single-account-holder access. The survey allows participants to remain anonymous unless contact details are submitted in the final question. If contact details are submitted and then used for follow-up telephone or Skype interviews, the same process of storage and number allocation is followed as in the initial study.
The current blog ‘Doctor in Waiting’ will be utilised further as a research journal (Burgess, 1984b) documenting the decision-making processes of the content analysis. Grounded theory (Strauss, 1987; Strauss and Corbin, 1990, 1997) will be used with thematic analysis to fragment and isolate data from the qualitative responses within surveys and interviews. If a higher number of interviews than expected is conducted within the main study, the software package QSR NVivo will be utilised to aid the extrapolation of attitudes and themes from the qualitative data. Following the methodology of Stroh (2000), the text will be chunked, labelled, and coded to organise the data for analysis.
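The chunk-label-code step can be sketched mechanically as below. This is only an illustration of the principle: the codebook themes and keywords are invented, the responses are fabricated, and real coding would be iterative and researcher-led (in NVivo or by hand) rather than keyword matching.

```python
from collections import Counter

# Hypothetical codebook: theme -> indicative keywords (not the study's codes).
CODEBOOK = {
    "engagement": ["enjoyed", "motivated", "completed"],
    "disengagement": ["gave up", "too busy", "dropped"],
    "learning_design": ["video", "quiz", "discussion"],
}

def code_response(text):
    """Chunk a free-text answer into sentences, then label each chunk
    with every theme whose keywords it contains."""
    chunks = [c.strip().lower() for c in text.split(".") if c.strip()]
    codes = []
    for chunk in chunks:
        for theme, keywords in CODEBOOK.items():
            if any(k in chunk for k in keywords):
                codes.append(theme)
    return codes

responses = [  # invented survey answers
    "I enjoyed the videos. I was too busy to finish the last week.",
    "The quiz helped, and the discussion kept me motivated.",
]
counts = Counter(code for r in responses for code in code_response(r))
print(counts)
```

The tallied codes are then what gets reviewed for recurring themes across the data sets.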
Though the data to be collected from OpenLearn and FutureLearn OOCs are stored separately, they will be analysed for prevalent themes and patterns both within the individual platform and course data sets and collectively. Different levels of analysis will be applied to ascertain whether commonalities occur with regard to engagement, disengagement, preference for learning design elements, etc., and also to clarify whether demographic data and technology use have an effect. Themes will be drawn together to denote whether there is a stronger narrative within the data, and reviewed for their recurrence to denote emerging patterns. This analysis is undertaken throughout the stages of research (Bryman, 2001) to ensure that the research questions are being considered and answered.
4.4 Variables and Factors Affecting Study
Creating samples from across the two platforms broadens the respondent demographic, improving the representation of informal learners interested in open courses, and may produce similar or different results. From these results, themes across all samples may be determined, or it may emerge that each sample approaches informal learning differently and therefore has different needs for building and sustaining engagement to completion.
It is also possible that the samples may have very different response rates, meaning that themes may be detected in the larger sets but not the smaller ones, or that it may be difficult to compare a small sample with a much larger one. In this situation it is proposed that a random sample of equal size is taken from each of the sets and this data is analysed as well.
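The equal-size subsampling proposed above amounts to drawing from each set a random sample the size of the smallest set. A minimal sketch, with invented respondent counts:

```python
import random

def equalise(samples, seed=1):
    """Draw equal-size random subsamples, sized to the smallest set,
    so unequal response pools can be compared on a like-for-like basis."""
    n = min(len(s) for s in samples)
    rng = random.Random(seed)  # seeded for a reproducible draw
    return [rng.sample(s, n) for s in samples]

# Hypothetical respondent IDs with very different response rates.
futurelearn = list(range(900))
openlearn = list(range(120))

fl_sub, ol_sub = equalise([futurelearn, openlearn])
print(len(fl_sub), len(ol_sub))  # both subsamples contain 120 respondents
```

The full sets would still be analysed in their own right; the equalised draw is an additional comparison, not a replacement.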
As part of the survey there will be an opportunity for participants to enter an email address if they wish to participate in further research. The provision of contact information enables further exploration, through a secondary survey or a telephone interview, of themes or anomalies that become apparent from the findings. At present this research is an unknown entity, as the platforms have not been surveyed with regard to learning engagement before, either individually or collectively for comparison. The surveys will be hosted through Qualtrics, from which the numerical data will be analysed through SPSS and the narrative data analysed using the protocols of content analysis.
Hosting separate (yet identical) surveys on Qualtrics ensures that the samples are kept separate for comparison studies, with each set labelled numerically to give the platform surveyed, as well as the participants within it, anonymity. As noted previously, it is expected that the goals for engaging with the courses will differ extrinsically; it is therefore hypothesised that the level of engagement with the individual elements of the course, and with the course as a whole in view of completion, will differ, with learners on different completion trajectories dependent on external pressures (such as work- and study-related deadlines). The data groups can then be combined for additional collective analysis to determine whether other factors, such as age or the types of technology used to access the courses, affect levels of engagement.
The limitation in using both surveys and interviews is that there is little flexibility in relating the questions directly to the respondent’s personal circumstances, which may limit the answers given. Through the use of natural language (Kvale, 1996), ‘stimulus equivalence’ (Oppenheim, 1992) may be achieved, whereby each respondent understands the questions set before them even if unable to relate them to their personal circumstances.
However, the use of interview questions from the pilot study and then in the main study can result in unanticipated answers that lead to further connections in data relationships and the addressing or creation of hypotheses (Cohen et al., 2007). These responses could potentially be categorised in Tuckman’s (1972) seven modes, of which ‘filled-in response’ is most likely, given the type of questions to be posed to the respondents.
The final limitation of the main study to consider is that no survey is being conducted within a live presentation of a MOOC. Instead, random samples of participants are being selected from contact information entered into the start- and end-of-course surveys of the courses presented. It must be taken into consideration that these learners are already engaged to a degree, having completed optional surveys.
- Ethical Considerations
Before conducting any research, an application outlining the research proposal was made to HREC (Human Research Ethics Committee) at The OU. To ensure full compliance, an enquiry regarding a further submission to SRPP (Student Research Project Panel) was also made; as OU students would not be specifically targeted, additional SRPP ethical approval would not be required. Upon HREC approval, a further ethics application was made to, and granted by, the Open Media Unit to research and release data on The OU’s open educational resources.
In line with the guidelines of BERA and the Association of Internet Researchers, the moral duty to respect privacy, confidentiality and anonymity is adhered to. An introductory page on all surveys displays the ethical research statement detailing the purpose of the research, how the research will be used, how to exit the survey at any time, and contact details for further information; clicking to enter the questionnaire confirms acceptance of the ethical statement (information on how to withdraw is also given).
The list of email addresses for survey use is kept in a password-protected spreadsheet that will form part of the use, storage, and disposal requirements of the other data gathered for the research. Only completed papers, reports and publications will be published, in which participants are anonymised to protect their privacy, addressing Bryman’s (2001) ethical principles and Bassey’s (1999) ethical values whilst adhering to the guidelines set by BERA (1992).
- Implementation
The main study survey has been finalised and replicated for each of the OOCs hosted on FutureLearn and OpenLearn, and is held securely on Qualtrics. Emails containing the individual survey link specific to the relevant course will be sent through Qualtrics. Learners will be given a completion deadline of 28 February 2016, with Qualtrics generating reminders and thank-you emails upon successful completion of the survey.
The study will take into account surveys for 23 FutureLearn-hosted OOCs and 10 OpenLearn-hosted OOCs (which are also hosted on FutureLearn). The OOCs provide a balance across the subject categories offered on FutureLearn, and so have the opportunity to demonstrate engagement patterns based on platform, subject category and individual OOC.
After the deadline date, analysis will aim to identify learners with potential for interviews and focus groups. The analysis of the data will be provided in PR07, and the methodology and analysis of the interviews and focus groups will be provided in PR08 and PR09 respectively.
- Structure of Data Analysis
As both quantitative and qualitative data will be collated for the main study via surveys and interviews, different approaches to data analysis will be taken.
With regard to the analysis of the quantitative data from the survey, nominal scales will be used for questions relating to demographics, and ordinal scales operating on Likert-scale principles for attitudinal questions; the data are thus considered non-parametric.
In the analysis of the data, a one-tailed test may be applied to test the hypothesis that learners who associate engagement with open courses with an extrinsic professional or academic goal are more likely than leisure learners to engage with the course until completion, with leisure learners being more brief and sporadic in their engagement strategy. The analysis should define whether variables such as current academic and professional positioning and future goals bear any relation to learners’ perception of, and engagement with, the open courses, and what linear or non-linear relationships can be drawn from this.
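For ordinal, non-parametric data, a one-tailed test of this hypothesis could take the form of a Mann-Whitney U test (one of several suitable tests; SPSS would perform the equivalent on the real data). The sketch below uses invented engagement scores and a normal approximation for the p-value, and assumes no tied scores.

```python
import math

def mann_whitney_one_tailed(a, b):
    """U statistic and one-tailed p-value (H1: a tends to exceed b),
    using the normal approximation; assumes no tied values."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank each score
    r1 = sum(rank[v] for v in a)                     # rank sum of group a
    u1 = r1 - n1 * (n1 + 1) / 2                      # U statistic for a
    mu = n1 * n2 / 2                                 # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # sd of U under H0
    z = (u1 - mu) / sigma
    p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))   # upper-tail p-value
    return u1, p

goal_driven = [78, 85, 92, 70, 88, 95, 81]  # invented engagement scores
leisure = [55, 62, 48, 73, 60, 66]

u, p = mann_whitney_one_tailed(goal_driven, leisure)
print(u, round(p, 4))
```

A p-value below the chosen significance level would support the directional hypothesis that goal-driven learners engage more fully than leisure learners.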
To ensure reliability in the data analysis, SPSS will be used, especially given the large-scale data expected to be received via the surveys. It is proposed that the split-half technique be applied in conjunction with the use of SPSS. Data will then be tabulated for expression within the text, with data visualisations in the form of graphs, pie charts, etc. displayed only where they add greater value to the analysis beyond frequency and percentage tables.
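The split-half technique mentioned above can be illustrated as follows: the scale items are split into two halves (odd versus even items here), the half-scores are correlated, and the Spearman-Brown correction estimates the full-scale reliability. The Likert responses are invented for illustration; SPSS would compute the equivalent on the real survey data.

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def split_half(responses):
    """Correlate odd-item vs even-item half-scores per respondent,
    then apply the Spearman-Brown correction for full-scale length."""
    odd = [sum(r[0::2]) for r in responses]   # items 1, 3, 5, 7
    even = [sum(r[1::2]) for r in responses]  # items 2, 4, 6, 8
    r = pearson(odd, even)
    return 2 * r / (1 + r)                    # Spearman-Brown formula

data = [  # invented 1-5 Likert answers, one row per respondent
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [3, 3, 2, 3, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1, 2, 1],
]
print(round(split_half(data), 3))
```

A corrected coefficient close to 1 indicates that the two halves of the scale measure the construct consistently.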
The secondary element of analysis will be of the qualitative data acquired through the interviews following the survey. To amalgamate the key issues emerging from the transcriptions, a combination of progressive focusing (Parlett and Hamilton, 1976), content analysis (Ezzy, 2002: 83; Anderson and Arsenault, 1998: 102) and grounded theory (Strauss and Corbin, 1994: 273) will be used: initially taking a wide-angle approach to gather the data from interviews across the three data sets, then sorting, coding, reviewing, and reflecting upon the responses given to systematically gather and analyse the data.
Through typological analysis (LeCompte and Preissle, 1993: 257) the three data sets will first be organised into their three platform groups, then reviewed and organised at the level of individual responses to ascertain, through the application of secondary coding (Miles and Huberman, 1984), whether any themes or frequencies emerge that allow the data to be organised by issue and, plausibly, by research question. Following Brenner et al.’s (1985) steps to content analysis, in conjunction with the organisation of the data into groups, should ensure reliability in its interpretation.