Friday, 25 April 2008

Ideas for evaluation plan

Some brief notes on my thoughts so far for my evaluation plan, using headings from the template provided (not yet complete).

Background:

Moodle is a learning management system (LMS) or virtual learning environment (VLE) which was introduced to South Western College (an invented name for the purposes of my project) in September 2005. It is available to all teaching staff, administrators and students. It is used as a repository for college administration documents (e.g. policy documents) and training documentation. It is also used as a resource repository, with space available for every subject area; the areas are generated by course qualification. Some subject areas use the available area to provide all course documentation, resources that supplement the face-to-face classroom learning, and relevant activities and links. The level of use of these areas varies greatly from course to course.

This section describes any information which is needed to provide the reader with an understanding of the background of the interactive multimedia that is being evaluated.

Purposes:

The purpose of the evaluation is to carry out a formative review of Moodle in terms of its impact on learning. In particular, the following two e-learning guidelines will be investigated.
  1. ST7 Will the e-learning foster students’ curiosity and creativity?
  2. ST9 Do the technologies employed help students successfully meet the learning outcomes?


This section thoroughly describes the purposes of the evaluation. A single plan can address a variety of purposes, but all must be delineated clearly. Evaluation is always a political process and all parties must accept the purposes for the evaluation to be successful.

Audiences:

To focus on the learner experience of using Moodle, the evaluation will concentrate on gathering information from students who have experienced its use in their lessons.

The curriculum managers for the two departments will see the results of the evaluation.

This section specifies all the primary and secondary audiences or consumers of the evaluation. In general, it is recommended to open the evaluation up to as many people or agencies as the client will allow.

Decisions:

Formative evaluation - answers questions such as: how well is it working so far? Does it help? Does it stimulate creativity and curiosity? Does it help learners achieve? If yes, how can good practice be disseminated to other staff? If no, how can these issues be addressed in the future?

This section is probably the most difficult, but it should be included if the evaluation is to have meaningful impact on decision-making. Trying to anticipate the decisions which can be influenced by an evaluation takes creativity and trust. Many developers do not wish to anticipate negative outcomes for their efforts, but these too must be considered.

Questions:

Are learners benefiting from the use of Moodle in their face-to-face lessons - learning outcomes?

Does using Moodle enable learners to develop skills/interest in their subjects - curiosity and creativity?

Does using Moodle enable students to achieve at a level equal to or above their achievements in a wholly face-to-face environment - learning outcomes?

A key element of a sound evaluation plan is careful specification of the questions to be addressed by the evaluation design and data collection methods. The clearer and more detailed these questions are, the more likely that you will be able to provide reliable and valid answers to them.

Methods:

Observation of lessons - to see how students interact with Moodle and whether it fosters creative activity and curiosity. Also to judge whether lesson outcomes can be achieved when using blended learning. I've put a list of statements together in an observation checklist to give me pointers to look for. I'd appreciate any feedback.

Student survey - to gather student opinions on the use of Moodle and their perceptions of the effects of the blended approach to their learning. I've started a survey but struggling a little with the questions to include - again any feedback welcome.

Informal interviews - as a result of survey responses and/or observations to help clarify any unexpected issues that arise to reflect 'responsive evaluation'.

This section describes the evaluation designs and procedures. There are scores of designs and hundreds of procedures which can be used. The keys to success are matching these options to the purposes and questions of your client and keeping within the budget and time line of the study.

Sample:

This depends on factors outside my control - the availability of teachers and students at a busy time in the academic year. It's likely to be between 20 and 40 students, aged 16-19, studying courses at level 1, 2 or 3.

This section specifies exactly which students, trainers, and other personnel will participate in the evaluation. If necessary, a rationale for sample sizes should also be included.

Instrumentation:

This section describes all the evaluation instruments and tools to be used in the evaluation. Actual instruments should be included in appendices for review and approval.

Limitations:

This section spells out any limitations to the interpretation and generalizability of the evaluation. It should also describe potential threats to the reliability and validity of the evaluation design and instrumentation.

Logistics:

This section spells out who will be responsible for the various implementation, analysis, and reporting aspects of the evaluation.

Time Line:

This section presents the schedule for implementation, analysis, and reporting of the evaluation.

Budget:

This section "costs out" the finances for the evaluation. Personnel time usually is the major cost factor. Evaluators often charge from two hundred to several thousand dollars per day depending on their expertise and reputation. Other significant cost factors are travel, data preparation (e.g. transcribing taped interviews), and document duplication.
