This work package will run under an agile framework and cover two cycles of iterative pilot execution, evaluation and revision before the final version is cleared for dissemination and exploitation in WP6. Each pilot will integrate the results of WP1-WP4 and will consist of the following steps: a) Train students to use the virtual labs and run a number of experiments, as identified in WP1; b) Monitor students’ activities while they use the labs and log behavioral data streams through the analytics back-end installed in WP2; c) Employ the authoring tool (WP4) to enhance the virtual labs, taking into consideration the deep analytics results presented to the teacher through the front-end (WP3); and d) Evaluate conditions 1-3 as outlined in the Objectives above.
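
For illustration only, the behavioral data stream referred to in step (b) could take the form of simple timestamped events sent from a virtual lab to the WP2 analytics back-end. The sketch below is a hedged assumption about what such logging might look like; the event fields, action names and endpoint URL are hypothetical placeholders, not the actual ENVISAGE schema or API.

```python
# Illustrative sketch only: a minimal behavioral event as it might be logged
# from a virtual lab to the WP2 analytics back-end. All field names and the
# endpoint URL are hypothetical placeholders, not the actual ENVISAGE schema.
import json
import time
import urllib.request


def log_event(student_id: str, lab_id: str, action: str, details: dict) -> None:
    """Package a single behavioral event and POST it to the analytics back-end."""
    event = {
        "timestamp": time.time(),      # when the action occurred (Unix time)
        "student_id": student_id,      # pseudonymous learner identifier
        "lab_id": lab_id,              # which virtual lab / scenario
        "action": action,              # e.g. "experiment_started", "measurement_taken"
        "details": details,            # action-specific payload
    }
    request = urllib.request.Request(
        "https://analytics.example.org/events",   # placeholder endpoint
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)


# Example: a student completes one of the experiments identified in WP1.
# log_event("s-042", "wind-energy-lab", "experiment_completed", {"duration_s": 310})
```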

Evaluation will take place at EA with support from the other partners, notably AAU on the analytics component. In particular, the process of using the authoring tool as a means for building virtual labs will be evaluated by teachers and e-learning experts. The support offered by the analytics tools in the process of improving virtual labs will be evaluated by teachers and game analytics experts. The delivered virtual labs and the learning content will be evaluated by teachers and students as well as user interface experts. The primary evaluation tools will be observation, user testing and user feedback collected online, combined with behavioral telemetry, which provides detailed behavioral data for all three evaluation conditions. The evaluation will follow best practices in evaluating learning software as well as digital games. The evaluation protocol for each virtual lab will be defined at multiple levels so as to cover measures of effectiveness, efficiency, usability, functionality and user experience, as well as the benefit for the educational organizations in terms of the extent to which the developed technologies succeed in motivating students’ engagement with the virtual labs. To maximize the benefit of the evaluation process we plan to adopt (and extend where appropriate) the evaluation and experimental protocols that are currently in place at EA for evaluating their current solutions. In this way, we can leverage their experience of what exactly needs to be measured, how it can be measured, and how the test results can be analyzed to derive conclusions about technology improvements. The above will be accomplished by undertaking the following tasks.


T5.1 – Building virtual labs: Small-scale pilots addressing the educational scenarios (EA, All) [M9-M21]

This task will focus on developing a set of virtual labs with the ENVISAGE assets, based on the educational scenarios identified in T1.1. Using the data brought into the project through T1.2 and the combination of the modules implemented in WP2-WP4, the specific educational scenarios will be executed towards building and delivering a set of virtual labs that meet the goals and needs of the use case partner (EA). The focus will be on producing and evaluating different versions of the virtual labs, using an A/B testing approach to improve the design and functionality of the labs initially built with the authoring tool. For this purpose, at least two pilot iterations are needed to perform the A/B testing effectively. In particular, two pilot iterations will be executed during the lifetime of the project so as to enable the refinement of the developed modules, while further iterations can be run by the use case partner after the end of the project, once the final software package has been delivered, to further optimize their virtual labs.
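
As a hedged illustration of the A/B testing step, the sketch below compares two lab variants on a simple experiment-completion-rate metric using a two-proportion z-test. The variant labels and counts are invented, and the actual metrics would be those defined by the evaluation protocol in T5.2.

```python
# Illustrative sketch of the A/B comparison step: given telemetry counts for two
# versions of a virtual lab, test whether their experiment-completion rates differ.
# The figures and variant labels are invented for illustration only.
from math import sqrt
from statistics import NormalDist


def two_proportion_z_test(successes_a, total_a, successes_b, total_b):
    """Return (z statistic, two-sided p-value) for the difference of two proportions."""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical pilot numbers: variant A (initial lab) vs. variant B (revised lab).
z, p = two_proportion_z_test(successes_a=38, total_a=60, successes_b=49, total_b=62)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the revision changed the completion rate
```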

T5.2 – System test and evaluation (AAU, All) [M9-M21]

The objective of this task is to evaluate the ENVISAGE solution against each of the three conditions outlined above, which are addressed through the following subtasks:

  1. Authoring tool evaluation: Following the internal evaluation of the authoring tool in T4.3, this subtask will focus on evaluating the authoring tool as a means to easily and effectively build virtual labs. More specifically, a number of external evaluators (i.e., teachers) will use the authoring tool and evaluate several aspects of it, such as its usability, its effectiveness, the friendliness of its interface and its ease of use. The evaluation outcomes will feed into the second cycle of the project, in which an updated and improved version of the authoring tool will be developed and used in the second evaluation phase.
  2. Analytics tools evaluation: The focus of this subtask will be to demonstrate the effect of the analytics tools on the process of optimizing the design and the learning process of the virtual labs. AAU will take the lead in evaluating the analytical results from the labs/scenarios and liaise with EA to measure the benefit and support provided by the analytical techniques (both shallow and deep) in terms of their applicability to the everyday practice of designers and teachers.
  3. Virtual labs evaluation: This subtask will focus on evaluating the usefulness and effectiveness of the delivered virtual labs and learning content in fostering students’ engagement with the labs and promoting retention (a minimal sketch of how a retention measure could be derived from the telemetry is given after this list). The evaluation of the virtual labs will be carried out by both teachers and students in order to ensure that the process of improving the virtual labs has fulfilled the expectations of both groups.
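
As a minimal sketch, assuming session-level telemetry with pseudonymous student identifiers and session dates (an assumption about the logged data, not the actual ENVISAGE data model), a simple return-rate measure of retention could be computed as follows.

```python
# Minimal sketch: derive a simple retention measure (share of students who return
# to the lab on a later day) from session-level telemetry. The session records and
# their fields are assumed for illustration, not the actual ENVISAGE data model.
from collections import defaultdict
from datetime import date

# Hypothetical session log: (student_id, day the session took place)
sessions = [
    ("s-001", date(2017, 3, 6)), ("s-001", date(2017, 3, 13)),
    ("s-002", date(2017, 3, 6)),
    ("s-003", date(2017, 3, 6)), ("s-003", date(2017, 3, 20)),
]

days_per_student = defaultdict(set)
for student_id, day in sessions:
    days_per_student[student_id].add(day)

returning = sum(1 for days in days_per_student.values() if len(days) > 1)
retention_rate = returning / len(days_per_student)
print(f"Return rate: {retention_rate:.0%}")  # 2 of 3 students came back -> 67%
```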

Feedback from the first evaluation phase will be used as the basis for revising the ENVISAGE solution by the relevant partners (GIO, UoM, CERTH, AAU).

Partners’ role:

AAU will work with EA on planning and executing the evaluation of the different components of the ENVISAGE system, forming the basis for the iterative analysis of the behavioral telemetry collected. In order to evaluate the impact of the analytics tools in the ENVISAGE system, close collaboration between AAU and EA will be necessary to cross-check the actionability and practical impact of these tools. AAU will also take the lead on mapping and recommending changes to the analytical front end, working closely with UoM, GIO and CERTH on ensuring a smooth iterative evaluation process for the analytics support.

EA will bring the perspective of the users (teachers and learners) to the process of evaluating and assessing the authoring tool and the virtual labs that can be constructed with it, as well as the underlying analytics support during both of these steps. EA will develop the virtual labs based on the educational scenarios developed in WP1, which will form the basis for the planning and execution of the system tests. EA will host the physical evaluation of the tool and labs, and define the evaluation protocol in collaboration with AAU. Finally, EA will also measure the impact of the tool on the process of designing, running and improving virtual labs, working with AAU on the analytics aspects of the evaluation.

CERTH will ensure the applicability of the authoring tool during the pilot execution, and will integrate the evaluation results into the authoring tool between and following test iterations.

GIO will iteratively integrate the changes needed on the analytical back end with respect to data collection.

UoM will iteratively integrate the results of evaluation in between and following iterations as they pertain to the analytics front end.