Introduction

Computer-aided design (CAD) is one of the fundamental engineering courses in universities, particularly for mechanical engineering students. The combination of face-to-face and lab-based instruction challenges students to bring together theoretical knowledge, design ability and drawing skills. In this study we propose that analytics can be applied to passive data and used as a significant indicator to improve the learning and teaching experience for tutors and students on lab-based CAD courses. This is based on the use of a novel online learning and teaching platform called SurreyConnect, which allows the analysis of a set of elements related to student activity, including log-in and log-out data, time spent on a particular application, and time spent on assignments. From the collected data we can identify key indicators, such as student attendance and sitting in groups, that can impact on students' marks and final results. Translating this information visually into dashboards can guide tutors during a teaching session and indicate when to intervene with a learning activity, for example to maintain attention and raise motivation. Meaningful trends and indicators of progress against learning outcomes can be produced when this information is combined with historical learner records, such as interim and final results. These trends can also provide the basis for developing early warning systems to catch students at risk of failure (Arnold and Pistilli 2012).

Background

An increasing number of students are now ‘born digital’ and they arrive at our institutions with particular expectations of the services they will receive (Kay and van Harmelen 2012). As Prensky (2001) comments:

Our students have changed radically. Today’s students are no longer the people our educational system was designed to teach. (p. 1).

In response, educational institutions have adapted their more traditional ways of teaching to meet the needs of students who are comfortable with the use of new technologies in their personal and professional lives. Higher Education Institutions (HEIs) are now deploying blended learning approaches to teach these ‘digital natives’. Blended learning can broadly be defined as the use of technology to support face-to-face teaching and to enhance student participation (Liao and Lu 2008). An area that has received substantial attention from educators has been the use of technology to support collaborative learning, where groups of students work together to achieve common learning objectives (Resta 1995). The term computer-supported collaborative learning (CSCL) is often used to refer to a computer-based network environment that supports this type of group-based learning. These environments often comprise a shared interface in which users can work in groups (Ellis et al. 1991), with a set of cognitive tools that bring individuals together to combine their activities and reach a shared understanding. These software tools help divide complex work across the group and can function as a scaffold to mitigate the limitations of human memory (Corfield 2013).

With advances in the use of technology for learning, teaching and management we have also witnessed an increasing trend in the collection and sharing of data in the form of Key Information Sets (KIS). These have included performance indicators, progression and retention rates, and assessment tracking, as well as broader satisfaction indicators such as those found in the UK National Student Survey (NSS). These data are often not directly targeted at improving learning performance, but are more widely used to assess student engagement with their studies. In other words, these data sets have not been used as part of a wider ‘analytics’ approach that could provide deeper insights into learning and teaching processes. This is what MacNeill (2012) describes as:

The process of developing actionable insights through problem definition and the application of statistical models and analysis against existing and/or simulated future data. (p. 3).

A recent Educause survey (Bichsel 2012) highlighted the potential value that HEIs were missing by not fully analysing the large amounts of digital information that are being captured by their information systems:

Higher Education institutions, for the most part, are collecting more data than ever before. Most of these data are used to satisfy credentialing or reporting requirements rather than to address strategic questions, and much of the data collected are not used at all. (p. 3).

Two approaches that are being applied to explore big data in education are academic/learning analytics (Table 1) and educational data mining. There is no strict distinction between the two areas but, generally, data mining looks for new patterns, while analytics applies known predictive models to data (Bienkowski et al. 2012). Researchers have identified educational data mining as a method for building student and domain models, and with analytics have started to develop insights into the effects of different kinds of pedagogical support on learner achievement (Romero and Ventura 2010; Peterson et al. 2010; Siemens and Baker 2012).

Table 1 Differentiation between types of analytics approach

The increasing use of learning analytics has had a direct impact on the necessity to present information in ways that allow target users to quickly identify features, trends and patterns (Bienkowski et al. 2012). The most prominent data visualisation techniques have been in the development of dashboards that contain useful information that can be easily interpreted by course tutors. Interactive and customizable dashboards can help tutors to map trends in learner behaviour (Johnson et al. 2010) and ultimately suggest modifications to the teaching and learning setting to enhance the learning experience and the achievement of desired educational goals.

Research focus

This study is framed by two key research questions:

  1. Can we successfully identify and measure critical factors that influence the learning outcomes of students in lab-based teaching environments?

  2. Can analytics applied to this passive data be used to generate a predictive algorithm to identify students at risk and prompt tutor intervention?

To test this, four different groups of students on the same course were observed and monitored, and the statistical significance of a range of measured factors was calculated. The focus has been on passive data collection, as opposed to active data that is explicitly entered into a system by users, for example data collected via survey and feedback forms. Passive data is generally collected using sophisticated tools that typically require no input from users within the process. HEIs have traditionally used active data collection (e.g. student surveys to drive improvements) but this is often more useful for academic analytics as opposed to learning analytics:

Learning analytics is the application of analytic techniques to analyse educational data, including data about learner and teacher activities, to identify patterns of behaviour and provide actionable information to improve learning and learning-related activities. (Harmelen and Workman 2012: p. 5).

Methodology

Design and sample

A total of 331 undergraduate students participated in this study: 167 during the 2012–2013 academic session and 164 during the 2013–2014 session. All students were studying a Computer Aided Drawing (CAD) module at the University of Surrey. The sample was approximately evenly distributed across four different groups within each of the years: CAD-A, CAD-B, CAD-C and CAD-D. All groups came together for a Monday theory class, and the group lab sessions were then separately assigned to Mondays, Thursdays, Tuesdays and Fridays; each class was 3 h in duration. The semester was 10 weeks long, with the initial 5-week period dedicated to engineering drawing practice and the last 5 weeks to computer-aided design. The drawing test comprised an online quiz supplemented by a coursework task designed to measure CAD skills. A written examination on design theory was administered based on the theory classes. The average result represents the arithmetic mean of all the marks obtained.

The tests were conducted in a large computer lab with approximately 65 workstations (Fig. 1). There were two multimedia screens on the front wall and one on each wing to display presentations and practical demonstrations. There were five rows in the lab, with on average seven workstations in each row. Every workstation was assigned a unique identifier that was later used to find the position of each student and his/her fellows in the lab.

Fig. 1 Computer lab layout for the CAD course

SurreyConnect learning and teaching environment

A lightweight and simple-to-operate teaching system called SurreyConnect was built to support the lab-based CAD teaching sessions (Akhtar et al. 2013). The system was purposely designed to record passive information without disrupting the workflow of end users. It comprises three major modules: server, tutor, and student. The server module runs on a high-performance dedicated system with a database to record student activity and then sends updates to tutors. This centralized approach supports error-free synchronization of information among tutors and students and also enables tutors to use any domain workstation without installation rights. SurreyConnect is a multifunctional, multilingual tool which offers various features to support in-class teaching and learning activities.

These activities include a choice of various operating modes such as:

  • Teaching mode: teacher/tutor shares their screen with students;

  • Student broadcast mode: selected student screen is shared with the class;

  • Progress monitoring mode: periodic snapshots of a student’s progress are updated on the tutor interface;

  • One-to-One mode: a selected student station can be remotely controlled to help with exercise problems. This mode supports voice and video communication;

  • Question–Answer mode: students and tutors can collaborate with each other via text chat.

The home interface (Fig. 2) is where different modes of operation can be selected. Tabs below the menu bar offer particular interfaces for specific modes of operation. The monitoring mode interface (Fig. 3) shows a customized workstation map for each classroom to identify each student’s location. The collaboration mode is where tutors and students share the same window of communication to discuss their ideas (Fig. 4). Tutors have additional options to broadcast messages or start one-to-one interactions with particular students.

Fig. 2 Tutor module: home interface showing the main operating options as large buttons

Fig. 3 Tutor module: monitoring interface with workstation location map

Fig. 4 Tutor module: collaboration interface

The student module (Fig. 5) provides multiple options, including recording and playback of lectures. Students can request to move to different virtual rooms, share their screen, ask questions online via audio/video, and collaborate with other students. They can also change their status icon to provide live feedback to tutors during a lecture.

Fig. 5 Student module: main interface with options presented as large buttons

Data collection process

All students were pre-registered on the SurreyConnect system and data collection started from Week 1 of the course. The layout of workstation IDs in the lab was provided to enable automatic detection of each student’s location. The system detected the login name of the domain user and automatically signed them into the server. The system recorded log-in and log-out times and the workstation ID against their predefined domain IDs in the database.
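To make the location mechanism concrete, the sketch below shows one plausible workstation-to-seat mapping. The five-row layout with roughly seven seats per row comes from the lab description above; the WS-<row><seat> ID scheme, the function names, and the same-row neighbour rule are illustrative assumptions rather than SurreyConnect's actual implementation.

```python
# Hypothetical workstation-to-seat mapping for the 5-row, ~7-seat lab.
# The "WS-<row><seat>" ID scheme is an assumption for illustration.
SEAT_MAP = {f"WS-{row}{seat}": (row, seat)
            for row in range(1, 6) for seat in range(1, 8)}

def locate(workstation_id):
    """Return (row, seat) for a workstation; row 1 is nearest the screen."""
    return SEAT_MAP.get(workstation_id)

def neighbours(workstation_id):
    """IDs of adjacent seats in the same row, used for grouping analysis."""
    row, seat = SEAT_MAP[workstation_id]
    return [f"WS-{row}{s}" for s in (seat - 1, seat + 1)
            if f"WS-{row}{s}" in SEAT_MAP]

print(locate("WS-23"), neighbours("WS-23"))  # (2, 3) ['WS-22', 'WS-24']
```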

The CAD module used Solid Edge ST3 for visualizing computer-aided design. SurreyConnect was able to automatically detect the start and finish times of the CAD software and so monitor the total duration of activity on a given exercise. The system performed further analysis that included: the number and IDs of unique students; total number of students; average time spent in the class; average seating row; and total attendance. The results from the assessments (drawing test, CAD coursework, and design examination) were mapped onto the recorded data to find the correlation between behaviour and outcome. Table 2 details the information that was collected, including its purpose and mapping. Attendance was recorded when a student appeared at class and logged into the data collection system. The information from workstations included learner distance from the lecturer and identification of neighbouring students. This neighbour information was used to explore the grouping patterns of students who regularly sat together. Login/logout times were collected to find the total time spent in the classroom and were also compared with time on exercise to produce a ratio for productive time on task.

Table 2 Passive digital information collection
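To illustrate the productive-time ratio mentioned above, the following sketch computes time on the CAD exercise as a fraction of time in class from one session's timestamps. The timestamp format and the helper name productive_ratio are assumptions; the paper does not publish SurreyConnect's log schema.

```python
# Minimal sketch: time on task as a fraction of time in class.
# Timestamp layout is assumed, not taken from SurreyConnect itself.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def productive_ratio(login, logout, cad_start, cad_end):
    """Return time inside the CAD package as a fraction of time in class."""
    in_class = (datetime.strptime(logout, FMT)
                - datetime.strptime(login, FMT)).total_seconds()
    on_task = (datetime.strptime(cad_end, FMT)
               - datetime.strptime(cad_start, FMT)).total_seconds()
    return on_task / in_class if in_class > 0 else 0.0

# Example: a 3-h lab session with 2.5 h spent inside Solid Edge
print(productive_ratio("2013-10-07 14:00:00", "2013-10-07 17:00:00",
                       "2013-10-07 14:20:00", "2013-10-07 16:50:00"))  # ~0.83
```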

The CAD module is divided into three sub-sections: drawing, CAD, and design. Information available at the start of the first session included ‘Batch’ (typically referred to as academic year), ‘Program’ (also known as pathway), ‘Gender’, and CAD group. As shown in Fig. 6, the data for Weeks 1–5 are taken from the drawing sessions, with an online test conducted in Week 5. CAD modelling started in Week 6, with the design lectures conducted as parallel theory-based teaching sessions.

Fig. 6 Data generation timeline showing key assessment points

Data analysis

Raw data from SurreyConnect were first imported as a comma-separated values (CSV) file into Microsoft Excel for processing. Attendance was calculated as the percentage of the total number of classes attended in the whole semester. Final marks were the arithmetic mean of the design, drawing and CAD marks. Average time was calculated by summing all the time spent during the semester and then dividing this value by the number of classes attended. Each seating row in the lab was assigned a unique number based on its position relative to the main projector screen, and the ‘rows’ measure was calculated by averaging this row number across the semester. The ‘groups’ analysis started by collecting, for each student, the fellow students sitting at neighbouring workstations; the unique IDs were then recorded and separated. Finally, the percentage of unique IDs against the total number of IDs was calculated to produce a group indicator.
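As a minimal sketch, the derived measures described above can be reconstructed as follows, assuming one record per attended session holding the minutes spent, the seating row, and the neighbouring student IDs. The record layout and function name are hypothetical.

```python
# Illustrative reconstruction of the derived measures; the record
# layout (minutes, row, neighbours) is a simplifying assumption.
def derive_measures(records, total_classes):
    attended = len(records)
    attendance_pct = 100.0 * attended / total_classes
    avg_time = sum(r["minutes"] for r in records) / attended
    avg_row = sum(r["row"] for r in records) / attended
    # Group indicator: percentage of unique neighbour IDs against all
    # neighbour IDs; a low value means the student regularly sat with
    # the same fellow students.
    all_ids = [n for r in records for n in r["neighbours"]]
    unique_pct = 100.0 * len(set(all_ids)) / len(all_ids)
    return attendance_pct, avg_time, avg_row, unique_pct

records = [
    {"minutes": 170, "row": 2, "neighbours": ["s17", "s33"]},
    {"minutes": 150, "row": 2, "neighbours": ["s17", "s21"]},
    {"minutes": 160, "row": 3, "neighbours": ["s17", "s33"]},
]
print(derive_measures(records, total_classes=10))  # (30.0, 160.0, 2.33..., 50.0)
```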

An ANOVA test was used to identify the statistical significance of the recorded elements and to rule out any potential impact of the allocated CAD group (A, B, C, or D) on the summative assessment outcomes. In the next step, Pearson correlation was applied to find the independent variables that correlated with the final assessment outcomes. This allowed for identification of parameters that may impact on the overall learning outcomes. In the final step, linear regression was applied to the identified variables to generate a prediction equation that was used to estimate the final outcome. The comparison of actual and predicted results is visually represented in Fig. 7. To minimise complications, the scores were categorized as low, medium, and high. In an ideal situation the prediction equation should generate 100 % accurate results, with both the green (predicted results) and red (actual results) circles of the same size and concentric. In practice this was difficult to achieve. The overlapping region of the circles shows correct predictions (i.e. students who were predicted to fail and did so), with the red region showing those students who failed but were not identified as at-risk. The green region shows the group of students who were identified as at-risk but actually scored well and passed.

Fig. 7 Possible outcomes of score prediction
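The analysis itself was run in SPSS; the sketch below reproduces the same three-step pipeline (one-way ANOVA across the four groups, Pearson correlation, linear regression with the 50 % at-risk threshold) in Python with SciPy. The small arrays are made-up values purely to show the shape of the computation.

```python
# SPSS-equivalent pipeline sketch; all data values are illustrative.
import numpy as np
from scipy import stats

# Step 1: one-way ANOVA - do the four CAD groups differ in final marks?
group_a, group_b = [62, 71, 58, 66], [64, 69, 60, 63]
group_c, group_d = [61, 70, 59, 67], [63, 72, 57, 65]
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c, group_d)

# Step 2: Pearson correlation between an independent variable and the mark
attendance = np.array([95, 80, 70, 60, 90, 85, 75, 65])
final_mark = np.array([72, 65, 55, 48, 70, 66, 58, 50])
r, p_corr = stats.pearsonr(attendance, final_mark)

# Step 3: linear regression to predict the mark; flag < 50 % as at-risk
slope, intercept, r_val, p_reg, se = stats.linregress(attendance, final_mark)
predicted = intercept + slope * attendance
at_risk = predicted < 50

print(f"ANOVA p={p_anova:.3f}, Pearson r={r:.2f}, at-risk={at_risk.sum()}")
```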

Focus group interviews

Qualitative data were collected using a lightweight semi-structured interview format with small student groups (Table 3). These interviews were conducted to gain insight into the reasoning and intent behind the choices that students were making in class, as revealed in the passive data capture and analysis. Students were invited in the groups in which they usually attended their classes. The twelve focus groups comprised 48 students in total, and the informal nature of the interviews provided an opportunity for students to articulate the intentions behind their observed behaviours. All interviews were recorded and transcribed, with individual identities kept anonymous; the transcripts were coded by two independent researchers and key themes drawn out and cross-referenced.

Table 3 Semi-structured interview questions

Results

This study has been driven by two key research questions: the first focuses on the factors within the learning and teaching environment that influence students’ learning outcomes; the second on whether the passive data collected in relation to these factors can be used to create a predictive algorithm to help tutors in lab-based teaching sessions.

We observed four different groups of students, and the impact of five different independent variables on the final learning outcome was explored. The results are shown in Table 4. The distribution of data shows that most of the students demonstrated a consistently high level of attendance, with more than 85 % maintaining an attendance record of above 70 %. However, the majority of the students were not present for the full duration of the lab sessions.

Table 4 Hypothesis testing

The CAD teaching sessions were conducted in a five-row computer lab, with the first row nearest to the main projector screen and typically to the presenter. Almost 70 % of the students preferred to sit in the middle rows (Rows 2–4); the remainder were divided into two almost equal groups of front and back benchers. The group trend suggested that students moderately preferred to sit with one or more of their companions; however, one-third did not sit with any particular set of fellow students.

The results of the one-way ANOVA highlight the level of importance of the variables identified in Table 4. The first four variables had a statistically significant impact on the average marks at the 0.01 and 0.001 levels. Attending the class in different groups had no significant impact on the average marks secured by the students (Table 5).

Table 5 Analysis of variance (ANOVA) between CAD group and module sub-sections

The level of correlation between the significant variables was calculated using SPSS and is presented in Table 6. This correlation reveals that the marks in the different sub-sections are linked, meaning that those students who scored well in one sub-section also performed well in the others. According to the analysis, attendance is significantly correlated with the final mark and the other collected variables. Total time spent in class is highly correlated with attendance. Students who preferred to sit in groups, or to remain next to the same fellow students in successive classes, were likely to score better than individuals who did not associate themselves regularly with other students. For the correlation calculation on rows, the rows were scaled, with the first row allotted 5 points and the fifth row allotted 1 point; these points were summed to arrive at the scaled rows column used in the correlation. Attendance can be seen to have an impact on the points collected for rows. However, front-row students seemed unlikely to sit in groups, which resonates with the observations made by tutors.

Table 6 Correlation among identified variables
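For concreteness, the row scaling just described amounts to awarding 6 minus the row number for each attended session and summing, as in this small sketch (the function name is ours):

```python
# Row scaling: row 1 (front) -> 5 points ... row 5 (back) -> 1 point,
# summed over attended sessions, so attendance also raises the total.
def row_points(rows_attended):
    return sum(6 - row for row in rows_attended)

print(row_points([2, 2, 3, 1]))  # 4 + 4 + 3 + 5 = 16
```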

Regression equation

As noted, the data were collected over two academic sessions with four independent groups within each session. Linear regression was applied in SPSS to generate a regression equation. After the first academic session one linear regression equation was computed, and after the second academic year two new equations were added: one for the new academic session and the other from the combined data. The parameters of all the equations are shown in Table 7.

Table 7 Regression analysis for predicted scores

The three regression equations from Table 7 were applied to three different sets of data, and the predicted scores were compared with the actual results. Students who actually scored less than 50 % were marked as failed. Students with predicted low scores (<50 %) were marked as ‘Total identified at-risk’. Some students were identified as at risk but were successful in passing the exam (Table 8).

Table 8 Results of linear regression equations
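A hedged sketch of how the comparison behind Table 8 maps onto the regions of Fig. 7 is given below. The 50 % threshold follows the paper; the function name and the sample (actual, predicted) pairs are illustrative.

```python
# Map one student's actual and predicted scores onto the Fig. 7 regions.
def classify(actual, predicted, threshold=50.0):
    failed = actual < threshold
    flagged = predicted < threshold
    if failed and flagged:
        return "correctly identified at-risk"   # overlap region
    if failed:
        return "missed: failed, not flagged"    # red-only region
    if flagged:
        return "false alarm: flagged, passed"   # green-only region
    return "correctly predicted pass"

students = [(45, 47), (48, 55), (62, 49), (70, 68)]  # (actual, predicted)
for actual, predicted in students:
    print(actual, predicted, "->", classify(actual, predicted))
```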

Focus group and interview data

The interview data were transcribed and analysed to identify the emerging themes in relation to the four variable categories of attendance, time spent in class, sitting position, and sitting in groups. These themes were organised across two opposing dimensions (Fig. 8). Typically, students with positive behaviour traits, such as high attendance and spending more time in class, were more disposed to contribute to the interviews. However, intentions were also successfully captured from those with more negative behavioural traits, such as low class attendance and/or leaving before the end of the timetabled session.

Fig. 8 Results of interviews with focus groups, organised across four categories with two variable dimensions under each one

Prediction dashboard

The results of the predicted outcome scores were presented to the students in the form of a simple dashboard feature within SurreyConnect called ‘iPredict’. Many students were positive about this feature, despite some minor criticisms of the implementation method. Rather than a simple performance ranking of Good–Average–Bad, students indicated they would prefer to see their progress as a percentage and/or with motivating comments such as “You need to work harder in this particular area/subject”. The students found that the simple rating scale was not precise enough, and receiving a ‘bad’ rating was demotivating and lessened their confidence. One student commented: “Top students will get motivated but bottom students may get stressed watching the iPredict tool”. Additionally, some students suggested that including a badging system for rewarding positive behaviours, for example consistent learner achievements, could enhance motivation.

The overall student feedback suggested that the implementation of the iPredict tool may help to improve learning. As one student commented: “It is good to know where you stand”. Another supported the idea of displaying scores, commenting that “… bonus marks is good system to motivate low scoring students”.

No one opted to use the online collaboration facility when the opportunity for face-to-face interaction with the tutor was available. However, students did see benefit in viewing the tutor via their webcam while he/she was helping them out using the remote-control feature to operate their workstation. Most of the students in the focus groups reported that they preferred to sit with their friends because they found it quicker to ask a friend before the tutor. As one student commented: “I always ask my friends first because sometimes it takes a click of a few buttons [to solve a problem]”.

Discussion

The data analysis presented in Table 4 shows that the four student groups were statistically similar in terms of their final results. This confirmed the validity of using the combined group data to explore the impact of the identified factors on the average marks. The results demonstrated that a time gap between the theory lecture and the practical demonstration session did not make any difference to student achievement, an observation that was encouraging for the curriculum planners, who could effectively select any day of the week for follow-up lab work after the theory sessions. In the interviews, students reported a preference for not taking both sessions on the same day. They commented that studying the same course for more than 6 h per day led to cognitive overload (Chandler and Sweller 1991) and difficulties in maintaining a high level of concentration. Those students who attained good grades suggested that one of the advantages of having a gap between the theory and practice sessions was that it allowed them time to review the lecture materials and source complementary materials from elsewhere.

The correlation analysis (Table 6) showed that attendance was a significant factor in relation to academic performance, although, statistically, attending every lecture did not make a difference. The positive impact of attendance on achievement has been reported in other studies, for example Gurung et al. (2010), who found attending classes was significant at the p < 0.05 level. The majority of students in this study with low attendance did not achieve higher grades. However, high attendance was not always a predictor of high performance: in some cases individual students with high attendance did not perform well, and this may be explained by other influencing factors that impact on a student’s learning capacity. In the focus group interviews the students also regarded attendance as important. They clearly valued attending the lab sessions and saw them as an important opportunity to ask the tutor questions. In fact, when offered online videos of the lab demonstration, most of them felt that, comparatively, their concentration level was poor when they tried to learn from recorded material. Additionally, they reported that they were not confident in being able to watch a video and absorb all the information. Other work in this area suggests that positive or negative perceptions can be influential on learning outcomes (Cennamo 1993). Students also commented on the sense of motivation they experienced when they observed fellow students working and achieving their goals. This motivational lift has been identified by other researchers as vicarious learning, where observing others learning can be beneficial in developing the metacognitive skill of learning to learn (Bandura 1999; Mayes et al. 2002).

The CAD course in this study was designed around weekly tasks for the students. Those who finished early were free either to continue or to leave the class. The correlation results indicated that attending the class for a longer period of time was statistically significant for the final outcome at the 0.01 level. The positive impact of time on the outcome concurs with the work of Romer (1993) and others (Chan and Shum 1997; Dolton et al. 2003; Kirby and McElroy 2003; Rodgers 2001), who found a positive and significant relationship between class attendance and academic performance. However, it is worth noting that in some exceptional cases, high-scoring individuals actually spent less time in class than the average. In general, the analysis suggests that encouraging students to spend more time in the class will improve their scores, something that could be achieved through careful design of the lectures and lab sessions to hold the students in class for the full duration of the session. Although some students in the interviews believed they were committed enough to work at home, the majority of the students used the time in lab sessions to complete their weekly objectives and take advantage of the expert help on offer. Even confident students wanted to complete their set tasks in the lab sessions to gain direct feedback from the tutors.

The physical layout of the room for the lab sessions (Fig. 1) shows two corner machines, each with a dedicated connection to a multimedia data projector. Tutors typically used these machines to teach from. Some students chose to sit in positions towards the back benches or the last rows of the class, where the screens faced towards the wall. In the focus group interviews, a number of students indicated that these furthest locations (i.e. those with less likelihood of being monitored) were purposefully chosen to reduce potential interaction with tutors. The correlation between seating position and the marks obtained showed that those students on the back benches did not achieve high scores; the front benchers achieved better average grades than the mid-benchers. In general, the focus group interviews revealed that the students preferred to sit in the middle benches to have the best view of the projected screen, even though some students did indicate that they wanted to avoid being asked questions by the tutors. A group of students with good marks did choose the back of the class to avoid distraction from people entering and leaving, and from the students located in the centre of the class. However, students agreed that sitting in the back rows reduced their ability to gain the tutor’s attention. These reported location effects are not new. Kinarthy’s (1975) study on seating behaviour in introductory psychology classes revealed that students in the front and centre communicate more with the teacher. Furthermore, students at the front rated themselves as more intelligent and more liked by the teacher compared to those who chose to sit at the back.

A number of suggestions towards the better organization of the lab were made by the students themselves, and these included:

  • Place an additional screen on the back wall of the class for the rows that face away from the presentation screen. Alternatively, a screen-sharing system would help students avoid having to turn around.

  • Include a system of online notification, on a first-come-first-served basis, for students sitting in the corners of the large rectangular classroom, where pillars made it difficult to gain the tutor’s attention.

The tutors suggested a more radical approach to the room configuration: redesigning the lab as an Active Learning Space along the lines of the new spaces that have evolved at other institutions such as MIT in the USA and Nottingham Trent University in the UK (Beichner et al. 2007; Peberdy 2014).

The results of the correlation analysis (Table 6) showed the statistical contrast between students who appeared to prefer to sit in a random position and those who always preferred to sit with someone they had associated with before. Analysis of the correlation with the learning outcome revealed that students who sat in groups had higher average scores. This is one of the most complex parameters of student behaviour to calculate, and one of the most varied in terms of the reasons why students might choose to sit in a particular group. Peer interaction can be beneficial to learning in small groups (Webb 1989), but there can also be a negative influence through distraction and attending to multiple tasks (Junco 2012; Wood et al. 2012).

In this study we identified three overlapping categories of students who typically preferred to sit with those they already knew:

  1. Students from a similar cultural background, most importantly sharing the same language, grouping to reduce anxiety and mitigate language-barrier effects that might cause stress, as reported by other researchers such as Yeh and Inose (2003);

  2. Students with whom they shared social events. This category may or may not include the first, but it was described as distracting, being more socially orientated, for example around a sports event taking place (Kuh et al. 2010);

  3. Student groups formed on the basis of perceived similar intellectual levels. Students sitting in groups of this category appeared to achieve good marks (Antonio 2004; Kuh et al. 2010).

One issue that was associated with sitting in groups was a reluctance to ask questions, though no specific reason for this anxiety was given by the students who were interviewed. It was found mainly among non-native English speakers but, surprisingly, some local students expressed the same fear. European students disclosed that they were sometimes quiet in the class when dealing with a particular topic not covered in their previous studies. Other reasons given included a perceived inability to gauge the level of their question, which made them anxious about looking foolish in front of their peers. In-class observation, as reported by the tutor, suggests that those students who did ask questions and solicit feedback attained high scores, reflecting evidence presented in the research literature in this area (Hattie and Timperley 2007).

Conclusions

In relation to our original research questions, the collection and analysis of passive data from the learning environment has confirmed that we can: (1) measure factors within the learning environment that affect learning performance and achievement, and (2) that these data can be used to develop analytic algorithms that show promise in providing tutors with an early warning system for identifying students at risk.

The statistical evaluation of data collected in this study has shown that attendance and average time spent on task have a direct relation with the learning outcomes. Seating position in the class and sitting with a particular group of students had a positive impact on performance, whilst differences in the timing between the practical labs and theory classes had no impact on final student outcomes. We also note that performing analytics on the passive data collected from student behaviours and mapping this to interview data provided a powerful combination of quantitative and qualitative analysis, one that can provide deeper insights than a single methodological approach alone (Brannen 1992).

On the basis of the results of this study, certain practical interventions can be recommended for tutors running CAD and other lab-based sessions of the kind described here. These interventions can be designed to mitigate the factors that have been shown to have a negative impact on learning and teaching performance. Distractions and loss of concentration may be addressed by the addition of interactive teaching materials with different learning activities that are interspersed throughout the course and stimulate an environment led by active learning (Bonwell and Eison 1991). Improving attention during the lectures can readily be achieved by using short quizzes during the sessions combined with electronic voting (Guthrie and Carlin 2004; Salmon and Stahl 2005; Schell et al. 2013). Based on data from the initial lectures, if a group of students has high variation in marks, it might be beneficial to rearrange the groups to avoid unnecessary distractions (Subban 2006). For fixed classrooms, previous student seating data may be used to suggest new seating positions for better observation of progress (Perkins and Wieman 2005). In a large classroom with multiple teaching resources, it becomes nearly impossible to identify whether a student feels anxious about asking questions; a centralized system which keeps a record of questions asked per lecture may help to pinpoint the students who require additional support (Marzano et al. 2001).

Overall, the value of the analytics approach being developed here for lab-based design teaching is exciting, offering the potential to use in-session and inter-session data to build up a picture of predicted student performance, allowing timely teaching interventions to be administered as well as promoting student self-regulation (Zimmerman 2008). The reaction from tutors to the dashboards developed thus far has been positive, and the aim is to further enhance these with reference to other research in this area through participatory co-design workshops (Sanders and Stappers 2008).

Finally, the SurreyConnect system is available to any design teaching institution interested in testing its functionality, in particular in relation to extending its use into the area of distance education, which is traditionally a challenging space for design-based courses to compete in (Dosen et al. 2012).