Review

Mass Collaboration and Learning: Opportunities, Challenges, and Influential Factors

Faculty of Science and Technology and CTS-Uninova, Nova University of Lisbon, 2829-516 Caparica, Portugal
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(13), 2620; https://doi.org/10.3390/app9132620
Submission received: 7 May 2019 / Revised: 3 June 2019 / Accepted: 22 June 2019 / Published: 28 June 2019
(This article belongs to the Section Applied Physics General)

Abstract

Learning ecosystems can benefit from mass collaboration, where large numbers of minds collectively drive intellectual efforts to learn in the form of knowledge building and sharing. Mass collaborative learning represents a significant shift away from the traditional teacher-centered approach towards a self-directed model in virtual communities, in which contributing members take on creative roles to maximize their learning and that of their peers. In order to design, implement, and exploit such a learning approach, the influencing constituents should be identified, and appropriate conditions need to be provided. This study systematically reviews recent literature with a view to identifying the relevant constituents and success factors for mass collaboration and learning, namely the type of organizational structures, collaborative learning techniques, adopted technologies, and methods for evaluating both members' performance and the quality of co-created knowledge. To this end, 100 related papers were collected and their findings critically evaluated. The results of the evaluation are then addressed and discussed.

1. Introduction

Significant advances in information and communication technology (ICT), and more specifically the Internet-based solutions that have arisen over the last few years, have opened new avenues to collaborate massively in ways that seemed impossible even a few decades ago. There is a great shift from hierarchical collaboration towards online decentralized models. The number of people across the globe taking part in collaborative initiatives has increased to unprecedented levels. History shows that mass collaboration has been helping organizations and communities to reduce the barriers to starting large projects, and to successfully leverage resources, energy, skills, talents, and knowledge. When large numbers of self-organized members actively participate in collective learning, they can help save money, create a social spirit, increase transparency, outcome ownership, and awareness, and harness cognitive surplus [1]. Mass collaboration is a promising approach for agile knowledge creation and sharing, a phenomenon whose potential for education and learning was successfully demonstrated by Wikipedia.
However, in today's world, where knowledge, information, and data proliferate through new digital tools, it is imperative for users to be well prepared to carefully pick those items that are reliable, true, and healthy. While the volume of online shared materials is astounding, identifying knowledge of high quality is becoming a serious challenge. Unlike printed materials in serious newspapers, magazines, books, and academic libraries, or similar information found in well-reputed radio and television broadcasts, knowledge transferred through the Internet and social media is not regulated for accuracy and quality. That is, anybody can publish anything they wish with very few limitations. People may present their opinion as fact, and some individuals might find it an ideal venue for personal, political, or commercial propaganda, since there is no validated systematic monitoring and assessment procedure to evaluate such distributed knowledge or information. No recognized organization or individual (a kind of watchdog entity) controls, verifies, and/or edits the produced content. At a mass level of collaboration, this situation may increase the risk of rapid and widespread dissemination of false and misleading knowledge or information from unknown sources, which can lead to deleterious effects [2].
The fact is that people are now collaborating more than at any time in human history. There is a growing willingness among communities of all sizes, from emerging to established ones, to reap the benefits of mass collaboration [3]. Despite tremendous achievements in this particular area, the concept and associated mechanisms are still evolving. As such, there are still several ambiguities regarding the mass collaboration and learning process that researchers from different fields of study have to face. For instance, there is still no universally agreed way of evaluating the quality of knowledge, nor a set of corresponding measurement tools. Such a problem remains a major concern. This survey, therefore, provides an overview of mass collaborative projects and initiatives undertaken over recent years, with the purpose of synthesizing findings and clarifying area-related concepts, trends, and open issues, in the hope of identifying research directions for solving the mentioned problems.
In other words, this work aims to foster a better understanding of the instructional, technological, and technical bottlenecks, needs, and potentials that currently exist in this field of study. The rest of the article is organized as follows: the base concepts of the survey are presented in Section 2, where we state our own definitions for both 'mass collaboration' and 'collective intelligence'. The adopted approach and methodology for data collection and analysis are briefly explained in Section 3. The main findings are then introduced in Section 4, addressing the applied organizational structures, adopted technologies, collaborative network models, and methods for evaluating both community members' performance and the quality of co-created knowledge. The article then concludes with a discussion on open issues and directions for further research.

2. Base Concepts

There is no universally agreed single definition of mass collaboration, as the concept can be addressed from a variety of perspectives. Mass collaboration, as a new and emerging area, overlaps in some ways with a number of other collective action-related terms, including collective intelligence, crowd-sourcing, crowd wisdom, peer production, user-powered systems, community systems, social systems, social collaboration, Wikinomics, smart mobs, and human computation [1]. Even though certain general characteristics of mass collaboration are addressed in most proposed definitions (a large number of participants who act collectively), some particular characteristics and features still remain controversial and confusing, e.g., What kinds of problems are tackled? Should technology always be used? If yes, what kind of technology is needed exactly? Do all forms of mass collaboration use a textual medium?
Furthermore, the current literature introduces the idea of mass collaboration from different perspectives, such as knowledge construction [4], innovation [1], organizational sustainability [5], knowledge management, and collaborative learning [1,6]. Table 1 presents some partial definitions in order to gain a better insight into this concept.
The closest notion to mass collaboration among the various overlapping concepts is collective intelligence. These two terms are even used as synonyms by some authors [3], who claim that principles such as openness, peering, sharing, and acting globally make it possible to apply them interchangeably. In order to better discern between mass collaboration and collective intelligence, some related definitions and statements on collective intelligence are summarized in Table 2.
It remains controversial whether mass collaboration and collective intelligence are similar concepts or not, and if not, what differences and similarities exist between these two collective actions. Existing examples show that mass collaboration and collective intelligence in fact have many things in common. In some examples they are practically the same, while in other instances they can be clearly distinguished. Based on: (A) our findings from the literature review, (B) a comparison of these two concepts and their suggested definitions (some of which are addressed in Table 1 and Table 2), and (C) our understanding of some application examples of mass collaboration (listed in Table 3), a number of similar characteristics and differing features can be identified between mass collaboration and collective intelligence, including:
Some major similarities:
  • Both mass collaboration and collective intelligence take place at the group level.
  • Both benefit from diverse capacities, capabilities, and strategically useful information from multiple participants.
  • In both, collaborative and intellectual endeavors occur.
  • In both, ‘general individual intelligence’ turns into ‘general collective intelligence’.
  • In both, the control structure (if any) is decentralized.
  • In both, the location is universally distributed.
  • In both, the knowledge flow shifts from an individual to collective level.
  • In both, the relations are informal.
Some major differences:
  • The size of the community in mass collaboration is, in all cases, large, whereas in collective intelligence it can be small, medium, or large.
  • The collective intelligence communities are not exclusively human (they are also observed within many social animal species, or even communities of machines), but mass collaboration is (so far) solely for human communities.
  • The purpose of mass collaboration is solving complex problems, whereas that of collective intelligence is solving diverse problems, or amplifying and improving outcomes.
  • Due to the size of the community, mass collaboration predominantly occurs over the Internet, whereas in collective intelligence (particularly in small and medium-sized communities) an online connection is not a must, and it may exist without the use of ICT (except in those cases that are online by nature).
  • Mass collaboration is basically mediated by content, but collective intelligence is mediated by social interaction.
  • Mass collaboration relies on collaboration and cooperation, whereas collective intelligence, in addition to these, might also motivate participants via healthy levels of competition.
Given the above, it can be concluded that collective intelligence is not exactly the same as mass collaboration; however, the two concepts overlap considerably. Their relationship is illustrated in Figure 1. In addition, some application examples of mass collaboration are addressed in Table 3.
Taking into account the above-mentioned points, examples, and contributions from the literature, the following definitions are suggested.
Mass collaboration is a kind of collective effort that leads to complex problem solving when a large number of distributed autonomous participants work together, or in parallel, on a single project and share their sharable resources and commonalities. The targeted problem is often considered unsolvable, and/or beyond any single individual's ability, which is why it needs the confluence of different contributions from a variety of backgrounds. This type of collaboration is typically mediated by the contents or objects being created (a kind of stigmergy) and occurs mostly, but not always, over the Internet, using social software and computer-supported collaboration tools.
Collective intelligence is a kind of wisdom and knowledge that emerge from the collaboration or competition of a small to large number of self-organized contributors, and it is used to solve a problem and/or fulfil a task. In principle, collective intelligence leads to mutual recognition and enrichment of contributors. It is mediated by direct social interaction and may exist without the use of ICT.

3. Survey Approach

For this study, a systematic literature review was adopted in an attempt to identify, critically analyze, and integrate the findings of multiple research works. This process aims to establish to what extent existing research has progressed towards illuminating a specific problem; to identify gaps, relations, and inconsistencies in the literature; to synthesize findings and build conceptualizations; and to provide implications for practice [15].

3.1. Research Questions

In this survey, the following research questions (RQs) are addressed:
RQ1.
What are organizational structures in mass collaborative learning?
RQ2.
What are the adopted methods/mechanisms in mass collaborative learning?
RQ3.
What are the adopted technologies in mass collaborative learning?
RQ4.
What are the approaches for evaluating performance in mass collaborative learning?
RQ5.
What are the approaches for assessing the quality of content in mass collaborative learning?

3.2. Search Process and Selection Criteria

With the intention of searching for and choosing relevant papers, databases such as SCOPUS, IEEE Xplore, Web of Science, and Google Scholar were used. As both mass collaboration and mass collaborative learning are emerging paradigms, they are not well structured, and they are not yet represented by focused scholarly publications in the literature. Therefore, we had to include a variety of keywords, such as mass collaboration and learning, collective learning, collaborative learning, collective intelligence and learning, virtual collaborative learning, network learning, and so on.
From the mentioned databases, a large number of related papers was initially collected, but through manual filtering (content scanning) those not directly relevant to the purpose of this study were excluded. From the remainder, 100 target publications (e.g., books, journal articles, dissertations and theses, surveys, and conference materials) within the period of 2010–2018 (to reflect recent trends) were selected for this survey. They were then reviewed one by one to confirm their potential for use and referencing in this study. Afterwards, they were separately summarized and tabulated based on their major information, applied methods, main objectives, and findings. Lastly, they were all analyzed according to the guiding research questions.
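The keyword-and-period screening described above can be sketched as a small script. The record fields and the (abbreviated) keyword list below are hypothetical illustrations of the procedure, not the authors' actual tooling, and such an automated pass would only complement the manual content scanning described in the text.

```python
# Hypothetical sketch of the selection step: keep publications from
# 2010-2018 whose title or abstract matches at least one of the
# survey's search keywords (keyword list abbreviated for illustration).
KEYWORDS = [
    "mass collaboration", "collective learning", "collaborative learning",
    "collective intelligence", "virtual collaborative learning",
    "network learning",
]

def select_papers(papers, start=2010, end=2018):
    """Filter paper records (dicts with 'year', 'title', optional 'abstract')."""
    selected = []
    for p in papers:
        if not (start <= p["year"] <= end):
            continue  # outside the surveyed period
        text = (p["title"] + " " + p.get("abstract", "")).lower()
        if any(kw in text for kw in KEYWORDS):
            selected.append(p)
    return selected

papers = [
    {"year": 2015, "title": "Mass collaboration and learning in MOOCs"},
    {"year": 2008, "title": "Collective learning in firms"},
    {"year": 2017, "title": "A survey of robot swarms"},
]
print([p["title"] for p in select_papers(papers)])
# → ['Mass collaboration and learning in MOOCs']
```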
In order to ensure that the collected papers are properly organized and the analyzed results are reliable, the following strategies were considered:
  • Identifying and using reliable sources: highest quality, most current, complete, and relevant studies.
  • Assessing the collected papers for relevance and quality.
  • Summarizing the evidence and synthesizing the collected data by tabulating the main information of the collected papers.
  • Meticulous data organization.
  • Checking the used sources for probable duplication.
  • Using a structured and clear format for presenting the results of the analysis that will enhance the textual commentary.
  • Interpreting the findings to ensure that the analyzed results can be trusted.

4. Analysis of Affecting Factors

Several factors can directly, indirectly, positively, or negatively influence mass collaborative learning. From our point of view, each of the five factors considered in this study potentially has a key role to play in creating, shaping, managing, and developing mass collaborative learning. Based on the analyzed literature, we consider these factors fundamental issues that need special attention from those who deal with this subject. This perception was inspired by studies on collaborative networks and confirmed by the identified examples.

4.1. Organizational Structures

An organizational structure determines how power, roles, and duties are defined, controlled, and coordinated toward reaching organizational goals. It also specifies the way in which knowledge, information, or data flow across different levels of the organization. Every organization or community certainly needs a structure (even if self-organizing) in order to survive, take action, and grow; otherwise, it may face a kind of chaos, mess, or very little clarity everywhere [16]. Every community should select its structure based on its requirements and priorities. The type of organizational structure implicitly indicates the ways in which internal work can be carried out.

4.1.1. Types of Organizational Structure

Various types of organizational structures can be defined to meet the requirements of an organization or community. Some authors [17] categorize them into eight groups: (1) line organizational structure, (2) staff or functional authority organizational structure, (3) line and staff organizational structure, (4) committee organizational structure, (5) divisional organizational structure, (6) project organizational structure, (7) matrix organizational structure, and (8) hybrid organizational structure. Organizational structures are also classified by Wikipedia into six categories: (a) pre-bureaucratic structure, (b) bureaucratic structure, (c) post-bureaucratic structure, (d) functional structure, (e) divisional structure, and (f) matrix structure. In another classification [18], organizational structures are sorted into five main classes: (1) simple form, (2) machine bureaucracy, (3) professional bureaucracy, (4) divisionalized form, and (5) adhocracy.
This diversity of structures shows that, in the long run, different organizations or communities can take advantage of different structures. However, no universal organizational structure can be recommended for every organization or community, nor is there a clear way to select the best structure.

4.1.2. Organizational Structure and Mass Collaboration

There is a vast amount of literature on organizational structures in various contexts. For instance, [19] discussed the importance of organizational structure for collaboration between sales and marketing, [20] assessed the influence of the technological knowledge base on the organizational structure, and virtual collaboration in a matrix organization was discussed by [21].
In the past, the structures of organizations or communities were mostly designed for effectiveness and efficiency, whereas nowadays they are designed for agility, speed, and adaptability, to be able to compete and win in today's global competitive environments. As organizations or communities become more and more digital-based, and as projects are increasingly performed collaboratively, they also face an imperative to redesign their structures in order to learn more rapidly, move faster, quickly respond to demands, and adapt to the characteristics of new workforces and workplaces. As the business environment, customer needs, technology capabilities, and the nature of work in organizations and communities are likely to change, the organizational structure needs to reshape as well, in a deliberate and strategic way. In this sense, designing structures for adaptability is a shift away from traditional organizational structures, such as hierarchical, centralized, and bureaucratic models, towards unconventional models where projects are fulfilled collectively by networking a variety of self-directed participants [22].
However, thus far there have been very few attempts to report on the role of organizational structures in the context of mass collaboration and learning. Furthermore, there is no clear evidence in the literature that shows how mass collaborative projects can define, design, implement, and develop appropriate structures.
Nevertheless, we may gain some insight into what kinds of organizational structures have a greater chance of adoption in mass collaborative learning projects if we review the structures used in a closely related area. For this purpose, the taxonomy presented in [23] was chosen, as it fits the type of structures applied by the various virtual communities in the collected papers. As depicted in Table 4, two main forms of structures in networks—collaborative networked organizations and ad-hoc collaboration forms—can be found at the root of this taxonomy.
Taking this taxonomy into account, an analysis of the collected papers shows that the issue of organizational structure is addressed to some extent in 32 of them. The majority of these (88%) consider some form of collaborative networked organization, and the rest (12%) ad-hoc collaboration. Among the long-term strategic network structures, the PVC was the most applied form (19%), whereas the business ecosystem was the least mentioned (3%). Regarding goal-oriented networks, the virtual team had the highest percentage of occurrence (22%) among all introduced structures. In ad-hoc collaboration, the most mentioned structure is mass collaboration (9%). Notably, the type of organizational structure applied in three of the collected papers falls between the VBE and PVC categories in the above taxonomy (a kind of hybrid model). The types and percentages of organizational structures used in the collected papers are depicted in Figure 2. Besides, examples of addressed structures and corresponding references are presented in Table 5.
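A distribution of this kind can be reproduced with a simple frequency count over the coded papers. The tallies below are illustrative placeholders chosen to be consistent with the percentages reported above (assuming one coded structure per paper out of the 32); they are not the authors' actual coding data.

```python
from collections import Counter

# Illustrative coding: 32 papers, each labeled with one organizational
# structure; the percentages are frequencies over the coded set.
# 7/32, 6/32, 3/32, and 1/32 round to the reported 22%, 19%, 9%, and 3%.
codes = (["virtual team"] * 7 + ["PVC"] * 6 + ["mass collaboration"] * 3
         + ["business ecosystem"] * 1 + ["other network forms"] * 15)

counts = Counter(codes)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {100 * n / total:.1f}%")
```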
From the findings, it can also be added that as the level of collaboration shifts from small and medium size to large scale, the structures should be reconfigured accordingly. If a community in a mass collaborative project intends to acquire more knowledge, not only does the organizational structure need to be adequately dynamic, but the members' network positions should also be well matched with the organizational perspective. Nevertheless, in this context, there are some important issues that remain unaddressed, including:
  • As mass collaboration is a form of decentralized and self-directed action, how and by whom should its structure be defined, designed, developed, and coordinated?
  • How can all participants be well prepared for a defined task that needs to be accomplished?
  • How can participants with similar abilities work effectively together on specialized tasks (differentiation by specialization)?
  • Can recommended structures such as Holacracy (the community becomes a hierarchy of self-directed and self-organized teams, governed by a constitution) [52], or Flatter (unlike the traditional hierarchy, a flatter structure opens the lines of communication and collaboration, and there are no job titles, seniority, managers, or executives) [53], be suitable alternatives for mass collaborative learning projects?
In a nutshell, considering background knowledge from the area of collaborative networks, community learning through mass collaboration could be helped if existing models of organizational structures for long-term strategic networks were extended to allow more fluid borders, and if new roles, incentives, and internal subgroups were defined to focus on learning and knowledge generation. Furthermore, by considering the organizational structures of successful examples of mass collaboration, picking out their most suitable features and characteristics, and integrating them into a single framework, we expect to gather hints for a general organizational structure for mass collaborative learning purposes. In other words, the process combines background knowledge from organizational models of collaborative networks with evidence from promising emerging cases of mass collaboration.

4.2. Methods and Mechanisms in Collaborative Learning

Collaborative learning, in a broad sense, is an educational approach for learning and teaching in which learners create a group (two or more people) working together by face-to-face conversation or over the Internet, with the purpose of solving a problem, completing a project, creating something new, and/or maximizing their own and each other’s learning. Various Collaborative Learning Techniques (CoLTs) have been developed to support this process. These techniques can provide good directions for improving the quality of teaching and learning. Furthermore, they can deliver helpful guidelines to teachers and learners for being more productive. Each of these techniques is designed for adoption in a different educational context [54].
These techniques have succeeded in providing a number of social and psychological benefits: fostering more positive attitudes toward ourselves, our group, and collective learning; exposing learners to different viewpoints; and teaching how to collaborate in a group and how to enhance group productivity, to name a few [55]. From a cognitive perspective, these techniques can also ensure that knowledge is jointly built, not just shared [56]. The literature has documented ample positive results regarding the effectiveness of CoLTs in turning learners into critical thinkers and reflective learners, and in boosting learners' capabilities in decision-making and problem-solving [57].

Collaborative Learning Techniques and Mass Collaboration

The conditions for applying CoLTs in mass collaborative learning differ from those in general collective learning due to some specific characteristics. For instance, the group size (the prominent feature of mass collaboration) shifts from a small or medium number to a large number of learners; learning shifts from knowledge acquisition (at the individual level) to knowledge building (at the group level); communication and collaboration among distributed learners can be facilitated by digital technologies; and the co-created product in a mass collaborative learning project is jointly produced in the form of a virtual artifact. Lastly, community satisfaction, rather than mere personal satisfaction, is the ultimate aim of learners' participation [1].
In recent years, several research works have studied contexts in which mass collaborative learning can take place (e.g., Wikipedia [27], innovative socio-technical environments [36], MOOCs [1], Yahoo! Answers [10], etc.). Different aspects have also been evaluated (e.g., the role of learners and the way of organizing them in the mass collaboration process [1], qualifying and measuring the learning output [58], and the role of collaborative methods [1]). However, very few researchers have thus far studied the role of CoLTs in mass collaborative learning projects, nor has a comprehensive list of CoLTs adopted in this context been presented. Therefore, not enough evidence exists in the literature to clarify what kinds of techniques are exactly required and how they can effectively support learners in mass collaborative projects.
In spite of such limitations, in order to identify the most promising techniques for the purpose of learning in mass collaboration, the taxonomy recommended in [54] was selected. Among existing classifications, this taxonomy was chosen because it is more comprehensive than the others in terms of the number of proposed categories and contained techniques, and it covers all the techniques addressed in the collected papers. It encompasses five main categories of general learning activities, namely: (1) discussion, (2) reciprocal teaching, (3) problem solving, (4) graphical information organizing, and (5) collaborative writing. Each category can be used for different purposes, and each includes a set of techniques. Each technique is defined for a specific group size, period, and amount of time to be devoted to the task. To better understand the concepts, categories, and techniques, some short descriptions are presented in Table 6.
Among the collected papers, only 23 explicitly mention CoLTs. For this group, techniques for discussion are the most applied (43.44%) and within this category, paired annotation techniques are the most used (21.73%). In contrast, techniques for reciprocal teaching received the least attention (8.69%). Techniques that focus on collaborative writing are the second most applied (21.80%). Techniques for problem solving and techniques that use graphical information organizers received equal attention (13.04%) by those papers. More details about the type and percentage of applied techniques are shown in Figure 3. Moreover, examples of adopted CoLTs and corresponding references are addressed in Table 7.
From the performed analysis it can be also concluded that:
  • Although the categories and techniques mentioned above were developed for the purpose of collaborative learning in general, some of them—e.g., peer editing, paired annotation, group problem solving, etc.—have features with potential for use in mass collaborative learning projects. On the other hand, techniques such as the dialogue journal and the three-step interview are not able to serve this purpose.
  • Techniques for discussions seem to be relatively convenient for learning in mass collaborative projects.
  • Techniques for reciprocal teaching (except note-taking pairs) do not seem to have as much application as techniques for discussion in mass collaborative learning projects.
There are also some open issues in this context that need further investigation, for example:
  • What are the ways in which CoLTs can be effectively adopted to leverage mass collaborative learning?
  • How often and how much can CoLTs help in mass collaborative learning?
  • How can CoLTs unify various analytical issues and make them easily accessible for learners in mass collaboration?

4.3. Adopted Technologies in Mass Collaborative Learning

Over the years, various research works [78] from different fields of study have addressed the importance of utilizing supportive tools in collaborative learning, namely computer-assisted instruction, educational computing, educational technology, virtual learning, distributed learning, synchronous and asynchronous learning, Computer-Supported Collaborative Learning (CSCL), etc. In recent years, considerable effort has been made to incorporate emerging tools into collaborative learning. Some researchers, for example, pointed out the importance of using blogs to develop collaboration among students [79]. Other studies advocate the plausibility of applying wiki tools to improve learning [80]. However, [81] also argued that "when an organization's technological knowledge is deep, collaboration may be discouraged, due to the greater risks of knowledge being disclosed and the lower chances of benefiting from learning from partners".

Supportive Tools and Technologies Used in Mass Collaboration and Learning

New digital tools have a significant facilitating role to play when hundreds of thousands of learners from around the world, who are geographically, temporally, and conceptually dispersed, want to learn together. The advantages of these tools are undeniable; for instance, they can empower countless learners to access educational resources anytime and anywhere, equip them for easy communication with other peers, and provide opportunities for all who are eager to exchange their knowledge and experiences [3]. In line with the adopted technologies, some interactive courses such as MOOCs suggest new pathways for delivering learning materials and supporting community connections and interactions. MOOCs are massive open online forums for independent learning, and they can also serve as complementary tools in support of mass collaborative learning [1]: by participating in and taking advantage of MOOCs, learners can utilize their findings in mass collaborative learning projects. The literature also shows that tools such as CSCL [1], social media [82], and the Internet [60] are about deriving value from collective efforts, and thus can be conveniently adopted for collaborative learning at a large scale. For such a level of learning, which is intrinsically linked to social interaction, digital tools are a must [1]. Table 8 includes some examples of supportive tools, technologies, services, and web sites in mass collaboration.
Even though various references in the literature address the importance and advantages of technology in support of collective learning, truly applicable tools for mass collaboration are still evolving, and as such, their appropriateness for self-directed learning still needs to be verified. In order to gain better insight into potential tools for this purpose, the Project-Based Collaborative Learning Model [86] was selected, as the model that best fits the nature of mass collaborative learning, and compared with the tools adopted in the collected papers. As its name implies, this model is a synthesis of project-based and collaborative learning approaches. Project-based learning encourages learners to engage in activity investigation, problem solving, decision making, and artifact design. Collaborative learning involves groups of learners working effectively together to understand meanings, find solutions, complete a task, and achieve common goals. This model consists of seven distinct phases, and some supportive tools can be classified under each phase, see Figure 4.
As shown in this model, the process is divided into separate phases with a defined sequence among them in order to make the model easy to understand, although this does not mean that collaboration is basically a linear process. To facilitate the understanding of these phases, Table 9 presents brief descriptions.
Among the collected papers, 35 address technological support for collaboration and learning. The Communication and Resource Management phases attracted the most attention (31.40% and 28.52%, respectively), whereas the Consensus Building and Presentation & Archiving phases were equally the least covered (2.85% each). Besides, 11.40% of the applied tools belong to the Team Definition & Participants and Project Management phases combined. Among all suggested tools, Wiki, which belongs to the Co-Creation & Ideation phase, was the most applied (22.99%). Figure 5 gives more details about the type and percentage of tools applied in the collected papers. Furthermore, examples of adopted tools and corresponding references are addressed in Table 10.
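As a rough illustration, the classification of supportive tools under model phases can be represented programmatically. The phase names below follow the model; the tool assignments and the resulting percentages are hypothetical placeholders for illustration, not the survey's actual data:

```python
# Illustrative sketch: map each phase of the Project-Based Collaborative
# Learning Model to example supportive tools, then compute each phase's
# share of all listed tools. Tool lists here are invented examples.
phases = {
    "Team Definition & Participants": ["profile pages", "member directory"],
    "Communication": ["discussion board", "chat", "mailing list"],
    "Consensus Building": ["polls"],
    "Project Management": ["calendaring", "milestones"],
    "Resource Management": ["file sharing", "bookmarking"],
    "Co-Creation & Ideation": ["wiki", "blog"],
    "Presentation & Archiving": ["web publishing"],
}

total = sum(len(tools) for tools in phases.values())
shares = {phase: round(100 * len(tools) / total, 2)
          for phase, tools in phases.items()}
```

Such a mapping makes it straightforward to tabulate, for any set of collected papers, how attention is distributed across the seven phases.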
From the performed analysis it can also be concluded that:
  • Resource management tools can provide professional opportunities for mass users to access, evaluate, use, and share their resources properly.
  • Apparently, some suggested tools (e.g., routing, milestones, and calendaring) are not used in mass collaborative projects as much as Wikis, discussion boards, and blogs.
There are some issues in this context that have not been adequately covered, including:
-
How can a group promote adoption and develop competence in designing technology-mediated mass collaborative projects?
-
How can a massive load of textual material be effectively processed by intelligent tools?
-
How can issues such as, privacy and security arising from introducing new tools be dealt with?
-
How can needed information or training be provided for those in the community who do not have enough technical knowledge?

4.4. Evaluating Learners’ Performance

In its simplest terms, performance evaluation requires learners to demonstrate that they have acquired specific knowledge, capabilities, and proficiencies by doing or creating something. The goal is to gauge the extent to which learners add value to the organization or community, and whether their achievement falls below or above defined norms. Any standard evaluation system is mainly based on a set of collected data, translated into a specific rating scale [98].
Over recent decades, several authors have assessed collaborative processes from different points of view. A number of scholars have recommended diverse procedures (e.g., the use of behaviorally anchored scales, grading, benchmarking, self-efficacy, etc.) for evaluating individuals’ performance in group projects [50]. In this regard, various techniques for performance evaluation have been suggested, such as peer evaluation, 90°, 180°, and 270° feedback, balanced scorecard, mixed standard scale, human resource costing and accounting, forced distribution method, behavioral observation scale, paired comparison, electronic performance monitoring, confidential reports, etc. [99,100]. The importance of performance evaluation in an education and learning context has been receiving growing attention from researchers [28,67,101]. The relationship between performance and learning is inseparable: without learning, learners are not able to improve their performance, while, on the other hand, improvement in performance can yield significant learning [50].

4.4.1. Evaluating Learners’ Performance and Mass Collaboration

So far, very few research works have been conducted to specifically evaluate learners’ performance in mass collaborative projects. Only a few papers were found addressing related topics. For instance, [102] explores how the context of an organizational workgroup affects the relationship between group diversity and various performance outcomes. The work described in [101] evaluates distance factors affecting the performance of distributed collaboration. In [99], the authors present trust-based techniques with the aim of assisting instructors in performance appraisal in open and collaborative environments. Because of this scarcity of relevant literature, we first reviewed several proposed methods for performance evaluation in closely related areas, and then compared them with applied methods in the collected papers.

4.4.2. Performance Evaluation Methods

A number of methods for performance evaluation, along with some concrete experiences, have been reported that are appropriate for different situations and characteristics of an organization or community. Based on [103], there are three performance evaluation categories: (1) absolute standards (learners’ performance is compared to a standard, and the evaluation is independent of others); (2) relative standards (learners’ performance is compared against that of others); and (3) objective-based (learners are assessed on how well they fulfill a specific set of objectives). This taxonomy can be extended with three techniques borrowed from [98,104], see Figure 6. For further clarification, some explanations of each method are presented in Table 11.
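The contrast between the three categories can be sketched in code. The scoring functions, the pass threshold, and the sample scores below are illustrative assumptions, not methods prescribed by [103]:

```python
# Minimal sketch of the three performance evaluation categories.

def absolute_standard(score, threshold=70):
    """Absolute standards: compare a learner's score to a fixed standard,
    independently of how other learners performed."""
    return "pass" if score >= threshold else "fail"

def relative_standard(scores):
    """Relative standards: rank learners against each other (1 = best),
    as in the ranking method."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {learner: rank + 1 for rank, learner in enumerate(order)}

def objective_based(objectives_met, objectives_total):
    """Objective-based: fraction of an agreed set of objectives fulfilled."""
    return objectives_met / objectives_total

# Hypothetical scores for three learners:
ranks = relative_standard({"ana": 88, "ben": 72, "eva": 95})
# eva is ranked 1, ana 2, ben 3
```

Note that the same raw score can yield different verdicts under the three categories, which is why the choice of category should follow the learning objectives.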
Considering the collected papers, only 25 focused on methods of performance evaluation. The majority of these papers preferred to use absolute standards methods, which have the highest percentage of application (48%) among the suggested categories, followed by the added methods borrowed from [98,104] (16%), while objective-based methods have the lowest application (4%). Relative standards methods were applied in 32% of these papers. Among all proposed methods, the ranking method was the most applied (24%). More details are illustrated in Figure 7. Some adopted methods and corresponding references are also addressed in Table 12.
From the performed analysis it can also be concluded that:
  • The focus of performance evaluation should be related to the learning objectives.
  • For all learners, it is important to gain not only a sense of responsibility for their own performance but also a sense of community learning, which helps them see how their individual pieces of work affect the way knowledge flows.
  • Developing a culture of openness to evaluation, together with a great deal of active engagement in this process, is essential for all learners.
  • Co-created and co-designed evaluation resulting from group agreement might be more promising.
  • Giving all learners access to the results of performance evaluation might provide a basis for improvement.
There are also several issues in this context that remain a challenge, such as:
-
What foundations need to be built for incorporating performance evaluation into mass collaboration?
-
When the learning activity takes the form of a large-scale project, by whom, when, and how should performance evaluation be conducted?
-
How can we ensure that the evaluation is sufficiently objective, comprehensive, and fair, and truly reflects the learner’s performance and contribution at mass level?

4.5. Quality of Knowledge Building and Learning

Knowledge building can be defined as the process of producing, testing, and continually improving an idea or conceptual artifact. At a collective level, knowledge is created when learners work and interact in a social context. In this case, learners investigate a topic, ask questions, conduct research, and jointly evaluate their progress [123]. Traditionally, knowledge was created and developed by experts, but there is now a drastic transition towards collective knowledge building based on the collaboration of lay citizens [124]. Knowledge is shared through the situations in which learners operate, and learning is ‘located’ in such situations, as well as in networks of distributed learners. At the individual level, a learner works alone and increases knowledge at his/her own level, whereas collective learning means sharing with others whatever the learner has acquired individually [125].

4.5.1. Evaluating the Quality of Generated Knowledge

Undoubtedly, it is vital in this new context to be able to evaluate how trustworthy the acquired knowledge is, because knowledge is power, serving as a basis for making choices and decisions, and for understanding the world. Naturally, unreliable knowledge may lead to failure at both individual and collective levels. Reliability refers to the believability of knowledge and its source [126]. Evaluating the quality and reliability of generated knowledge is a multidimensional challenge by its nature, and has become a central challenge in various fields of study, e.g., organization management [127], web information systems [128], etc.
Various research works have been conducted from different perspectives to find practical ways of evaluating the quality of knowledge. For instance, regarding the reliability of online content, a number of questions can provide some guidelines for this evaluation:
(1)
Who is the author or publisher (individual or organization)?
-
Does the author or publisher have high expertise with good qualifications and reputation?
-
Can they be contacted for discussion or clarification?
(2)
What can be said about the structure, style, context, content, and completeness of the knowledge that is provided by the author?
-
What is implied by the content?
-
Is there any evidence that supports it?
(3)
When was the knowledge published?
-
Is it up to date?
-
Is a publication and expiration date provided?
(4)
Where else can the provided knowledge be found?
-
Is the knowledge authentic?
-
Is the piece of knowledge original or was it copied?
(5)
Why was the provided knowledge published?
-
What are the biases, assumptions, perspectives and opinions of the author?
-
Who are the intended audiences for the published knowledge?
Furthermore, [129] introduces two different ways of quality evaluation (local and global measuring) as part of an information fusion system; [130] offers a contextual model of web site evaluation (that focuses on information external to a particular site); and [131] recommends a checklist model that focuses on assessing the internal characteristics of web sites (e.g., author identification, checking the last update, etc.). The literature shows that many researchers (e.g., [2,130,132,133]) concur that for evaluating the quality of generated knowledge, learners need to critically take into account some classic criteria, listed in Table 13.
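A checklist model of this kind can be sketched as a weighted scoring function. The criterion names below echo common checklist criteria in this literature, but the specific weights, ratings, and aggregation are hypothetical illustrations, not a standard taken from the cited works:

```python
# Hedged sketch of checklist-style reliability scoring for online content.
# Criterion weights are assumed for illustration and should sum to 1.
CRITERIA_WEIGHTS = {
    "authority": 0.25,    # who is the author/publisher; credentials, reputation?
    "accuracy": 0.25,     # is the content supported by evidence?
    "currency": 0.20,     # is it up to date; is a publication date provided?
    "objectivity": 0.15,  # are the author's biases and purposes apparent?
    "coverage": 0.15,     # is the treatment complete; is it found elsewhere?
}

def reliability_score(ratings):
    """Combine per-criterion ratings in [0, 1] into a weighted overall score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0.0) for c in CRITERIA_WEIGHTS)

# Hypothetical ratings for one piece of online content:
score = reliability_score(
    {"authority": 1.0, "accuracy": 0.8, "currency": 0.5,
     "objectivity": 1.0, "coverage": 0.6}
)
```

A community could agree on its own criteria and weights, which keeps the evaluation transparent and repeatable across contributors.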

4.5.2. Evaluating the Quality of Knowledge in Mass Collaboration

There is a well-developed body of research on evaluating various aspects of data, information, and knowledge management in different contexts (e.g., management science, medicine, multimedia, information systems engineering) and different businesses. However, for mass collaboration in particular, we could not find any substantial study that provides a comprehensive list of recommended methods that can be broadly used for evaluating the quality of created knowledge. Moreover, no systematic study in this context seems to have been carried out yet to address this important issue. Perhaps the exception is Wikipedia, a prominent example of mass collaboration, from which we can gain some insights that might guide other mass collaborative cases.
Wikipedia is an open-access, large-scale, and continuously evolving collaborative knowledge platform that harnesses the efforts of thousands of volunteer contributors. A number of research works have appraised the quality of knowledge in Wikipedia from different points of view. For instance, [134] assesses Wikipedia content construction and qualification from the perspective of participatory journalism; [135] recommends a technique called ‘history flow’ to visualize Wikipedia content appraisal by means of article version histories; and [136] appraises the quality of Wikipedia articles with lifecycle-based metrics (based on tracking changes in editing intensity throughout the entire existence of an article).
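The lifecycle-based idea of tracking editing intensity over an article's lifetime can be sketched from revision timestamps. The revision history and the 30-day window below are invented for illustration and do not reproduce the actual metric of [136]:

```python
# Sketch of a lifecycle-based editing-intensity profile: count revisions per
# fixed-length window from an article's first edit to its last. The sample
# history and window length are illustrative assumptions.
from datetime import datetime, timedelta

def editing_intensity(revisions, window_days=30):
    """Return the number of revisions falling in each consecutive window."""
    revisions = sorted(revisions)
    start, end = revisions[0], revisions[-1]
    window = timedelta(days=window_days)
    counts = []
    t = start
    while t <= end:
        counts.append(sum(1 for r in revisions if t <= r < t + window))
        t += window
    return counts

# Hypothetical revision history: a burst of early edits, then a late one.
history = [datetime(2019, 1, 1), datetime(2019, 1, 2),
           datetime(2019, 1, 15), datetime(2019, 3, 5)]
intensity = editing_intensity(history)
```

Sustained or renewed editing activity, as opposed to an early burst followed by silence, is the kind of lifecycle signal such metrics exploit.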
In fact, there are at least three types of processes for quality assurance in Wikipedia, including:
(1)
The process of evaluating the quality of an article by direct actions like, modifying, changing, or deleting the status;
(2)
The process of Wikipedia editor’s performance evaluation and selection of quality assurance agents; and
(3)
The process of creating and maintaining the work coordination artifacts of Wikipedia.
Besides, to assess the quality of featured articles in Wikipedia, eight major criteria need to be considered—namely, (1) accuracy, (2) comprehensiveness, (3) stability, (4) being well written, (5) being uncontroversial, (6) compliance with Wikipedia standards, (7) appropriate style, and (8) appropriate images. In addition, some popular methods (e.g., nominating featured articles, peer review, feedback, and reputation mechanisms) are also considered by the Wikipedia community [137]. In order to compare these criteria and methods with those proposed in the collected papers, we summarize them in Table 14 along with brief explanations and respective references.
From the collected papers it can be concluded that:
  • User feedback and expert evaluation were the most suggested methods for evaluation.
  • Top contributors (e.g., community managers, leaders) are also important, as they could bring their rich experiences to improve the process of evaluation.
  • Evaluation seems to be more effective when built on a combination of machine learning and human work.
  • When the results of evaluation are published for all learners, this can be helpful not only for error detection but also for error correction.
  • There is a pressing need for learners to be well trained in taking advantage of knowledge evaluation.
From our analysis, the evaluation of the quality of knowledge in mass collaboration contexts is still at an embryonic stage. Therefore, various issues remain unaddressed, for example:
-
While endless knowledge is disseminated through the Internet and social media from diverse sources, and little is known about most of it, how can we really determine the quality of created and shared knowledge that might be readily altered, misrepresented, plagiarized, or built up anonymously under false claims?
-
How can communities gain a common sense for perceiving the expectations of evaluation?
-
How can we motivate learners to contribute to the evaluation process?
-
How can learners be enticed to give feedback?
-
What kinds of feedback should be solicited?
-
How can we combine collected feedback?
-
How can participants formulate and distribute the data for evaluation?
Last but not least, in mass collaborative learning, collective consensus and group agreement play a main role in almost all functional aspects, particularly in the process of evaluation, in which decisions about a subject’s merit, worth, and significance are made by formulating a group judgment using previously defined criteria. In other words, group agreement, by benefiting from mutual opinions, a pool of possessed knowledge and skills, and the sharing of diverse perspectives, helps the community make better and more effective decisions about the quality of performances and/or materials.
Given the above, and considering the five perspectives of analysis of factors affecting mass collaboration and learning, it can be clearly seen that none has yet achieved enough maturity to provide us with strong evidence and sound guidelines. In order to increase awareness of these influential factors, more contributions and wider investigation are certainly required. However, considerable advances in various examples of mass collaborative learning projects and environments highlight the growing importance of the area.
In addition, the findings of this research work lead us to propose the following definition:
Mass Collaborative Learning (MCL) occurs when a large number of scattered and self-directed contributors share their partial knowledge, information, data, and experiences with each other (typically by means of ICT platforms) in order to learn something new. In this collective action, knowledge is jointly and continually created, shared, and developed.

4.6. Challenges

  • A major challenge nowadays concerns defining a mechanism for automatically and accurately evaluating the trustworthiness, reliability, validity, and credibility of large sets of co-created knowledge that are disseminated across multiple locations, delivered from time to time by unknown, non-pertinent, or even malicious agents.
  • There is a global need to design a comprehensive and standard performance evaluation approach for mass collaborative learning that embraces a wide range of performance assessment methods, from rather traditional to highly novel and yet-untested approaches.
  • There have been few attempts to date to conceptualize an accepted model for knowledge production in mass collaboration.

5. Conclusions

Mass collaboration is a complex phenomenon and a new topic, particularly in learning contexts. In mass collaborative learning, the focus of attention is more on the group level (knowledge building) than on the individual level (knowledge acquisition), though the two levels may mutually influence each other. In this process, learners do not just create, revise, remix, and share contents with others; they even take a role in building a spirit of collective effort.
However, this form of collective learning faces multiple challenges. For instance, regarding the assessment of created knowledge (which spans a wide variety of degrees of quality), general and particular criteria for determining the value of co-created content need to be defined; there is also a need for effective methods to help community contributors process incoming information and construct knowledge in the form of learning.
From the conducted literature survey, it is evident that research on mass collaboration, and particularly mass collaborative learning, is still in its infancy, and this body of knowledge is not yet well developed. This was indeed the main limitation of this study, as it left us unable to find a sufficient number of related works. Therefore, this research was carried out, first, to take a small step towards filling this gap, and second, to identify the relevant factors involved in mass collaboration and learning.
This survey is part of an ongoing study conducted to gain a better understanding of the area and to obtain the fundamental information needed for the rest of the study. Future work will focus on two steps. First, the organizational structures of some successful examples of mass collaboration will be separately evaluated, with the aim of finding a suitable organizational structure for mass collaborative learning. Second, methods and possible solutions for evaluating the quality of material will be assessed in depth, with the purpose of identifying an appropriate mechanism that helps learners distinguish unreliable content in mass collaborative learning projects.

Author Contributions

Conceptualization and methodology, M.Z. and L.M.C.-M.; Literature search and data extraction, M.Z.; Methods & Analysis, M.Z. and L.M.C.-M.; Text Revision, L.M.C.-M. and M.Z.

Funding

This work was funded in part by the Center of Technology and Systems of UNINOVA and the Portuguese FCT-PEST program UID/EEA/00066/2019.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cress, U.; Moskaliuk, J.; Jeong, H. Mass Collaboration and Education; Computer-Supported Collaborative Learning Series: New York, NY, USA, 2016. [Google Scholar]
  2. Fritch, J.W.; Cromwell, R.L. Evaluating internet resources: Identity, affiliation, and cognitive authority in a networked world. J. Am. Soc. Inf. Sci. Technol. 2001, 52, 499–507. [Google Scholar] [CrossRef]
  3. Tapscott, D.; Williams, A.D. Wikinomics: How mass collaboration changes everything. Int. J. Commun. 2008, 58, 396–405. [Google Scholar]
  4. Richardson, M.; Domingos, P. Building large knowledge bases by mass collaboration. In Proceedings of the International Conference on Knowledge Capture —K-CAP ’03, Sanibel Island, FL, USA, 23–25 October 2003; pp. 129–137. [Google Scholar]
  5. Campbell, A.; Hurry, J.; Zidov, M. Designing an organisation to activate cross-sectoral mass collaboration towards sustainability. Master Thesis, Blekinge Institute of Technology, Karlskrona, Sweden, 2011. [Google Scholar]
  6. Elliott, M.A. Stigmergic Collaboration A Theoretical Framework for Mass Collaboration. Ph.D. Thesis, Center for Ideas, Victorian College of the Arts, University of Melbourne, Melbourne, Australia, October 2007. [Google Scholar]
  7. Fallis, D. Introduction: The epistemology of mass collaboration. Episteme 2009, 6, 1–7. [Google Scholar] [CrossRef]
  8. Potter, A.; McClure, M.; Sellers, K. Mass collaboration problem solving: A new approach to wicked problems. In Proceedings of the 2010 International Symposium on Collaborative Technologies and Systems CTS 2010, Chicago, IL, USA, 17–21 May 2010; pp. 398–407. [Google Scholar]
  9. Panchal, J.H.; Fathianathan, M. Product realization in the age of mass collaboration. In Proceedings of the IDETC/CIE 2008 ASME 2008 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Brooklyn, NY, USA, 3–6 August 2008. [Google Scholar]
  10. Doan, A.; Ramakrishnan, R.; Halevy, A.Y. Mass Collaboration Systems on the World-Wide Web. Commun. ACM 2010, 54, 86–96. [Google Scholar] [CrossRef]
  11. Bonabeau, E. Decisions 2.0: The power of collective intelligence. MIT Sloan Manag. Rev. Camb. 2009, 50, 45–52. [Google Scholar]
  12. MIT Management SLOAN. Climate Colab Crowdsources Solutions for Global Problems. 2014. Available online: http://mitsloan.mit.edu/newsroom/articles/climate-colab-crowdsources-solutions-for-global-problems/ (accessed on 31 January 2018).
  13. MIT Center for Collective Intelligence. Examples of Collective Intelligence. 2011. Available online: https://scripts.mit.edu/~cci/HCI/index.php?title=Examples_of_collective_intelligence (accessed on 31 January 2018).
  14. Silva, T.H.; de Melo, P.O.S.V.; Viana, A.C.; Almeida, J.M.; Salles, J.; Loureiro, A.A.F. Traffic condition is more than colored lines on a map: Characterization of Waze alerts. Lect. Notes Comput. Sci. (Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform.) 2013, 8238, 309–318. [Google Scholar]
  15. Baumeister, R.F.; Leary, M.R. Writing narrative literature reviews. Rev. Gen. Psychol. 1997, 1, 311–320. [Google Scholar] [CrossRef]
  16. Galbraith, J.R. Organization design. In Handbook of Organizational Behavior; Lorsch, J.W., Ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 1987; pp. 343–357. [Google Scholar]
  17. Chand, S. 8 Types of Organisational Structures: Their Advantages and Disadvantages. Advertisements. 2016. Available online: http://www.yourarticlelibrary.com/organization/8-types-of-organisational-structures-their-advantages-and-disadvantages/22143 (accessed on 31 January 2018).
  18. Mintzberg, H. The structuring of organizations: A synthesis of research. Adm. Sci. Q. 1980, 25, 547–552. [Google Scholar]
  19. Meunier-FitzHugh, K.L.; Piercy, N.F. The importance of organizational structure for collaboration between sales and marketing. J. Gen. Manag. 2008, 34, 19–36. [Google Scholar]
  20. Zhang, J.; Baden-Fuller, C. The influence of technological knowledge base and organizational structure on technology collaboration. J. Manag. Stud. 2010, 47, 679–704. [Google Scholar] [CrossRef]
  21. Kates, A.; Erickson, P.J. Virtual collaboration in a matrix organisation. In The Handbook of High Performance Virtual Teams; Jossey-Bass: San Francisco, CA, USA, 2008; pp. 619–651. [Google Scholar]
  22. Yaragarla, R. Scenarios of Collaborative Approach in Mass Collaboration. Open Sourse Framework for Enterprise Application. 2016. Available online: http://www.workmonkeylabs.com/scenarios-of-collaborative-approach-in-mass-collaboration/ (accessed on 31 January 2018).
  23. Camarinha-Matos, L.M.; Afsarmanesh, H.; Galeano, N.; Molina, A. Collaborative networked organizations—Concepts and practice in manufacturing enterprises. Comput. Ind. Eng. 2009, 57, 46–60. [Google Scholar] [CrossRef]
  24. Toprak, E.; Genc-Kumtepe, E. Cross-cultural communication and collaboration: Case of an international E-learning project. Eur. J. Open Distance E-Learn. 2014, 17, 134–146. [Google Scholar] [CrossRef]
  25. Diki, D. International collaboration of distance learning universities for online learning in Indonesia. Lux 2013, 2, 1–8. [Google Scholar] [CrossRef]
  26. Franks, P.C.; Oliver, G.C. Experiential learning and international collaboration opportunities: Virtual internships. Libr. Rev. 2012, 61, 272–285. [Google Scholar] [CrossRef]
  27. Halatchliyski, I.; Moskaliuk, J.; Kimmerle, J.; Cress, U. Explaining authors’ contribution to pivotal artifacts during mass collaboration in the Wikipedia’s knowledge base. Int. J. Comput. Collab. Learn. 2014, 9, 97–115. [Google Scholar] [CrossRef]
  28. Short, B.J. 21st Century Skills Development: Learning in Digital Communities: Technology and Collaboration; University of Oregon: Eugene, OR, USA, 2012. [Google Scholar]
  29. Halatchliyski, I. Networked Knowledge: Approaches to Analyzing Dynamic Networks of Knowledge in Wikis for Mass Collaboration; Universitat Tubingen: Tübingen, Germany, 2015. [Google Scholar]
  30. Hairon, S.; Tan, C. Professional learning communities in Singapore and Shanghai: Implications for teacher collaboration. Comp. A J. Comp. Int. Educ. 2017, 47, 91–104. [Google Scholar] [CrossRef]
  31. Franks, P.C.; Oliver, G.C. Virtual internships: Opportunities for experiential learning and international collaboration in digital curation curricula. In Proceedings of the 77th IFLA General Conference and Assembly, San Juan, PR, USA, 13–18 August 2011. [Google Scholar]
  32. Allen, W.; Fenemor, A.; Kilvington, M.; Harmsworth, G.; Young, R.G.; Deans, N.; Horn, C.; Phillips, C.; de Oca, O.M.; Ataria, J.; et al. Building collaboration and learning in integrated catchment management: The importance of social process and multiple engagement approaches. N. Z. J. Mar. Freshw. Res. 2011, 45, 525–539. [Google Scholar] [CrossRef]
  33. Manouselis, N.; Vuorikari, R.; van Assche, F. Collaborative recommendation of e-learning resources: An experimental investigation. J. Comput. Assist. Learn. 2010, 26, 227–242. [Google Scholar] [CrossRef]
  34. BerG-Weger, M.; Schneider, F.D. Interdisciplinary collaboration in social work education. J. Soc. Work Educ. 1998, 34, 97–107. [Google Scholar] [CrossRef]
  35. De Moor, A. Creativity meets rationale: Collaboration patterns for social innovation. Creat. Ration. 2013, 20, 377–404. [Google Scholar]
  36. Fischer, G. Exploring, understanding, and designing innovative socio-technical environments for fostering and supporting mass collaboration. In Mass Collaboration and Education; Springer: Cham, Switzerland, 2016; pp. 43–63. [Google Scholar]
  37. Sun, G.; Shen, J. Facilitating social collaboration in mobile cloud-based learning: A teamworkas a service (TaaS) approach. IEEE Trans. Learn. Technol. 2014, 7, 207–220. [Google Scholar] [CrossRef]
  38. Gea, M.; Soldado, R.M.; Gamiz, V. Collective intelligence and online learning communities. In Proceedings of the International Conference on Information Society (i-Society 2011), London, UK, 27–29 June 2011; pp. 319–323. [Google Scholar]
  39. Urquhart, R.; Cornelissen, E.; Lal, S.; Colquhoun, H.; Klein, G.; Richmond, S.; Witteman, H.O. A community of practice for knowledge translation trainees: An innovative approach for learning and collaboration. J. Contin. Educ. Health Prof. 2013, 33, 274–281. [Google Scholar] [CrossRef] [PubMed]
  40. Louder, J.R. Distance Learning Environments at One Emerging Research Institution in Texas: The Relationship between Instructor Support, Student Interaction and Collaboration, and Learning. Ph.D. Thesis, Texas Tech University, Lubbock, TX, USA, 2011. [Google Scholar]
  41. Domik, G.; Fischer, G. Transdisciplinary collaboration and lifelong learning: Fostering and supporting new learning opportunities. Lect. Notes Comput. Sci. (Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform.) 2011, 6570, 129–143. [Google Scholar]
  42. Bosch-Sijtsema, P.; Sivunen, A. Professional virtual worlds supporting computer-mediated communication, collaboration, and learning in geographically distributed contexts. IEEE Trans. Prof. Commun. 2013, 56, 160–175. [Google Scholar] [CrossRef]
  43. Maries, I.; Scarlat, E. Enhancing the computational collective intelligence within communities of practice using trust and reputation models. Lect. Notes Comput. Sci. (Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform.) 2011, 6560, 74–95. [Google Scholar]
  44. Awal, G.K.; Bharadwaj, K.K. Team formation in social networks based on collective intelligence—An evolutionary approach. Appl. Intell. 2014, 41, 627–648. [Google Scholar] [CrossRef]
  45. Joyce, E.; Pike, J.C.; Butler, B.S. Rules and roles vs. consensus: Self-governed deliberative mass collaboration bureaucracies. Am. Behav. Sci. 2013, 57, 576–594. [Google Scholar] [CrossRef]
  46. Daxenberger, J. The Writing Process in Online Mass Collaboration NLP-Supported Approaches to Analyzing Collaborative Revision and User Interaction. Ph.D. Thesis, Technische Universität, Darmstadt, Germany, 2016. [Google Scholar]
  47. Nathaniel, T. Wikipedia and the politics of mass collaboration. Platf. J. Media Commun. 2010, 2, 40–53. [Google Scholar]
  48. Nielsen, W.; Chan, E.K.; Jahng, N. Collaborative learning in an online course: A comparison of communication patterns in small and whole group activities. J. Distance Educ. 2010, 24, 39–58. [Google Scholar]
  49. Persico, D.; Pozzi, F. Task, team and time to structure online collaboration in learning environments. World J. Educ. Technol. 2011, 3, 1–15. [Google Scholar]
  50. Zhu, C. Quantitative Models for Performance Evaluation and Benchmarking: Data Envelopment Analysis with Spreadsheets, 1st ed.; Springer: Berlin, Germany, 2008. [Google Scholar]
  51. Luo, S.; Xia, H.; Yoshida, T.; Wang, Z. Toward collective intelligence of online communities: A primitive conceptual model. J. Syst. Sci. Syst. Eng. 2009, 18, 203–221. [Google Scholar] [CrossRef]
  52. Robertson, B. Organization at the leading edge: Introducing holacracyTM. Integr. Leadersh. Rev. 2007, 7, 1–13. [Google Scholar]
  53. Anne, L.B. Exploring network organization in military contexts: Effects of flatter structure and more decentralized processes. Mil. Psychol. 2011, 23, 315–331. [Google Scholar]
  54. Barkley, E.F.; Cross, K.P.; Major, C.H. Collaborative Learning Techniques: A Handbook for College Faculty, 1st ed.; Jossey-Bass: San Francisco, CA, USA, 2004. [Google Scholar]
  55. Johnson, C.; Maruyama, G.; Johnson, R.; Nelson, D.; Skon, L. Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis. Psychol. Bull. 1981, 89, 47–62. [Google Scholar] [CrossRef]
  56. Lee, C.B.; Chai, C.S.; Tsai, C.-C.; Hong, H.-Y. Using knowledge building to foster conceptual change. J. Educ. Train. Stud. 2016, 4, 116–125. [Google Scholar] [CrossRef]
  57. Hwang, N.R.; Lui, G.; Tong, M.Y.J.W. Cooperative learning in a passive learning environment. Issues Account. Educ. 2008, 23, 67–75. [Google Scholar] [CrossRef]
  58. Sancho, J. Learning opportunities for mass collaboration projects through learning analytics: A case study. Rev. Iberoam. Tecnol. Aprendiz. 2016, 11, 148–158. [Google Scholar] [CrossRef]
  59. Cheng, X.; Li, Y.; Sun, J.; Huang, J. Application of a novel collaboration engineering method for learning design: A case study. Br. J. Educ. Technol. 2016, 47, 803–818. [Google Scholar] [CrossRef]
  60. Shen, X.L.; Lee, M.K.O.; Cheung, C.M.K. Harnessing collective intelligence of Web 2.0: Group adoption and use of Internet-based collaboration technologies. Knowl. Manag. Res. Pract. 2012, 10, 301–311. [Google Scholar] [CrossRef]
  61. Pombo, L.; Loureiro, M.J.; Moreira, A. Assessing collaborative work in a higher education blended learning context: Strategies and students’ perceptions. EMI. Educ. Media Int. 2010, 47, 217–229. [Google Scholar] [CrossRef]
  62. De Liddo, A.; Sándor, Á.; Shum, S.B. Contested collective intelligence: Rationale, technologies, and a human-machine annotation study. Comput. Supported Coop. Work 2012, 21, 417–448. [Google Scholar] [CrossRef]
  63. Rodriguez-Artacho, M.; Mayorga, J.I.; Read, T.M.; Velez, J.; Ros, S.; Rodrigo, C.; Lorenzo, E.J.; Delgado, J.L.; Bárcena, E.; Castro-Gil, M.; et al. Enhancing authoring, modelling and collaboration in E-learning environments: UNED research outline in the context of e-Madrid excellence network. In Proceedings of the IEEE Education Engineering Conference (EDUCON 2010), Madrid, Spain, 14–16 April 2010; pp. 1137–1144. [Google Scholar]
  64. Gao, F. A case study of using a social annotation tool to support collaboratively learning. Internet High. Educ. 2013, 17, 76–83. [Google Scholar] [CrossRef]
  65. DeLiddo, A.; Shum, S.B. The evidence hub: Harnessing the collective intelligence of communities to build evidence-based knowledge. In Proceedings of the Large Scale Ideation and Deliberation Workshop, Munich, Germany, 29 June–2 July 2013; p. 8. [Google Scholar]
  66. Grigore, M.; Rosenkranz, C. Increasing the willingness to collaborate online: An analysis of sentiment-driven interactions in peer content production. In Proceedings of the International Conference on Information Systems, Shanghai, China, 4–7 December 2011; pp. 1–18. [Google Scholar]
  67. Wang, H.; Wang, N.; Yeung, D.-Y. Collaborative deep learning for recommender systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 1235–1244. [Google Scholar]
  68. Nerantzi, C. A case of problem based learning for cross-institutional collaboration. Electron. J. E-Learn. 2013, 10, 306–314. [Google Scholar]
  69. Heylighen, F. Collective intelligence and its implementation on the web: Algorithms to develop a collective mental map. Comput. Math. Organ. Theory 1999, 3, 1–26. [Google Scholar]
  70. Mason, W.; Watts, D.J. Collaborative learning in networks. Proc. Natl. Acad. Sci. USA 2012, 109, 764–769. [Google Scholar] [CrossRef]
  71. Görs, J.; Horton, G.; Kempe, N. A collaborative algorithm for computer-supported idea selection in the front end of innovation. In Proceedings of the 2012 45th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2012; pp. 217–226. [Google Scholar]
  72. Wang, S.; Zhu, X.; Zhang, H. Web service selection in trustworthy collaboration network. In Proceedings of the 2011 8th IEEE International Conference on e-Business Engineering (ICEBE 2011), Beijing, China, 19–21 October 2011; pp. 153–160. [Google Scholar]
  73. Nickel, M.; Tresp, V.; Kriegel, H.-P. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), Bellevue, WA, USA, 28 June–2 July 2011. [Google Scholar]
  74. Aritajati, C.; Narayanan, N.H. Facilitating students’ collaboration and learning in a question and answer system. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work Companion, San Antonio, TX, USA, 23–27 February 2013; pp. 101–105. [Google Scholar]
  75. Oh, J.; Jeong, O.R.; Lee, E.; Kim, W. A framework for collective intelligence from internet Q&A documents. Int. J. Web Grid Serv. 2011, 7, 134–146. [Google Scholar]
  76. Li, Z.; Shen, H.; Grant, J. Collective intelligence in the online social network of yahoo! answers and its implications. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, Maui, HI, USA, 29 October–2 November 2012; pp. 455–464. [Google Scholar]
  77. Caspi, A.; Blau, I. Collaboration and psychological ownership: How does the tension between the two influence perceived learning? Soc. Psychol. Educ. 2011, 14, 283–298. [Google Scholar] [CrossRef]
  78. Chan, T.; Roschelle, J.; Hsi, S.; Kinshuk, K.; Brown, T.; Patton, C.; Cherniavsky, J.; Pea, R.D.; Chan, T.; Roschelle, J.; et al. One-To-one technology-enhanced learning: An opportunity for global research collaboration. Res. Pract. Technol. Enhanc. Learn. 2006, 1, 3–29. [Google Scholar] [CrossRef]
  79. Du, Z.; Fu, X.; Zhao, C.; Liu, Q.; Liu, T. Interactive and collaborative e-learning platform with integrated social software and learning management system. In Proceedings of the 2012 International Conference on Information Technology and Software Engineering: Software Engineering & Digital Media Technology, Beijing, China, 8–10 December 2012. [Google Scholar]
  80. Zaffar, F.O.; Ghazawneh, A. Knowledge sharing and collaboration through social media—The case of IBM. MCIS 2012 Proceedings 2012, 28, 1–11. [Google Scholar]
  81. Wang, Q.; Tunzelmann, N.V. Complexity and the functions of the firm: Breadth and depth. Res. Policy 2000, 29, 805–818. [Google Scholar] [CrossRef]
  82. Wolf, M.M.; Wolf, M.; Frawley, T.; Torres, A.; Wolf, S. Using social media to enhance learning through collaboration in higher education: A case study. In Proceedings of the Applied Agricultural Economics Association’s 2012 AAEA Annual Conference, Seattle, WA, USA, 12 August 2012; pp. 1–13. [Google Scholar]
  83. Bernardo, T. Employing mass collaboration information technologies to protect human lives and to reduce mass destruction of animals. Vet. Ital. 2007, 43, 273–284. [Google Scholar] [PubMed]
  84. Azua, M. The Social Factor: Innovate, Ignite, and Win through Mass Collaboration and Social Networking, 1st ed.; IBM Press: Indianapolis, IN, USA, 2010. [Google Scholar]
  85. Neumann, T.; Carrington, A. A mass collaboration approach to e-learning: Multiple venue production. ALT Newsl. 2007, 8. [Google Scholar]
  86. Deal, A. A Teaching with Technology White Paper: Collaboration Tools. Available online: https://www.cmu.edu/teaching/technology/whitepapers/CollaborationTools_Jan09.pdf (accessed on 31 January 2018).
  87. Lloyd, J.; Amigo, M.; Hettitantri, N. Learning through participation as a mass collaboration. Asia-Pac. J. Coop. Educ. 2016, 17, 163–174. [Google Scholar]
  88. Espitia, M.I.; Olarte, A.C. Virtual forums: A pedagogical tool for collaboration and learning in teacher education. Colomb. Appl. Linguist. J. 2011, 13, 29–42. [Google Scholar] [CrossRef]
  89. Liu, B.; Jiang, Y.; Sha, F.; Govindan, R. Cloud-enabled privacy-preserving collaborative learning for mobile sensing. In Proceedings of the 10th ACM Conference on Embedded Network Sensor Systems—SenSys ’12, Toronto, ON, Canada, 6–9 November 2012; pp. 57–70. [Google Scholar]
  90. Gholami, B.; Safavi, R. Harnessing collective intelligence: Wiki and social network from end-user perspective. In Proceedings of the IC4E 2010–2010 International Conference on E-Education, E-Business, E-Management and e-Learning, Sanya, China, 22–24 January 2010; pp. 242–246. [Google Scholar]
  91. Rogers, P.C.; Liddle, S.W.; Chan, P.; Doxey, A.; Isom, B. WEB 2.0 learning platform: Harnessing collective intelligence. Turkish Online J. Distance Educ. 2007, 8, 16–33. [Google Scholar]
  92. Gray, C.; Smyth, K. Collaboration creation: Lessons learned from establishing an online professional learning community. Electron. J. E-Learn. 2012, 10, 60–75. [Google Scholar]
  93. Hughes, J.E.; Narayan, R. Collaboration and learning with wikis in post-secondary classrooms. J. Interact. Online Learn. 2009, 8, 63–82. [Google Scholar]
  94. Jones, P. Collaboration at a distance: Using a wiki to create a collaborative learning environment for distance education and on-campus students in a social work course. J. Teach. Soc. Work 2010, 30, 225–236. [Google Scholar] [CrossRef]
  95. Bradley, L.; Lindström, B.; Rystedt, H. Rationalities of collaboration for language learning in a wiki. Eur. Assoc. Comput. Assist. Lang. Learn. ReCALL 2010, 22, 247–265. [Google Scholar] [CrossRef] [Green Version]
  96. Tsai, W.; Li, W.; Elston, J. Collaborative learning using wiki web sites for computer science undergraduate education: A case study. IEEE Trans. Educ. 2011, 54, 114–124. [Google Scholar] [CrossRef]
  97. Turban, E.; Liang, T.P.; Wu, S.P.J. A framework for adopting collaboration 2.0 tools for virtual group decision making. Gr. Decis. Negot. 2011, 20, 137–154. [Google Scholar] [CrossRef]
  98. Shaout, A.; Yousif, M.K. Performance evaluation—Methods and techniques survey. Int. J. Comput. Inf. Technol. 2014, 3, 966–979. [Google Scholar]
  99. DeNisi, A.S.; Pritchard, R.D. Performance appraisal, performance management and improving individual performance: A motivational framework. Manag. Organ. Rev. 2006, 2, 253–277. [Google Scholar] [CrossRef]
  100. Aggarwal, A.; Sundar, G.; Thakur, M. Techniques of performance appraisal—A review. Int. J. Eng. Adv. Technol. 2013, 2, 617–621. [Google Scholar]
  101. Pallot, M.; Martínez-Carreras, M.A.; Prinz, W. Collaborative distance: A framework for distance factors affecting the performance of distributed collaboration. Int. J. E-Collab. 2010, 6, 1–32. [Google Scholar] [CrossRef]
  102. Jehn, K.A.; Bezroukova, K. A field study of group diversity, work group context, and performance. J. Organ. Behav. 2004, 25, 703–729. [Google Scholar] [CrossRef]
  103. Jafari, M.; Bourouni, A.; Amiri, R.H. A new framework for selection of the best performance appraisal method. Eur. J. Soc. Sci. 2009, 7, 92–100. [Google Scholar]
  104. Venclova, K.; Salkova, A.; Kolackova, G. Identification of employee performance appraisal methods in agricultural organizations. J. Compet. 2013, 5, 20–36. [Google Scholar] [CrossRef]
  105. Šalková, A. The use of cost management techniques as a strategic weapon in SME’s. Sci. Pap. Univ. Pardubic. Ser. D 2013, 20, 91. [Google Scholar]
  106. Miguel, J.; Caballé, S.; Xhafa, F.; Prieto, J.; Barolli, L. A collective intelligence approach for building student’s trustworthiness profile in online learning. In Proceedings of the Ninth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC 2014), Guangzhou, China, 8–10 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 46–53. [Google Scholar]
  107. Liu, Y.; Wu, Y. A survey on trust and trustworthy e-learning system. In Proceedings of the 2010 International Conference on Web Information Systems and Mining, Sanya, China, 23–24 October 2010; pp. 118–122. [Google Scholar]
  108. Schumann, J.; Shih, P.C.; Redmiles, D.F.; Horton, G. Supporting initial trust in distributed idea generation and idea evaluation. In Proceedings of the 17th ACM International Conference on Supporting Group Work -GROUP ’12, Sanibel Island, FL, USA, 27–31 October 2012; pp. 199–208. [Google Scholar]
  109. Lambropoulos, N.; Faulkner, X.; Culwin, F. Supporting social awareness in collaborative e-learning. Br. J. Educ. Technol. 2011, 43, 295–306. [Google Scholar] [CrossRef]
  110. Rabbany, R.; ElAtia, S.; Takaffoli, M.; Zaïane, O.R. Collaborative learning of students in online discussion forums: A social network analysis perspective. Educ. Data Min. 2013, 524, 441–466. [Google Scholar]
  111. Alkhattabi, M.; Neagu, D.; Cullen, A. Information quality framework for e-learning systems. Knowl. Manag. E-Learn. An Int. J. 2010, 2, 340–362. [Google Scholar] [Green Version]
  112. Qi, G.-J.; Aggarwal, C.C.; Han, J.; Huang, T. Mining collective intelligence in diverse groups. In Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, 13–17 May 2013; pp. 1041–1052. [Google Scholar]
  113. Blagojević, M.; Milošević, M. Collaboration and learning styles in pure online courses: An action research. J. Univers. Comput. Sci. 2013, 19, 984–1002. [Google Scholar]
  114. Huang, L.; Deng, S.; Li, Y.; Wu, J.; Yin, J.; Li, G. A trust evaluation mechanism for collaboration of data-intensive services in cloud. Appl. Math. Inf. Sci. 2013, 7, 121–129. [Google Scholar] [CrossRef]
  115. Riedl, C.; Blohm, I.; Leimeister, J.M.; Krcmar, H. Rating scales for collective intelligence in innovation communities: Why quick and easy decision making does not get it right. In Proceedings of the Thirty First International Conference on Information Systems (ICIS), Saint Louis, MO, USA, 12–15 December 2010; pp. 1–21. [Google Scholar]
  116. Dondio, P.; Longo, L. Trust-based techniques for collective intelligence in social search systems. In Next Generation Data Technologies for Collective Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2011; pp. 113–135. [Google Scholar]
  117. Duque, R.; Gómez-Pérez, D.; Nieto-Reyes, A.; Bravo, C. Analyzing collaboration and interaction in learning environments to form learner groups. Comput. Human Behav. 2015, 47, 42–49. [Google Scholar] [CrossRef] [Green Version]
  118. Blohm, I.; Riedl, C.; Leimeister, J.M.; Krcmar, H. Idea evaluation mechanisms for collective intelligence in open innovation communities: Do traders outperform raters? In Proceedings of the International Conference on Information Systems, ICIS, 2011, Shanghai, China, 4–7 December 2011; pp. 1–24. [Google Scholar]
  119. Agichtein, E.; Castillo, C.; Donato, D.; Gionis, A.; Mishne, G. Finding high-quality content in social media. In Proceedings of the International Conference on Web Search and Web Data Mining—WSDM ’08, Palo Alto, CA, USA, 11–12 February 2008; pp. 183–194. [Google Scholar]
  120. Aperjis, C.; Huberman, B.A.; Wu, F. Harvesting collective intelligence: Temporal behavior in yahoo answers. arXiv 2010, arXiv:1001.2320. [Google Scholar]
  121. Yu, Y.; Wang, J.; Zheng, G.; Gu, B. A collaborative filtering recommendation algorithm based on user interest change and trust evaluation. Int. J. Digit. Content Technol. Its Appl. 2010, 4, 106–113. [Google Scholar]
  122. Nitti, M.; Girau, R.; Atzori, L.; Iera, A.; Morabito, G. A subjective model for trustworthiness evaluation in the social Internet of Things. In Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Sydney, Australia, 9–12 September 2012. [Google Scholar]
  123. Hemetsberger, A.; Reinhardt, C. Learning and knowledge-building in open-source communities: A social-experiential approach. Manag. Learn. 2006, 37, 187–214. [Google Scholar] [CrossRef]
  124. Lichtenstein, S.; Parker, C.M. Wikipedia model for collective intelligence: A review of information quality. Int. J. Knowl. Learn. 2009, 5, 254. [Google Scholar] [CrossRef]
  125. Lou, Y.; Abrami, P.; D’Apollonia, S. Small group and individual learning with technology: A meta-analysis. Rev. Educ. Res. 2001, 71, 449–521. [Google Scholar] [CrossRef]
  126. Hovland, C.I.; Janis, I.L.; Kelley, H.H. Communication and Persuasion: Psychological Studies of Opinion Change; Yale University Press: New Haven, CT, USA, 1953. [Google Scholar]
  127. Madnick, S.E.; Wang, R.Y.; Lee, Y.W.; Zhu, H. Overview and framework for data and information quality research. ACM J. Data Inf. Qual. 2009, 1, 1–22. [Google Scholar] [CrossRef]
  128. Naumann, F. Quality-Driven Query Answering for Integrated Information Systems; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  129. Todoran, I.-G.; Lecornu, L.; Khenchaf, A.; le Caillec, J.-M. Information quality evaluation in fusion systems. In Proceedings of the 2013 16th International Conference on Information Fusion (FUSION), Istanbul, Turkey, 9–12 July 2013. [Google Scholar]
  130. Meola, M. Chucking the checklist: A contextual approach to teaching undergraduates web-site evaluation. Portal Libr. Acad. 2004, 4, 331–344. [Google Scholar] [CrossRef]
  131. Metzger, M.J. Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. J. Am. Soc. Inf. Sci. Technol. 2007, 58, 2078–2091. [Google Scholar] [CrossRef]
  132. Alexander, J.E.; Tate, M.A. Web Wisdom: How to Evaluate and Create Information Quality on the Web. IEEE Trans. Prof. Commun. 2000, 43, 341–342. [Google Scholar]
  133. Kapoun, J. Teaching undergrads WEB evaluation: A guide for library instruction. Coll. Res. Libr. News 1998, 59, 522–523. [Google Scholar]
  134. Lih, A. Wikipedia as participatory journalism: Reliable sources? Metrics for evaluating collaborative media as a news resource. In Proceedings of the 5th International Symposium on Online Journalism, University of Texas at Austin, Austin, TX, USA, 16–17 April 2004; p. 31. [Google Scholar]
  135. Viégas, F.B.; Wattenberg, M.; Dave, K. Studying cooperation and conflict between authors with history flow visualizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems—CHI ’04, Vienna, Austria, 24–29 April 2004; Volume 6, pp. 575–582. [Google Scholar]
  136. Wöhner, T.; Peters, R. Assessing the quality of Wikipedia articles with lifecycle based metrics. In Proceedings of the 5th International Symposium on Wikis and Open Collaboration—WikiSym 09, Orlando, FL, USA, 25–27 October 2009; p. 1. [Google Scholar]
  137. Stvilia, B.; Twidale, M.B.; Smith, L.C.; Gasser, L. Information quality discussions in wikipedia. J. Am. Soc. Inf. Sci. Technol. 2008, 59, 983–1001. [Google Scholar] [CrossRef]
  138. McCann, R.; Doan, A.; Varadarajan, A.K.V. Building data integration systems: A mass collaboration approach. In Proceedings of the International Workshop on Web and Databases, San Diego, CA, USA, 12–13 June 2003. [Google Scholar]
  139. Kane, G.C. A multimethod study of information quality in wiki collaboration. ACM Trans. Manag. Inf. Syst. 2011, 2, 1–16. [Google Scholar] [CrossRef]
  140. Bothos, E.; Apostolou, D.; Mentzas, G. Collective intelligence for idea management with Internet-based information aggregation markets. Internet Res. 2009, 19, 26–41. [Google Scholar] [CrossRef]
  141. Spielman, S.E. Spatial collective intelligence? credibility, accuracy, and volunteered geographic information. Cartogr. Geogr. Inf. Sci. 2014, 41, 115–124. [Google Scholar] [CrossRef]
  142. Bothos, E.; Apostolou, D.; Mentzas, G. Collective intelligence with web-based information aggregation markets: The role of market facilitation in idea management. Expert Syst. Appl. 2012, 39, 1333–1345. [Google Scholar] [CrossRef]
  143. Maleewong, K.; Anutariya, C.; Wuwongse, V. A collective intelligence approach to collaborative knowledge creation. In Proceedings of the 2008 Fourth International Conference on Semantics, Knowledge and Grid, Beijing, China, 3–5 December 2008; pp. 64–70. [Google Scholar]
Figure 1. Relationship between mass collaboration and collective intelligence.
Figure 2. Types and percentages of mentioned structures in the collected papers.
Figure 3. Type and percentage of adopted CoLTs in collected papers.
Figure 4. Project-Based Collaborative Learning Model [86].
Figure 5. Types and percentages of adopted tools in the collected papers.
Figure 6. Methods for performance evaluation based on [105].
Figure 7. Types and percentages of applied methods for performance evaluation in the collected papers.
Table 1. Some partial definitions of mass collaboration.
Definitions and Statements | Sources
Mass collaboration is characterized by the large number of people being involved in it, the digital tools they use (Web 2.0), and digital products they create.[1]
While most collaborations involve only a few people, new information technologies now allow huge numbers of people (separated by very large distances) to work together on a single project.[7]
Mass collaboration is based on individuals and companies employing widely distributed computation and communication technologies to achieve shared outcomes through loose voluntary association.[3]
Mass Collaboration Problem Solver would utilize the brainpower of large numbers of humans and orchestrate their individual efforts to solve hard problems that are beyond the reach of purely computational methods. Mass collaboration problem solving is an idea whose time has come. This has been brought about by an unprecedented convergence of technologies and social phenomena that have more fully accomplished the global nature of the Internet.[8]
Mass collaboration involves the collective action of large numbers of people to perform a task. Users have evolved from passively receiving information through the web to playing an active role by forming communities, interacting with peers, sharing information, and adding value to the Internet as a result of their interactions.[9]
Mass Collaboration (MC) system enlists a mass of users to explicitly collaborate to build a long-lasting artifact that is beneficial to the whole community. MC system enlists a mass of humans to help solve a problem defined by the system owners.[10]
Table 2. Some partial definitions of collective intelligence.
Definitions and Statements | Sources
Collective intelligence (CI) is based on the concept that large groups of cooperating individuals can produce higher-order intelligence, solutions, and innovation and come to function as a single entity. Collective intelligence may receive various forms including volunteers that collaborate in order to achieve a common goal that will benefit their community, political parties that mobilize large numbers of people to run campaigns and select candidates, as well as large groups of individuals that collaborate or compete towards finding the best solution to a problem. CI may generally exist without the use of technology.[1]
Collective intelligence is a form of universal, distributed intelligence, which arises from the collaboration and competition of many individuals. It is the general ability of a group to perform a wide variety of tasks. The phenomenon is closely related to swarm intelligence, which means collective, largely self-organized behavior emerging from swarms of social insects.[7]
When a group of individuals collaborate or compete with each other, intelligence or behavior that otherwise did not exist suddenly emerges; this is commonly known as collective intelligence. The actions or influence of a few individuals slowly spread across the community until the actions become the norm for the community. As users interact on the web and express their opinions, they influence others.[3]
Large groups of cooperating individuals can produce higher-order intelligence, solutions, and innovation and come to function as a single entity. Collective intelligence may receive various forms including volunteers that collaborate towards achieving a common goal that will benefit their community. One may observe that CI may generally exist without the use of technology.[8]
Collective intelligence is groups of individuals doing things collectively that seem intelligent.[9]
Table 3. Some application examples of mass collaboration.
Some Application Examples | References
Wikipedia[8]
Digg[7]
Yahoo! Answers[7]
SETI@home[7]
Scratch[1]
Galaxyzoo[1]
Foldit[1]
Applying Delphi method[11]
Climate Colab[12]
Assignment Zero[13]
DonationCoder[13]
Experts Exchange[13]
Waze[14]
Makerspaces[1]
Table 4. Taxonomy of Collaborative Network based on [23].
Collaborative Network
- Collaborative Networked Organization
  - Long-Term Strategic Network
    - Virtual organizations Breeding Environment (VBE): Industry cluster; Industry district; Business ecosystem; Disaster rescue network; Collaborative innovation network
    - Professional Virtual Community (PVC)
  - Goal-Oriented Network
    - Grasping opportunity driven network: Extended enterprise; Virtual Enterprise; Virtual Organization; Virtual team
    - Continuous production driven network: Supply chain; Collaborative e-government; Collaborative smart grid; Distributed manufacturing
- Ad-Hoc Collaboration
  - Mass Collaboration
  - Flash mob
  - Informal network
  - One-to-one informal collaboration
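Reading Table 4 as a tree makes its two branches easier to navigate programmatically. The sketch below is illustrative only: it encodes one possible reading of the hierarchy from [23] as a nested Python dictionary (the nesting of examples under VBE and the goal-oriented sub-networks is an assumption drawn from that taxonomy, not stated verbatim in the table) and locates a given network type within it.

```python
# One possible encoding of the Table 4 taxonomy (assumed nesting, per [23]).
TAXONOMY = {
    "Collaborative Networked Organization": {
        "Long-Term Strategic Network": {
            "Virtual organizations Breeding Environment (VBE)": [
                "Industry cluster", "Industry district", "Business ecosystem",
                "Disaster rescue network", "Collaborative innovation network"],
            "Professional Virtual Community (PVC)": [],
        },
        "Goal-Oriented Network": {
            "Grasping opportunity driven network": [
                "Extended enterprise", "Virtual Enterprise",
                "Virtual Organization", "Virtual team"],
            "Continuous production driven network": [
                "Supply chain", "Collaborative e-government",
                "Collaborative smart grid", "Distributed manufacturing"],
        },
    },
    "Ad-Hoc Collaboration": {
        "Mass Collaboration": [], "Flash mob": [],
        "Informal network": [], "One-to-one informal collaboration": [],
    },
}

def path_to(term, tree=TAXONOMY, trail=()):
    """Return the chain of categories leading to `term`, or None if absent."""
    for key, sub in tree.items():
        here = trail + (key,)
        if key == term:
            return here
        if isinstance(sub, dict):
            found = path_to(term, sub, here)
            if found:
                return found
        elif term in sub:
            return here + (term,)
    return None

print(path_to("Mass Collaboration"))
# ('Ad-Hoc Collaboration', 'Mass Collaboration')
```

Such a lookup shows, for instance, that mass collaboration sits under ad-hoc collaboration rather than under the long-term organizational forms, which is the distinction the table is drawing.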
Table 5. Examples of addressed structures and corresponding references.
VBE[24,25]
PVC[26,27,28,29,30,31]
Multi-stakeholder[32,33,34]
Business ecosystem[1]
Collaborative innovation network[35,36,37]
Virtual organization[5,38]
Virtual team[1,39,40,41,42,43,44]
Mass collaboration[45,46,47]
Informal network[1]
Hybrid[48,49,50,51]
Table 6. CoLTs and their partial descriptions [54].
Categories and Their Techniques | Some Descriptions
Techniques for Discussions | Learners can share their viewpoints and respond to others' ideas.
Think-Pair-Share/Write-Pair-Share | Before sharing ideas with the entire community, learners first think individually and then seek a partner's opinion.
Round Robin | Generated ideas move from one learner to the next.
Buzz Groups | In a small group, learners informally discuss the topic.
Talking Chips | Gives all members equal participation in the discussion.
Three-Step Interview | In a question-and-answer session, one member is the interviewer and another the interviewee; at the end, they report on what they learned.
Paired Annotations | Members prepare a composite annotation to deliver a summary of an ongoing task.
Critical Debates | Members argue an issue in favor of or against their personal views.
Techniques for Reciprocal Teaching | Members, using four skills (questioning, clarifying, summarizing, and predicting), help promote others' reading comprehension.
Note-Taking Pairs | Members work collectively to improve their individual notes.
Learning Cell | Members develop their learning by creating question-and-answer activities.
Fishbowl | Members seated inside the 'fishbowl' hold a participatory discussion, while those sitting around observe without interrupting.
Role Play | Members act out the roles of different identities and represent them in action.
Jigsaw | For a given topic, members first develop knowledge and then share it with others.
Test-Taking Teams | Members first take a test individually and then retake it in their community.
Techniques for Problem Solving | Members help each other to solve problems.
Think-Aloud Pair Problem Solving | Members solve problems aloud to exercise their analytical reasoning skills.
Send-A-Problem | Problems and their respective solutions are passed among groups to find the final solution.
Case Study | Members try to develop a solution for a real-world scenario.
Structured Problem Solving | Members follow a structured format to solve a problem.
Analytic Teams | Members evaluate a specific task from critical points of view.
Group Investigation | Members in the community plan, conduct, and report on projects.
Techniques Using Graphic Information Organizers | Members use visual tools to organize and present information.
Affinity Grouping | The group generates ideas, organizes them, and identifies common themes.
Group Grid | Members are asked to place given information into the blank cells of a grid.
Team Matrix | Members distinguish between similar concepts by considering defining features.
Sequence Chains | A series of actions is depicted and analyzed graphically.
Word Webs | The relationships among generated ideas are organized graphically with lines or arrows.
Techniques Focusing on Writing | Group collaboration on writing helps members learn important course content.
Dialogue Journals | Members record their thoughts in a journal and share them with others for comments.
Round Table | Members respond to questions in turn before passing to others.
Dyadic Essays | The questions and answers developed for an essay are compared with a model answer.
Peer Editing | A critical review and editorial feedback are provided for a piece of writing.
Collaborative Writing | Members write a formal paper collaboratively.
Team Anthologies | Members compile course-related readings together with their annotations.
Paper Seminar | Members participate in writing a paper, engage in discussion, and receive feedback.
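To make concrete how one of these techniques could be operationalized on an online platform, the sketch below simulates a single Round Robin pass, in which each learner extends an idea before handing it to the next peer. All names and the `contribute` callback are hypothetical; this is a minimal illustration, not a technique implementation from the reviewed papers.

```python
from collections import deque

def round_robin(learners, seed_idea, contribute):
    """Pass the idea once around the circle; each learner extends it.

    `contribute` is a callback (learner, idea_so_far) -> addition, standing in
    for whatever input a real platform would collect from each participant.
    """
    queue = deque(learners)
    idea = [seed_idea]
    for _ in range(len(learners)):
        learner = queue.popleft()
        idea.append(contribute(learner, list(idea)))
        queue.append(learner)  # rotate, as in a talking circle
    return idea

result = round_robin(
    ["Ana", "Ben", "Chen"],
    "use wikis for note sharing",
    lambda who, idea: f"{who}: builds on '{idea[-1]}'",
)
print(result[0])  # the seed idea survives at the head of the chain
```

The rotating queue mirrors the defining property of Round Robin: every member contributes exactly once per pass, in a fixed order.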
Table 7. Examples of adopted CoLTs and corresponding references.
Think-pair-share[59]
Round robin[1,60]
Buzz group[61]
Paired annotation[1,46,62,63,64]
Critical debates[65]
Note-taking pairs[66,67]
Send-a-problem[68]
Group problem solving[69,70]
Affinity grouping[71]
Team matrix[72]
Word webs[73]
Dyadic essays[74,75,76]
Peer editing[46,77]
Table 8. Some examples of supportive tools, technologies, services, and web sites in mass collaboration.
Some Evaluated Aspects | Sources
Internet-based mass collaboration[83]
Open Source Software and mass collaboration[9]
Mass collaboration and Web 2.0 tools (e.g., wikis, weblogs, podcasts, folksonomies, file sharing and virtual online worlds)[1,83]
Mass collaboration and social media (e.g., wikis, blogs, Twitter, LinkedIn, Facebook, YouTube)[83,84]
Mass learning and Synchronous Audio graphic web Conferencing (SAC) technology[85]
Table 9. Some partial descriptions for proposed phases in Project-Based Collaborative Learning Model [86].
Phases | Descriptions
Communication | The entire project-based collaborative effort takes place in the context of communication. Most collaborative software is equipped to facilitate communication among participants.
Team Definition and Participants | In this phase, tools enable members to find key players in the community and manage their participation in different tasks.
Project Management | Logistical aspects of planning, scheduling, workflow, and task management are handled by tools in this phase.
Resource Management | Common issues, such as access to a shared storage space for project files and keeping track of multiple versions of the same document, are addressed by tools in this phase.
Co-Creation and Ideation | Both direct interaction among members and the building or editing of project artifacts are facilitated by tools in this phase.
Consensus Building | Solutions proposed by community members are refined through consensus-building tools.
Presentation and Archiving | Tools in this phase facilitate presenting outcomes to instructors, clients, or the public, and archiving them.
Table 10. Examples of adopted tools and corresponding references.
Table 10. Examples of adopted tools and corresponding references.
Tools | Sources
Virtual meeting | [79]
Email | [10,87]
Blogs | [79,80,88]
Web conferencing | [87,89]
Discussion board | [79,82,88]
Social networking | [80]
User profiles | [90]
Task management | [10,87]
File storage | [10,79]
Database management | [91]
Version tracking | [91]
Access management | [79]
Social bookmarking | [79,90,92]
Tagging | [90,91]
Wiki | [79,80,87,90,93,94,95,96]
Polling | [97]
Hosted media sharing | [80]
Table 11. Some partial descriptions of performance evaluation methods [98,104,105].
Categories and Methods | Descriptions
Absolute standards | Learners' performance is compared to a fixed standard, and the evaluation is independent of other learners.
Check list methods | A series of statements, e.g., 'yes' or 'no' questions, is provided, and the rater marks the answers that apply to the learner.
Critical incident methods | The rater focuses on the key behaviors that make the difference between performing a task in a noteworthy manner and performing it in an ineffective one.
Graphic rating scale methods | Behaviors that influence performance are listed, and learners are rated on each of them. The ratings are helpful for quantifying the behaviors.
Essay methods | In order to improve the learner's performance, the rater writes a narrative description of the learner's status.
Behavioral Anchored Rating Scales (BARS) | It combines the benefits of narratives, critical incidents, and quantified ratings by anchoring a quantified scale with specific behavioral examples of good or poor performance.
Forced choice methods | A learner is evaluated and rated based on previously written statements. One common method in this group involves positive and negative statements.
Added methods |
Grading methods | Outstanding, satisfactory, and unsatisfactory are three established categories of worth for evaluation.
Confidential report | A confidential report rates the learner's performance with respect to items such as teamwork, attendance, reasoning, and technical abilities.
Assessment centers | Methods such as social or informal events, tests, and exercises are used to evaluate learners' performance for future responsibilities.
Relative standards | Learners' performance is compared against that of other learners.
Ranking methods | Ranks from the highest to the lowest are used for comparing a learner with others.
Paired comparison | A learner is compared with others on a one-on-one basis, based on a single trait.
Objectives | Learners are assessed on how well they fulfil a specific set of objectives.
Management by objectives (MBO) | Learners are evaluated periodically based on defined objectives.
360° appraisal | Learners are evaluated through confidential, anonymous feedback from the people working around them.
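To illustrate how two of these methods could be operationalized, the following minimal Python sketch aggregates scores for a graphic rating scale and a simple 360° appraisal. The behavior names, rating scales, and rater groups are hypothetical assumptions for illustration, not values taken from the reviewed papers.

```python
# Hypothetical sketch: aggregating a graphic rating scale and a simple
# 360-degree appraisal. Behavior names, scales, and raters are invented.

def graphic_rating_score(ratings):
    """Average a learner's 1-5 ratings over the listed behaviors."""
    return sum(ratings.values()) / len(ratings)

def appraisal_360(feedback):
    """Combine anonymous ratings from all groups of surrounding raters."""
    all_scores = [score for rater_scores in feedback.values()
                  for score in rater_scores]
    return sum(all_scores) / len(all_scores)

behaviors = {"participation": 4, "teamwork": 5, "timeliness": 3}
print(graphic_rating_score(behaviors))  # mean over behaviors: 4.0

feedback = {"peers": [4, 5], "instructor": [3], "mentees": [4]}
print(appraisal_360(feedback))          # mean over all raters: 4.0
```

In practice, both aggregations would typically be weighted (e.g., giving instructor feedback more weight than peer feedback), but a plain mean suffices to show the structure of the two methods.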
Table 12. Some adopted methods and corresponding references.
Methods | Sources
Check list | [28]
Critical incident | [106,107,108]
Graphic rating scale | [58,109,110]
Essay | [111]
BARS | [65,112]
Forced choice | [4]
Grading | [113,114]
Confidential report | [44]
Assessment center | [115]
Ranking | [67,82,116,117,118,119]
Paired comparison | [120,121]
360 degree | [122]
Forced distribution method | [76]
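Among the relative-standard methods above, paired comparison is the most directly algorithmic: each learner is compared one-on-one with every other learner on a single trait, and the number of "wins" orders the group. The following sketch makes this concrete; the learner names, scores, and comparison rule are invented for illustration.

```python
# Hypothetical sketch of the paired comparison method: count pairwise
# wins over all one-on-one comparisons, then order learners by wins.
from itertools import combinations

def rank_by_pairs(learners, better):
    """`better(a, b)` returns whichever of the two learners is preferred."""
    wins = {name: 0 for name in learners}
    for a, b in combinations(learners, 2):
        wins[better(a, b)] += 1
    return sorted(learners, key=lambda n: wins[n], reverse=True)

# Illustrative trait scores used to decide each one-on-one comparison.
scores = {"Ana": 0.9, "Ben": 0.6, "Cid": 0.75}
order = rank_by_pairs(list(scores),
                      lambda a, b: a if scores[a] >= scores[b] else b)
print(order)  # ['Ana', 'Cid', 'Ben']
```

With n learners this requires n(n-1)/2 comparisons, which is why paired comparison is usually reserved for small groups.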
Table 13. Some classic criteria for evaluating created knowledge.
Criteria | Consideration
Authority | Is the author credible enough? Is he/she well known?
Accuracy | Is the knowledge free from errors, and can it be verified?
Accessibility | Is the knowledge easily retrievable?
Currency | Is the knowledge up to date?
Coverage | Is the knowledge comprehensive and deep enough for the respective audience?
Relevancy | Is the knowledge relevant to your need (your topic or the answer to your question)?
Purpose | What is the knowledge served for (teaching, informing, selling, entertaining)?
Objectivity/Point of view or bias | Are all perspectives presented in an unbiased manner and with a balanced viewpoint? Are opinions separated from facts?
Soundness | Is the created knowledge reasonable for the intended application?
Applicability and utility | Is the created knowledge suitable for the intended learner?
Clarity and completeness | To what extent is the created knowledge clear and complete?
Uncertainty and variability | To what extent is the created knowledge certain and variable?
Safety | Are privacy policies and data protection procedures presented?
References | Are the qualifications of the owner, reference, or reviewer addressed?
Policy | Which policy is the knowledge following (advertising, political, etc.)?
Technical criteria (e.g., links, navigation, proper operation) | Is the knowledge created in a structured way?
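Checklists of this kind are often turned into a weighted score so that co-created content can be compared or filtered automatically. The following sketch shows one possible scoring scheme over a subset of the classic criteria; the criterion names, weights, and example answers are illustrative assumptions, not values prescribed by the literature.

```python
# Hypothetical sketch: scoring a piece of co-created knowledge against a
# subset of the classic criteria. Weights are invented for illustration.

CRITERIA_WEIGHTS = {
    "authority": 0.2, "accuracy": 0.3, "currency": 0.2,
    "relevancy": 0.2, "clarity": 0.1,
}

def quality_score(answers):
    """Weighted sum of per-criterion scores, each given in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * answers.get(c, 0.0)
               for c in CRITERIA_WEIGHTS)

article = {"authority": 1.0, "accuracy": 0.8, "currency": 1.0,
           "relevancy": 0.5, "clarity": 1.0}
print(round(quality_score(article), 2))  # 0.84
```

The choice of weights encodes an editorial policy (here, accuracy is weighted most heavily), so in a real community it would itself be a subject of consensus building.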
Table 14. Suggested methods in collected papers for evaluating the quality of knowledge.
Suggested Methods | Explanations | Sources
Credit assignment; machine learning; user feedback | Expert statements; using a web of trust; automatically proposing to contributors that they refine their entries | [4]
Initialization; gauging user weights; user feedback | Manually specifying the correct topics from the title; each user measures the quality of his or her feedback | [138]
Type of contributor activity; number of anonymous contributors; top contributor experience | The type of contributor activity spent shaping an article for general consumption; the level of social presence (the degree to which contributors establish personal connections in a communication setting); top contributors often exert considerable influence over collaboration in online communities | [139]
User feedback; ranking method; expert valuation | Via comments and ratings; measuring both the value and transaction volume of knowledge | [140]
CI model (lay citizens and traditional experts) | It relies on contributions, administration, reviewing, and consensus from both lay citizens and traditional experts | [124]
Structure of groups | Systems should be designed in a way that fosters conditions for producing collective intelligence | [141]
Content facilitation; process facilitation | Direct intervention in the content to improve the quality of contributions; procedural intervention by a facilitator in the structuring of group processes | [142]
Argumentation; consensus; selection; expert evaluation | Submitting arguments in support of or against the topic; driving a community towards a consensus; all knowledge is ranked from best to worst based on the consensus | [143]
Reputation; peer review; user feedback | Editors who have previously established records of valuable contributions | [137]
Group observation | | [112]
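Several of the methods above, notably user feedback via comments and ratings, and ranking knowledge from best to worst, can be combined into a single scoring-and-sorting step. The following sketch shows one way this could work; the scoring formula, field names, and data are hypothetical assumptions, not an implementation from the cited works.

```python
# Hypothetical sketch of user-feedback ranking: each knowledge entry is
# scored from its ratings and comment count, then ranked best to worst.

def feedback_score(entry):
    """Mean rating, lightly boosted by the number of comments received."""
    mean_rating = sum(entry["ratings"]) / len(entry["ratings"])
    return mean_rating + 0.1 * len(entry["comments"])

entries = [
    {"id": "A", "ratings": [5, 4, 5], "comments": ["good", "useful"]},
    {"id": "B", "ratings": [3, 3], "comments": []},
    {"id": "C", "ratings": [4, 4, 4], "comments": ["ok"]},
]
ranked = sorted(entries, key=feedback_score, reverse=True)
print([e["id"] for e in ranked])  # ['A', 'C', 'B']
```

Real communities typically refine this with contributor reputation or expert review (as in the reputation and peer-review methods above) so that a flood of low-quality feedback cannot dominate the ranking.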

Share and Cite

MDPI and ACS Style

Zamiri, M.; Camarinha-Matos, L.M. Mass Collaboration and Learning: Opportunities, Challenges, and Influential Factors. Appl. Sci. 2019, 9, 2620. https://doi.org/10.3390/app9132620
