EER’14
E-Learning and Educational Resources
A brief overview of quality inside learning object repositories

Cristian Cechinel
Xavier Ochoa
Federal University of Pelotas (UFPel) R. Felix da Cunha 630, Centro – Pelotas (RS), Brasil +55 (53) 32279079
Escuela Superior Politécnica del Litoral (ESPOL) Km. 30.5 Vía Perimetral- Guayaquil, Ecuador +593 4 2269773 ext. 7006
[email protected]
[email protected]
ABSTRACT
Assessing the quality of learning objects (LOs) is a difficult and complex task that normally revolves around multiple different aspects that need to be addressed. Nowadays, quality assessment of LOs inside repositories is based on the information provided by the same community of users and experts that use such platforms. This kind of information is known as evaluative metadata and constitutes a valuable body of knowledge about LOs that is normally used inside the repositories during the process of searching and retrieving. The present work aims to present a brief overview of how LO quality is being assessed inside some of the most important repositories existing nowadays, as well as some limitations of these existing approaches.
Categories and Subject Descriptors
H.3.7 [Information Storage and Retrieval]: Digital Libraries – collection, user issues.

General Terms
Management, Measurement, Design, Human Factors.

Keywords
Learning Object Repositories, Quality Assessment, Evaluative Metadata, Ratings, Evaluation.

1. INTRODUCTION
Assessing the quality of learning resources is a difficult and complex task that often revolves around multiple different aspects that must be observed. In fact, the very definition of quality is not straightforward. Vargo, Nesbit, Belfer & Archambault [1] state that, even though LO evaluation can be considered a relatively new field, it has roots in an extensive body of prior work on the evaluation of instructional software. As stated by Bethard, Wetzer, Butcher, Martin & Sumner [2], quality is contextual and depends on "the alignment between the user constituency being served, the educational setting where deployed, and the intended purpose of the resource". Vuorikari et al. [3] highlight that existing evaluation approaches can be differentiated based on the process they focus on. Among others, they mention two characteristic examples of approaches: those which focus on the process of creating resources, and those which focus on ready resources and their evaluation.

According to Williams [4], what a LO ought to be is related to the perspectives and opinions of those who are the actual users of the resource. So, in order to evaluate quality, it is necessary to consider the particular spectrum of users and the particular set of criteria used by these users to value the resource. Williams [4] proposes a participant-oriented model (involving different users and stakeholders) composed of four types of LO evaluation that should be conducted simultaneously, repeatedly and sequentially during the various stages of LO development. This approach covers the whole process of creating resources, and the four types of LO evaluation proposed are:
1. Context Evaluation – It tries to establish whether there is a need for some LO according to the needs and expectations of the possible users of this LO.
2. Input Evaluation – It compares alternative inputs aimed at meeting the needs identified in the previous step. The main goal here is to evaluate the alternative learning objects that could address the established needs.
3. Process Evaluation – It assesses the planning, design and development of the selected inputs, e.g., how well the instructional strategy and LO were implemented.
4. Product Evaluation – It assesses whether the LO is achieving the outcomes initially expected of its usage.

Each type of evaluation should consider who the people that care about the LO are (the audience of the LO), and what they care about or have an interest in. The people who care about the LO could be, for instance, students, teachers, instructional designers, or an organization, among others. These audiences can have different understandings and expectations about the LO, and thus can use distinct criteria and values to judge its quality (for instance, reusability, quality of the metadata, or the instructional approach, among others). According to [4], the combination of this information would then define how one should conduct the process of evaluating a LO. Besides Williams [4], other authors have also claimed that concerns about quality normally focus on different criteria. For instance, in the context of digital libraries, Custard & Sumner [5] stated that the main issues related to quality are: accuracy of content, appropriateness to the intended audience, effective design, and completeness of metadata documentation. In the specific field of multimedia learning resources, the most recognized instrument so far for quantitatively measuring quality is the Learning Object Review Instrument (LORI) [6]. This instrument
is intended to evaluate the final, "ready for use" LO. In LORI, quality is evaluated according to nine different criteria, each rated on a 1-to-5 Likert scale (see Figure 1).
Leacock & Nesbit [7] provide explanations of each of the nine LORI dimensions and how they should be interpreted when evaluating LOs:
1. Content quality – one of the most important aspects of LO quality. This dimension deals with the accuracy and reliability of the content, as well as the existence of biases, errors and omissions.
2. Learning goal alignment – it applies to LOs with a moderate level of granularity that combine content, learning activities, and assessments. It evaluates whether the learning activities are aligned with the goals of the LO, and whether these activities provide the knowledge users need to successfully answer the assessments.
3. Feedback and adaptation – it measures the capability of the LO to provide feedback and adapt itself to user needs. Such adaptation can involve the localization of the LO for a specific culture or language (as in [8]), or changing the LO presentation and content according to a user's preferred learning style, for instance.
4. Motivation – it evaluates the ability of the LO to retain users' attention, i.e., whether the LO is relevant to the learners' goals and matched to their level of knowledge. According to Leacock & Nesbit [7], learners' expectations about their success or failure in performing a given task with the LO will also affect motivation.
5. Presentation design – the quality of exposition (clearness and conciseness) of all items in a LO (text, video, animations, graphics). Aspects such as font size or the presence of distracting colors should also be taken into consideration.
6. Interaction usability – this criterion evaluates how easy it is for a learner to navigate the LO. Good usability presents a consistent layout and structure, avoiding overloading the user with misleading responses and information. Navigation problems can also be caused, for instance, by broken links or long delays during use.
7. Accessibility – the accommodation, in the design of the LO, of accessibility issues for people with disabilities. For instance, a LO with only textual information would exclude blind learners if no audio voice-over is included.
8. Reusability – the potential of the LO to be used in different courses and contexts. Issues such as the granularity and openness of the LO influence its portability to different scenarios.
9. Standards compliance – whether the metadata fields associated with the LO follow international standards and are complete enough to allow others to effectively use that information to search for the LO and evaluate its relevance.
Even though Leacock & Nesbit [7] provide structural and theoretical foundations for assessing and understanding these many aspects of quality, these are all still broadly interpreted dimensions that can be subject to divergence among different evaluators. Moreover, different evaluators may also give more importance to one specific dimension than to the others. In order to mitigate this situation, Nesbit et al. [9] propose applying LORI through a convergent model, where several evaluators from distinct areas (instructors, instructional designers, and multimedia developers) collaborate to achieve a single, unique quality rating for a given resource. In fact, this concept was applied in eLera, as will be shown in the next section.
Figure 1. Screenshot of LORI evaluation sheet [6]
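To make the structure of such an evaluation concrete, the sketch below shows one way the convergent model could aggregate several reviewers' 1-to-5 LORI ratings into a single score. This is only an illustration under our own assumptions (the names and the simple averaging rule are hypothetical); in the convergent model described by Nesbit et al. [9], the final rating emerges from moderated discussion among reviewers rather than from an automatic average.

```python
# Minimal sketch (hypothetical, not the official eLera/LORI tooling):
# aggregate several reviewers' LORI ratings into one convergent score.
LORI_DIMENSIONS = [
    "content_quality", "learning_goal_alignment", "feedback_and_adaptation",
    "motivation", "presentation_design", "interaction_usability",
    "accessibility", "reusability", "standards_compliance",
]

def convergent_rating(reviews):
    """Average each 1-5 dimension over all reviewers, then average the dimensions."""
    per_dimension = {}
    for dim in LORI_DIMENSIONS:
        scores = [r[dim] for r in reviews if dim in r]  # a reviewer may skip a dimension
        if scores:
            per_dimension[dim] = sum(scores) / len(scores)
    overall = sum(per_dimension.values()) / len(per_dimension)
    return per_dimension, overall

# Example: three reviewers (e.g., instructor, designer, developer) rate the same LO.
reviews = [
    {"content_quality": 5, "motivation": 4, "reusability": 3},
    {"content_quality": 4, "motivation": 4, "reusability": 4},
    {"content_quality": 5, "motivation": 3, "reusability": 4},
]
per_dim, overall = convergent_rating(reviews)
print(per_dim, round(overall, 2))  # per-dimension means, overall 4.0
```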
The focus of this paper is to present how evaluation of resources takes place inside learning object repositories. As resources inside repositories are normally ready for use, the quality evaluation approaches adopted by the repositories and covered in the next section correspond to the second approach mentioned by Vuorikari et al. [3] (which focuses on ready resources rather than on the process of creating them), and to the Product Evaluation type proposed by Williams [4]. The rest of the paper is structured as follows. Section 2 describes how quality assessment takes place inside some of the most important repositories existing nowadays, and presents some basic differences between the two forms of review used inside repositories (peer-review and public-review). Section 3 discusses some limitations of the current approaches to quality assessment and describes initial results of experiments towards automated quality assessment of learning objects. Section 4 presents the final remarks.
2. Evaluation inside Repositories
After their production, LOs must be published in a place where users can easily search for and retrieve them for future use, a phase defined in the LO life-cycle by Collis & Strijker [10] as offering. Learning Object Repositories (LORs) are the software systems that provide the functionalities for that. A repository can be simply defined as a digital collection where resources are stored for further retrieval. LORs are potential aggregators of communities of practitioners [11-13], i.e., people who share interests and concerns about something they do and who learn through their interactions. Because of that, they tend to harness the features of such social environments by adopting quality-establishment strategies that rely on the impressions of usage and the evaluations given by regular users and experts who are members of the repository community. These strategies rely on: 1) the hypothesis of transactive memory systems [14], i.e., systems that store individuals' memories, impressions and/or information about a subject in order to form a universal and collective body of knowledge that can serve as an external memory aid for other individuals; and 2) the value of metadata from the perspective of social capital theories, i.e., enabling and strengthening social relations that have the potential to facilitate the accrual of economic or non-economic benefits to individuals [15].
Vuorikari et al. [3] refer to this kind of information as evaluative metadata. According to the authors, "evaluative metadata has a cumulative nature, meaning that annotations from different users accumulate over time, as opposed to having one single authoritative evaluation". Inside repositories, evaluative information is normally used as the basis for quality assurance of the resources, but also to properly rank them and recommend them to users, as illustrated by the sketch below.
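As a concrete illustration of how cumulative evaluative metadata can drive ranking, the following sketch combines the mean rating of a resource with the number of ratings it has received, shrinking sparsely rated items toward a global prior. This is our own illustration under stated assumptions (the function, the prior of 3.0 and the weight k are hypothetical); the repositories discussed below do not publish their exact ranking formulas.

```python
def shrunk_rating(ratings, prior=3.0, k=5):
    """Bayesian-style average: items with few ratings are pulled toward the prior,
    so a single 5-star vote does not outrank a resource with many 4-star votes."""
    n = len(ratings)
    if n == 0:
        return prior  # unrated resources fall back to the global prior
    return (sum(ratings) + k * prior) / (n + k)

# Hypothetical resources with accumulated 1-5 ratings (evaluative metadata).
resources = {
    "lo-101": [5],                 # one enthusiastic rating
    "lo-202": [4, 4, 5, 4, 4, 5],  # many consistent ratings
}
ranked = sorted(resources, key=lambda r: shrunk_rating(resources[r]), reverse=True)
print(ranked)  # ['lo-202', 'lo-101']: ratings and popularity are combined
```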
In this section we present how evaluative metadata can be found in some of the most important LORs existing nowadays.
2.1 eLera
eLera (www.elera.net) stands for E-Learning Research and Assessment Network. It was a small LOR (a repository storing only metadata about LOs, not the LOs themselves) with approximately three hundred resources; its importance, however, rested on the fact that it was originally created for research purposes. The main focus of the repository was to provide mechanisms and tools for the collaborative and participative assessment of learning objects through the use of LORI. In eLera, members could create reviews of learning objects using LORI, and experienced members could moderate teams of members in a collaborative online review process where reviewers discussed and compared their evaluations [16] (see Figure 2). Besides, members could also add resources to their personal bookmarks, allowing eLera to recommend materials not only by their associated ratings but also by their popularity.
Figure 2. An eLera request for review (left) and distribution of ratings on a LORI item (right), taken from Nesbit & Li [16]

2.2 Connexions
Connexions is a repository that allows users to collaboratively create and share learning materials and that has presented an exponential growth of contributors in recent years. According to [17], such success can be attributed to the fact that, differently from traditional LORs, Connexions functions through "social interaction for the creation of materials", where all materials are created by its own community of members. This community can develop materials in two formats: modules (small pieces of knowledge) and collections (groups of modules structured into course notes). In Connexions, every material available is free to use, reuse and share with others under a Creative Commons (http://creativecommons.org/) license.

Quality in Connexions is approached through a system called lenses (see Figure 3) that arranges resources according to evaluations provided by individuals and organizations [18]. In this context, resources are explicitly endorsed by third parties and gain higher quality assurance as they accumulate more endorsements (lenses) from others. Moreover, Connexions also provides mechanisms to sort materials by their number of accesses over time and by the ratings given by users. Recently, Connexions has also integrated plugins for two popular and successful social-interaction tools (Facebook and Twitter) into the repository, allowing the community of users to recommend and disseminate materials across these social platforms.

Figure 3. Connexions repository – lenses display

2.3 Organic.Edunet
Organic.Edunet (portal.organic-edunet.eu) is a federation of repositories funded by the European Union and focused on content exclusively related to Organic Agriculture and Agroecology. Even though it is a very recent repository (launched in 2009), it already has approximately 2,500 users and 11,000 resources. The importance of Organic.Edunet also lies in the fact that it is a semantic LOR (SLOR), thus allowing users to perform semantic searches for materials.

In Organic.Edunet, quality is assured by the community of users, who are allowed to rate the resources and to improve their metadata translations (the portal is multilingual: the interface is available in nine languages and the metadata about the resources is manually translated into up to eight languages) (see Figure 4). Moreover, any user can give the portal direct feedback about a given resource, as well as report inappropriate content.
Figure 4. Educational resource at Organic.Edunet

2.4 Graphite
Graphite is a relatively new repository that stores information about learning resources and is supported by Common Sense Media (http://www.commonsensemedia.org/). In Graphite it is possible to find websites, games and apps that are officially rated by a board of editors and reviewers of the portal. As the portal is built by teachers and for teachers, this community is also allowed to rate and comment on the resources, adding impressions of their usage in the classroom (the so-called field notes). The average teacher ratings are then displayed together with the official ratings (see Figure 5). The evaluations range from 1 to 5 and indicate the learning potential of the resources (not for learning, fair, good, very good, excellent) along three learning dimensions:
1. Engagement (whether the resource holds learners' interest);
2. Pedagogy (whether the product carries content central to the learning experience);
3. Supports (whether the resource provides appropriate feedback, and whether there is support for teachers and learners).

Figure 5. List of resources in Graphite

Resources in Graphite are classified/tagged according to their subjects (Language and Reading, Math, Science, Social Studies, Arts, and Hobbies) and the skills that the resource facilitates (thinking and reasoning, creativity, self-direction, emotional development, communication, collaboration, responsibility and ethics, technical skills, and health and fitness). Each resource review also contains comments about the pros and cons of the resource and about how the resource works.

Figure 6. Quality Evaluation in Graphite

2.5 MERLOT
The Multimedia Educational Resource for Learning and Online Teaching (MERLOT, www.merlot.org) is a well-known and recognized international initiative that allows users to catalogue educational resources, aiming to facilitate the use and sharing of online learning technologies [19]. It is developed by the California State University Center for Distributed Learning and stores metadata of over 30,000 materials distributed across several areas (Arts, Business, and Humanities, among others). Its community of users comprises about 100,000 members. As MERLOT does not store LOs locally, it can be considered a referatory.

The MERLOT repository introduced a post-publication peer-review model in order to assure the quality of its catalogued resources [19]. The materials catalogued in MERLOT are peer-reviewed by different experts in the discipline domain according to a formal and pre-defined evaluation criterion that addresses three different aspects:
1. Quality of Content;
2. Potential Effectiveness as a Teaching Tool; and
3. Ease of Use.

After the peer-reviewers report their evaluations, the chief editor composes a single report, which is published in the repository with the authorization of the authors. In addition to the peer-reviewers' evaluations, MERLOT also allows the community of users to provide comments and ratings for the materials, complementing its evaluation strategy with an alternative and more informal mechanism. The ratings of both (users and peer-reviewers) range from 1 to 5, with 5 as the best rating.
Moreover, MERLOT also allows users to bookmark resources in so-called Personal Collections, providing a way of organizing their favorite materials according to their individual interests [20]. Finally, MERLOT annually gives a special award (the MERLOT Classics Award) to outstanding materials according to the program criteria of the disciplines (see Figure 7). All of this evaluative metadata together is used to sort learning materials every time a user performs a search in the repository.

Figure 7. The MERLOT repository (Arts discipline learning materials)

MERLOT is particularly peculiar in the sense that ratings are gathered from two well-defined and different groups of people (the general public and experts), who possibly come from distinct backgrounds and may have divergent opinions with respect to quality. In fact, these differences between reviewer groups can be considered a strong point of the adopted approach, as they provide complementary views on the same subject. In the next subsection we briefly describe the main characteristics of, and differences between, these two approaches.
2.6 Peer-Review and Public-Review
Peer-review is conventionally known as the process of assessing a scientific paper or project idea through the critical examination of third parties who are experts in the same work domain. This system is widespread in the process of publishing papers in journals and conferences, where the work under evaluation is submitted to a chief editor who asks a group of fellow experts to review it, in order to obtain advice about whether or not the article should be accepted for publication, and about what further work is still required in the case of acceptance [21]. In the most widely adopted form of peer-review, the identity of the reviewers is hidden from the authors, as well as from the other reviewers. The defenders of peer-reviewing claim that this kind of professional approval serves as a way of assuring the quality of published papers. However, the system is not free from criticism: conflicts of interest, biases of the peers, unnecessary time delays, and the inability to detect fraud have all been mentioned as possible shortcomings of the peer-review process [22]. In any case, and despite the controversies regarding its efficiency, the peer-review system remains the cornerstone of quality assurance in the academic field, and has also entered the scene of educational resources after its implementation in MERLOT.

On the other hand, public-review is widely diffused in areas such as online vending (e.g., Amazon, eBay) and several communities of interest (e.g., IMDb, YouTube, RYM). In these, users normally benefit from comments and ratings given by the community through the use of recommender systems (such as collaborative filters) which, based on the comparison of user profiles and the correlation of personal tastes, provide personalized recommendations of items and products that will probably be of interest [23] (a minimal sketch of this idea is given after Table 1). In this kind of social system, the motivations and goals behind users' participation vary significantly, from the desire and need for social interaction to professional self-expression and reputation benefits [24]. Table 1 explores some other aspects that normally differentiate standard peer-review and public-review systems.

Table 1. Different aspects involving peer-review and public-review

Aspect | Peer-Review | Public-Review
Evaluator background | Expert in the field domain | Non-expert
Existence of official criteria or metrics | Yes | No/Sometimes
Community of evaluators | Restricted | Wide open
Common models | Pre-publication | Post-publication
Domain | Scientific field, journals and funding calls | Online vendors, communities of interest
Motivation | Prestige, fame, determining the quality and direction of research in a particular domain, obligation | Desire and need for social interaction, professional self-expression, reputation
Communication among evaluators | Not allowed | Encouraged
Selection of evaluators | Editor responsibility | None
Financial compensation | Normally none | None
Time taken for the evaluation | Typically slow | Typically fast
Level of formality | Formal process for editing and revision | Informal
Author's identity | Masked | Non-masked
Requirements to be a reviewer | To be an expert in the field and to be invited | Creation of a member's account
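Since collaborative filtering underpins the public-review recommendation scenario described above, the sketch below illustrates the basic idea: predict a user's missing rating for a resource from the ratings of users with correlated tastes. It is a minimal, self-contained illustration (the rating matrix and the plain cosine-similarity scheme are our assumptions), not the algorithm of any particular repository or vendor.

```python
import numpy as np

# Hypothetical user-item rating matrix (rows: users, columns: resources; 0 = unrated).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def predict(R, user, item):
    """User-based collaborative filtering: weight other users' ratings of `item`
    by their cosine similarity to `user`, computed over co-rated items."""
    target = R[user]
    weighted_sum, weight_total = 0.0, 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        mask = (target > 0) & (R[other] > 0)  # items rated by both users
        if not mask.any():
            continue
        a, b = target[mask], R[other][mask]
        sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        weighted_sum += sim * R[other, item]
        weight_total += abs(sim)
    return weighted_sum / weight_total if weight_total else None

print(predict(R, user=0, item=2))  # low (~2.2): user 0's closest neighbour rated it 1
```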
3. Existing strategies for evaluation inside LORs versus automated evaluation
Although current strategies for evaluation inside repositories can be considered successful to some extent, the amount of learning objects inside repositories tends to grow faster than the capacity of the community to evaluate them [25]. This condition makes it impractical to rely only on human effort to identify good-quality resources, as it becomes impossible to provide evaluative metadata for every single resource in the repository, thus leaving many resources in current repositories without any measure of quality at all. This situation has raised concern about the development of new automated techniques and tools that could complement the existing approaches in order to relieve manual work. The current abundance of resources inside repositories [26] and the availability of contextual evaluations in some of them have opened the possibility of seeking intrinsic metrics of learning objects that could be used as indicators of quality. This means that learning objects can be "mined" and quantitative measures of good and not-good resources compared in order to discover intrinsic attributes associated with quality, thus allowing the creation of statistical profiles of good and poor resources that can serve as the basis for quality prediction.

Even though automated analysis cannot replace traditional inspection techniques, it carries the potential of offering an inexpensive and time-saving mechanism to explore the quality of materials a priori, therefore complementing other existing approaches. The deployment of such automated tools would certainly improve the general quality of the services provided by repositories regarding the processes of searching, selecting and recommending good-quality materials. Contributors could, for instance, benefit from such a new feature by evaluating the quality of their resources beforehand, which would allow them to improve the resources using the quality metrics referenced by the tool.

Initial work in this direction has been developed by [27], who proposed a complementary approach to automated evaluation that relies on data that can be directly extracted from the learning resources themselves, in combination with evaluative metadata. The main advantage of such a proposal is to offer a tool that is able to assess the quality of new resources inserted in the repository without the need for annotations about them. The authors offered the very first foundations for the development of such a tool by contrasting intrinsic metrics of highly-rated and poorly-rated learning objects stored in the MERLOT repository and identifying which metrics are most strongly associated with rated resources in the context of that repository. In their work, they found that the tested metrics present different profiles and tendencies between good and not-good materials depending on the discipline category and the type of material to which the resource belongs. For instance, positive correlations were found between the Number of Images and highly-rated learning resources in the disciplines of Education, Mathematics and Statistics, and Science and Technology, and for the Number of Applets in the Business discipline [27]. Moreover, they built a Linear Discriminant Analysis model based on the metrics that was able to distinguish between good and not-good materials (for the Science and Technology discipline and the Simulation type) with an overall accuracy of 91.49%, a remarkable achievement for a preliminary attempt at automated evaluation (a schematic sketch of this metric-based classification idea is given at the end of this section).

In another experiment [28], the authors tested a slightly different and more algorithmic approach, i.e., the models were generated exclusively through the use of data mining algorithms. Among other good results, one can mention the model for Humanities ∩ Simulation, which was able to classify good resources with 75% precision and not-good resources with 79%, and the model developed for Mathematics ∩ Tutorial, with 79% precision for classifying good resources and 64% for classifying not-good ones.

The same approach was tested on the Connexions repository in [29]; however, the generated models presented poor performance for classifying resources. According to the authors, this may be a consequence of the small size of the population of resources that had evaluative metadata (endorsements). It is therefore still necessary to wait for the growth of endorsements in the repository in order to better evaluate the feasibility of creating models that automatically classify resources according to their amount of endorsements. Whether or not the methodology can be extrapolated to other repositories is still a subject for further investigation and research. In the mentioned works, the authors relied on information (discipline categories, types of materials, peer-reviewer and user ratings, endorsements) that is not necessarily available in other learning resource repositories. In cases where some of this information is not available, alternative ways of gauging LO quality must be found to contrast with the metrics for the establishment of these profiles, such as, for instance, the use of ranking metrics [30] or other kinds of evaluative metadata available in such repositories.
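The following schematic sketch shows how such a metric-based classifier could be set up. It is our own illustration under stated assumptions (the metric names, the toy data and the use of scikit-learn are hypothetical), not the actual pipeline of [27] or [28], which worked with real MERLOT metrics and evaluations.

```python
# Schematic sketch: classify LOs as good / not-good from intrinsic metrics
# using Linear Discriminant Analysis, in the spirit of [27].
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each row holds intrinsic metrics mined from one LO:
# [number_of_images, number_of_links, text_length_in_words] (hypothetical metrics)
X = np.array([
    [12, 30, 800], [9, 22, 650], [15, 35, 900],   # examples of highly-rated LOs
    [1, 2, 120],   [0, 5, 300],  [2, 1, 90],      # examples of poorly-rated LOs
])
y = np.array([1, 1, 1, 0, 0, 0])  # labels derived from evaluative metadata

model = LinearDiscriminantAnalysis().fit(X, y)
new_lo = np.array([[10, 25, 700]])  # metrics mined from an unrated resource
print(model.predict(new_lo))        # predicted quality class, e.g. [1]
```

In practice, such a model would be trained per discipline and material type, since [27] found that the metric profiles of good and not-good resources differ across those categories.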
4. Final Remarks
Evaluating the quality of learning objects is a difficult task that normally involves several distinct aspects and different stakeholders, and the existing learning object evaluation methods and frameworks are not free from ambiguities. Learning object repositories are increasingly adopting strategies that rely on the community of users and experts, who assess the quality of the resources by rating and commenting on them. Such evaluations can be performed according to a formal and pre-defined evaluation criterion that addresses specific aspects, or in a more informal way without pre-defined specifications. The resulting set of evaluations is then used by LORs to facilitate the process of searching and ranking resources, and is considered a social body of knowledge that serves as an external memory aid for individuals who navigate such portals. The existence of such evaluations also opens the possibility of the future implementation of personalized recommendations based on the preferences of users [31]. At the same time that current strategies have established themselves as the main alternative for quality evaluation inside repositories, they are still insufficient to cover the huge amount of resources that continuously grows in such platforms. Therefore, there is an urgent need for the development of alternatives that help boost the provision of quality information to complement existing manual strategies.
5. ACKNOWLEDGMENTS
This work has been funded by CYTED (Ibero-American Programme for Science, Technology and Development) as part of the project "RIURE - Ibero-American Network for the Usability of Learning Repositories" (code 513RT0471), and by the Núcleo de Estudios e Investigaciones en Educación Superior del Sector Educativo del MERCOSUR (Center of Studies and Investigations on Higher Education of MERCOSUR) as part of the project "Marcos regulatorios, Modelos institucionales y formatos didácticos: Dimensiones de la calidad en las Prácticas de Enseñanza a Distancia en países del MERCOSUR" (Regulatory Frameworks, Institutional Models and Didactic Formats: Quality Dimensions of Distance Learning Practice in MERCOSUR Countries).

6. REFERENCES
[1] Vargo, J., Nesbit, J. C., Belfer, K. and Archambault, A. Learning Object Evaluation: Computer-Mediated Collaboration and Inter-Rater Reliability. International Journal of Computers and Applications, 25, 3 (2003), 1-8.
[2] Bethard, S., Wetzer, P., Butcher, K., Martin, J. H. and Sumner, T. Automatically characterizing resource quality for educational digital libraries. In Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries (Austin, 2009).
[3] Vuorikari, R., Manouselis, N. and Duval, E. Using Metadata for Storing, Sharing and Reusing Evaluations for Social Recommendations: the Case of Learning Resources. In Social Information Retrieval Systems: Emerging Technologies and Applications for Searching the Web Effectively. Idea Group Publishing, Hershey, PA (2008), 87-107.
[4] Williams, D. D. Evaluation of learning objects and instruction using learning objects. http://reusability.org/read/chapters/williams.doc, 2000.
[5] Custard, M. and Sumner, T. Using Machine Learning to Support Quality Judgments. D-Lib Magazine, 11, 10 (2005).
[6] Nesbit, J. C., Belfer, K. and Leacock, T. Learning Object Review Instrument (LORI), E-Learning Research and Assessment Network. Retrieved from http://www.elera.net/eLera/Home/Articles/LORI%20manual, 2003.
[7] Leacock, T. L. and Nesbit, J. C. A Framework for Evaluating the Quality of Multimedia Learning Resources. Educational Technology & Society, 10, 2 (2007), 44-59.
[8] Cechinel, C., Camargo, S. d. S. and Perez, C. C. Uma proposta para localização facilitada de Objetos de Aprendizagem. In Proceedings of the XVII Simpósio Brasileiro de Informática na Educação (Aracaju, 2011).
[9] Nesbit, J. C., Belfer, K. and Vargo, J. A Convergent Participation Model for Evaluation of Learning Objects. Canadian Journal of Learning and Technology, 28, 3 (2002).
[10] Collis, B. and Strijker, A. Technology and Human Issues in Reusing Learning Objects. Journal of Interactive Media in Education (JIME Special Issue on the Educational Semantic Web), 2004, 1 (2004).
[11] Brosnan, K. Developing and sustaining a national learning-object sharing network: A social capital theory perspective. In Proceedings of the ASCILITE 2005 Conference (Brisbane, Australia, 2005). ASCILITE.
[12] Monge, S., Ovelar, R. and Azpeitia, I. Repository 2.0: Social Dynamics to Support Community Building in Learning Object Repositories. Interdisciplinary Journal of E-Learning and Learning Objects, 4 (2008), 191-204.
[13] Han, P., Kortemeyer, G., Krämer, B. and von Prümmer, C. Exposure and Support of Latent Social Networks among Learning Object Repository Users. Journal of Universal Computer Science, 14, 10 (2008), 1717-1738.
[14] Wegner, D. M. Transactive memory: A contemporary analysis of the group mind. Springer-Verlag, New York, 1986.
[15] Lytras, M. D., Sicilia, M.-Á. and Cechinel, C. The value and cost of metadata. World Scientific Publishing Company, 2014.
[16] Nesbit, J. C. and Li, J. Z. Web-based tools for learning object evaluation. In Proceedings of the International Conference on Education and Information Systems: Technologies and Applications (Orlando, Florida, 2004).
[17] Ochoa, X. Connexions: a Social and Successful Anomaly among Learning Object Repositories. Journal of Emerging Technologies in Web Intelligence, 2, 1 (2010).
[18] Kelty, C. M., Burrus, C. S. and Baraniuk, R. G. Peer Review Anew: Three Principles and a Case Study in Postpublication Quality Assurance. Proceedings of the IEEE, 96, 6 (2008), 1000-1011.
[19] Cafolla, R. Project MERLOT: Bringing Peer Review to Web-Based Educational Resources. Journal of Technology and Teacher Education, 14, 2 (2006), 313-323.
[20] Sicilia, M.-Á., Sánchez-Alonso, S., García-Barriocanal, E. and Rodriguez-Garcia, D. Exploring Structural Prestige in Learning Object Repositories: Some Insights from Examining References in MERLOT. In Proceedings of the International Conference on Intelligent Networking and Collaborative Systems (INCoS '09) (2009).
[21] Harnad, S. The invisible hand of peer review. Exploit Interactive, 5 (April 2000). Retrieved from http://www.exploit-lib.org/issue5/peer-review
[22] Benos, D., Bashari, E., Chaves, J., Gaggar, A., Kapoor, N., LaFrance, M., Mans, R., Mayhew, D., McGowan, S., Polter, A., Qadri, Y., Sarfare, S., Schultz, K., Splittgerber, R., Stephenson, J., Tower, C., Walton, G. and Zotov, A. The ups and downs of peer review. Advances in Physiology Education, 31, 2 (2007), 145-152.
[23] Resnick, P. and Varian, H. R. Recommender systems. Communications of the ACM, 40, 3 (1997), 56-58.
[24] Peddibhotla, N. and Subramani, M. Contributing to Public Document Repositories: A Critical Mass Theory Perspective. Organization Studies, 28, 3 (2007), 327-346.
[25] Cechinel, C. and Sánchez-Alonso, S. Analyzing Associations between the Different Ratings Dimensions of the MERLOT Repository. Interdisciplinary Journal of E-Learning and Learning Objects, 7 (2011), 1-9.
[26] Ochoa, X. and Duval, E. Quantitative Analysis of Learning Object Repositories. IEEE Transactions on Learning Technologies, 2, 3 (2009), 226-238.
[27] Cechinel, C., Sánchez-Alonso, S. and García-Barriocanal, E. Statistical profiles of highly-rated learning objects. Computers & Education, 57, 1 (2011), 1255-1269.
[28] Cechinel, C., da Silva Camargo, S., Ochoa, X., Sicilia, M. and Sánchez-Alonso, S. Populating learning object repositories with hidden internal quality information. In Proceedings of the 2nd Workshop on Recommender Systems for Technology Enhanced Learning (RecSysTEL) (Saarbrücken, 2012). CEUR Workshop Proceedings.
[29] Cechinel, C., Sánchez-Alonso, S., Sicilia, M.-Á. and Azevedo Simões, P. Exploring the Development of Endorsed Learning Resources Profiles in the Connexions Repository. Springer Berlin Heidelberg, 2011.
[30] Ochoa, X. and Duval, E. Relevance Ranking Metrics for Learning Objects. IEEE Transactions on Learning Technologies, 1, 1 (2008), 34-48.
[31] Cechinel, C., Sicilia, M.-Á., Sánchez-Alonso, S. and García-Barriocanal, E. Evaluating collaborative filtering recommendations inside large learning object repositories. Information Processing & Management, 49, 1 (2013), 34-50.
Virtual Learning Environment adoption and organizational change in Higher Education

Virginia Rodés
Alén Pérez Casas
Departamento de Apoyo Técnico Académico, Programa de Entornos Virtuales de Aprendizaje, Comisión Sectorial de Enseñanza, Universidad de la República
José E. Rodó 1854, 11200 Montevideo, Uruguay
+598 2408 0912
[email protected]
[email protected]

Natalia Correa
Gabriel Budiño
Facultad de Ciencias Económicas y Administración
Gonzalo Ramírez 1926, 11200 Montevideo, Uruguay
+598 2411 88 39
[email protected]
[email protected]

Leticia Lorier
Facultad de Información y Comunicación
José Leguizamón 3666, 11200 Montevideo, Uruguay
+598 2628 96 49
[email protected]

Manuel Podetti
José Fager
Departamento de Apoyo Técnico Académico, Programa de Entornos Virtuales de Aprendizaje, Comisión Sectorial de Enseñanza, Universidad de la República
José E. Rodó 1854, Montevideo, Uruguay
+598 2408 0912
[email protected]
[email protected]
ABSTRACT
The growing use of virtual environments in higher education results in the need to identify different management models that can guide their use in a manner consistent with institutional conditions.

To this end, a case-study interdisciplinary research project was conducted between 2011 and 2013. It focused on the analysis of management, from an organizational perspective, of the institutional change effected by the adoption of Moodle at the Universidad de la República.

The research methodology included qualitative and quantitative techniques to collect information: focused interviews, surveys and participatory work dynamics with teachers, students, government officials and experts, as well as the analysis of documentary sources.

Conclusions were drawn regarding the perceptions of students, teachers and government officials, conclusions that make it possible to identify various management aspects and that contribute to the assessment of the impact of Moodle as a VLE (Virtual Learning Environment) at universities.

Through the analysis and descriptive reading of the sources, it was possible to create the following conceptual categories: 1) Development of Human Resources; 2) Changes in Management and Organizational Structure; 3) Design and Implementation of Institutional Policies; 4) Technological Infrastructure; 5) Transformation of Education.

This research project makes it possible to understand the strengths and weaknesses of the process studied, and also to build a conceptual perspective that is theoretically grounded in the subject at hand. Thus, important contributions can be made to the field and to practice. The results obtained through this research are a significant antecedent and theoretical-methodological contribution for the diagnosis, design, planning and implementation of institutional strategies related to the adoption of Virtual Learning Environments in higher education.

Categories and Subject Descriptors
K.4.3 [Organizational Impacts]: Employment

General Terms
Management, Human Factors, Theory.

Keywords
Virtual Learning Environment, Organizational Change, Higher Education.

1. INTRODUCTION
Since 2000, the Universidad de la República has been working on the integration of Information and Communications Technology (ICT) in education. This led to the creation of the Virtual Learning Environment (EVA, its acronym in Spanish) in 2008, developed on Moodle. The initiative has had a major impact on teaching activities at the university since the widespread adoption of EVA by students and teachers. Nowadays, the Moodle platform at the Universidad de la República has more than 100,000 users in total. In 2011 the Programme of Virtual Learning Environments (PROEVA, its acronym in Spanish) was created at the Universidad de la República. PROEVA aims to promote the widespread use of Virtual Learning Environments at the Universidad de la República to support the extension of active education throughout the national territory. The short-term, mid-term and long-term effects of the actions proposed in this programme will contribute towards the fulfilment of a growing demand in higher education, the improvement of the quality of education, the bridging of the digital and geographical divide, and the integration of university functions.

In this article we present the main findings of a research project [1] aimed at describing, organizing and comparing the characteristics of the organizational models of institutional change for the integration of Moodle as a Virtual Learning Environment at the Universidad de la República. There are very few studies from the organizational perspective of educational and technological innovation in higher education in Latin America [2], [3], [4]. Furthermore, the models proposed are those of universities from Europe or the United States. Several studies on the processes of integration of ICT in universities [5], [6] analyse the changes in the internal dynamics: how university teaching is planned and developed, academic administration and student services, and research and dissemination activities. They show the decisions taken by university governing bodies that were necessary to reach the stage of habitual use of ICT at their institutions. They conclude that the introduction of ICT at university level has been conducted without strategic planning, and that its use has come as a result of external demand. Furthermore, they state that the use of ICT, particularly the Internet, is radically transforming the institutional dynamics of universities: their structure, the way they plan and teach lessons, academic management and administration, and research and knowledge dissemination. They highlight the need to identify tested models so as to guide strategic planning and decision-making, establishing levels of institutional leadership and also the perspective of the actors, so that innovation adapts to the idiosyncrasy and style of the institution.

Our research was conducted between 2011 and 2013. Its aim was to identify and analyse various aspects or dimensions of the process of adoption of Virtual Learning Environments at university level. Three schools were taken as case studies: the School of Social Sciences, the School of Economic Sciences and Administration, and the School of Information and Communication. Some of the dimensions analysed were:
• strategic decisions made by institutional actors in connection with the Programme of Virtual Learning Environments for the introduction and educational use of technologies,
• investments made in technological infrastructure and how this technology has been used,
• transformation of the corresponding academic services on account of the widespread educational use of the technologies,
• actions to raise awareness, to motivate and to train people in the use of technologies in education,
• degree of incorporation and acceptance by the different actors and the impact of the actions implemented,
• institutional organisation models for the incorporation of technologies at university level.

This study focused on the cases observed from the perspective of the main actors: teachers, students and other institutional actors that have participated in the process of adoption of educational technologies at university level. The study had two main aims: the historicization of processes [7] and the impact on institutional change [8] (at different levels: technological, educational and organizational). The historicization was conducted through in-depth interviews to identify the milestones and components of the process, through an approach which enables actors to review practices [9]. The impact was analysed through surveys conducted among teachers and students, with the subsequent implementation of participatory work dynamics. The data were analysed following the methodological approach of grounded theory [10]. Through the analysis and descriptive reading of the sources, it was possible to create the following conceptual categories [11]: 1) Development of Human Resources; 2) Changes in Management and Organizational Structure; 3) Design and Implementation of Institutional Policies; 4) Technological Infrastructure; 5) Transformation of Education. The knowledge generated provides development models, which makes it possible to transfer them to other universities in similar situations. In the following section we present a characterization of the main components of the institutional and organizational change for the incorporation of virtual learning environments at university level.
2. A CONCEPTUAL MODEL FOR THE INTEGRATION OF THE EDUCATIONAL USE OF TECHNOLOGIES IN A UNIVERSITY SETTING
In this section we present a theoretical framework devised after analysing the data. It enables us to show the components of the organizational processes for the integration of the educational use of technologies, and specifically for the adoption of Moodle as a virtual learning environment in a university setting. The following components are identified:
1. Phases of the adoption process
2. Institutional actors
3. Institutional contexts
4. Infrastructure
5. Development of Human Resources
6. Academic transformation

2.1 Phases of the adoption process
Based on our analysis, we have identified three phases in the process of adoption of technological innovations, for which the following names are proposed [12]: Initial Phase, Generalization Phase and Institutionalization Phase.

Figure 1. Phases of the adoption process.

2.1.1 Initial Phase
In this phase, processes are introduced gradually and are expected to be sustainable. There are strong personal leaderships and a novel formation of teams for technical and teaching support. The key role of the central driving force at the university and of the authorities is important. Success depends on the existence of a favourable link with previous experiences, and on strong coordination with general initiatives from an integral approach (technological, educational and organizational dimensions). Intersectoral articulation of educational settings (didactic-pedagogical advice, and IT advice and management) is necessary. This stage is characterized by the adoption of the technology by innovative and enthusiastic actors, the so-called "early adopters". Successful experiences require a careful alignment with curricular changes and with educational practices. It is necessary to provide financial resources as an incentive to teachers, as well as to define institutional guidelines and protocols.

2.1.2 Generalization Phase
This stage is characterized by the design of guidelines and strategic planning. The topic moves from the setting of those who are innovative, enthusiastic and committed to the agenda of political decision-makers. Different coordinated and joint actions are promoted at various organizational levels. Innovations are adopted by a majority of actors, more educational uses are adopted, and there is gradual and exponential growth. Processes mature through the standardization of criteria, the quality of contents and educational practices. The quality of the actions is measured by the increase in active use by teachers and students. Obstacles appear in the form of resistance to change, while the will and the accumulated experience of institutional actors act as driving forces. Prospective visions start to appear.

2.1.3 Institutionalization Phase
This phase is characterized by the formalization of specific roles and functions related to the management of integrated technological systems. Clear definitions of institutional policies are presented, in a search for economic, organizational and educational models for the sustainability of the innovations. Resources are adapted to the demand and specific regulations are created. This is expressed in the allocation of the budget and inclusion in the government agenda. This is the stage in which innovations are adopted by the laggards, thus achieving universalization.
2.2 Actors
In the case of innovation processes at universities, we identify the following actors, along with the changes related to them with regard to the emergence of new roles, tasks and work models, the integration of traditionally unrelated sectors, new models of institutional leadership, and the impact on institutional culture.

Figure 2. Actors of the adoption process.

2.2.1 Management teams
These are the people and groups who provide technological, organizational and educational support to the other actors in the institution so as to promote innovation. Their main aim is to be gradually recognized by recipients as agents that promote institutional change. It is important for them to be interdisciplinary and to be formed by different people (teams rather than a single person), and for them to gradually adopt a formal format (specific positions and functions) within the academic and technical structure of the institution.

2.2.2 New roles
New roles appear in terms of the new tasks and activities necessary for the adoption of technologies in higher education. Specifically, it is the teacher who incorporates new roles related to new tasks. The figures of the tutor and the content creator appear, especially the multiplier agent among peers. Furthermore, roles related to advice, management and systems administration become significant.

2.2.3 Organization of work
The organization of academic work is affected as new tasks appear: management, administration, content and course creation, and services for student users. All of these require planning by teaching teams so that responsibilities can be allocated. Furthermore, previous tasks are redistributed. All this contributes to the internal movements of transformation of academic practices and of the ways in which teams relate to each other.

2.2.4 Sectors
Sectors that are traditionally unrelated, such as teams connected with pedagogic-didactical advice and those providing technical and IT advice, begin to work jointly, exploring new potentials regarding human resources and institutional materials. Furthermore, academic and administrative sectors come together to attend to the demands posed by the virtualization of courses.

2.2.5 Leadership
Institutional leadership flows in new and innovative patterns. There are not only top-down models (government, authorities); rather, the academic sector appears and the teachers become crucial: innovative teachers become referents and, from their enthusiastic and committed practice, exert their influence, thus affecting, in a longitudinal way, institutional structure and practices.

2.2.6 Institutional culture
Tensions between obstacles to transformation and facilitators appear in the dialogue between the institutional culture and the innovation process. Institutional style, tradition, educational model, inertia and resistance to change are all expressed through the specific symbolic tensions of the old/new and old/young polarities.

2.3 Contexts
Contexts are the physical environments or conditions where actors perform the actions according to the roles they have been allocated.

Figure 3. Context of the adoption process.

2.3.1 Institutional contexts
Spaces of relevance and action within the university institution:
• University: higher education/university institution which includes several schools, research centres, learning centres, departments and offices, among others, and where academic degrees are awarded: graduate and postgraduate degrees respectively.
• School: group of schools, departments and institutes joined under the same representation.
• Faculty/Department or Area: the space where the group of areas within the academic structure is represented.
• Programme: unit within a given plan of studies.
• University libraries: devoted to the purchase, conservation, study and exhibition of books and documents for the university community.
• Copying centres: in charge of copying educational materials, guides and teaching notes, and books; in the cases analysed, they are related to student centres.
2.3.2 External contexts
• National, regional and international academic networks: an academic network can be conceived as a mechanism of support and exchange of information, supported by a community of horizontal communication whose base is a social network. It is there that synergy arises, through the interactions among links, dynamics, common interests and meeting points (nodes), so as to create knowledge and join in the search for or creation of solutions for a given topic or problem. Its importance lies in the fact that it allows academics to work flexibly and cooperatively through integration in academic, scientific, technical, social and cultural development in a community, team, group or region. It can be formed by institutions, secretariats and research centres, thus facilitating the exchange of information and knowledge, as well as promoting reflection processes.
• Educational policies: policies that are promoted by the State and by other national settings to boost education.

2.4 Academic Transformation
The impact of the integration processes of technologies within the framework of university transformation. Technology serves as support for the generalization of higher education and its educational use. Here we include the new ways of teaching in the dimension Course Modalities (blended and distance) and the participation of EVA and its use for lifelong education, postgraduate and graduate courses, and Virtual Tutorships. Regarding the Educational Use, we analyse dimensions related to Student Use, Teacher Use (transformations in teaching practices, types of use: repository of materials, use of forums, participation in forums, use of chat rooms) and the Assessment made by the actors regarding the educational use of EVA.

Figure 4. Academic transformations.

2.4.1 Attention to teaching problems
The use of technologies as a tool to overcome teaching problems (overpopulation, dropout rates and retention strategies).

2.4.2 Teacher use
The modalities which teachers can adopt to use technologies: tutorships, content creation, communication with students, follow-up and evaluation. These new modalities have an impact on the educational relationship.

2.4.3 Course modalities
The transformations in course modalities: the change from in-person teaching to the use of technologies that support teaching, and the diversification of teaching modalities, from blended formats to open strategies, distance modalities and telepresence.

2.5 Development of Human Resources
This includes the actions of training, awareness raising and support provided by the institution to promote the development and creation of skills in the human resources connected to innovation: teachers, students, administrative staff and leaders.

Figure 5. Development of Human Resources.

2.5.1 Training
This refers to the actions taken in the institutions to educate and train the actors, thus favouring the adoption of the educational strategies necessary to deal with innovations through the use of technology. It includes training, creation of courses and educational materials, and tutorships.

2.5.2 Awareness raising
It involves dissemination and awareness-raising of the initiative on a broad level, through the media (internet, social networks) as well as face-to-face (talks, conferences), together with key stakeholders in the institution, in order to motivate and involve all actors in the creation, use, promotion and adoption of technological innovations, as well as the benefits and opportunities they provide. It must be noted that the dissemination of innovative and/or technological matters requires different means and modalities so as to promote the adoption of the initiative, as there might be a number of people who are not related to the specific field.

2.5.3 Support
This refers to several technical support actions (IT, design, etc.) and pedagogic (didactical) support actions implemented in the institution to support the actions of the actors involved. It is here that potential financial, social and political incentives are incorporated, offered by the institution to provide sustainability and visibility to the actors' initiatives.

2.5.4 Recipients
The recipients of the actions of training, awareness raising, advice and support are, in the case of higher education institutions, teachers, students and administrative staff, as well as the actors that hold institutional and government leadership positions.

2.5.5 Impact
The impact of the actions of training, awareness raising, support and advice can be seen, on the one hand, in the quality of contents and courses, and on the other hand, in educational philosophy and practices.

2.6 Infrastructure and use of technologies
Technological infrastructure includes technological resources, their management, support, maintenance and investments, as well as the use of technology (access and educational use).

Figure 6. Infrastructure and use.

2.6.2 Management

2.6.5 Use
Definition of access criteria, building the demand to adapt resources to institutional needs. Design of specific policies for technology management: planning, budget allocation and processes of selection and purchase of technologies.

2.6.6 Investments
Short-term and long-term investment planning, planning of specific budget lines, and development and/or acquisition of technologies.

3. CONCLUSIONS
This research project makes it possible to understand the strengths and weaknesses of the process studied, and also to build a conceptual perspective that is theoretically grounded in the subject at hand. Thus, important contributions can be made to the field and to practice.

The results obtained are a significant theoretical-methodological contribution for the diagnosis, design, planning and implementation of institutional strategies related to the adoption of a Virtual Learning Environment in Higher Education.

4. ACKNOWLEDGMENTS
This work was made possible by funding from the Sector Committee for Scientific Research (CSIC), Universidad de la República, R&D Program, 2010, Uruguay. Our thanks to RIURE (CYTED 513RT0471 - Red iberoamericana para la usabilidad de repositorios educativos) for allowing us to continue and improve the research described in this work.

5. REFERENCES
[1] Rodés, V. (2013). Informe Técnico, Proyecto "Análisis de procesos de cambio organizacional para la incorporación del uso educativo de TIC en la Universidad de la República". Proyectos I+D, Llamado 2010, CSIC.
[2] Bacigalupo Acuña, C. and Montaño, V. (2005). "Modelo de incorporación de TIC en el proceso de innovación docente para la implementación de un b-learning". Didáctica, Innovación y Multimedia, N. 11 (2008). ISSN 1699-3748. Available at: http://ddd.uab.cat/pub/dim/16993748n11a2.pdf
[3] Luz Osorio, María Aldana (2008). "Diseño de lineamientos para la formulación de planes estratégicos de incorporación de TIC en IES colombianas". In: Redes, comunidades de aprendizaje y tecnología móvil, Universidad del Norte, p. 20-40, v. 1.