Abstract
The presented system and approach facilitate intelligent, contextualized information access for learners based on automatic analysis of learning videos. The underlying workflow starts with automatically extracting keywords from learning videos, followed by the generation of learning material recommendations. The approach has been implemented and investigated in a user study in a real-world VET setting. The study investigated the acceptance, perceived quality and relevance of automatically extracted keywords and automatically generated learning resource recommendations in the context of a set of learning videos related to chemistry and chemical engineering. The results indicate that the extracted keywords are in line with user-generated keywords and summarize the content of the videos quite well. They can also be used as search keys to find relevant learning resources.
Keywords: Video analysis, Content analysis, Keyword extraction, Learning resource recommendation, Vocational education, Digital transformation
Introduction
The ongoing digital transformation of industrial work aims at reaching new levels of process automation [1] and comprises the adoption of digital technologies in business processes, organizational structures, business domains and society as a whole [2]. The digital transformation demands major changes in habits and ways of working and calls for new digital competences and digital literacy in newly evolving skill profiles. This needs to be reflected in vocational education and training (VET). Major companies such as Evonik1 have started to equip their apprentices with digital tools such as mobile devices to support mobile learning using digital internet and communication technologies in workplaces and training settings. This comes with specific challenges but also has a potential for triggering innovation in VET. The adoption of new digital technologies, particularly when pioneering digital initiatives inside an organization, may lead to a more heterogeneous and scattered landscape with different digital islands that are not well connected. Beyond technical interoperability, the point is to integrate learning and instructional processes in a technical environment that enables “educational interoperability” [3].
Videos are an increasingly popular media format and have a high potential to support and enrich professional training and education [4]. This is not limited to high-end, polished materials, but can also include learner-generated videos [5]. Such learner-generated videos have been analyzed using semi-automatic methods combining human coding and automatic content analysis, e.g. in order to detect missing pre-knowledge and misconceptions [6]. To extract semantic concepts and relations, DBpedia Spotlight uses NLP techniques to spot (compound) terms in texts and to relate them to structured data from DBpedia [7]. Similar techniques have been applied to discover learning resource recommendations based on textual learning materials [8]. However, many recommender systems in the context of learning materials take user preferences such as the rating of materials into account [9]. Consequently, such approaches have difficulties providing good recommendations if the amount of data, particularly of user ratings, is low. Content-based recommender systems typically analyze item descriptions and often transform these into a vector space model, where the recommendation quality depends heavily on the quality of the description [10]. According to Kopeinik, Kowald, and Lex, standard recommendation algorithms are not well suited to provide learning resource recommendations [11]. Particularly for sparse-data learning environments, they propose ontology-based recommender systems to better describe the learners and learning resources. Semantic technologies using AI help to automatically process content given by the learning context or learner-generated artefacts [12].
The research presented in this paper stems from an academia-industry cooperation with Evonik, a large company in the chemical industry. The overarching goal of this cooperation is to design and implement well-adapted technologies to support digitalization in VET. Video-based learning was taken as a starting point in a joint endeavour between in-company instructors, apprentices and researchers. Videos have been created by apprentices as well as instructors and are shared on a collaboration platform. Automatic information extraction from these learning videos is used as a key to recommending relevant learning materials as a value-adding function.
Video-Analysis and Contextualized Information Access
Our approach to extracting information and generating recommendations from learning videos is based on the following general workflow: (1) segmentation of the video; (2) transcription (speech-to-text); (3) keyword extraction; (4) representation of the data. In the first step, the video file is segmented and de-multiplexed into separate video and audio streams. Second, each audio segment is transcribed using a speech-to-text API and the transcript is stored in the file system. Third, keywords are extracted from the transcripts using DBpedia Spotlight and the classical tf-idf measure. Finally, the keywords are represented in the learning environment, e.g. as lists of learning resource recommendations or interactive tag clouds.
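The tf-idf part of step (3) can be illustrated with a minimal sketch. This is not the authors' implementation (the system additionally matches candidates against DBpedia Spotlight); it only shows how terms that are frequent in one transcript but rare across all transcripts are promoted to keywords:

```python
import math
import re
from collections import Counter

def tfidf_keywords(transcripts, top_n=5):
    """Rank candidate keywords per transcript by a classical tf-idf score."""
    # simple tokenization: lowercase alphabetic tokens
    docs = [re.findall(r"[a-z]+", t.lower()) for t in transcripts]
    n_docs = len(docs)
    # document frequency: in how many transcripts does each term occur?
    df = Counter(term for doc in docs for term in set(doc))
    results = []
    for doc in docs:
        tf = Counter(doc)
        scores = {t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf}
        # highest-scoring terms become the keyword candidates for this transcript
        results.append([t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]])
    return results
```

Terms that occur in every transcript (e.g. stop words) receive an idf of zero and are pushed to the bottom of the ranking, so domain terms such as “oxidation” surface naturally.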
The recommendations for learning materials are generated through keyword-based search, in the sense of a content-based recommender system. This makes it easy to connect new as well as already existing search APIs to the learning environment; OpenSearch is one approach to integrating such an existing API. Using existing knowledge sources through already implemented management services is crucial for companies in order to preserve their predominance in a certain field of business. To discover learning resources, multiple searches with the extracted keywords are performed using the Google Custom Search API, followed by a ranking of the combined results. Each search can be parametrized, e.g., to prioritize certain domains in the results.
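The ranking of the combined results could, for example, be realized with reciprocal-rank fusion over the per-keyword result lists. The sketch below is a hypothetical illustration (the paper does not specify the ranking scheme used); `result_lists` stands for the URL lists returned by the individual keyword searches, best hit first:

```python
from collections import defaultdict

def rank_results(result_lists):
    """Merge per-keyword search results into one ranking.

    Each inner list is ordered best-first; a URL's score is the sum of its
    reciprocal ranks across all keyword searches (reciprocal-rank fusion),
    so items found by several keywords rise to the top.
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] += 1.0 / rank
    return [url for url, _ in sorted(scores.items(), key=lambda kv: -kv[1])]
```

A resource that appears in the results of two keyword searches thus outranks one that only matches a single keyword, which fits the goal of recommending materials related to the video as a whole.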
Evaluation
To evaluate the keyword extraction and recommendation mechanisms, an online questionnaire was set up. In it we included the recommender's results for four chemistry-related videos, which were presented in randomized order to reduce order effects.
Initially, the participants received a brief textual instruction. For each video, the participants were asked to watch the video before giving their own keyword suggestions, then to rate the quality of the extracted keywords and of the top 10 proposed learning resources. The keywords were to be rated as “important in relation to the topic”, “suitable but not important” or “irrelevant to the topic”. The learning resources could be rated as “suitable and helpful”, “suitable but not helpful” or “unsuitable”.
Subsequently, the participants were presented with a shortened version of the ResQue questionnaire [13]. The included items were related to the constructs Quality of Recommendation, Perceived Usefulness, Transparency and Attitudes. Finally, demographic questions regarding gender, age and occupation were asked. 32 apprentices completed the questionnaire (n = 32, 19 male, age range 16–31, M = 21.03, SD = 3.614).
Perceived Quality of the Extracted Keywords and Generated Information.
In the rating scales for keywords and resources, two of the three options can be considered positive (one of the two being more neutral than the other), while the third is negative. The combined score shows that, per video, the participants on average rated 87.74% of the keywords and 84.69% of the resources positively. The aggregated results are depicted in Table 1.
Table 1.
Rating of keywords and learning resources
| | Keywords | | | Resources | | |
|---|---|---|---|---|---|---|
| | Important in relation to the topic | Suitable but not important | Irrelevant to the topic | Suitable and helpful | Suitable but not helpful | Unsuitable |
| Video 1 | 61.98% | 28.13% | 9.90% | 55.31% | 30.00% | 14.69% |
| Video 2 | 58.81% | 30.40% | 10.80% | 45.00% | 32.81% | 22.19% |
| Video 3 | 48.38% | 30.93% | 20.69% | 62.81% | 26.88% | 10.31% |
| Video 4 | 66.25% | 26.10% | 7.66% | 57.19% | 27.50% | 15.31% |
| Overall | 58.86% | 28.89% | 12.26% | 55.08% | 29.30% | 15.63% |
Among the more poorly rated keywords are mainly those that are not directly related to the main topic. For example, in video 3, which deals with oxidation and has 29 keywords in total, the keywords ‘oxidation’, ‘chemical reaction’ and ‘oxygen’ are rated very positively, while ‘vitamin’, ‘candle wax’ and ‘English’ are rated rather poorly.
Evaluation of the Recommender System (ResQue).
The constructs measured by items adapted from ResQue each have a mean score of at least 3, as can be seen in Table 2. The low diversity score relates to an open question where a participant remarked that some of the resources had very similar content. The higher accuracy score fits the positive ratings of keywords and resources.
Table 2.
ResQue results
| | M | SD |
|---|---|---|
| Quality of Recommendation | 3.23 | 0.58 |
| Accuracy | 3.81 | 1.00 |
| Novelty | 3.09 | 1.09 |
| Diversity | 3.00 | 0.75 |
| Perceived Usefulness | 3.14 | 1.11 |
| Transparency | 3.50 | 1.19 |
| Attitudes | 3.40 | 0.72 |
Conclusion and Discussion
The results show that the majority of the proposed keywords and resources are rated positively. The videos, being informative but not very advanced, do contain excerpts that digress from the main topic. It is therefore plausible that the system finds some keywords that do not match the main topic and seem misplaced to participants.
The results of the ResQue questionnaire showed moderate but slightly positive scores. While this may be perfectly accurate, it should also be noted that the participants neither interacted with the tool themselves nor were informed in detail about the option to use the tool with media other than videos. Both factors might influence the ratings if they were included in further studies. Additionally, the ratings might be influenced by the trainees' state of knowledge.
Footnotes
Evonik Industries AG (2020). https://corporate.evonik.com/en. Retrieved: 2020-02-26.
Contributor Information
Cleo Schulten, Email: schulten@collide.info.
Sven Manske, Email: manske@collide.info.
Angela Langner-Thiele, Email: angela.thiele@evonik.com.
H. Ulrich Hoppe, Email: hoppe@collide.info.
References
- 1. Hirsch-Kreinsen, H.: Digitization of industrial work: development paths and prospects. J. Labour Mark. Res. 49(1), 1–14 (2016). doi: 10.1007/s12651-016-0200-6
- 2. Parviainen, P., Tihinen, M., Kääriäinen, J., Teppola, S.: Tackling the digitalization challenge: how to benefit from digitalization in practice. Int. J. Inf. Syst. Proj. Manage. 5(1), 63–77 (2017)
- 3. Milrad, M., Hoppe, H.U., Gottdenker, J., Jansen, M.: Exploring the use of mobile devices to facilitate educational interoperability around digitally enhanced experiments. In: Proceedings of WMTE 2004, pp. 182–186. IEEE Press (2004)
- 4. Erkens, M., Manske, S., Bodemer, D., Hoppe, H.U., Langner-Thiele, A.: Video-based competence development in chemistry vocational training. In: Proceedings of ICCE 2019 (2019)
- 5. Malzahn, N., Hartnett, E., Llinás, P., Hoppe, H.U.: A smart environment supporting the creation of juxtaposed videos for learning. In: Li, Y., et al. (eds.) State-of-the-Art and Future Directions of Smart Learning, pp. 461–470. Springer, Singapore (2016)
- 6. Erkens, M., Daems, O., Hoppe, H.U.: Artifact analysis around video creation in collaborative STEM learning scenarios. In: Proceedings of ICALT 2014, pp. 388–392. IEEE (2014)
- 7. Mendes, P.N., Jakob, M., García-Silva, A., Bizer, C.: DBpedia Spotlight: shedding light on the web of documents. In: Proceedings of the 7th International Conference on Semantic Systems, pp. 1–8 (2011)
- 8. Ahn, J.-W., et al.: Wizard’s apprentice: cognitive suggestion support for wizard-of-Oz question answering. In: André, E., Baker, R., Hu, X., Rodrigo, M.M.T., du Boulay, B., et al. (eds.) Artificial Intelligence in Education, pp. 630–635. Springer, Cham (2017)
- 9. Ghauth, K.I., Abdullah, N.A.: Learning materials recommendation using good learners’ ratings and content-based filtering. Educ. Tech. Res. Dev. 58(6), 711–727 (2010). doi: 10.1007/s11423-010-9155-4
- 10. Pazzani, M.J., Billsus, D.: Content-based recommendation systems. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) The Adaptive Web: Methods and Strategies of Web Personalization, pp. 325–341. Springer, Heidelberg (2007)
- 11. Kopeinik, S., Kowald, D., Lex, E.: Which algorithms suit which learning environments? A comparative study of recommender systems in TEL. In: Verbert, K., Sharples, M., Klobučar, T. (eds.) Adaptive and Adaptable Learning, pp. 124–138. Springer, Cham (2016)
- 12. Manske, S., Hoppe, H.U.: The “Concept Cloud”: supporting collaborative knowledge construction based on semantic extraction from learner-generated artefacts. In: Proceedings of ICALT 2016, pp. 302–306. IEEE (2016)
- 13. Pu, P., Chen, L., Hu, R.: A user-centric evaluation framework for recommender systems. In: Proceedings of the 5th ACM Conference on Recommender Systems, pp. 157–164. ACM (2011)
