Brian Marchini
Abstract
A critical component of education is assessment, which can be classified as either formative or summative in nature. Formative assessments allow instructors to adjust lessons and pacing to meet the needs of their students during an academic cycle, while summative assessments provide direction for improving curriculum over time. Educators have employed a variety of assessments, ranging from qualitative-focused assessments, such as written examinations and projects, to more quantitative-based assessments, which often employ a limited choice format such as multiple choice, fill in the blank, and true or false. While there is general consensus that limited choice assessments tend to provide limited insights, time constraints and class size often lead many educators to choose this format given its efficiency in grading and maintenance.
The application of game-based learning to education has shown positive results in terms of student comprehension, engagement, and overall enjoyment. While game-based approaches provide gains in these areas, developing gamified learning and assessment is often time consuming and, in many cases, lies outside the skill set of educators in the field today. While assessment tools such as Kahoot currently exist, their adoption as a replacement for traditional assessment has been slow.
Literature Review
Assessments are often broken down into categories based on format or operation. Black and Wiliam have written seminal works classifying assessments into the categories of formative and summative from an operational standpoint. Formative assessments are often defined as assessments used by instructors to adjust lesson dynamics and improve the learning process, while summative assessments provide an analysis of a student’s understanding of the concepts for use in ranking and grading. Formative assessments not only provide feedback to the instructor; they also form an important aspect of pedagogy in the form of student feedback. Black and Wiliam stress that formative assessments are most effective when they provide accurate insights into where students are struggling and need further review (Black & Wiliam, 2009). While there is debate regarding where to delineate summative from formative assessment, as their operations blur in practical use, there is common agreement on their efficacy when used correctly (Dunn & Mulvenon, 2009).
The most common type of assessment question used in higher education is the limited choice question, which includes multiple choice, true-false, and fill in the blank questions. Limited choice questions have many advantages. While developing effective limited choice questions takes time, these questions can be scored quickly and consistently, especially with large course sections. Furthermore, the finite number of possible answers reduces disputes over grading, in contrast to open ended questions, which often rely heavily on grader interpretation. Lastly, test bank sharing has reduced the development time of multiple choice questions (Roediger & Marsh, 2005, p. 1155).
Criticisms of Assessments
The efficacy of limited choice assessment testing has long been questioned. Whether assessment testing accurately measures a student’s understanding of material was the focus of a 2009 study by Gabriel Reich (Reich, 2009). Reich sought to measure the effectiveness of the New York State Global History and Geography Regents exam in a qualitative study of a heterogeneous class of tenth grade history students in the Greater New York metropolitan area using a combination of oral and written observations. Reich employed a 15-question multiple choice exam on concepts confirmed to have been covered in class while collecting data through “talk aloud” in-exam observations and post-test interviews (Reich, 2009, pp. 331-333). In some cases, students had a general understanding of the concept but chose the wrong answer. In other cases, students successfully answered a question using test-taking strategies and guessing. While the overall scoring provided a summative measure of the class as a whole, the accuracy on an individual level could vary greatly (Reich, 2009, pp. 335-347). Given the results of the analysis, Reich concluded that standardized assessment not only led to poor levels of measurement, but the weight assigned to these tests could lead an instructor to teach to the test rather than the material (Reich, 2009, p. 349).
The negative effects of multiple choice testing are not limited to its measurement of concept comprehension. Students who prepare for multiple choice assessments spend less time in preparation than those taking a written exam. While test banks allow for better questions through a collective effort, test bank security must be maintained to avoid question leaking and cheating. Measures employed to keep test banks secure have led many instructors to provide exam reviews only during limited office hours in order to prevent dissemination of the test bank to the student body (Roediger & Marsh, 2005, pp. 1155-1156).
Transformational Play
In 2010, Barab, Gresalfi, and Ingram-Goble published their findings on their theory of transformational play, an extension of John Dewey’s work on transactivity, in which learning is viewed as a series of transactions between the learner and the surrounding environment. Their theory of transformational play posits that learning can be enhanced when students are positioned within a situated, narrative game as protagonists whose choices and decisions affect the world around them with a degree of consequentiality (Barab, Gresalfi, & Ingram-Goble, 2010, p. 2). Their design environments utilized a narrative game platform, Quest Atlantis, with two different methods. The first method immersed students in the role of an environmental scientist charged with investigating recent declines in fish populations (Barab, Gresalfi, & Ingram-Goble, 2010, pp. 4-5). The sixth grade students in this study were given a pre-test and post-test on the material and split between a control and a treatment group, with the control receiving traditional instruction rather than game-based learning. The treatment group showed significant gains in learning as well as increased levels of engagement (Barab, Gresalfi, & Ingram-Goble, 2010, p. 6). The second method placed students in the role of an assistant to a doctor in a storyline paralleling Mary Shelley’s Frankenstein, with a focus on ethics in science (Barab, Gresalfi, & Ingram-Goble, 2010, p. 7). This method involved seventh graders and produced similar results, but in this case the assessments were more qualitative in nature, which suggests that the approach has the potential to be used across a variety of subject areas.
Game Based Learning
The Barab, Gresalfi, and Ingram-Goble study implies that game-based learning is an effective medium for learning across a variety of subject areas rather than being limited to a specific discipline. Furthermore, it shows that game-based learning increases levels of engagement using a platform traditionally viewed as a leisure activity. While there is growing interest in the gamification of learning, there is often disagreement over what constitutes game-based learning. Bressler and Bodzin published a study on the effects of an augmented reality mystery game on learning scientific methods, which indicated both an increase in student engagement and comprehension of the subject matter. While their application of game-based learning was effective, their study did not incorporate what we would traditionally consider a video game in the classical sense (Bressler & Bodzin, 2016). As demonstrated by this study, game-based learning does not necessitate the use of a video game; rather, it simply implies the incorporation of game features into learning.
Game Features
If we define game-based learning as the application of game elements to learning, what are the features that create an engaging game-based environment? King, Delfabbro, and Griffiths published a paper expanding Wood’s framework, which sought to identify the key psychological characteristics of video games. They break the features down into five main categories: social features, manipulation and control, narrative and identity, reward and punishment, and presentation (King, Delfabbro, & Griffiths, 2009, p. 93).
Social Features
Social features include off-game discussion sites where players can discuss strategies, tips, and experiences; a customizable avatar, which represents how the player would like to be seen by the community; and in-game communication in the form of gestures, dialog, and game-specific identifiers such as a pointer. To fill a player’s social need for community, video games rely upon leaderboards and a moderated gamer fellowship that encourages positive interactions. As a result, players develop a relationship with the game that encourages them to return often (King, Delfabbro, & Griffiths, 2009, pp. 94-95).
Manipulation and Control
User input ranges from joystick and keyboard input to newer forms of tactile input such as voice recognition, touch screen interfaces, and virtual reality interfaces. While each type of input has its own learning curve, users are often able to rapidly master their level of control in most environments. Save points and additional lives allow players to replay scenarios until tasks are successfully completed or to experience alternative outcomes, with the ability to replay or rewind difficult sections. Additional key player management features include heads-up displays (HUDs), inventories, avatar improvements, and skill development (King, Delfabbro, & Griffiths, 2009, pp. 94-97).
Non-controllable features of games often have a significant role in game manipulation and control. These are most often seen in “scripted events” that move a narrative story forward (King, Delfabbro, & Griffiths, 2009, p. 97). In addition to carrying narratives forward, these events may also improve the fluidity of a game when used in conjunction with loading screens during online connectivity or software loading scenes.
Narrative and Identity
Avatar creation and development often plays an important role in the game experience, leading players to increase their in-game level of engagement and to replay games with a different type of avatar. Among adolescents in particular, avatars can foster bonds between players as well as provide a form of self-identification. Storytelling increases the level of immersion within a game, with many games allowing the user’s choices to determine the scope and direction of the narrative. These components are further strengthened by the choice of theme and genre within the game (King, Delfabbro, & Griffiths, 2009, pp. 97-99).
Reward and Punishment
The majority of rewards in games are psychological in nature. These range from experience points, skill development, and in-game items with varying levels of scarcity to more physical rewards in the form of force feedback and auditory praise. In contrast to rewards, punishments often involve failures within a game, which can usually be overcome by simply replaying a scenario from a previous save point. Additionally, meta-game rewards such as achievements have the effect of increasing a player’s likelihood of continuing to play the game. The effect of these rewards is often amplified when they are given intermittently, drawing on operant conditioning theory (King, Delfabbro, & Griffiths, 2009, pp. 99-102).
Presentation
Graphics and sound are an integral part of many games. Not only do they increase the player’s level of immersion, but changes in tone and graphics can also affect the player’s level of engagement, with changes in soundtrack and background denoting a more difficult portion of the game. As players are conditioned to these visual and auditory cues, the cues can be used to control the mood of the experience for the player, even beyond the game itself. As a result, gamers often feel nostalgia when hearing particular tracks or seeing images from games they played years earlier (King, Delfabbro, & Griffiths, 2009, pp. 103-104).
Barriers to Implementation
In his review of the state of educational video games, John Rice explores many of the barriers preventing widespread adoption of this new educational medium (Rice, 2007). A difficult barrier to cross is the negative view that many traditional educators hold toward video games. Many educators also perceive themselves as battling for students’ attention against personal video games at home and even in the classroom. This negative view is furthered by a lack of standards for identifying which video games are most effective. Another hurdle is the time constraint placed on educators due to limited class periods. Lastly, the technical limitations of educators and long development times limit the volume of educational games available.
Gamified Assessments
New tools are being developed that apply features of game play to the assessment process. A popular example is Kahoot, a free, online assessment platform that is part limited choice assessment and part online game. Instructors can use this tool for a variety of assessments. Key features of Kahoot include a musical soundtrack, a gamified HUD, instant feedback, and a ranking system. Combined, these features are designed to create an assessment that is fun to play while promoting engagement. Instructors can upload questions using spreadsheets, share their assessments with others, and customize portions of the experience, and the platform provides basic metrics on student performance (Ismail & Mohammad, 2017, pp. 19-20). In a recent study of its application as a formative assessment, Kahoot was found to increase student engagement, but the tool was found lacking when used for more difficult levels of assessment (Ismail & Mohammad, 2017, pp. 24-25). Furthermore, such tools still contain the same flaws in assessment accuracy as those outlined earlier by Reich.
Call for Research
In contrast to traditional, limited choice assessments, what impact does gamified formative assessment using the Gamified Assessment Test for Education (GATE) have on student engagement and student achievement in higher education?
Given that the proposed system incorporates many of the features found in successful game platforms, and given the increased levels of engagement garnered by games, we expect a similar increase in engagement with this approach in comparison to traditional methods of assessment. In addition to an improved student experience, we also anticipate positive feedback from instructors, given the focus on ease of use and subject portability as well as the ability to draw deeper learning insights from the spectral approach to question answering. As a result of increased levels of engagement, we anticipate an increase in overall summative grading results on course finals, which we will use as a measure of efficacy.
Given the need to increase levels of engagement amid a dramatic shift toward online and distance learning, which is often self-paced in nature, more research is warranted in this subject area.
Works Referenced
An, D., & Carr, M. (2017). Learning styles theory fails to explain learning and achievement: Recommendations for alternative approaches. Personality and Individual Differences, 116, 410–416. https://doi.org/10.1016/j.paid.2017.04.050
An Donggun holds a PhD in Educational Psychology from the University of Georgia and is a researcher in Education at Seoul National University. Abstract: “The purpose of this paper is to propose a multiple approaches to explaining and predicting individual differences in learning. First, this article briefly reviews critical problems with learning styles. Three major concepts are discussed: lack of a clear, explanatory framework, problems of measurement, and a failure to link learning styles to achievement. Next, this paper presents several alternative approaches to learning styles that do a better job of explaining how learning styles might predict achievement. Alternatives to learning styles include individual differences in verbal and visual skills, expertise and domain knowledge, self-regulation and inhibition, and perfectionism. For expertise and domain knowledge, knowledge representation and fluency are specifically discussed. It is recommended that the new approach that focuses on individual differences in learning be used by teachers.”
Balasubramaniam, G., & K, I. (2016). A Study of Learning Style Preferences among First Year Undergraduate Medical Students Using VARK Model. Education in Medicine Journal, 8(4). https://doi.org/10.5959/eimj.v8i4.440
Abstract: “Medical students have a wide range of diversity in their learning preferences. This has been always a challenge for the teachers to meet the demands of all the medical students. VARK (Visual, Aural, Read/write and Kinaesthetic way of learning styles) is a learning inventory grouped under ‘instructional preference’ model. Methods: This study analysed the learning style and approaches to learning among the first year undergraduate students in a tertiary care teaching hospital using VARK questionnaire version. Results: Our study revealed that, unimodal learning preference was among 48% of the students and multimodal learning styles being with 52% of students. Among the unimodal learning preferences, kinaesthetic and auditory learning styles were predominant (35% and 34% respectively). Among multimodal learning style preferences Kinaesthetic, Aural (KA) and Visual, Aural, Kinaesthetic (VAK) styles were predominant. There was no difference in the learning preferences among the sexes (p = 0.208). Conclusion: Since the most preferred learning styles were kinaesthetic and auditory, the strategies employed by students could be recordings of lectures, audio recordings of power point presentations, increased frequency of early clinical exposure to patients in wards and use of models and problem solving questionnaires. This will also help the teachers to act in accordance with students need and guide them in achieving their academic goals.”
Becker, K. (2005). Games and Learning Styles. IASTED International Conference on Education and Technology. ICET.
Several ways to address learning are: 1) through learning theories, 2) through learning styles (treated as distinct from learning theories here), and 3) through instructional design theories and models. This paper looks at the second approach to examine how modern games support various learning styles in their design and gameplay. Four well-known learning style models are examined in the context of computer game design. These are: the Keirsey Temperament Sorter, the Gregorc Style Delineator, Felder’s Index of Learning Styles, and Kolb’s Learning Style Inventory. Good, i.e. top-rated, games can be shown to incorporate aspects of most, if not all of these, and in this way actively support learners of all learning style preferences.
Bressler, D. M., & Bodzin, A. M. (2016). Investigating Flow Experience and Scientific Practices During a Mobile Serious Educational Game. Journal of Science Education and Technology, 25(5), 795–805. https://doi.org/10.1007/s10956-016-9639-z
Chislett, V., & Chapman, A. (2006, June 26). VAK Learning Styles Self-Assessment Questionnaire. Retrieved September 29, 2019, from Wisconsin.edu website: https://uwm.courses.wisconsin.edu/d2l/common/viewFile.d2lfile/Database/NTEzMzQ3OA/vak%20learning%20styles%20questionnaire%20selftest.doc?ou=7062&contextId=166461,89961
Cuevas, J. (2015). Is learning styles-based instruction effective? A comprehensive analysis of recent research on learning styles. Theory and Research in Education, 13(3), 308–333. https://doi.org/10.1177/1477878515606621
Abstract: “In an influential publication in 2009, a group of cognitive psychologists revealed that there was a lack of empirical evidence supporting the concept of learning styles-based instruction and provided guidelines for the type of research design necessary to verify the learning styles hypothesis. This article examined the literature since 2009 to ascertain whether the void has been filled by rigorous studies designed to test the matching hypothesis and identify interaction effects. Correlational and experimental research recently published on learning styles is reviewed, along with an examination of how the subject is portrayed in teacher education texts. Results revealed that the more methodologically sound studies have tended to refute the hypothesis and that a substantial divide continues to exist, with learning styles instruction enjoying broad acceptance in practice, but the majority of research evidence suggesting that it has no benefit to student learning, deepening questions about its validity.”
El-Hmoudova, D. (2015). Assessment of Individual Learning Style Preferences with Respect to the Key Language Competences. Procedia – Social and Behavioral Sciences, 171, 40–48. https://doi.org/10.1016/j.sbspro.2015.01.086
Abstract: “The paper focuses on the assessment of individual learning style preferences of students of two different disciplines, Management of Tourism and Applied Informatics, at Faculty of Informatics and Management, University of Hradec Kralove. The Felder-Silverman learning styles inventory was administered to students in Professional English language course in the Blackboard learn environment to monitor and check students’ proficiency of key language competences. Descriptive statistics identified that students do vary in their preference for particular learning style with a great variety of learning style preferences distributed among the sample groups of students. A large number of the students showed mild preference to Active, Visual and Sequential learning styles. On the other hand, there is a large group of students displaying a strong preference to Sensing learning style dimension. The key language competences defined by the Common European Framework of Reference for Languages (CEFR) are tested in the computer-based environment with a special view on students’ individual learning styles.”
Fatta, H. A., Maksom, Z., & Zakaria, M. H. (2019). Learning Style on Mobile-Game-Based Learning Design: How to Measure? Intelligent and Interactive Computing, 503–512. https://doi.org/10.1007/978-981-13-6031-2_8
Hanif Al Fatta of Amikom University Yogyakarta presents a conference paper looking into the efficacy of using game-based learning to measure learning style preference. His areas of expertise include artificial intelligence, game-based learning, and human computer interaction.
Felder, R. (2002). Learning and teaching styles in engineering education. Engineering Education, 78(7), 674–681. Retrieved from https://www.engr.ncsu.edu/wp-content/uploads/drive/1QP6kBI1iQmpQbTXL-08HSl0PwJ5BYnZW/1988-LS-plus-note.pdf
Richard M. Felder is Professor Emeritus of Chemical Engineering at North Carolina State University and a researcher on learning styles within the field of chemical engineering, which led to his development of the Index of Learning Styles (ILS) model. In this work, Felder discusses his model in depth and provides access and reference to his learning assessment framework. While the model was initially developed to describe engineering students, Felder has since proposed that it has applications well beyond engineering and STEM programs. This is a candidate model for learning assessment for this proposal.
Feldman, J., Monteserin, A., & Amandi, A. (2014). Automatic detection of learning styles: state of the art. Artificial Intelligence Review, 44(2), 157–186. https://doi.org/10.1007/s10462-014-9422-6
Abstract: “A learning style describes the attitudes and behaviors, which determine an individual’s preferred way of learning. Learning styles are particularly important in educational settings since they may help students and tutors become more self-aware of their strengths and weaknesses as learners. The traditional way to identify learning styles is using a test or questionnaire. Despite being reliable, these instruments present some problems that hinder the learning style identification. Some of these problems include students’ lack of motivation to fill out a questionnaire and lack of self-awareness of their learning preferences. Thus, over the last years, several approaches have been proposed for automatically detecting learning styles, which aim to solve these problems. In this work, we review and analyze current trends in the field of automatic detection of learning styles. We present the results of our analysis and discuss some limitations, implications and research gaps that can be helpful to researchers working in the field of learning styles.”
Fleming, N. (1995). I’m different; not dumb. Modes of presentation (VARK) in the tertiary classroom. 18, 308–313. Retrieved from https://fyi.extension.wisc.edu/wateroutreach/files/2016/03/Fleming_VARK_Im_Different_Not_Dumb.pdf
Neil Fleming worked in faculty development at Lincoln University, New Zealand, and developed the VARK model of learning styles as an extension of the VAK model developed by Walter Barbe. Fleming claims that learning preference can be broken down into the four sensory modalities of visual, aural, read/write, and kinesthetic learning. This work interprets the results of the VARK model. This is a candidate model for learning assessment for this proposal.
Gee, J. (2003). What video games have to teach us about learning and literacy. (1st ed.). New York: Palgrave Macmillan.
James Paul Gee is a Professor of Literacy Studies at Arizona State University with a Ph.D. in Linguistics from Stanford University and the author of several books on learning in the digital age. In his book What Video Games Have to Teach Us About Learning and Literacy, Gee explores the positive impacts that video games have on learning by analyzing how gamers learn and adapt to games such as World of Warcraft and Half-Life 2. This work is important to this research as it provides a framework for our methodology when we measure how accurately games can assess learning style.
Gee, J. (2012). James Paul Gee on Learning with Video Games [YouTube Video]. Retrieved from https://www.youtube.com/watch?v=JnEN2Sm4IIQ
Hawk, T. F., & Shah, A. J. (2007). Using Learning Style Instruments to Enhance Student Learning. Decision Sciences Journal of Innovative Education, 5(1), 1–19. https://doi.org/10.1111/j.1540-4609.2007.00125.x
Thomas Hawk is a Professor Emeritus of Frostburg State University with a Ph.D. from the University of Pittsburgh and an MBA from Harvard Business School. In this article, Hawk reviewed five learning style instruments: the Kolb Learning Style Inventory, the Gregorc Style Delineator, the Felder–Silverman Index of Learning Styles, the VARK Questionnaire, and the Dunn and Dunn Productivity Environmental Preference Survey. His goal was to identify their common measures and differences, as well as to discuss their validity, reliability, and areas for improvement. This review article will be used as a jumping-off point for exploring the various assessment methods in use, with reference also given to criticisms and praises of each approach.
Institute of Education Sciences. (2018). College Navigator – Bucks County Community College. Retrieved December 8, 2019, from Ed.gov website: https://nces.ed.gov/collegenavigator/?q=Bucks+County+Community+College&s=all&id=211307
Introduction to VARK | VARK. (2019). Retrieved October 21, 2019, from Vark-learn.com website: http://vark-learn.com/introduction-to-vark/
Introductory videos describing the VARK model, its origins, various types of learners, its application within the classroom, and its application in higher education.
Ismail, M. A.-A., & Mohammad, J. A.-M. (2017). Kahoot: A Promising Tool for Formative Assessment in Medical Education. Education in Medicine Journal, 9(2), 19–26. https://doi.org/10.21315/eimj2017.9.2.2
Kim, Y. J., & Shute, V. J. (2015). The interplay of game elements with psychometric qualities, learning, and enjoyment in game-based assessment. Computers & Education, 87, 340–356. https://doi.org/10.1016/j.compedu.2015.07.009
Abstract: “Educators today are increasingly interested in using game-based assessment to assess and support students’ learning. In the present study, we investigated how changing a game design element, linearity in gameplay sequences, influenced the effectiveness of game-based assessment in terms of validity, reliability, fairness, learning, and enjoyment. Two versions of a computer game, Physics Playground (formerly Newton’s Playground), with different degrees of linearity in gameplay sequences were compared. Investigation of the assessment qualities—validity, reliability, and fairness—suggested that changing one game element (e.g., linearity) could significantly influence how players interacted with the game, thus changing the evidentiary structure of in-game measures. Although there was no significant group difference in terms of learning between the two conditions, participants who played the nonlinear version of the game showed significant improvement on qualitative physics understanding measured by the pre- and posttests while the participants in the linear condition did not. There was also no significant group difference in terms of enjoyment. Implications of the findings for future researchers and game-based assessment designers are discussed.”
King, D., Delfabbro, P., & Griffiths, M. (2009). Video Game Structural Characteristics: A New Psychological Taxonomy. International Journal of Mental Health and Addiction, 8(1), 90–106. https://doi.org/10.1007/s11469-009-9206-4
Abstract: “Excessive video game playing behaviour may be influenced by a variety of factors including the structural characteristics of video games. Structural characteristics refer to those features inherent within the video game itself that may facilitate initiation, development and maintenance of video game playing over time. Numerous structural characteristics that influence gambling frequency and expenditure have been identified in the gambling literature, and some researchers have drawn comparisons between the rewarding elements in video gaming and those in slot machine gambling. However, there have been few rigorous attempts to classify and organise the psycho-structural elements of video games in a similar way to gambling. In order to aid current psychological understanding of problem video game playing and guide further research questions in this area, a new taxonomy of video game features is proposed, which includes: (a) social features, (b) manipulation and control features, (c) narrative and identity features, (d) reward and punishment features, and (e) presentation features. Each category is supported with relevant theory and research, where available, and the implications of these features for excessive video game playing are discussed.”
Kirschner, P. A. (2017). Stop propagating the learning styles myth. Computers & Education, 106, 166–171. https://doi.org/10.1016/j.compedu.2016.12.006
Paul A. Kirschner is an author and Professor of Educational Psychology at the Open University of the Netherlands. This paper calls into question the current state of learning styles in education. Abstract: “We all differ from each other in a multitude of ways, and as such we also prefer many different things whether it is music, food or learning. Because of this, many students, parents, teachers, administrators and even researchers feel that it is intuitively correct to say that since different people prefer to learn visually, auditively, kinesthetically or whatever other way one can think of, we should also tailor teaching, learning situations and learning materials to those preferences. Is this a problem? The answer is a resounding: Yes! Broadly speaking, there are a number of major problems with the notion of learning styles. First, there is quite a difference between the way that someone prefers to learn and that which actually leads to effective and efficient learning. Second, a preference for how one studies is not a learning style. Most so-called learning styles are based on types; they classify people into distinct groups. The assumption that people cluster into distinct groups, however, receives very little support from objective studies. Finally, nearly all studies that report evidence for learning styles fail to satisfy just about all of the key criteria for scientific validity. This article delivers an evidence-informed plea to teachers, administrators and researchers to stop propagating the learning styles myth.”
Murray, J. (2009). The VARK Model of Teaching Strategies. Retrieved October 21, 2019, from TeachHUB website: https://www.teachhub.com/vark-model-teaching-strategies
This website provides an overview and example surveys of the VARK learning style model and how it can be used within the K-12 environment.
O’Mahony, S. M., Sbayeh, A., Horgan, M., O’Flynn, S., & O’Tuathaigh, C. M. P. (2016). Association between learning style preferences and anatomy assessment outcomes in graduate-entry and undergraduate medical students. Anatomical Sciences Education, 9(4), 391–399. https://doi.org/10.1002/ase.1600
Abstract: “An improved understanding of the relationship between anatomy learning performance and approaches to learning can lead to the development of a more tailored approach to delivering anatomy teaching to medical students. This study investigated the relationship between learning style preferences, as measured by Visual, Aural, Read/write, and Kinesthetic (VARK) inventory style questionnaire and Honey and Mumford’s learning style questionnaire (LSQ), and anatomy and clinical skills assessment performance at an Irish medical school. Additionally, mode of entry to medical school [undergraduate/direct‐entry (DEM) vs. graduate‐entry (GEM)], was examined in relation to individual learning style, and assessment results. The VARK and LSQ were distributed to first and second year DEM, and first year GEM students. DEM students achieved higher clinical skills marks than GEM students, but anatomy marks did not differ between each group. Several LSQ style preferences were shown to be weakly correlated with anatomy assessment performance in a program‐ and year‐specific manner. Specifically, the ‘Activist’ style was negatively correlated with anatomy scores in DEM Year 2 students (rs = −0.45, P = 0.002). The ‘Theorist’ style demonstrated a weak correlation with anatomy performance in DEM Year 2 (rs = 0.18, P = 0.003). Regression analysis revealed that, among the LSQ styles, the ‘Activist’ was associated with poorer anatomy assessment performance (P < 0.05), while improved scores were associated with students who scored highly on the VARK ‘Aural’ modality (P < 0.05). These data support the contention that individual student learning styles contribute little to variation in academic performance in medical students. Anat Sci Educ 9: 391–399. © 2016 American Association of Anatomists.”
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning Styles. Psychological Science in the Public Interest, 9(3), 105–119. https://doi.org/10.1111/j.1539-6053.2009.01038.x
Harold Pashler is a Distinguished Professor of Psychology at the University of California San Diego whose interests include learning, memory, focus, and attention. This publication brought into question the efficacy of current learning style assessments and their application. The authors concluded that there is no evidence supporting the added value of these techniques in education. This forms the basis for taking a different approach to learning style assessment in light of the lack of effectiveness of current strategies.
De Bruyckere, P. (2019). Urban myths about learning and education. London: Routledge.
Pedro De Bruyckere is an educational scientist at Arteveldehogeschool University College, Ghent, Belgium, and the author of several books on education and psychology. In this book, he explores the misconceptions currently at play within education, including a critique of learning styles.
Pixabay. (2019). Pixabay. Retrieved from Pixabay.com website: https://pixabay.com
Reich, G. A. (2009). Testing Historical Knowledge: Standards, Multiple-Choice Questions and Student Reasoning. Theory & Research in Social Education, 37(3), 325–360. https://doi.org/10.1080/00933104.2009.10473401
Rice, J. (2007). New Media Resistance: Barriers to Implementation of Computer Video Games in the Classroom. Journal of Educational Multimedia and Hypermedia, 16(3), 249–261.
Roediger, H. L., & Marsh, E. J. (2005). The Positive and Negative Consequences of Multiple-Choice Testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(5), 1155–1159. https://doi.org/10.1037/0278-7393.31.5.1155
Abstract: “Multiple-choice tests are commonly used in educational settings but with unknown effects on students’ knowledge. The authors examined the consequences of taking a multiple-choice test on a later general knowledge test in which students were warned not to guess. A large positive testing effect was obtained: Prior testing of facts aided final cued-recall performance. However, prior testing also had negative consequences. Prior reading of a greater number of multiple-choice lures decreased the positive testing effect and increased production of multiple-choice lures as incorrect answers on the final test. Multiple-choice testing may inadvertently lead to the creation of false knowledge.”
Ryan, R. M., Rigby, C. S., & Przybylski, A. (2006). The Motivational Pull of Video Games: A Self-Determination Theory Approach. Motivation and Emotion, 30(4), 344–360. https://doi.org/10.1007/s11031-006-9051-8
Four studies apply self-determination theory (SDT; Ryan & Deci, 2000) in investigating motivation for computer game play, and the effects of game play on well-being. Studies 1–3 examine individuals playing 1, 2 and 4 games, respectively and show that perceived in-game autonomy and competence are associated with game enjoyment, preferences, and changes in well-being pre- to post-play. Competence and autonomy perceptions are also related to the intuitive nature of game controls, and the sense of presence or immersion in participants’ game play experiences. Study 4 surveys an on-line community with experience in multi-player games. Results show that SDT’s theorized needs for autonomy, competence, and relatedness independently predict enjoyment and future game play. The SDT model is also compared with Yee’s (2005) motivation taxonomy of game play motivations. Results are discussed in terms of the relatively unexplored landscape of human motivation within virtual worlds.
Schell, J. (2019). The art of game design: A book of lenses (2nd ed.). New York: A K Peters/CRC Press.
Jesse Schell is a video game designer, author, and Professor of the Practice in Entertainment Technology at Carnegie Mellon University. He holds a B.S. in Computer Science from Rensselaer Polytechnic Institute and a Master’s Degree in Information Networking from Carnegie Mellon University. In this book, Schell explores the psychology involved in video game design by looking at games through what he describes as a variety of lenses. Well-designed games have the potential to bring a gamer into a world while isolating everything else into the background, and it is through this environment that I hope to assess and test a person’s preferred learning style. I will be incorporating strategies discussed by Schell into the game assessment platform.
Staple, L., Carter, A., Jensen, J., & Walker, M. (2018). Paramedic Learning Style Preferences and Continuing Medical Education Activities: A Cross-Sectional Survey Study. Journal of Allied Health, 47(1), 51–57. Retrieved from https://www.ingentaconnect.com/contentone/asahp/jah/2018/00000047/00000001/art00008
Abstract: “Paramedics participate in continuing medical education (CME) to maintain their skills and knowledge. An understanding of learning styles is important for education to be effective. This study examined the preferred learning styles of ground ambulance paramedics and describes how their preferred learning styles relate to the elective CME activities these paramedics attend. METHODS: All paramedics (n=1,036) employed in a provincial ground ambulance service were invited to participate in a survey containing three parts: demographics, learning style assessed by the Kolb Learning Style Inventory (LSI), and elective CME activity.”
Van Zwanenberg, N., Wilkinson, L. J., & Anderson, A. (2000). Felder and Silverman’s Index of Learning Styles and Honey and Mumford’s Learning Styles Questionnaire: How do they compare and do they predict academic performance? Educational Psychology, 20(3), 365–380. https://doi.org/10.1080/713663743
Samples of engineering and business students at undergraduate, postgraduate and post-experience levels at two UK universities completed the Index of Learning Styles (ILS) (N = 284) or the Learning Styles Questionnaire (LSQ) (N = 182) and a biographical data questionnaire. Broad psychological aspects of the two learning style instruments are examined and compared. Psychometric properties of the instruments, including factor structure, internal reliability and inter-scale correlation are analysed. Potential limitations are commented on, in particular those related to the construct validity and relatively low internal reliability of the ILS scales (alpha = 0.41 to 0.65). These compare with alphas of 0.59 to 0.74 for the LSQ. Relationships between the scales are discussed and a circumplex arrangement of the LSQ is proposed. Proposals for augmenting the circumplex are made. Academic performance results and scores on each of the two instruments are compared. The general lack of significant correlations between learning style scores and performance in these samples is discussed. Conclusions are drawn about the disappointing psychometric robustness of the measures, the activity-centred nature of learning styles and the advantages of viewing styles as a circumplex.
Ventura, M., & Shute, V. (2013). The validity of a game-based assessment of persistence. Computers in Human Behavior, 29(6), 2568–2572. https://doi.org/10.1016/j.chb.2013.06.033
Abstract: “In this study, 154 students individually played a challenging physics video game for roughly 4 h. Based on time data for both solved and unsolved problems derived from log files, we created a game-based assessment of persistence that was validated against an existing measure of persistence. We found that the game-based assessment of persistence predicted learning of qualitative physics after controlling for gender, video game experience, pretest knowledge and enjoyment of the game. These findings support the implementation of a real-time formative assessment of persistence to be used to dynamically change gameplay.”
Wood, L., Teräs, H., Reiners, T., & Gregory, S. (2013). The role of gamification and game-based learning in authentic assessment within virtual environments. Higher Education Research and Development Society of Australasia Inc, 514–523. Retrieved from https://openrepository.aut.ac.nz/bitstream/handle/10292/5835/HERDSA_2013_WOOD%20et%20al%20-POSTPRINT-%20the%20role%20of%20gamification%20and%20game-based%20learning%20in%20authentic%20assessment%20within%20virtual%20environments.pdf?sequence=12&isAllowed=y