The Cognitive Science Lab is currently conducting a variety of experiments, primarily focused on learning, attention, and decision-making.
We have ongoing projects, described in more detail below, using StarCraft 2 data, eye tracking and category learning, virtual reality, and human-computer interaction design.
StarCraft 2 - Using Esports to Study Cognition and Expertise
We use the real-time strategy game StarCraft 2 in our lab because it is a useful domain in which to study learning and attention. RTS games, in which players develop game pieces called units with the ultimate goal of destroying their opponent's headquarters, differ from turn-based strategy games such as chess in two ways that matter for our research.
First, the game board, called a map, is much larger than what the player can see at any one time. The resulting uncertainty about the game state leads to a variety of information-gathering strategies and requires vigilance and highly developed attentional processes. Because the game records where players are looking throughout a match, we have access to this attentional data.
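To make this kind of data concrete, here is a minimal sketch of how camera telemetry might be segmented into screen "fixations", the screen-based analogue of eye fixations. The input format, field names, and thresholds are illustrative assumptions, not the actual replay format:

```python
# Sketch only: group (time, x, y) camera samples into dwell periods
# ("screen fixations"). The sample format and thresholds are assumptions.

def screen_fixations(samples, min_dwell_s=0.2, move_thresh=5.0):
    """Return (start, end, x, y) dwell periods from camera samples."""
    fixations = []
    start_t = last_t = cx = cy = None
    for t, x, y in samples:
        if cx is None or abs(x - cx) > move_thresh or abs(y - cy) > move_thresh:
            # The camera jumped: close the current dwell if it lasted long enough.
            if start_t is not None and last_t - start_t >= min_dwell_s:
                fixations.append((start_t, last_t, cx, cy))
            start_t, cx, cy = t, x, y
        last_t = t
    if start_t is not None and last_t - start_t >= min_dwell_s:
        fixations.append((start_t, last_t, cx, cy))
    return fixations
```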
Second, players in RTS games do not have to wait for their opponent to take a turn; they can play as fast as they are able. Players who can execute strategic goals more efficiently have an enormous advantage, so motor skills that allow for efficient keyboard and mouse use are an integral component of the game.
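For illustration, two common telemetry measures of motor-execution speed, actions per minute (APM) and the latencies between successive actions, can be computed directly from timestamped input events. The input format below (a sorted list of action timestamps in seconds) is an assumption:

```python
import statistics

def apm_and_median_latency(action_times_s):
    """Return (actions per minute, median inter-action latency in seconds)."""
    latencies = [b - a for a, b in zip(action_times_s, action_times_s[1:])]
    duration_min = (action_times_s[-1] - action_times_s[0]) / 60.0
    return len(action_times_s) / duration_min, statistics.median(latencies)

# Six actions in 1.6 seconds -> 225 APM, median latency 0.3 s.
print(apm_and_median_latency([0.0, 0.2, 0.5, 0.9, 1.0, 1.6]))
```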
The lab has completed a number of StarCraft-based projects on topics as diverse as complex skill development (Video Game Telemetry as a Critical Tool in the Study of Complex Skill Learning), age-related changes in cognition (Over the Hill at 24: Persistent Age-Related Cognitive-Motor Decline in Reaction Times in an Ecologically Valid Video Game Task Begins in Early Adulthood), and motor chunking (Using Video Game Telemetry Data to Research Motor Chunking, Action Latencies, and Complex Cognitive-Motor Skill Learning).
For more information on the lab's video game telemetry research, see:
- Thompson, J. J., Blair, M. R., Chen, L., & Henrey, A. J. (2013). Video Game Telemetry as a Critical Tool in the Study of Complex Skill Learning. PLoS ONE, 8(9), e75129. doi:10.1371/journal.pone.0075129
- Thompson, J. J., Blair, M. R., & Henrey, A. J. (2014). Over the Hill at 24: Persistent Age-Related Cognitive-Motor Decline in Reaction Times in an Ecologically Valid Video Game Task Begins in Early Adulthood. PLoS ONE, 9(4), e94215. doi:10.1371/journal.pone.0094215
- Thompson, J. J., McColeman, C. M., Stepanova, E. R., & Blair, M. R. (2017). Using Video Game Telemetry Data to Research Motor Chunking, Action Latencies, and Complex Cognitive-Motor Skill Learning. Topics in Cognitive Science, 9(2), 467-484.
Virtual Reality
We are beginning to test the viability of immersive virtual reality for conducting cognitive science research. Using the category learning paradigm, we are exploring how people change their behaviour over time as they learn to categorize different stimuli into their appropriate groups.
Specifically, in previous research using desktop computers, participants accessed information by moving their eyes to different points on the screen. In VR, participants can access information by using their hands to physically rotate objects. Although these are separate motor processes (eyes vs. arms), we expect that both draw on a common learning system in the brain, and that the way these motor processes are used should exhibit common patterns as people learn to correctly identify stimuli.
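One way to test this prediction is to log both kinds of information access in a common format, so that identical learning analyses can be run on desktop and VR data. The record below is a hypothetical sketch, not our actual logging code:

```python
from dataclasses import dataclass

@dataclass
class InfoAccess:
    """One information-access event, regardless of the motor system used."""
    trial: int         # trial number within the learning session
    feature: str       # which stimulus dimension was inspected
    duration_s: float  # how long it was inspected
    modality: str      # "gaze" (desktop eyetracking) or "rotation" (VR)
```

Logging both modalities this way lets the same analysis ask whether eye movements and hand rotations shift toward relevant stimulus features at similar rates as learning proceeds.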
Category Learning
We also perform lab experiments on category learning, using eyetracking data to measure attention allocation as participants learn a simple categorization task. One of our most recent projects seeks to draw parallels between trends observed in eyetracking data from controlled laboratory studies and trends in screen-attention allocation from StarCraft 2 data. By comparing eye movements to screen movements, we hope to extend the findings of gaze-allocation research to attention allocation more generally. This is especially relevant as the world becomes increasingly digital: people communicate, shop, and work through screens, making the computer interface an attentional tool we use almost as much as our eyes.
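As one concrete example of such a trend, the proportion of dwell time spent on task-relevant information on each trial can be computed identically from eye fixations and from screen fixations. The sketch below assumes access records like the hypothetical InfoAccess format above:

```python
from collections import defaultdict

def attention_curve(events, relevant_features):
    """Map each trial to the proportion of dwell time on relevant features."""
    total = defaultdict(float)
    relevant = defaultdict(float)
    for e in events:  # InfoAccess-like records: trial, feature, duration_s
        total[e.trial] += e.duration_s
        if e.feature in relevant_features:
            relevant[e.trial] += e.duration_s
    return {t: relevant[t] / total[t] for t in sorted(total)}
```

If gaze data and screen data show similar rises in this proportion over trials, that supports treating screen movements as a proxy for attention allocation.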
Human-Computer Interfaces - ExNovo
Recent work incorporates custom human-computer interfaces and virtual reality into our toolbox. The new generation of spatial computing tools provided by virtual, mixed, and augmented reality technologies both encourages and requires software design that respects how humans learn and attend to information in their environment.
ExNovo is aimed at designing and testing a new computer interface grounded in what we know about human cognition. The effectiveness of a human-computer interface (HCI) is constrained by the limits of human memory and attention, and existing interfaces leave a lot to be desired: they can easily overwhelm users. The most common computer interface lets users select actions from lists in a menu. This graphical user interface (GUI) minimizes memory costs, but requires visual inspection and careful targeting to select actions, slowing performance. At the same time, most computer interfaces also allow rapid execution of actions via keyboard hotkeys (such as ctrl-c to copy). Some hotkey combinations are difficult to perform, requiring awkward hand positions and visual inspection of the keyboard, and most combinations are largely arbitrary, making the cryptic combination a challenge to remember.

Another important problem is that these two ways of initiating actions, menus and hotkeys, are essentially separate interfaces: time spent learning one does almost nothing to help with the other.
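As a toy illustration of this trade-off (with invented parameters, not fitted data), menu selection can be treated as a roughly constant visual-search-and-pointing cost, while hotkey execution speeds up with practice following the power law of practice, T(n) = a + b·n^(-c):

```python
def menu_time_s(n):
    """Menu selection: visual search plus pointing, roughly flat with practice."""
    return 2.0

def hotkey_time_s(n, a=0.3, b=2.5, c=0.4):
    """Hotkey execution: power law of practice, with made-up parameters."""
    return a + b * n ** (-c)

for n in (1, 10, 100, 1000):
    print(f"trial {n}: menu {menu_time_s(n):.2f} s, hotkey {hotkey_time_s(n):.2f} s")
```

Under these made-up numbers, hotkeys start slower than menus but overtake them within a handful of uses; because the two interfaces share no learning, users must pay that novice cost twice.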
Our ExNovo interface (the name means *from the beginning*) aims to unify the speed of hotkeys with the learnability of a GUI. Because it is a single, consistent interface, users improve with experience, moving from the slow, visually guided choices of a novice to the rapid, automatic actions of an expert. Our research investigates how speed and performance differ between ExNovo and traditional menu-based interfaces, and how interface elements such as sound and visual coding might make interfaces easier to learn.