Visual Parsing of Command Cards for an Equitable Augmented Reality Learning System
Description

Navigational gameplay can train preschoolers in a wide range of skills, including problem-solving, critical thinking, and basic concepts of procedural programming. However, relying on conventional input devices such as mice and keyboards presents a barrier for children under five, who have limited familiarity with operating these tools. This research aims to mitigate this problem by developing a set of intuitive visual parsing algorithms that execute in-game commands without traditional input devices. Within an Augmented Reality learning system, children place printed QR-like command cards on a table, and a webcam-based machine vision system detects them. Our objective is to develop parsing algorithms capable of connecting commands with their modifiers and executing them sequentially, while handling the placement errors that can arise during a child's gameplay. Algorithm development is isolated in a sandbox application to speed up iteration on the pipeline of input processing, sorting, and graph connecting. We plan to evaluate these algorithms against a Minimum Spanning Tree-based alternative and benchmark their performance on a PC and a Raspberry Pi 4 across the proposed test cases. By using tangibles (printed cards) and running the system on a Raspberry Pi 4, our approach not only supports children's learning but also serves as a low-cost solution. In the future, this set of visual parsing algorithms can be adopted in AR learning systems for marginalized school communities, so that young learners can access an inclusive and satisfying computing environment.
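To make the pipeline of input processing, sorting, and graph connecting concrete, the following is a minimal sketch of one plausible parsing step, not the project's actual implementation. It assumes each detected card arrives as a label, a kind, and a table-plane position; the `Card` record, the `parse_cards` function, and the `MAX_MODIFIER_DIST` threshold are all hypothetical names introduced here for illustration. Commands are sorted left to right, each modifier is attached to its nearest command within a distance threshold, and modifiers too far from any command are reported as placement errors.

```python
import math
from dataclasses import dataclass

# Hypothetical detection record: one per recognized card, with its
# label, kind, and position on the table plane (e.g. in pixels).
@dataclass
class Card:
    label: str   # e.g. "forward", "turn", "x3"
    kind: str    # "command" or "modifier"
    x: float
    y: float

MAX_MODIFIER_DIST = 80.0  # assumed attachment radius; tuning would be empirical

def parse_cards(cards: list[Card]) -> tuple[list[tuple[str, str | None]], list[Card]]:
    """Sort commands left to right, attach each modifier to its nearest
    command within the threshold, and collect orphaned modifiers as a
    stand-in for the placement errors the parser must tolerate."""
    commands = sorted((c for c in cards if c.kind == "command"), key=lambda c: c.x)
    modifiers = [c for c in cards if c.kind == "modifier"]

    attached: dict[int, str] = {}
    orphans: list[Card] = []
    for m in modifiers:
        # Nearest command by Euclidean distance on the table plane.
        best = min(
            range(len(commands)),
            key=lambda i: math.hypot(commands[i].x - m.x, commands[i].y - m.y),
            default=None,
        )
        if best is None or math.hypot(commands[best].x - m.x,
                                      commands[best].y - m.y) > MAX_MODIFIER_DIST:
            orphans.append(m)  # placement error: modifier too far from any command
        else:
            attached[best] = m.label
    # The ordered (command, modifier) pairs form the executable sequence.
    program = [(c.label, attached.get(i)) for i, c in enumerate(commands)]
    return program, orphans
```

In a full system, the `program` list would drive sequential in-game execution, while `orphans` could trigger child-friendly feedback (for example, highlighting a card that needs to be moved closer).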
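The abstract names Minimum Spanning Tree only as an evaluation alternative and does not specify how it would be applied. As one reading, a baseline might link detected card positions into a tree before deriving an execution order; the sketch below implements plain Prim's algorithm over the complete graph of card positions with Euclidean edge weights, and the function name `mst_edges` is hypothetical.

```python
import math

def mst_edges(points: list[tuple[float, float]]) -> list[tuple[int, int]]:
    """Prim's algorithm over the complete graph of card positions.
    Returns the tree edges as index pairs; the tree is one candidate
    structure for linking cards before sequencing them."""
    n = len(points)
    if n == 0:
        return []
    in_tree = {0}
    edges: list[tuple[int, int]] = []
    while len(in_tree) < n:
        # Cheapest edge crossing the cut between tree and non-tree vertices.
        best = min(
            ((i, j) for i in in_tree for j in range(n) if j not in in_tree),
            key=lambda e: math.hypot(points[e[0]][0] - points[e[1]][0],
                                     points[e[0]][1] - points[e[1]][1]),
        )
        in_tree.add(best[1])
        edges.append(best)
    return edges
```

The quadratic-per-step cost is acceptable here because a child's tabletop layout involves only a handful of cards, which also keeps the baseline cheap enough to benchmark on a Raspberry Pi 4.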