Publications

The integration of AI presents significant opportunities for enhancing human-machine collaboration, particularly in dynamic environments like the construction industry, where excessive information affects decision-making and coordination. This study investigates how visual attention distribution relates to situation awareness (SA) development under information overload by addressing three research questions: (1) How does visual allocation relate to individual SA under information overload? (2) How does visual allocation influence team situation awareness (TSA) formation? (3) Do high-TSA teams exhibit different visual allocation patterns compared to low-TSA teams? To answer these questions, a multi-sensor virtual reality (VR) construction environment was created as a testbed that includes realistic task simulations involving both human teammates and AI-powered robots (e.g., drones and a robotic dog). Participants completed a pipe installation task while navigating construction hazards such as falls, trips, and collisions, and while experiencing varying degrees of information overload. TSA—the shared understanding of tasks and environmental conditions—was assessed using the situation awareness global assessment technique (SAGAT), and eye movements were tracked using a Meta Quest Pro headset. The relationship between eye-tracking metrics and SA/TSA scores was analyzed using linear mixed-effects models (LMMs), and a two-sample t-test compared visual allocation patterns between high- and low-TSA teams. Results indicate that eye-tracking metrics can predict SA levels; an individual's SA may also be enhanced through dyadic communication with team members, allowing participants to acquire updates without directly seeing the changes. Furthermore, high-TSA teams allocated significantly more attention to environment-related objects and exhibited a more balanced visual allocation pattern (run count and dwell time) across task- and environment-related objects. In contrast, low-TSA teams were more task-focused, potentially reducing their awareness of broader situational risks. These findings help identify at-risk workers from their psychophysiological responses. This research contributes to developing safer and more effective human-AI collaboration in construction and other high-risk industries by prioritizing TSA and AI-driven personalized feedback.

AHFE 2025 Orlando, Florida, USA Jul. 2025
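
As a minimal illustration of the analysis described in the AHFE 2025 abstract above, the Python sketch below fits a linear mixed-effects model relating eye-tracking metrics to SA scores and runs a two-sample t-test comparing high- and low-TSA teams. The data file, column names, and model terms are illustrative assumptions, not the study's actual variables.

```python
"""Sketch of the LMM + t-test analysis; all column names are placeholders."""
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format data: one row per participant per SAGAT probe.
df = pd.read_csv("eye_tracking_sa.csv")

# Linear mixed-effects model: fixed effects for eye-tracking metrics,
# random intercept per participant to account for repeated measures.
lmm = smf.mixedlm(
    "sa_score ~ dwell_time + run_count + fixation_duration",
    data=df,
    groups=df["participant_id"],
).fit()
print(lmm.summary())

# Two-sample t-test: mean dwell time on environment-related objects,
# aggregated per team, high-TSA vs. low-TSA groups.
team_df = df.groupby("team_id").agg(
    env_dwell=("env_dwell_time", "mean"),
    tsa_group=("tsa_group", "first"),
)
high = team_df.loc[team_df["tsa_group"] == "high", "env_dwell"]
low = team_df.loc[team_df["tsa_group"] == "low", "env_dwell"]
t_stat, p_value = stats.ttest_ind(high, low)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```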

Virtual reality (VR) is widely adopted for applications such as training, education, and collaboration. The construction industry, known for its complex projects and the many personnel involved, relies heavily on effective collaboration. Setting up a real-world construction site for experiments can be expensive and time-consuming, whereas conducting experiments in VR is relatively low-cost, scalable, and efficient. We propose Col-Con, a virtual reality simulation testbed for exploring collaborative behaviors in construction. Col-Con is a multi-user testbed that supports users in completing tasks collaboratively. Additionally, Col-Con provides immersive and realistic simulated construction scenes, where real-time voice communication, along with synchronized transformations, animations, sounds, and interactions, enhances the collaborative experience. As a showcase, we implemented a pipe installation construction task based on Col-Con. A user study demonstrated that Col-Con excels in usability, and participants reported a strong sense of immersion and collaboration. We envision that Col-Con will facilitate research exploring VR-based collaborative behaviors in construction.

Video Code
arXiv

Augmented reality (AR) games, particularly those designed for headsets, have become increasingly prevalent with advancements in both hardware and software. However, the majority of AR games still rely on pre-scanned or static scenes, and interaction mechanisms are often limited to controllers or hand-tracking. Additionally, the presence of identical objects in AR games poses challenges for conventional object tracking techniques, which often struggle to differentiate between identical objects or require fixed cameras for global object movement tracking. To address these limitations, we present a novel approach to tracking identical objects in an AR scene, enriching physical-virtual interaction. Our method leverages partial scene observations captured by an AR headset, utilizing the perspective and spatial data provided by this technology. Object identities within the scene are determined by solving a label assignment problem with integer programming. To enhance computational efficiency, we incorporate a Voronoi diagram-based pruning method into our approach. Our implementation of this approach in a farm-to-table AR game demonstrates its satisfactory performance and robustness. Furthermore, we showcase the versatility and practicality of our method through applications in AR storytelling and a simulated gaming robot. Our video demo is available at: https://youtu.be/rPGkLYuKvCQ.

Video Code
arXiv
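
The sketch below gives a simplified flavor of the label assignment idea described in the AR tracking abstract above. The paper solves an integer program with Voronoi diagram-based pruning; here a Hungarian-style linear assignment over a distance cost matrix serves as a stand-in, with a plain distance threshold playing the role of pruning. The function and its inputs are illustrative, not the paper's formulation.

```python
"""Simplified stand-in for identical-object label assignment between frames."""
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_labels(prev_positions, curr_detections, max_dist=0.5):
    """Match current detections of identical objects to previous labels.

    prev_positions: (N, 3) array of last known object positions.
    curr_detections: (M, 3) array of newly observed positions.
    Returns a dict mapping detection index -> previous object index.
    """
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(
        prev_positions[:, None, :] - curr_detections[None, :, :], axis=-1
    )
    # Crude pruning: implausibly distant pairs get a large penalty so the
    # solver avoids them (the paper instead prunes via a Voronoi diagram).
    cost = np.where(cost > max_dist, 1e6, cost)
    rows, cols = linear_sum_assignment(cost)
    return {int(c): int(r) for r, c in zip(rows, cols) if cost[r, c] < 1e6}

# Example: three identical mugs, each moved slightly between frames.
prev = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
curr = np.array([[0.05, 0.0, 0.0], [1.0, 0.02, 0.0], [0.0, 1.1, 0.0]])
print(assign_labels(prev, curr))  # {0: 0, 1: 1, 2: 2}
```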

Poster

MIG2024 Arlington, VA, USA

Poster

MIG2024 Arlington, VA, USA

Peggy Smedley and Laura Black talk to Craig Yu, associate professor of computer science in the College of Engineering and Computing at George Mason University; Brenda Bannan, professor in the Division of Learning Technologies, College of Education and Human Development, George Mason University; and Liuchuan Yu, a third-year computer science PhD student at George Mason University. They discuss a project that uses virtual reality (VR) simulations to study the work and collaboration patterns of people with ADHD in construction.

PEGGY SMEDLEY SHOW Episode 878

Advancements in extended reality (XR) technology have spurred research into XR-based training and collaboration. Mixed reality (MR), in particular, fuses the real and virtual worlds in real time and supports interaction, making it possible to complete real-world tasks collaboratively through MR headsets. We present HoloCook, a novel real-time remote cooking tutoring system built on HoloLens 2. HoloCook is a lightweight system that does not require any additional devices. It not only synchronizes the coach's actions with the trainee in real time but also provides the trainee with animations and 3D annotations to aid the tutoring process. HoloCook supports tutoring two recipes: pancakes and cocktails. Our user evaluation with one coach and four trainees establishes HoloCook as a feasible and usable remote cooking tutoring system in mixed-reality environments.

Video Code
HCII2024 Washington DC, USA

We present HoloAAC, a novel mixed reality-based augmentative and alternative communication (AAC) application that helps people with expressive language difficulties communicate in grocery shopping scenarios. A user who has difficulty speaking can easily convey their intention by pressing a few buttons. Our application uses computer vision techniques to automatically detect grocery items, helping the user quickly locate the items of interest. In addition, it uses natural language processing techniques to categorize pre-stored sentences so the user can quickly find the desired one. We evaluated our mixed reality-based application with AAC users and compared its efficacy with traditional AAC applications. HoloAAC contributes to the early exploration of context-aware AR-based AAC applications and provides insights for future research.

Video Code
HCII2024 Washington DC, USA
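
As a purely illustrative sketch (not HoloAAC's actual implementation), the snippet below shows one way pre-stored AAC sentences could be ranked against grocery items detected by a vision module, using TF-IDF similarity from scikit-learn. The sentence set and item labels are made up for the example.

```python
"""Rank candidate AAC sentences by relevance to detected grocery items."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Where can I find the milk?",
    "How much does this bread cost?",
    "I would like to pay by card.",
    "Do you have gluten-free bread?",
]

detected_items = ["bread"]  # e.g., labels returned by an object detector

# Vectorize stored sentences and the detected-item query in one vocabulary.
vectorizer = TfidfVectorizer()
sentence_vecs = vectorizer.fit_transform(sentences)
query_vec = vectorizer.transform([" ".join(detected_items)])

# Rank sentences by cosine similarity to the detected items.
scores = cosine_similarity(query_vec, sentence_vecs).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {sentences[idx]}")
```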

We synthesized virtual reality fire evacuation training drills in a shared virtual space to explore people's collaboration behavior. We formulate the authoring of a fire evacuation training drill as a total cost function, which we optimize with a Markov chain Monte Carlo (MCMC)-based method. The users' assigned task in the synthesized training drill is to help virtual agents evacuate the building as quickly as possible using predefined interaction mechanisms. Users can join the training drill from different physical locations and collaborate and communicate in a shared virtual space to finish the task. We conducted a user study collecting both in-game measurements and subjective ratings to evaluate whether the synthesized training drills affected how participants collaborated.

Video
ISMAR2022-Adjunct Singapore, Singapore
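
The sketch below illustrates the general shape of an MCMC (Metropolis-style) optimizer of a total cost function, in the spirit of the drill-authoring formulation described in the abstract above. The cost function, layout vector, and proposal move are placeholders; the paper's actual cost terms are not reproduced here.

```python
"""Metropolis-style optimization of a placeholder authoring cost function."""
import math
import random

def total_cost(layout):
    # Placeholder cost: in the paper this would combine drill-authoring
    # objectives (e.g., hazard placement and evacuation-path constraints).
    return sum((x - 3.0) ** 2 for x in layout)

def mcmc_optimize(init, iters=5000, step=0.5, temperature=1.0):
    current, current_cost = list(init), total_cost(init)
    best, best_cost = list(current), current_cost
    for _ in range(iters):
        # Propose a small random perturbation of one layout parameter.
        proposal = list(current)
        i = random.randrange(len(proposal))
        proposal[i] += random.gauss(0.0, step)
        proposal_cost = total_cost(proposal)
        # Metropolis acceptance: always accept improvements, occasionally
        # accept worse proposals to escape local minima.
        accept_prob = math.exp(min(0.0, (current_cost - proposal_cost) / temperature))
        if random.random() < accept_prob:
            current, current_cost = proposal, proposal_cost
            if current_cost < best_cost:
                best, best_cost = list(current), current_cost
    return best, best_cost

layout, cost = mcmc_optimize([0.0, 0.0, 0.0])
print(layout, cost)
```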

We discuss the existing facilities at the Design Computing and Extended Reality (DCXR) Lab at George Mason University, which comprise mostly commercial off-the-shelf computing and extended reality devices, for conducting research on virtual reality (VR)-based training. We also share thoughts on extending the facilities for more sophisticated VR training research in the future, featuring more advanced functionalities such as remote VR training, adaptive training, and co-training in VR. In particular, we discuss a remote VR training platform to be established between George Mason University and Purdue University.

IEEEVR2023-VRW Shanghai, China