Building on a strong foundation in VR research, I pursue interdisciplinary HCI studies across VR/AR/MR/XR technologies, film, gaming, theater, intangible cultural heritage, therapeutic applications, music, EEG, education, environmental protection, AI, virtual humans, psychology, spatial experiences, and exhibitions.
I served as the lead developer, extending Gesture2Pro (UIST 2025 Demo) through system optimization and academic validation. Based on preliminary research, I introduced a finger-drawing interaction within the existing “gesture + voice” framework to handle the generation of geometrically ambiguous 3D objects. I redesigned the semantic recognition process into a dual-path structure that distinguishes “gesture-based indication” from “free-form drawing,” while the generation pipeline follows a multi-stage process (“semantic → image → coarse model → refined model”). Additionally, I integrated real-time VR interaction, enabling generated weapon-type props to be used immediately in combat scenarios (e.g., generating a Gatling gun for instant shooting with rule-based damage calculation). To support user studies, I also designed a dual-interface questionnaire system that allows participants to complete surveys directly within the headset while experimenters monitor and store data in real time on a desktop interface, significantly improving the efficiency of iterative experiments.
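As a rough illustration of the control flow described above, the Python sketch below separates the dual-path dispatch from the four-stage generation pipeline. All names (`PropRequest`, `classify_path`, `generate_prop`) are hypothetical, and string stubs stand in for the actual recognition and generation models; this is a minimal sketch of the structure, not the project's implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class InputPath(Enum):
    GESTURE_INDICATION = auto()  # pointing + naming a geometrically clear object
    FREEFORM_DRAWING = auto()    # mid-air finger drawing for ambiguous geometry

@dataclass
class PropRequest:
    transcript: str                                    # recognized voice command
    stroke_points: list = field(default_factory=list)  # 3D finger-drawing samples

def classify_path(req: PropRequest) -> InputPath:
    # Dual-path dispatch: any stroke data routes the request to the
    # free-form drawing branch; otherwise it is gesture-based indication.
    return InputPath.FREEFORM_DRAWING if req.stroke_points else InputPath.GESTURE_INDICATION

def generate_prop(req: PropRequest) -> dict:
    path = classify_path(req)
    # Stage 1: semantic parsing (stub standing in for the real recognizer).
    semantics = {"prompt": req.transcript, "path": path.name}
    # Stage 2: semantics -> reference image (stub for a generative image model).
    image = f"image({semantics['prompt']})"
    # Stage 3: image -> coarse mesh (stub for an image-to-3D model).
    coarse = f"coarse_mesh({image})"
    # Stage 4: coarse mesh -> refined, game-ready mesh (stub for the refiner).
    return {"mesh": f"refined({coarse})", "path": path.name}

print(generate_prop(PropRequest("a Gatling gun", stroke_points=[(0, 0, 0), (0.1, 0.2, 0.0)])))
```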
This project leverages Mixed Reality (MR) and AI technologies to generate 3D ceramic models in real-world space from user sketches, supporting beginner learning in the shaping, painting, and glazing processes. I was responsible for the design of both quantitative and qualitative studies, including usability and creativity evaluation, as well as interview analysis and data-driven conclusion formation. The project was initially led by Zijun Wan and submitted as a full paper to UIST but was not accepted; after their departure from academia, the project was temporarily suspended. Because the collected data were complete, we later reorganized and refined the work. Under the leadership of Yuxuan Guo and the supervision of Liang Chen, the project was restructured into a 6-page short paper, resubmitted to CHI 2026, and accepted.
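The source does not name the quantitative instruments, but usability evaluations of this kind often use the ten-item System Usability Scale; assuming (purely for illustration) that SUS responses were collected, the standard scoring can be computed as in this minimal Python sketch.

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # 0-based i: even i = odd-numbered item
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```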
This project originated as a sub-study within a broader research initiative on bamboo weaving as intangible cultural heritage (ICH). After completing an early MR prototype focused on heritage education, the team shifted its direction toward aesthetics and creative reinterpretation. I subsequently redefined the research focus and proposed a dual-factor framework for investigating the aesthetic implications of AI-mediated transformation: (1) the degree of abstraction (ranging from realistically weaveable structures to forms that are structurally incompatible with bamboo weaving), and (2) the generation pathway of structure and texture (structure-first vs. pattern-first). I led the development of the research framework, guided the questionnaire design, and contributed to the interview studies and survey model construction. Under tight time constraints and with limited team resources, I completed the full manuscript draft (excluding the related work section), produced the initial figures, and created the project video.
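The dual-factor framework amounts to a full factorial crossing of the two factors. The sketch below enumerates the resulting stimulus conditions in Python; the abstraction level names are assumed for illustration (the paragraph only states the endpoints of the continuum).

```python
from itertools import product

# Assumed level names along the stated continuum, from realistically
# weaveable to structurally incompatible with bamboo weaving.
ABSTRACTION = ["weaveable", "semi-abstract", "structurally-incompatible"]
PATHWAY = ["structure-first", "pattern-first"]

# Crossing the two factors yields the condition grid participants
# would rate in the aesthetics survey (3 x 2 = 6 conditions here).
conditions = [{"abstraction": a, "pathway": p} for a, p in product(ABSTRACTION, PATHWAY)]
for c in conditions:
    print(c)
```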
As the project lead, I extended SugART by developing a storage and interaction system that enables the digital reactivation of static intangible cultural heritage (ICH) artifacts. The system supports content-driven animation (e.g., butterflies dynamically animating based on semantic meaning) and embodied real-world interaction, including grabbing and spatial manipulation. I was responsible for the full development pipeline, as well as the research theme formulation and experimental design. Under tight time constraints, I first delivered a short-paper version of the work, which was later expanded into a full paper when additional time became available. The project was ultimately selected as one of only three Best New Idea (BNI) papers at the conference.
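One plausible shape for the content-driven animation logic is a registry that maps semantic labels recovered from an artifact to animation behaviors and interaction affordances. The Python sketch below is hypothetical throughout (`Behavior`, `BEHAVIORS`, and `reactivate` are illustrative names, not the system's API).

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    animation: str   # clip or procedural motion applied to the digitized element
    grabbable: bool  # whether embodied grab/manipulation is enabled

# Hypothetical registry: semantic labels drive which animation a
# reactivated ICH element receives (e.g., butterflies flutter).
BEHAVIORS = {
    "butterfly": Behavior(animation="flutter_loop", grabbable=True),
    "flower": Behavior(animation="sway", grabbable=False),
}

def reactivate(label: str) -> Behavior:
    # Unrecognized content stays static but still supports spatial manipulation.
    return BEHAVIORS.get(label, Behavior(animation="static", grabbable=True))

print(reactivate("butterfly"))  # Behavior(animation='flutter_loop', grabbable=True)
```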
I developed a preliminary demo of GestuProp, enabling 3D prop generation through multimodal interaction that combines hand gestures and voice input. As the lead developer, I was responsible for the core system implementation. When another programmer was unable to complete assigned tasks on schedule, I proactively reallocated the workload and took over the unfinished components, ensuring the project was delivered on time and met all functional requirements.
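A minimal Python sketch of how the two modalities might be fused, assuming a generation request is issued only when both events fall within the same interaction window; the event types and `fuse` function are hypothetical, not the demo's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GestureEvent:
    kind: str               # e.g. "point" or "circle"
    target_position: tuple  # 3D anchor where the prop should appear

@dataclass
class VoiceEvent:
    transcript: str         # e.g. "create a wooden shield"

def fuse(gesture: Optional[GestureEvent], voice: Optional[VoiceEvent]) -> Optional[dict]:
    # The voice names *what* to generate; the gesture supplies *where*.
    # Without both modalities, no prop request is issued.
    if gesture is None or voice is None:
        return None
    return {"prompt": voice.transcript, "anchor": gesture.target_position}

print(fuse(GestureEvent("point", (1.0, 0.5, 2.0)), VoiceEvent("create a wooden shield")))
```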