Building on a strong foundation in VR research, I pursue interdisciplinary HCI studies across VR/AR/MR/XR technologies, film, gaming, theater, intangible cultural heritage, therapeutic applications, music, EEG, education, environmental protection, AI, virtual humans, psychology, spatial experiences, and exhibitions.
I served as the lead developer, extending Gesture2Pro (UIST 2025 Demo) through system optimization and academic validation. Based on preliminary research, I introduced a finger-drawing interaction within the existing “gesture + voice” framework to address the generation of geometrically ambiguous 3D objects. I redesigned the semantic recognition process into a dual-path structure that distinguishes “gesture-based indication” from “free-form drawing,” while the generation pipeline follows a multi-stage process (“semantic → image → coarse model → refined model”). Additionally, I integrated real-time VR interaction, enabling generated weapon-type props to be used immediately in combat scenarios (e.g., generating a Gatling gun for instant shooting with rule-based damage calculation). To support user studies, I also designed a dual-interface questionnaire system that allows participants to complete surveys directly within the headset while experimenters monitor and store data in real time on a desktop interface, significantly improving the efficiency of iterative experiments.
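A minimal sketch of how the dual-path recognition, multi-stage generation, and rule-based damage described above could be organized. All names, stage functions, and numeric values here are hypothetical illustrations, not the actual system implementation:

```python
from dataclasses import dataclass, field

@dataclass
class UserInput:
    gesture: str                      # e.g. a recognized pointing gesture
    voice: str                        # transcribed voice command
    drawing_strokes: list = field(default_factory=list)  # finger-drawn points

def recognize_semantics(inp: UserInput) -> str:
    """Dual-path recognition: free-form drawing vs. gesture-based indication."""
    if inp.drawing_strokes:                        # path 1: free-form drawing
        return f"drawn-shape:{inp.voice}"
    return f"indicated:{inp.gesture}:{inp.voice}"  # path 2: gesture indication

def generate_prop(semantic: str) -> dict:
    """Multi-stage pipeline: semantic -> image -> coarse model -> refined model."""
    image = f"image_of({semantic})"       # placeholder text/sketch-to-image stage
    coarse = f"coarse_mesh({image})"      # placeholder fast low-poly stage
    refined = f"refined_mesh({coarse})"   # placeholder refinement stage
    return {"semantic": semantic, "mesh": refined}

def rule_based_damage(prop_type: str, distance: float) -> float:
    """Illustrative rule-based damage: base value attenuated by distance."""
    base = {"gatling": 12.0, "sword": 25.0}.get(prop_type, 5.0)
    return base / (1.0 + 0.1 * distance)

prop = generate_prop(recognize_semantics(
    UserInput(gesture="point", voice="gatling gun",
              drawing_strokes=[(0, 0), (1, 2)])))
print(prop["mesh"])
print(rule_based_damage("gatling", 5.0))  # 12.0 / 1.5 = 8.0
```

The routing decision (strokes present or not) stands in for the real semantic classifier; in practice each stage would call generative models rather than string stubs.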
This project leverages Mixed Reality (MR) and AI technologies to generate 3D ceramic models in real-world space from user sketches, supporting beginner learning across the shaping, painting, and glazing processes. I was responsible for the design of both the quantitative and qualitative studies, including usability and creativity evaluation, interview analysis, and data-driven conclusion formation. The project was initially led by Zijun Wan and submitted as a full paper to UIST, but it was not accepted; after Zijun Wan left academia, the project was temporarily suspended. Building on the completeness of the collected data, we later reorganized and refined the work. Under the leadership of Yuxuan Guo and the supervision of Liang Chen, the project was restructured into a 6-page short paper, resubmitted to CHI 2026, and accepted.
This project originated as a sub-study within a broader research initiative on bamboo weaving as intangible cultural heritage (ICH). After completing an early MR prototype focused on heritage education, the team shifted its direction toward aesthetics and creative reinterpretation. I subsequently redefined the research focus and proposed a dual-factor framework to investigate the aesthetic implications of AI-mediated transformation: (1) the degree of abstraction (ranging from realistically weaveable structures to forms that are structurally incompatible with bamboo weaving), and (2) the generation pathway of structure and texture (structure-first vs. pattern-first). I led the development of the research framework, guided the questionnaire design, and contributed to interview studies and survey model construction. Under tight time constraints and limited team resources, I completed the full manuscript draft (excluding the related work section), produced the initial figures, and created the project video.
As the project lead, I extended SugART by developing a storage and interaction system that enables the digital reactivation of static intangible cultural heritage (ICH) artifacts. The system supports content-driven animation (e.g., butterflies dynamically animating based on semantic meaning) and embodied real-world interaction, including grabbing and spatial manipulation. I was responsible for the full development pipeline, as well as the research theme formulation and experimental design. Under tight time constraints, I first delivered a short-paper version of the work, which was later expanded into a full paper when additional time became available. The project was ultimately selected as one of only three Best New Idea (BNI) papers at the conference.
Developed a preliminary demo of GestuProp, enabling 3D prop generation through multimodal interaction combining hand gestures and voice input. As the lead developer, I was responsible for core system implementation. When another programmer was unable to complete assigned tasks on schedule, I proactively reallocated the workload and took over the unfinished components, ensuring the project was delivered on time and met all functional requirements.
This project was developed as part of an XR Bootcamp group assignment, where I was responsible for developing the AI-to-3D module. After the course, the first author initially submitted the work to the SIGGRAPH Demo track, but it was not accepted due to time constraints and limited experience. I subsequently took on the corresponding-author and mentoring role, restructuring the project framework and leading the refinement process, including key visual design, video production, and poster development. The project was then resubmitted to the SIGGRAPH Asia Poster track and accepted.
The project originated from Pastry Painter, a nominee of XRCC 2025. Following the award, we recognized its potential for research in intangible cultural heritage (ICH) and further developed it into an academic publication. Building upon the original artist team, we expanded the collaboration by incorporating two additional academic researchers. Leveraging my interdisciplinary background in art, technology, and research, I served as both the project lead and the bridge between the artistic and academic teams. We re-examined the paradigm of MR-based at-home ICH learning and, informed by prior work in ceramic heritage (ARtisanAI, accepted at CHI 2026 Poster), abstracted a more generalizable system framework. This framework was implemented as an interactive application and subsequently extended into a full paper, Animating the Ephemera.
To address challenges in remote sound art curation, such as insufficient spatial auditory perception and limited flexibility in display control, our team developed Resonix, a VR-based prototype tool integrating features such as a modular simulation toolkit, sound field spatial display, sound-time manager, and role manager. I was responsible for the initial prototype development and for writing the Discussion and Conclusion sections of the paper. This work was later accepted at CHI LBW (a CCF-A conference), laying the foundation for future system iteration and scalability.
To address weak gesture interaction and insufficient cultural understanding in VR traditional culture games, I conducted in-depth interviews with practitioners and intangible cultural heritage (ICH) inheritors, along with a literature review and analysis. Based on these findings, I independently designed and developed a prototype interaction system featuring five VR gesture modes (movement, continuous movement, teleportation, single-hand following, and double-hand following) that incorporate ICH elements. The design solution was presented at the 2nd Virtual Reality Innovation Development Forum, and the system was later applied to the VR first-person combat game Exorcist, showcased at the UAL “2023 Game Collaboration Show.” I was responsible for developing the prototype, writing the technical proposal, and delivering the conference presentation.
To compare the differences in narrative power dynamics among directors, actors, and audiences between traditional films and VR films, I conducted in-depth interviews with industry professionals to analyze VR creation mechanisms suitable for immersive storytelling and to explore the demand for storyboard-like tools. Based on the findings, I designed a creation workflow and developed Dreamfly, a collaborative storyboard software, which was iteratively refined through testing. The research revealed that audiences in VR narratives partially assume director and actor roles due to heightened interactivity. I proposed a design framework to enhance narrative interactivity in VR production and formulated a three-layer theory of narrative freedom in VR storytelling. Dreamfly was exhibited at UAL’s “Discover the Next Generation of Creatives.” I led the interview research, software development, and testing, and wrote the thesis.
To quantify the impact of dirtiness and outdoor color on attention allocation in VR environments, I developed a controlled VR scene with variables for outdoor color and indoor dirtiness. Fifteen participants completed a proofreading task under crossed experimental conditions, with attention data monitored using EEG (electroencephalogram) and ECG (electrocardiogram). The results showed that attention shifts more toward other objects in cluttered environments, while outdoor color had minimal impact on attention. I was responsible for designing the experimental prototype, conducting its development, and performing data collection.
To investigate the impact of safety sign parameters (size, height, deviation) on anxiety levels in fire evacuation scenarios, I constructed a VR campus fire escape scene and invited 15 participants to perform emergency evacuation tasks. Anxiety levels were analyzed based on task duration and physiological signals (EEG/ECG). I was responsible for designing and developing the experimental prototype.
Continue reading →