GestuProp: 3D Virtual Reality Prop Generation with Co-Speech Gestures

Zhihao Yao, Xiwen Yao, Haowei Xiong, Yuan-Ling Feng, Qirui Sun, Yijie Guo, and Haipeng Mi. 2026. GestuProp: 3D Virtual Reality Prop Generation with Co-Speech Gestures. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI '26). ACM, New York, NY, USA. (CCF-A)

Objective:
Make ceramic learning accessible and interactive by using MR and AI technologies to transform user sketches into 3D ceramic models situated in real space for embodied practice.

Methods:

· Designed an MR + AI system that generates 3D ceramic models in real space based on user sketches.

· Supported a full learning workflow including shaping, painting, and glazing through interactive guidance.

· Conducted quantitative and qualitative studies, including usability and creativity assessments and semi-structured interviews.

· Analyzed the resulting data to assess user experience, learning effectiveness, and creative support.

Results:

· Demonstrated that MR-based sketch-to-3D interaction effectively supports beginner learning in ceramic crafting.

· Showed improvements in usability, engagement, and creative exploration through AI-assisted interaction.

· The work was accepted to CHI 2026 as a six-page short paper after being restructured from an earlier full-paper submission.

Contribution:
Led the design of the quantitative and qualitative evaluation methods, conducted the interviews and data analysis, and helped restructure the project into a publishable short paper across submission iterations.

Abstract:

Ceramic craftsmanship traditionally requires hands-on practice, expert guidance, and access to specialized materials, making it difficult for beginners to learn in remote or home-based environments. Existing digital learning tools often lack embodied interaction and real-time feedback, limiting their effectiveness in supporting skill acquisition and creative exploration.

This study presents an MR- and AI-based system that transforms user sketches into 3D ceramic models in real space, enabling an interactive and embodied learning experience. The system supports key stages of ceramic creation, including shaping, painting, and glazing, while providing guidance through AI-assisted interaction.

The system was evaluated with both quantitative and qualitative methods, including usability and creativity assessments as well as semi-structured interviews. Results indicate that it enhances user engagement, lowers learning barriers, and supports creative exploration in ceramic practice.

The project was initially submitted to UIST as a full paper and later restructured into a short paper, leading to its acceptance at CHI 2026. These findings demonstrate the potential of integrating MR and AI technologies to transform craft education and expand access to intangible cultural heritage practices.

Keywords:
Mixed Reality (MR), Artificial Intelligence, Ceramic Craft, Intangible Cultural Heritage (ICH), Interactive Learning, Sketch-to-3D