We're the Soft Interaction Lab.

Soft Interaction Lab, led by Professor J. Hwaryoung Seo, is an interactive art/design research group. We integrate physical and digital experiences by experimenting with soft/organic materials and tangible interaction techniques. Our primary aim is to engage diverse audiences who can benefit from soft interaction through research and creative activities.

If you would like to join SIL as a graduate student or an undergraduate student, please contact Dr. Seo at hwaryoung@tamu.edu.

Interactive Arts
Tangible STEAM Education
Creative Health & Wellbeing

Our Projects

Our Team

Jinsil Hwaryoung Seo, PhD


Assistant Professor, VIZ

Brian Smith

3D Artist

MFA student, VIZ

Stephen Aldriedge


MS student, VIZ

Janelle Arita

UX Designer

MS student, VIZ


Soft Interaction Lab at AHFE 2017

MS VIZ student Megan Cook presented her research project, as well as our Anatomy Builder study, at the AHFE 2017 (Applied Human Factors and Ergonomics) international conference yesterday. Go Megan!

Congratulations to all co-authors!

Brian Michael Smith, Erica Renee Malone, Dr. Michelle Pine, Steven Leal, Dr. Zhikun Bai (Anatomy Builder) and Annie Sungkajun (for Megan’s project).

Annie presented at EVA London 2017

Annie is back from EVA (Electronic Visualization and the Arts) London 2017.

She presented three projects from our lab.

Thanks Annie for representing us!



Soft Interaction Lab ran a YAP: Virtual Technology program last week (July 11-15).

More Photos

Anatomy Education Tools

The research team has been working on developing digital, augmented reality tools for anatomy learning: ARnatomy and FlexAR. Students learn the osteology and muscular system of the canine pelvic limb and the human thoracic limb by moving physical bones in front of a camera attached to a smart device that provides various anatomical information.



Integrating traditional materials (bones) with augmented reality on mobile devices preserves the core quality of an embodied experience of handling bones, while building multimedia information around the bones in a computational environment. We created a system that recognizes a variety of 3D-printed bones as a user holds and moves a bone in front of a mobile device's camera. Once a bone is recognized, it is annotated with virtual text labels that move on screen to match the video camera feed of the bone. The labels are clear and effective at pointing out regions of interest. In addition, we created a mode that allows the user to see the recognized bone in the context of the entire skeleton.

The system is separated into three main components. The first receives data from the mobile device's camera and passes it to the Object Recognition and Tracking module, which approximates the spatial data of each recognized bone and feeds it to the Unity3D game engine in the Graphical User Interface step. Inside the Unity3D application, a collection of components defines the content of expected and recognized objects; all bones and learning content are stored and tracked at this level. This collection of data describes and acts on the 3D scene that is presented to the user, composited with the live camera feed.
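The three-component flow above can be sketched roughly as follows. This is a minimal illustrative stand-in, not the actual implementation: the real system uses Unity3D with image-based tracking, and every class, function, and label name here is a hypothetical placeholder.

```python
# Sketch of the pipeline: camera frame -> recognition/tracking -> annotated scene.
# All names and data structures are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class TrackedBone:
    name: str                                  # e.g. "femur"
    pose: tuple                                # approximated spatial data from tracking
    labels: list = field(default_factory=list)  # virtual text labels attached later

class ObjectRecognitionAndTracking:
    """Component 2: recognizes 3D-printed bones in a camera frame."""
    KNOWN_BONES = {"femur", "tibia", "humerus"}

    def process(self, frame):
        # A real implementation would match image features against tracking
        # targets; here we filter pre-annotated detections for brevity.
        return [TrackedBone(name, pose)
                for name, pose in frame["detections"]
                if name in self.KNOWN_BONES]

class AnatomyContent:
    """Component 3: learning content stored at the Unity3D application level."""
    LABELS = {"femur": ["greater trochanter", "femoral head"]}

    def annotate(self, bones):
        for bone in bones:
            bone.labels = self.LABELS.get(bone.name, [])
        return bones

def build_scene(camera_frame):
    """Component 1: drives the loop from camera data to the composited scene."""
    bones = ObjectRecognitionAndTracking().process(camera_frame)
    return AnatomyContent().annotate(bones)

# Unrecognized objects (here, "mug") are simply ignored by the tracker.
frame = {"detections": [("femur", (0.0, 0.1, 0.4)), ("mug", (0.0, 0.0, 0.0))]}
scene = build_scene(frame)
```

The key design point the sketch preserves is that recognition and content are decoupled: the tracking module only reports which bone is visible and where, and the content layer decides which labels to overlay.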

Seo, J. H., Storey, J., Chavez, J., Reyna, D., Suh, J., & Pine, M. (2014). ARnatomy: tangible AR app for learning gross anatomy. ACM SIGGRAPH 2014 Posters (SIGGRAPH ’14). New York, NY, USA.

Seo, J. H. (2015). One ARnatomy. Augmented World Expo 2015. Santa Clara, CA. USA.



We focus on demonstrating the flexion and extension of various muscle groups as the result of moving a physical skeletal model. In addition, we wanted to explore different AR interface styles (wearable, tablet, and computer) to see how they support different learning styles. Users of our prototype manipulate a physical skeletal model affixed with augmented reality (AR) targets.

An AR-enabled device records this interaction and projects a digital 3D model, consisting of the bones and major muscles of the arm, over the physical model. Users can then examine both gross anatomy and muscle flexion and extension. The user can also interact through a graphical user interface to highlight and display additional information on individual muscles. FlexAR was built using the Unity game engine with Qualcomm's Vuforia plugin, a mobile AR library, to handle the capturing and tracking of our augmented reality targets. For FlexAR, we use four targets: one to determine the basic position of the arm, and the other three to control the rotation of the shoulder, elbow, and wrist joints of the 3D model. The assets for the 3D overlay were developed in Maya using our physical arm model.
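The four-target mapping can be sketched as below. This is a simplified stand-in for illustration only: the actual system runs in Unity with Vuforia, and the function names, single-axis angle representation, and the 90-degree flexion threshold are all assumptions made for the sketch.

```python
# Hypothetical sketch: one "base" target anchors the arm overlay, and three
# more targets drive the shoulder, elbow, and wrist joint rotations.
# Angles are single-axis degrees for brevity; real tracked poses are 6-DOF.

JOINTS = ("shoulder", "elbow", "wrist")

def update_arm_model(target_poses):
    """Map tracked target angles onto the 3D arm model's joints.

    target_poses: dict with keys "base", "shoulder", "elbow", "wrist",
    each an angle in degrees read from its AR target.
    """
    base = target_poses["base"]                       # positions the whole overlay
    rotations = {joint: target_poses[joint] for joint in JOINTS}
    # Illustrative flexion readout: treat a joint past 90 degrees as flexed.
    flexed = [joint for joint in JOINTS if rotations[joint] > 90.0]
    return {"base": base, "rotations": rotations, "flexed": flexed}

state = update_arm_model({"base": 0.0, "shoulder": 30.0,
                          "elbow": 120.0, "wrist": 10.0})
```

Separating the base target from the joint targets mirrors the described design: the overlay stays registered to the physical model even while individual joints are rotated independently.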

Saenz, M., Strunk, J., Maset, K., Malone, E., & Seo, J. H. (2015). See the Flex: Investigating Various Display Settings for Different Study Conditions. SIGGRAPH 2015 Posters (SIGGRAPH ’15). ACM, New York, NY, USA.


Arts-based practices for Anatomy Education

This research has been conducted by the Creative Anatomy Collective team, led by Dr. Jinsil Hwaryoung Seo and Dr. Michelle Pine in collaboration with other faculty members: Tim McLaughlin, Carol LaFayette, and Felice House from the Department of Visualization; Christine Bergeron from the Dance program; and Takashi Yamauchi from the Department of Psychology.

Section 200 of the Biomedical Anatomy course (Instructors: Dr. Michelle Pine and Erica Malone) was designed to combine the traditional methods of teaching undergraduate anatomy with more innovative, creative and engaging methods involving the arts. As part of the course, four studio sessions were held during the combined lecture and lab periods. During each of the studio sessions the concepts and structures being presented in lecture and dissection were addressed, using a different creative method for each session.


Session #1 (Drawing): Students conducted an in-depth study of facial muscles of expression by creating three large-scale drawings and labeling the muscles on each.

Session #2 (Sculpture): Students compared the human and canine muscles of the thoracic limb by building the same muscles for each species in clay. The arm models were provided by Anatomy in Clay.

Session #3 and #4 (Body Movement): The final two sessions required students to transfer knowledge gained on the model species used in class, the dog, to their own bodies, as they explored the human anatomy of the thoracic and pelvic limbs respectively.

Students reported that they enjoyed the studio sessions and that each helped them to better understand or recognize a concept or structure covered in the course, which contributed to their success.

Body Paintings


Interactive Kinetic Modeling


More Photos from this research

VIZKids_YAP 2015

Soft Interaction Lab ran a YAP 3: Virtual Technology program last week (July 27-31).

More Photos

Come and Work with Us