ARnatomy 2014

  • SIGGRAPH 2014

Project Details

Many gross anatomy students use a dog bone box for individual study. This is effective because handling physical bones increases a student's immersion in the anatomical context and helps them recall information associated with the bodily experience. Integrating these traditional materials (bones) with augmented reality on mobile devices preserves the core quality of the embodied experience of handling bones while building multimedia information around them in a computational environment.

We created a system that can recognize a variety of 3D-printed bones while a user holds and moves a bone in front of a mobile device's camera. Once a bone is recognized, it is populated with virtual text labels that move on the screen to follow the bone in the live camera feed. The labels clearly and effectively point out regions of interest. We also created a mode that lets the user see the recognized bone in the context of the entire skeleton.
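As a rough illustration of how labels can follow a tracked bone on screen, the sketch below projects 3D anchor points defined on the bone model into pixel coordinates each frame using the estimated pose. This is a generic pinhole-projection sketch in Python, not the project's actual code; the function name, camera intrinsics, and anchor coordinates are all illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the project's implementation) of label
# tracking: anchor points on the bone's 3D model are transformed by the
# tracker's estimated pose and projected into screen space, so text labels
# stay attached to regions of interest as the bone moves.

import numpy as np

def project_labels(anchors_model, pose, intrinsics):
    """Project 3D label anchors (Nx3, model space) to 2D pixel positions.

    pose:       4x4 model-to-camera transform from the tracker
    intrinsics: 3x3 camera matrix (fx, fy, cx, cy)
    """
    n = anchors_model.shape[0]
    homogeneous = np.hstack([anchors_model, np.ones((n, 1))])  # Nx4
    cam = (pose @ homogeneous.T).T[:, :3]                      # Nx3, camera space
    pix = (intrinsics @ cam.T).T                               # Nx3
    return pix[:, :2] / pix[:, 2:3]                            # divide by depth

# Example: one anchor on a femur model, held about 0.3 m in front of the camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
pose = np.eye(4)
pose[2, 3] = 0.3
anchors = np.array([[0.02, 0.05, 0.0]])
print(project_labels(anchors, pose, K))  # screen position where the label is drawn
```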

The system is separated into three main components. Frames from the mobile device's camera are passed to the Object Recognition and Tracking module, which identifies the bone and approximates its spatial pose. This spatial data is fed to the Unity3D game engine, which provides the graphical user interface. Inside the Unity3D application, a collection of components defines the content for expected and recognized objects; all bones and associated learning content are stored and tracked at this level. This data drives the 3D scene that is composited with the live camera feed and presented to the user.
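To make that data flow concrete, here is a minimal Python sketch of one pass through the three-component pipeline under the assumptions above. The Tracker, Renderer, and registry names are hypothetical stand-ins for the Object Recognition and Tracking module, the Unity3D scene composited over the video feed, and the stored bone and learning content.

```python
# Illustrative sketch only: camera frames feed recognition and tracking,
# the estimated pose feeds the rendering stage, and a content registry
# maps each expected bone to its labels and learning material.

from dataclasses import dataclass

@dataclass
class Recognition:
    bone_name: str
    pose: tuple          # placeholder for a model-to-camera transform

class Tracker:
    """Stands in for the object recognition and tracking module."""
    def recognize(self, frame):
        # A real tracker would match the frame against known bone models.
        return Recognition(bone_name="femur", pose=(0.0, 0.0, 0.3))

class Renderer:
    """Stands in for the 3D scene composited with the live video feed."""
    def draw(self, frame, overlays, pose=None):
        return {"frame": frame, "overlays": overlays, "pose": pose}

def process_frame(frame, tracker, registry, renderer):
    """One pass through the pipeline for a single camera frame."""
    result = tracker.recognize(frame)
    if result is None:
        return renderer.draw(frame, overlays=[])
    labels = registry.get(result.bone_name, [])
    return renderer.draw(frame, overlays=labels, pose=result.pose)

# Example: label content keyed by bone name, as kept at the Unity3D level.
registry = {"femur": ["head", "greater trochanter", "lateral condyle"]}
print(process_frame("camera-frame", Tracker(), registry, Renderer()))
```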

  • Researchers: Jinsil Hwaryoung Seo, James Storey, John Chavez, Diana Reyna, Jinkyo Suh
