GAIT ANALYSIS USING A 3D GRAPHIC MODEL TO DRIVE IMAGE PROCESSING

Kathy Johnson*, Rick Parent**, Jianxiang Chang**, Xiaoning Fu**

*Dept. Health Informatics, UT Houston; **CIS Dept., The Ohio State University

ABSTRACT

Gait analysis is a valuable tool in diagnosing walking disorders. However, the laboratories where such studies take place are often far from where the patients are. In this paper we describe a preliminary approach to using standard video for follow-up analysis. Techniques from graphics and human motion modeling have been combined to determine the patient’s motion.

INTRODUCTION

Human leg motion has been widely studied and simulated. Some approaches have used several detectors and multiple cameras to record the position of the leg and then used image processing to simulate the leg motion. In this work, our objective is to minimize the use of markers and cameras in order to provide a low-tech solution to gait digitization. To accomplish this, we use a 3D model of the patient's leg to drive the image processing.

In our scenario, a patient has an initial visit to a clinic, at which time information about the patient's leg geometry and pathology is recorded. After some treatment period has elapsed (e.g., wearing a brace or post-surgical recovery), a videotape of the patient walking is taken at a remote site and sent in for analysis. We are in the process of developing techniques to enable this analysis.

METHODOLOGY

Step 1: Leg modeling

For the initial stage of the project we created surface and skeleton data for a human leg. To obtain digital data for a human leg, we downloaded 110 cross-sectional images of the leg from the "Visual Man" dataset (1). Each image was processed to extract the contour of the cross section. Each contour was digitized and connected to the adjacent contours so that the surface of the leg could be constructed. A digital model of the leg's bones was obtained from the internet.
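To illustrate this contour-extraction step, the following is a minimal sketch, not the code used in the project. It assumes the cross-section slices are available as grayscale image files in which the leg tissue is brighter than the background; the file names, threshold value, and OpenCV-based approach are all assumptions made for the example.

```python
# Sketch of Step 1 contour extraction (illustrative; paths are placeholders).
import cv2
import numpy as np

def extract_slice_contour(image_path, thresh=30):
    """Return the largest closed contour of one cross-section slice."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Separate tissue from background with a simple global threshold.
    _, mask = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest contour; smaller ones are usually noise.
    outline = max(contours, key=cv2.contourArea)
    return outline.reshape(-1, 2)          # N x 2 array of (x, y) points

# The 110 digitized contours are then stacked and adjacent contours joined
# (a simple lofting step, not shown) to form the leg surface.
slices = [extract_slice_contour(f"slice_{i:03d}.png") for i in range(110)]
```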

Step 2: Image processing

The next step was to obtain gait data from a human subject without the use of markers. With the assistance of The Ohio State University Gait Lab, video sequences of a subject's leg motion were taken. (To simplify the image processing at this stage, the subject wore a red stocking on one leg and a black stocking on the other; this requirement will be eliminated as our image processing becomes more sophisticated.) The video was recorded with two cameras at different positions: one camera in front of the subject and the other at the side of the subject.

The side-view and front-view video was captured to digital disk. The silhouettes of the legs were extracted from the images; to reduce noise, a cubic B-spline curve was used to represent each leg contour. This curve was fitted to the contour data using the contour points as control points of the B-spline, which had the effect of smoothing out local variations in the contour due to noise. A horizontal slice was taken through each scanline of the silhouette, and the midpoint of the slice was used to produce a central axis for each image. Each axis thus consisted of one to two hundred points, one per scanline. Figure 1 shows a sample contour.
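The following is a rough sketch of this smoothing and axis-extraction step, assuming each frame's silhouette is already available as a binary mask. The use of scipy's least-squares B-spline fit (rather than taking the contour points literally as control points) and all function and variable names are illustrative assumptions.

```python
# Sketch of Step 2: contour smoothing and central-axis computation.
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_contour(contour_xy, n_samples=400):
    """Fit a closed cubic B-spline to noisy contour points and resample it."""
    x, y = contour_xy[:, 0], contour_xy[:, 1]
    tck, _ = splprep([x, y], s=len(x), per=True, k=3)   # cubic, periodic
    u = np.linspace(0.0, 1.0, n_samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])

def central_axis(silhouette_mask):
    """Midpoint of each horizontal slice (one point per scanline)."""
    axis = []
    for row in range(silhouette_mask.shape[0]):
        cols = np.flatnonzero(silhouette_mask[row])
        if cols.size:                       # skip scanlines with no leg pixels
            axis.append((0.5 * (cols[0] + cols[-1]), row))
    return np.asarray(axis)                 # roughly 100-200 points per frame
```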

Step 3: 3-D leg motion simulation

The multiple 2D views of the leg motion, basic knowledge of human anatomy, and velocity constraints on the motion were used to reconstruct the leg motion from the axis data. First, for each time slice, the two 2D axes were used to construct a 3D axis by forming the planes of projection from the camera setup and intersecting the projections of the 2D axes from the image planes. This resulted in, for each of the two legs, a single animation sequence of a 3D axis through time.

These axes, however, still contained noise. To overcome the noise and to infer consistent upper- and lower-leg axes, the sequence was searched for the frames in which the largest curvature occurred at approximately the middle of the axis. This located the knee joint in each frame with an associated measure of reliability. By fitting straight lines to the upper and lower parts of the axis, a stylized leg could be constructed. The same process was used to locate the ankle in one or more frames.

Once this was done for the set of frames, the more reliable points were used as control points of a space-time curve to track the motion of the knee and ankle joints throughout the sequence. (This is somewhat inaccurate because the original 2D axis was built from the midpoint of the silhouette rather than from something more anatomically appropriate; we will improve this in future work.) The space-time curve could also be used to infer the position of joints in frames where data was missing due to occlusion. Once these joints were located in all frames, the entire skeleton could be constructed in all frames. The bone data and the surface data created in Step 1 were then fit to this skeleton.
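To make the reconstruction step concrete, here is a simplified sketch that assumes the front and side cameras can be treated as orthogonal orthographic views sharing a vertical image axis; under that assumption, intersecting the projection planes reduces to pairing front-view (x, y) points with side-view (z, y) points at the same height. The actual system works with the full camera projection geometry.

```python
# Simplified sketch of the two-view 3D axis reconstruction (orthographic
# assumption for clarity; not the paper's full projection-plane intersection).
import numpy as np

def reconstruct_3d_axis(front_axis, side_axis):
    """Merge a front-view axis (x, y) and a side-view axis (z, y) into 3D."""
    points = []
    side_by_row = {int(y): z for z, y in side_axis}
    for x, y in front_axis:
        z = side_by_row.get(int(y))
        if z is not None:                  # keep rows visible in both views
            points.append((x, y, z))
    return np.asarray(points)
```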
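Similarly, the knee localization and space-time joint tracking described above might look as follows; the discrete curvature measure, the mid-axis weighting, and the per-frame reliability weights are illustrative choices rather than the exact ones used in the project.

```python
# Sketch of knee localization by curvature and space-time joint tracking.
import numpy as np
from scipy.interpolate import UnivariateSpline

def locate_knee(axis_3d, window=5):
    """Index and score of the sharpest bend near the middle of a 3D axis."""
    n = len(axis_3d)
    best_i, best_score = None, 0.0
    for i in range(window, n - window):
        a, b, c = axis_3d[i - window], axis_3d[i], axis_3d[i + window]
        v1, v2 = b - a, c - b
        # Turning angle between successive segments as a curvature proxy.
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.arccos(np.clip(cosang, -1.0, 1.0))
        # Prefer bends near the middle of the axis (upper/lower leg boundary).
        score = angle * (1.0 - abs(i - n / 2) / (n / 2))
        if score > best_score:
            best_i, best_score = i, score
    return best_i, best_score

def track_joint(times, positions, weights):
    """Weighted smoothing splines through the reliable joint detections.

    times must be increasing; returns a function t -> (x, y, z) that can
    also be evaluated at occluded frames to fill in missing joints.
    """
    splines = [UnivariateSpline(times, positions[:, d], w=weights, k=3)
               for d in range(3)]
    return lambda t: np.array([s(t) for s in splines])
```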

RESULTS & DISCUSSION

We are making rapid progress on the tasks we have set for ourselves. Examples of the current state of our work are given in the figures: Figure 1 shows a contour extracted from video, and Figure 2 shows a frame of the bone and contour animation.

Figure 1: Extracted leg contour, side view.

Figure 2: Single frame of motion and skeleton animation.

Using video of a markerless human has proven to be a promising approach to analyzing gait. We hope to improve many facets of the approach in order to make the system more robust and to eliminate the restrictions it currently has. Nevertheless, we have successfully analyzed video and recreated a human gait cycle with this technique. The next step is to capture marker data simultaneously and use it as a baseline for comparison with our results. We have already used motion data from our gait lab to animate the initial skeleton and surface data.

REFERENCES

  1. Visual Man, National Library of Medicine, http://www.nlm.nih.gov/research/visible/visible_human.html
  2. D. DeCarlo et al., "Deformable Model-Based Shape and Motion Analysis from Images Using Motion Residual Error," Proceedings of ICCV '98, pp. 113-119, 1998.
  3. S. Kim et al., "Two-dimensional analysis and prediction of human knee joint," Biomedical Sciences Instrumentation, vol. 29, pp. 33-46, 1993.
  4. L. Hong et al., "3D virtual colonoscopy," Proceedings of Biomedical Visualization, pp. 26-32, 1995.

ACKNOWLEDGMENTS

We wish to thank the OSU Interdisciplinary Seed Grant Program for providing the initial funding for this project, the OSU Gait Lab for access to the facilities and personnel needed to conduct our tests, and the CIS Department for providing the space and infrastructure support to conduct the research.
