Feature Based Image Morphing

Varun Srivastava and Saurabh Shukla
{varun, sshukla}@cs.utexas.edu


Image Morphing        

                Image morphing is the technique of seamlessly transforming one image into another. It typically involves generating a number of intermediate images that depict the transition from one image to the other. Morphing is extensively used in the entertainment industry. It has been used in many movies, Terminator and Star Trek being two examples, and it is also used in the gaming industry to generate animations in video games.

There are a few traditional techniques of image morphing. A given image can be transformed into another image by manually drawing the intermediate images and then generating an animation of all intermediate images. Another simple technique for image morphing is cross-dissolving. In this technique, the source image is slowly faded out and the target image is slowly faded in. This technique transforms the images pixel by pixel. Morphing at the pixel-level can also be performed by defining each pixel as a particle. The particle system then maps pixels from the source image to the destination image.
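A minimal sketch of pixel-level cross-dissolving, assuming RGB tuples and a hypothetical helper name:

```python
def cross_dissolve(src_px, dst_px, t):
    # Blend corresponding pixels: t = 0 gives the source color,
    # t = 1 gives the destination color.
    return tuple((1 - t) * s + t * d for s, d in zip(src_px, dst_px))
```

At t = 0.5 the two images contribute equally, which is why a plain cross-dissolve produces a ghostly double exposure wherever features do not line up.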
These techniques share two major problems: they are either labor-intensive or they fail to consider the image features. For example, if we are morphing one face into another, then we want the eyes, nose and mouth of the source image to morph into the corresponding eyes, nose and mouth of the target image.

We have implemented feature based image morphing, introduced by Thaddeus Beier and Shawn Neely [1]. We extended this technique to apply morphs only to specific regions within images of human faces, which greatly improves the quality of the morphs.

Feature Based Image Morphing

                Feature based image morphing consists of two steps: image warping and cross-dissolving. During image warping, the source image is transformed into the destination image by an inverse transform that maps every pixel in the destination image to a pixel in the source image. This distorts the source image, and the distortion is controlled by specifying a pair of control lines.

                The following figure depicts how this distortion is obtained.

                PQ is the control line in the destination image. For every pixel X in the destination image, the values u and v are computed: u is the position of X along PQ, expressed as a fraction of the line's length, and v is the perpendicular distance from X to the line. The corresponding pixel X' is then located at the same u and v relative to the control line P'Q' in the source image, and the color at X' is passed to the cross-dissolving step.
                Similarly, for every pixel in the source image, the color of the corresponding warped pixel in the destination image is obtained and passed to the cross-dissolving step. During cross-dissolving, weighted components of the two colors are combined to produce the color in the resultant morphed image.
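The single-line mapping just described can be written out as follows. This is a sketch in Python; the point tuples and the name warp_point are our own notation, not part of the original system:

```python
import math

def perp(dx, dy):
    # Perpendicular of a 2-D vector (rotated 90 degrees).
    return (dy, -dx)

def warp_point(X, P, Q, P2, Q2):
    """Map destination pixel X to source pixel X' using control line
    PQ in the destination and the corresponding line P'Q' in the source."""
    dx, dy = Q[0] - P[0], Q[1] - P[1]
    len2 = dx * dx + dy * dy
    # u: fractional position of X along PQ (0 at P, 1 at Q)
    u = ((X[0] - P[0]) * dx + (X[1] - P[1]) * dy) / len2
    # v: signed perpendicular distance from X to the line PQ
    px, py = perp(dx, dy)
    v = ((X[0] - P[0]) * px + (X[1] - P[1]) * py) / math.sqrt(len2)
    # Reconstruct X' at the same (u, v) relative to P'Q'
    dx2, dy2 = Q2[0] - P2[0], Q2[1] - P2[1]
    len_src = math.sqrt(dx2 * dx2 + dy2 * dy2)
    px2, py2 = perp(dx2, dy2)
    xs = P2[0] + u * dx2 + v * px2 / len_src
    ys = P2[1] + u * dy2 + v * py2 / len_src
    return (xs, ys)
```

Note that when the two control lines coincide, the mapping is the identity: every pixel looks up its own location in the source image.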

                To improve the quality of the warp, more than one pair of lines is used, and the displacement contributed by each line is weighted by three parameters:
                1) a: controls the smoothness of the warp
                                warping is smoother with larger values, but the lines exert less precise control
                2) b: controls how quickly a line's weight falls off with distance from the pixel
                                lines closer to a pixel have more weight
                3) p: controls the weight of a line with respect to its length
                                longer lines have more weight

The algorithm is as follows:

For each pixel X in the destination image
              DSUM = (0, 0)
              weightsum = 0
              For each line PiQi
                            Calculate u, v based on PiQi
                            Calculate X'i based on u, v and P'iQ'i
                            Di = X'i - X        (displacement contributed by this line)
                            dist = shortest distance from X to PiQi
                            weight = (length^p / (a + dist))^b
                            DSUM += Di * weight
                            weightsum += weight
              X' = X + DSUM / weightsum
              DestinationImage(X) = SourceImage(X')
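The per-pixel loop above can be sketched in Python as follows. This is a minimal, unoptimized version; the default values of a, b and p are illustrative guesses, not the settings used in our system:

```python
import math

def warp_pixel(X, dst_lines, src_lines, a=0.5, b=1.0, p=0.2):
    """Map destination pixel X to a source location using all control-line
    pairs. dst_lines / src_lines are lists of ((Px, Py), (Qx, Qy)) tuples."""
    dsum = [0.0, 0.0]
    weightsum = 0.0
    for (P, Q), (P2, Q2) in zip(dst_lines, src_lines):
        dx, dy = Q[0] - P[0], Q[1] - P[1]
        len2 = dx * dx + dy * dy
        length = math.sqrt(len2)
        # (u, v) of X relative to the destination line PQ
        u = ((X[0] - P[0]) * dx + (X[1] - P[1]) * dy) / len2
        v = ((X[0] - P[0]) * dy - (X[1] - P[1]) * dx) / length
        # X'_i: the same (u, v) relative to the source line P'Q'
        dx2, dy2 = Q2[0] - P2[0], Q2[1] - P2[1]
        len_src = math.sqrt(dx2 * dx2 + dy2 * dy2)
        xi = (P2[0] + u * dx2 + v * dy2 / len_src,
              P2[1] + u * dy2 - v * dx2 / len_src)
        # Shortest distance from X to the segment PQ
        if u < 0:
            dist = math.dist(X, P)
        elif u > 1:
            dist = math.dist(X, Q)
        else:
            dist = abs(v)
        weight = (length ** p / (a + dist)) ** b
        dsum[0] += (xi[0] - X[0]) * weight
        dsum[1] += (xi[1] - X[1]) * weight
        weightsum += weight
    return (X[0] + dsum[0] / weightsum, X[1] + dsum[1] / weightsum)
```

A full morph would call warp_pixel for every destination pixel (and symmetrically for the source), then cross-dissolve the two warped colors with weights 1-t and t.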


Extension: Region Morphing


                When morphing faces, objects in the images that are not part of a face, e.g. the background, may also get morphed, which gives unwanted results. For example, in the figure below, the user has specified proper control lines for all the features in the two faces, but because the entire images are morphed, backgrounds included, the result is not a clean morph.


                Our system can automatically detect the eyes, nose and mouth within faces, as well as the boundaries of the faces, and control lines are automatically drawn for each detected feature. The detection mechanism is very simple; for example, the mouth is located at approximately one third of the face height above the bottom of the face. Since these are approximate locations, we also allow the user to adjust the lines to their correct positions.
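The placement heuristic can be sketched as follows. Only the mouth fraction comes from the text above; the other fractions, the function name, and the box format are illustrative assumptions:

```python
def initial_feature_lines(face_box):
    """face_box = (left, top, width, height); returns {feature: (P, Q)},
    each a horizontal control line the user can then adjust."""
    left, top, w, h = face_box

    def hline(y_frac, x0_frac, x1_frac):
        # Horizontal line at a fractional height within the face box.
        y = top + y_frac * h
        return ((left + x0_frac * w, y), (left + x1_frac * w, y))

    return {
        # Mouth: one third of the face height above the bottom (from the text).
        "mouth": hline(1 - 1 / 3, 0.35, 0.65),
        # Eyes and nose: illustrative fractions only, not the system's values.
        "left_eye": hline(0.4, 0.2, 0.4),
        "right_eye": hline(0.4, 0.6, 0.8),
        "nose": hline(0.55, 0.45, 0.55),
    }
```

Because these are only rough initial positions, presenting them as draggable lines and letting the user refine them is what makes the simple heuristic workable.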

                After clipping the faces, morphing is applied only to these regions. As shown in the figure below, this greatly improves the quality of morphing.

[1] Thaddeus Beier and Shawn Neely, “Feature-Based Image Metamorphosis”, ACM SIGGRAPH Computer Graphics, 1992.