One of the few things that a human can do better than a computer is tracing. Our brains detect edges quickly and recognize them when drawing pictures or doing other graphic work. Computers, however, usually rely on edge-detection estimates, which give only a rough idea of where edges appear in a picture.
The Intelligent Scissors algorithm, developed by Eric N. Mortensen and William A. Barrett, uses a picture's gradient magnitude (fG), gradient direction (fD), and zero-crossing edge filtering (fZ) to determine where edges may lie in an image. The formula below calculates the cost of moving from a particular pixel, p, to one of its neighbors, q, in order to build a low-cost path for an edge.
cost(p,q) = 0.43 * fZ(q) + 0.43 * fD(p,q) + 0.14 * fG(q)
The path created by finding an edge with the algorithm is called a contour. The contour of an image is generated by placing seeds (starting points for the edge generation), drawing out many paths, and connecting them all together once all the seeds are in place. More information on this algorithm can be found here: Mortensen's Paper
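The weighted cost above can be sketched in a few lines. This is an illustrative sketch, not the project's code: `fG`, `fZ`, and `fD` are assumed to be precomputed, normalized feature maps indexed by pixel (or pixel pair), and the function name is made up for the example.

```python
# Sketch of the weighted link cost from Mortensen and Barrett's formulation.
# fG, fZ, and fD are assumed precomputed, normalized feature maps;
# the names and indexing here are assumptions for illustration only.

def link_cost(fG, fZ, fD, p, q):
    """Cost of stepping from pixel p to its neighbor q (lower = stronger edge)."""
    w_Z, w_D, w_G = 0.43, 0.43, 0.14  # weights from the original paper
    return w_Z * fZ[q] + w_D * fD[(p, q)] + w_G * fG[q]
```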
For this project, I used a simpler version of the Intelligent Scissors algorithm, incorporating only gradient magnitude to determine where edges may lie. The overall goal of the algorithm is to compare the costs of a pixel's adjacent neighbors and thereby build a low-cost path of pixels that traces an edge along the picture. First, I calculate the gradient magnitude at each of the 8 adjacent neighbors of a particular pixel. The neighbor with the highest gradient magnitude has the sharpest change in color, so an edge may lie in that direction.
After calculating the gradient magnitudes for each pixel and its adjacent neighbors, I kept track of the highest gradient magnitude encountered in the whole image. Using this value, I normalized the gradient magnitude of each link so that links with the highest gradient magnitude have the lowest cost. With a cost assigned to every neighboring edge link, I used Dijkstra's algorithm to find the lowest-cost path from a particular node to a source node.
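The cost inversion and shortest-path search might look like the sketch below. The helper names and data layout (a dict from pixel to gradient magnitude, a `neighbors` callback) are assumptions for illustration; the inversion `1 - g / max_g` is one common way to make strong edges cheap.

```python
import heapq

# Sketch of the cost inversion and Dijkstra search described above.
# `grad` maps each pixel to its gradient magnitude; names are illustrative.

def link_costs(grad):
    """Highest gradient magnitude -> lowest cost, scaled by the global max."""
    max_g = max(grad.values()) or 1.0
    return {p: 1.0 - g / max_g for p, g in grad.items()}

def dijkstra(cost, neighbors, seed):
    """Lowest-cost-path distances from `seed` to every reachable pixel."""
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, p = heapq.heappop(heap)
        if d > dist[p]:
            continue  # stale heap entry
        for q in neighbors(p):
            nd = d + cost[q]
            if nd < dist.get(q, float("inf")):
                dist[q] = nd
                heapq.heappush(heap, (nd, q))
    return dist
```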
For complex and/or very large images, manual seed placement can be time-consuming and slightly inaccurate. Path cooling is a technique that uses time (or a heuristic such as how many times a path has been drawn) to generate seeds automatically. My implementation uses a timer-based path cooling algorithm to decide whether the path is stable enough. The default interval is one second, but it can be changed in the user's preferences. This technique has benefits and downfalls. Path cooling does speed up the process: if the path hasn't changed in a while it stays where it is, so the user doesn't have to keep placing points while searching for a good path. However, the cooling threshold must be tuned; if a good middle ground isn't found, the results can be inaccurate, or path cooling can make no difference, or it can even take longer than if path cooling weren't implemented.
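The timer logic can be sketched roughly as below. The class and attribute names are assumptions, not the project's actual code; the injectable `clock` parameter just makes the sketch testable.

```python
import time

# Illustrative sketch of timer-based path cooling: if the livewire path has not
# changed for `cool_after` seconds, it is considered cooled and can be frozen
# into an automatic seed. Names here are assumptions, not the project's code.

class PathCooler:
    def __init__(self, cool_after=1.0, clock=time.monotonic):
        self.cool_after = cool_after  # user-adjustable; default one second
        self.clock = clock
        self.last_path = None
        self.last_change = clock()

    def update(self, path):
        """Record the current livewire path; return True once it has cooled."""
        now = self.clock()
        if path != self.last_path:
            self.last_path = path
            self.last_change = now  # any movement resets the timer
            return False
        return now - self.last_change >= self.cool_after
```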
For example: in the pictures below, finding the boundary of something as complex as this dragon may require more seed placements than automatic generation would. Figure 2(a) is the contour of the dragon without path cooling; it took the most seed placements of the 3 pictures. Figure 2(b) uses path cooling with a 1-second timer. It needed 66% fewer seed placements, yet it took longer than without path cooling. Why? To prevent the path from cooling prematurely while searching for the lowest-cost path, the livewire must keep moving in order to reset the timer. This was usually necessary along the short segments of the picture, and when forcing soft edges, such as on the dragon's shoulder or ear, to become the edge contour rather than the hard edge found in the inner picture. The path cools once the livewire hasn't moved for one second. In Figure 2(c), even fewer seed placements were needed and the time was faster. However, notice the contour of the dragon: it is not as precise as in Figures 2(a)-(b). The cooling interval was too short, so the livewire froze in place when it wasn't supposed to. A big source of slower times and inaccuracy is regions with few edges, such as the contour of the dragon's wing or tail; in cases like these, more seed placement is needed because the edges are shorter.
The livewire program originally provided by the website only created multiple cutouts, storing the masks (the cutout as a whole, but whited out) and the contour (a saved contour line of the image, in case someone comes back to make a cutout again). My additions to this livewire project support 2 types of cuts. The first is simply a cutout of the image, but colored rather than white. The second cuts out what is not wanted in the picture and keeps the remainder of the picture intact. In addition, my program can save the cutouts not only as ".tga" files but also as ".bmp" files, whose use will be discussed later.
After completing the livewire interface, I merged the project into the first project of the class, Impressionist. Within Impressionist, I was able to load the cutouts from the livewire and manipulate the images in different ways.
The drop-down menu was extended to load precut images (or any images in particular) that could be used as cutouts in the picture. After the livewire saves the current cutouts, users can access the cutouts made in the livewire interface and use them on the canvas. Each cutout is its own class, comprised of the original cutout and its dimensions, a display buffer that is written to the canvas, and a resizable buffer used whenever the canvas window is resized and the image needs to be adjusted accordingly. When dealing with the cutouts, I wanted to preserve the quality of the picture whenever a cutout had to be resized. Without a temporary resize buffer holding the newly resized picture, 2 problems would occur. First, depending on the size of the image, constantly recopying and resizing would be incredibly slow. I resize each picture once per newly loaded image so it can be adjusted to the canvas' width and height; whenever a user changes the origView picture, I update my list of cutouts to ensure they are resized once per loaded picture. Second, the original dimensions and quality of the cutout would be lost: increasing the dimensions of the cutout would make the image look pixelated rather than scaled. The cutouts can be shifted, scaled, colored, and filtered with kernel convolutions as well as stone texture maps.
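The buffering scheme described above can be sketched as follows. This is a sketch under stated assumptions: pixels are a 2D list, resampling is nearest-neighbor, and the class and attribute names are invented for illustration. The key design choice it shows is always resampling from the untouched original, so repeated resizes neither accumulate quality loss nor re-run on every redraw.

```python
# Sketch of the cutout caching idea: always resample from the original pixels
# (never from a previously scaled copy), and reuse the cached display buffer
# until the target size actually changes. Names here are illustrative.

class Cutout:
    def __init__(self, pixels, width, height):
        self.orig = pixels          # original pixels, never modified
        self.ow, self.oh = width, height
        self.display = pixels       # buffer written to the canvas
        self.cached_size = (width, height)

    def resize(self, w, h):
        """Refresh the display buffer only when the canvas size changed."""
        if (w, h) == self.cached_size:
            return self.display     # reuse the cached buffer
        # Nearest-neighbor resample straight from the original pixels.
        scaled = [[self.orig[y * self.oh // h][x * self.ow // w]
                   for x in range(w)] for y in range(h)]
        self.display = scaled
        self.cached_size = (w, h)
        return scaled
```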