Then, those paths are augmented with the least cost paths within the stroke from all vertices in C1 to all vertices in C2, the second cap of the over-draw stroke. Finally, those paths are augmented with the least cost paths from all vertices in C2 to V2, the point at which the new cut contour connects back to the original proposed cut (Figure 5d). The path found with overall least cost is spliced into the proposed cut (Figure 5e).

This incremental refinement approach has several desirable properties. First, it provides local control, guaranteeing that previously drawn strokes will not be overridden by new strokes unless they are in close proximity. Second, it is fast to compute, since most of the least cost path searches are constrained to lie within the stroke. Finally, it allows the user to specify precisely and naturally where the splice should be made, simply by starting and stopping the over-draw stroke with the cursor near the proposed contour.

5 Search for Similar Parts

The second issue we address is searching a large database of 3D models. The challenge is to develop effective mechanisms whereby users can enter a query specifying the objects they want to retrieve from the database, and the system quickly returns the best matches and suggests them as candidates for future editing operations.

In general, we would like to support two types of queries. Initially, we expect the user to search for whole objects that represent the general shape they would like to construct – requiring an interface supporting whole-object matching. As the user progresses through her editing operations, we expect that she will want to replace individual parts of the model with parts from other models – requiring an interface supporting partial object matching. In either case, it is important that the interface be easy to use and capable of finding similar parts efficiently.

Perhaps the simplest and most common query interface is textual keywords.
Most users are familiar with this type of query, and it is well suited for finding 3D models based on their semantics if models are well-annotated. However, recent research [Min 2004] has indicated that many models are poorly annotated. This problem is further exacerbated if we are interested in finding parts of models, which tend to have even less annotation. For this reason, our system augments text search with shape-based methods.

Traditional methods for matching shapes [Besl and Jain 1985; Loncaric 1998; Tangelder and Veltkamp 2004] have focused on whole-object matching, providing methods for finding models whose overall shape is similar to a query. We provide this feature in our system, but would also like to support “part-in-whole” shape searches. This type of query matches whole objects, but with special emphasis on a selected part. It is very useful for finding specific parts within a class of objects. For instance, consider a situation in which a user has a chair and would like to replace some of the parts (Figure 6). If she performs a shape similarity query with only the legs of the chair selected, then the system should retrieve chairs with similarly shaped legs, independent of the presence or shapes of their arms. On the other hand, if she selects only the arms of the chair and performs a shape similarity query, the system should find chairs with arms. Figure 6 shows the results achieved by our system in this case.

Traditional shape matching methods that represent each 3D model by a multi-dimensional feature vector (a shape descriptor) and define the dissimilarity between a pair of models as the distance (L1, L2, L∞, etc.) between them are not well-suited for this type of query. They consider two models the same if and only if their descriptors are equal. In this work, we would like to represent a single model by many different descriptors, depending on the features selected by the user.
Since it is imperative that a model with

Figure 6: Results of shape similarity queries where the query provided to the system is (top) the chair with the legs selected, and (bottom) the chair with the arms selected.

features selected always match itself, we must use an alternative approach.

The notion of shape similarity that we use is based on the sum of squared distances for models aligned in the same coordinate system. Specifically, we define the distance between two models as the sum of the squares of the distances from every point on one surface to the closest point on the other, and vice-versa. This definition of shape similarity is intuitive, since it approximates the amount of work required to move points on one surface to the other (as in [Rubner et al. 2000; Tangelder and Veltkamp 2003]), and it implies that two shapes should be subsets of one another to achieve a good match. It is also well suited for feature based matching, since we can associate a weight with each point and then scale the contribution of each summand accordingly. Then, selected parts (points with higher weight) can contribute more to the measure of shape similarity than others.

While a direct approach for computing the sum of squared distances would require a complex integration over the surfaces of the models, we present a new method for computing this distance that is easy to implement (Figure 7). For each model A in the database, we represent the model by two voxel grids, R_A and E_A. The first voxel grid, R_A, is the rasterization of the boundary, with value 1 at a voxel if the voxel intersects the boundary, and value 0 if it does not. The second voxel grid, E_A, is the squared Euclidean Distance Transform of the boundary, with the value at a voxel equal to the square of the distance to the nearest point on the boundary. To compare two models A and B, we simply set the distance between them to be:

d(A, B) = ⟨R_A, E_B⟩ + ⟨E_A, R_B⟩,

the dot product of the rasterization of the first model with the squared distance transform of the second, plus the dot product of the rasterization of the second model with the squared distance transform of the first.
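The two grids and the distance d(A, B) are straightforward to compute on rasterized surfaces. The following is a minimal sketch, not the paper's implementation, assuming NumPy and SciPy's distance_transform_edt; all function names are illustrative:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def descriptor(surface_voxels):
    """Build the two grids for one model: R, the binary rasterization
    of the boundary, and E, the squared Euclidean distance transform."""
    R = surface_voxels.astype(np.float64)
    # distance_transform_edt measures distance to the nearest zero, so
    # invert the mask to measure distance to the boundary itself.
    E = distance_transform_edt(1 - surface_voxels) ** 2
    return R, E

def distance(A, B):
    """d(A, B) = <R_A, E_B> + <E_A, R_B>, with <.,.> the voxel-wise
    dot product of the two grids."""
    R_A, E_A = descriptor(A)
    R_B, E_B = descriptor(B)
    return np.sum(R_A * E_B) + np.sum(E_A * R_B)

# Toy example: two planar "surfaces" two voxels apart in an 8^3 grid.
A = np.zeros((8, 8, 8), dtype=np.uint8)
B = np.zeros((8, 8, 8), dtype=np.uint8)
A[2, :, :] = 1          # plane x = 2
B[4, :, :] = 1          # plane x = 4
print(distance(A, A))   # 0.0: a model always matches itself exactly
print(distance(A, B))   # 512.0: 64 voxels per plane, squared gap 4, both terms
```

Note that d(A, A) = 0 for any model, which is exactly the self-match property that motivates this formulation.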
The dot product ⟨R_A, E_B⟩ is equal to the integral, over the surface of A, of the squared distance transform of B. Thus, it is equal precisely to the minimum sum of squared distances that points on the surface of A need to be moved in order to lie on the surface of B.

For matching parts of models, the process is very similar. Given a triangulated model A with a subset of triangles S ⊂ A, selected by the user as features with some weight w, we can compute the feature-weighted descriptor of A, {R_{A,S}, E_A}, by setting R_{A,S} to be the rasterization of A with value w for points on S, a value of 1 for points on A that are not in S, and a value of 0 everywhere else. When we compute the dot product of the weighted rasterization with the distance transform of another model, ⟨R_{A,S}, E_B⟩, we get the weighted sum of squared distances, with the contribution of squared distances from points on S scaled by the weighting factor w. This approach allows us to compute just a single shape descriptor for each model in the database, while using it for matching queries with arbitrary feature weighting.
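The feature-weighted query reduces to the same dot product with a reweighted rasterization. A toy sketch under the same NumPy/SciPy assumptions as above (names are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighted_query(A_voxels, S_mask, w, B_voxels):
    """One-sided feature-weighted match <R_{A,S}, E_B>: squared distances
    from points on the selected subset S count w times, other surface
    points count once, empty voxels not at all."""
    R_AS = A_voxels.astype(np.float64)     # 1 on the surface, 0 elsewhere
    R_AS[S_mask.astype(bool)] = w          # emphasize the user's selection
    E_B = distance_transform_edt(1 - B_voxels) ** 2
    return np.sum(R_AS * E_B)

# Toy query: same planar surfaces, with half of A's plane selected.
A = np.zeros((8, 8, 8)); A[2] = 1                          # plane x = 2
B = np.zeros((8, 8, 8)); B[4] = 1                          # plane x = 4
S = np.zeros((8, 8, 8), dtype=bool); S[2, :4, :] = True    # half of A's plane
print(weighted_query(A, S, 1.0, B))   # 256.0: unweighted baseline
print(weighted_query(A, S, 3.0, B))   # 512.0: selected half counts 3x
```

Because the weighting is applied only to the query's rasterization at match time, the database side still needs just the single descriptor pair per model, as described above.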
