Type | Book |
---|---|
Author | Mark de Berg |
Author | Otfried Cheong |
Author | Marc van Kreveld |
Author | Mark Overmars |
URL | https://www.springer.com/us/book/9783540779735 |
Edition | 3 |
Place | Berlin, Heidelberg |
Publisher | Springer-Verlag |
ISBN | 978-3-540-77973-5 |
Date | 2008 |
Extra | 1 |
Accessed | 6/3/2019, 10:00:04 AM |
Library Catalog | www.springer.com |
Language | en |
Abstract | Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains—computer graphics, geographic information systems (GIS), robotics, and others—in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can also be used for self-study. |
Short Title | Computational Geometry |
Date Added | 6/3/2019, 10:00:04 AM |
Modified | 6/5/2019, 11:43:24 AM |
Type | Book |
---|---|
Author | Franco P. Preparata |
Author | Michael Shamos |
URL | https://www.springer.com/us/book/9780387961316 |
Series | Monographs in Computer Science |
Place | New York |
Publisher | Springer-Verlag |
ISBN | 978-0-387-96131-6 |
Date | 1985 |
Extra | 2 |
Accessed | 6/3/2019, 9:25:10 AM |
Library Catalog | www.springer.com |
Language | en |
Abstract | From the reviews: "This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry. ... The book is well organized and lucidly written; a timely contribution by two founders of the field. It clearly demonstrates that computational geometry in the plane is now a fairly well-understood branch of computer science and mathematics. It also points the way to the solution of the more challenging problems in dimensions higher than two." (Mathematical Reviews) "... This remarkable book is a comprehensive and systematic study on research results obtained especially in the last ten years. The very clear presentation concentrates on basic ideas, fundamental combinatorial structures, and crucial algorithmic techniques. The plenty of results is clever organized following these guidelines and within the framework of some detailed case studies. A large number of figures and examples also aid the understanding of the material. Therefore, it can be highly recommended as an early graduate text but it should prove also to be essential to researchers and professionals in applied fields of computer-aided design, computer graphics, and robotics." (Biometrical Journal) |
Short Title | Computational Geometry |
Date Added | 6/3/2019, 9:25:10 AM |
Modified | 6/5/2019, 11:43:32 AM |
Type | Journal Article |
---|---|
Author | Gill Barequet |
URL | http://www.worldscientific.com/doi/abs/10.1142/S0218195998000308 |
Volume | 08 |
Issue | 05n06 |
Pages | 619-636 |
Publication | International Journal of Computational Geometry & Applications |
ISSN | 0218-1959, 1793-6357 |
Date | 10/1998 |
Extra | 3 |
DOI | 10.1142/S0218195998000308 |
Accessed | 6/3/2019, 9:13:05 AM |
Library Catalog | Crossref |
Language | en |
Abstract | In this paper we describe the DCEL system: a geometric software package which implements a polyhedral programming environment. This package enables fast prototyping of geometric algorithms for polyhedra or for polyhedral surfaces. We provide an overview of the system's functionality and demonstrate its use in several applications. |
Short Title | DCEL |
Date Added | 6/3/2019, 9:13:05 AM |
Modified | 6/5/2019, 11:43:43 AM |
Type | Journal Article |
---|---|
Author | Tetsuo Asano |
Author | Günter Rote |
Volume | 2 |
Pages | 46-68 |
Publication | JoCG |
Date | 2009 |
Extra | 4 |
DOI | 10.20382/jocg.v2i1a4 |
Library Catalog | Semantic Scholar |
Abstract | Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete [18], this constrained problem can be solved in quadratic time using only constant work-space. |
Date Added | 6/3/2019, 9:57:54 AM |
Modified | 6/5/2019, 11:49:01 AM |
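The constant-work-space model this abstract describes (compute features on the fly from a read-only input, paying linear time per step instead of storing a traversal structure) can be illustrated with gift wrapping, a classic algorithm that is not one of the paper's constructions but has exactly this flavor: it emits convex-hull edges one at a time, scanning the read-only point array in each step and keeping only O(1) loop variables.

```python
# Constant-work-space sketch (illustrative, not from the paper):
# gift wrapping emits convex-hull edges one per step, each step a
# linear scan of the read-only input, with O(1) extra storage.

def cross(o, a, b):
    """Orientation of the turn o -> a -> b (positive = counter-clockwise)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_edges(pts):
    """Yield convex-hull edges as index pairs (i, j); assumes distinct
    points in general position. Only loop variables are stored."""
    n = len(pts)
    start = min(range(n), key=lambda i: pts[i])  # leftmost point is on the hull
    i = start
    while True:
        j = (i + 1) % n
        for k in range(n):
            # keep the most clockwise candidate: k lies right of i -> j
            if cross(pts[i], pts[j], pts[k]) < 0:
                j = k
        yield (i, j)
        i = j
        if i == start:
            break

square = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]
print(list(hull_edges(square)))  # [(0, 1), (1, 3), (3, 4), (4, 0)]
```

As in the paper's schemes, the whole structure (here, the hull) is obtained by enumerating features one at a time, trading a linear-time scan per step for the space a precomputed data structure would cost.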
Type | Book Section |
---|---|
Series Editor | Gerhard Goos |
Series Editor | Juris Hartmanis |
Series Editor | Jan van Leeuwen |
Editor | Ralf Hartmut Güting |
Editor | Dimitris Papadias |
Editor | Fred Lochovsky |
Author | Ludger Becker |
Author | André Giesen |
Author | Klaus H. Hinrichs |
Author | Jan Vahrenhold |
URL | http://link.springer.com/10.1007/3-540-48482-5_17 |
Volume | 1651 |
Place | Berlin, Heidelberg |
Publisher | Springer Berlin Heidelberg |
Pages | 270-285 |
ISBN | 978-3-540-66247-1, 978-3-540-48482-0 |
Date | 1999 |
Extra | 5 |
Accessed | 6/3/2019, 10:27:40 AM |
Library Catalog | Crossref |
Language | en |
Abstract | We consider the problem of performing polygonal map overlay and the refinement step of spatial overlay joins. We show how to adapt algorithms from computational geometry to solve these problems for massive data sets. A performance study with artificial and real-world data sets helps to identify the algorithm that should be used for given input data. |
Book Title | Advances in Spatial Databases |
Date Added | 6/3/2019, 10:27:40 AM |
Modified | 6/5/2019, 11:44:22 AM |
Type | Conference Paper |
---|---|
Author | J. S. Challa |
Author | P. Goyal |
Author | S. Nikhil |
Author | A. Mangla |
Author | S. S. Balasubramaniam |
Author | N. Goyal |
Pages | 27-36 |
Date | December 2016 |
Extra | 6 |
DOI | 10.1109/BigData.2016.7840586 |
Library Catalog | IEEE Xplore |
Conference Name | 2016 IEEE International Conference on Big Data (Big Data) |
Abstract | Parallelizing data mining algorithms has become a necessity as we try to mine ever increasing volumes of data. Spatial data mining algorithms like Dbscan, Optics, Slink, etc. have been parallelized to exploit a cluster infrastructure. The efficiency achieved by existing algorithms can be attributed to spatial locality preservation using spatial indexing structures like k-d-tree, quad-tree, grid files, etc. for distributing data among cluster nodes. However, these indexing structures are static in nature, i.e., they need to scan the entire dataset to determine the partitioning coordinates. This results in high data distribution cost when the data size is large. In this paper, we propose a dynamic distributed data structure, DD-Rtree, which preserves spatial locality while distributing data across compute nodes in a shared nothing environment. Moreover, DD-Rtree is dynamic, i.e., it can be constructed incrementally, making it useful for handling big data. We compare the quality of data distribution achieved by DD-Rtree with a recent distributed indexing structure, SD-Rtree. We also compare the efficiency of queries supported by these indexing structures along with the overall efficiency of the DBSCAN algorithm. Our experimental results show that DD-Rtree achieves better data distribution, thereby improving overall efficiency. |
Proceedings Title | 2016 IEEE International Conference on Big Data (Big Data) |
Short Title | DD-Rtree |
Date Added | 6/5/2019, 11:27:41 AM |
Modified | 6/5/2019, 11:49:12 AM |
Type | Conference Paper |
---|---|
Author | W. Randolph Franklin |
Author | Salles Viana Gomes de Magalhães |
Author | Marcus Vinícius Alvim Andrade |
URL | http://dl.acm.org/citation.cfm?doid=3282834.3282839 |
Publisher | ACM Press |
Pages | 16-19 |
ISBN | 978-1-4503-6041-8 |
Date | 2018 |
Extra | 7 |
DOI | 10.1145/3282834.3282839 |
Accessed | 6/5/2019, 11:16:35 AM |
Library Catalog | Crossref |
Language | en |
Abstract | This paper describes data structures and algorithms for efficient implementation of GIS operations for large datasets on multicore Intel CPUs and on NVIDIA GPUs. Typical operations are boolean combinations of polygons and map overlay. Efficient parallelization prefers simple regular data structures, such as structures of arrays of plain old datatypes. Warps of 32 threads are required to execute the same instruction (or be idle). Ideally, the data used by adjacent threads is adjacent in memory. Minimizing storage is important, as is accessing it in a regular pattern. That disparages pointers, linked lists, and trees. That implies that explicitly representing global topology is bad. If using only local topological formulae is sufficient, then it will be much faster. E.g., for many operations on a 2-D map (aka planar graph), the set of oriented edges suffices. Each edge knows the locations of its endpoints and the ids of its adjacent polygons. Any mass operation, such as area computation or point location, can be implemented as a map-reduce. All these techniques also apply in 3D to CAD/CAM and additive manufacturing. Indeed they are more important there. |
Date Added | 6/5/2019, 11:16:35 AM |
Modified | 6/5/2019, 11:45:50 AM |
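The edge-local representation this abstract sketches, where each oriented edge knows its endpoints and adjacent polygon ids and mass operations become map-reduces, might look like this in miniature (the tuple layout and function names here are assumptions for illustration, not the paper's actual structures):

```python
# Illustrative sketch of an edge-local map-reduce (field layout is an
# assumption, not the paper's API): each oriented edge carries its
# endpoints and the id of the polygon to its left, and polygon areas
# fall out of summing independent per-edge shoelace terms.
from collections import defaultdict

# (x1, y1, x2, y2, left_polygon_id): a unit square (id 0) and a
# 2x1 rectangle (id 1), edges oriented counter-clockwise
edges = [
    (0, 0, 1, 0, 0), (1, 0, 1, 1, 0), (1, 1, 0, 1, 0), (0, 1, 0, 0, 0),
    (0, 0, 2, 0, 1), (2, 0, 2, 1, 1), (2, 1, 0, 1, 1), (0, 1, 0, 0, 1),
]

def edge_area_term(e):
    """Map step: one edge's shoelace contribution, computed purely locally."""
    x1, y1, x2, y2, pid = e
    return pid, (x1 * y2 - x2 * y1) / 2.0

def polygon_areas(edges):
    """Reduce step: sum the per-edge terms, keyed by polygon id."""
    areas = defaultdict(float)
    for pid, term in map(edge_area_term, edges):
        areas[pid] += term
    return dict(areas)

print(polygon_areas(edges))  # {0: 1.0, 1: 2.0}
```

Because every edge's term is independent of every other edge, the map step parallelizes trivially across threads with no pointers or global topology, which is the point the abstract makes about GPU-friendly layouts.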
Type | Conference Paper |
---|---|
Author | Salles V. G. Magalhães |
Author | Marcus V. A. Andrade |
Author | W. Randolph Franklin |
Author | Wenli Li |
URL | http://dl.acm.org/citation.cfm?doid=2835185.2835188 |
Publisher | ACM Press |
Pages | 45-54 |
ISBN | 978-1-4503-3974-2 |
Date | 2015 |
Extra | 8 |
DOI | 10.1145/2835185.2835188 |
Accessed | 6/3/2019, 10:22:44 AM |
Library Catalog | Crossref |
Language | en |
Abstract | We present EPUG-Overlay (Exact Parallel Uniform Grid Overlay), an algorithm to overlay two maps that is fast and parallel, has no roundoff errors, and is freely available. EPUG-Overlay combines several novel aspects. It represents coordinates with rational numbers, thereby ensuring exact computations with no roundoff errors and the ensuing sliver problems and topological impossibilities. For efficiency, EPUG-Overlay performs the map overlay in parallel, thereby utilizing the ubiquitous multicore architecture. Our application goes beyond merely using existing packages, which are inefficient when used in parallel on large problems. Indeed, overlaying two maps with 53,000,000 edges and 730,000 faces took only 322 elapsed seconds (plus 116 seconds for I/O) on a dual 8-core 3.1 GHz Intel Xeon E5-2687 workstation. In contrast, GRASS, executing sequentially and generating roundoff errors, takes 5300 seconds. |
Date Added | 6/3/2019, 10:22:44 AM |
Modified | 6/5/2019, 11:46:21 AM |
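The exact-arithmetic idea behind EPUG-Overlay, rational coordinates so that intersections carry no roundoff and no slivers, can be sketched with Python's stdlib `Fraction`. This is a toy illustration of the principle, not EPUG-Overlay's implementation:

```python
# Minimal illustration of exact rational arithmetic (not EPUG-Overlay's
# code): with Fraction coordinates, a line-line intersection point is
# computed with zero roundoff, so downstream topology stays consistent.
from fractions import Fraction as F

def line_intersection(p1, p2, p3, p4):
    """Exact intersection of lines p1p2 and p3p4, or None if parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None  # parallel or collinear
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

p = line_intersection((F(0), F(0)), (F(1), F(1)),
                      (F(0), F(1)), (F(1), F(0)))
print(p)  # (Fraction(1, 2), Fraction(1, 2))
```

Every arithmetic operation on `Fraction` values is exact, so predicates like "does this point lie on that segment" never disagree with the constructed coordinates, which is what rules out the topological impossibilities the abstract mentions.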
Type | Conference Paper |
---|---|
Author | Satish Puri |
Author | Dinesh Agarwal |
Author | Xi He |
Author | Sushil K. Prasad |
URL | http://dx.doi.org/10.1109/IPDPSW.2013.254 |
Series | IPDPSW '13 |
Place | Washington, DC, USA |
Publisher | IEEE Computer Society |
Pages | 1009–1016 |
ISBN | 978-0-7695-4979-8 |
Date | 2013 |
Extra | 9 |
DOI | 10.1109/IPDPSW.2013.254 |
Accessed | 6/4/2019, 11:18:12 AM |
Library Catalog | ACM Digital Library |
Proceedings Title | Proceedings of the 2013 IEEE 27th International Symposium on Parallel and Distributed Processing Workshops and PhD Forum |
Date Added | 6/4/2019, 11:18:12 AM |
Modified | 6/5/2019, 11:49:17 AM |
Type | Conference Paper |
---|---|
Author | Ibrahim Sabek |
Author | Mohamed F. Mokbel |
URL | http://dl.acm.org/citation.cfm?doid=3139958.3139967 |
Publisher | ACM Press |
Pages | 1-10 |
ISBN | 978-1-4503-5490-5 |
Date | 2017 |
Extra | 10 |
DOI | 10.1145/3139958.3139967 |
Accessed | 6/5/2019, 8:54:18 AM |
Library Catalog | Crossref |
Language | en |
Abstract | This paper provides the first attempt at a full-fledged query optimizer for MapReduce-based spatial join algorithms. The optimizer develops its own taxonomy that covers almost all possible ways of doing a spatial join for any two input datasets. The optimizer comes in two flavors: cost-based and rule-based. Given two input data sets, the cost-based query optimizer evaluates the costs of all possible options in the developed taxonomy, and selects the one with the lowest cost. The rule-based query optimizer abstracts the developed cost models of the cost-based optimizer into a set of simple easy-to-check heuristic rules. Then, it applies its rules to select the lowest cost option. Both query optimizers are deployed and experimentally evaluated inside a widely used open-source MapReduce-based big spatial data system. Exhaustive experiments show that both query optimizers are always successful in taking the right decision for spatially joining any two datasets of up to 500GB each. |
Date Added | 6/5/2019, 8:54:18 AM |
Modified | 6/5/2019, 11:46:41 AM |
Type | Journal Article |
---|---|
Author | Chen Zhou |
Author | Zhenjie Chen |
Author | Manchun Li |
URL | https://www.tandfonline.com/doi/full/10.1080/13658816.2018.1508689 |
Volume | 32 |
Issue | 12 |
Pages | 2402-2426 |
Publication | International Journal of Geographical Information Science |
ISSN | 1365-8816, 1362-3087 |
Date | 2018-12-02 |
Extra | 11 |
DOI | 10.1080/13658816.2018.1508689 |
Accessed | 6/3/2019, 10:08:44 AM |
Library Catalog | Crossref |
Language | en |
Abstract | Polygon intersection is an important spatial data-handling process, on which many spatial operations are based. However, this process is computationally intensive because it involves the detection and calculation of polygon intersections. We addressed this computation issue based on two perspectives. First, we improved a method called boundary algebra filling to efficiently rasterize the input polygons. Polygon intersections were subsequently detected in the cells of the raster. Owing to the use of a raster data structure, this method offers advantages of reduced task dependence and improved performance. Based on this method, we developed parallel strategies for different procedures in terms of workload decomposition and task scheduling. Thus, the workload across different parallel processes can be balanced. The results suggest that our method can effectively accelerate the process of polygon intersection. When addressing datasets with 1,409,020 groups of overlapping polygons, our method could reduce the total execution time from 987.82 to 53.66 s, thereby obtaining an optimal speedup ratio of 18.41 while consistently balancing the workloads. We also tested the effect of task scheduling on the parallel efficiency, showing that reducing the total runtime is effective, especially for a lower number of processes. Finally, the good scalability of the method is demonstrated. |
Date Added | 6/3/2019, 10:08:44 AM |
Modified | 6/5/2019, 11:46:53 AM |
Type | Conference Paper |
---|---|
Author | Satish Puri |
Author | Sushil K. Prasad |
URL | http://dx.doi.org/10.1109/IPDPSW.2013.174 |
Series | IPDPSW '13 |
Place | Washington, DC, USA |
Publisher | IEEE Computer Society |
Pages | 2238–2241 |
ISBN | 978-0-7695-4979-8 |
Date | 2013 |
Extra | 12 |
DOI | 10.1109/IPDPSW.2013.174 |
Accessed | 6/5/2019, 9:26:06 AM |
Library Catalog | ACM Digital Library |
Abstract | Polygon overlay is one of the complex operations in Geographic Information Systems (GIS). In GIS, a typical polygon tends to be large in size, often consisting of thousands of vertices. Sequential algorithms for this problem are abundant in the literature, and most of the parallel algorithms concentrate on parallelizing the edge intersection phase only. Our research aims to develop parallel algorithms to find the overlay of two input polygons, which can be extended to handle multiple polygons, and implement it on General Purpose Graphics Processing Units (GPGPU), which offer massive parallelism at relatively low cost. Moreover, spatial data files tend to be large in size (in GBs), and the underlying overlay computation is highly irregular and compute intensive. The MapReduce paradigm is now standard in industry and academia for processing large-scale data. Motivated by the MapReduce programming model, we propose to develop and implement scalable distributed algorithms to solve large-scale overlay processing in this dissertation. |
Proceedings Title | Proceedings of the 2013 IEEE 27th International Symposium on Parallel and Distributed Processing Workshops and PhD Forum |
Date Added | 6/5/2019, 9:26:06 AM |
Modified | 6/5/2019, 11:49:22 AM |
Type | Journal Article |
---|---|
Author | Joseph E Gonzalez |
Author | Reynold S Xin |
Author | Ankur Dave |
Author | Daniel Crankshaw |
Author | Michael J Franklin |
Author | Ion Stoica |
Pages | 16 |
Extra | 13 |
Library Catalog | Zotero |
Language | en |
Abstract | In pursuit of graph processing performance, the systems community has largely abandoned general-purpose distributed dataflow frameworks in favor of specialized graph processing systems that provide tailored programming abstractions and accelerate the execution of iterative graph algorithms. In this paper we argue that many of the advantages of specialized graph processing systems can be recovered in a modern general-purpose distributed dataflow system. We introduce GraphX, an embedded graph processing framework built on top of Apache Spark, a widely used distributed dataflow system. GraphX presents a familiar composable graph abstraction that is sufficient to express existing graph APIs, yet can be implemented using only a few basic dataflow operators (e.g., join, map, group-by). To achieve performance parity with specialized graph systems, GraphX recasts graph-specific optimizations as distributed join optimizations and materialized view maintenance. By leveraging advances in distributed dataflow frameworks, GraphX brings low-cost fault tolerance to graph processing. We evaluate GraphX on real workloads and demonstrate that GraphX achieves an order of magnitude performance gain over the base dataflow framework and matches the performance of specialized graph processing systems while enabling a wider range of computation. |
Date Added | 6/3/2019, 9:31:02 AM |
Modified | 6/5/2019, 11:46:09 AM |