Considers sequence alignment and RNA structure problems in which the solution is constructed by piecing together an initial set of fragments (e.g., short substrings of the two inputs that match exactly). The method considers the planar point set formed by the fragment positions in the two input sequences, and uses a plane sweep to construct a cellular decomposition of the plane similar to the rectilinear Voronoi diagram.
(BibTeX -- Citations to conference version -- Citations to SDP I -- Citations to SDP II)
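To make the plane-sweep flavor of this sparse dynamic programming concrete, here is a minimal Python sketch under a simplified objective: chaining the fragment start positions into the longest chain increasing in both sequence coordinates. This is not the paper's method (the Voronoi-like decomposition there supports more general gap costs); the sweep-plus-sorted-structure pattern is the shared idea.

    from bisect import bisect_left

    def longest_fragment_chain(fragments):
        """Length of the longest chain of fragment positions (i, j) that
        is strictly increasing in both sequence coordinates: sweep in
        order of i (descending j breaks ties, so equal i's cannot chain)
        and run patience sorting on the j values."""
        pts = sorted(fragments, key=lambda p: (p[0], -p[1]))
        tails = []       # tails[k] = least j ending a chain of k+1 fragments
        for _, j in pts:
            k = bisect_left(tails, j)
            if k == len(tails):
                tails.append(j)
            else:
                tails[k] = j
        return len(tails)

    # (pos1, pos2) start positions of exactly matching fragments
    print(longest_fragment_chain([(1, 5), (2, 2), (3, 3), (4, 4), (6, 7)]))  # 4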
In this paper, we construct triangulations of point sets and polygons by using quadtrees to add extra vertices to the input. As a result we can guarantee that all triangles have angles bounded away from zero, using a number of triangles within a constant factor of optimal. This was the first paper to provide simultaneous bounds on mesh element quality and mesh complexity of this form, and therefore the first to provide finite element mesh generation algorithms that guarantee both the robustness of the algorithm against unexpected input geometries and the quality of its output.
In the same paper we also use quadtrees to triangulate planar point sets so that all angles are non-obtuse, using linearly many triangles, and to triangulate higher-dimensional point sets with no small solid angles and a number of simplices within a constant factor of optimal. Also, we can augment any higher-dimensional point set so that its Delaunay triangulation has linear complexity.
In follow-up work, I showed that the same technique can also be used to find a triangulation whose total edge length is within a constant factor of optimal. Bern, Mitchell, and Ruppert showed that alternative methods can be used to triangulate any polygon without obtuse angles; see "Faster circle packing with application to nonobtuse triangulation" for an algorithmic improvement to their technique. With Bern, Chew, and Ruppert, we also showed that any higher-dimensional point set can be triangulated with nonobtuse simplices. Bern and I surveyed these and related results in our paper "Mesh generation and optimal triangulation".
(BibTeX -- Citations -- Preliminary copy of journal version -- MIT hypertext bibliography -- CiteSeer -- ACM DL (JCSS))
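A minimal sketch of the quadtree refinement these results start from, assuming input points in the unit square: cells are split until each leaf holds at most one point, then the standard 2:1 balance is enforced between edge-adjacent leaves. The paper's further steps (warping cell corners toward nearby input points and triangulating the leaf cells to obtain the angle guarantees) are omitted.

    class Cell:
        def __init__(self, x, y, size):
            self.x, self.y, self.size = x, y, size  # lower-left corner, side length
            self.children = None

        def contains(self, p):
            return (self.x <= p[0] < self.x + self.size
                    and self.y <= p[1] < self.y + self.size)

        def split(self):
            h = self.size / 2
            self.children = [Cell(self.x + dx, self.y + dy, h)
                             for dx in (0, h) for dy in (0, h)]

    def build(points, cell):
        """Refine until each leaf cell contains at most one point."""
        inside = [p for p in points if cell.contains(p)]
        if len(inside) <= 1:
            return
        cell.split()
        for child in cell.children:
            build(inside, child)

    def leaves(cell):
        if cell.children is None:
            yield cell
        else:
            for child in cell.children:
                yield from leaves(child)

    def adjacent(a, b):
        """Do two cells share a boundary segment (not just a corner)?"""
        ax2, ay2, bx2, by2 = a.x + a.size, a.y + a.size, b.x + b.size, b.y + b.size
        overlap_x = a.x < bx2 and b.x < ax2
        overlap_y = a.y < by2 and b.y < ay2
        return ((ax2 == b.x or bx2 == a.x) and overlap_y) or \
               ((ay2 == b.y or by2 == a.y) and overlap_x)

    def balance(root):
        """Split any leaf more than twice the size of an adjacent leaf."""
        changed = True
        while changed:
            changed = False
            cells = list(leaves(root))
            for a in cells:
                if any(a.size > 2 * b.size and adjacent(a, b) for b in cells):
                    a.split()
                    changed = True

    root = Cell(0.0, 0.0, 1.0)
    build([(0.1, 0.1), (0.12, 0.13), (0.8, 0.7)], root)
    balance(root)
    print(len(list(leaves(root))), "balanced leaf cells")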
Uses divide and conquer on the edge set of a graph, together with the idea of replacing subgraphs by sparser certificates, to make various dynamic graph algorithms as fast on dense graphs as they are on sparse graphs. Applications include the random generation of spanning trees and finding the k minimum weight spanning trees for a given parameter k.
(BibTeX -- Citations -- MIT hypertext bibliography -- ACM DL)
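A static sketch of the certificate idea, using the minimum spanning forest as the maintained property: the MSF of a graph is unchanged if any subset of its edges is replaced by the MSF of that subset alone, so divide and conquer on the edge list keeps every subproblem sparse. The paper's contribution is maintaining such certificates dynamically, which this sketch does not attempt.

    def msf(n, edges):
        """Kruskal's algorithm: minimum spanning forest as an edge list."""
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        forest = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                forest.append((w, u, v))
        return forest

    def sparsified_msf(n, edges):
        """Divide and conquer on the edge list, replacing each half by
        its sparse certificate (its own minimum spanning forest)."""
        if len(edges) <= 2 * n:        # already sparse: solve directly
            return msf(n, edges)
        mid = len(edges) // 2
        return msf(n, sparsified_msf(n, edges[:mid])
                      + sparsified_msf(n, edges[mid:]))

    edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
    print(sparsified_msf(4, edges))    # [(1, 1, 2), (2, 2, 3), (3, 0, 2)]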
This paper presents an algorithm that finds multiple short paths connecting two terminals in a graph (allowing repeated vertices and edges in the paths) in constant time per path after a preprocessing stage dominated by a single-source shortest path computation. The paths it finds are the k shortest in the graph, where k is a parameter given as input to the algorithm.
The k shortest paths problem has many important applications: finding alternative solutions to geographic path planning problems, network routing, hypothesis generation in computational linguistics, and sequence alignment and metabolic pathway finding in bioinformatics. Although many papers on the k shortest paths problem appeared both before and after this one, it is this paper that has become frequently cited in those application areas. It also marks a boundary in the theoretical study of the problem: prior theoretical work largely concerned how quickly the problem could be solved, a line of research that was closed off by the optimal time bounds of this paper. Subsequent work has focused instead on devising efficient algorithms for more complex alternative formulations of the problem that avoid the repeated vertices and other shortcomings of the alternative paths produced by this formulation.
(BibTeX -- Full paper -- Citations -- Graehl implementation -- Jiménez-Marzal implementations -- Shibuya implementation -- Martins implementation -- CiteSeer: TR '94, SJC '98 -- ACM DL)
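For contrast with the paper's constant-time-per-path algorithm, here is a much simpler best-first search that solves the same formulation (repeated vertices allowed), at a cost of roughly a heap operation per edge per path; the graph representation and names are illustrative only.

    import heapq
    from collections import defaultdict

    def k_shortest_walks(graph, s, t, k):
        """Yield up to k shortest s-t paths (repeated vertices allowed)
        in increasing length order, by best-first search on partial paths."""
        counts = defaultdict(int)          # expansions per vertex
        heap = [(0, s, [s])]               # (length, endpoint, path so far)
        while heap and counts[t] < k:
            dist, v, path = heapq.heappop(heap)
            counts[v] += 1
            if v == t:
                yield dist, path
                continue
            if counts[v] <= k:             # no vertex needs more than k expansions
                for w, weight in graph.get(v, []):
                    heapq.heappush(heap, (dist + weight, w, path + [w]))

    graph = {"a": [("b", 1), ("c", 2)], "b": [("c", 1), ("d", 3)],
             "c": [("d", 1)], "d": []}
    for length, path in k_shortest_walks(graph, "a", "d", 3):
        print(length, path)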
We consider the problem of "connect the dots": if we have an unknown smooth curve from which sample points have been selected, we would like to find a curve through the sample points that approximates the unknown curve. We show that if the local sample density is sufficiently high, a simple algorithm suffices: form the Delaunay triangulation of the sample points together with their Voronoi vertices, and keep only those Delaunay edges connecting original sample points. There have been many follow-up papers suggesting alternative methods, generalizing the problem to the reconstruction of curves with sharp corners or to curves and surfaces in higher dimensions, etc.
(BibTeX -- Citations -- CiteSeer -- ACM DL)
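The algorithm as stated transcribes almost directly into code; this sketch assumes SciPy's scipy.spatial routines for the Delaunay and Voronoi computations.

    import numpy as np
    from scipy.spatial import Delaunay, Voronoi

    def crust(samples):
        """Delaunay-triangulate the samples together with their Voronoi
        vertices; keep only edges whose endpoints are both samples."""
        samples = np.asarray(samples, dtype=float)
        n = len(samples)
        augmented = np.vstack([samples, Voronoi(samples).vertices])
        edges = set()
        for simplex in Delaunay(augmented).simplices:
            for i in range(3):
                a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
                if b < n:                  # both endpoints are original samples
                    edges.add((a, b))
        return edges

    # dense samples of a smooth closed curve (an ellipse) should
    # reconstruct to the cycle of edges between consecutive samples
    t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    print(len(crust(np.c_[2 * np.cos(t), np.sin(t)])))   # expect 40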

This paper introduces the diameter-treewidth property (later known as bounded local treewidth): a functional relationship between the diameter of a graph and its treewidth. Previously known results imply that planar graphs have bounded local treewidth; we characterize the other minor-closed families with this property. Specifically, a minor-closed family F has bounded local treewidth if and only if there exists an apex graph G that is not in F, where an apex graph is a graph that can be made planar by removing a single vertex. The minor-closed families that exclude an apex graph (and therefore have bounded local treewidth) include the bounded-genus graphs (for which, as with planar graphs, we show a linear bound for the treewidth as a function of the diameter) and K3,a-free graphs. As a consequence, subgraph isomorphism for subgraphs of bounded size and approximations to several NP-hard optimization problems can be computed efficiently on these graphs, extending previous results for planar graphs.
Some of these results were announced in the conference version of "Subgraph isomorphism in planar graphs and related problems" but not included in the journal version. Since its publication, there has been much more work on local treewidth. The class of problems that can be solved quickly on graphs of bounded local treewidth was extended and classified by Frick and Grohe, "Deciding first-order properties of locally tree-decomposable structures", J. ACM 48:1184-1206, 2001; the proof that bounded local treewidth is equivalent to having an excluded apex minor was simplified, and the dependence of the treewidth on diameter improved, by a subsequent paper of Demaine and Hajiaghayi, "Diameter and treewidth in minor-closed graph families, revisited", Algorithmica 40:211-215, 2004. The concept of local treewidth is the basis for the theory of bidimensionality, a general framework for fixed-parameter-tractable algorithms and approximation algorithms in minor-closed graph families; for a survey, see Demaine and Hajiaghayi, "The bidimensionality theory and its algorithmic applications", The Computer J. 51:292-302, 2008.
We describe a decomposition of graphs embedded on 2-dimensional manifolds into three subgraphs: a spanning tree, a dual spanning tree, and a set of leftover edges whose cardinality is determined by the genus of the manifold. This tree-cotree decomposition leads to efficient data structures for dynamic graphs (allowing updates that change the surface), better constants in bounded-genus graph separators, and efficient algorithms for tree-decomposition of bounded-genus bounded-diameter graphs.
(BibTeX -- SODA talk slides -- Citations)
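A minimal sketch of the partition itself, assuming the embedding is given explicitly as a list of faces, each face a cyclic list of edge ids (an edge appearing twice along one face boundary is listed twice). The dynamic data structures the paper builds on top of the decomposition are not attempted here.

    from collections import defaultdict

    def tree_cotree(num_vertices, edges, faces):
        def find(parent, v):               # union-find with path halving
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        vparent = list(range(num_vertices))
        tree = set()                       # primal spanning tree T
        for eid, (u, v) in enumerate(edges):
            ru, rv = find(vparent, u), find(vparent, v)
            if ru != rv:
                vparent[ru] = rv
                tree.add(eid)

        faces_of = defaultdict(list)       # edge id -> faces along it
        for f, boundary in enumerate(faces):
            for eid in boundary:
                faces_of[eid].append(f)

        fparent = list(range(len(faces)))
        cotree = set()                     # dual spanning tree, disjoint from T
        for eid in range(len(edges)):
            if eid not in tree:
                f1, f2 = faces_of[eid]
                r1, r2 = find(fparent, f1), find(fparent, f2)
                if r1 != r2:
                    fparent[r1] = r2
                    cotree.add(eid)

        leftover = set(range(len(edges))) - tree - cotree
        return tree, cotree, leftover      # len(leftover) == 2 * genus

    # standard one-vertex, two-loop, one-face embedding of the torus
    tree, cotree, x = tree_cotree(1, [(0, 0), (0, 0)], [[0, 1, 0, 1]])
    print(len(x))                          # 2 == twice the genus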
We consider a class of multivariate recurrences frequently arising in the worst case analysis of Davis-Putnam-style exponential time backtracking algorithms for NP-hard problems. We describe a technique for proving asymptotic upper bounds on these recurrences, by using a suitable weight function to reduce the problem to that of solving univariate linear recurrences; show how to use quasiconvex programming to determine the weight function yielding the smallest upper bound; and prove that the resulting upper bounds are within a polynomial factor of the true asymptotics of the recurrence. We develop and implement a multiple-gradient descent algorithm for the resulting quasiconvex programs, using a real-number arithmetic package for guaranteed accuracy of the computed worst case time bounds.
The journal version uses the longer title "Quasiconvex analysis of multivariate recurrence equations for backtracking algorithms".
(BibTeX -- SODA talk slides -- Citations)
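A small numeric sketch of the weight-function technique on a hypothetical two-variable recurrence T(x) <= sum_i T(x - delta_i), evaluated along the input direction x = (n, n). By scale invariance we may fix w . (1, 1) = 1, so each weight vector w yields a bound c(w)^n, and quasiconvexity of c(w) means even a crude grid scan finds the global optimum; the paper uses a guaranteed-accuracy descent method instead of the floating-point scan shown here.

    from math import inf

    def char_root(steps, lo=1.0, hi=16.0):
        """The unique c > 1 with sum(c**-s for s in steps) == 1; the sum
        is decreasing in c, so bisection converges."""
        for _ in range(200):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if sum(mid ** -s for s in steps) > 1 else (lo, mid)
        return hi

    def bound_base(deltas, w1):
        """Base c(w) of the bound c(w)**n after reducing the recurrence
        to a univariate one with steps s_i = w . delta_i."""
        w = (w1, 1.0 - w1)                 # normalization w . (1, 1) = 1
        steps = [w[0] * d0 + w[1] * d1 for d0, d1 in deltas]
        return char_root(steps) if min(steps) > 0 else inf

    deltas = [(1, 3), (2, 1)]              # hypothetical branching vector
    base, w1 = min((bound_base(deltas, i / 1000), i / 1000)
                   for i in range(1, 1000))
    print("T(n, n) = O(%.4f ** n) using weight w1 = %.3f" % (base, w1))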
Many combinatorial structures such as the set of acyclic orientations of a graph, weak orderings of a set of elements, or chambers of a hyperplane arrangement have the structure of a partial cube, a graph in which vertices may be labeled by bitvectors in such a way that graph distance equals Hamming distance. This book analyzes these structures in terms of operations that change one vertex to another by flipping a single bit of the bitvector labelings. It incorporates material from several of my papers including "Algorithms for Media", "Algorithms for Drawing Media", "Upright-Quad Drawing of st-Planar Learning Spaces", and "The Lattice Dimension of a Graph".
(Publisher's web site -- Reinhard Suck's review in J. Math. Psych. -- Reinhard Suck's review in MathSciNet)
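To make the defining property concrete, here is a small illustrative check (not code from the book) that a proposed bitvector labeling of a graph realizes graph distance as Hamming distance, applied to a 6-cycle.

    from collections import deque

    def respects_hamming(graph, labels):
        """Check that graph distance equals Hamming distance between the
        bitvector labels, for every pair of vertices (BFS per source)."""
        def bfs(s):
            dist = {s: 0}
            queue = deque([s])
            while queue:
                v = queue.popleft()
                for w in graph[v]:
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        queue.append(w)
            return dist
        for u in graph:
            dist = bfs(u)
            for v in graph:
                hamming = sum(a != b for a, b in zip(labels[u], labels[v]))
                if dist.get(v) != hamming:
                    return False
        return True

    # a 6-cycle is a partial cube: each edge flips exactly one bit
    cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    labels = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0),
              3: (1, 1, 1), 4: (0, 1, 1), 5: (0, 0, 1)}
    print(respects_hamming(cycle, labels))     # True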

We provide a graph-theoretic characterization of three classes of nonconvex polyhedra with axis-parallel sides, analogous to Steinitz's theorem characterizing the graphs of convex polyhedra.
(Slides)
