# 2015

## Seminar on Algorithms, Data Structures, and Applications, SOS

Talks are usually given on Mondays, 15:00 h, CAB H 52

If you wish to receive email announcements of upcoming events, please subscribe to our SOS mailing list.

### Talks given 2015

• 07.12.15, 14:00, CAB H 53, Przemysław Uznański: Prime Factorization of the Kirchhoff Polynomial: Compact Enumeration of Arborescences.
Abstract: We study the problem of enumerating all arborescences of a directed graph $G$. They can be expressed as the Kirchhoff polynomial, a polynomial over the edge labels of $G$ in which each monomial represents a single arborescence. We show how to compute, in linear time, a compact representation of the Kirchhoff polynomial -- its prime factorization -- and how this factorization relates to combinatorial properties of digraphs such as strong connectivity and vertex domination.
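The classical background behind the Kirchhoff polynomial can be made concrete: by Tutte's directed Matrix-Tree theorem, the number of spanning arborescences (the number of monomials when all edge labels are 1) is a determinant of a reduced Laplacian. A minimal sketch for tiny digraphs -- this illustrates only the classical counting theorem, not the factorization algorithm of the talk:

```python
from itertools import permutations

def det(M):
    """Leibniz-formula determinant; fine for tiny integer matrices."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        prod = 1
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += (-1) ** inv * prod
    return total

def count_arborescences(n, arcs, root):
    """Tutte's directed Matrix-Tree theorem: the number of spanning
    arborescences oriented away from `root` equals the determinant of the
    in-degree Laplacian with the root's row and column removed."""
    L = [[0] * n for _ in range(n)]
    for u, v in arcs:        # arc u -> v
        L[v][v] += 1         # in-degrees on the diagonal
        L[u][v] -= 1
    minor = [[L[i][j] for j in range(n) if j != root]
             for i in range(n) if i != root]
    return det(minor)

# Complete digraph on 3 vertices: 3 spanning arborescences per root.
arcs = [(u, v) for u in range(3) for v in range(3) if u != v]
print(count_arborescences(3, arcs, 0))  # -> 3
```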
• 24.11.15, 11:00 CAB H 53, Simon Rösch: Symmetric Difference as Optimization Goal for De Novo Peptide Sequencing.
Abstract: De novo peptide sequencing is an important task in data analysis for proteome research. A peptide is an amino acid sequence. In mass spectrometry, multiple copies of such a sequence are split at random positions, and the masses of the resulting fragments are measured. From these masses we want to reconstruct the amino acid sequence of the peptide. We call the set of measured masses X and define TS(S) as the set of masses we would expect from the spectrometric analysis of an amino acid sequence S. Previous work searched for the sequence S that maximizes the intersection X ∩ TS(S). We study a different optimization goal: minimizing the symmetric difference (X \ TS(S)) ∪ (TS(S) \ X), and we develop a new algorithm to solve this problem. In real experiments, different split positions and molecular losses lead to mass deviations, and we extend our algorithm to handle such deviations. Our algorithm can handle mass offsets twice as large as those handled by most other approaches. We compare our algorithm to that of Chen et al. (2001): in our benchmarks, our algorithm reconstructs up to 80% of the peptides, while Chen's algorithm reconstructs only 64%.
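The objective can be illustrated with a toy model of the theoretical spectrum. The `theoretical_spectrum` below is a simplified stand-in (prefix/suffix fragment masses only, rounded integer masses, a tiny residue alphabet), not the model used in the thesis:

```python
# Illustrative integer masses for a few amino acids (rounded; real masses
# are fractional and the residue alphabet is larger).
MASSES = {"G": 57, "A": 71, "S": 87, "P": 97, "V": 99}

def theoretical_spectrum(seq):
    """A simplified stand-in for TS(S): the masses of all proper prefix and
    suffix fragments of the sequence (real spectra contain more ion types)."""
    m = [MASSES[a] for a in seq]
    prefixes = {sum(m[:i]) for i in range(1, len(m))}
    suffixes = {sum(m[i:]) for i in range(1, len(m))}
    return prefixes | suffixes

def symmetric_difference_score(measured, seq):
    """|(X \\ TS(S)) ∪ (TS(S) \\ X)| -- smaller is better; 0 on a perfect match."""
    return len(measured ^ theoretical_spectrum(seq))

X = theoretical_spectrum("GASP")              # a noise-free measured spectrum
print(symmetric_difference_score(X, "GASP"))  # -> 0
print(symmetric_difference_score(X, "GSAP"))  # -> 4
```

A candidate sequence is scored against the measured masses; unlike pure intersection maximization, spurious predicted masses are penalized as well.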
• 24.11.15, 13:00, ML H 37.1, Tim Roughgarden: How Computer Science Informs Modern Auction Design.
Abstract: Economists have studied the theory and practice of auctions for decades.  How can computer science contribute?  Using the upcoming (March 29, 2016) U.S. FCC double-auction for wireless spectrum as a case study, I'll illustrate the many answers: novel auction formats, algorithms for NP-hard problems, approximation guarantees for simple auctions, and communication complexity-based impossibility results.
• 16.11.15, 17:00, CAB H 53, Sven Hammann: The Online Vertex Cover Problem with Advice.
Abstract: An online problem is a computational problem where the input consists of a sequence of requests that must be answered by an online algorithm in a sequential fashion. Each request must be answered while only knowing this request and the previous ones. Such problems usually cannot be solved optimally by an online algorithm that has no information about future requests. The performance of such an algorithm is usually measured by the competitive ratio, introduced by Sleator and Tarjan in 1985: the ratio between the cost of the solution given by the algorithm and the cost of an optimal solution. The advice complexity of an online algorithm is the amount of information the algorithm must be given in order to produce an optimal solution or to achieve a desired competitive ratio.
In this thesis, we study the online vertex cover problem. In the usual model for this problem, one vertex is revealed in every time step. The first vertex is revealed in isolation and with each further vertex all edges to previously revealed vertices are revealed. We prove a linear lower bound on the advice complexity of online vertex cover on caterpillars and show that a known lower bound for online vertex cover still holds if we restrict the input to bipartite graphs. In order to prove these bounds, we give a formal framework for reductions between online problems. This framework can be used as a general tool for proving lower bounds on the advice complexity of online algorithms. To this end, we further define two generic problems called weighted string guessing and subset guessing that are closely connected to the string guessing problem and the asymmetric string guessing problem.
Furthermore, the idea of parameterization in the field of online problems is explored. Parameterized complexity, introduced by Downey and Fellows in 1999, is an active area of research in complexity theory, and we provide a first step in adapting the concept to online problems. We give parameterized algorithms for online vertex cover on graphs that contain few vertices of high degree and on a special class of graphs that we call regular caterpillars. We also discuss problems that arise when trying to adapt some of the known parameterizations for the offline version of vertex cover to the online version, and we give suggestions on how to perform further research in the direction of online parameterizations.
We finally look at a different online graph model where in each time step an edge is revealed rather than a vertex. We show that in this setting there exists an algorithm without advice with strict competitive ratio 2, and that no algorithm without advice can achieve a better strict competitive ratio.
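The 2-competitive result for the edge-arrival model matches the classical maximal-matching argument; a minimal sketch of such an advice-free algorithm (assuming the standard greedy is the intended one):

```python
def online_vertex_cover(edge_stream):
    """Maximal-matching greedy for the edge-arrival model: whenever an
    arriving edge is uncovered, add both of its endpoints to the cover.
    The matched edges are vertex-disjoint, so the cover is at most twice
    an optimal one, i.e. the algorithm is strictly 2-competitive."""
    cover = set()
    for u, v in edge_stream:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return cover

# A path 1-2-3-4 revealed edge by edge; the greedy takes all four vertices
# (size 4) while the optimum {2, 3} has size 2, so the ratio 2 is tight here.
print(sorted(online_vertex_cover([(1, 2), (2, 3), (3, 4)])))  # -> [1, 2, 3, 4]
```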
• 23.10.15, 15:00, CAB H 100.5, Shungo Koichi: Integrality of the multiflow problems and tight spans of directed metric spaces.
Abstract: A linear programming problem does not necessarily have an integral optimal solution even if the matrices and vectors describing the problem are all integral. On the other hand, the maximum flow problem, which can also be written as a linear programming problem, has an interesting property: if the capacity of each edge is a positive integer, there is necessarily an integral optimal solution.
As is classically known, one way to explain this property is via the total unimodularity of matrices. However, this argument cannot explain why some of the multiflow problems, which are generalizations of the maximum flow problem, also have this integrality property. To characterize the multiflow problems that have an integral optimal solution, we introduce the tight span of a directed metric space and demonstrate how to use it.
• 20.10.15, 16:00, CAB H 53, Anita Schöbel: Concepts for Robust Multiobjective Optimization.
• 20.10.15, 10:00, CAB H 52, Lukas Gianinazzi: Cache Oblivious Minimum Cut.
Abstract: We study the problem of finding a minimum cut cache-efficiently. We introduce two cache oblivious algorithms for the global minimum cut problem. The first algorithm is probabilistic and the second one is deterministic.
The first algorithm is based on the algorithm by Karger and Stein. It returns the correct result with high probability. On a graph with V vertices and a cache of size M with lines of width B, the algorithm causes O(⌈V^2/B⌉ log^2 V log⌈V/M⌉) cache misses in expectation and runs in expected time O(V^2 log V). This is made possible by a graph representation which supports O(V) random edge contractions incurring a total of O(⌈V^2/B⌉) cache misses on a graph with initially V vertices.
The second algorithm is based on Stoer and Wagner's algorithm. On a graph with V vertices and E edges it incurs O(min(EV log⌈V/B⌉, ⌈V^3/B⌉)) cache misses and runs in time O(min(EV log V, V^3)).
Both results assume a tall cache, meaning that M ∈ Ω(B^2). As the algorithms are cache oblivious, they do not require knowledge of the cache size M or the line width B; in fact, the bounds apply to all levels of the memory hierarchy simultaneously.
Experimental comparisons with existing codes show that our randomized minimum cut algorithm gives significant performance improvements in external memory.
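The random-contraction core of the first algorithm can be sketched without any cache machinery. The following is plain Karger contraction with repeated independent trials (a textbook sketch, not the cache-oblivious variant of the thesis):

```python
import random

def karger_min_cut(edges, n, trials=200, seed=0):
    """Repeated random contraction: merge endpoints of random edges until
    two super-vertices remain; the surviving crossing edges form a cut.
    Keep the smallest cut found over many independent trials."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        remaining = n
        while remaining > 2:
            u, v = edges[rng.randrange(len(edges))]
            ru, rv = find(u), find(v)
            if ru != rv:                        # contract a real edge
                parent[ru] = rv
                remaining -= 1
        best = min(best, sum(1 for u, v in edges if find(u) != find(v)))
    return best

# Two triangles joined by a single bridge edge (2, 3): minimum cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(edges, 6))  # -> 1
```

Each trial succeeds with inverse-polynomial probability, so repeating trials boosts the success probability; the Karger-Stein refinement and the cache-efficient representation discussed above improve on this basic scheme.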
• 21.09.15, 14:00, CAB H 52, Paolo Penna: Noise and Synchronization in Games.
Abstract: I will briefly present simple game dynamics with and without noise, and explain how the latter are used to model certain physical systems (ferromagnetism) and other applications in computer science (technology diffusion). A remarkable discovery in game theory is that 'noise' can be an extremely powerful selection tool, in the sense that players are able to select the good equilibria (and discard the bad ones).
In the second part of this talk I will present a refined version of the Price of Anarchy (quality of equilibria in games) based on 'decentralized' dynamics with noise. This notion has two advantages: (1) it suggests an intuitive definition of 'reasonable' equilibria (robust to small mistakes and easy to reach), and (2) it allows us to understand the role of *synchronization* in such dynamics (by comparing to the quality of 'synchronous' dynamics with noise).
• 31.08.15, 15:00, CAB H 52, Tobias Pröger: Robust Routing in Urban Public Transportation: Evaluating Strategies that Learn From the Past.
Abstract: Given an urban public transportation network and historic delay information, we consider the problem of computing reliable journeys. We propose new algorithms based on our recently presented solution concept (Böhmová et al., ATMOS 2013), and perform an experimental evaluation using real-world delay data from Zürich, Switzerland. We compare these methods to natural approaches as well as to our recently proposed method which can also be used to measure typicality of past observations. Moreover, we demonstrate how this measure relates to the predictive quality of the individual methods. In particular, if the past observations are typical, then the learning-based methods are able to produce solutions that perform well on typical days, even in the presence of strong delays.
• 27.08.15, 14:00, CAB H 52, Sebastian Schraink: A Generalization of the Clique Minimal Separator Decomposition for k-uniform Hypergraphs.
Abstract: We investigate how the divide-and-conquer approach of the clique minimal separator (CMS) decomposition can be used for solving problems on hypergraphs. After an introduction to the CMS decomposition of graphs, we look at particular algorithmic problems on hypergraphs and search for strategies to solve them by using the CMS decomposition as a building block. As we will see, this works for several problems; however, there are problem classes which are extremely difficult to transform into a suitable form such that the CMS decomposition can be applied. Therefore we aim to find a proper generalization of the CMS decomposition that operates directly on the hypergraph. It turns out that it is far from simple to find a suitable definition for the CMS decomposition of a hypergraph at all. First, we need to give an appropriate definition of a clique. To achieve this, we restrict ourselves to k-uniform hypergraphs. Moreover, we need to define the notion of a separator for hypergraphs. Most importantly, we aim to establish a relationship between CMS and chordality for hypergraphs, similar to the one that is present in graphs.
• 25.08.15, 13:30, CAB H 52, Daniel Graf: How to Sort by Walking on a Tree.
Abstract: Consider a graph G with n vertices. On each vertex we place a box. These n vertices and n boxes are both numbered from 1 to n and initially shuffled according to a permutation π. We introduce a sorting problem for a single robot: In every step, the robot can walk along an edge of G and can carry at most one box at a time. At a vertex, it may swap the box placed there with the box it is carrying. How many steps does the robot need to sort all the boxes?
We present an algorithm that produces a shortest possible sorting walk for such a robot if G is a tree. The algorithm runs in quadratic time and can be simplified further if G is a path. We show that for planar graphs the problem of finding a shortest possible sorting walk is NP-complete.
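For very small instances, the problem itself can be solved by brute-force search over configurations. The sketch below is a 0-1 BFS (swaps and pick-ups free, each edge traversal costing one step); its state space is exponential, so it is only a way to explore tiny examples, not the polynomial-time tree algorithm of the talk:

```python
from collections import deque

def shortest_sorting_walk(adj, start, boxes):
    """0-1 BFS over configurations (position, carried box, placement).
    Swaps/pick-ups are free; each edge traversal costs one step.
    Exponential state space: only viable for tiny instances."""
    n = len(boxes)
    goal = tuple(range(n))
    init = (start, None, tuple(boxes))
    dist = {init: 0}
    dq = deque([(0, init)])
    while dq:
        d, state = dq.popleft()
        if d > dist[state]:
            continue                      # stale queue entry
        pos, carried, placed = state
        if placed == goal and carried is None:
            return d
        moves = []
        if carried is None and placed[pos] is not None:
            b = list(placed); b[pos] = None
            moves.append((0, (pos, placed[pos], tuple(b))))   # pick up
        if carried is not None:
            b = list(placed); b[pos] = carried
            moves.append((0, (pos, placed[pos], tuple(b))))   # swap / put down
        for nb in adj[pos]:
            moves.append((1, (nb, carried, placed)))          # walk an edge
        for cost, nxt in moves:
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                (dq.appendleft if cost == 0 else dq.append)((d + cost, nxt))
    return None

# Path 0-1-2 with the boxes on vertices 0 and 1 transposed:
# pick up, walk right, swap, walk left, put down -- two walking steps.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(shortest_sorting_walk(adj, 0, (1, 0, 2)))  # -> 2
```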
• 19.08.15, 15:00, CNB H 100.5, Sandro Montanari: On Sampling Simple Paths in Planar Graphs According to Their Lengths.
Abstract: We consider the problem of sampling simple paths between two given vertices in a planar graph and propose a natural Markov chain exploring such paths by means of "local" modifications. This chain can be tuned so that the probability of sampling a path depends on its \emph{length} (for instance, output shorter paths with higher probability than longer ones). We show that this chain is always ergodic and thus it converges to the desired sampling distribution for any planar graph. While this chain is not rapidly mixing in general, we prove that a simple restricted variant is. The restricted chain samples paths on a 2D lattice which are monotone in the vertical direction. To the best of our knowledge, this is the first example of a rapidly mixing Markov chain for sampling simple paths with a probability that depends on their lengths.
• 11.08.15, 15:00, CAB H 52 - Barbara Geissmann: Recurring Comparison Faults: Sorting and Finding the Minimum.
Abstract: In a faulty environment, comparisons between two elements with respect to an underlying linear order can come out right or go wrong. A wrong comparison is a recurring comparison fault if comparing the same two elements yields the very same result each time we compare them. We examine the impact of such faults on the elementary problems of sorting a set of distinct elements and finding a minimum element in such a set. The more faults occur, the worse the approaches to solve these problems can become, and we parametrize our analysis by an upper bound k on the number of faults.
We first explain that reconstructing the sorted order of the elements is impossible in the presence of even one fault. Then, we focus on the maximum information content we get by performing all possible comparisons. We consider two natural approaches for sorting the elements that involve knowledge of the outcomes of all comparisons: the first approach finds a permutation (compatible solution) that contradicts the outcomes of comparisons at most k times, and the second approach sorts the elements by the number of times an element is reported to be larger in the outcomes of its comparisons with all other elements (score solution). In such permutations the elements can be dislocated from their positions in the linear order. We measure the quality of such permutations by three measures: the maximum dislocation of an element, the sum of dislocations of all elements, and the Kemeny distance compared to the linear order. We show for compatible solutions that the Kemeny distance is at most 2k, the sum of dislocations at most 4k, and the maximum dislocation at most 2k. In score solutions the Kemeny distance is smaller than 4k, the sum of dislocations smaller than 8k, and the maximum dislocation at most k+1. Our upper bounds are tight for compatible solutions, but possibly not tight for score solutions. It turns out that neither of the two approaches is better than the other in all measures.
For the problem of finding a minimum element, we first observe that there is no deterministic algorithm that guarantees to return one of the smallest k+1 elements. This implies that computing the first element of a score solution is optimum and we derive an algorithm that guarantees to find one of the k+2 smallest elements in time O(√(k)n) making O(√(k)n) comparisons, where n is the number of elements, and we generalize this algorithm to find all elements of score at most a given target t.
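The score solution can be illustrated on a toy faulty comparator. The fault set below is hypothetical, chosen only to show how scores absorb a single recurring fault:

```python
def score_sort(elements, less_than):
    """Score solution: order elements by how many of their comparisons
    with all other elements report them as larger."""
    def score(x):
        return sum(1 for y in elements if y != x and less_than(y, x))
    return sorted(elements, key=score)

# True order 1 < 2 < 3 < 4 with one recurring fault: comparing 1 and 4
# always (wrongly) reports 4 < 1. The faulty pair here is illustrative.
faults = {(1, 4)}                     # (a, b): comparator inverts a < b
def less_than(a, b):
    if (a, b) in faults:
        return False
    if (b, a) in faults:
        return True
    return a < b

print(score_sort([3, 1, 4, 2], less_than))  # -> [1, 2, 3, 4]
```

With k = 1 fault the scores still rank every element within k+1 of its true position, consistent with the maximum-dislocation bound stated above.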
• 11.08.15, 11:30, CAB H 52 - Daniel Graf: Scheduling and Sorting Algorithms for Robotic Warehousing Systems.
Abstract: In this thesis, we study an automated storage and retrieval system in the context of storing bicycles at a train station, where commuters bring their bicycles in the morning and reclaim them in the evening. The commuters place their bicycles in a box at a door, and a robot then moves the box underground for safe storage.
In the first part of the thesis, we investigate how the robot best assigns boxes to the underground storage slots. The goal is to find a schedule which allows the robot to meet all the customers at the door without causing any waiting time. For a single door, we devise an efficient algorithm that finds such an optimal schedule under certain conditions. If there is a second door, or arriving and departing customers are heavily interleaved, we show that it is NP-complete for the robot to find a wait-free schedule even if it knows all the requests in advance.
In the second part of the thesis, we focus on the task of rearranging the stored boxes in between customer interactions. We phrase the task as a compelling physical sorting problem: on every vertex of a graph G, we place a box. The n vertices and n boxes are each numbered from 1 to n, and initially shuffled according to a permutation π. A single robot is given the task to sort these boxes. In every step, the robot can walk along an edge of the graph and can carry at most one box at a time. At a vertex, it may swap the box placed there with the box it is carrying. How many steps does the robot need to sort all the boxes? We
present efficient algorithms that construct such a shortest sorting walk if G is a path, a tree or a cycle, and we show that the problem is NP-complete for planar graphs.
• 10.08.15, 15:00, CAB H 52 - Zlatko Drmac (University of Zagreb): A New Selection Operator for the Discrete Empirical Interpolation Method -- improved a priori error bound and extensions.
Abstract: Direct numerical simulation of dynamical systems plays a crucial role in studying a great variety of complex physical phenomena in areas ranging from neuron modeling to microchip design. The ever increasing demand for accuracy leads to dynamical systems of ever larger scale and complexity. Simulation in such large-scale settings can make overwhelming demands on computational resources; thus creating a need for model reduction such as the Discrete Empirical Interpolation Method
(DEIM) to create smaller, faster approximations to complex dynamical systems that still guarantee high fidelity.
We present a new framework for constructing the DEIM projection operator. The interpolation node selection procedure is formulated using a QR factorization with column pivoting. This selection strategy leads to a sharper error bound for the DEIM projection error, and the selection operator does not change if the DEIM basis U is replaced by UQ with an arbitrary unitary matrix Q. The new approach allows modifications that, in the case of large dimensions, use only randomly sampled rows of U but are capable of producing equally good approximations.
• 07.07.15, 17:00, CAB H 52 - Andreas Baertschi: On Conflict-Free Multi-Coloring.
Abstract: A conflict-free coloring of a hypergraph H = (V, E) with n=|V| nodes and m=|E| hyperedges (where E ⊆ 2^V) is a coloring of the vertices V such that every hyperedge contains a vertex of "unique" color. Our goal is to minimize the total number of distinct colors. In its full generality, this problem is known as the conflict-free (hypergraph) coloring problem. It is known that Theta(sqrt{m}) colors might be needed in general. In this paper we study the relaxation of the problem where one is allowed to assign multiple colors to the same node. The goal here is to substantially reduce the total number of colors, while keeping the number of colors per node as small as possible. By a simple adaptation of a result by Pach and Tardos [2009] on the single-color version of the problem, one obtains that only O(log^2 m) colors in total are sufficient (on every instance) if each node is allowed to use up to O(log m) colors. By improving on the result of Pach and Tardos (under the assumption n << m), we show that the same result can be achieved with O(log m * log n) colors in total, and either O(log m) or O(log n * log log m) ⊆ O(log^2 n) colors per node. The latter coloring can be computed by a polynomial-time Las Vegas algorithm.
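The defining property can be checked mechanically. A small verifier for conflict-free multi-colorings (the example coloring is illustrative only):

```python
def is_conflict_free(coloring, hyperedges):
    """Verify a conflict-free multi-coloring: every hyperedge must contain
    a vertex owning a color that no other vertex of that hyperedge has."""
    return all(
        any(
            any(all(c not in coloring[u] for u in edge if u != v)
                for c in coloring[v])
            for v in edge
        )
        for edge in hyperedges
    )

# Node 2 uses two colors; color 1 is unique to it in both hyperedges.
coloring = {1: {0}, 2: {0, 1}, 3: {0}}
print(is_conflict_free(coloring, [{1, 2}, {1, 2, 3}]))  # -> True
print(is_conflict_free({1: {0}, 2: {0}}, [{1, 2}]))     # -> False
```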
• 22.06.15, 15:00, CAB H 52 - Florian Buenzli: Sparsifiers maintaining structural parts of Clique Minimal Separator Decomposition.
Abstract: In many situations graphs can be used to model complex networks of objects. The Clique Minimal Separator (CMS) Decomposition is a suitable approach to separate a graph into overlapping subsets, called atoms, which allows one to process the data in a divide-and-conquer fashion or to analyze the network's structure. To improve the processing of graphs, we have sought sparsifiers to preprocess the graph, making it sparser while maintaining its properties regarding the CMS decomposition. We studied the cycle sparsifier, which removes all chords of all cycles and, as we have shown, does not create a new CMS when applied to atoms. On the other hand, there are difficulties concerning its application to arbitrary graphs, due to the complexity of detecting whether an edge is part of a clique minimal separator, whose removal may cause atoms to collapse. Further, we improved a prototype implementation of the CMS decomposition algorithms and published it in the well-established open source library JGraphT.
• 27.05.15, 14:30, CAB H 52 - Priska Pietra: An XLogo programming environment to steer a wheeled robot.
Abstract: LOGO is a programming language developed at MIT, focused on teaching informatics concepts, and XLogo is a dialect that developed from it. Several dedicated editors exist; one of them is XLogo4Schools, which evolved from the XLogo editor and is a simple piece of software used to work with students. However, its possibilities are restricted to on-screen turtle graphics. It would be highly motivating for students to see that creating software for physical hardware is not rocket science, and this can be achieved by giving them the possibility to program a robot. This project extends the XLogo4Schools editor with an interpreter translating XLogo programs into C++ routines that are then uploaded to an Arduino Robot. Using a robot, the pupils are encouraged to practice an engineering approach to solve tasks that take place in the physical world, while learning concepts like modularization, parametrization, and testing in a playful environment.
• 13.03.15, 16:00, CAB H 52 - Sandro Montanari: Max Shortest Path for Imprecise Points.
Abstract: We are given a graph G=(V,E) with n vertices, where each vertex corresponds to one of n given polygons in the plane, and two vertices s,t∈V. We consider the problem of placing a point inside each polygon in order to maximize the shortest path distance between s and t in the resulting geometric graph. We show that the problem is hard to approximate for any factor (1-ε) with ε<1/4, even when the polygons consist only of vertically aligned segments. For the special case where G is a path, we provide an algorithm computing an optimum placement in time O(n·k²), where k is the maximum number of corners of a polygon.

• 02.03.15, 15:00, CAB H 52 - Christian Kudahl: The Advice Complexity for a Class of Online Problems.
Abstract: The advice complexity of an online problem is a measure of how much knowledge of the future an online algorithm needs in order to achieve a certain competitive ratio. We determine the advice complexity of a number of hard online problems including independent set, vertex cover, dominating set and several others. These problems are hard, since a single wrong answer by the online algorithm can have devastating consequences. For each of these problems, we show that $\log\left(1+\frac{(c-1)^{c-1}}{c^{c}}\right)n=\Theta (n/c)$ bits of advice are necessary and sufficient (up to an additive term of $O(\log n)$) to achieve a competitive ratio of $c$. This is done by introducing a new string guessing problem related to those of Emek et al. (TCS 2011) and Böckenhauer et al. (TCS 2014). It turns out that this gives a powerful but easy-to-use method for providing both upper and lower bounds on the advice complexity of an entire class of online problems.

Previous results of Halldórsson et al. (TCS 2002) on online independent set, in a related model, imply that the advice complexity of the problem is $\Theta (n/c)$. Our results improve on this by providing an exact formula for the higher-order term. Böckenhauer et al. (ISAAC 2009) gave a lower bound of $\Omega (n/c)$ and an upper bound of $O((n\log c)/c)$ on the advice complexity of online disjoint path allocation. We improve on the upper bound by a factor of $\log c$. For the remaining problems, no bounds on their advice complexity were previously known.
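The higher-order term can be checked numerically: c times the per-request advice log₂(1+(c-1)^(c-1)/c^c) tends to a constant (1/(e ln 2) ≈ 0.53), consistent with the Θ(n/c) statement above:

```python
import math

def bits_per_request(c):
    """Per-request advice: log2(1 + (c-1)^(c-1) / c^c)."""
    return math.log2(1 + (c - 1) ** (c - 1) / c ** c)

for c in (1, 2, 4, 8, 16):
    # c * bits_per_request(c) tends to 1/(e ln 2) ~ 0.53 as c grows,
    # matching the stated Theta(n/c) behaviour; at c = 1 it is exactly 1.
    print(c, round(bits_per_request(c), 4), round(c * bits_per_request(c), 4))
```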

• 23.02.15, 15:00, CAB H 52 - Jérémie Chalopin: Data Delivery by Energy-Constrained Mobile Agents on a Line.
Abstract: We consider n mobile agents of limited energy that are placed on a straight line and that need to collectively deliver a single piece of data from a given source point s to a given target point t on the line. Agents can move only as far as their batteries allow. They can hand over the data when they meet, i.e., when they are positioned at the same point. In this talk we show that deciding whether the agents can deliver the data is (weakly) NP-complete, and we present a quasi-, pseudo-polynomial-time algorithm that runs in time O(∆^2·n^{1+4log ∆}), where ∆ is the distance between s and t. This answers an open problem stated by Anaya et al. (DISC 2012).

• 06.01.15, 14:00, HG H 52 - Andreas Feldmann: A (1 + ε)-Embedding of Low Highway Dimension Graphs into Bounded Treewidth Graphs.
Abstract: Graphs with low highway dimension were introduced in [Abraham et al., SODA 2010] as a model of transportation networks. We show that any such graph can be embedded into a distribution over bounded treewidth graphs with arbitrarily small distortion. More concretely, if the highway dimension of G is constant, we show how to randomly compute a subgraph of the shortest path metric of the input graph G with the following two properties: it distorts the distances of G by a factor of 1 + ε in expectation and has a treewidth that is polylogarithmic in the aspect ratio of G. In particular, this result implies quasi-polynomial time approximation schemes for a number of optimization problems that naturally arise on transportation networks, including Travelling Salesman, Steiner Tree, and Facility Location. This is joint work with Wai Shing Fung, Jochen Könemann, and Ian Post.

Page URL: http://www.pw.ethz.ch/news-and-events/sos-talkactual/2015.html
Fri Jul 28 10:41:00 CEST 2017
© 2017 Eidgenössische Technische Hochschule Zürich