Publications


Conference or Workshop Proceedings

  1. [SIGCOMM] ABM: Active Buffer Management in Datacenters
    Vamsi Addanki, Maria Apostolaki, Manya Ghobadi, Stefan Schmid, and Laurent Vanbever.
    Proceedings of the ACM SIGCOMM 2022 Conference, Amsterdam, Netherlands, 2022.
    Paper   Slides   Video   Code   Abstract   BibTeX

    Today’s network devices share buffer across queues to avoid drops during transient congestion and absorb bursts. As the buffer-per-bandwidth-unit in datacenters decreases, the need for optimal buffer utilization becomes more pressing. Typical devices use a hierarchical packet admission control scheme: First, a Buffer Management (BM) scheme decides the maximum length per queue at the device level and then an Active Queue Management (AQM) scheme decides which packets will be admitted at the queue level. Unfortunately, the lack of cooperation between the two control schemes leads to (i) harmful interference across queues, due to the lack of isolation; (ii) increased queueing delay, due to the obliviousness to the per-queue drain time; and (iii) thus unpredictable burst tolerance. To overcome these limitations, we propose ABM, Active Buffer Management, which incorporates insights from both BM and AQM. Concretely, ABM accounts for both total buffer occupancy (typically used by BM) and queue drain time (typically used by AQM). We analytically prove that ABM provides isolation, bounded buffer drain time and achieves predictable burst tolerance without sacrificing throughput. We empirically find that ABM improves the 99th percentile FCT for short flows by up to 94% compared to the state-of-the-art buffer management. We further show that ABM improves the performance of advanced datacenter transport protocols in terms of FCT by up to 76% compared to DCTCP, TIMELY and PowerTCP under bursty workloads even at moderate load conditions.
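The two-signal admission idea described in the abstract can be illustrated with a small sketch. Everything here is a hypothetical simplification (the function names, the exact scaling, and the sharing across congested queues are my own, not the paper's precise formula): the per-queue threshold grows with the unused buffer (the BM signal) and with the queue's normalized drain rate (the AQM signal).

```python
def abm_threshold(alpha, drain_rate_norm, n_congested, buffer_size, occupied):
    """Per-queue admission cap that scales with both the unused buffer
    (Buffer Management view) and the queue's normalized drain rate
    (Active Queue Management view), shared among congested queues.
    Illustrative sketch only, not ABM's exact threshold."""
    free = buffer_size - occupied
    return alpha * (drain_rate_norm / max(n_congested, 1)) * free

def admit(queue_len, packet_len, *, alpha, drain_rate_norm, n_congested,
          buffer_size, occupied):
    """Admit the packet only while the queue stays under its threshold."""
    return queue_len + packet_len <= abm_threshold(
        alpha, drain_rate_norm, n_congested, buffer_size, occupied)
```

Under this toy rule, a fast-draining queue (high normalized drain rate) earns a larger share of the remaining buffer than a slow one, which is the intuition behind bounded drain time.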
    @inproceedings{abm,
      author = {Addanki, Vamsi and Apostolaki, Maria and Ghobadi, Manya and Schmid, Stefan and Vanbever, Laurent},
      title = {ABM: Active Buffer Management in Datacenters},
      year = {2022},
      booktitle = {Proceedings of the ACM SIGCOMM 2022 Conference}
    }
    

  2. [NSDI] PowerTCP: Pushing the Performance Limits of Datacenter Networks
    Vamsi Addanki, Oliver Michel, and Stefan Schmid.
    19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22), Renton, WA, 2022.
    Paper   Slides   Video   Website   Code   Abstract   BibTeX

    Increasingly stringent throughput and latency requirements in datacenter networks demand fast and accurate congestion control. We observe that the reaction time and accuracy of existing datacenter congestion control schemes are inherently limited. They either rely only on explicit feedback about the network state (e.g., queue lengths in DCTCP) or only on variations of state (e.g., RTT gradient in TIMELY). To overcome these limitations, we propose a novel congestion control algorithm, PowerTCP, which achieves much more fine-grained congestion control by adapting to the bandwidth-window product (henceforth called power). PowerTCP leverages in-band network telemetry to react to changes in the network instantaneously without loss of throughput and while keeping queues short. Due to its fast reaction time, our algorithm is particularly well-suited for dynamic network environments and bursty traffic patterns. We show analytically and empirically that PowerTCP can significantly outperform the state-of-the-art in both traditional datacenter topologies and emerging reconfigurable datacenters where frequent bandwidth changes make congestion control challenging. In traditional datacenter networks, PowerTCP reduces tail flow completion times of short flows by 80% compared to DCQCN and TIMELY, and by 33% compared to HPCC even at 60% network load. In reconfigurable datacenters, PowerTCP achieves 85% circuit utilization without incurring additional latency and cuts tail latency by at least 2x compared to existing approaches.
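The core principle, reacting to both the absolute network state and its variation, can be sketched as a toy controller. Everything below (the function name, the `tau` reaction horizon, the EWMA smoothing) is an illustrative assumption and not PowerTCP's actual window update, which is derived from in-band telemetry:

```python
def window_update(cwnd, queue, prev_queue, bdp, dt, tau=1.0, gamma=0.9):
    """Toy sender: combine the queue length (state, as in DCTCP) with its
    gradient (variation, as in TIMELY) into one congestion signal, then
    smooth with an EWMA. Hypothetical sketch, not the paper's algorithm."""
    gradient = (queue - prev_queue) / dt              # how fast congestion grows
    # normalized congestion proxy: 1.0 when the queue is empty and stable
    congestion = (queue + max(gradient, 0.0) * tau + bdp) / bdp
    target = cwnd / congestion                        # back off as congestion rises
    return gamma * cwnd + (1.0 - gamma) * target
```

The point of the sketch: a controller using only `queue` reacts late, and one using only `gradient` misjudges absolute congestion; combining both reacts to a burst in a single step.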
    @inproceedings{nsdi22,
      author = {Addanki, Vamsi and Michel, Oliver and Schmid, Stefan},
      title = {{PowerTCP}: Pushing the Performance Limits of Datacenter Networks},
      booktitle = {19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22)},
      year = {2022},
      address = {Renton, WA},
      url = {https://www.usenix.org/conference/nsdi22/presentation/addanki},
      publisher = {USENIX Association}
    }
    

  3. [Networking] Moving a step forward in the quest for Deterministic Networks (DetNet)
    Vamsi Addanki and Luigi Iannone.
    2020 IFIP Networking Conference (Networking), Paris, France, 2020.
    Paper   Slides   Code   Abstract   BibTeX

    Recent years have witnessed fast-growing demand, in the context of industrial use-cases, for so-called Deterministic Networks (DetNet). The IEEE 802.1 TSN architecture provides link-layer services and IETF DetNet provides network-layer services for deterministic and reliable forwarding. In this context, in the first part of this paper, we tackle the problem of misbehaving flows and propose a novel queuing and scheduling mechanism based on Push-In-First-Out (PIFO) queues. Differently from the original DetNet/TSN specifications, our solution is able to guarantee the performance of priority flows in spite of misbehaving flows. In the second part of this paper, we present our simulator DeNS: DetNet Simulator, based on OMNET++ and NeSTiNG, providing building blocks for link-layer TSN and network-layer DetNet. Existing simulators have important limitations that do not allow simulating the full DetNet/TSN protocol stack. We overcome these limitations, making DetNet/TSN evaluations easy. Our simulations clearly show that our solution is able to satisfy the constraints of deterministic networks, namely guaranteeing zero packet loss and low latency, while at the same time allowing best-effort flows to co-exist. Furthermore, we show how our newly-proposed queuing and scheduling solution successfully limits the impact of misbehaving flows.
    @inproceedings{detnetnetworking20,
      author = {Addanki, Vamsi and Iannone, Luigi},
      booktitle = {2020 IFIP Networking Conference (Networking)},
      title = {Moving a step forward in the quest for Deterministic Networks (DetNet)},
      year = {2020},
      pages = {458-466}
    }
    

  4. [PAM] Alias Resolution Based on ICMP Rate Limiting
    Kevin Vermeulen, Burim Ljuma, Vamsi Addanki, Matthieu Gouel, Olivier Fourmaux, Timur Friedman, and Reza Rejaie.
    Passive and Active Measurement, 2020.
    Paper   Abstract   BibTeX

    Alias resolution techniques (e.g., Midar) associate, mostly through active measurement, a set of IP addresses as belonging to a common router. These techniques rely on distinct router features that can serve as a signature. Their applicability is affected by router support of the features and the robustness of the signature. This paper presents a new alias resolution tool called Limited Ltd. that exploits ICMP rate limiting, a feature increasingly supported by modern routers that has not previously been used for alias resolution. It sends ICMP probes toward target interfaces in order to trigger rate limiting, extracting features from the probe reply loss traces. It uses a machine learning classifier to designate pairs of interfaces as aliases. We describe the details of the algorithm used by Limited Ltd. and illustrate its feasibility and accuracy. Limited Ltd. is not only the first tool that can perform alias resolution on IPv6 routers that do not generate monotonically increasing fragmentation IDs (e.g., Juniper routers); it also complements the state-of-the-art techniques for IPv4 alias resolution. All of our code and the collected dataset are publicly available.
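The intuition behind the signature can be shown in a toy form: if two probed interfaces sit behind the same router's shared ICMP rate limiter, probing both at a rate above the limit should produce correlated reply-loss patterns. The function below is a deliberately naive agreement score over two equal-length 0/1 loss traces; Limited Ltd. itself feeds much richer loss-trace features into a trained classifier.

```python
def likely_aliases(loss_trace_a, loss_trace_b, threshold=0.8):
    """Toy alias test: fraction of probe slots where the two interfaces'
    loss outcomes (1 = reply lost, 0 = reply received) agree. A shared
    rate limiter tends to drop replies for both interfaces in the same
    slots. Hypothetical simplification of the paper's ML classifier."""
    agree = sum(a == b for a, b in zip(loss_trace_a, loss_trace_b))
    return agree / len(loss_trace_a) >= threshold
```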
    @inproceedings{aliaspam20,
      author = {Vermeulen, Kevin and Ljuma, Burim and Addanki, Vamsi and Gouel, Matthieu and Fourmaux, Olivier and Friedman, Timur and Rejaie, Reza},
      editor = {Sperotto, Anna and Dainotti, Alberto and Stiller, Burkhard},
      title = {Alias Resolution Based on ICMP Rate Limiting},
      booktitle = {Passive and Active Measurement},
      year = {2020},
      publisher = {Springer International Publishing},
      address = {Cham},
      pages = {231--248},
      isbn = {978-3-030-44081-7}
    }
    

  5. [Networking] Controlling software router resource sharing by fair packet dropping
    Vamsi Addanki, Leonardo Linguaglossa, James Roberts, and Dario Rossi.
    IFIP Networking Conference (IFIP Networking) and Workshops, Zurich, Switzerland, 2018.
    Paper   Code   Abstract   BibTeX

    The paper discusses resource sharing in a software router where both bandwidth and CPU may be bottlenecks. We propose a novel fair dropping algorithm to realize per-flow max-min fair sharing of these resources. The algorithm is compatible with features like batch I/O and batch processing that tend to make classical scheduling impractical. We describe an implementation using Vector Packet Processing, part of the Linux Foundation FD.io project. Preliminary experimental results prove the efficiency of the algorithm in controlling bandwidth and CPU sharing at high speed. Performance in dynamic traffic is evaluated using analysis and simulation, demonstrating that the proposed approach is both effective and scalable.
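The admission side of per-flow max-min fair sharing can be sketched in a few lines. This is an illustrative model (the class name and the static equal-share rule are my assumptions); the paper's VPP implementation works with batched I/O and covers CPU as well as bandwidth.

```python
class FairDropper:
    """Drop-based fairness sketch: a packet is admitted only if its flow's
    backlog stays within an equal share of the buffer, so heavy flows see
    drops first while light flows pass untouched."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.backlog = {}   # per-flow buffered bytes
        self.total = 0

    def enqueue(self, flow, size):
        self.backlog.setdefault(flow, 0)
        fair_share = self.capacity / len(self.backlog)
        if (self.backlog[flow] + size > fair_share
                or self.total + size > self.capacity):
            return False    # drop: flow would exceed its max-min fair share
        self.backlog[flow] += size
        self.total += size
        return True

    def dequeue(self, flow, size):
        sent = min(size, self.backlog.get(flow, 0))
        if sent:
            self.backlog[flow] -= sent
            self.total -= sent
        return sent
```

Note the design choice the abstract hints at: fairness is enforced purely by the drop decision on arrival, so no per-flow scheduler is needed on the dequeue path, which is what keeps the approach compatible with batch I/O.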
    @inproceedings{fairdropnetworking18,
      author = {Addanki, Vamsi and Linguaglossa, Leonardo and Roberts, James and Rossi, Dario},
      booktitle = {IFIP Networking Conference (IFIP Networking) and Workshops},
      title = {Controlling software router resource sharing by fair packet dropping},
      year = {2018},
      pages = {1-9},
      doi = {10.23919/IFIPNetworking.2018.8696549}
    }
    

Tech Reports

  1. [arXiv] Mars: Near-Optimal Throughput with Shallow Buffers in Reconfigurable Datacenter Networks
    Vamsi Addanki, Chen Avin, and Stefan Schmid.
    CoRR, 2022.
    Paper   Abstract   BibTeX

    The performance of large-scale computing systems often critically depends on high-performance communication networks. Dynamically reconfigurable topologies, e.g., based on optical circuit switches, are emerging as an innovative new technology to deal with the explosive growth of datacenter traffic. Specifically, periodic reconfigurable datacenter networks (RDCNs) such as RotorNet (SIGCOMM 2017), Opera (NSDI 2020) and Sirius (SIGCOMM 2020) have been shown to provide high throughput, by emulating a complete graph through fast periodic circuit switch scheduling.
         However, to achieve such a high throughput, existing reconfigurable network designs pay a high price: in terms of potentially high delays, but also, as we show as a first contribution in this paper, in terms of the high buffer requirements. In particular, we show that under buffer constraints, emulating the high-throughput complete-graph is infeasible at scale, and we uncover a spectrum of unvisited and attractive alternative RDCNs, which emulate regular graphs of lower node degree.
         We present Mars, a periodic reconfigurable topology which emulates a d-regular graph with near-optimal throughput. In particular, we systematically analyze how the degree d can be optimized for throughput given the available buffer and delay tolerance of the datacenter.
    @article{marsreview22,
      author = {Addanki, Vamsi and Avin, Chen and Schmid, Stefan},
      url = {https://arxiv.org/abs/2204.02525},
      title = {Mars: Near-Optimal Throughput with Shallow Buffers in Reconfigurable Datacenter Networks},
      journal = {CoRR},
      volume = {abs/2204.02525},
      year = {2022},
      eprinttype = {arXiv},
      eprint = {2204.02525}
    }
    

  2. [arXiv] FB: A Flexible Buffer Management Scheme for Data Center Switches
    Maria Apostolaki, Vamsi Addanki, Manya Ghobadi, and Laurent Vanbever.
    CoRR, 2021.
    Paper   Abstract   BibTeX

    Today, network devices share buffer across priority queues to avoid drops during transient congestion. While cost-effective most of the time, this sharing can cause undesired interference among seemingly independent traffic. As a result, low-priority traffic can cause increased packet loss to high-priority traffic. Similarly, long flows can prevent the buffer from absorbing incoming bursts even if they do not share the same queue. The cause of this perhaps unintuitive outcome is that today’s buffer sharing techniques are unable to guarantee isolation across (priority) queues without statically allocating buffer space. To address this issue, we designed FB, a novel buffer sharing scheme that offers strict isolation guarantees to high-priority traffic without sacrificing link utilizations. Thus, FB outperforms conventional buffer sharing algorithms in absorbing bursts while achieving on-par throughput. We show that FB is practical and runs at line-rate on existing hardware (Barefoot Tofino). Significantly, FB’s operations can be approximated in non-programmable devices.
    @article{bufferfbreview21,
      author = {Apostolaki, Maria and Addanki, Vamsi and Ghobadi, Manya and Vanbever, Laurent},
      url = {https://arxiv.org/abs/2105.10553},
      title = {{FB:} {A} Flexible Buffer Management Scheme for Data Center Switches},
      journal = {CoRR},
      volume = {abs/2105.10553},
      year = {2021},
      eprinttype = {arXiv},
      eprint = {2105.10553}
    }
    

  3. [arXiv] Self-Adjusting Packet Classification
    Maciej Pacut, Juan Vanerio, Vamsi Addanki, Arash Pourdamghani, Gábor Rétvári, and Stefan Schmid.
    CoRR, 2021.
    Paper   Abstract   BibTeX

    This paper is motivated by the vision of more efficient packet classification mechanisms that self-optimize in a demand-aware manner. At the heart of our approach lies a self-adjusting linear list data structure, where unlike in the classic data structure, there are dependencies, and some items must be in front of the others; for example, to correctly classify packets by rules arranged in a linked list, each rule must be in front of lower priority rules that overlap with it. After each access we can rearrange the list, similarly to Move-To-Front, but dependencies need to be respected.
         We present a 4-competitive online rearrangement algorithm, whose cost is at most four times worse than the optimal offline algorithm; no deterministic algorithm can be better than 3-competitive. The algorithm is simple and attractive, especially for memory-limited systems, as it does not require any additional memory (e.g., neither timestamps nor frequency statistics). Our approach can simply be deployed as a drop-in replacement for a static data structure, potentially benefiting many existing networks.
         We evaluate our self-adjusting list packet classifier on realistic ruleset and traffic instances. We find that our classifier performs similarly to a static list for low-locality traffic, but significantly outperforms Efficuts (by a factor 7x), CutSplit (3.6x), and the static list (14x) for high locality and small rulesets. Memory consumption is 10x lower on average compared to Efficuts and CutSplit.
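The rearrangement primitive can be sketched as a dependency-aware Move-To-Front: the accessed rule moves forward past every rule it does not depend on and stops behind the nearest rule it must follow. This sketch only conveys the idea, not the paper's 4-competitive algorithm, and `depends_on` is a hypothetical predicate (true when the first rule must stay behind the second, e.g. an overlapping higher-priority rule):

```python
def access_and_adjust(rules, i, depends_on):
    """Access rules[i], then move it as far toward the front as the
    precedence constraints allow (plain Move-To-Front when there are
    no dependencies). Mutates and returns the list."""
    rule = rules.pop(i)
    j = i
    while j > 0 and not depends_on(rule, rules[j - 1]):
        j -= 1                      # safe to slide past this rule
    rules.insert(j, rule)
    return rules
```

With an empty dependency relation this degenerates to classic Move-To-Front, which matches the abstract's framing of the problem as a constrained generalization of self-adjusting lists.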
    @article{firewallreview21,
      author = {Pacut, Maciej and Vanerio, Juan and Addanki, Vamsi and Pourdamghani, Arash and R{\'{e}}tv{\'{a}}ri, G{\'{a}}bor and Schmid, Stefan},
      title = {Self-Adjusting Packet Classification},
      journal = {CoRR},
      volume = {abs/2109.15090},
      year = {2021},
      url = {https://arxiv.org/abs/2109.15090},
      eprinttype = {arXiv},
      eprint = {2109.15090}
    }
    

  4. [arXiv] Online List Access with Precedence Constraints
    Maciej Pacut, Juan Vanerio, Vamsi Addanki, Arash Pourdamghani, Gábor Rétvári, and Stefan Schmid.
    CoRR, 2021.
    Paper   Abstract   BibTeX

    This paper considers a natural generalization of the online list access problem in the paid exchange model, where additionally there can be precedence constraints ("dependencies") among the nodes in the list. For example, this generalization is motivated by applications in the context of packet classification. Our main contributions are constant-competitive deterministic and randomized online algorithms, designed around a procedure Move-Recursively-Forward, a generalization of Move-To-Front tailored to handle node dependencies. Parts of the analysis build upon ideas of the classic online algorithms Move-To-Front and BIT, and address the challenges of the extended model. We further discuss the challenges related to insertions and deletions.
    @article{listaccessreview21,
      author = {Pacut, Maciej and Vanerio, Juan and Addanki, Vamsi and Pourdamghani, Arash and R{\'{e}}tv{\'{a}}ri, G{\'{a}}bor and Schmid, Stefan},
      title = {Online List Access with Precedence Constraints},
      journal = {CoRR},
      volume = {abs/2104.08949},
      year = {2021},
      url = {https://arxiv.org/abs/2104.08949},
      eprinttype = {arXiv},
      eprint = {2104.08949}
    }
    

Posters or Demos

  1. [SIGCOMM] Fair Dropping for Multi-Resource Fairness in Software Routers (Extended Abstract)
    Vamsi Addanki, Leonardo Linguaglossa, James Roberts, and Dario Rossi.
    Proceedings of the ACM SIGCOMM 2018 Conference on Posters and Demos, Budapest, Hungary, 2018.
    Paper   Poster   Video   Video-contd.   Abstract   BibTeX

    We demonstrate that fair dropping is an effective means to realize fair sharing of bandwidth and CPU in a software router. Analysis underpinning the effectiveness of the proposed approach is presented elsewhere [1].
    @inproceedings{fairdropsigcomm18,
      author = {Addanki, Vamsi and Linguaglossa, Leonardo and Roberts, James and Rossi, Dario},
      title = {Fair Dropping for Multi-Resource Fairness in Software Routers Extended Abstract},
      year = {2018},
      isbn = {9781450359153},
      publisher = {Association for Computing Machinery},
      address = {New York, NY, USA},
      url = {https://doi.org/10.1145/3234200.3234210},
      doi = {10.1145/3234200.3234210},
      booktitle = {Proceedings of the ACM SIGCOMM 2018 Conference on Posters and Demos},
      pages = {132–134},
      numpages = {3},
      keywords = {vector packet processing (VPP), fair dropping, multi-resource fairness, Linux Foundation FD.io project},
      series = {SIGCOMM '18}
    }
    

Theses

  1. [Master's Thesis] Plasticine: A flexible buffer management scheme for data center networks
    Vamsi Addanki.
    Supervisors: Laurent Vanbever, Maria Apostolaki, Sébastien Tixeuil.
    ETH Zurich, August 2020.
    Paper   Slides   Abstract   BibTeX

    Network devices today share buffer across output queues to avoid drops during transient congestion with less buffer space, and thus at a lower cost, per chip. While effective most of the time, this sharing can cause undesired interactions among seemingly independent traffic, especially under high load. As a result, low-priority traffic can cause increased packet loss to high-priority traffic, intra-DC traffic can impair WAN throughput, and long flows can prevent the buffer from absorbing incoming bursts. The cause of this perhaps unintuitive outcome is that today’s buffer sharing techniques are unable to guarantee isolation even to a small portion of the traffic without statically allocating buffer space. To address this issue, we designed Plasticine, a novel buffer management scheme which offers strict isolation guarantees without keeping the buffer idle and is practical in today’s hardware. We found that Plasticine: (i) significantly improves query completion times (roughly 30% or more compared to the state-of-the-art solution) while achieving on-par throughput compared to conventional buffer management algorithms as well as TCP variants; and (ii) improves short-flow completion times. Our proposal is the first attempt to address the problem of bursts in today’s data center networks from a buffer management perspective.
    @mastersthesis{masterthesis,
      title = {Plasticine: A flexible buffer management scheme for data center networks},
      author = {Addanki, Vamsi},
      school = {ETH Zurich},
      year = {2020}
    }
    

  2. [Bachelor's Thesis] Vectorized Packet Processing
    Vamsi Addanki.
    Supervisors: Dario Rossi, Leonardo Linguaglossa.
    Telecom Paris, July 2017.
    Paper   Abstract   BibTeX

    In the last few years, numerous frameworks have emerged that implement network packet processing in user-space using kernel-bypass techniques, providing high-speed software data-plane functionality on commodity hardware. VPP [3] is one such framework, which recently gained popularity for its flexibility, user-space implementation, and kernel-bypass techniques. VPP allows users to arrange or rearrange network functions as a packet processing graph, providing a full-blown stack of network functions. Unlike other frameworks, in which packets are processed one by one, VPP performs packet processing vector-by-vector, i.e., on a batch of packets at once. This brings significant performance benefits due to improved L1 cache hits. This report introduces the Vector Packet Processor, a framework by Cisco which is now part of the Fast Data Input Output (FD.io) project [2]. It then discusses the initial experiments on VPP, the test bed used for performing them, and how to set up that test bed. Three quantities (throughput, average vector size, CPU clock cycles) are observed while varying the maximum vector size. The results clearly show that the cache-hit ratio increases from a vector size of 4 up to 256, where it is maximal, and then decreases. The report begins with related work on packet processing, to explain why VPP is important in the high-speed networking world.
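The scalar-versus-vector distinction the report describes comes down to loop order, which can be mimicked in a few lines. The stage functions here are stand-ins for VPP graph nodes (an assumption for illustration): per-packet traversal touches every node's code for each packet, while per-batch traversal keeps one node's instructions hot in the L1 instruction cache across the whole vector.

```python
from functools import reduce

def process_scalar(packets, stages):
    """Packet-by-packet: each packet runs the full graph before the next."""
    return [reduce(lambda p, stage: stage(p), stages, pkt) for pkt in packets]

def process_vector(packets, stages):
    """VPP-style vector processing: each node handles the whole batch
    before the vector moves on to the next node."""
    for stage in stages:
        packets = [stage(pkt) for pkt in packets]
    return packets
```

Both orders compute the same result; only the cache behavior differs, which is why the measured throughput varies with the maximum vector size.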
    @thesis{bachelorthesis,
      type = {Bachelor's Thesis},
      title = {Vectorized Packet Processing},
      author = {Addanki, Vamsi},
      institution = {Telecom Paris},
      year = {2017}
    }