Publications

Systems Support for Machine Learning

  • Efficient Sparse Collective Communication and its application to Accelerate Distributed Deep Learning
    J. Fei, C.-Y. Ho, A. N. Sahu, M. Canini, A. Sapio.
    KAUST technical report, Apr 2020.
    abstract

    Efficient collective communication is crucial to parallel-computing applications such as distributed training of large-scale recommendation systems and natural language processing models. Existing collective communication libraries focus on optimizing operations for dense inputs, resulting in transmissions of many zeros when inputs are sparse. This is at odds with the current trend of increasing data sparsity in large models.
    We propose OmniReduce, an efficient streaming aggregation system that exploits sparsity to maximize effective bandwidth use by sending only non-zero data blocks. We demonstrate that this idea is beneficial and accelerates distributed training by up to 8.2×. Even at 100 Gbps, we assess that OmniReduce delivers 1.2-2.6× better performance for network-bottlenecked DNNs.

    bibtex
    @TECHREPORT{omnireduce,
      author = {Fei, Jiawei and Ho, Chen-Yu and Sahu, Atal Narayan and Canini, Marco and Sapio, Amedeo},
      title = {{Efficient Sparse Collective Communication and its application to Accelerate Distributed Deep Learning}},
      institution = {KAUST},
      note = {http://hdl.handle.net/10754/665369},
      year = {2020},
      month = {Sep} 
    }
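
    A minimal sketch of the block-sparse idea described in the OmniReduce abstract above: each gradient is split into fixed-size blocks and only blocks containing non-zero values are transmitted, together with their indices. The block size, function names, and the toy aggregation loop are illustrative assumptions, not the actual OmniReduce protocol or wire format.

    import numpy as np

    BLOCK_SIZE = 256  # assumed block size, for illustration only

    def to_sparse_blocks(grad, block_size=BLOCK_SIZE):
        """Return (index, block) pairs for blocks that contain any non-zero value."""
        pad = (-len(grad)) % block_size
        padded = np.concatenate([grad, np.zeros(pad, dtype=grad.dtype)])
        blocks = padded.reshape(-1, block_size)
        nonzero = np.flatnonzero(np.any(blocks != 0, axis=1))
        return [(int(i), blocks[i]) for i in nonzero], len(padded)

    def aggregate(worker_grads):
        """Toy aggregator: sums only the transmitted non-zero blocks."""
        sparse, lengths = zip(*(to_sparse_blocks(g) for g in worker_grads))
        total = np.zeros(lengths[0])
        for pairs in sparse:
            for idx, block in pairs:
                total[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE] += block
        return total

    # Two mostly-zero gradients: only one block per worker crosses the "network".
    g1, g2 = np.zeros(1024), np.zeros(1024)
    g1[10], g2[700] = 1.0, 2.0
    print(aggregate([g1, g2])[[10, 700]])   # [1. 2.]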
    
  • Compressed Communication for Distributed Deep Learning: Survey and Quantitative Evaluation
    H. Xu, C.-Y. Ho, A. M. Abdelmoniem, A. Dutta, E. H. Bergou, K. Karatsenidis, M. Canini, P. Kalnis.
    KAUST technical report, Apr 2020.
    abstract

    Powerful computer clusters are used nowadays to train complex deep neural networks (DNN) on large datasets. Distributed training workloads are increasingly communication bound. For this reason, many lossy compression techniques have been proposed to reduce the volume of transferred data. Unfortunately, it is difficult to reason about the behavior of compression methods, because existing work relies on inconsistent evaluation testbeds and largely ignores the performance impact of practical system configurations.
    In this paper, we present a comprehensive survey of the most influential compressed communication methods for DNN training, together with an intuitive classification (i.e., quantization, sparsification, hybrid and low-rank). We also propose a unified framework and API that allows for consistent and easy implementation of compressed communication on popular machine learning toolkits. We instantiate our API on TensorFlow and PyTorch, and implement 16 such methods. Finally, we present a thorough quantitative evaluation with a variety of DNNs (convolutional and recurrent), datasets and system configurations. We show that the DNN architecture affects the relative performance among methods. Interestingly, depending on the underlying communication library and computational cost of compression/decompression, we demonstrate that some methods may be impractical.

    bibtex
    @TECHREPORT{grace,
      author = {Xu, Hang and Ho, Chen-Yu and Abdelmoniem, Ahmed M. and Dutta, Aritra and Bergou, El Houcine and Karatsenidis, Konstantinos and Canini, Marco and Kalnis, Panos},
      title = {{Compressed Communication for Distributed Deep Learning: Survey and Quantitative Evaluation}},
      institution = {KAUST},
      note = {http://hdl.handle.net/10754/662495},
      year = {2020},
      month = {Apr} 
    }
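
    The survey above centers on a unified API for gradient compression. The hypothetical interface below illustrates the general shape of such an API with a top-k sparsifier as the example method; the class and method names are assumptions for illustration, not the actual GRACE API.

    import numpy as np

    class Compressor:
        """Hypothetical base class: compress a gradient before sending,
        decompress on the receiving side."""
        def compress(self, grad):
            raise NotImplementedError
        def decompress(self, payload, shape):
            raise NotImplementedError

    class TopKCompressor(Compressor):
        """Sparsification example: keep only the k largest-magnitude entries."""
        def __init__(self, k):
            self.k = k
        def compress(self, grad):
            flat = grad.ravel()
            idx = np.argpartition(np.abs(flat), -self.k)[-self.k:]
            return idx, flat[idx]
        def decompress(self, payload, shape):
            idx, vals = payload
            out = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
            out[idx] = vals
            return out.reshape(shape)

    grad = np.random.randn(4, 4)
    comp = TopKCompressor(k=3)
    approx = comp.decompress(comp.compress(grad), grad.shape)  # sparse approximation of grad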
    
  • On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning (Technical Report)
    A. Dutta, E. H. Bergou, A. M. Abdelmoniem, C.-Y. Ho, A. N. Sahu, M. Canini, P. Kalnis.
    In Proceedings of AAAI’20, Feb 2020.
    abstract

    Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks. However, there exists a discrepancy between theory and practice: while theoretical analysis of most existing compression methods assumes compression is applied to the gradients of the entire model, many practical implementations operate individually on the gradients of each layer of the model.
    In this paper, we prove that layer-wise compression is, in theory, better, because the convergence rate is upper bounded by that of entire-model compression for a wide range of biased and unbiased compression methods. However, despite the theoretical bound, our experimental study of six well-known methods shows that convergence, in practice, may or may not be better, depending on the actual trained model and compression ratio. Our findings suggest that it would be advantageous for deep learning frameworks to include support for both layer-wise and entire-model compression.

    bibtex
    @INPROCEEDINGS{Dutta.AAAI20,
      author = {Dutta, Aritra and Bergou, El Houcine and Abdelmoniem, Ahmed M. and Ho, Chen-Yu and Sahu, Atal Narayan and Canini, Marco and Kalnis, Panos},
      title = {{On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning}},
      booktitle = {Proceedings of AAAI'20},
      year = {2020},
      month = {Feb},
    }
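
    The layer-wise versus entire-model distinction studied above can be made concrete with a small sketch. Top-k sparsification is used only as an example compressor, and the layer sizes and 1% ratio are illustrative assumptions: layer-wise compression applies the operator to each layer's gradient separately, while entire-model compression applies it once to the concatenated gradient.

    import numpy as np

    def top_k(vec, ratio=0.01):
        """Keep the largest-magnitude `ratio` fraction of entries, zero the rest."""
        k = max(1, int(len(vec) * ratio))
        out = np.zeros_like(vec)
        idx = np.argpartition(np.abs(vec), -k)[-k:]
        out[idx] = vec[idx]
        return out

    layer_grads = [np.random.randn(n) for n in (1000, 50, 10_000)]  # per-layer gradients

    # Layer-wise: compress each layer's gradient independently.
    layerwise = [top_k(g) for g in layer_grads]

    # Entire-model: concatenate, compress once, then split back into layers.
    flat = top_k(np.concatenate(layer_grads))
    boundaries = np.cumsum([len(g) for g in layer_grads])[:-1]
    entire_model = np.split(flat, boundaries)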
    
  • Cracking Open the Black Box: What Observations Can Tell Us About Reinforcement Learning Agents
    A. Dethise, M. Canini, S. Kandula.
    In Proceedings of NetAI’19, Aug 2019.
    abstract

    Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model inputs can identify unwanted or anomalous behaviors of the model; hinting at a potential path towards increasing trust in ML-based solutions.

    bibtex
    @INPROCEEDINGS{Dethise.COTBB19,
      author = {Arnaud Dethise and Marco Canini and Srikanth Kandula},
      title = {{Cracking Open the Black Box: What Observations Can Tell Us About Reinforcement Learning Agents}},
      booktitle = {Proceedings of NetAI'19},
      year = {2019},
      month = {Aug} 
    }
    
  • Natural Compression for Distributed Deep Learning
    S. Horvath, C.-Y. Ho, L. Horvath, A. N. Sahu, M. Canini, P. Richtarik.
    arXiv preprint arXiv:1905.10988
    abstract

    Due to their hunger for big data, modern deep learning models are trained in parallel, often in distributed environments, where communication of model updates is the bottleneck. Various update compression (e.g., quantization, sparsification, dithering) techniques have been proposed in recent years as a successful tool to alleviate this problem. In this work, we introduce a new, remarkably simple and theoretically and practically effective compression technique, which we call natural compression (NC). Our technique is applied individually to all entries of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two. NC is “natural” since the nearest power of two of a real expressed as a float can be obtained without any computation, simply by ignoring the mantissa. We show that compared to no compression, NC increases the second moment of the compressed vector by the tiny factor 9/8 only, which means that the effect of NC on the convergence speed of popular training algorithms, such as distributed SGD, is negligible. However, the communications savings enabled by NC are substantial, leading to 3-4x improvement in overall theoretical running time. For applications requiring more aggressive compression, we generalize NC to natural dithering, which we prove is exponentially better than the immensely popular random dithering technique. Our compression operators can be used on their own or in combination with existing operators for a more aggressive combined effect. Finally, we show that NC is particularly effective for the in-network aggregation (INA) framework for distributed training, where the update aggregation is done on a switch, which can only perform integer computations.

    bibtex
    @ARTICLE{2019arXiv:1905.10988,
      author = {Horv{\'a}th, Samuel and Ho, Chen-Yu and Horv{\'a}th, \v{L}udov{\'i}t and Sahu, Atal Narayan and Canini, Marco and Richt{\'a}rik, Peter},
      title = {{Natural Compression for Distributed Deep Learning}},
      journal = {arXiv preprint arXiv:1905.10988},
      year = {2019},
      month = {May}
    }
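
    The rounding rule described in the Natural Compression abstract above can be sketched in a few lines. This toy version emulates the operator arithmetically via frexp; the paper obtains the neighboring powers of two directly from the float representation by dropping the mantissa, and the exact operator definition may differ in details.

    import math, random

    def natural_compress(x):
        """Unbiased randomized rounding of x to a (signed) power of two."""
        if x == 0.0:
            return 0.0
        m, e = math.frexp(abs(x))             # abs(x) = m * 2**e with 0.5 <= m < 1
        low, high = 2.0 ** (e - 1), 2.0 ** e  # the two neighboring powers of two
        p_up = (abs(x) - low) / (high - low)  # round up with this probability => unbiased
        y = high if random.random() < p_up else low
        return math.copysign(y, x)

    # Unbiasedness check: the average of many compressed copies approaches x.
    x = 2.7
    print(sum(natural_compress(x) for _ in range(100_000)) / 100_000)   # ~2.7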
    
  • Direct Nonlinear Acceleration
    E. H. Bergou, A. Dutta, Y. Xiao, M. Canini, P. Richtarik.
    arXiv preprint arXiv:1905.11692
    abstract

    bibtex
    @ARTICLE{2019arXiv:1905.11692,
      author = {Bergou, El Houcine and Dutta, Aritra and Xiao, Yunming and Canini, Marco and Richt{\'a}rik, Peter},
      title = {{Direct Nonlinear Acceleration}},
      journal = {arXiv preprint arXiv:1905.11692},
      year = {2019},
      month = {May}
    }
    
  • Scaling Distributed Machine Learning with In-Network Aggregation
    A. Sapio, M. Canini, C.-Y. Ho, J. Nelson, P. Kalnis, C. Kim, A. Krishnamurthy, M. Moshref, D. R. K. Ports, P. Richtarik.
    KAUST technical report, Feb 2019.
    abstract

    Training complex machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide a robust, efficient solution that speeds up training by up to 300%, and at least by 20% for a number of real-world benchmark models.

    bibtex
    @TECHREPORT{SwitchML,
      author = {Sapio, Amedeo and Canini, Marco and Ho, Chen-Yu and Nelson, Jacob and Kalnis, Panos and Kim, Changhoon and Krishnamurthy, Arvind and Moshref, Masoud and Ports, Dan R. K. and Richt{\'a}rik, Peter},
      title = {{Scaling Distributed Machine Learning with In-Network Aggregation}},
      institution = {KAUST},
      note = {http://hdl.handle.net/10754/631179},
      year = {2019},
      month = {Feb} 
    }
    
  • In-Network Computation is a Dumb Idea Whose Time Has Come
    A. Sapio, I. Abdelaziz, A. Aldilaijan, M. Canini, P. Kalnis.
    In Proceedings of HotNets’17, Nov 2017.
    abstract

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network?
    In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers’ computation time.

    bibtex
    @INPROCEEDINGS{Sapio.DAIET17,
      author = {Amedeo Sapio and Ibrahim Abdelaziz and Abdulla Aldilaijan and Marco Canini and Panos Kalnis},
      title = {{In-Network Computation is a Dumb Idea Whose Time Has Come}},
      booktitle = {Proceedings of HotNets'17},
      year = {2017},
      month = {Nov} 
    }
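
    A toy illustration of why aggregating en route helps, in the spirit of the abstract above: combining key/value pairs before they reach the reducer shrinks the data that must cross the network. The word-count workload and the reduction-ratio formula are illustrative assumptions, not the DAIET prototype.

    from collections import Counter

    records = ["spark", "hadoop", "spark", "flink", "spark", "hadoop"] * 1000

    combined = Counter(records)                    # aggregation applied "in-network"
    reduction = 1 - len(combined) / len(records)   # fraction of records eliminated
    print(f"data reduction ratio: {reduction:.2%}")  # ~99.95% for this skewed toy input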
    

Performance Predictability in Clouds, Key-Value Stores and Big Data Systems

  • Assise: Performance and Availability via Client-local NVM in a Distributed File System (Technical Report)
    T. E. Anderson, M. Canini, J. Kim, D. Kostic, Y. Kwon, S. Peter, W. Reda, H. N. Schuh, E. Witchel.
    In Proceedings of OSDI’20, Nov 2020. To appear.
    abstract

    The adoption of low latency persistent memory modules (PMMs) upends the long-established model of remote storage for distributed file systems. Instead, by colocating computation with PMM storage, we can provide applications with much higher IO performance, sub-second application failover, and strong consistency. To demonstrate this, we built the Assise distributed file system, based on a persistent, replicated coherence protocol that manages client-local PMM as a linearizable and crash-recoverable cache between applications and slower (and possibly remote) storage. Assise maximizes locality for all file IO by carrying out IO on process-local, socket-local, and client-local PMM whenever possible. Assise minimizes coherence overhead by maintaining consistency at IO operation granularity, rather than at fixed block sizes.
    We compare Assise to Ceph/BlueStore, NFS, and Octopus on a cluster with Intel Optane DC PMMs and SSDs for common cloud applications and benchmarks, such as LevelDB, Postfix, and FileBench. We find that Assise improves write latency up to 22×, throughput up to 56×, fail-over time up to 103×, and scales up to 6× better than its counterparts, while providing stronger consistency semantics.

    bibtex
    @INPROCEEDINGS{Reda.OSDI20,
      author = {Anderson, Thomas E. and Canini, Marco and Kim, Jongyul and Kosti\'c, Dejan and Kwon, Youngjin and Peter, Simon and Reda, Waleed and Schuh, Henry N. and Witchel, Emmett},
      title = {{Assise: Performance and Availability via Client-local NVM in a Distributed File System}},
      booktitle = {Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation 2020 (OSDI'20)},
      year = {2020},
      month = {Nov}
    }
    
  • Finding the Right Cloud Configuration for Analytics Clusters
    M. Bilal, M. Canini, R. Rodrigues.
    In Proceedings of SoCC’20, Oct 2020. To appear.
    abstract

    Finding good cloud configurations for deploying a single distributed system is already a challenging task, and it becomes substantially harder when a data analytics cluster is formed by multiple distributed systems since the search space becomes exponentially larger. In particular, recent proposals for single system deployments rely on benchmarking runs that become prohibitively expensive as we shift to joint optimization of multiple systems, as users have to wait until the end of a long optimization run to start the production run of their job.
    We propose Vanir, an optimization framework designed to operate in an ecosystem of multiple distributed systems forming an analytics cluster. To deal with this large search space, Vanir takes the approach of quickly finding a good enough configuration and then attempts to further optimize the configuration during production runs. This is achieved by combining a series of techniques in a novel way, namely a metrics-based optimizer for the benchmarking runs, and a Mondrian forest-based performance model and transfer learning during production runs. Our results show that Vanir can find deployments that perform comparably to the ones found by state-of-the-art single-system cloud configuration optimizers while spending 2× fewer benchmarking runs. This leads to an overall search cost that is 1.3-24× lower compared to the state-of-the-art. Additionally, when transfer learning can be used, Vanir can minimize the benchmarking runs even further, and use online optimization to achieve a performance comparable to the deployments found by today’s single-system frameworks.

    bibtex
    @INPROCEEDINGS{Bilal.SoCC20,
      author = {Bilal, Muhammad and Canini, Marco and Rodrigues, Rodrigo},
      title = {{Finding the Right Cloud Configuration for Analytics Clusters}},
      booktitle = {Proceedings of the 11th ACM Symposium on Cloud Computing 2020 (SoCC'20)},
      year = {2020},
      month = {Oct} 
    }
    
  • Do the Best Cloud Configurations Grow on Trees? An Experimental Evaluation of Black Box Algorithms for Optimizing Cloud Workloads
    M. Bilal, M. Serafini, M. Canini, R. Rodrigues.
    Proc. VLDB Endow., 13(11), Aug 2020.
    abstract

    Cloud configuration optimization is the procedure to determine the number and the type of instances to use when deploying an application in cloud environments, given a cost or performance objective. In the absence of a performance model for the distributed application, black-box optimization can be used to perform automatic cloud configuration. Numerous black-box optimization algorithms have been developed; however, their comparative evaluation has so far been limited to the hyper-parameter optimization setting, which differs significantly from the cloud configuration problem. In this paper, we evaluate 8 commonly used black-box optimization algorithms to determine their applicability for the cloud configuration problem. Our evaluation, using 23 different workloads, shows that in several cases Bayesian optimization with gradient-boosted regression trees performs better than methods chosen by prior work.

    bibtex
    @ARTICLE{Bilal.VLDB20,
      author = {Bilal, Muhammad and Serafini, Marco and Canini, Marco and Rodrigues, Rodrigo},
      title = {{Do the Best Cloud Configurations Grow on Trees? An Experimental Evaluation of Black Box Algorithms for Optimizing Cloud Workloads}},
      journal={Proc. VLDB Endow.},
      year = {2020},
      volume = {13},
      number = {11},
      doi = {10.14778/3407790.3407845}
    }
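
    A minimal sketch of the kind of search evaluated above, using scikit-optimize's GBRT-based optimizer. The objective, instance types, and prices are made-up placeholders; the paper's actual workloads, search spaces, and algorithm implementations differ.

    # pip install scikit-optimize
    from skopt import gbrt_minimize
    from skopt.space import Categorical, Integer

    space = [
        Categorical(["m4.large", "m4.xlarge", "c4.xlarge"], name="instance_type"),
        Integer(2, 16, name="num_nodes"),
    ]

    def deployment_cost(params):
        """Placeholder objective: a real run would deploy the configuration,
        run the workload, and return e.g. execution time multiplied by price."""
        instance_type, num_nodes = params
        hourly = {"m4.large": 0.10, "m4.xlarge": 0.20, "c4.xlarge": 0.20}[instance_type]
        return hourly * num_nodes + 100.0 / num_nodes   # toy cost/performance trade-off

    result = gbrt_minimize(deployment_cost, space, n_calls=20, random_state=0)
    print(result.x, result.fun)   # best configuration found and its objective value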
    
  • Fast and Accurate Load Balancing for Geo-Distributed Storage Systems
    K. L. Bogdanov, W. Reda, G. Q. Maguire Jr., D. Kostic, M. Canini.
    In Proceedings of SoCC’18, Oct 2018.
    abstract

    The increasing density of globally distributed datacenters reduces the network latency between neighboring datacenters and allows replicated services deployed across neighboring locations to share workload when necessary, without violating strict Service Level Objectives (SLOs).
    We present Kurma, a practical implementation of a fast and accurate load balancer for geo-distributed storage systems. At runtime, Kurma integrates network latency and service time distributions to accurately estimate the rate of SLO violations for requests redirected across geo-distributed datacenters. Using these estimates, Kurma solves a decentralized rate-based performance model enabling fast load balancing (in the order of seconds) while taming global SLO violations. We integrate Kurma with Cassandra, a popular storage system. Using real-world traces along with a geo-distributed deployment across Amazon EC2, we demonstrate Kurma’s ability to effectively share load among datacenters while reducing SLO violations by up to a factor of 3 in high load settings or reducing the cost of running the service by up to 17%.

    bibtex
    @INPROCEEDINGS{Bogdanov.SoCC18,
      author = {Bogdanov, Kirill L. and Reda, Waleed and Maguire Jr., Gerald Q. and Kosti\'c, Dejan and Canini, Marco},
      title = {{Fast and Accurate Load Balancing for Geo-Distributed Storage Systems}},
      booktitle = {Proceedings of the 9th ACM Symposium on Cloud Computing 2018 (SoCC'18)},
      year = {2018},
      month = {Oct} 
    }
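
    A minimal sketch of the estimate at the core of the Kurma abstract above: combine a network-latency distribution with a service-time distribution to estimate how often a redirected request would miss its SLO. The distributions and the 80 ms SLO are made-up placeholders; Kurma itself works with measured distributions and a decentralized rate-based model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder samples (milliseconds); a real system would use measured
    # inter-datacenter network latencies and per-datacenter service times.
    network_latency = rng.normal(30, 5, size=100_000).clip(min=0)
    service_time = rng.lognormal(mean=2.5, sigma=0.6, size=100_000)

    SLO_MS = 80.0

    def violation_rate(net, svc, slo):
        """Estimate P(net + svc > slo) for a redirected request by sampling sums."""
        total = net + rng.permutation(svc)   # pair the two sample sets independently
        return float(np.mean(total > slo))

    print(f"estimated SLO violation rate: {violation_rate(network_latency, service_time, SLO_MS):.3%}")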
    
  • Towards Automatic Parameter Tuning of Stream Processing Systems
    M. Bilal, M. Canini.
    In Proceedings of SoCC’17, Sep 2017.
    abstract

    Optimizing the performance of big-data streaming applications has become a daunting and time-consuming task: parameters may be tuned from a space of hundreds or even thousands of possible configurations. In this paper, we present a framework for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing three benchmark applications in Apache Storm. Our results show that a hill-climbing algorithm that uses a new heuristic sampling approach based on Latin Hypercube provides the best results. Our gray-box algorithm provides comparable results while being two to five times faster.

    bibtex
    @INPROCEEDINGS{Bilal.SoCC17,
      author = {Bilal, Muhammad and Canini, Marco},
      title = {{Towards Automatic Parameter Tuning of Stream Processing Systems}},
      booktitle = {Proceedings of the 8th ACM Symposium on Cloud Computing 2017 (SoCC'17)},
      year = {2017},
      month = {Sep} 
    }
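
    The abstract above mentions a hill-climbing algorithm seeded by Latin Hypercube sampling. Below is a generic Latin Hypercube sampler for picking initial configurations; the parameter names and ranges are illustrative assumptions, and the paper's specific heuristic variant is not reproduced here.

    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=None):
        """Basic Latin Hypercube sampling: one sample per stratum in every dimension.
        `bounds` is a list of (low, high) pairs, one per configuration parameter."""
        rng = rng or np.random.default_rng()
        dims = len(bounds)
        # One random point inside each of n_samples strata per dimension, then
        # shuffle each column so strata are paired randomly across dimensions.
        u = (rng.random((n_samples, dims)) + np.arange(n_samples)[:, None]) / n_samples
        for d in range(dims):
            rng.shuffle(u[:, d])
        lows = np.array([b[0] for b in bounds], dtype=float)
        highs = np.array([b[1] for b in bounds], dtype=float)
        return lows + u * (highs - lows)

    # Example: 5 starting configurations over two Storm-like parameters
    # (e.g. number of executors, buffer size) -- names and ranges are assumed.
    print(latin_hypercube(5, [(1, 64), (128, 4096)]))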
    
  • Distributed Resource Management Across Process Boundaries
    L. Suresh, P. Bodik, I. Menache, M. Canini, F. Ciucu.
    In Proceedings of SoCC’17, Sep 2017.
    abstract

    Multi-tenant distributed systems composed of small services, such as Service-oriented Architectures (SOAs) and Micro-services, raise new challenges in attaining high performance and efficient resource utilization. In these systems, a request execution spans tens to thousands of processes, and the execution paths and resource demands on different services are generally not known when a request first enters the system. In this paper, we highlight the fundamental challenges of regulating load and scheduling in SOAs while meeting end-to-end performance objectives on metrics of concern to both tenants and operators. We design Wisp, a framework for building SOAs that transparently adapts rate limiters and request schedulers system-wide according to operator policies to satisfy end-to-end goals while responding to changing system conditions. In evaluations against production as well as synthetic workloads, Wisp successfully enforces a range of end-to-end performance objectives, such as reducing average latencies, meeting deadlines, providing fairness and isolation, and avoiding system overload.

    bibtex
    @INPROCEEDINGS{Suresh.SoCC17,
      author = {Suresh, Lalith and Bodik, Peter and Menache, Ishai and Canini, Marco and Ciucu, Florin},
      title = {{Distributed Resource Management Across Process Boundaries}},
      booktitle = {Proceedings of the 8th ACM Symposium on Cloud Computing 2017 (SoCC'17)},
      year = {2017},
      month = {Sep} 
    }
    
  • Rein: Taming Tail Latency in Key-Value Stores via Multiget Scheduling
    W. Reda, M. Canini, L. Suresh, D. Kostic, S. Braithwaite.
    In Proceedings of EuroSys’17, Apr 2017.
    abstract

    We tackle the problem of reducing tail latencies in distributed key-value stores, such as the popular Cassandra database. We focus on workloads of multiget requests, which batch together access to several data elements and parallelize read operations across the data store machines. We first analyze a production trace of a real system and quantify the skew due to multiget sizes, key popularity, and other factors. We then proceed to identify opportunities for reduction of tail latencies by recognizing the composition of aggregate requests and by carefully scheduling bottleneck operations that can otherwise create excessive queues. We design and implement a system called Rein, which reduces latency via inter-multiget scheduling using low overhead techniques. We extensively evaluate Rein via experiments in Amazon Web Services (AWS) and simulations. Our scheduling algorithms reduce the median, 95th, and 99th percentile latencies by factors of 1.5, 1.5, and 1.9, respectively.

    bibtex
    @INPROCEEDINGS{Reda.EuroSys17,
      author = {Reda, Waleed and Canini, Marco and Suresh, Lalith and Kosti\'c, Dejan and Braithwaite, Sean},
      title = {{Rein: Taming Tail Latency in Key-Value Stores via Multiget Scheduling}},
      booktitle = {Proceedings of the 12th ACM European Conference on Computer Systems (EuroSys'17)},
      year = {2017},
      month = {Apr} 
    }
    
  • C3: Cutting Tail Latency in Cloud Data Stores via Adaptive Replica Selection
    L. Suresh, M. Canini, S. Schmid, A. Feldmann.
    In Proceedings of NSDI’15, May 2015.
    abstract

    Achieving predictable performance is critical for many distributed applications, yet difficult to achieve due to many factors that skew the tail of the latency distribution even in well-provisioned systems. In this paper, we present the fundamental challenges involved in designing a replica selection scheme that is robust in the face of performance fluctuations across servers. We illustrate these challenges through performance evaluations of the Cassandra distributed database on Amazon EC2. We then present the design and implementation of an adaptive replica selection mechanism, C3, that is robust to performance variability in the environment. We demonstrate C3’s effectiveness in reducing the latency tail and improving throughput through extensive evaluations on Amazon EC2 and through simulations. Our results show that C3 significantly improves the latencies along the mean, median, and tail (up to 3 times improvement at the 99.9th percentile) and provides higher system throughput.

    bibtex
    @INPROCEEDINGS{Suresh.NSDI15,
      author = {Suresh, Lalith and Canini, Marco and Schmid, Stefan and Feldmann, Anja},
      title = {{C3: Cutting Tail Latency in Cloud Data Stores via Adaptive Replica Selection}},
      booktitle = {Proceedings of 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI'15)},
      year = {2015},
      month = {May} 
    }
    
  • Application Aware Placement and Scheduling for Multi-tenant Clouds
    L. Suresh, M. Canini.
    HPI Technical Report for Spring 2013 Future SOC Lab
    abstract

    In an Infrastructure-as-a-Service (IaaS) environment, it is paramount to perform intelligent allocation of shared resources. Placement is the problem of choosing which virtual machine (VM) should run on which physical machine (PM), whereas Scheduling is the problem of sharing resources between multiple co-located VMs. An efficient placement and scheduling is one that, in addition to satisfying all constraints, increases the overall utilization of physical resources such as CPU, storage, or network. Determining an efficient placement and scheduling is a very challenging problem, especially in the face of conflicting goals and partially available information about workloads.
    In order to reason about placement, we first tackle the problem of performance interference that may affect co-located VMs—when there is more demand by multiple VMs for a resource than is available at a given instant of time. We thus characterize the performance of Hadoop in a shared and virtualized setting.

    bibtex
    @TECHREPORT{AbsintheHPITR,
    author = {Suresh, Lalith and Canini, Marco},
    title = {{Application Aware Placement and Scheduling for Multi-tenant Clouds}},
    institution = {Technical Report for Spring 2013 Future SOC Lab},
    year = {2013}
    }
    

Software-Defined Networking and Network Architecture

  • P4xos: Consensus as a Network Service
    H. T. Dang, P. Bressana, H. Wang, K. S. Lee, N. Zilberman, H. Weatherspoon, M. Canini, F. Pedone, R. Soule.
    IEEE/ACM Transactions on Networking, 28(4), Aug 2020.
    abstract

    In this paper, we explore how a programmable forwarding plane offered by a new breed of network switches might naturally accelerate consensus protocols, specifically focusing on Paxos. The performance of consensus protocols has long been a concern. By implementing Paxos in the forwarding plane, we are able to significantly increase throughput and reduce latency. Our P4-based implementation running on an ASIC in isolation can process over 2.5 billion consensus messages per second, a four orders of magnitude improvement in throughput over a widely-used software implementation. This effectively removes consensus as a bottleneck for distributed applications in data centers. Beyond sheer performance, our approach offers several other important benefits: it readily lends itself to formal verification; it does not rely on any additional network hardware; and as a full Paxos implementation, it makes only very weak assumptions about the network.

    bibtex
    @ARTICLE{Dang.P4xos,
      author = {Dang, Huynh Tu and Bressana, Pietro and Wang, Han and Lee, Ki Suh and Zilberman, Noa and Weatherspoon, Hakim and Canini, Marco and Pedone, Fernando and Soul{\'{e}}, Robert},
      title = {{P4xos: Consensus as a Network Service}},
      journal = {IEEE/ACM Transactions on Networking},
      year = {2020},
      month = {Aug},
      volume = {28},
      number = {4},
      doi = {10.1109/TNET.2020.2992106}
    }
    
  • A Survey on the Current Internet Interconnection Practices
    P. Marcos, M. Chiesa, C. Dietzel, M. Canini, M. Barcellos.
    SIGCOMM Comput. Commun. Rev., 50(1), Mar 2020.
    abstract

    The Internet topology has significantly changed in the past years. Today, it is richly connected and flattened. Such a change has been driven mostly by the fast growth of peering infrastructures and the expansion of Content Delivery Networks as alternatives to reduce interconnection costs and improve traffic delivery performance. While the topology evolution is perceptible, it is unclear whether or not the interconnection process has evolved or if it continues to be an ad-hoc and lengthy process. To shed light on the current practices of the Internet interconnection ecosystem and how these could impact the Internet, we surveyed more than 100 network operators and peering coordinators. We divide our results into two parts: (i) the current interconnection practices, including the steps of the process and the reasons to establish new interconnection agreements or to renegotiate existing ones, and the parameters discussed by network operators. In part (ii), we report the existing limitations and how the interconnection ecosystem can evolve in the future. We show that despite the changes in the topology, interconnecting continues to be a cumbersome process that usually takes days, weeks, or even months to complete, which is in stark contrast with the desire of most operators to reduce the interconnection setup time. We also identify that, despite being primary candidates to evolve the interconnection process, emerging on-demand connectivity companies fulfill only part of the existing gap between current interconnection practices and network operators’ desires.

    bibtex
    @ARTICLE{Marcos.CCR,
    author = {Marcos, Pedro and Chiesa, Marco and Dietzel, Christoph and Canini, Marco and Barcellos, Marinho},
    title = {{A Survey on the Current Internet Interconnection Practices}},
    journal = {SIGCOMM Comput. Commun. Rev.},
    year = {2020},
    month = {Mar},
    volume = {50},
    number = {1},
    doi = {10.1145/3390251.3390254}
    }
    
  • Towards Consistent SDNs: A Case for Network State Fuzzing
    A. Shukla, S. J. Saidi, S. Schmid, M. Canini, T. Zinner, A. Feldmann.
    IEEE Transactions on Network and Service Management, 17(2), Jun 2020.
    abstract

    The conventional wisdom is that a software-defined network (SDN) operates under the premise that the logically centralized control plane has an accurate representation of the actual data plane state. Unfortunately, bugs, misconfigurations, faults or attacks can introduce inconsistencies that undermine correct operation. Previous work in this area, however, lacks a holistic methodology to tackle this problem and thus addresses only certain parts of the problem. Yet, the consistency of the overall system is only as good as its least consistent part.
    Motivated by an analogy of network consistency checking with program testing, we propose to add active probe-based network state fuzzing to our consistency check repertoire. To this end, our system, PAZZ, combines production traffic with active probes to periodically test if the actual forwarding path and decision elements (on the data plane) correspond to the expected ones (on the control plane). Our insight is that active traffic covers the inconsistency cases beyond the ones identified by passive traffic. We built and evaluated a PAZZ prototype on topologies of varying scale and complexity. Our results show that PAZZ requires minimal network resources to detect persistent data plane faults through fuzzing and localize them quickly while outperforming baseline approaches.

    bibtex
    @ARTICLE{Shukla.TNSM20,
    author = {Shukla, Apoorv and Saidi, S. Jawad and Schmid, Stefan and Canini, Marco and Zinner, Thomas and Feldmann, Anja},
    title = {{Towards Consistent SDNs: A Case for Network State Fuzzing}},
    journal = {IEEE Transactions on Network and Service Management},
    year = {2020},
    month = {Jun},
    volume = {17},
    number = {2},
    doi = {10.1109/TNSM.2019.2955790}
    }
    
  • Measurements as First-class Artifacts
    P. Laffranchini, L. Rodrigues, M. Canini, B. Krishnamurthy.
    In Proceedings of INFOCOM’19, Apr 2019.
    abstract

    The emergence of programmable switches has sparked a significant amount of work on new techniques to perform more powerful measurement tasks, for instance, to obtain fine-grained traffic and performance statistics. Previous work has focused on the efficiency of these measurements alone and has neglected flexibility, resulting in solutions that are hard to reuse or repurpose and that often overlap in functionality or goals.
    In this paper, we propose the use of a set of reusable primitive building blocks that can be composed to express measurement tasks in a concise and simple way. We describe the rationale for the design of our primitives, which we have named MAFIA (Measurements As FIrst-class Artifacts), and using several examples we illustrate how they can be combined to realize a comprehensive range of network measurement tasks. Writing MAFIA code does not require expert knowledge of low-level switch architecture details. Using a prototype implementation of MAFIA, we demonstrate the applicability of our approach and show that the use of our primitives results in compiled code that is comparable in size and resource usage with manually written specialized P4 code, and can run on current hardware.

    bibtex
    @INPROCEEDINGS{Laffranchini.INFOCOM19,
      author={Paolo Laffranchini and Luis Rodrigues and Marco Canini and Balachander Krishnamurthy},
      title = {{Measurements as First-class Artifacts}},
      booktitle = {Proceedings of INFOCOM'19},
      year = {2019},
      month = {Apr}
    }
    
  • Dynam-IX: a Dynamic Interconnection eXchange
    P. Marcos, M. Chiesa, L. Muller, P. Kathiravelu, C. Dietzel, M. Canini, M. Barcellos.
    In Proceedings of CoNEXT’18, Dec 2018.
    abstract

    Autonomous Systems (ASes) can reach hundreds of networks via Internet eXchange Points (IXPs), allowing improvements in traffic delivery performance and competitiveness. Despite the benefits, any pair of ASes needs first to agree on exchanging traffic. By surveying 100+ network operators, we discovered that most interconnection agreements are established through ad-hoc and lengthy processes heavily influenced by personal relationships and brand image. As such, ASes prefer long-term agreements at the expense of a potential mismatch between actual delivery performance and current traffic dynamics. ASes also miss interconnection opportunities due to trust reasons. To improve wide-area traffic delivery performance, we propose Dynam-IX, a framework that allows operators to build trust cooperatively and implement traffic engineering policies to exploit the rich interconnection opportunities at IXPs quickly. Dynam-IX offers a protocol to automate the interconnection process, an intent abstraction to express interconnection policies, a legal framework to digitally handle contracts, and a distributed tamper-proof ledger to create trust among ASes. We build and evaluate a Dynam-IX prototype and show that an AS can establish tens of agreements per minute with negligible overhead for ASes and IXPs.

    bibtex
    @INPROCEEDINGS{Marcos.CoNEXT18,
      author={Pedro Marcos and Marco Chiesa and Lucas Muller and Pradeeban Kathiravelu and Christoph Dietzel and Marco Canini and Marinho Barcellos},
      title = {{Dynam-IX: a Dynamic Interconnection eXchange}},
      booktitle = {Proceedings of CoNEXT'18},
      year = {2018},
      month = {Dec}
    }
    
  • Sonata: Query-Driven Streaming Network Telemetry
    A. Gupta, R. Harrison, M. Canini, N. Feamster, J. Rexford, W. Willinger.
    In Proceedings of SIGCOMM’18, Aug 2018.
    abstract

    Managing and securing networks requires collecting and analyzing network traffic data in real time. Existing telemetry systems do not allow operators to express the range of queries needed to perform management or scale to large traffic volumes and rates. We present Sonata, an expressive and scalable telemetry system that coordinates joint collection and analysis of network traffic. Sonata provides a declarative interface to express queries for a wide range of common telemetry tasks; to enable real-time execution, Sonata partitions each query across the stream processor and the data plane, running as much of the query as it can on the network switch, at line rate. To optimize the use of limited switch memory, Sonata dynamically refines each query to ensure that available resources focus only on traffic that satisfies the query. Our evaluation shows that Sonata can support a wide range of telemetry tasks while reducing the workload for the stream processor by as much as seven orders of magnitude compared to existing telemetry systems.

    bibtex
    @INPROCEEDINGS{Gupta.SIGCOMM18,
      author={Arpit Gupta and Rob Harrison and Marco Canini and Nick Feamster and Jennifer Rexford and Walter Willinger},
      title = {{Sonata: Query-Driven Streaming Network Telemetry}},
      booktitle = {Proceedings of SIGCOMM'18},
      year = {2018},
      month = {Aug}
    }
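
    A host-only toy in the spirit of the declarative queries described above: filter, group, and threshold packets to flag destinations receiving unusually many DNS responses (the DNS-reflection example also appears in the HotNets'16 entry below). The operator names, packet fields, and threshold are illustrative assumptions, not the actual Sonata API, and the sketch omits Sonata's key step of compiling part of the query into switch rules.

    from collections import Counter

    def query(packets, threshold=100):
        """Count DNS responses per destination and report heavy destinations."""
        dns_responses = (p for p in packets if p["udp_sport"] == 53)      # filter
        per_dst = Counter(p["dst_ip"] for p in dns_responses)             # map + reduce
        return {dst: n for dst, n in per_dst.items() if n >= threshold}   # filter

    packets = ([{"udp_sport": 53, "dst_ip": "10.0.0.1"}] * 150
               + [{"udp_sport": 443, "dst_ip": "10.0.0.2"}] * 50)
    print(query(packets))   # {'10.0.0.1': 150}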
    
  • Prelude: Ensuring Inter-Domain Loop-Freedom in SDN-Enabled Networks
    A. Dethise, M. Chiesa, M. Canini.
    In Proceedings of APNet’18, Aug 2018.
    abstract

    Software-Defined eXchanges (SDXes) promise to improve the inter-domain routing ecosystem through SDN deployment. Yet, the naïve deployment of SDN on the Internet raises concerns about the correctness of the inter-domain data-plane. By allowing operators to deflect traffic from default BGP routes, SDN policies can create permanent forwarding loops that are not visible to the control-plane.
    We propose Prelude, a system for detecting SDN-induced forwarding loops between SDXes with high accuracy without leaking private routing information of network operators. To achieve this, we leverage Secure Multi-Party Computation (SMPC) techniques to build a novel and general privacy-preserving primitive that detects whether any subset of SDN rules might affect the same portion of traffic without learning anything about those rules. We then leverage this primitive as the main building block of a distributed system tailored to detect forwarding loops among any set of SDXes. We leverage the particular nature of SDXes to further improve the efficiency of our SMPC solution.
    Our solution rejects 100x fewer valid SDN rules than previous privacy-preserving solutions and provides better privacy guarantees. Furthermore, it naturally provides network operators with some insights on the cost of the deflected paths.

    bibtex
    @INPROCEEDINGS{Dethise.APNet18,
      author={Arnaud Dethise and Marco Chiesa and Marco Canini},
      title = {{Prelude: Ensuring Inter-Domain Loop-Freedom in SDN-Enabled Networks}},
      booktitle = {Proceedings of APNet'18},
      year = {2018},
      month = {Aug}
    }
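
    A cleartext sketch of the check that the Prelude abstract above describes: two prefix-based forwarding rules can affect the same traffic only if one destination prefix contains the other. Prelude performs this comparison under secure multi-party computation so the rules are never revealed, and handles general SDN match fields; both aspects are omitted here.

    from ipaddress import ip_network

    def rules_overlap(prefix_a, prefix_b):
        """True iff rules matching these destination prefixes can affect the same traffic."""
        a, b = ip_network(prefix_a), ip_network(prefix_b)
        return a.subnet_of(b) or b.subnet_of(a)

    print(rules_overlap("10.0.0.0/8", "10.1.0.0/16"))     # True: possible overlap
    print(rules_overlap("10.0.0.0/8", "192.168.0.0/16"))  # False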
    
  • Picking a Partner: A Fair Blockchain Based Scoring Protocol for Autonomous Systems
    Y. Alowayed, M. Canini, P. Marcos, M. Chiesa, M. Barcellos.
    In Proceedings of ANRW’18, Jul 2018.
    abstract

    We tackle the problem of enabling Autonomous Systems to evaluate network providers on the basis of their adherence to Service Level Agreements (SLAs) regarding interconnection agreements. In current Internet practices, choices of interconnection partners are driven by factors such as word of mouth, personal relationships, brand recognition and market intelligence, and not by proofs of previous performance. Given that Internet eXchange Points provide increasingly more peering choices, rudimentary schemes for picking interconnection partners are not adequate anymore.
    Although the current interconnection ecosystem is shrouded in confidentiality, our key observation is that recently-emerged blockchain technology and advances in cryptography enable a privacy-preserving decentralized solution based on actual performance measurements. We propose the concept of SLA score to evaluate network providers and introduce a privacy-preserving protocol that allows networks to compute and verify SLA scores.

    bibtex
    @INPROCEEDINGS{Alowayed.ANRW18,
      author={Yousef Alowayed and Marco Canini and Pedro Marcos and Marco Chiesa and Marinho Barcellos},
      title = {{Picking a Partner: A Fair Blockchain Based Scoring Protocol for Autonomous Systems}},
      booktitle = {Proceedings of ANRW'18},
      year = {2018},
      month = {Jul}
    }
    
  • Renaissance: Self-Stabilizing Distributed SDN Control Plane
    M. Canini, I. Salem, L. Schiff, E. M. Schiller, S. Schmid.
    In Proceedings of ICDCS’18, Jul 2018.
    abstract

    By introducing programmability, automated verification, and innovative debugging tools, Software-Defined Networks (SDNs) are poised to meet the increasingly stringent dependability requirements of today’s communication networks. However, the design of fault-tolerant SDNs remains an open challenge. This paper considers the design of dependable SDNs through the lenses of self-stabilization—a very strong notion of fault-tolerance. In particular, we develop algorithms for an in-band and distributed control plane for SDNs, called Renaissance, which tolerate a wide range of (concurrent) controller, link, and communication failures. Our self-stabilizing algorithms ensure that after the occurrence of an arbitrary combination of failures, (i) every non-faulty SDN controller can eventually reach any switch in the network within a bounded communication delay (in the presence of a bounded number of concurrent failures) and (ii) every switch is managed by at least one non-faulty controller. We evaluate Renaissance through a rigorous worst-case analysis as well as a prototype implementation (based on OVS and Floodlight), and we report on our experiments using Mininet.

    bibtex
    @INPROCEEDINGS{Canini.ICDCS18,
      author={Canini, Marco and Salem, Iosif and Schiff, Liron and Schiller, Elad M. and Schmid, Stefan},
      title = {{Renaissance: Self-Stabilizing Distributed SDN Control Plane}},
      booktitle = {Proceedings of ICDCS'18},
      year = {2018},
      month = {Jul}
    }
    
  • Methodology, Measurement and Analysis of Flow Table Update Characteristics in Hardware OpenFlow Switches
    M. Kuzniar, P. Peresini, D. Kostic, M. Canini.
    Computer Networks, 136, May 2018.
    abstract

    Software-Defined Networking (SDN) and OpenFlow are actively being standardized and deployed. These deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful and safe deployments.
    We propose a systematic methodology for SDN switch performance analysis and devise a series of experiments based on this methodology. The methodology relies on sending a stream of rule updates, while relying on both observing the control plane view as reported by the switch and probing the data plane state to determine switch characteristics by comparing these views. We measure, report and explain the performance characteristics of flow table updates in six hardware OpenFlow switches. Our results describing rule update rates can help SDN designers make their controllers efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.

    bibtex
    @ARTICLE{Kuzniar.COMNET18,
    author = {Ku\'zniar, Maciej and Pere\v{s}\'ini, Peter and Kosti\'c, Dejan and Canini, Marco},
    title = {{Methodology, Measurement and Analysis of Flow Table Update Characteristics in Hardware OpenFlow Switches}},
    journal = {Computer Networks},
    volume = {136},
    year = {2018},
    month = {May},
    doi = {10.1016/j.comnet.2018.02.014}
    }
    
  • Moving Bits with a Fleet of Shared Virtual Routers
    P. Kathiravelu, M. Chiesa, P. de B. Marcos, M. Canini, L. Veiga.
    In Proceedings of IFIP Networking’18, May 2018.
    abstract
    bibtex
    @INPROCEEDINGS{Kathiravelu.Networking18,
      author={Pradeeban Kathiravelu and Marco Chiesa and Pedro de B. Marcos and Marco Canini and Luis Veiga},
      title = {{Moving Bits with a Fleet of Shared Virtual Routers}},
      booktitle = {Proceedings of IFIP Networking'18},
      year = {2018},
      month = {May}
    }
    
  • SIXPACK: Securing Internet eXchange Points Against Curious onlooKers
    M. Chiesa, D. Demmler, M. Canini, M. Schapira, T. Schneider.
    In Proceedings of ACM CoNEXT’17, Dec 2017.
    abstract

    Internet eXchange Points (IXPs) play an ever-growing role in Internet inter-connection. To facilitate the exchange of routes amongst their members, IXPs provide Route Server (RS) services to dispatch the routes according to each member’s peering policies. Nowadays, to make use of RSes, these policies must be disclosed to the IXP. This poses fundamental questions regarding the privacy guarantees of route-computation on confidential business information. Indeed, as evidenced by interaction with IXP administrators and a survey of network operators, this state of affairs raises privacy concerns among network administrators and even deters some networks from subscribing to RS services. We design SIXPACK, an RS service that leverages Secure Multi-Party Computation (SMPC) to keep peering policies confidential, while extending the functionalities of today’s RSes. As SMPC is notoriously heavy in terms of communication and computation, our design and implementation of SIXPACK aims at moving computation outside of the SMPC without compromising the privacy guarantees.
    We assess the effectiveness and scalability of our system by evaluating a prototype implementation using traces of data from one of the largest IXPs in the world. Our evaluation results indicate that SIXPACK can scale to support privacy-preserving route-computation, even at IXPs with many hundreds of member networks.

    bibtex
    @INPROCEEDINGS{Chiesa.CoNEXT17,
      author={Marco Chiesa and Daniel Demmler and Marco Canini and Michael Schapira and Thomas Schneider}, 
      title = {{SIXPACK: Securing Internet eXchange Points Against Curious onlooKers}},
      booktitle = {Proceedings of ACM CoNEXT'17},
      year = {2017},
      month = {Dec}
    }
    
  • ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs
    G. Antichi, I. Castro, M. Chiesa, E. L. Fernandes, R. Lapeyrade, D. Kopp, J. H. Han, M. Bruyere, C. Dietzel, M. Gusat, A. W. Moore, P. Owezarski, S. Uhlig, M. Canini.
    IEEE Journal on Selected Areas in Communications, 35(11), Nov 2017.
    abstract

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level.
    In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment with (and improve) our implementation as well as adapt it to new use cases.

    bibtex
    @ARTICLE{Antichi.JSAC,
      author = {Gianni Antichi and Ignacio Castro and Marco Chiesa and Eder L. Fernandes and Remy Lapeyrade and Daniel Kopp and Jong Hun Han and Marc Bruyere and Christoph Dietzel and Mitchell Gusat and Andrew W. Moore and Philippe Owezarski and Steve Uhlig and Marco Canini}, 
      title = {{ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs}},
      journal = {IEEE Journal on Selected Areas in Communications},
      volume = {35}, 
      number = {11}, 
      year = {2017}, 
      month = {Nov},
      doi = {10.1109/JSAC.2017.2760398}
    }
    
  • Decentralized Consistent Updates in SDN (Technical Report)
    T. D. Nguyen, M. Chiesa, M. Canini.
    In Proceedings of SOSR’17, Apr 2017.
    abstract

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and black-holes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.

    bibtex
    @INPROCEEDINGS{Nguyen.SOSR17,
      author = {Thanh Dang Nguyen and Marco Chiesa and Marco Canini},
      title = {{Decentralized Consistent Updates in SDN}},
      booktitle = {Proceedings of SOSR'17},
      year = {2017},
      month = {Apr} 
    }
    
  • Correct by Construction Networks using Stepwise Refinement
    L. Ryzhyk, N. Bjorner, M. Canini, J.-B. Jeannin, C. Schlesinger, D. B. Terry, G. Varghese.
    In Proceedings of NSDI’17, Mar 2017.
    abstract

    Building software-defined network controllers is an exercise in software development and, as such, likely to introduce bugs. We present Cocoon, a framework for SDN development that facilitates both the design and verification of complex networks using stepwise refinement to move from a high-level specification to the final network implementation.
    A Cocoon user specifies intermediate design levels in a hierarchical design process that delineates the modularity in complicated network forwarding and makes verification extremely efficient. For example, an enterprise network, equipped with VLANs, ACLs, and Level 2 and Level 3 Routing, can be decomposed cleanly into abstractions for each mechanism, and the resulting stepwise verification is over 200x faster than verifying the final implementation. Cocoon further separates static network design from its dynamically changing configuration. The former is verified at design time, while the latter is checked at run time using statically defined invariants. We present six different SDN use cases including B4 and F10. Our performance evaluation demonstrates that Cocoon is not only faster than existing verification tools but can also find many bugs statically before the network design has been fully specified.

    bibtex
    @INPROCEEDINGS{Ryzhyk.NSDI17,
      author = {Leonid Ryzhyk and Nikolaj Bj{\o}rner and Marco Canini and Jean-Baptiste Jeannin and Cole Schlesinger and Douglas B. Terry and George Varghese},
      title = {{Correct by Construction Networks using Stepwise Refinement}},
      booktitle = {Proceedings of 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI'17)},
      year = {2017},
      month = {Mar} 
    }
    
  • Network Monitoring as a Streaming Analytics Problem
    A. Gupta, R. Birkner, M. Canini, N. Feamster, C. Mac-Stoker, W. Willinger.
    In Proceedings of HotNets’16, Nov 2016.
    abstract

    Programmable switches make it easier to perform flexible network monitoring queries at line rate, and scalable stream processors make it possible to fuse data streams to answer more sophisticated queries about the network in real-time. Unfortunately, processing such network monitoring queries at high traffic rates requires both the switches and the stream processors to filter the traffic iteratively and adaptively so as to extract only that traffic that is of interest to the query at hand. Others have explored network monitoring in the context of streaming; yet, previous work has not closed the loop in a way that allows network operators to perform streaming analytics for network monitoring applications at scale. To achieve this objective, Sonata allows operators to express a network monitoring query by considering each packet as a tuple and efficiently partitioning each query between the switches and the stream processor through iterative refinement. Sonata extracts only the traffic that pertains to each query, ensuring that the stream processor can scale to traffic rates of several terabits per second. We show with a simple example query involving DNS reflection attacks and traffic traces from one of the world’s largest IXPs that Sonata can capture 95% of all traffic pertaining to the query, while reducing the overall data rate by a factor of about 400 and the number of required counters by four orders of magnitude.

    bibtex
    @INPROCEEDINGS{Gupta.HotNets16,
      author = {Arpit Gupta and R\"udiger Birkner and Marco Canini and Nick Feamster and Chris Mac-Stoker and Walter Willinger},
      title = {{Network Monitoring as a Streaming Analytics Problem}},
      booktitle = {Proceedings of the 15th ACM Workshop on Hot Topics in Networks (HotNets'16)},
      year = {2016},
      month = {Nov} 
    }
    
  • Inter-domain Networking Innovation on Steroids: Empowering IXPs with SDN Capabilities
    M. Chiesa, C. Dietzel, G. Antichi, M. Bruyere, I. Castro, M. Gusat, T. King, A. W. Moore, T. D. Nguyen, P. Owezarski, S. Uhlig, M. Canini.
    IEEE Communications Magazine, 54(10), Oct 2016.
    abstract

    While innovation in inter-domain routing has remained stagnant for over a decade, Internet Exchange Points (IXPs) are consolidating their role as economically advantageous interconnection points for reducing path latencies and exchanging ever increasing amounts of traffic. As such, IXPs appear as a natural place to foster network innovation and assess the benefits of Software-Defined Networking (SDN), a recent technological trend that has already boosted innovation within data-center networks.
    In this paper, we give a comprehensive overview of use cases for SDN at IXPs, which leverage the superior vantage point of an IXP to introduce advanced features like load-balancing and DDoS mitigation. We discuss the benefits of SDN solutions by analyzing real-world data from one of the largest IXPs. We also leverage insights into IXP operations to not only shape benefits for members but also for operators.

    bibtex
    @ARTICLE{Chiesa.Communications,
      author = {Marco Chiesa and Christoph Dietzel and Gianni Antichi and Marc Bruyere and Ignacio Castro and Mitch Gusat and Thomas King and Andrew W. Moore and Thanh Dang Nguyen and Philippe Owezarski and Steve Uhlig and Marco Canini}, 
      title = {{Inter-domain Networking Innovation on Steroids: Empowering IXPs with SDN Capabilities}},
      journal = {IEEE Communications Magazine}, 
      volume = {54}, 
      number = {10}, 
      year = {2016}, 
      month = {Oct},
      doi = {10.1109/MCOM.2016.7588277}
    }
    
  • Towards Decentralized Fast Consistent Updates
    T. D. Nguyen, M. Chiesa, M. Canini.
    In Proceedings of ANRW’16, Jul 2016.
    abstract

    Updating data plane state to adapt to dynamic conditions is a fundamental network control operation. Software-Defined Networking (SDN) offers abstractions for updating network state while preserving consistency properties. However, realizing these abstractions in a purely centralized fashion is inefficient due to the inherent delays between switches and the SDN controller; we therefore argue for delegating the responsibility of coordinated updates to the switches. To make our case, we propose ez-Segway, a mechanism that enables decentralized network updates while preventing forwarding anomalies and avoiding link congestion. In our architecture, the controller is only responsible for computing the intended network configuration. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently schedule and implement the update. This separation of concerns has the key benefit of improving update performance, as the communication and computation bottlenecks at the controller are removed. Our extensive simulations show update speedups of up to a factor of 2. (A toy model of decentralized, message-passing update installation follows this entry.)

    bibtex
    @INPROCEEDINGS{Nguyen.ANRW16,
      author = {Thanh Dang Nguyen and Marco Chiesa and Marco Canini},
      title = {{Towards Decentralized Fast Consistent Updates}},
      booktitle = {Proceedings of the Applied Networking Research Workshop 2016 (ANRW'16)},
      year = {2016},
      month = {Jul} 
    }
    
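    To complement the abstract above, the following toy Python model illustrates the general idea of delegating update installation to the switches via message passing; it is not the ez-Segway protocol, and the class and message names (Switch, on_downstream_ready) are invented for the sketch. A new path is installed hop by hop from the destination toward the source, so traffic is never forwarded to a switch that lacks the corresponding rule.

      # Toy model (not the ez-Segway protocol): switches install a new path for a flow
      # hop by hop from the destination toward the source. Each switch acts only on a
      # "downstream ready" message from its successor, so no packet is ever forwarded
      # to a switch that has no rule for the flow.

      class Switch:
          def __init__(self, name):
              self.name = name
              self.rules = {}        # flow -> next hop
              self.upstream = None   # predecessor on the new path, if any

          def on_downstream_ready(self, flow, next_hop):
              # Install the local rule, then tell the upstream switch it may proceed.
              self.rules[flow] = next_hop
              print(f"{self.name}: installed {flow} -> {next_hop}")
              if self.upstream is not None:
                  self.upstream.on_downstream_ready(flow, self.name)

      def install_path(flow, path):
          # path is ordered source -> destination; installation proceeds in reverse.
          switches = [Switch(n) for n in path]
          for up, down in zip(switches, switches[1:]):
              down.upstream = up
          # The last switch forwards to the egress port and starts the cascade.
          switches[-1].on_downstream_ready(flow, "egress")
          return switches

      install_path("flow-A", ["s1", "s2", "s3"])
      # prints: s3 installs flow-A -> egress, then s2 -> s3, then s1 -> s2

    ez-Segway itself handles richer cases (loop freedom, congestion-aware segment scheduling) with the controller computing only the target configuration; the sketch conveys just that controller-computes/switches-coordinate split.
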
  • Network Hardware-Accelerated Consensus
    H. T. Dang, P. Bressana, H. Wang, K. S. Lee, H. Weatherspoon, M. Canini, F. Pedone, R. Soule.
    arXiv preprint arXiv:1605.05619
    abstract

    Consensus protocols are the foundation for building many fault-tolerant distributed systems and services. This paper posits that there are significant performance benefits to be gained by offering consensus as a network service (CAANS). CAANS leverages recent advances in commodity networking hardware design and programmability to implement consensus protocol logic in network devices. CAANS provides a complete Paxos protocol, is a drop-in replacement for software-based implementations of Paxos, makes no restrictions on network topologies, and is implemented in a higher-level, data-plane programming language, allowing for portability across a range of target devices. At the same time, CAANS significantly increases throughput and reduces latency for consensus operations. Consensus logic executing in hardware can transmit consensus messages at line speed, with latency only slightly higher than simply forwarding packets.

    bibtex
    @ARTICLE{2016arXiv:1605.05619,
      author = {Dang, Huynh Tu and Bressana, Pietro and Wang, Han and Lee, Ki Suh and Weatherspoon, Hakim and Canini, Marco and Pedone, Fernando and Soul{\'{e}}, Robert},
      title = {{Network Hardware-Accelerated Consensus}},
      journal = {arXiv preprint arXiv:1605.05619},
      year = {2016},
      month = {May}
    }
    
  • Ground Control to Major Faults: Towards a Fault Tolerant and Adaptive SDN Control Network
    L. Schiff, S. Schmid, M. Canini.
    In Proceedings of DISN’16, Jun 2016.
    abstract

    To provide high availability and fault-tolerance, SDN control planes should be distributed. However, distributed control planes are challenging to design and bootstrap, especially if this is to be done in-band, without a dedicated control network, and without relying on legacy protocols. This paper promotes a distributed systems approach to build and maintain connectivity between a distributed control plane and the data plane. In particular, we make the case for a self-stabilizing distributed control plane, where from any initial configuration, controllers self-organize and quickly establish a communication channel among themselves. Given the resulting managed control plane, arbitrary network services can be implemented on top.
    This paper presents a model for the design of such self-stabilizing control planes, and identifies fundamental challenges. Subsequently, we present techniques which can be used to solve these challenges, and implement a plug & play distributed control plane which supports automatic topology discovery and management, as well as flexible controller membership: controllers can be added and removed dynamically. Interestingly, we argue that our approach can readily be implemented in today’s OpenFlow protocol. Moreover, our approach comes with interesting security features.

    bibtex
    @INPROCEEDINGS{Schiff.DISN16,
      author = {Liron Schiff and Stefan Schmid and Marco Canini},
      title = {{Ground Control to Major Faults: Towards a Fault Tolerant and Adaptive SDN Control Network}},
      booktitle = {Proceedings of the 2nd Workshop on Dependability Issues on SDN and NFV (DISN'16)},
      year = {2016},
      month = {Jun} 
    }
    
  • Scheduling Multi-flow Network Updates in Software-Defined NFV Systems
    Y. Liu, Y. Li, M. Canini, Y. Wang, J. Yuan.
    In Proceedings of SWFAN’16, Apr 2016.
    abstract

    Combining Network Functions Virtualization (NFV) with Software-Defined Networking (SDN) is an emerging solution to provide fine-grained control over scalable and elastic packet processing functions. Due to changes in network policy, traffic characteristics, or physical topology in Software-Defined NFV (SDNFV) systems, the controller needs to carry out network updates frequently, i.e., change the data plane configuration from one state to another. In order to adapt to a newly desired network state quickly, the network update process is expected to be completed in the shortest time possible. However, the update scheduling schemes need to address resource constraints including flow table sizes, CPU capacities of Virtualized Network Functions (VNFs) and link bandwidths, which are closely coupled. Thus, the problem is difficult to solve, especially when multiple flows are involved in the network update. In this work we investigate the multi-flow update problem in SDNFV systems, and formulate it as a mixed integer programming problem, which is NP-complete. We propose an approximation algorithm via linear relaxation. By extensive simulations, we demonstrate that our algorithm approaches the optimal solution, while requiring 10x-100x less computing time.

    bibtex
    @INPROCEEDINGS{Liu.SWFAN16,
      author = {Yujie Liu and Yong Li and Marco Canini and Yue Wang and Jian Yuan},
      title = {{Scheduling Multi-flow Network Updates in Software-Defined NFV Systems}},
      booktitle = {Proceedings of the 1st International INFOCOM Workshop on Software-Driven Flexible and Agile Networking (SWFAN'16)},
      year = {2016},
      month = {Apr} 
    }
    
  • An Industrial-Scale Software Defined Internet Exchange Point
    A. Gupta, R. MacDavid, R. Birkner, M. Canini, N. Feamster, J. Rexford, L. Vanbever.
    In Proceedings of NSDI’16, Mar 2016.
    abstract

    Software-Defined Internet Exchange Points (SDXes) promise to significantly increase the flexibility and function of interdomain traffic delivery on the Internet. Unfortunately, current SDX designs cannot yet achieve the scale required for large Internet exchange points (IXPs), which can host hundreds of participants exchanging traffic for hundreds of thousands of prefixes. Existing platforms are indeed too slow and inefficient to operate at this scale, typically requiring minutes to compile policies and millions of forwarding rules in the data plane. We motivate, design, and implement iSDX, the first SDX architecture that can operate at the scale of the largest IXPs. We show that iSDX reduces both policy compilation time and forwarding table size by two orders of magnitude compared to current state-of-the-art SDX controllers. Our evaluation against a trace from one of the largest IXPs in the world found that iSDX can compile a realistic set of policies for 500 IXP participants in less than three seconds. Our public release of iSDX, complete with tutorials and documentation, is already spurring early adoption in operational networks.

    bibtex
    @INPROCEEDINGS{Gupta.NSDI16,
      author = {Arpit Gupta and Robert MacDavid and R\"udiger Birkner and Marco Canini and Nick Feamster and Jennifer Rexford and Laurent Vanbever},
      title = {{An Industrial-Scale Software Defined Internet Exchange Point}},
      booktitle = {Proceedings of 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI'16)},
      year = {2016},
      month = {Mar} 
    }
    
  • Paxos Made Switch-y (Technical Report)
    H. T. Dang, M. Canini, F. Pedone, R. Soule.
    SIGCOMM Comput. Commun. Rev., 46(2), Apr 2016.
    abstract

    The Paxos protocol is the foundation for building many fault-tolerant distributed systems and services. This paper posits that there are significant performance benefits to be gained by implementing Paxos logic in network devices. Until recently, the notion of a switch-based implementation of Paxos would have been a daydream. However, new flexible hardware is on the horizon that will provide the customizable packet processing pipelines needed to implement Paxos. While this new hardware is still not readily available, several vendors and consortia have made the programming languages that target these devices public. This paper describes an implementation of Paxos in one of those languages, P4. Implementing Paxos provides a critical use case for P4, and will help drive the requirements for data plane languages in general. In the long term, we imagine that consensus could someday be offered as a network service, just as point-to-point communication is provided today. (An illustrative sketch of the Paxos acceptor logic follows this entry.)

    bibtex
    @ARTICLE{Dang.CCR16,
      author = {Dang, Huynh Tu and Canini, Marco and Pedone, Fernando and Soul{\'{e}}, Robert},
      title = {{Paxos Made Switch-y}},
      journal = {SIGCOMM Comput. Commun. Rev.},
      year = {2016},
      month = {Apr},
      volume = {46},
      number = {2},
      doi = {10.1145/2935634.2935638}
    }
    
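    To illustrate the consensus logic that the paper maps onto a programmable data plane, here is a minimal Paxos acceptor state machine written in Python rather than P4. The handler and field names (on_phase1a, rnd, value) are conventional Paxos terminology, not identifiers taken from the paper’s artifact.

      # Minimal sketch of Paxos acceptor logic, in Python rather than P4, purely to
      # illustrate the per-message state machine that a switch pipeline would implement
      # with match-action stages and register reads/writes.

      class Acceptor:
          def __init__(self):
              self.promised_rnd = -1   # highest round promised (phase 1)
              self.accepted_rnd = -1   # round of the last accepted value (phase 2)
              self.accepted_val = None

          def on_phase1a(self, rnd):
              # Prepare request: promise not to accept any lower round.
              if rnd > self.promised_rnd:
                  self.promised_rnd = rnd
                  return ("1B", rnd, self.accepted_rnd, self.accepted_val)
              return None  # ignore stale prepare

          def on_phase2a(self, rnd, value):
              # Accept request: accept unless it violates an earlier promise.
              if rnd >= self.promised_rnd:
                  self.promised_rnd = rnd
                  self.accepted_rnd = rnd
                  self.accepted_val = value
                  return ("2B", rnd, value)
              return None  # ignore stale accept

      a = Acceptor()
      print(a.on_phase1a(1))           # ('1B', 1, -1, None)
      print(a.on_phase2a(1, "cmd42"))  # ('2B', 1, 'cmd42')
      print(a.on_phase2a(0, "stale"))  # None
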
  • NetPaxos: Consensus at Network Speed
    H. T. Dang, D. Sciascia, M. Canini, F. Pedone, R. Soule.
    In Proceedings of SOSR’15, Jun 2015.
    abstract

    This paper explores the possibility of implementing the widely deployed Paxos consensus protocol in network devices. We present two different approaches: (i) a detailed design description for implementing the full Paxos logic in SDN switches, which identifies a sufficient set of required OpenFlow extensions; and (ii) an alternative, optimistic protocol which can be implemented without changes to the OpenFlow API, but relies on assumptions about how the network orders messages. Although neither of these protocols can be fully implemented without changes to the underlying switch firmware, we argue that such changes are feasible in existing hardware. Moreover, we present an evaluation that suggests that moving Paxos logic into the network would yield significant performance benefits for distributed applications.

    bibtex
    @INPROCEEDINGS{Dang.SOSR15,
      author = {Dang, Huynh Tu and Sciascia, Daniele and Canini, Marco and Pedone, Fernando and Soul\'e, Robert},
      title = {{NetPaxos: Consensus at Network Speed}},
      booktitle = {Proceedings of SOSR'15},
      year = {2015},
      month = {Jun}
    }
    
  • A Distributed and Robust SDN Control Plane for Transactional Network Updates (Technical Report)
    M. Canini, P. Kuznetsov, D. Levin, S. Schmid.
    In Proceedings of INFOCOM’15, Apr 2015.
    abstract

    Software-defined networking (SDN) is a novel paradigm that outsources the control of programmable network switches to a set of software controllers. The most fundamental task of these controllers is the correct implementation of the network policy, i.e., the intended network behavior. In essence, such a policy specifies the rules by which packets must be forwarded across the network. This paper studies a distributed SDN control plane that enables concurrent and robust policy implementation. We introduce a formal model describing the interaction between the data plane and a distributed control plane (consisting of a collection of fault-prone controllers). Then we formulate the problem of consistent composition of concurrent network policy updates (termed the CPC Problem). To anticipate scenarios in which some conflicting policy updates must be rejected, we enable the composition via a natural transactional interface with all-or-nothing semantics.
    We show that the ability of an f-resilient distributed control plane to process concurrent policy updates depends on the tag complexity, i.e., the number of policy labels (a.k.a. tags) available to the controllers, and describe a CPC protocol with optimal tag complexity f+2.

    bibtex
    @INPROCEEDINGS{Canini.INFOCOM15,
      author = {Canini, Marco and Kuznetsov, Petr and Levin, Dan and Schmid, Stefan},
      title = {{A Distributed and Robust SDN Control Plane for Transactional Network Updates}},
      booktitle = {Proceedings of INFOCOM'15},
      year = {2015},
      month = {Apr} 
    }
    
  • Software-Defined Networks: Incremental Deployment with Panopticon
    M. Canini, A. Feldmann, D. Levin, F. Schaffert, S. Schmid.
    Computer, 47(11), Nov 2014.
    abstract

    Practically speaking, most enterprises migrating to a software-defined network (SDN) must do so incrementally. Panopticon offers an approach for designing and operating an interim hybrid network that combines both traditional and SDN switches by exposing a logical SDN abstraction.

    bibtex
    @ARTICLE{Canini.PanopticonComputer, 
      author = {Canini, Marco and Feldmann, Anja and Levin, Dan and Schaffert, Fabian and Schmid, Stefan},
      journal = {Computer},
      title = {{Software-Defined Networks: Incremental Deployment with Panopticon}},
      year = {2014},
      month = {Nov},
      volume = {47},
      number = {11},
      doi = {10.1109/MC.2014.330}, 
    }
    
  • ESPRES: Transparent SDN Update Scheduling
    P. Peresini, M. Kuzniar, M. Canini, D. Kostic.
    In Proceedings of HotSDN’14, Aug 2014.
    abstract

    Network forwarding state undergoes frequent changes, in batches of forwarding rule modifications at multiple switches. Installing or modifying a large number of rules is time consuming given the performance limits of current programmable switches, limits that stem from economic as well as technological factors.
    In this paper, we observe that a large network-state update typically consists of a set of sub-updates that are independent of one another w.r.t. the traffic they affect, and hence sub-updates can be installed in parallel, in any order. Leveraging this observation, we treat update installation as a scheduling problem and design ESPRES, a runtime mechanism that rate-limits and reorders updates to fully utilize the processing capacities of switches without overloading them. Our early results show that, compared to using no scheduler, our schemes yield 2.17-3.88 times quicker sub-update completion times for the 20th percentile of sub-updates and 1.27-1.57 times quicker for the 50th percentile. (A toy sketch of this partition-and-schedule idea follows this entry.)

    bibtex
    @INPROCEEDINGS{Peresini.HOTSDN14,
      author = {Pere\v{s}\'ini, Peter and Ku\'zniar, Maciej and Canini, Marco and Kosti\'c, Dejan},
      title = {{ESPRES: Transparent SDN Update Scheduling}},
      booktitle = {Proceedings of HotSDN'14},
      year = {2014},
      month = {Aug} 
    }
    
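    The sub-update scheduling idea described above can be illustrated with a short Python sketch. This is not ESPRES code: grouping rule updates into independent sub-updates via a union-find over the switches they touch, and the per-switch, per-round budget, are simplified stand-ins for the rate-limiting and reordering the abstract describes.

      # Toy sketch (not ESPRES): split a batch of rule updates into sub-updates that
      # touch disjoint switch sets, so they can proceed in parallel, then feed each
      # switch at most `rate` rule installations per scheduling round.
      from collections import defaultdict

      def independent_subupdates(updates):
          # Group updates whose switch sets overlap (union-find over switches).
          parent = {}
          def find(x):
              parent.setdefault(x, x)
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x
          def union(a, b):
              parent[find(a)] = find(b)

          for upd in updates:
              switches = list(upd["switches"])
              for s in switches[1:]:
                  union(switches[0], s)

          groups = defaultdict(list)
          for upd in updates:
              groups[find(next(iter(upd["switches"])))].append(upd)
          return list(groups.values())

      def schedule_round(subupdates, rate=2):
          # Pick at most `rate` pending installations per switch for this round.
          budget = defaultdict(lambda: rate)
          chosen = []
          for group in subupdates:
              for upd in group:
                  if all(budget[s] > 0 for s in upd["switches"]):
                      for s in upd["switches"]:
                          budget[s] -= 1
                      chosen.append(upd["id"])
          return chosen

      updates = [
          {"id": "u1", "switches": {"s1", "s2"}},
          {"id": "u2", "switches": {"s3"}},
          {"id": "u3", "switches": {"s2", "s4"}},
      ]
      subs = independent_subupdates(updates)  # two groups: {u1, u3} and {u2}
      print(schedule_round(subs, rate=1))     # ['u1', 'u2']; u3 waits for s2

    In a real deployment the per-switch rate would track measured rule-installation throughput, and switches that finish their share early would free budget for the remaining sub-updates.
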
  • Panopticon: Reaping the Benefits of Incremental SDN Deployment in Enterprise Networks (Technical Report)
    D. Levin, M. Canini, S. Schmid, F. Schaffert, A. Feldmann.
    In Proceedings of USENIX Annual Technical Conference (ATC’14), Jun 2014.
    abstract

    The operational challenges posed in enterprise networks present an appealing opportunity for automated orchestration by way of Software-Defined Networking (SDN). The primary challenge to SDN adoption in the enterprise is the deployment problem: How to deploy and operate a network consisting of both legacy and SDN switches, while benefiting from the simplified management and enhanced flexibility of SDN.
    This paper presents the design and implementation of Panopticon, an architecture for operating networks that combine legacy and SDN switches. Panopticon exposes an abstraction of a logical SDN in a partially upgraded legacy network, where SDN benefits can extend over the entire network. We demonstrate the feasibility and evaluate the efficiency of our approach through both testbed experiments with hardware switches and through simulation on real enterprise campus network topologies entailing over 1500 devices. Our results suggest that when as few as 10% of distribution switches support SDN, most of an enterprise network can be operated as a single SDN while meeting key resource constraints.

    bibtex
    @INPROCEEDINGS{Levin.ATC14,
    author = {Levin, Dan and Canini, Marco and Schmid, Stefan and Schaffert, Fabian and Feldmann, Anja},
    title = {{Panopticon: Reaping the Benefits of Incremental SDN Deployment in Enterprise Networks}},
    booktitle = {Proceedings of the 2014 USENIX Annual Technical Conference (ATC'14)},
    year = {2014},
    month = {Jun} 
    }
    
  • ESPRES: Easy Scheduling and Prioritization for SDN
    P. Peresini, M. Kuzniar, M. Canini, D. Kostic.
    In Proceedings of Open Networking Summit (ONS’14), Mar 2014.
    abstract

    Due to traffic engineering, topology changes, and VM migrations, today’s networks undergo a variety of large updates that concurrently affect many switches. Big updates are time consuming because the available switches have low rule installation rates. In this paper, we observe that a large network update may consist of a set of sub-updates that are independent and can be installed in parallel in any order. We treat update installation as a scheduling problem and design ESPRES, a runtime mechanism that rate-limits and reorders updates to fully utilize switches without overloading them. Our early results show that compared to using no scheduler, our simple scheduler yields 4 times quicker sub-update completion times for the 20th percentile of sub-updates and 40% quicker for the 50th percentile.

    bibtex
    @INPROCEEDINGS{Peresini.ONS14,
      author = {Pere\v{s}\'ini, Peter and Ku\'zniar, Maciej and Canini, Marco and Kosti\'c, Dejan},
      title = {{ESPRES: Easy Scheduling and Prioritization for SDN}},
      booktitle = {Proceedings of Open Networking Summit (ONS'14)},
      year = {2014},
      month = {Mar} 
    }
    
  • STN: A Robust and Distributed SDN Control Plane
    M. Canini, D. De Cicco, P. Kuznetsov, D. Levin, S. Schmid, S. Vissicchio.
    In Proceedings of Open Networking Summit (ONS’14), Mar 2014.
    abstract

    This paper presents STN, a distributed SDN control plane based on Software Transactional Memory principles that supports concurrent policy updates while ensuring consistent policy composition and high availability. In particular, STN gives network programmers an abstraction that guarantees (1) linearizability: any execution can be transformed into an equivalent sequential history, and (2) wait-freedom: the control plane tolerates up to f node failures, for some parameter f. We describe an STN design that uses an asymptotically minimal number of tags to ensure the above properties. Also, we report our progress in building an STN prototype based on the Pyretic modular programming language.

    bibtex
    @INPROCEEDINGS{Canini.ONS14,
      author = {Canini, Marco and De Cicco, Daniele and Kuznetsov, Petr and Levin, Dan and Schmid, Stefan and Vissicchio, Stefano},
      title = {{STN: A Robust and Distributed SDN Control Plane}},
      booktitle = {Proceedings of Open Networking Summit (ONS'14)},
      year = {2014},
      month = {Mar} 
    }
    
  • Brief Announcement: Towards Distributed and Reliable Software Defined Networking
    M. Canini, P. Kuznetsov, D. Levin, S. Schmid.
    In Proceedings of International Symposium on Distributed Computing (DISC’13), Oct 2013.
    abstract

    Software-defined networking (SDN) is a novel paradigm that out-sources the control of packet-forwarding switches to a set of software controllers. The most fundamental task of these controllers is the correct implementation of the network policy, i.e., the intended network behavior. In essence, such a policy specifies the rules by which packets must be forwarded across the network. This paper initiates the study of the SDN control plane as a distributed system.

    bibtex
    @INPROCEEDINGS{Canini.DISC13,
      author = {Canini, Marco and Kuznetsov, Petr and Levin, Dan and Schmid, Stefan},
      title = {{Brief Announcement: Towards Distributed and Reliable Software Defined Networking}},
      booktitle = {Proceedings of International Symposium on Distributed Computing (DISC'13)},
      year = {2013},
      month = {Oct} 
    }
    
  • The Case for Reliable Software Transactional Networking
    M. Canini, P. Kuznetsov, D. Levin, S. Schmid.
    arXiv preprint arXiv:1305.7429
    abstract

    Software-defined networking (SDN) is a novel paradigm that out-sources the control of packet-forwarding switches to a set of software controllers. The most fundamental task of these controllers is the correct implementation of the network policy, i.e., the intended network behavior. In essence, such a policy specifies the rules by which packets must be forwarded across the network. This paper initiates the study of the SDN control plane as a distributed system.
    We introduce a formal model describing the interaction between the data plane and a distributed control plane (consisting of a collection of fault-prone controllers). Then we formulate the problem of consistent composition of concurrent network policy updates. The composition is enabled via a transactional interface with all-or-nothing semantics. The system behaves as though committed updates are installed atomically and every data packet traverses the network instantaneously, respecting a sequential composition of previously installed committed updates. Updates that cannot be composed are aborted and do not affect the data plane.
    We show that in the asynchronous environment, it is impossible to achieve consistent policy composition that tolerates a single controller crash. We then discuss stronger variants of the model that allow for solving the problem and study algorithmic complexities of such solutions.

    bibtex
    @ARTICLE{2013arXiv1305.7429C,
      author = {Canini, Marco and Kuznetsov, Petr and Levin, Dan and Schmid, Stefan},
      title = {{The Case for Reliable Software Transactional Networking}},
      journal = {arXiv preprint arXiv:1305.7429},
      year = {2013},
      month = {May}
    }
    
  • Software Transactional Networking: Concurrent and Consistent Policy Composition
    M. Canini, P. Kuznetsov, D. Levin, S. Schmid.
    In Proceedings of HotSDN’13, Aug 2013.
    abstract

    It seems natural to imagine that SDN policy specification and control is distributed, and this paper focuses on the resulting concurrency issues. Indeed, conflicts among concurrent policy updates may result in serious inconsistencies on the data plane, even when each update is installed with per-packet consistent update semantics. This paper introduces the problem of consistent composition of concurrent policy updates. Intuitively, consistent concurrent policy composition must appear as though there is no concurrency, either between policy updates or between a policy update and in-flight packets on the data plane.
    We propose an elegant policy composition abstraction based on a transactional interface with all-or-nothing semantics: a policy update is either committed, in which case the policy is guaranteed to compose consistently over the entire network and the update is installed in its entirety, or aborted, in which case no packet is affected by it. Consequently, the control application logic is relieved from the cumbersome and potentially error-prone synchronization and locking tasks, and control applications are kept lightweight. In this paper, we also sketch a simple implementation of the transactional synchronization: our approach is based on fine-grained locking on network components and avoids complex state machine replication. (A toy sketch of such a lock-based, all-or-nothing installation follows this entry.)

    bibtex
    @INPROCEEDINGS{Canini.HOTSDN13,
    author = {Canini, Marco and Kuznetsov, Petr and Levin, Dan and Schmid, Stefan},
    title = {{Software Transactional Networking: Concurrent and Consistent Policy Composition}},
    booktitle = {Proceedings of HotSDN'13},
    year = {2013},
    month = {Aug}
    }
    
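    A tiny Python sketch of the all-or-nothing installation pattern is given below. It is only a toy: the switch objects, the rule strings and the lock-ordering discipline are invented for illustration, and the paper’s actual synchronization (and its handling of in-flight packets) is considerably more involved.

      # Toy sketch of all-or-nothing policy installation via fine-grained locking.
      # Only the commit/abort pattern is the point; nothing here is the paper's code.
      import threading

      class SwitchLock:
          def __init__(self, name):
              self.name = name
              self.lock = threading.Lock()
              self.rules = []

      def try_update(policy_rules, switches):
          """policy_rules: dict switch_name -> list of rules. Returns True on commit."""
          # Acquire locks in a global (sorted) order to avoid deadlock between updates.
          affected = sorted(policy_rules)
          acquired = []
          try:
              for name in affected:
                  if not switches[name].lock.acquire(timeout=1.0):
                      return False           # abort: nothing was installed yet
                  acquired.append(name)
              for name in affected:           # commit: install the update in its entirety
                  switches[name].rules.extend(policy_rules[name])
              return True
          finally:
              for name in acquired:
                  switches[name].lock.release()

      switches = {n: SwitchLock(n) for n in ("s1", "s2", "s3")}
      ok = try_update({"s1": ["fwd A->2"], "s2": ["fwd A->3"]}, switches)
      print(ok, switches["s1"].rules)  # True ['fwd A->2']
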
  • FatTire: Declarative Fault Tolerance for Software-Defined Networks
    M. Reitblatt, M. Canini, A. Guha, N. Foster.
    In Proceedings of HotSDN’13, Aug 2013.
    abstract

    This paper presents FatTire, a new language for writing fault-tolerant network programs. The central feature of this language is a new programming construct based on regular expressions that allows developers to specify the set of paths that packets may take through the network as well as the degree of fault tolerance required. This construct is implemented by a compiler that targets the in-network fast-failover mechanisms provided in recent versions of the OpenFlow standard, and facilitates simple reasoning about network programs even in the presence of failures. We describe the design of FatTire, present algorithms for compiling FatTire programs to OpenFlow switch configurations, describe our prototype FatTire implementation, and demonstrate its use on simple examples.

    bibtex
    @INPROCEEDINGS{Reitblatt.HOTSDN13,
    author = {Reitblatt, Mark and Canini, Marco and Guha, Arjun and Foster, Nate},
    title = {{FatTire: Declarative Fault Tolerance for Software-Defined Networks}},
    booktitle = {Proceedings of HotSDN'13},
    year = {2013},
    month = {Aug}
    }
    
  • OF.CPP: Consistent Packet Processing for OpenFlow
    P. Peresini, M. Kuzniar, N. Vasic, M. Canini, D. Kostic.
    In Proceedings of HotSDN’13, Aug 2013.
    abstract

    This paper demonstrates a new class of bugs that is likely to occur in enterprise OpenFlow deployments. In particular, step-by-step, reactive establishment of paths can cause network-wide inconsistencies or performance- and space-related inefficiencies. The cause for this behavior is inconsistent packet processing: as the packets travel through the network they do not encounter consistent state at the OpenFlow controller. To mitigate this problem, we propose to use transactional semantics at the controller to achieve consistent packet processing. We detail the challenges in achieving this goal (including the inability to directly apply database techniques), as well as a potentially promising approach. In particular, we envision the use of multi-commit transactions that could provide the necessary serialization and isolation properties without excessively reducing network performance.

    bibtex
    @INPROCEEDINGS{Peresini.HOTSDN13,
    author = {Pere\v{s}\'ini, Peter and Ku\'zniar, Maciej and Vasi\'c, Nedeljko and Canini, Marco and Kosti\'c, Dejan},
    title = {{OF.CPP: Consistent Packet Processing for OpenFlow}},
    booktitle = {Proceedings of HotSDN'13},
    year = {2013},
    month = {Aug}
    }
    
  • Toward Transitional SDN Deployment in Enterprise Networks
    D. Levin, M. Canini, S. Schmid, A. Feldmann.
    Open Networking Summit, Research Track (ONS’13), Apr 2013.
    abstract

    The operational challenges posed in enterprise networks present an appealing opportunity for the software-defined orchestration of the network (SDN). However, the primary challenge to realizing solutions built on SDN in the enterprise is the deployment problem. Unlike in the datacenter, network upgrades in the enterprise start with the existing deployment and are particularly budget- and resource-constrained.
    In this work, we investigate the prospect for partial Software Defined Network deployment. We present Panopticon, an architecture and methodology for planning and operating networks that combine legacy and upgraded SDN switches. Panopticon exposes an abstraction of a fully-deployed SDN in a partially upgraded legacy network, where the SDN benefits extend potentially over the entire network.
    We evaluate the feasibility of our approach through simulation on real enterprise campus network topologies entailing over 1500 switches and routers. Our results suggest that with only a handful of upgraded switches, it becomes possible to operate most of an enterprise network as a single SDN while meeting key resource constraints.

    bibtex
    @INPROCEEDINGS{Levin.ONS13,
    author = {Levin, Dan and Canini, Marco and Schmid, Stefan and Feldmann, Anja},
    title = {{Toward Transitional SDN Deployment in Enterprise Networks}},
    booktitle = {Open Networking Summit (ONS'13)},
    year = {2013},
    month = {Apr}
    }
    

Security in Clouds and Big Data Systems

  • Mitigating Network Side Channel Leakage for Stream Processing Systems in Trusted Execution Environments
    M. Bilal, H. Alsibyani, M. Canini.
    In Proceedings of DEBS’18, Jun 2018.
    abstract

    A crucial concern regarding cloud computing is the confidentiality of sensitive data being processed in the cloud. Trusted Execution Environments (TEEs), such as Intel Software Guard eXtensions (SGX), allow applications to run securely on an untrusted platform. However, using TEEs alone for stream processing is not enough to ensure privacy as network communication patterns may leak information about the data.
    This paper introduces two techniques – anycast and multicast – for mitigating leakage at inter-stage communications in streaming applications according to a user-selected mitigation level. These techniques aim to achieve network data obliviousness, i.e., communication patterns should not depend on the data. We implement these techniques in an SGX-based stream processing system. We evaluate the latency and throughput overheads, and the data obliviousness using three benchmark applications. The results show that anycast scales better with input load and mitigation level, and provides better data obliviousness than multicast.

    bibtex
    @INPROCEEDINGS{Bilal.DEBS18,
      author = {Bilal, Muhammad and Alsibyani, Hassan and Canini, Marco},
      title = {{Mitigating Network Side Channel Leakage for Stream Processing Systems in Trusted Execution Environments}},
      booktitle = {Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems 2018 (DEBS'18)},
      year = {2018},
      month = {Jun} 
    }
    

Education in Reproducible Research

  • Learning Reproducibility with a Yearly Networking Contest
    M. Canini, J. Crowcroft.
    In Proceedings of Reproducibility’17, Aug 2017.
    abstract

    Better reproducibility of networking research results is currently a major goal that the academic community is striving towards. This position paper makes the case that improving the extent and pervasiveness of reproducible research can be greatly fostered by organizing a yearly international contest.

    bibtex
    @INPROCEEDINGS{Canini.Reproducibility17,
      author = {Canini, Marco and Crowcroft, Jon},
      title = {{Learning Reproducibility with a Yearly Networking Contest}},
      booktitle = {Proceedings of the Reproducibility Workshop (Reproducibility'17)},
      year = {2017},
      month = {Aug} 
    }
    

Reliable Networked Systems

  • Systematically Testing OpenFlow Controller Applications
    P. Peresini, M. Kuzniar, M. Canini, D. Venzano, D. Kostic, J. Rexford.
    Computer Networks, 92(2), Dec 2015.
    abstract

    The emergence of OpenFlow-capable switches enables exciting new network functionality, at the risk of programming errors that make communication less reliable. The centralized programming model, where a single controller program manages the network, seems to reduce the likelihood of bugs. However, the system is inherently distributed and asynchronous, with events happening at different switches and end hosts, and inevitable delays affecting communication with the controller. In this paper, we present efficient, systematic techniques for testing unmodified controller programs. Our NICE tool applies model checking to explore the state space of the entire system—the controller, the switches, and the hosts. Scalability is the main challenge, given the diversity of data packets, the large system state, and the many possible event orderings. To address this, we propose a novel way to augment model checking with symbolic execution of event handlers (to identify representative packets that exercise code paths on the controller). We also present a simplified OpenFlow switch model (to reduce the state space), and effective strategies for generating event interleavings likely to uncover bugs. Our prototype tests Python applications on the popular NOX platform. In testing three real applications—a MAC-learning switch, in-network server load balancing, and energy-efficient traffic engineering—we uncover 13 bugs.

    bibtex
    @ARTICLE{Peresini.15.NICE,
    author = {Pere\v{s}\'ini, Peter and Ku\'zniar, Maciej and Canini, Marco and Venzano, Daniele and Kosti\'c, Dejan and Rexford, Jennifer},
    title = {{Systematically Testing OpenFlow Controller Applications}},
    journal = {Computer Networks},
    volume = {92},
    number = {2},
    year = {2015},
    month = {Dec},
    doi = {10.1016/j.comnet.2015.03.019}
    }
    
  • A SOFT Way for OpenFlow Switch Interoperability Testing
    M. Kuzniar, P. Peresini, M. Canini, D. Venzano, D. Kostic.
    In Proceedings of ACM CoNEXT’12, Dec 2012.
    abstract

    The increasing adoption of Software Defined Networking, and OpenFlow in particular, brings great hope for increasing extensibility and lowering costs of deploying new network functionality. A key component in these networks is the OpenFlow agent, a piece of software that a switch runs to enable remote programmatic access to its forwarding tables. While testing high-level network functionality, the correct behavior and interoperability of any OpenFlow agent are taken for granted. However, existing tools for testing agents are neither exhaustive nor systematic, and only check that the agent’s basic functionality works. In addition, the rapidly changing and sometimes vague OpenFlow specifications can result in multiple implementations that behave differently.
    This paper presents SOFT, an approach for testing the interoperability of OpenFlow switches. Our key insight is to automatically identify the testing inputs that cause different OpenFlow agent implementations to behave inconsistently. To this end, we first symbolically execute each agent under test in isolation to derive which set of inputs causes which behavior. We then crosscheck all distinct behaviors across different agent implementations and evaluate whether a common input subset causes inconsistent behaviors. Our evaluation shows that our tool identified several inconsistencies between the publicly available Reference OpenFlow switch and Open vSwitch implementations. (A miniature crosschecking example follows this entry.)

    bibtex
    @INPROCEEDINGS{Kuzniar.CONEXT12,
    author = {Ku\'zniar, Maciej and Pere\v{s}\'ini, Peter and Canini, Marco and Venzano, Daniele and Kosti\'c, Dejan},
    title = {{A SOFT Way for OpenFlow Switch Interoperability Testing}},
    booktitle = {Proceedings of ACM CoNEXT'12},
    year = {2012},
    month = {Dec}
    }
    
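    As a complement to the abstract above, the sketch below shows the crosschecking step in miniature, in Python. It is not SOFT: the two toy "agents" and their flow_mod handling are invented, and the input classes are enumerated by hand, whereas SOFT derives them by symbolically executing each real OpenFlow agent.

      # Toy differential-testing sketch (not SOFT itself): run two "agent" implementations
      # on the same abstract inputs and report the inputs on which their behavior diverges.
      # The flow_mod handling below is a made-up, drastically simplified stand-in.

      def agent_a(msg):
          # Hypothetical agent: rejects overlapping rules only when the overlap check is requested.
          if msg["type"] == "flow_mod" and msg.get("check_overlap") and msg["overlaps"]:
              return "error:overlap"
          return "installed"

      def agent_b(msg):
          # Hypothetical agent: always rejects overlapping rules, ignoring the flag.
          if msg["type"] == "flow_mod" and msg["overlaps"]:
              return "error:overlap"
          return "installed"

      def crosscheck(agents, inputs):
          # Return the inputs that expose inconsistent behavior across implementations.
          diverging = []
          for msg in inputs:
              outcomes = {name: fn(msg) for name, fn in agents.items()}
              if len(set(outcomes.values())) > 1:
                  diverging.append((msg, outcomes))
          return diverging

      inputs = [
          {"type": "flow_mod", "check_overlap": False, "overlaps": True},
          {"type": "flow_mod", "check_overlap": True, "overlaps": True},
          {"type": "flow_mod", "check_overlap": False, "overlaps": False},
      ]
      for msg, outcomes in crosscheck({"agent_a": agent_a, "agent_b": agent_b}, inputs):
          print(msg, outcomes)
      # only the first input diverges: agent_a installs it, agent_b reports an overlap error
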
  • OFTEN Testing OpenFlow Networks
    M. Kuzniar, M. Canini, D. Kostic.
    In Proceedings of the 1st European Workshop on Software Defined Networks (EWSDN), Oct 2012.
    abstract

    Software-defined networking and OpenFlow in particular enable independent development of network devices and software that controls them. Such separation of concerns eases the introduction of new network functionality; however, it leads to distributed responsibility for bugs. Despite the common interface, separate development entails the need to test an integrated network before deployment. In this work-in-progress paper, we identify the challenges of creating an environment that simplifies and systematically conducts such tests. We discuss optimizations required for efficient and reliable OpenFlow switch black-box testing and present a possible approach to address other challenges. In our preliminary prototype, we combine systematic state-space exploration techniques with the execution of real switches to explore integrated network behavior. Our initial results show that such methods help detect previously unrevealed inconsistencies in the network.

    bibtex
    @INPROCEEDINGS{Kuzniar.EWSDN12,
    author = {Ku\'zniar, Maciej and Canini, Marco and Kosti\'c, Dejan},
    title = {{OFTEN Testing OpenFlow Networks}},
    booktitle = {Proceedings of the 1st European Workshop on Software Defined Networks (EWSDN'12)},
    year = {2012},
    month = {Oct}
    }
    
  • A NICE Way to Test OpenFlow Applications (Technical Report)
    M. Canini, D. Venzano, P. Peresini, D. Kostic, J. Rexford.
    In Proceedings of NSDI’12, Apr 2012.
    abstract

    The emergence of OpenFlow-capable switches enables exciting new network functionality, at the risk of programming errors that make communication less reliable. The centralized programming model, where a single controller program manages the network, seems to reduce the likelihood of bugs. However, the system is inherently distributed and asynchronous, with events happening at different switches and end hosts, and inevitable delays affecting communication with the controller. In this paper, we present efficient, systematic techniques for testing unmodified controller programs. Our NICE tool applies model checking to explore the state space of the entire system—the controller, the switches, and the hosts. Scalability is the main challenge, given the diversity of data packets, the large system state, and the many possible event orderings. To address this, we propose a novel way to augment model checking with symbolic execution of event handlers (to identify representative packets that exercise code paths on the controller). We also present a simplified OpenFlow switch model (to reduce the state space), and effective strategies for generating event interleavings likely to uncover bugs. Our prototype tests Python applications on the popular NOX platform. In testing three real applications—a MAC-learning switch, in-network server load balancing, and energy-efficient traffic engineering—we uncover eleven bugs.

    bibtex
    @INPROCEEDINGS{Canini.NSDI12,
    author = {Canini, Marco and Venzano, Daniele and Pere\v{s}\'ini, Peter and Kosti\'c, Dejan and Rexford, Jennifer},
    title = {{A NICE Way to Test OpenFlow Applications}},
    booktitle = {Proceedings of 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI'12)},
    year = {2012},
    month = {Apr}
    }
    
  • Automating the Testing of OpenFlow Applications
    M. Canini, D. Kostic, J. Rexford, D. Venzano.
    In Proceedings of the 1st International Workshop on Rigorous Protocol Engineering (WRiPE 2011), Oct 2011.
    abstract

    Software-defined networking, and the emergence of OpenFlow-capable switches, enables a wide range of new network functionality. However, enhanced programmability inevitably leads to more software faults (or bugs). We believe that tools for testing OpenFlow programs are critical to the success of the new technology. However, the way OpenFlow applications interact with the data plane raises several challenges. First, the space of possible inputs (e.g., packet headers and inter-packet timings) is huge. Second, the centralized controller has an indirect view of the traffic and experiences unavoidable delays in installing rules in the switches. Third, external factors like user behavior (e.g., mobility) and higher-layer protocols (e.g., the TCP state machine) affect the correctness of OpenFlow programs.
    In this work-in-progress paper, we extend techniques for symbolic execution to generate inputs that systematically explore the space of system executions. First, we analyze controller applications to identify equivalence classes of packets that exercise different parts of the code. Second, we propose several network models with increasing precision, ranging from simple traffic models to live testing on the target network. Initial experiences with our prototype, which symbolically executes OpenFlow applications written in Python, suggest that our techniques can help programmers identify bugs in their OpenFlow programs.

    bibtex
    @INPROCEEDINGS{Canini.WRIPE,
    author = {Canini, Marco and Kosti\'c, Dejan and Rexford, Jennifer and Venzano, Daniele},
    title = {{Automating the Testing of OpenFlow Applications}},
    booktitle = {Proceedings of the 1st International Workshop on Rigorous Protocol Engineering (WRiPE'11)},
    year = {2011},
    month = {Oct}
    }
    
  • Finding Almost-Invariants in Distributed Systems
    M. Yabandeh, A. Anand, M. Canini, D. Kostic.
    In Proceedings of the 30th IEEE Symposium on Reliable Distributed Systems (SRDS’11), Oct 2011.
    abstract

    It is notoriously hard to develop dependable distributed systems. This is partly due to the difficulties in foreseeing various corner cases and failure scenarios while implementing a system that will be deployed over an asynchronous network. In contrast, reasoning about the desired distributed system behavior and the corresponding invariants is easier than reasoning about the code itself. Further, the invariants can be used for testing, theorem proving, and runtime enforcement.
    In this paper, we propose an approach to observe the system behavior and automatically infer invariants which reveal implementation bugs. Using our tool, Avenger, we automatically generate a large number of potentially relevant properties, check them within the time and spatial domains using traces of system executions, and filter out all but a few properties before reporting them to the developer. Our key insight in filtering is that a good candidate for an invariant is the one that holds in all but a few cases, i.e., an “almost-invariant”. Our experimental results with the XORP BGP implementation demonstrate Avenger’s ability to identify the almost-invariants that lead the developer to programming errors.

    bibtex
    @INPROCEEDINGS{Yabandeh.SRDS.Avenger,
    author = {Yabandeh, Maysam and Anand, Abhishek and Canini, Marco and Kosti\'c, Dejan},
    title = {{Finding Almost-Invariants in Distributed Systems}},
    booktitle = {Proceedings of the 30th IEEE Symposium on Reliable Distributed Systems (SRDS'11)},
    year = {2011},
    month = {Oct}
    }
    
  • Toward Online Testing of Federated and Heterogeneous Distributed Systems (Technical Report)
    M. Canini, V. Jovanovic, D. Venzano, B. Spasojevic, O. Crameri, D. Kostic.
    In Proceedings of USENIX Annual Technical Conference (ATC’11), Jun 2011.
    abstract

    Making distributed systems reliable is notoriously difficult. It is even more difficult to achieve high reliability for federated and heterogeneous systems, i.e., those that are operated by multiple administrative entities and have numerous inter-operable implementations. A prime example of such a system is the Internet’s inter-domain routing, today based on BGP.
    We argue that system reliability should be improved by proactively identifying potential faults using an online testing functionality. We propose DiCE, an approach that continuously and automatically explores the system behavior, to check whether the system deviates from its desired behavior. DiCE orchestrates the exploration of relevant system behaviors by subjecting system nodes to many possible inputs that exercise node actions. DiCE starts exploring from current, live system state, and operates in isolation from the deployed system. We describe our experience in integrating DiCE with an open-source BGP router. We evaluate the prototype’s ability to quickly detect origin misconfiguration, a recurring operator mistake that causes Internet-wide outages. We also quantify DiCE’s overhead and find it to have marginal impact on system performance.

    bibtex
    @INPROCEEDINGS{Canini.ATC.TowardOnlineTesting,
    author = {Canini, Marco and Jovanovi\'c, Vojin and Venzano, Daniele and Spasojevi\'c, Boris and Crameri, Olivier and Kosti\'c, Dejan},
    title = {{Toward Online Testing of Federated and Heterogeneous Distributed Systems}},
    booktitle = {Proceedings of the 2011 USENIX Annual Technical Conference (ATC'11)},
    year = {2011},
    month = {Jun}
    }
    
  • Fault Prediction in Distributed Systems Gone Wild
    M. Canini, D. Novakovic, V. Jovanovic, D. Kostic.
    In Proceedings of the 4th ACM SIGOPS/SIGACT Workshop on Large Scale Distributed Systems and Middleware (LADIS’10), Jul 2010.
    abstract

    We consider the problem of predicting faults in deployed, large-scale distributed systems that are heterogeneous and federated. Motivated by the importance of ensuring reliability of the services these systems provide, we argue that the key step in making these systems reliable is the need to automatically predict faults. For example, doing so is vital for avoiding Internet-wide outages that occur due to programming errors or misconfigurations.

    bibtex
    @INPROCEEDINGS{Canini.LADIS.FaultPred,
    author = {Canini, Marco and Novakovi\'c, Dejan and Jovanovi\'c, Vojin and Kosti\'c, Dejan},
    title = {{Fault Prediction in Distributed Systems Gone Wild}},
    booktitle = {Proceedings of the 4th ACM SIGOPS/SIGACT Workshop on Large Scale Distributed Systems and Middleware (LADIS'10)},
    year = {2010},
    month = {Jul}
    }
    

Energy-Efficient Networks

  • Identifying and Using Energy-Critical Paths
    N. Vasic, D. Novakovic, S. Shekhar, P. Bhurat, M. Canini, D. Kostic.
    In Proceedings of ACM CoNEXT’11, Dec 2011.
    abstract

    The power consumption of the Internet and datacenter networks is already significant, and threatens to shortly hit the power delivery limits while the hardware is trying to sustain ever-increasing traffic requirements. Existing energy-reduction approaches in this domain advocate recomputing network configuration with each substantial change in demand. Unfortunately, computing the minimum network subset is computationally hard and does not scale. Thus, the network is forced to operate with diminished performance during the recomputation periods. In this paper, we propose REsPoNse, a framework which overcomes the optimality-scalability trade-off. The insight in REsPoNse is to identify a few energy-critical paths off-line, install them into network elements, and use a simple online element to redirect the traffic in a way that enables large parts of the network to enter a low-power state. We evaluate REsPoNse with real network data and demonstrate that it achieves the same energy savings as the existing approaches, with marginal impact on network scalability and application performance.

    bibtex
    @INPROCEEDINGS{Vasic.CONEXT11,
    author = {Vasi\'c, Nedeljko and Novakovi\'c, Dejan and Shekhar, Satyam and Bhurat, Prateek and Canini, Marco and Kosti\'c, Dejan},
    title = {{Identifying and Using Energy-Critical Paths}},
    booktitle = {Proceedings of ACM CoNEXT'11},
    year = {2011},
    month = {Dec}
    }
    
  • Insomnia in the Access or How to Curb Access Network Related Energy Consumption
    E. Goma, M. Canini, A. Lopez Toledo, N. Laoutaris, D. Kostic, P. Rodriguez, R. Stanojevic, P. Yague Valentin.
    In Proceedings of ACM SIGCOMM’11, Aug 2011.
    abstract

    Access networks include modems, home gateways, and DSL Access Multiplexers (DSLAMs), and are responsible for 70-80% of total network-based energy consumption. In this paper, we take an in-depth look at the problem of greening access networks, identify root problems, and propose practical solutions for their user- and ISP-parts. On the user side, the combination of continuous light traffic and lack of alternative paths condemns gateways to being powered on most of the time despite having Sleep-on-Idle (SoI) capabilities. To address this, we introduce Broadband Hitch-Hiking (BH2), which takes advantage of the overlap of wireless networks to aggregate user traffic in as few gateways as possible. In current urban settings BH2 can power off 65-90% of gateways. Powering off gateways permits the remaining ones to synchronize at higher speeds due to reduced crosstalk from having fewer active lines. Our tests reveal speedups of up to 25%. On the ISP side, we propose introducing simple, inexpensive switches at the distribution frame for batching active lines onto a subset of cards, letting the remaining ones sleep. Overall, our results show an 80% energy savings margin in access networks. The combination of BH2 and switching gets close to this margin, saving 66% on average.

    bibtex
    @INPROCEEDINGS{Goma.SIGCOMM.BH2,
    author = {Eduard Goma and Marco Canini and Alberto Lopez Toledo and Nikolaos Laoutaris and Dejan Kosti\'c and Pablo Rodriguez and Rade Stanojevi\'c and Pablo Yag\"ue Valent\'in},
    title = {{Insomnia in the Access or How to Curb Access Network Related Energy Consumption}},
    booktitle = {Proceedings of ACM SIGCOMM'11},
    year = {2011},
    month = {Aug}
    }
    

Network Monitoring and Application Identification

  • Evaluation and Design of Cache Replacement Policies under Flooding Attacks
    M. Zadnik, M. Canini.
    In Proceedings of the 2nd International Workshop on TRaffic Analysis and Classification (TRAC’11), Jul 2011.
    abstract

    A flow cache is a fundamental building block for flow-based traffic processing. Its efficiency is critical for the overall performance of a number of networked devices and systems. However, if not properly managed, the flow cache can be easily filled up and rendered ineffective by traffic patterns such as flooding attacks and scanning activities which, unfortunately, commonly occur in the Internet. In this paper, we show that popular cache replacement policies such as LRU cause the flow caches to evict the so-called heavy-hitter flows during flooding attacks. To address this shortcoming, we build upon our recent work [1] and construct a replacement policy that is more resilient to floods and yet performs similarly to other policies under common network traffic conditions.

    bibtex
    @INPROCEEDINGS{Zadnik.TRAC.CacheEvo,
    author = {Zadnik, Martin and Canini, Marco},
    title = {{Evaluation and Design of Cache Replacement Policies under Flooding Attacks}},
    booktitle = {Proceedings of the 2nd International Workshop on TRaffic Analysis and Classification (TRAC'11)},
    year = {2011},
    month = {Jul}
    }
    
  • Evolution of Cache Replacement Policies to Track Heavy-hitter Flows
    M. Zadnik, M. Canini.
    In Proceedings of Passive Active Measurement conference (PAM’11), Mar 2011.
    abstract

    Several important network applications cannot easily scale to higher data rates without requiring focusing just on the large traffic flows. Recent works have discussed algorithmic solutions that trade-off accuracy to gain efficiency for filtering and tracking the so-called “heavy-hitters”. However, a major limit is that flows must initially go through a filtering process, making it impossible to track state associated with the first few packets of the flow.
    In this paper, we propose a different paradigm in tracking the large flows which overcomes this limit. We view the problem as that of managing a small flow cache with a finely tuned replacement policy that strives to avoid evicting the heavy-hitters. Our scheme starts from recorded traffic traces and uses Genetic Algorithms to evolve a replacement policy tailored for supporting seamless, stateful traffic-processing. We evaluate our scheme in terms of missed heavy-hitters: it performs close to the optimal, oracle-based policy, and when compared to other standard policies, it consistently outperforms them, even by a factor of two in most cases.

    bibtex
    @INPROCEEDINGS{Zadnik.PAM.CacheEvo,
    author = {Zadnik, Martin and Canini, Marco},
    title = {{Evolution of Cache Replacement Policies to Track Heavy-hitter Flows}},
    booktitle = {Proceedings of Passive Active Measurement conference (PAM'11)},
    year = {2011},
    month = {Mar}
    }
    
  • Experience with High-Speed Automated Application-Identification for Network-Management
    M. Canini, W. Li, M. Zadnik, A. W. Moore.
    In Proceedings of the 5th ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS’09), Oct 2009.
    abstract

    AtoZ, an automatic traffic organizer, provides control of how network-resources are used by applications. It does this by combining the high-speed packet processing of the NetFPGA with an efficient method for application-behavior labeling. AtoZ can control network resources by prohibiting certain applications and controlling the resources available to others. We discuss deployment experience and use real traffic to illustrate how such an architecture enables several distinct features: high accuracy, high throughput, minimal delay, and efficient packet labeling, all in a low-cost, robust configuration that works alongside the enterprise access-router.

    bibtex
    @INPROCEEDINGS{Canini.ANCS.AtoZ,
    author = {Marco Canini and Wei Li and Martin Zadnik and Andrew W. Moore},
    title = {{Experience with High-Speed Automated Application-Identification for Network-Management}},
    booktitle = {Proceedings of the 5th ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS'09)},
    year = {2009},
    month = {Oct}
    }
    
  • Tracking Elephant Flows in Internet Backbone Traffic with an FPGA-based Cache
    M. Zadnik, M. Canini, A. W. Moore, D. J. Miller, W. Li.
    In Proceedings of the 19th International Conference on Field Programmable Logic and Applications (FPL’09), Aug 2009.
    abstract

    This paper presents an FPGA-friendly approach to tracking elephant flows in network traffic. Our approach, the Single Step Segmented Least Recently Used (S3-LRU) policy, is a network traffic-friendly replacement policy for maintaining flow states in a Naïve Hash Table (NHT). We demonstrate that our S3-LRU approach preserves elephant flows: conservatively promoting potential elephants and evicting low-rate flows in an LRU manner. Our approach keeps the flow-state of any elephant since start-of-day and provides a significant improvement over filtering approaches proposed in previous work. Our FPGA-based implementation of S3-LRU in combination with an NHT is well suited to parallel access to block memories, while capitalising on the retuning of parameters through dynamic reprogramming. (A simplified software sketch of a segmented-LRU flow cache follows this entry.)

    bibtex
    @INPROCEEDINGS{Zadnik.FPL09.Elephant,
    author = {Martin Zadnik and Marco Canini and Andrew W. Moore and David J. Miller and Wei Li},
    title = {{Tracking Elephant Flows in Internet Backbone Traffic with an FPGA-based Cache}},
    booktitle = {Proceedings of the 19th International Conference on Field Programmable Logic and Applications (FPL'09)},
    year = {2009},
    month = {Aug}
    }
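
    The policy name is quite compact, so the following Python sketch spells out one software reading of it: each row of the Naïve Hash Table holds a handful of flow records, a hit moves a record a single step toward the protected end of its row, and a miss overwrites the record at the eviction end. The row size, the number of rows and the CRC-based hash are assumptions made for the example, not values from the paper or its FPGA implementation.

    python
    import zlib

    ROW_SIZE = 8      # flow records per hash-table row (assumption)
    NUM_ROWS = 1024   # number of rows (assumption)

    table = [[None] * ROW_SIZE for _ in range(NUM_ROWS)]

    def s3lru_update(flow_key, pkt_len):
        """flow_key is the packed flow 5-tuple as bytes."""
        row = table[zlib.crc32(flow_key) % NUM_ROWS]
        for i, rec in enumerate(row):
            if rec is not None and rec["key"] == flow_key:
                rec["bytes"] += pkt_len
                if i > 0:                     # single-step promotion toward slot 0
                    row[i - 1], row[i] = row[i], row[i - 1]
                return rec
        # miss: a new flow replaces whatever sits at the eviction end of the row
        row[ROW_SIZE - 1] = {"key": flow_key, "bytes": pkt_len}
        return row[ROW_SIZE - 1]

    Because promotion is only one slot per hit, a long-lived elephant in this sketch gradually migrates toward the protected end of its row and is unlikely to be displaced by the short flows that churn at the eviction end.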
    
  • GTVS: Boosting the Collection of Application Traffic Ground Truth
    M. Canini, W. Li, A. W. Moore, R. Bolla.
    In Proceedings of the First International Workshop on Traffic Monitoring and Analysis (TMA’09), May 2009.
    abstract

    Interesting research in the areas of traffic classification, network monitoring, and application-oriented analysis cannot proceed without real traffic traces labeled with actual application information. However, hand-labeled traces are an extremely valuable but scarce resource in the traffic monitoring and analysis community, as a result of both privacy concerns and technical difficulties. Payloaded data can hardly ever be released, while the impossibility of obtaining certain ground-truth application information from non-payloaded data has severely constrained the value of anonymized public traces.
    The usual way of obtaining the ground truth is fragile, inefficient and not directly comparable across studies. This paper proposes a methodology and details the design of a technical framework that significantly boosts the efficiency of compiling the application traffic ground truth. Further, a case study on a 30-minute real data trace is presented. In contrast with past work, this is an easy, hands-on tool suite dedicated to saving users’ time and labor, and it is freely available to the public.

    bibtex
    @INPROCEEDINGS{Canini.TMA09.GTVS,
    author = {Marco Canini and Wei Li and Andrew W. Moore and Raffaele Bolla},
    title = {{GTVS: Boosting the Collection of Application Traffic Ground Truth}},
    booktitle = {Proceedings of the First International Workshop on Traffic Monitoring and Analysis (TMA'09)},
    year = {2009},
    month = {May}
    }
    
  • Efficient Application Identification and the Temporal and Spatial Stability of Classification Schema
    W. Li, M. Canini, A. W. Moore, R. Bolla.
    Computer Networks, 53(6), Apr 2009.
    abstract

    Motivated by the importance of accurate identification for a range of applications, this paper compares and contrasts the effective and efficient classification of network-based applications using behavioral observations of network traffic with classification using deep-packet inspection (an illustrative behavioral-classification sketch follows this entry).
    Importantly, throughout our work we are able to make comparisons with data possessing an accurate, independently determined ground truth that describes the actual applications causing the observed network traffic.
    In a unique study in both the spatial domain (comparing across different network locations) and the temporal domain (comparing across a number of years of data), we illustrate the decay in classification accuracy across a range of application-classification mechanisms. Further, we document the accuracy of spatial classification without training data possessing spatial diversity.
    Finally, we illustrate the classification of UDP traffic. We use the same classification approach for both stateful flows (TCP) and stateless flows based upon UDP. Importantly, we demonstrate high levels of accuracy: greater than 92% in the worst case, regardless of the application.

    bibtex
    @ARTICLE{Li.09.EAI,
    author = {Wei Li and Marco Canini and Andrew W. Moore and Raffaele Bolla},
    title = {{Efficient application identification and the temporal and spatial stability of classification schema}},
    journal = {Computer Networks},
    volume = {53},
    number = {6},
    year = {2009},
    month = {Apr},
    doi = {10.1016/j.comnet.2008.11.016}
    }
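
    As a concrete illustration of what “behavioral observations” means in contrast to deep-packet inspection, the sketch below derives a small feature vector from packet sizes and timestamps only, so it applies to TCP and UDP flows alike, and hands it to an off-the-shelf classifier. The specific features and the choice of classifier are assumptions made for the example; they are not the feature set used in the paper.

    python
    from statistics import mean, pstdev
    from sklearn.ensemble import RandomForestClassifier  # example dependency

    def flow_features(pkt_sizes, pkt_times):
        """Payload-free features, computable for both TCP and UDP flows."""
        gaps = [b - a for a, b in zip(pkt_times, pkt_times[1:])] or [0.0]
        return [
            len(pkt_sizes),     # packets observed
            sum(pkt_sizes),     # bytes observed
            mean(pkt_sizes),
            pstdev(pkt_sizes),
            mean(gaps),         # mean inter-arrival time
            pstdev(gaps),
        ]

    # Hypothetical usage, given flows labelled by an independent ground truth:
    #   X = [flow_features(sizes, times) for sizes, times, _ in labelled_flows]
    #   y = [label for _, _, label in labelled_flows]
    #   clf = RandomForestClassifier().fit(X, y)
    #   clf.predict([flow_features(new_sizes, new_times)])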
    
  • Per Flow Packet Sampling for High-Speed Network Monitoring
    M. Canini, D. Fay, D. J. Miller, A. W. Moore, R. Bolla.
    In Proceedings of the First International Conference on Communication Systems and Networks (COMSNETS’09), Jan 2009.
    abstract

    We present a per-flow packet sampling method that enables the real-time classification of high-speed network traffic. Our method, based upon partial sampling of each flow (i.e., sampling only the early stages of each flow’s lifetime; a minimal sketch follows this entry), provides a sufficient reduction in total traffic (e.g., a factor of five in packets, a factor of ten in bytes) to allow practical implementations at one Gigabit/s and, using limited hardware assistance, at ten Gigabit/s.

    bibtex
    @INPROCEEDINGS{Canini.COMSNETS09.PFPS,
    author = {Marco Canini and Damien Fay and David J. Miller and Andrew W. Moore and Raffaele Bolla},
    title = {{Per Flow Packet Sampling for High-Speed Network Monitoring}},
    booktitle = {Proceedings of the First International Conference on Communication Systems and Networks (COMSNETS'09)},
    year = {2009},
    month = {Jan}
    }
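
    Since the sampling rule itself is simple, a minimal Python sketch is enough to show it: only the first few packets of each flow are kept for analysis, and everything after the cutoff is dropped. The cutoff of five packets and the dictionary of per-flow counters are illustrative assumptions; the paper notes that limited hardware assistance is needed to reach ten Gigabit/s.

    python
    SAMPLE_FIRST_N = 5     # packets sampled per flow (assumption)
    seen = {}              # flow 5-tuple -> packets observed so far

    def sample(flow_key, packet):
        """Return the packet if it falls in the flow's early stage, else None."""
        count = seen.get(flow_key, 0)
        seen[flow_key] = count + 1
        return packet if count < SAMPLE_FIRST_N else None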
    
  • On the Double-Faced Nature of P2P Traffic
    R. Bolla, M. Canini, R. Rapuzzi, M. Sciuto.
    In Proceedings of the Sixteenth Euromicro Conference on Parallel, Distributed and Network-Based Processing (PDP’08), Feb 2008.
    abstract

    Over the last few years, peer-to-peer (P2P) file sharing applications have evolved to become a major traffic source in the Internet. The ability to quantify their impact on the network, as a consequence of both signaling and download traffic, is fundamental to a number of network operations, including traffic engineering, capacity planning, quality of service, forecasting for long-term provisioning, etc.
    We present here a measurement study on the characteristics of the traffic associated with different P2P applications. Our aim is to offer useful insight into the nature of P2P traffic, which we consider a step toward building generators of P2P traffic aggregates in simulation environments. We show that P2P traffic can be divided into two distinct behavioral profiles which, independently of the application protocol, present significant differences in the average and standard deviation of four measurements: the arrival times, durations, volumes and average packet sizes of P2P conversations (a sketch of these per-profile statistics follows this entry). These profiles represent well the typical behavior of signaling and download traffic, respectively. Based on our findings, we argue that, if this distinction is not taken into account, the statistical measurements needed to model P2P traffic aggregates will be biased and may lead to misleading results.

    bibtex
    @INPROCEEDINGS{Bolla.PDP08.P2P,
    author = {Raffaele Bolla and Marco Canini and Riccardo Rapuzzi and Michele Sciuto},
    title = {{On the Double-Faced Nature of P2P Traffic}},
    booktitle = {Proceedings of the Sixteenth Euromicro Conference on Parallel, Distributed and Network-Based Processing (PDP'08)},
    year = {2008},
    pages = {524--530},
    month = {Feb}
    }
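
    The two profiles are characterized by summary statistics, so the sketch below computes, for each profile, the mean and standard deviation of the four per-conversation measurements named in the abstract. The record format (a dict carrying a 'profile' field) is an assumption made for the example.

    python
    from statistics import mean, pstdev

    MEASUREMENTS = ("arrival_time", "duration", "volume", "avg_pkt_size")

    def profile_stats(conversations):
        """conversations: iterable of dicts with a 'profile' field plus the
        four per-conversation measurement fields listed in MEASUREMENTS."""
        by_profile = {}
        for conv in conversations:
            by_profile.setdefault(conv["profile"], []).append(conv)
        return {
            profile: {m: (mean(c[m] for c in convs), pstdev(c[m] for c in convs))
                      for m in MEASUREMENTS}
            for profile, convs in by_profile.items()
        }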
    
  • Characterizing the network behavior of P2P traffic
    R. Bolla, M. Canini, R. Rapuzzi, M. Sciuto.
    In Proceedings of the Fourth International Telecommunication Networking Workshop on QoS in Multiservice IP Networks (IT-NEWS’08), Feb 2008.
    abstract

    Nowadays the majority of Internet traffic is generated by peer-to-peer (P2P) file sharing applications. As the popularity of these applications has been increasing dramatically over the past few years, it becomes increasingly important to analyze their behavior and to understand their effects on the network. The ability to quantify their impact on the network is fundamental to a number of network operations, including traffic engineering, capacity planning, quality of service, forecasting for long-term provisioning, etc.
    We present here a measurement study on the characteristics of the traffic associated with two different P2P applications. Our aim is to provide useful insight into the nature of P2P traffic from the point of view of the network. To achieve this, we introduce a novel measurement, the Content Transfer Index (CTI), to distinguish two classes of behavior associated with P2P traffic: the download and the signaling traffic profiles. Next we apply the CTI to our data sets and show that it effectively offers a general characterization of P2P traffic. Finally, we present a number of statistical measurements that remain unbiased because they take the distinction between the two classes into account. To the best of our knowledge, this is the first study to follow this approach.
    We believe such a study will help researchers better understand the impact of P2P applications on the network and how to improve their performance.

    bibtex
    @INPROCEEDINGS{Bolla.ITNEWS08.P2P,
    author = {Raffaele Bolla and Marco Canini and Riccardo Rapuzzi and Michele Sciuto},
    title = {{Characterizing the network behavior of P2P traffic}},
    booktitle = {Proceedings of the Fourth International Telecommunication Networking Workshop on QoS in Multiservice IP Networks (IT-NEWS'08)},
    year = {2008},
    pages = {14--19},
    month = {Feb}
    }
    
  • A High Performance IP Traffic Generation Tool Based on the Intel IXP2400 Network Processor
    R. Bolla, R. Bruschi, M. Canini, M. Repetto.
    In Proceedings of the 2005 Tyrrhenian International Workshop on Digital Communications (TIWDC’05), Sorrento, Italy, Jun 2005, and in F. Davoli, S. Palazzo, S. Zappatore, Eds. Distributed Cooperative Laboratories: Networking, Instrumentation, and Measurements. Springer, Norwell, MA, pages 127-142, 2006.
    abstract

    Traffic generation is essential to the processes of testing and developing new network elements, such as equipment, protocols and applications, in both production and research settings. Traditionally, two approaches have been followed for this purpose: the first is based on software applications that can be executed on inexpensive Personal Computers, while the second relies on dedicated hardware. Performance in the latter case (in terms of sustainable rates and precision of delays and jitter) clearly outclasses the former, but costs are usually orders of magnitude higher. In this paper we describe a software IP traffic generator based on a hardware architecture specialized for packet processing (known as a Network Processor), which we have developed and tested. Our approach sits between the two philosophies listed above: it is a software (and thus flexible) implementation running on specialized hardware only slightly more expensive than a PC.

    bibtex
    @INCOLLECTION{Bolla.06.PKTGEN,
    author={Bolla, Raffaele and Bruschi, Roberto and Canini, Marco and Repetto, Matteo},
    title= {{A High Performance IP Traffic Generation Tool Based On The Intel IXP2400 Network Processor}},
    booktitle={Distributed Cooperative Laboratories: Networking, Instrumentation, and Measurements},
    series={Signals and Communication Technology},
    editor={Davoli, Franco and Palazzo, Sergio and Zappatore, Sandro},
    publisher={Springer US},
    pages={127--142},
    year={2006}
    }
    

Extended abstracts

Book chapters

  • Experiences with the Collection of Application Ground Truth Data.
    M. Canini, R. Rapuzzi, R. Bolla.
    In Antonio Pescape and Carlo Sansone (Eds.), RECIPE. Robust and Efficient traffic Classification of IP nEtworks, Fridericiana Editrice Universitaria, Napoli, ISBN: 978-88-833-8081-5, July 2009.

Certain content is © IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Certain content is © ACM. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution.


© 2012-2020. All rights reserved.