Dependability and Interoperability with Event-Based Systems: EBSIS-sponsored session
An open event for all DisCoTec attendees (no registration required for this session).
The EBSIS H2020 project ( http://ebsis.info.uaic.ro ) is happy to sponsor a special session, open to all DisCoTec attendees, on the afternoon of Wednesday 21st 2017. The session will feature three high-profile keynotes on the topics of dependability and interoperability.
Keynote #1 (13:30-14:30): Case studies for analyzing and managing big-system dependability
Lydia Y. Chen, IBM Zürich
Keynote #2 (14:30-15:30): Scaling State Machine Replication
Fernando Pedone, University of Lugano
Break (15:30 - 16:00)
Keynote #3 (16:00-17:00): Interoperability in distributed systems: past, present and future
David Bromberg, IRISA – Inria Rennes – ESIR
Abstract: To ensure quality of service to end-users and system dependability, production datacenters monitor and collect large amounts of performance logs (tens of gigabytes) from virtual and physical resources, resulting in performance big data. While such logging information is routinely used by sysadmins for ad-hoc troubleshooting and problem diagnosis, there is also tremendous value in analyzing such performance big data from a research point of view. In this talk, we will present two case studies on Google and IBM datacenter traces. We demonstrate how to analyze field data using statistical learning techniques, derive new insights into system norms and failures, and develop proactive strategies to manage system dependability.
Bio: Lydia Y. Chen is a research staff member at the IBM Zurich Research Lab, Zurich, Switzerland. She received a Ph.D. from the Pennsylvania State University. Her research interests include big data analytics and cloud computing. She has published papers in international conferences and journals, and is a co-recipient of best paper awards at CCGrid'15 and e-Energy'15. She has served on technical program committees for systems and networking conferences, and has led and participated in several Swiss National Science Foundation and European FP7 projects. She is an IEEE senior member.
Abstract: State machine replication (SMR) is a well-established approach to developing highly available services. In essence, replicas deterministically execute the same sequence of client commands in the same order and, in doing so, transition through the same sequence of states and produce the same results. While SMR provides configurable fault tolerance, it does not scale performance: since every replica added to the system must execute all requests, throughput does not improve with the number of replicas. In this talk, I will present Scalable SMR (S-SMR), our effort to extend SMR to support both configurable fault tolerance and configurable performance (i.e., scaling out with the number of replicas). We have used S-SMR to develop a number of distributed services, including a scalable ZooKeeper clone and a scalable social network application.
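The core SMR invariant described in the abstract can be sketched in a few lines of Python (an illustrative sketch only; the class and command names here are assumptions for exposition, not part of S-SMR): replicas that deterministically apply the same totally ordered command log converge to identical states.

```python
# Illustrative sketch of the state machine replication (SMR) invariant:
# deterministic replicas applying the same commands in the same order
# transition through the same states and produce the same results.

class Replica:
    """A deterministic key-value state machine (hypothetical example)."""
    def __init__(self):
        self.state = {}

    def apply(self, command):
        op, key, *args = command
        if op == "put":
            self.state[key] = args[0]
        elif op == "get":
            return self.state.get(key)

# An agreed-upon total order of client commands, e.g. as delivered
# by an atomic broadcast / consensus layer (not modeled here).
log = [("put", "x", 1), ("put", "y", 2), ("put", "x", 3)]

replicas = [Replica() for _ in range(3)]
for cmd in log:
    for r in replicas:
        r.apply(cmd)

# All replicas converge to the identical state.
assert all(r.state == replicas[0].state for r in replicas)
```

Note that every replica executes every command, which is exactly why throughput does not grow with the number of replicas; S-SMR's contribution is to relax this while preserving the invariant above.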
Bio: Fernando Pedone is a full professor at the Faculty of Informatics at the University of Lugano (USI), Switzerland, and one of the faculty’s “founding members.” He received the Ph.D. degree from EPFL in 1999 and was previously affiliated with Cornell University, USA, as a visiting professor, and with Hewlett-Packard Laboratories (HP Labs), USA, as a researcher. Fernando Pedone’s research interests include the theory and practice of dependable distributed systems and dependable data management systems. He has authored more than 100 scientific papers and 6 patents, and is co-editor of the book “Replication: theory and practice”. Last but not least, Prof. Pedone is a windsurfing enthusiast.
Abstract: The need to deal with the existence of different protocols that perform similar functions is not new, and has been the focus of tremendous work since the 1980s, leading to the study of protocol interoperability. As networked systems become increasingly pervasive, they need to compose with their ever-evolving environment according to the functionalities they provide and/or request. However, such composition is greatly challenged by the heterogeneity and autonomy of today’s digital systems, which are not designed in concert, but are instead independently developed and deployed within pervasive networking environments, making protocol interoperability a continued research challenge. More precisely, the need for interoperability has drastically increased with the exponential growth of the Internet of Things. The IoT makes it possible to interconnect heterogeneous, potentially remote systems coming from various application domains, such as aerospace, aviation, telecommunications, agriculture, healthcare, automotive, mobile computing, home automation, smart spaces, smart energy, etc. Interoperability is thus required everywhere, whatever the application domain, from local to large-scale environments, and with efficiency in mind. We will introduce the history of interoperability in distributed systems, and review its past, present and future research challenges.
Bio: David Bromberg has been a Professor in Distributed Computer Systems at the Université de Rennes (IRISA) since 2015. Prior to that, he was an Associate Professor at the University of Bordeaux and a member of the LaBRI Software Engineering research group from 2008 to 2015. David holds a PhD from Inria Rocquencourt (2006) and an Habilitation à Diriger les Recherches from the University of Bordeaux (2014). His main interests lie in the scalability and programmability of complex distributed systems (e.g. overlays) and in software engineering for middleware, distributed systems and network programming. David has authored over 30 peer-reviewed publications, and has served on a number of program committees in his field, including as a PC member of the 15th, 16th and 17th ACM/IFIP/USENIX International Conference on Middleware.