OPENING: Open Infrastructure for Cloud Native Applications
Open infrastructure, IT infrastructure built with open source components, provides the foundation for emerging use cases such as edge computing, CI/CD, and machine learning, as well as the
evolving cloud computing landscape, in an effective, cost-efficient way. Learn how the OSF has applied its lessons from OpenStack to support additional projects covering these
real-world scenarios, while supporting cloud native applications.
Optimal Resource Allocation of Cloud-Based Spark Applications
Nowadays, the big data paradigm is consolidating its central position in the industry, as well as in society at large. Many applications, across disparate domains, operate on huge amounts of data and offer great advantages for both business and research. According to analysts, cloud computing adoption is steadily increasing to support big data analyses, and Spark is expected to take a prominent market position for the next decade.
As big data applications gain more and more importance over time and given the dynamic nature of cloud resources, it is fundamental to develop an intelligent resource management system to provide Quality of Service guarantees to application end-users.
This work presents a set of run-time optimization-based resource management policies for advanced big data analytics. Users submit Spark applications characterized by a priority and by a hard or soft deadline. Optimization policies address two scenarios: i) identification of the minimum capacity to run a Spark application within the deadline; ii) rebalancing of cloud resources under heavy load, minimising the weighted soft-deadline application tardiness. The solution relies on an initial non-linear programming model formulation and a search space exploration based on simulation-optimization procedures. Spark application execution times are estimated by relying on a gamut of techniques, including machine learning, approximated analyses, and simulation. The benefits of the approach are evaluated on Microsoft Azure HDInsight and on a private cluster based on POWER8, considering the TPC-DS industry benchmark and SparkBench. The results obtained in the first scenario demonstrate that the percentage error of the prediction of the optimal resource usage, with respect to system measurement and exhaustive search, is around 7% on average, while literature-based techniques present an average error of 12%, with 50% in the worst case. Moreover, in the second scenario, the proposed algorithms can address complex problems, such as computing the optimal redistribution of resources among tens of applications, in less than a minute with an error of 8% on average. On the same tests, literature-based approaches obtain an average error of about 57%.
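The first scenario can be illustrated with a minimal sketch. Note that this is not the authors' actual non-linear programming formulation: the execution-time predictor below is a hypothetical Amdahl-style model with made-up parameters, and a simple linear search stands in for their simulation-optimization procedure.

```python
# Illustrative sketch only: find the minimum number of cores that lets a
# Spark application finish within its deadline, given a predicted runtime.
# The predictor t(c) = serial + parallel / c and its parameters are
# hypothetical, not taken from the talk.

def predicted_runtime(cores, serial_s=60.0, parallel_s=3600.0):
    """Hypothetical Amdahl-style estimate of job runtime in seconds."""
    return serial_s + parallel_s / cores

def min_cores_for_deadline(deadline_s, max_cores=512):
    """Smallest capacity whose predicted runtime fits the deadline."""
    for cores in range(1, max_cores + 1):
        if predicted_runtime(cores) <= deadline_s:
            return cores
    return None  # deadline unreachable within the capacity budget

if __name__ == "__main__":
    # For a 5-minute deadline, 240 s of parallel work must fit: c >= 15.
    print(min_cores_for_deadline(300.0))
```

In the actual work, the predictor is replaced by machine learning, approximate analysis, or simulation models, and the search is driven by the optimization procedures described above.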
Politecnico di Milano
OpenStack for Government Cloud: the Italian experience
In 2016, the largest European tender for the construction of the new national digital platform for the Italian Public Administration, named "SPC Cloud" and based on the cloud computing paradigm, was awarded. The tender is an important step in the implementation of the national ICT strategy, as outlined in the document "Strategia per la crescita digitale 2014-2020" of the Presidency of the Council of Ministers. It is a coordinated transformation project of the ICT infrastructures and services of the entire Public Administration (local, regional, central, ...), following an architecture and guidelines defined according to the cloud computing paradigm, and implemented with the OpenStack platform for the realization of the infrastructure and of the IaaS, PaaS and SaaS services.
Running OpenStack in production at CERN
At CERN, the European Organization for Nuclear Research, we have been running a private OpenStack cloud since 2013. Our cloud has grown to more than 300K cores, providing 34,000 virtual machines, 6,500 volumes, and around 500 container clusters to more than 4,000 projects. We will present details about the Cloud Service, its architecture and operational challenges, as well as our plans for expansion.
Jan Van Eldik
Dominique Le Foll
Open Cloud Networking at full speed
Traditionally, the physical network (switch) was kept aside as a closed solution, but today open networking is a reality.
This session will present the different NOSes (Network Operating Systems) available for an open cloud infrastructure, with an emphasis on open source options such as SONiC and Linux Switch. Mellanox is a unique network vendor in that it can provide a true end-to-end network solution for an open cloud infrastructure, covering all the components from PCI to PCI. The session will present the requirements each network component must meet in order to perform at its best in an open infrastructure environment with an overlay, L2, or L3 network.
Media Processing & High Performance Networking
How Sky Italia created the k8s blueprint and network setup that empowers the brand new “Sky Q Fibra” service
Supermicro: the evolution of technology
A new generation of products playing a leading role in Cloud, HPC and Artificial Intelligence.
Enzo Romeo Marro
A call towards a reference architecture to operate a virtualization/storage/containers infrastructure: the InfraScience and ISTI inspiring use case
Modern computing and storage infrastructures are very complex and hard to maintain. On top of that, Italian research facilities are chronically understaffed. A reference architecture with deployment and configuration examples would ease the burden of installing and maintaining such infrastructures; common practices for deployment and authentication/authorization setups can be a starting point for better integration (federation?) between infrastructures belonging to different institutes.
Under new management: migrating a running OpenStack to containerisation with Kolla
Deploying OpenStack with containers brings many operational benefits, such as isolation of dependencies and repeatability of deployment. The Kolla project provides tooling that makes it easy to deploy new OpenStack clouds. However, migrating a running OpenStack cloud from another deployment system requires a more ad hoc approach, particularly to minimise the impact on end users. This talk will describe our process for migrating a running OpenStack production deployment to a containerised solution using Kolla and Kayobe, a subproject designed to simplify the management of bare-metal nodes.
Infrastructures in a horizontal farmers community
This talk is about infrastructure in a horizontal farmers' community.
We will analyze the approach of a farming community near Bologna: Campi Aperti.
Topics covered: human organization, connectivity, cloud management, resource and incident handling, and maintaining and growing a non-hierarchical organization. Technologies involved: consensus, mesh networks, containers.
Cloud Native Computing at the Edge
Edge Computing is the method of processing data where it is being generated, at the edge of a network, allowing for real-time data processing without the latency of sending data to a public cloud. This talk will dive into the benefits of using Kubernetes and other cloud native technologies, like serverless and containers, as part of a modern strategy for Edge Computing.
Kubernetes for Ansible Users
Its agentless nature and simplicity are without any doubt the two key features that determined the incredible popularity of Ansible as a configuration management tool. However, just as many DevOps teams started working actively on migrating from existing configuration management tools to Ansible, another open source tool started becoming very popular: Kubernetes. Kubernetes' promise of autonomous, self-healing infrastructures, both in the cloud and on-premise, led some teams to question their Ansible strategy. How is Kubernetes impacting existing efforts on the automation and consolidation of operations with Ansible? How can Kubernetes be used effectively in existing infrastructures already fully managed via Ansible? Can Kubernetes and Ansible co-exist? What's the most effective way of transitioning from Ansible to Kubernetes? Will it work?
IoTronic: FaaS for the IoT in the Fog
Stack4Things is an open source umbrella project for deep IoT-Cloud integration, empowering Fog computing scenarios.
Explored use cases focus on a number of Smart City-oriented verticals, under the #SmartME umbrella project, mostly featuring popular embedded and mobile systems as full-blown (far) Fog nodes.
As part of the Stack4Things project, a subsystem for IoT (bare metal) management, called IoTronic, has been developed. It helps in managing IoT device fleets at the Fog level, without caring about their physical location, network configuration, or underlying technology.
IoTronic has been shortlisted by the Edge Computing Group among the projects through which the OpenStack ecosystem will fully support IoT in the Fog/Edge continuum in the near future.
Recently IoTronic has been extended to fully interoperate with the OpenStack subsystems for Containerization/FaaS.
Dashboard-as-Code: monitoring an evolutionary architecture
Microservices architectures and the latest trends in distributed infrastructure management make it possible today to compose complex systems with agility, connecting multiple components with ease. In this scenario, it can be hard to understand and quickly identify what is working and (above all) what is not working properly.
In our experience, visibility into events and system state is a fundamental requirement for effective root cause analysis: when the ecosystem itself is in constant evolution, how can we guarantee that monitoring and metric correlation remain consistent and effective?
This talk tackles the concept of "dashboard automation", the last mile of Infrastructure-as-Code. We will discuss how to automatically configure and maintain the most common tools for data collection and visualization (such as Grafana and Prometheus), using languages like Jsonnet and Grafonnet, all running on a Kubernetes cluster.
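The core idea of dashboard-as-code, generating dashboard definitions from a compact description rather than hand-editing them in a UI, can be sketched in a few lines. This is an illustration only: the talk uses Jsonnet and Grafonnet, the example below uses plain Python, and the dictionary structure merely mimics the shape of a Grafana dashboard (it is not the exact Grafana JSON schema).

```python
import json

def make_dashboard(title, metrics):
    """Build a minimal Grafana-style dashboard description as plain data,
    with one graph panel per Prometheus metric.
    Illustrative structure only, not the real Grafana JSON schema."""
    return {
        "title": title,
        "panels": [
            {"id": i, "type": "graph", "title": metric,
             "targets": [{"expr": metric}]}  # a PromQL expression
            for i, metric in enumerate(metrics, start=1)
        ],
    }

if __name__ == "__main__":
    dash = make_dashboard("service-health",
                          ["up", "http_requests_total"])
    print(json.dumps(dash, indent=2))
```

The generated definition can then be versioned in git and applied to the dashboard tool through its API or a Kubernetes operator, so monitoring evolves together with the system it observes.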
Scaling Terraform: from Startup to Enterprise
Terraform allows us to create infrastructure and manage its changes in a safe and predictable way. This task is simple when your architecture and your team are small. But what happens when the organization and the working groups become larger and more complex? At that point the problem is no longer the size of the infrastructure, but changing workflows to improve collaboration within and across teams. This talk will explore how to manage and organize the adoption of Terraform across heterogeneous teams while guaranteeing the consistency and security of the infrastructure.