Table of Links
Abstract and 1 Introduction
2 Background and Motivation
2.1 Decentralized Computing Infrastructure and Cloud
2.2 Advantages of DeFaaS
2.3 System Requirements
3 Overview of DeFaaS
4 Detailed Design of DeFaaS and 4.1 Decentralized Scheduling and Load Balancing
4.2 Decentralized Event Distribution
4.3 API Registration and Access Control
4.4 OAuth2.0 Support
4.5 Logging and Billing and 4.6 Trust Management
4.7 Supporting Multi-Cloud Service Mesh
5 Implementation and Evaluation
5.1 Implementations
5.2 Experiments
5.3 Evaluation
6 Related Work
7 Conclusions and References
5 Implementation and Evaluation
5.1 Implementations
The management blockchain is based on Hyperledger Besu [Besu([n. d.])]. We use containerized OpenFaaS [OpenFaaS([n. d.])] running on Kubernetes [Kubeless([n. d.])]; Kubernetes clusters are supported by all major cloud data centers (e.g., Google Cloud, AWS, Azure). Two extensions to the OpenFaaS implementation are necessary: a connector that allows OpenFaaS functions to be invoked by external events distributed over the decentralized IPFS event network, and an interoperability plugin for authentication that verifies access tokens from API clients. The validity of the access tokens is verified against the management blockchain; this implementation is based on [Fotiou et al.(2020)], which supports OAuth2 integration using Solidity smart contracts. Support for randomized load balancing is based on a modified version of the standard Power of Two Choices algorithm. Data sharing between cloud data centers is based on IPFS and Web3 storage.
The management blockchain (Besu) supports cross-chain transactions with other public chains. For prototyping purposes, this work uses existing cross-chain contracts (e.g., [Wu et al.(2021), Github([n. d.])]) and bridge components. Among the available cross-chain bridges, a suitable choice is the SOFIE Interledger bridge [Github([n. d.])]. SOFIE provides bi-directional communication between two blockchains: it listens for specific events on the sender blockchain and relays them to the receiver blockchain, which acknowledges whether the process succeeded; the corresponding result is then committed on the sender blockchain. Logging of API calls is based on IPFS.
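The paper does not spell out its modification to Power of Two Choices, but the standard policy it builds on is short enough to sketch. The following Go snippet is a minimal illustration under stated assumptions: the Node type, its Load field, and the load metric are hypothetical stand-ins, not DeFaaS's actual data structures.

```go
// powertwo.go — a minimal sketch of the standard Power of Two Choices
// policy that the DeFaaS scheduler builds on. Node, Load, and the load
// metric are illustrative assumptions, not the paper's implementation.
package main

import (
	"fmt"
	"math/rand"
)

// Node represents a FaaS backend with its currently observed load
// (e.g., number of in-flight invocations).
type Node struct {
	Name string
	Load int
}

// pickTwoChoices samples two distinct nodes uniformly at random and
// dispatches to the less loaded one.
func pickTwoChoices(nodes []Node, rng *rand.Rand) *Node {
	if len(nodes) == 1 {
		return &nodes[0]
	}
	i := rng.Intn(len(nodes))
	j := rng.Intn(len(nodes) - 1)
	if j >= i { // shift so the second sample is distinct from the first
		j++
	}
	if nodes[i].Load <= nodes[j].Load {
		return &nodes[i]
	}
	return &nodes[j]
}

func main() {
	rng := rand.New(rand.NewSource(42))
	nodes := []Node{{"aws-ireland", 3}, {"gcp-frankfurt", 1}, {"aws-oregon", 5}}
	target := pickTwoChoices(nodes, rng)
	target.Load++ // account for the dispatched invocation
	fmt.Printf("dispatching to %s (load now %d)\n", target.Name, target.Load)
}
```

The appeal of this policy for a decentralized setting is that it needs only two load probes per request rather than a global view of all nodes, while still keeping the maximum load far below that of purely random placement.

The SOFIE bridge flow described above (listen on the sender chain, relay, acknowledge, commit the result back) can likewise be sketched as a small relay loop. Everything here is a toy stand-in: the Event type, the channel-based sender chain, and the callbacks are illustrative assumptions, not SOFIE's actual API.

```go
// relay.go — a toy sketch of the relay pattern described above: events
// from a sender chain are forwarded to a receiver chain, and the
// acknowledgement is committed back on the sender chain.
package main

import "fmt"

// Event is a cross-chain payload emitted by the sender blockchain.
type Event struct {
	ID      int
	Payload string
}

// relay forwards each sender event to the receiver and commits the
// receiver's acknowledgement back on the sender chain.
func relay(sender <-chan Event, receive func(Event) bool, commit func(id int, ok bool)) {
	for ev := range sender {
		ok := receive(ev) // deliver the event to the receiver chain
		commit(ev.ID, ok) // commit the result on the sender chain
	}
}

func main() {
	events := make(chan Event, 2)
	events <- Event{1, "function-invocation"}
	events <- Event{2, "billing-record"}
	close(events)

	relay(events,
		func(ev Event) bool { fmt.Println("receiver got:", ev.Payload); return true },
		func(id int, ok bool) { fmt.Printf("sender commit: event %d ok=%v\n", id, ok) })
}
```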
5.2 Experiments
Methodology. This section explains how we conducted the measurements reported in this paper. First, we set up a testbed of storage nodes in different geographic locations, deployed on Amazon AWS and Google Cloud Platform (GCP). Next, we detail the setup used to measure communication operations. Finally, we outline the specific test cases designed to assess performance.
IPFS testbed. We deployed 8 storage nodes for our testbed: 7 geographically distributed nodes (6 for GCP) plus one local node. The instances were located in Singapore, Sydney, Frankfurt, Oregon, N. Virginia, and Sao Paulo (with one extra AWS node in Ireland). IPFS nodes ran on t2.medium instances (two vCPUs, 4GB memory, Ubuntu 18.04 LTS, IPFS 0.4.18). The setup follows a standard testbed-building approach [Hou et al.(2017), Alcântara et al.(2017)]. When an object is larger than 256KB, IPFS splits its data into multiple blocks; to test the performance of this process, we limited object sizes to 256KB. The IPFS environment is tested against the application scenarios in Table 1. The project (Rsrch) implements a simple dApp for a car rental scenario.
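A single measurement of the kind run on this testbed can be sketched as follows: add a 256KB object (at the single-block threshold) to an IPFS node over its HTTP API and time the operation. The go-ipfs-api client and the localhost endpoint are assumptions for illustration, not the paper's actual measurement harness.

```go
// measure_add.go — a hedged sketch of timing a single-block (≤256KB)
// IPFS add, in the spirit of the testbed measurements; the go-ipfs-api
// client and the local API endpoint are assumptions.
package main

import (
	"bytes"
	"fmt"
	"log"
	"math/rand"
	"time"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	// Connect to the IPFS daemon's HTTP API (default port 5001).
	sh := shell.NewShell("localhost:5001")

	// 256KB payload: at IPFS's default chunk size, so the object is
	// stored as a single block.
	payload := make([]byte, 256*1024)
	rand.Read(payload)

	start := time.Now()
	cid, err := sh.Add(bytes.NewReader(payload))
	if err != nil {
		log.Fatalf("ipfs add failed: %v", err)
	}
	fmt.Printf("added %s in %v\n", cid, time.Since(start))
}
```

Pointing the shell at a remote node's API address instead of localhost would exercise the cross-region path between testbed locations.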
Authors:
(1) Rabimba Karanjai, Department of Computer Science, University of Houston ([email protected]);
(2) Lei Xu, Department of Computer Science, Kent State University;
(3) Lin Chen, Department of Computer Science, Texas Tech University;
(4) Nour Diallo, Department of Computer Science, University of Houston;
(5) Weidong Shi, Department of Computer Science, University of Houston.