Key Concepts
This paper proposes BeFaaS, an extensible, application-centric benchmarking framework for evaluating the performance of distributed FaaS platforms through realistic benchmark applications that reflect typical FaaS use cases.
Summary
The paper presents BeFaaS, an application-centric benchmarking framework for evaluating the performance of distributed FaaS platforms. BeFaaS includes the following key features:
Realistic Benchmark Applications: BeFaaS comes with four built-in benchmark applications that mimic typical FaaS use cases, including a microservice-based web application, an IoT application for hybrid edge-cloud setups, a smart factory application to measure event trigger performance, and a microservice application to study cold start behavior.
Extensibility: The modular design of BeFaaS allows users to easily add new benchmark applications and load profiles to adapt the framework to their specific needs.
Support for Federated Deployments: BeFaaS supports deploying benchmark applications across multiple FaaS platforms, including cloud, edge, and fog environments. It enables this through unique function names, per-platform deployment artifacts, and a publish/subscribe event pipeline.
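The federated-deployment idea can be illustrated with a minimal sketch. The configuration keys, platform names, and the `unique_name` helper below are hypothetical illustrations, not BeFaaS's actual schema or API:

```python
# Hypothetical sketch: a federated deployment maps each benchmark
# function to a target platform, and function names are prefixed with
# the platform so they stay globally unique across providers.

deployment = {
    "frontend":  {"platform": "aws"},
    "checkout":  {"platform": "azure"},
    "telemetry": {"platform": "tinyfaas"},  # edge platform
}

def unique_name(function: str, target: dict) -> str:
    """Derive a globally unique function name from its target platform."""
    return f"{target['platform']}-{function}"

# One deployment artifact per function/platform pair; no name collisions.
names = [unique_name(f, t) for f, t in deployment.items()]
assert len(names) == len(set(names))
```

Prefixing names per platform is one simple way to avoid collisions when the same application is split across several providers; the paper's actual mechanism may differ in detail.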
Detailed Request Tracing: BeFaaS collects fine-grained measurements and traces individual requests to enable detailed drill-down analysis of the results. It injects instrumentation code that records timestamps and context IDs and detects cold starts.
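The kind of instrumentation described above can be sketched as a handler wrapper. This is an assumption-laden illustration, not BeFaaS's actual injected code; the field names and the module-level cold-start flag are hypothetical:

```python
import functools
import time
import uuid

# Module-level flag: True only for the first invocation in this runtime
# instance, which is a common way to detect a cold start.
_cold = True

def traced(fn):
    """Wrap a handler to record timestamps, a context ID, and cold-start status.

    A sketch of the style of instrumentation BeFaaS injects; the exact
    fields and mechanism in BeFaaS differ.
    """
    @functools.wraps(fn)
    def wrapper(event):
        global _cold
        cold_start, _cold = _cold, False
        # Reuse the caller's context ID so a request can be traced
        # across a whole function chain; create one at the entry point.
        context_id = event.get("context_id") or str(uuid.uuid4())
        start = time.time()
        result = fn(event)
        end = time.time()
        return {
            "result": result,
            "trace": {
                "context_id": context_id,
                "start": start,
                "end": end,
                "cold_start": cold_start,
            },
        }
    return wrapper

@traced
def handler(event):
    return event.get("value", 0) * 2
```

Propagating one context ID through every call in a chain is what makes the per-request drill-down analysis possible: all trace records sharing an ID belong to the same end-to-end request.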
Automated Experiment Orchestration: BeFaaS requires only the application code, a deployment configuration, and a load profile to automatically perform benchmark experiments and collect the results.
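Of the three inputs, the load profile is the simplest to picture: it can be reduced to a list of request send times. The function below is an illustrative sketch, not the BeFaaS load-profile format:

```python
def constant_rate_profile(rate_per_s: float, duration_s: float) -> list[float]:
    """Return request send times (seconds from experiment start) at a fixed rate.

    Hypothetical example of a load profile; BeFaaS's own profiles may
    use a different representation and support non-constant shapes.
    """
    interval = 1.0 / rate_per_s
    n = int(duration_s * rate_per_s)
    return [i * interval for i in range(n)]

# 10 requests per second for one minute -> 600 send times.
profile = constant_rate_profile(rate_per_s=10, duration_s=60)
```

Because the orchestrator consumes only application code, a deployment configuration, and such a profile, swapping in a new profile (e.g., bursty instead of constant) changes the experiment without touching the application.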
The paper presents the results of four experiments using BeFaaS to benchmark the performance of major cloud FaaS providers (AWS, Azure, GCP) and the open-source tinyFaaS edge platform. The key findings include:
Network transmission is a major contributor to response latency for function chains, especially in hybrid edge-cloud deployments.
The trigger delay between a published event and the start of the triggered function ranges from about 100ms for AWS Lambda to 800ms for Google Cloud Functions.
Azure Functions shows the best cold start behavior for the tested workloads.
Statistics
Network transmission time is the most relevant driver of execution time on all providers for a typical function sequence.
In a hybrid edge-cloud setup, the cloud database increases the database round-trip time, but the overall execution duration is lower than in a fully cloud-based deployment.
The network latency to publish an event usually ranges between 25ms and 100ms, except for the Azure-AWS pair, which can take up to 200ms.
The execution time of the publisher function ranges from about 10ms on Azure to about 800ms on GCP.
The trigger delay between publisher start and function start varies from 100ms on AWS to about 800ms on GCP.
Quotes
"Network transmission is a major contributor to response latency for function chains, especially in hybrid edge-cloud deployments."
"The trigger delay between a published event and the start of the triggered function ranges from about 100ms for AWS Lambda to 800ms for Google Cloud Functions."
"Azure Functions shows the best cold start behavior for the tested workloads."