
Application-Centric Benchmarking of Distributed FaaS Platforms using BeFaaS


Core Concepts
This paper proposes BeFaaS, an extensible application-centric benchmarking framework for evaluating the performance of distributed FaaS platforms through realistic and typical FaaS application scenarios.
Summary
The paper presents BeFaaS, an application-centric benchmarking framework for evaluating the performance of distributed FaaS platforms. BeFaaS includes the following key features:

- Realistic benchmark applications: BeFaaS comes with four built-in benchmark applications that mimic typical FaaS use cases: a microservice-based web application, an IoT application for hybrid edge-cloud setups, a smart factory application to measure event trigger performance, and a microservice application to study cold start behavior.
- Extensibility: The modular design of BeFaaS allows users to easily add new benchmark applications and load profiles to adapt the framework to their specific needs.
- Support for federated deployments: BeFaaS supports deploying benchmark applications across multiple FaaS platforms, including cloud, edge, and fog environments. It uses unique function names, individual deployment artifacts, and a publisher-subscriber event pipeline to enable this functionality.
- Detailed request tracing: BeFaaS collects fine-grained measurements and traces individual requests to enable detailed drill-down analysis of the results. It injects code to capture timestamps, context IDs, and cold start detection.
- Automated experiment orchestration: BeFaaS requires only the application code, a deployment configuration, and a load profile to automatically perform benchmark experiments and collect the results.

The paper presents the results of four experiments using BeFaaS to benchmark the performance of major cloud FaaS providers (AWS, Azure, GCP) and the open-source tinyFaaS edge platform. The key findings include:

- Network transmission is a major contributor to response latency for function chains, especially in hybrid edge-cloud deployments.
- The trigger delay between a published event and the start of the triggered function ranges from about 100 ms for AWS Lambda to 800 ms for Google Cloud Functions.
- Azure Functions shows the best cold start behavior for the tested workloads.
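The injected tracing code itself is not reproduced in this summary (and BeFaaS is implemented in Node.js, not Python); still, the idea behind capturing timestamps, context IDs, and cold start detection can be sketched with a hypothetical Python decorator. All names below are illustrative assumptions, not BeFaaS's actual API:

```python
import time
import uuid

_warm = False  # module-level flag: False only on the first (cold) invocation


def traced(handler):
    """Wrap a function handler to record start/end timestamps, a context ID
    propagated across the function chain, and a cold-start flag."""
    def wrapper(event):
        global _warm
        cold_start = not _warm
        _warm = True
        # Reuse the caller's context ID so all hops of one request correlate.
        context_id = event.get("context_id") or str(uuid.uuid4())
        start = time.time()
        result = handler(event)
        end = time.time()
        return {
            "result": result,
            "trace": {
                "context_id": context_id,
                "start": start,
                "end": end,
                "cold_start": cold_start,
            },
        }
    return wrapper


@traced
def hello(event):
    return f"hello {event.get('name', 'world')}"
```

A post-processing step can then join traces by `context_id` to reconstruct the full request path and subtract per-function time from the end-to-end latency.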
Statistics
- Network transmission time is the most relevant driver of execution time on all providers for a typical function sequence.
- In a hybrid edge-cloud setup, the cloud database increases the database round-trip time, but the overall execution duration is lower compared to a fully cloud-based deployment.
- The network latency to publish an event usually ranges between 25 ms and 100 ms, except for the Azure-AWS pair, which may take up to 200 ms.
- The execution time of the publisher function ranges from about 10 ms on Azure to about 800 ms on GCP.
- The trigger delay between publisher start and function start varies from about 100 ms on AWS to about 800 ms on GCP.
Quotes
"Network transmission is a major contributor to response latency for function chains, especially in hybrid edge-cloud deployments."

"The trigger delay between a published event and the start of the triggered function ranges from about 100ms for AWS Lambda to 800ms for Google Cloud Functions."

"Azure Functions shows the best cold start behavior for the tested workloads."

Key insights extracted from

by Martin Gramb... at arxiv.org, 04-29-2024

https://arxiv.org/pdf/2311.09745.pdf

Deeper Questions

How can BeFaaS be extended to support more advanced FaaS features, such as serverless workflows or stateful functions?

BeFaaS can be extended to support more advanced FaaS features by incorporating additional modules that specifically cater to these functionalities.

For serverless workflows, BeFaaS could introduce a workflow orchestration module that allows users to define complex sequences of function executions, dependencies, and error handling. This module could provide a visual interface or a declarative language for defining workflows, similar to AWS Step Functions or Azure Durable Functions.

For stateful functions, BeFaaS could implement a state management component that enables functions to maintain state across invocations. This could involve integrating with external state storage services like Redis or DynamoDB, or implementing a lightweight in-memory state management solution within the BeFaaS framework. By adding support for stateful functions, BeFaaS would enable users to build more sophisticated applications that require persistent data storage and stateful processing.
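The state management component described above could be sketched as follows. `StateStore` and `counter_handler` are hypothetical names, and the in-memory dictionary merely stands in for an external backend such as Redis or DynamoDB:

```python
class StateStore:
    """Hypothetical key-value state backend. A real deployment would
    delegate get/set to an external service so state survives the
    function instance being recycled."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value


store = StateStore()


def counter_handler(event):
    """A stateful function: its per-user invocation count persists
    across calls instead of resetting on every invocation."""
    key = f"count:{event['user']}"
    count = store.get(key, 0) + 1
    store.set(key, count)
    return {"user": event["user"], "invocations": count}
```

A benchmark built on such a component could then measure the latency cost of each state backend separately from pure compute time.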

What are the potential limitations of using a single external database service in the benchmark applications, and how could this be addressed to ensure a more fair comparison between FaaS platforms?

Using a single external database service in benchmark applications can introduce limitations and biases in the comparison between FaaS platforms. Some potential limitations include:

- Latency variability: The latency between the functions and the external database may vary based on the geographical location of the database server, leading to inconsistent performance results across different FaaS platforms.
- Scalability issues: If the external database service becomes a bottleneck due to high load during benchmarking, it may impact the performance of the FaaS platforms unfairly.
- Dependency on database performance: The overall benchmark results may be influenced by the performance and availability of the external database, rather than solely reflecting the capabilities of the FaaS platforms.

To address these limitations and ensure a fairer comparison between FaaS platforms, the following strategies could be implemented:

- Distributed database deployment: Deploying multiple instances of the external database service across different regions or cloud providers can help mitigate latency issues and provide a more balanced comparison.
- Load testing the database: Conducting load testing on the external database to ensure it can handle the expected workload without becoming a performance bottleneck.
- Isolating database performance: Monitoring and isolating the performance of the external database from the FaaS platform metrics during benchmarking to accurately attribute any performance variations.
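The last strategy, isolating database performance, amounts to timing the database round trip separately from the rest of the handler so it can be subtracted from the platform's share of the latency. A minimal sketch of that idea (hypothetical names, not BeFaaS code):

```python
import time


def timed_db_call(db_call, *args, **kwargs):
    """Run a database operation and measure its round-trip time in ms,
    so DB latency can be reported separately from compute time."""
    start = time.perf_counter()
    result = db_call(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms


def handler(event, db_call):
    """Handler that attributes its latency to two buckets:
    the external database vs. the FaaS platform itself."""
    fn_start = time.perf_counter()
    value, db_ms = timed_db_call(db_call, event["key"])
    total_ms = (time.perf_counter() - fn_start) * 1000.0
    return {
        "value": value,
        "db_ms": db_ms,                  # attributed to the database
        "compute_ms": total_ms - db_ms,  # attributed to the platform
    }
```

Reporting `db_ms` and `compute_ms` side by side makes it visible when a slow benchmark run was caused by the shared database rather than the platform under test.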

How could the BeFaaS framework be adapted to benchmark other serverless computing models beyond FaaS, such as container-based or virtual machine-based serverless platforms?

To adapt the BeFaaS framework to benchmark other serverless computing models beyond FaaS, such as container-based or virtual machine-based serverless platforms, the following modifications and enhancements could be made:

- Flexible deployment adapters: Develop deployment adapters tailored to the specific requirements of container-based or virtual machine-based serverless platforms. These adapters should handle the deployment and execution of functions within the respective environments.
- Integration with orchestration tools: Integrate BeFaaS with container orchestration tools like Kubernetes, or with serverless container platforms like AWS Fargate, to support the deployment and management of functions in containerized or VM-based environments.
- Extended tracing capabilities: Enhance the tracing capabilities of BeFaaS to capture performance metrics and request flows in container-based or VM-based serverless platforms, allowing for detailed analysis and comparison.
- Custom workload profiles: Develop custom workload profiles that simulate the behavior and characteristics of applications running on container-based or VM-based serverless platforms, ensuring realistic benchmarking scenarios.

By incorporating these adaptations, BeFaaS can effectively benchmark and compare the performance of various serverless computing models, providing valuable insights for developers and organizations evaluating different serverless platforms.
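The deployment adapter idea above can be sketched as a small interface that each target platform implements. The class and method names are illustrative assumptions, not part of BeFaaS; the in-memory adapter stands in for a Kubernetes- or VM-backed implementation:

```python
from abc import ABC, abstractmethod


class DeploymentAdapter(ABC):
    """Hypothetical adapter interface: each backend (FaaS provider,
    container platform, VM platform) implements the same three steps,
    so the benchmark orchestrator stays platform-agnostic."""

    @abstractmethod
    def deploy(self, name, artifact):
        ...

    @abstractmethod
    def invoke(self, name, payload):
        ...

    @abstractmethod
    def teardown(self, name):
        ...


class InMemoryAdapter(DeploymentAdapter):
    """Toy adapter that 'deploys' plain Python callables."""

    def __init__(self):
        self._functions = {}

    def deploy(self, name, artifact):
        self._functions[name] = artifact

    def invoke(self, name, payload):
        return self._functions[name](payload)

    def teardown(self, name):
        del self._functions[name]
```

With this shape, adding a new serverless model means writing one adapter class; the benchmark applications and load profiles stay unchanged.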