Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/27208
Title: WebAssembly Orchestration in the Context of Serverless Computing
Authors: Kjorveziroski, Vojdan 
Filiposka, Sonja 
Keywords: WebAssembly
Serverless Computing
Function as a Service
Kubernetes
Orchestration
Issue Date: Jul-2023
Publisher: Springer
Source: Kjorveziroski, V., Filiposka, S. WebAssembly Orchestration in the Context of Serverless Computing. J Netw Syst Manage 31, 62 (2023). https://doi.org/10.1007/s10922-023-09753-0
Project: NSA
Journal: Journal of Network and Systems Management
Abstract: Recent WebAssembly advancements, including better programming language support and the introduction of both the WebAssembly System Interface and the WebAssembly Component Model, have transformed it from a primarily client-side technology into a server-side one as well. The advantages associated with WebAssembly, such as cross-platform portability, small software artifact sizes, fast start-up times, and per-execution isolation, make it a good fit for serverless scenarios. While there are existing initiatives for using WebAssembly in such serverless contexts, orchestration is still an open question. To overcome this issue, we present a way of extending Kubernetes, allowing it to orchestrate natively executed WebAssembly modules in addition to containers. We describe an extension to an existing WebAssembly software shim for containerd and a new Kubernetes WebAssembly operator. Benchmarking results for the proposed solution, obtained using 9 serverless functions packaged both as WebAssembly modules and as OpenFaaS functions running in containers, show that WebAssembly has clear advantages for frequently executed serverless functions which require elasticity. WebAssembly functions enjoy two times faster deployment and at least an order of magnitude smaller artifact sizes while still offering comparable execution performance. However, when it comes to sustained performance for long-running serverless functions with processor-intensive workloads, containers are the preferred choice, compensating for the increased cold start times with faster execution times.
URI: http://hdl.handle.net/20.500.12188/27208
DOI: 10.1007/s10922-023-09753-0
Appears in Collections:Faculty of Computer Science and Engineering: Journal Articles
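Note: The abstract above refers to extending Kubernetes so that it can schedule natively executed WebAssembly modules through a containerd shim and a dedicated operator. As a rough illustration of the underlying Kubernetes mechanism only, the sketch below shows the standard RuntimeClass pattern using the official Kubernetes Python client; it is not the shim extension or the operator described in the paper, and the handler name "wasm" and the image reference are hypothetical assumptions for illustration.

# Minimal sketch (assumptions noted): a containerd WebAssembly shim is assumed
# to be installed on the nodes under the handler name "wasm" (hypothetical),
# and "registry.example.com/hello-wasm" (hypothetical) is assumed to package a
# WebAssembly module as an OCI artifact.
from kubernetes import client, config

config.load_kube_config()  # load credentials from the local kubeconfig

# RuntimeClass mapping the cluster-level name "wasm" to the node-level handler.
runtime_class = client.V1RuntimeClass(
    api_version="node.k8s.io/v1",
    kind="RuntimeClass",
    metadata=client.V1ObjectMeta(name="wasm"),
    handler="wasm",
)
client.NodeV1Api().create_runtime_class(runtime_class)

# Pod that requests the WebAssembly runtime instead of the default container runtime.
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="hello-wasm"),
    spec=client.V1PodSpec(
        runtime_class_name="wasm",
        containers=[
            client.V1Container(
                name="hello",
                image="registry.example.com/hello-wasm",  # hypothetical image
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)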

Files in This Item:
Orchestration-Accepted-Version.pdf (1.73 MB, Adobe PDF)

Page view(s): 25 (checked on Apr 29, 2024)
Download(s): 4 (checked on Apr 29, 2024)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.