Sampling in Cloud Benchmarking: A Critical Review and Methodological Guidelines




DOI: 10.5281/zenodo.14000548 · Zenodo: 14000548 · MaRDI QID: Q6720592

Dataset published in the Zenodo repository.

Author name not available.

Publication date: 28 October 2024

Copyright license: No records found.



This replication package contains the data and code to replicate our critical review of sampling in cloud benchmarking.

Paper: Akbari, Saman, and Manfred Hauswirth. "Sampling in Cloud Benchmarking: A Critical Review and Methodological Guidelines." 2024 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), IEEE, 2024, pp. 160-167. DOI: 10.1109/CloudCom62794.2024.00034.

BibTeX:

@inproceedings{akbari2024sampling,
  title={Sampling in Cloud Benchmarking: A Critical Review and Methodological Guidelines},
  author={Akbari, Saman and Hauswirth, Manfred},
  booktitle={2024 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)},
  pages={160--167},
  year={2024},
  organization={IEEE}
}

Abstract: Cloud benchmarks suffer from performance fluctuations caused by resource contention, network latency, hardware heterogeneity, and other factors, as well as by decisions taken in the benchmark design. In particular, the sampling strategy chosen by benchmark designers can significantly influence benchmark results. Despite this well-known fact, no systematic approach has been devised so far to make sampling results comparable and to guide benchmark designers in choosing a sampling strategy for their benchmarks. To identify systematic problems, we critically review sampling in recent cloud computing research. Our analysis identifies concerning trends: (i) a high prevalence of non-probability sampling, (ii) over-reliance on a single benchmark, and (iii) restricted access to samples. To address these issues and increase transparency in sampling, we propose methodological guidelines for researchers and reviewers. We hope that our work contributes to improving the generalizability, reproducibility, and reliability of research results.
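To make the abstract's distinction concrete, the following minimal Python sketch contrasts probability sampling (simple random sampling, where every unit has a known, equal chance of selection) with a non-probability convenience sample. The population of instance-type names is hypothetical and purely illustrative; it is not drawn from the paper's dataset.

```python
import random

# Hypothetical population of cloud instance types (illustrative only).
population = [f"instance-type-{i}" for i in range(100)]

def simple_random_sample(pop, n, seed=42):
    """Probability sampling: each unit has a known, equal selection
    probability, so results can be generalized to the population.
    A fixed seed keeps the sample reproducible."""
    rng = random.Random(seed)
    return rng.sample(pop, n)

def convenience_sample(pop, n):
    """Non-probability sampling: take the first n units at hand
    (e.g. whatever the provider lists first). Selection probabilities
    are unknown, which limits generalizability."""
    return pop[:n]

print(simple_random_sample(population, 5))
print(convenience_sample(population, 5))
```

The seed argument illustrates one of the transparency practices the abstract argues for: a published seed lets others reproduce exactly which units were sampled.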





