Lyrid Bare Metal Kubernetes Performance Benchmarking

Handoyo Sutanto
5 minutes
March 14th, 2024

Why Performance Benchmarking?

I get asked this a lot when building cloud applications or solutions: “What is the customer actually paying for when they run an application in the cloud?” The simple answer is, of course, rent for the “space” in the datacenter: compute, storage, and networking.

However, if you dig deeper, one of the hardest things about operating in the cloud is understanding and comparing performance from one cloud provider to another. Cloud providers typically compare CPU counts with one another as if all CPUs were the same. That is not the case, and we can show you what we mean.

Here are some examples of the variables that differentiate systems and are usually published (but not limited to):

  • CPU/Memory types and clock speed
  • Storage max performance (IOPS) and interface
  • Network characteristics and interface (public/private)
  • Hypervisors

On top of that, here are some examples of hidden variables that can affect your performance (but not limited to):

  • Neighboring VMs on the same host in virtualized environments
  • Background system maintenance, backups, and replication that you can’t control
  • In-system infrastructure APIs, analytics, and monitoring

In the end, there are two things most people understand best: price performance and use-case-based performance. Price performance is how much you pay for the service, whereas use-case-based performance is how a system performs for a specific use case. One way to capture both is to answer a question like: how many SQL queries per second can a system sustain for the $X I will spend to run it?

Let’s explore that a little bit, starting with something purer: bare metal hardware.

Bare metal hardware is a relatively overlooked asset in this space. It provides a cost-effective hosting option for data-intensive applications that prioritize data privacy, low latency, and extremely fast processing.

Despite the benefits of using bare metal hardware, many people find it difficult to access these machines through data centers, especially if they’re new to the field. Through our partnerships with several data centers, we are able to offer the Lyrid platform on top of bare metal hardware, mobilizing these machines for innovators of all levels. Offering the Lyrid platform on top of bare metal machines grants you all the benefits of the machinery, Kubernetes, and Lyrid infrastructure management, all within a user-friendly platform. To demonstrate the true power of bare metal Kubernetes in terms of processing and cost savings, we ran performance benchmarking tests.

With the cloud vendor landscape growing every day, we wanted to demonstrate how our platform and machinery hold up in this competitive space, and what our findings mean for you. Our internal benchmarking compares the performance of our Ubuntu instances across three different databases, all hosted within the Lyrid platform and its associated bare metal machinery, while seeking to identify performance gaps. The results demonstrated database performance, namely queries per second and transactions per second, that rivals that of the big cloud providers.

Benchmarking Baseline

To preface: our baseline for this benchmarking project was the testing that Amazon Web Services ran on AWS RDS, specifically with regard to the power of their in-house processors.

It’s important to note that the baseline price for this AWS setup is estimated at $1,700 per month, based on the AWS RDS specifications along with the VM driver that writes to the database:

[Image: AWS RDS baseline instance specifications; image courtesy of AWS RDS]

Benchmarking Tools and Setup: Hardware and Software

Before we go into the details of our performance benchmarking process, we would like to highlight the hardware and software involved, as well as the benchmarking tools:

Hardware 

The machine involved in our benchmarking had three nodes, each with the following configuration:

  • CPU: Dual Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz
  • Memory: 64GB DDR4
  • OS disk: 500GB
  • Additional data disks: Dual 1TB Samsung EVO SSDs
  • Network: Broadcom NetXtreme 5720, dual 10 Gbps ports in an LACP setup

Software

Our benchmarking involved the following software (a minimal cluster bootstrap sketch follows the list):

  • OS: Ubuntu 22.04.3 LTS
  • Kubernetes: v1.25.9 (k3s distribution, v1.25.9+k3s1)
  • Storage class driver: Rook Ceph v1.1.3
  • Database-as-a-Service (DBaaS): Percona Everest with Percona XtraDB Cluster for MySQL (v8.0.32-24.2)
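
For readers who want to stand up a similar stack, the sketch below shows one way to bootstrap a three-node k3s cluster pinned to the version listed above. This is not our exact provisioning tooling; the server address and join token are placeholders, and the Rook Ceph and Percona Everest installations follow their respective upstream instructions.

```bash
# Minimal k3s bootstrap sketch (illustrative, not our provisioning scripts).
# On the first (server) node, pin the Kubernetes version used in our tests:
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.25.9+k3s1" sh -

# On each additional node, join the cluster. SERVER_IP and TOKEN are
# placeholders; the token is found in /var/lib/rancher/k3s/server/node-token.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.25.9+k3s1" \
  K3S_URL="https://SERVER_IP:6443" K3S_TOKEN="TOKEN" sh -

# Verify that all three nodes are Ready:
kubectl get nodes
```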

Six Ubuntu instances were created to test machine handling and to gauge any potential performance bottlenecks. To preserve operational consistency, each instance ran the same version with the same size and settings, and used MySQL as the main database.

Benchmarking Tools

In our benchmarking process, we utilized sysbench to run the benchmarks, monitor performance, and gather results.

Sysbench is an open-source benchmarking tool used to analyze the performance of databases. It is our preferred database benchmarking tool because it can create complex testing instances and workloads without hampering business processes, and its core CPU, memory, and file I/O tests can even run without a database server.
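
For reference, a typical sysbench OLTP session against a MySQL endpoint looks like the sketch below. The host, credentials, table count, and thread count are illustrative placeholders rather than our exact test parameters; the 1800-second duration matches the 30-minute runs described in the next section.

```bash
# Prepare: create the test schema (table count and size are illustrative).
sysbench oltp_read_write \
  --mysql-host=MYSQL_HOST --mysql-port=3306 \
  --mysql-user=sbtest --mysql-password=CHANGE_ME --mysql-db=sbtest \
  --tables=10 --table-size=1000000 \
  prepare

# Run: a 30-minute read/write workload, reporting throughput every 10 seconds.
sysbench oltp_read_write \
  --mysql-host=MYSQL_HOST --mysql-port=3306 \
  --mysql-user=sbtest --mysql-password=CHANGE_ME --mysql-db=sbtest \
  --tables=10 --table-size=1000000 \
  --threads=64 --time=1800 --report-interval=10 \
  run
```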

The Benchmarking Process

Benchmark Setup

Our benchmarking started with machine setup, specifically a single bare metal machine with three nodes and the additional storage disks attached. This machine acted as the main environment for our tests, with the Kubernetes cluster created inside the machine itself.

Six Ubuntu MySQL 8.0.36 instances were then created in the resulting Kubernetes cluster, with the Percona Everest MySQL operator storing all the performance data and Ceph (operated by Rook) acting as the backing storage. Lastly, sysbench was set up to run and record the benchmarking results.
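
Before benchmarking, a setup like this can be sanity-checked from kubectl. The namespaces below are the upstream defaults (rook-ceph for Rook, everest-system for Percona Everest) and may differ in a given installation; this is a quick verification sketch, not part of the benchmark itself.

```bash
# Confirm the Ceph cluster reports healthy (rook-ceph is Rook's default namespace).
kubectl -n rook-ceph get cephcluster

# Confirm the Ceph-backed storage class that the database volumes will use.
kubectl get storageclass

# Confirm the Everest components and the MySQL database pods are running
# (everest-system is the upstream default namespace; yours may differ).
kubectl -n everest-system get pods
kubectl get pods --all-namespaces | grep -i mysql
```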

Benchmark Operations

Through sysbench, we tested the instances for 30 minutes per run (the average operation time among competitors). While gauging the performance of a single instance would have given us a sufficient data point, we opted to test our machine’s limits by gradually increasing the number of instances under load with each test, from one instance initially to a maximum of six, as sketched below. Results were recorded within Percona Everest and Ceph.
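
In script form, the ramp-up looks roughly like the sketch below. The mysql-1 through mysql-6 hostnames stand in for the per-instance endpoints, and the option list mirrors the illustrative sysbench parameters shown earlier.

```bash
#!/usr/bin/env bash
# Ramp-up sketch: stage N drives N instances concurrently for 30 minutes each.
# mysql-1..mysql-6 and CHANGE_ME are placeholders, not our actual endpoints.
OPTS="--mysql-user=sbtest --mysql-password=CHANGE_ME --mysql-db=sbtest \
      --tables=10 --table-size=1000000 --threads=64 --time=1800 \
      --report-interval=10"

for stage in 1 2 3 4 5 6; do
  for i in $(seq 1 "$stage"); do
    # One sysbench process per active instance, logged per stage.
    sysbench oltp_read_write --mysql-host="mysql-$i" $OPTS run \
      > "stage${stage}_instance${i}.log" &
  done
  wait  # let every run in this stage finish before adding another instance
done
```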

Benchmarking Performance Metrics and Results

The Ubuntu MySQL instances within our Kubernetes cluster sustained strong queries-per-second throughput as the instance count increased. It is important to note, however, that performance hit a bottleneck once the sixth instance started up, leading us to settle on five instances for optimal performance in actual operations rather than adding a comparatively subpar sixth.

[Chart: Average number of queries per second processed per instance]

Across our testing, our six Ubuntu MySQL instances averaged 12,544 queries per second (QPS) per instance, demonstrating that our clusters and machines can handle six instances while meeting the industry standard. Our baseline in this benchmarking experiment, AWS RDS, achieved 12,501 QPS for a single instance powered by the AWS Graviton2 processor.

After running the individual instance tests, we moved on to concurrent parallel testing to gauge the maximum performance of our machines. Starting with instance 1, then instances 1 and 2 together, and so on, we found that on a single machine our cluster sustained a cumulative 75,264 queries per second with six running instances, at which point the bottleneck described above appeared.

[Chart: Cumulative queries per second across all instance operations]

It’s important to note that during this benchmarking period, the test database settings were configured to reflect the settings chosen by our competitors. That being said, our testing shows that our instances perform as well as, if not better than, competitor instances, without utilizing as many resources.

We are pricing this particular bare-metal offering starting at $2,000 per month total, with help from our friends at OpenColo. On sheer instance numbers alone, you get roughly 5x the handling power from our clusters and machines compared to competitor machines, at a lower overall cost per query.
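
To put the price-performance question from the top of this post in concrete terms, using only the figures above: our setup sustained 75,264 QPS for roughly $2,000 per month, or about 37.6 QPS per dollar per month, while the AWS RDS baseline delivered 12,501 QPS for roughly $1,700 per month, or about 7.4 QPS per dollar per month. That works out to roughly 5x the throughput per dollar, which is where the 5x figure comes from.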

What does this mean for you?

We understand that pricing and performance are just two of the many factors you consider when choosing a vendor for your cloud infrastructure. In the end, we are always advocates of price transparency when it comes to our services and performance, and how you benefit from both.

We are not trying to single out any particular cloud, and if there is one main takeaway from our benchmarking, it is that powerful, well-tested solutions and machines are everywhere, not just behind the big names. Through our partnerships with managed service providers, we’re able to provide you with both, along with competitive pricing and other benefits.

If you’re looking to streamline your business processes and increase your business performance through Kubernetes and bare metal, or if you’re looking to learn more about our benchmarking process and really dive deep into our methodology, book a call with one of our product specialists!
