chore: add scaletest convenience script (#7819)

- Adds a convenience script `scaletest.sh` to automate the process of running scale tests.
- Enables the pprof endpoint by default, and captures pprof traces before tearing down the infrastructure.
- Improves the idempotency of `coder_init.sh`.
- Removes the `promtest.ToFloat64` invocations in the workspacetraffic runner; these metrics are available in Prometheus instead.
- Increases the default workspace traffic output to 40KB/s per workspace (4096 bytes every 100ms).
Cian Johnston · 2023-06-08 01:30:02 -07:00 · committed by GitHub
parent 9ec1fcf1a7 · commit efbb55803b
18 changed files with 347 additions and 84 deletions

.gitignore

@@ -58,5 +58,5 @@ site/stats/
 # Loadtesting
 ./scaletest/terraform/.terraform
 ./scaletest/terraform/.terraform.lock.hcl
-terraform.tfstate.*
-**/*.tfvars
+scaletest/terraform/secrets.tfvars
+.terraform.tfstate.*

(file name not shown)

@@ -61,8 +61,8 @@ site/stats/
 # Loadtesting
 ./scaletest/terraform/.terraform
 ./scaletest/terraform/.terraform.lock.hcl
-terraform.tfstate.*
-**/*.tfvars
+scaletest/terraform/secrets.tfvars
+.terraform.tfstate.*
 # .prettierignore.include:
 # Helm templates contain variables that are invalid YAML and can't be formatted
 # by Prettier.

scaletest/README.md (new file)

@@ -0,0 +1,83 @@
# Scale Testing
This folder contains CLI commands, Terraform code, and scripts to aid in performing load tests of Coder.
At a high level, it performs the following steps:
- Using the Terraform code in `./terraform`, stands up a preconfigured Google Cloud environment
consisting of a VPC, GKE Cluster, and CloudSQL instance.
> **Note: You must have an existing Google Cloud project available.**
- Creates a dedicated namespace for Coder and installs Coder using the Helm chart in this namespace.
- Configures the Coder deployment with random credentials and a predefined Kubernetes template.
> **Note:** These credentials are stored in `${PROJECT_ROOT}/scaletest/.coderv2/coder.env`.
- Creates a number of workspaces and waits for them to all start successfully. These workspaces
are ephemeral and do not contain any persistent resources.
- Waits for 10 minutes to allow things to settle and establish a baseline.
- Generates web terminal traffic to all workspaces for 30 minutes.
- Directly after traffic generation, captures goroutine and heap snapshots of the Coder deployment.
- Tears down all resources (unless `--skip-cleanup` is specified).
## Usage
The main entrypoint is the `scaletest.sh` script.
```console
$ scaletest.sh --help
Usage: scaletest.sh --name <name> --project <project> --num-workspaces <num-workspaces> --scenario <scenario> [--dry-run] [--skip-cleanup]
```
### Required arguments:
- `--name`: Name for the loadtest. This is added as a prefix to resources created by Terraform (e.g. `joe-big-loadtest`).
- `--project`: Google Cloud project in which to create the resources (example: `my-loadtest-project`).
- `--num-workspaces`: Number of workspaces to create (example: `10`).
- `--scenario`: Deployment scenario to use (example: `small`). See `terraform/scenario-*.tfvars`.
> **Note:** In order to capture Prometheus metrics, you must define the environment variables
> `SCALETEST_PROMETHEUS_REMOTE_WRITE_USER` and `SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD`.
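
For example, a typical invocation might look like this (values are illustrative):

```console
$ ./scaletest.sh --name joe-big-loadtest --project my-loadtest-project --num-workspaces 10 --scenario small
```
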
### Optional arguments:
- `--dry-run`: Do not perform any action and instead print what would be executed.
- `--skip-cleanup`: Do not perform any cleanup. You will be responsible for deleting any resources this creates.
### Environment Variables
All of the above arguments may be specified as environment variables. Consult the script for details.
### Prometheus Metrics
To capture Prometheus metrics from the loadtest, two environment variables must be set:
`SCALETEST_PROMETHEUS_REMOTE_WRITE_USER` and `SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD`.
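
For example (placeholder credentials shown):

```console
$ export SCALETEST_PROMETHEUS_REMOTE_WRITE_USER=prom-user
$ export SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD=prom-password
```
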
## Scenarios
A scenario defines a number of variables that override the default Terraform variables.
A number of existing scenarios are provided in `scaletest/terraform/scenario-*.tfvars`.
For example, `scenario-small.tfvars` includes the following variable definitions:
```terraform
nodepool_machine_type_coder = "t2d-standard-2"
nodepool_machine_type_workspaces = "t2d-standard-2"
coder_cpu = "1000m" # Leaving 1 CPU for system workloads
coder_mem = "4Gi" # Leaving 4GB for system workloads
```
To create your own scenario, simply add a new file `terraform/scenario-$SCENARIO_NAME.tfvars`.
In this file, override variables as required, consulting `vars.tf` as needed.
You can then use this scenario by specifying `--scenario $SCENARIO_NAME`.
For example, if your scenario file were named `scenario-big-whopper2x.tfvars`, you would specify
`--scenario=big-whopper2x`.
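
As a sketch, creating and using such a scenario might look like the following (the variable values are illustrative; consult `vars.tf` for the full list):

```console
$ cat >terraform/scenario-big-whopper2x.tfvars <<EOF
nodepool_machine_type_coder      = "t2d-standard-4"
nodepool_machine_type_workspaces = "t2d-standard-4"
coder_cpu                        = "3000m" # Leaving 1 CPU for system workloads
coder_mem                        = "12Gi"  # Leaving 4 GB for system workloads
EOF
$ ./scaletest.sh --name joe-big-loadtest --project my-loadtest-project --num-workspaces 10 --scenario big-whopper2x
```
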
## Utility scripts
A number of utility scripts are provided in `lib`, and are used by `scaletest.sh`:
- `coder_shim.sh`: a convenience script to run the `coder` binary with a predefined config root.
This is intended to allow running Coder CLI commands against the loadtest cluster without
modifying a user's existing Coder CLI configuration (see the example below).
- `coder_init.sh`: Performs first-time user setup of an existing Coder instance, generating
a random password for the admin user. The admin user is named `admin@coder.com` by default.
Credentials are written to `scaletest/.coderv2/coder.env`.
- `coder_workspacetraffic.sh`: Runs traffic generation against the loadtest cluster and creates
a monitoring manifest for the traffic generation pod. This pod will restart automatically
after the traffic generation has completed.
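
For example, `coder_shim.sh` can run any Coder CLI command against the loadtest deployment, such as creating workspaces (count is illustrative):

```console
$ ./lib/coder_shim.sh scaletest create-workspaces --template="kubernetes" --count=10
```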

scaletest/lib/coder_init.sh

@@ -11,17 +11,27 @@ fi
 [[ -n ${VERBOSE:-} ]] && set -x
 CODER_URL=$1
-CONFIG_DIR="${PWD}/.coderv2"
+DRY_RUN="${DRY_RUN:-0}"
+PROJECT_ROOT="$(git rev-parse --show-toplevel)"
+# shellcheck source=scripts/lib.sh
+source "${PROJECT_ROOT}/scripts/lib.sh"
+CONFIG_DIR="${PROJECT_ROOT}/scaletest/.coderv2"
 ARCH="$(arch)"
 if [[ "$ARCH" == "x86_64" ]]; then
 	ARCH="amd64"
 fi
 PLATFORM="$(uname | tr '[:upper:]' '[:lower:]')"
-mkdir -p "${CONFIG_DIR}"
+if [[ -f "${CONFIG_DIR}/coder.env" ]]; then
+	echo "Found existing coder.env in ${CONFIG_DIR}!"
+	echo "Nothing to do, exiting."
+	exit 0
+fi
+maybedryrun "$DRY_RUN" mkdir -p "${CONFIG_DIR}"
 echo "Fetching Coder CLI for first-time setup!"
-curl -fsSLk "${CODER_URL}/bin/coder-${PLATFORM}-${ARCH}" -o "${CONFIG_DIR}/coder"
-chmod +x "${CONFIG_DIR}/coder"
+maybedryrun "$DRY_RUN" curl -fsSLk "${CODER_URL}/bin/coder-${PLATFORM}-${ARCH}" -o "${CONFIG_DIR}/coder"
+maybedryrun "$DRY_RUN" chmod +x "${CONFIG_DIR}/coder"
 set +o pipefail
 RANDOM_ADMIN_PASSWORD=$(tr </dev/urandom -dc _A-Z-a-z-0-9 | head -c16)
@@ -31,7 +41,7 @@ CODER_FIRST_USER_USERNAME="coder"
 CODER_FIRST_USER_PASSWORD="${RANDOM_ADMIN_PASSWORD}"
 CODER_FIRST_USER_TRIAL="false"
 echo "Running login command!"
-"${CONFIG_DIR}/coder" login "${CODER_URL}" \
+DRY_RUN="$DRY_RUN" "${PROJECT_ROOT}/scaletest/lib/coder_shim.sh" login "${CODER_URL}" \
 	--global-config="${CONFIG_DIR}" \
 	--first-user-username="${CODER_FIRST_USER_USERNAME}" \
 	--first-user-email="${CODER_FIRST_USER_EMAIL}" \
@@ -39,7 +49,7 @@ echo "Running login command!"
 	--first-user-trial=false
 echo "Writing credentials to ${CONFIG_DIR}/coder.env"
-cat <<EOF >"${CONFIG_DIR}/coder.env"
+maybedryrun "$DRY_RUN" cat <<EOF >"${CONFIG_DIR}/coder.env"
 CODER_FIRST_USER_EMAIL=admin@coder.com
 CODER_FIRST_USER_USERNAME=coder
 CODER_FIRST_USER_PASSWORD="${RANDOM_ADMIN_PASSWORD}"
@@ -47,5 +57,7 @@ CODER_FIRST_USER_TRIAL="${CODER_FIRST_USER_TRIAL}"
 EOF
 echo "Importing kubernetes template"
-"${CONFIG_DIR}/coder" templates create --global-config="${CONFIG_DIR}" \
-	--directory "${CONFIG_DIR}/templates/kubernetes" --yes kubernetes
+DRY_RUN="$DRY_RUN" "$PROJECT_ROOT/scaletest/lib/coder_shim.sh" templates create \
+	--global-config="${CONFIG_DIR}" \
+	--directory "${CONFIG_DIR}/templates/kubernetes" \
+	--yes kubernetes
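
The script still takes the Coder URL as its only argument; a typical invocation might look like this (URL illustrative):

```console
$ ./scaletest/lib/coder_init.sh https://coder.example.com
```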

scaletest/lib/coder_shim.sh (new executable file)

@@ -0,0 +1,11 @@
#!/usr/bin/env bash
# This is a shim for easily executing Coder commands against a loadtest cluster
# without having to overwrite your own session/URL
PROJECT_ROOT="$(git rev-parse --show-toplevel)"
# shellcheck source=scripts/lib.sh
source "${PROJECT_ROOT}/scripts/lib.sh"
CONFIG_DIR="${PROJECT_ROOT}/scaletest/.coderv2"
CODER_BIN="${CONFIG_DIR}/coder"
DRY_RUN="${DRY_RUN:-0}"
maybedryrun "$DRY_RUN" exec "${CODER_BIN}" --global-config "${CONFIG_DIR}" "$@"

scaletest/lib/coder_workspacetraffic.sh

@@ -11,9 +11,13 @@ fi
 [[ -n ${VERBOSE:-} ]] && set -x
 LOADTEST_NAME="$1"
-CODER_TOKEN=$(./coder_shim.sh tokens create)
+PROJECT_ROOT="$(git rev-parse --show-toplevel)"
+CODER_TOKEN=$("${PROJECT_ROOT}/scaletest/lib/coder_shim.sh" tokens create)
 CODER_URL="http://coder.coder-${LOADTEST_NAME}.svc.cluster.local"
-export KUBECONFIG="${PWD}/.coderv2/${LOADTEST_NAME}-cluster.kubeconfig"
+export KUBECONFIG="${PROJECT_ROOT}/scaletest/.coderv2/${LOADTEST_NAME}-cluster.kubeconfig"
+
+# Clean up any pre-existing pods
+kubectl -n "coder-${LOADTEST_NAME}" delete pod coder-scaletest-workspace-traffic --force || true
 cat <<EOF | kubectl apply -f -
 apiVersion: v1
@@ -37,7 +41,7 @@ spec:
 - command:
   - sh
   - -c
-  - "curl -fsSL $CODER_URL/bin/coder-linux-amd64 -o /tmp/coder && chmod +x /tmp/coder && /tmp/coder --url=$CODER_URL --token=$CODER_TOKEN scaletest workspace-traffic"
+  - "curl -fsSL $CODER_URL/bin/coder-linux-amd64 -o /tmp/coder && chmod +x /tmp/coder && /tmp/coder --verbose --url=$CODER_URL --token=$CODER_TOKEN scaletest workspace-traffic --concurrency=0 --bytes-per-tick=4096 --tick-interval=100ms"
   env:
   - name: CODER_URL
     value: $CODER_URL
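
Once applied, the generated traffic can be watched by following the pod's logs, just as `scaletest.sh` does:

```console
$ kubectl -n "coder-${LOADTEST_NAME}" logs -f pod/coder-scaletest-workspace-traffic
```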

scaletest/scaletest.sh (new executable file)

@@ -0,0 +1,190 @@
#!/usr/bin/env bash
[[ -n ${VERBOSE:-} ]] && set -x
set -euo pipefail
PROJECT_ROOT="$(git rev-parse --show-toplevel)"
# shellcheck source=scripts/lib.sh
source "${PROJECT_ROOT}/scripts/lib.sh"
DRY_RUN="${DRY_RUN:-0}"
SCALETEST_NAME="${SCALETEST_NAME:-}"
SCALETEST_NUM_WORKSPACES="${SCALETEST_NUM_WORKSPACES:-}"
SCALETEST_SCENARIO="${SCALETEST_SCENARIO:-}"
SCALETEST_PROJECT="${SCALETEST_PROJECT:-}"
SCALETEST_PROMETHEUS_REMOTE_WRITE_USER="${SCALETEST_PROMETHEUS_REMOTE_WRITE_USER:-}"
SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD="${SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD:-}"
SCALETEST_SKIP_CLEANUP="${SCALETEST_SKIP_CLEANUP:-0}"
script_name=$(basename "$0")
args="$(getopt -o "" -l dry-run,help,name:,num-workspaces:,project:,scenario:,skip-cleanup -- "$@")"
eval set -- "$args"
while true; do
case "$1" in
--dry-run)
DRY_RUN=1
shift
;;
--help)
echo "Usage: $script_name --name <name> --project <project> --num-workspaces <num-workspaces> --scenario <scenario> [--dry-run] [--skip-cleanup]"
exit 1
;;
--name)
SCALETEST_NAME="$2"
shift 2
;;
--num-workspaces)
SCALETEST_NUM_WORKSPACES="$2"
shift 2
;;
--project)
SCALETEST_PROJECT="$2"
shift 2
;;
--scenario)
SCALETEST_SCENARIO="$2"
shift 2
;;
--skip-cleanup)
SCALETEST_SKIP_CLEANUP=1
shift
;;
--)
shift
break
;;
*)
error "Unrecognized option: $1"
;;
esac
done
dependencies gcloud kubectl terraform
if [[ -z "${SCALETEST_NAME}" ]]; then
echo "Must specify --name"
exit 1
fi
if [[ -z "${SCALETEST_PROJECT}" ]]; then
echo "Must specify --project"
exit 1
fi
if [[ -z "${SCALETEST_NUM_WORKSPACES}" ]]; then
echo "Must specify --num-workspaces"
exit 1
fi
if [[ -z "${SCALETEST_SCENARIO}" ]]; then
echo "Must specify --scenario"
exit 1
fi
if [[ -z "${SCALETEST_PROMETHEUS_REMOTE_WRITE_USER}" ]] || [[ -z "${SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD}" ]]; then
echo "SCALETEST_PROMETHEUS_REMOTE_WRITE_USER or SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD not specified."
echo "No prometheus metrics will be collected!"
read -pr "Continue (y/N)? " choice
case "$choice" in
y | Y | yes | YES) ;;
*) exit 1 ;;
esac
fi
SCALETEST_SCENARIO_VARS="${PROJECT_ROOT}/scaletest/terraform/scenario-${SCALETEST_SCENARIO}.tfvars"
if [[ ! -f "${SCALETEST_SCENARIO_VARS}" ]]; then
echo "Scenario ${SCALETEST_SCENARIO_VARS} not found."
echo "Please create it or choose another scenario:"
find "${PROJECT_ROOT}/scaletest/terraform" -type f -name 'scenario-*.tfvars'
exit 1
fi
if [[ "${SCALETEST_SKIP_CLEANUP}" == 1 ]]; then
log "WARNING: you told me to not clean up after myself, so this is now your job!"
fi
CONFIG_DIR="${PROJECT_ROOT}/scaletest/.coderv2"
if [[ -d "${CONFIG_DIR}" ]] && files=$(ls -qAH -- "${CONFIG_DIR}") && [[ -z "$files" ]]; then
echo "Cleaning previous configuration"
maybedryrun "$DRY_RUN" rm -fv "${CONFIG_DIR}/*"
fi
maybedryrun "$DRY_RUN" mkdir -p "${CONFIG_DIR}"
SCALETEST_SCENARIO_VARS="${PROJECT_ROOT}/scaletest/terraform/scenario-${SCALETEST_SCENARIO}.tfvars"
SCALETEST_SECRETS="${PROJECT_ROOT}/scaletest/terraform/secrets.tfvars"
SCALETEST_SECRETS_TEMPLATE="${PROJECT_ROOT}/scaletest/terraform/secrets.tfvars.tpl"
log "Writing scaletest secrets to file."
SCALETEST_NAME="${SCALETEST_NAME}" \
SCALETEST_PROJECT="${SCALETEST_PROJECT}" \
SCALETEST_PROMETHEUS_REMOTE_WRITE_USER="${SCALETEST_PROMETHEUS_REMOTE_WRITE_USER}" \
SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD="${SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD}" \
envsubst <"${SCALETEST_SECRETS_TEMPLATE}" >"${SCALETEST_SECRETS}"
pushd "${PROJECT_ROOT}/scaletest/terraform"
echo "Initializing terraform."
maybedryrun "$DRY_RUN" terraform init
echo "Setting up infrastructure."
maybedryrun "$DRY_RUN" terraform apply --var-file="${SCALETEST_SCENARIO_VARS}" --var-file="${SCALETEST_SECRETS}" --auto-approve
if [[ "${DRY_RUN}" != 1 ]]; then
SCALETEST_CODER_URL=$(<"${CONFIG_DIR}/url")
else
SCALETEST_CODER_URL="http://coder.dryrun.local:3000"
fi
KUBECONFIG="${PROJECT_ROOT}/scaletest/.coderv2/${SCALETEST_NAME}-cluster.kubeconfig"
echo "Waiting for Coder deployment at ${SCALETEST_CODER_URL} to become ready"
maybedryrun "$DRY_RUN" kubectl --kubeconfig="${KUBECONFIG}" -n "coder-${SCALETEST_NAME}" rollout status deployment/coder
echo "Initializing Coder deployment."
DRY_RUN="$DRY_RUN" "${PROJECT_ROOT}/scaletest/lib/coder_init.sh" "${SCALETEST_CODER_URL}"
echo "Creating ${SCALETEST_NUM_WORKSPACES} workspaces."
DRY_RUN="$DRY_RUN" "${PROJECT_ROOT}/scaletest/lib/coder_shim.sh" scaletest create-workspaces \
--count "${SCALETEST_NUM_WORKSPACES}" \
--template=kubernetes \
--concurrency 10 \
--no-cleanup
echo "Sleeping 10 minutes to establish a baseline measurement."
maybedryrun "$DRY_RUN" sleep 600
echo "Sending traffic to workspaces"
maybedryrun "$DRY_RUN" "${PROJECT_ROOT}/scaletest/lib/coder_workspacetraffic.sh" "${SCALETEST_NAME}"
maybedryrun "$DRY_RUN" kubectl --kubeconfig="${KUBECONFIG}" -n "coder-${SCALETEST_NAME}" wait pods coder-scaletest-workspace-traffic --for condition=Ready
maybedryrun "$DRY_RUN" kubectl --kubeconfig="${KUBECONFIG}" -n "coder-${SCALETEST_NAME}" logs -f pod/coder-scaletest-workspace-traffic
echo "Starting pprof"
maybedryrun "$DRY_RUN" kubectl -n "coder-${SCALETEST_NAME}" port-forward deployment/coder 6061:6060 &
pfpid=$!
maybedryrun "$DRY_RUN" trap "kill $pfpid" EXIT
echo "Waiting for pprof endpoint to become available"
pprof_attempt_counter=0
while ! maybedryrun "$DRY_RUN" timeout 1 bash -c "echo > /dev/tcp/localhost/6061"; do
if [[ $pprof_attempt_counter -eq 10 ]]; then
echo
echo "pprof failed to become ready in time!"
exit 1
fi
pprof_attempt_counter=$((pprof_attempt_counter + 1))
maybedryrun "$DRY_RUN" sleep 3
done
echo "Taking pprof snapshots"
maybedryrun "$DRY_RUN" curl --silent --fail --output "${SCALETEST_NAME}-heap.pprof.gz" http://localhost:6061/debug/pprof/heap
maybedryrun "$DRY_RUN" curl --silent --fail --output "${SCALETEST_NAME}-goroutine.pprof.gz" http://localhost:6061/debug/pprof/goroutine
# No longer need to port-forward
maybedryrun "$DRY_RUN" kill "$pfpid"
maybedryrun "$DRY_RUN" trap - EXIT
if [[ "${SCALETEST_SKIP_CLEANUP}" == 1 ]]; then
echo "Leaving resources up for you to inspect."
echo "Please don't forget to clean up afterwards:"
echo "cd terraform && terraform destroy --var-file=${SCALETEST_SCENARIO_VARS} --var-file=${SCALETEST_SECRETS} --auto-approve"
exit 0
fi
echo "Cleaning up"
maybedryrun "$DRY_RUN" terraform destroy --var-file="${SCALETEST_SCENARIO_VARS}" --var-file="${SCALETEST_SECRETS}" --auto-approve

(file name not shown; deleted file)

@@ -1,43 +0,0 @@
# Load Test Terraform
This folder contains Terraform code and scripts to aid in performing load tests of Coder.
It does the following:
- Creates a GCP VPC.
- Creates a CloudSQL instance with a global peering rule so it's accessible inside the VPC.
- Creates a GKE cluster inside the VPC with separate nodegroups for Coder and workspaces.
- Installs Coder in a new namespace, using the CloudSQL instance.
## Usage
> You must have an existing Google Cloud project available.
1. Create a file named `override.tfvars` with the following content, modifying as appropriate:
```terraform
name = "some_unique_identifier"
project_id = "some_google_project_id"
```
1. Inspect `vars.tf` and override any other variables you deem necessary.
1. Run `terraform init`.
1. Run `terraform plan -var-file=override.tfvars` and inspect the output.
If you are not satisfied, modify `override.tfvars` until you are.
1. Run `terraform apply -var-file=override.tfvars`. This will spin up a pre-configured environment
and emit the Coder URL as an output.
1. Run `coder_init.sh <coder_url>` to set up an initial user and a pre-configured Kubernetes
template. It will also download the Coder CLI from the Coder instance locally.
1. Do whatever you need to do with the Coder instance:
> Note: To run Coder commands against the instance, you can use `coder_shim.sh <command>`.
> You don't need to run `coder login` yourself.
- To create workspaces, run `./coder_shim.sh scaletest create-workspaces --template="kubernetes" --count=N`
- To generate workspace traffic, run `./coder_trafficgen.sh <name of loadtest from your Terraform vars>`. This will keep running until you delete the pod `coder-scaletest-workspace-traffic`.
1. When you are finished, you can run `terraform destroy -var-file=override.tfvars`.

(file name not shown)

@@ -96,8 +96,12 @@ coder:
         secretKeyRef:
           name: "${kubernetes_secret.coder-db.metadata.0.name}"
           key: url
+    - name: "CODER_PPROF_ENABLE"
+      value: "true"
     - name: "CODER_PROMETHEUS_ENABLE"
       value: "true"
+    - name: "CODER_PROMETHEUS_COLLECT_AGENT_STATS"
+      value: "true"
     - name: "CODER_VERBOSE"
       value: "true"
   image:
@@ -129,7 +133,7 @@ EOF
 }

 resource "local_file" "kubernetes_template" {
-  filename = "${path.module}/.coderv2/templates/kubernetes/main.tf"
+  filename = "${path.module}/../.coderv2/templates/kubernetes/main.tf"
   content  = <<EOF
 terraform {
   required_providers {
@@ -216,6 +220,11 @@ resource "local_file" "kubernetes_template" {
 EOF
 }

+resource "local_file" "output_vars" {
+  filename = "${path.module}/../.coderv2/url"
+  content  = local.coder_url
+}
+
 output "coder_url" {
   description = "URL of the Coder deployment"
   value       = local.coder_url
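
With `CODER_PPROF_ENABLE` set, the deployment serves pprof on port 6060, which is what `scaletest.sh` port-forwards to capture its snapshots; the endpoint can also be checked by hand:

```console
$ kubectl --kubeconfig="${KUBECONFIG}" -n "coder-${SCALETEST_NAME}" port-forward deployment/coder 6061:6060 &
$ curl http://localhost:6061/debug/pprof/
```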

(file name not shown; deleted file)

@@ -1,8 +0,0 @@
#!/usr/bin/env bash
# This is a shim for easily executing Coder commands against a loadtest cluster
# without having to overwrite your own session/URL
SCRIPT_DIR=$(dirname "${BASH_SOURCE[0]}")
CONFIG_DIR="${SCRIPT_DIR}/.coderv2"
CODER_BIN="${CONFIG_DIR}/coder"
exec "${CODER_BIN}" --global-config "${CONFIG_DIR}" "$@"

(file name not shown)

@@ -102,7 +102,7 @@ prometheus:
 # after creating a cluster, and we want this to be brought up
 # with a single command.
 resource "local_file" "coder-monitoring-manifest" {
-  filename   = "${path.module}/.coderv2/coder-monitoring.yaml"
+  filename   = "${path.module}/../.coderv2/coder-monitoring.yaml"
   depends_on = [helm_release.prometheus-chart]
   content    = <<EOF
 apiVersion: monitoring.coreos.com/v1
@@ -122,7 +122,7 @@ spec:
 resource "null_resource" "coder-monitoring-manifest_apply" {
   provisioner "local-exec" {
-    working_dir = "${abspath(path.module)}/.coderv2"
+    working_dir = "${abspath(path.module)}/../.coderv2"
     command = <<EOF
 KUBECONFIG=${var.name}-cluster.kubeconfig gcloud container clusters get-credentials ${google_container_cluster.primary.name} --project=${var.project_id} --zone=${var.zone} && \
 KUBECONFIG=${var.name}-cluster.kubeconfig kubectl apply -f ${abspath(local_file.coder-monitoring-manifest.filename)}

(file name not shown; new file)

@@ -0,0 +1,4 @@
nodepool_machine_type_coder = "t2d-standard-8"
nodepool_machine_type_workspaces = "t2d-standard-8"
coder_cpu = "7" # Leaving 1 CPU for system workloads
coder_mem = "28Gi" # Leaving 4GB for system workloads

(file name not shown; new file)

@@ -0,0 +1,4 @@
nodepool_machine_type_coder = "t2d-standard-4"
nodepool_machine_type_workspaces = "t2d-standard-4"
coder_cpu = "3000m" # Leaving 1 CPU for system workloads
coder_mem = "12Gi" # Leaving 4 GB for system workloads

scaletest/terraform/scenario-small.tfvars (new file)

@@ -0,0 +1,4 @@
nodepool_machine_type_coder = "t2d-standard-2"
nodepool_machine_type_workspaces = "t2d-standard-2"
coder_cpu = "1000m" # Leaving 1 CPU for system workloads
coder_mem = "4Gi" # Leaving 4GB for system workloads

scaletest/terraform/secrets.tfvars.tpl (new file)

@@ -0,0 +1,4 @@
name = "${SCALETEST_NAME}"
project_id = "${SCALETEST_PROJECT}"
prometheus_remote_write_user = "${SCALETEST_PROMETHEUS_REMOTE_WRITE_USER}"
prometheus_remote_write_password = "${SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD}"
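
`scaletest.sh` renders this template with `envsubst`; a manual render would look like this (values illustrative):

```console
$ SCALETEST_NAME=joe-big-loadtest \
  SCALETEST_PROJECT=my-loadtest-project \
  SCALETEST_PROMETHEUS_REMOTE_WRITE_USER=prom-user \
  SCALETEST_PROMETHEUS_REMOTE_WRITE_PASSWORD=prom-password \
  envsubst <secrets.tfvars.tpl >secrets.tfvars
```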

(file name not shown)

@@ -19,8 +19,6 @@ import (
 	"github.com/coder/coder/cryptorand"
 	"github.com/coder/coder/scaletest/harness"
 	"github.com/coder/coder/scaletest/loadtestutil"
-
-	promtest "github.com/prometheus/client_golang/prometheus/testutil"
 )

type Runner struct {
@@ -144,15 +142,6 @@ func (r *Runner) Run(ctx context.Context, _ string, logs io.Writer) error {
 		return xerrors.Errorf("read from pty: %w", rErr)
 	}
-	duration := time.Since(start)
-	logger.Info(ctx, "Test Results",
-		slog.F("duration", duration),
-		slog.F("bytes_read_total", promtest.ToFloat64(r.metrics.BytesReadTotal)),
-		slog.F("bytes_written_total", promtest.ToFloat64(r.metrics.BytesWrittenTotal)),
-		slog.F("read_errors_total", promtest.ToFloat64(r.metrics.ReadErrorsTotal)),
-		slog.F("write_errors_total", promtest.ToFloat64(r.metrics.WriteErrorsTotal)),
-	)
 	return nil
 }
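
The counters logged here remain exposed on the deployment's Prometheus endpoint; assuming Coder's default `CODER_PROMETHEUS_ADDRESS` of `127.0.0.1:2112`, they could be spot-checked with something like:

```console
$ curl -s http://localhost:2112/metrics | grep -i scaletest
```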

(file name not shown)

@@ -61,8 +61,8 @@ stats/
 # Loadtesting
 .././scaletest/terraform/.terraform
 .././scaletest/terraform/.terraform.lock.hcl
-terraform.tfstate.*
-**/*.tfvars
+../scaletest/terraform/secrets.tfvars
+.terraform.tfstate.*
 # .prettierignore.include:
 # Helm templates contain variables that are invalid YAML and can't be formatted
 # by Prettier.
(file name not shown)

@@ -61,8 +61,8 @@ stats/
 # Loadtesting
 .././scaletest/terraform/.terraform
 .././scaletest/terraform/.terraform.lock.hcl
-terraform.tfstate.*
-**/*.tfvars
+../scaletest/terraform/secrets.tfvars
+.terraform.tfstate.*
 # .prettierignore.include:
 # Helm templates contain variables that are invalid YAML and can't be formatted
 # by Prettier.