
Releases: grafana/k6-operator

v1.3.2

02 Apr 09:26


A patch release to fix #757: allow any kind of values in environment variables in the initializer pod.

Full Changelog: v1.3.1...v1.3.2

v1.3.1

26 Mar 11:52


A patch release to fix a bug with environment variables in cloud output mode. Previously, the duration of tests set via environment variables could be calculated incorrectly at the initializer step, resulting in potential timeouts from Grafana Cloud k6.

Full Changelog: v1.3.0...v1.3.1

v1.3.0

03 Mar 08:37


✨ New features

We added two major features in this release.

Firstly, there is now an option to skip the initializer pod step by setting .spec.initializer.disabled: true in the TestRun CRD. Thanks, @Kristina-Pianykh! Note:

  • By skipping the initializer pod, you risk starting N runner pods and having them all fail on a misconfigured script.
  • This option is recommended for advanced users who have a stable testing setup and have done due diligence for their scripts as described here.
  • Lastly, this option will be ignored for cloud output tests as they require initializer pod execution.
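As an illustration, a minimal TestRun using this option could look like the following (the ConfigMap name and script file are hypothetical; only the .spec.initializer.disabled field comes from this release):

```yaml
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-test-no-initializer
spec:
  parallelism: 2
  script:
    configMap:
      name: my-test-scripts   # hypothetical ConfigMap holding the k6 script
      file: test.js
  initializer:
    disabled: true            # skip the initializer pod step
```

Remember that this setting is ignored for cloud output tests.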

Secondly, we've added a new podTemplate option to the PrivateLoadZone CRD to unlock extended configuration. For now, it supports only a limited set of fields:

apiVersion: k6.io/v1alpha1
kind: PrivateLoadZone
metadata:
  name: <NAME>
  namespace: <NS>
spec:
  token: <TOKEN>
  resources:
    limits:
      cpu: 400m
      memory: 1000Mi

  podTemplate: # a new, optional field
    spec:
      securityContext:
        runAsUser: 100
        runAsGroup: 100
        fsGroup: 100
      tolerations:
        - key: "app"
          operator: "Equal"
          value: "blue"
          effect: "NoSchedule"
      containers:
        - name: k6
          securityContext:
            allowPrivilegeEscalation: false

No other fields can be passed to .spec.podTemplate: they will be blocked by Kubernetes validation with an error about an unknown field. This configuration is applied to all Pods started by k6-operator for this PrivateLoadZone.

Note

PrivateLoadZone doesn't support mutability yet, so it must be re-created if you want to add a new config to it.

🛠️ Maintenance

Notable updates from the automated renovate bot:

  • Update of module go.k6.io/k6 to v1.6.1 (PR)

We now have a JSON schema validation check for all new PRs: if there's a PR opened for the Helm chart, a new GitHub Workflow will check if the JSON schema was updated as it should be, without waiting for human review. Thanks, @railgun-0402!

A couple of new commands were added to the Makefile to simplify maintenance:

  • make e2e-update-latest to help populate e2e/latest folder (PR). At the moment, it is mainly meant for the release process.
  • make patch-helm-crd to help copy changes in CRD to the Helm chart (PR). It can be used both in the release process and during normal PRs.

Full Changelog: v1.2.0...v1.3.0

v1.2.0

07 Jan 08:33


✨ New features

PrivateLoadZone tests got an enhancement to the logic around the setup function. Now, when setup fails with an error, the test is aborted.

🐛 Bug fixes

There was a regression in the volume claim setup; it was fixed and released as v1.1.1.

The resources field in PrivateLoadZone CRD wasn't being validated at the k6-operator level and resulted in a rather obscure error from the Cloud. It is now validated early with a CEL validation rule: .resources.limits cannot be empty.
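For reference, CEL rules of this kind live in the CRD's OpenAPI schema; a simplified sketch of what such a rule can look like (illustrative only — the exact rule and message in the k6-operator CRD may differ):

```yaml
# excerpt of a CRD schema with a CEL validation rule (illustrative)
properties:
  resources:
    type: object
    x-kubernetes-validations:
      - rule: "has(self.limits) && size(self.limits) > 0"
        message: ".resources.limits cannot be empty"
```

The rule is evaluated by the API server at admission time, so a misconfigured PrivateLoadZone is rejected before k6-operator ever sees it.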

📦 Helm

It is now possible to pass optional manager.dnsConfig and manager.dnsPolicy to the Helm chart. Thanks, @kworkbee!
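A values sketch, assuming the standard Kubernetes dnsPolicy/dnsConfig shapes (the concrete values are examples):

```yaml
# values.yaml
manager:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 10.0.0.10               # example custom resolver
    searches:
      - my-ns.svc.cluster.local
```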

A couple of bugs in the Helm chart were fixed:

  • Service labels weren't set in Service as expected. Thanks, @kworkbee!
  • Namespaced mode (rbac.namespaced=true) is fully functional now: it creates Roles instead of ClusterRoles where applicable.
    • Note: this mode sets the WATCH_NAMESPACE environment variable to point to the namespace with all resources. Don't use this mode together with custom WATCH_NAMESPACE values in manager.env: the deployment might not work.

🛠️ Maintenance

Quarterly maintenance is part of this release:

  • controller-runtime to v0.22.4
  • k8s group to v0.34.1
  • go.k6.io/k6 to v1.4.2
  • controller-tools to v0.19.0

Additional small updates to CI have also been included.

Lastly, we're now relying on renovate to help with dependency updates: the initial config was added, and we'll be polishing it more in the future.

Full Changelog: v1.1.1...v1.2.0

v1.1.1

12 Dec 19:01


A patch release to fix a regression with the VolumeClaim spec in v1.1.0 (issue).
The comment here describes the main VolumeClaim configuration cases that are covered by tests and should work.

Full Changelog: v1.1.0...v1.1.1

v1.1.0

12 Nov 13:14


✨ New features

There are a couple of additions to the TestRun CRD in this release:

  1. It's now possible to set .spec.runner.priorityClassName, .spec.starter.priorityClassName, and .spec.initializer.priorityClassName to help avoid unwanted evictions of pods. Thanks, @vsoloviov!

  2. Init containers can have custom resources set as .spec.runner.initContainers[*].resources, for cases when preparation for the test run requires more resources. Thanks, @gcaldasnu!
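Both additions can be sketched in a single TestRun (all names, images, and values below are hypothetical; only the field paths come from this release):

```yaml
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-test-with-priority
spec:
  parallelism: 2
  script:
    configMap:
      name: my-test-scripts            # hypothetical ConfigMap with the k6 script
      file: test.js
  runner:
    priorityClassName: high-priority   # hypothetical PriorityClass
    initContainers:
      - name: prepare-data             # hypothetical preparation step
        image: busybox
        command: ["sh", "-c", "echo preparing test data"]
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
```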

Additionally, this release contains a resolution to configurable path in VolumeClaim. Thanks, @moko-poi! Now it's possible to set the path like this:

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-test-with-pvc
spec:
  script:
    volumeClaim:
      name: dynamic-pvc
      file: /foo/script.js # Path to the script
      readOnly: true
  parallelism: 1

It's a backwards-compatible change, so existing TestRuns should continue to work as is.

Another small improvement: the log output of curl containers was turned into JSON. Thanks, @moko-poi!

Last but not least, the BackoffLimit of the starter Job is now set to zero, to avoid creating additional starter or stopper pods on a failure to reach the k6 runners. It's worth noting that curl containers are configured with three retries to ensure that such a failure is not transient. Thanks, @moko-poi!

🐛 Bug fixes

A simple validation for .spec.arguments has been added to avoid an accidental Golang panic on a misconfigured TestRun CRD. Note that this is not a full validation of all possible arguments: such validation is the job of the k6 CLI and is expected to be done by the user before writing the arguments down in the TestRun spec.
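For context, .spec.arguments is a plain string of k6 CLI flags that is passed through to the runners; the flags below are examples and are validated by k6 itself, not by the operator:

```yaml
spec:
  arguments: --tag testid=my-test --no-thresholds   # example k6 CLI flags
```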

📦 Helm

Helm chart received a fix to ensure that manager.serviceAccount.create option is taken into account. Thanks, @bcrisp4!

🛠️ Maintenance

Logs of the k6-operator were adjusted to include the host value for GCk6 API calls, to assist with troubleshooting.

Full Changelog: v1.0.0...v1.1.0

v1.0.0

15 Sep 16:06


🎉 k6-operator 1.0 is here!

We're happy to announce k6-operator v1.0.0: a milestone that marks our commitment to the k6-operator project, switching to Semantic Versioning 2.0.0, and improving the stability guarantees. Starting from this release, we're formalizing our approach to versioning, release schedule, and maintenance updates.

We wouldn't have been here if not for the support of our amazing community. Thank you for providing feedback and so often lending a helping hand. 💛 💙

📜 Documentation

Regular maintenance updates are a must for the long-term stability of the k6-operator project. We've now added a description of how maintenance updates are handled in the k6-operator.

With the switch to Semantic Versioning, we've formalized our understanding of what each type of version increase means in the k6-operator. Read more on that in this doc.

Additionally, we commit to making regular minor releases every 8 weeks. The planning for each release can be seen in the corresponding GitHub milestone.

Last but not least, we've published an upgrade guide to help you set up your upgrade workflows around the k6-operator deployment.

✨ New features

We've fixed a piece of technical debt: aggregation variables for metrics were not passed from Grafana Cloud k6 to the k6 runners in PrivateLoadZone tests. Now, PLZ tests execute similarly to cloud output tests in terms of metrics processing.

🐛 Bug fixes

This release contains a bug fix to validate that the .spec.parallelism value in the TestRun CRD is positive. If it's not, a corresponding error message will appear in the logs.

📦 Helm

There were a couple of additions to the Helm chart:

  • Added the service.portName configuration option to the Helm chart. This allows you to configure a name for the HTTP port where metrics of the k6-operator app are served.
  • Added the manager.logging.development boolean configuration option to the Helm chart. This allows you to switch the default logging level from development mode to production mode. Refer to the issue for the details. Thanks, @Kristina-Pianykh!
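A values sketch combining both options (the port name shown is an example):

```yaml
# values.yaml
service:
  portName: metrics-http    # example name for the metrics port
manager:
  logging:
    development: false      # use production logging instead of development mode
```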

🛠️ Maintenance

In this release, we're bumping most of our Golang dependencies:

  • Dockerfile Golang 1.25; PR.
  • controller-runtime 0.22.1; PR.
  • k8s.io group 0.34.0; PR.
  • go.k6.io 1.2.3; PR.
  • golangci-lint v2.4.0; PR.
  • Other Golang dependencies brought up to latest; PR.

Full Changelog: v0.0.23...v1.0.0

v0.0.23

07 Aug 11:10


✨ New features

We improved the security stance of PrivateLoadZone tests: now the Grafana Cloud k6 token is not visible in k6 Pods' definitions. This enables admins to configure the cluster so that users of the PrivateLoadZone have access to the k6 Pods but not to the GCK6 token.

Starting from this release, the k6 Operator no longer uses Scuttle-based images for runner pods by default (issue). Instead, it uses plain grafana/k6:latest, which is guaranteed to point to the latest official release of k6.

⚠️ Deprecation warning

The image of Scuttle-based runner will still be built on each release, so if you need it, you can configure it with .spec.runner.image. However, we're going to deprecate that image as well as remove Scuttle from TestRun CRD and the k6 Operator. If you are currently using Scuttle, please switch to using native sidecars. See this issue for details, and this documentation for how to configure Istio with the k6 Operator on up-to-date clusters.

🐛 Bug fixes

The .spec.script.volumeClaim.readOnly option is now set for VolumeMount instead of Volume. This fix moves the read-only option to the k6 container level, allowing for greater flexibility in complex setups. Thanks, @The-Flash-Routine! If you have been using this option, please double-check if this fix changes any implicit behaviour in your TestRun workflow.

In rare cases of bad timing, PrivateLoadZone would fail to deregister upon deletion. The logic for it has now been improved.

📦 Helm

There have been a few issues with the Helm release 3.14 after the kubebuilder update. Thanks to the reports from our users and contributions from @chris-pinola-rf, these were fixed and released as patch releases. PRs: #599, #606, #620.

If you encounter an issue with Helm setup, please share the details.

📜 Documentation

This release comes with several significant additions to the documentation.

Firstly, there is a machine-generated Markdown with full reference to the CRD types. It can be accessed in the docs folder here.

Secondly, there is a contributing guide that describes the main points to pay attention to when creating an issue or PR for the k6 Operator. It can be accessed in CONTRIBUTING.md.

The troubleshooting guide was split into the TestRun part and PrivateLoadZone part, with some additions and clarifications.

We've also added the Istio guide to the public documentation here. If you need to use the k6 Operator on the cluster with Istio, please refer to this guide.

Finally, there were several other smaller improvements to the docs. Thanks to @mostafa and @heitortsergent for the help!

🛠️ Maintenance

There were small fixes to the e2e test suite (#608, #622). Additionally, controller-tools was updated to v0.18.0.

Full Changelog: v0.0.22...v0.0.23

v0.0.22

30 Jun 10:07


🪛⚠️ A breaking maintenance update

k6-operator was initially created several years ago, and since then Kubernetes libraries have changed certain implementation approaches quite a lot, most noticeably around the authentication proxy for metrics. We have been receiving user requests to switch to the new approach. Additionally, the kube-rbac-proxy image is no longer officially recommended for a default layout, and we should not continue to rely on it.

Since our goal is to simplify the default setup of k6-operator as part of v1 preparations, this release includes a refactoring of code and manifests corresponding to the update of kubebuilder to v4. Depending on your setup, some of the changes may be breaking. Here are the key changes from the user's perspective:

  1. kube-rbac-proxy is no longer part of the Deployment. There is only one container now.
    • As a consequence, the next Helm chart no longer contains the authProxy section.
    • In the default RBAC, instead of proxy-role, there is now metrics-auth-role.
  2. The CLI arguments to k6-operator were renamed:
    • metrics-addr to metrics-bind-address (default is 8080).
    • health-addr to health-probe-bind-address (default is 8081).
    • enable-leader-election to leader-elect (default is false).
  3. The Deployment now has liveness & readiness probes enabled by default (/healthz endpoint on 8081), as well as a default securityContext:
# pod's
securityContext:
  runAsNonRoot: true

# container's
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - "ALL"

Full changes in manifests can be seen in the PR.

❓ Do I need to change my tests or setup?

These changes did not impact the core functionality of k6-operator, and the tests do not need to be changed.

This is a change in the app's CLI and default manifests, which mostly impacts how metrics are handled. Firstly, if the CLI arguments are passed to the k6-operator externally, change them to the new values as shown above. Next, if you rely on kube-rbac-proxy for metrics authentication, refer to this documentation and adjust your setup accordingly as part of this upgrade. If you believe that your use case is not fully supported by k6-operator, you're very welcome to open an issue and a PR with the details.
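If you set the arguments externally, the renamed flags might appear in the manager Deployment roughly like this (an illustrative excerpt; the exact address format in your manifests may differ):

```yaml
# excerpt of the manager Deployment spec (illustrative)
containers:
  - name: manager
    args:
      - --metrics-bind-address=:8080
      - --health-probe-bind-address=:8081
      - --leader-elect=false
```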

✨ New features

Thanks to our contributors, this release also features several new additions.

k6-operator now has support for native sidecars; issue. They can be specified with the initContainers[*].restartPolicy field as per official Kubernetes documentation. Thanks, @stytchiz!
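A sketch of a native sidecar in a TestRun, assuming a hypothetical sidecar container (setting restartPolicy: Always on an init container is what makes it a native sidecar in Kubernetes):

```yaml
spec:
  runner:
    initContainers:
      - name: proxy-sidecar        # hypothetical sidecar container
        image: example/proxy:1.0   # hypothetical image
        restartPolicy: Always      # marks this init container as a native sidecar
```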

In addition to WATCH_NAMESPACE environment variable, it's now possible to watch several namespaces at once via WATCH_NAMESPACES; issue. An example:

- name: WATCH_NAMESPACES
  value: "some-ns,some-other-ns"

Thanks, @chris-pinola-rf!

The starter pod can also be configured with custom resources, just as other pods, if hard-coded values are not suitable; issue. Thanks, @seanankenbruck!
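A sketch of such a configuration, assuming the field follows the same shape as for the other pods (the field path and values are illustrative):

```yaml
spec:
  starter:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
```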

📦 Helm

As described above, the Helm chart has been changed during the kubebuilder update: authProxy section was removed and RBAC objects were changed. See this PR for details.

In addition, there was a bug fix for ServiceMonitor's namespaceSelector, released as the 3.13.1 chart. Thanks, @jdegendt!

It is now possible to switch off the installation of ClusterRoles and ClusterRoleBindings objects by setting rbac.namespaced: true; issue. Thanks, @stytchiz!
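A minimal values sketch:

```yaml
# values.yaml
rbac:
  namespaced: true   # skip installation of ClusterRole / ClusterRoleBinding objects
```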

📜 Documentation

There was an addition for automatic internal Grafana documentation for k6-operator repo. Thanks, @the-it!

Full Changelog: v0.0.21...v0.0.22

v0.0.21

07 May 11:36


✨ New features

Starting from this release, k6-operator supports multiple PrivateLoadZones 🎉 As a reminder, previously it was possible to have only one PrivateLoadZone per installation. Now one can create several in a row and have them all working simultaneously:

$ kubectl apply -f plz-demo.yaml
privateloadzone.k6.io/plz-demo created
$ kubectl apply -f plz.yaml
privateloadzone.k6.io/kung-fu created
$ kubectl get privateloadzones.k6.io -A
NAMESPACE   NAME             AGE   REGISTERED
plz-ns      plz-demo         6s    True
plz-ns      kung-fu          14s   True

Note, however, that the restriction of 100% distribution remains: only one PrivateLoadZone can be referenced in a k6 test.

The PLZs are distinguished by name only; this is required for backward compatibility with older PLZs. Since the change is backward compatible, no migration is needed: updating k6-operator to the latest release should be sufficient to switch to the new setup. On startup, the new version of the app will read the PLZ already registered in the system and continue to work as before, now with the option to add more PLZs if you'd like.

Grafana Cloud k6 sets the maximum number of PLZs per organization. By default, it is 5. If you need more, please contact customer support.

The documentation update will follow next week.

📦 Helm

There have been quite a few improvements around ServiceMonitor usage:

  • Ability to configure the ServiceMonitor with namespace, jobLabel, interval, scrapeTimeout, labels, and some other fields. Thanks, @EladAviczer!
    • ⚠️ This is a breaking change: instead of prometheus.enabled, configuration now happens via metrics.serviceMonitor.enabled. Please adjust your Helm values accordingly!
  • It is now possible to have k6-operator's Service without enabling authProxy. To do that, there is a service.enabled option. Thanks, @afreyermuth98!
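A values sketch for the new ServiceMonitor configuration (field names follow the list above; the values are examples):

```yaml
# values.yaml
metrics:
  serviceMonitor:
    enabled: true          # replaces the old prometheus.enabled
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 10s
```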

🛠️ Maintenance

There were a lot of CI improvements to harden our security stance. One of the most noticeable additions is that Zizmor scanning is executed on each pull request.

Some minor Golang library updates were also done here.

📜 Documentation

Last but not least, the internal docs now have a write-up about the NDE (Native Distributed Execution) proposal in k6 OSS and its potential impact on k6-operator.

With the help of @heitortsergent, we are improving our troubleshooting docs. The existing guide will be split into two parts (general & TestRun troubleshooting VS PrivateLoadZone troubleshooting) and simplified to make them more useful.

Full Changelog: v0.0.20...v0.0.21