
ChatOps: Managing Kubernetes Deployments in Webex

This is the third post in a series about writing ChatOps services on top of the Webex API.  In the first post, we built a Webex Bot that received message events from a group room and printed the event JSON out to the console.  In the second, we added security to that Bot, adding an encrypted authentication header to Webex events, and subsequently adding a simple list of authorized users to the event handler.  We also added user feedback by posting messages back to the room where the event was raised.

In this post, we'll build on what was done in the first two posts, and start to apply real-world use cases to our Bot.  The goal here will be to manage Deployments in a Kubernetes cluster using commands entered into a Webex room.  Not only is this a fun challenge to solve, but it also provides wider visibility into the goings-on of an ops team, as they can scale a Deployment or push out a new container version in the public view of a Webex room.  You can find the completed code for this post on GitHub.

This post assumes that you've completed the steps listed in the first two blog posts.  You can find the code from the second post here.  Also important: be sure to read the first post to learn how to make your local development environment publicly accessible so that Webex Webhook events can reach your API.  Make sure your tunnel is up and running and Webhook events can flow through to your API successfully before proceeding to the next section.  In this case, I've set up a new Bot called Kubernetes Deployment Manager, but you can use your existing Bot if you like.  From here on out, this post assumes that you've taken these steps and have a successful end-to-end data flow.


Let's take a look at what we're going to build:

Architecture Diagram

Building on top of our existing Bot, we're going to create two new services: MessageIngestion and Kubernetes.  The latter will take a configuration object that gives it access to our Kubernetes cluster and will be responsible for sending requests to the K8s control plane.  Our Index Router will continue to act as a controller, orchestrating data flows between services.  And our WebexNotification service that we built in the second post will continue to be responsible for sending messages back to the user in Webex.

Our Kubernetes Resources

In this section, we'll set up a simple Deployment in Kubernetes, as well as a Service Account that we can leverage to communicate with the Kubernetes API using the NodeJS SDK.  Feel free to skip this part if you already have these resources created.

This section also assumes that you have a Kubernetes cluster up and running, and that both you and your Bot have network access to interact with its API.  There are plenty of resources online for getting a Kubernetes cluster set up and getting kubectl installed, both of which are beyond the scope of this blog post.

Our Test Deployment

To keep things simple, I'm going to use Nginx as my deployment container – an easily-accessible image that doesn't have any dependencies to get up and running.  If you have a Deployment of your own that you'd like to use instead, feel free to replace what I've listed here with that.

# in resources/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80

Our Service Account and Role

The next step is to make sure our Bot code has a way of interacting with the Kubernetes API.  We can do that by creating a Service Account (SA) that our Bot will assume as its identity when calling the Kubernetes API, and ensuring it has proper access with a Kubernetes Role.

First, let's set up an SA that can interact with the Kubernetes API:

# in resources/sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chatops-bot

Now we'll create a Role in our Kubernetes cluster that will have access to pretty much everything in the default Namespace.  In a real-world application, you'll likely want to take a more restrictive approach, only providing the permissions that allow your Bot to do what you intend; but wide-open access will work for a simple demo:

# in resources/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: chatops-admin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
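As an aside, if you do want to tighten things up, a more restrictive Role covering just the scale and image-update workflows in this post might look something like the sketch below.  This is a hypothetical alternative – the name is illustrative, and your own use case may need different apiGroups, resources, or verbs:

```yaml
# in resources/role-restricted.yaml (hypothetical, not part of the demo)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: chatops-deployment-manager
rules:
# only Deployments in the apps API group, and only the verbs our Bot uses
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch"]
```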

Finally, we'll bind the Role to our SA using a RoleBinding resource:

# in resources/rb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chatops-admin-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: chatops-bot
  apiGroup: ""
roleRef:
  kind: Role
  name: chatops-admin
  apiGroup: rbac.authorization.k8s.io

Apply these using kubectl:

$ kubectl apply -f resources/sa.yaml
$ kubectl apply -f resources/role.yaml
$ kubectl apply -f resources/rb.yaml

Once your SA is created, fetching its information will show you the name of the Secret in which its Token is stored.

Screenshot of the Service Account's describe output

Fetching information about that Secret will print out the Token string in the console.  Be careful with this Token, as it's your SA's secret, used to access the Kubernetes API!

The secret token value

Configuring the Kubernetes SDK

Since we're writing a NodeJS Bot in this blog post, we'll use the JavaScript Kubernetes SDK for calling our Kubernetes API.  You'll notice, if you look at the examples in the Readme, that the SDK expects to be able to pull from a local kubectl configuration file (which, for example, is stored on a Mac at ~/.kube/config).  While that might work for local development, it's not ideal for Twelve Factor development, where we typically pass in our configurations as environment variables.  To get around this, we can pass in a pair of configuration objects that mimic the contents of our local Kubernetes config file, and use those configuration objects to assume the identity of our newly created Service Account.

Let's add some environment variables to the AppConfig class that we created in the previous post:

// in config/AppConfig.js
// inside the constructor block
// after the previous environment variables

// whatever you'd like to name this cluster; any string will do
this.clusterName = process.env['CLUSTER_NAME'];
// the base URL of your cluster, where the API can be reached
this.clusterUrl = process.env['CLUSTER_URL'];
// the CA cert set up for your cluster, if applicable
this.clusterCert = process.env['CLUSTER_CERT'];
// the SA name from above - chatops-bot
this.kubernetesUserame = process.env['KUBERNETES_USERNAME'];
// the token value referenced in the screenshot above
this.kubernetesToken = process.env['KUBERNETES_TOKEN'];

// the rest of the file is unchanged…

These five lines will allow us to pass configuration values into our Kubernetes SDK and configure a local client.  To do that, we'll create a new service called KubernetesService, which we'll use to communicate with our K8s cluster:

// in services/Kubernetes.js

import {KubeConfig, AppsV1Api, PatchUtils} from '@kubernetes/client-node';

export class KubernetesService {
    constructor(appConfig) {
        this.appClient = this._initAppClient(appConfig);
        this.requestOptions = { "headers": { "Content-type": PatchUtils.PATCH_FORMAT_JSON_PATCH } };
    }

    _initAppClient(appConfig) { /* we'll fill this in soon */ }

    async takeAction(k8sCommand) { /* we'll fill this in later */ }
}

This set of imports at the top gives us the objects and methods that we'll need from the Kubernetes SDK to get up and running.  The requestOptions property set in this constructor will be used when we send updates to the K8s API.

Now, let's populate the contents of the _initAppClient method so that we can have an instance of the SDK ready to use in our class:

// inside the KubernetesService class
_initAppClient(appConfig) {
    // building objects from the env vars we pulled in
    const cluster = {
        name: appConfig.clusterName,
        server: appConfig.clusterUrl,
        caData: appConfig.clusterCert
    };
    const user = {
        name: appConfig.kubernetesUserame,
        token: appConfig.kubernetesToken,
    };
    // create a new config factory object
    const kc = new KubeConfig();
    // pass in our cluster and user objects
    kc.loadFromClusterAndUser(cluster, user);
    // return the client created by the factory object
    return kc.makeApiClient(AppsV1Api);
}

Simple enough.  At this point, we have a Kubernetes API client ready to use, stored in a class property so that public methods can leverage it in their internal logic.  Let's move on to wiring this into our route handler.

Message Ingestion and Validation

In a previous post, we took a look at the full payload of JSON that Webex sends to our Bot when a new message event is raised.  It's worth taking a look again, since it will indicate what we need to do in our next step:

Message event body

If you look through this JSON, you'll notice that nowhere does it list the actual content of the message that was sent; it simply provides event data.  However, we can use the message ID found in the event data to call the Webex API and fetch that content, so that we can take action on it.  To do so, we'll create a new service called MessageIngestion, which will be responsible for pulling in messages and validating their content.
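To make the screenshot's contents concrete, here's an abridged sketch of the event fields that this post relies on.  The field names match the Webex webhook payload, but the values below are purely illustrative:

```javascript
// Abridged, illustrative webhook event body (values are fake)
const event = {
  data: {
    // the message ID, used below to fetch the message content
    id: "Y2lzY29zcGFyazovL3VzL01FU1NBR0UvMTIz",
    // the room to respond in, used later by the notification service
    roomId: "Y2lzY29zcGFyazovL3VzL1JPT00vNDU2",
    // the sender, used to @-mention the user in our reply
    personEmail: "user@example.com"
  }
};

console.log(event.data.personEmail); // → user@example.com
```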

Fetching Message Content

We'll start with a very simple constructor that pulls in the AppConfig to build out its properties, and one simple method that calls a couple of stubbed-out private methods:

// in services/MessageIngestion.js

export class MessageIngestion {
    constructor(appConfig) {
        this.botToken = appConfig.botToken;
    }

    async determineCommand(event) {
        const message = await this._fetchMessage(event);
        return this._interpret(message);
    }

    async _fetchMessage(event) { /* we'll fill this in next */ }

    _interpret(rawMessageText) { /* we'll talk about this below */ }
}

We've got a good start, so now it's time to write our code for fetching the raw message text.  We'll call the same /messages endpoint that we used to create messages in the previous blog post, but in this case, we'll fetch a specific message by its ID:

// in services/MessageIngestion.js
// inside the MessageIngestion class

// notice we're using fetch, which requires NodeJS 17.5 or higher, and a runtime flag
// see the previous post for more info
async _fetchMessage(event) {
    const res = await fetch(`https://webexapis.com/v1/messages/${event.data.id}`, {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${this.botToken}`
        },
        method: "GET"
    });
    const messageData = await res.json();
    if(!messageData.text) {
        throw new Error("Could not fetch message content.");
    }
    return messageData.text;
}

If you console.log the messageData output from this fetch request, it will look something like this:

The messageData object

As you can see, the message content takes two forms – first in plain text (identified with a red arrow), and second in an HTML block.  For our purposes, as you can see from the code block above, we'll use the plain text content that doesn't include any formatting.

Message Analysis and Validation

This is a complex topic to say the least, and its complexities are beyond the scope of this blog post.  There are lots of ways to analyze the content of a message to determine user intent.  You could explore natural language processing (NLP), for which Cisco offers an open-source Python library called MindMeld.  Or you could leverage off-the-shelf software like Amazon Lex.

In my code, I took the simple approach of static string analysis, with some rigid rules around the expected format of the message, e.g.:

<tagged-bot-name> scale <name-of-deployment> to <number-of-instances>

It's not the most user-friendly approach, but it gets the job done for a blog post.

I have two intents available in my codebase – scaling a Deployment and updating a Deployment with a new image tag.  A switch statement runs analysis on the message text to determine which of the actions is intended, and a default case throws an error that will be handled in the index route handler.  Both have their own validation logic, which adds up to over sixty lines of string manipulation, so I won't list it all here.  If you're interested in reading through or leveraging my string manipulation code, it can be found on GitHub.
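To give a flavor of what that string manipulation looks like, here's a minimal, hypothetical sketch of an interpreter that handles only the scale intent.  It's far less thorough than the validation in the repo, but it shows the general shape of the approach:

```javascript
// Minimal sketch of static string analysis for the "scale" intent only.
// The real codebase also handles the image-update intent and does far more validation.
function interpret(rawMessageText) {
    // e.g. "KubernetesDeploymentManager scale nginx-deployment to 3"
    const words = rawMessageText.trim().split(/\s+/);
    const scaleIndex = words.indexOf("scale");
    // expect the shape: ... scale <deployment-name> to <count>
    if (scaleIndex !== -1 && words[scaleIndex + 2] === "to") {
        const scaleTarget = parseInt(words[scaleIndex + 3], 10);
        if (Number.isNaN(scaleTarget)) {
            throw new Error("Scale target must be a number.");
        }
        return {
            type: "scale",
            deploymentName: words[scaleIndex + 1],
            scaleTarget
        };
    }
    // default case: no recognized intent
    throw new Error("Could not determine a supported command.");
}

console.log(interpret("Bot scale nginx-deployment to 3").scaleTarget); // → 3
```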

Analysis Output

The happy-path output of the _interpret method is a new data transfer object (DTO) created in a new file:

// in dto/KubernetesCommand.js

export class KubernetesCommand {
    constructor(props = {}) {
        this.type = props.type;
        this.deploymentName = props.deploymentName;
        this.imageTag = props.imageTag;
        this.scaleTarget = props.scaleTarget;
    }
}
This standardizes the expected format of the analysis output, which can be anticipated by the various command handlers that we'll add to our Kubernetes service.
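For instance, a scale command parsed from the message format shown earlier would produce a DTO along these lines.  The class below mirrors the one above so the snippet is self-contained, and the values are illustrative:

```javascript
// Mirror of the KubernetesCommand DTO, repeated here for a runnable example
class KubernetesCommand {
    constructor(props = {}) {
        this.type = props.type;
        this.deploymentName = props.deploymentName;
        this.imageTag = props.imageTag;
        this.scaleTarget = props.scaleTarget;
    }
}

// what the analysis output of "… scale nginx-deployment to 3" would carry
const cmd = new KubernetesCommand({
    type: "scale",
    deploymentName: "nginx-deployment",
    scaleTarget: 3
});

console.log(cmd.type);     // → scale
console.log(cmd.imageTag); // → undefined (only set for image-update commands)
```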

Sending Commands to Kubernetes

For simplicity's sake, we'll focus on the scaling workflow rather than covering both of the workflows I've coded.  Suffice it to say, this is by no means scratching the surface of what's possible in your Bot's interactions with the Kubernetes API.

Creating a Webex Notification DTO

The first thing we'll do is craft the shared DTO that will contain the output of our Kubernetes command methods.  This will be passed into the WebexNotification service that we built in our last blog post and will standardize the expected fields for the methods in that service.  It's a very simple class:

// in dto/Notification.js

export class Notification {
    constructor(props = {}) {
        this.success = props.success;
        this.message = props.message;
    }
}
This is the object we'll build when we return the results of our interactions with the Kubernetes SDK.

Handling Commands

Earlier in this post, we stubbed out the public takeAction method in the Kubernetes service.  This is where we'll determine what action is being requested, and then pass it to internal private methods.  Since we're only looking at the scale action in this post, we'll have two paths in this implementation.  The code on GitHub has more.

// in services/Kubernetes.js
// inside the KubernetesService class

async takeAction(k8sCommand) {
    let result;
    switch (k8sCommand.type) {
        case "scale":
            result = await this._updateDeploymentScale(k8sCommand);
            break;
        default:
            throw new Error(`The action type ${k8sCommand.type} that was determined by the system is not supported.`);
    }
    return result;
}

Very simple – if a recognized command type is identified (in this case, just "scale"), an internal method is called and the results are returned.  If not, an error is thrown.

Implementing our internal _updateDeploymentScale method requires very little code.  However, it leverages the K8s SDK, which, to say the least, isn't very intuitive.  The data payload that we create includes an operation (op) that we'll perform on a Deployment configuration property (path), with a new value (value).  The SDK's patchNamespacedDeployment method is documented in the Typedocs linked from the SDK repo.  Here's my implementation:

// in services/Kubernetes.js
// inside the KubernetesService class

async _updateDeploymentScale(k8sCommand) {
    // craft a PATCH body with an updated replica count
    const patch = [
        {
            "op": "replace",
            "path": "/spec/replicas",
            "value": k8sCommand.scaleTarget
        }
    ];
    // call the K8s API with a PATCH request
    const res = await this.appClient.patchNamespacedDeployment(k8sCommand.deploymentName,
        "default", patch, undefined, undefined, undefined, undefined,
        this.requestOptions);
    // validate the response and return a Notification object to the caller
    return this._validateScaleResponse(k8sCommand, res.body);
}

The method on the last line of that code block is responsible for crafting our response output.

// in services/Kubernetes.js
// inside the KubernetesService class
// (this assumes Notification is imported from '../dto/Notification.js' at the top of the file)

_validateScaleResponse(k8sCommand, template) {
    if (template.spec.replicas === k8sCommand.scaleTarget) {
        return new Notification({
            success: true,
            message: `Successfully scaled to ${k8sCommand.scaleTarget} instances on the ${k8sCommand.deploymentName} deployment`
        });
    } else {
        return new Notification({
            success: false,
            message: `The Kubernetes API returned a replica count of ${template.spec.replicas}, which does not match the desired count of ${k8sCommand.scaleTarget}`
        });
    }
}
Updating the Webex Notification Service

We're almost at the end!  We still have one service that needs to be updated.  In our last blog post, we created a very simple method that sent a message to the Webex room where the Bot was called, based on a simple success or failure flag.  Now that we've built a more complex Bot, we need more complex user feedback.

There are only two methods that we need to cover here.  They could easily be compacted into one, but I prefer to keep them separate for granularity.

The public method that our route handler will call is sendNotification, which we'll refactor as follows:

// in services/WebexNotifications.js
// inside the WebexNotifications class

// notice that we're now taking the original event
// and the Notification object
async sendNotification(event, notification) {
    let message = `<@personEmail:${event.data.personEmail}>`;
    if (!notification.success) {
        message += ` Oh no! Something went wrong! ${notification.message}`;
    } else {
        message += ` Well done! ${notification.message}`;
    }
    const req = this._buildRequest(event, message); // a new private method, defined below
    const res = await fetch(req);
    return res.json();
}

Finally, we'll build the private _buildRequest method, which returns a Request object that can be passed to the fetch call in the method above:

// in services/WebexNotifications.js
// inside the WebexNotifications class

_buildRequest(event, message) {
    return new Request("https://webexapis.com/v1/messages/", {
        headers: this._setHeaders(),
        method: "POST",
        body: JSON.stringify({
            roomId: event.data.roomId,
            markdown: message
        })
    });
}
Tying Everything Together in the Route Handler

In previous posts, we used simple route handler logic in routes/index.js that first logged out the event data, and then went on to respond to a Webex user depending on their access.  We'll now take a different approach, which is to wire in our services.  We'll start by pulling in the services we've created so far, keeping in mind that this will all take place after the auth/authz middleware checks are run.  Here is the full code of the refactored route handler, with changes taking place in the import statements, initializations, and handler logic.

// revised routes/index.js

import express from 'express';
import {AppConfig} from '../config/AppConfig.js';
import {WebexNotifications} from '../services/WebexNotifications.js';
import {MessageIngestion} from "../services/MessageIngestion.js";
import {KubernetesService} from '../services/Kubernetes.js';
import {Notification} from "../dto/Notification.js";

const router = express.Router();
const config = new AppConfig();
const webex = new WebexNotifications(config);
const ingestion = new MessageIngestion(config);
const k8s = new KubernetesService(config);

// Our refactored route handler
router.post('/', async function(req, res) {
  const event = req.body;
  try {
    // message ingestion and analysis
    const command = await ingestion.determineCommand(event);
    // taking action based on the command
    const notification = await k8s.takeAction(command);
    // respond to the user
    const wbxOutput = await webex.sendNotification(event, notification);
    res.statusCode = 200;
    res.end();
  } catch (e) {
    // respond to the user
    await webex.sendNotification(event, new Notification({success: false, message: e}));
    res.statusCode = 500;
    res.end('Something went terribly wrong!');
  }
});

export default router;

Testing It Out!

If your service is publicly available, or if it's running locally and your tunnel is exposing it to the internet, go ahead and send a message to your Bot to try it out.  Remember that our test Deployment was named nginx-deployment, and we started with two instances.  Let's scale to three:

Successful scale to 3 instances

That takes care of the happy path.  Now let's see what happens if our command fails validation:

Failing validation

Success!  From here, the possibilities are endless.  Feel free to share your experiences leveraging ChatOps for managing your Kubernetes deployments in the comments section below.

Follow Cisco Learning & Certifications

Twitter, Facebook, LinkedIn and Instagram.

