May 9, 2023 1 minute and 52 seconds read

Running Rancher on K3S using K3D

Requirements

  • Colima installed
    • When using Colima make sure to point to the correct docker socket using an export
      export DOCKER_HOST="unix://${HOME}/.colima/default/docker.sock"
  • K3D installed
  • Helm installed
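
To quickly verify the prerequisites are in place (a minimal check, assuming the tools, including kubectl, are on your PATH):

colima version
k3d version
helm version
kubectl version --client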

Steps

# start colima with 4 CPU and 8GB memory
colima start --cpu 4 --memory 8

# add the correct Helm repositories
echo "Adding Helm repos" \
&& helm repo add rancher-latest https://releases.rancher.com/server-charts/latest \
&& helm repo add rancher-stable https://releases.rancher.com/server-charts/stable \
&& helm repo add jetstack https://charts.jetstack.io \
&& helm repo update

# create a cluster where rancher will be installed (name: rmaster -> this will become k3d-rmaster)
## - port 80/443 on localhost will be forwarded to 80/443 in the cluster/load balancer
## - the number of servers and agents are specified explicitly but they are the defaults!
##   servers: the number of servers that will be run
##   agents: the number of worker nodes
k3d cluster create rmaster --port 80:80@loadbalancer --port 443:443@loadbalancer --servers 1 --agents 0
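
# (optional) sanity check: confirm the cluster and its node are up before continuing
k3d cluster list
kubectl get nodes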

# change to the correct Kubernetes context
## actually not required as k3d will switch to the just created context
kubectl config use-context k3d-rmaster

# install Cert Manager
kubectl create namespace cert-manager \
&& helm install cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --version v1.11.1 \
    --set installCRDs=true --debug

# verify if cert-manager is running
kubectl get pods --namespace cert-manager
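
# alternatively, wait until the cert-manager deployments report Available (optional extra check)
kubectl wait --namespace cert-manager --for=condition=Available deployment --all --timeout=180s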

# install Rancher (this can take a while, so don't get scared when you see pods in an error state!)
## - replicas: the number of rancher replicas
## - hostname: set the hostname to your correct ip; for this we are using https://nip.io
##             an alternative could be using ngrok or manually update your hosts file!
## - bootstrapPassword: The password for login into the rancher web ui
## - global.cattle.psp.enabled: We disable pod security policies; we need to harden later on!
kubectl create namespace cattle-system \
&& helm install rancher rancher-latest/rancher \
    --namespace cattle-system \
    --set replicas=1 \
    --set hostname=192-168-68-107.nip.io \
    --set global.cattle.psp.enabled=false \
    --set bootstrapPassword=welcome123 \
    --debug

# verify if rancher is running
kubectl get pods --namespace cattle-system
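
# optionally wait for the rancher deployment to finish rolling out (this can take several minutes)
kubectl --namespace cattle-system rollout status deployment/rancher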

# create client clusters
k3d cluster create rclient1 # (name: rclient1 -> this will become k3d-rclient1)
k3d cluster create rclient2 # (name: rclient2 -> this will become k3d-rclient2)

# import the client clusters via the web interface!
## make sure to pick the correct host; see the rancher install
## this will give an insecure-connection warning; no need to worry, just proceed in your browser
https://192-168-68-107.nip.io
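
## one way to do the import (menu names may differ slightly per Rancher version):
## - in the Rancher UI go to Cluster Management -> Import Existing -> Generic
## - Rancher generates a "kubectl apply -f https://<rancher-hostname>/v3/import/<token>.yaml" command
## - run that command against the client cluster, e.g. after switching context:
kubectl config use-context k3d-rclient1
## then paste the generated kubectl apply command; repeat for rclient2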


March 29, 2021 1 minute and 34 seconds read

Paste image from clipboard directly into org-mode document

Personally I really like using org-mode files for creating documentation and presentations. When working with screenshots, my normal workflow has always been:

  1. Capture screenshot to clipboard
  2. Save the screenshot to a relative ./images folder and give it a descriptive name.
  3. Manually create link to the screenshot

This works OK, but after a while you get the feeling that the whole process could be automated.

The following function can help in automating the whole process.

  1. Capture screenshot
  2. In your org document, paste using C-M-y (or call my/insert-clipboard-image); the screenshot is automatically added to a ./images directory and a link to the image is inserted.
;; Overview
;; --------
;; Inserts an image from the clipboard by prompting the user for a filename.
;; Default extension for the pasted filename is .png

;; A ./images directory will be created relative to the current Org-mode document to store the images.

;; The default name format of the pasted image is:
;; filename: <yyyymmdd>_<hhmmss>_-_<image-filename>.png

;; Important
;; --------
;; This function depends on 'pngpaste' to paste the clipboard image
;; -> $ brew install pngpaste

;; Basic Customization
;; -------------------
;; Include the current Org-mode header as part of the image name.
;; (setq my/insert-clipboard-image-use-headername t)
;; filename: <yyyymmdd>_<hhmmss>_-_<headername>_-_<image-filename>.png

;; Include the buffername as part of the image name.
;; (setq my/insert-clipboard-image-use-buffername t)
;; filename: <yyyymmdd>_<hhmmss>_-_<buffername>_-_<image-filename>.png

;; Full name format
;; filename: <yyyymmdd>_<hhmmss>_-_<buffername>_-_<headername>_-_<image-filename>.png
(defun my/insert-clipboard-image (filename)
  "Inserts an image from the clipboard"
  (interactive "sFilename to paste: ")
  (let ((file
         (concat
          (file-name-directory buffer-file-name)
          "images/"
          (format-time-string "%Y%m%d_%H%M%S_-_")
          (if (bound-and-true-p my/insert-clipboard-image-use-buffername)
              (concat (s-replace "-" "_"
                                 (downcase (file-name-sans-extension (buffer-name)))) "_-_")
            "")
          (if (bound-and-true-p my/insert-clipboard-image-use-headername)
              (concat (s-replace " " "_" (downcase (nth 4 (org-heading-components)))) "_-_")
            "")
          filename ".png")))

    ;; create images directory
    (unless (file-exists-p (file-name-directory file))
      (make-directory (file-name-directory file)))

    ;; paste file from clipboard
    (shell-command (concat "pngpaste " file))
    (insert (concat "[[./images/" (file-name-nondirectory file) "]]"))))

(map! :desc "Insert clipboard image"
      :n "C-M-y" 'my/insert-clipboard-image)

A nice setting in org-mode that also helps when viewing large screenshots is the following:

This displays the screenshot at an acceptable size in your org-mode file.

(after! org
  (setq org-image-actual-width (/ (display-pixel-width) 2)))

March 16, 2021 1 minute and 59 seconds read

Time tracking with Org Mode and sum time per tag

Tracking time using Org Mode is simple and easy. You can quickly create reports of the time spent on specific tasks. But how do you aggregate time across tasks belonging to specific tags?

This can be achieved with a simple formula and an awesome Org package called Org Aggregate.

Input data

The data below is used for time tracking; note that the individual items are tagged.

- Take out the trash :private:
:LOGBOOK:
CLOCK: [2021-03-12 Fri 11:24]--[2021-03-12 Fri 11:30] =>  0:06
:END:
- Update document for client :client1:
:LOGBOOK:
CLOCK: [2021-03-12 Fri 12:45]--[2021-03-12 Fri 13:30] =>  0:45
:END:
- Create my awesome note for work :work:
:LOGBOOK:
CLOCK: [2021-03-13 Sat 11:24]--[2021-03-13 Sat 12:53] =>  1:29
:END:
- Fill in timesheet :work:
:LOGBOOK:
CLOCK: [2021-03-12 Fri 11:24]--[2021-03-12 Fri 11:40] =>  0:16
:END:

Reporting

#+BEGIN: clocktable :scope file :maxlevel 3 :tags t :match "work|client1" :header "#+TBLNAME: timetable\n"
#+TBLNAME: timetable
| Tags    | Headline                              | Time |      |      |     T |
|---------+---------------------------------------+------+------+------+-------|
|         | *Total time*                            | *2:30* |      |      |       |
|---------+---------------------------------------+------+------+------+-------|
|         | Report with filtered tags and sum...  | 2:30 |      |      |       |
|         | \_  Input data                        |      | 2:30 |      |       |
| client1 | \_    Update document for client      |      |      | 0:45 | 00:45 |
| work    | \_    Create my awesome note for work |      |      | 1:29 | 01:29 |
| work    | \_    Fill in timesheet               |      |      | 0:16 | 00:16 |
#+TBLFM: $6='(convert-org-clocktable-time-to-hhmm $5)::@1$6='(format "%s" "T")
#+END:
  • :tags t is used to display the tags
  • :match "work|client1" is used to filter the tags of interest
  • :header "#+TBLNAME: timetable\n" is used to name our table so we can process it later on using Org Aggregate
  • #+TBLFM: uses a function to display the time as hh:mm so we can sum it later on. Note: this is required because the Org Aggregate package we use to aggregate the data expects times in hh:mm format
(defun convert-org-clocktable-time-to-hhmm (time-string)
  "Convert TIME-STRING to a zero-padded HH:MM string."
  (if (> (length time-string) 0)
      ;; strip the bold markers org uses for totals (e.g. *2:30*) and split into hours/minutes
      (let* ((s (s-replace "*" "" time-string))
             (splits (split-string s ":"))
             (hours (car splits))
             (minutes (car (last splits))))
        ;; zero-pad single-digit hours so Org Aggregate can sum the values
        (if (= (length hours) 1)
            (format "0%s:%s" hours minutes)
          (format "%s:%s" hours minutes)))
    time-string))

Use Org Aggregate to sum the times of the tags

#+BEGIN: aggregate :table "timetable" :cols "Tags sum(T);U" :cond (not (equal Tags ""))
#+TBLNAME: timetable
| Tags    | sum(T);U |
|---------+----------|
| client1 |    00:45 |
| work    |    01:45 |
#+END:
  • :cond is used to filter empty rows from the input data

October 16, 2020 0 minutes and 19 seconds read

Get pretty org-bullets in Doom Emacs

When installing Doom Emacs and using org-mode the default bullets are `*`. In order to get some fancy bullets the following steps need to be taken.

  1. Add the org-mode +pretty flag to your org settings in init.el. To read more on the available flags, check the Doom Emacs org module `lang/org`.
;; init.el
:lang
(org +pretty)              ; organize your plain life in plain text

;; config.el (org-superstar is pulled in by the +pretty flag)
(setq org-superstar-headline-bullets-list '("⁖" "◉" "○" "✸" "✿"))

November 28, 2018 0 minutes and 52 seconds read

Ranger - Show File in Path Finder

Ranger is a Vim-inspired file manager for the console (https://ranger.github.io/) and can easily be installed using brew install ranger. When working in the terminal it is sometimes nice to open files in the default Finder app, or in the excellent alternative called Path Finder (https://cocoatech.com/#/).

To reveal your files in the Finder or Path Finder create a commands.py in ~/.config/ranger and paste the following code.

from ranger.api.commands import Command

class show_files_in_path_finder(Command):
    """
    :show_files_in_path_finder

    Present selected files in Path Finder
    """

    def execute(self):
        import subprocess
        files = ",".join(['"{0}" as POSIX file'.format(file.path) for file in self.fm.thistab.get_selection()])
        reveal_script = "tell application \"Path Finder\" to reveal {{{0}}}".format(files)
        activate_script = "tell application \"Path Finder\" to activate"
        script = "osascript -e '{0}' -e '{1}'".format(reveal_script, activate_script)
        self.fm.notify(script)
        subprocess.check_output(["osascript", "-e", reveal_script, "-e", activate_script])

class show_files_in_finder(Command):
    """
    :show_files_in_finder

    Present selected files in finder
    """

    def execute(self):
        import subprocess
        files = ",".join(['"{0}" as POSIX file'.format(file.path) for file in self.fm.thistab.get_selection()])
        reveal_script = "tell application \"Finder\" to reveal {{{0}}}".format(files)
        activate_script = "tell application \"Finder\" to set frontmost to true"
        script = "osascript -e '{0}' -e '{1}'".format(reveal_script, activate_script)
        self.fm.notify(script)
        subprocess.check_output(["osascript", "-e", reveal_script, "-e", activate_script])

Restart Ranger and you can now execute the commands :show_files_in_path_finder or :show_files_in_finder.


May 12, 2017 0 minutes and 12 seconds read

Infrastructure and System Monitoring using Prometheus

Last week I was lucky enough to host a talk at Devoxx UK on the subject of “Infrastructure and System Monitoring using Prometheus”. You can find the material here:

Feel free to share and distribute.


November 25, 2016 1 minute and 42 seconds read

Dockerize your Grails application

Ever wanted to create a Docker image that contains your Grails application? You are in luck! It is easy to do.

Let us first create a new Grails application. In this example we will create a basic application using the rest-api profile.

Prerequisite: Docker is installed on your machine.

$ grails create-app docker-test --profile rest-api

After the Grails application has been created, we will need to add the following files to our project.

  • Dockerfile (determines what our Docker image will contain)
  • docker-entrypoint.sh (script that is responsible for starting our Grails application)
// file: /src/main/docker/Dockerfile

FROM openjdk:latest

# set environment options
ENV JAVA_OPTS="-Xms64m -Xmx256m -XX:MaxMetaspaceSize=128m"

RUN mkdir -p /app
WORKDIR /app

COPY /app/application.jar application.jar
COPY /app/docker-entrypoint.sh docker-entrypoint.sh

# Set file permissions
RUN chmod +x /app/docker-entrypoint.sh

# Set start script as entrypoint
ENTRYPOINT ["./docker-entrypoint.sh"]
// file: /src/main/docker/app/docker-entrypoint.sh
#!/bin/bash
set -e

exec java ${JAVA_OPTS} -jar application.jar "$@"
# exec java ${JAVA_OPTS} -Dserver.port=${SERVER_PORT} -jar application.jar "$@"

The next step is to add the tasks to our build.gradle so it can generate the Docker image.

So add the following snippet to your build.gradle file:

// file: /build.gradle
String getDockerImageName() {
  "docker-test"
}

task buildDockerImage(type:Exec) {
    group = 'docker'
    description = 'Build a docker image'
    commandLine 'docker', 'build', '-f', 'build/docker/Dockerfile', '-t', "${dockerImageName}", 'build/docker'

    doFirst {
        println ">> Creating image: ${dockerImageName}"
        /* copy the generated war file to /build/docker/app */
        copy {
            from war.archivePath
            into 'build/docker/app/'
        }
        /* copy artifacts from src/main/docker/app into the build/docker/app */
        copy {
            from 'src/main/docker/app/'
            into 'build/docker/app'
        }
        /* copy Dockerfile from src/main/docker into the build/docker */
        copy {
            from('src/main/docker/') {
                include 'Dockerfile'
            }
            into 'build/docker'
        }
        /* rename war file to application.jar */
        file("build/docker/app/${war.archiveName}").renameTo("build/docker/app/application.jar")
    }
}

Now that we have everything in place we can build the image and start the container.

Create the Docker image using the assemble and buildDockerImage tasks:

$ ./gradlew assemble buildDockerImage

Run a container based on the previously created image:

$ docker run -it --rm -p 8080:8080 docker-test

This runs the container in interactive mode (-it) and removes the container when it is stopped (--rm). Port 8080 in the container is mapped to port 8080 on your host system (-p 8080:8080).

The endpoint will now be available in your browser; just visit http://localhost:8080
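
As a quick smoke test (assuming the container from the previous step is still running):

$ curl -i http://localhost:8080/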


November 24, 2016 1 minute and 20 seconds read

Creating/Pushing Docker images using Gradle without plugins

In our current project we were heavily focused on using Gradle plugins to create Docker images. We used plugins to create the images and push them to our AWS ECR repositories. When using these plugins we hit various bugs, related to the fact that not all developers were using a Linux operating system to test out their containers. In the end we took a look at how we could create those images without using additional plugins.

Prerequisite: Docker is installed on your machine.

Creating an image

The following snippet will create a Docker image when running gradle buildDockerImage.

- application layout
| build.gradle
| > src
| >- main
| >-- docker (contains a Dockerfile)
| >--- app (contains data that can be used in your Dockerfile)
/*
    You can put some logic in the getDockerImageName to determine how your
    Docker image should be created.
*/
String getDockerImageName() {
  "my-first-docker-image"
}

/*
    Execute a docker build using commandline pointing to our Dockerfile that
    has been copied to /build/docker/.
*/
task buildDockerImage(type:Exec) {
    group = 'docker'
    description = 'Build a docker image'
    commandLine 'docker', 'build', '-f', 'build/docker/Dockerfile', '-t', "${dockerImageName}", 'build/docker'

    doFirst {
        println ">> Creating image: ${dockerImageName}"
        /* copy artifacts from src/main/docker/app into the build/docker/app */
        copy {
            from 'src/main/docker/app/'
            into 'build/docker/app'
        }
        /* copy Dockerfile from src/main/docker into the build/docker */
        copy {
            from('src/main/docker/') {
                include 'Dockerfile'
            }
            into 'build/docker'
        }
    }
}

Pushing an image

Pushing an image without using plugins is just as easy.

task pushDockerImage(type: Exec) {
    group = 'docker'
    description = 'Push a docker image'
    commandLine 'docker', 'push', "${dockerImageName}"

    doFirst {
        println ">> Pushing image: ${dockerImageName}"
    }
}
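
Assuming both tasks are defined as above, building and pushing the image then boils down to:

gradle buildDockerImage pushDockerImage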

Using this approach without unneeded Gradle plugins resulted in an easy way to create containers on different platforms.


November 17, 2016 1 minute and 41 seconds read

Identifying Docker container outage using Prometheus

Prometheus is an open-source monitoring system with a dimensional data model, a flexible query language, an efficient time series database and a modern alerting approach.

Metric data is pulled (on a regular interval) from so-called exporters, which expose metrics coming from applications, operating systems, etc.

+------------------+                               +----------+     Visualize data
|  +------------+  |                               | Grafana  +---> coming from
|  | Dockerized |  |                               +----+-----+     Prometheus
|  | Application|  |                                    |
|  +------------+  |                                    ^
|  +------------+  |  Pull data   +----------+          |
|  |  CAdvisor  +--------->-------+Prometheus+----------+
|  +------------+  |              +---------++
|                  |                        |
| Operating System |                        |
|       with       |                        |
| Docker installed |                        |
|                  |                        v
+------------------+           Prometheus collects data
                               coming from remote systems

In the diagram above, cAdvisor is such an exporter. There are other exporters, like the Node Exporter, which exposes machine metrics. cAdvisor is used to get Docker container metrics.

cAdvisor

cAdvisor is a project from Google that analyzes the resource usage and performance characteristics of running Docker containers. When you run a Dockerized application and start a cAdvisor container, you instantly have metrics available for all running containers.
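
A minimal way to start cAdvisor next to your application containers looks roughly like this (image name and volume mounts taken from the cAdvisor documentation of that time; adjust to your setup):

docker run -d \
  --name=cadvisor \
  --publish=8080:8080 \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest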

cAdvisor exposes a lot of metrics, one of which is container_last_seen. You can use this metric in Prometheus to identify whether a container has left the building :) The challenge with Prometheus is that it keeps the data around for a specific amount of time, the so-called stale timeout. This means that Prometheus will keep reporting the last received value until this timeout has occurred (default 5 minutes). That is of course far too long if we need to identify that a container is gone.

So if you would normally query like this:

count(container_last_seen{job="<jobname>",name=~".*<containername>.*"})

This would keep returning results for up to 5 minutes, which is way too long.

A simple alternative query to identify whether the container is gone looks like this:

count(time() - container_last_seen{job="<jobname>",name=~".*<containername>.*"} < 30) OR vector(0)

The ‘30’ is the time in seconds after which we want to be notified that a container is gone. This value has to be larger than the scrape (pull) interval of your job.

Using this query you can create a nice Singlestat panel in Grafana to display an alert when the container is gone.


November 16, 2016 4 minutes and 45 seconds read

Building a Consul cluster using Terraform/AWS

Consul is a tool for service discovery and configuration that provides:

  • Service Discovery
  • Health Checking
  • Key/Value Store
  • Multi Datacenter support

For more information on Consul itself, please have a look at the excellent documentation.

Is it really easy?

Setting up a Consul cluster seems easy: just follow one of the many tutorials out there and you will have a Consul cluster running on your local machine in a few steps…

But what if you need to deploy this cluster in an AWS environment? How do you create the cluster, and how can you make sure it is always available?

This simple write-up is just an example to give you an idea of how such a Consul cluster can be created and provisioned using Terraform only. The goal is to have a cluster running on EC2 nodes, using the official Docker images provided by Consul itself.

Creating a Consul cluster

The principle is not that hard: Consul nodes can discover each other based on IP address. If you feed the cluster members the IP addresses of the other members, you are good to go. In this example we are going to start a number of Consul cluster members. The first node will not be able to form a cluster on its own, but when the second node starts up it will get the IP of the first node, the first node will learn the IP of the second one, and so on. So once you start more than two nodes you are good to go.

+------+   +------+  +------+  +------+  +------+
|Consul|   |Consul|  |Consul|  |Consul|  |Consul|
|Node 1|   |Node 2|  |Node 3|  |Node 4|  |Node n|
+------+   +------+  +------+  +------+  +------+
    < Find each other based on ip address >

The power is in the user-data script that bootstraps the Consul cluster nodes. In this example the nodes will find each other based on a query using aws ec2 describe-instances that looks for nodes with a specific name; from the identified nodes we extract the IP addresses that are used to join the Consul cluster. You can of course modify the query to your own needs. The user-data script is used in the launch configuration.

So enough talk… let's start :)

The given examples are intentionally kept simple, so you will need to tweak the Terraform code to your own needs.

Step 1: Define a launch configuration

The core component of our Consul cluster is the launch configuration. This launch configuration determines what user-data file needs to be executed when launching a new instance.

resource "aws_launch_configuration" "consul-cluster" {

  /*
    in this case we use a docker optimized ami, because we are going
    to use the official Consul docker image as a starter.
    You can always use an ami on which you install docker manually!
  */
  image_id  = "docker-optimized-ami-0123456789"

  /*
    the user-data script that holds the magic
  */
  user_data = "${data.template_file.consul-cluster.rendered}"
  instance_type = "t2.micro"

  /*
    make sure you open the correct ports so the Consul nodes can
    discover each other; the actual security group is not shown
  */
  security_groups = ["sg-0123456789"]
  key_name = "your-deploy-key"

  /*
    use a policy which grants read access to the EC2 API
  */
  iam_instance_profile = "arn:aws:iam::0123456789:read_ec2_policy/ec2"
}

/*
    The template file used for the user-data
*/
data "template_file" "consul-cluster" {
  template = "${file("user-data-consul-cluster.tpl")}"
  vars {
    // the name must match the Name tag of the autoscaling group
    consul_cluster_name = "consul-cluster-member"

    // the number of instances that need to be in the cluster to be healthy
    consul_cluster_min_size = 3
  }
}

Step 2: Create the template file

The user-data file queries AWS using the aws ec2 describe-instances API and returns the EC2 instances that have a matching name, using the --filters option 'Name=tag:Name,Values=${consul_cluster_name}'.

All retrieved instances are then queried for their private IP and the values are stored in a list. After building the list, the IP of the current instance is removed from it.

A Consul-specific join string is composed and passed to the Docker image. This enables the Consul Docker image to check for the available servers when starting.

// File: user-data-consul-cluster.tpl

#!/bin/bash -ex
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

echo 'installing additional software'
for i in {1..5}
do
    yum install -y aws-cli && break || sleep 120
done

################################################################################
# Running Consul
################################################################################
# get current instance ip
INSTANCE_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)

# get list of available Consul servers; based on Name (value) tag!
IP_LIST=$(aws ec2 describe-instances --region us-east-1 \
--filters 'Name=tag:Name,Values=${consul_cluster_name}' \
'Name=instance-state-name,Values=running' \
--query "Reservations[*].Instances[*].PrivateIpAddress" \
--output=text)

# remove the current instance ip from the list of available servers
IP_LIST="$${IP_LIST/$$INSTANCE_IP/}"

# remove duplicated spaces, \r\n and replace space by ','
IP_LIST=$(echo $$IP_LIST | tr -s " " | tr -d '\r\n' | tr -s ' ' ',')

# create join string
for i in $(echo $IP_LIST | sed "s/,/ /g")
do
    JOIN_STRING+="-retry-join $i "
done

# - run Consul
docker run -d --net=host \
-e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
--name consul-server consul:latest \
agent -server -bind=$INSTANCE_IP $JOIN_STRING \
-bootstrap-expect=${consul_cluster_min_size} -ui -client 0.0.0.0
# ------------------------------------------------------------------------------

Step 3: Create an autoscaling group

/*
    creates an autoscaling group so servers are created when needed
*/
resource "aws_autoscaling_group" "consul-cluster" {
  min_size = 3
  max_size = 5
  desired_capacity = 3
  min_elb_capacity = 3

  launch_configuration = "${aws_launch_configuration.consul-cluster.name}"
  load_balancers = ["${aws_elb.consul-cluster.id}"]

  tag {
    key = "Name"
    /*
      note: this is the value that is being searched for in the user-data
    */
    value = "consul-cluster-member"
    propagate_at_launch = true
  }
}

Step 4: Create an ELB as frontend for the Consul cluster

resource "aws_elb" "consul-cluster" {
  name = "consul-cluster"
  subnets = ["sn-0123456789"]
  security_groups = ["sg-0123456789"]

  listener {
    instance_port = 8300
    instance_protocol = "tcp"
    lb_port = 8300
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8301
    instance_protocol = "tcp"
    lb_port = 8301
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8302
    instance_protocol = "tcp"
    lb_port = 8302
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8400
    instance_protocol = "tcp"
    lb_port = 8400
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8500
    instance_protocol = "http"
    lb_port = 8500
    lb_protocol = "http"
  }

  listener {
    instance_port = 8600
    instance_protocol = "tcp"
    lb_port = 8600
    lb_protocol = "tcp"
  }

  health_check {
    target = "HTTP:8500/v1/status/leader"
    healthy_threshold = 2
    unhealthy_threshold = 2
    interval = 30
    timeout = 5
  }
}

When putting all the pieces together you should now have a running Consul cluster!
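
Once the autoscaling group has brought the instances up, a quick way to check the cluster through the ELB is the Consul status API (the DNS name below is a placeholder for your own ELB endpoint):

# a leader should have been elected
curl http://<consul-elb-dns-name>:8500/v1/status/leader

# and all cluster members should show up as peers
curl http://<consul-elb-dns-name>:8500/v1/status/peers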


November 2, 2016 0 minutes and 31 seconds read

Change the port of actuator endpoint in a Grails application

When using actuator endpoints to expose metrics in a Grails (Spring Boot) application, it may be useful to run the metrics on a different port.

This enables you to hide the metrics from the public and use the different port in an AWS infrastructure so that the metrics are only available internally.

Let us first enable the actuator endpoints:

// File: grails-app/conf/application.yml

# Spring Actuator Endpoints are Disabled by Default
endpoints:
    enabled: true
    jmx:
        enabled: true

To change the port on which the metrics run, add the lines below to the application.yml:

// File: grails-app/conf/application.yml

management:
    port: 9000

Now when you start your Grails application it runs on port 8080 and the metrics are available on port 9000 (e.g. http://localhost:9000/metrics).
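
A quick way to verify, assuming the application runs locally with the configuration above:

# the application itself
curl -i http://localhost:8080/

# the actuator metrics on the separate management port
curl -i http://localhost:9000/metrics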