Ranger - Show File in Path Finder

Ranger is a VIM-inspired file manager for the console (https://ranger.github.io/) and can easily be installed with brew install ranger. When working in the terminal it is sometimes nice to reveal files in the default Finder app, or in the excellent alternative Path Finder (https://cocoatech.com/#/).

To reveal your files in Finder or Path Finder, create a commands.py in ~/.config/ranger and paste in the following code.

from ranger.api.commands import Command

class show_files_in_path_finder(Command):
    """
    :show_files_in_path_finder

    Present selected files in Path Finder
    """

    def execute(self):
        import subprocess
        # build an AppleScript list of the selected files as POSIX file references
        files = ",".join(['"{0}" as POSIX file'.format(file.path) for file in self.fm.thistab.get_selection()])
        reveal_script = "tell application \"Path Finder\" to reveal {{{0}}}".format(files)
        activate_script = "tell application \"Path Finder\" to activate"
        # show the composed command in ranger's status line, then execute it
        script = "osascript -e '{0}' -e '{1}'".format(reveal_script, activate_script)
        self.fm.notify(script)
        subprocess.check_output(["osascript", "-e", reveal_script, "-e", activate_script])

class show_files_in_finder(Command):
    """
    :show_files_in_finder

    Present selected files in Finder
    """

    def execute(self):
        import subprocess
        files = ",".join(['"{0}" as POSIX file'.format(file.path) for file in self.fm.thistab.get_selection()])
        reveal_script = "tell application \"Finder\" to reveal {{{0}}}".format(files)
        activate_script = "tell application \"Finder\" to set frontmost to true"
        script = "osascript -e '{0}' -e '{1}'".format(reveal_script, activate_script)
        self.fm.notify(script)
        subprocess.check_output(["osascript", "-e", reveal_script, "-e", activate_script])

Restart Ranger and you can now execute the commands :show_files_in_path_finder or :show_files_in_finder.
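
If you use these commands a lot, you can also bind them to keys in ~/.config/ranger/rc.conf; the key combinations below are just a suggestion.

map xf show_files_in_finder
map xp show_files_in_path_finder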

Infrastructure and System Monitoring using Prometheus

Last week I was lucky enough to host a talk at Devoxx UK on the subject of "Infrastructure and System Monitoring using Prometheus". You can find the material used here:

Feel free to share and distribute.

Dockerize your Grails application

Ever wanted to create a Docker image that contains your Grails application? You are in luck! It is easy to do.

Let us first create a new Grails application. In this example we will create a basic application using the rest-api profile.

Prerequisite: Docker is installed on your machine.

$ grails create-app docker-test --profile rest-api

After the Grails application has been created, we will need to add the following files to our project.

  • Dockerfile (determines what our Docker image will contain)
  • docker-entrypoint.sh (script that is responsible for starting our Grails application)
// file: /src/main/docker/Dockerfile

FROM openjdk:latest

# set environment options
ENV JAVA_OPTS="-Xms64m -Xmx256m -XX:MaxMetaspaceSize=128m"

RUN mkdir -p /app
WORKDIR /app

COPY app/application.jar application.jar
COPY app/docker-entrypoint.sh docker-entrypoint.sh

# Set file permissions
RUN chmod +x /app/docker-entrypoint.sh

# Set start script as entrypoint
ENTRYPOINT ["./docker-entrypoint.sh"]
// file: /src/main/docker/app/docker-entrypoint.sh
#!/bin/bash
set -e

exec java ${JAVA_OPTS} -jar application.jar "$@"
# exec java ${JAVA_OPTS} -Dserver.port=${SERVER_PORT} -jar application.jar "$@"

The next step is to add a task to our build.gradle so it can generate the Docker image.

Add the following snippet to your build.gradle file:

// file: /build.gradle
String getDockerImageName() {
  "docker-test"
}

task buildDockerImage(type:Exec) {
    group = 'docker'
    description = 'Build a docker image'
    commandLine 'docker', 'build', '-f', 'build/docker/Dockerfile', '-t', "${dockerImageName}", 'build/docker'

    doFirst {
        println ">> Creating image: ${dockerImageName}"
        /* copy the generated war file to build/docker/app */
        copy {
            from war.archivePath
            into 'build/docker/app/'
        }
        /* copy artifacts from src/main/docker/app into the build/docker/app */
        copy {
            from 'src/main/docker/app/'
            into 'build/docker/app'
        }
        /* copy Dockerfile from src/main/docker into the build/docker */
        copy {
            from('src/main/docker/') {
                include 'Dockerfile'
            }
            into 'build/docker'
        }
        /* rename war file to application.jar */
        file("build/docker/app/${war.archiveName}").renameTo("build/docker/app/application.jar")
    }
}

Now that we have everything in place we can build the image and start the container.

Create the Docker image using the assemble and buildDockerImage tasks:

$ ./gradlew assemble buildDockerImage

Run a container based on the previously created image:

$ docker run -it --rm -p 8080:8080 docker-test

This runs the container in interactive mode (-it) and removes the container when it stops (--rm). Port 8080 in the container is mapped to port 8080 on your host system (-p 8080:8080).

The endpoint is now available in your browser: just visit http://localhost:8080

Creating/Pushing Docker images using Gradle without plugins

In our current project we were heavily focused on the use of Gradle plugins to create Docker images. We used plugins to create the images and push them to our AWS ECR repositories. When using these plugins we hit various bugs, related to the fact that not all developers were using a Linux operating system to test out their containers. In the end we took a look at how we could create those images without using additional plugins.

Prerequisite: Docker is installed on your machine.

Creating an image

The following snippet will create a Docker image using the Gradle task buildDockerImage.

- application layout
| build.gradle
| > src
| >- main
| >-- docker (contains a Dockerfile)
| >--- app (contains data that can be used in your Dockerfile)
/*
    You can put some logic in the getDockerImageName to determine how your
    Docker image should be created.
*/
String getDockerImageName() {
  "my-first-docker-image"
}

/*
    Execute a docker build using commandline pointing to our Dockerfile that
    has been copied to /build/docker/.
*/
task buildDockerImage(type:Exec) {
    group = 'docker'
    description = 'Build a docker image'
    commandLine 'docker', 'build', '-f', 'build/docker/Dockerfile', '-t', "${dockerImageName}", 'build/docker'

    doFirst {
        println ">> Creating image: ${dockerImageName}"
        /* copy artifacts from src/main/docker/app into the build/docker/app */
        copy {
            from 'src/main/docker/app/'
            into 'build/docker/app'
        }
        /* copy Dockerfile from src/main/docker into the build/docker */
        copy {
            from('src/main/docker/') {
                include 'Dockerfile'
            }
            into 'build/docker'
        }
    }
}

Pushing an image

Pushing an image without using plugins is just as easy.

task pushDockerImage(type: Exec) {
    group = 'docker'
    description = 'Push a docker image'
    commandLine 'docker', 'push', "${dockerImageName}"

    doFirst {
        println ">> Pushing image: ${dockerImageName}"
    }
}
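
With both tasks in place, building and pushing the image is a single invocation (assuming the image name resolves to a registry you are logged in to, e.g. your ECR repository):

$ ./gradlew buildDockerImage pushDockerImage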

This approach, without unneeded Gradle plugins, gave us an easy way to build images on different platforms.

Identifying Docker container outage using Prometheus

Prometheus is an open-source monitoring system with a dimensional data model, a flexible query language, an efficient time series database and a modern alerting approach.

Metric data is pulled (on a regular time interval) from so-called exporters, which expose metrics coming from applications, operating systems, etc.

+------------------+                               +----------+     Visualize data
|  +------------+  |                               | Grafana  +---> coming from
|  | Dockerized |  |                               +----+-----+     Prometheus
|  | Application|  |                                    |
|  +------------+  |                                    ^
|  +------------+  |  Pull data   +----------+          |
|  |  CAdvisor  +--------->-------+Prometheus+----------+
|  +------------+  |              +---------++
|                  |                        |
| Operating System |                        |
|       with       |                        |
| Docker installed |                        |
|                  |                        v
+------------------+           Prometheus collects data
                               coming from remote systems

In the diagram above, cAdvisor is such an exporter. There are other exporters as well, e.g. the Node Exporter, which exposes machine metrics. cAdvisor is used to get the Docker container metrics.

cAdvisor

cAdvisor is a project from Google that analyzes the resource usage and performance characteristics of running Docker containers. When running a Dockerized application, starting a cAdvisor container gives you instant metrics for all running containers.
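
A typical way to start cAdvisor next to your application containers looks like the command below (based on the cAdvisor documentation at the time of writing; adjust the mounts to your setup):

$ docker run -d \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --name=cadvisor \
  google/cadvisor:latest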

A lot of metrics are exposed by cAdvisor, one of which is container_last_seen. You can use this metric in Prometheus to identify whether a container has left the building :) The challenge with Prometheus is that it keeps reporting data for a specific amount of time, the so-called staleness timeout. This means that Prometheus will keep reporting that data has been received until this timeout has passed (5 minutes by default). That is of course far too long if we need to identify whether a container is gone.

So normally you would query like this:

count(container_last_seen{job="<jobname>",name=~".*<containername>.*"})

This would keep returning results for up to 5 minutes after the container is gone… way too long.

A simple alternative query to identify whether the container is gone looks like this:

count(time() - container_last_seen{job="<jobname>",name=~".*<containername>.*"} < 30) OR vector(0)

The '30' is the time in seconds before we want to be notified that a container is gone. This time has to be larger than the pull interval of your job.
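
For reference, a minimal scrape configuration sketch with a 15 second pull interval, which the 30 second threshold above safely exceeds (job name and target are placeholders):

// file: prometheus.yml (fragment)
scrape_configs:
  - job_name: 'cadvisor'
    scrape_interval: 15s
    static_configs:
      - targets: ['cadvisor:8080']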

Using the query above you can create a nice Singlestat panel in Grafana, so you can display an alert when the container is gone.
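
The same expression can also drive an alerting rule; below is a minimal sketch in the Prometheus 2.x rule file format (job and container names are placeholders, as above):

// file: alert.rules.yml (fragment)
groups:
  - name: container-alerts
    rules:
      - alert: ContainerGone
        expr: '(count(time() - container_last_seen{job="<jobname>",name=~".*<containername>.*"} < 30) OR vector(0)) == 0'
        for: 1m
        annotations:
          summary: Container has not been seen for 30 seconds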

Building a Consul cluster using Terraform/AWS

Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure. It provides several key features:

  • Service Discovery
  • Health Checking
  • Key/Value Store
  • Multi Datacenter

For more information on Consul itself, please have a look at the excellent documentation.

Is it really easy?

Setting up a Consul cluster seems easy: just follow one of the many tutorials out there and you will have a Consul cluster running on your local machine in a few steps…

But hey… what if you need to deploy this cluster to an AWS environment? How do you create the cluster, and how can you make sure it is always available?

This simple write-up is just an example to give you an idea of how a Consul cluster can be created and provisioned using Terraform only. The goal is to have a cluster running on EC2 nodes, using the official Docker images provided by Consul itself.

Creating a Consul cluster

The principle is not that hard… Consul nodes can discover each other based on IP address. If you feed the Consul cluster members the IP addresses of the other members, you are good to go. In this example we are going to start a number of Consul cluster members. The first node will be unable to form a cluster on its own, but when the second node starts up it will get the IP of the first node, and the first node will then learn the IP of the second one, and so on. So if you start up more than two nodes, a cluster will form.

+------+   +------+  +------+  +------+  +------+
|Consul|   |Consul|  |Consul|  |Consul|  |Consul|
|Node 1|   |Node 2|  |Node 3|  |Node 4|  |Node n|
+------+   +------+  +------+  +------+  +------+
    < Find each other based on ip address >

The power is in the user-data script that is used for bootstrapping the Consul cluster nodes. In this example the nodes find each other with a query using aws ec2 describe-instances that looks for nodes with a specific name; from the identified nodes we extract the IP addresses that are used to join the Consul cluster. You can of course modify the query to your own needs. The user-data script is used in the launch configuration.

So enough talk… let's start :)

The given examples are intentionally kept simple! You will need to tweak the Terraform code to your own needs.

Step 1: Define a launch configuration

The core component of our Consul cluster is the launch configuration. This launch configuration determines what user-data file needs to be executed when launching a new instance.

resource "aws_launch_configuration" "consul-cluster" {

  /*
    in this case we use a docker optimized ami, because we are going
    to use the official Consul docker image as a starter.
    You can always use an ami on which you install docker manually!
  */
  image_id  = "docker-optimized-ami-0123456789"

  /*
    the user-data script that holds the magic
  */
  user_data = "${data.template_file.consul-cluster.rendered}"
  instance_type = "t2.micro"

  /*
    make sure you open the correct ports so the Consul nodes can
    discover each other; the actual security group is not shown
  */
  security_groups = ["sg-0123456789"]
  key_name = "your-deploy-key"

  /*
    use a policy which grants read access to the EC2 api
  */
  iam_instance_profile = "arn:aws:iam::0123456789:read_ec2_policy/ec2"
}

/*
    The template file used for the user-data
*/
data "template_file" "consul-cluster" {
  template = "${file("user-data-consul-cluster.tpl")}"
  vars {
    // the name must match the Name tag of the autoscaling group
    consul_cluster_name = "consul-cluster-member"

    // the number of instances that need to be in the cluster to be healthy
    consul_cluster_min_size = 3
  }
}

Step 2: Create the template file

The user-data file queries AWS using the aws ec2 describe-instances api and returns the EC2 nodes that have a matching name, using the --filters option: 'Name=tag:Name,Values=${consul_cluster_name}'

All retrieved instances are then queried for their private IP, and the values are stored in a list. After the list is complete, the IP of the current instance is removed from it.

A Consul-specific join string is composed and passed to the Docker image. This enables the Consul Docker image to check for the available servers when starting.

// File: user-data-consul-cluster.tpl

#!/bin/bash -ex
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

echo 'installing additional software'
for i in {1..5}
do
    yum install -y aws-cli && break || sleep 120
done

################################################################################
# Running Consul
################################################################################
# get current instance ip
INSTANCE_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)

# get list of available Consul servers; based on Name (value) tag!
IP_LIST=$(aws ec2 describe-instances --region us-east-1 \
--filters 'Name=tag:Name,Values=${consul_cluster_name}' \
'Name=instance-state-name,Values=running' \
--query "Reservations[*].Instances[*].PrivateIpAddress" \
--output=text)

# remove the current instance ip from the list of available servers
IP_LIST="$${IP_LIST/$$INSTANCE_IP/}"

# remove duplicated spaces, \r\n and replace space by ','
IP_LIST=$(echo $$IP_LIST | tr -s " " | tr -d '\r\n' | tr -s ' ' ',')

# create join string
for i in $(echo $IP_LIST | sed "s/,/ /g")
do
    JOIN_STRING+="-retry-join $i "
done

# - run Consul
docker run -d --net=host \
-e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
--name consul-server consul:latest \
agent -server -bind=$INSTANCE_IP $JOIN_STRING \
-bootstrap-expect=${consul_cluster_min_size} -ui -client 0.0.0.0
# ------------------------------------------------------------------------------

Step 3: Create an autoscaling group

/*
    creates an autoscaling group so servers are created when needed
*/
resource "aws_autoscaling_group" "consul-cluster" {
  min_size = 3
  max_size = 5
  desired_capacity = 3
  min_elb_capacity = 3

  launch_configuration = "${aws_launch_configuration.consul-cluster.name}"
  load_balancers = ["${aws_elb.consul-cluster.id}"]

  tag {
    key = "Name"
    /*
      note: this is the value that is being searched for in the user-data
    */
    value = "consul-cluster-member"
    propagate_at_launch = true
  }
}

Step 4: Create an ELB as frontend for the Consul cluster

resource "aws_elb" "consul-cluster" {
  name = "consul-cluster"
  subnets = ["subnet-0123456789"]
  security_groups = ["sg-0123456789"]

  listener {
    instance_port = 8300
    instance_protocol = "tcp"
    lb_port = 8300
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8301
    instance_protocol = "tcp"
    lb_port = 8301
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8302
    instance_protocol = "tcp"
    lb_port = 8302
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8400
    instance_protocol = "tcp"
    lb_port = 8400
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8500
    instance_protocol = "http"
    lb_port = 8500
    lb_protocol = "http"
  }

  listener {
    instance_port = 8600
    instance_protocol = "tcp"
    lb_port = 8600
    lb_protocol = "tcp"
  }

  health_check {
    target = "HTTP:8500/v1/status/leader"
    healthy_threshold = 2
    unhealthy_threshold = 2
    interval = 30
    timeout = 5
  }
}

When you put all the pieces together, you should now have a running Consul cluster!
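
To verify that the nodes have found each other, you can log in to one of the instances and ask the Consul container (named consul-server in the user-data script above) for its members:

$ docker exec consul-server consul members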

Change the port of actuator endpoint in a Grails application

When using the actuator endpoints to expose metrics in a Grails (Spring Boot) application, it may be useful to serve the metrics on a different port.

This enables you to hide the metrics from the public, and to use the different port in an AWS infrastructure so that the metrics are only available internally.

Let us first enable the actuator endpoints:

// File: grails-app/conf/application.yml

# Spring Actuator Endpoints are Disabled by Default
endpoints:
    enabled: true
    jmx:
        enabled: true

To change the port on which the metrics run, add the lines below to the application.yml:

// File: grails-app/conf/application.yml

management:
    port: 9000

Now when you start your Grails application it runs on port 8080, and the metrics are available on port 9000, e.g. http://localhost:9000/metrics
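
A quick check with curl (assuming the application runs locally):

$ curl http://localhost:9000/metrics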

Accessing localhost from a Docker Container using native Docker for Mac

Ever had the need to access something that runs on the host system from within a Docker container?

When using native Docker on OSX you are out of luck: configuring a container to point at localhost means your software will target the localhost of the Docker container, not that of your host system.

A solution for this is to define a new loopback alias to your localhost:

$ sudo ifconfig lo0 alias 172.16.123.1

This defines a loopback network interface that points to your localhost. Whenever you need to access the host's localhost from a container, you can use this IP.
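
For example, a container that needs to reach a database listening on your Mac could be pointed at the alias; the image name and environment variable below are hypothetical:

$ docker run --rm -e DB_HOST=172.16.123.1 my-app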

Terraform to the rescue

Getting exposed to Amazon Web Services is fun! Certainly when you see the infrastructure growing and supporting the daily needs of developers and the business. You slowly start adding services and try to keep everything in a repeatable and maintainable state. At a certain moment it becomes clear that you need the concept of Infrastructure as Code.

The Amazon way of doing this is by using AWS CloudFormation. This enables you to define the infrastructure in a JSON/YAML format and apply the changes to the infrastructure.

Our team manages a bunch of environments using services like AWS ECS, EC2, ElasticSearch, RDS and more. Maintaining this infrastructure in CloudFormation seemed the standard way of doing things, until we started a proof of concept with Terraform.

Why did we start this proof of concept? Mainly because the overwhelming amount of code that we needed to maintain in CloudFormation had become unmaintainable. The use of Terraform was so successful that we decided to rewrite our entire infrastructure codebase in Terraform.

The advantages when using Terraform are:

  • less code to maintain, because Terraform is less verbose
  • an infrastructure change can be planned; this shows what is going to be changed before actually executing the change:
$ terraform plan

Review the changes, and when everything seems OK…

$ terraform apply

Currently we have our entire infrastructure in Terraform and we could not be happier. Terraform came to our rescue!

Functional Rest API Testing with Grails/Rest Client Builder

Functional Rest API testing with Grails is easy and fun. We will create a simple Rest controller and test it using Spock and the Rest Client Builder.

When running the functional tests, a real container is started on a specific port and the tests run against that running container. Just as it should be.

Scenario:
Performing a GET request on a URL (http://localhost:8080/helloworld) should return HTTP status 200 and a JSON payload

{"message":"helloworld"}

So let's get started!

Create a Grails application

$ grails create-app RestHelloWorld

Update your build.gradle to include the Rest Client Builder dependency, which we will need later on:

dependencies {
    // add the following line to the 'dependencies' section
    testCompile "org.grails:grails-datastore-rest-client:4.0.7.RELEASE"
}

Create an Integration Test

$ grails create-integration-test HelloWorld

Create a test method inside the integration test

Open the generated HelloWorldSpec inside the /src/integration-test/groovy/resthelloworld/ package and add a test method.

package resthelloworld

import grails.test.mixin.integration.Integration
import grails.transaction.*
import spock.lang.*
import grails.plugins.rest.client.RestBuilder
import grails.plugins.rest.client.RestResponse

@Integration
@Rollback
class HelloWorldSpec extends Specification {

    def setup() {
    }

    def cleanup() {
    }

    def "Ask for a nice HelloWorld"() {
        given:
        RestBuilder rest = new RestBuilder()

        when:
        // serverPort is injected by @Integration; the application may start on a random port
        RestResponse response = rest.get("http://localhost:${serverPort}/helloworld/")

        then:
        response.status == 200

        and:
        response.json.message == "helloworld"
    }
}

Run your test

$ grails test-app

Of course this will fail, as we have not implemented the controller yet.

Create a Rest controller

$ cd RestHelloWorld
$ grails create-controller HelloWorld

Note: Generating the controller also creates a unit test for the controller; by default this test will fail. We are going to delete the generated unit test, as we do not need it now. The test is located under the /src/test/groovy package.

$ rm ./src/test/groovy/resthelloworld/HelloWorldControllerSpec.groovy

Implement the controller function that will return data

package resthelloworld

class HelloWorldController {

    def index() {
        render(status: 200, contentType: "application/json") {
            ["message" : "helloworld"]
        }
    }
}

Modify UrlMapping

In order to make our newly generated controller accessible via Rest, we need to modify our UrlMappings.

class UrlMappings {

    static mappings = {
        "/$controller/$action?/$id?(.$format)?"{
            constraints {
                // apply constraints here
            }
        }

        "/"(view:"/index")
        "500"(view:'/error')
        "404"(view:'/notFound')

        // add the line below
        "/helloworld/"  (controller: "helloWorld", action: "index", method: "GET")
    }
}

Test your app

$ grails test-app

You should find that your tests pass now :)

$ grails test-app
BUILD SUCCESSFUL

Total time: 2.054 secs
| Tests PASSED