November 16, 2016 4 minutes and 52 seconds read

Building a Consul cluster using Terraform/AWS

[Consul](http://Consul.io) has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure. It provides several key features:
  • Service Discovery
  • Health Checking
  • Key/Value Store
  • Multi Datacenter

For more information on Consul itself please have a look at the excellent documentation.

Is it really easy?

Setting up a Consul cluster seems easy, just follow the many tutorials out there and you will have a Consul cluster running in a few steps on your local machine…

But hey.. what if you need to deploy this cluster on an AWS environment? How do you create the cluster and how can you make sure it is always available?

This simple write-up is just an example to give you an idea of how a Consul cluster can be created and provisioned using Terraform only. The goal is to have a cluster running on EC2 nodes, using the official Docker images provided by Consul itself.

Creating a Consul cluster

The principle is not that hard: Consul nodes can discover each other based on IP address. If you feed the Consul cluster members the IP addresses of the other members, you are good to go. In this example we are going to start a number of Consul cluster members. The first node will be unable to form a cluster on its own, but when the second node starts up it will get the IP of the first node, the first node will then learn the IP of the second one, and so on. So if you start up more than two nodes you will be good to go.

+------+   +------+  +------+  +------+  +------+
|Consul|   |Consul|  |Consul|  |Consul|  |Consul|
|Node 1|   |Node 2|  |Node 3|  |Node 4|  |Node n|
+------+   +------+  +------+  +------+  +------+
    < Find each other based on ip address >

The power is in the user-data script that is used for bootstrapping the Consul cluster nodes. In this example the nodes will find each other based on a query using aws ec2 describe-instances that looks for nodes with a specific name; from those identified instances we extract the IP addresses that are used to join the Consul cluster. You can of course modify the query to your own needs. The user-data script is used in the launch configuration.

So enough talk… let's start :)

The given examples are intentionally kept simple! You will need to tweak the Terraform code to your own needs.

Step 1: Define a launch Configuration

The core component of our Consul cluster is the launch configuration. This launch configuration determines what user-data file needs to be executed when launching a new instance.

resource "aws_launch_configuration" "consul-cluster" {

  /*
    in this case we use a docker optimized ami, because we are going
    to use the official Consul docker image as a starter.
    You can always use an ami on which you install docker manually!
  */
  image_id  = "docker-optimized-ami-0123456789"

  /*
    the user-data script that holds the magic
  */
  user_data = "${data.template_file.consul-cluster.rendered}"
  instance_type = "t2.micro"

  /*
    make sure you open the correct ports so the Consul nodes can
    discover each other; the actual security group is not shown
  */
  security_groups = ["sg-0123456789"]
  key_name = "your-deploy-key"

  /*
    use an instance profile which grants read access to the EC2 api
  */
  iam_instance_profile = "arn:aws:iam::0123456789:instance-profile/read_ec2_policy"
}

/*
    The template file used for the user-data
*/
data "template_file" "consul-cluster" {
  template = "${file("user-data-consul-cluster.tpl")}"
  vars {
    // the name must match the Name tag of the autoscaling group
    consul_cluster_name = "consul-cluster-member"

    // the number of instances that need to be in the cluster to be healthy
    consul_cluster_min_size = 3
  }
}

Step 2: Create the template file

The user-data script is going to query AWS using the aws ec2 describe-instances API, which returns the EC2 instances whose Name tag matches the --filters option 'Name=tag:Name,Values=${consul_cluster_name}'.

All the retrieved instances are then queried for their private IP and the values are stored in a list. After the list is complete, the IP of the current instance is removed from it.

A Consul-specific join string is composed and passed to the Docker image. This enables the Consul Docker image to check for available servers when starting.
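To make that transformation concrete, here is the same list-to-join-string logic as a standalone bash sketch, using hypothetical IPs. It runs outside the Terraform template, so no `$$` escaping is needed:

```shell
#!/bin/bash
# hypothetical output of the describe-instances query
# (ips separated by tabs/newlines, as --output=text produces)
IP_LIST="10.0.1.10
10.0.1.11	10.0.1.12"

# ip of the current instance, normally fetched from the metadata service
INSTANCE_IP="10.0.1.10"

# remove the current instance ip from the list
IP_LIST="${IP_LIST/$INSTANCE_IP/}"

# squeeze whitespace, drop newlines and join with commas
IP_LIST=$(echo $IP_LIST | tr -s " " | tr -d '\r\n' | tr -s ' ' ',')

# compose the join string handed to the consul agent
JOIN_STRING=""
for i in $(echo $IP_LIST | sed "s/,/ /g")
do
    JOIN_STRING+="-retry-join $i "
done

echo "$JOIN_STRING"   # -retry-join 10.0.1.11 -retry-join 10.0.1.12
```

Inside the Terraform template every `$` that bash should see must be written as `$$`, which is why the real script below looks slightly different.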

// File: user-data-consul-cluster.tpl

#!/bin/bash -ex
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

echo 'installing additional software'
for i in {1..5}
do
    yum install -y aws-cli && break || sleep 120
done

################################################################################
# Running Consul
################################################################################
# get current instance ip
INSTANCE_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)

# get list of available Consul servers; based on Name (value) tag!
IP_LIST=$(aws ec2 describe-instances --region us-east-1 \
--filters 'Name=tag:Name,Values=${consul_cluster_name}' \
'Name=instance-state-name,Values=running' \
--query "Reservations[*].Instances[*].PrivateIpAddress" \
--output=text)

# remove the current instance ip from the list of available servers
IP_LIST="$${IP_LIST/$$INSTANCE_IP/}"

# remove duplicated spaces, \r\n and replace space by ','
IP_LIST=$(echo $$IP_LIST | tr -s " " | tr -d '\r\n' | tr -s ' ' ',')

# create join string
for i in $(echo $IP_LIST | sed "s/,/ /g")
do
    JOIN_STRING+="-retry-join $i "
done

# - run Consul
docker run -d --net=host \
-e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
--name consul-server consul:latest \
agent -server -bind=$INSTANCE_IP $JOIN_STRING \
-bootstrap-expect=${consul_cluster_min_size} -ui -client 0.0.0.0
# ------------------------------------------------------------------------------

Step 3: Create an autoscaling group

/*
    creates an autoscaling group so servers are created when needed
*/
resource "aws_autoscaling_group" "consul-cluster" {
  min_size = 3
  max_size = 5
  desired_capacity = 3
  min_elb_capacity = 3

  launch_configuration = "${aws_launch_configuration.consul-cluster.name}"
  load_balancers = ["${aws_elb.consul-cluster.id}"]

  tag {
    key = "Name"
    /*
      note: this is the value that is being searched for in the user-data
    */
    value = "consul-cluster-member"
    propagate_at_launch = true
  }
}

Step 4: Create an ELB as frontend for the Consul cluster

resource "aws_elb" "consul-cluster" {
  name = "consul-cluster"
  subnets = ["subnet-0123456789"]
  security_groups = ["sg-0123456789"]

  listener {
    instance_port = 8300
    instance_protocol = "tcp"
    lb_port = 8300
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8301
    instance_protocol = "tcp"
    lb_port = 8301
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8302
    instance_protocol = "tcp"
    lb_port = 8302
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8400
    instance_protocol = "tcp"
    lb_port = 8400
    lb_protocol = "tcp"
  }

  listener {
    instance_port = 8500
    instance_protocol = "http"
    lb_port = 8500
    lb_protocol = "http"
  }

  listener {
    instance_port = 8600
    instance_protocol = "tcp"
    lb_port = 8600
    lb_protocol = "tcp"
  }

  health_check {
    target = "HTTP:8500/v1/status/leader"
    healthy_threshold = 2
    unhealthy_threshold = 2
    interval = 30
    timeout = 5
  }
}

When putting all the pieces together you should now have a running Consul cluster!


November 2, 2016 0 minutes and 31 seconds read

Change the port of actuator endpoint in a Grails application

When using actuator endpoints to expose metrics in a Grails (Spring Boot) application, it may be useful to run the metrics on a different port.

This enables you to hide the metrics from the public and, in an AWS infrastructure, expose the different port only internally.

Let us first enable the actuator endpoints

// File: grails-app/conf/application.yml

# Spring Actuator Endpoints are Disabled by Default
endpoints:
    enabled: true
    jmx:
        enabled: true

To change the port on which the metrics run, add the lines below to the application.yml

// File: grails-app/conf/application.yml

management:
    port: 9000

Now when you start your Grails application it runs on port 8080 and the metrics are available on port 9000, e.g. http://localhost:9000/metrics


November 1, 2016 0 minutes and 26 seconds read

Accessing localhost from a Docker Container using native Docker for Mac

Ever had a need to access something from within a Docker container that runs on the host system?

When using native Docker on OSX you are out of luck: configuring a container to point at localhost will target the localhost of the Docker container itself, not that of your host system.

A solution for this is to define a new loopback alias to your localhost:

$ sudo ifconfig lo0 alias 172.16.123.1

This defines a loopback network interface alias that points to your host. When you need to access the host's localhost from within a container you can use this IP.


October 9, 2016 1 minutes and 6 seconds read

Terraform to the rescue

Getting exposed to Amazon Web Services is fun! Certainly when you see that the infrastructure is growing and supporting the daily needs of developers and the business. You slowly start adding services and try to keep everything in a state that is repeatable and maintainable. At a certain moment it becomes clear that you need the concept of Infrastructure as Code.

The Amazon way of doing this is by using AWS CloudFormation. This enables you to define the infrastructure in a JSON/YAML format and apply the changes to the infrastructure.

Our team manages a bunch of environments using services like AWS ECS, EC2, ElasticSearch, RDS and more.. Maintaining this infrastructure in CloudFormation seemed the standard way of doing things until we started a proof-of-concept with Terraform.

Why did we start this proof-of-concept? Mainly because the overwhelming amount of code that we needed to maintain in CloudFormation had become unmaintainable. The use of Terraform was so successful that we decided to rewrite our entire infrastructure codebase in Terraform.

The advantages when using Terraform are:

  • less code to maintain because Terraform is less verbose
  • when using Terraform an infrastructure change can be planned; this shows what is going to change before actually executing the change
$ terraform plan

See what the changes are and then when everything seems ok…

$ terraform apply

Currently we have our entire infrastructure in Terraform and we could not be happier. Terraform came to our rescue!


November 19, 2015 1 minutes and 44 seconds read

Functional Rest API Testing with Grails/Rest Client Builder

Functional Rest API testing with Grails is easy and fun. We will be creating a simple Rest Controller and test it using Spock and Rest Client Builder.

When running the functional test a real container will be started on a specific port and tests are run against the running container. Just as it should be.

Scenario:
Performing a GET request on a URL (http://localhost:8080/helloworld) should return an HTTP status 200 and a JSON payload

{"message":"helloworld"}

So let's get started!

Create a Grails application

$ grails create-app RestHelloWorld

Update your build.gradle to include the Rest Client Builder dependency, which we will need later on

dependencies {
    // add the following line to the 'dependencies' section
    testCompile "org.grails:grails-datastore-rest-client:4.0.7.RELEASE"
}

Create an Integration Test

$ grails create-integration-test HelloWorld

Create a test method inside the integration test

Open up the created HelloWorldSpec inside the /src/integration-test/groovy/resthelloworld/ package

package resthelloworld

import grails.test.mixin.integration.Integration
import grails.transaction.*
import spock.lang.*
import grails.plugins.rest.client.RestBuilder
import grails.plugins.rest.client.RestResponse

@Integration
@Rollback
class HelloWorldSpec extends Specification {

    def setup() {
    }

    def cleanup() {
    }

    def "Ask for a nice HelloWorld"() {
        given:
        RestBuilder rest = new RestBuilder()

        when:
        RestResponse response = rest.get("http://localhost:8080/helloworld/")

        then:
        response.status == 200

        and:
        response.json.message == "helloworld"
    }
}

Run your test

$ grails test-app

Of course this will fail, as we have not implemented the controller yet.

Create a Rest controller

$ cd RestHelloWorld
$ grails create-controller HelloWorld

Note: Generating the controller also creates a unit test for it; by default this test will fail. We are going to delete the generated unit test as we do not need it now. The test is located under the /src/test/groovy/ package.

$ rm ./src/test/groovy/resthelloworld/HelloWorldControllerSpec.groovy

Implement the controller function that will return data

package resthelloworld

class HelloWorldController {

    def index() {
        render(status: 200, contentType: "application/json") {
            ["message" : "helloworld"]
        }
    }
}

Modify UrlMapping

In order to get our newly generated controller accessible via Rest we need to modify our UrlMappings.

class UrlMappings {

    static mappings = {
        "/$controller/$action?/$id?(.$format)?"{
            constraints {
                // apply constraints here
            }
        }

        "/"(view:"/index")
        "500"(view:'/error')
        "404"(view:'/notFound')

        // add the line below
        "/helloworld/"  (controller: "helloWorld", action: "index", method: "GET")
    }
}

Test your app

$ grails test-app

You should find that your tests are fine now :)

$ grails test-app
BUILD SUCCESSFUL

Total time: 2.054 secs
| Tests PASSED

November 17, 2015 0 minutes and 58 seconds read

Using DavMail Gateway as a mail proxy for Microsoft Exchange (OWA)

If you find yourself in a situation where you need a non-Microsoft mail client with support for Microsoft Exchange, then you are often out of luck. In my case I needed Exchange support for the terrific PostBox mail client.

As of now PostBox does not support Microsoft Exchange natively, so the hunt starts on how to get Exchange working. Most companies also enable Exchange Web Access (or Outlook Web Access [OWA]), so maybe we can use that to feed our native mail client.

Enter the use of DavMail!

Davmail Gateway

DavMail is a local mail proxy that works together with Microsoft Exchange (OWA): DavMail connects to the Exchange OWA endpoint and your mail client connects to DavMail as a proxy.

Configure Davmail

In order to get DavMail working correctly you need to provide the correct settings so it can use the OWA endpoint.

Configure PostBox

In order to get PostBox working with DavMail you need to create an outgoing mail server and an account that will use that outgoing mailserver.

Configure PostBox - Outgoing mailserver

Configure PostBox - Account setup

Configure PostBox - Identity setup

Now you are ready to send mail using your PostBox Client using DavMail and OWA.


October 30, 2015 0 minutes and 30 seconds read

Cleaning Grails Domain Objects in a Spock Test

Spock is a nice framework to execute integration tests in your Grails application. It may happen that a Spock test creates some domain objects, and you want to clean them out on every single run of your feature test methods.

Spock provides a setup() and cleanup() method.

When you want to remove your domain objects after each feature test has run you can execute the following:

def setup() { ... }

def cleanup() {
        // make sure to clear out the database on after test
        <YourDomainObject>.withNewSession {
            <YourDomainObject>.findAll().each { it.delete(flush: true) }
        }
}

We need .withNewSession because no Hibernate session is provided in the setup() and cleanup() methods.


June 16, 2015 1 minutes and 16 seconds read

Adding WebSocket/Stomp support to a Spring Boot application

Adding support for WebSockets/Stomp in a Spring Boot application has never been easier. You can use WebSockets to receive server-side events or to push data to the server.

The following example will enable a server to send messages to a WebSocket/Stomp client.

  • Modify build.gradle
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.boot:spring-boot-starter-websocket")
    compile("org.springframework:spring-messaging")
    testCompile("junit:junit")
}
  • Create a WebSocket configuration class that holds the configuration
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // the endpoint for websocket connections
        registry.addEndpoint("/stomp").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        // use the /topic prefix for outgoing WebSocket communication
        config.enableSimpleBroker("/topic");

        // use the /app prefix for others
        config.setApplicationDestinationPrefixes("/app");
    }
}

Now a client that connects to /stomp endpoint is able to receive WebSocket messages.

  • Create a service that is going to send the data to a WebSocket endpoint
@Service
public class ScheduleTask {

    @Autowired
    private SimpMessagingTemplate template;

    // this will send a message to an endpoint on which a client can subscribe
    @Scheduled(fixedRate = 5000)
    public void trigger() {
        // sends the message to /topic/message
        this.template.convertAndSend("/topic/message", "Date: " + new Date());
    }

}
  • Create a client that is able to receive the message
<!DOCTYPE html>
<html>
<head>
    <title>WebSocket Stomp Receiving Example</title>
</head>
<body>
    <div>
        <h3>Messages:</h3>
        <ol id="messages"></ol>
    </div>

    <script type="text/javascript" src="//cdn.jsdelivr.net/jquery/1.11.2/jquery.min.js"></script>
    <script type="text/javascript" src="//cdn.jsdelivr.net/sockjs/0.3.4/sockjs.min.js"></script>
    <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/stomp.js/2.3.3/stomp.min.js"></script>

    <script type="text/javascript">
        $(document).ready(function() {
            var messageList = $("#messages");

            // defined a connection to a new socket endpoint
            var socket = new SockJS('/stomp');

            var stompClient = Stomp.over(socket);

            stompClient.connect({ }, function(frame) {
                // subscribe to the /topic/message endpoint
                stompClient.subscribe("/topic/message", function(data) {
                    var message = data.body;
                    messageList.append("<li>" + message + "</li>");
                });
            });
        });
    </script>
</body>
</html>

The whole example project can be downloaded from https://github.com/mpas/spring-boot-websocket-stomp-server-send-example


June 11, 2015 1 minutes and 9 seconds read

Setting up Docker RabbitMQ with predefined users/vhosts

When creating a Docker image it is nice to have predefined users and vhosts, without having to create them manually after the Docker RabbitMQ image has started.

The following is a Dockerfile that extends the default Docker RabbitMQ image, includes the Management plugin, and creates a standard set of users/vhosts when a container is created from the image.

It involves an init.sh script that is used to create the users and vhosts.

The Docker File

FROM rabbitmq:3-management

# Add script to create default users / vhosts
ADD init.sh /init.sh

# Set correct executable permissions
RUN chmod +x /init.sh

# Define default command
CMD ["/init.sh"]

The init.sh script

#!/bin/sh

# Create Default RabbitMQ setup
( sleep 10 ; \

# Create users
# rabbitmqctl add_user <username> <password>
rabbitmqctl add_user test_user test_user ; \

# Set user rights
# rabbitmqctl set_user_tags <username> <tag>
rabbitmqctl set_user_tags test_user administrator ; \

# Create vhosts
# rabbitmqctl add_vhost <vhostname>
rabbitmqctl add_vhost dummy ; \

# Set vhost permissions
# rabbitmqctl set_permissions -p <vhostname> <username> ".*" ".*" ".*"
rabbitmqctl set_permissions -p dummy test_user ".*" ".*" ".*" ; \
) &    
rabbitmq-server $@

Place both of these files in a directory and build your image:

$ docker build -t my_rabbitmq_image .

Start a container based on the image using:

$ docker run --rm=true --name my_rabbitmq_container -p 5672:5672 -p 15672:15672  my_rabbitmq_image

Then in your browser navigate to http://localhost:15672 and see if all is ok!

Note: When using Boot2Docker make sure to replace the localhost with the correct IP.