September 15, 2012 1 minute and 1 second read

Combining ImageMagick and Grails

When there is a need to work with images (thumbnailing, watermarking, resizing, etc.), ImageMagick always comes to the rescue. Combining this image utility powerhouse with the Grails framework is a task that can be accomplished easily.

Steps:

  • Install ImageMagick according to the installation instructions.
  • It contains a utility called convert which we will need later on! This utility takes care of the conversion of images to thumbnails, watermarks etc. So remember where this utility is installed on your system!
  • Make sure that ImageMagick is installed correctly by converting an image to a thumbnail using the following command in a terminal.
/opt/local/bin/convert <filename> -thumbnail 70x70 <thumbnail-filename>

example:

/opt/local/bin/convert /tmp/image-001.jpg -thumbnail 70x70 /tmp/thumbnail-image-001.jpg

Create some code that calls the ImageMagick convert utility with the correct parameters to achieve what you want, something like the code below:

def createThumbnail(File file) {
    def command = "/opt/local/bin/convert ${file.canonicalPath} " +
                  "-thumbnail 70x70 " +
                  "/images/thumbs" + File.separator + "${file.name}"
    def proc = Runtime.getRuntime().exec(command)
    int exitStatus
    while (true) {
        try {
            exitStatus = proc.waitFor()
            break
        } catch (InterruptedException e) {
            log.debug("Creating thumbnail - interrupted: ignoring and waiting")
        }
    }
    if (exitStatus != 0) {
        log.error("Error executing command: exitStatus=[${exitStatus}]")
        return false
    }
    log.debug("Successfully created thumbnail")
    return true
}
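
Note that building the command as a single string will break as soon as a path contains spaces. As a minimal variation (just a sketch, not the original code; the thumbsDir parameter is a hypothetical addition), you could use Groovy's list-based execute() instead:

def createThumbnail(File file, File thumbsDir) {
    // list form keeps arguments intact even when paths contain spaces
    def command = ['/opt/local/bin/convert',
                   file.canonicalPath,
                   '-thumbnail', '70x70',
                   new File(thumbsDir, file.name).canonicalPath]
    def proc = command.execute()
    proc.waitFor()
    if (proc.exitValue() != 0) {
        log.error("convert failed: ${proc.errorStream.text}")
        return false
    }
    return true
}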

The above should give you an idea of how you can integrate ImageMagick into your own Grails application.


September 14, 2012 1 minute and 7 seconds read

Installing Vimball plugins when using Pathogen

There is no need to discuss that Vim is truly a great text editor: a wealth of features, great speed and extensive support for plugins. Installing plugins is very easy; if you want to learn how to install plugins, make sure to check out the wiki.

Pathogen

When you install a plugin, you typically copy its files into the plugin directory. At a later stage you may want to delete the plugin, and then the hunt for files starts: you need to track down which files belong to the specific plugin you want to delete. Pathogen to the rescue :)

Pathogen enables you to create sub-folders inside a bundle folder which act as placeholders for all your plugins, nicely separated in a ‘folder per plugin’ structure. So if you need to delete a plugin, you just delete the corresponding plugin folder and everything is gone.

Normal installation of a Vim script is straightforward: you create a sub-folder below the bundle folder, copy the Vim script into it and all is ok. BUT when you want to use a vimball you need to take some additional steps.

  • Create a folder into which you will later extract the vimball, preferably below the ‘bundle’ folder.
mkdir ~/.vim/bundle/align
  • Open the vimball with the command ‘:e <location of your vimball>/<name of your vimball>’
:e ~/Downloads/Align.vba
  • Tell Vim to extract the vimball by issuing the command ‘:UseVimball <location to extract>’
:UseVimball ~/.vim/bundle/align
  • Restart Vim and your plugin should be available.
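
Assuming Pathogen is already loaded from your vimrc, the bundle folder now contains one directory per plugin. For the Align example above, the layout looks roughly like this (the exact files depend on the plugin):

~/.vim/bundle/
    align/
        autoload/
        doc/
        plugin/

Deleting the plugin later is then just a matter of removing the ~/.vim/bundle/align folder.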

September 13, 2012 0 minutes and 44 seconds read

Installing Markdown on OSX and using it inside VIM

Back again to one of my favorites, which is called Markdown. Every now and then I forget how easy it is. Normally I use Textmate to do all my writing, but recently I have picked up VIM to do some editing. Why did I choose VIM? I will not trouble you with that decision :)

With Textmate everything is easy, but using Markdown inside VIM is somewhat different. Then again, anything is different when using VIM :)

Steps

  • Download Markdown from The home of Markdown. As this is a Perl script, you need to put it somewhere OSX is able to execute it.
  • Start your terminal and create the directory /usr/local/bin if it does not exist yet.
  • Extract the downloaded archive and put the Markdown.pl file inside the /usr/local/bin directory.
  • Inside the terminal, chmod Markdown.pl to 777 so it is executable.
  • Follow the Installing Markdown as OSX Service guide to create a service that uses Markdown.
  • You are done… :)
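
Once the script is executable, a quick way to try it from inside VIM (assuming it ended up at /usr/local/bin/Markdown.pl) is to filter the current buffer through it:

:%!/usr/local/bin/Markdown.pl

This replaces the buffer contents with the generated HTML, so undo afterwards if you only wanted a quick preview.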

September 11, 2012 0 minutes and 23 seconds read

Change default homepage for a Grails application

You can set the default homepage for a Grails application by modifying the grails-app/conf/UrlMappings.groovy file. In a new Grails application this file looks like this:

class UrlMappings {
    static mappings = {
        "/$controller/$action?/$id?"{
            constraints {
                // apply constraints here
            }
        }
        "/"(view:"/index")
        "500"(view:'/error')
    }
}

Replace the line:

"/"(view:"/index")

with:

"/"(controller:'home', action:"/index")

As a result, when you start your Grails application and enter its URL, the request will be handled by the HomeController and the corresponding index action of that controller.
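
For completeness, a minimal sketch of what such a controller could look like (by convention the index action renders the view grails-app/views/home/index.gsp):

class HomeController {
    // by convention this renders grails-app/views/home/index.gsp
    def index = {
    }
}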


September 10, 2012 3 minutes and 22 seconds read

Functional Code Coverage using Cobertura

So assume you are assigned to a JEE/web project with no written functional requirements, no technical design, no functional or unit tests and not even a business process description. Sounds really hopeless, but it is your responsibility to learn the system and make adjustments to it. Does this sound familiar? Hopefully not :) But every now and then this scenario seems to happen.

One can start to complain :), stop working on the project or, even better, master the concept of Software Archeology. Another option is to adopt Cobertura, a code coverage tool which can easily be used to track down Functional Code Coverage. Normally the concept of Code Coverage is used to identify what code is executed during the development and test phases. This gives an indication of how much code you cover with your testing strategy (often unit testing). This is IMHO something you will always want to know! But in case you do not have unit tests, or creating them is impossible due to the technical/organisational nature of the project, you can rely on creating functional tests and still track down the ‘functional’ coverage with tools like Cobertura (or alternatives like Emma).

This tackles several problems:

  • You are creating functional tests which can be used for regression testing
  • You are creating awareness of how little is tested or known about the system

Note: by functional testing we mean testing via the web layer of the JEE project.

To see an example of what the reporting looks like, check out this sample report!

How to get Code Coverage information

General process:

  • Compile your software
  • Instrument the compiled code
  • Deploy your instrumented code and start the application
  • Use the application or run automated functional tests
  • Shut down the application
  • Generate your Code Coverage reports
  • no step 7! All done :)

The remaining part of this article describes how you can get Functional Code Coverage information in the process of continuously building, deploying and testing your software. Some elements are not explained in detail because extensive information is available elsewhere on the web!

Compile, Instrument and Build

The Maven project enables you to build your software with great ease. A few simple commands are enough to build a project, deploy it and even integrate it with tools and technologies such as Cobertura. There is even a Cobertura Maven Plugin to easily use Cobertura in a Maven build. We need to use Cobertura in the build phase because it instruments the compiled code and generates a small file called ‘cobertura.ser’, which acts as a kind of database that stores each call to a piece of code. The instrumented code and the database file are crucial because they contain all the information needed to generate code coverage reports later on.

Run and Test

After the code is successfully instrumented you can deploy the build artifacts, together with the ‘cobertura.ser’ file, inside a JBoss JEE container and run your application.

Note: In our project we used JBoss, but of course you can use other application servers!

The JMeter project delivers excellent tooling which enables you to record your functional flow and replay a scenario that was recorded earlier on. For more information on JMeter recording and usage, please check the JMeter project. For now, let's assume you have created a couple of functional tests so you can execute them.

Generate coverage reports

After the functional tests have been executed, the modified database file ‘cobertura.ser’ can be collected and reports can be generated. Cobertura has some nice predefined reporting templates. After these steps you should have insight into what code is actually executed during a functional flow, which may contribute to your understanding of the application.

Not once, but always!

The process of compiling, instrumenting, deploying, testing and reporting can be fully automated. The famous Hudson comes to the rescue! When correctly implemented, Hudson will serve you all the information you need at the moment you need it!

Tools & Technologies

The following list provides information on the tools that are used:

  • Maven -> used for compiling and instrumenting your code (alternative to Ant)
  • Cobertura -> used to get Code Coverage information
  • JBoss -> used for running a JEE project
  • JMeter -> used to record and playback functional tests
  • Hudson -> used to automatically build & test your software

September 9, 2012 6 minutes and 7 seconds read

Distributed Deployment with Hudson & SSH

Have you already implemented a multi-server artifact deployment using a Continuous Integration Engine? If not, read ahead and maybe this article is of help.

The need for Continuous Integration

A good practice in a software development methodology and lifecycle is the use of a Continuous Integration Engine. The adoption of Continuous Integration improves your software quality by quickly reporting failed builds so you can modify/correct your code. Popular Continuous Integration Engines can often be extended with software quality tooling so you can report on specific quality aspects of your software.

This keeps developers, and other people who take an interest in the status of the latest build, informed. IMHO a failed build can also be identified as code that compiles but does not meet the quality standards set by your organization. You are of course totally free in defining what a failed build actually means to you!

A good build compiles, quality requirements have been met and automatic functional and unit testing has been successful.

There are a few popular Continuous Integration Engines available, and probably some more, but for my needs the Hudson Extensible Continuous Integration Server works perfectly.

My needs

My Continuous Integration Engine:

  • Must support multiple programming languages (Java/.Net/Ruby)
  • Runs on multiple operating systems (Windows/Mac/Linux)
  • Pluggable in the sense that there must be integration with for example JUnit, JMeter, Cobertura, Checkstyle etc.
  • Must be able to send out notifications using email, twitter, Instant Messaging
  • Seamless integration with CVS/Subversion and Git
  • Simple and easy configuration
  • Must support timed builds & trigger builds from SCM commits
  • Maintain a link between modified code and
  • and some more..

All of these requirements and more have been successfully fulfilled by using Hudson.

Hudson and automatic deployment

In every project a recurring problem arises: the artifacts of a software build have to be distributed across different servers and environments. How are the build artifacts going to be distributed and deployed? Why not let the Hudson server lend a helping hand!

Deployment Scenario

In the following scenario we will be distributing build artifacts from a central Hudson server to three different JBoss application servers, all running Windows Server 2003 as the operating system. Next to the Hudson server we have a Subversion system which is used for SCM purposes.

Distribution of build artifacts

After a successful build, the artifacts have to be distributed to remote machines. Of course the latest code has been checked out from the SVN repository, compiled and tested. The distribution of the artifacts can be done in various ways: one can use FTP, shared folders, SCM check-ins, etc.

For me the easiest way to distribute build artifacts is using Secure Shell access, aka SSH. This is a secure and standardized manner of distribution. Let's assume we have the build artifacts somewhere on our Hudson server; we need a way of transferring them to a remote machine using SSH.

To accomplish this we need SSH access to the remote machine. With the help of CopSSH, installing SSH is a breeze!

Prepare for installation of SSH

Prepare yourself by downloading:

  • CopSSH -> SSH service for Windows
  • Putty -> SSH client (used for connecting to the SSH service)
  • PSCP -> SSH client for file transfer
  • Plink -> SSH client for executing remote commands

Installing and enable remote access using SSH

  • Install CopSSH on the remote system
  • On the remote system enable a user for SSH access, see the installation guide of CopSSH
  • Start Putty on your local machine
  • Using Putty connect to the remote system and exchange security credentials

You are now officially ready to remotely access the system using SSH. If you want to enable the Hudson server to access the remote system, start Putty on the Hudson server and repeat step 4!

Note: Make sure that Hudson runs under the same account as the one with which you exchanged security credentials, otherwise remote access from the Hudson server to the remote system will not work!

Execute remote commands and Exchange files using SSH

If you can successfully access the remote server using Putty, it is time to exchange files or execute remote commands. This can be done by using two small command-line utilities: PSCP for file transfer and Plink for executing remote commands, such as remotely deleting files.

Make sure these are in your PATH so you can execute them from anywhere!

Examples of executing remote commands (substitute the %PARAMETERS% with your own values):

#create a directory
plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME%  mkdir C:/tmp

#delete a directory
plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME%  rm -rf C:/tmp

#stop a windows service
plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME%  net stop %SERVICENAME%

#upload a file
pscp -pw %PASSWORD% %SOURCE% %USERNAME%@%HOSTNAME%:%DESTINATION%

#upload multiple files
pscp -pw %PASSWORD% %SOURCE%\*.* %USERNAME%@%HOSTNAME%:%DESTINATION%

The return of the .bat file

So far we have enabled remote access using SSH / CopSSH, executed remote commands and transferred files. All the ingredients are in place to enable our Hudson server to remotely deploy build artifacts. In the job configuration of Hudson you can trigger a batch file after a successful build, so whenever a successful build occurs a batch file executes a few commands to quickly deploy the build artifacts to any number of remote servers.

In our case all deployment artifacts are copied to a central directory per project. So if we need to deploy a build, we can copy parts or the whole directory contents to a remote server.

To give an example, see the following batch files. The main batch file stops the remote Windows services, indicates which files need to be deployed remotely and starts the services again.

-> filename = upload-project-to-development.bat
@CLS
@ECHO OFF
SET USERNAME=%1
SET PASSWORD=%2
SET HOSTNAME=%3
SET JBOSSDIR=%4
SET SOURCEDIR=%5
@ECHO : - start upload procedure
@ECHO : -- stopping servers
@plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME% net stop JBoss
@ECHO : -- uploading files
CALL upload-files.bat %USERNAME% %PASSWORD% %HOSTNAME% %JBOSSDIR% %SOURCEDIR%\*.*
@ECHO : -- starting servers
@plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME% net start JBoss
@ECHO : - finished upload procedure

The above script can easily be called by the Hudson server after a successful build.

example:

upload-project-to-development scott tiger 10.0.0.100 d:/java/server/jboss-v5.0 d:/build_artifacts/projectx

The example above:

  • Stops the JBoss server on the 10.0.0.100 host
  • Passes some parameters to a file called “upload-files.bat” script
  • Starts the JBoss server again

The script that executes the actual maintenance and uploads is the “upload-files.bat” file. All parameters are passed in by the calling script.

-> filename = upload-files.bat
@ECHO OFF
SET USERNAME=%1
SET PASSWORD=%2
SET HOSTNAME=%3
SET JBOSSDIR=%4
SET SOURCE=%5
SET DESTINATION=%JBOSSDIR%/deploy
@ECHO  : - %HOSTNAME% - starting file copy
@ECHO  : -- %HOSTNAME% - deleting JBOSS tmp
@plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME% rm -rf %JBOSSDIR%/tmp
@plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME% mkdir %JBOSSDIR%/tmp
@ECHO  : -- %HOSTNAME% - deleting JBOSS work
@plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME% rm -rf %JBOSSDIR%/work
@plink -batch -pw %PASSWORD% %USERNAME%@%HOSTNAME% mkdir %JBOSSDIR%/work
@ECHO  : -- %HOSTNAME% - deleting previous ears + jars
@plink -batch  -pw %PASSWORD% %USERNAME%@%HOSTNAME% rm %JBOSSDIR%/deploy/*.ear
@plink -batch  -pw %PASSWORD% %USERNAME%@%HOSTNAME% rm %JBOSSDIR%/deploy/*.jar
@ECHO  : -- %HOSTNAME% - copy remote files
@pscp -pw %PASSWORD% %SOURCE% %USERNAME%@%HOSTNAME%:%DESTINATION%
@ECHO  : - %HOSTNAME% - finishing file copy

The example above:

  • Removes the JBoss tmp & work directory
  • Removes artifacts from previous builds
  • Copies the artifacts to the remote JBoss deploy directory

Steps taken

So the list of tasks executed by calling the batch files with the correct parameters is:

  • Stopping the remote JBoss server
  • Removing the remote JBoss tmp & work directories
  • Removing the remote JBoss artifacts from previous deployments
  • Copy files to the remote JBoss server
  • Starting the JBoss server again

From build to deployment

So with a quick installation of SSH/Putty/Plink/PSCP we now have a modular and easy way of distributing files to remote systems. Of course there are lots of improvements to make, but for now it works without any problems!

The given examples can easily be modified so that after a successful build the artifact deployment to all of your servers can be done in a very simple way.


September 5, 2012 0 minutes and 25 seconds read

Using MySQL instead of in-memory database for a Grails application

A Grails application by default uses an in-memory HSQL database. Switching to a MySQL database is simple and straightforward.

  • Download the MySQL JDBC driver (called a connector) from the MySQL website
  • Extract the zip or tar archive
  • Copy the driver (at the time of writing called mysql-connector-java-5.1.13-bin.jar) into the application's lib directory
  • Configure your application datasource in the file grails-app/conf/DataSource.groovy
development {
    dataSource {
        dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
        url = "jdbc:mysql://localhost:<port>/<database>" // default MySQL port is 3306
        driverClassName = "com.mysql.jdbc.Driver"
        username = "<username>"
        password = "<password>"
    }
}
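
As an aside: if you are on a Grails version that supports dependency resolution in BuildConfig.groovy (1.2 and later), you can skip copying the jar and declare the driver as a dependency instead. A minimal sketch showing only the relevant parts (the connector version is just an example):

grails.project.dependency.resolution = {
    repositories {
        mavenCentral()
    }
    dependencies {
        // pull in the MySQL JDBC driver at runtime
        runtime 'mysql:mysql-connector-java:5.1.13'
    }
}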