October 27, 2020

Running Selenium tests in parallel

Filed under: automation — SiKing @ 2:26 pm

I have been trying to optimize the run-time for my Selenium tests in the past few days.

Let me start with my setup: Java 8, Maven 3, Selenium 3, JUnit 4, and Failsafe plugin 2. The following information should work for TestNG as well (’cause “NG”, amirite), and Failsafe is just a fork of Surefire.

The obvious thing is running everything in parallel. Searching for some advice on the Intranets I found that most blogs only faithfully regurgitate the existing documentation, so not very helpful. I have also been dabbling in Serenity Screenplay, and they have their own way of handling parallelization. Most of these techniques rely on the assumption that your tests (methods) are completely independent of each other. I break this rule. All. The. Time. And I suspect I am not the only one.

The classic case is:

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class SomeTest {

    @Test
    public void step1_Create_Something() {
    }

    @Test
    public void step2_Read_Something() {
    }

    @Test
    public void step3_Update_Something() {
    }

    @Test
    public void step4_Delete_Something() {
    }
}

I use this pattern all over the place. Obviously running these “steps” in parallel is going to produce all sorts of false negatives.

Adding a forkCount to your Failsafe plugin configuration will run these correctly. This will create new JVMs, up to 2 per core in this case, and feed test classes to each fork one at a time.
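In failsafe-plugin terms that is roughly the following (forkCount and reuseForks are standard Failsafe 2.x parameters; 2C means two forks per core; treat this as a sketch and adjust to taste):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- up to 2 forked JVMs per CPU core -->
    <forkCount>2C</forkCount>
    <!-- each fork is fed test classes one at a time -->
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```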

A more recent situation that I have will not work like this. Let's say that in the above step2_Read_Something() I am also checking that the Something is chronologically first – like a new post on this blog. And further, let's say that all my other tests follow the pattern:

public class AnotherTest {

    @Before
    public void create_Something() throws Exception {
	// same as step1_Create_Something()
    }

    @After
    public void throw_away_Something() throws Exception {
	// same as step4_Delete_Something()
    }

    @Test
    public void test_feature() {
	// verify a specific bug / feature
    }
}

If these two tests (classes) run in parallel, this will create a race condition and potentially cause step2_Read_Something() to fail with a false negative.

The solution here is a little involved. My app under test has nine different sections / areas where tests could collide; running two tests in two different “areas” is not a problem. I create nine packages in my test project, and I keep all tests related to one area in one package; I think this is a pretty normal logical grouping for your tests. The configuration for Failsafe in this case needs a different approach.
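Something along these lines should do it (parallel, threadCount, and perCoreThreadCount are standard Failsafe parameters; how packages map to suites depends on your runner, so treat this as a sketch):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- one thread per area/package; suites are fed to threads one at a time -->
    <parallel>suites</parallel>
    <threadCount>9</threadCount>
    <perCoreThreadCount>false</perCoreThreadCount>
  </configuration>
</plugin>
```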


This creates nine threads, and each is fed suites (packages) one at a time. Within one thread everything is always processed sequentially.

I would recommend against combining forks and threads.


September 28, 2018

Screenshot a WebElement in Selenium

Filed under: automation — SiKing @ 4:13 pm

In one of my projects I need to verify a roll-over effect: make sure the image changes when you hover a mouse over it. In Selenium this can only be done by image comparison. As a first attempt (make it work – make it right – make it fast) I took a screenshot of the entire browser screen and compared that. 2 tests, across 7 browsers, 3 screens each … it wasn’t pretty.

I finally spent some time to make it right.

The Selenium API supports something like ((TakesScreenshot) element).getScreenshotAs(OutputType.FILE);. However, this produces UnsupportedCommandException on pretty much every browser that I have access to. The method is still in Beta (as of version 3.14), so no surprise.

A little bit of Googling turned up this; actually Google first turned up this version – I don’t know who plagiarized whom. As long as you do not have to scroll the page, this will work. As soon as you scroll the page, this will error with RasterFormatException: (y + height) is outside of Raster. That is because the element x and y coordinates are measured from the top of the page, instead of from the current viewport. So you have to adjust the y coordinate by the amount you scrolled the page. Google said this little trick would help with that, most of the time. MS-IE and Mac-Safari do not cooperate.

Safari was the easier one. The Safari driver takes a screenshot of the entire page, instead of just the viewport, so you do not need the scroll offset.

Internet Explorer is trickier. The screenshot was off by just a few pixels in height. I still have not figured out what the correct answer is. Then again, it is IE, and even Microsoft is abandoning it. 😦

My final method looks like this:

/**
 * Take screenshot of just a {@link WebElement}; discussion is
 * here.
 * @param driver
 * @param element
 * @return in memory image of the element
 * @throws IOException
 */
public static BufferedImage getScreenshotOf(final WebDriver driver, final WebElement element) throws IOException {
	// Capture entire page screenshot.
	byte[] screen = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);

	// Use selenium Point class to get x y coordinates of the element.
	int xCoordinate = element.getLocation().getX();
	int yCoordinate = element.getLocation().getY();

	// Find the page scroll offset, and adjust the yCoordinate.
	// Safari screenshots the entire page, so no adjustment is needed there.
	if (!((RemoteWebDriver) driver).getCapabilities().getBrowserName().equals(BrowserType.SAFARI)) {
	    // credit:
	    Long yOffset = (Long) ((JavascriptExecutor) driver).executeScript("return window.pageYOffset;");
	    yCoordinate -= Math.toIntExact(yOffset);
	}

	// Use selenium getSize() method to get height and width of element.
	int width = element.getSize().getWidth();
	int height = element.getSize().getHeight();

	// Reading full image screenshot.
	BufferedImage screenImage = ByteArrayInputStream(screen));

	// Cut the image using the height, width, and x y coordinates.
	BufferedImage image = screenImage.getSubimage(xCoordinate, yCoordinate, width, height);

	if (log.isDebugEnabled()) {
	    /*
	     * On Jenkins storing images in "" is not going to be
	     * very helpful, as you probably do not have access (from the
	     * Jenkins UI) to that location. Perhaps try Maven's "target"
	     * directory first?
	     */
	    String path;
	    try {
		// credit:
		File myPath = new File(SeleniumHelper.class.getProtectionDomain().getCodeSource().getLocation().getPath());
		// "myPath" will probably end in: .../target/test-classes/
		path = myPath.getParent();
	    } catch (Exception ignore) {
		path = System.getProperty("");
	    }
	    // the filename will look something like: 20180926-152355.png
	    String logImage = String.format("%1$s/%2$tY%2$tm%2$td-%2$tH%2$tM%2$tS.png", path, new Date());
	    log.debug("Image logged in: " + logImage);
	    ImageIO.write(image, "PNG", new File(logImage));
	}

	return image;
}

For the roll-over effect I take three snapshots: before mouse hover, during mouse hover, and after mouse hover. I use this code to make the comparisons. before != during, and before == after.
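The comparison itself does not need a library; for an exact match a hand-rolled pixel-by-pixel check is enough. This is only a sketch (the class and method names here are mine, not from the linked code):

```java
import java.awt.image.BufferedImage;

public class ImageCompare {

    /** True if both images have the same dimensions and identical pixel values. */
    public static boolean sameImage(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                // getRGB() folds every color model into one comparable int
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }
}
```

For the roll-over check: sameImage(before, during) should come back false, and sameImage(before, after) true.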

January 18, 2017

Raspbian: running the system from RAID

Filed under: linux — SiKing @ 11:11 am

SD card failures are a well-documented (and complained about) phenomenon on the Raspberry Pi. One of my clients has a product that runs on an RPi 24/7, and they tasked me with doing something about this. I looked at several different things, and I am going to publish my findings here for (hopefully) the benefit of others.

So why do SD cards fail? To put it simply: each write generates a certain amount of heat; enough writes / heat will burn up the card.

In order to lower the chance of failure, there are certain things that can be done to the system. This discussion is a good starting point; this wiki is a little more in-depth.

Some things that I looked into, but do not have enough information to share here, are:

  • Industrial-grade SD cards: usually come with some warranty, cost more money.
  • Moving logging to memory (tmpfs): tricky balance between how much memory you have versus how much memory you need for your app, and also how much information you need in case things crash as you will lose everything in memory.

I configured two things on my system: 1) ext2 file-system, and 2) RAID.


The default Raspbian OS image creates the following partitions:

$ sudo parted /dev/mmcblk0 print free
Model: SD SD16G (sd/mmc)
Disk /dev/mmcblk0: 15.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
        32.3kB  4194kB  4162kB           Free Space
 1      4194kB  70.3MB  66.1MB  primary  fat16        lba
 2      70.3MB  4019MB  3949MB  primary  ext4
        4019MB  15.5GB  11.5GB           Free Space

Note that in this example the disk-blocks (“Sector size”) match, which is desirable. However, some SD cards are manufactured with physical block size of 1024 bytes, or more. In this case, the blocks should be aligned.

boot partition

The first block of Free space is just wasted space?!?!

The second block (“Number 1”) is mounted as /boot and is a FAT partition. The Raspberry Pi is able to boot only from the internal SD card, and only from a FAT partition; citation. This is a limitation of the Raspberry SOC architecture and cannot be changed.

Even at run-time, this partition is accessed. I tried just mounting the partition as read-only, but this caused the system to fail to boot. I still want to figure out why (and what) this partition is being written to at run-time.

root partition

The next block (“Number 2”) is mounted as / (root) and holds the entire OS. This partition is mostly vanilla ext4.

The only restriction on the partition that holds the operating system is that the file-system must support user permissions – most Linux file-systems do. The default ext4 system uses a journal, which causes additional writes to the disk. This is intended to prevent file corruption in case of a catastrophe like power loss during a write. The SD card is fast enough that the chance of such a catastrophe is minimal, and the extra writes cause additional wear on the card. After the base image is created, the journal can be removed, or the partition reformatted as ext2, which does not have a journal. Note that if the journal is removed from an ext4 file-system, some tools will actually report that partition as ext2.
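For example, assuming the card shows up as /dev/sdX on your workstation (check with lsblk) and the partition is unmounted, dropping the journal is a one-liner with tune2fs:

```shell
# drop the ext4 journal from the (unmounted) root partition
sudo tune2fs -O ^has_journal /dev/sdX2
# confirm: "has_journal" should be gone from the feature list
sudo tune2fs -l /dev/sdX2 | grep -i features
```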

free space

The last block of Free Space is probably intended for user apps? When the RPi is booted the first time, there is a utility that automatically expands the root partition into all available free space.

This partition simply needs to be formatted and added to /etc/fstab, so it can be used as a /app partition. Once this partition is created, the first-time utility that expands the root partition will run and fail; the error can be safely ignored.


A swap partition is conspicuously missing. The RPi uses a swap file. A swap file, versus a swap partition, can be easily reconfigured at run-time: turned on or off, resized, or even removed allowing for the space to be reclaimed by the system. dphys-swapfile controls the swap. Note that by default the vm.swappiness parameter is set to 1 (the lowest possible).
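dphys-swapfile makes resizing (or disabling) the swap a short affair; CONF_SWAPSIZE lives in /etc/dphys-swapfile:

```shell
sudo dphys-swapfile swapoff      # release the current swap file
sudo nano /etc/dphys-swapfile    # e.g. change CONF_SWAPSIZE=100
sudo dphys-swapfile setup        # re-create the swap file at the new size
sudo dphys-swapfile swapon       # and turn it back on
```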

I have given some consideration to completely turning off the swap and removing it. However, I am not convinced this is a good idea.

  1. If the system does not need to swap, then the swap file is never written to. So just the presence of a swap file causes no additional wear on the SD card.
  2. And if the system does actually need to swap out, and no swap is available, the system will crash.

RAID system

We can install one or more additional SD cards (using a USB to mini SD adapter) on the RPi and configure them as a RAID system, where the additional card will be an exact duplicate of the first; this is a real-time function and should not be confused with a backup! In other words: if a file is erased from the system, it will be erased from both copies. In case one of the cards fails, the system can still operate on the one remaining card, ideally warning the user that the failed card should be replaced.

The math to calculate the odds of complete system failure is extensive. Using one additional SD card gives us only the possibility of mirroring the two cards (RAID1). Using two additional cards gives us the option of mirroring all of them (still RAID1), or using a more advanced system (RAID5), where the data is written across the cards. RAID5 also gives a slight performance boost – you are able to read or write almost twice as fast. Since you never get something for free: configuring three cards as a RAID5, you can lose only one; configuring three cards as a RAID1, you can lose up to two.
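As a rough sketch of the mirroring step (device names here are examples, and the real procedure needs the two-pass dance the scripts below implement, since you cannot freely re-partition the running root), the two root partitions are joined into a RAID1 array with mdadm:

```shell
# mirror the internal card's root partition with the USB-adapter card
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mmcblk0p2 /dev/sda2
# watch the initial sync
watch -n 5 cat /proc/mdstat
```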

This setup still has one point of failure: the primary boot partition. We can replicate the boot partition using scripts, but the Raspberry Pi SOC is only able to boot from the one card that is plugged into the SD slot on the motherboard. If this partition has a failure, the entire system is dead.

What you are going to need

  • Two (2) identically formatted SD cards; at least one of them has to hold the Raspbian OS. In order to ensure the cards are identical, I just installed Raspbian on both of them.
  • One (1) USB to microSD card adapter; you can pick this up almost anywhere, but be careful: the cheaper models make lower quality electrical connections, and any slight bump might “dislodge” the card and degrade the array.


I am a big believer in predictable and reproducible. The individual scripts are numbered (in the filename), and are intended to be run in numerical order. Each individual script accomplishes one specific task. This makes the entire procedure modular, and can easily be modified and experimented with. Each script contains checks for any assumptions, and everything is documented inline. All the scripts have to be run with elevated privileges: sudo ./<scriptname>.

The first couple of scripts must be run on a workstation.

  • 01-write_image_to_card [source]
  • 02-create_app_partition [source] – If you are not interested in having a separate /app partition, then you will need to boot both cards separately so that the free space is expanded the same.
  • 03-tune_root_filesystem [source] – If you are building a RAID system, this script can safely be skipped.

If you are building a RAID array, then rerun the above scripts against the second SD card.

To continue, you now need to boot from one card on the RPi. The Raspbian OS has a one-time startup utility that automatically expands the root partition into all available space. If the /app partition was created, this utility will fail; the error can be safely ignored.

It is probably a good idea to upgrade the OS at this point. One of the later scripts (04) will tie the RPi to the current running version of the kernel.

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo reboot

To continue, you now need to plug the second card into the USB to microSD adapter. The next several scripts will configure the RAID system on the RPi. The second card must have the exact same partitioning as the primary. Currently I do not have a script that reproduces this; I just cheated by installing Raspbian on both cards – the first three scripts above.

  • 04-configure_initramfs [source] – This script will tie the RPi to the current kernel. Upgrading the kernel after this, also requires upgrading the initramfs.
  • 05-configure_root_raid_pass1 [source] – Configuring the root partition RAID requires a restart in the middle of the process; hence the “pass1” and “pass2”.
  • 05-configure_root_raid_pass2 [source] – After this script, you can monitor the RAID build with watch -n 5 cat /proc/mdstat. It’s a good idea to let this complete before continuing.
  • 06-configure_app_raid [source] – If you are not interested in having an /app partition, then just skip this.
  • 07-setup_maintenance [source] – The RAID should be monitored periodically. sync_boot_partition [source], raid_check [source]

Additional reading

Additional resources I found useful:

February 18, 2016

Another way to fix SSLHandshakeException in SoapUI

Filed under: automation,windows — SiKing @ 8:02 am

Started testing a new API in SoapUI, and the very first thing I get is:

ERROR:Exception in request: Received fatal alert: handshake_failure

So off I go, copy-paste the error into Google and see what comes up. There is a lot of advice about this issue here, here, here, none of which worked for me. Although this SO answer did lead me down the right path.

In the end, the problem was much more elusive.

Java comes with its own CA TrustStore. SoapUI, at least on Windows, comes with a bundled Java, usually a very outdated version. From my past (Linux) experience I know that SoapUI is intelligent enough that, if it does not find the bundled Java, it will look elsewhere ($PATH, $JAVA_HOME). So I went to my SoapUI install directory and renamed jre to jre.ignore. Of course I had previously installed the latest Java8. Restart SoapUI, and the problem goes away.

I am not sure whom to blame for this WTF. Certainly part of the blame lies with Oracle lawyers for being dickheads (Java8 is not allowed to be pre-installed on any device), part lies with Microsoft for keeping their users ignorant, and part lies with SmartBear for following the crowd (the “every Java application must also install its own version of Java” crowd) like a sheep.

August 21, 2015

hiking to the Hollywood sign

Filed under: cali — SiKing @ 9:29 pm

While on vacation, we decided to hike to the Hollywood sign.

There are plenty of blogs out there telling you all about the hike, distance, bring water, etc. They all lie!!!

Here is what the hike could be like:

This is the “easy” route. Notice it says that it could be 3.2 miles. I found some blog that even claimed 3 miles round trip. Yea, maybe the second time, and only if you took notes along the way.

The problem is that while there, it’s like a giant ant maze, and none of the paths are marked in any way. The best you can hope for is to talk to the other tourists (they are the ones not crazy enough to be jogging here) and compare notes. “Is the sign that way?” “No, we came from there, it’s definitely not there! Where did you come from?” So you keep getting lost, going the wrong way, backtracking, all up and down hills. We were out there for a good 5 hours. We actually got lucky in that it was cloudy for more than half the day. We brought 3 litre bottles of water, several bottles of energy water, and some fruit. It was not enough! I have no idea how the tourists that were hiking there with one tiny bottle of water in their hand lived through it.

When you finally make it to the end, the sign is completely fenced off. This is the best shot you can get, up close:


Hollywood Hills

I’m not trying to look funny, I am breathing that hard!

Bring water, lots of water. Don’t bother with a map; they don’t help. Shorts, decent hiking boots (not open shoes), sunscreen before you start, a hat. This is a place where you could die!

June 18, 2015

TestNG @Before* and @After* methods

Filed under: automation — SiKing @ 2:58 pm

I am a long time user of JUnit. Both v3 and v4. But I have finally hit the limits of what JUnit can do for me. Specifically, in case you’re interested, the showstopper was the signature of:

	public static void setUpBeforeClass() throws Exception {

The static hurts.

Today I started looking at TestNG. It has a bunch of @Before.. and @After.. methods (about twice as many as JUnit), and all of them are just public void.

My first question was what order are they all run in? After a bit of Googling, several websites give you only a partial answer. All the examples use just one class that goes something like this:

public class FirstTest {
	@Test
	public void test1() {
		System.out.println("Running " + getClass().getSimpleName() + " / test1.");
	}

	@Test
	public void test2() {
		System.out.println("Running " + getClass().getSimpleName() + " / test2.");
	}

	@BeforeMethod
	public void beforeMethod() {
		System.out.println("Running " + getClass().getSimpleName() + " / beforeMethod.");
	}

	@AfterMethod
	public void afterMethod() {
		System.out.println("Running " + getClass().getSimpleName() + " / afterMethod.");
	}

	@BeforeClass
	public void beforeClass() {
		System.out.println("Running " + getClass().getSimpleName() + " / beforeClass.");
	}

	@AfterClass
	public void afterClass() {
		System.out.println("Running " + getClass().getSimpleName() + " / afterClass.");
	}

	@BeforeTest
	public void beforeTest() {
		System.out.println("Running " + getClass().getSimpleName() + " / beforeTest.");
	}

	@AfterTest
	public void afterTest() {
		System.out.println("Running " + getClass().getSimpleName() + " / afterTest.");
	}

	@BeforeSuite
	public void beforeSuite() {
		System.out.println("Running " + getClass().getSimpleName() + " / beforeSuite.");
	}

	@AfterSuite
	public void afterSuite() {
		System.out.println("Running " + getClass().getSimpleName() + " / afterSuite.");
	}
}
Things become much more apparent if you have two such classes and run them in one go. Doing that will give you something like the following:

Running FirstTest / beforeSuite.
Running SecondTest / beforeSuite.
Running FirstTest / beforeTest. ------\
Running SecondTest / beforeTest. -----+-\
Running FirstTest / beforeClass. ---\ | |
Running FirstTest / beforeMethod. -\| | |
Running FirstTest / test1.         || | |
Running FirstTest / afterMethod. --/| | |
Running FirstTest / beforeMethod. -\| | |
Running FirstTest / test2.         || | |
Running FirstTest / afterMethod. --/| | |
Running FirstTest / afterClass. ----/ | |
Running SecondTest / beforeClass. ---\| |
Running SecondTest / beforeMethod. -\|| |
Running SecondTest / test1.         ||| |
Running SecondTest / afterMethod. --/|| |
Running SecondTest / beforeMethod. -\|| |
Running SecondTest / test2.         ||| |
Running SecondTest / afterMethod. --/|| |
Running SecondTest / afterClass. ----/| |
Running FirstTest / afterTest. -------/ |
Running SecondTest / afterTest. --------/
PASSED: test1
PASSED: test2
PASSED: test1
PASSED: test2

    Tests run: 4, Failures: 0, Skips: 0

Running FirstTest / afterSuite.
Running SecondTest / afterSuite. 

In the above I have added the nesting brackets myself to really stress the point.

The interesting (at least to me) things to note:

  • The pair of *Test and *Suite methods are run in no particular order. You can enforce an order if you need to by using one of the depends* parameters.
  • All the afterSuite methods are run after reporting is already generated.

November 4, 2014

Selenium PageBuilder

Filed under: automation — SiKing @ 12:45 pm

Anyone who has worked with Selenium for a while will sooner or later try to find a way to organize all their element locators into some sort of a library. With a little bit of research you will be led to the Selenium PageObject model, and subsequently the PageFactory pattern, which makes your Page Objects a little less verbose.

In the Java world there is another interesting pattern: the Builder pattern. If your page is very predictable in how controls are created, then you might be able to use Page Builder to create all element locators on the fly. Here is how.

I write all my tests in Groovy, which is a scripting language that runs in a JVM. A big majority of things in the Groovy world are done with Builders. I was recently tasked with automating a Swagger page, which is a perfect candidate for the Page Builder pattern.

Swagger is a specification for documenting RESTful APIs, and relies heavily on patterns and best practices. This makes the elements very easy for Selenium to identify. The page is divided into three groups of controls, which Selenium can find like this:

class SwaggerBuilder {

	WebDriver driver
	def resources = [:]
	def operations = [:]
	def parameters = [:]

	SwaggerBuilder(WebDriver driver) {
		this.driver = driver
		buildResources()
	}

	def buildResources() {
		resources = driver.findElements(By.className("resource")).collectEntries {
			def resourceName = it.findElement(By.tagName("a")).getText().replaceFirst("[/]", "")
			[(resourceName): it]
		}
	}

	def buildOperations(WebElement resource) {
		operations = resource.findElements(By.className("operation")).collectEntries {
			def operationName = it.getAttribute("id")
			[(operationName): it]
		}
	}

	def buildParameters(WebElement operation) {
		parameters = operation.findElement(By.className("operation-params")).findElements(By.tagName("tr")).collectEntries {
			def parameterName = it.findElement(By.className("code")).getText()
			[(parameterName): it]
		}
	}
}
Another big deal in the Groovy world is Closures. In the code above, .collectEntries takes a Closure and returns a Map of my element locators. The rest of this is pretty straightforward Selenium.

So now you can start a test with something like:

def driver = new FirefoxDriver()
def petstore = new SwaggerBuilder(driver)

Right now our SwaggerBuilder() is only going to collect all the top-level resources; in the case of this example, they will be: pet, store, and user. We want to be able to expand a resource by calling something like But we do not want to explicitly define the property pet, because we want this to be usable for any Swagger page anywhere. This is where the magic of Groovy Builders comes in:

Object propertyMissing(String name) {

	if(resources[(name)] == null)
		throw new NoSuchElementException("Resource $name cannot be found.")

	// expand the resource (clicking "List Operations"), then collect its operations
	resources[(name)].findElement(By.linkText("List Operations")).click()
	buildOperations(resources[(name)])

	return this
}

Every time you try to access a property on petstore that does not exist, Groovy will call the propertyMissing() method. The rest of the method above is, again, just Selenium: check if the resource has already been expanded or click on it to expand, and then build up the operations that are contained in it. So now in your test you can try something like:

def driver = new FirefoxDriver()
def petstore = new SwaggerBuilder(driver)

And you will find that things are failing with stale element exceptions! This is because Swagger is heavily an AJAX page, and therefore Selenium needs a lot of hints when and where it needs to wait for stuff to be present.

First up: when the Swagger page loads, it actually refreshes three times! So even using ExpectedConditions.refreshed() will not work. Before building the resources, either in the constructor or at the start of buildResources(), you need the following wait:

def wait = new FluentWait<By>(By.className("resource")).
	withTimeout(10, TimeUnit.SECONDS).
	pollingEvery(1000, TimeUnit.MILLISECONDS).
	ignoring(StaleElementReferenceException)	// the page refreshes, so stale references are expected
wait.until(new Function<By, Boolean>() {
	WebElement res
	Boolean apply(By by) {
		def oldRes = res
		res = driver.findElement(by)
		return res == oldRes
	}
})
This will wait until the first resource is no longer changing. I had to raise the polling time to account for slow browsers; fine-tuning this is basically trial and error.

Now we want to be able to call one of the operations, something like petstore.pet_addPet(). Just as with the resources above, we do not want to be explicitly declaring every single method.

Object invokeMethod(String name, Object args) {

	if(operations[(name)] == null)
		throw new NoSuchElementException("Operation $name cannot be found.")

	// click on the operation to expand it, then collect its parameters
	operations[(name)].click()
	buildParameters(operations[(name)])

	if(args.size() > 0) {
		// fill in the supplied parameters; see enterParameters() below
	}
}
The invokeMethod() method specifies what Groovy should do with method calls that do not exist, like the .pet_addPet() method above.

And again: Selenium will complain about things that are not there. First one to tackle is building the operations. Our method buildOperations() does not actually need the operations to be visible in the browser, because it is not interacting with them. But before we click on one, in the invokeMethod(), it needs to be visible. This is taken care of right after building the operations, either in propertyMissing() or in buildOperations() with a simple:

def wait = new WebDriverWait(driver, 5)
wait.until(ExpectedConditions.visibilityOfAllElements(operations.collect { it.value }))

And the exact same thing is repeated after we build up the parameters.

Now to submit the form, and wait for a response to come back:

Object invokeMethod(String name, Object args) {

	if(operations[(name)] == null)
		throw new NoSuchElementException("Operation $name cannot be found.")

	// click on the operation to expand it, then collect its parameters
	operations[(name)].click()
	buildParameters(operations[(name)])

	def wait = new WebDriverWait(driver, 5)
	if(args.size() > 0) {
		wait.until(ExpectedConditions.visibilityOfAllElements(parameters.collect { it.value }))
		enterParameters(args[0])
	}

	// submit the form (the "Try it out!" button; assuming it carries the "submit" class)
	operations[(name)].findElement(By.className("submit")).click()

	return assembleResponse(operations[(name)])
}

def enterParameters(Map args) {

	args.each {
		// type each value into the input field of the matching parameter row
		parameters[(it.key)].findElement(By.tagName("input")).sendKeys(it.value)
	}
}
The parameters that any operation requires are submitted as a Map. Not all operations have parameters, which is taken care of by the if statement. All that is left is to assemble the response:

def assembleResponse(WebElement operation) {

	if(operation.findElements(By.className("error")).size() > 0)
		return null

	// wait for very large responses
	def wait = new WebDriverWait(driver, 20)
	wait.until(ExpectedConditions.visibilityOf(operation.findElement(By.className("response_body"))))

	def request_url = new URL(operation.findElement(By.className("request_url")).getText())

	def response_body
	def json = new JsonSlurper()
	def xml = new XmlSlurper()
	def response_class = operation.findElement(By.className("response_body")).getAttribute("class").split(" ")
	if(response_class.contains("json"))
		response_body = json.parseText(operation.findElement(By.className("response_body")).getText())
	else if(response_class.contains("xml"))
		response_body = xml.parseText(operation.findElement(By.className("response_body")).getText())
	else
		response_body = operation.findElement(By.className("response_body")).getText()

	def response_code = operation.findElement(By.className("response_code")).getText().toInteger()

	def response_headers = json.parseText(operation.findElement(By.className("response_headers")).getText())

	return ["request_url":request_url, "response_body":response_body, "response_code":response_code, "response_headers":response_headers]
}

The first if statement checks to make sure we supplied all the required parameters. Some of the responses can be very large, so here I am using a 20 second wait. After that, just read everything and format it as a proper object: the URL as URL(), JSON and XML as JsonSlurper() and XmlSlurper() – two canned Builders in Groovy – and integers as Integer.

The complete code of my SwaggerBuilder() can be found on my SourceForge account, along with some unit tests that show how to call and use this.

September 2, 2014

dynamic query parameters in SoapUI

Filed under: automation — SiKing @ 1:49 pm

The other day I needed to create a REST call in SoapUI that had a parameter of the form multi[<multi_type>][<combins>]=<stake>. Everything in the angle-brackets is a variable. This is a perfectly valid URI and hence perfectly valid REST, but not very common, and hence SoapUI has no clue how to handle it. After some discussions with SoapUI staff, the solution ended up being rather trivial.

You have to create a new event handler. This is a Pro-only feature; if you are not sure how to set one up, have a read through the documentation before continuing.

The event type that I used is SubmitListener.beforeSubmit. Apparently RequestFilter.filterRequest would do the job as well, but this event listener is broken in older versions of SoapUI … such as the one that I am still running. :/ In order to make things cleaner, I also set my target filter to .*multibet.*.
You have to get all three variables from somewhere. I chose to keep them in TestCase properties; at run time you can set them any way you chose, such as property transfer.
The event code ends up being:

def multi_type = context.expand('${#TestCase#multi_type}')
def combins = context.expand('${#TestCase#combins}')
def stake = context.expand('${#TestCase#stake}')
def multi = submit.request.addProperty("multi[$multi_type][$combins]")
submit.request.setPropertyValue(, stake)

Note that this will actually modify the method in your service endpoint, which could mess up any other calls in your test suite. To clean this up, create a second event, of type SubmitListener.afterSubmit, probably same target if you are using one, with the code:

def multi_type = context.expand('${#TestCase#multi_type}')
def combins = context.expand('${#TestCase#combins}')
submit.request.removeProperty("multi[$multi_type][$combins]")

Simple, right? But wait, it gets better! :mrgreen:

I thought this same approach could be used to solve a problem SoapUI has had forever – and yes, SmartBear, this is a problem! Array parameters; the problem has been discussed in various places.

At first I tried some kind of loop, where I did multiple submit.request.addProperty() calls, each followed by submit.request.setPropertyValue(), but only the last one actually took. This proved that SoapUI does not support this at the object level (and not just at the GUI level), in which case an enhancement from them is probably not going to be trivial.

Again start by creating a SubmitListener.beforeSubmit. The toughest part here is deciding how you are going to mark the parameter and pass the values that need to be processed. There are two methods in the API that give you access to all the call parameters: submit.request.getPropertyList() and submit.request.getProperties(); each of these is broken in different ways in different older versions of SoapUI. 😕

Some options that I experimented with are:

  • Pass the values looking like an array, something like [1, 2, 3].
    submit.request.getPropertyList().each {
    	// extract the Array parameter name and array of values
    	def arrParamName
    	def arrParamValues = []
    	if(it.value.contains('[')) {
    		arrParamName =
    		arrParamValues = it.value[1..-2].split(',').collect { it.trim() }	// converts the String to an ArrayList
    	}
    	// wasn't an array
    	if(arrParamName == null)
    		return	// continue with the next property
    	// convert the array of values into multiple name-value pairs ... discussed below
    }

    This has a problem: what if you want to pass the literal string [1, 2, 3]? Also, it seems too verbose.

  • Somehow specially mark the parameter and still pass the values looking like an array. When you define your method and its parameters, there are several things that you can define about each one. In order for any of this to work you need to set Disable Encoding. Have a look at the reference; we are talking about control number 13.
    submit.request.getProperties().each {
    	// convert the array of values into multiple name-value pairs ... discussed below
    }
  • Alternatively, set the Type (control number 10) to something unique.
    submit.request.getProperties().each {
    	// convert the array of values into multiple name-value pairs ... discussed below
    }

Now to convert the array of values into multiple parameters. So if your parameter name-value looks like foo=[1, 2, 3] you want to end up with foo=1&foo=2&foo=3. We already have the first "foo=", we just need to create the rest of this:

def arrParamName = it.key
def arrParamValues = it.value.value[1..-2].split(',').collect { it.trim() }	// yes, .value twice!
def nvPairs = new StringBuilder()
nvPairs << arrParamValues.remove(0)
arrParamValues.each {
	nvPairs << '&'
	nvPairs << arrParamName
	nvPairs << '='
	nvPairs << it
}

Now you need to pass this string undecoded to the original parameter.

submit.request.setPropertyValue(arrParamName, nvPairs.toString())

If the parameter is already set to Disable Encoding, then the first line above is not needed.
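The name-value conversion itself depends only on plain Groovy, so it can be sanity-checked outside SoapUI. Here is a minimal standalone sketch; the bracket-stripping and splitting is my assumption about the input format, not SoapUI API:

```groovy
// Convert foo=[1, 2, 3] into foo=1&foo=2&foo=3 -- no SoapUI objects needed.
def arrParamName = 'foo'
def arrParamValues = '[1, 2, 3]'[1..-2].split(',').collect { it.trim() }
def nvPairs = new StringBuilder()
nvPairs << arrParamValues.remove(0)
arrParamValues.each {
	nvPairs << '&' << arrParamName << '=' << it
}
assert "${arrParamName}=${nvPairs}" == 'foo=1&foo=2&foo=3'
```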

This solution is not perfect, but hopefully it is enough to at least get you going.

June 27, 2014

Simple Web Crawler

Filed under: automation — SiKing @ 3:19 pm

In some Selenium discussion fora, I often see the question of how to build a web crawler / link checker in Selenium. The short answer is: you don’t! The longer answer is: you pick a different tool / library that is better suited for the job.

First, let me cover why Selenium is a (very) poor choice. Selenium is a tool that interacts with web applications; specifically, it interacts with DOM elements in a web browser. Some links on any given website are going to lead to non-DOM pages: download links, directory listings, etc. In all these cases Selenium just throws up its hands and gives up!


One very excellent library is HTTPBuilder. My implementation of an HTTPBuilder crawler is available on SourceForge. (It’s documented inline, so I am not going to repeat myself here.) In fact, many so-called Selenium web crawlers only use Selenium to open a page, but use HTTPBuilder to parse the page status … which makes Selenium just unnecessary overhead.

There are a few things that my example crawler does not handle; the exact solution for these edge cases is left as an exercise for the reader. 😉


Just for kicks, I tried to do this in SoapUI. It took a bit of convincing, but it can be done.

If you look at the links on most websites, you will find a mixture of complete URL (server and everything) and paths local to that server. The biggest challenge is to dynamically overwrite the endpoint of a SoapUI REST request. Below is only one possible solution.

First step is to create a new project, new REST service, new Resource, and a new GET Method. For all the prompts only set the endpoint to ${#TestCase#endpoint}.

crawler service

Now create a new testsuite, and a new testcase. The testcase has two properties: baseURL and endpoint. baseURL is going to hold the starting URL of the page you want to check; endpoint will eventually hold the URL of the link you are currently checking.

First test step is going to be a REST call to the baseURL. If you remember, we set our Endpoint in the REST service to the literal ${#TestCase#endpoint}. So we need a testcase Setup Script:

testCase.setPropertyValue("endpoint", context.expand( '${#TestCase#baseURL}' ))

Run the first test step, and create an assertion for Valid HTTP Status Codes to be 200, to make sure that we get something back.

Next we want to extract all the links. This is done with a DataSource step; set the type to XML, Source Step to your previous step, and Source Property to ResponseAsXml. The Row XPath to select all the elements will be //*:a[exists(@href)]; this will filter out only anchors that actually have an href attribute. The column will be @href and the property name can be anything; I used “href”. Run it and make sure you are getting the links from your previous step.

After that will be a Groovy step to parse and transform what we just retrieved:

def location = new URI(context.expand( '${anchors#href}' ))

if(location.scheme == null) {
	testRunner.testCase.setPropertyValue("endpoint", context.expand( '${#TestCase#baseURL}' ) + context.expand( '${anchors#href}' ))
} else {
	testRunner.testCase.setPropertyValue("endpoint", context.expand( '${anchors#href}' ))
}
Note that my DataSource step was called “anchors”. I am using URI(), which breaks the string up into individual components that I can refer to as I need, without doing any fancy String manipulations.

At this point it might be necessary to review exactly what is a URI and what is a URL. The big deal, tested in the if statement above, is whether the link starts with a scheme like “http” or not. If not (scheme == null), then we have a local path and we have to prepend the server name from our baseURL in front of it. If yes, then it’s a complete link to some other server, so we take it as is. The result is assigned to the testcase property endpoint.
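If the URI/URL distinction is still fuzzy, this little snippet (with made-up links) shows exactly what the if statement is keying on:

```groovy
// An absolute link carries a scheme; a local path does not.
def absolute = new URI('')
def local = new URI('/2014/06/27/simple-web-crawler/')
assert absolute.scheme == 'http'
assert local.scheme == null	// local path: prepend the baseURL
```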

Note that just as in the case of HTTPBuilder above, this does not account for some other edge cases.

Next testcase step is to make another REST call, this time to our modified endpoint – note again that our REST service points to the literal ${#TestCase#endpoint}, so this will be picked up automatically. You can set an assertion for a list of Valid HTTP Status Codes to be whatever you need. Note that SoapUI by default follows redirects; these are normally the 300 status codes. If you explicitly want to see those, you will need to turn off redirects for this step in the test step properties.

crawler redirects

Lastly wrap the testcase in a DataSource loop step.

January 29, 2014

defining test categories in SoapUI

Filed under: automation — SiKing @ 1:05 pm
Tags: ,

Categories are a feature of most modern test automation frameworks, which allows you to assign an arbitrary tag to any test, and then subsequently select a group of tests to run only by that tag. JUnit introduced this feature in version 4.8, both TestNG and NUnit have had it for a while. In SoapUI you have to do some tricky naming of your test case and suites in order to be able to achieve this. Or … you could use a custom event.

You want to check for the categories before every test run, so the event has to be a TestRunListener.beforeRun event. Leave the target blank (any test case), and name it anything you like – I used “categories”.

The category tags will be specified in three places: each test is going to be tagged with a category in a property called “categories”. Then at the project level, we want to specify properties “includeCategories” and “excludeCategories”, which will list categories to include and categories to exclude, respectively. Each of these will be just a comma-separated list. First part of our event script is to read all that stuff in:

def testCategories = []
def excludeCategories = []
def includeCategories = []

def tstCats = testRunner.testCase.getPropertyValue("categories")
def exCats = testRunner.testCase.testSuite.project.getPropertyValue("excludeCategories")
def inCats = testRunner.testCase.testSuite.project.getPropertyValue("includeCategories")

if(tstCats != null) tstCats.split(',').each { testCategories << it.trim() }
if(exCats != null) exCats.split(',').each { excludeCategories << it.trim() }
if(inCats != null) inCats.split(',').each { includeCategories << it.trim() }

The first three lines each define an empty List. The next three lines read all the properties in, and the last three lines parse them. We do not want to bother the user with defining a proper List of Strings, such as ["category1", "category2", "etc."]; the above code expects that the categories property specifies just something like: category1, category2, etc. – no quotes, no brackets.
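To see the parsing in action, here is the same pattern run against a made-up sample property value:

```groovy
// Simulates reading the "categories" property and splitting it into a List.
def testCategories = []
def tstCats = 'SMOKE, LONG, REGRESSION'	// what a user would type into the property
if(tstCats != null) tstCats.split(',').each { testCategories << it.trim() }
assert testCategories == ['SMOKE', 'LONG', 'REGRESSION']
```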

First let’s deal with exclude categories. The meaning of these is usually that if a test is tagged with any of these, do not run it.

// exclude categories
excludeCategories.each {
	if(testCategories.contains(it))
		testRunner.cancel("${testRunner.testCase.label} TestCase cancelled; excludeCategory = ${it}!")
}

And now the include categories. The meaning of these is that if the test is not tagged, skip it. Only if a test is tagged with any of these, then run it.

// include categories
if(includeCategories.size() != 0) {
	def cancelTest = true
	includeCategories.each {
		if(testCategories.contains(it))
			cancelTest = false
	}
	if(cancelTest)
		testRunner.cancel("${testRunner.testCase.label} TestCase cancelled; includeCategories = ${includeCategories}!")
}

That is it!

As an added bonus, I can also have some automatic category handling, such as:

// Do not bother running LONG tests during work hours.
if(testCategories.contains("LONG")) {
	def rightNow = new GregorianCalendar()
	if(rightNow.get(Calendar.HOUR_OF_DAY) < 17)
		testRunner.cancel("${testRunner.testCase.label} test too long to bother with during the day!")
}

Enjoy! 😀

November 1, 2013

testing concurrency with SoapUI

Filed under: automation — SiKing @ 11:02 am
Tags: ,

Testing concurrency in SoapUI is easy, if you know how – just like everything else in life. 🙂 There are several approaches that you can take.

First, let’s define what “concurrency” is. A concurrency bug, also called a “race condition”, occurs when multiple requests arrive at the server at the same time, but they each need to be handled separately. A simple example: let’s say that you have $100 in your bank account and you send several requests to withdraw all the money – obviously only one of the requests should succeed and the others should fail due to insufficient funds. If more than one of these requests succeeds, you have found a race condition. Race conditions are often caused by improper database locking.

SoapUI has the ability to send requests in parallel at the project level (parallel test suites) or at the test suite level (parallel test cases). I will discuss several possible approaches based on different situations.

Note that in the below discussion in each case we are talking about only one test! However, when I use the terms “test case” and “test suite” I am talking about the level of hierarchy in a SoapUI project.

Case 1: the straightforward suite

In the simplest case you have a situation where only one of multiple requests should succeed, such as the case of withdrawing all the money from one bank account. This case assumes there is no setup or cleanup of any kind.

Start with creating a new test suite in SoapUI and setting the run mode to parallel. testcase parallel mode

Next create the first test case to make one transfer. Do not add any assertions that verify success / failure of the transfer itself – you are expecting only one to succeed and you do not know which one that is going to be. Since timing is important in a concurrency test, keep this lean. Normally I have just a single SOAP Response assertion, or a Valid HTTP Status Codes assertion (if dealing with a REST request).

Clone the test case (using F9) multiple times. As will be seen later, it is a good idea to keep the names of the test cases simple; something like: transfer0, transfer1, transfer2, etc.

Lastly, you need to handle the verification in the test suite TearDown Script. The exact script will depend on what response you are going to get and what you are looking for. Let’s say that a failed transfer will respond with a SOAP fault, then your verification script could look like the following.

import com.eviware.soapui.support.XmlHolder

def goodTransfer = 0
testSuite.testCases.each {
	def response = new XmlHolder(it.value.getTestStepByName("transfer").getProperty("Response").value)
	if(response.getDomNode("//*:Fault") == null)
		goodTransfer++
}
assert goodTransfer == 1 : "Too many transfers!"

You will have to adjust the script for your particular situation, but the idea is simple. Read the response of each of the steps (within the test cases in the SoapUI hierarchy) and count the number of successes – there had better be only one.

Case 2: login first

What if you first need to login or generate some other setup?

The only difference from case 1 is the setup – the login. Start with creating everything from case 1.

Next add a test case for the login. It does not matter where in the order of tests you place it (everything runs in parallel anyway), but it helps visually if you place it first. Make sure this test runs correctly and transfers any credentials you may need to your other steps.

Next comes the secret sauce. Disable this test case; this test must be run from the test suite Setup Script with:

testSuite.getTestCaseByName("login").run(new com.eviware.soapui.support.types.StringToObjectMap(), false)

The login could get more complicated if your application requires that HTTP session be maintained. The exact solution to your specific situation is left as an exercise for the reader. 🙂

Case 3a: verify afterwards (at project level)

What if, similarly to case 2, you need to run some sort of cleanup after everything has run? For example: deposit the money back so that the next run of the same test will work again.

You could take the same approach as in case 2: create a test case at the end of your test suite, disable it, and run it from the suite TearDown Script. However, any tests that are run this way will not be considered when determining whether the suite failed (the final exit status of the test runner) and will not have any logs created!

In order to work around this, you could set up your entire test as a new SoapUI project. In the project, you will have three test suites: setup, transfers, and cleanup. All test suites are run sequentially, nothing gets disabled, nothing gets run from Setup or TearDown. The setup portion will have only one test case similar to the login from case 2. The transfers portion will have multiple test cases such as all the transfers from case 1 (possibly minus the verification), all run in parallel. The cleanup portion will have one test case like the deposit and any cleanup (and possibly the verification).

Practical example of this situation is that instead of withdrawing all the money from your account, you send in multiple deposits and withdrawals for smaller amounts, and then afterwards the transaction history (a separate API request?) has to show the correct running balance.

The exact implementation of all of this should be pretty straightforward if you got through all the discussion so far.

However, if you are running in a continuous integration system, it is undesirable to have a separate SoapUI project for every concurrency test.

Case 3b: verify afterwards (at test suite level)

You can have a test case wait until other test cases have completed. The magic sauce here is the SoapUI monitor.

Start by creating everything you have done in case 2.

Now create your verification test case. Just as before, it does not matter where in the test suite you place it (everything runs in parallel), but visually it makes more sense to place it at the end. This test does not get disabled, since this is where the verification will take place – you need this test to be considered in the final runner exit status, and you want all the logs generated from this step.

In the test suite Setup Script initialize the monitor.

def monitor = new com.eviware.soapui.monitor.TestMonitor()
testSuite.testCases.each {
	def testCase = it.value
	if(testCase.label.contains("transfer")) {
		log.info("monitoring: ${testCase.label}")
		monitor.monitorTestCase(testCase)
	}
}
runner.runContext.testMonitor = monitor

If you are less careful about how you name your test cases, the if statement above may become slightly more complicated:

	if(testCase.label.contains("withdrawal") || testCase.label.contains("deposit"))

In the last test case, the verification one, create a new Groovy Script step which will wait for everything else to complete. This step has to be first!

def monitor = context["#TestSuiteRunner#"].runContext.testMonitor
if(monitor == null) {
	log.warn("monitor not found in completion check")
} else {
	while(monitor.hasRunningTests()) {
		log.info("waiting for tests to complete ...")
		try {
			Thread.sleep(1000)
		} catch(InterruptedException ignored) { }
	}
	log.info("Tests completed: ${!monitor.hasRunningTests()}")
}

July 25, 2013

SoapUI Cookie management

Filed under: automation — SiKing @ 12:09 pm
Tags: ,

It seems that HTTP session Cookie management in SoapUI is little understood. 😦 Several Google searches, supplemented by some emails to SmartBear support, yielded only a lot of confusion – until one day I happened upon this gem by user “Unibet Support”!

Cookies are normally handled by the client in a “Cookie Store”; in SoapUI they cannot be read/set the same way as other parameters. I am intentionally using the upper-case C here, as one of my first attempts was messageExchange.requestHeaders['cookie']. Through some trial and error I discovered that I could use an event like submit.request.requestHeaders['Cookie'] (with the upper-case C), however this was still not the correct path to enlightenment.

You first need to get into the Cookie jar.

import com.eviware.soapui.impl.wsdl.support.http.HttpClientSupport

def myCookieStore = HttpClientSupport.getHttpClient().getCookieStore()

Reading Cookies

def myCookies = myCookieStore.getCookies()

This will give you a List. An individual Cookie is going to look something like:

[version: 0][name: JSESSIONID][value: 6ed79202575ff0c178efa2d4d9f1][domain: abcd-zq11][path: /css][expiry: null]

You can access each of the items with a get method, which Groovy usually exposes as a property.

assert myCookies[0].getValue() == myCookies[0].value

To get one specific Cookie, you could do something like:

def interestingCookie
myCookies.each {
	if( == "JSESSIONID")
		interestingCookie = it
}

Updating Cookies

To update a Cookie is just as easy. Each of the get methods has a corresponding set method, again exposed in Groovy as a property.

interestingCookie.value = "new_cookie_value"

This, of course, updates the Cookie right in the Cookie Store! To avoid this:

def clonedCookie = interestingCookie.clone()
clonedCookie.value = "cookie_not_in_store"

Deleting Cookies

myCookieStore.clear()

will clear out all Cookies from the Store. To delete only one specific Cookie, you could do something like:

interestingCookie.expiryDate = new Date() - 1	// yesterday
myCookieStore.clearExpired(new Date() - 1)
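The same expire-and-clear trick can be tried outside SoapUI against a plain Apache HttpClient cookie store – the classes below ship with SoapUI, and the cookie name and value are made up:

```groovy
import org.apache.http.impl.client.BasicCookieStore
import org.apache.http.impl.cookie.BasicClientCookie

def store = new BasicCookieStore()
def cookie = new BasicClientCookie('JSESSIONID', 'abc123')
cookie.domain = 'qa.test'
store.addCookie(cookie)
cookie.expiryDate = new Date() - 1	// yesterday
store.clearExpired(new Date())		// anything expired by "now" gets dropped
assert store.cookies.isEmpty()
```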

Creating Cookies

This is a little more involved.

import org.apache.http.impl.cookie.BasicClientCookie
def myNewCookie = new BasicClientCookie("cookie_name", "cookie_value")
myNewCookie.version = 1
myNewCookie.domain = "qa.test"
myCookieStore.addCookie(myNewCookie)

Of course you could have done something like:

def myNewCookie = new BasicClientCookie("cookie_name", interestingCookie.value)
