Saturday, July 2, 2016

Selenium Grid with Windows video recording support

In a previous article I showed you how to add video recording support to Selenium Grid so that it can be run in Docker containers on Linux. This time we'll extend the Grid to record tests on both Linux and Windows.

To make a single cross-platform solution, we'll replace avconv with ffmpeg. As the two tools are nearly identical, the key recording commands don't need to change.

The main updates affect the OS-specific recorder and display options. Fortunately, ffmpeg ships with a GDI-based screen capture device called gdigrab, which is available out of the box on Windows. When using this recorder, we should specify desktop as the input.
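As a rough illustration, the OS-specific part could be assembled along these lines (a minimal sketch with assumed method names, not the project's actual code):

<pre class="brush: java">
import java.util.ArrayList;
import java.util.List;

public class RecorderOptions {

    public static List<String> buildCaptureArgs(String display, int frameRate, String output) {
        List<String> args = new ArrayList<>();
        args.add("ffmpeg");
        args.add("-y");                       // overwrite the output file if it exists
        args.add("-f");
        if (isWindows()) {
            args.add("gdigrab");              // GDI-based capture device on Windows
            args.add("-framerate");
            args.add(String.valueOf(frameRate));
            args.add("-i");
            args.add("desktop");              // gdigrab records the whole desktop
        } else {
            args.add("x11grab");              // X11 capture device on Linux
            args.add("-framerate");
            args.add(String.valueOf(frameRate));
            args.add("-i");
            args.add(display);                // e.g. ":99.0" taken from the DISPLAY variable
        }
        args.add(output);
        return args;
    }

    private static boolean isWindows() {
        return System.getProperty("os.name").toLowerCase().contains("win");
    }
}
</pre>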
The next thing we need to take care of is gracefully stopping the recording process. The INT signal is supported on Linux out of the box, but there's no easy way to do the same on Windows. So we'll use a special tool written in C to achieve this goal. It will be included in the Grid resources; we just need to take care of extracting it into an OS-specific temporary folder on the jar's startup. This can be done via a custom launcher implementation.
Let's modify the stopVideoRecording API to support graceful ffmpeg process stopping.
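A possible cross-platform sketch of that routine (the helper executable name, its location and the way the PID is obtained are assumptions):

<pre class="brush: java">
import java.io.IOException;

public class VideoStopper {

    public static void stopVideoRecording(long pid, String windowsHelperPath) throws IOException, InterruptedException {
        String[] command;
        if (System.getProperty("os.name").toLowerCase().contains("win")) {
            // the bundled C utility sends a console Ctrl+C event to the target process
            command = new String[]{windowsHelperPath, String.valueOf(pid)};
        } else {
            // on Linux a plain INT signal is enough for ffmpeg to finalize the file
            command = new String[]{"kill", "-INT", String.valueOf(pid)};
        }
        new ProcessBuilder(command).start().waitFor();
    }
}
</pre>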
That's it. There's one more point I'd like to cover. As we replaced avconv with ffmpeg, the docker-selenium project should be updated as well.
ffmpeg is missing from the official Ubuntu repository, so first we need to add a custom one, and then install the tool.

You can find the sources on GitHub as usual. Don't forget to install ffmpeg and put it on the system path. Happy coding. :)

Monday, April 25, 2016

Docker, Selenium and a bit of Allure: how to raise a scalable ready to use automation infrastructure with video recording support

Docker containers have become a modern trend nowadays. You may have already seen Selenium images in action. On the other hand, you may be wondering about some missing features which could be quite useful for e2e testing.

If we're running containers as daemons and some tests have failed, sometimes it's not enough just to take a look at screenshots and logs, so we have to re-run our tests in interactive mode to detect the root cause. I'd say it's boring. I don't want to sit and watch my tests running for N minutes or even hours. I'd like to watch a video recording! That's exactly what I was looking for as a primary feature of the above-mentioned images.

As you may guess, in this article I'll show you how we can "fix" that.

Our small journey will consist of 3 parts:
  1. Selenium standalone tweaks.
  2. Docker selenium tweaks.
  3. Demo with real-time video recording and further output attaching into Allure report.
There are plenty of ways to record a test session inside a Docker container. Let's list some of them:
  1. Monte.
  2. vnc2flv.
  3. ffmpeg / avconv.
I've tried them all. But within this guide's scope we'll pick the last option - avconv. It's a fork of ffmpeg which has replaced it in the official Ubuntu repositories. The other important criterion against the first 2 options is mp4 format support, so that the produced output can be attached to an HTML5 player.

So our main goal is to add avconv support at the Selenium level. Let's start with a utility class for video processing.
You can find a full list of available commands in the official avconv documentation. I'll just leave a few notes here:

  • To be able to record a video, we need to specify a valid DISPLAY option, which is retrieved from an environment variable.
  • Screen size = max container size. There was no need to make it configurable for this particular sample.
  • The libx264 video codec is a valid default option for HTML5 mp4 processing.
  • The quality level is managed by the -crf option.
  • The frame rate can be set via the -r argument.
There's one important thing here: we should trigger the recording process asynchronously to avoid blocking Selenium Grid's main execution flow. CompletableFuture was used for this purpose.
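A simplified sketch of such a utility could look like this (the class name, screen size and argument layout are assumptions based on the notes above, not the exact sources):

<pre class="brush: java">
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

public class VideoRecorder {

    private Process recording;

    public CompletableFuture<Void> startVideoRecording(String output, int crf, int frameRate) {
        String display = System.getenv("DISPLAY");            // e.g. ":99.0" inside the container
        String[] command = {
                "avconv",
                "-f", "x11grab",                               // grab the X11 screen
                "-r", String.valueOf(frameRate),               // frame rate
                "-s", "1360x1020",                             // screen size = max container size
                "-i", display,
                "-vcodec", "libx264",                          // html5-friendly mp4 codec
                "-crf", String.valueOf(crf),                   // quality level
                output
        };
        // started asynchronously so the grid's main execution flow isn't blocked
        return CompletableFuture.runAsync(() -> {
            try {
                recording = new ProcessBuilder(command).start();
                recording.waitFor();
            } catch (IOException | InterruptedException e) {
                throw new IllegalStateException("Unable to record video", e);
            }
        });
    }

    public void stopVideoRecording() {
        if (recording != null) {
            recording.destroy();    // simplified; sending INT lets avconv finalize the mp4 properly
        }
    }
}
</pre>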

Technically, video recording is an infinite process. To stop it gracefully we should either press the Ctrl+C hotkey or send an INT signal.

Let's create a simple VideoRecordingServlet to be able to trigger the avconv tool when required.
The main idea is to filter custom request commands to start / stop recording. Besides that, the end user should provide a JSON with the video options listed above, which is passed to the avconv tool. Note that VideoInfo is a simple POJO which holds the corresponding arguments.
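A bare-bones sketch of what such a servlet could look like (the request parameter names and the trimmed-down VideoInfo are assumptions; it reuses the VideoRecorder sketch from above):

<pre class="brush: java">
import com.google.gson.Gson;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class VideoRecordingServlet extends HttpServlet {

    // the recording utility sketched above
    private final VideoRecorder recorder = new VideoRecorder();

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String command = request.getParameter("command");            // "start" or "stop"
        if ("start".equals(command)) {
            // the video options json from the request body ends up in the POJO
            VideoInfo info = new Gson().fromJson(request.getReader(), VideoInfo.class);
            recorder.startVideoRecording(info.fileName, info.crf, info.frameRate);
        } else if ("stop".equals(command)) {
            recorder.stopVideoRecording();
        } else {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Unknown command: " + command);
            return;
        }
        response.setStatus(HttpServletResponse.SC_OK);
    }

    // a trimmed-down VideoInfo POJO
    public static class VideoInfo {
        String fileName;
        int crf;
        int frameRate;
    }
}
</pre>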

Now we need to create a HubProxy which will intercept session creation / disposal requests and start / stop the video recording process.
Note that we're getting the video options from the end user in the form of DesiredCapabilities. It means that at the proxy level we need to retrieve the corresponding JSON and put it into the request entity, which is then sent to the VideoRecordingServlet.
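A sketch of such a proxy against the Grid 2.x-era API could look as follows (the /extra servlet path and the videoInfo capability name are assumptions made for illustration):

<pre class="brush: java">
import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.Registry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class VideoHubProxy extends DefaultRemoteProxy {

    public VideoHubProxy(RegistrationRequest request, Registry registry) {
        super(request, registry);
    }

    @Override
    public void beforeSession(TestSession session) {
        super.beforeSession(session);
        Object videoInfo = session.getRequestedCapabilities().get("videoInfo");
        if (videoInfo != null) {
            send("start", String.valueOf(videoInfo));
        }
    }

    @Override
    public void afterSession(TestSession session) {
        send("stop", "");
        super.afterSession(session);
    }

    private void send(String command, String json) {
        try {
            URL url = new URL(getRemoteHost() + "/extra/VideoRecordingServlet?command=" + command);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("POST");
            connection.setDoOutput(true);
            try (OutputStream out = connection.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            connection.getResponseCode();        // fire the request, ignore the body
        } catch (IOException e) {
            throw new IllegalStateException("Unable to call the video recording servlet", e);
        }
    }
}
</pre>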

That's it. Now you can rebuild selenium-server-standalone.jar with the video recording feature on board using the maven-assembly-plugin.

Let's move on to the next part, which is related to the Docker image modifications.

First you need to force the Base image to use your newly built Selenium server instead of pulling the official one. Just replace the corresponding line with a reference to your local hard drive.
The next thing we need to do is modify the NodeChrome / NodeFirefox config.json to enable our custom servlet and proxy.
And the last point is related to the NodeChromeDebug / NodeFirefoxDebug updates. These images don't support avconv out of the box, so we need to install it. This can be done quite simply by adding libav-tools to the appropriate installation section.
That's it. Optionally, you can change the image names / tags to avoid clashing with the official ones.

To build new images use the following command:
When you're done, the Selenium Grid infrastructure can be linked together with docker-compose to minimize further interaction with the terminal.
To automate the scaling procedure, you can use the following script:
You just need to pass an argument specifying how many containers to raise for the Chrome / Firefox nodes.

When you run this script, you'll be able to access the Selenium Grid console the same way as before. Note that docker-compose.yml should be located in the same folder as the above .sh file.

Let's move on to the last part of our journey. It's related to the client-side code for passing custom capabilities to the RemoteWebDriver instance and attaching the produced video recordings to the Allure report.

You can use the following code snippet as a starting point.
Note that we're passing the volume mapped in docker-compose.yml as the output folder for our video recordings. The file name should be unique to avoid potential overwriting, as the output dir is shared between all raised containers.

In the above example we're setting a quality level of 18 and a frame rate of 25 (see the official avconv docs) in the VideoInfo POJO, which is then transformed into a JSON string and pushed as a custom capability.
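For illustration, the client-side setup could look roughly like this (the videoInfo capability name, the hub URL and the map standing in for the VideoInfo POJO are assumptions):

<pre class="brush: java">
import com.google.gson.Gson;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class DriverFactory {

    public static RemoteWebDriver createDriver() throws Exception {
        // unique file name inside the shared volume to avoid overwriting
        Map<String, Object> videoInfo = new HashMap<>();
        videoInfo.put("fileName", "/e2e/tmp/" + UUID.randomUUID() + ".mp4");
        videoInfo.put("crf", 18);          // quality level
        videoInfo.put("frameRate", 25);    // frames per second

        DesiredCapabilities capabilities = DesiredCapabilities.chrome();
        capabilities.setCapability("videoInfo", new Gson().toJson(videoInfo));
        return new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capabilities);
    }
}
</pre>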

The HTML5 video attachment feature is not yet released, but you can pull the latest Allure snapshot to give it a shot.

If you're using TestNG, the best place to put the attachment snippet is a listener. You can override the onTestSuccess / onTestFailure methods to trigger the following code:
There's one tricky moment. You may have already noticed that docker-compose.yml contains 2 mapped volumes. By default all recordings are pushed into a temporary folder and then copied to the main one. This evil workaround lets us track whether a video recording is finalized and ready to be pushed into the report. If we call the above method on a video which is not completed yet, the output will be corrupted and we won't be able to play it at all.
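A hedged sketch of such a listener (the video folder, the file-name convention and the polling interval are assumptions; the @Attachment annotation is the standard Allure 1.x one):

<pre class="brush: java">
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;
import ru.yandex.qatools.allure.annotations.Attachment;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class VideoListener extends TestListenerAdapter {

    private static final Path VIDEO_DIR = Paths.get("/e2e/videos");   // the "final" mapped volume

    @Override
    public void onTestSuccess(ITestResult result) {
        attachVideo(result.getName());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        attachVideo(result.getName());
    }

    @Attachment(value = "Video", type = "video/mp4")
    public byte[] attachVideo(String testName) {
        try {
            // the recording shows up here only once it has been finalized and copied
            // from the temporary folder, so we poll for a short while
            Path video = VIDEO_DIR.resolve(testName + ".mp4");
            for (int i = 0; i < 60 && !Files.exists(video); i++) {
                Thread.sleep(1000);
            }
            return Files.exists(video) ? Files.readAllBytes(video) : new byte[0];
        } catch (IOException | InterruptedException e) {
            return new byte[0];
        }
    }
}
</pre>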

Let's see a short demo of a 2-threaded test execution.


That's pretty much it. You can find the sources at the following GitHub links: docker-selenium, docker-selenium-grid, docker-selenium-samples.

Sunday, February 7, 2016

Selenium Camp 2016 - Effective UI tests scaling on Java - announce

As you may know, the next Selenium Camp conference will take place at the end of February 2016. I'm glad to share a short announcement of the talk I'm going to give: Effective UI tests scaling on Java.

The following screencast will help you understand some technical aspects which will be covered at the event. Please note that both the talk and the announcement are in Russian.


Saturday, November 21, 2015

WebDriver vs Select2

Nowadays it's quite popular to use the Select2 control instead of a common select. It has some cool features like filtering, tagging, themes support, etc. On the other hand, sometimes it's quite hard to automate interaction with such controls due to their dynamic nature.

So what challenges do we face while trying to access Select2 via WebDriver? First of all, we can't use the existing Select wrapper to control this component anymore. The other problem is dynamic filtering: as sendKeys types text character by character, Select2 will be constantly updating its state during typing. Besides that, we can't predict the list items' loading time, as it heavily depends on collection size and performance.

We could try to play with WebDriverWait to resolve potential issues, but to be honest there are plenty of factors which may produce an unexpected result. It's quite hard to control this component even with the explicit waits technique. So how could we sort it out?

In this article I'll show how to create a custom Select2 wrapper, which will use its native API for further interaction.

We'll apply an existing template from one of my previous articles to avoid re-inventing the wheel. But first, let's take a look at Select2 native API, which we could use in our wrapper's implementation.


This is a common Select2 structure. As you can see, the old select control with its options list is located below the main component. It's usually hidden. Option values may differ from the displayed text. So ideally, it'd be nice to get an option's value by its visible text first, and then ask Select2 to display it.

Let's play with the browser console first. Assuming that we want to select Monday from the dropdown, we first need to retrieve the option's value, which is equal to m.


As you can see, it could be done via pure jQuery syntax.

So how could we ask Select2 to display the option which has the value m? There's a special function, select2, which accepts different native actions like open, val, data, etc. It allows us to pass the option's value directly to the Select2 control to display Monday.


Technically, that's everything we need to reach our initial goal. Let's create the Select2 wrapper now.
We're calling JavascriptExecutor internally to apply the scenario we've already played with in the browser console. Our wrapper extends HTMLElement, which allows using the custom component directly in PageObjects without explicit initialization.
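A condensed sketch of the idea (simplified: it takes the hidden select's id directly instead of extending HTMLElement; the select2('val', ...) call mirrors the console experiment above):

<pre class="brush: java">
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;

public class Select2 {

    private final WebDriver driver;
    private final String selectId;      // id of the underlying hidden <select>

    public Select2(WebDriver driver, String selectId) {
        this.driver = driver;
        this.selectId = selectId;
    }

    public void selectByVisibleText(String text) {
        JavascriptExecutor js = (JavascriptExecutor) driver;
        // 1. resolve the option's value by its visible text
        Object value = js.executeScript(
                "return $('#" + selectId + " option:contains(\"" + text + "\")').val();");
        // 2. ask Select2 to display the resolved value
        js.executeScript("$('#" + selectId + "').select2('val', '" + value + "');");
    }
}
</pre>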
Hopefully it'll help you forget about StaleElementReferenceException while working with WebDriver and Select2. You can find the sources as usual on GitHub.

Monday, August 10, 2015

Selenium content supplier

If you've ever worked with Selenium, you know that to run tests in the Chrome / Internet Explorer browser you have to supply a special so-called standalone server (chromedriver / IEDriverServer) which implements the WebDriver wire protocol. When browser updates happen, most likely you'll need to update the corresponding drivers as well. If you work with Grid, sometimes you may also need to update the standalone server when a new Selenium version appears.

Well, it can be quite tedious to keep external Selenium content always up to date. It would be nice to automate this process somehow, right? As you may have guessed, that is our primary goal for today.

Let's start with some introduction first. There are 2 public resources where you can find a list of available items for downloading Selenium content: the selenium storage and the chromedriver storage. The direct links will take you to a root XML view, which is useless from an end-user perspective, but from a developer's point of view it's a mine of information. If you look at these XMLs in detail, you'll notice that there's a list of Contents nodes, and each of them contains a special Key. You may wonder how this could help us with our main task. In fact, this key is the last part of an end-point for downloading particular content. So if we concatenate the root URL with one of the listed keys, we'll get a full download URL for any available resource. And it means that we can use a simple GET request for retrieving any content we want. Well, it's quite an easy task if you know which particular version is the newest. But in fact, we don't have such information. Or do we? Let's take a look at our XML again:
As you can see, all the keys are sorted, so if we could parse this XML somehow and retrieve the last node's info, it would resolve our latest-version recognition issue.

Fortunately, we use IntelliJ IDEA, which can generate an XSD schema from XML:


And then we can generate source code from the XSD schema:


So to get our XML model, we just need to save the XML somewhere in the project, and everything else can be done in a few clicks using our favourite IDE.


Well, now we have a model, which means we can send a simple GET request to the root URL to retrieve the list of available contents and put everything into the newly generated entities. I prefer the REST approach, so let's see how we could do that with the Jersey client:
The first method gets the XML content and unmarshals it into ListBucketResultType.class. The second overloaded method loops through each content node, applies filtering by key and returns the last matching value. You may wonder what the Content type is about. It's a custom interface intended to provide access to the common Selenium content wrappers: ChromeDriver, IEDriver and SeleniumServer. As we may want to download different resources for different OS types / bitness, it was necessary to make our code generic.
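A rough approximation of that client (assuming Jersey 2.x and JAXB-generated getContents()/getKey() accessors; the generated type and method names may differ from the real sources):

<pre class="brush: java">
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import java.util.List;
import java.util.Optional;

public class ContentClient {

    public ListBucketResultType getContentList(String rootUrl) {
        return ClientBuilder.newClient()
                .target(rootUrl)
                .request(MediaType.APPLICATION_XML)
                .get(ListBucketResultType.class);
    }

    public Optional<String> getLatestPath(String rootUrl, String keyFilter) {
        List<ContentsType> contents = getContentList(rootUrl).getContents();
        return contents.stream()
                .map(ContentsType::getKey)
                .filter(key -> key.contains(keyFilter))    // e.g. "selenium-server-standalone"
                .reduce((first, second) -> second);         // keys are sorted, so keep the last match
    }
}
</pre>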
As you can see, some predefined configuration was created for easier content parsing. Each wrapper contains its own set of characteristics. Let's see how we can use this code with the content downloading API:
As you can see, we're passing the exact content type we want to download. The next goal is to parse the XML and find the latest key using the getLatestPath API. The received key has to be split additionally, as it has the following format: version/resourceName. Now we're ready to prepare a new saving path based on the known resource name and output folder. When that's done, we just need to send a GET request to the remote end-point and read the response into an InputStream. To save the received file data, we can use the Apache IOUtils copy API. The last step is to return the saved file name for further processing.
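A simplified version of that download flow (the paths and helper names are illustrative; IOUtils.copy comes from Apache commons-io):

<pre class="brush: java">
import org.apache.commons.io.IOUtils;

import javax.ws.rs.client.ClientBuilder;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Paths;

public class ContentDownloader {

    public String download(String rootUrl, String latestKey, String outputFolder) throws Exception {
        // the key format is "version/resourceName", we only need the file name part
        String resourceName = latestKey.split("/")[1];
        String savePath = Paths.get(outputFolder, resourceName).toString();

        try (InputStream in = ClientBuilder.newClient()
                .target(rootUrl + latestKey)
                .request()
                .get(InputStream.class);
             OutputStream out = new FileOutputStream(savePath)) {
            IOUtils.copy(in, out);
        }
        return resourceName;        // returned for further processing (e.g. unzipping)
    }
}
</pre>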

You may wonder what other kind of processing we need. Well, some of the available artifacts are zipped, so it'd be nice to perform automatic unzipping when downloading is completed, right? That's why we return the saved file name. To unzip items we may want to use an existing library, like zip4j:
There's nothing specific in this code that needs to be discussed, so I'll leave it here for your own investigation.
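For reference, a minimal unzip sketch with the zip4j 1.x API (the zip extension check mirrors the filtering mentioned later; error handling is simplified):

<pre class="brush: java">
import net.lingala.zip4j.core.ZipFile;
import net.lingala.zip4j.exception.ZipException;

public class Unzipper {

    public static void unzip(String archivePath, String outputFolder) {
        if (!archivePath.endsWith(".zip")) {
            return;                               // nothing to do for plain files like .jar
        }
        try {
            new ZipFile(archivePath).extractAll(outputFolder);
        } catch (ZipException e) {
            throw new IllegalStateException("Unable to unzip " + archivePath, e);
        }
    }
}
</pre>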

Cool, so now we can download and unzip any Selenium content we want. But what about remote delivery? If we're working with Grid, we may want to update not only the hub VM, but all the nodes as well. For this particular case a server part was added, with a simple file upload service:
Note that Java 8 parallel streams allow us to increase performance while processing files. Besides that, unzip functionality was added as well. Now let's take a look at the client API for file uploading:
Here we prepare a MultiPart file payload for sending, according to the passed list of paths. By default the unzipping feature is enabled, but it's safe, as the backend uses zip extension filtering.
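A hedged sketch of such an upload client with Jersey multipart (the endpoint path and form field name are assumptions):

<pre class="brush: java">
import org.glassfish.jersey.media.multipart.FormDataMultiPart;
import org.glassfish.jersey.media.multipart.MultiPartFeature;
import org.glassfish.jersey.media.multipart.file.FileDataBodyPart;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.io.File;
import java.util.List;

public class UploadClient {

    public Response upload(String serverUrl, List<String> paths) {
        Client client = ClientBuilder.newBuilder().register(MultiPartFeature.class).build();
        FormDataMultiPart multiPart = new FormDataMultiPart();
        // one body part per file path passed in
        paths.forEach(path -> multiPart.bodyPart(new FileDataBodyPart("file", new File(path))));
        return client.target(serverUrl + "/upload")
                .request()
                .post(Entity.entity(multiPart, MediaType.MULTIPART_FORM_DATA_TYPE));
    }
}
</pre>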

That's it. Let's look at some test samples to see how easy it is to download the latest Selenium content using this library:
In the first example we download the entire Selenium content and unzip it in the same output folder. The second sample shows how to download particular resources and send them to a remote VM.

You may wonder where to specify the IP / port of the remote server. It can be done via the main client class's parameterized constructor:
For local file processing you can just use the default constructor.

That's all about the Selenium content supplier. Hope you'll find it useful for your projects. By the way, you can combine it with the EnvironmentWatcher service to implement the following scenario on the fly: stop all services -> update the Selenium content -> bring the entire automation setup back up with new jars / drivers.

You can download the sources on GitHub, and the related samples as well. The main project is not in the official Maven repository yet, so you need to build it on your own. Just use the mvn clean install command to generate the artifacts, and then you can add the following dependency to your own projects:

Sunday, August 2, 2015

Java 8 impact on test automation framework design - Part 2

In the first part I showed you how to prettify UI tests using Java 8 interfaces. This time we'll take a look at a more complicated example.

Well, you may know that there're 2 common ways of accessing web controls via WebDriver:
  1. @FindBy + WebElement -> automatic lookup with PageFactory.initElements.
  2. By -> delayed lookup with driver.findElement or WebDriverWait + ExpectedConditions.
Personally I prefer the second option, as it gives better flexibility while working with complicated JS-based websites, where we always need to wait for something. But on the other hand, the By class seems a bit non-obvious, plus there's no factory implemented for this case yet. Well, actually in the first part we did create a custom factory, so we could say that this problem is gone. But besides that, it'd be nice to have a similar element definition style to the one implemented for pure WebElement.

In this article I'll show you how to create custom typified elements and a generic initializer, similar to the initElements mentioned above.

We'll modify part of the code from one of my previous articles, as the idea remains the same: creating a custom HTML annotation and an HTMLElement class. But this time we'll also implement some more specific elements, like Button, TextInput, Label, etc., mostly as it was done in the Html Elements framework.

Let's look at HTMLElement (aka the base element) first. It won't be a full listing, only some key moments:
As we're going to create more specific elements, it's important to pass the WebDriver instance to our base element's constructor for further usage in combination with WebDriverWait. To be honest, there are lots of ExpectedConditions we could use for locating elements, but for educational purposes we'll look only at the most popular: visibilityOfElementLocated, presenceOfElementLocated and elementToBeClickable. All these conditions can be described via the following function:
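Sketched with assumed names (the real listing may differ), the key part could look like this:

<pre class="brush: java">
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.util.function.Function;

public class HTMLElement {

    protected final WebDriver driver;
    protected final By locator;
    private final WebDriverWait wait;

    public HTMLElement(WebDriver driver, By locator) {
        this.driver = driver;
        this.locator = locator;
        this.wait = new WebDriverWait(driver, 10);
    }

    // all three popular conditions share this shape: By -> ExpectedCondition<WebElement>
    protected WebElement waitUntil(Function<By, ExpectedCondition<WebElement>> condition) {
        return wait.until(condition.apply(locator));
    }

    public void click() {
        waitUntil(ExpectedConditions::elementToBeClickable).click();
    }

    public String getText() {
        return waitUntil(ExpectedConditions::visibilityOfElementLocated).getText();
    }
}
</pre>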
This function is applied in a generic waitUntil method, so that we can pass any of the above ExpectedConditions as a parameter. Now we're ready to create some more specific elements, e.g. TextInput:
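A possible TextInput built on top of that base element (again an assumed shape, not the exact listing):

<pre class="brush: java">
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;

public class TextInput extends HTMLElement {

    public TextInput(WebDriver driver, By locator) {
        super(driver, locator);
    }

    public void type(String text) {
        // only clickable inputs are considered located
        WebElement input = waitUntil(ExpectedConditions::elementToBeClickable);
        input.clear();
        input.sendKeys(text);
    }
}
</pre>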
As you can see, it extends HTMLElement. Now we can use the waitUntil method to locate only those inputs which are clickable. Besides that, we've defined some custom logic: clear the input and type some text. Note that the element's locator reference physically lives in the super class.

Let's assume that we've already created a set of specific elements. So how could we initialize them? If you look at the PageFactory sources, you'll notice some dark reflection magic; it'll take you some time to figure out how it's implemented. But I'll show you an alternative, even more generic and darker way of initializing elements. It'll still be reflection-based, but with the help of Java 8 features we'll see how it can be implemented within a single interface.

To make this experiment more realistic, we'll create another type of element, so that we can see that our elements' supplier doesn't strictly depend on a single type and is definitely generic. So what type of element will it be? Have you ever heard of SikuliX? It's an image-recognition automation tool which may help us resolve some complicated automation tasks that are impossible with Selenium. So before starting with the elements' initializer, I'll create a model for SikuliDriver, ScreenElement and its ImageElement implementation. Well, I hope some time in the future I'll have enough capacity to implement a fully functional approach to bring SikuliX closer to the WebDriver interface. But for now it'll be just a mock.

Here's a draft implementation, which will be further mocked in the test:
There won't be any real clicks or text typing, but we need to know that the element has been successfully initialized and that we can perform some basic actions.

Well, our alternative model is ready, so now let's add WebDriver and SikuliDriver (mock) into the BaseTest class.
Note that the well-known initialization / quitting stuff was skipped, but you can find the full source later on GitHub. The Mockito library was used for mocking, so don't forget to add the appropriate dependency to the root pom.xml:
And now it's time for something very special. Welcome, our magic interface - ElementsSupplier. I'll try to explain everything within the following listing, as there's a complicated combination of reflection, streams, lambdas and default methods.
Let's start with the end. As you may have noticed from the HTMLElement and ImageElement constructor signatures, both receive a specific driver as the first argument, which is needed for further element locating:
We also have 2 custom annotations - HTML and Image - whose values need to be parsed and supplied to the appropriate element constructors side by side with the drivers mentioned above. This is a bit of a tricky moment. In the case of a single element type, we know exactly which annotation to parse, which driver to use and which constructor to call. But our case is more generic: we don't know exactly how many element types, drivers and annotations there are, so we can't predict which constructor to call. Here's our first requirement: a class which implements the ElementsSupplier interface must provide a list of supported drivers and annotations:
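The two abstract members could look roughly like this (the signatures are my assumption based on the description, not the published interface):

<pre class="brush: java">
import java.lang.annotation.Annotation;
import java.util.List;
import java.util.stream.Stream;

public interface ElementsSupplier {

    // driver instances to try as the first constructor argument
    Stream<Object> supportedDrivers();

    // annotation types that mark fields to be initialized
    List<Class<? extends Annotation>> supportedAnnotations();
}
</pre>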
In the case of drivers, we want a Stream of their instances for further passing to matching constructors. In the case of annotations, we just need their types to detect whether a particular instance variable is annotated with one of the supported items. Now let's take a look at our BasePage class, which implements ElementsSupplier:
As you can see, we've overridden both abstract methods to provide the WebDriver and SikuliDriver instances, as well as the HTML and Image annotation types. Now our interface knows the search targets (annotations) and the first constructor arguments (driver instances).

We can also see that the BasePage default constructor explicitly calls the default initElements method, passing this as a parameter. You may wonder what this means in such a context: it's a reference to the top-level PageObject whose initialization we have triggered. So we ask our interface to initialize all the custom fields within that particular class and its super-classes.

Now let's take a look at the common algorithm for element initialization (a small sketch of the first steps follows the list):
  1. First we need to loop through each declared field of the current PageObject class and its super-classes, until we reach the base Object.class. We can do that with the Stream.iterate API, but with one important note: the pure Java implementation doesn't support any good exit criteria except a limit operation. We don't know in advance how many super-classes we'll iterate over, so the only valid condition for us is !currentClass.equals(Object.class). Fortunately, there's a great streams extension library, com.codepoetics.protonpack.StreamUtils, which lets us set an appropriate Predicate to break the infinite loop when the condition is met (the takeWhile API).
  2. Next we need to loop through all declared class fields and find out whether any of the supported annotation types is present.
  3. If anything is found, we retrieve the annotation by its type and call the specialized initElement method for further field initialization.
  4. initElement itself can be split into several logical parts. First of all, we need to retrieve all the annotation values. That's a bit tricky, as the getDeclaredMethods() API doesn't guarantee an ordered list of methods (in the order they were declared in a class), but order is very important when passing arguments to the appropriate constructor. That's why we're using a custom methods comparator (by name), which meets our ordering requirements. Anyway, you can always override the default methodsComparator() with your own custom logic.
  5. Those are the annotation arguments, but what about the drivers? Our constructors require a particular driver instance as the first parameter. Here's the other tricky moment: both drivers are of generic interface types, and there's no easy way to guess which exact type is assigned to a particular object. That's why we have to loop through all supported drivers, insert one at the beginning of the annotation arguments list and pass it deeper to the createInstance method.
  6. createInstance uses the common Java reflection API to initialize our custom elements with the provided arguments list. As I've mentioned above, there's no easy way to detect the assigned interface type, so we additionally check whether the WebDriver or SikuliDriver types are assignable from the provided arguments. If so, we return the more specific type to be able to find a matching constructor. In case of any exception, we return an empty Optional. It means that no matching constructor was found for the particular combination of driver / annotation arguments, and we should try another driver as the first parameter.
  7. The final step is to check whether any object instance was created. In the positive case we make the field accessible and put the newly initialized reference inside.
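As promised, here is a small sketch of steps 1-2 only - walking the class hierarchy with protonpack's takeWhile and picking out the annotated fields (the constructor matching from steps 4-7 is omitted):

<pre class="brush: java">
import com.codepoetics.protonpack.StreamUtils;

import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import java.util.List;
import java.util.stream.Stream;

public class FieldScanner {

    public static Stream<Field> annotatedFields(Object page, List<Class<? extends Annotation>> supported) {
        return StreamUtils.takeWhile(
                        Stream.<Class<?>>iterate(page.getClass(), Class::getSuperclass),
                        clazz -> !clazz.equals(Object.class))      // stop before the base Object.class
                .flatMap(clazz -> Stream.of(clazz.getDeclaredFields()))
                .filter(field -> supported.stream().anyMatch(field::isAnnotationPresent));
    }
}
</pre>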
That's it. Now we can make sure that all the fields are initialized. There's only one note left. As SikuliDriver is mocked, we should also mock the ScreenElements to check that our approach works. This can be done somewhere in @BeforeMethod.
Let's declare the same items, e.g. in HomePage:
And add the appropriate call to the test case:
Well, there's no valid logic in the uploadFile call. Its only purpose is to see a working WebDriver test with the appropriate SikuliX console log messages:


That's all. You can find sources as usual on GitHub.

Thursday, July 30, 2015

Java 8 impact on test automation framework design - Part 1

Well, it took me a bit longer than I had expected to resolve all the urgent tasks. But finally I'm back and ready to share some new material with you.

In this article I'd like to describe some fresh thoughts about web automation framework design. I've been playing with different design approaches for several years. The primary goals were increasing system tests' readability and reducing the time spent on their maintenance.

I believe that any good test should be written in terms of a DSL, so that anyone can understand its context and purpose. As we normally run tests via CI servers, it's important to reflect all the steps performed during execution in the test results report. You can achieve this goal via AOP and a custom annotation to collect everything and inject the appropriate info into the report. Alternatively, you can use an existing solution like Allure Test Report.

Well, that's all about test steps. But what about verifications? Normally, we use asserts to compare the actual and expected results. When we're talking about UI tests, there can be much more than just a single verification. It would be nice to see them all in the test results report as well, right? Welcome to our first technical blocker: we can't annotate asserts inside a test method's body. So the only way to work around this is to create an assertions wrapper or custom matchers.

A wrapper implementation doesn't fit well into a common inheritance model. Let's assume that we have some BaseTest class which is intended to control the test execution flow and some internal preparation stuff. As you may know, multiple inheritance is mostly impossible in Java (I'll explain the 'mostly' below). It means that our assertions wrapper would have to be turned into a utility class with static methods. Is that good or bad? There's no definite answer, so I'll leave it for your own analysis.

Matchers seem like a better solution. But how much time would we spend on their preparation, customization and support? It depends...

Anyway, I'd like to show you a third way, which is about that 'mostly impossible' inheritance. Java 8 introduced a new concept - default interface methods. To get a better understanding of what they are, I'd recommend reading the Java 8 in Action book. Basically, interfaces can now declare methods with bodies. You may wonder what we need this for. One potential purpose is to extend existing functionality without making an outstanding impact on the entire project(s). As you may know from previous Java versions, when a class implements an interface, it agrees to implement all the declared methods. Let's imagine that you're developing some popular library and one day you decide to extend an existing interface with a new method definition. When you publish an updated version of your library, you may wonder how many angry emails you'd receive. The reason is that users' code may fail to compile until they implement your new addition. Imagine if there were lots of entry points where this interface was used; the potential impact could be enormous. So how do the new Java interfaces help? Well, first of all, a default method doesn't have to be overridden. Now you can safely add extended APIs directly inside interfaces without any impact on the related classes. Sounds cool, doesn't it? But what about inheritance? Keeping in mind that a default method looks like a common one except for some minor syntax differences, plus the fact that a single class may implement any number of interfaces, we can see that this opens a direct way to multiple inheritance. Wow, that's awesome! Let's see how it may help us with our automation routine.

As I've mentioned above, it would be great to print all the verification stuff into the test results report as well, besides the common steps. We'll start with some preparation first. To avoid re-inventing the wheel, Allure will be used as the code base for step definition and reporting. It'll be a multi-module Maven project to achieve better separation of the domain part from the framework core. In your root pom.xml you should add a reporting section with the allure-maven-plugin. Once that's done, just add 2 modules to your root: core / domain. Your pom.xml should now look something like this:

<pre class="brush: xml"></pre>
Let's create a common abstraction layer in the core module: the BasePage and BaseTest classes. We'll leave them blank for a while and continue with the domain module.

Assuming that you're already familiar with the PageObject pattern, we'll need to create a template for a sample test scenario. Let's say we're going to check the Google account authorization flow. To achieve this goal we need at least 2 pages: Login and Home. Keeping in mind that all the steps should be printed directly into the report, we'll use the @Step annotation from the Allure framework:



As you can see, nothing special. Just a simple authorization flow with a username verification. Well, to resolve the missing dependencies we should update the domain module's pom.xml.


Note that Allure requires the AspectJ dependencies to intercept steps at runtime. As TestNG was chosen as the unit testing framework, we had to add the appropriate Allure adaptor, which implements a special listener for collecting the necessary test data.

Finally, we can create a simple test using the steps provided above.


A pretty straightforward script, isn't it? You may just wonder about the loadUrl and homePage methods (by the way, the latter was first mentioned in the LoginPage class). But let's keep the intrigue for a while.

So our main goal is to annotate assertEquals with @Step somehow. Besides that, another logical blocker occurs: the URL loading action is something that mostly happens only once, when the browser is opened. So logically the first navigation step doesn't relate to any page or to the application itself. In that case, where should we put this API? In the core module? But how would we return a LoginPage instance then, if the framework is logically and physically a completely independent unit which shouldn't be related to any domain at all? So the domain module then, right? Ok, but again, where should we put it? Our test class already extends BaseTest, which means we can't inherit anything else.

And a headshot - PageFactory. If you have ever worked with Selenium, you may know that there's a special factory class intended for PageObjects + WebElements initialization. Well, what if I don't use WebElements? What if I use By locators? Where's my By factory? Someone may say: you don't need a factory, just use the common class initialization technique. Ok, but where should I store the getters for my PageObjects then? Ah, now you're saying I should create my own factory? Behind the scenes, I'm always wondering: why should I call such a low-level API directly in tests? Why should I save intermediate page object state in variables to verify something, or just break the chain for some other actions? Maybe I'm a bit idealistic, but I've been looking for a good design approach for a long time to make tests as fancy as possible and to completely remove all the low-level stuff from the highest abstraction layer. And now... now I can say that I've found a technical approach to achieve this goal. As you may guess, it's all about default interface methods.

Let's start with a light scenario - verification. All we need is to create an interface with a simple default method that verifies 2 String values - the expected and actual result.
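Such an interface could look roughly like this (the method name and step text are mine; @Step and the TestNG assert are the same ones used above):

<pre class="brush: java">
import org.testng.Assert;
import ru.yandex.qatools.allure.annotations.Step;

public interface Verification {

    @Step("Verify that '{0}' equals to '{1}'")
    default void verifyTextEquals(String actual, String expected) {
        Assert.assertEquals(actual, expected);
    }
}
</pre>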


As you can see, there's no magic at all: a common interface, a common method signature, except for the default keyword. This basically means that a class which implements the above interface may call the verifyTextEquals method directly, as if this method were a part of it. Isn't that cool? The other big advantage is that we don't necessarily have to override it, but we still have the opportunity if really needed.

So now, if we link this interface with our class, we can modify the test the following way:


I hope you haven't forgotten about the main interface feature yet, which allows a class to implement as many interfaces as it wants. Well, it's a good time to implement a custom PageFactory then, isn't it?

Let's move back to the core module. We need to modify the BaseTest class to create the PageObjects' storage. You may know that the PageObject pattern assumes that we'll often return a new instance of a page. But in the case of delayed element search (By locators), do we really need to create redundant objects in memory? In such a context it's better to think about page caching. Let's say we could avoid creating new objects if the page already exists in the storage, but with one small note: the storage should be refreshed after each test execution to avoid keeping useless objects in memory for a long time. Let's see how it could look:
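A guess at what this storage could look like (field and method names are mine; it's made static here purely so that the interface's static helpers shown below can reach it):

<pre class="brush: java">
import org.testng.annotations.AfterMethod;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BaseTest {

    // key is the generic interface type described below, value is the cached page instance
    private static final Map<GenericPage, BasePage> PAGE_OBJECTS = new ConcurrentHashMap<>();

    public static Map<GenericPage, BasePage> getPageObjects() {
        return PAGE_OBJECTS;
    }

    @AfterMethod
    public void clearPageObjectsStorage() {
        PAGE_OBJECTS.clear();
    }
}
</pre>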


Normally, we want to hide the storage from the outside world, so only the getter was made public. Here we're using a TestNG-specific annotation to automatically clear the storage after each test execution. The storage itself is a common map, where the value is a page object instance and the key is of a generic interface type. Let's see how it looks:
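A speculative reconstruction of that interface (names are assumptions):

<pre class="brush: java">
import org.openqa.selenium.WebDriver;

public interface GenericPage {

    // the actual page construction is left to the domain layer (see the enum below)
    BasePage create(WebDriver driver);

    static BasePage getPageObject(GenericPage key) {
        return BaseTest.getPageObjects().get(key);
    }

    static void navigate(WebDriver driver, String url) {
        driver.get(url);    // deliberately returns no PageObject
    }
}
</pre>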


Here you can see 2 static methods: the first one provides a page object instance by key, and the second one is our magic navigation method. But it doesn't return any PageObject yet - intrigue! We also define a special create method, which is called while putting values into the storage, but the actual implementation is left to higher abstraction layers (if you remember, we discussed the framework's role as an independent unit a bit earlier).

The final piece of a puzzle lays in domain layer. Now we need to provide a more specific page objects creation logic. And as you may guess, it'll be implemented via another interface. Let's call it PageObjectsSupplier:


The first thing we may notice is the PageObject enum, which implements the just-created GenericPage. As you remember, we previously defined an abstract create method to pass the implementation details to the domain-specific area. So PageObject must implement this method now. As it's an enum type, each unique item provides its own implementation. Exactly what we need!

There are also 3 default methods. You had a chance to see loadUrl before, in the test implementation provided above. So we've just wrapped the original navigation method defined at the core level with the domain-specific logic of returning a new LoginPage instance. As this method is a default one, we can call it directly in a test.

The others are just common page object getters, added to avoid direct low-level getPageObject calls with type casting. So it's just some kind of syntactic sugar for more concise instance access. Note that we use the putIfAbsent method for populating the pages' storage. It means that only 1 instance of a particular page will be stored. Well, it may seem a bit excessive to define both the enum items and the related getters, but on the other hand it's technically and logically clearer than hundreds of lines of reflection or just a separate utility class. Plus we've found a better place to store the first navigation logic. Anyway, it's only an alternative approach and it's up to you what to choose.
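Putting it together, the domain-level supplier could look roughly like this (the getDriver() hook and the WebDriver-based page constructors are assumptions):

<pre class="brush: java">
import org.openqa.selenium.WebDriver;

public interface PageObjectsSupplier {

    enum PageObject implements GenericPage {
        LOGIN_PAGE {
            @Override
            public BasePage create(WebDriver driver) {
                return new LoginPage(driver);     // assumes a WebDriver-based constructor
            }
        },
        HOME_PAGE {
            @Override
            public BasePage create(WebDriver driver) {
                return new HomePage(driver);
            }
        }
    }

    // assumed hook: the implementing test class supplies the driver
    WebDriver getDriver();

    default LoginPage loadUrl(String url) {
        GenericPage.navigate(getDriver(), url);   // core-level navigation, returns nothing
        return loginPage();
    }

    default LoginPage loginPage() {
        BaseTest.getPageObjects().putIfAbsent(PageObject.LOGIN_PAGE, PageObject.LOGIN_PAGE.create(getDriver()));
        return (LoginPage) GenericPage.getPageObject(PageObject.LOGIN_PAGE);
    }

    default HomePage homePage() {
        BaseTest.getPageObjects().putIfAbsent(PageObject.HOME_PAGE, PageObject.HOME_PAGE.create(getDriver()));
        return (HomePage) GenericPage.getPageObject(PageObject.HOME_PAGE);
    }
}
</pre>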

Now we only need to connect the newly created interface with our test class to apply the multiple inheritance magic. Just a quick suggestion: to avoid excessive interface enumeration, we can join them together in one more specific interface, e.g. TestCase, using inheritance.


So our final test case variant would be the following:


If we run this test and then generate the Allure report via the mvn site command, we'll see all the steps, including verifications. Doesn't it look perfect?


Note that I'm using my own web server for viewing reports. You may want to read the official Allure docs to find a list of the available Maven commands.

In the second part we'll take a look at a more complicated and interesting example with custom PageElements. The source code will also be available later on GitHub.

Hope this article helped you get a better understanding of default methods and how they can improve your automation routine.