Jasmine Unit Tests – Testing Legacy Pages
by David Dvora

Intro

When approaching testing client-side JavaScript code, you first need to ask yourself – “What are we testing?”

We tried to answer that question too, and in doing so realized that our client-side architecture was mixing DOM manipulation with application logic. This is not surprising for developers who use jQuery in the old-fashioned way; jQuery’s syntax and ease of use seem to encourage exactly that kind of coupling.

So which part do we really want to test? That depends on what is most important for you to verify in your application. One thing we agreed on: we need to separate the client-side code into layers in order to make it testable. One layer will contain pure application data logic, and the other will be a wrapper around the code that actually manipulates the DOM (the former uses the latter).
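The post’s JavaScript examples sit behind the cut, but the layering idea itself is language-agnostic. Here is a minimal sketch (in Python, with invented names) of the two layers: pure application logic that can be unit tested in isolation, and a thin wrapper that is the only code allowed to touch the presentation layer (the DOM, in the jQuery case):

    # Hypothetical sketch of the two-layer split (names are invented).
    # Layer 1: pure application data logic - no DOM, trivially testable.
    class CartLogic:
        def __init__(self):
            self.items = []

        def add_item(self, name, price):
            self.items.append((name, price))

        def total(self):
            return sum(price for _, price in self.items)

    # Layer 2: thin wrapper around whatever renders the UI; the logic
    # layer calls it, and tests can replace it with a fake.
    class CartView:
        def render_total(self, total):
            print(f"Total: {total}")  # stands in for DOM manipulation

    # A unit test touches only the pure layer:
    def test_total():
        cart = CartLogic()
        cart.add_item("book", 10)
        cart.add_item("pen", 2)
        assert cart.total() == 12

With this split, the bulk of the test suite exercises the logic layer without a browser, and only a handful of tests need to drive the DOM wrapper.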


Using Applitools to radically reduce UI Automation code
by Gilad Lazarovich

We, the CloudShare Quality Assurance team, migrated from web-based UI automation implemented in C# with WatiN (http://watin.org/) to a solution using C#, Selenium (http://docs.seleniumhq.org/) and Applitools Eyes (http://applitools.com/). With Applitools Eyes, we validate all the UI elements of the application across multiple browsers (Google Chrome, Firefox and Internet Explorer) and across various resolutions.

Applitools Eyes has saved us quite a bit of time and effort on coding (including maintenance), and has allowed us to achieve much greater automated UI coverage. The tool has helped us find quite a few layout issues and flow issues, as well as functional issues. As a result, we’ve reduced the overall number of missed bugs and improved the product quality.

We use CloudShare Production environments to host and run all our test environments (for testing both Production and our local Development labs). These include machines running most modern Windows operating systems, with all supported browser combinations (Chrome, Firefox and Internet Explorer 9+). Our automation is executed with each deployment (which happens several times a day) and with each Production deployment (every Sunday), and we run full regression cycles nightly and throughout the weekend.

Applitools Eyes allows labeling regions to ignore (useful, for example, where animated GIFs are running or where we display dynamic data) and defining floating regions (which can move several pixels depending on the page); these regions automatically propagate to all the screens that include similar elements. Additionally, when we make a change in the application that affects multiple pages in the same way (e.g. changing the header of our site), we only need to review and accept the change for a single window, and Applitools Eyes scans all other windows and automatically approves similar changes.
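Our implementation is in C#, but to give a sense of the shape of the integration, here is a minimal sketch using the Applitools Eyes Python SDK (eyes-selenium); the app name, test name and URL are illustrative placeholders. Because ignore and floating regions are labeled in the Applitools dashboard as described above, the test code itself stays small:

    # Minimal sketch (Python SDK shown for illustration; our suite is C#).
    from selenium import webdriver
    from applitools.selenium import Eyes, Target

    driver = webdriver.Chrome()
    eyes = Eyes()
    eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # placeholder

    try:
        # Fix the viewport so runs at different resolutions stay comparable
        eyes.open(driver, "CloudShare", "Console access",
                  {"width": 1024, "height": 768})
        driver.get("https://example.com/console")  # placeholder URL
        # One visual checkpoint replaces many element-level assertions;
        # ignored and floating regions are configured in the dashboard
        eyes.check("Console page", Target.window())
        eyes.close()  # raises if unexpected differences were found
    finally:
        eyes.abort()  # ends the test cleanly if close() was never reached
        driver.quit()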

Following the addition of Applitools Eyes, the amount of automation code has been greatly reduced (almost all UI checks are now handled within the screenshot comparison), while browser coverage, resolution coverage and overall test reliability have increased and the time spent on code maintenance has dropped significantly. With our previous solution, we only tested using Internet Explorer and missed a good number of bugs that only showed up in one of the other browsers. In addition, we now test with resolutions from 1024×768 up to 1920×1080, which lets us ensure the UI looks right.

Applitools provides regular functionality updates based on our feedback and was very simple to integrate into our Selenium-based project. The latest releases provide OCR support (you can read text from regions labeled within their application, using your application’s screenshot). In addition, there will be support for several possible scenarios within a screen (for example, when warnings appear on screen during a deployment). As with Selenium, Applitools updates are handled via NuGet packages through Visual Studio.

The following screenshots, taken during automation execution, demonstrate how UI bugs can be found using Applitools Eyes and how dynamic data can be configured to be ignored when comparing screenshots.

Here’s a screenshot within Applitools that shows our Console access test; you can see the dynamic data regions that are ignored within the screen comparison:


[Screenshot: marking in Applitools which UI regions should be ignored]

Here’s a screenshot within Applitools that shows our RDP access test; you can see the differences highlighted when the dynamic region containing the Windows clock is not ignored:


[Screenshot: differences highlighted when the clock region is not excluded]

Here’s a screenshot within Applitools that shows a text overlap/layout issue found using the Applitools screenshot comparison (at a resolution of 1024×768):

[Screenshot: text overlap/layout issue detected at a lower resolution]

This is the story of how we reduced our UI automation code by over 50% while greatly increasing overall test coverage.


Using Vagrant and Fabric for integration tests
by Ido Barkan

At CloudShare, our service is composed of many components. When we change the code of a given component, we always test it. We try to walk the thin line of balancing unit tests and integration tests (or system tests) so that we achieve reasonable code coverage, reasonable test run time and, most importantly, good confidence in our code.

Not long ago, we completely rewrote a component called the gateway. The gateway runs on a Linux machine and handles many parts of our internal routing, firewalling, NAT, network load balancing, traffic logging and more. It’s basically a router/firewall that is configured dynamically to impose different network-related rules according to the configuration it knows about. This is done by reconfiguring its (virtual) hardware and kernel modules. It is a Python application packaged and deployed using good old Debian packaging.
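As a concrete (and hypothetical) illustration of the approach in the title, here is what Fabric tasks driving a gateway VM brought up with Vagrant might look like. This assumes Fabric 1.x, connection settings as reported by vagrant ssh-config, and illustrative package and service names rather than our actual configuration:

    # fabfile.py – hypothetical Fabric 1.x tasks for an integration run
    # against a local Vagrant VM. Connection values are what a default
    # VirtualBox-backed `vagrant ssh-config` typically reports.
    from fabric.api import env, put, sudo, task

    env.hosts = ["127.0.0.1:2222"]
    env.user = "vagrant"
    env.key_filename = ".vagrant/machines/default/virtualbox/private_key"

    @task
    def deploy_gateway(package="gateway_1.0_all.deb"):
        """Install a freshly built Debian package on the VM."""
        put(package, "/tmp/gateway.deb")
        sudo("dpkg -i /tmp/gateway.deb || apt-get -f install -y")
        sudo("service gateway restart")

    @task
    def smoke_test():
        """Verify the gateway actually configured NAT on the VM."""
        sudo("iptables -t nat -L POSTROUTING -n | grep -q MASQUERADE")

A run would then look like "vagrant up && fab deploy_gateway smoke_test", giving a clean, disposable Linux machine for every integration cycle.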


The Road to Continuous Integration and Deployment
by Asaf Kleinbort

A brief historical background:

Six years ago, when I joined CloudShare, I was really impressed by the fact that we released a new version every two weeks. Back then, many companies were still practicing the waterfall methodology, and “continuous delivery” was still an unknown term to most of us.

Over time, we evolved. We migrated from SVN to Git and began practicing continuous integration using Jenkins. We even made two attempts at ‘continuous delivery’. Both were purely technical and addressed only the engineering aspect.

The first focused on our development pipeline and had a very positive impact: our CI process matured, we improved our infrastructure to allow much less downtime during deployment, and we upgraded our deployment scripts. However, the release cycle didn’t change.

The second effort focused on configuration management. We initiated an effort (still ongoing) to keep all of our configuration in Git, using Chef. As a result, the deployment and configuration management of our very complex service have become significantly more mature. However, again, neither the release cycle nor our dev process in general changed.

The cons of our old process (in short):

Looking at our development process as a whole, I identified a few problems and aspects that were outdated and needed improvement.

1. We had a two-week code freeze for each release. This resulted in developers working on a different version than the one the testers were testing. The actual time from finishing coding to release was generally 3-4 weeks, and never less than two.

2. For historical reasons, QA was not an integral part of the teams – a very ‘waterfall-like’ structure. Its downsides are described in many places. In my view, as a former developer and team leader at CloudShare, the most painful disadvantage of this structure is that a team leader is not independent. Even when implementing a simple work item completely under her ‘jurisdiction’, the team leader depends on the QA team to finish her tasks. And QA teams usually have their own priorities and plans.

3. We had everything coupled together: the delivery schedule was coupled with user story definition, user story implementation and the prioritization process.

All of the above put our quality at risk. Team leaders and developers had to handle complex coordination tasks to ensure the quality of what they released. We had too many parallel items ‘in progress’. For a developer, team leader or QA engineer, not dropping the ball had become a very non-trivial task.

Because of the above (and more), I had been thinking about changing our process for more than a year. But I always had excuses to postpone the change: “I have a new QA manager”; “We have a new product manager, this is not the time for a change”; “I am missing several QA engineers to support the new structure”; “We just need to finish our new build scripts / our new provisioning methods / our new something very technical”; and so on. You get the picture.

Change!

About a month ago, I was (again) challenged by our new product leader: “This is a nice process, but it is really not optimal or up to date. Why can’t we release every week?” I finally decided to bite the bullet and lead the change in our development process.

The change is happening right now, so it is much too early for a retrospective or conclusions. I’ll just mention the highlights of what we are doing and will elaborate on our choices and conclusions in different post(s).

What we are doing:

1. We re-organized the teams. QA engineers are now an integral part of each team. The QA team remains very small and is focused mostly on automation tasks. The main advantage here (among many), from my perspective, is that every dev team can now deliver most of its items independently. Our team leaders will continue to act as product owners of their teams, but will now have an interdisciplinary and more capable team.

2. We are moving to a delivery cycle of one week. The deployment itself (which is pretty straightforward) will be done by all the Dev group members in turns.

3. We are implementing the Kanban method. This allows us to decouple the delivery cadence from user story implementation. We still try to fit every task into no more than two weeks in order to keep our ‘delivery batches’ small, but we do not force a strict time box for each work item.

Kanban will also allow us to evolve easily toward continuous delivery, for when our deployment pipelines are automatic and mature enough to deploy every day – or even several times a day. In other words, we are not waiting for the technology to lead us; we are building a process that fits current and future deployment capabilities. We decoupled the prioritization process as well: to start, we will hold a ‘backlog grooming’ meeting once a month. Using Kanban, we will be able to enforce strict limits on the amount of our ‘work in progress’ and to identify our bottlenecks. We are still learning what those limits should be.

Summary:

That’s it. This is our plan. We started last week. I am sure we will hit a lot of bumps in the road, but I am very excited and have a very good feeling about this change. We’ll update on specifics (like our Kanban board columns and limits), lessons learned, and other outcomes in future posts.


CloudShare’s new TechBlog
by Asaf Kleinbort

Hi,

I am happy to introduce the new CloudShare Tech Blog.

It will be focused purely on technical aspects, mainly those that the CloudShare Dev Team encounters in our daily work.

A significant portion of CloudShare’s customers are developers, so some of our customers and prospects might find the ideas and thoughts we share here interesting. However, this blog is not about marketing to our customers – it is about sharing our ideas, challenges and thoughts with our peers in the software development community.

The idea of starting this blog came to me from the Netflix tech blog. I have never really used Netflix; however, I read and enjoy their tech blog, especially from the period when they described the challenges of migrating their infrastructure to AWS.

The service we develop here is complex and interesting from a software engineering point of view. We have a modern UI, a very complex business logic layer, and our own cloud, which is delivered to our customers as a service. We have a lot to share around these topics.

Another interesting aspect (especially for me) is the development process.
There are many questions here: how to code, how to test, what to test and how much, how often we should release, what our org structure should be in order to provide the best quality and efficiency, and many more.

Since CloudShare’s early days, we have been striving for efficiency in our process and have tried to keep up to date with what we thought were the best methodologies.
However, the world is moving as well, and fast. We keep finding that what is a state-of-the-art approach today can become commodity very quickly, and that there is always room for improvement. We will use this stage to share our thoughts on development methodologies.

I hope we can use this blog to hear opinions from the developer community as well.
Thank you for reading and participating!
