Securely storing passwords and login details with Set Encrypted Text in Katalon Studio

One of the new features in Katalon Studio 5.4 is the ability to store encrypted passwords right inside the test case using the Set Encrypted Text command. Previously, the username and password were stored in clear text, so anyone who opened the file could see the login credentials. The new command obscures that information while still allowing easy access.

The new command is available while editing the script in Manual mode. Change the normal Set Text command to Set Encrypted Text, which brings up a small dialog window that encrypts the text as you type.

[Image: set-encrypted-text]

With the Item column now changed, double-click the input field to bring up the encrypted text dialog box. On this new input screen, click the Value input field; as you type your text, you will see the encrypted version. The encrypted text is what will be saved in the Test Case.

[Image: input-encrypted-text]

For the login test I have, I check to see which environment the test is running against and then pass the credentials for that environment. By simply changing the Set Text command to Set Encrypted Text, I obscured the username and password in mere moments.

The test now looks like this:

if (GlobalVariable.baseurl == 'https://myqasite.com') {
    // QA credentials
    log.logWarning('Logging in to environment - ' + GlobalVariable.baseurl)
    WebUI.setEncryptedText(findTestObject('Page_Sign In/input_UserName'), 'XJ419vj6YqJLWAYDfHAYjLzfymSmyhCi')
    WebUI.setEncryptedText(findTestObject('Page_Sign In/input_Password'), 'e71pytG/LEFOTYb/96yNYh7DOujSLkGz')
} else {
    // Staging and Prod credentials
    log.logWarning('Logging in to environment - ' + GlobalVariable.baseurl)
    WebUI.setEncryptedText(findTestObject('Page_Sign In/input_UserName'), 'cbbsN3ywIVYTVYg1DVaCdC/EYK/MbMZwGmSPgZHWhNTAx6OdO9Wh9w===')
    WebUI.setEncryptedText(findTestObject('Page_Sign In/input_Password'), 'MihRDM3OZ2lC85FtfophvXwNOqe+xiW4fjG2a5CVrjqCtbHeBRcgvw==')
}
WebUI.click(findTestObject('Page_Sign In/span_Sign in'))

This is a pretty nice feature, and even if you’re just working in a QA or Staging environment, it’s nice to know you can obscure sensitive text from others who might be working on the same project, or from someone who might glance at the screen.

Quasi Performance/Load Testing with Katalon Studio

For real load testing scenarios there are dedicated tools. But, if you need a quick way to generate traffic from multiple users hitting the same page or pages, you can launch Katalon multiple times from the command line. Katalon will jump through pages far faster than you can do it manually, which can give an approximation of user load.

The first thing to do is to build a Test Suite that runs all the Test Cases you’re interested in. This could be test cases that fill in forms, click a series of links, or load one page after another.

Once that’s done, use the Build CMD button from the menu bar to generate the command line that launches Katalon and your Test Suite.

[Image: katalon-build-cmd]

Open a Terminal prompt and switch to the Applications directory. Paste in the command line to load Katalon and start the test. After launching the first instance, open a new Terminal tab, switch to the Applications directory and paste the command line again. Repeat several times to get multiple instances of Katalon and your browser running.

cd /Applications/

./Katalon\ Studio.app/Contents/MacOS/katalon --args -noSplash -runMode=console -projectPath="/Users/XXX/Documents/GitHub/katalon/examples/Test1.prj" -retry=0 -testSuitePath="Test Suites/New Test Suite" -browserType="Chrome"

In each tab, Katalon will load and start executing the test. You should quickly see multiple browsers all running the defined Test Suite and pages loading.
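If you would rather not paste the command into each tab by hand, the repeat-launch can be scripted as a simple background-job loop. This is a sketch; KATALON_CMD below is a placeholder assumption, so substitute the full command line that Build CMD generated for your project.

```shell
# Placeholder command -- replace with your real Build CMD output.
KATALON_CMD='echo simulated Katalon run'
INSTANCES=3

for i in $(seq 1 "$INSTANCES"); do
  # Each launch is a full Katalon + browser instance running in the background.
  sh -c "$KATALON_CMD" > "run_$i.log" 2>&1 &
done
wait  # block until every instance has exited
```

Each iteration backgrounds one launch with `&`, so all instances run in parallel, and `wait` holds the script open until the last one finishes.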

Keep in mind, each execution is a full load of Katalon and the browser, so it will consume a fair bit of CPU power and RAM. My Mac Pro has 12 cores and 128GB of RAM, so I’ve had 10 concurrent sessions running without incident. I could have easily opened more, but the traffic seemed sufficient. And to be honest, it was pretty amusing to watch.

This method clearly doesn’t compete with tools like Apache JMeter, and I don’t know if Katalon condones such things, but if you need something quick and have a good set of tests that can load a lot of pages, this works quite well.

Creating a Data Driven Test Suite in Katalon Studio

In order to extend the reach of my regression tests, I wanted to run a Test Suite multiple times for different users, one after the other. My goal is to log in as User A, run the regression suite, then log in as User B and run the suite again.

For example, I want to log in as John and check multiple pages’ worth of sales figures to perform calculations and comparisons. I then want to log in as Jane and do the same thing.

I currently have a Login script that I edit between each run with the name of the user I want to be. It works, but I want it to run for a list of users rather than me having to make an edit between each run.

Katalon supports Data Driven tests, where you can supply information to be read from a file. However, that applies to a single Test Case, not a Test Suite. It means the Test Case will iterate through all the data in the file before moving on to the next Test Case in the list.

Based on the scenario above, it would log in as User A, then immediately log in as User B, then User C, and continue to do this for each name in the file. Only at the end of the list would the next test in the suite be executed, so it would read the sales data only for the last person in the list.

What I need is for the data to be applied at the Suite level, which isn’t currently supported. However, there is a way to make this work that only takes a couple of steps. In essence, the Login script calls numerous Test Cases after reading in the name of the user. It’s not quite as drag-and-drop as building a Test Suite, but it works.

To convert my Test Suite to work in my Test Case I took the following steps. Since the Test Suite is an XML file it can be easily read. Using a Terminal session and grep, I grabbed all the <testCaseId> lines and wrote them to a file.

grep "Test Cases" QA\ Regression\ Test.ts > testcases.txt

I now have a list of all my tests in the form:

<testCaseId>Test Cases/Sales Dashboard Tests/Main Dashboard Tests/Check Import Date</testCaseId>

The next step is to open the testcases.txt file in an editor or spreadsheet such as LibreOffice. Using the Search and Replace function, strip out the <testCaseId> and </testCaseId> tags. This is now the list of Test Cases we want to run, and it will be added to the Login script.

From here, it’s easy to combine the callTestCase function and append the name of all the test cases to it.

In a spreadsheet, paste all the Test Cases into Column A.

In Column B enter:

WebUI.callTestCase(findTestCase('

In Column C, enter:

'), [:], FailureHandling.CONTINUE_ON_FAILURE)

In Column D, enter

=B1&A1&C1

This takes the callTestCase command, appends the name of the actual test case, then appends the closing syntax. This generates:

WebUI.callTestCase(findTestCase('Test Cases/Sales Dashboard Tests/Main Dashboard Tests/Check Import Date'), [:], FailureHandling.CONTINUE_ON_FAILURE)

Fill this formula down for each Test Case.
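As an alternative to the spreadsheet, the tag-stripping and wrapping can be done in one pass with sed: a capture group grabs the path between the <testCaseId> tags and the replacement wraps it in the callTestCase syntax. This is a sketch; sample.ts stands in here for the real Test Suite file.

```shell
# Stand-in for the real .ts file -- one testCaseId line.
printf '    <testCaseId>Test Cases/Sales Dashboard Tests/Main Dashboard Tests/Check Import Date</testCaseId>\n' > sample.ts

# Capture the path between the tags and wrap it in the callTestCase syntax.
grep '<testCaseId>' sample.ts \
  | sed -E "s#[[:space:]]*<testCaseId>(.*)</testCaseId>#WebUI.callTestCase(findTestCase('\1'), [:], FailureHandling.CONTINUE_ON_FAILURE)#" \
  > calltestcases.txt

cat calltestcases.txt
```

The resulting calltestcases.txt holds the same generated lines as Column D of the spreadsheet, ready to paste into the Test Case.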

With all the commands complete, copy the entire list into the Test Case you want to iterate through, in my case, Login.

The final test case will look similar to:

WebUI.navigateToUrl(GlobalVariable.baseurl + '/login')
WebUI.setText(findTestObject('Page_/input_login_Field'), impersonateUser)
WebUI.delay(1)
WebUI.click(findTestObject('Page_/btn_Submit-Login-Credentials'))
WebUI.delay(1)
WebUI.callTestCase(findTestCase('Test Cases/Sales Dashboard Tests/Main Dashboard Tests/Check Import Date'), [:], FailureHandling.CONTINUE_ON_FAILURE)
WebUI.callTestCase(findTestCase('Test Cases/Sales Dashboard Tests/Main Dashboard Tests/Confirm Dashboard Link'), [:], FailureHandling.CONTINUE_ON_FAILURE)

I have 50 callTestCase commands, the full list of my Test Suite, in the Login Test Case. This means my new Test Suite only has 2 Test Cases listed: the first loads the web page details, and the second is the Login test. Login now replaces the previous Test Suite.

When I run this new test, it gets to the Login script, loads a user, then executes 50 regression tests. When that cycle is complete, it loads the next user and repeats the process.

Also note the FailureHandling.CONTINUE_ON_FAILURE that has been added to each Test Case. This means the regression tests will keep running if one of them fails. Without it, as soon as an error condition is reached, all the tests are halted. Error conditions include a missing element, the markFailed command, and quite a few others.

While not quite as robust as a regular Test Suite, this method works and will allow a series of tests to be run multiple times. It took less than 10 minutes to convert my Test Suite to this method and get it working. For now, it’s a pretty reasonable workaround.

Migrating Platforms – A Katalon Success Story

Several months ago, we began mapping out a migration strategy to move from one service platform to another. It would include new databases, new security authentication, a complete data load and new import and export functionality. Basically, new everything.

Since it’s a new platform, it needed a full regression test. It could be done by hand, but that would be slow and issues could easily fall through the cracks. The goal was to introduce automation, so back in December we started with Katalon as our tool of choice to handle the work.

And so we set off to automate as much as possible. It started as a brute force endeavor. Just to get started, tests were created to read data so we could compare Before and After results to confirm we didn’t lose anything. With those complete, it became possible to fill in forms. After a bit more experience and knowledge, we could count items on the page and validate their content. It was then possible to sum columns and compare internal calculations against site values. Each test built on the last.

The tests continued to expand in depth and complexity until there was a solid set of tests that validated hundreds of data points. A full run could be completed in less than 10 minutes. Doing that same list of tests manually takes an hour to complete. Plus, the tests could be run in the background giving time back for other tasks.

As we practiced the migration, the scripts were quick to uncover issues. They even revealed a few bugs due to how frequently they were running. After a week of repeated test runs, the environment was stable and we were confident enough to move the process to Staging.

To confirm its readiness, the same set of tests was run repeatedly and the small hiccups were ironed out before turning it over to the customer. What the customer saw was a nearly flawless upgrade experience.

Over the weekend, we did the real upgrade, with the final validation performed by Katalon and the automation scripts. In a fraction of the time, we were able to check dozens of links, read hundreds of sales figures, fill in multiple user forms, confirm each page loaded as expected and compare dozens of known data elements. We also had written confirmation of what we tested, so everyone could agree on the sign-off.

I would say that our implementation of Katalon has been a big success. There is still a lot more to learn and more robust tests to write. But, as of now, we have a solid test suite that definitely validates whether the site is working or not.

Reviewing the Execution Logs of Katalon Studio

While the output logs of Katalon Studio are extremely helpful, it’s not possible (as far as I can tell) to copy that output directly. That matters because I want to run a baseline test to capture multiple data points, then run the tests again after a code deployment and verify the results are the same. The logs look great in Katalon, but the raw logs contain a lot more information.

On my Mac, I can use Terminal and the Unix grep command to start filtering. The cleanest file to work with is JUnit_Report.xml, located in the Reports directory. Inside, there are lines marked as:

[MESSAGE][PASSED]
[MESSAGE][WARNING]
[MESSAGE][FAILED]
[MESSAGE][ERROR]

The real file will contain lines that look like this:

2018-03-25 09:42:40 - [TEST_STEP][PASSED] - Statement - today = new java.util.Date(): null
2018-03-25 09:42:40 - [TEST_STEP][PASSED] - Statement - yesterday = today.previous(): null
2018-03-25 09:42:40 - [TEST_STEP][PASSED] - Statement - todayDate = today.format("MM/dd/yyyy"): null
2018-03-25 10:02:27 - [MESSAGE][PASSED] - Delayed 2 second(s)
2018-03-25 10:02:27 - [MESSAGE][WARNING] - Filter Results: 1679
2018-03-25 10:02:27 - [MESSAGE][PASSED] - Text of object 'Object Repository/Page_/Dashboard/Footer-Total Number of Users Returned' is: '1-25 of 1679 users'
2018-03-25 10:02:27 - [MESSAGE][WARNING] - Pagination Results: 1679
2018-03-25 10:02:27 - [MESSAGE][PASSED] - SUCCESS: The Filter Results Matches the Pagination Results

The drawback is the [TEST_STEP][PASSED] lines. Those need to be filtered out.

At the Terminal prompt:

grep "MESSAGE" JUnit_Report.xml > firstpass.txt

This creates a text file that contains the [MESSAGE] lines.

This is good, but there will be hundreds of [MESSAGE][PASSED] lines, which really aren’t that important. That’s not a problem as those lines can be filtered out in LibreOffice using the AutoFilter.

Load the file, select AutoFilter, then the option for Standard Filter. Set the filter to display the lines that do not contain the word PASSED.

[Image: libreoffice-standard-filter]

Now we have our Warning, Error and Failed messages together. When the second test is completed and the results filtered in the same manner, they can be pasted into the next column to be compared line by line. Or, the EXACT function can be run to compare the two strings.

To illustrate, here is the Log Viewer from Katalon:

[Image: katalon-log-viewer]

And here are the same results in LibreOffice:

[Image: libreoffice-log-viewer]

If needed, the same filtering method can be used to see the Failed and Error messages. But this gives the ability to run a test, capture data, run the test again and compare the results.
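The whole filtering pass can also stay in the terminal, without LibreOffice: keep the [MESSAGE] lines, drop the PASSED ones with grep -v, and diff two runs directly. This is a sketch; sample_report.xml stands in here for the real JUnit_Report.xml.

```shell
# Stand-in for the real report, with a few representative lines.
cat > sample_report.xml <<'EOF'
2018-03-25 09:42:40 - [TEST_STEP][PASSED] - Statement - today = new java.util.Date(): null
2018-03-25 10:02:27 - [MESSAGE][PASSED] - Delayed 2 second(s)
2018-03-25 10:02:27 - [MESSAGE][WARNING] - Filter Results: 1679
2018-03-25 10:02:27 - [MESSAGE][FAILED] - Counts do not match
EOF

# Keep [MESSAGE] lines, then drop the PASSED ones.
grep 'MESSAGE' sample_report.xml | grep -v 'PASSED' > filtered.txt
cat filtered.txt

# Comparing a baseline run against a new run would then be:
#   diff baseline_filtered.txt filtered.txt
```

Only the Warning, Error and Failed messages survive the two grep passes, which is the same result the Standard Filter produced.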
