Automation (Tests and Processes) pitfalls

During the last year I was involved in a few test automation and process automation projects, and I want to share some points of interest you should be aware of (which, as always, will save you time).

Not clearing old artifacts

One of the first actions an automated test run should do is clear the old artifacts left from the previous run. Failing to do so can eventually cause you to send an automation report that includes results from a previous run.
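A minimal sketch of such a cleanup step in Python (the automation_output folder name and layout are just an example):

```python
import shutil
from pathlib import Path

# Illustrative artifacts folder; use whatever layout your automation already has.
ARTIFACTS_DIR = Path("automation_output")

def reset_artifacts_dir() -> Path:
    """Delete leftovers from the previous run and start from an empty folder."""
    if ARTIFACTS_DIR.exists():
        shutil.rmtree(ARTIFACTS_DIR)   # old reports, logs, screenshots, etc.
    ARTIFACTS_DIR.mkdir(parents=True)  # recreate a clean folder for this run
    return ARTIFACTS_DIR

if __name__ == "__main__":
    reset_artifacts_dir()
```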

Not reporting failures correctly

While the automation works very well on the happy path, there are parts of it that might ignore errors or, even worse, not know how to determine whether a process succeeded.

This is one of the most critical issues, because it can lead you to wrongly report success even though the process failed (this is true not only for automation but for every process you run).

A real-life scenario: one of our automated tools for the installation kit build had a section that was supposed to copy the newly created MSI to our installation kit folder. The code was wrapped in a try..catch that ignored the error, so when the copy eventually failed the build process did not catch it and our kit shipped with an OLD MSI (three simple lines of code caused a severe issue).
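The fix is the opposite of swallowing the exception: report it and let the step fail. A rough Python sketch of such a copy step (the function name and paths are illustrative, not the original tool's code):

```python
import shutil
import sys
from pathlib import Path

def copy_msi_to_kit(msi_path: str, kit_folder: str) -> None:
    """Copy the freshly built MSI into the kit folder; fail the build if anything goes wrong."""
    target = Path(kit_folder) / Path(msi_path).name
    try:
        shutil.copy2(msi_path, target)
    except OSError as err:
        # Do NOT swallow the error: report it and let the build runner see a failed step.
        print(f"FATAL: could not copy {msi_path} to {kit_folder}: {err}", file=sys.stderr)
        raise

if __name__ == "__main__":
    copy_msi_to_kit(sys.argv[1], sys.argv[2])
```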

Running with a personal user

The creation of an automation process usually starts with you running it under your own user (as a demo). Over time this creates a dependency on you and your password, and most probably means your password is stored as clear text in the automation script's configuration.
It is always advisable to have a dedicated (global) user for running the automation, with the relevant permissions on the target servers and shares.
Moreover, if you can get your IT to give this user a password that never expires, great; it will reduce the administration overhead (they won't agree).
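At the very least, keep the password out of the script and its configuration. A small sketch, assuming the dedicated user's credentials are injected into the environment by the service or CI configuration (the variable names are made up):

```python
import os

def get_automation_credentials():
    """Read the dedicated automation user's credentials from the environment
    instead of keeping a clear-text password in the script configuration."""
    user = os.environ.get("AUTOMATION_USER")          # e.g. a dedicated build/automation account
    password = os.environ.get("AUTOMATION_PASSWORD")  # injected by the service or CI configuration
    if not user or not password:
        raise RuntimeError("AUTOMATION_USER / AUTOMATION_PASSWORD are not configured")
    return user, password
```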

Automation manager process requires a logged-on console

Most automation managers (CruiseControl, Hudson, etc.) support both running in a console (showing the trace) and running as a service (loading automatically after boot).

Most of the projects I have seen start with the console mode and actually stay with it. This means they depend on the server being logged in with the console running, and as we all know, one day there is a power outage, the server comes back up with nobody logged in, and the automation process is not running.

I suggest always preferring the service method and reviewing the logs through the trace files, or even better through the dashboard (because that will be your end use case).

Scripts are not kept under source control

I can't explain why this happens, but most of the automation scripts I see remain outside source control for most of their lifetime (until a major issue occurs), maybe because most of them start as a demo?!

This is not a suggestion but a requirement: ALWAYS keep your scripts in source control (they are no different from any other piece of software you write).

Reports are not understood by end users

Most of the time spent on test automation goes into making the tests work, while very little time is invested in the report they produce. I see most managers simply ignoring the end user, thinking "If I can understand the report, it's good enough; anything more is a waste of time and a never-ending story." While I can understand the need to invest in actions that show ROI, you must understand that ROI is also demonstrated by people seeing that the automation works and understanding what is not working, so the report should be good enough to show to them.

Even at the technical level you must make sure you can understand the actions performed in the test case, so that you can troubleshoot and answer simple questions like "Why did it fail?". Make sure you output your actions to a log; this can be part of the executive summary or an additional report (focused on engineering).
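A minimal sketch of what "output your actions to a log" can look like (the logger name, file name, and step are placeholders):

```python
import logging

# One log file per run, kept next to the report, so engineers can answer "why did it fail?"
logging.basicConfig(
    filename="automation_run.log",   # illustrative file name
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("automation")

def install_product(kit_path):
    log.info("Starting installation from %s", kit_path)
    try:
        # ... the actual installation steps would go here ...
        log.info("Installation finished successfully")
    except Exception:
        log.exception("Installation failed")  # records the full stack trace in the log
        raise
```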

Report creation is done by string concatenation

I guess you are thinking to yourself "this is not important", but I really think the report output must be data driven and not created during execution by concatenating text (or HTML), because that makes you hard to do business with.
I suggest that each step you run expose an interface that outputs XML (with a well-known schema) including the actions performed, results, execution time, errors, links to files, etc., and then the report is simply an XSLT that converts it to HTML.
This way you can change the output very easily, and moreover you can delegate the task of report formatting to an external resource who will make sure it is done centrally and applies to all of your tests.
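A sketch of what the per-step XML output could look like in Python; the element names and schema here are invented for illustration, and the HTML itself would be produced later by an XSLT over the collected file:

```python
import xml.etree.ElementTree as ET

def step_result(name, status, duration_sec, error=None, artifacts=()):
    """Build one <step> element following an agreed schema."""
    step = ET.Element("step", name=name, status=status, duration=str(duration_sec))
    if error:
        ET.SubElement(step, "error").text = error
    for path in artifacts:
        ET.SubElement(step, "artifact", href=path)
    return step

run = ET.Element("run", id="nightly-build")   # illustrative run id
run.append(step_result("build_msi", "passed", 120))
run.append(step_result("copy_to_kit", "failed", 2, error="Access denied on target share"))
ET.ElementTree(run).write("run_results.xml", encoding="utf-8", xml_declaration=True)
```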

Full automation script doesn't support re-running from the middle

While I am sure your automation process is built from small atomic tests, with the main process simply orchestrating those small parts into a larger story, you will always reach a situation where the long test fails and you want to resume it from the middle, BUT you can't because you rely on the execution flow.

In essence, you need to make sure that key sections of the long story can be started from the middle, and that you don't rely too much on previous results.

You need to support the option of starting the automation from a specific section (by maintaining the state), as in the sketch below.
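A toy sketch of such an orchestrator, assuming each section is an independent function and progress is persisted to a small state file so a failed run can be resumed from a specific section (all names here are invented):

```python
import json
from pathlib import Path

STATE_FILE = Path("automation_state.json")   # records which sections have already completed

def prepare_environment(): print("preparing environment")
def install_product():     print("installing product")
def run_smoke_tests():     print("running smoke tests")

# The "long story" as an ordered list of small, independently runnable sections.
SECTIONS = [prepare_environment, install_product, run_smoke_tests]

def run(start_from=None):
    """Run the sections in order, skipping everything before `start_from` (if given)
    and everything already recorded as done in the state file."""
    done = set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()
    started = start_from is None
    for section in SECTIONS:
        if section.__name__ == start_from:
            started = True
        if not started or section.__name__ in done:
            continue
        section()
        done.add(section.__name__)
        STATE_FILE.write_text(json.dumps(sorted(done)))

if __name__ == "__main__":
    run()                                  # full run
    # run(start_from="install_product")    # resume from a specific section
```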

 

I hope this helps you.
