Tuesday, January 7, 2014

List of GUI Testing Tools


GUI testing tools automate the testing of software with graphical user interfaces.
http://en.wikipedia.org/wiki/List_of_GUI_testing_tools#!

Non-Technical Tester

A short while ago, I read this article, which provides some very useful insights on how to stop being a non-technical tester. Please read on...

Source - http://qablog.practitest.com/2011/12/stop-being-a-non-technical-tester/

A short while ago I posted an open question on twitter:
“Do testers need to be as technical as programmers
to be successful at their jobs?”
I got plenty of answers, so I will only post some of them, representing the main threads of opinion:
@TestAndAnalysis - Testing is a technical discipline that is different to programming and testers add a lot of value to projects
@huibschoots – No! It depends on the context they work in. But in general they need to have some basic technical skills (or the will to learn)
@datoon83 – I’d say that all people involved in the delivery need to talk 1 language – that of the domain and customers!
@klyr – Technical and non-technical testers have a different approach to testing, and will find different bugs.
@sgershon – Certainly do. Not sure about “more/less technical” as programmers, possibly they have to be “differently technical” than them.
@halperinko – @sgershon @joelmonte I normally look at it as in depth knowledge for devs, vs. System-wise knowledge for testers. (Some don’t follow rules)
There were also a couple of tweeps who replied with blog posts of their own sharing their opinion on the subject.
@diamontip – my answer – do tester need to be as technical as programmers to be successful at their jobs? bit.ly/v5vcXand
@adampknight – I personally dislike calling testers “technical”, wrote about it here: bit.ly/tVHDOB
To all the tweeps above and those I missed, thanks for the great feedback!
But as you surely guessed I have my own opinion on the subject and I want to share it with you, so here it goes…

My definition of Technical

Especially after reading Adam’s post, I feel the need to explain what I mean by technical, or better yet, how I differentiate a Technical Tester from a Non-Technical Tester. (If you read my previous posts on “Why are some testers not really Professional Testers” then you should already have an idea…)
A Technical Tester is not afraid of doing most of the following on a regular basis as part of his job (in no particular order):
- Understand the architecture of the product he is testing,
including the pros & cons of the specific design, as well as the risks linked to each of the components and interfaces in the product.
He then uses this information to plan his testing strategy, to execute his tests and find the hidden issues, and also to provide visibility to his team regarding the risks involved in developing a specific feature or making a given change to the system.
- Review the code he needs to test.
He can do this on a number of levels, starting from going only over the names of the files that were changed, and all the way to reviewing the code itself. This information will provide valuable inputs to help decide what needs to be tested and how, as well as to find things about the changes that might have been missed by the developer or the documentation.
BTW, by code I mean SQL queries, scripts, configuration files, etc.
- Work with scripts & tools to support his work.
A technical tester should be able to create (or at least “play” with) scripts that help him run repetitive tests such as sanity or smoke checks, and automate tasks such as configuration, installation, setup, etc. (a minimal sketch of such a script appears right after this list).
He should also be able to work with free automation tools such as Selenium or WATIR (or any of the paid ones like QTP, SeeTest, TestComplete, etc.) to create and run test scripts that will increase the stability of the product in development and, over time, save time…
- Be up to date with the technical aspects of his infrastructure
(e.g. browsers, databases, languages, etc)
He should read the latest updates on all aspects of his infrastructure that may have an effect on his work, for example new updates to his O/S matrix, known issues with the browsers supported by his product, updates to external products his product integrates with, etc.
With the help of Google Alerts and by subscribing to a couple of newsletters, anyone can do this by reading 5 to 10 emails 2 or 3 times a week. The value gained from becoming an independent source of knowledge greatly exceeds the time invested in the effort.
- Troubleshoot issues from logs or other system feeds.
He is aware of all the logs and feeds available in his system, and uses them to investigate more about any issue or strange behavior.
This information is helpful during testing to provide more information than simply writing “there is a bug with functionality X”. And it will be critical if he is called to work on a customer bug, where he needs to understand complex issues quickly and without access to all the information.
In addition to the above, a technical tester should also be able to:
- Provide feedback on, and run, the unit tests created by his programmer peers.
- Run SQL queries directly on the DB to help verify his testing results (see the second sketch after this list).
- Install and configure the system he is testing.
etc.
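To make the “work with scripts & tools” point more concrete, here is a minimal smoke-test sketch using the Selenium WebDriver Python bindings. The URL, element IDs and expected page title are placeholders I made up for illustration, so adapt them to the product under test.

    # Minimal smoke-test sketch with Selenium WebDriver (Python bindings).
    # The URL, element IDs and expected title are illustrative placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/login")                     # hypothetical login page
        driver.find_element(By.ID, "username").send_keys("tester")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()

        # One cheap post-login check is usually enough for a smoke test.
        assert "Dashboard" in driver.title, "Login smoke test failed"
        print("Smoke test passed")
    finally:
        driver.quit()

A script like this can be hooked into a nightly job so the repetitive sanity checks run without anyone clicking through the UI.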
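And for the “Run SQL queries” point, a short sketch of verifying a test result directly against the database. SQLite is used here only to keep the example self-contained; the table and column names are invented.

    # Sketch of checking a test result directly in the database.
    # The database file, table and column names are invented for the example.
    import sqlite3

    conn = sqlite3.connect("app.db")   # hypothetical application database
    cursor = conn.cursor()

    # After creating an order through the UI or API, confirm it was persisted.
    cursor.execute("SELECT status FROM orders WHERE order_id = ?", ("ORD-1001",))
    row = cursor.fetchone()

    assert row is not None, "Order was not written to the database"
    assert row[0] == "CONFIRMED", f"Unexpected order status: {row[0]}"
    conn.close()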

Sounds like Superman or MacGyver?

It may sound like that, but it actually isn’t!
As testers we work on projects that revolve around Software, Hardware, and/or Embedded products. The only way to do a good job in testing them is to have a deep understanding of both angles: technical and functional.
This doesn’t mean that you need to replace your developers or match their technical depth, or surpass your Product Marketing team’s knowledge of your users.
You need to achieve a balance, where you have “enough” knowledge and understanding of both these areas to do your job as a tester. Or, rephrasing the tweet from @halperinko: as testers we should aim for system-wide knowledge, as opposed to the in-depth knowledge required from the developers or the product marketing guys.

Is it black and white?

This time quoting “Cokolwiek”, who commented on one of my latest posts: “Not everything is black and white”. Meaning there is no single standard that defines how technical a tester should be on every project and product.
Like in many other situations, the best answer to how technical do you need to be is: “It Depends…”
You should be at least technical enough to do your job effectively and to talk the same language with the rest of your programming and testing peers.
What do I mean by that?
If you work at a software development firm, then you should understand enough of the languages used by your developers to be able to read the code and understand their changes. If you work on a heavily DB-related project, then you need to understand enough SQL and database management. If you work at a website development firm, then you should know enough CSS, HTML and JS, and so it goes…

So if I am not Technical enough, should I quit testing???

Definitely not!
If you like testing and you are good at it, why should you quit? On the other hand, this is a great opportunity to improve your work and (as @diamontip wrote on his blog) increase your market value as a tester ;)
And the best part of it is… it’s not very hard to become a more technical tester! Just start by asking questions, searching the web, reading books, etc.
I’m also confident that if you show your manager the potential to increase the value of your current work, he won’t mind you investing some time during your day to learn these new skills (as long as you manage to explain why this is also good for him!).
So, stop making excuses for not being technical enough. Just make a decision (or a New Year’s resolution…) to start working on improving your technical skills!
Go ahead and get rid of the NON-Technical Tester in you!
It will be worth your time and make your job more interesting and satisfying.

Monday, January 6, 2014

Automating tests vs. test-automation

This is an old post, but I came across it in practice recently at a new organization I joined. I could see that most of the things mentioned in the post were really being implemented in my current project. The post is almost 6 years old. Please read on...

Source - http://googletesting.blogspot.in/2007/10/automating-tests-vs-test-automation.html

In the last couple of years the practice of testing has undergone more than superficial changes. We have turned our art into engineering, introduced process-models, come up with best-practices, and developed tools to support our daily work and make each test engineer more productive.

Some tools target test execution. They aim to automate the repetitive steps that a tester would take to exercise functions through the user interface of a system in order to verify its functionality. I am sure you have all seen tools like Selenium, WebDriver, Eggplant or other proprietary solutions, and that you learned to love them.

On the downside, we observe problems when we employ these tools:
  • Scripting your manual tests this way takes far longer than just executing them manually.
  • The UI is one of the least stable interfaces of any system, so we can start automating quite late in the development phase.
  • Maintenance of the tests takes a significant amount of time.
  • Execution is slow, and sometimes cumbersome.
  • Tests become flaky.
  • Tests break for the wrong reasons.
Of course, we can argue that none of these problems is particularly bad, and the advantages of automation still outweigh the cost. This might well be true. We learned to accept some of these problems as 'the price of automation', whereas others are met by some common-sense workarounds:
  • It takes a long time to automate a test—Well, let's automate only tests that are important, and will be executed again and again in regression testing.
  • Execution might be slow, but it is still faster than manual testing.
  • Tests cannot break for the wrong reason—When they break we found a bug.
In the rest of this post I'd like to summarize some experiences I had when I tried to overcome these problems, not by working around them, but by eliminating their causes.

Most of these problems are rooted in the fact that we are just automating manual tests. By doing so we are not taking into account whether the added computational power, access to different interfaces, and faster execution speed should make us change the way we test systems.

Considering the fact that a system exposes different interfaces to the environment—e.g., the user-interface, an interface between front-end and back-end, an interface to a data-store, and interfaces to other systems—it is obvious that we need to look at each and every interface and test it. More than that we should not only take each interface into account but also avoid testing the functionality in too many different places.

Let me introduce the example of a store-administration system which allows you to add items to the store, see the current inventory, and remove items. One straightforward manual test case for adding an item would be to go to the 'Add' dialogue, enter a new item with quantity 1, and then go to the 'Display' dialogue to check that it is there. To automate this test case you would instrument exactly all the steps through the user-interface.

Probably most of the problems I listed above will apply. One way to avoid them in the first place would have been to figure out how this system looks inside.
  • Is there a database? If so, the verification should probably not be performed against the UI but against the database.
  • Do we need to interface with a supplier? If so, how should this interaction look?
  • Is the same functionality available via an API? If so, it should be tested through the API, and the UI should just be checked to interact with the API correctly.
This will probably yield a higher number of tests, some of them being much 'smaller' in their resource requirements and executing far faster than the full end-to-end tests. Applying these simple questions will allow us to:
  • write many more tests through the API, e.g., to cover many boundary conditions (a rough sketch of such a test follows this list),
  • execute multiple threads of tests on the same machine, giving us a chance to spot race-conditions,
  • start earlier with testing the system, as we can test each interface when it becomes 'quasi-stable',
  • make maintenance of tests and debugging easier, as the tests break closer to the source of the problem,
  • require fewer machine resources, and still execute in reasonable time.
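As a rough illustration of the store-administration example, this is what an API-level version of the “add item” test could look like with pytest. The store_api module and its functions are hypothetical stand-ins for whatever interface the real system exposes below the UI.

    # Hypothetical API-level tests for the store-administration example.
    # `store_api` is an invented client module standing in for the real API layer.
    import pytest
    import store_api

    def test_added_item_appears_in_inventory():
        # Same behaviour as the manual UI test, exercised through the API.
        store_api.add_item(name="widget", quantity=1)
        inventory = store_api.get_inventory()
        assert inventory["widget"] == 1

    def test_add_item_rejects_negative_quantity():
        # Boundary conditions become cheap to cover once the API is the test surface.
        with pytest.raises(ValueError):
            store_api.add_item(name="widget", quantity=-1)

Tests like these execute in milliseconds and break close to the source of the problem.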
I am not advocating the total absence of UI tests here. The user interface is just another interface, and so it deserves attention too. However, I do think that we are currently focusing most of our testing efforts on the UI. The common attitude that the UI deserves most attention because it is what the user sees is flawed. Even a perfect UI will not satisfy a user if the underlying functionality is corrupt.

Neither should we abandon our end-to-end tests. They are valuable and no system can be considered tested without them. Again, the question we need to ask ourselves is the ratio between full end-to-end tests and smaller integration tests.

Unfortunately, there is no free lunch. In order to change the style of test-automation we will also need to change our approach to testing. Successful test-automation needs to:
  • start early in the development cycle,
  • take the internal structure of the system into account,
  • have a feedback loop to developers to influence the system-design.
Some of these points require quite a change in the way we approach testing. They are only achievable if we work as a single team with our developers. It is crucial that there is an absolutely free flow of information between the different roles in this team.

In previous projects we were able to achieve this by
  • removing any spatial separation between the test engineers and the development engineers. Sitting at the next desk is probably the best way to promote information exchange,
  • using the same tools and methods as the developers,
  • getting involved in daily stand-ups and design discussions.
This helps not only in getting involved really early (there are projects where test development starts at the same time as development), but it is also a great way to give continuous feedback. Some of the items in the list call for very development-oriented test engineers, as it is easier for them to be recognized as a peer by the development teams.

To summarize, I figured out that a successful automation project needs:
  • to take the internal details and exposed interface of the system under test into account,
  • to have many fast tests for each interface (including the UI),
  • to verify the functionality at the lowest possible level,
  • to have a set of end-to-end tests,
  • to start at the same time as development,
  • to overcome traditional boundaries between development and testing (spatial, organizational and process boundaries), and
  • to use the same tools as the development team.

Sunday, January 5, 2014

What kind of tester are you?

The blog post below was written by James Bach. I find it very interesting, so please read on to find out what kind of tester you are...

Source - http://www.satisfice.com/blog/archives/893

Most of my work is teaching, coaching, and evaluating testers. But as a humanist, I want to apply the Diversity Heuristic: our differences can make us a stronger team. That means I can’t pick one comfortable kind of tester and grade people against that template. On the other hand, I do see interesting patterns of skill and temperament among testers, and it seems reasonable to talk about those patterns in a broad sense. Even though snowflakes are unique, it’s also true that snowflakes are all alike.
So, I propose that there are at least seven different types of testers: administrative tester, technical tester, analytical tester, social tester, empathic tester, user expert, and developer. As I explain each type, I want you to understand this:
These types are patterns, not prisons. They are clusters of heuristics; or in some cases, roles. Your style or situation may fit more than one of these patterns.
  • Administrative Tester. The administrative tester wants to move things along. Do the task, clear the obstacles, get to “done.” High level administrative testers want to be in the meetings, track the agreements, get the resources, update the dashboards. They are coordinators; managers.  Low level administrative testers often enjoy the paperwork aspect of testing: checking off boxes on spreadsheets, etc. (I was a test manager for years and did a lot of administrative work.) Warning: Administrative testers often are tempted to “fake” the test process. This pattern does not focus on the intellectual details of testing, but more the visible apparatus.
  • Technical Tester. The technical tester builds tools, uses tools, and in general thinks in terms of code. They are great as advocates for testability because they speak the language of developers. The people called SDETs are technical testers. Google and Microsoft love technical testers. (As a programmer I have one foot in this pattern at all times.) Warning: Technical testers are often tempted not to test things that can’t easily be tested with the tools they have. And they often don’t study testing, as such, preferring to learn more about tools.
  • Analytical Tester. The analytical tester loves models and typically enjoys mathematics (although not necessarily). Analytical testers create diagrams, matrices, and outlines. They read long specs. They gravitate to combination testing. (If I had to choose one category to be, I would have to say I am more analytical than anything else.) Warning: Analytical testers are prone to planning paralysis. They often dream of optimal test sets instead of good enough. If they can’t easily model it, they may ignore it.
  • Social Tester. The social tester wants you! Social testers discover all the people who can help them and prefer working in teams to being alone. Social testers understand that other people often have already done the work that needs to be done, and that no one person needs to have the whole solution. A social tester knows that you don’t have to be a coder to test– but it sure helps to know one. A good social tester cultivates social capital: credibility and services to offer others. (I follow a lot of the social tester pattern. My brother, Jon, is the classic social tester.) Warning: Social testers can get lazy and seem like they are mooching off of other people’s hard work. Also, they can socialize too much, at the expense of the work.
  • Empathic Tester. Empathic testers immerse themselves in the product. Their primary method is to empathize with the users. This is not quite the same as being a user expert, since there’s an important difference between being a tester who advocates for users and a user who happens to test. This is so different from my style that I have not recognized, nor respected, this pattern until recently. People with a non-technical background often adopt this pattern, and sometimes also the administrative or social tester pattern, too. Warning: Empathic testers typically have a difficult time putting into words what they do and how they do it.
  • User Expert. Notice I did not say “user tester.” User experts may be called domain experts or subject matter experts. They do not see themselves as testers, but as potential users who are helping out in a testing role. An expert tester can make tremendous use of user experts. Warning: User experts, not having a tester identity, tend not to study or develop deep testing skills.
  • Developer. Developers often test. They are ideally situated for unit testing, and they create testability in the products they design. A technical tester can benefit by spending time as a developer, and when a developer comes into testing, he is usually a technical tester. Warning: Developers, not having a tester identity, tend not to study or develop deep testing skills.
When I’m sizing up a tester during coaching, I find it useful to think in terms of these categories, so that I can more efficiently guess his strengths and weaknesses and be of service.

Introduction

Hi All,

My name is Sunny Mutyala. This is my first blog post. I am feeling very excited.

I would like to take this opportunity to use this blog as a repository for all the interesting, useful, good-to-know, and inspiring articles, videos, reviews, and posts on testing, automation, and technology, as well as motivational material, to share with my friends, community members, and fellow enthusiasts.

I am in the habit of reading different blogs and social sites. While reading, I come across fantastic resources and keep saving them locally. It would be really helpful if I could access all of them (at least most of them) online and share them with people. Hence this blog!

So, good luck to myself and to all those who come across this blog. I hope we find it useful. I will try my best to keep it as simple as possible.

Thanks,
Sunny