The following is a guest post by Harshit Paul, Product Growth Specialist at LambdaTest.
Critical Thinking, A Key Skill Behind Every Successful Tester
We live in a competitive world, period. A world where modern SDLCs are adopted at a large scale to accelerate product delivery. Where faster feedback loops are incorporated at regular intervals to push the next set of code changes based on customers’ demands. All for the sake of outranking the competitors. In such a fast-paced environment, it is only natural for your higher-ups to have great expectations of you.
As a tester, it becomes your responsibility to understand what your customers want and what they are getting from your development team. Clarity is everything. You need clarity in your conversations with your teammates, and you need clarity when reading the SRS (Software Requirements Specification) document. You may have your own way of thinking and your own way of communicating, and you have to adapt both to get your message across to the stakeholders in a concise manner.
Now, there are people who can express their opinions better than others, even when those opinions are wrong. If you have such people on board in your team, chances are your wise opinions might get turned down due to their influence. This is why I believe there is one skill you must develop as a tester: critical thinking!
What Is Critical Thinking?
Critical thinking is the ability to see facts clearly instead of accepting false assumptions. A tester who is a critical thinker has a knack for analysis, communication, interpretation, and problem-solving. Critical thinking is about keeping an open mindset instead of clinging to prejudices. Being a critical thinker allows you to participate actively in a conversation, so you are not fooled into doing something wrong only because others believe it to be right. In fact, you will be able to present clear counter-points in such an argument. That way, you act only after weighing all aspects of a conversation, and only after you believe that what you are doing is correct.
We will talk more about critical thinking for testers as we go further, but before we do that, it is important to realize your role as a tester in a release cycle.
As A Tester, You Are The Last Line Of Defense
Testers are considered the last line of defense against bugs. If a sign-off is given based on a misunderstanding, an outage may come knocking at your doorstep. And when that happens, you will need all hands on deck to come up with a hotfix. After the hotfix is applied, your team will have to dig deeper and present the RCA (Root Cause Analysis) to ensure the outage is never repeated. Based on your RCA, your team will have to decide whether the hotfix suffices or a more permanent fix needs to be taken up in the coming release cycle. Even if the hotfix suffices, you will have to perform a thorough round of regression testing to ensure there are no regression defects. My point is that a tiny misunderstanding in one release can end up squeezing both time and resources in the next one. Why? Well, on top of the planned activities in a sprint, your team now has to deal with the extra work that comes along with the outage.
What Leads To Such Misunderstandings?
As a tester, you depend on the information handed to you about a feature in the pipeline. Now, there can be multiple sources of information, and the more sources there are, the higher the chances of misinterpretation. Broadly, in every release, a tester has to rely on information from several major sources:
A release involves numerous people. There are developers, project managers, stakeholders, end-users, and then there is you, the tester. With such a variety of people on board, you are going to get numerous opinions about a feature under test. Experienced employees will have different opinions from the novices, and the more people on board, the higher the probability of misunderstanding. Don’t be swayed by something that doesn’t feel right. Think carefully about whether the people involved in the project are actually stating a fact or simply expressing an opinion based on their prejudices. Make sure the opinions you end up following are the ones aligned with your company’s interest.
This is a common problem in most release cycles. You may have to go through several sets of documentation to understand the feature under test. You will be reading project plans set by the project manager, the requirements-gathering document to understand what the clients are expecting, and design documents to understand how the designers are planning the user journey for a feature. You will be reading a lot, and it is important to stay sharp to avoid any kind of misinterpretation.
Ideally, your company will have multiple staging environments to validate code changes before they are pushed to the Production environment. For a successful release, it is imperative to replicate your Production environment in the staging ones as closely as possible. These prepared staging environments are where you need to test extensively before giving a thumbs up to a feature. However, you may often encounter a scenario where the staging environment is not as close to Production as you and your management expect it to be. In such cases, it is important to flag the discrepancies in the staging environment and the consequences they may have for your feature under test.
Another source of misunderstanding is the test data. When you test on staging environments, the infrastructure may not be challenged as heavily as your Production environment, because your staging environment may not have real-time traffic on it.
You also need to be wary of the test tools being used; they may not work as well as you trust them to. For example, if you are using a Selenium test automation tool, you must monitor your test reports daily and note down instances of false positives and false negatives.
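One lightweight way to keep that daily monitoring honest is to cross-check the tool’s verdicts against a handful of manually verified “sentinel” cases. The sketch below assumes a hypothetical report format (a simple test-id → verdict map); it illustrates the bookkeeping, not the API of any specific Selenium reporting tool.

```python
# Cross-check automated verdicts against manually verified sentinel cases
# to surface false positives (tool fails a healthy feature) and false
# negatives (tool passes a broken one). The report format is hypothetical.

def audit_report(automated, verified):
    """automated: {test_id: "pass"/"fail"} from the tool's daily report.
    verified: {test_id: "pass"/"fail"} from manual spot checks."""
    false_positives = []  # tool said fail, but the feature is actually fine
    false_negatives = []  # tool said pass, but a real bug was found manually
    for test_id, manual_verdict in verified.items():
        tool_verdict = automated.get(test_id)
        if tool_verdict == "fail" and manual_verdict == "pass":
            false_positives.append(test_id)
        elif tool_verdict == "pass" and manual_verdict == "fail":
            false_negatives.append(test_id)
    return false_positives, false_negatives

daily_report = {"login": "pass", "checkout": "fail", "search": "pass"}
spot_checks = {"checkout": "pass", "search": "fail"}
print(audit_report(daily_report, spot_checks))  # (['checkout'], ['search'])
```

A rising false-negative count is the louder alarm: it means your green builds can no longer be trusted.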
Third-party integrations between your software and others may also lead to ambiguity, as you will have to configure APIs for each piece of software you plan to integrate into yours. A change on the third-party side may cause your integration to fail. Also, the sensitive information being shared between the two systems should be well protected and carefully thought through.
Now that we have realized the criticality of your job as a tester and the sources that may lead to misinterpretation and ambiguity, let us get back to critical thinking and see why it is important and how it can ensure that we are not swayed by an invalid source of information in a release.
Why Critical Thinking Is A Must For A Tester
Many times, I have noticed that testers provide a sign-off simply because their test case passed. That isn’t all there is to it. As a tester, you are supposed to be the bridge that connects customers and developers. You are expected to give the sign-off only when you think the solution provided by your developer makes sense for your customer, and not for your developer alone.
Even when you do have the right solution, you need to think about whether it hinders the user experience. Think about questions like “Are we adding unnecessary clicks and navigation for a feature that should’ve been clearly visible from the moment a user lands on the website?”
Being a critical thinker helps you observe and comment on things more clearly. As a tester, it helps you to:
- Collect the requirements clearly.
- Understand the opinion of stakeholders over the feature under test.
- Become empathetic towards the user personas.
- Communicate better with the DevOps team, project managers, and higher-ups.
- Ask the right questions in a meeting.
- Avoid the ambiguities and misconceptions tied to any release.
- Write better test cases & find peculiar bugs.
I have seen and heard it many times in different companies: an outage pops up after a recent release, and the first question that comes to mind is “who passed this feature if it was that buggy to begin with?” All of a sudden, the heat gets directed toward the tester who passed that feature, and the stakeholders are left entertained with the top excuses of a tester, no pun intended!
Why is this scenario so familiar and acceptable?
Well, every release cycle is bound to enhance the product, and with it, the test requirements grow. So the more release cycles pass, the more an agile tester has to note down in the QA checklist for the next one.
A release can have too many validation checks for you to list: contrasting opinions about an enhancement in team meetings, figuring out the regression defects of a hotfix, plus the need to test different features across different staging environments. Frankly, it can get chaotic unless you settle your thought process and think critically about what makes sense. Now, how can you do that?
How To Think Critically?
Think of it as a process with certain phases. To assess a situation critically as a tester, you first need to understand your colleagues’ different arguments, see what prejudices they may have, and what facts they are providing based on their experience. To do so, you can ask stupid questions. Yes, you heard that right!
Asking stupid questions can help you determine the awareness your colleagues have around the subject or the feature under test. For example, if your project is planning on incorporating CSS Subgrid, during the meetings you can ask: “I know this is a stupid question, but what exactly do we mean by CSS Subgrid? Will it ease the user experience?” You can then have multiple colleagues presenting their thoughts on the area. (I will also share an outage experience around CSS Subgrid in a while; you wouldn’t wanna miss it! ;))
Based on different arguments, you need to think rationally about facts and sensible assumptions before you draw a conclusion.
Finally, you can go ahead and present your take on the subject in a concise manner so that others are easily convinced. Keep in mind that the people working on a project may have different cultural values, and you need to frame your answers accordingly. You don’t want to joke about something you find funny but that could upset somebody else on board. Also, be careful of the nitpickers who may try to dismiss your opinion over negligible flaws.
For example: after your test cycles, you may have a meeting to discuss the quality report, and you come across the questions: “Are we going to have a bug-free release? How many critical and minor bugs are we looking at?”
Now, before you answer these questions, realize that there may be nitpickers in the meeting. Don’t say yes to the first question. You can’t be certain about what might happen once the changes are pushed to the Production environment. However, the reverse is also true! Others can’t guarantee that the release will ultimately be a blocker in Production, especially if it is passing in Pre-Prod. This is something you can use as a counter-argument as you present your numbers.
Give them your status with an estimate of how many bugs already exist in the Production environment, and talk about the irregularities noticed between the Pre-Production and Production systems. Give them a range for the number of bugs they should expect, and highlight the potential risks, if any. If you get questions asking you to hold the release based on false logic and assumptions, simply counter them by saying: “The changes haven’t been a blocker in Pre-Prod, so we can’t really be certain that they would fail in Production. Even if they do, we have a backup plan ready, so I believe we’re good to go live as planned.”
Let Me Share A Couple Of Experiences
As a tester, I have been part of a considerable number of outages caused by minor negligence and misunderstandings among different teams. These could have been avoided with critical thinking. I will share two instances here: one where critical thinking saved the day, and one where my team regretted not thinking critically. Let’s take the one with the regret first and save the happy one for last.
I Wish I Had Thought Critically
There is a particular outage I remember where the web application went south due to a compromised or invalid sign-off. For confidentiality reasons, I am not going to go into too many specifics about the company and the release, but I will help you get the gist of the trouble that took place.
Outage Problem: Website’s Pricing Grid Looked Bizarre
I worked with a company that had a new pricing page designed. The new pricing grid was built by leveraging CSS Subgrid. Yeah, you heard that right! Not using the CSS Grid, but the newly introduced and less adopted CSS Subgrid. The change was committed by the development team, signed off by the testing department, and pushed to production. A few days later, one of the customers shared a bizarre screenshot of how the pricing grid appeared for that user. It was far from the design and was enough to put everyone into an alarming state.
In search of the root cause, we evaluated everything to check how it could have happened from our end. Ultimately, we asked for the customer’s system configuration to find out the browser and OS in use, and here is what we found.
The customer was using Google Chrome 75, which wasn’t compatible with CSS Subgrid. Looking at the browser support for CSS Subgrid at the time, it clearly couldn’t deliver a cross-browser compatible website: it worked fine only on the latest Mozilla Firefox versions, i.e., Mozilla Firefox 71 and above. No wonder our boss’s frustration went off the charts when he realized that the pricing had looked that bizarre for days!
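A sign-off gate for exactly this situation can be sketched in a few lines: compare the browsers your real users run against the minimum versions that support the feature. The support map and traffic sample below are illustrative (reflecting Subgrid’s Firefox-only support at the time), not live caniuse data.

```python
# Flag browser/version combinations that cannot render a feature before
# sign-off. The minimum-version data here is illustrative: at the time
# of this outage, CSS Subgrid was supported only in Firefox 71+.

FEATURE_SUPPORT = {"css-subgrid": {"firefox": 71}}

def unsupported(feature, user_browsers):
    """Return the (browser, version) pairs that cannot use the feature."""
    support = FEATURE_SUPPORT[feature]
    return [(browser, version) for browser, version in user_browsers
            if browser not in support or version < support[browser]]

# A sample of what your traffic analytics might report:
traffic = [("firefox", 72), ("chrome", 75), ("safari", 13)]
print(unsupported("css-subgrid", traffic))  # [('chrome', 75), ('safari', 13)]
```

Had a check like this been part of the release checklist, Chrome 75 would have been flagged long before the grid reached Production.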
Now, you may be wondering how come the testing team provided a sign-off for the new pricing grid?
Well, I was responsible for testing this feature, and I preferred Mozilla Firefox as my default browser. What’s comical here is that even the developer responsible for unit testing had a soft spot for Mozilla Firefox.
So this was one instance where the developers built the feature on Mozilla Firefox and I ended up testing it on the same. Also, due to a narrow release window, I neglected cross-browser testing and paid dearly for it. Later on, I faced a barrage of questions from the board and my project manager, and I only added fuel to the fire with an excuse that sounded like “I tested on my system and it was working fine, so I gave a green chit.”
Now that I talk about it, it is almost embarrassing! I mean, cross-browser testing is pivotal for every release cycle, and as a tester, I should’ve known that a lot better than the developers. The company ended up losing potential customers for days after the new pricing grid went up. You can’t just come out and say “it worked fine on my browser!”
I could have avoided this outage had I been proactive in the meeting when this feature was being discussed as an upcoming enhancement. I wish I had applied critical thinking back then to understand the purpose of the enhancement.
Well, you can imagine how my appraisal cycle might have gone! That said, from then on I was always on my guard. Now, I will share an instance where I was able to apply critical thinking and stack up some good points for the next appraisal.
Critical Thinking, Saving The Day!
So we had a big release coming up. Lots of changes were pushed into the CI/CD pipeline, and everyone had their plate full. We were automating the regression testing process to save as much time as possible. The changes were pushed from the Stage 1 environment to the Pre-Prod environment, and everything was going well: the automation tests were returning no failures, and manual testing was moving at a rapid pace. Everyone was relieved, thinking that the changes were being validated properly. Everyone but me, that is, since I had already learned a hard lesson never to assume anything as a tester.
I found it too good to be true. When a release of this complexity is pushed, it is nearly impossible for your automation testing scripts to pass everything. So I ran those automation testing scripts on the sandbox environment, which is the developer’s playground. The sandbox, as we know, is used for unit testing of changes and is hardly ever identical to the Production environment. And guess what happened to those automation testing scripts? They still passed!
This meant the automation testing scripts weren’t reliable, and my team was basically a victim of false negatives. Fortunately, I was able to think critically in time, which saved significant bandwidth and probably helped my team avoid an entire rollback. Well, you can imagine how my appraisal cycle might have gone this time. 😉
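The “too good to be true” instinct can even be automated: plant a canary test that is guaranteed to fail, and if the suite still reports all green, the harness is swallowing failures and its passes can’t be trusted. This is a framework-agnostic sketch; the runner and names are illustrative, not from any specific tool.

```python
# Plant a deliberately failing canary test: if the harness reports it as
# a pass, the suite is producing false negatives. Runner is illustrative.

def run_suite(tests):
    """Run named test callables; return {name: "pass"/"fail"}."""
    results = {}
    for name, test in tests.items():
        try:
            test()
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
    return results

def canary():
    assert False, "this test must always fail"

def suite_is_trustworthy(tests):
    """True only if the harness correctly reports the canary as failing."""
    probe = dict(tests)
    probe["__canary__"] = canary
    return run_suite(probe)["__canary__"] == "fail"

print(suite_is_trustworthy({"grid_renders": lambda: None}))  # True
```

Running the same suite on an environment where it has no business passing, as I did with the sandbox, is the manual version of this check.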
Next Time, Think Critically!!
Critical thinking can help you evaluate the ripple effects that may occur as a result of development. Be proactive while you are in meetings involving enhancement discussions, or even in code review sessions. You can pinpoint possible issues that may crop up in the application as a result of the suggested development plan. Think of it as being a detective: if you start digging, you are always going to find something new that others might be missing. Just keep an open mind and recognize the difference between facts and false logic. So the next time you get a feature to test, think it through thoroughly. Be considerate of the diversity of cultural values and experience of the different colleagues on your team, and think critically before you go ahead and present your opinion to them. Cheers, and happy testing! 🙂
PractiTest is an end-to-end test management tool that gives you control of the entire testing process - from manual testing to automated testing and CI.
Designed for testers by testers, PractiTest can be customized to your team's ever-changing needs.
With fast, professional, and methodological support, you can make the most of your time and release products quickly and successfully to meet your users’ needs.