Leading By Example

*The following is a guest post by Brendan Connolly, SDET at Agilysys. The opinions stated in this post are his own.*


Testers, even when embedded on teams, end up being outsiders. Their value is derived from, maintained by, and continually put at risk by this status.

In the early days of software development there was a sense of sanctity around testing.  

Testers were housed on separate teams and segregated into different organizational hierarchies to avoid the corrupting influence of developers. A tester’s vision was pure. The systems under test were black boxes to which testers could apply inputs and make judgments on the outputs.

While the intent was to keep testers unbiased advocates for quality, the relationship turned adversarial and bureaucratic.

As agile methodologies arose and became more widely accepted, there was little place for dedicated outsiders proxying for customer feedback and quality.

Large, dedicated testing teams gave way to test or quality specialists embedded on smaller development teams.

While the relationships have improved, the “outsider” status remains. In a sense, this means that a tester’s value is not described by a set of tangible, measurable skills; it is linked to their unique mindset. The wielder of this mindset takes on almost mystical or supernatural qualities, imbued with a unique world view that grants insight into the circumstances and actions that, when executed correctly, can expose critical issues or vulnerabilities in the software.

A tester’s value is now measured by how differently they think compared to the rest of their team.

But this also means that their skills and influence are easier to dismiss, as their benefit is attributed to the intangible: a token offering to the Quality Gods.

Methodologies & Frameworks

Companies and individuals have put a great deal of effort into creating and standardizing development practices. This provides a shared vocabulary for explaining and understanding the work.

Teams can use their shared vocabulary to effectively communicate ideas and progress.

On the other hand, there just isn’t a common framework for how testing is integrated into those teams.

 

Influence Through Alignment

If people don’t really know what you do, how you do it, or how to talk about it, how successful can you be as part of your team?

Just like developers, testers need common practices and a common vocabulary to be able to communicate their intent and actions easily.

The problem is that testing does not carry the weight and influence needed to get teams to adopt a whole new set of processes and terminology. Testing simply isn’t the dominant role or activity on a software team.

This means that testing practices and vocabulary need to align with the accepted development standards, processes, and terms. Additional testing-specific semantics and processes only serve to invoke the historical baggage of testing as a bureaucratic and costly activity.

Aligning is a three-step process: Comprehend, Co-Opt, and Communicate.

 

Comprehend

Everything begins with comprehending the principles and benefits of the methodology and process. This doesn’t mean you need to become a world-class expert in the subject, but you do need to achieve a comfortable working understanding.

Co-Opt

Armed with this new understanding, you can identify where core concepts and vocabulary can be recontextualized to address or explain the main aspects of testing. This allows you to frame testing activities within the processes that your team has already accepted and values.

Communicate

The simple act of understanding the core concepts well enough to engage more fully in team discussions can have a dramatic impact on people’s perception of your abilities. You’d be surprised how eager and excited people are to share, teach, or mentor you once you show interest in the things they do.

This understanding grants a sort of credibility that allows you to integrate the co-opted versions of these processes into discussions about your testing work.

Let’s look at a couple of examples:

Test Automation

Automation has already followed this model; it is an outcome of manufacturing’s influence on software development. Organizations recognized that, to gain efficiency and speed up their activities, they needed to leverage computing power the way assembly lines utilized robotics.

Testers saw a working blueprint and studied it. In gaining that understanding, they were able to take those ideas and co-opt them into their own domain as test automation. This made acceptance and communication of the new ideas easier, since there was a working, tangible example to point developers to.

Automation is also the friendliest boundary between developers, testers, and even product team members. It’s a safe place where people can learn from each other. Developers can talk test, and testers can talk code, since neither group is generally expected to be an expert in the other’s area, but they are expected to collaborate on a shared output.

Sprint Planning

During sprint planning, teams create estimates and identify the tasks required to complete user stories. Developers are commonly expected to dissect their work into functional tasks that are easily measurable, describable, and shareable.

This is much less likely to be true for testing work; it’s more often a single “test it” task. A more detailed account of the testing effort may be tracked in separate systems or documented in test plans.

Why should testing functionality need to be tracked separately from implementing functionality?

It is because testers are outsiders, whose process is less defined and understood. At its root, it is an alignment issue.

We recognize there is dissonance between a tester’s role in sprint planning and a developer’s role.

First, we can seek to comprehend why it is that developers are expected to provide such transparency for their work.   

It wasn’t that long ago that development was as much a black box of activity as testing often still is today. Back then, developers expected to be given a project and some requirements, and then to be left alone until the work was ready.

Development teams were spending a great deal of time and resources, only for the end result to often be delivered late and frequently not be what people expected or wanted.

Agile sought to remedy this by breaking work up into smaller, consumable pieces that enabled more frequent communication and demonstration. In short, it increased transparency in order to create an active feedback loop.

Test Planning

But in that sense, testing has not changed on many teams today, so it’s not unusual to see a stream of agile development lead into a mini testing waterfall.

We have a blueprint for success; it just needs to be co-opted so that test sprint planning looks more like development sprint planning. We want to replace the mini testing waterfall cycle in a sprint with an agile testing workflow.

Skip the exhaustively documented test plan. It’s not that we don’t need a plan, but when you look at development, you can see they don’t have an extensive one either; it is not part of the approach. Instead, have a testing strategy discussion and outline acceptance criteria for the testing effort, just as the team does for development work in agile.

No more bucket-o’-testing. If you would balk at a developer having a single “code it” task, or one single changeset checked in for an entire feature, then you should be equally outraged by testing following the same pattern.

Start with a single task (or a few) in areas you know need testing. Make sure each one has a focused, explicit intent that can be described verbally, is effectively time-boxed, and has results or outcomes that can be easily communicated.

Then take an iterative approach. As you complete a testing task, reflect on what you have learned to assess progress, then add or remove tasks as needed.
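
To make this concrete, here is a minimal sketch of what such testing tasks could look like if they were captured as structured data alongside the rest of the sprint work. The names here (TestTask, Outcome, the sample charters) are purely illustrative assumptions, not part of any particular tool or process.

```python
# Illustrative sketch only: a testing task with an explicit charter,
# time box, and outcome, so it reads like any other sprint task.
from dataclasses import dataclass, field
from enum import Enum


class Outcome(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    DONE = "done"  # intent explored, findings recorded


@dataclass
class TestTask:
    charter: str           # focused, explicit intent that can be said aloud
    timebox_minutes: int   # keeps the task estimable, like a dev task
    findings: list[str] = field(default_factory=list)
    outcome: Outcome = Outcome.NOT_STARTED


# Start with a few tasks in areas known to need testing...
sprint_tasks = [
    TestTask("Explore discount calculation with boundary quantities", 90),
    TestTask("Verify checkout errors when the payment gateway times out", 60),
]

# ...then iterate: as a task completes, reflect on what was learned
# and add or remove tasks accordingly.
sprint_tasks.append(
    TestTask("Follow up on rounding behavior found while testing discounts", 45)
)
```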

Code Review

Code review is a pretty straightforward concept: before code is integrated into a product, it is shared with a person or group who inspect it and provide feedback.

As you can imagine, this process isn’t universally liked, nor is there an explicit standard against which the code is evaluated.  If code quality is subjective, what are teams looking to get out of code review?

The first thing people think is that it is done to catch bugs. Next in line is the belief that it is done to make sure the code’s style, standards, and conventions are consistently applied. These things may occur in code review, but the true benefit is that people behave differently when they know they are being observed.

The fact that a developer knows they will have to show their code to someone else changes the decisions they make while writing it.

Code review also provides an opportunity to converse about the code. Knowledge can be transferred about why things were done a specific way, increasing familiarity proactively instead of only looking at other people’s code when emergencies happen.

You may be thinking that sitting in a room being interrogated and scrutinized sounds like an awful and time-consuming proposition. To combat this, code review is often not done face to face or even in real time; teams can use tools that allow a person to request review, share changes, and provide feedback. This makes some of the subjective issues less likely to occur and allows reviews to focus on things like design and unit test coverage.

 

Test Review

Even if your team is writing detailed test cases, the act of testing can be ambiguous in nature. The issue is that testers often do not have access to the feedback loop that code review provides.

In code review, the developer’s actual work is shared and discussed. If there is a test review, it is usually focused on what will be tested (the test plan), not the testing actually performed. This is justified by the idea that we want to make sure good testing will be performed. But by that logic, why do we have developers code up a solution before reviewing it? Isn’t the risk the same?

Instead of reviewing what might be done, shift test review towards discussing strategy up front, then regroup as work is completed to review progress and provide an opportunity to discuss coverage and feedback.

Taking it a step further, consider an asynchronous, tooling-supported approach, similar to what developers use to cut down on the time required for review meetings.

Test plans and test cases are often attributed value as a training tool. Instead of counting test cases and reporting everything as an often ambiguous pass/fail, consider having testers take notes on their actual actions and thoughts while testing. These notes can be reviewed and archived for reference just like code.
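
As one possible sketch of this, session notes could be kept as plain text files that live next to the code and flow through the same review tooling. The template, file layout, and function below are illustrative assumptions, not a prescribed format.

```python
# Illustrative sketch only: write session-based test notes to plain text
# files that can be committed, reviewed, and archived just like code.
from datetime import date
from pathlib import Path

SESSION_TEMPLATE = """\
Charter: {charter}
Tester: {tester}
Date: {day}

Notes (actions, observations, questions):
{notes}

Issues raised:
{issues}
"""


def write_session_note(charter: str, tester: str, notes: str, issues: str,
                       out_dir: Path = Path("test-sessions")) -> Path:
    """Save a session note where it can go through review tooling."""
    out_dir.mkdir(exist_ok=True)
    slug = charter[:40].lower().replace(" ", "-")
    path = out_dir / f"{date.today()}-{slug}.txt"
    path.write_text(SESSION_TEMPLATE.format(
        charter=charter, tester=tester, day=date.today(),
        notes=notes, issues=issues,
    ))
    return path


write_session_note(
    charter="Explore discount calculation with boundary quantities",
    tester="A. Tester",
    notes="- Tried quantities 0, 1, 999, 1000\n- Rounding looks off above 999",
    issues="- Possible bug: discount rounds up at quantity 1000",
)
```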

 

Communicating Leadership

In these examples, the first step is seeking to understand.

The more deeply we understand, the more insight we gain.

The more insight, the more effective our communication.

The more effective our communication, the greater influence a tester has.

This allows a tester to be a functioning example to their team.

Then you can pass this on, tester to tester, modeling how testers can take successful processes from other roles on their team and integrate them into their testing practices, fostering deeper alignment and integration of testers on teams.

It can also work tester to developer. You may be able to advocate more successfully for better sprint planning or healthier code reviews by modeling those activities in a testing context first. It’s hard to argue with a working example already occurring on your team.

Take the opportunity to grow your influence and lead by example.


About Brendan Connolly

Brendan Connolly is a Software Design Engineer in Test based out of Santa Barbara, California, with over 7 years of testing experience in a variety of roles. He is responsible for creating and executing testing strategies and using his coding powers to develop tooling that helps make testers’ lives easier.

If you have anything to comment on, you can find him here.
