Last year, I wrote about my first experience with SBTM, in collaboration with Mike Talks. Since then, I’ve had the opportunity to use SBTM in the “real world”, on a client project that I was on for about six months. I’d like to reflect on that experience, and consider some of my learnings, and things I’d like to work on or do differently in future.
What Was in the Reports
I tested the client’s mobile app on Android and iOS, and was the only tester working as part of the mobile scrum team. The project was to introduce in-sprint testing, and there had previously been no exploratory testing, so I had a clean slate to start from. I created the session reports as sub-tasks on Jira, linked to the relevant user stories, and initially included the following information on every session report:
- Test Notes
You’ll notice that there are significantly fewer headings than in James Bach’s sample session report. In all honesty, I omitted most of the metadata because I didn’t really understand the value of it.
As I carried out more sessions and created more reports, the need for more headings emerged:
- Branch
  - The client had a lot of issues stemming from their use of multiple long-lived feature branches, and it quickly became necessary to specify which branch had been tested during the session
- Build
  - “Build” changed to the version number of the branch
- Test accounts
  - I used to include these in the test notes when relevant, but including them as part of the standard report data made them easier to refer to when recreating issues
- Start date and time
  - Since reports were recorded in Jira, I initially relied on the Jira timestamps to keep track of the date and time for me, but I often forgot to hit “Start Working” or had a discrepancy between the actual session time and the comment time
- Duration
  - Similar to the start date, the Jira timestamps weren’t accurate enough to keep track of the duration of sessions; filling this out at the end of the session also forced me to consciously think about how long I was spending on each test charter
- Charter vs. opportunity
  - I didn’t initially understand the usefulness of this information, but as I used more charters to guide my testing, I took more notice of how often I went “off-road”, and could use this information to spot patterns and think about why I went off-charter when I did
- Testing vs. bugs vs. set-up
  - Similarly, I initially underestimated the value in consciously measuring how much time could be spent on actual testing, and how much of it was taken up by investigating and reporting on bugs, or just trying to get the right set-up to be able to start testing in the first place
- Test Notes
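The testing vs. bugs vs. set-up split is just arithmetic over the minutes logged during a session. As a rough illustration (my own sketch, not part of any SBTM tooling or the client’s process), it could be calculated like this:

```python
def tbs_split(testing_mins, bug_mins, setup_mins):
    """Return each activity's share of the session as a whole-number percentage."""
    total = testing_mins + bug_mins + setup_mins
    if total == 0:
        raise ValueError("session has no recorded time")
    return {
        "testing": round(100 * testing_mins / total),
        "bugs": round(100 * bug_mins / total),
        "setup": round(100 * setup_mins / total),
    }

# e.g. a 90-minute session: 60 min testing, 20 min on bugs, 10 min of set-up
print(tbs_split(60, 20, 10))  # {'testing': 67, 'bugs': 22, 'setup': 11}
```

Seeing the split as percentages like this made it much harder for me to ignore sessions where set-up or bug investigation was eating most of the time.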
What Was in the Test Notes
Let’s take a closer look at the Test Notes section. James’ sample report leaves the structure of this fairly open, and my first experience with SBTM left me with doubts about whether I was making the “right” kind of notes in my reports. Since then, I’ve come across the PQIP method for note-taking, from Simon Tomes.
> I felt like experimenting with a new way of sharing testing tips (it’s a new way for me). 2m 20s max, no editing! 😮
>
> — Simon Tomes (@simon_tomes) April 4, 2018
In Simon’s structure, PQIP stands for: problem, question, idea, praise. I like the grouping format, as this gives me some guidance around the kind of notes to include, and I particularly like the praise section. As testers are often the ones delivering bad news, I think it’s really important for us to also balance criticism with positivity, and credit where credit is due.
I took the PQIP structure as a foundation, and adapted it to the project’s needs, after experimenting in a few sessions. The test notes structure that I ended up using regularly on the project was as follows:
- Issues / findings
  - This was the same as “problems” or “bugs”, but it used the client’s terminology
- Observations
  - In my first experiment with SBTM, I questioned how I only seemed to record test results that I didn’t expect, or that I considered to be potential issues. The tests that were “successful” or returned results within my expectations weren’t recorded, even if they might otherwise have been interesting. The “observations” section is where I could record anything interesting or of note that didn’t fit under the “issues” heading
- Ideas
  - Here, I logged ideas for improvements or better UX, which the PO could take or leave as they saw fit
- Questions
  - Conflicting or missing requirements? Confusion over intended behaviour? I included these questions here, and discussed them with a PO during session debriefs
- Praise
  - I really liked having this section, as it forced me to look for things that had been done well
- Obstacles / constraints
  - Here is where I included anything that James might call an “issue” (as opposed to a problem) – things that got in the way of testing but were not necessarily bugs in the software, like known performance issues with a particular service, or someone accidentally deleting all your test data part-way through a session
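Put together, a test notes section following this adapted structure might be skeletoned like the following (the entries are invented examples for illustration, not notes from the client project):

```
Test Notes

Issues / findings
- App crashes when rotating the device on the payment screen

Observations
- Sync takes noticeably longer on the feature branch than on develop

Ideas
- A pull-to-refresh gesture would save a trip back to the home screen

Questions
- Should a logged-out user see cached content, or an empty state?

Praise
- The new error messages are clear and say what to do next

Obstacles / constraints
- Staging environment was down for the first 20 minutes of the session
```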
This format seemed to work well for both me and the team, as developers were able to read my session notes and follow up with questions or clarifications during the next stand-up. I was pleasantly surprised that other people were reading my session notes without any prompt from me, and I could see that they added value for the team.
I was the only tester on the project, so I decided to conduct session debriefs with the product owner(s). Structuring my test notes under clear headings proved useful for the debriefs, as it naturally shaped the format of the debriefs too. We could also easily skip to the most interesting parts to keep discussions short and valuable. Having all the important questions together meant that we could quickly go through each one, and the product owner was also free to browse through the ideas and other notes in their own time.
I was really pleased with the feedback I received from one PO, who praised my attention to detail and said that what I shared in my reports was exactly what he was looking for. We were able to build a shared understanding of the product’s requirements and state, and it was easy for me to keep my team members well informed.
Upon reflection, I don’t know why I ever had a question about what to do after a debrief, or how to record it. It seems obvious to me now that I could have added a final section in the session reports for “debrief” and noted the outcomes of discussions, particularly the answers to questions that I had, and any actions to be taken.
What I’d Like to Work On
Although the format that I used for my test session reports seemed to work well, they were distinctly lacking in details of test strategies, heuristics, and oracles. I think it’s important for testers to be able to talk about the testing we’re doing, and why, but I’m really not used to doing it in such a formal way. I don’t feel confident in framing my testing like this, and that’s the honest reason why these details weren’t in my reports. In a way, I think that allowing myself to get used to using SBTM first was good, because it meant that I could focus on one thing at a time. However, I want to gain confidence through practicing and learning how to talk about testing more formally, so I definitely want to push myself to think about and include information on test strategies, heuristics and oracles in future test session reports.
I’m also aware that there can be an element of bias in how test notes are structured. Just like how having a “praise” heading encouraged me to look for good things, and previous lack of an area for interesting observations hindered me, there could be something else that I’m missing by using this particular structure for every test session. For that reason, I’d like to experiment with other note-taking structures too. I haven’t stumbled across very many, so if you have any suggestions for other things to try (no mind maps please), then please let me know in the comments.
So far, I’ve been riding the wave of being the lone tester on projects, but I’d really like to be able to learn from another tester and do more pairing. Although I’ve done some pairing with developers, I think the dynamic was different, as I was taking the lead and teaching them about testing, since no in-sprint testing had been done before. Of course, there are things that I learnt from the developers as well, but I’m still looking forward to being on a project with other testers in the future and being able to learn from them too.
The main post is finished now, so you can stop reading if you’d like, but something is bothering me about it. I’m aware that I usually ramble on about topics more related to core skills and culture, so I tried to make this shorter and more to the point. However, it feels really dull to me, and lacking in personality. I’d planned to write about this for a while, so it’s not that the topic doesn’t interest me.
Perhaps it’s just my flatter mood coming through, or maybe the way I ramble on is part of how I show my personality in blog posts. I really don’t know.
What do you think? Do you prefer the shorter, more dry posts, or is a little bit of waffling okay with you? Please share your thoughts in the comments.