This post has gone through a few versions, as I figured out exactly what it was I wanted to get across. Please bear with me, but also feel free to skip.
I think my TL;DR is:
- I’d rather write automation code than use a no / low code tool, even though I don’t consider myself a programmer
- I still use AI to my advantage, even though I don’t fully trust it
- I’m a little bit stubborn when it comes to what I want to use and how, and I’m not sure what that says about me, as a tester
Automation and No / Low Code
As a tester, I’m a hands-on explorer first and foremost. I see the value in automation, but I don’t particularly like to code. Though I’ve written automation on previous projects, I don’t see myself as a programmer, and would not like to have the title Test Automation Engineer. You might think I’d be the perfect customer for some no / low code automation tool, but I’m not.
Maybe it’s my need for control. Maybe it’s my preference for being direct. But none of the no / low code products I’ve seen has appealed to me at all. Even though I don’t particularly like writing code, I’ll happily do it over using a no / low code option, for several reasons:
- I like the benefits of the resulting automation
- It’s an opportunity to strengthen a tool on my tool belt
- I like to learn and develop different skills
- I like the flexibility
- The related knowledge and experience enhances my white / grey box testing skills and allows me to better contribute to, and influence, architectural discussions
- I don’t like the idea that I can’t do something
- I want to be able to contribute to whatever needs to be done
I recently set up the infrastructure and wrote the first automated UI flow for our new system version from scratch. I wouldn’t count coding as a hobby, and it’s Salesforce, so it was a pain in the butt, but I genuinely enjoyed doing this work. I learnt a lot, got the satisfaction of solving annoying problems, and received positive feedback. What would I have gained by using a no / low code tool? Would any time savings be worth missing out on the benefits listed above?
I don’t like to write automation, but I want to write automation. And I’d rather do it via a programming language than “natural” language.
AI and Vibe Coding
I’m definitely not on the bandwagon with AI super-fans, but I’m also not against AI. Though I’m very skeptical and cautious when it comes to AI, I do see potential. So did I vibe code my way through the UI automation I set up? Hell, no. I specifically did not vibe code, again, perhaps out of selfishness and a need for control.
- I don’t trust AI
- I don’t just want to produce, I want to learn
- I want to be able to understand the code and why it is the way it is
- I want to make conscious design / architectural decisions
- The general idea of vibe coding just makes me uncomfortable – I see too many risks
But that doesn’t mean I didn’t use AI at all. I found ChatGPT very helpful, for example, using it like a sparring partner. I didn’t just ask it to generate blocks of code for me, I asked it questions: questions about development concepts, comparisons of potential approaches, good practices. I essentially asked it to be my teacher. I asked it to explain things to me, offer opinions, and suggest improvements. But I never used any code suggestions without understanding or scrutinising them. And a lot of the time, there were issues with the AI’s suggestions, or things I knew could be better, and I challenged it on them.
I also used GitHub Copilot as a kind of predictive text for coding. Some of the suggestions were absolute trash, but when I needed to basically repeat the same pattern several times, it picked up on that pattern and made it faster for me to create and edit lines of code. Again, I didn’t ask the AI to write whole blocks or files of code based on a description, but I did ask it to change very specific things about the code that would be tedious to do manually.
Maybe it’s underlying resistance or skepticism or stubbornness, or simply my desire to learn, but I’m very deliberate about not vibe coding, and perhaps oddly proud of it. Sure, it took a bit longer. Sure, I still got valid comments from human code reviews. But I learned so much, and could answer questions about the code. Any mistakes or weaknesses in it are mine; I can take responsibility for something I wrote, rather than be embarrassed or confused about something I out-sourced. I like that.
AI and Testing
I’ve been curious about the usefulness of AI for things like generating test ideas, and have tried it out on a few occasions. I haven’t been that impressed. It’s not that the output was necessarily bad, just that it didn’t really tell me anything I didn’t already know / hadn’t already thought of. I found Rahul Parwal’s Prompting for Testers course to be a great introduction to prompt engineering, and used what I learned from it to try to craft better prompts for better results. I definitely think it improved ChatGPT’s answers, but it still wasn’t what I was hoping for.
I’ve generally found that broad testing (t)asks like generating test ideas are boring to chat about with AI. What’s been more useful is asking it very specific questions about Salesforce, for example, the system I started testing for the first time last year. Perhaps unsurprisingly, my experience has been that AI is great at three things: providing high-level introductions; crawling the internet for, then summarising, niche information; and following set patterns. What it’s not so great at: replacing the thought work of an already capable and knowledgeable human being, and providing instructions on how to complete a specific task in a specific system (hallucinations of options and features are rife).
Though I’m interested in, and have looked into, AI-powered automation tools (I thought they could be great for flaky Salesforce locators), I’m still skeptical. I still lack the underlying trust needed to just let a tool like that run, without reviewing, analysing, and potentially improving what it’s built in order to automate the thing I asked of it. And I will be 100% honest with you. It could simply be a case of not trusting what I don’t understand.
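For what it’s worth, the “self-healing locator” idea those AI tools promise can be sketched by hand without any AI at all: try an ordered list of locator strategies, most stable first, and fall back to the brittle ones only as a last resort. This is a minimal sketch, not anyone’s actual implementation; the locator values are hypothetical examples, and `find` is assumed to be a small adapter you write around your own driver (for Selenium, something like `lambda by, value: driver.find_element(by, value)` wrapped to return None instead of raising).

```python
# A hand-rolled alternative to "self-healing" locators: try an ordered
# list of locator strategies and return the first element found.
# Locator values below are hypothetical, not real Salesforce markup.

from typing import Callable, Optional, Sequence, Tuple

Locator = Tuple[str, str]  # e.g. ("css selector", "button[name='save']")

def find_with_fallback(
    find: Callable[[str, str], Optional[object]],
    locators: Sequence[Locator],
) -> object:
    """Return the first element that any locator resolves to.

    `find` is an adapter around your driver that returns None when
    nothing matches. Locators are tried in order of preference, so
    stable attributes come first and brittle auto-generated IDs can
    sit at the end as a final resort.
    """
    for by, value in locators:
        element = find(by, value)
        if element is not None:
            return element
    raise LookupError(f"No element found for any of: {list(locators)}")
```

The design choice here is simply making the fallback order explicit and reviewable, which is exactly the kind of control I’d be handing over to an AI tool that rewrites locators on the fly.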
When it comes to AI, I’ve been a user, a scrutiniser, a reviewer. But I haven’t yet been a developer or (paid) tester of AI. Perhaps more knowledge would give me more power to let go. But for now, I’d rather write automation code.
AI and Quality Engineering
Two other areas I’m still testing out with AI are known and unknown unknowns (especially pertaining to risk), and strategic planning. I much prefer it when AI seems to have an “opinion” on things, and can help me see things in a new way. More than just following instructions or telling me what I want to hear, I want AI to test my work, in a way, not only as a tester, but as a quality engineer. I want more moments that make me go, “damn, I’m glad I asked AI about that,” and really mean it.
Implications
So what does all this say about me, as a tester, apart from the fact that I desire control, like to learn, and have a lack of trust? Does the fact that I’m not a lover of coding go against me, even though I still want to do it? Does my resistance towards fully embracing new technologies mean I’m leaving myself behind, compared to other testers? I’d love to read your thoughts in the comments, so let me know how this adds to your perception of me as a tester, and what your attitudes towards no / low code and AI for automation and testing are, if you’re also somewhat resistant.