Ask Laarc: Ideas or advice on setting up a QA process, on a small team?
6 points by emily to ask 428 days ago | 7 comments
We are a team of 5, and are setting up an actual QA process for the first time. What would you recommend including? What have you done before that worked, or that you liked?


4 points by davidk01 428 days ago

Make it a regular part of the engineering process so that it happens consistently, instead of being a big-bang effort near releases. For example, make every Friday QA engineering day; otherwise it falls by the wayside and turns into a last-minute thing that no one wants to do.

-----

2 points by akkartik 426 days ago

The term is a bit open-ended. What's your goal for the process?

-----

3 points by emily 426 days ago

Right now our “QA” is total chaos; we have no standard process that we follow, and many of our internal bug reports have no clear path to reproduce and/or no clear path to “fix”, i.e. someone notices a thing we didn’t consider or an interaction they don’t particularly like. It all goes into the same backlog as our normal roadmap tasks, so when we get to sprint planning later, we have these nebulous tickets that hopefully someone remembers the meaning of. The lack of process means we always miss something during testing.

So the primary goal is to clean up this mess somehow, and step 1 is, at the least, to get some sort of checklist in place along with a few base requirements for submitting QA tickets. Beyond that I’m somewhat at a loss, because I’ve never dealt with setting up a formal process, or with following one on a team of our size (I’ve only dealt with QA at large places, e.g. Leo Burnett, which would obviously be an impossible goal for us).

Basically just looking for viable places to start from, or general advice while I’m still at the jumping off point, not necessarily a targeted solution.

Edit: I just realized part of what you’re probably asking. I’m referring to testing release candidates of our application, and to continued testing of already-released versions.

-----

3 points by akkartik 426 days ago

Thanks, that's helpful. So you're thinking about two things: testing release candidates and tracking/prioritizing bugs that are already in production. Some suggestions:

a) Keep the two concerns separate in your minds. Maybe call the first "checking releases", and the second "managing existing defects". I find that avoiding words ending in common suffixes like -ify and -ation (and, worst of all, -ification) helps keep everyone in sync about what the goal is, and also keeps the goal stable and memorable over time, and more resistant to scope creep.

b) Start a checklist: manual tests to perform before every release. Checklists are great; give this book a quick skim: https://www.amazon.com/Checklist-Manifesto-How-Things-Right/dp/0312430000
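
A first version can be nothing fancier than a text file; for example (hypothetical items, swap in your own critical paths):

    release-checklist.txt
    [ ] Fresh install works on a clean device/browser
    [ ] Sign up, log in, log out
    [ ] The 3-5 flows users hit most
    [ ] One pass through anything that changed this release
    [ ] Error states: no network, server returns 500

Walk the whole list, in order, before every release, and add an item whenever a bug slips through.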

c) Start writing automated tests. This is a lack I've noticed in the codebase :) Shawn knows I'm a big believer in tests. If you do them right you'll never need a QA organization. Unfortunately you started without them, a form of "tech debt" that is very hard to dig yourself out of.
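
For calibration, a single test can be tiny. Here's a sketch in TypeScript with Jest (the function and values are made up, not from any real codebase):

    // price.test.ts
    import { formatPrice } from "./price"; // hypothetical helper

    test("formats cents as dollars", () => {
      expect(formatPrice(1999)).toBe("$19.99");
    });

    test("handles zero", () => {
      expect(formatPrice(0)).toBe("$0.00");
    });

A handful of these around your riskiest paths already shrinks the manual checklist.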

d) When someone reports a bug, spend a few minutes trying to think about how it got through to production. Can we add something to the checklist? Does some feature require more automated tests?

e) Be more conservative in adding new features. It's great in the short term that someone suggests a feature and Shawn adds it immediately. But now you're on the hook to support it for all time. Or at least, unwinding it can take longer than creating it in the first place. Be wary of this implication.

I don't have much to say about the second concern, unfortunately. I'm so far into thinking about blue-sky ways to fix the first problem (https://github.com/akkartik/mu/blob/master/subx/Readme.md) that I seem to be trying to avoid all production bugs altogether. But of course that doesn't help you today.

Try to get to some sort of long-term balance between checklists, tests, and bug tracking. Once you start tracking bugs it's easy for the list to grow too large and become useless, causing the periodic bouts of bankruptcy and "marking all bugs as done" that happen in many large open source projects (ahem, Firefox). Once you do manual checklists, hopefully the rate of incoming bugs will go down. However, the checklist will now start growing over time, eventually becoming useless itself. So spend time automating tests to remove items from the checklist. Finally, writing tests up front will hopefully reduce the "top of the funnel" pressure here, so you get fewer production bugs that result in new checklist items and that you end up writing automated tests for months later.

-----

2 points by emily 426 days ago

To be clear, this isn’t in reference to Laarc. :) My day job is at a small fintech startup called Medean (https://www.medean.com should link to the iOS app and web app). We have some amount of automated testing: pretty good unit test coverage, though very few end-to-end tests written, which is always on my mind.

The way we structure things, I don’t have much say in which new features we work on in a sprint or how tickets get prioritized, but I do get to deal with any and all of the resulting chaos.

That said, your points a, b, and d are definitely relevant. Especially the separation of concerns, which is something I need to find a better way of articulating to the team without making it seem like I’m rejecting certain contributions to the “bug” list. Everyone has certain categories of things they think should be specifically prioritized, so maybe one glaring weakness is that we have no clear channel to incorporate user feedback and only subjective ways to measure impact.

-----


If I were in your shoes, I would focus on getting good data in.

Good classification is important, and you have to be strict about it: bug vs. improvement vs. feature.

Good write-ups on the bug side of the house are important, and it will take a lot of work to get them to a usable state.
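
Something like the following on every bug ticket keeps write-ups usable (hypothetical fields; trim to taste):

    Title: the symptom, in one line
    Steps to reproduce: numbered, starting from a clean state
    Expected vs. actual: one line each
    Environment: app version, OS/browser, account type
    Evidence: screenshot, log snippet, or "none"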

It kinda sounds like the team doesn't have a PM, which means everybody has to be a PM. In that scenario, it has to be a completely positive thing for bugs to be talked through, and all the devs have to own the quality of the backlog, to the point of near-obsession.

In a perfect world the PM would be moving reports from user/staff land over to dev land and handling all that legwork, which would "solve" a lot of that doubt/waste feeling.

Aside: Medean looks interesting, but that is one crowded market; it even has some nonprofits in it.

-----


Ah, thanks for clarifying that! That's a much more complicated situation, and honestly I'm not well-versed enough in those more mature organizational dynamics to be offering advice here :)

-----



