Definition of Ticket States
Sep 9, 2022
Peter Warner-Medley, Ester Ramos Carmona, Morgan Sadr-Hashemi, Ivan Spirkovski, Ollie Randall
#doc #process
Technical Story: PT-152
Context and Problem Statement
There's an occasional lack of clarity about where a ticket is and whether we've really met the goal. Sometimes we're unsure whether a ticket we're working on lives in In Review if it's been reviewed and we're making changes; other times we can be a bit cheeky about whether or not we've got our green apple. We should introduce a definition of the various ticket states after To Do (In Progress -> In Review -> Done) to ensure a clear, shared mental model.
Decision Drivers
- 66⅔% increase in the size of the engineering team on Monday
- Increasing frequency of question: "Yeah, but is it done done?"
Considered Options
- No Definition of Done
- Definition of Done with strict design/data review
- Definition of Done with permissive/optimistic design/data review
Decision Outcome
Chosen option: "Definition of Done with permissive/optimistic design/data review", because it fits well with the stage we're at with our product (getting big enough not to cowboy but healthy appetite for regression) and emphasises good judgement in a high trust team whilst providing a clear process for acting on those judgements.
Positive Consequences
- Clarity around process, making it easier to onboard engineers
- Consistent expectations between us all about when to move a ticket to Done
- Checking of outcomes when finishing work, increasing confidence in a working product
- Minimal friction added to the process by requiring only light-touch checks
Negative Consequences
- Doesn't fit the current Linear workflow neatly (tickets move to Done when a well-named branch's PR is merged)
- Light-touch checks allow drift in design, layout and data integrity over time
- Adds a small amount of friction (vs. merge = done) to finishing tickets
Pros and Cons of the Options
No Definition of Done
We continue without any agreed criteria for moving a ticket to Done and rely on general good practice and discipline.
- Good, because lowest friction of all the options
- Good, because no change required
- Bad, because it's unlikely to scale well, and our discipline may not be enough as the app becomes more complex and design becomes more important
Definition of Done with strict design/data review
Done means:
- No linting regressions
- Automated testing on all new functionality
  - Unit tests to cover simple transformations/UI logic/etc. (see the sketch after this list)
  - Integration/E2E tests to cover introduction or update of the data model (e.g. changing a resolver or adding a new nested object type to a type)
- Peer reviewed
- All code merged
- Deployed and in production
- Full acceptance testing:
  - Ollie Randall/designer reviews all affected Storybooks and/or user stories in a preview environment
  - Ester Ramos Carmona/affected users review the data integrity of all affected tables/pipelines
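To make the bar concrete, here is a minimal sketch of the kind of unit test meant for a "simple transformation". The `formatDuration` helper and the Vitest runner are assumptions for illustration only, not part of our codebase:

```ts
// A hedged sketch of the level of unit testing intended here.
// `formatDuration` is a hypothetical helper; Vitest is an assumed runner.
import { describe, expect, it } from "vitest";

// Hypothetical simple transformation: seconds -> "h:mm" display string.
function formatDuration(totalSeconds: number): string {
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  return `${hours}:${String(minutes).padStart(2, "0")}`;
}

describe("formatDuration", () => {
  it("formats whole hours", () => {
    expect(formatDuration(7200)).toBe("2:00");
  });

  it("pads minutes to two digits", () => {
    expect(formatDuration(3660)).toBe("1:01");
  });
});
```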
We also add a definition of In Review as "Needs input or decision from someone else (not the ticket assignee)" (i.e. a synonym for Parked or Waiting) and of In Progress as "Needs work from the ticket assignee".
- Good, because extremely high confidence in product
- Good, because maximum level of consultation
- Bad, because very high friction
- Bad, because unnecessary level of process (not DoDoI):
  - e.g. Ollie Randall sees no need to check these things and recognises most design will get binned anyway
- Bad, because introduces change to current process
- Bad, because doesn't fit current Linear workflow
Definition of Done with permissive/optimistic design/data review
As above but with much more flexibility around acceptance testing. Done means:
- No linting regressions
- Automated testing on all new functionality
  - Unit tests to cover simple transformations/UI logic/etc.
  - Integration/E2E tests to cover introduction or update of the data model (e.g. changing a resolver or adding a new nested object type to a type; see the sketch after this list)
- Peer reviewed
- All code merged
- Deployed and in production
- Appropriate testing of deployed code:
  - Acceptance testing when introducing a substantial change (e.g. a whole new data source, a new screen in the UI, etc.)
  - 'Smoke' testing by the engineer otherwise (e.g. peer review of Storybooks, and the assignee attempts to complete the full user story from the ticket using the production app)
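As an illustration of the integration-test criterion, here is a hedged sketch of testing a resolver after adding a new nested object type. The schema, the `user` resolver and the `Team` type are hypothetical stand-ins for whatever the ticket actually touches, executed with graphql-js:

```ts
// A hedged sketch of an integration test for a data-model change: a
// hypothetical `Team` type newly nested on `User`. The schema and resolver
// are stand-ins, not our real GraphQL API.
import { buildSchema, graphql } from "graphql";
import { expect, it } from "vitest";

const schema = buildSchema(`
  type Team { name: String! }
  type User { id: ID! team: Team! }
  type Query { user(id: ID!): User }
`);

// Root resolver standing in for the real one changed by the ticket.
const rootValue = {
  user: ({ id }: { id: string }) => ({ id, team: { name: "Data" } }),
};

it("resolves the newly nested Team type on User", async () => {
  const result = await graphql({
    schema,
    rootValue,
    source: '{ user(id: "1") { id team { name } } }',
  });
  expect(result.errors).toBeUndefined();
  expect(result.data?.user).toEqual({ id: "1", team: { name: "Data" } });
});
```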
Other definitions are included as above: In Review means "Needs input or decision from someone else (not the ticket assignee)" and In Progress means "Needs work from the ticket assignee".
- Good, because it trades a small amount of added friction for increased confidence
- Good, because introduces a consistent mental model
- Good, because evolvable without a new decision
- Good, because emphasises trust in the team while giving a framework for best practice
- Bad, because does not capture the data engineering experience well (a one-person team can't really get 'peer' review, and acceptance testing is very unclear)
- Bad, because quite vague on when acceptance testing is required
- Bad, because (due to the above) relies on engineers' judgement about when to refer for acceptance testing (but we are a high-trust team)
- Bad, because introduces change to current process
- Bad, because doesn't fit current Linear workflow
Links
- [Will add link to DoD in Notion here](link to adr)
- …