[teampractices] FYI: Measuring the effectiveness of VE process interventions

Joel Aufrecht jaufrecht at wikimedia.org
Tue Aug 11 23:38:49 UTC 2015


*tl;dr:* Metrics targeted exactly to the specific things we want to change
may be more helpful than general opinion surveys.

David was extremely helpful in clarifying my thinking on how I should
measure the effectiveness of the VE process work I've been doing.  So I
want to brain-dump before it evaporates.  Context: Neil and I did a general
process survey of the VE team and their stakeholders in April/May,
identified a handful of challenges, and proposed five specific
interventions, all of which are currently underway.  The most
people-intensive intervention is negotiating Service Level Understandings
between the VE team and each stakeholder group, and using these discussions
to uncover contradictions in goals and/or resource levels.  Arthur
suggested I use surveys to measure if these SLU discussions have any effect.

David proposes getting much more specific.  For each SLU discussion,
identify several specific, measurable actions to be taken, and then measure
them.  Try to capture both input and output/outcome.  For example, if QA
complained (I'm inventing this as a fake example) that VE keeps releasing
patches with IE bugs, then the VE team might respond, well, you don't test
our patches in time and we have to release untested code.  So
they might agree on a lead time and a level of QA availability for
testing.  Then, we could measure an input (time it takes for each patch to
be tested) and an outcome (# of critical IE bugs found after release).
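
To make that fake example concrete (the data source, field names, and
priority labels below are all invented; Phabricator is just a guess at where
the records would live), the two metrics could be pulled with a short script
along these lines:

    # Illustrative sketch only: assumes we can export patch and bug records
    # (e.g., from Phabricator) into simple dicts with the fields shown below.
    from datetime import datetime
    from statistics import median

    # Hypothetical export: one record per patch in the measurement window.
    patches = [
        {"id": "D101", "ready_for_qa": "2015-07-01", "qa_tested": "2015-07-03"},
        {"id": "D102", "ready_for_qa": "2015-07-02", "qa_tested": "2015-07-08"},
    ]

    # Hypothetical export: bugs filed against the release.
    bugs = [
        {"id": "T2001", "priority": "High", "tags": ["IE"], "found_after_release": True},
        {"id": "T2002", "priority": "Normal", "tags": ["IE"], "found_after_release": True},
    ]

    def days_between(start, end):
        """Calendar days between two ISO dates."""
        fmt = "%Y-%m-%d"
        return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

    # Input metric: how long each patch waits for QA testing.
    turnaround = [days_between(p["ready_for_qa"], p["qa_tested"]) for p in patches]
    print("Median QA turnaround (days):", median(turnaround))

    # Outcome metric: critical IE bugs found only after release.
    critical_ie = [
        b for b in bugs
        if b["found_after_release"]
        and "IE" in b["tags"]
        and b["priority"] in ("Unbreak Now!", "High")
    ]
    print("Critical IE bugs found after release:", len(critical_ie))

Whatever measures the real SLU discussions land on, the shape would be the
same: one input number and one outcome number per agreement, cheap enough to
re-run every month.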

Because we don't know what to focus on until after the SLU meeting, we
probably don't have any Before metrics.  Where possible we can reconstruct
Before metrics, but we would probably have to accept having mostly
qualitative/anecdotal Before data.

I will also proceed with a survey for all stakeholders included in the
process review (~30 people), but this doesn't have to be specific to the
SLU intervention.  Instead, I can ask about each of the ~10 challenges
identified:
1) Do you agree that X is a problem?
2) How does this affect you?

And run the survey once now, as a retroactive baseline ("As of April/May,
did you think ...") and then in a few more months, when most of the
recommendations are implemented and mature.

However, it occurs to me that I did run a similar survey in May/June, and
got 13 responses.  So maybe I won't do a retro baseline, but instead just
run a future survey later this year.



*--Joel Aufrecht*
Team Practices Group
Wikimedia Foundation