
Idea: Set up automation to encourage basic impact measurement/knowledge sharing #12

@theecrit

Description

Context

Based on the latest brigade priorities, we'd like to provide more learning opportunities and measure our impact more directly.

But we currently don't have a set of defined success metrics or a regular reporting cadence, and beyond a monthly Demo Night (which is often treated as a way to orient newcomers), we have no structure for sharing learnings with each other.

Idea

Start with a lightweight, low-overhead approach: implement an automation that prompts members to share quick learnings and feedback, which can then be automatically routed, aggregated where needed, and reported out in useful ways.

Potential workflows

  1. Set up a weekly Slack reminder that prompts members to complete a form recording the number of hours they contributed, any learnings, and any needs/requests/concerns/questions.
  2. Member completes the form, which saves to a Google Sheet.
  3. Some clever spreadsheet magic calculates the following (where possible; a rough sketch of this aggregation follows the list below):
  • No. of returning volunteers
  • No. of new volunteers
  • Retention/attrition in the past week and month
  • Total hrs volunteered last week/month-to-date by project
  • Total hrs volunteered last week/month-to-date brigade-wide
  4. Zap runs weekly and...
    4.1 Rolls the aggregated data into a report that pushes to Slack (and maybe even to email or social channels).
    4.2 Does something interesting with the qualitative learnings
    (maybe compiles them into a separate report and shares it out, or randomly selects and posts them to Slack and/or social channels).
  5. Follow-up zaps could even report aggregate numbers and period-over-period changes on a monthly basis, how many learnings were shared amongst members, etc.
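
To make the "spreadsheet magic" in step 3 a bit more concrete, here's a rough sketch of the weekly aggregation. It assumes the form responses can be exported as a CSV with timestamp, member, project, hours, and learning columns; the file name, column names, and export step are all assumptions, not decisions.

```python
# Rough sketch only: the CSV path and column names below are assumptions.
import pandas as pd

df = pd.read_csv("responses.csv", parse_dates=["timestamp"])

week_start = pd.Timestamp.now().normalize() - pd.Timedelta(days=7)
this_week = df[df["timestamp"] >= week_start]
earlier = df[df["timestamp"] < week_start]

# Returning vs. new volunteers: "new" = never submitted the form before this week.
returning = this_week.loc[this_week["member"].isin(earlier["member"]), "member"].nunique()
new = this_week["member"].nunique() - returning

# Total hours this week, by project and brigade-wide.
hours_by_project = this_week.groupby("project")["hours"].sum()
hours_total = this_week["hours"].sum()

print(f"Returning volunteers: {returning} | New volunteers: {new}")
print(f"Total hours (brigade-wide): {hours_total}")
print(hours_by_project.to_string())
```

The same grouping could just as easily live in the sheet itself (e.g., COUNTUNIQUE and SUMIFS formulas) if we'd rather not run a script at all.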

Caveats

  • These metrics don't capture direct impact per se, but they do measure activity/engagement and can serve as early proxies for effort while we develop more robust measures. It's a way for us to take small steps towards bigger impact. If we keep waiting to collect perfect data, we risk not collecting any data at all.
  • There are probably all kinds of cool ways to use the "learning snippets" - maybe they get read at meetups, used as discussion/circle prompts, etc.
  • There is a risk that the learnings end up in a sort of black hole. How can we integrate shared learnings into future work? How can we preserve evergreen lessons so they resurface later? How do we ensure we're responsive to feedback and/or concerns? (The sketch after this list shows one small way to keep learnings resurfacing.)
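
For workflow step 4.2 and the "black hole" concern above, here is a small, hedged sketch of how a weekly zap (or a tiny script) might resurface a stored learning via a Slack incoming webhook. The webhook URL, helper name, and learnings source are placeholders, not an agreed-on implementation.

```python
# Rough sketch only: the webhook URL and learnings source are placeholders.
import random
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder, not a real webhook

def post_random_learning(learnings):
    """Pick one stored learning and post it to the brigade Slack channel."""
    if not learnings:
        return
    snippet = random.choice(learnings)
    requests.post(WEBHOOK_URL, json={"text": f"Learning of the week: {snippet}"})

# Usage: learnings would be pulled from the same Google Sheet as the metrics.
post_random_learning(["<learning text pulled from the sheet>"])
```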

Metadata

Assignees

No one assigned

Labels

Input wanted: Issue needs more discussion, clarity or specificity to be completed.
suggestion: Have an idea for how OpenOakland could work better?
