Discussion:
[mb-devel] Community transparency
Robert Kaye
2015-02-26 13:02:13 UTC
Permalink
I would like to take a minute to start a conversation about community transparency and what that means. Previously I've used the term "public shaming" and since then have gotten quite a bit of push-back from that. So, I'm officially retracting the use of that term. Mea culpa.

However, I would like to get feedback on one important mechanism for keeping a community informed about conflicts that are happening inside it. Let me give an example of what happened in the past that illustrates this:

Back in 2006 we had one poisonous person divide the community into factions. People were either for or against this person, and this person's speech continued to reinforce the split. Things had gotten quite out of hand, and I started a series of conversations with this person, pointing out their fractious behavior. But nothing ever changed over the course of these conversations. They kept escalating in tone and severity, and still nothing changed. These conversations were held in private between myself, the person, and one mediator. At the time the conventional wisdom was to "not air your dirty laundry", meaning that you shouldn't manage disputes in public view.

When I finally decided that no resolution was to be had and that I needed to eject this person from the community, I did so in a blog post. That, in hindsight, was clearly a bad idea, because there were a number of people in the community who had no idea that this person was causing trouble, nor that this person was about to be ejected. This caused quite a stir, and some members felt that MusicBrainz was no longer a safe place to be, for fear of being indiscriminately tossed out at the whim of the BDFL.

Since then, I've adopted a policy of engaging with people who are causing trouble in the community on a more public level, but I'm having a hard time putting that "policy" into words that do not cause drastic responses from the community. Clearly, some public transparency is needed to keep the community informed of trouble, so that when serious action is taken, people are not surprised and do not feel that their own involvement in the community is threatened.

How should this happen? When should this happen? What sorts of bad behaviors warrant public notification? What don't?

--

--ruaok

Robert Kaye -- ***@musicbrainz.org -- http://musicbrainz.org
Nicolás Tamargo de Eguren
2015-02-26 13:12:02 UTC
Permalink
Post by Robert Kaye
How should this happen? When should this happen? What sorts of bad
behaviors warrant public notification? What don't?
My feeling about this is that one-off behaviour generally doesn't warrant
anything more than a private slap on the wrist - we all make mistakes, we
can all be angry one day and shout at someone or whatever, and that doesn't
need to lead to the user being marked as "Officially A Bad Person". (I'm
sure there are exceptions: if someone "one-off" doxxes another user, for
example, I'd be happy to take public action, although I'd like to think
that's unlikely to happen in MB.)

If bad behaviour is consistent and requires serious action, I think it
makes sense to make enough information about it all public that it doesn't
feel like "wtf just happened" when a decision is taken.
Paul Taylor
2015-02-26 16:30:55 UTC
Permalink
Post by Robert Kaye
How should this happen? When should this happen? What sorts of bad behaviors warrant public notification? What don't?
I think a few more recent examples might help a bit.

Paul
Ian McEwen
2015-02-26 23:27:45 UTC
Permalink
Post by Robert Kaye
I would like to take a minute to start a conversation about community transparency and what that means. Previously I've used the term "public shaming" and since then have gotten quite a bit of push-back from that. So, I'm officially retracting the use of that term. Mea culpa.
Back in 2006 we had one poisonous person divide the community into factions. People were either for or against this person and this person's speech continued to reinforce this split. Things had gotten quite out of hand and I started a series of conversations with this person, pointing out their fractious behavior. But, nothing ever changed over the course of these conversations. They kept elevating in tone and severity and still nothing changed. These conversations were held in private between myself, the person and one mediator. At the time the wisdom was to "not air your dirty laundry", meaning that you shouldn't manage disputes in the public view.
When finally I decided that no resolution was to be had and that I needed to eject this person from the community, I did so in a blog post. That, in hindsight, was clearly a bad idea. Because there were a number of people in the community who had no idea that this person was causing trouble nor that this person was about to be ejected from the community. This caused quite a stir in the community and some members felt that MusicBrainz was no longer a safe place to be, for fear of being indiscriminately tossed out at the whim of the BDFL.
Just for those of us who weren't around for the Great Dispute (as it's
called) to which Rob refers, the blog post in question is
http://blog.musicbrainz.org/2006/08/15/developer-changes/ --
http://wiki.musicbrainz.org/History:Great_Dispute condenses the
discussion after the fact into a single page.
http://blog.musicbrainz.org/2006/10/10/who-is-or-will-be-the-musicbrainz-server-developer/
has some more of the aftermath.

That particular dirty laundry's had a lot of years to air out at this
point so I hope we can link it directly to be clear what context we're
working in, at least to as much an extent as those of us who weren't
there can.
Post by Robert Kaye
Since then, I've adopted a policy of engaging with people who are causing trouble in the community on a more public level, but I'm having a hard time putting that "policy" into words that do not cause drastic responses from the community. Clearly, some public transparency is needed in order to keep the community informed of trouble so that when serious action is taken, people are not surprised or feel that their involvement in the community is threatened when action is taken.
One thing I notice, looking at the wiki page, is that transparency for
issues is only one of the guiding principles said to have come out of
this, and I'm not strictly sure we're good about following the other
ones -- and I think the points go together more than might be recognized.

One thing that's on my mind since I left an extensive and wordy comment
on the blog about it yesterday is some of the "Underlying Problems"
section -- clarity about rules and expectations, and the processes for
resolving conflict. One thing in particular I'd point out here is that
part of why keschte's eviction (and later, and in many ways similarly,
brianfreud's) went sour is that it became personally about Rob. Having a
single person who is the arbiter of the community, as we did and do,
means that it can easily become that person against the other, and when
it does people will choose sides. That is the value of having a
*process*, rather than a *person*, resolve disputes; or, to put it in
a context that'd fit on the Great Dispute wiki page, responsibility
needs to be distributed in this regard too, and mediators need to exist.
A thing that page doesn't seem to get into that's very important to me
(and to the project, I think, given our perpetual problems with
under-manning) is that documented and followed processes serve as
mediators too.
Post by Robert Kaye
How should this happen? When should this happen? What sorts of bad behaviors warrant public notification? What don't?
I think this may be another place where the Great Dispute didn't go deep
enough (at least, in the summary -- I'll admit I didn't read all of the
blog comments, all of the IRC discussion, or the mb-users thread. I'd
never have started this email, much less finished it, if I were as
aggravated as I'm sure that'd make me :P). There's another dimension (or
perhaps several) to public notification that I'm not sure you've
registered, which is permanence. Most public notification systems, like
reporting systems (as were the topic in the dev meeting most recently),
don't forget things -- or, only marginally better, can only be made to
forget things manually, upon deliberate appeal. For many, faced with an
option of a permanent black mark if they screw up, which can only be
removed by prostrating themselves in front of the appropriate person, or
not participating, the latter is what will get chosen.

There are two rough categories of solutions to this that I can see, which
I'll characterize as deliberate forgetfulness and retrospective
publicization. The former says we make things public rapidly, but either
completely remove, or at least hide or make more inaccessible,
complaints as they get older. A standard sort of approach to this would
be something like "recent [validated] reports", with either
no way for non-admins to view further back, or it being put behind a
link and probably deliberately marked as "this happened a long time ago,
don't judge people based on stuff from years ago". The latter says that
reports are private until they need to be made public -- history is
carefully recorded, but kept private until such a time as it's needed as
defense of some sort of action. The two aren't mutually exclusive -- a naïve
hybrid strategy would be something like: no reports are published until
you get three, but they then disappear after six months without any
more.
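That naïve hybrid rule (public after three reports, forgotten after six months without more) can be sketched as a tiny visibility check. To be clear, `is_publicly_visible` and both thresholds are names and numbers I've made up for illustration, not anything that exists in MusicBrainz:

```python
from datetime import datetime, timedelta

# Purely illustrative thresholds -- the real numbers would be up for debate:
PUBLICATION_THRESHOLD = 3                # validated reports before anything goes public
FORGETTING_WINDOW = timedelta(days=180)  # reports older than this stop counting

def is_publicly_visible(report_dates, now):
    """Return True if a user's report record should currently be public.

    report_dates: datetimes of validated reports against the user.
    The record is public only while at least PUBLICATION_THRESHOLD
    reports fall inside the forgetting window; as reports age out,
    the record goes private again on its own.
    """
    recent = [d for d in report_dates if now - d <= FORGETTING_WINDOW]
    return len(recent) >= PUBLICATION_THRESHOLD
```

The nice property of combining the two strategies this way is that both the publication and the forgetting happen automatically, with no appeal or manual cleanup step.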

The other axis I think needs considering here is vague, and I don't have
a term for it I really like, but something like "personality type".
Different people react very differently to different sorts of actions,
and a system needs to incorporate accommodations for all of these sorts
of people. Most proposals I've seen, especially from you, seem to trend
toward strategies that would be effective for people like you, and
for people who have been problems before. That is, public people,
neurotypical, relatively immune to shame, perhaps somewhat stubborn. On
the blog post, I see a comment from keschte that starts with "I'm not
about to defend myself, since that might be interpreted as me
acknowledging doing something wrong." Think about the likelihood of,
say, me or nikki saying something like that; I think you'll agree it's
pretty unlikely. I won't doubt that people with that sort of a type have
a bigger propensity for being bigger problems, but the same processes
and tactics that apply to them also need to apply to everyone to be
mediators that can be relied on within the community. Certainly in the
past I'd have attracted reports on a few occasions; I'll leave finding
examples as an exercise for the reader -- suffice it to say I'm working on
it and I think it's been a lot better lately; if those instances were a
permanent "X instances of being BAAAAAAD" on my user page forever, that
wouldn't be representative, and it'd probably not make me want to edit.
Of course, I hardly do now, but a big part of that is that I *also* want
a process that can be relied on and to which I can be sure to conform.

--

Hopefully that's not too bad of a wall of text. I have the lurking
feeling I haven't explained my thoughts well enough even so; sorry
about that.

I think my best bullet-point-able condensed thoughts are:

a.) conflict resolution cannot be just one person -- the load needs to be
spread out, both to other people and to reified and documented processes
b.) our system cannot be all public *or* all private (a point on which I
think others agree, but it bears stating again)
c.) the system we create must have intermediate steps of reining in
behavior and a built-in degree of forgiveness
d.) the system we create must apply to many types of people and ways of
interacting -- publicity and permanence can have a chilling effect for
some participants and a galvanizing "stick to your guns, even if you're
wrong" effect for others
e.) clearly specified guidelines, systems, and processes help keep disputes
from becoming personal/ad hominem/marred by accusations of partiality, and
help those who have trouble but want to stay within acceptable behavior to do so

And one more that I didn't say, which is hopefully obvious, but maybe
bears stating:
f.) we need to improve, but we aren't about to die as a community.
baby steps -- just start taking them soon. much better than trying
to construct the perfect thing to implement in two years

The best condensed idea I have is probably the hybrid system I mention
above: reports require validation and followup with the user by an
admin, a certain number need to be filed and validated before they
become public, and over time reports fade into the background/become
private again. An improvement would probably be to include specific
records of actions taken as well; "X person was reminded that opening
edits that they would know to be controversial without pointing past
voters to the new edits is against the code of conduct" is more useful
than knowing it was reported but not what was done about it.
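That condensed proposal -- validation by an admin, a publication threshold, fading over time, and a record of the action taken -- might look something like the following. Every name and number here is invented purely to illustrate the shape of the idea:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

PUBLICATION_THRESHOLD = 3         # validated reports before the record goes public
FADE_AFTER = timedelta(days=180)  # reports older than this no longer count

@dataclass
class Report:
    filed: datetime
    validated: bool = False       # an admin confirmed it and followed up with the user
    action_taken: str = ""        # e.g. "reminded of the code of conduct"

def public_record(reports, now):
    """The publicly visible part of a user's record: validated,
    unfaded reports plus the action taken on each, shown only once
    enough of them have accumulated."""
    live = [r for r in reports
            if r.validated and now - r.filed <= FADE_AFTER]
    if len(live) < PUBLICATION_THRESHOLD:
        return []                 # below the threshold, everything stays private
    return [(r.filed, r.action_taken) for r in live]
```

Recording `action_taken` alongside each report is what makes the public view say "X was reminded of the code of conduct" rather than just "X was reported".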

As far as baby steps, I think the initial version of a reports system
should be entirely private to start and as we hammer out exactly how it
should look.

(I guess I should start working on that instead of writing more in this
email, huh...)

--
Ian
_______________________________________________
MusicBrainz-devel mailing list
http://lists.musicbrainz.org/mailman/listinfo/musicbrainz-devel
Tom Crocker
2015-03-06 19:29:23 UTC
Permalink
I think Ian's comments make a lot of sense.

On distributing conflict resolution/sanctions:
I basically agree. I wonder if having juries/panels of active mb-users or
auto-editors when it's thought sanctions need to be taken would work. It's
also worth acknowledging some of the potential drawbacks of a more
procedural, distributed approach. We've just got rid of the style council
in favour of a BDFL, which has resulted in much greater throughput of style
issues, with a consistent arbiter who isn't hampered by the rules. That's
not to say we shouldn't move as Ian suggests but to help weigh the
arguments (I'm sure there are others).

On reporting:
Again, I agree. Two or three confirmed reports (perhaps more for newbies, or
if they all come along at once), and then, like a criminal record, they
disappear from public view eventually. Make it clear to the offender in each
case what the process will be and the consequences of further offences,
including that their actions will be publicised. Report the actions taken
in response. Should there be different classes of offence with different
periods of publicity (and possible sanctions)? E.g. not leaving an edit
note is pretty minor; engaging in personal attacks or creating multiple
accounts to vote is more serious.
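The classes-of-offence idea could come down to a simple lookup table tying each class to a publicity period and a sanction. The classes, periods, and sanctions below are made up to illustrate the structure, not actual proposals:

```python
from datetime import timedelta

# Hypothetical offence classes -- none of these values are proposals,
# just an illustration of scaling publicity with severity:
OFFENCE_CLASSES = {
    "missing_edit_note": {"publicity": timedelta(days=30),  "sanction": "reminder"},
    "personal_attack":   {"publicity": timedelta(days=180), "sanction": "warning, then suspension"},
    "vote_sockpuppetry": {"publicity": timedelta(days=365), "sanction": "account review"},
}

def publicity_period(offence):
    """How long a confirmed report of this offence class stays publicly visible."""
    return OFFENCE_CLASSES[offence]["publicity"]
```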

We should also make it much easier for people to report problems - to even
know this is possible. (I'm pretty sure reosarevok wrote this somewhere
recently. CoC debate?) They should also be reassured that if it's for
something minor it won't result in drastic action.

Just my thoughts
Ian McEwen
2015-03-07 12:22:43 UTC
Permalink
Post by Tom Crocker
I think Ian's comments make a lot of sense.
I basically agree. I wonder if having juries/panels of active mb-users or
auto-editors when it's thought sanctions need to be taken would work. It's
also worth acknowledging some of the potential drawbacks of a more
procedural, distributed approach. We've just got rid of the style council
in favour of a BDFL that has resulted in much greater throughput of style
issues, with a consistent arbiter who isn't hampered by the rules. That's
not to say we shouldn't move as Ian suggests but to help weigh the
arguments (I'm sure there are others).
I do see your point here, and I think reporting is the first step --
once we have a better idea of load we'll have a clearer idea of how we
need to spread the workload. But being myself, I also have to note two
things:
a.) the proposal out of which the style BDFL was eventually made (which
was, at the summit, presented by me) was more of a style council -- but
a smaller one, appointed to the task by the powers that be. This turned
into the style BDFL because it seemed like too much buildup of process
for this task. But...
b.) in practice, that's how it works, it's just invisible. reosarevok,
like any reasonable dictator, isn't making decisions without running them
by other central people in the community. And he wouldn't get far if
they disagree! This just usually isn't the case -- certainly not with
the years of built-up cruft with relatively clear solutions to be dealt
with first.

These two things in combination are why I've said a vague thing like
"admins" being in charge, but not specified exactly who that is (not
least of all since it can change!) nor suggested a more built-up process
like juries. We certainly don't want anything to turn into an
mb-style-esque morass, but I'd really like it if we were more explicit
rather than implicit about who's getting input and making decisions,
even if that boils down to there being a list of who's on these
particular email threads rather than nothing, and all of it publicly
being Rob (or reo, for style).
Post by Tom Crocker
again, I agree. two or three confirmed reports (perhaps more for newbies or
if they all come along at once) and then like a criminal record they
disappear from public eventually. Make it clear to the offender at each
case what the process will be and the consequences of further offences,
including that their actions will be publicised. Report the actions taken
in response. Should there be different classes of offence with different
periods of publicity (and possible sanctions)? e.g. not leaving an edit
note, pretty minor, engaging in personal attacks or creating multiple
accounts to vote more serious.
The main reason I hadn't included multiple levels is that most of
this stuff is pretty obvious, I guess, and in a system that's got a
prudent amount of impermanence built in, it's not going to cause harm to
have your profile say "didn't leave edit notes $n times" for a few
months. It's still a repeat offense, and I think our editors are smart
enough to discern it's less serious of a thing than your other examples.
So why make it more complex than it needs to be? :)
Post by Tom Crocker
We should also make it much easier for people to report problems - to even
know this is possible. (I'm pretty sure reosarevok wrote this somewhere
recently. CoC debate?) They should also be reassured that if it's for
something minor it won't result in drastic action.
Yeah, this is planned. Hence my quip at the end about how I should
probably get to writing it! The exact details aren't completely clear
yet, but we certainly want "report this user (privately, for now)" to be
a more prominent option than the current "maybe email support, if you
can find that email, or pester someone in IRC?". Whether that'll be a
quicker button-press to send an email to ***@musicbrainz.org and
then it gets cataloged and processed manually, or something more
complicated, we'll figure out. And probably evolve as we figure out what
we want :)