Generative AI policy: comments, chat, etc. #160
Replies: 5 comments · 3 replies
-
+1 I haven't noticed much of this in issues or PRs on repositories I maintain, though I've seen it show up in other places -- e.g. email and, to a lesser extent, Discord.
-
+1 I think an easy line to draw here is issues/PRs/comments. If you're generating that text rather than writing it yourself, it's not in any way useful. GitHub is meant for high-density prose and does not in any way benefit from AI inflation. Fully synthetic profiles are effectively a subset of otherwise-real people who for some reason generate their human engagements, and both should be verboten. It's just not worth the effort for maintainers. Code itself is more nuanced and harder to speak to, but since you explicitly set it aside, I think we're on very firm footing.
As an aside, it can be hard for maintainers to tell generated from organic content. That's sort of the point. In such cases the presumption should be innocence and good faith. If the content is indistinguishable from real human engagement, then it's probably not slop anyway, so no harm done.
-
Yep, agreed. I like to assume good intent on the part of Group 2, but being clear and firm that that isn't the way we do business seems appropriate.
-
An emerging pattern I'm seeing is slop coming from GitHub profiles with LinkedIns that 404. In over a decade, this may be the most utility I've gotten from LinkedIn.
-
Btw, another aspect of this that I forgot about is that generative AI can be used to bridge some linguistic gaps in these areas. That's probably a point worth considering. In general, I would say that direct translation use cases (understanding that this is an impossible line to rigorously and objectively define) feel valid and acceptable, while the broader textual inflation is unwelcome.
-
We are seeing a surge in generative AI comments on issues across our projects. We can and should debate the broader role of Generative AI in Typelevel, but I'm focusing on the slop comments. They come from two groups:
1. Fully generative accounts.
2. New contributors, genuinely trying to engage.
Group 1 is pure nuisance. There are no feelings to hurt, no opportunities squandered, and the banhammer makes a satisfying clang against their horrid bot torsos.
We don't want to be discouraging or unwelcoming to Group 2, but it's labor-intensive to click through profiles, check with other maintainers, and perform other diligence to distinguish the groups. Also, I find it quite rude to expect us to read what nobody bothered to write. The effort put into this group can be commensurate with the effort put in by this group.
It would be helpful for Typelevel to develop a Generative AI policy¹. Maintainers could moderate these issues with a reference to the policy, and well-intentioned members of Group 2 can engage again if they're ready to attempt a course correction.
Footnotes
1. Generative AI policy: things that go into git #161