Other accounts:

@Danterious@lemm.ee

All of my comments are licensed under the following license

https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en

  • 59 Posts
  • 308 Comments
Joined 1 year ago
Cake day: August 15th, 2023

  • Curating this volume of content is impossible, and there are legitimate dangers in giving the government too much ability to shut down free speech

    Agreed. We have already given more than enough control to the government in other areas of our lives. Now we have alternative social platforms that give us a chance to actually have more direct control over our media landscape, something that hasn’t been true in a long time.

    you have to build a society that doesn’t want to engage with bigotry, and explore and question its own assumptions (and that’s not ever a fixed state, it’s an ongoing process).

    I think this is what they were trying to get across when they mentioned media ecology. They were pointing out that the structure of where media is shared, and what its sources are, can matter more for quashing disinformation than the actual content itself.

    So when something is shared through YouTube, there are certain pressures that, over time, mold the source of information into a specific format.

    I’d say the same is true of the Fediverse as well. That’s why it’s important we get the structure here right, because it will determine what kind of platform this place turns into.

    Edit: grammar

    Anti Commercial-AI license (CC BY-NC-SA 4.0)









  • Thx for the feedback. If I were to try this, what do you think I could do to get black rather than brown dye? Would I have to use flowers that are closer to cyan, magenta, and yellow, so it’s closer to the CMYK color model, or does that not help?
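
    My rough intuition, sketched below. This is just a toy model where subtractive mixing is treated as per-channel multiplication of reflectance, and every dye colour is a made-up number rather than a measurement of any real flower:

    ```python
    # Toy model: approximate subtractive dye mixing by multiplying each dye's
    # RGB reflectance per channel. Real pigments are messier than this.
    def mix_subtractive(*dyes):
        r = g = b = 1.0
        for dr, dg, db in dyes:
            r, g, b = r * dr, g * dg, b * db
        return (round(r, 2), round(g, 2), round(b, 2))

    # Near-ideal CMY dyes: each one absorbs mostly one channel.
    cyan, magenta, yellow = (0.1, 0.9, 0.9), (0.9, 0.1, 0.9), (0.9, 0.9, 0.1)
    print(mix_subtractive(cyan, magenta, yellow))       # (0.08, 0.08, 0.08) -> near black

    # Off-hue dyes that each absorb a bit of everything come out muddy brown.
    reddish, bluish, yellowish = (0.8, 0.4, 0.3), (0.3, 0.4, 0.7), (0.8, 0.7, 0.3)
    print(mix_subtractive(reddish, bluish, yellowish))  # (0.19, 0.11, 0.06) -> brownish
    ```

    If that toy model is anywhere near right, the closer the three dyes are to true cyan, magenta, and yellow, the darker their overlap should get.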

    Also yeah idk if the contrast problem could be solved.

    Trying to do say three layers would then take six hours, so whatever you’re using as a mask has to be super absorbent or reflective to UV (think very thick black paper or tinfoil). Anything else, and your underlying layers are going to bleach somewhat.

    Do I need to do it in multiple layers? Also, I was planning on trying to use the paper as a replacement for photo paper in pinhole photography, with the pinhole instead of a mask.

    Anti Commercial-AI license (CC BY-NC-SA 4.0)






  • The point is to pick out the users that only like to pick fights or start trouble, and don’t have a lot that they do other than that, which is a significant number. You can see some of them in these comments.

    OK, then it makes sense why you chose these specific mechanics for how it works. Does that mean hostile but popular comments in the wrong communities would get a pass, though?

    For example, let’s assume that most people on Lemmy love cars (probably not the case, but let’s go with it) and there are a few commenters who consistently show up in the !fuck_cars@lemmy.ml or !fuckcars@lemmy.world communities to explain why everyone there is wrong. Or vice versa.

    Since most people scroll All, it could be the case that those comments get elevated while comments from the people the community is actually meant for get downvoted.

    I mean, it’s not that big of a deal now because most values are shared across Lemmy, but I can already see that starting to shift a bit.

    I was reminded of this meme a bit

    Initially, I was looking at the bot as its own entity with its own opinions, but I realized that it’s not doing anything more than detecting the will of the community with as good a fidelity as I can achieve.

    Yeah, that’s the main benefit I see coming from this bot. Especially if its output is just given as suggestions, humans are still making most of the judgement calls, and the way it reaches decisions is transparent (like the appeal community you suggested).

    I still think that, instead of the bot treating all of Lemmy as one community, it would be better if moderators could provide focus for it, because there are differences in values between instances and communities that I think should be reflected in the moderation decisions that get made.
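
    To make that concrete, here is roughly the shape I have in mind for per-community focus. This is an invented sketch, not how your bot actually works, and the weighting scheme, data shapes, and names are all made up:

    ```python
    # Sketch: weight each vote by how active the voter is in that community,
    # so regulars count for more than drive-by votes from the All feed.
    def community_weighted_score(votes, activity, community):
        """votes: list of (voter, +1/-1); activity: {voter: {community: contribution count}}."""
        score = 0.0
        for voter, value in votes:
            local = activity.get(voter, {}).get(community, 0)
            weight = min(1.0, local / 10)          # full weight after ~10 local contributions
            score += value * (0.1 + 0.9 * weight)  # outside votes still count a little
        return score

    # Toy case: a hostile-but-popular comment in a niche community.
    votes = [("regular1", -1), ("regular2", -1),
             ("allfeed1", +1), ("allfeed2", +1), ("allfeed3", +1)]
    activity = {"regular1": {"fuck_cars": 50}, "regular2": {"fuck_cars": 20}}
    print(community_weighted_score(votes, activity, "fuck_cars"))  # -1.7: the regulars win out
    ```

    By raw votes that comment looks popular (+1 overall), but weighted by local activity it comes out clearly negative, which is the kind of signal I would want the bot to act on.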

    However, if you aren’t planning on developing that side of it further, I think you could still let the other moderators who want to test the bot see notifications from it any time it has a suggestion for a community user ban (edit: for clarification), as a test run. Good luck.

    Anti Commercial-AI license (CC BY-NC-SA 4.0)


  • But in general, one reason I really like the idea is that it’s getting away from one individual making decisions about what is and isn’t toxic and outsourcing it more to the community at large and how they feel about it, which feels more fair.

    Yeah, that does sound useful; it’s just that there are some communities where it isn’t necessarily clear who is a jerk and who has a controversial minority opinion. For example, how do you think the bot would’ve handled the vegan community debacle that happened? There were a lot of trusted users who were not necessarily on the side of the vegans, and it could’ve pushed those communities back toward whatever the wider user base thinks is good and bad.

    I think giving people some insight into how it works, and ability to play with the settings, so to speak, so they feel confident that it’s on their side instead of being a black box, is a really good idea. I tried some things along those lines, but I didn’t get very far along.

    If you want, I can help with that. Like you said, it sounds like a good way of decentralizing moderation, so that we have fewer problems with power-tripping moderators and more transparent decisions. I just want communities to be able to keep their specific values while easing their moderation burden.
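
    For example, the settings could be exposed as something like a per-community config. All of the field names and thresholds below are invented, just to show the shape I mean:

    ```python
    # Invented per-community settings so each community keeps its own values.
    COMMUNITY_SETTINGS = {
        "vegan@lemmy.world": {
            "flag_threshold": -0.8,   # weighted score below which a user gets flagged
            "min_local_votes": 15,    # ignore thin evidence from outside voters
            "action": "notify_mods",  # suggest only; humans make the final call
        },
        "fuck_cars@lemmy.ml": {
            "flag_threshold": -0.5,
            "min_local_votes": 10,
            "action": "notify_mods",
        },
    }

    def should_flag(community, weighted_score, local_vote_count):
        """Return True if the bot should send the community's mods a suggestion."""
        cfg = COMMUNITY_SETTINGS.get(community)
        if cfg is None or local_vote_count < cfg["min_local_votes"]:
            return False
        return weighted_score <= cfg["flag_threshold"]

    print(should_flag("vegan@lemmy.world", -1.2, 20))  # True: enough local signal to suggest
    print(should_flag("vegan@lemmy.world", -1.2, 5))   # False: too few local votes
    ```

    Mods tweaking those numbers themselves, rather than inheriting one Lemmy-wide default, is what I mean by keeping each community’s values.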

    Anti Commercial-AI license (CC BY-NC-SA 4.0)