Just chilling and sharing a stream of thought…
So how would a credibility system work and be implemented? What I envision is something similar to the upvotes…
You have a credibility score; it starts at 0, neutral. You post something, and people don't vote on whether they like it; the votes are for "good faith".
Good faith means:
- You posted according to the rules and started a discussion
- You argued in good faith and can disengage respectfully from opposing opinions
- You clarified a topic for someone
- If someone has a polar-opposite opinion to yours and is being downvoted because people don't understand the system
- Etc.
It is tied to the user, not the post.
Good, bad, indifferent…?
Perfect the system
People will vote for what they like, not what’s good faith.
I love the concept, but the ugly reality is that anyone can spin up an instance and pour in an arbitrary number of votes to themselves or anyone else. I think the credibility score would give people a false confidence and honestly do more harm than good unfortunately
Your attempt at convincing people how to use a button will fail. They will do what they want. Technical solutions for human behaviors can be difficult because humans do not generally like being told what to do.
mbin already has ‘reputation’ exposed
There was a great DefCon talk recently about how a guy gained credibility on the dark web over the course of a few years and it was easy to do by just being helpful to others. People tend to trust those who are helpful.
After a while, he got busted and the feds took over his Tor identity and used his credibility to bust some criminals on the dark web.
I recommend being suspicious of everyone you interact with online.
Exactly the same way they do it IRL and have forever. Bust someone trusted, make them wear a wire, bust someone higher up that way.
I think we should take another look at Slashdot’s moderation and meta-moderation system:
- Users couldn’t just vote on everything; “modpoints” (upvotes/downvotes, but also with a reason attached) were a limited resource.
- Comment scores were bounded to [-1, 5] instead of being unbounded.
- Most importantly, what wasn’t limited was that users had the opportunity to “meta-moderate:” they would be shown a set of moderation actions and be asked to give a 👍 or 👎 based on whether they agreed with the modpoint usage or not.
- Users would be awarded modpoints based on their karma (how their own comments had been modded by others) and their judgement (whether people agreed or not with their modpoint usage).
Admittedly the exact formula Slashdot used for awarding modpoints was secret to prevent people from gaming it, which doesn’t exactly work for Lemmy, but the point is that I think the idea of using more than one kind of signal to determine reputation is a good one.
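To make that concrete, here's a toy sketch in Python of how modpoints might be awarded from karma plus meta-moderation agreement. The function name, weights, and caps are all invented, since the real Slashdot formula was never published:

```python
# Hypothetical sketch of Slashdot-style modpoint awarding.
# The real formula was secret; the weights here are invented.

def award_modpoints(karma: int, metamod_agree: int, metamod_total: int,
                    max_points: int = 5) -> int:
    """Grant a limited pool of modpoints based on karma and how often
    meta-moderators agreed with this user's past vote usage."""
    if metamod_total == 0:
        judgement = 0.5  # no meta-mod history: assume neutral judgement
    else:
        judgement = metamod_agree / metamod_total
    # Karma contributes up to half the pool, judgement the other half.
    karma_factor = min(max(karma, 0), 50) / 50
    return round(max_points * (0.5 * karma_factor + 0.5 * judgement))

# Example: decent karma, meta-moderators agreed 8 of 10 times
print(award_modpoints(karma=30, metamod_agree=8, metamod_total=10))  # → 4
```

The point of combining two signals is visible even in a toy like this: high karma alone can't buy a full pool of modpoints if your past votes keep getting overruled by meta-moderators.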
Most people (including myself) would like to agree with you on building some sort of system to create credibility or honesty or reliability among people on a social media platform. I think the majority of people that use any social media (including Lemmy) would probably agree and more than likely would participate in it.
Unfortunately, it only takes a small group of people to upset the system, game the system, play with the system, or create situations or systems of their own to manipulate everything … either to fight against others, or to generate some sort of power or control of their own. All it would take is this small group to completely change everything and make everything difficult and non-functional.
It's a lot like the democratic system of government. When you think about it, the majority of everyone would like to participate in it and make it work … unfortunately, it's only a small group of powerful individuals who have gamed the system to give themselves and their friends power over everyone else.
I didn’t read your post, I just downvoted because I don’t like your username. Whatcha’ going to do about it?
(Jk, I picked the instance I joined based on the fact that it doesn’t do downvotes. I think downvotes drive perverse incentives)
( thanks! do you happen to know other instances that have downvotes disabled? up until now, i just knew of BeeHaw. Choosing between an upvote or engaging in conversation is more enticing when you can't just give a thumbs down and leave the room )
I don’t. And because it’s an admin setting that can be toggled easily, any websearch you would do to find other people talking about instances that don’t downvote should probably be double-checked with the instance itself. Even mine had a brief discussion about changing course and enabling downvotes.
There’s a GitHub project to compare instances. I don’t think it includes downvote setting, but maybe the other factors will at least help you narrow down. https://github.com/maltfield/awesome-lemmy-instances?tab=readme-ov-file
I award you 2 MeowMeowBeenz
The issue is that people will use votes to signal whether they like the thing, not whether it's in good faith, even if you tell them not to: on purpose, to harm opposing views, and unintentionally, because they're more likely to notice a bad-faith tactic coming from someone who disagrees with them than from someone who agrees.
While I would never support it, the main way to improve online discussion is by removing anonymity. Allow me to go back a couple decades and point to John Gabriel's Greater Internet Fuckwad Theory: people with a reasonable expectation of anonymity turn into complete assholes. The common solution to this is linking accounts to a real identity in some way, such that online actions have negative consequences for the person taking them. Google famously tried this by forcing people to use their real name on accounts, and it was a privacy nightmare.

Ultimately though, it's the only functional solution. If anti-social actions do not have negative social consequences, then there is no disincentive against taking those actions, and people can just keep spinning up new accounts and repeating the same anti-social actions. This can also be automated, resulting in the bot farms which troll and brigade online forums.

On the privacy-nightmare side of the coin, it means it's much easier to target people for legitimate, though unpopular, opinions. There are some "in the middle" options, which can make the cost of creating accounts somewhat higher and slower, but which don't expose peoples' real identities in quite the same way. But every system has its pros and cons, and the linking of identities to accounts is no exception.
Voting systems and the like will always be a kludge, which is easy to work around. Any attempt to predicate the voting on trusting users to "do the right thing" is doomed to fail. People suck; they will do what they want and ignore the rules when they feel they are justified in doing so. Or some people will do it just to be dicks. At the same time, voting also promotes herding and bubbles. If everyone in a community chooses to downvote puppies and upvote cats, eventually the puppy people will be drowned out and forced to go off and found their own community which does the opposite. And those communities, both now stuck in a bias-reinforcing echo chamber, will continue to drift further apart and possibly radicalize against each other. This isn't even limited to online discussions. People often choose their meat-space friends based on similar beliefs, which leads to people living in bubbles which may not be representative of the wider world.
Despite the limitations of the kludge, I do think voting systems are the best we’re going to get. I’d agree with @grue that the Slashdot system had a lot of merit. Allowing the community to both vote on articles/comments and then later have those votes voted on by a random selection of users, seems like a reasonable way to try to enforce some of the “good faith” voting you’re looking for. Though, even that will likely get gamed and lead to herding. It’s also a lot more cumbersome and relies on the user community taking on a greater role in maintaining the community. But, as I have implied, I don’t think there is a “good” solution, only a lot of “less bad” ones.
I think mob rule as a moderation system is bad, and having a few power-users in charge is not the worst answer to that.
In my head: you'd have small webs of trust (I can vouch for you, you can vouch for your friend, your friend can vouch for me, so I must be somewhat trustworthy), and these webs would have some kind of voting power over flagged comments. Of course, that can be gamed…
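As a toy sketch of that vouching chain (all names invented), trust could be modeled as a directed graph of "X vouches for Y" edges, checked with a bounded breadth-first search so trust fades past a few hops:

```python
# Toy web-of-trust sketch: a user is "somewhat trustworthy" to you
# if reachable through a short chain of vouches.

from collections import deque

def is_vouched(graph: dict, start: str, target: str, max_hops: int = 3) -> bool:
    """BFS over vouch edges; trust decays to nothing past max_hops."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        user, hops = queue.popleft()
        if user == target:
            return True
        if hops == max_hops:
            continue  # too far removed to count as vouched
        for friend in graph.get(user, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return False

vouches = {"me": ["you"], "you": ["your_friend"], "your_friend": ["me"]}
print(is_vouched(vouches, "me", "your_friend"))  # → True (two hops)
print(is_vouched(vouches, "me", "stranger"))     # → False
```

The gaming risk mentioned above shows up immediately: a ring of sockpuppets vouching for each other looks identical to a ring of friends, which is why real web-of-trust systems weight edges rather than treating every vouch equally.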
Are you thinking of something like Stack Overflow’s reputation system? See https://stackoverflow.com/help/whats-reputation for a basic overview. See https://stackoverflow.com/help/privileges for some examples of privileges unlocked by hitting a particular reputation level.
That system is better optimized for reputation than the threaded discussions that we participate in here, but it has its own problems. However, we could at minimum learn from the things that it does right:
- You need site (or community) staff, who are not constrained by reputation limits, to police the system
- Upvoting is disabled until you have at least a little reputation
- Downvoting is disabled until you have a decent amount of reputation and costs you reputation
- Upvotes grant more reputation than downvotes take away
- Voting fraud is a bannable offense and there are methods in place to detect it
- The system is designed to discourage reuse of content
- Not all activities can be upvoted or downvoted. For example, commenting on SO requires a minimum amount of reputation, but unless comments are reported as spam, offensive, fraudulent, etc. (which also requires a minimum reputation), they don't impact your reputation, even if upvoted.
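As a rough sketch of the privilege-gating part of that list, reputation thresholds could be a simple lookup table. The numbers below are illustrative approximations, not necessarily SO's current values (the linked /help/privileges page has the real ones):

```python
# Sketch of reputation-gated privileges, Stack Overflow style.
# Thresholds and reward sizes are illustrative, not authoritative.

PRIVILEGES = {
    "upvote": 15,     # upvoting disabled until a little reputation
    "comment": 50,    # commenting requires a minimum reputation
    "downvote": 125,  # downvoting requires more, and costs the voter
}
UPVOTE_GAIN = 10       # upvotes grant more reputation...
DOWNVOTE_PENALTY = 2   # ...than downvotes take away
DOWNVOTE_COST = 1      # the downvoter pays a small price too

def can(user_rep: int, action: str) -> bool:
    """True if the user's reputation unlocks the given privilege."""
    return user_rep >= PRIVILEGES[action]

print(can(60, "comment"))   # → True
print(can(60, "downvote"))  # → False
```

The asymmetry between `UPVOTE_GAIN` and `DOWNVOTE_PENALTY`, plus the cost to the downvoter, is what nudges the system toward rewarding contribution rather than punishing disagreement.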
If you wanted to have upvoted and downvoted discourse, you could also allow people to comment on a given piece of discourse without their comment itself being part of the discourse. For example, someone might just want to say “I’m lost, can someone explain this to me?” “Nice hat,” “Where did you get that?” or something entirely off topic that they thought about in response to a topic.
You could also limit the total amount of reputation a person can bestow upon another person, and maybe increase that limit as their reputation increases. Alternatively or additionally, you could enable high rep users to grant more reputation with their upvotes (either every time or occasionally) or to transfer a portion of their rep to a user who made a comment they really liked. It makes sense that Joe Schmo endorsing me doesn’t mean much, but King Joe’s endorsement is a much bigger deal.
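A minimal sketch of that capped, rep-weighted endorsement idea, with invented log-scaled weights and caps:

```python
# Sketch of capped, reputation-weighted endorsements.
# The scaling functions below are made up for illustration.

import math

def endorsement_value(voter_rep: int) -> int:
    """Higher-rep voters grant more rep per upvote (log-scaled,
    so King Joe outweighs Joe Schmo without dwarfing him)."""
    return 1 + int(math.log10(max(voter_rep, 1)))

def endorsement_cap(voter_rep: int) -> int:
    """Total rep one user may ever bestow on another;
    the cap grows as the endorser's own reputation grows."""
    return 10 + voter_rep // 10

print(endorsement_value(5))        # Joe Schmo: → 1 per upvote
print(endorsement_value(100_000))  # King Joe:  → 6 per upvote
```

Log scaling is one hedge against the rich-get-richer spiral: a user with 100,000 rep endorses six times as strongly as a newcomer, not 20,000 times.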
Reputation also makes sense to be topic specific. I could be an expert on software development but be completely misinformed about hedgehogs, but think that I’m an expert. If I have a high reputation from software development discussions, it would be misleading when I start telling someone about hedgehogs diets.
Yet another thing to consider, especially if you're federating, is server-specific reputations with overlapping topics. Assuming you allow users to say "Don't show this / any of my content there at all" (e.g., if you know something is against the rules over there or is likely to be downvoted, but in your community it's generally upvoted), there isn't much reason not to allow a discussion to appear in two or more servers. Then users could accrue reputation on that topic from users of both servers. The staff, and later the high-reputation users, of one server could handle moderation of topics differently than the moderators of another, by design. This could solve disagreements about moderation style, voting etiquette, etc., by giving users alternatives to choose from.
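One way to model that per-topic, per-server reputation, purely as a sketch with invented names and instance domains, is a mapping keyed by (server, topic):

```python
# Sketch of topic-specific, server-specific reputation.
# Server names and numbers are invented examples.

from collections import defaultdict

class Reputation:
    def __init__(self):
        self.scores = defaultdict(int)  # (server, topic) -> rep

    def add(self, server: str, topic: str, delta: int) -> None:
        self.scores[(server, topic)] += delta

    def topic_rep(self, topic: str) -> int:
        """Rep on one topic, accrued across all federated servers."""
        return sum(v for (s, t), v in self.scores.items() if t == topic)

rep = Reputation()
rep.add("lemmy.world", "software", 40)
rep.add("beehaw.org", "software", 15)
rep.add("lemmy.world", "hedgehogs", -3)
print(rep.topic_rep("software"))   # → 55
print(rep.topic_rep("hedgehogs"))  # → -3
```

Keeping the scores keyed by topic means the software expert's 55 points lend no weight at all to their hedgehog-diet claims, which is exactly the misleading halo effect described above.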
Is this for an online community like Lemmy, or more oriented towards fixing the credit institutions?
in any case, a credibility metric would soon turn into a goal to achieve ^(karmafarming says what?)^
A metric ceases to be useful when it becomes a goal.
You know that the current voting system isn’t like/dislike, right? Or it’s not supposed to be. Your proposed system would have the same problem: users would use it as like / dislike buttons.
I have an idea. Have every article or comment posted by a user scanned by an LLM. Prompt the LLM to identify logical fallacies in the post or comment. Post each user's logical-fallacy count on a public scoreboard hosted on each federated instance. Then, ban the top 10% of scoring users each quarter who have a fallacy ratio surpassing some reasonable good-faith objective.
Pros: Everyone is judged by the same impassive standard.
Cons: 1) A fucking LLM has to burn coal for every stupid post we make. 2) LLM prompt injection/hijacking vulnerability.
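For what it's worth, the ban-selection step (ignoring the LLM scan itself, and assuming a per-user fallacy count is somehow available) could look something like this; the threshold and all names are made up:

```python
# Sketch of the quarterly ban selection described above.
# Assumes an upstream scan already produced per-user fallacy counts.

def quarterly_bans(stats: dict, ratio_threshold: float = 0.25) -> list:
    """stats maps user -> (fallacy_count, post_count).
    Ban the top 10% of users by fallacy ratio, restricted to those
    whose ratio exceeds the good-faith threshold."""
    ratios = {u: f / p for u, (f, p) in stats.items() if p > 0}
    offenders = sorted((u for u, r in ratios.items() if r > ratio_threshold),
                       key=lambda u: ratios[u], reverse=True)
    cutoff = max(1, len(stats) // 10) if offenders else 0
    return offenders[:cutoff]

stats = {f"user{i}": (1, 20) for i in range(8)}  # 5% fallacy ratio: fine
stats["troll"] = (15, 20)    # 75% fallacy ratio
stats["sealion"] = (9, 20)   # 45% fallacy ratio
print(quarterly_bans(stats))  # → ['troll'] (top 10% of 10 users = 1 ban)
```

Note that the "top 10%" rule guarantees bans even in a healthy quarter unless the threshold filter empties the pool first, which is exactly how a metric becomes a goal.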