> 2.) What counts as a claim? Is 'the sky is blue' a claim, or is that common knowledge? We clearly wouldn't flag common knowledge, because using the flag when it's not necessary diminishes the usefulness of the flag, but then who decides what counts as common knowledge, particularly across different cultures and countries?
Everything, including your example. The entire point is to discourage people from making claims.
> 3.) A common metadata way of judging someone online is through their sources. Think of the many subreddits which don't allow certain 'left-wing' or 'right-wing' sources. The supporting text will be upvoted and downvoted based on the readers' opinion of the source: in the politics subreddit, a NYT 'claim support' would be upvoted while a WSJ one would be downvoted. Instead of using headlines as proxies, the URL would become the proxy: "Oh, it's X. They just always lie. Don't even need to check; downvote, nobody listening to THEM could be correct."
This is a good critique, but if you were actually implementing this, you'd also need to implement some way to verify that people are actually reading the sources (which is itself already a big, annoying problem).
Your other points are also good, but ultimately what I'm proposing would be niche due to the friction involved. For it not to be niche, you'd want to put effort into "validation": in other words, have "trusted" (another problem) members randomly select controversial claims and verify them manually.
I don't believe a fully algorithmic approach, even with "the crowd", can work.
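To make that last part concrete, here's a minimal sketch of the spot-check step I have in mind. Everything in it is hypothetical (the claim records, the controversy score, the trusted-member list); it's only meant to illustrate "randomly select controversial claims and hand them to a human", not a real implementation.

```python
import random

# Hypothetical claim records: an id, a controversy score (e.g. derived from
# vote disagreement), and the cited source URL.
claims = [
    {"id": 1, "controversy": 0.9, "source": "https://example.com/a"},
    {"id": 2, "controversy": 0.2, "source": "https://example.com/b"},
    {"id": 3, "controversy": 0.7, "source": "https://example.com/c"},
]

# Deciding who counts as "trusted" is the hand-waved problem mentioned above.
trusted_members = ["alice", "bob"]

def sample_for_review(claims, k=2, threshold=0.5):
    """Randomly pick up to k claims whose controversy exceeds a threshold."""
    controversial = [c for c in claims if c["controversy"] > threshold]
    return random.sample(controversial, min(k, len(controversial)))

# Assign each sampled claim to a randomly chosen trusted member for manual checking.
for claim in sample_for_review(claims):
    reviewer = random.choice(trusted_members)
    print(f"claim {claim['id']} ({claim['source']}) -> manual verification by {reviewer}")
```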
> Everything, including your example. The entire point is to discourage people from making claims.
Then you wouldn't have discussion, at least not of this type. It would more closely resemble a slightly faster version of academic papers, since everybody has to read everything and becomes more invested in avoiding being blasted for accidentally making an unsupported claim than in contributing.
> This is a good critique, but if you were actually implementing this, you'd also need to implement some way to verify that people are actually reading the sources (which is itself already a big, annoying problem).
Right, but when you say 'we should do thing X' and somebody says 'we can't do thing X without solving Y', you can't just reply with 'also solve thing Y'. For example: we should go to Alpha Centauri, but I think figuring out FTL travel is sort of necessary first.
> Your other points are also good, but ultimately what I'm proposing would be niche due to the friction involved. For it not to be niche, you'd want to put effort into "validation": in other words, have "trusted" (another problem) members randomly select controversial claims and verify them manually.
> I don't believe a fully algorithmic approach, even with "the crowd", can work.
Yeah, it might work in that particular use case. Perhaps as an adjunct to academic listservs.