Why not make it easier to share the truth? Part 1

It’s become increasingly obvious that, of all the wonderful things the internet has made possible, the increased ease of sharing lies is one of the downsides. Fundamentally, this phenomenon was caused in part by the ways that various interlocking systems, both natural and artificial, have evolved over the past decades to make the truth relatively more difficult to share than lies. And although the truth has also become much more accessible than before, it’s the relative strength, or more precisely the relative incentives, that really determine real-world behavior. And this is a fight that lies have been steadily winning.

There is a growing asymmetry between the ease of sharing lies and the ease of sharing truth, and that asymmetry creates ever-growing incentives to lie. Because incentives are powerful motivators of human behaviour, this propels new methods that make it even easier to share lies. Thus a self-reinforcing feedback loop is formed, one that could grow exponentially. This will eventually lead to deleterious consequences if left unchecked.

So why not realign the incentives? Why not fix this asymmetry? Why not make it easier to share the truth?

Some downsides do come to mind, though upon closer analysis they could likely be significantly lessened through smart system design to reduce the potential for abuse, clear foresight to anticipate trouble, and a willingness to fix any problems that do come up transparently and honestly.

One example: making it easier to share the truth involves determining levels of trustworthiness among individuals and organizations. The common wisdom is that this is a fraught endeavor, yet we do exactly this every day. It is difficult to imagine some fundamental limit preventing humans from scaling up and making concrete the informal trust systems we already use.

A derivative implication is that a ‘score’ would have to be assigned to quickly tabulate and summarize trustworthiness. But this is entirely a technical and logistical limitation of current paradigms. It is not written in the stars that a ‘score’ must be the one and only way to compare trustworthiness in a real system.

A ‘score’ may be used as a stopgap measure, as a good-enough solution (e.g. credit scores), or for a variety of other reasons that may or may not be valid. But there is nothing inherent about scores that makes them the only possible end state of a trustworthiness system, or more generally of any system whatsoever. Humans do not innately assign trust scores, in the everyday sense of the word, on some imaginary ranking like contest judges.

In mathematical and logical terms there may eventually have to be a ranking of some type to produce the attributes we would desire; I haven’t done the analysis yet to say either way. And philosophically, at a sufficiently advanced state of development it may indeed become ‘scoring’ of some type, if not through technical change then at least through human change, since the human tendency is to redefine words in more convenient ways as time progresses. Nonetheless, at that more advanced state of civilization there will be a greater capability to address the issues that would occur.

Of course, technical limitations due to budget, software architecture, and so on may require some kind of ‘scoring’ somewhere along the way, and even so a low score wouldn’t represent any kind of metaphysical judgement. A number doesn’t fully describe a person; many use numbers to judge pro athletes, yet pro athletes are still, usually, highly respected. Even in the worst case, where low scores do carry some judgement, it’s hard to imagine how that would be much more onerous than what already exists with low credit scores.

Thankfully there are other, and probably superior, methods of evaluation. For practical reasons trustworthiness has to be really easy to evaluate at a glance in everyday use; remember, the key idea is that sharing truth has to become easier than current methods allow, approaching, if not exceeding, the ease of spreading lies.

A better way might be through simplicity, to have two really broad categories as follows:

1. Verified

2. Not Verified

Yes, anything in the ‘Not Verified’ category could range from totally false to mostly true but difficult to verify. This is fine. A system like this is not meant to cater to everyone at the beginning. Trustworthiness in general isn’t everything, and keeping tabs on people and organizations in a database/blockchain/etc., which this will likely be at the beginning, is certainly a small subset of trustworthiness in general.

The beauty of this sort of simplicity is that one can see at a glance what’s verified without any need for a ‘score’. A real-world working example of such a concept is Twitter’s verified checkmark, which mostly accomplishes what it was originally envisioned to do.
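To make the two-category idea concrete, here is a minimal sketch of what such a binary registry might look like. All the names here (`Registry`, `trusted_news_org`, and so on) are hypothetical illustrations, not part of any real system; the point is only that a lookup with exactly two outcomes needs no numeric score at all.

```python
from enum import Enum

class Status(Enum):
    VERIFIED = "Verified"
    NOT_VERIFIED = "Not Verified"

class Registry:
    """Toy registry mapping identities to a binary trust status."""

    def __init__(self):
        self._verified = set()

    def verify(self, identity: str) -> None:
        # In a real system this step would involve actual vetting;
        # here it simply records the identity as verified.
        self._verified.add(identity)

    def status(self, identity: str) -> Status:
        # Anything not explicitly verified falls into the broad
        # "Not Verified" bucket, which spans everything from totally
        # false to mostly true but hard to verify.
        if identity in self._verified:
            return Status.VERIFIED
        return Status.NOT_VERIFIED

registry = Registry()
registry.verify("trusted_news_org")
print(registry.status("trusted_news_org").value)  # Verified
print(registry.status("random_account").value)    # Not Verified
```

The design choice worth noticing: the default is ‘Not Verified’, so the system never has to rank anyone, only to record the small set of identities that passed verification.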

Even better, the upsides of an easy-to-access, easy-to-use, and widely disseminated trustworthiness system are immense and too numerous to count.

The most obvious, and perhaps greatest, benefit is that honest behavior would be incentivized. Benefits would accrue from positive feedback loops of ever-increasing trust. In economic language, ‘positive externalities’ would be generated which, although they might not show up on a balance sheet, would greatly improve the fabric of society.

Continued in part 2

Edited from an original Facebook post, August 2020