Traditionally, the most dangerous attacks on reputation-based decentralized autonomous platforms have been Sybil attacks, tyranny of the majority, and the 51% attack. In this section we discuss how Semada inhibits these attacks.
Simple Sybil attacks consist of a single anonymous user opening any number of expert accounts and using worthless automated work to profit unjustly from the system. Semada is Sybil attack resistant because all power in Semada is weighted by reputation: the owner of a single account holding 1000 tokens has at least as much power as the owner of 1000 accounts each holding one token.
Reputation in Semada confers three powers: being chosen as an expert for off-platform work, voting on posts, and earning salaries. Each of these is either unaffected or weakened by spreading reputation across accounts. Thus, simple Sybil attacks are prevented.
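The Sybil resistance argument can be made concrete with a minimal sketch (the function name and account structure are illustrative assumptions, not Semada's actual implementation): because influence is the sum of reputation held, splitting tokens across many accounts gains the attacker nothing.

```python
# Illustrative sketch: in a reputation-weighted system, influence depends
# only on total tokens held, not on the number of accounts holding them.

def voting_power(accounts: dict) -> int:
    """Total influence is the sum of reputation tokens across accounts."""
    return sum(accounts.values())

honest = {"alice": 1000}                        # one account, 1000 tokens
sybil = {f"sock_{i}": 1 for i in range(1000)}   # 1000 accounts, 1 token each

# Splitting reputation across 1000 sock-puppet accounts yields no advantage.
assert voting_power(honest) == voting_power(sybil) == 1000
```

The same weighting applies to expert selection and salaries, so a Sybil attacker pays the overhead of managing many accounts for zero gain in power.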
Because fees are paid out indirectly as salaries, the platform encourages the development of reputation. Because all newly minted sem tokens enter the platform as 50/50 up- and downvote bets, outside power has little opportunity to overwhelm the system: established experts always retain complete control over whether new reputation is granted to a user. Reputation therefore vests only in those who can prove to existing experts that their contributions improve the platform.
Another significant threat to fair and just decision-making in an anonymous and democratic reputational system is the potential for tyranny of the majority. Alexis de Tocqueville popularized the term tyranny of the majority to describe the corruption of a system in which decisions are made with greater concern for what is perceived as popular than for what is right and just.
Similarly, the tragedy of the commons occurs in any system that lacks a well-designed incentive structure. In blockchain proof-of-stake design this is called the “nothing at stake” problem: unregulated systems lead pseudonymous users to abuse the system.
As an example of both the tyranny of the majority and the tragedy of the commons, spend an afternoon reading YouTube video comments. As a second example, Reddit’s upvote system currently imposes no significant punishment for voting randomly, which compromises the meaning of an upvote.
There are three ways we prevent tyranny of the majority in Semada. First, in a weighted voting system, greater power generally accrues to those with greater expertise. Even in the worst case, voters concerned with following the powerful will vote in line with powerful experts, who generally make valuable decisions and favor precedent and predictability.
The second mechanism that counters the tyranny of the majority is a time delay in announcing the results of upvotes, so experts cannot see the majority position before posting their own stake.
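One standard way to implement such a delay is a commit-reveal scheme; the sketch below is an assumption about how this could be built, not a description of Semada's actual protocol. Each expert first publishes only a hash of their vote plus a secret salt, and reveals the vote and salt only after the voting window closes.

```python
import hashlib
import secrets

def commit(vote: str, salt: bytes) -> str:
    """During the voting window, publish only this hash, which hides the
    vote while binding the voter to it."""
    return hashlib.sha256(salt + vote.encode()).hexdigest()

# Voting phase: the expert commits without revealing their position.
salt = secrets.token_bytes(16)
commitment = commit("upvote", salt)

# Reveal phase: the expert publishes (vote, salt); anyone can verify the
# revealed vote matches the earlier commitment.
assert commit("upvote", salt) == commitment
assert commit("downvote", salt) != commitment
```

Because the commitment is binding, an expert cannot wait to see which way the majority leans and then change their stake, which is precisely the behavior the time delay is meant to prevent.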
Finally, any expertise tag which is corrupted will not attract fees, and it is easy to create a competing expertise tag to replace it.
One way to game the system is for a group of malicious actors holding 51% of the tokens to collude and vote against common sense, capturing a large share of the community’s resources. In an open system, however, such an attack would quickly become apparent. Trust in the platform would erode, use and fees would diminish, and the 51% share of the reputational salary would lose value. This tactic therefore has the potential to destroy the expertise tag, but not to enrich the attackers.
However, an arbitrage opportunity could arise if the experts do not police their expertise tag. If a significant percentage of the (technically fungible) tokens were put on sale on a token exchange, a malicious actor would have the opportunity to 1) buy 51% of the tokens, 2) override the bench by voting against common sense in a popular betting pool, taking a significant number of tokens, and 3) sell the tokens quickly, making a profit before they lose their value due to the attack.
This attack can certainly occur in theory, but the system naturally guards against it. First, it is difficult to sell off 51% of anything in a short period of time. In an open system, the attacker is unlikely to be able to sell all 51% of the tokens immediately after destroying trust in the system by voting against common sense. In this scenario, the 51% stake will very likely lose more value than the attack gains.
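A back-of-the-envelope model makes this trade-off concrete. All parameters below are hypothetical and chosen for intuition only; the function and its inputs are not drawn from Semada's specification.

```python
# Hypothetical profit model for the 51% buy-and-dump arbitrage.

def attack_profit(total_value: float, pool_take: float,
                  price_collapse: float) -> float:
    """Net profit of buying 51% of a tag's tokens, looting one betting
    pool, then dumping the tokens after trust (and price) collapses."""
    buy_cost = 0.51 * total_value
    resale = buy_cost * (1 - price_collapse)  # value left after the dump
    return pool_take + resale - buy_cost

# Example: if the attack wipes out 80% of token value, the attacker breaks
# even only when a single pool is worth 40.8% of the tag's entire value.
assert abs(attack_profit(100.0, 40.8, 0.80)) < 1e-9
assert attack_profit(100.0, 10.0, 0.80) < 0  # typical pools: a net loss
```

Under these assumptions, unless a single betting pool is worth a large fraction of the whole tag, the collapse in resale value dominates and the attack loses money.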
Secondly, it is unlikely that a malicious group will ever gain 51% of the tokens. Every token ever created is initially awarded to someone who has demonstrated expertise and investment in the platform. While such tokens are fungible and will sometimes be sold, healthy expertise tags will not leave a significant number of tokens sitting unused on the market. (A healthy expertise tag is defined as one that is generating fees.)
Tokens are more valuable to users who truly have the expertise to do the relevant work and policing than to those who wish to hold them as a passive investment. In a healthy expertise tag, tokens will continually be used to evaluate posts, earning more sem tokens and the reputational salary.
Therefore the total of all tokens not in use by actual experts is unlikely ever to reach 51% in a healthy expertise tag, since new tokens are continually minted and those who earn them use them more than they sell them.
A malicious actor could instead attempt to accumulate 51% of the tokens over a long period of time, as they become available. There are two problems with this strategy. First, because new tokens are minted and used continuously, the accumulated share will likely never reach 51% in a healthy expertise tag, even if tokens are continually added to exchanges. Secondly, if the tag is not healthy enough that more newly minted tokens are used than are sold on exchanges, buying tokens on an exchange over a long period will be very expensive because of the aforementioned natural inflation. So the arbitrage opportunity does not exist unless a single transaction is worth much more than the entire reputational value of the expertise tag. And a malicious actor who simply wishes to destroy a tag must spend much more than that entire value. Healthy expertise tags are secure because it is easier to profit from them by improving them than by harming them.
The other way to gain 51% of the tokens is to pay fees into the system and earn new tokens by winning betting pools. A reasonable estimate of the cost of this attack (even when the bench makes no effort to police it) is six times the value of the entire supply of sem tokens in the expertise tag, when the tag is healthy, i.e., earning fees. At the same time, this attack would reward all the good-faith actors in the system beyond what they invested.
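The scale of this attack can be illustrated with the minting arithmetic alone (this sketch does not derive the six-times estimate above, which depends on fee and pool mechanics not restated here; the function is a hypothetical illustration). To hold 51% of a tag, the attacker must win newly minted tokens N satisfying N / (S + N) ≥ 0.51, where S is the existing supply, so N ≥ (0.51 / 0.49) · S.

```python
# How many newly minted tokens must an attacker win, via fees and betting
# pools, to hold 51% of a tag whose current supply is `supply`?
# Solving n / (supply + n) >= target gives n >= target / (1 - target) * supply.

def tokens_needed(supply: float, target: float = 0.51) -> float:
    return target / (1 - target) * supply

# Against a 10,000-token tag, the attacker must win about 10,408 fresh
# tokens, i.e. more than the tag's entire existing supply.
assert round(tokens_needed(10_000)) == 10_408
```

And this is a lower bound: honest experts keep earning newly minted tokens throughout the attempt, so the attacker is chasing a moving target while paying fees that enrich the very bench they are trying to outvote.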
In either case, attempting the attack without actually acquiring 51% is extremely risky: all tokens bet against common sense are likely to be lost, and while the attacker accumulates, the natural inflation of the token steadily erodes the value of their holdings.
Most importantly, a key benefit of the system is that as it is used, it becomes measurably more secure.