
 
The solution proposed here aims to identify the common interest of the attackers through a set of analytics queries. Running these checks before blocking the target profile can protect the victims of coalition attacks. Of course, these checks should not replace human intervention: the SN moderators must still evaluate the credibility of a set of abuse reports. We also propose a reputation and penalty system for the attackers: if the SN moderator detects a coalition attack through our solution, a warning message can be sent to the suspected attackers to caution them against submitting fake abuse reports.
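As an illustration of such an analytics query, the following minimal Python sketch flags the groups whose members account for most of the accounts reporting a single target profile; the function name, the 0.7 threshold and the data layout are illustrative assumptions rather than part of the system described above.

def coalition_suspicion(reporters, group_members, threshold=0.7):
    # reporters: ids of the accounts that reported the same target profile.
    # group_members: maps a group id to the set of its member ids.
    # Returns the groups whose members account for at least `threshold`
    # of the reporters, i.e. candidate coalitions sharing a common interest.
    suspicious = {}
    for group_id, members in group_members.items():
        overlap = sum(1 for reporter in reporters if reporter in members)
        if reporters and overlap / len(reporters) >= threshold:
            suspicious[group_id] = overlap
    return suspicious

# Example: four of the five reporters belong to the same group, so the
# automatic block would be held back and the case escalated to a moderator.
reporters = ["a", "b", "c", "d", "e"]
groups = {"g1": {"a", "b", "c", "d"}, "g2": {"e", "x"}}
print(coalition_suspicion(reporters, groups))   # {'g1': 4}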
5.2  Preventing Automated DoS Attack 
It is clear that the automated DoS attack is more powerful and damaging than the coalition attack: a few seconds are sufficient to block any profile. Its countermeasure, however, is less complex than the one proposed for the coalition attack. Traditional client-side challenge-response tests (Mirkovic, 2004), such as CAPTCHAs (Von Ahn, 2003), are in our opinion the most appropriate solution. These tests must be applied both during the creation of new SN accounts and when an abuse report is submitted. The challenges will limit the creation of fake accounts and the generation of abuse reports without human intervention.
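As a minimal sketch of how such a challenge could gate report submission, the following Python fragment accepts an abuse report only when the client has solved a previously issued challenge; the function names and in-memory stores are illustrative assumptions, since a real SN would rely on its own session and CAPTCHA infrastructure.

import secrets

# Hypothetical in-memory stores; a production SN would use its own
# session store and CAPTCHA service instead.
_pending_challenges = {}   # session id -> expected challenge answer
_accepted_reports = []     # (reporter id, target id) pairs

def issue_challenge(session_id):
    # Issued when the report form is rendered; the answer would normally
    # be presented to the user as a distorted image, not returned in clear.
    answer = secrets.token_hex(3)
    _pending_challenges[session_id] = answer
    return answer

def submit_abuse_report(session_id, reporter_id, target_id, response):
    # Accept the report only if the client solved the challenge, which
    # blocks scripted report generation without human intervention.
    expected = _pending_challenges.pop(session_id, None)
    if expected is None or response != expected:
        return False
    _accepted_reports.append((reporter_id, target_id))
    return True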
6 CONCLUSIONS 
In this paper we have formally identified a serious vulnerability in the abuse reporting systems currently deployed in most SN websites. We first observed the problem in the real world, where ideological groups of users in different SNs are constantly setting up coalition attacks based on a particular misuse of the abuse reporting systems in order to block innocent users that they judge to be ideological enemies. We provided a technical analysis of this attack and then proposed to automate it in order to exploit the vulnerability through a DoS attack. We developed a proof of concept exploiting this vulnerability on Facebook, an SN website that is also considered one of the most secure. Although incomplete, the first results obtained clearly demonstrate the damage that such DoS tools can cause, especially if the attack is upgraded from a DoS to a DDoS, where several instances execute the attack simultaneously. We propose two different approaches to protect against such attacks, especially the coalition attack. The study is still in its initial phase: we are not yet able to clearly determine the value of the variable N, the number of abuse reports that automatically blocks a user profile. More advanced tests are currently being executed to explore all the dimensions of this vulnerability.
ACKNOWLEDGEMENTS 
The research leading to these results has received funding from the European Community's Seventh Framework Programme in the context of the PPP Fi-Ware project and the EIT – KIC Trust in the Cloud EU Project.
REFERENCES 
Morozov, E., 2011. "The Net Delusion: The Dark Side of Internet Freedom". New York: Public Affairs.
Srivatsa, M., Xiong, L., Liu, L., 2005. "TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks". In Proceedings of the 14th International Conference on World Wide Web (WWW '05), New York, USA.
Hoffman, K., Zage, D., Nita-Rotaru, C., 2007. "A Survey of Attacks on Reputation Systems". Computer Science Technical Report 07-013, Purdue University.
Tennenholtz, M., 2004. "Reputation Systems: An Axiomatic Approach". In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (UAI '04), pages 544-551, AUAI Press, Arlington, Virginia, United States.
Cristani, M., Karafili, E., Viganò, L., 2011. "Blocking Underhand Attacks by Hidden Coalitions". In Proceedings of the 3rd International Conference on Agents and Artificial Intelligence (ICAART 2011), Rome, Italy, 28-30.
Pauly, M., 2002. "A Modal Logic for Coalition Power in Games". Journal of Logic and Computation, 12(1):149-166.
Pauly, M., 2001. "Logic for Social Software". PhD thesis, Institute for Logic, Language and Computation, University of Amsterdam.
Facebook, https://www.facebook.com/  
Von Ahn, L., Blum, M., Hopper, N., Langford, J., 2003. "CAPTCHA: Using Hard AI Problems for Security". In Proceedings of Eurocrypt 2003, May 4-8, 2003, Warsaw, Poland.
Mirkovic, J., Reiher, P., 2004. "A Taxonomy of DDoS Attack and DDoS Defense Mechanisms". ACM SIGCOMM Computer Communication Review, 2004.
OWASP Zed Attack Proxy Project  
Slashdot, http://tech.slashdot.org/story/11/04/15/1545213/Crowdsourcing-the-Censors-A-Contest