While social media is not one of the causes of the Israeli-Palestinian conflict, its features make it an effective tactical tool that can be used in any conflict, allowing more people to be exposed to more information in almost real time. Internet giants should therefore invest in developing algorithms that identify certain keywords or violent footage and manage their exposure – in the same way they control pornographic content – in order to strengthen social media's contribution to resolving conflict.
By Arik Segal
The current surge in violence in the Israeli-Palestinian conflict has been characterized by massive use of social media. Facebook, YouTube, WhatsApp and Twitter all serve as platforms to deliver messages of hate, call for violent action and share gruesome pictures or videos of recent attacks. Smartphones are used to take pictures and videos that are instantly shared in WhatsApp groups and Facebook posts, and a few hours later reach the headlines of the mainstream media. The development of technology, the human thirst for reality TV and the media's hunger for profit help perpetuate an intractable conflict and keep it in a vicious, ever-growing circle.
The central role social media plays in the current wave of violence is demonstrated in political statements and discussions. On October 19th, Israeli Prime Minister Benjamin Netanyahu said that “what we see here is a linkage between fundamentalist Islam and internet, between Bin-Laden and Zuckerberg”. Moreover, on October 15th the Knesset held a special day of discussion on “Fighting cyber-bullying and online violence on the virtual space and social media”. Simon Milner – Facebook’s policy director for the UK and MENA region – attended the discussion and presented Facebook’s policies and views on the situation, commenting that “Nothing is more important than the safety of people using Facebook…We also want to encourage you to use our reporting tools if you see content that doesn’t belong on Facebook…When content is reported to us we investigate to see if it breaches our standards and if it does, we take it down”.
It is important to note that social media is not one of the causes of the Israeli-Palestinian conflict, but its features make it an effective tactical tool that can be used in any conflict. Its applications allow more people to be exposed to more information in almost real time. The audience is not bound by censorship or professional interpretation, leaving the “truth” to be decided according to one’s nationalistic leanings and emotional intelligence. While each side believes that using social media will help its narrative win, the net effect on the conflict is negative: the conflict is exacerbated, and hate, prejudice and violence increase.
Theoretically, the same social media applications can be used to spread messages of peace and reconciliation, thus promoting security and stability. In reality, however, conflict-mitigating content on social media is smaller in quantity and weaker in effect. In times of harsh conflict, group solidarity takes on great importance for a group's members. Those who promote peaceful messages are usually shunned as traitors or dismissed as unrealistic, since the rosy picture they portray irritates the majority that is “living the true struggle”.
Nevertheless, social media can be an effective tool in conflict resolution when the content is monitored and users take part in a structured, facilitated process conducted in smaller groups. It is impossible to monitor all of the content uploaded to social media, and the only measures for dealing with destructive content are left to users, who can “unfriend”, report or delete it. It is therefore up to the social media companies themselves to develop and install conflict-mitigation features in their applications.
Such features should be developed by technology and conflict professionals, and be based on dispute resolution principles adapted to the technology. Users should first be made to understand the damage that posting or sharing violent content causes and, second, be prevented from doing so through warnings, blocking of the content and limits on access to their accounts. Since “reporting” dangerous content as a monitoring tool has many flaws – such as a lack of objectivity and the time that passes until a harmful post is deleted – there is a need for a tool that identifies content immediately and blocks it from being posted and shared. Internet giants should invest in developing algorithms that identify certain keywords or violent footage and manage their exposure – in the same way they control pornographic content.
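The kind of pre-posting filter described above can be illustrated with a minimal sketch. This is purely hypothetical: the watchlists, thresholds and decision labels are invented for illustration, and real platforms rely on far more sophisticated machine-learning classifiers rather than simple keyword matching.

```python
# A minimal, hypothetical sketch of a pre-posting content filter of the kind
# the article proposes. The term lists below are illustrative placeholders,
# not any platform's actual policy.

BLOCK_TERMS = {"kill", "slaughter"}   # hypothetical incitement watchlist
WARN_TERMS = {"revenge", "enemy"}     # hypothetical warning-level watchlist

def moderate_post(text: str) -> str:
    """Return 'block', 'warn', or 'allow' for a draft post,
    evaluated before the post is ever published."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCK_TERMS):
        return "block"   # stop the post before publication
    if any(term in lowered for term in WARN_TERMS):
        return "warn"    # show the user a warning first
    return "allow"

print(moderate_post("We will take revenge"))  # warn
```

The key design point the article argues for is the placement of the check: the decision happens before publication, rather than after a user report, which removes the delay during which harmful content spreads.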
A good example of built-in conflict-mitigation software is eBay’s Dispute Resolution Center, which was created to manage acquisition and transaction disputes. eBay’s decision to develop this application is rooted in its own interests: recognizing that buying and selling goods online will sometimes end in disputes, the company has an interest in having a mechanism to resolve them. Since Facebook, Twitter and other social media applications still lack similarly effective mechanisms, it would seem they have not yet found a business interest in developing them. If moral considerations do not drive them, it will fall to governments to use legal measures to ensure that social media is not used to intensify and perpetuate conflicts.
Arik Segal is a conflict management expert who specializes in the use of technology in conflict management.