Research Fellow at CESRAN International
With the advancement of technology, big data has gained increasing attention worldwide. Its pros and cons have been widely debated: while some argue that big data is an essential tool for the efficiency and productivity of governments, others contend that it poses severe risks to humanity, such as privacy violations and disinformation. Big data has now penetrated all spheres of life, including the manipulation of elections through social media, and technology giants such as Facebook and Twitter have been hit by scandals over their role in influencing democratic processes. This paper therefore takes a critical approach to big data and lays out the challenges it presents for democracies in maintaining democratic norms and values. It does so along three dimensions: surveillance, artificial intelligence, and social media. Lastly, the paper reflects on the future of democratic ideals.
Surveillance is the key tool for acquiring personal information (Richards, 2013). Whether conducted by the private or the public sector, surveillance relies on the same means and technologies. Once information is collected, the individual loses control over how it is used. Website privacy policies, for instance, usually state that data may be shared with third parties and that the website is not responsible for how those third parties use the information individuals provide (De Zwart et al., 2014: 715). Individuals are thus exposed to potential privacy breaches.
From the perspective of government security agencies, surveillance becomes even more problematic, because it extends to an entire population: mass surveillance. After Edward Snowden exposed the National Security Agency's (NSA) spying on both US and foreign citizens through the PRISM program and bulk telephone-records collection under Section 215 of the Patriot Act, the debate over mass surveillance and espionage practices intensified substantially (Ackerman, 2014). The former involved collecting personal data from companies such as Google and Apple, whereas the latter gathered telecommunications metadata such as phone numbers and locations. This raises problems for individual liberties, and it shows that such information-gathering activities are carried out under government direction. The US was also accused of bugging Angela Merkel's phone (Traynor, 2013). Following these incidents, the US government responded with a report on liberty and security, which nowhere mentioned ending data collection and surveillance. Instead, the report stated that the US "must continue to collect signals intelligence globally in order to assure the safety of US citizens at home and abroad and to help protect the safety of our friends, our allies, and the many nations with whom we have cooperative relationships" (Clarke et al., 2013: 11). This clause justifies a democratic government's continuation of espionage on a massive scale by appealing to the greater good. Such framing of the security dimension is familiar from authoritarian regimes such as China, where individuals are under constant surveillance; yet the report in question belongs to the US government.
Coming from the self-proclaimed leader of the free world and defender of civil liberties, this stance is troubling for the survival of democratic ideals and puts US democracy in question. Moreover, at the administrative level, the use of CCTV (closed-circuit television) cameras and GPS (global positioning system) tracking contributes to the collection of personal data, both voluntary and involuntary (De Zwart et al., 2014: 715). Democratic governments tend to argue that these systems exist solely for their citizens' security and wellbeing, yet they do not always ask for consent. The line between authoritarianism and democracy therefore becomes blurry, given that in democracies people have a right to privacy. Likewise, using personal data without consent creates ethical problems for governments and violates basic universal human rights. Tellingly, an Australian study found that 47% of Australians had deliberately given false information about their age, date of birth, and other details (ACMA, 2013: 6), indicating that individuals are willing to lie when providing information in order to protect their identity.
Artificial Intelligence (AI)
Developing AI technology for illegitimate uses poses great challenges. Using facial recognition systems beyond their formal purpose (such as detecting criminals), for instance, opens the door to 'potentially totalitarian control' over civilian lives (Pijl, 2020: 32). Zuboff (2015: 81) coins the concept of 'surveillance capitalism', noting that "impersonal systems of discipline and control produce certain knowledge of human behaviour independent of consent". Her argument supports the idea that AI has the capacity to manipulate and direct individual preferences. The Swiss neuroengineer Marcello Ienca makes a similar point, warning that people should have a right to psychological continuity against AI interventions, interventions that are already being experimented with in the military (Ienca, 2017). In sum, AI benefits human life only to the extent that it does not penetrate or invade the human mind. Moreover, today's global economy evidently depends on the advancement of AI. The 2008 economic crisis transformed the world into an IT-based neoliberal capitalist order in which almost everything, from payments to social media accounts, is digitally traceable (Pijl, 2020: 32). This new order entails a shift away from collective decision-making towards an oligarchic structure centred around big corporations and banks (Carroll, 2013). Consequently, the more the economy becomes intertwined with global issues (for example, human health), the less democratic its decision-making processes become (Carroll, 2013). Hence, democracies need a transparent, rule-based system of AI development in order to maintain civil liberties and the participation of all individuals in democratic processes.
A distinct AI application that is detrimental to democracies is the so-called 'deepfake': "highly realistic and difficult-to-detect digital manipulations of audio or video" that are, in reality, fake (Chesney and Citron, 2019: 147). Set against the original video or audio, the two become indistinguishable. One demonstration used a speech by President Obama: in the middle of an actual sentence, the creator of the deepfake takes over and finishes it in street jargon, complete with curse words, in what sounds exactly like Obama's own voice (The Atlantic, 2019). Such abuse of AI systems may cause serious damage to public discourse and diplomatic relations; in the extreme, these fabrications could provoke conflict and deception between rival superpowers. The rise of AI thus leaves democracies more vulnerable than ever to misinformation and fake news.
Every use of the Internet leaves data traces, including on Facebook and Twitter (Boehme-Neßler, 2016: 222). In recent years, social media's role as an instrument for political purposes has come under the spotlight. The Cambridge Analytica (CA) scandal illustrates how Facebook was used for political campaigning in both the US and the UK. In 2018, CA, a political data analysis firm, was accused of using the Facebook data of over 50 million users in the election of Donald Trump and the Brexit campaign. CA used personal data to build psychological profiles of voters for targeted advertisements. Via 'thisisyourdigitallife', a third-party app designed by a Cambridge academic, CA could access not only the data of those who downloaded it but also the data of their friends and family (Wired, n.d.). In the aftermath of the scandal, Facebook received massive backlash from its users and the public, and many now argue that social media is one of the most influential instruments for gaining political advantage and office. According to Margetts (2017: 1), "the acoustics of social media, orchestrated by firms like Facebook, are implicated in the waves of political populism and even extremism that have swept across the United States and many European countries". The CA affair confirmed that democratic decline is real; Carole Cadwalladr, who exposed the case, even argues that social media platforms have generated a '9/11 of democracy' (Margetts, 2019). Another instance of election manipulation also occurred during the 2016 US election: Russia's efforts to inflame racial division in the US through disinformation that appeared to come from the social media accounts of American citizens and interest groups fuelled rumours that the election was rigged (Gayard, 2018: 119-120). Cyberspace is thus a global playing field, open to producing biased outcomes, especially where social media data are involved.
In this way, the owners of social media platforms hold enormous corporate, centralized power to dictate the future of democracy (or authoritarianism). This supports the earlier point, made in the AI section, that the current order has an oligarchic structure. Big data enables the centralization of power in the hands of a few by fragmenting and marginalizing societies for political interests. Increasingly, the contest over which country counts as a superpower depends on who wields big data more effectively.
This paper has examined three features of big data through a critical lens. It showed how big data's risks and illegal practices affect democracies and can push them towards oligarchy. The cases discussed above suggest that democratic backsliding is a real possibility. For democracies to survive, big data must be administered transparently and without interfering with fundamental individual rights. This requires institutional oversight mechanisms that adhere to democratic ideals and provide a safer environment for citizens.
Ackerman S (2014) US tech giants knew of NSA data collection, agency’s top lawyer insists. The Guardian, 19 March. Available at: https://www.theguardian.com/world/2014/mar/19/us-tech-giants-knew-nsa-data-collection-rajesh-de (accessed 10 July 2021).
Australian Communications and Media Authority (ACMA) (2013) Managing Your Digital Identity: Digital Footprints and Identities Research, Short Report 1. Available at: https://apo.org.au/sites/default/files/resource-files/2013-11/apo-nid36376.pdf
Boehme-Neßler V (2016) Privacy: A matter of democracy. Why democracy needs privacy and data protection. International Data Privacy Law 6(3): 222-229.
Carroll WK (2013) The Making of a Transnational Capitalist Class. London: Zed Books.
Chesney R and Citron D (2019) Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics. Foreign Affairs 98(1): 147-155.
De Zwart M, Humphreys S, and Van Dissel B (2014) Surveillance, big data and democracy: Lessons for Australia from the US and UK. University of New South Wales Law Journal 37(2): 713-747.
Gayard L (2018) Darknet: Geopolitics and Uses. London: John Wiley & Sons.
Ienca M (2017) Do We Have a Right to Mental Privacy and Cognitive Liberty? Scientific American. Available at: https://blogs.scientificamerican.com/observations/do-we-have-a-right-to-mental-privacy-and-cognitive-liberty/
Margetts H (2017) Political behaviour and the acoustics of social media. Nature Human Behaviour 1(4): 1-3.
Margetts H (2019) Rethinking Democracy with Social Media. The Political Quarterly 90(S1): 107-123.
Pijl KV (2020) Democracy, Planning, and Big Data: A Socialism for the Twenty-First Century? Monthly Review 71(11): 28-41.
Clarke RA, Morell MJ, Stone GR, Sunstein CR and Swire P (2013) Liberty and Security in a Changing World: Report and Recommendations of The President's Review Group on Intelligence and Communications Technologies. Washington, DC.
Richards NM (2013) The Dangers of Surveillance. Harvard Law Review 126: 1934-1953.
The Atlantic (2019) Ahead of 2020, beware the deepfake. Available at: https://www.theatlantic.com/video/index/593170/deepfake/ (accessed 10 July 2021)
Traynor I (2013) Angela Merkel: NSA spying on allies is not on. The Guardian, 23 October. Available at: https://www.theguardian.com/world/2013/oct/24/angela-merkel-nsa-spying-allies-not-on (accessed 10 July 2021).
Wired (n.d.) The Cambridge Analytica Story, Explained. Available at: https://www.wired.com/amp-stories/cambridge-analytica-explainer/
Zuboff S (2015) Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology 30(1): 75-89.