It started with the ads.
I’ve never searched for the word “intelligence” or even watched a James Bond film. But I’ve been receiving targeted ads from the Australian Government to join their intelligence sector — daily, relentless, and weirdly specific.



Most of the ads featured people of colour and young women. I noticed that at the University of Sydney's Career Fair back in March 2025, all the representatives from the Australian Signals Directorate (ASD) were young women. Even the poster girl for the Defence Graduate Program was an Asian woman, making me question who the state imagined as its model recruit.


For context, I am a 25-year-old Asian woman pursuing a Master's in Digital Communication and Culture – the last person in the world you'd assume the Australian Government would recruit for intelligence. But state initiatives indicate otherwise.
In 2022, the ASD announced the REDSPICE initiative – Australia's most significant investment in cybersecurity, worth $9.9 billion. REDSPICE introduced 1,900 new roles and tripled ASD's cyber capabilities to ensure Australia is best prepared to respond to an evolving strategic and technological environment. In 2024, the National Defence Strategy announced that Australia would expand its role in Indo-Pacific security in response to the growing assertiveness of the Chinese military.

And the most suspicious part of these ads? Applicants are told not to tell anyone they've applied – not even friends or family. For organisations that demand secrecy, this algorithmic visibility felt like a haunting echo of mass wartime recruitment, now mediated by machines. As a woman studying digital technology and surveillance, I found it both fascinating and unnerving. Why me? Why now?
From Gaming to Marketing: Intelligence Technology Is Used Everywhere
The birth of artificial intelligence can be traced back to the military. In Atlas of AI, Kate Crawford explores how AI's military history has shaped the practices of targeting, data extraction and risk assessment we see today. The Snowden archives reveal that the field of AI was initially developed for warfare and heavily funded by DARPA, the Defense Advanced Research Projects Agency. "A whole generation of computer experts got their start from DARPA funding," former DARPA director Robert Sproull proudly announced. Many of AI's functionalities were designed around the military priorities of command and control, automation, and surveillance (Crawford, 2021, p. 184).

So, what happens when military technology is rolled out into society? It gets commercialised into games. In Fantasies of Virtual Reality, Carter & Egliston reveal how virtual reality headsets used in first-person shooter games like Nuclear Rush were originally designed to train fighter pilots to become deadlier killers (p. 7). The six-degrees-of-freedom (6-DOF) motion technology built into gaming controllers like the Wii Remote can be traced back to the Manhattan Project, where researchers used remotely controlled robotic arms to handle dangerous radioactive materials while developing nuclear bombs (p. 81).
But it’s not only gaming consoles that use military technology. We as a society are taught how to think like intelligence officers – even brainwashed into believing that surveillance and targeting benefit us.
The logics of artificial intelligence are infused with a type of classificatory thinking (Crawford, 2021, p. 185). From explicitly battlefield-oriented notions such as “target” and “asset” to constant situational awareness, the techniques of warfare take shape in advertising. In her rallying pocket handbook An Artificial Revolution, Bartoletti warns, “personalisation, profiling optimization, customization – these are all buzzwords of marketing and PR [that] present individual marketing as a bespoke experience, rather than the exploitation of our vulnerabilities” (p. 71).
Just as the intelligence officer scours open sources for information on their target – a practice known as Open-Source Intelligence (OSINT) – businesses harvest data about their users from social media, news websites, and forums. That personal data is then distilled into SWOT analysis charts, risk assessments and customer profiles.
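To make the parallel concrete, here is a minimal, purely illustrative sketch of how scattered public signals might be folded into a single "profile". Every field name, observation and weight below is invented; real marketing and intelligence stacks are far more elaborate, but the gesture – scattered observations in, one actionable number out – is the same.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """A toy 'customer profile' assembled from open sources (hypothetical fields)."""
    name: str
    signals: dict = field(default_factory=dict)  # source -> (observation, weight)

    def add_signal(self, source: str, observation: str, weight: float) -> None:
        # Each scrap of public data is logged with a made-up 'relevance' weight.
        self.signals[source] = (observation, weight)

    def score(self) -> float:
        # Crude aggregation: the person is reduced to a single number,
        # exactly the kind of flattening described above.
        return sum(weight for _, weight in self.signals.values())

# Usage: the same gesture an OSINT analyst or a marketer might make.
target = Profile(name="user_123")
target.add_signal("social_media", "posts about career anxiety", 0.6)
target.add_signal("forum", "asks about visa requirements", 0.3)
target.add_signal("news_site", "reads defence and security coverage", 0.5)

print(round(target.score(), 2))  # -> 1.4, a 'lead score' or 'threat score'
```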
Turning the Crosshairs from Enemies to Civilians
What happens when intelligence tools, originally designed to target and kill enemies, are deployed into society?
First, we have predictive behaviour control that leverages individual data to manipulate behaviour at scale. Shoshana Zuboff describes this strategy as “surveillance capitalism”: each user’s data is fed into advanced manufacturing processes known as “machine intelligence” and fabricated into prediction products that anticipate what you will do now, soon, and later (as cited in Bartoletti, 2020, p.76).
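As a rough illustration of what a "prediction product" looks like in practice, the sketch below scores how likely a user is to act "now, soon, or later" from their recent behaviour. The features, weights and thresholds are invented for illustration only; commercial systems learn these from mass behavioural data rather than hand-coding them.

```python
# Illustrative only: a toy 'prediction product' in the spirit of Zuboff's description.
# Feature names and weights are invented, not drawn from any real system.

FEATURE_WEIGHTS = {
    "searched_product": 0.5,
    "abandoned_cart": 0.3,
    "opened_marketing_email": 0.1,
    "late_night_browsing": 0.1,
}

def purchase_propensity(events: set[str]) -> float:
    """Collapse a user's recent behaviour into a single propensity score (0-1)."""
    return min(1.0, sum(FEATURE_WEIGHTS.get(e, 0.0) for e in events))

def timing_bucket(score: float) -> str:
    """Translate the score into the 'now, soon, later' framing."""
    if score >= 0.7:
        return "now"    # surface an ad immediately
    if score >= 0.4:
        return "soon"   # schedule a nudge
    return "later"      # keep collecting data

user_events = {"searched_product", "abandoned_cart"}
score = purchase_propensity(user_events)
print(score, timing_bucket(score))  # -> 0.8 now
```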

Corporations have used intelligence tools to manipulate voters. Cambridge Analytica, a British political consulting firm, harvested data on the personality traits, behaviours, and preferences of over 50 million Facebook users to build detailed psychographic profiles. It then delivered highly personalised political advertisements to undecided swing voters during the 2016 U.S. presidential election and to ambivalent voters ahead of the 2016 Brexit referendum.
These micro-targeted messages played on each user's fears, desires and vulnerabilities to manipulate their votes, ultimately helping tip the balance in favour of Donald Trump and the Leave campaign. Intelligence tactics can be repurposed to exploit our personal information – without our knowledge or consent – to alter political outcomes.

Second, we have punitive algorithmic systems that categorise citizens on a threat scale. Sociologist Sarah Brayne observes that law enforcement cannot undertake a search and gather personal information until there is probable cause. Intelligence, by contrast, is fundamentally predictive: it intervenes pre-emptively on the basis of whatever intelligence has been gathered (as cited in Crawford, 2021, p. 192).
But intelligence predictions can be flawed – and when weaponised, they inflict serious harm on ordinary citizens. The Robodebt scandal (2016–2019) saw the Australian Government use an unlawful automated debt-recovery system to identify and claw back alleged welfare overpayments. The system saddled around 430,000 citizens with inaccurate Centrelink debts.
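The inaccuracy had a simple arithmetic core: Robodebt smeared a person's annual ATO income evenly across 26 fortnights and compared that average with the income they had actually reported to Centrelink each fortnight, as if their earnings had been constant all year. The sketch below reproduces that flawed averaging step with invented figures; it is an illustration of the logic, not a reconstruction of the real Centrelink calculation.

```python
FORTNIGHTS_PER_YEAR = 26

def phantom_underdeclared_income(annual_ato_income: float,
                                 reported_fortnights: list[float]) -> float:
    """Income a person appears to have hidden, IF earnings were spread evenly.

    This is the flawed averaging step: annual ATO income is divided across all
    26 fortnights, so anyone with lumpy earnings looks like they under-reported.
    """
    averaged = annual_ato_income / FORTNIGHTS_PER_YEAR
    return sum(max(0.0, averaged - reported) for reported in reported_fortnights)

# Invented figures: a student who earned $26,000 entirely from a 6-fortnight
# summer job and correctly reported $0 income for the other 20 fortnights.
summer_job = [26_000 / 6] * 6 + [0.0] * 20
print(phantom_underdeclared_income(26_000, summer_job))
# -> 20000.0: averaging makes accurate reports look like under-declaration,
#    which the system then converted into a recoverable 'debt'.
```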
The punishments were severe: garnished wages, seized tax refunds, and unlawful debts totalling $720 million. Many recipients – most notably low-income earners, students, and people with disabilities – faced significant financial and emotional distress, with reports of severe mental health impacts and even suicides linked to the scheme.
Systems like Robodebt and its American equivalent, MiDAS, are punitive – designed on a threat-targeting model (Crawford, 2021, p. 206). Logics of scoring and risk now deeply shape how government systems work, with automated tools reinforcing how citizens and communities are judged, valued, and treated.
Finally, we have dual-use technologies deployed in military warfare, built on the labour of tech workers and STEM graduates who never consented to it. The military has always been an avid sponsor of academia. The University of Chicago was a key site of the Manhattan Project. The University of Southern California has a dedicated research centre funded by the U.S. Department of Defense (Carter & Egliston, 2024, p. 81). Even the University of Sydney has a long history of contributing to defence and security projects. But do these researchers know that their work is used for war? And do they agree with it?
Engineers from Big Tech don't. In fact, they were outraged when they discovered that their work was being used in algorithmic warfare. In 2017, the U.S. Department of Defense launched Project Maven, offering a lucrative contract to whichever tech company could build an AI system to identify targets and match them across existing drone footage.
Google initially accepted – until 3,100 employees signed a petition opposing the company's involvement in warfare and demanding the contract's cancellation (Crawford, 2021, p. 191). Microsoft quietly stepped in and has been churning out projects for Defence ever since. Any employee who rallied against the sale of the Azure cloud computing platform for warfare? Immediately terminated.
Tipping the Scales of Power
But what happens when technology goes wrong?
It is clear that the partnership between the military and tech corporations has a long, strong history. But this relationship is becoming increasingly tenuous as interests diverge and power balances shift.
“Algorithmic governance is both part of and exceeds traditional state government.”
Kate Crawford, Atlas of AI, p.186
The AI industry both enforces and challenges state power, helping governments expand their geopolitical reach while also taking on the role of the state. In The Stack: On Software and Sovereignty, Bratton argues that platforms offering cloud services, smart city systems, and apps are taking on functions traditionally managed by states (Bratton, 2016). Sorting people, regulating flows (of people, goods, data), and enforcing behaviours on platforms take precedence over state regulation.
Power and decision-making flow not through parliaments but through algorithms and digital infrastructure. Platforms don't just support state governance – they increasingly supplant it.
In Regulation of and by Platforms, Gillespie exposes how platforms have amassed sweeping regulatory power in the absence of clear laws, allowing them to operate above accountability and reshape public life on their own terms. Despite their deeply political role in content moderation, platforms remain shielded from legal responsibility under "safe harbour" laws, which permit self-regulation and absolve platforms of liability for the harms caused by their algorithms, including the spread of hate speech and violent content (Gillespie, 2017, p. 258).


In addition, platforms challenge governments outright, refusing to comply with policies they regard as beyond the state's reach. Notable cases include:
- Christ the Good Shepherd Church stabbing: Elon Musk's X refused to comply with orders from Australia's eSafety Commissioner to globally remove violent footage of the 2024 Sydney church stabbing.
- News Media Bargaining Code: In 2021, Meta signed deals with Australian news outlets to pay for news displayed on its platforms. In 2024, it refused to renew those deals, effectively cutting off payments to local publishers.
By resisting state policies, platforms demonstrate their growing power to override government efforts. In Regulating Platforms, Terry Flew argues that "platforms are increasingly promoting themselves as representing the polity more effectively than do its elected representatives" (Flew, 2021, p. xvi). Whether that means supporting grassroots organisations against corrupt governments during the Arab Spring or tackling hate speech in Asia Pacific nations that lack protective laws, Gillespie argues that platforms increasingly present themselves as "public arbiters of cultural value" – and thus as representing the people's interests better than governments do (Gillespie, 2017, p. 264).
Public trust in government is crumbling. And as platforms eclipse the state's regulatory power, Wendy Chun and Tung-Hui Hu observe that governments are scrambling to reclaim control (Crawford, 2021, p. 187). In response, the Australian Government appears to be aggressively recruiting STEM graduates to rebuild its internal digital capabilities – an urgent effort to reclaim technological sovereignty and reduce dependence on third-party infrastructure.
The Partnership Solidified
As Gillespie notes, platforms function in a grey 'intermediary' area, operating above regulatory norms. Another group that operates above the law is intelligence agencies. Australian intelligence agencies are exempt from the Privacy Act 1988, the law that regulates how personal data is collected, used and disclosed in order to protect individuals' privacy rights.
While this raises the uncomfortable question of 'how much do they know without my knowledge?', it does not mean intelligence officers are allowed to go rogue and do whatever they want. Rather, they remain bound by "stringent oversight and accountability mechanisms" – notably the Intelligence Services Act 2001 and audits by the Inspector-General of Intelligence and Security (IGIS).
But currently, there are no laws governing Big Tech’s dual-use surveillance technologies.

Data from Google Nest cameras, Microsoft Azure AI, and Apple AirTags can be repurposed to monitor, profile, and police individuals. Law enforcement agencies have partnered with Amazon to access video footage from Ring cameras and the Neighbors app – without owner consent. U.S. government agencies use powerful systems from Palantir to detect Medicare fraud or deport immigrants (Crawford, 2021, pp. 194–195). These dual-use technologies fall outside the scope of the Wassenaar Arrangement, the multilateral export-control regime covering conventional arms and dual-use technologies.
But can’t we protest or pass laws to regulate Big Tech’s dual-use products? We can try – but both Big Tech and governments are notorious for deflecting blame. States claim: “we can’t control what we don’t understand,” while tech companies say: “we’re not liable for government misuse,” leaving a dangerous gap in accountability.
A Vacuum of Accountability
In Algorithmic warfare and the reinvention of accuracy, Lucy Suchman recounts how, in 2010, U.S. forces killed 23 unarmed Afghan civilians in Uruzgan using Unmanned Aircraft Systems (UAS). These aircraft carried no pilot on board; targeting decisions were mediated by sensors, screens and remote operators. Ironically, cutting-edge weaponry such as UAS – once promised to deliver more humane outcomes – has been used to justify exorbitant government spending on defence. Just as VR was designed to train soldiers and motion technology to handle radioactive material, algorithmic warfare was developed to compensate for a human deficit.
But who builds these weapons, what data do they use, and how does something come to be labelled a threat?

Lucy Suchman argues that investing in more precise technology won't prevent human mistakes if we train the machine to target the wrong people. From facial recognition technology to UAS, algorithmic warfare still relies on crude racial profiling and stereotyping. It's why 3,341 people were killed by 'precision air strikes' in Pakistan between 2004 and 2015, yet only 1.6% of those killed – roughly 53 people – were "high value" targets (Suchman, 2020, p. 177). Why 23 innocent civilians were killed in Uruzgan.
All because an algorithm was (mis)trained to perceive all Middle Eastern faces as a threat.
Algorithms don’t create bias—they are built by people who encode existing prejudices into systems of ranking and control. Human decisions – not machines – determine whose lives are devalued. Technology becomes the tool through which violence is rationalised, scaled, and made to appear objective. Responsibility is deflected. Tech giants and nation-states, complicit and bloodstained, choose silence over accountability.
But of course, you’re an intelligence officer – you can’t tell anyone. It’s part of the job.
Contact
Justine Kim
University of Sydney
+61 450 360 692
skim8842@uni.sydney.edu.au
References
Al Jazeera. (2020, December 17). What is the Arab Spring, and how did it start? https://www.aljazeera.com/news/2020/12/17/what-is-the-arab-spring-and-how-did-it-start
Australian Signals Directorate. (n.d.). REDSPICE: A blueprint for growing ASD's capabilities. Australian Government. https://www.asd.gov.au/sites/default/files/2022-05/ASD-REDSPICE-Blueprint.pdf
Bartoletti, I. (2020). An Artificial Revolution: On Power, Politics and AI (1st ed.). The Indigo Press.
Bratton, B. H. (2016). City Layer. In The Stack: On Software and Sovereignty. The MIT Press. https://doi.org/10.7551/mitpress/9698.003.0012
Caloca, N. (2024, August 1). Australia’s Growing Defense and Security Role in the Indo-Pacific. Council on Foreign Relations. https://www.cfr.org/in-brief/australias-growing-defense-and-security-role-indo-pacific
Carter, M., & Egliston, B. (2024). Fantasies of Virtual Reality: Untangling Fiction, Fact, and Threat. The MIT Press. https://doi.org/10.7551/mitpress/14673.001.0001
Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (1st ed.). Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t
Flew, T. (2021). Regulating platforms. Polity Press.
Gillespie, T. (2017). Regulation of and by Platforms. In The SAGE Handbook of Social Media (pp. 254–278).
Hamilton, E. (2023, August 9). Crude, cruel and unlawful: Robodebt Royal Commission findings. Law Society of NSW Journal. https://lsj.com.au/articles/crude-cruel-and-unlawful-robodebt-royal-commission-findings/
Scott, M. (2018, March 27). Cambridge Analytica helped ‘cheat’ Brexit vote and US election, claims whistleblower. POLITICO. https://www.politico.eu/article/cambridge-analytica-chris-wylie-brexit-trump-britain-data-protection-privacy-facebook/
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
Suchman, L. (2020). Algorithmic warfare and the reinvention of accuracy. Critical Studies on Security, 8(2), 175–187. https://doi.org/10.1080/21624887.2020.1760587
The University of Sydney. (n.d.). Current defence and security research collaborations. https://www.sydney.edu.au/research/our-research/current-defence-and-security-research-collaborations.html
Williams, T. (2025, April 8). Microsoft employees allegedly fired after Israel protests. Information Age. https://ia.acs.org.au/article/2025/microsoft-employees-allegedly-fired-after-israel-protests.html?ref=newsletter&deliveryName=DM25758