Israeli Military Develops AI Tool Using Palestinian Surveillance Data

New AI Model Raises Ethical Concerns Over Surveillance and Bias
Israeli intelligence officers (photo: Levi Meir Clancy)

The Israeli military's Unit 8200 is reportedly developing a sophisticated artificial intelligence tool modeled after OpenAI's ChatGPT, trained on a vast collection of intercepted Palestinian communications. The initiative aims to enhance the military's surveillance capabilities by analyzing large volumes of phone calls and text messages, primarily in Arabic.

An investigation by the Guardian, in collaboration with Israeli-Palestinian media outlets, has uncovered that the AI model is being trained to understand spoken Arabic, with a focus on Palestinian and Lebanese dialects. The project reportedly gained momentum following the escalation of the conflict in Gaza in October 2023. While it remains unclear whether the tool has become operational, sources indicate that it is designed to answer inquiries about individuals under surveillance and to provide insights drawn from the extensive data collected.

Chaked Roger Joseph Sayedoff, a former intelligence officer involved in the project, highlighted its ambition, stating, "We tried to create the largest dataset possible." The model's training data is said to comprise approximately 100 billion words drawn from everyday conversations, raising serious ethical questions about privacy and consent. This expansive collection includes personal communications that are often innocuous yet could be used to incriminate Palestinians.

Human rights advocates express alarm at the potential implications of such technology. Zach Campbell from Human Rights Watch emphasized that the AI could lead to significant errors, describing it as “a guessing machine” that may unjustly incriminate individuals. Critics argue that using personal data for military purposes is a grave violation of human rights, particularly as many of those surveilled are not suspected of any crimes.

Unit 8200's efforts reflect a broader trend among intelligence agencies globally to integrate AI into their operations. However, experts warn that such systems can perpetuate biases and make flawed decisions, often without transparent reasoning. As this technology evolves, the ethical ramifications of its deployment in conflict zones continue to spark debate, highlighting the tension between security and civil liberties, particularly concerning the rights of Palestinians under military occupation.

Inter Bellum News
interbellumnews.com