The Israel Defense Forces (IDF) has begun using artificial intelligence (AI) systems to select airstrike targets and organize wartime logistics amid rising tensions with Iran and other countries.
According to a July 17 Bloomberg report, the Israeli military currently relies on two main AI systems in its operations: one processes large volumes of data to select airstrike targets, while the other, an AI model, calculates munitions loads and assembles raid plans. Military officials said targets can now be selected and struck within minutes, an unprecedented speed.
Supporters argue that these advanced algorithms may surpass human capability and help the military minimize casualties, while critics warn that growing reliance on increasingly autonomous systems could have fatal consequences.
Designed for total war, battle-tested in Gaza
Israeli officials said that, in addition to the AI system that processes large volumes of data and selects airstrike targets, the military plans follow-up raids with another AI model called "Fire Factory." Using data on targets approved by the military, Fire Factory calculates munitions loads, prioritizes thousands of targets, assigns them to aircraft and drones, and proposes a schedule.
Giving AI this degree of control over military operations has stirred considerable controversy. An IDF official said that every target and airstrike plan produced by the two systems is currently reviewed and approved by human operators, but the technology is not regulated at either the international or the national level.
"If there is an error in the AI's calculation, and the AI itself cannot explain it, then who is responsible for the error?" said Tal Mimran, a lecturer in international law at the Hebrew University of Jerusalem and a former military legal adviser. "You may kill a family because of a mistake."
Details of the military's use of AI remain largely classified. According to military officials, the IDF has gained battlefield experience with these controversial AI systems through regular airstrikes in the Gaza Strip, which it has frequently carried out in response to rocket attacks. In 2021, the IDF described the 11-day conflict in Gaza as the world's first "AI war," saying it used artificial intelligence to identify rocket launchpads and deploy drones. Israel has also conducted airstrikes in Syria and Lebanon, targeting weapons shipments to Iran-backed militias such as Hezbollah in Lebanon.
In recent months, Israel has issued near-daily warnings to Iran over its uranium enrichment activities. If a military conflict breaks out between the two countries, the IDF expects Iranian proxy groups in Gaza, Syria and Lebanon to retaliate, which would create Israel's first serious multi-front conflict since the surprise attack by Egypt and Syria triggered the Yom Kippur War 50 years ago.
IDF officials said that AI-based tools such as "Fire Factory" are tailored for exactly that scenario. "What used to take hours now takes minutes, plus a few more minutes of human review," said Colonel Uri, head of the army's digital transformation unit, who for security reasons could be identified only by his first name. Speaking at IDF headquarters in Tel Aviv, Uri said: "With the same number of people, we do much more."
These officials stressed that the systems were designed for all-out war. The IDF has used AI for years and has recently expanded the systems across its units, seeking to position itself as a global leader in autonomous weaponry. Some of the systems were built by Israeli defense contractors; others, such as the StarTrack border-surveillance cameras, were developed by the military itself. The cameras are trained on thousands of hours of footage to identify people and objects. Together, these tools form a sprawling digital architecture dedicated to analyzing vast quantities of drone and closed-circuit television footage, satellite imagery, electronic signals, online communications and other data for military use.
Processing this mass of information is the task of the Data Science and Artificial Intelligence Center, run by the military's Unit 8200. The unit sits within the intelligence directorate and is where many of Israel's technology millionaires completed their compulsory military service before founding successful companies. A spokesperson said a system developed by the center "changed the IDF's entire concept of targets."
Serious concerns
The secrecy surrounding the development of such tools has raised serious concerns. Some observers believe the gap between semi-autonomous systems and fully autonomous killing machines could close overnight. In that case, machines would be empowered to locate and strike targets on their own, with humans removed from decision-making entirely.
"Only a software change is needed to take them from semi-automation to full automation," said Catherine Connolly, an automated decision researcher at Stop Killer Robots.
Another worry is that the rapid adoption of AI is outpacing research into how these systems work internally. Many of the algorithms are developed by private companies and militaries that keep them proprietary, and critics stress the lack of transparency around how the algorithms reach their conclusions. The IDF acknowledges the problem, but says the outputs are carefully reviewed by soldiers and that its military AI systems keep records so that human operators can retrace their steps.
"Sometimes, when you introduce more complex AI components such as neural networks, understanding 'what is going on in its brain' is quite difficult. So I say I am satisfied with traceability, not explainability. In other words, I want to know what is critical to me in the process and to monitor it, even if I don't understand what every 'neuron' is doing," Colonel Uri said.
The Israeli Ministry of Defence declined to disclose how much it has invested in AI, and the military would not discuss specific defense contracts, although it confirmed that "Fire Factory" was developed by the Israeli defense contractor Rafael Advanced Defense Systems Ltd.
"We can assume that the United States, China and several other countries also have advanced systems in the AI field," said Liran Antebi, a senior researcher at Israel's Institute for National Security Studies. "But unlike Israel, as far as I know, they have never demonstrated the operational use and success of such systems."
In February this year, the Netherlands hosted the first global summit on responsible use of AI in the military domain. More than 60 countries, including China and the United States, attended and signed a call to action on responsible military use of AI; Israel was the only participating country that did not sign.
For now, there are no restrictions on deploying AI systems in military operations. Despite a decade of UN-sponsored negotiations, there is no international framework establishing who bears responsibility for civilian casualties, accidents or unintended escalation when a computer makes a mistake.