What Role Could AI Play in Intelligence Analysis?

 

The Joint Intelligence Committee (JIC) sits at the pinnacle of British strategic intelligence assessment. The body considers intelligence assessments produced by the Joint Intelligence Organisation (JIO), which is staffed by experts drawn from across Whitehall who specialise in fusing analysis from a diverse range of intelligence material. The main output, the so-called ‘JIC paper’, is a high-level paper written for the Prime Minister and senior members of government setting out the intelligence community’s assessment of global situations.

JIC papers are vital documents whose findings can, and do, shape the decisions made by Cabinet on national policy. To succeed they must be accurate, timely, and free from bias, presenting analysis of complex issues in a readable way. They use precise language and carefully drafted text to set out the JIO’s (and by extension the wider intelligence community’s) understanding of an issue. As anyone involved in the crafting of such papers can attest, the process requires extensive debate over individual words and over intentions inferred from incomplete or uncorroborated reports. The final paper must reflect agreement between all stakeholders and represents the definitive UK Government position on an intelligence issue – unlike the US system, where different agencies may publish differing views, the UK system produces a single report that fuses findings through consensus.

This week saw the announcement that Madeleine Alessandri has been appointed the next Chair of the JIC, only the second time that a woman has held this vital role (the first was Dame Pauline Neville-Jones in the 1990s). Every other holder of the office has been a white male, usually having followed a very traditional career in the diplomatic service and national security. This is not about ‘wokery’ or other such nonsense, but it is worth reflecting that the system produces people whose values, culture, insight and lived experiences reflect a very specific set of circumstances. There is a lack of diversity of thinking: a different family heritage or background can change how data is interpreted and what cultural understanding is drawn from it. While some will see no problem with this, the question we struggle to answer is ‘what is being missed by not having people in positions of power who may see, and interpret, information differently?’

This lack of diversity stretches more deeply into the intelligence analyst community. Consider the challenge of securing ‘Developed Vetting’ clearance, an in-depth assessment of an individual, the potential risks they may pose (financial, familial, personal and so on) and their suitability to have access to the most sensitive intelligence material. The process is lengthy (often a year or more), deeply intrusive, and extended only to people with strong links to the UK (e.g. British nationals, usually with British parents). The challenge is that this significantly reduces the diversity of applicants, in turn reducing the diversity of thinking and challenge within the intelligence community. If everyone round the table is a white individual with a good university education and no real experience of life in far-off places, are they able to understand the subtleties and nuances of the nation they are assessing? Would a more diverse workforce, drawing on people with deeper cultural and family links to a region or nation, help provide the counter-analysis or challenge that could produce more accurate predictions and policy assessments?

One possible solution to this challenge is to ask whether AI could be used to produce intelligence assessments that reflect either objective findings or a different interpretation of the data. The rapid rise of AI and its ability to learn quickly could be a game-changing opportunity to revisit the fundamentals of intelligence analysis and weigh the input of machines against that of people. The premise is simple: it may be possible to create a suitably compartmented, classified AI system with access to databases of current intelligence material, historical papers and wider open-source information, able to produce analytical papers in the style of a JIC paper answering the same questions that human analysts are addressing.

Used appropriately, such a system could over time provide emotion- and bias-free analysis, serving either a challenge function – enabling the JIC to ask why the AI was reaching different conclusions – or reaffirming the findings of the human analysts. As the system matures, draft papers might be presented ‘blind’ to the JIC, without stating which was written by the AI and which by a human, to reduce the potential for bias.

It may also be possible to train the AI to look at problems through a cultural lens, producing analytical papers that reflect how different nations might assess a situation – for example, a paper assessing how Russia or Iran may act in a given scenario, written from the perspective of a Russian analyst. This opens doors to some fascinating ‘red team’ scenarios in which AI can be used to guess and second-guess how different nations could react. Over time, and with learning, these predictions will only become more accurate.

The interesting moral question concerns the role of the human in the process. Intelligence analysis requires judgements on issues that can commit nations to war, or predictions that, if made incorrectly, could cause massive long-term damage to a nation’s prospects. These products are written by humans who think and assess situations as humans do, and who, even with the best of intentions, are still subject to biases and mistakes. But humans in the chain can explain their thinking and logic, and have it changed or influenced by the views of others during the drafting process. By contrast, AI is coldly efficient and can do in seconds what takes humans days. Given the potential repercussions of intelligence reporting, is it appropriate to take humans out of the loop, or should there always be a human in the intelligence chain? With increased use of AI tools we may see intelligence analysts rely on them to draft papers, test hypotheses and analyse disparate material to produce credible assessments. But it seems unlikely that any nation will want to rely solely on AI as its means of national intelligence assessment, for fear of taking humans entirely out of the equation.

One interesting opportunity that AI opens up is a potential increase in national intelligence analytical capability. Most nations have small intelligence staffs and finite collection capability. AI may function as a means of uplifting this, particularly by exploiting the vast amounts of open-source intelligence available, giving small staffs far more assessment capability than has previously been possible. Will nations with limited budgets rely on AI to conduct their intelligence assessments for them, and will humans increasingly be forced out of the loop?

Look ahead a few years and the question may be why the UK needs a human intelligence analyst capability at all, and whether the skills required to get things right are better placed in the hands of an objective AI that is not subject to the same biases as human authors. It is not beyond the realm of possibility that, as AI capabilities increase, the role of the intelligence analyst becomes less that of collator and analyser and more that of interpreting AI output and checking it for accuracy against what is known. There may need to be difficult decisions on the role of AI versus the role of humans in the intelligence cycle, particularly at the tactical level, where AI could find itself producing intelligence that sits within the ‘kill chain’ of humans taking targeting decisions based on reporting that no human has produced. While there may be no such thing as ‘killer robots’, are we on the verge of ‘killer intelligence analysts’ providing advice that could see humans killed?

 
