Better together: lifting the analyst’s burden with human-machine teaming
Robots and machines are typically employed in so-called dull, dirty, and dangerous jobs (the ‘three Ds’): relatively straightforward, process-driven tasks that require limited cognitive ability. It is only recently that artificial intelligence (AI) has matured to the point where truly complex tasks can be handed off to machines.
Indeed, current concepts of operation for advanced AI often see humans and machines operating together, in so-called human-machine teaming, or HMT, rather than machines working entirely independently.
In the military domain, HMT brings connotations of uncrewed wingmen: advanced aircraft or scaled-down armoured vehicles working alongside their crewed counterparts, often in cognitive swarms, acting as force multipliers, operating in the vanguard, and keeping their human teammates out of harm’s way.
While these examples garner much of the attention, the applications of HMT are myriad, and among the easiest to iterate and field quickly are AI-driven software applications.
Lifting the burden
Like its physical counterparts, AI software can unburden a human ‘teammate’ from labour-intensive, time-consuming tasks, enabling them to focus on higher-value activities. AI software is already being fielded across a range of sectors, and its utility is increasingly being recognised in the intelligence community.
However, the intelligence domain has an especially high barrier to acceptance. As a panel of experts noted at a recent forum on AI held by Adarga, keeping the human in the loop is of paramount importance, and the community recognises that AI will augment rather than replace the analyst.
So, what does that mean for the capabilities required of the intelligence analyst’s AI teammate? Of the three ‘Ds’, AI in the intelligence domain applies most readily to the ‘dull’ tasks that analysts face.
Across all sectors – commercial, military, and government – there is a recognition that AI software can be applied to one of the most pressing concerns for analysts: how to handle the exponential growth in the information available to them, whether from open sources, proprietary data, or in-house and classified holdings.
Compounding this challenge is the growing diversity of sources. With worthwhile information now available to analysts in a far greater variety of languages, it is essential that this material can be translated automatically.
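To make this concrete, the sketch below shows one way such translation might be automated with openly available tools. The language-detection library, the Helsinki-NLP OPUS-MT model family, and the model-naming pattern are assumptions for illustration only, not a description of any particular product’s pipeline.

```python
# A minimal sketch of automatic source translation, under the assumption
# that open-source OPUS-MT models are used; real pipelines will differ.
from langdetect import detect        # pip install langdetect
from transformers import pipeline    # pip install transformers

def translate_to_english(text: str) -> str:
    """Detect the source language, then translate the text into English."""
    lang = detect(text)              # e.g. 'fr', 'de', 'es'
    if lang == "en":
        return text                  # already English, nothing to do
    # OPUS-MT publishes one model per language pair; this naming pattern
    # covers many common pairs, though not all of them.
    translator = pipeline("translation", model=f"Helsinki-NLP/opus-mt-{lang}-en")
    return translator(text)[0]["translation_text"]

print(translate_to_english("Les analystes font face à un déluge d'informations."))
```

A production pipeline would need fall-backs for language pairs the chosen models do not cover, and batching for throughput, but the detect-then-translate principle holds.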
The likes of ChatGPT have shown the public the art of the possible when it comes to applying AI, in the form of large language models (LLMs), to search vast bodies of information and generate concise, informed answers to questions.
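The pattern that underpins this capability is often retrieval-augmented generation: first narrow a large corpus to the passages most relevant to a question, then instruct an LLM to answer only from those passages. The sketch below illustrates the idea with a toy corpus, crude keyword-overlap retrieval, and one possible model choice; all three are assumptions for illustration, and production systems typically use vector search over embeddings instead.

```python
# A minimal retrieval-augmented generation (RAG) sketch: retrieve the
# most relevant passages, then constrain the LLM to answer from them.
from openai import OpenAI  # pip install openai; any chat-capable LLM would do

corpus = {
    "doc-001": "Rail freight volumes through the region fell 12% in Q3.",
    "doc-002": "The port authority announced new container-handling capacity.",
    "doc-003": "A recipe for lentil soup: onions, garlic, red lentils...",
}

def retrieve(question: str, k: int = 2) -> dict[str, str]:
    """Score passages by word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return dict(scored[:k])

def answer(question: str) -> str:
    """Build a prompt from the retrieved passages and ask the model."""
    passages = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages.items())
    prompt = (f"Answer using ONLY the passages below, citing passage IDs.\n\n"
              f"{context}\n\nQuestion: {question}")
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("What happened to freight volumes in Q3?"))
```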
Being able to hand off the task of processing this information, and to do so in a fraction of the time, is a significant force multiplier for analysts and researchers. In the military domain it can enable commanders to get inside an adversary’s OODA (observe-orient-decide-act) loop; in the commercial sector it can provide a crucial competitive advantage.
A trusted teammate
However, as with any teammate given critical tasks, trust in their work is front of mind. The US Department of Defense has recognised trust as an impediment to the widespread use of LLMs across the US military, citing concerns about the reliability of outputs and the potential for ‘hallucinations’, where a model confidently generates plausible but false information.
In a recent blog, Adarga outlined its approach to overcoming some of these misgivings, including methods for preventing hallucinations and for introducing citations into outputs. Citations are of great importance to intelligence analysts, who must demonstrate how they arrived at their conclusions and show their sources when presenting reports that have drawn on AI.
This ‘information provenance’ is essential if decision makers are to trust the outputs of AI and buy in to its widespread application.
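One generic way to enforce such provenance, assuming answers cite passage IDs in square brackets as in the retrieval sketch above (this is not a description of Adarga’s implementation), is to verify every citation against the passages the model was actually shown and to reject answers that cite nothing at all.

```python
# A simple provenance check: every claim must cite a passage the model
# was actually given; uncited or mis-cited answers are flagged.
import re

def check_citations(answer: str, allowed_ids: set[str]) -> tuple[bool, set[str]]:
    """Return (ok, bad_ids). ok is False when the answer cites nothing,
    or cites a passage it was never shown - a hallmark of hallucination."""
    cited = set(re.findall(r"\[(doc-\d+)\]", answer))
    if not cited:
        return False, set()   # unsupported claims: send back for review
    bad_ids = cited - allowed_ids
    return not bad_ids, bad_ids

ok, bad = check_citations("Freight volumes fell 12% in Q3 [doc-001].",
                          allowed_ids={"doc-001", "doc-002"})
print(ok, bad)  # True set()
```

Checks like this do not prove an answer is correct, but they ensure each answer points back to sources an analyst can inspect.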
To really enhance the analyst’s capabilities, however, the AI teammate must be capable not just of the ‘heavy lifting’ of research but also of more complex tasks such as triage, support to targeting and the prioritisation of resources, formulating hypotheses for further examination, and gap analysis, such as corroborating classified intelligence.
AI can also be used to ‘enrich’ the intelligence product. For example, Adarga’s Vantage information intelligence software can generate thematic reports on a topic of interest that include differing perspectives from diverse sources, with this variation helping to corroborate findings and improve confidence.
It also features question-and-answer (Q&A) functionality that enables users to ask complex questions of curated data sets and of reports generated in the software, instantly providing insightful answers that include references to the sources used.
Sitting in the swivel chair
Stovepiped systems and an inability to reach across classification levels have long been a chokepoint in the intelligence cycle, and in many organisations they continue to necessitate a ‘swivel chair’ approach to the retrieval of information. Enabling the AI ‘teammate’ to access multiple repositories of information at different classification levels and seamlessly fuse, for example, OSINT with classified sources has the potential to be a significant force multiplier for analysts and to greatly speed up decision making. Indeed, it may be essential if the maximum benefit is to be gained.
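A hypothetical sketch of what such classification-aware fusion could look like follows: records from each repository carry a classification marking, results are fused into a single view, and anything above the analyst’s clearance is filtered out. The level names, their ordering, and the repositories are invented for illustration.

```python
# An illustrative model of fusing repositories at different
# classification levels, filtered to the analyst's clearance.
from dataclasses import dataclass

LEVELS = {"OPEN": 0, "OFFICIAL": 1, "SECRET": 2}  # invented ordering

@dataclass
class Record:
    source: str
    classification: str
    text: str

repositories = [
    Record("osint-feed", "OPEN", "Local media report unusual port activity."),
    Record("internal-db", "OFFICIAL", "Shipping manifests show flagged cargo."),
    Record("classified-store", "SECRET", "Collection confirms the vessel's route."),
]

def fused_search(clearance: str) -> list[Record]:
    """Return every record the analyst is cleared to see, across all stores."""
    ceiling = LEVELS[clearance]
    return [r for r in repositories if LEVELS[r.classification] <= ceiling]

for record in fused_search("OFFICIAL"):
    print(f"[{record.classification}] {record.source}: {record.text}")
```

The hard part in practice is not the filter but the accreditation: getting a single system trusted to touch all of those repositories at once.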
As with the introduction of new technologies and methods into any previously human-centric endeavour, integrating AI into the intelligence cycle and realising its full benefits will be a process of gradual acceptance and cultural change, with changes in training and tradecraft as important as the software itself. The onus is now on the developers of AI software, and on those bringing it into service, to ensure systems meet the exacting demands of the intelligence community.
Crucial here will be getting the technology into the hands of users for evaluation and feedback, and then rapidly iterating solutions to meet evolving requirements. Close collaboration between customer and industry is essential, as is an understanding that the pace of development and innovation in AI is like no other, and that existing procurement and fielding practices may not be suitable.
The stakes are high in an increasingly dynamic geopolitical landscape. While you may forgive ChatGPT an erroneous dinner recipe, those making critical calls on matters of national security must be assured of the accuracy of the intelligence reports on which they base potentially life-and-death decisions. That is not a simple task, and not one that just any system can meet.
To learn more about how Adarga’s Vantage information intelligence software is acting as a force multiplier for users in the military, national security, and commercial sectors, click here.
By Charlie Maconochie
Charlie is the Public Sector Director at Adarga. Adarga is one of the UK’s leading developers of artificial intelligence software for Defence, National Security, and the public sector.