With a history that traces back to World War Two, analytics isn’t a new discipline for government agencies. What has changed, however, is the access to data and computing power to process it rapidly.
Those forces are driving the application of artificial intelligence and machine learning across departments dealing in counterterrorism, border control, criminal justice and law enforcement.
“Both of those things have come together, along with the history of development with government and make it a really powerful time to be talking about [AI and ML] solutions for government,” said Steve Bennett, Director of Global Government Practice at SAS.
Bennett spent 12 years with the US Department of Homeland Security and was responsible for applying quantitative analysis to security decisions facing the country in the aftermath of September 11.
As an advocate of the use of analytics in government, he believes ML has the potential to help governments make faster decisions and better allocate limited operational resources when it comes to matters of national security.
Needles and pins
“The big problem that these agencies are trying to solve is finding the needle in the haystack. Whether it is a person you don’t want to let into the country because they are dangerous or a package that looks suspicious, you are trying to find that anomaly,” Bennett told Which-50.
Traditional approaches to this task have relied on following a set of rules to determine whether or not a particular person or thing is dangerous. Machine learning flips that on its head, discovering signals in the data to identify danger.
“Rather than starting with the rules, machine learning turns the whole process upside down and lets us start from data. And the data tells us what’s the best way to find that needle in a haystack,” Bennett said.
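The contrast Bennett describes can be sketched in code. The following is a minimal illustration, not anything from SAS or DHS: the "rule-based" function uses a threshold an analyst hard-codes up front, while the "learned" function derives its threshold from historical data (a simple z-score cutoff standing in for a trained ML model). All field names and numbers are invented for illustration.

```python
# Illustrative sketch only: rule-based vs data-driven anomaly flagging.
# Numbers and names are hypothetical, not drawn from any real screening system.

import statistics

def rule_based_flag(package_weight_kg: float) -> bool:
    """Analyst starts with the rule: a fixed, expert-chosen threshold."""
    return package_weight_kg > 30.0

def learned_flag(package_weight_kg: float, history: list[float]) -> bool:
    """Start from the data: flag anything more than 3 standard
    deviations from the historical mean (a toy stand-in for ML)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(package_weight_kg - mean) > 3 * stdev

# Historical observations "tell us" what normal looks like.
history = [2.1, 3.4, 2.8, 3.0, 2.5, 3.2, 2.9, 3.1, 2.7, 3.3]

print(rule_based_flag(45.0))        # True  - breaks the hand-written rule
print(learned_flag(45.0, history))  # True  - far outside the learned band
print(learned_flag(3.0, history))   # False - consistent with the data
```

The practical difference is that the learned threshold adapts as the historical data changes, whereas the hand-written rule must be revised by an analyst.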
Machine learning and artificial intelligence aren’t without their risks; too little data or poorly trained models can result in bad decisions.
“If you are going to implement machine learning or artificial intelligence, they are pretty data hungry. You need a lot of information to be able to train those models. If you don’t have that data you can do worse with those models than you would have with classical approaches,” Bennett said.
He noted that governments are often dealing with “high regret decisions”, which means the challenges often relate to the governance of the technology rather than the technology itself.
For its part, Bennett says SAS spends “quite a bit of energy thinking about the ethical and societal angle to artificial intelligence and machine learning.”
He argues government use of the technology will lag private enterprise due to the ethical implications of the high-stakes nature of the problems AI is trying to solve.
“Banking and retail use cases will be adopted a lot faster because they don’t have the same type of accountability that government agencies in western democracies face,” he said.
As well as ethical considerations around how models are trained, Bennett believes machine learning should never be the final decision maker in an action that involves a citizen.
“It should never be the case that the model alone is driving a potentially negative outcome for citizens,” he said.
A level of transparency is also required so that agencies can explain how an AI system arrived at a decision, and so that citizens can have confidence in what their government is doing. That may require legislative and policy changes, Bennett said.
“We spend a lot of time thinking about how to improve the explainability and transparency inside artificial intelligence and machine learning models. That consumes just as much of our time as the technology.”
On the issue of regulating AI to ensure it meets societal standards of ethical behaviour, Australian data expert Ellen Broad argues some kind of regulation will be necessary to prevent seriously harmful behaviour.