April 2019 - Cybersecurity | Artificial Intelligence

Security Risk Analysis and AI Support

Sebastian Kurowski from the Fraunhofer Institute for Industrial Engineering looks at how artificial intelligence can be used to optimize IT security risk analysis in companies, offering relevant support for IT specialists and for strategic decision-making.

Risk analysis – The ideal and the reality

IT risk analysis in organizations is unfortunately very often performed on the fly. The reason for this is not that people don’t know better; it is that doing risk analysis properly means there is a large number of things to consider. First of all, in security, a business risk only materializes if an attacker is successful in following a certain attack path.

This can involve just one vulnerability being exploited, such as a home page being taken down due to a denial-of-service vulnerability, or a customer database being leaked from a web shop due to an SQL injection vulnerability. However, other attacks require more steps to be followed. Examples of these are the attacks that we have seen on Tesco, and the attacks that we have seen on building management systems. Just discussing the number of steps involved in even one such attack could easily fill a complete research paper.

If we want to discuss all of the possible attacks – which is what a security expert needs to do to assess the risk – then, in reality, we often end up just having a quick talk about things and never get around to documenting anything.

But what if we want this analysis to be accountable? What if we want to learn from it in the future, if we want to leverage the process or even optimize it in order to make it better and more cost-effective? At the end of the day, we are left with no choice: we have to document the process as well. And this simply takes a lot of time.

There are several methodologies that are often used as a starting point, such as ISO 27005 and NIST SP 800-30, which provide well-understood and sophisticated security risk analysis frameworks. However, many of them leave out concrete steps and guidance on how to perform a risk analysis. The consequence of this is that risk analysis is nowadays most often done in a manner where, in the best-case scenario, security experts from the organization meet up with the people who are responsible for a certain application, system, or infrastructure, and they discuss the possible risks, the likelihood of those risks, and the impact of those risks. This is then put into an Excel sheet showing all the different risks that are possible.
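
As an illustration of the kind of output such a meeting typically produces, here is a minimal sketch of a spreadsheet-style risk register using the common "score = likelihood x impact" convention. The entries and the 1-5 scales are invented purely for illustration.

```python
# Sketch: a spreadsheet-style risk register as it often results from risk workshops.
# Entries and the 1-5 scales are illustrative; real registers vary by organization.

risk_register = [
    # (risk description,                  likelihood 1-5, impact 1-5)
    ("Web shop customer database leaked",  3,              5),
    ("Public website unavailable (DoS)",   4,              2),
    ("Ransomware on office workstations",  3,              4),
]

for description, likelihood, impact in sorted(
        risk_register, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{description:40} likelihood={likelihood} impact={impact} "
          f"score={likelihood * impact}")
```

A register like this records what can happen, but says nothing about how it happens – which is exactly the limitation discussed next.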

This approach usually ends up with risk entries that lump together various different ways a risk could materialize. Talking about what can happen is not the same as talking about how it happens. But the latter is critical for taking action, deciding on budgets, and so on. The amount of detail that goes into this question is what creates accountability in the analysis. The approach usually taken in risk meetings, however, does not result in an accountable risk analysis. The effort required to discuss and document so many details is so considerable that including the question of how a risk can happen would quickly render the analysis infeasible.

There are several spreadsheet and documentation-focused tools but, apart from these, there is nothing that is really used to support risk analysis. These tools can decrease the documentation effort a little, but usually not by enough. Then there are several approaches that conduct an automated analysis, but these are mostly restricted to technical vulnerabilities. Attacks that involve social or socio-technical attack steps, such as those on Tesco, those involving the exploitation of building management systems, or some of the currently ongoing LockerGoga attacks, cannot be fully considered by these approaches. At some point, a social actor is required for a threat to materialize – for example, a person who executes malware, opens an email attachment, or picks up a USB stick, and so becomes part of an attack.

There is obviously no sufficient tool support beyond documentation, and even the documentation tools do not remove the need for an associated, time-consuming, in-depth discussion of the relationships between exploits, vulnerabilities, threats, threat agents, the security risks, and the respective business risks that materialize as a consequence.

Artificial intelligence and its advantages in terms of process support

AI can potentially provide this kind of support. An AI can learn the mundane tasks that are time- and resource-intensive and are usually handled as part of a security expert’s work. For instance, in the end, analysis involves deciding on different risk-addressing strategies. There is a pattern within these decisions. For example, there are numerous organizations for which an insurance-based risk transfer would be a much more viable option than re-engineering the whole IT network. Some organizations may prefer transferring the risk to others. Or it could work the other way around – for instance, for companies like Google, which have very detail-driven management of their infrastructure, it might be more cost-effective and more viable to simply adapt the infrastructure.

This pattern is not only influenced by the question of costs, but also by the question of how important it is to avoid a certain security leak. What are the compliance requirements, and how important is certain data for the company? For some organizations, confidentiality may not be quite as business-critical as for others. For these, the loss of customer data may merely raise a concern about a brief dip in their public standing. For other organizations, losing customer data cannot be reconciled with the company’s internal moral compass.

These are patterns that cannot be formalized. We can’t put them into numbers, but we can learn from them. Wherever you have relationships between non-quantifiable things, this is the point where AI becomes meaningful, because an artificial intelligence could simply learn what the preference of that organization is.

Or it could start consolidating information from different places and interpret which pieces of information belong together. This is also a very mundane task, but you need to have at least some knowledge to carry it out. In this way, you can already free up a lot of the resources required for risk analysis. So I would say the advantage of AI-based solutions is process support.

Artificial intelligence and the materialization of business risks

There is a range of AI-based solutions available for dealing with vulnerability assessments, but not really anything when it comes to the materialization of business risks. The reason for this is that a lot can be done without AI. Let us take the previous example: knowing about an organization’s priorities and preferences. If I worked for a bank, for instance, it would only take me a couple of months to find out which internal customer has which preference. But it would cost me at least twelve thousand Euros to purchase the type of infrastructure which would enable an AI to learn these preferences. 

So, although it could save me a lot of time in the long run, before the artificial intelligence could even function, I would have to put quite a lot of work and a significant amount of resources into training it. This wouldn’t make sense for an individual organization. It would only make sense if you were able to use a system which can learn about the organization with very little training, because it already has a quite knowledgeable baseline. At that point, a purchase would make sense, but along the road to this destination, there is not a lot to gain.

The SMART-I AI project

The SMART-I collaboration between DFKI, the eco Association, Neofonie, and Fraunhofer will attempt to provide a complete integrated risk analysis approach that can work in heavily resource-constrained environments.  

With regard to information security in organizations, we can quite easily distinguish between the reactive and the proactive information security worlds. In the reactive information security world, I want to know what is happening right now. The things that we care about in this world are our vulnerabilities and our indicators of compromise. When I start a day as a reactive information security expert, I will usually begin by looking at the indicators of compromise. These provide me with MD5 hashes of viruses or remote access Trojans, email addresses I should look out for, maybe IP addresses that are contacted as part of an ongoing attack – in short, indicators that a certain attack is already in progress. I can use these indicators of compromise to check my network and my traffic, and look for those patterns.
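
To make this concrete, the following is a minimal sketch of what such pattern matching could look like. It is not part of SMART-I, and the indicator values and log format are all hypothetical.

```python
# Sketch: matching indicators of compromise (IoCs) against local log entries.
# The indicator values and the log format are hypothetical.

iocs = {
    "md5":   {"44d88612fea8a8f36de82e1278abb02f"},  # known malware hashes
    "email": {"invoice@suspicious-example.com"},     # phishing senders
    "ip":    {"203.0.113.42", "198.51.100.7"},       # command-and-control addresses
}

log_entries = [
    {"type": "mail", "sender": "invoice@suspicious-example.com", "host": "pc-17"},
    {"type": "dns",  "dest_ip": "198.51.100.7",                  "host": "pc-03"},
    {"type": "file", "md5": "0cc175b9c0f1b6a831c399e269772661",  "host": "srv-01"},
]

def find_matches(entries, iocs):
    """Return log entries that contain any known indicator of compromise."""
    return [
        entry for entry in entries
        if entry.get("md5") in iocs["md5"]
        or entry.get("sender") in iocs["email"]
        or entry.get("dest_ip") in iocs["ip"]
    ]

for hit in find_matches(log_entries, iocs):
    print("Possible compromise on", hit["host"], "->", hit)
```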

Indicators of compromise do not come from just one source. They can come from different providers from all over Europe, or even worldwide. You may have many more sources for indicators of compromise if you are, for instance, part of the critical infrastructure domain. So every day starts with looking at those different streams and manually consolidating the indicators of compromise.

So what we want to do as a first part of the SMART-I project is to consolidate the indicators of compromise and vulnerabilities, and provide a dashboard that shows one single, semantically consolidated indicator-of-compromise stream, where the reactive personnel can see which relevant attacks are going on right now and can use this information.
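
As an illustration of what consolidating several streams can mean in the simplest case, here is a sketch that merges indicators from multiple hypothetical feeds, de-duplicates them by value, and records which providers reported each one. SMART-I's semantic consolidation goes further than this, for instance by interpreting which indicators belong together.

```python
# Sketch: merging IoC feeds from several providers into one de-duplicated stream.
# Provider names and indicator values are hypothetical.

feeds = {
    "provider-a": [("ip", "203.0.113.42"), ("md5", "44d88612fea8a8f36de82e1278abb02f")],
    "provider-b": [("ip", "203.0.113.42"), ("email", "invoice@suspicious-example.com")],
    "provider-c": [("md5", "44d88612fea8a8f36de82e1278abb02f")],
}

def consolidate(feeds):
    """Merge all feeds into one stream keyed by (type, value), with provenance."""
    merged = {}
    for provider, indicators in feeds.items():
        for ioc_type, value in indicators:
            merged.setdefault((ioc_type, value), set()).add(provider)
    return merged

for (ioc_type, value), providers in consolidate(feeds).items():
    print(f"{ioc_type:5} {value}  reported by: {', '.join(sorted(providers))}")
```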

Now, the reactive and the proactive parts usually, and unfortunately, do not operate very closely with each other. SMART-I takes this knowledge from the reactive world and puts it into a knowledge model of attacks. So we basically extract what can happen. This becomes a gradually growing knowledge base for the organization.

SMART-I: Not just reactive, also proactive

If application owners now put their application models into SMART-I, the SMART-I system can use this attack knowledge – first semantically consolidated, then extracted – to see which attacks could pose a concrete threat to their application and, as a consequence, which business risks could materialize. This can then be used to patch the applications to make them less vulnerable, or you could think about redesigning your application. That’s where we move into the proactive security world of the organization.
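
A very reduced sketch of what this matching could look like is shown below. The component names, attack paths, and data structures are invented for illustration; the actual SMART-I knowledge model is semantic and considerably richer.

```python
# Sketch: checking which known attack paths apply to a modelled application.
# The application model and the attack knowledge base are hypothetical.

application = {
    "name": "web-shop",
    "components": {"nginx", "customer-db", "mail-client"},
    "business_risks": {"customer-db": "loss of customer data"},
}

attack_paths = [
    {"name": "SQL injection", "requires": {"nginx", "customer-db"},
     "targets": "customer-db"},
    {"name": "phishing with macro malware", "requires": {"mail-client"},
     "targets": "mail-client"},
]

def applicable_attacks(app, paths):
    """Yield attack paths whose required components all exist in the application."""
    for path in paths:
        if path["requires"] <= app["components"]:
            risk = app["business_risks"].get(path["targets"], "no business risk modelled")
            yield path["name"], risk

for attack, risk in applicable_attacks(application, attack_paths):
    print(f"{application['name']}: threatened by '{attack}' -> {risk}")
```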

Here the second artificial intelligence part of SMART-I comes in – an artificial intelligence that learns what the organization or certain users like to do in order to address a certain risk. It could be, for instance, that you have a production plant with a lot of unauthenticated traffic, because many of those vendor-specific protocols are unauthenticated. This is not necessarily something that you can change in your plant.

The second AI of SMART-I will learn about such constraints. It will learn what the technical and the social preferences are: whether you prefer security countermeasures that are mostly based upon the goodwill and contribution of the users, whether you are in a very trusted environment when it comes to users, or what budget is available for addressing risk. By knowing what the constraints are, and what the preferences of certain users are within those constraints, SMART-I can then decide which countermeasure would be best, and provide a prioritization based upon which countermeasure would most effectively avoid a certain risk materialization while staying within the preferences of the organization.
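
In very simplified terms, the selection can be imagined as scoring each candidate countermeasure against hard constraints and learned preferences, roughly as in the following sketch. The countermeasures, preference values, and scoring rule are purely illustrative placeholders; SMART-I learns these preferences rather than having them hand-coded.

```python
# Sketch: ranking countermeasures against organizational constraints and preferences.
# All countermeasures, preferences, and weights are illustrative placeholders.

preferences = {
    "budget": 20_000,               # available budget in euros
    "user_goodwill": 0.3,           # how much we can rely on user cooperation (0..1)
    "can_change_protocols": False,  # e.g. unauthenticated plant protocols are fixed
}

countermeasures = [
    {"name": "network segmentation",   "cost": 15_000, "effectiveness": 0.8,
     "needs_user_goodwill": 0.0, "changes_protocols": False},
    {"name": "awareness training",     "cost": 5_000,  "effectiveness": 0.4,
     "needs_user_goodwill": 0.9, "changes_protocols": False},
    {"name": "authenticated fieldbus", "cost": 40_000, "effectiveness": 0.9,
     "needs_user_goodwill": 0.0, "changes_protocols": True},
]

def score(cm, prefs):
    """Score a countermeasure, or return None if it violates a hard constraint."""
    if cm["cost"] > prefs["budget"]:
        return None
    if cm["changes_protocols"] and not prefs["can_change_protocols"]:
        return None
    # Penalize measures that rely on more user goodwill than the organization has.
    goodwill_penalty = max(0.0, cm["needs_user_goodwill"] - prefs["user_goodwill"])
    return cm["effectiveness"] - goodwill_penalty

ranked = [cm for cm in countermeasures if score(cm, preferences) is not None]
ranked.sort(key=lambda cm: score(cm, preferences), reverse=True)
for cm in ranked:
    print(f"{cm['name']}: score {score(cm, preferences):.2f}")
```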

The project is planned to run for three years, if we receive the grant from the Ministry of Education and Research that we have applied for. In the first year, we are hoping to have achieved a first version of the consolidation of the indicators of compromise and the vulnerabilities: so the dashboard, basically, for the reactive security world. By around the middle of the project, we are hoping to have achieved a completely automated risk assessment based on this. And then, by the end of the project, we are hoping to have achieved a fully integrated system that learns the preferences of an organization, parses indicators of compromise and vulnerabilities, consolidates them, informs the reactive personnel, and then provides the proactive personnel with the capability to do a very detailed analysis of the business risks. It would further provide the application owners with appropriate countermeasures based on the learned preferences of the organization. So what we will basically have by the end of the third year is one integrated process from reactive to proactive security, and one systematic and accountable approach to risk assessment.

The potential benefits of a fully automated, accountable risk analysis for enterprises

There are several major benefits to automated risk analysis. The first is the cost: you can do it more frequently without it heavily impacting your resources. Currently, risk analysis is carried out mostly when implementing a system or an application, or perhaps for a recertification, but that’s basically it. With an automated system like SMART-I, it can essentially be done at the click of a button. You could do it daily if you wanted to.

Take, for instance, the WannaCry ransomware. If you try to look into the future, it is always hard to anticipate an attack like this. But the exploit that was used by WannaCry was already known in the reactive world. With SMART-I, you could basically just take this exploit and see which systems would be affected, how the exploit can be used to execute code, and where you have to patch first. And so you would be vastly better prepared for what might happen in the future.
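
A minimal sketch of what that lookup could look like is given below. The asset inventory and patch states are hypothetical; in practice, the affected configurations and the fixing patch (here the EternalBlue exploit addressed by MS17-010) would come from the consolidated vulnerability and indicator streams.

```python
# Sketch: checking a known exploit against an asset inventory to prioritize patching.
# Hosts and patch states are hypothetical placeholders.

exploit = {
    "name": "EternalBlue (used by WannaCry)",
    "affects": {"windows-7", "windows-server-2008"},
    "fixed_by_patch": "MS17-010",
}

inventory = [
    {"host": "hr-laptop-01", "os": "windows-7",           "patches": set()},
    {"host": "file-srv-02",  "os": "windows-server-2008", "patches": {"MS17-010"}},
    {"host": "plant-hmi-03", "os": "windows-7",           "patches": set()},
    {"host": "dev-box-04",   "os": "ubuntu-18.04",        "patches": set()},
]

def exposed_hosts(exploit, inventory):
    """Return hosts running an affected OS that lack the fixing patch."""
    return [
        asset for asset in inventory
        if asset["os"] in exploit["affects"]
        and exploit["fixed_by_patch"] not in asset["patches"]
    ]

for asset in exposed_hosts(exploit, inventory):
    print(f"Patch first: {asset['host']} ({asset['os']})")
```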

And, of course, when it’s systematic and when it’s more detailed, it’s accountable. When it’s accountable, you can know what went wrong. If something happens, you know where you missed something. And analyzing in a systematic way which allows for reproducibility, comparability, and accountability of the analytic steps is, of course, the bedrock for any process improvement.

Might AI replace IT personnel in the company? An unlikely outcome

Could this system end up replacing IT personnel? It would undoubtedly free up a lot of resources. We suspect that the resource savings for the risk assessment and analysis processes will be between one third and one half of what is currently required to do a risk analysis. But (and here is the big but), I think that the consultancy that is part and parcel of this kind of system still provides more than enough work for security experts. This ranges from helping application owners to model their applications accordingly, to discussing the probability of a certain attack step taking place when extracting vulnerability or attack knowledge from vulnerabilities and indicators of compromise, to fine-tuning and working with the countermeasure selection, and from there working towards generic policies and security strategy. Risk analysis could be conducted more often. Organizations would be better prepared for potential threats. And all of this would be accompanied by a higher level of quality in the process itself.

So I think that kind of personnel will not be reduced in an organization. On the contrary, I think they will be able to work with the system, with the process, and that maybe even additional personnel will be needed – because better quality may also reveal that too little has been done so far and that more action is required.

Humans are by no means dispensable

Put simply, I think humans will always be needed alongside AI in information security. In 1931, Kurt Gödel showed that a formal system that is powerful enough to describe and provide an understanding of a large enough object in the real world would be either incomplete or inconsistent. That’s Kurt Gödel’s incompleteness theorem. Now, if you take, for instance, a neural network, it works because it can live with inconsistencies. But those inconsistencies are not accountable, and if such systems are unsupervised, if they are working completely alone beyond the point of decision support, I think they will just start to do strange things – not dangerous by any means, but lacking in sense or value. It would be like throwing randomness on randomness.

I think humans need to be involved because we ourselves have inconsistent and incomplete thinking. But we are accountable for that and we can act on that. We are accountable in a legal fashion, but also we are accountable in the sense that I can explain to you why I did something. Artificial intelligence can’t do that because you need an additional layer of conceptualization in order to provide the reasoning.

As a result, I think AI will have a huge impact on decision support, it will save a lot of resources – but it will never be a stand-alone, completely isolated, autonomous entity, as it is so often portrayed.

 

Sebastian Kurowski has been affiliated with Fraunhofer and the University of Stuttgart since 2010. He received a Master of Science in Information Systems from the University of Hohenheim in 2013 and has held a CISSP certification from ISC² since 2017. His research interests lie in the economic and social aspects of information security governance in organizations. His main research topic focuses on the social, economic, and technical integration of countermeasures in automotive organizations.


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.