
AI and the Need for Transparency and Oversight

Joseph Carson from Thycotic talks to dotmagazine about the ‘A’ in AI and the thought that needs to be put into developing AI solutions.


Watch the 6-minute video here or on YouTube, or read the transcript below

Transcript

dotmagazine: How would you define AI?

Joseph Carson: I think what is important when we get into the AI discussion is that we need a reality check. We hear the term ‘AI,’ and when I talk to different industry experts, companies, and fields, we all have different interpretations of what the ‘A’ means. The common terminology, specifically in the media or on the news, is ‘artificial intelligence.’ In the industry, we tend to talk about ‘augmented intelligence.’ I’ve heard more recent variations, such as ‘artifact intelligence’ or ‘assisted intelligence.’

The reality check is that the majority of the systems and vendors that are actually providing these solutions today are not doing artificial intelligence. They’re actually doing augmented intelligence. They’re doing assisted intelligence. This is data that is being generated to help us humans make decisions. In simple terms, it’s advanced automation.

So, the reality check of most things we see in the industry today is that it’s an automated way of doing things. It’s helping humans make faster decisions, it’s helping us to take large amounts of data and put it through algorithms that help us to make a decision or suggest an outcome based on that information. So that’s a reality check.

But artificial intelligence is coming. It is being developed. We are moving closer to the true meaning of the term, which is self-learning – essentially self-sufficient, independent systems and algorithms. So, I think that the reality check is that we’re getting there in regards to intelligence, but – today – it’s more of an augmented or assisted intelligence.

dot: Does AI pose a security threat in terms of vulnerabilities? What kind of thought needs to be put into developing AI solutions for the future?

Carson: What we have to remember when we’re looking at AI – whatever the ‘A’ will mean – is that we have to look at it from the perspective that it’s a tool. It’s a mathematical algorithm, it’s a processing tool. And, like any tool, it can be used for good purposes – helping us with our lives, helping us make good decisions, and helping us make advancements in society and digital nations – but it can also be used for bad. It can be used to be harmful; it can be used for cyber attacks, for citizen profiling, for surveillance. We see a lot of things like facial recognition being used to interpret people’s emotions, feelings, and decisions. So it has both positive and negative sides. But at the end of the day, humans are creating it, and we humans don’t create things that are necessarily perfect. You know, if we were perfect at creating software, there would be no vulnerabilities.

And the same goes when we’re creating AI algorithms and cognitive systems: we will – by nature – potentially build flaws into them, knowingly or unknowingly, depending on who is doing it. So, yes, AI algorithms will have flaws. They will have vulnerabilities. They can be abused, and that abuse could actually have good outcomes and bad outcomes.

dot: Can you explain the importance of transparency and explainability in AI development?

Carson: Absolutely. This is where we start talking about building trust. This applies to how AI is being used – whether by governments, by industries, or by social media companies. Even political parties are using it for their own advantage: where do I need to make changes in the voting system in order to win? So ultimately, what it comes down to is that I’m not a big fan of companies using AI, because AI should be classified as a weapon. It is a weapon that can be used to change the outcomes of many things, and we should be classifying it like that.

So when someone is using AI for some type of automated decision making, they must be transparent about its uses and purposes. There has to be not just transparency, but also oversight. Explainability means: How did that AI system come to the decision it made? What was the purpose, and what were the inputs and outputs that helped it come to that decision? So explainability is really important, and transparency is really important, because those two things will help us build trust and understanding, which in turn allows AI to be used more effectively and, ultimately, for good purposes as well.

dot: How would the oversight work for this? Who should be responsible?

Carson: It definitely shouldn’t be the manufacturer. It should be an independent party or body formed for that purpose. Governments should have a separate working strategy for the oversight of AI use within societies. It should definitely be independent and separated from much of the government’s decision making, so that it allows for completely independent oversight. Almost like what you have with, for example, the food industry – you have a food industry regulator that is responsible for making sure that food is safe to consume and meets the right standards. So, yes, that oversight should also have standards built into the strategy.

dot: And is this oversight already developing today?

Carson: It is, actually. I just participated in the Tallinn Digital Summit, which was held recently (ed: September 2019) in Estonia. The Estonian government brought together many nations from around the world, along with their experts on digital societies. This time they specifically focused on AI, and they brought in all of the countries that either have strategies in place or are still building their strategies. They came to Estonia to have this massive discussion. A large part of the discussion was around standards and common terminologies. It was around the strategies. It was around trust and explainability, and where AI could be used, let’s say, effectively for good purposes. So, yes, there are discussions in that direction, but we are still very much in the early stages.

 

Joseph Carson, Chief Security Scientist & Advisory CISO at Thycotic, has more than 25 years of experience in enterprise security. He is a CISSP and an active member of the cybersecurity community, speaking at conferences globally. He’s a cybersecurity advisor to several governments, as well as to the critical infrastructure, financial, and maritime industries.


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.