By Tan Shong Ye, IT Risk Assurance Leader, PwC Singapore
Although the concept of artificial intelligence (AI) is not new, the way it can be integrated and leveraged by businesses is. In a 2017 PwC survey on attitudes towards AI, more than 70 percent of CEOs surveyed said they believe AI will be the business advantage of the future. Globally, however, progress on AI has been mixed, with many organizations still at the beginning of their journey.
Businesses want to unlock the potential of AI, but there are many considerations to understand and take into account. In the same survey, 67 percent of CEOs indicated that AI will have a negative impact on stakeholder trust. Businesses need to address and manage this as they embark on their AI journeys; failure to do so could prove costly in both a financial and a reputational sense, with potential long-term damage.
Analysing AI Risks
By conducting a thorough assessment of the risks of AI, organizations can develop best practices, minimise risks and encourage responsible adoption. Before beginning this journey, here are some common risks to consider:
1) Beware of Biases
Several countries have gone live with predictive AI systems that help police officers assess the risk of suspects re-offending. Although these systems claim close to 98 percent accuracy, there are concerns that the algorithms could carry inherent biases based on race, gender and appearance, arising from the conscious or unconscious biases of the AI implementer, or from the design or testing of the AI system.
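One way to surface such biases is to audit the model's outputs across demographic groups. The sketch below is purely illustrative, with made-up data, group labels and a simple "flag rate" comparison; real bias audits use richer fairness metrics.

```python
# Hypothetical sketch: auditing a risk-scoring model's outputs for group bias.
# The data, group labels and disparity measure are illustrative only.
from collections import defaultdict

def flag_rate_by_group(predictions):
    """predictions: list of (group, flagged) pairs -> flag rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in predictions:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def disparity(rates):
    """Gap between the highest and lowest per-group flag rates."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy audit sample: (demographic group, model flagged as high risk?)
preds = [("A", True), ("A", False), ("A", False), ("A", False),
         ("B", True), ("B", True), ("B", True), ("B", False)]
rates = flag_rate_by_group(preds)
print(rates)             # group B is flagged far more often than group A
print(disparity(rates))  # a large gap like this should trigger a review
```

A large, persistent disparity does not prove bias on its own, but it tells the implementer where to look before the system goes live.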
2) Black Box
AI models are often perceived as a “black box” in which decisions are difficult to understand or accept. In fact, AI is often self-learning from its new “experiences” or inputs. For example, an adaptive fraud detection AI programme will flag “potential frauds” without giving the reason behind them, making its decisions hard to trust.
3) Myopic Objectives
AI models are mostly goal-based, with a specific objective to achieve – for example, to win a chess game – which needs to be programmed. Hence, most AI models are myopic: they are focused on one pre-defined objective without considering the broader context. An AI algorithm does not have a “gut check” and therefore acts according to its defined parameters and objectives. For example, if an autonomous car must choose between injuring its passenger or a pedestrian, it’s unclear how (or why) the algorithm chooses one over the other.
4) Adversarial Attacks
‘Adversarial attacks’ – in essence, hacking – are important concerns when it comes to machine learning, especially deep learning models. This can be mitigated by simulating adversarial attacks and running creative hacking tests on your own models, then retraining them to recognise such attacks. In addition, during the design phase, specialist software can be developed to ‘immunise’ your models against such attacks.
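To make the idea concrete, the sketch below simulates one well-known attack technique, the fast gradient sign method (FGSM), against a toy logistic-regression "fraud model". The weights, inputs and epsilon value are entirely made up; real attacks target far larger models, but the principle is the same: tiny, targeted input changes flip the model's decision.

```python
# Illustrative FGSM-style adversarial perturbation against a toy
# logistic-regression fraud model. All numbers here are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y_true, eps):
    """Nudge x in the direction that most increases the model's loss."""
    grad_x = (sigmoid(w @ x) - y_true) * w  # d(log-loss)/dx for a logistic model
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.5, 0.5])   # toy model weights
x = np.array([1.0, 0.2, 0.4])    # a transaction the model flags as fraud
print(sigmoid(w @ x))            # high score: confidently "fraud"

x_adv = fgsm_perturb(x, w, y_true=1.0, eps=0.5)
print(sigmoid(w @ x_adv))        # score collapses: the attack evades detection
```

Running this kind of simulated attack against your own models, then adding the adversarial inputs to the training data, is one practical form of the "immunisation" described above.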
Six Steps for Managing AI Risks
As the saying goes, nothing ventured, nothing gained. What we often see in successful organizations is their ability to recognise risks and learn how to mitigate them. They also bear in mind the following:
1. Define Out-of-bound Zones
Guard against human error by putting in controls that set boundaries or limits for the AI system: for example, no access to confidential or sensitive data, or no permission to take certain actions. This prevents AI from being tricked into providing malicious attackers with sensitive corporate information or customer personal data, or from carrying out hazardous activities.
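A minimal sketch of such out-of-bound controls is a wrapper that checks every requested action against an allow-list and a set of blocked data scopes before executing it. The action names and scope labels below are hypothetical.

```python
# Minimal sketch of "out-of-bound zone" controls: every action an AI system
# requests is checked against explicit boundaries before it runs.
# Action names and data scopes are hypothetical examples.
ALLOWED_ACTIONS = {"read_public_report", "send_summary_email"}
BLOCKED_DATA_SCOPES = {"customer_pii", "payroll"}

class OutOfBoundsError(Exception):
    pass

def guarded_execute(action, data_scope, handler):
    """Refuse any action or data scope outside the defined boundaries."""
    if action not in ALLOWED_ACTIONS:
        raise OutOfBoundsError(f"action '{action}' is not permitted")
    if data_scope in BLOCKED_DATA_SCOPES:
        raise OutOfBoundsError(f"data scope '{data_scope}' is off-limits")
    return handler()

# A permitted action succeeds; a request touching sensitive data is refused.
print(guarded_execute("read_public_report", "public", lambda: "ok"))
try:
    guarded_execute("read_public_report", "customer_pii", lambda: "leak")
except OutOfBoundsError as e:
    print("blocked:", e)
```

The key design point is that the boundary check sits outside the AI itself, so even a manipulated model cannot step beyond the limits set by humans.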
2. Rigorous and Creative Testing
Traditional testing methods are too linear and don’t take into account that AI systems learn and behave differently over time. To test an AI system, the key is to adopt a different, more creative methodology – similar to how a security penetration tester would try to hack into a network.
3. Continuous Log Analysis
The whole point of AI is to allow technology to take over certain responsibilities within a well-defined scope, so that humans can focus on more strategic areas of work and responsibility. As such, we only know what the AI has performed through activity logs. It is therefore important to have this detailed data available so that real-time analytics can raise alerts when the intelligent system is suspected of making mistakes. Logs are also critical for investigations when things go wrong.
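The alerting idea above can be sketched very simply: scan the decision log as a stream and raise an alert whenever too many suspect decisions fall within a recent window. The window size, threshold and toy log data below are illustrative assumptions.

```python
# Hedged sketch of continuous log analysis: alert when the number of
# suspect AI decisions in a rolling window exceeds a threshold.
# Window size, threshold and the sample stream are made-up examples.
from collections import deque

def monitor(events, window=5, max_suspect=2):
    """Return the indices at which an alert should fire."""
    recent = deque(maxlen=window)  # rolling window over the log stream
    alerts = []
    for i, suspect in enumerate(events):
        recent.append(suspect)
        if sum(recent) > max_suspect:
            alerts.append(i)
    return alerts

# Toy log stream: True = the decision looked suspect in review.
stream = [False, False, True, True, True, False, False, False]
print(monitor(stream))  # alert indices where the window holds too many suspects
```

In practice the same pattern runs against real-time log pipelines, but the principle holds: the logs must be detailed enough that "suspect" can be detected automatically.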
4. Have Clear Process and Operational Governance and Controls
Just as responsibility cannot be delegated to subordinates, it cannot be pushed to AI. Senior management is ultimately responsible for any activities and tasks performed by AI, and will likely be held accountable should the AI perform any illegal actions.
5. Define Metrics for Success and Failure
Due to the nature of machine learning, it is necessary to do regular reviews to assess the AI’s performance. It is often helpful to have metrics to identify weak spots and areas for improvement. It is also important to have clear processes and procedures for escalation of issues and for rectification action to be taken.
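As a concrete illustration, a regular review might compute precision and recall over a sample of the AI's decisions and escalate when either falls below a target. The sample data and the 0.8 target below are hypothetical.

```python
# Illustrative sketch of a periodic performance review: precision and recall
# over a labelled review sample, with a simple escalation rule.
# The sample pairs and the 0.8 recall target are made-up assumptions.
def precision_recall(pairs):
    """pairs: list of (predicted, actual) booleans."""
    tp = sum(p and a for p, a in pairs)          # true positives
    fp = sum(p and not a for p, a in pairs)      # false positives
    fn = sum(a and not p for p, a in pairs)      # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

sample = [(True, True), (True, False), (False, True),
          (True, True), (False, False)]
p, r = precision_recall(sample)
print(p, r)
if r < 0.8:
    print("recall below target: escalate for model review")
```

Tracking these numbers over successive reviews is what reveals the weak spots and drift that a one-off acceptance test would miss.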
6. Plan for Contingencies
One key reason for replacing humans with machines is that machines can work without needing to rest. However, machines are not invincible: they are susceptible to breakdowns, power outages, corrupted data and hardware failures. As the saying goes, failure to plan is planning to fail. Businesses must map out such emergency scenarios so that operations can resume, or run as usual, as quickly and seamlessly as possible.
With AI becoming increasingly prevalent, mitigating its associated risks and addressing the concerns of stakeholders will help organizations take full advantage of AI’s potential. Increasing transparency and awareness around how AI is being used, the jobs it performs, the decisions it makes, and the opportunities it brings is key – we believe this is the essence of responsible and explainable AI.