As Artificial Intelligence (AI) becomes increasingly embedded in the daily lives of Australians, the Australian Signals Directorate (ASD) is enhancing its AI guidance to ensure secure engagement with AI systems.
This initiative responds to the heightened global focus on AI, notably at the UK’s 2023 AI Safety Summit.
AI, which encompasses the creation of computer systems capable of tasks like visual perception, speech recognition and decision-making, poses unique challenges and risks.
The ASD's move is timely, given the rising use of AI across Australian sectors, such as the CSIRO's Spark system, which helps predict how bushfires will spread.
While AI presents vast opportunities, it also brings inherent risks that need careful consideration and mitigation.
Artificial Intelligence (AI) is increasing in uptake and complexity in the modern world and will play an increasingly influential role in the everyday life of Australians. Read ASD’s AI guidance to securely engage with AI systems https://t.co/grUirBB0Cq
— Australian Cyber Security Centre (@CyberGovAU) November 23, 2023
Key risks
Key risks associated with AI include data poisoning, in which attackers deliberately tamper with the data used to train a model.
Because a model's behaviour depends heavily on the quality of its training data, even small alterations can lead to erroneous AI decisions.
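As a rough illustration of this risk (a toy sketch, not drawn from the ASD guidance), here is a minimal 3-nearest-neighbour classifier whose decision flips after an attacker injects a handful of mislabelled points near the target input:

```python
# Toy sketch of training-data poisoning: a 3-nearest-neighbour classifier
# flips its decision once mislabelled points are injected near the input.

def knn_classify(x, train, k=3):
    """Majority vote among the k nearest training points."""
    dist = lambda a, b: sum((a[i] - b[i]) ** 2 for i in range(len(a)))
    nearest = sorted(train, key=lambda t: dist(x, t[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

clean = [
    ([0.0, 0.0], "benign"), ([0.1, 0.2], "benign"), ([0.2, 0.0], "benign"),
    ([5.0, 5.0], "malicious"), ([5.2, 4.9], "malicious"), ([4.9, 5.1], "malicious"),
]

# The attacker poisons the training set with mislabelled points
# clustered around the input they want misclassified.
poisoned = clean + [
    ([0.10, 0.15], "malicious"), ([0.12, 0.10], "malicious"), ([0.08, 0.10], "malicious"),
]

sample = [0.1, 0.1]
print(knn_classify(sample, clean))     # "benign"
print(knn_classify(sample, poisoned))  # "malicious"
```

Three poisoned points out of nine are enough here because they crowd out the legitimate neighbours; real attacks exploit the same principle at scale.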
Adversarial example attacks pose another threat: once a model is operational, carefully crafted inputs can trick it into making mistakes.
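To make the idea concrete, here is a deliberately simplified sketch (a hypothetical keyword-based filter, not any real product) showing how a crafted input can evade a model at inference time without changing the malicious content:

```python
# Toy sketch of an evasion attack: a naive keyword-ratio spam score is
# diluted below its threshold by padding the message with innocuous words.
SPAM_WORDS = {"prize", "winner", "urgent", "click"}

def spam_score(message):
    """Fraction of words that are known spam keywords."""
    words = message.lower().split()
    return sum(w in SPAM_WORDS for w in words) / len(words)

def is_spam(message, threshold=0.2):
    return spam_score(message) >= threshold

original = "urgent winner click to claim your prize"
print(is_spam(original))  # True

# Adversarial padding: the same payload, plus filler text, slips through.
evasion = original + " " + " ".join(["the quick brown fox jumps over"] * 5)
print(is_spam(evasion))  # False
```

Real adversarial examples against neural networks are far subtler (small perturbations to images or audio), but the principle is the same: the input is crafted around the model's decision boundary.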
Generative AI, a rapidly advancing area, raises concerns about the creation of convincing scam messages, including fake voice and video clips.
Privacy concerns are paramount, as AI's ability to process large data sets may enable attackers to re-identify individuals in anonymised data pools.
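The re-identification risk can be sketched with a simple linkage attack (all names and records below are invented for illustration): an "anonymised" dataset that retains quasi-identifiers such as postcode, birth year and sex can be joined against a public dataset that contains names.

```python
# Toy sketch of a linkage (re-identification) attack on invented data.
anonymised = [  # names stripped, but quasi-identifiers retained
    {"postcode": "2600", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "2601", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]
public = [  # e.g. scraped from a public register
    {"name": "Alice Example", "postcode": "2600", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Example", "postcode": "2601", "birth_year": 1990, "sex": "M"},
]

QUASI = ("postcode", "birth_year", "sex")

def reidentify(anon_rows, public_rows):
    """Join the two datasets on quasi-identifiers; unique matches leak identity."""
    index = {}
    for p in public_rows:
        index.setdefault(tuple(p[q] for q in QUASI), []).append(p["name"])
    hits = {}
    for a in anon_rows:
        names = index.get(tuple(a[q] for q in QUASI), [])
        if len(names) == 1:  # quasi-identifiers are unique -> re-identified
            hits[names[0]] = a["diagnosis"]
    return hits

print(reidentify(anonymised, public))
# {'Alice Example': 'asthma', 'Bob Example': 'diabetes'}
```

AI accelerates this kind of attack by automating the collection and cross-matching of auxiliary datasets at a scale no analyst could manage by hand.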
Additionally, AI systems can automate data collection and analysis, reducing the effort required to find exploitable vulnerabilities.
Recommendations
To navigate these risks, ASD recommends both individuals and organisations adopt specific strategies.
For individuals, it’s crucial to apply basic security principles, such as evaluating the system's reputation and understanding how personal information is used.
Organisations should consider ASD's Essential Eight framework to mitigate various cyber threats, including those posed by AI.
It is also vital to ask questions about an AI system's security design, its supply chain risks and its accountability mechanisms.
Looking ahead
As the AI landscape evolves, the ASD is updating its guidance on AI, keeping pace with technological advancements.
This includes cooperation with international partners like the UK's National Cyber Security Centre, which is set to release secure AI system development guidelines.
Such collaborative efforts are crucial for harnessing AI's benefits while safeguarding against its risks.