For any application that exposes an external interface or handles personal or sensitive data, it is imperative to assess and actively improve its security. Security is often described as an arms race: attackers constantly refine their techniques, while defenders update their systems to anticipate and prevent new threats. Ensuring security is therefore a continuous task.
Besides the classical cyber threats that apply to any software system, machine learning introduces new security risks both during training and at deployment. Training-time attacks are known as data poisoning: attackers alter the training data in order to induce malicious behaviour, such as misclassifying certain examples.
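As a toy illustration of data poisoning (not taken from the original text), the sketch below trains a simple logistic regression classifier on synthetic data, then retrains it after an attacker has flipped the labels of most of one class. The flip rate is deliberately exaggerated so the effect is visible on such a small, well-separated dataset; real poisoning attacks are far stealthier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: two well-separated Gaussian clusters (binary classes).
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

def train(X, y, lr=0.5, steps=500):
    """Logistic regression fitted by plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Model trained on clean data.
w, b = train(X, y)

# Label-flipping poisoning: the attacker flips 90% of class-0 labels,
# so the model learns to misclassify that entire region.
y_poisoned = y.copy()
flipped = rng.choice(200, size=180, replace=False)  # indices within class 0
y_poisoned[flipped] = 1
w_p, b_p = train(X, y_poisoned)

print(f"clean-model accuracy:    {accuracy(w, b, X, y):.2f}")
print(f"poisoned-model accuracy: {accuracy(w_p, b_p, X, y):.2f}")
```

Even this crude attack shows the core mechanism: the defender's test data is unchanged, yet the model's behaviour was corrupted at training time.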
Test-time (or inference-time) attacks are more diverse: adding small perturbations to test data in order to induce malicious behaviour (adversarial attacks), reverse engineering the model, or checking whether particular data was used for training (membership inference attacks). Like other branches of machine learning, ML security is a growing field of study.
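To make the adversarial-attack idea concrete, here is a minimal sketch (an illustration added here, not from the original text) of the fast gradient sign method on a numpy logistic regression model: the input is perturbed in the direction of the sign of the loss gradient with respect to the input, x' = x + eps * sign(dL/dx). The epsilon is large because the toy model is two-dimensional; on high-dimensional inputs such as images, imperceptibly small perturbations suffice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian clusters.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Fit logistic regression with gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

# FGSM attack on a class-0 input: step in the direction of the sign
# of the loss gradient with respect to the input.
x = np.array([-2.0, -2.0])     # a typical class-0 point
eps = 3.0                      # perturbation budget (large for a 2-D toy)
p = sigmoid(x @ w + b)
grad_x = (p - 0.0) * w         # dL/dx of the logistic loss, true label 0
x_adv = x + eps * np.sign(grad_x)

print("clean prediction is class 1:", bool(sigmoid(x @ w + b) > 0.5))
print("adversarial prediction is class 1:", bool(sigmoid(x_adv @ w + b) > 0.5))
```

The model classifies the clean point correctly, while the perturbed copy crosses the decision boundary and is misclassified, which is exactly the malicious behaviour adversarial attacks aim to induce.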
As mentioned earlier, security requires a proactive approach; useful mechanisms include security code reviews, security analysis tools, penetration testing, and regular red teaming exercises.