One of the initial hesitations in many enterprise organizations moving into the cloud in the last decade was the question of security. Significant amounts of money had been put into corporate firewalls, and now technology companies were suggesting corporate data reside outside that security barrier. Early questions were addressed, and information began to move into the cloud. However, nothing stands still: the growing volume of data and network traffic intersects with increasingly sophisticated attacks, and artificial intelligence (AI) is now being used to keep things safe.
The initial hesitation for enterprise organizations to move to the cloud was met by data centers improving hardware and networking security, while cloud software providers, both cloud hosts and application providers, increased software security past what was initially offered in the cloud. Much of that was taking knowledge from on-premises security and scaling it to the larger systems in the cloud. However, there's also more flexibility for attacks in the cloud, so new techniques had to be added. In addition, most organizations are in a hybrid ecosystem, so on-premises and cloud security must be coordinated.
This means an opportunity for AI to provide enhanced security. As mentioned with other machine learning solutions, security is a mix of different AI and non-AI techniques fit to the problem. Take deep learning: supervised learning can be used to recognize known attacks, while unsupervised learning can be used to detect anomalous events in a sparse dataset, and reinforcement learning can refine both over time. Classification can even be done with statistical analysis of time series, with no AI required at all, which can provide faster performance in appropriate cases.
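To make the non-AI option concrete, here is a minimal sketch of statistical time-series analysis for anomaly detection: flagging any observation that deviates sharply from a rolling baseline. The metric (event counts per interval), window size, and threshold are all illustrative assumptions, not a production design.

```python
import statistics

def anomalies(series, window=10, threshold=3.0):
    """Return indices whose value deviates more than `threshold`
    standard deviations from the preceding `window` observations."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        # Flag the point if it sits far outside the recent baseline
        if stdev and abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Steady traffic of ~20 events per interval, with one burst at index 12
counts = [20, 22, 19, 21, 20, 23, 18, 21, 20, 22, 19, 21, 400, 20]
print(anomalies(counts))  # [12]
```

Because this is plain arithmetic over a sliding window, it runs far faster than a neural network inference and needs no training, which is exactly why it remains attractive for the simpler detection cases.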
On a quick tangent, let’s talk about supervised learning and reinforcement learning. Some folks present them as different; I think of the latter as an extension of the former. “Classic” supervised learning is when input is labeled and the labels are important for the AI system, as they are used to understand and organize the data. When there are errors, humans add more annotations and labels to existing data, or they add more data. In reinforcement learning, feedback for the neural network is given as to how far the results of an iteration are from a set goal. That feedback can be put back into the system by programmers changing weights or, in more advanced systems, by the AI software doing the comparison and adapting on its own. That is a type of supervision, but I’ll admit it’s a philosophical argument.
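The feedback loop described above can be reduced to a toy illustration (my own, not from the article): a one-parameter "model" that adapts based only on how far each iteration's result is from a set goal, with no labeled examples involved.

```python
def train(goal, steps=100, rate=0.1):
    """Nudge a single weight toward producing `goal`, using only
    distance-from-goal feedback after each iteration."""
    weight = 0.0
    for _ in range(steps):
        result = weight          # the "action" the system takes
        error = goal - result    # feedback: how far from the goal
        weight += rate * error   # adapt using the feedback alone
    return weight

print(round(train(5.0), 2))  # converges on the goal of 5.0
```

Whether that error signal counts as a "label" is, as noted, a philosophical argument; mechanically it is supervision by another name.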
Back on track, let's add another complexity. In the early days of the cloud, applications were larger but still followed a similar pattern of scale-up and scale-out. Now there's something changing both environments: containers. Simply put, a container is a piece of software that wraps around an application, bundling basic services and even a virtualized view of the operating system. That allows containers to run on multiple operating systems regardless of internal application code. It also allows cloud platforms and servers to more finely control services to their clients in order to meet service level agreements (SLAs) that provide quality performance to the end customer.
“As more applications migrate to a container architecture, it’s important for security to keep up,” said Tanuj Gulati, CTO, Securonix. “Lightweight collectors can run within application containers, such as with Docker, collecting and sending relevant event logs to the more robust security monitoring applications running separately. This provides strong security in the new environments without significant burden being added to application performance.”
In my discussion with Tanuj Gulati, he explained that they first worked in the virtual machine (VM) environment in local data centers. That provided an understanding that helped extend security to Docker, and also helped in integrating security between on-premises and cloud systems in a hybrid environment.
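The lightweight-collector pattern Gulati describes can be sketched in a few lines: filter a container's log stream down to security-relevant events and batch them for forwarding to a separate monitoring application. The keyword list, batch size, and sample log lines are illustrative assumptions, and a real collector would ship batches over the network rather than print them.

```python
import json

# Hypothetical markers for security-relevant events
RELEVANT = ("DENIED", "FAILED_LOGIN", "SUDO")

def collect(lines, batch_size=2):
    """Filter relevant events and yield them as JSON batches,
    ready to forward to an external monitoring service."""
    batch = []
    for line in lines:
        if any(marker in line for marker in RELEVANT):
            batch.append(line.strip())
        if len(batch) == batch_size:
            yield json.dumps(batch)
            batch = []
    if batch:
        yield json.dumps(batch)  # flush any partial batch

logs = [
    "2024-01-01 app started",
    "2024-01-01 FAILED_LOGIN user=root",
    "2024-01-01 request ok",
    "2024-01-01 DENIED port=22",
]
for payload in collect(logs):
    print(payload)
```

Keeping the collector this thin is the point: the filtering and batching cost almost nothing inside the container, while the heavier analysis runs elsewhere.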
Detection Vs. Response
Artificial intelligence is focused on detection, but a complete system must also address the response to a perceived threat. The basic system can detect attacks and, for known problems, rules can then determine responses. Unknown problems have unknown responses, so humans must be flagged to handle those questionable transactions; their decisions then provide feedback to reinforce the system. Depending on how complex a system is created, those new rules can be incorporated into the neural network or added to a rules set.
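That flow can be sketched as a simple dispatcher: known threats map to automated responses through rules, unknown detections get flagged for human review, and a reviewer's decision is folded back into the rule set. The threat and response names here are illustrative, not from any real product.

```python
# Hypothetical rule set mapping known threats to automated responses
RULES = {
    "port_scan": "block_source_ip",
    "brute_force": "lock_account",
}

review_queue = []

def respond(threat):
    """Automate responses for known threats; queue unknowns for humans."""
    if threat in RULES:
        return RULES[threat]
    review_queue.append(threat)
    return "flag_for_human"

def human_feedback(threat, response):
    """Incorporate a reviewed human decision as a new rule."""
    RULES[threat] = response

print(respond("port_scan"))      # block_source_ip
print(respond("dns_tunneling"))  # flag_for_human (queued for review)
human_feedback("dns_tunneling", "quarantine_host")
print(respond("dns_tunneling"))  # quarantine_host
```

The feedback step is what makes the system improve: after review, yesterday's unknown becomes today's automated response.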
The state of the industry, both in technology and human comfort levels, indicates that human oversight before responding to new attacks will remain the predominant method over the next few years. Advances will push the security industry toward more autonomous system action followed by reporting, review, and adjustment by humans, but that shift will happen slowly. What will help is that better explainability will be required, as the deep learning “black box” will have to become more transparent.
Cloud computing and artificial intelligence are growing in parallel. The complexity of the cloud is driving the need for AI, but the complexity of AI is also creating the need for it to work better in the cloud environment with efficiency, transparency and control.