- Automated responses are the only rational response to automated attacks
- Orchestration brings automation
- Orchestration can be done incrementally
The idea of synergies is not new. We all know how combining elements can create a whole greater than the sum of its parts: chocolate and peanut butter, beans and cornbread, Abbott and Costello, bass and drums. These are things we know and appreciate. So why do we hesitate when it comes to having security tools work together?
Well, one synergy popularized by the Terminator film series is that of Skynet… specifically, an artificial intelligence that decides wiping out humanity is the best course of action. Nobody wants to plug one tool into another and wind up with Skynet. Even a Skynet on a smaller scale that just wipes out a production line, takes down an ATM network, or blocks the CEO’s iPad is disastrous enough. We humans tend to shy away from empowering machines.
At the risk of angering fans of the Terminator franchise, I’ll say that, ultimately, the fear of connecting our security tools is a fear of the unknown. IT has moved from a world where we had plenty of time to ponder attacks to one where responses must be automated. But we continue to romanticize people like astronomer Cliff Stoll, who tracked down a KGB-sponsored hacker with Sherlock Holmes-like deductive reasoning. Stoll could afford to do that because it was the ’80s and the attacker was coming at him at modem speeds—and processors were glacial compared to today.
Put another way, if you’re being attacked by an army of tree sloths and their hordes of snail soldiers, you have time to think. If a pride of lions has decided that you are their next gazelle, pondering things may result in your downfall. I’ve read complaints from IT managers that the push to zero trust security is moving too fast for their organizations (in fact, we just ran a survey that confirmed this sentiment among federal agencies—download it here). The bad news for them is that the attackers are also moving too fast for their liking. Since the attackers won’t slow down, we need to speed up.
Besides, we’re not creating artificial intelligence as much as we’re creating artificial instinct. The Polish science fiction author Stanisław Lem made the distinction in his essay “The Upside-Down Evolution.” Artificial instinct isn’t a thought process, nor does it involve learning. It’s a few lines of if-then code statements: if there is something bad, then make it stop. We humans get to define the “something bad” and what “make it stop” looks like, but once we’ve written the code and tested it to our satisfaction, it’s ready to go.
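Artificial instinct really can be that small. Here’s a toy sketch of the idea—the event fields and response names are mine, not any product’s API:

```python
# A minimal "artificial instinct": no learning, no reasoning, just
# if-then rules that humans wrote and tested ahead of time.

def instinct(event):
    """Map an observed event to a predefined response."""
    # If there is something bad, then make it stop.
    if event.get("type") == "open_port" and event.get("port") == 23:
        return "block"        # telnet open -> shut it down
    if event.get("type") == "ioc_detected":
        return "quarantine"   # indicators of compromise -> isolate
    return "log_only"         # everything else just gets recorded

print(instinct({"type": "open_port", "port": 23}))  # -> block
```

Notice there is no model to train and nothing to tune at runtime; every behavior was decided by a human before deployment.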
Part of the fear of the unknown is that we don’t yet know what it is we’ll block. So, block things a little at a time. Borrow from Marie Kondo if it helps—I know it helps me. For example, telnet being open does not spark joy. So, I write code on my vulnerability scanner to send a syslog message, formatted a particular way, when it discovers telnet is open. That’s nice. Now, let’s look at the orchestration triggered by that action.
The VA scanner sends the syslog message. The syslog server has code on it that forwards the relevant information to the network access control (NAC) system. When the NAC gets the information, it triggers a policy that places an access control list (ACL) on the device with telnet open, blocking the telnet traffic. The syslog server also sends a message to the helpdesk ticketing system to open a ticket. The opened ticket then triggers the configuration management utility, which immediately sends commands to the device to shut down that telnet port, then reports back to the ticketing system to close the ticket. The closed ticket sends a syslog message back to the syslog server, which then lets the NAC system know it’s OK to remove the ACL. All this can happen in seconds, and we find out about it from the activity report that kicks out daily from our systems.
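Written as code, the chain is just a sequence of handoffs. This is a sketch only—each function stands in for one tool, and none of these names are real vendor APIs:

```python
# Illustrative sketch of the telnet remediation chain. The "actions"
# list stands in for the daily activity report.

actions = []

def nac_apply_acl(device, port):
    actions.append(f"NAC: ACL blocks tcp/{port} on {device}")

def open_ticket(device, issue):
    actions.append(f"Ticketing: opened ticket '{issue}' on {device}")
    return {"device": device, "issue": issue}

def config_mgmt_close_port(device, port):
    actions.append(f"Config mgmt: closed tcp/{port} on {device}")

def close_ticket(ticket):
    actions.append(f"Ticketing: closed ticket '{ticket['issue']}'")

def nac_remove_acl(device):
    actions.append(f"NAC: ACL removed from {device}")

def on_scanner_syslog(device):
    """Triggered when the VA scanner's syslog says telnet is open."""
    nac_apply_acl(device, port=23)             # contain first
    ticket = open_ticket(device, "telnet open")
    config_mgmt_close_port(device, port=23)    # then remediate at the source
    close_ticket(ticket)
    nac_remove_acl(device)                     # containment no longer needed

on_scanner_syslog("10.0.0.42")
print("\n".join(actions))
```

The ordering matters: containment (the ACL) lands before remediation, and the ACL comes off only after the ticket confirms the port is actually closed.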
“Ah! But what if the telnet was part of an important business process, what then?” I hear paranoid admins cry out. Two options exist to deal with that. The first is to tell the owners of the important business process that telnet’s off the table. Fix that important business process ASAP before either the auditors or attackers find that telnet traffic. This is 2022, get with the times! The second option is to write code that exempts the system from the telnet check – easy enough to do – and then bolt on so much perimeter security that the process owners will wish they had gotten telnet out of their process. I suppose there’s a third option in having a friendly lunch with somebody in Governance and mentioning the open telnet situation, and they’ll come back and make sure the first or the second option is put into place.
And now we have an automated solution for telnet all thought out. “Ah! But what if we must purchase additional software from vendors to make these orchestrations, what then?” I hear budget-constrained admins cry out. The key answer here is to buy that additional software. Start with your most painful use cases and get the orchestration in place to handle those. Chances are, additional use cases can be added in at very little cost. Products like Swimlane, ServiceNow, and Forescout offer tremendous potential for interactions with other solutions. Once the connector between the solutions is licensed, it’s there for you to use as much as you like, for the most part. You’ll get the budget when you make the case for the effectiveness of the orchestration.
Do you need help making that case? Do you know how much your organization’s cyber insurance premiums decrease as you automate responses? No? Well, go and find out. Part of doing security work is finding information, after all. Do you have a recent audit failure that’s about to put your higher-ups on the hot seat? Excellent. They’ll be all ears for your orchestration proposal. Do you have higher-ups that live in fear of failing an audit and losing their jobs? Just as excellent, this proposal will help them sleep better at night.
Once you have successfully written, tested, and piloted an automation, additional automations that use similar processes with similar tools become that much easier to implement. For example, say we also want to block RDP being open on our Windows laptops and desktops. Use the process above and replace telnet with RDP. Or say we want a similar automation to quarantine a PC on which our endpoint protection suite has discovered indicators of compromise. Just substitute the endpoint protection suite for the VA scanner and the Windows patching system for the configuration management system, and we have automated action.
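That substitution is easy to see if you treat the playbook as a template with swappable parts. A hypothetical sketch—the conditions and hooks here are illustrative stand-ins, not real tool integrations:

```python
# One playbook shape, many automations: swap the condition and the
# contain/remediate hooks, and the telnet playbook becomes the RDP one.

log = []

def make_playbook(condition, contain, remediate):
    def playbook(event):
        if condition(event):            # the "if" of artificial instinct
            contain(event["device"])    # e.g. NAC applies an ACL
            remediate(event["device"])  # e.g. config mgmt or patching fixes it
    return playbook

telnet_playbook = make_playbook(
    condition=lambda e: e.get("port") == 23,
    contain=lambda d: log.append(f"NAC: block telnet to {d}"),
    remediate=lambda d: log.append(f"Config mgmt: close telnet on {d}"),
)

rdp_playbook = make_playbook(
    condition=lambda e: e.get("port") == 3389,
    contain=lambda d: log.append(f"NAC: block RDP to {d}"),
    remediate=lambda d: log.append(f"Patching: disable RDP on {d}"),
)

telnet_playbook({"device": "pc-17", "port": 23})
rdp_playbook({"device": "pc-17", "port": 3389})
print(log)
```

The second playbook costs almost nothing once the first exists, which is exactly the economics argument above.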
This is not all that hard to do when we go use case by use case. Yes, there are too many use cases for us to think about in one sitting. But as we think of them and implement automated solutions, we think of more that can be solved in similar ways. It will be a natural, evolving process that is constantly updated as conditions around us change. And that is a good thing.
The deeper consideration in orchestration is the choice of what system or systems will act as signal junctions. Which of your tools will be receiving information from other tools and then sharing that information? I used the syslog server as an example, but it could be a NAC/visibility solution, a SIEM, a SOAR, or a CMDB/ticketing system. Maybe there is more than one signal collector, each peering with the other collectors to get that orchestration. This can happen if some tools work better with one solution, and other tools work better with another. We know that, with the right coding, if A is connected to B and B is connected to C, then A is connected to C… and to whatever else C is connected to, and so on.
As we make choices for our “big picture” orchestration, our hoped-for end-to-end set of connections, we must make those connections one at a time. We may have an end-to-end vision for dealing with that rogue telnet traffic, but that must wait for the end-to-end system to exist. In the meantime, any orchestration helps a great deal, so enjoy the connections as they are made. Keep in mind that communication from A to B to C involves first successfully getting from A to B. Get that implementation done correctly before puzzling through how signals based on information from A will get from B to C.
Following the above arguments, I’d propose first connecting systems that offer bi-directional communications with each other. These are the best candidates for sharing information from other systems, particularly those that only have one-way communication possible. The bi-directional communication paths will carry not only inbound information but outbound control and notification signals.
That’s my case for optimization through orchestration. We’ve automated and cross-connected our websites with databases, our search results with advertisements, and our credit system with point-of-sale terminals. We did those automations and cross-connections to optimize those systems. Well, guess how we optimize our security tools? No matter how awesomely configured one of your security tools may be, it won’t be optimized until it’s working with other tools. Optimization requires orchestration.