Editor's note: This story originally appeared in Dark Reading.
New tech often requires new thinking — but that's harder to install
Here's a provocative question: Is it possible, given the vast array of security threats today, to have too many security tools?
The answer is: You bet it's possible, if the tools aren't used the way they could be and should be. And all too often, they aren't.
New tools introduce new possibilities. Conventional thinking about security in a particular context may no longer apply, precisely because the tech is new. And even when conventional thinking does apply, it may need some modification to get the best use out of the tools.
That's a real problem for security executives. And the more powerful, sophisticated, and game-changing a security tool is, the higher the odds this problem will apply.
This is frequently the case with zero trust, since it differs so much from traditional security. New adopters sometimes expect a higher-powered firewall, and that's fundamentally not what they get. They've decided to invest in next-generation capabilities, yet they begin with a perspective that is often last-generation in character, which diminishes their ROI.
It's the response, not the request, that's risky
The traditional perspective on corporate web access, for instance, says that, within a business context, some sites are good and some sites are bad. Examples of good sites include tech media, industry partners and competitors, and news services. Examples of bad sites include gambling, pornography, and P2P streaming.
The traditional response is to allowlist the good sites, blocklist the bad sites, and call it a day. Beyond the fact that this line of thinking can lead security teams to write hundreds of rules about which sites to block and which to allow, I'd like to suggest it misses the point.
Today, we know that optimized cybersecurity is not so much about the perceived character or subject matter of a site. It's more about what kind of threats may be coming from the site to the organization, and what kind of data is leaving the organization for the site. That means you're going to need new approaches to asking and answering questions in both categories, and that, in turn, means new tools and a new understanding.
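To make that concrete, here's a minimal sketch, in Python, of a policy driven by those two questions. Everything in it is hypothetical: threat_score and contains_sensitive_data stand in for whatever threat-intel and data-loss-prevention services your stack actually provides.

```python
# A minimal sketch (not a real product) of request/response-centric policy.
# threat_score() and contains_sensitive_data() are hypothetical stand-ins
# for real threat-intel and DLP services.

from dataclasses import dataclass

# Illustrative-only markers; a real scanner uses vendor engines, not substrings.
SUSPICIOUS_MARKERS = (b"<script>eval(", b"powershell -enc")

@dataclass
class WebExchange:
    url: str
    inbound_content: bytes   # what the site sends to the user
    outbound_payload: bytes  # what the user sends to the site

def threat_score(content: bytes) -> float:
    """Toy inline scanner: returns 0.0 (benign) to 1.0 (malicious)."""
    return 1.0 if any(m in content for m in SUSPICIOUS_MARKERS) else 0.0

def contains_sensitive_data(payload: bytes) -> bool:
    """Toy DLP check; a real one matches classifiers, not one keyword."""
    return b"CONFIDENTIAL" in payload

def allow(exchange: WebExchange) -> bool:
    """Judge the traffic by what it does, not by the site's category."""
    if threat_score(exchange.inbound_content) > 0.8:
        return False  # block: the response itself looks hostile
    if contains_sensitive_data(exchange.outbound_payload):
        return False  # block: sensitive data is leaving the organization
    return True
```

Notice that the site's subject matter never appears in the decision. That's the point: the risk lives in the traffic, not in the category label.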
One place this plays out is with content delivery networks (CDNs). They carry a huge fraction of all internet traffic, and, for the most part, the content they deliver is innocuous from a security standpoint. That's why many security admins have set up rules allowing all traffic from such sources to proceed to corporate users on request.
But is it really wise to simply allowlist an entire CDN? How do you know some of the sites it serves up haven't been compromised and become a de facto attack vector?
Furthermore (and this is where it gets interesting), what if you actually have a tool so powerful and so fast that it can assess CDN content, in real time or very close to it, for its potential as a security threat before it reaches users? Wouldn't you be wise to use that tool, properly configured, rather than leave it idle?
In this scenario, the old assumption that no tool could be that powerful and fast no longer holds. It's no more valid than the old assumption that CDN-sourced content must inherently be safe.
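Operationally, the shift looks something like the sketch below: every object gets its own verdict, even when the CDN hosting it is trusted. Both functions here are hypothetical placeholders for a vendor's near-real-time scanning engine and hash-reputation feed.

```python
# Sketch: per-object verdicts for CDN traffic instead of a blanket allowlist.
# looks_malicious() stands in for a hypothetical near-real-time scan engine.

import hashlib

# Hashes of objects your threat-intel feed has flagged (illustrative, empty here).
KNOWN_BAD_SHA256: set[str] = set()

def looks_malicious(body: bytes) -> bool:
    """Placeholder for a fast inline engine that inspects scripts, payloads, etc."""
    return False

def verdict_for(body: bytes) -> str:
    """Judge each object on its content, even if the hosting CDN is trusted."""
    if hashlib.sha256(body).hexdigest() in KNOWN_BAD_SHA256:
        return "block"  # known-bad object, regardless of where it's served from
    if looks_malicious(body):
        return "block"  # the inline engine flagged it before it reaches the user
    return "allow"
```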
So to put this new, more sophisticated perspective on web access into practice, it's pretty clear more is required than simply rolling out new tools. People will have to be trained in the tech's feature set and capabilities, and processes will have to be adjusted to take that new info into account. If that doesn't happen, security admins who are simply handed new tech won't get the best use out of it. They will be, if you'll forgive the term, a fool with a tool.
Stay on top of capabilities and configurations
Streamlining your vendor security stack is always preferable to bolting on new tools with niche functionality. Otherwise, chief information security officers (CISOs) may end up trying to secure a supply closet without knowing which locks are actually in effect. Even so, this isn't a one-and-done responsibility.
Suppose, for instance, an organization selects one partner for network security, another for endpoint security, and a third specifically for identity management. Suppose all three partners are genuinely top-tier.
If the organization's people and processes don't understand and take full advantage of the partners' capabilities, those capabilities won't deliver their full value, and the organization won't be as protected as it could be. The stack may have been pared down to three great tools, but the security architecture still needs ongoing attention.
In the age of the cloud, updates and features are pushed constantly. That means configuring a new security tool once and stepping away isn't enough. Because new functions can disrupt a business's operations in ways a vendor can't foresee, they are often turned off by default when first released. To be at their most effective, security tools must be revisited and reconfigured regularly.
I'll conclude with an example I see frequently. Because botnets are a major ongoing problem, it's important to have some bot detection and bot blocking capability in place. That may take the form of monitoring logs for signs that compromised endpoints are reaching out to command-and-control servers for instructions.
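As a rough illustration, mining those logs might look like the following. The CSV layout and the indicator set are assumptions; both would really come from your proxy or DNS platform and your threat-intel feed.

```python
# Sketch: flag internal hosts whose outbound traffic hits known C2 indicators.
# The CSV log layout and the indicator set are assumptions; adapt to your stack.

import csv

# Illustrative indicators only (documentation-reserved name and address).
C2_INDICATORS = {"evil-c2.example", "198.51.100.23"}

def find_beaconing_hosts(proxy_log_path: str) -> set[str]:
    """Return internal hosts seen contacting known C2 destinations."""
    flagged: set[str] = set()
    with open(proxy_log_path, newline="") as f:
        # Assumed columns: timestamp, src_host, dest_host, bytes_out
        for row in csv.DictReader(f):
            if row["dest_host"] in C2_INDICATORS:
                flagged.add(row["src_host"])
    return flagged

if __name__ == "__main__":
    for host in sorted(find_beaconing_hosts("proxy.log")):
        print(f"possible compromise: {host}")  # candidate for cleanup and forensics
```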
This is precisely the kind of information security managers should be thrilled to get.
But because many departments don't have the time or inclination to analyze their logs, they don't benefit from the information contained within them. As a result, compromised endpoints aren't cleaned and no forensics are conducted to learn how they were compromised in the first place.
This brings me to my bottom line: Keep your eyes open, understand what new tech and new partners can do, and capitalize on it to the best effect. Your organization and career will both benefit.