Reality has a way of asserting itself, irrespective of any personal or commercial choices we make, good or bad. For example, just recently, the city services of Antwerp in Belgium fell victim to a highly disruptive cyberattack.
As usual, everyone cried “foul play” and suggested that proper cybersecurity measures should have been in place. And, as usual, it all came a bit too late. There was nothing special or unique about the attack, and it won’t be the last of its kind either.
So why are we, in IT, still happily whistling past the graveyard and carrying on as if nothing happened? Is everyone’s disaster recovery plan really that good? Are all the security measures in place – and tested?
Let’s Do a Quick Recap (of What You Should Be Doing)
First, cover the basics. Provide proper user training that covers all of the usual: password hygiene, restrictions on account sharing, and clear instructions not to open untrusted emails or visit untrustworthy websites. It’s an inconvenient fact that human actions remain the weakest link in cyber defense, but it’s a fact.
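Training can also be backed up with tooling. As a rough sketch, here is how a password-acceptance check might consult the public Pwned Passwords range API (the k-anonymity variant, so only a five-character hash prefix ever leaves your network); the sample passwords are obviously placeholders:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords
    corpus. Only the first five characters of the SHA-1 hash are sent."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    for pw in ("P@ssw0rd", "correct horse battery staple"):  # placeholders
        hits = breach_count(pw)
        print(f"{pw!r}: seen {hits} times in breaches ->",
              "REJECT" if hits else "ok")
```

Wiring a check like this into account provisioning is a small effort compared to cleaning up after a credential-stuffing incident.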
On the infrastructure side, start with proper asset auditing, because you can’t protect what you don’t know exists. As a next step, implement network segmentation to split traffic into the smallest practical segments.
Simply put, if a server has no need to see or talk to another server, it shouldn’t sit on the same VLAN, no exceptions. Remote access should also move from traditional VPNs to zero-trust networking alternatives.
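Segmentation rules are easy to declare and just as easy to let drift, so it pays to verify them. Here is a minimal sketch of that idea; the hostnames, ports, and expectations of what should and shouldn’t be reachable are all hypothetical and would come from your own inventory:

```python
import socket

# Hypothetical policy: destinations this host SHOULD reach (True) and
# destinations segmentation MUST block (False). Replace with real data.
POLICY = {
    ("db01.internal", 5432): True,    # app tier may reach its database
    ("hr-fileshare.internal", 445): False,  # ...but not the HR file share
}

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for (host, port), expected in POLICY.items():
        actual = can_connect(host, port)
        status = "OK" if actual == expected else "VIOLATION"
        print(f"{status}: {host}:{port} reachable={actual}, expected={expected}")
```

Run from each segment on a schedule, a check like this turns “we segmented the network last year” into something you can actually demonstrate.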
Everything must be encrypted, even if the communication is internal only. You never know what has already been breached, so someone could be eavesdropping where you least expect it.
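You can spot-check the “encrypt internally too” rule the same way you would check a public site. The sketch below (with made-up internal hostnames) simply attempts a verified TLS handshake against each internal endpoint and reports how long its certificate has left:

```python
import socket
import ssl
import time

# Hypothetical internal endpoints -- replace with your own services.
ENDPOINTS = [("intranet.example.internal", 443), ("api.example.internal", 8443)]

def cert_days_remaining(host: str, port: int) -> int:
    """Open a verified TLS connection and return days until the peer
    certificate expires; raises on handshake or verification failure."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        try:
            days = cert_days_remaining(host, port)
            print(f"{host}:{port} -> TLS OK, certificate expires in {days} days")
        except OSError as exc:  # includes ssl.SSLError
            print(f"{host}:{port} -> PROBLEM: {exc}")
```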
Finally, don’t let users randomly plug devices into your network. Lock down switch ports and restrict Wi-Fi access to known devices. Users will complain, but that’s part of the tradeoff. Either way, keep exceptions to a minimum.
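Enforcement belongs in the switch (802.1X, port security) or your NAC solution, but detection can start small. As a rough sketch on a Linux host, you could compare the neighbours visible in the local ARP/ND table against an allowlist pulled from your asset inventory; the MAC addresses below are placeholders:

```python
import subprocess

# Hypothetical allowlist of known MAC addresses -- in practice this would
# come from your asset inventory or NAC system.
KNOWN_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def seen_neighbours() -> dict:
    """Map MAC address -> IP for every entry in the local ARP/ND table."""
    output = subprocess.run(["ip", "neigh", "show"],
                            capture_output=True, text=True, check=True).stdout
    seen = {}
    for line in output.splitlines():
        fields = line.split()
        if "lladdr" in fields:
            mac = fields[fields.index("lladdr") + 1].lower()
            seen[mac] = fields[0]  # first field is the IP address
    return seen

if __name__ == "__main__":
    for mac, ip in seen_neighbours().items():
        if mac not in KNOWN_MACS:
            print(f"Unknown device: {mac} at {ip}")
```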
Patching Your Servers Really Matters
Moving on to servers, the key advice is to keep everything updated through patching. That’s true for exposed, public-facing servers, such as web servers – but it’s just as true for the print server tucked away in a closet.
An unpatched server is a vulnerable server, and it only takes one vulnerable server to bring down the fortress. If patching is too disruptive to do daily, look at alternatives such as live patching and use them everywhere you can.
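Whatever cadence you settle on, it helps to know what is actually pending on each machine. A minimal sketch, assuming apt- or dnf-based hosts (adapt the commands and parsing to your own fleet):

```python
import shutil
import subprocess

def pending_updates() -> list[str]:
    """Best-effort list of packages with pending updates on apt- or
    dnf-based systems. Output formats vary by distribution, so treat
    this as a starting point rather than an exact inventory."""
    if shutil.which("apt"):
        out = subprocess.run(["apt", "list", "--upgradable"],
                             capture_output=True, text=True).stdout
        return [line.split("/")[0] for line in out.splitlines() if "/" in line]
    if shutil.which("dnf"):
        out = subprocess.run(["dnf", "-q", "check-update"],
                             capture_output=True, text=True).stdout
        return [line.split()[0] for line in out.splitlines() if line.strip()]
    return []

if __name__ == "__main__":
    packages = pending_updates()
    if packages:
        print(f"{len(packages)} packages awaiting updates, e.g.: {packages[:5]}")
    else:
        print("No pending updates detected (or no supported package manager found)")
```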
Hackers are crafty individuals and they don’t need you to make it easier for them, so plug as many holes as possible – as fast as possible. Thanks to live patching, you don’t have to worry about prioritizing vulnerabilities to patch, because you can just patch them all. There is no downside.
Take a Proactive Approach
If a server no longer has a reason to exist, decommission it or destroy the instance. Whether it’s a container, a VM, an instance, or a node, you need to act ASAP. If you don’t, you’ll end up forgetting about it until it is breached. At that point, it’s too late.
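Finding those forgotten workloads can also be scripted. As one hedged example, assuming Docker and its official Python SDK, the sketch below lists containers that are no longer running so a human can decide whether they should still exist; the same idea applies to VMs and cloud instances through their respective APIs:

```python
import docker  # third-party SDK: pip install docker

def stopped_containers():
    """Yield containers that are not running and are therefore
    candidates for review and removal."""
    client = docker.from_env()
    for container in client.containers.list(all=True):
        if container.status != "running":
            yield container

if __name__ == "__main__":
    for c in stopped_containers():
        print(f"Candidate for decommissioning: {c.name} (status={c.status})")
        # c.remove() would delete it; keep a human in the loop instead of
        # removing automatically.
```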
So maintain a proactive approach. Keep up with the latest threats and security news. While some vulnerabilities get a disproportionate share of attention because they are “named” vulnerabilities, it’s sometimes one of the countless “regular” vulnerabilities that hits the hardest. A vulnerability management tool can help you keep track.
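One low-effort way to keep up is to watch CISA’s Known Exploited Vulnerabilities catalogue, which only lists flaws confirmed to be exploited in the wild. A small sketch (feed URL and field names as published at the time of writing; verify them before relying on this):

```python
import json
import urllib.request
from datetime import date, timedelta

# CISA's Known Exploited Vulnerabilities catalogue (public JSON feed).
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def recently_added(days: int = 7) -> list[dict]:
    """Return KEV entries added within the last `days` days."""
    with urllib.request.urlopen(KEV_URL) as response:
        catalogue = json.load(response)
    cutoff = date.today() - timedelta(days=days)
    return [v for v in catalogue["vulnerabilities"]
            if date.fromisoformat(v["dateAdded"]) >= cutoff]

if __name__ == "__main__":
    for vuln in recently_added():
        print(f'{vuln["dateAdded"]}  {vuln["cveID"]}  '
              f'{vuln["vendorProject"]} {vuln["product"]}')
```

Cross-referencing a feed like this against your asset inventory is exactly the kind of boring, repetitive task that should be automated rather than left to whoever happens to read the news that day.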
Put in place a disaster recovery plan. Start from the simple premise of “what if we woke up tomorrow and none of our IT worked?”
Answer these questions: How quickly can we get bare-bones services up and running? How long does it take to restore the entire data backup? Are we testing the backups regularly? Is the deployment process for services properly documented… even if it’s a hard copy of the Ansible scripts? What are the legal implications of losing our systems, data, or infrastructure for several weeks?
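Most of those questions are answered on paper, but a couple, such as “are we testing the backups regularly?”, can be checked continuously. Here is a bare-bones sketch that flags a stale or corrupted nightly backup; the paths, naming conventions, and thresholds are entirely hypothetical:

```python
import hashlib
import time
from pathlib import Path

# Placeholder locations -- point these at your real backup target and the
# checksum manifest written when the backup was taken.
BACKUP_DIR = Path("/backups/nightly")
MAX_AGE_HOURS = 30  # a nightly backup older than this is a red flag

def newest_backup(directory: Path) -> Path:
    """Return the most recently modified backup archive."""
    return max(directory.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)

def sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    latest = newest_backup(BACKUP_DIR)
    age_hours = (time.time() - latest.stat().st_mtime) / 3600
    print(f"Newest backup: {latest.name}, {age_hours:.1f}h old")
    if age_hours > MAX_AGE_HOURS:
        print("WARNING: backup is stale")
    manifest = latest.with_suffix(latest.suffix + ".sha256")
    expected = manifest.read_text().split()[0]
    print("Checksum OK" if sha256(latest) == expected else "WARNING: checksum mismatch")
```

A checksum only proves the file is intact, of course; the real test is still a periodic restore into a scratch environment.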
Most Importantly: Act Now, Don’t Delay
If you struggle with any of the answers to the questions above, it means you have work to do – and that’s not something you should delay.
As an organization, you want to avoid getting into a position where your systems are down, your customers are going to your competitor’s website, and your boss is demanding answers – while all you have to offer is a blank stare and a scared look on your face.
That said, it’s not a losing battle. All the questions we posed can be answered, and the practices described above – while only scratching the surface of everything that should be done – are a good starting point.
If you haven’t yet looked into it… well, the best starting point is right now – before an incident happens.
This article is written and sponsored by TuxCare, the industry leader in enterprise-grade Linux automation. TuxCare offers unrivaled levels of efficiency for developers, IT security managers, and Linux server administrators seeking to affordably enhance and simplify their cybersecurity operations. TuxCare’s Linux kernel live security patching and its standard and enhanced support services help secure and support over one million production workloads.