Protect your Linux systems: top 10 hardening measures

Cyber threats are only going to keep growing and get more sophisticated. So, it’s never been more important to protect your tech. System hardening is the process of looking closely at your IT to see where the gaps are that need to be secured. It’s a series of effective actions to reduce vulnerabilities and entry points that cyber criminals would be only too happy to exploit.

There’s a range of standards that provide basic recommendations and actions that you could use, but they all say different things. Some – including the CIS Benchmarks and STIGs (Security Technical Implementation Guides) – provide specific recommendations for each operating system, making it difficult to manage a heterogeneous fleet consistently.

With so many standards and recommendations, how can you possibly know which ones to rely on? Which hardening rules are essential for securing your systems?

We went through the various benchmarks and identified the top 10 measures that you can’t do without. Simple and easy to deploy on all your Linux OSs, these golden rules will help you maintain an essential and effective level of security against cyberattacks across your entire infrastructure.

1. Restrict application rights

Controlling application rights and permissions is a fundamental measure, and it relies on features built natively into the various Linux OSs.

It involves enabling security modules built into the Linux kernel, like AppArmor (for Debian systems) or SELinux (for Red Hat systems), and configuring them. These enable granular access control by defining rights per application (such as network access or file read and write permissions) and associating them with a security profile that restricts access to the OS. This helps keep your systems protected.
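To illustrate, here’s a minimal Python sketch (assuming the standard sysfs paths used by SELinux and AppArmor) that reports whether either module is active on a host:

```python
# Minimal sketch: check whether SELinux or AppArmor is active on this host.
# Paths below are the standard sysfs locations; distributions may differ.
from pathlib import Path

def selinux_mode() -> str:
    """Return the SELinux mode by reading the enforce flag, if present."""
    enforce = Path("/sys/fs/selinux/enforce")
    if not enforce.exists():
        return "not available"
    return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

def apparmor_enabled() -> bool:
    """AppArmor exposes its state under /sys/module when the module is loaded."""
    state = Path("/sys/module/apparmor/parameters/enabled")
    return state.exists() and state.read_text().strip() == "Y"

if __name__ == "__main__":
    print(f"SELinux:  {selinux_mode()}")
    print(f"AppArmor: {'enabled' if apparmor_enabled() else 'not available'}")
```

Enabling the module is only the first step: the real work is writing or tuning the per-application profiles and policies mentioned above.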

2. Disable non-essential services

The more software or components installed on an OS, the greater the potential attack surface. Limit, disable or remove access to your servers through non-essential services as much as possible. 

These services could be graphical interfaces, print servers, Telnet, or email and web servers, for example, any of which could allow data to be exfiltrated.
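As a starting point for that review, a small sketch like the one below (assuming a systemd-based distribution) lists every service enabled at boot so you can question each one:

```python
# Minimal sketch: list services currently enabled at boot so you can review
# which ones are really needed. Assumes a systemd-based distribution.
import subprocess

result = subprocess.run(
    ["systemctl", "list-unit-files", "--type=service", "--state=enabled", "--no-legend"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    print(line.split()[0])  # first column is the unit name, e.g. cups.service
```

Anything in that list you don’t recognise or don’t need can then be disabled with `systemctl disable --now <service>`.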

3. Remove risky application clients

Some applications, such as FTP (File Transfer Protocol), Telnet or RSH (Remote Shell) clients, allow you to exchange files and data across the network. 

Depending on the application, data could be sent through a network unencrypted, making it highly vulnerable to sniffing attacks – the process of intercepting traffic to access confidential data, such as usernames and passwords.

So, make sure that none of these clients are running on your machines unless they’re needed. Blocking these applications prevents attackers from accessing your system through them and ensures that your data is not compromised.
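One way to spot them is to query the package manager. The sketch below is a rough illustration; package names vary between distributions, so treat the list as an assumption to adapt:

```python
# Minimal sketch: report whether legacy, unencrypted clients are still installed.
# Assumes either dpkg (Debian-like) or rpm (Red Hat-like) is available.
import shutil
import subprocess

RISKY = ["telnet", "ftp", "rsh-client", "rsh"]  # package names vary by distro

def installed(pkg: str) -> bool:
    """Return True if the package manager knows the package as installed."""
    if shutil.which("dpkg"):
        return subprocess.run(["dpkg", "-s", pkg], capture_output=True).returncode == 0
    if shutil.which("rpm"):
        return subprocess.run(["rpm", "-q", pkg], capture_output=True).returncode == 0
    return False

for pkg in RISKY:
    if installed(pkg):
        print(f"{pkg}: installed - consider removing it")
```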

4. Audit your server configurations

Use wired connections, which are harder to intercept, instead of wireless ones wherever you can.

Also protect access to CPU activity on your machines to prevent side-channel attacks, which work by monitoring the system’s behavior (CPU activity in particular) in relation to certain user actions in order to extract information. Finally, make sure that your servers are configured to shut down if an incident is identified.
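As one concrete, illustrative example of restricting access to CPU activity (by no means the only relevant control), you can check the kernel.perf_event_paranoid sysctl, which limits who may observe performance events:

```python
# Minimal sketch: check one setting related to CPU-activity exposure, the
# kernel.perf_event_paranoid sysctl (higher values restrict access further).
from pathlib import Path

value = int(Path("/proc/sys/kernel/perf_event_paranoid").read_text().strip())
# A value of 2 or more limits performance monitoring to privileged users;
# hardening guides commonly recommend 2 (or 3 where the kernel supports it).
print(f"kernel.perf_event_paranoid = {value} -> "
      f"{'restricted' if value >= 2 else 'review needed'}")
```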

5. Enable logs

Log management is essential in a hardening process, especially to carry out post-mortem analysis after an incident.

Make sure that auditd and journald are enabled and configured, and that the logs they generate are properly compressed.

Auditd generates event logs in the system (date, time, type, number of access attempts, imports, exports and so on). Journald collects and centralizes all the system logs and helps manage the various log sources.
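A quick sanity check, assuming systemd and the default journald.conf location, might look like this:

```python
# Minimal sketch: confirm that auditd is running and that journald compression
# is not explicitly disabled. Assumes systemd and the default journald.conf path.
import subprocess
from pathlib import Path

auditd_active = subprocess.run(["systemctl", "is-active", "--quiet", "auditd"]).returncode == 0
print(f"auditd active: {auditd_active}")

conf = Path("/etc/systemd/journald.conf")
compress = "yes (default)"
if conf.exists():
    for line in conf.read_text().splitlines():
        if line.strip().lower().startswith("compress="):
            compress = line.split("=", 1)[1].strip()
print(f"journald Compress= {compress}")
```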

6. Secure your SSH access

Secure Shell (SSH) is a protocol for securely sending commands to a computer on an unsecured network. Make sure that this sensitive service is correctly configured.

This means:

  • you cannot log in as an administrator 
  • you can only log in using keys and not passwords
  • these keys are strong enough (a modern algorithm and sufficient key length).

Ensure you also prevent access from being forwarded from one environment to another (for example, by disabling agent and X11 forwarding when they’re not needed). And enable automatic logout in case of repeated failed login attempts.
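The sketch below checks a few of these directives in sshd_config. It’s a simplified illustration that only reads the main file; on a real system, `sshd -T` shows the effective configuration, including drop-in files:

```python
# Minimal sketch: check a few sshd_config directives against the rules above.
# Assumes the default /etc/ssh/sshd_config location; drop-in files are ignored here.
from pathlib import Path

WANTED = {
    "permitrootlogin": "no",          # no direct administrator login
    "passwordauthentication": "no",   # keys only, no passwords
    "maxauthtries": "3",              # limit repeated failed attempts
}

settings = {}
for line in Path("/etc/ssh/sshd_config").read_text().splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip()

for key, expected in WANTED.items():
    actual = settings.get(key, "(default)")
    status = "ok" if actual.lower() == expected else "check"
    print(f"{key}: {actual} [{status}]")
```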

7. Get privilege creep under control

Privilege creep is a major risk when it comes to directories, including symbolic links that point to directories with higher privileges. User rights must be assigned on the principle of least privilege by default. But sometimes you need to give administrator rights temporarily.

Sudo is a tool that allows admins to temporarily grant privileges to other users to execute commands with administrative rights. Make sure that the Sudo package is properly installed and configured, and that users have the appropriate permissions and restrictions.

You can set a maximum duration for this elevated access and track all the actions performed with privileges. Rights also need to be controlled at the file system level.

Finally, keep in mind that data leaks can also occur through dump files generated by application crashes, so restrict core dumps as well.
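A minimal check along these lines, assuming the standard /proc interface, could confirm that sudo is present and that core dumps from privileged programs are disabled:

```python
# Minimal sketch: confirm sudo is present and that setuid core dumps are disabled
# (fs.suid_dumpable = 0), which limits data leaks through crash dump files.
import shutil
from pathlib import Path

print(f"sudo installed: {shutil.which('sudo') is not None}")

suid_dumpable = Path("/proc/sys/fs/suid_dumpable").read_text().strip()
print(f"fs.suid_dumpable = {suid_dumpable} -> "
      f"{'ok' if suid_dumpable == '0' else 'privileged core dumps allowed'}")
```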

8. Check local users and groups

Get a grip on user permissions and rights, as well as access to related information. The file that stores password hashes (/etc/shadow), for example, must be readable only by the administrator.

Check that each user has a unique identifier, which avoids duplicate accounts and makes actions more traceable. Don’t forget to verify that the machine has only one superuser and that user groups are set up correctly.
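Here’s a short sketch of those checks, assuming the standard /etc/passwd and /etc/shadow layout:

```python
# Minimal sketch: a few of the checks described above - permissions on the file
# holding password hashes, duplicate UIDs, and extra UID-0 (superuser) accounts.
import stat
from collections import Counter
from pathlib import Path

shadow_mode = stat.S_IMODE(Path("/etc/shadow").stat().st_mode)
world_access = shadow_mode & 0o007
print(f"/etc/shadow mode: {oct(shadow_mode)} "
      f"({'ok' if world_access == 0 else 'world-accessible!'})")

uids, superusers = [], []
for line in Path("/etc/passwd").read_text().splitlines():
    name, _, uid, *_ = line.split(":")
    uids.append(uid)
    if uid == "0":
        superusers.append(name)

duplicates = [uid for uid, count in Counter(uids).items() if count > 1]
print(f"duplicate UIDs: {duplicates or 'none'}")
print(f"UID 0 accounts: {superusers}  (only 'root' expected)")
```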


9. Secure local users’ passwords

As well as managing users, you need to define secure authentication parameters for them. 

Check that users cannot have empty passwords and require the password chosen to follow a specific policy (minimum number of characters, non-alphanumeric characters, no reuse of old passwords, etc.). Finally, make sure that access is blocked after too many incorrect password attempts.
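As a simple illustration, the sketch below reads the ageing and length settings from /etc/login.defs; complexity and lockout rules usually live in PAM modules such as pam_pwquality and pam_faillock, which you’d need to review separately:

```python
# Minimal sketch: read password ageing and length settings from /etc/login.defs.
# Note: on PAM-based systems, complexity and lockout are enforced by PAM modules
# (pam_pwquality, pam_faillock), whose configuration is not checked here.
from pathlib import Path

KEYS = {"PASS_MAX_DAYS", "PASS_MIN_DAYS", "PASS_WARN_AGE", "PASS_MIN_LEN"}
for line in Path("/etc/login.defs").read_text().splitlines():
    parts = line.split()
    if len(parts) >= 2 and parts[0] in KEYS:
        print(f"{parts[0]} = {parts[1]}")
```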

10. Check for application updates

Vulnerabilities are discovered and disclosed all the time. Failing to install patches and updates leaves you with a bigger target on your back. The waves of malware attacks that strike after patches are released show the need to be proactive and take action.

The goal with this hardening measure is to keep a constant watch and make sure your systems are running the latest available versions. Even though you need to perform regression testing before rolling out updates, excessive delays can prove dangerous.
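A rough sketch of such a check, assuming either apt or dnf is available, could count pending updates so you can alert on drift:

```python
# Minimal sketch: count packages with pending updates, using whichever package
# manager is present. Run it from cron or your monitoring tool to spot drift.
import shutil
import subprocess

if shutil.which("apt"):
    out = subprocess.run(["apt", "list", "--upgradable"], capture_output=True, text=True)
    pending = [l for l in out.stdout.splitlines() if "/" in l]
elif shutil.which("dnf"):
    # dnf check-update exits with code 100 when updates are available
    out = subprocess.run(["dnf", "-q", "check-update"], capture_output=True, text=True)
    pending = [l for l in out.stdout.splitlines() if l.strip()]
else:
    pending = []

print(f"packages with pending updates: {len(pending)}")
```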

Manage and automate your system hardening – with a single tool

Ensuring that all these parameters are properly configured at all times and across all your Linux systems is a tough job, but someone’s got to do it. The best way to do this is to use a tool that manages and secures your entire infrastructure automatically.

That’s where Rudder comes in. Rudder not only deploys these rules on your systems, it also makes sure they are continuously applied: see for yourself.

And that’s not all: we also offer a subscription-based patch management module. This helps to identify the updates available and automate patch campaigns for your systems.
