For over two years now we have lived in a world where high-profile vulnerabilities surface on a more than yearly basis - at times (OpenSSL, looking at you) less than four weeks pass between two patch orgies. Vendor patches arrive with delays that some of these attacks simply don't allow you to wait out.
The goal is to document the measures people took to quickly mitigate emerging high-risk attacks.
These are bandaids that do not replace patching and can break things or leave some holes.
But they are likely to reduce your attack surface a lot and can cover the hours / days until distros give you patches.
So far, we don't have a framework for coming up with things like this.
We've come a long way and learned very little!
This was a special one in that it required you to regenerate ALL crypto keys on a system and restart ALL services.
Given the lack of PFS in most 200x-era deployments, the most reasonable approach in a scenario like this is:
Spin up new servers and attach the old data disks. If you don't have separation of OS and data, then, well, you made that choice.
Encrypted backups need to be migrated to new backup media, no matter whether disk or tape.
This was the first time we had to re-generate our SSH host keys, among other things.
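The regeneration step can be sketched roughly like this (key types, sizes and file names are assumptions based on a default OpenSSH layout; on a real host you would write straight into /etc/ssh as root):

```shell
# Sketch: regenerate SSH host keys at sane sizes.
# KEYDIR would be /etc/ssh on a real host; a scratch directory keeps
# this example side-effect free.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$KEYDIR/ssh_host_ed25519_key"
ssh-keygen -q -t rsa -b 4096 -N '' -f "$KEYDIR/ssh_host_rsa_key"
# Afterwards: move the keys into place, restart sshd, and expect every
# client to see (and have to re-accept) the changed host keys.
```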
If your system isn't crap-full of GNUisms (otherwise it might just not boot): mv /bin/bash /bin/bash.off && chmod 000 /bin/bash.off
If it is, you'll need to move all user shells over to something else first.
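To check whether a given bash still needs this treatment, the classic probe for CVE-2014-6271 works; a patched bash prints only "ok":

```shell
# A vulnerable bash executes the trailing command while importing the
# crafted environment function, and prints "vulnerable" as well.
env x='() { :;}; echo vulnerable' bash -c 'echo ok'
```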
A mod_security rule to block it in web requests:
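A widely circulated variant looked roughly like this (rule id, status code and message are placeholders; adapt them to your ruleset):

```
SecRule REQUEST_HEADERS "^\(\) {" \
    "phase:1,deny,id:1000000,status:400,log,msg:'CVE-2014-6271 bash attack'"
```

Matching only request headers misses function definitions smuggled in via other input; widen the variable list if your latency budget allows.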
+ Regenerate SSL key pairs too, since this attack was already being exploited before it became public.
openssl dhparam -out dhparams.pem 2048
Copy that to the right location. If you go for larger sizes (more than 2048 and up to 4096 bits) it'll churn away for an hour or more.
Make sure you turn off haveged, and just don't fucking bootstrap your security like this in a VM.
Test via SSLtest that it did actually WORK; e.g. older Apache might just not honour it.
This might break DNSSEC - which you would notice - but apparently it doesn't.
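The generation step above, sketched out (the file name is an example; the wire-up shown is for Apache, where only 2.4.8+ reads a separate parameter file):

```shell
# Generate fresh DH parameters; 2048 bits finishes in seconds to
# minutes, 4096 can churn for an hour or more on a busy box.
DHFILE=dhparams.pem
openssl dhparam -out "$DHFILE" 2048 2>/dev/null
# Apache >= 2.4.8:  SSLOpenSSLConfCmd DHParameters /path/to/dhparams.pem
# Older Apache only honours DH params appended to the certificate file.
```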
Unpatched copies in memory:
After a library update, find processes that still have the old, now-deleted library mapped - those are the ones to restart.
(Because Debian's glibc + openssl hooks are darn incomplete anyway)
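One quick-and-dirty way to find them is to scan /proc for mappings of deleted files (tools like checkrestart from debian-goodies, or needrestart, do this job more thoroughly):

```shell
# PIDs of processes still mapping a deleted file - after a library
# update these are typically the replaced library copies.
# grep exits non-zero when nothing lingers, hence the || true.
grep -l ' (deleted)' /proc/[0-9]*/maps 2>/dev/null \
    | cut -d/ -f3 | sort -un || true
```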
Super-easy cipher suite
Cuts off Android 4.0.4 and some other BROKEN & UNSAFE clients like that. Gives a very solid "A" SSLLabs rating instead ;-)
(So, ask yourself, is it OK that my daughter's website has better crypto than most small web shops? Seriously?)
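For reference, such a restrictive suite in Apache terms looked roughly like this (an assumption modeled on the Mozilla "modern" recommendations of the time; adapt to the clients you actually must support):

```
SSLProtocol         all -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite      ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
```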
IPTables became the tool of choice
It appears that iptables is becoming the most common tool to protect applications. Rules can be dropped in at run time, sparing an immediate restart of the affected applications. I think this is a good thing. Sometimes you run into applications where a downtime needs something like 12 months' notice. Thinking those could just be restarted at a whim to fix even the most critical issue is delusional. We don't get that option, sorry. So using a host-based firewall to filter malicious traffic at the application level is a nice stopgap measure.
We need to come up with a more standardized way of writing "content filter rules" like these. These rules are not part of what makes up a classic firewall policy; they belong to the application realm. If you have a routed virtualization setup, you get another benefit: protection of connected VMs. Generally, those rules should be written rather "wide" with regard to the traffic they process, as long as you can handle the extra latency. Realistically, if your system is a web+mail server, it'll mostly see traffic for exactly that. So there won't be a big difference between matching ALL (wildcard) traffic and matching only the web+mail traffic - that's ALL minus the 0.1% of *other* traffic - but it keeps you more secure.
There are a few potential problems with the iptables approach:
People use different modules to the same end (string vs. u32, etc.), and sometimes those modules are not immediately available on stable distributions.
Also keep in mind that these rules are not by themselves persistent across reboots. So a rule group in Capirca, or Ferm, or whatever tool you like best, is probably the way to do it.
Finally, as seen in the Shellshock example, the idiotic design of splitting iptables and ip6tables means you will need the rules twice on a dual-stack v4/v6 host. Yet another reason to run v6-only behind NAT64 instead of dual-stacking.
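For illustration, the widely shared Shellshock stopgap using the string match - issued twice for dual-stack, as just complained about. Hex 28 29 20 7B is the "() {" marker; port and chain are examples:

```
# Drop inbound HTTP traffic carrying the Shellshock marker "() {".
# Needs the xt_string module; note it matches raw packet bytes.
iptables  -I INPUT -p tcp --dport 80 -m string --algo bm \
          --hex-string '|28 29 20 7B|' -j DROP
ip6tables -I INPUT -p tcp --dport 80 -m string --algo bm \
          --hex-string '|28 29 20 7B|' -j DROP
```

Being a per-packet match, this can miss a marker split across segment boundaries; it's a stopgap, not a parser.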
So what we need is a standardized "look" and "practice" for defining these application-slapping iptables rules - and also a second way of doing this. For Shellshock, Red Hat released an LD_PRELOAD library that mapped the vulnerable function away from bash. That approach is not quite there yet, since it still requires a restart. For the most recent glibc resolver bug, Oracle extended ksplice to run-time patch not just the kernel but also userland components. That is pretty much the way to go, and the saved reboots probably pay for the basic subscription + ksplice.
My wish would be for more energy to be put into (signed) live patching, up to the point where it fits in even with advanced OSes like NixOS, instead of the current "restart/reboot/redeploy" approaches different OSes have. But this would need to be driven outside of a vendor, and probably outside of the Linux Foundation, too. We need djb/phk-level experts on this, not vendors or anyone influenced by them via the LF's cash flow.
(Yeah I miss OSDL)
- The key generation section of your SSH init script. Why is it still doing 1024-bit keys? Why?
sorry(*), you are fucked by design.
*"if it ever has any security hole in one of the network-fed services that it partially implements"
Current attack profile
- DHCP client
- partial DNS resolver
- NTP client