Remember the NSA's infamous "I hunt sysadmins" posts [1].
Software engineers, ops people, and sysadmins at big tech companies with interesting data are high-value targets. If you can run code on production infrastructure or a large install base, you should assume you are being actively, personally targeted by multiple advanced persistent threats, including at least one state intelligence agency, and adjust your OPSEC accordingly.
They want into your infrastructure, and the easiest way into your infrastructure may well be your SSH key.
What's paranoid for the general public is a pretty good idea for you.
To give some practically implementable advice: run anything that parses untrusted input in a virtual machine or on a physically separate machine.
This means you should browse the web, read e-mail, and open any Office/PDF/whatever files sent to you in a virtual machine that has access to nothing more than it strictly needs.
The reason is that most desktop software and OS kernels are not written with total security against NSA-like adversaries in mind, and as a result often have exploitable bugs that completely compromise the system. While they can still provide reasonable security against random attackers, they cannot provide any significant security against organizations (such as intelligence agencies) that can offer millions of dollars for exclusive access to unpatched exploits and can MITM any connection.
You can install Qubes [https://www.qubes-os.org] if you want a system that makes it more convenient and secure to use virtual machines this way (and you trust its authors not to have backdoored it).
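On Qubes, for instance, untrusted files can be handed straight to a throwaway disposable VM. A minimal sketch (assuming you're inside a Qubes AppVM, where the qvm-open-in-dvm helper is available; the wrapper itself is just illustrative):

    import subprocess
    import sys

    # Hand an untrusted file (attachment, PDF, ...) to a throwaway Qubes
    # disposable VM. Assumes this runs inside a Qubes AppVM, where the
    # qvm-open-in-dvm helper is available; the disposable VM is destroyed
    # when the viewer closes, so an exploit in it is contained.
    def open_untrusted(path):
        subprocess.run(["qvm-open-in-dvm", path], check=True)

    if __name__ == "__main__":
        open_untrusted(sys.argv[1])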
Furthermore, you should only install software from official sources, and when you install software, you should check its digital signature or hash (from a separate virtual machine, since the one you are browsing with is potentially compromised, as discussed above).
If you don't already have a trusted signing key stored on your machine, you should download the file itself, or something guaranteeing its integrity if available (signing key, HTTPS certificate, file hash), over multiple Internet connections, such as a residential ISP, a mobile ISP, and multiple instances of Tor, and make sure the downloaded data is the same on each. This is very likely to foil a targeted attack specifically designed to serve you a trojaned version of the file or a replaced signing key under the attacker's control.
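As a rough sketch of that verification step (assuming you've already fetched a copy over each connection and saved them to disk), comparing SHA-256 digests is enough:

    import hashlib
    import sys

    # Compare copies of the same file fetched over different network paths
    # (home ISP, mobile hotspot, a couple of Tor circuits, ...). If an
    # attacker is MITMing only some of those paths, the digests won't match.
    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        if len(sys.argv) < 3:
            sys.exit("usage: compare.py FILE FILE [FILE ...]")
        digests = {p: sha256(p) for p in sys.argv[1:]}
        for path, digest in digests.items():
            print(digest, path)
        if len(set(digests.values())) != 1:
            sys.exit("MISMATCH: at least one download differs -- assume an attack")
        print("OK: all copies are identical")

Run it against the copies from each connection; any mismatch means at least one path served you something different.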
Of course it's impossible to defend against the original authors of the software being malicious or compromised themselves, or against an attack designed to give everyone maliciously altered software when you don't have a trusted key/certificate, but those are more likely to be discovered. Even if the downloaded software is malicious, if you are separating software by virtual machine as described above, the damage will be limited to the virtual machine the software runs in.
There are a lot of other things to consider, such as defending against physically targeted attacks (e.g. someone breaking into your home and installing a keylogger) if you are not anonymous, and keeping your anonymity if you are.
Congratulations, you just described a security compliance program that's smaller than DFARS 252.204-7012, COBIT, PCI-DSS, HIPAA/HITECH, the CSA Cloud Controls Matrix, the SANS Critical Security Controls, SOX, FedRAMP/FISMA, and ISO 27001.
I keep reading (IMO nonsense, mostly coming from Josh Corman) that best practices like those compliance frameworks aren't enough. Yet breaches are on the rise, and only 10% of the IT industry goes through this thorough cleansing process.
Security Compliance forces you to go through that cleansing. This isn't a compliance burden, it's just thorough hygiene.
A key rotation process, a log review process, a change control process, etc. are examples of things found in Compliance Frameworks.
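To make one of those concrete: a key rotation check can be as small as flagging keys past their rotation window. A rough sketch, using file mtime as a stand-in for key age (an assumption; a real rotation process would track issuance dates explicitly):

    import os
    import time

    # Flag SSH public keys that haven't been rotated within MAX_AGE_DAYS.
    # File mtime is used as a rough proxy for key age -- an assumption; a
    # real rotation process would track issuance dates explicitly.
    MAX_AGE_DAYS = 90
    KEY_DIR = os.path.expanduser("~/.ssh")  # hypothetical key location

    now = time.time()
    for name in os.listdir(KEY_DIR):
        if not name.endswith(".pub"):
            continue
        age = (now - os.stat(os.path.join(KEY_DIR, name)).st_mtime) / 86400
        if age > MAX_AGE_DAYS:
            print("ROTATE: %s is %d days old" % (name, age))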
Even the Chinese government is demanding that US tech firms go through one of the above compliance programs.
Sysadmins continually pooh-pooh compliance, saying that compliance is not security.
They pooh-pooh it because they're self-compliant, doing security processes right the first time around. The reasoning goes: those self-compliant sysadmins are already security-competent, so they don't need to adhere to an external set of security compliance standards.
Well, for starters, those standards don't seem to be targeted at individuals, and often the goal seems to be to let someone say "I'm compliant with XXX" rather than to tell them how to be secure.
The practical advice they give is in the form of obvious generalities ("set up your firewalls", "update your OS") without any concrete details on how best to do that, or even a discussion of which attacks are prevented and which aren't.
Threat model discussions, advice on choosing which threats you can realistically defend against, theoretical principles, and discussion of possible attacks also seem to be non-existent.
>The practical advice they give is in the form of obvious generalities ("set up your firewalls", "update your OS") without any concrete details on how best to do that
I hope you're not suggesting that security compliance isn't necessary because of the time-consuming external research involved. Sure, it's a usability problem, but does that mean we should throw it out and just stick with Penetration Testing, Intrusion Detection Systems, and Risk Assessments? A security guy actually told me that just doing Pentesting, Risk Assessments, and IDS was good enough.
Personally, I believe the rest of the world doesn't have a reasonable alternative except to go through a thorough cleansing process of security compliance. That's why it's been my mission to take the guesswork out of security compliance frameworks like DFARS 252.204-7012, COBIT, PCI-DSS, HIPAA, FedRAMP/FISMA, and ISO 27001/2.
However, I understand that these Compliance Frameworks might not be for you. It seems like you're already self-compliant with your security processes and you're so security-competent that you're doing things right the first time around.
It seems that organizations where security is a compliance-driven process are barely concerned, or not concerned at all, about security breaches; they only care about regulators.
Some of those processes are a fucking joke. The HIPAA technical safeguards include nothing particularly interesting; the hard part is the paperwork and legal ass-covering. Some PCI-DSS "auditors" are nothing more than salespeople who bought Nessus or similar and charge $10k/pop to run it, slap a logo on its report, and email it to you. Security regulations that businesses at large actually seem to care about have nothing at all to do with secure software engineering, just checking boxes like "have a firewall" and "have a password policy" and "have a network security policy" as if producing an endless trail of Word documents will make you less vulnerable.
superuser2: you're telling me that having a process for firewall changes or rotating your keys is a joke? What other process is a fucking joke? System Hardening? Log review? Source code analysis? Updating your network diagrams? Physical access monitoring? These are all processes (and more) that compliance says you should do.
You bitch about Word documents when I bet you've never even gone through a thorough compliance process.
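And log review isn't hand-waving either. The smallest possible flavor of it, counting failed SSH logins per source IP, fits in a dozen lines (the log path and message format are assumptions, based on a Debian-style /var/log/auth.log with stock sshd):

    import collections
    import re
    import sys

    # Count failed SSH logins per source IP from an auth log. The path and
    # message format are assumptions (Debian-style syslog, stock sshd).
    FAILED = re.compile(r"Failed password for .* from (\S+)")

    path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/auth.log"
    counts = collections.Counter()
    with open(path) as f:
        for line in f:
            m = FAILED.search(line)
            if m:
                counts[m.group(1)] += 1

    for ip, n in counts.most_common(10):
        print("%6d failed logins from %s" % (n, ip))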
Not even that -- there are tech firms with teams spanning multiple geographic boundaries and spoken languages, and those teams have varying degrees of opsec (from none to excellent, with your typical bell curve).
These firms receive contracts from Fortune 500 companies that have no interest in hiring/maintaining technical staff but need apps that reach their userbase (which numbers from hundreds of thousands to several million users across various jurisdictions).
The world of software development is growing and there are cracks appearing everywhere; a malicious individual should have no trouble accruing a healthy collection of exploitable code across various tech stacks (be it Android, iOS, server-side, or otherwise).
Proper opsec is expensive and many companies don't even bother (or are completely unaware that they could be in trouble), and that's not even touching on designing secure systems. A malicious individual could hold code for several months before deploying an exploit that reaches end-users.
Hunting sysadmins is most definitely a serious problem, but so is outsourcing.
>The world of software development is growing and there are cracks appearing everywhere; a malicious individual should have no trouble accruing a healthy collection of exploitable code across various tech stacks (be it Android, iOS, server-side, or otherwise).
Even more reason to enforce a compliance program (e.g. ISO 27001) to clean your systems and your code.
In fact, you're talking about growing cracks appearing everywhere, and when I look at your code right now, I see that even you don't follow secure coding practices for software development. Not using the Pull Request Model? Just pushing commits directly into master? These (and more) are all bad security processes that I've identified in your GitHub account.
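For instance, enforcing the Pull Request Model server-side can be as simple as a git pre-receive hook that rejects direct pushes to master. A sketch of the plain-git equivalent of GitHub's branch protection rules (not your actual setup):

    #!/usr/bin/env python3
    import sys

    # Server-side git pre-receive hook: git feeds one line per updated ref
    # on stdin, formatted "<old-sha> <new-sha> <ref-name>". Exiting non-zero
    # rejects the whole push, so changes must land via reviewed pull requests.
    PROTECTED = {"refs/heads/master"}

    for line in sys.stdin:
        old_sha, new_sha, ref = line.split()
        if ref in PROTECTED:
            sys.stderr.write("Direct pushes to %s are blocked; "
                             "open a pull request instead.\n" % ref)
            sys.exit(1)

Drop it into hooks/pre-receive on the server and mark it executable.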
[1] https://theintercept.com/2014/03/20/inside-nsa-secret-effort...