The text should literally read "Price (USD per month)". Then it will be clear, unambiguous, and obviate the need for the snippet at the bottom about "$USD" (which is not a standard notation).
This was indeed a pleasant surprise when I logged into my CloudFlare account.
What intrigues me is that CloudFlare missed an opportunity to allow secure self-signed certificates.
The new CloudFlare SSL setup allows the origin server to present to CloudFlare's servers either (i) an unverified self-signed certificate; or (ii) a certificate signed by a CA. Neither provides great security. In the former case, a MITM can trivially generate a new self-signed certificate. History has shown the latter case is also problematic, as there have been several incidents where CAs have issued invalid certificates [1].
What would be nice is if I could generate a self-signed certificate and upload the fingerprint(s) to CloudFlare. CloudFlare could then verify the fingerprint when connecting to my origin server, without needing to trust a CA.
Am I missing anything obvious as to why this wouldn't be as secure (or more secure) than the two options CloudFlare has introduced?
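To make the proposal concrete, here is a minimal sketch of what such a pinning check could look like. The helper names are hypothetical and this is not CloudFlare's actual API; the idea is simply that the edge hashes the certificate the origin presents and accepts the connection only if the fingerprint was uploaded in advance:

```python
import hashlib

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a (DER-encoded) certificate, colon-separated hex."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def pin_matches(der_bytes, pinned_fingerprints):
    """Accept the origin's certificate only if its fingerprint was pre-uploaded."""
    return fingerprint(der_bytes) in pinned_fingerprints

# In practice the proxy would obtain the origin's certificate during the TLS
# handshake; here we stand in some placeholder bytes for illustration.
cert = b"placeholder certificate bytes"
pins = {fingerprint(cert)}          # fingerprints uploaded ahead of time
assert pin_matches(cert, pins)      # matching pin: connection allowed
assert not pin_matches(b"mitm certificate", pins)  # unknown cert: rejected
```

Note that this gives trust-on-upload rather than trust-on-first-use: a MITM would need the origin's private key, not just the ability to mint a fresh self-signed certificate.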
I recall reading here on HN recently something about CloudFlare having their own internal CA infrastructure. I would expect them to allow customers to start generating certificates (signed by their internal CA) that they can deploy on the origin server.
Personally, I was really hoping the backend connection would be secured by spiped, cutting the unnecessary complexity of TLS and certificate validation completely out of it.
That might exclude Windows servers (until someone ports it), and it may be harder to set up in some cases.
> Maybe they have automatic theorem prover running on Jenkins and they just refuse to merge changes which would be proven to be wrong?
Yes, that's the plan.
Internally this is how we have been operating since 2009, when the proof was first completed: You don't push a change to the verified kernel branch unless you are willing (or are able to convince someone else) to do the proof updates.
We don't yet have a regression website publicly available, but it's on the short-term roadmap.
I'm not sure who submitted the link (I don't think it was anyone from the seL4 team), but I inserted the http://sel4.systems/ link into the repo's README file, which will hopefully help a little.
> The prover has a bug (most proofs are done automatically).
Minor point: while many formal verification projects are proven automatically, seL4 is proven in Isabelle/HOL which is an interactive theorem prover. This means that the proofs are actually carried out mostly manually. (The downside is that it takes much longer to carry out proofs; the upside is that you can prove far more sophisticated properties than is possible with the current state-of-the-art automated provers).
You are still right that the proof checker could have a bug, though. Proof checkers tend to be much smaller and simpler than things like SMT solvers, which helps increase confidence in them, however.
INTEGRITY-178B and seL4 are not related in terms of their source code or origin, though they do both aim at the same target audience (security and/or safety-critical systems).
I don't have a good knowledge of INTEGRITY-178B, but as far as I can tell some differences are:
* INTEGRITY-178B is a static separation kernel. seL4 can be used as a static separation kernel, but also allows for dynamic systems, for instance with processes being created and torn-down dynamically at run-time;
* INTEGRITY-178B has a proof that a model of the code satisfies particular security properties, while seL4 has a proof that the actual C implementation satisfies particular properties;
* INTEGRITY-178B is certified to EAL6+, while seL4 has not undergone any external certification process. (Without having a good knowledge of EAL6+, my suspicion would be that the code-level aspects of seL4 would meet or exceed EAL6+ certification, while the process-level aspects would need work on the seL4 side.)
If someone has worked with INTEGRITY-178B, please correct me if I have made any mistakes.
Thank you for responding; that is a good description.
I remember someone from one of the government agencies gave a talk, back when I was in college many years ago, about INTEGRITY-178B and this separation kernel idea.
That was maybe 7-10 years ago. The idea was pretty neat, and the claim was that the future would belong to more secure OSes based on this separation kernel (microkernel?) design, with every little component -- memory, filesystem, mouse, display -- running in userspace. He acknowledged that general-purpose computers at the time were too slow to operate that way (so Linux was better and winning on performance), but said that in 10 years or so machines would be so fast it wouldn't matter.
So since then that story has stuck with me, and it's what prompted the question.
Just a little bit more on this: NICTA was originally set up as a research group by the Australian Government with the explicit mandate of commercialising its research.
The OKL4 and seL4 microkernels developed by NICTA were commercialised through the company Open Kernel Labs, which was subsequently acquired by General Dynamics (who thus acquired the IP of both projects).
At NICTA, we are very happy to see seL4 finally being open sourced: we really do want to see our work be used as widely as possible, and open sourcing it is going to be the best way of this happening. (So, in response to the grandparent thread, that is why we are making a big fanfare: we are excited, even if nobody else is.)
The seL4 kernel currently supports the ARMv6, ARMv7 and x86 architectures, though the proof only applies to ARMv6.
I am not sure what Genode's plans are. seL4 is a different kernel from OKL4, with a substantially different API, so moving Genode across from OKL4 to seL4 will be quite some work.
The proof states that the C code of the kernel is a refinement of a high-level "abstract" specification of the kernel. This means that the C code only exhibits behaviours that are also exhibited by the high-level specification. The high-level specification (by construction) doesn't have certain bad behaviours that you often see in C programs: NULL pointer dereferences, buffer overruns, signed integer overflow, etc. Thus this class of bugs is proven to be absent. A good primer is available at: http://www.ertos.nicta.com.au/research/l4.verified/proof.pml
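As a rough sketch (this is the standard formulation of refinement as trace inclusion, not the exact seL4 statement), the property proven is that every behaviour of the implementation is also a behaviour of the abstract specification:

```latex
\forall t.\ t \in \mathrm{traces}(C_{\mathrm{impl}}) \implies t \in \mathrm{traces}(A_{\mathrm{spec}})
```

Since the abstract specification is constructed so that no trace contains a NULL dereference, buffer overrun, etc., the inclusion rules those behaviours out of the C code as well.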
The original proof didn't make any claims about the high-level behaviour of the kernel, such as security. So for instance, the original kernel might have had a bug where you set up a system so that two processes shouldn't be able to communicate with each other, but after some unexpected sequence of API calls they form a communications channel between them.
NICTA in the following years after the initial proof carried out two security proofs: an integrity proof and an information flow proof. These proofs state that the capabilities possessed by a process determine (i) what other processes a particular process can modify; and (ii) what other processes the process can communicate with. Like the original proof, these security proofs (especially the second) have assumptions. In particular, they can only talk about parts of the system that are modelled. Timing channels, hidden CPU state, etc. not modelled by the proof may still allow information flow between processes, for instance.
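A toy model may help illustrate what the integrity/information-flow proofs are about. This is an illustrative sketch only, not the seL4 API or object model: the point is that whether two processes can interact is a static function of the capabilities they hold, which is what makes the property provable.

```python
# Toy capability model: a process can signal another process only if it
# holds an endpoint capability naming that process. (Hypothetical names;
# real seL4 capabilities refer to kernel objects, not process names.)
class Process:
    def __init__(self, name, caps=()):
        self.name = name
        self.caps = set(caps)   # the capabilities this process possesses

def can_send(sender, receiver):
    """Communication requires an explicit endpoint capability."""
    return ("endpoint", receiver.name) in sender.caps

a = Process("A", caps={("endpoint", "B")})
b = Process("B")
c = Process("C")

assert can_send(a, b)       # A was granted an endpoint to B
assert not can_send(a, c)   # no capability, no channel (in the model)
```

As the parent comment notes, the proof only covers what the model covers: timing channels and hidden CPU state fall outside a model like this one.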
Sorry, I don't think you can claim you have a mathematical proof of the correctness of your kernel when you assume this much.
You need to assume something. Every operating system kernel makes these assumptions: seL4 is just explicit about them. Some of the assumptions are bigger than others (management of caches is a pain to get right and hard to test, for instance). Some assumptions remain just due to a lack of time, such as proving the initialisation code of the kernel.
Additionally, as time has gone on the assumptions have been reduced. For instance, the work of Thomas Sewell and Magnus Myreen means that the compiler no longer needs to be trusted: https://www.cl.cam.ac.uk/~mom22/pldi13.pdf
And where is this proof? I can't seem to find it anywhere on their web site.
The proof will be part of the open-source release. It consists of around 200,000 lines of Isabelle/HOL proof script, which is machine-checked. Anyone will be able to inspect/modify/check the proofs after the release.
> This means that the C code only exhibits behaviours that are also exhibited by the high-level specification. The high-level specification (by construction) doesn't have certain bad behaviours that you often see in C programs: NULL pointer dereferences, buffer overruns, signed integer overflow, etc. Thus this class of bugs is proven to be absent.
The problem I have with the literature is that it does not say the kernel is merely free of certain classes of bugs; the wording implies that it is completely bug-free. As any software developer knows, bug-free software does not exist.
Furthermore,
> You need to assume something. Every operating system kernel makes these assumptions: seL4 is just explicit about them.
they claim they have proven that their kernel is completely bug-free, while also saying they proved this under the assumption that the assembly code was written correctly. In other words: "we have proven our kernel is correct, with a proof that assumes part of the code is correct." The logical flaw in this is obvious.
I don't take issue with claims of correctness, given that they are precise about what they have proven. I do take issue with inflated claims ("bug-free" vs. "free of bugs from this specific class of bugs") for marketing purposes.
I understand the word "must" to mean that they cannot add additional strings, such as payment, to their obligation to revoke the certificate. Is there another way of interpreting it that I am missing? I guess you could interpret it as "must provide a mechanism", but I can't see that that was the intent of the original document.
Mozilla's use of the word "must" here I think is important, because the barriers to correctly dealing with a security breach should be minimized. For better or worse, root CAs are entrusted with maintaining the security of large chunks of the internet. Charging users who suspect that their certificates _may_ have been compromised (due to the Heartbleed bug, in this case) will cause them to err on the side of inaction, which is going to weaken internet security in the long run.
I couldn't have put it better myself.
I just added a new update on the website.
Saturday, April 12, 09:50 (GMT-3)
OK, so here's my reply to Nikolai:
"Let me address this question.
> Anything about free revocations there?
It doesn't, but that's not relevant.
It's pretty damn clear: you see the evidence, and that alone should be enough for you to take action. If you take Mozilla's policy by the letter, one doesn't even have to own a certificate to be able to request its revocation. All that should be needed is the evidence of compromise.

If I disclosed the private key for a certificate I don't own, would you just ignore that information? Or would you come after the certificate owner demanding payment first?

You're a CA, A CA!!! You should be worried about the security of the internet above all things. You should also be worried that you have a bunch of green padlocks around that don't mean what they once did. You're not worried about that. So in my opinion you don't deserve the trust of the internet anymore."
"Visible on screen" unfortunately is a hard thing to determine. You can easily imagine a textbox with white text on a white background, or a very small textbox, or a textbox that briefly pops up whenever you type a keystroke or click the mouse, or...
Browsers really need to support some mechanism where the user can determine precisely what information will be filled prior to it being handed over to the website. This needn't be difficult; Chrome's existing autofill popup already displays a subset of the information, the popup just needs to give a fuller picture.
----------------------------------------------------
| This webpage is asking for information that your |
| browser can automatically fill in for you. Check |
| the box next to each item you'd like to include. |
| |
| [x] Email address - bob@example.com |
| [ ] Name - Bob Jones |
| [ ] Address - 123 Sample Street |
| |
| [Autofill] [No thanks] |
----------------------------------------------------
Edit: lower down in the fine print it states "per month", but the table heading states "p/message".