Closed
Bug 566008
Opened 15 years ago
Closed 9 years ago
SSL: Add an "I know what I'm doing" option to self-signed certs
Categories
(Core :: Security: PSM, defect)
RESOLVED
WONTFIX
People
(Reporter: jhill, Unassigned)
References
(Blocks 1 open bug)
Details
(Whiteboard: [psm-cert-errors] [Advo])
(Note: this is filed as part of the “Paper Cut” bugs — we assume that there may be multiple existing bugs on this. Please make them block this bug, and we will de-dupe if they are indeed exactly the same. Thanks!)
To reproduce:
1. Go to a website with a self-signed cert that you trust
2. Try to allow an exception for this website
3. Notice you have to go through at least 5 clicks
Recommendation:
Add a new button that allows a user to say that they know what they are doing (not just temporarily). This would add the site to the exceptions list without the user having to go through 5 extra clicks.
boolean general.I_understand_SSL ;)
Possibly use Perspectives? http://www.cs.cmu.edu/~perspectives/index.html
There should be a one click way to allow the certificate once without having to save exceptions.
Updated•15 years ago
Assignee: nobody → kaie
Component: General → Security: PSM
Product: Firefox → Core
QA Contact: general → psm
Comment 1•15 years ago
Just so this doesn't get shut down immediately… ;)
This bug is mostly about doing a second round of reviews of how we handle self-signed certs. The intent is good now, but it's a little bit overly onerous, and I think there could be better ways to approach this while still indicating that those certs aren't good.
It'll be my job to corner johnath about this sometime. (Adding him to CC)
Comment 2•15 years ago
I'm counting two clicks, or three if you count expanding "I Understand the Risks" as a click.
But didn't we have a one-click-through UI in the past, which failed utterly to prevent users from accessing just about anything no matter what? Is this really what you want? Is there any good reason today to use self-signed certificates?
Comment 3•15 years ago
There are existing prefs that make it easier to add an exception, fwiw - see bug 399275 comment 50. They're not exposed in the UI. That bug and bug 427293 are also worth reading before opening this discussion up again...
Comment 4•15 years ago
FWIW, I like a lot of the comments Zack made in http://www.owlfolio.org/htmletc/ssl-errors/
I agree that the current page is a little heavy, but the intent is solid. We should make sure users understand that the website's IDENTITY cannot be assured, even though the connection offers to protect the traffic. We should use terms that the user can understand, and give them options that are clear.
The successful design will not have a "whatever" button - there must be evidence of attention being paid, and not a simple clickthrough.
Good luck! :)
Updated•15 years ago
Assignee: kaie → nobody
Whiteboard: [psm-cert-error-pages]
Updated•15 years ago
Whiteboard: [psm-cert-error-pages] → [psm-cert-errors]
Comment 5•15 years ago
(In reply to comment #2)
> But didn't we had the one-click-through UI already in the past which failed
> utterly to prevent users accessing just about anything no matter what? Is this
> really what you want? Is there any good reason today to use self-signed
> certificates?
I would assume this would be implemented as an option unexposed in the UI -- as in the "general.I_understand_SSL" mentioned in the initial report. Personally, I think the current page and behaviour are fine for 95% of the user base.
"Invalid" certificates are quite common on hardware devices or in virtual hosting environments; network administrators such as myself would be the primary beneficiaries of this change.
Comment 6•15 years ago
I just wanted to share that I have *never* heard anything positive about the current SSL UI from a _normal_ user (the ones who don't work on a browser or other web-related project). The intent may be a good one, and I really liked the design when it was introduced, but in the meantime I am not sure it does more good than harm, really.
Comment 7•15 years ago
Security isn't always about convenience, but unfortunately we also don't know how many attacks were actually prevented with this UI. Probably more than with the old warning popups, which were simply clicked away.
Comment 8•15 years ago
BTW, one problem with the current UI is that it makes it look like HTTPS with a self-signed cert (or one signed by an untrusted CA, like Cacert) is *less* secure than plain HTTP, when in reality it's just the opposite.
Comment 9•15 years ago
(In reply to comment #8)
> BTW, one problem with the current UI is that it makes it look like HTTPS with a
> self-signed cert (or one signed by an untrusted CA, like Cacert) is *less*
> secure than plain HTTP, when in reality it's just the opposite.
Well, actually it's the same as plain text if you haven't vetted the certificate independently.
Comment 10•15 years ago
(In reply to comment #8)
> BTW, one problem with the current UI is that it makes it look like HTTPS with a
> self-signed cert (or one signed by an untrusted CA, like Cacert) is *less*
> secure than plain HTTP, when in reality it's just the opposite.
I hear this argument a lot. The risk is not in showing the site (call it foo), but in changing the meaning of https://foo/... URLs wherever they might appear as well as the https://foo security origin. Anyone who interacts with https://foo via cross-site XMLHttpRequest or window.postMessage (in either direction) or similar mechanisms potentially becomes vulnerable. Bookmarks and links from email and other web sites are also affected. The big ugly error page is supposed to make sure this is what you want.
The argument would hold if you follow a YURL for the presented certificate instead of adding a certificate exception.
Comment 11•14 years ago
(In reply to Matt McCutchen from comment #10)
> In essence, the https URI scheme stands for some level of server
> authentication. There are plenty of problems with the current regime --
> think of the "s" as standing for "something" if you like -- but to negate it
> would be a mistake.
The padlock stood for something, too, before the security UI folk decided that it wasn't needed anymore. Before that, the old gold skeleton key on a blue background stood for something before it was obsoleted for the same reason.
Notably, those actually had public word-of-mouth marketing buy-in. That's only slightly easier to generate than willingness to adhere to arbitrary, legally unenforceable mandates from an organization which has no interest in the user's legal affairs or even existence.
> If we want a mode that does not require server
> authentication, it would have to be a different URI scheme, say "httpz", and
> we'd have to virtualize things so the site thinks it is using "https" while
> actually keeping the secure cookies separate and so forth. Then
> about:certerror could offer a link to the "httpz" URL as an alternative to
> adding an exception. A valid RFE, but I'd guess Mozilla would be more
> interested in clearing obstacles to the wider use of server authentication.
Obstacles to wider use of server authentication?
1) the inability to present multiple certificate chains in a manner which doesn't rely upon additional cooperation between the server and the client.
2) prohibitively complex and arcane auth setup procedures that are also prohibitively expensive in the normal skills marketplace.
3) no legal requirement to adhere to rules set by an organization with no law-making authority which require that value be transferred in a commercial transaction.
4) Mozilla's PKI/PSM developers have acted as though they think that they can do no wrong and that every criticism of their work can be dismissed, and that the demands of a sizeable portion of its userbase can go unheeded. If Mozilla's developers won't listen to the users, why should anyone else listen to Mozilla's developers?
I seek the capacity for opportunistic encryption to get everyone used to using cryptography that Just Works to protect everyone from passive attacks. I seek the removal of the artificial commercial barriers to adoption of cryptography. Only once the underlying technology is entrenched should we be really worried about authentication. (SSLv1 was opportunistic, non-authenticated.)
HTTP auth credentials are at most as vulnerable in an opportunistic/MITM scenario as they are in a plaintext scenario. They certainly can't be any more vulnerable than plaintext. Unauthenticated, opportunistic encryption would serve to deter automated passive scanning at such places as DefCon (and the telcos) by making the act of reading any arbitrary message prohibitively expensive, requiring instead active, targeted attacks which leave some trace.
Demanding that 'https' be limited to authenticated channels is counterproductive. Users no longer have any surety that we can find the legal identity of those we communicate with, due to domain-validation practices. There's no value-add to the end user or the site owner to keep the magic from working solely because a particular application's security model and policy have no need for publicly-trusted CA certificates, or because the user has chosen not to trust Mozilla's builtin token. The marketplace has had a decade to become entrenched, and publicly-trusted CA certificates are the norm on any commerce site. And as I mentioned, we've already obsoleted the padlock icon (and the gold key) in favor of colored-bar based alternatives.
If the problem is cookies and authentication data, the problem becomes to determine how difficult it would be to implement safely. In this case, it could perhaps be implemented with multiple cookie/authentication spaces. Instead of relying upon https or http to determine what cookies to send across the channel, authenticate the channel's certificate and use that as a guide to figure out what cookies and credentials to send across which channels.
Green would inherit blue would inherit white, but blue would not be able to see green, and white would not be able to see either blue or green.
For HTTP Basic and Digest authentication data, require that it be sent across EV-authenticated channels if the UI is displaying a green bar, DV-authenticated channels if the browser is displaying blue, and same-origin white if the certificate isn't trustworthy. Otherwise, the authentication information is being sent across very few (usually one) HTTP connections.
For bonus points:
- create a yellow/gold bar which includes certificates acceptable because of a trust decision made outside of the builtin list (something outside of the builtin token); this would receive cookies for white-bar and yellow-bar only, and yellow-bar cookies wouldn't propagate to blue or green by default unless requested by the green/blue
- create a red bar with no cookie access which includes certificates which are technically invalid for some reason (including site name mismatches and duplicate issuer/serial tuples, a corner case which currently penalizes both the site owner and the browser user with inoperability due to negligent or malicious failure of CA controls)
- create a gray ? or a hazard-yellow/black striped address bar with a delta-bang (ISO caution W001) for sites using untrustworthy https
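The tiered cookie/credential spaces proposed above can be sketched roughly as follows. This is only an illustration of the inheritance rules described in this comment; the level names, the inheritance table, and the function are assumptions for the sketch, not an existing Firefox API:

```python
# Illustrative sketch of tiered cookie jars keyed on channel trust level.
# Levels, inheritance rules, and all names here are assumptions.
TRUST_INHERITS = {
    "green":  ["green", "blue", "white"],   # EV channel sees DV and plain jars
    "blue":   ["blue", "white"],            # DV channel cannot see green
    "yellow": ["yellow", "white"],          # trust added outside the builtin token
    "white":  ["white"],                    # untrusted cert: same-origin jar only
    "red":    [],                           # technically invalid cert: no cookies
}

def visible_cookies(trust_level, jars):
    """Return the cookies a channel at the given trust level may send."""
    visible = {}
    # Apply jars from least to most trusted so higher tiers override lower ones.
    for tier in reversed(TRUST_INHERITS.get(trust_level, [])):
        visible.update(jars.get(tier, {}))
    return visible
```

For example, a "blue" (DV-authenticated) channel would receive white-jar and blue-jar cookies but never green-jar ones, matching the "green would inherit blue would inherit white, but blue would not be able to see green" rule above.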
Updated•13 years ago
Whiteboard: [psm-cert-errors] → [psm-cert-errors] [Advo]
Comment 12•12 years ago
(In reply to Kyle Hamilton from comment #11)
You've said it better than I could, so I will not repeat it.
However, I would like to add one thing. First of all, in comment 8 it was written that:
> one problem with the current UI is that it makes it look like HTTPS with a
> self-signed cert (or one signed by an untrusted CA, like Cacert) is *less* secure
> than plain HTTP, when in reality it's just the opposite.
As a user and website owner, I want to stress this point. I want to use encryption on my site. I've looked at companies providing certificates, and if you look really hard you may find one that does it for no money, but they have annoying procedures. There's really only one that I trust, and that is CAcert, which isn't trusted by mainstream browsers including Mozilla's.
So I'm using a certificate signed by CAcert, and now my users are seeing scary messages that my site is not safe.
I can understand that the browser considers it less safe than a site which uses a certificate from a CA in the trusted list. My certificate is as safe for (most) users as a self-signed certificate, and it rightfully gets equal warning messages.
However, a self-signed certificate really should get fewer warnings than no encryption at all. You're telling the user "Hey! You! You're using a connection with a self-signed certificate, do you have any idea how DANGEROUS that is?!!! Turn back while you still can! You don't want to visit this site!" And if I decide to remove encryption entirely and let them connect over a plain text link, there will be no warning at all. Everything seems fine.
In other words, the current system pushes owners of simple websites which don't really need encryption to use plain text everywhere.
If you're using this kind of force (which is impressive, and can accomplish a lot) to push website owners away from one of the options, please push them away from unencrypted sites, not toward them.
(In reply to Eddy Nigg (StartCom) from comment #9)
> (In reply to comment #8)
> > BTW, one problem with the current UI is that it makes it look like HTTPS with a
> > self-signed cert (or one signed by an untrusted CA, like Cacert) is *less*
> > secure than plain HTTP, when in reality it's just the opposite.
>
> Well, actually it's the same as plain text if you haven't vetted the
> certificate independently.
Wait, this comes from someone who works for a CA? My trust in CAs just went through the floor...
The only point of the signature on the certificate is to prove that you have the correct key. If you do have the correct key, your communication is secure. In other words, it protects against MitM attacks. The communication can be intercepted by WAY more people when it is transmitted as plain text than when it is encrypted, even if someone is actually DOING a MitM attack.
Also, a MitM attack is detectable. With all the recent news about the NSA, I'm sure some people will compare their keys when retrieved from several locations, including directly from the host serving them. (If connecting to a site with a laptop while traveling, this check is also automatically done.) If any of those checks fails, it will be big news in the community. Then many people will check their keys. If many of those checks fail, it will be big news everywhere.
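The kind of cross-location key comparison described here can be done with Python's standard library. A minimal sketch, for illustration only; the host name and the reference fingerprint are placeholders you would substitute yourself:

```python
import hashlib
import ssl

def cert_sha256(host, port=443):
    """Fetch the server's leaf certificate and return its SHA-256 fingerprint.

    ssl.get_server_certificate() does not validate the chain by default,
    so this also works against self-signed certificates.
    """
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def fingerprints_match(a, b):
    """Compare two fingerprints, ignoring case and colon separators."""
    normalize = lambda s: s.replace(":", "").lower()
    return normalize(a) == normalize(b)

# Compare what this network shows you against a fingerprint noted down
# elsewhere (e.g. directly on the host's own network). Placeholders:
# fingerprints_match(cert_sha256("example.org"), "ab:cd:...")
```

If the fingerprint seen from a hostile network differs from the one recorded elsewhere, a MitM is sitting on the connection, which is the detection scenario this comment describes.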
And the NSA knows this, too. So they're not going to do large-scale MitM attacks. Therefore, self-signed certificates are fine for the majority of websites (not including "real" organisations like banks).
What the NSA will most likely also do, is fuel discussions like this one with arguments as have been heard here, suggesting that unauthenticated encryption is just as bad as no encryption. Please, if people say that, ignore them and do what is right: help to encrypt the internet.
Stop scaring people away from unauthenticated encrypted websites, please.
Thank you.
Comment 13•12 years ago
(In reply to Bas Wijnen from comment #12)
I would really like to agree with most of what you've said, but there's a pretty big sticking point:
> Also, a MitM attack is detectable. With all the recent news about the NSA,
> I'm sure some people will compare their keys when retrieved from several
> locations, including directly from the host serving them. (If connecting to
> a site with a laptop while traveling, this check is also automatically
> done.) If any of those checks fails, it will be big news in the community.
> Then many people will check their keys. If many of those checks fail, it
> will be big news everywhere.
In the present state of commonly used browsers, MITM attacks are *not* detectable. You have to load the site and then manually bring up its server certificate, understand the jargon in that dialog box, and manually compare the fingerprint to a fingerprint you wrote down on a piece of paper or something like that. You know how to do that, but how often do you actually do it? I know how to do it and I would only ever bother if I had some reason to think something was wrong -- such as if a site that I know is supposed to have a CA-signed certificate suddenly starts displaying the "self-signed certificate! DANGER WILL ROBINSON!" barrier screen. This has actually happened to me. The coffee shop I was in had a virus-infected wifi router, probably stealing everyone's passwords. I tried to explain this to the counter clerk, who - I am not making this up - *clicked through the barrier screen for me* and said "see? Nothing's wrong!"
This doesn't mean you're wrong about self-signed certificates! But right now the self-signed certificate barrier is the only line of defense we have against that kind of MITM attack, which I guarantee you is orders of magnitude more common than attacks by nation-state surveillance agencies. Until we have another defense *implemented* that blocks that attack, we really can't change what we do with self-signed certs. I say *implemented* because there is no shortage of proposals that could make things better. Auto-pinning (for all certs, not just self-signed) would help. Notary servers would help. Certificate Transparency. DANE. Namecoin. We need UI and code and backend ops budget, not ideas.
(If you're thinking, wait, a MITM can perfectly well downgrade to plain http:// and lots of people won't even notice, you are absolutely right! There are off-the-shelf attack programs that do exactly that. Again, there are proposals to fix that, and what is needed is code.)
If you are seriously interested in getting this solved, please have a look at bug 644640. That is the core code change required before extension authors can make any headway.
Comment 14•10 years ago
I agree with #12.
Non-commercial / private website projects need a way to offer zero-cost encryption, which you basically only get via self-signed certs.
Concerning worries about NSA-scale MITM attacks, even commercial certs will arguably provide no more trusted safety for your communication. TÜRKTRUST, by the way, is an example of how arguable the additional trustworthiness of commercial CAs is even in less clandestine scenarios.
But self-signed certs for encryption are better than no encryption at all, providing at least a certain level of privacy and security given that traffic is routed over many, possibly malware-infected or compromised, servers.
Regarding cross site tracing attacks, I figure, those need to be prevented by means of a secure client implementation.
I'd vote for introducing a new padlock color (maybe black or brown) or symbol (grey open padlock) for self-signed-cert-based encryption, once accepted by the user (after a message informing of the theoretical risks has been displayed).
If the cert-fingerprint was displayed both on the website's top page and via mouse-over on the browser's padlock, users could easily validate for themselves. This would probably require something like a new consensus or web standards definition about how websites provide self-signed cert validation, however, or an alternative assistive method.
Comment 15•9 years ago
Currently it's 3 clicks. Research shows this reduces the rate at which users add overrides, which is good for security.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX