Web Browser
> The sending of packets on exit to the currently opened site still persists though.
It might be some protocol exchange (hand-shaking) to terminate an open connection.
Well, if the URL is "httpS" then the communication is connection based. The browser can just drop the connection (without a handshake) and the server keeps the connection open till it times out. This (keeping a dead connection open) can put a small burden on the server, cumulatively, i.e. if every client just drops the line without a "bye". QupZilla seems to be playing nice. Just guessing. Looking at the payload of the outbound and inbound packets could reveal it.
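To get an idea of what such a closing exchange looks like outside the browser, here is a minimal sketch using Python's ssl module (nothing to do with QupZilla's actual code; the host name is only a placeholder). The unwrap() call performs the TLS closure handshake before the TCP connection is torn down, which is roughly the kind of "bye" those exit packets could be:

```python
# Sketch only: open an HTTPS connection, make one request, then close it
# politely (TLS closure alert) instead of just dropping the socket.
import socket
import ssl

HOST = "example.org"  # placeholder host, any plain HTTPS site works

ctx = ssl.create_default_context()

raw = socket.create_connection((HOST, 443), timeout=10)
tls = ctx.wrap_socket(raw, server_hostname=HOST)

tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n")
resp = tls.recv(4096)   # response headers (HEAD has no body)
print(resp[:40])        # e.g. the status line

# The polite way: unwrap() performs the TLS closing handshake (close_notify),
# then we close the underlying TCP socket. Skipping unwrap() and just closing
# the socket is the "drop the line without a bye" case described above.
plain = tls.unwrap()
plain.close()
```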
But anyway, since the packet exchange occurs only with the currently opened site, it is not related to spying (privacy).
> Do you think you could probably point me out to the correct RFC (or whatever web standard document applies) to read more about that process?
I am not well versed in this, but AFAIK http requests are connectionless, i.e. there's a request and a response, whereas https is connection-based.
Because https communication is basically an authenticated and encrypted session, i.e. a VPN tunnel, which requires a sustained connection. Otherwise, each simple request such as GET would have to be preceded by establishing an SSL tunnel and followed by terminating the connection upon response. This is simply too expensive to implement. So an SSL tunnel (connection) must be a permanent one, in which requests and responses are exchanged. Even if there's no exchange of request/response pairs between the client and server, the connection should stay open once established, until either the connection times out or the parties terminate it gracefully (by handshaking). This is as far as I remember. I might be omitting important details.
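For illustration only, a sketch with Python's http.client (not how any particular browser implements it): the first loop performs one TLS handshake and then exchanges two request/response pairs over the same open connection, while the second loop pays for a full handshake per request - the "too expensive" pattern described above. The host name is just a placeholder.

```python
# Sketch: persistent TLS connection (one handshake, many exchanges)
# versus one fresh handshake per request.
import http.client

HOST = "example.org"  # placeholder

# One connection, reused for two request/response pairs (keep-alive).
conn = http.client.HTTPSConnection(HOST, timeout=10)
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                      # drain the body so the connection can be reused
    print("reused connection:", resp.status)
conn.close()

# The expensive alternative: establish a fresh tunnel for every request.
for _ in range(2):
    one_shot = http.client.HTTPSConnection(HOST, timeout=10)
    one_shot.request("GET", "/")
    one_shot.getresponse().read()
    one_shot.close()
```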
As for a link, I don't have one readily available, but a DDG search with combinations of the words "ssl", "connection", "https", "tunnel", "rfc" should lead you to relevant resources, I think.
> HTTPS is not VPN tunnel. What are you talking about? A metaphor?
It's *literally* not VPN but *functionally* equivalent (or similar) AFAIK. I don't know if this is within the definition of metaphor.
BTW, why don't you use plain http URLs to test? The fewer protocol complexities are involved, the fewer parasitic effects there are. This also goes for DNS lookups. It might be worthwhile to use direct IP addresses instead of domain names. Of course it wouldn't work on shared-host sites, but then you don't have to test with shared-host sites either. Just find a convenient site to test, which is accessible through a raw IP address, offers plain http service, and whose test page is script-free. You are not testing the site, anyway, you are testing the browser.
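As a baseline for comparison with the browser's capture, a sketch of what that simplest test case looks like when the same page is fetched outside the browser - plain http, raw IP, no DNS lookup. The address below is a placeholder from the documentation range; substitute whatever script-free test site you settle on:

```python
# Sketch: fetch a page over plain HTTP from a raw IP address, so the
# capture contains neither TLS records nor DNS lookups.
import http.client

TEST_IP = "203.0.113.10"   # placeholder address (TEST-NET-3), not a real server

conn = http.client.HTTPConnection(TEST_IP, 80, timeout=10)
conn.request("GET", "/")   # a shared host would need a real Host: header here
body = conn.getresponse().read()
conn.close()

print(len(body), "bytes fetched; anything beyond this in the browser's capture is the browser's own chatter")
```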
> I actually thought of what you suggest. But:
Let me put it this way: You are testing the browser, and there can be 2 modes of failure.
(1) Malevolence = Deliberate info leaking. In this case, no matter where you access, what the content or protocol is, the browser will do its thing. To isolate malicious behavior and to make it stick out, the least parasitic environment (simplest protocol, no scripts, etc.) is best.
(2) Inferiority = Inadvertent info leaking. This is much more difficult to spot than the former case. Leakage due to an inferior implementation can occur almost anywhere and everywhere. You need to test a zillion combinations and spot the leakage among the chatter. Sorry, but this is beyond my mortal capabilities. Good luck, if you want to test that.
Also, inferiority means a bug, and that is a technical failure (which can occur in any software at any time) rather than a behavioral one. So I assume you are after behavioral failures (deliberate spying), that is, you are after (1).
Therefore I sustain my original suggestion - of the simplest test case possible.
As for doing the tests myself, I'm also aware of the fact that everyone is just clapping from the sidelines for something they would directly benefit from. But for me, while I find your work very commendable and very useful for many, I'm only interested in it as a technical debate, and not concerned enough to protect my privacy. I don't know why - I should have been. For instance I haven't tried the user.js you have shared (yet). So I talk the talk, but don't walk the walk. :) (correct usage of the term, I hope)
> ... everyone is just clapping from the sidelines ...
BTW I must apologize for this sweeping generalization. It was unfair.
As for direct IP addressing, it should be straightforward to filter out DNS queries and responses from the chatter, so access by domain names should be tolerable - as long as you filter the DNS part out of the chatter. But then, since you include DNS chatter in the test case, that means you also want to inspect that. And this adds to the work you're carrying on.
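If the capture is saved as a pcap file, splitting the DNS part out of the chatter is only a few lines, for instance with scapy (a sketch; the file names are made up):

```python
# Sketch: separate DNS queries/responses from the rest of a capture,
# so the browser traffic and the DNS traffic can be inspected independently.
from scapy.all import rdpcap, wrpcap, DNS

packets = rdpcap("browser_session.pcap")            # hypothetical capture file
dns_only = [p for p in packets if p.haslayer(DNS)]
non_dns  = [p for p in packets if not p.haslayer(DNS)]

wrpcap("browser_session_no_dns.pcap", non_dns)      # the browser's own chatter
wrpcap("browser_session_dns.pcap", dns_only)        # the DNS side, if you want to inspect it too
print(len(dns_only), "DNS packets out of", len(packets))
```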
That aside, I can't really see what can go wrong - deliberately - with a simple DNS name resolution. But since root servers are 0wned, you may have a point in wanting to inspect DNS chatter. It may be worthwhile not to assess the browser, but to assess the DNS infrastructure. Then again, testing DNS infrastructure is a different case that should be isolated from browser tests, I think.
> (1) Malevolence = Deliberate info leaking. In this case, no matter where you access, what the content or protocol is, the browser will do its thing.
Giving it a second thought, this can depend on how wisely the spyware is written. Good spyware would be wise enough to stick its nose out *only* in a "noisy crowd". For instance, it wouldn't call home if the browser is not accessing a page with JS which makes outbound connections. The JS (and its outbound connections) has nothing to do with the spyware or its home address. Spyware can behave like this just to confuse the matter, so that you would never know which address is which, and whether the spy's home is contacted by the JS or by the spyware.
There may be other examples of spyware hiding behind complexity. So, it can be rather difficult to catch intelligent spyware. To catch sophisticated spyware, a detailed strategy to outsmart it would have to be devised, which I don't currently have.
To our relief, Mozzarella the cheesy browser is not that wise apparently, as it bluntly goes out to various 3rd-party sites no matter what (I hope they are not lurking here). But who can say all the spyware out there is as dumb?
> For instance, it wouldn't call home
> if the browser is not accessing a page with JS which makes outbound
> connections. The JS (and its outbound connections) has nothing to
> do with the spyware or its home address.
Yes, that would be the smart way to do it. I'm glad you don't work for Mozilla.
> To our relief, Mozzarella the cheesy browser is not that wise
> apparently, as it bluntly goes out to various 3rd-party sites no
> matter what (I hope they are not lurking here). But who can say all
> the spyware out there is as dumb?
Well, if we want to be fully paranoid, there's no reason Mozilla couldn't have Firefox make blatant third-party connections, be somewhat transparent about their existence, provide security rationales for having them and half-assed broken documentation for disabling them, while *also* doing as you describe with additional connections that are completely undocumented and only occur when there is sufficient noise. I suspect you're right, though, and that this is giving Mozilla too much credit.
> Same with privacy. If I say (like it's popular) "I have nothing to hide", I am actually saying "I don't care about you either. Anything you send to me can end up in the wrong hands."
I see your point. And I was a bit exaggerating (or misrepresenting the matter) when I said "not concerned enough to protect my privacy". More precisely, I take some radical "root" precautions and leave it at that, omitting minute details. I'll cover my reasons sometime in the other thread in troll lounge.
My current precautions (which are relatively basic and easy to implement) provide for reasonable privacy against commercial intrusion, while they are nowhere near protecting me from institutional intelligence (local and global governments). I believe it is somewhat futile to try to achieve that anyway, as the root DNS servers are owned, the whole internet backbone is owned, communication channels are owned, certificate providers are owned... we are living in a glass chateau on the internet.
Internet aside, I carry a mobile (dumb 2G) phone, bank cards, and various other cards registered to my name. If need be, my steps can be counted. :) With this grand technological infrastructure (internet and non-internet), real security and privacy can only be achieved through hiding and isolation, neither of which I can afford. And mind the rule of thumb of security: it's a chain. A single broken link can be enough to nullify all the other security measures you took.
While you may feel secure with your browser settings and internet usage patterns, these are only effective against commercial intrusion. As for the government intelligence, all your traffic is flowing through "glass pipes" and I wouldn't rely much on https either.
So I know my limits and don't bother to achieve a security/privacy level beyond commercial intrusion. That's what I meant by "not concerned enough to protect my privacy".
I believe it's not a defeatist approach, it's a sober one.
The latest news is from 2016-11-27 and it is not included in Debian, hinting that maybe it is not yet quite ready for prime time.