Lightweight Browser

35 replies [Last post]
MD. SHAHIDUL ISLAM
Joined: 10/14/2015

Midori is a lightweight and fast browser, but it is not enough for me. It does not work perfectly with Facebook, and it cannot render Bangla fonts properly (I am Bangladeshi).
Pale Moon is also a lightweight browser and would be perfect for me, but its binary package is proprietary.
I need a lightweight browser that is a proper alternative to Abrowser.

aloniv

Joined: 01/11/2011

You can try SeaMonkey/Iceape - it is also a Mozilla-based browser which is compatible with most add-ons. If you want a browser based on the older ESR Firefox (version 52), you can try IceCat/Iceweasel. If you want a browser similar to Abrowser but with support for legacy add-ons, you can try Waterfox. Another option is the Epiphany/Web browser, which, like Midori, is based on WebKit (but currently supports fewer extensions).

Abdullah Ramazanoglu
Joined: 12/15/2016

Qupzilla (Falcon)?

WebKit-based; not many plugins, but the essential ones are there. It is fairly compatible (better than Midori in my experience) and fairly lightweight (memory footprint approximately half that of Firefox and roughly twice that of Midori).

One drawback (from my POV) is that it is being integrated into KDE. The author specifically states that "there will be no hard dependencies on KDE libraries", but there are already efforts towards kwallet integration. In fact, in Debian, kwallet was first added as a hard dependency of qupzilla and only recently reduced to "recommends" (which still translates into a hard dependency with Debian's default apt settings).

Things may get out of hand and Qupzilla may become just another Konqueror (from the perspective of KDE suite dependencies). I hope it won't.

http://blog.qupzilla.com/2017/08/qupzilla-is-moving-under-kde-and.html

There is also NetSurf, but I haven't tried it yet. I gather that it is somewhat on par with Midori.

Midori itself is already a very good compromise (functionality vs. resource consumption) and would probably have been better than Qupzilla if only it had not been orphaned for years. It is such a promising platform that I'm wishfully sure someone will pick it up and continue its development.

aloniv

Joined: 01/11/2011

Qupzilla has severe freedom issues according to Hyperbola - it depends on nonfree qt5-webengine.

Netsurf is very basic (it doesn't even fully support JavaScript).

Abdullah Ramazanoglu
Joined: 12/15/2016

> Qupzilla has severe freedom issues according to Hyperbola - it depends on nonfree qt5-webengine.

Are you sure about that?

Initially I had taken your word for it, but recently I wanted to check it for myself. Both Qupzilla and all its dependencies (including libqt5webengine*) are in the Debian main repository, which complies with the GNU FSDG. So Qupzilla seems to be free as a bird.

onpon4
Joined: 05/30/2012

DFSG is not 100% compatible with GNU FSDG.

That said, the reason for the assertion that WebEngine is non-free is that it's based on Chromium. I have never seen anyone actually substantiate the claim that Chromium is proprietary.

onpon4
Joined: 05/30/2012

The question that comes to mind for me is, "lightweight" in what sense? I don't think anyone who requests a "lightweight" browser really understands what they're asking for. Here are a few possibilities I can think of:

1. Low memory footprint (e.g. because your computer only has 1GB of RAM)
2. Low download size (e.g. because you're running your OS on an SD card)
3. Faster performance
4. Simple interface (as opposed to a highly stylized one)

Midori happens to be #1, #2, and #4, in comparison to Firefox. Pale Moon is just #4 compared to Firefox. Firefox (and Abrowser) is #3 compared to both of these other browsers. Any browser can get #3 just by turning JavaScript off. And of course, the masters of all of these are the command-line browsers, like lynx and elinks, and out of graphical browsers, NetSurf easily wins all of these categories.

Magic Banana

Joined: 07/24/2010

Caches improve performance when you revisit a site.

Mangy Dog

Joined: 03/15/2015

Keeping in mind that security is important too.
https://www.howsmyssl.com/

onpon4
Joined: 05/30/2012

Exactly. I don't think a lot of people understand that increased RAM and hard disk consumption is often done intentionally to improve performance. The only way reducing RAM consumption will ever help performance is if you're using so much RAM that it's going into swap, and very few people have so little RAM that that's going to happen.

onpon4
Joined: 05/30/2012

No, if you're not swapping, there's no performance loss. There is zero benefit to having RAM free that you're not using. If you're only ever using 2 GB of RAM out of 16 GB, those other 14 GB are doing absolutely nothing for you.

Magic Banana

Joined: 07/24/2010

> If you constantly allocate and deallocate huge amounts of memory this is an overhead. So caching in RAM is not a performance benefit per se.

Yes, it is. It is about *not* deallocating recent data that may have to be computed/accessed again, unless there is a shortage of free memory.

> Starting a new program requires free memory. If all (or most) memory is already full, this will cause swapping. You need to have enough free memory.

Onpon4 did not say otherwise. She also rightfully said that "there is zero benefit to having RAM free that you're not using". How often does your system run out of RAM?

onpon4
Joined: 05/30/2012

> Starting a new program requires free memory.

Yes, but if you pay more attention to the context of what I was saying, that would be included under the umbrella of "use". There's a difference between using 2GB right now and using 2GB ever. The point is that if you can spare RAM, ideally, you should be using all of it. In a perfect world, the programs you're running would use every byte of RAM available and then release it to new programs as they launch. We of course don't live in a perfect world, so some inefficiency (i.e. leaving RAM unused) is inevitable. Thankfully, Linux makes use of that RAM for disk caching while it waits to allocate it to a program, so it's not a total loss.

In any case, that's the point: if you can afford RAM use (and yes, you can afford to have a program use hundreds of MB if you have 4, 8, or even 16 GB total), then it is always beneficial.

> If you constantly allocate and deallocate huge amounts of memory this is an overhead.

That's not what I would do. In fact that sounds like what I would expect someone to do to be more "efficient" with their RAM use.

> Consider also memory fragmentation

RAM doesn't "fragment" in any meaningful way. It's random-access. I assume you're referring to disk fragmentation, which occurs on hard drives; it matters there because hard drives are not random-access, and having related files in completely different physical locations or, worse, having one file split into multiple physical locations means it takes longer to read. But this hasn't really been a problem in years.

> When someone says "It is possible to use RAM inefficiently" you present a counter argument with an example of efficient use and by that you are trying to abolish the actual irrevocable fact that inefficient RAM usage is possible.

MB was responding to the last sentence in that post. He was disputing your claim that "caching in RAM is not a performance benefit per se".

> The thread is about lightweight browsers. So far I see zero posts answering the OP's question or being helpful in any way.

This little deviation came about from me questioning what "lightweight" even means. That's actually important. I don't think most people who ask for "lightweight" really understand what they're looking for. In particular, if you're really after speed, then "lightweight" is the opposite of what you really want. Re-branded versions of Firefox (like Abrowser) are probably the fastest libre browsers out there. On the other hand, if you need very low RAM use because you're using an OpenPandora, your best choices are things like NetSurf, Links, and Arora. If you just like a traditional interface, I'd suggest Midori or SeaMonkey, and if you're almost out of hard drive space, maybe just stick to a text-based browser.

So you see, it's a relevant question to ask before giving a browser recommendation.

Magic Banana

Joined: 07/24/2010

> I don't know what your programming experience is but your expectations of efficiency are contrary to the basic programming principle: that a program should use only as much memory as it actually needs for completing the task and that memory usage should be optimized.

You can often get a faster program by using more memory. See https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff for examples. As long as the system does not swap, it is the way to go.

> Occupying as much RAM as possible just because there is free RAM is meaningless.

Storing in memory data that will never be needed again is, of course, stupid. We are not talking about that.

> RAM access is sequential.

You know that RAM means "Random-access memory", don't you? The access is not sequential. Manipulating data that is sequentially stored in RAM is faster because of CPU cache and sequential prefetching: https://en.wikipedia.org/wiki/Cache_prefetching

The idea of CPU cache is, well, that of caching: keeping a copy of data/software closer to the CPU because it may have to be accessed soon. The same idea, at another level, that a program can implement to be faster: keeping in main memory data that may have to be accessed soon because accessing it is faster than recomputing it.
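To make this concrete, here is a minimal Python sketch of application-level caching (memoization); the function and its one-second delay are made up purely for illustration:

import functools
import time

@functools.lru_cache(maxsize=None)    # keep every result in RAM
def costly(n):
    time.sleep(1)                      # stands in for an expensive computation
    return n * n

start = time.time()
for _ in range(5):
    costly(7)                          # computed once, then served from the cache
print(round(time.time() - start, 1))   # about 1 second instead of about 5

The memory spent on the cache is what buys the speed: the four repeated calls cost almost nothing.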

Are you also arguing that having free space in the CPU cache has benefits?

> On a system with more memory (e.g. 16GB) you can keep more data cached in RAM but that doesn't mean that programs should simply occupy lots or all because there is plenty of it and/or because RAM is faster than HDD.

It kind of means that. You want fast programs, don't you? If that can be achieved by taking more memory, the program will indeed be faster, unless the memory requirements exceed the available RAM and the system swaps. So, I ask you again: "How often does your system run out of RAM?". If the answer is "rarely", then choosing programs with higher memory requirements may be a good idea: they can be faster than lightweight alternatives.

> It is more time consuming to manage scattered memory blocks and thousand of pointers than reading a whole block at once.

It is not because of fragmentation (which only becomes a problem when the system swaps: free RAM cannot be allocated because too little is available in contiguous blocks). It is because of sequential prefetching into the CPU cache. Not an argument against caching. Quite the opposite.

Abdullah Ramazanoglu
Joined: 12/15/2016

> Manipulating data that is sequentially stored in RAM is faster because of CPU cache and sequential prefetching

And also because DRAM is accessed page-wise. Changing page is much more expensive than accessing data on the same (already selected) page.

> The same idea, at another level, that a program can implement to be faster: keeping in main memory data that may have to be accessed soon because accessing it is faster than recomputing it.

This, along with onpon4's similar views, is overlooking a basic fact: that the kernel is already using free memory for data caching. A user space program attempting to do its own data caching is a grave error (a bug, in essence) because it tries to take the kernel's job onto itself, rather selfishly. It's selfish, because it *grabs* memory chunks from fair sharing (as a kernel service) and simply sits on it, while the other user space programs can conveniently starve for RAM and go to hell. :)

> So, I ask you again: "How often does your system run out of RAM?". If the answer is "rarely", then choosing programs with higher memory requirements may be a good idea: they can be faster than lightweight alternatives.

And how would you really check that from within a user space program and take necessary steps? I.e. would you periodically check swap usage and relinquish some RAM back to the system in order to stop swapping, all within your *user space* program? That is the kernel's job!

The obvious thing to do is to allocate no more RAM than you really need, and leave the rest (deciding what to do with free RAM) to the kernel.

Magic Banana

Joined: 07/24/2010

> And also because DRAM is accessed page-wise. Changing page is much more expensive than accessing data on the same (already selected) page.

Yes, there is that too. And accessing recent pages is fast thanks to yet another cache, the translation lookaside buffer: https://en.wikipedia.org/wiki/Translation_lookaside_buffer

> This, along with onpon4's similar views, is overlooking a basic fact: that the kernel is already using free memory for data caching. A user space program attempting to do its own data caching is a grave error (a bug, in essence) because it tries to take the kernel's job onto itself, rather selfishly.

The kernel cannot know a costly function will be frequently called with the same arguments and will always return the same value given the same arguments (i.e., does not depend on anything but its arguments). A cache at the application-level is not reimplementing the caches at system-level.

> And how would you really check that from within a user space program and take necessary steps?

I am not suggesting that the program should do that. I am only saying that there is no benefit in choosing "lightweight applications" (or configuring applications to reduce their memory footprint, e.g., disabling the cache) and always having much free RAM. If you always have much free RAM, you had better choose applications that require more memory to be faster.

> The obvious thing to do is to allocate no more RAM than you really need, and leave the rest (deciding what to do with free RAM) to the kernel.

An implementation strategy that minimizes the space requirements ("no more RAM than you really need") will usually be slower than alternatives that require more space. As with the one-million-line examples I gave to heyjoe. Or with the examples on https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff

Abdullah Ramazanoglu
Joined: 12/15/2016

> The kernel cannot know a costly function will be frequently called with the same arguments and will always return the same value given the same arguments (i.e., does not depend on anything but its arguments). A cache at the application-level is not reimplementing the caches at system-level.

You see, we are back to the subtleties between grand design and tactical design choices. It really depends on for which purpose you allocate RAM. If it is for *direct* data caching, then it is both "selfish" (as I've explained in previous post) and inefficient too, as the kernel makes more efficient use of free RAM. But, on the other hand, if it is related to *indirect* caching, that employs a rather complicated algorithm beyond that of kernel's idea of data caching, then it may be worthwhile. It all depends on the grand design, on the intricacies of the use of that chunk of allocated memory.

Then again, all this started with the idea of "allocating as much as there is free RAM" which has nothing to do with design excellence. It is simply using brute force at the cost of the rest of the entire system.

> An implementation strategy that minimizes the space requirements ("no more RAM than you really need") will usually be slower than alternatives that require more space. As with the one-million-line examples I gave to heyjoe. Or with the examples on https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff

I have seen, and already know the trade-offs mentioned in that wiki page. As for the explanation of "no more RAM than you really need" please see above.

Magic Banana

Joined: 07/24/2010

> You see, we are back to the subtleties between grand design and tactical design choices. It really depends on for which purpose you allocate RAM.

I agree. There is no reason to re-implement what the kernel does (probably better).

> If it is for *direct* data caching, then it is both "selfish" (as I've explained in previous post) and inefficient too, as the kernel makes more efficient use of free RAM. But, on the other hand, if it is related to *indirect* caching, that employs a rather complicated algorithm beyond that of kernel's idea of data caching, then it may be worthwhile.

The algorithm does not have to be complicated. It is just that the kernel cannot take initiatives that require application-level knowledge (such as the fact that a function will often be called with the same arguments). The programmer has to do the work in that case.

Abdullah Ramazanoglu
Joined: 12/15/2016

> It is just that the kernel cannot take initiatives that require application-level knowledge (such as the fact that a function will often be called with the same arguments). The programmer has to do the work in that case.

No, it is perfectly within the kernel's initiative. Kernel does not have to have application-level knowledge for this. Please note that the very purpose of caching is prioritizing resource allocation between "consumers" competing for the same resource. Any consumer (process) doing this by itself undermines the whole logic behind system-level data caching. A process has to *compete* for the resources, not *arbitrate* them. Arbitration is the kernel's job.

Following your example, if some data will be repeatedly accessed by a program, then that chunk of data will get ahead of the competition and will always be in-cache. A user space program doesn't have to do anything special for this to happen. The kernel will ensure that the most frequently referenced data is cached. OTOH, if another competing program happens to access its data more frequently than yours, then it will win the competition. You *forcing* your data to be cached breaks the fair-share algorithm of the kernel. So, a program force-caching its own data for *tactical* purposes is both inefficient and selfish.

The only exception to this is when you have a higher (strategic) reason for application level caching.

Magic Banana

Joined: 07/24/2010

Here is a Shell script, "fact.sh", that reads integers on the standard input and factorizes them:
#!/bin/sh
while read nb
do
    factor $nb
done

Calling it with twice the same integer and measuring the overall run time:
$ printf '5316911983139663487003542222693990401\n5316911983139663487003542222693990401\n' | /usr/bin/time -f %U ./fact.sh
5316911983139663487003542222693990401: 2305843009213693951 2305843009213693951
5316911983139663487003542222693990401: 2305843009213693951 2305843009213693951
185.00

The result of the first call of 'factor' was not cached. As a consequence, the exact same computation is done twice.

The kernel cannot know that, given a same number, 'factor' will return the same factorization. It cannot know that, for some reason, 'factor' will likely be called on an integer that was recently factorized. The programmer knows. She implements a cache of the last 1000 calls to 'factor' (the worst cache ever, I know):
#!/bin/sh
TMP=`mktemp -t fact.sh.XXXXXX`
trap "rm $TMP* 2>/dev/null" 0
while read nb
do
    if ! grep -m 1 ^$nb: $TMP    # cache hit: print the stored factorization
    then
        factor $nb               # cache miss: compute it
    fi | tee $TMP.1              # output the line and make it the newest cache entry
    head -999 $TMP >> $TMP.1     # keep at most 999 of the previous entries
    mv $TMP.1 $TMP
done

And our execution on twice the same integer is about twice as fast:
$ printf '5316911983139663487003542222693990401\n5316911983139663487003542222693990401\n' | /usr/bin/time -f %U ./fact.sh
5316911983139663487003542222693990401: 2305843009213693951 2305843009213693951
5316911983139663487003542222693990401: 2305843009213693951 2305843009213693951
92.71

It does not look like there is what you call "a higher (strategic) reason" here. Just a costly function that is frequently called with the same arguments and whose result only depends on these arguments. A quite common situation.

Abdullah Ramazanoglu
Joined: 12/15/2016

> Here is a Shell script, "fact.sh", that reads integers on the standard input and factorizes them:

Thank you for elaborating on this. Actually your example falls perfectly within "caching for a strategic reason". There are two points in your example that deserve further elaboration.

1) It caches some computational output (for reuse), as opposed to caching addressed data. (Actually there are cases where addressed data is eligible for application caching too; it should be decided on a case-by-case basis.) This is no different from caching a rendered SVG image, which is along the lines of the time-space compromises you have been explaining all along.

2) The data that is cached (the space) is small compared to the computational overhead of producing it (the time).

This is a perfectly legitimate case of caching by application, and there are many other possible examples too.

I have never said *any and all* caching must be left to the kernel. I have just said;

> It really depends on for which purpose you allocate RAM. If it is for *direct* data caching, then it is both "selfish" (as I've explained in previous post) and inefficient too, as the kernel makes more efficient use of free RAM. But, on the other hand, if it is related to *indirect* caching, that employs a rather complicated algorithm beyond that of kernel's idea of data caching, then it may be worthwhile.

And your example falls within the definition of *indirect* caching here. However, I have also said that;

> Kernel does not have to have application-level knowledge for this.

That can be misleading. In its own context, I thought it conveyed what I actually meant, but it is nevertheless literally wrong. Yes, the kernel *does* have to have application-level knowledge for correct caching of *all* kinds of data access. Where it cannot know, and where it is *worthwhile*, the application should do its own caching. My statement above could have been better worded.

Your example is right. But then again, the preceding main context, which was "seizing whatever memory is available and filling it with addressable data to prevent disk IO", was a completely different phenomenon. My previous posts were against that stance, with which I think you would concur.

Magic Banana

Joined: 07/24/2010

Thank you for the discussion. :-)

onpon4
Joined: 05/30/2012

> your expectations of efficiency are contrary to the basic programming principle: that a program should use only as much memory as it actually needs for completing the task and that memory usage should be optimized.

That is only a "basic programming principle" if RAM is scarce. RAM is not scarce in modern computers. Since we're talking about Web browsers, let's look at those as an example: I looked up benchmarks for Web browsers, and Google Chrome on GNU/Linux seems to use the most RAM out of all the major browsers at around 1.5GB. That amount is no problem if you have 4, 8, 16 GB of RAM. And modern computers do have that much RAM, or even more than that. It's not 1999 anymore.

> efficiency in programming is the art of optimizing resource usage, not of wasting resources.

Using RAM that isn't being used for anything else is not "wasting resources". What is wasting resources is spending CPU time (which uses an actual resource, electricity) redundantly to save RAM that you don't need to save.

> RAM's speed is not infinite and RAM access is sequential.

RAM is random-access, not "sequential". It's kind of in its name. As for speed, yeah, of course it takes time, but not that much. Recalculating redundantly almost always takes longer.

Here, I'll prove it:

https://pastebin.com/3tZ59K6m
https://pastebin.com/qZsu0651

The first one uses variables. The second one recalculates everything only based on the original three variables, i.e. avoids unnecessary RAM use. I get about 13.5 seconds with the one that uses RAM freely, and about 19 seconds (much slower) with the one that recalculates everything redundantly.
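The pastebins are not reproduced here, but a rough sketch of the same comparison (with arbitrary arithmetic standing in for the real work, not the actual pastebin code) looks like this:

import timeit

a, b, c = 3.0, 5.0, 7.0

def with_variables():
    s = (a + b + c) ** 2               # compute once, keep the result in RAM
    return [s * i for i in range(1000)]

def recompute_each_time():
    # avoid the extra variable by recomputing the expression on every iteration
    return [((a + b + c) ** 2) * i for i in range(1000)]

print(timeit.timeit(with_variables, number=10000))
print(timeit.timeit(recompute_each_time, number=10000))   # noticeably slower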

> The Linux kernel can be tuned to work in direction of keeping more memory free (swapping more aggressively) or to keep cache in RAM for longer.

Controls for swapping don't "keep more memory free". Swapping only occurs if your RAM is past full, therefore requiring the use of the disk. Which is always much slower than using RAM, hence why if you're swapping, you need to cut down your RAM use.

> but that doesn't mean that programs should simply occupy lots or all because there is plenty of it and/or because RAM is faster than HDD.

They should use all the RAM they have a use for. I never said that programs should throw meaningless data into RAM just for laughs.

> Being random access has nothing to do with fragmentation.

But it does have to do with the consequences of fragmentation. Fragmented RAM is not going to make a meaningful difference to access speed in real terms. A fragmented disk is going to cause problems because you can only access one physical location at a time.

> It is more time consuming to manage scattered memory blocks and thousand of pointers than reading a whole block at once.

I think "thousands" is a bit of a stretch, to say the least. Most of the time you're allocating RAM, it's such a tiny, tiny fraction of how much RAM is available.

Let's say you malloc for an array of 10,000 64-bit integers. That's 640,000 bits = 80,000 bytes = 80 KB. That's a tiny, tiny fraction of the multiple gigabytes of RAM typically available, and most of the time you're not using arrays that huge.

But how about you prove that running a program on a system using most of its RAM (but without swapping) is slower than on a system using only half its RAM? It would be a lot more convincing than a bunch of ifs, buts, and maybes.

> So back to what lightweight means: Usually that implies low resource usage, not exhausting every single bit of the system (which creates a heavy weight for the system).

That's not a clarification. It's too vague to have any meaning.

Magic Banana

Joined: 07/24/2010

> Resources are always scarce (limited) and should be used responsibly.

They are always limited. They are not always scarce. In the case of memory, as long as you do not reach the (limited) amount of RAM you have, there is no penalty. And by using more memory, a program can be made faster.

I pointed you to https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff for real-world examples. But I can explain it on a basic example too.

Let us say you have a file with one million lines and a program repetitively needs to read specific lines, identified by their line numbers. The program can access the disk every time it needs a line. That strategy uses as little memory as possible. It is constant-space, i.e., the memory requirements do not depend on the size of the file. But the program is slow. Storing the whole file in RAM makes the program several orders of magnitude faster... unless there is not enough free space in RAM to store the file. Let us take that second case and imagine that it often happens that a same line must be reread, whereas most of the lines are never read. The program can implement a cache. It will keep in RAM the last lines that were accessed so that rereading a line that was recently read is fast (no disk access). The cache uses some memory to speed up the program. As long as the size of the cache does not exceed the amount of available RAM, the larger the cache, the faster the program.
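As a sketch (in Python, with a hypothetical file name and a deliberately simple eviction rule), such a cache can be little more than a bounded dictionary from line numbers to lines:

def make_line_reader(path, cache_size=10000):
    cache = {}                                        # line number -> line
    def get_line(n):
        if n in cache:
            return cache[n]                           # recently read line: no disk access
        with open(path) as f:                         # cache miss: read from disk
            for i, line in enumerate(f, start=1):
                if i == n:
                    if len(cache) >= cache_size:
                        cache.pop(next(iter(cache)))  # drop the oldest cached line
                    cache[n] = line
                    return line
        raise IndexError(n)
    return get_line

read_line = make_line_reader("one-million-lines.txt")  # hypothetical file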

Going on with our one-million-line file to give another example of a trade-off between time and space: let us imagine 95% of the lines are actually the same, a "default line". If there is enough free memory, the fastest implementation strategy remains to have an array of size one million so that the program can access any line in constant time. If there is not enough free memory, the program can store only the pairs (line number, line) where "line" is not the default one, i.e., 5% of the lines. After ordering those pairs by line number, a binary search returns any line in a time that grows logarithmically with the number of non-default lines (if the line number is not stored, the default line, stored once, is returned). That strategy is slower (logarithmic-time vs. constant-time) than the one using an array of size one million, if there is enough free space to store such an array.
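Here too, a minimal Python sketch of that second, space-saving strategy (the line numbers and contents are invented):

import bisect

DEFAULT_LINE = "the default line"
exceptions = [(17, "a special line"), (42, "another one"), (99999, "a rare one")]
numbers = [n for n, _ in exceptions]          # sorted line numbers of the 5%

def get_line(n):
    i = bisect.bisect_left(numbers, n)
    if i < len(numbers) and numbers[i] == n:
        return exceptions[i][1]               # logarithmic-time lookup
    return DEFAULT_LINE                       # every other line is the default one

print(get_line(42))   # "another one"
print(get_line(43))   # the default line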

In the end, the fastest implementation is the one that uses the most space while remaining below the amount of available RAM. It is that strategy that you want. Not the one that uses as little memory as possible.

> You need free RAM for handling new processes and peak loads.

That is correct. And it is an important point for server systems. On desktop systems, processes do not pop up from nowhere and you usually do not want to do many things at the same time.

> Python is an interpreted language and you don't know how the interpreter handles the data internally. A valid test would be near the hardware level, perhaps in assembler.

You certainly run far more Python than assembler. So, for a real-life comparison, Python makes more sense than assembler. The same holds for programs that take into consideration the specificities of your hardware: you run far more generic code (unless we are talking about supercomputing).

> I was talking about the algorithmic memory fragmentation which results in extra CPU cycles.

No, it does not, because it is *random-access* memory. As I was writing in my other post, RAM fragmentation is only a problem when you run short of RAM: there are free blocks but, because they are not contiguous, the kernel cannot allocate them to store a large "object" (for example a large array). It therefore has to swap.

> Running a browser like Firefox on 512MB resulted in swapping. If you assume that software should be bloated and incompatible with older hardware, you will never create a lightweight program.

There can be space-efficient (and probably time-inefficient, given the trade-offs as in my examples above) programs for systems with little RAM, where the faster (but more space-consuming) program would lead to swapping. For systems with enough RAM, the space-consuming programs are faster, and fast programs are what users want. It makes sense that programmers consider what most users have, several GB of RAM nowadays, when designing their programs to be as fast as possible.

Your tests show that 'dd' is faster with a cache. It is what onpon4 and I keep on telling you: by storing more data you can get a faster program.

Magic Banana

Joined: 07/24/2010

> Scarce means restricted in quantity.

No, it does not. It means "insufficient to satisfy the need or demand": http://www.dictionary.com/browse/scarce

When your system does not swap, the amount of RAM is sufficient to satisfy the need or demand. There is no scarcity of RAM.

One more time (as with the word "freedom"), you are rewriting the dictionary to not admit you are wrong. You are ready to play dumb too:

> Reading/writing 1GB of RAM is slower than reading/writing 1KB of RAM.

Thank you Captain Obvious.

You are also grossly distorting what we write (so that your wrong statements about the sequentiality of RAM or its fragmentation "resulting in extra CPU cycles" or ... do not look too stupid in comparison?):

> Which implies that one should fill up the whole available RAM just to print(7) and that won't affect performance + will add a benefit, which is nonsense.

And then you accuse us of derailing the discussion:

> The space-time trade-off has absolutely nothing to do with where all this started. (...) Then the whole discussion went into some unsolicited mini lecturing

By the way, sorry for using arguments, giving examples, etc.

It is easy to verify who derailed the "whole discussion" because he does not want to admit he is wrong: just go up the hierarchy of posts. It starts with you writing:

> It is possible to optimize performance through about:config settings (turn off disk cache, tune mem cache size and others).
https://trisquel.info/forum/lightweight-browser#comment-127383

Me replying:

> Caches improve performance when you revisit a site.
https://trisquel.info/forum/lightweight-browser#comment-127396

And onpon4 adding the amount of RAM, which is not scarce nowadays, as the only limitation to my affirmation, which can be generalized to many other programming techniques that reduce time requirements by using more space:

> Exactly. I don't think a lot of people understand that increased RAM and hard disk consumption is often done intentionally to improve performance. The only way reducing RAM consumption will ever help performance is if you're using so much RAM that it's going into swap, and very few people have so little RAM that that's going to happen.
https://trisquel.info/forum/lightweight-browser#comment-127400

Then you start saying we are wrong. Onpon4 and I are still talking about why programs eating most of your RAM (but not more) are usually faster than the so-called "lightweight" alternatives and how, in particular, caching improves performance. In contrast, and although you stayed on-topic at the beginning (e.g., claiming that "caching in RAM is not a performance benefit per se"), you now pretend that "the space-time trade-off has absolutely nothing to do with where all this started" and that "Reading/writing 1GB of RAM is slower than reading/writing 1KB of RAM" is a relevant argument to close the "whole discussion".

Also, earlier, you were trying to question onpon4's skills, starting a sentence with "I don't know what your programming experience is but". Kind of funny from somebody who believes the Web could be broadcast to every user. FYI, both onpon4 and I are programmers.

quantumgravity
Joined: 04/22/2013

Thank you both, MB and Onpon for the interesting elaboration on this whole "lightweight program" topic. Though I already knew the underlying principles, I never actively thought about what "lightweight" actually means.

But I guess the discussion is just wasting your time from here on... there are people who will never stop replying even if they were obviously proven wrong. Instead, they will write a ginormous amount of pseudo-smart nonsense...

Magic Banana

Joined: 07/24/2010

Etymologies are not definitions. The source you cite tells it on its front page:

> Etymologies are not definitions; they're explanations of what our words meant
https://www.etymonline.com

In the case of "scarce", the page you show says "c. 1300". So, that is what "scarce" meant circa 1300. Today, it has a different meaning.

Abdullah Ramazanoglu
Joined: 12/15/2016

>>>> Resources are always scarce (limited) and should be used responsibly.
>>> They are always limited. They are not always scarce.
>> Scarce means restricted in quantity.
> No, it does not. It means "insufficient to satisfy the need or demand": http://www.dictionary.com/browse/scarce

Now we are once again straying into literal vs. technical term comparisons. Technical terms are *loanwords* used to define some technical phenomenon, like "swapping", "assembling", "compiling", "linking", "debugging", "stepping"... They cannot be compared literally to dictionaries. Attempting to refute a technical use of a term by resorting to its literal meaning is, beyond pedantry, incorrect - if not a fallacy.

Let's all please look at what a term means in its *technical* sense. Not in *literal* sense. Otherwise a simple topic (*any* topic) can spiral into oblivion...

onpon4
Joined: 05/30/2012

heyjoe's insistence that "scarce" just means "finite" is nothing more than a linguistic distraction to avoid admitting that no one ever said RAM was infinite. MB has already clarified what he means by "scarce". I use the same exact definition. If you or heyjoe want to use "scarce" to mean "finite", fine, but you can't then interpret what we are saying using a definition we are not using.

onpon4
Joined: 05/30/2012

> Python is an interpreted language and you don't know how the interpreter handles the data internally.

https://github.com/python/cpython

Abdullah Ramazanoglu
Joined: 12/15/2016

> The question that comes to mind for
> me is, "lightweight" in what sense?

In essence #2 and #4 are redundant - they are by-products of #1 and #3 to a large extent.

That leaves us with #1 and #3.

It is true that in programming it is often necessary to trade one for the other: i.e. given the same functionality, conserving memory often entails higher CPU usage and vice versa. But this is usually a small-time trade-off compared to master design differences and functionality span.

So, if we take a step back and look at the issue in its entirety, a program's CPU usage and memory footprint are affected, to a large extent, by (a) design and (b) functionality. IOW, given the same functionality, a good design will have both a smaller footprint and lower CPU consumption. Or, given the same design, both memory and CPU usage will depend on functionality.

So, in practice, there is really only one definition of "lightweight" which entails *both* CPU usage and memory footprint. They usually both go up and down (also directly affecting #2 and #4) depending on design perfection and functionality span.

Choice based on lightness vs functionality is so basic that I don't consider it a solution - it is just a basic trade off. A good selection entails (i) assessing design perfection of the software and (ii) balancing personal needs against the functionality offered by the software.

So I think selecting a lightweight browser (or any software for that matter) means finding a browser that *both* DOESN'T offer more functionality than actually needed, *and* has a good design.

Midori, for me, strikes almost a perfect compromise. Unfortunately it's orphaned for the time being.

Abdullah Ramazanoglu
Joined: 12/15/2016

Totally agreed. And I've touched on the issue in my post as well. But again, this is a small-time trade-off compared to design excellence and functionality.

Magic Banana

Joined: 07/24/2010

> In essence #2 and #4 are redundant - they are by-products of #1 and #3 to a large extent.

No, they are not. #1 is about RAM consumption, #2 about disk consumption, #3 about (CPU) time consumption, and #4 about human-computer interaction.

> So, in practice, there is really only one definition of "lightweight" which entails *both* CPU usage and memory footprint. They usually both go up and down (also directly affecting #2 and #4) depending on design perfection and functionality span.

It is not true. There is often a choice to be made between storing data and repeatedly recomputing it, i.e., a trade-off between (CPU) time and (memory) space.

Abdullah Ramazanoglu
Joined: 12/15/2016

> No, there are not.

Did you read the whole meat of what I posted? If you had, I wouldn't now have to post this rather pedantic clarification.

#1 Memory consumption
#2 Download size
#3 CPU consumption
#4 UI complexity

I didn't say #2 and #4 are the *same* variables. I said they are *redundant* because they are, to a large extent (usually), by-products of other characteristics. As in, a browser with large memory footprint and heavy CPU usage will usually also have large package download size and more complex user interface. There is a strong correlation between them. So, given #1 and #3 attributes of a program, it is usually straightforward to deduce #2 and #4 attributes from them, rendering #2 and #4 as *dependent*, thus redundant variables. While you might be able to dig up an exception, it would still be an exception to the rule, which I was talking about in the first place.

> It is not true.

It is true, even though I agree with your explanation of why "it is not true". From this apparent contradiction, I have to assume that you have not understood what I originally wrote. I have already mentioned and acknowledged this trade-off, both in my original post and in the reply to heyjoe, and added that it is a rather *small-time compromise* compared to the effects of design and functionality.

Let me rephrase it: in any given design/functionality plane, it is possible to tilt the resource consumption, to a certain extent, towards RAM or CPU, at the cost of one another. But the plane itself moves drastically upwards or downwards depending on the design excellence and functionality span of the program. For instance, one may reduce memory consumption by 30% at the cost of 30% higher CPU usage (just for the sake of example, no pedantry please), but a bad design can boost both CPU and RAM usage by 200%. Likewise, a full-featured program can use 400% more resources than a barebones one. This is what I meant by "small time" when referring to the RAM vs CPU compromise. What good is tilting the RAM/CPU balance +/-30% when you have bloatware put together by a team of wannabes? So, the main variables affecting RAM and CPU usage are (a) design excellence and (b) software complexity. The RAM/CPU compromise follows far behind.

Magic Banana

Joined: 07/24/2010

> As in, a browser with large memory footprint and heavy CPU usage will usually also have large package download size and more complex user interface. There is a strong correlation between them.

That may be, in practice, the "rule" for Web browsers (I am not sure). That is not true in general.

> While you might be able to dig up an exception, it would still be an exception to the rule, which I was talking about in the first place.

The program I work on (pattern mining, nothing to do with Web browsers) is a 650 kB binary which can easily use GBs of RAM (it depends on the data at input) and, with such memory consumption, it can take 100% CPU for seconds or for hours (it depends on the parameters it is given). One of the parameters actually controls a trade-off between space and time (a threshold to decide whether the data should be stored in a dense way or in a sparse way).

> For instance, one may reduce memory consumption by 30% at the cost of 30% higher CPU usage (just for the sake of example, no pedantry please), but a bad design can boost both CPU and RAM usage by 200%.

I am not sure what you call design. Design includes choosing a solution with a good trade-off between CPU usage and memory usage.

The gain/loss is usually not fixed. It depends on the size of the data at input. The choice is often between two algorithms with different time and space complexities (in big O notation), i.e., the percentage is asymptotically infinite. You may say theory does not matter... but it does. There are popular computing techniques (e.g., dynamic programming) that precisely aim to get a smaller time complexity against a higher space complexity.
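A toy sketch of that idea with Fibonacci numbers (an invented illustration, not taken from anything posted above): the naive recursion recomputes the same subproblems exponentially many times, while spending linear space on a table makes the computation linear in time.

def fib_naive(n):                      # exponential time, constant space
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):                         # linear time, linear space
    table = [0, 1]                     # subproblem results are stored, not recomputed
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]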

https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff gives many other examples. Including one that deals with Web browsers, about rendering SVG every time the page changes or creating a raster version of the SVG. It is far from 30% here: SVG is orders of magnitude smaller in size but it takes orders of magnitude more time to render it.

> Likewise, a full-featured program can use 400% more resources than a barebones one.

A feature that you do not use should not take significantly more resources.

Abdullah Ramazanoglu
Joined: 12/15/2016

> The program I work on (pattern mining, nothing to do with Web browsers) is a 650 kB binary which can easily use GB of RAM

Dedicated software usually has its own very peculiar resource needs. Once I was working on an R program of 100K or so in size that consumed moderate RAM while maxing out all the CPUs in parallel for days for a single optimization branch (of many). These are special cases. In this context, however, I was talking about GUI productivity software in general and web browsers in particular.

> I am not sure what you call design. Design includes choosing a solution with a good trade-off between CPU usage and memory usage.

Those kinds of trade-offs are *tactical* design decisions. By strategic design I mean higher levels. For instance, an analytical versus a quantitative/empirical approach to a problem (no, I'm not talking about table look-ups). Or, inversely, imagine using the Monte Carlo method on a 2nd degree curve. If you use MC for a linear graphic, then you've lost the game at the beginning. How you apply those low-level trade-offs would not matter.

Or, "grand design" can alternatively be defined as the design difference by a genious versus by a mediocre programmer. Supervise them both, ensure that the same tactical coding techniques are employed in both projects, just leave the grand design to them. And see what comes out of the two.

> A feature that you do not use should not take significantly more resources.

I had tested and noted the differences, but I have no time now to re-test the startup times and RAM usage of a freshly started FF-ESR, Qupzilla, Midori, and Dillo. Please try them on the same blank, static, and scripted pages and see for yourself.