How to pull content from a website?

3 replies [Last post]
rova
Joined: 07/04/2013

Example: http://educacionquimica.info/ has lots of free articles as PDFs. How can I pull them to my hard drive the easy way? Is there something like apt-get, or a program I don't remember the name of that does it? Thanks!

Michał Masłowski

Joined: 05/15/2010
lembas
Joined: 05/13/2010

An answer can be found here: http://listas.trisquel.info/pipermail/trisquel-users/2014-February/032696.html

> wget supports recursive downloads, which could be used for this; see the -r, -l, -A and -R options. When I need something similar, I get URLs from a page (using sed or Python with lxml) and use wget on them.
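The "get URLs from a page" step in the quote above can be sketched with the Python standard library alone (using html.parser instead of sed or lxml). This is just an illustration, not code from the thread; the function name and the sample HTML are made up, and you would still feed the resulting URLs to wget yourself (or skip this entirely with something like `wget -r -l 1 -A pdf <url>`).

```python
# Sketch: collect .pdf links from a page's HTML using only the
# standard library (html.parser), as an alternative to sed or lxml.
from html.parser import HTMLParser
from urllib.parse import urljoin


class PDFLinkParser(HTMLParser):
    """Collects href values that point at .pdf files."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.pdf_urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value and value.lower().endswith(".pdf"):
                # Resolve relative links against the page's URL.
                self.pdf_urls.append(urljoin(self.base_url, value))


def extract_pdf_links(html, base_url):
    parser = PDFLinkParser(base_url)
    parser.feed(html)
    return parser.pdf_urls


if __name__ == "__main__":
    # Illustrative sample HTML, not fetched from the real site.
    sample = '<a href="docs/a.pdf">A</a> <a href="/b.pdf">B</a> <a href="c.html">C</a>'
    for url in extract_pdf_links(sample, "http://educacionquimica.info/"):
        print(url)
```

To actually fetch the page you could combine this with urllib.request, then pass each extracted URL to wget, as the quote describes.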

aloniv

Joined: 01/11/2011

Try httrack/webhttrack (it's in the repos).