One-click download of websites
September 8, 2005 5:22 PM

Help me get through long classes without Internet access - I want to click one button while I'm online and download all my newspapers, blogs, etc. into a flat file that I can view offline. Bloglines often doesn't give me the full article.
posted by Saucy Intruder to Computers & Internet (11 answers total) 1 user marked this as a favorite
 
I've been playing with a new Firefox extension called Scrapbook - it allows you to save and organize web pages very quickly. But I think you'd have to open each page to do so. I plan to use it to save things to read for long flights.
posted by kdern at 5:46 PM on September 8, 2005


Response by poster: As someone who has spent nineteen years in school, I would say that good teachers are unthreatened by technology and trust their students to do what helps them learn best and isn't disruptive to other students.
posted by Saucy Intruder at 6:28 PM on September 8, 2005


Response by poster: I am now playing with the Scrapbook extension, but I wonder if there is a way to automatically mine a website - for example, to grab the Askme front page and then all the threads within it.
posted by Saucy Intruder at 6:43 PM on September 8, 2005


Mod note: a few comments removed, if you'd like to discuss the ethics of reading blogs during class, please take it to metatalk
posted by jessamyn (staff) at 7:00 PM on September 8, 2005


Best answer: If you're using Windows, I'm pretty sure this is what HTTrack does.
posted by nicwolff at 7:12 PM on September 8, 2005


Oh look, HTTrack is actually for Unix too, including OS X.
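The command-line version works something like this (placeholder URL, and the exact option names are worth double-checking against the man page):

    httrack "http://example.com/" -O ~/offline/example -r2 "+*.example.com/*" -v

-O is the output folder, -r2 caps the mirror depth at two levels, and the +filter keeps it from wandering off the site. Swap in the sites you actually read, obviously.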
posted by nicwolff at 7:13 PM on September 8, 2005


In Scrapbook, it looks like you can highlight a section of text that includes links, right-click, select "capture as", and click "save all pages sequentially". It saves more pages than I'd like, but it's worth experimenting with. I like programs that work within Firefox.
posted by kdern at 8:07 PM on September 8, 2005


Can't curl or wget be made to grab stuff recursively? A bit of coaxing with shell or perl might be necessary, but it should be quite doable. Take the HTML that results, and use any of the HTML tag stripping utilities out there.

Stick the script in cron. Profit.
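Something like this, untested and with placeholder URLs and paths, would be the whole pipeline minus the tag-stripping step:

    #!/bin/sh
    # grab-reading.sh - mirror a few sites one level deep for offline reading
    DEST=$HOME/offline/$(date +%Y-%m-%d)
    mkdir -p "$DEST"
    for url in http://example-paper.com/ http://example-blog.com/; do
        # -r -l 1: recurse one level; -k: rewrite links to work locally;
        # -p: grab images/CSS; -np: don't climb to parent dirs; -E: add .html
        wget -r -l 1 -k -p -np -E -P "$DEST" "$url"
    done

plus a crontab line along the lines of

    0 8 * * 1-5 $HOME/bin/grab-reading.sh

to run it every weekday morning before class.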
posted by NucleophilicAttack at 9:04 PM on September 8, 2005


Maybe NewsGator?
posted by webmeta at 11:21 PM on September 8, 2005


I would consider the Slogger extension to be preferable to Scrapbook for this purpose. Autologging, good indexing of pages, highly configurable.
posted by catachresoid at 5:53 AM on September 9, 2005


>Can't curl or wget be made to grab stuff recursively?

wget definitely can, with no coaxing at all. I think it's as simple as passing it the -r option.
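For example (untested, but this is the usual recipe):

    wget -r -l 1 -k -p http://ask.metafilter.com/

pulls down the AskMe front page plus every page it links to, one level deep, with the links rewritten so they work offline.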
posted by AmbroseChapel at 3:09 AM on September 11, 2005


This thread is closed to new comments.