
I've read a lot of books. Now I want to do something productive.
March 12, 2012 9:25 AM

Is there a way to download a list, en masse, of ALL the digital books I've bought via Amazon.com?

I've come up with this idea of wanting to build a little db in DevonThink for all the books I read, along with ancillary notes about same. I buy all my digital books via Amazon, and I buy (and read) a lot.

I could conceivably do this by looking at individual titles on my Kindle and separately entering them into DevonThink, but I was hoping for a way to get some sort of a digital file that has all the names, so that I don't have to re-type everything.

Is this possible? Maybe there's a way to get at the underlying Kindle data?
posted by dfriedman to Computers & Internet (4 answers total) 2 users marked this as a favorite
 
Did you go to the "Manage your Kindle" page on Amazon? Possibly this link -- that has all items in My Kindle Library.
posted by brainmouse at 9:33 AM on March 12, 2012 [1 favorite]


calibre might be of some help as well
posted by edgeways at 9:42 AM on March 12, 2012


there are hacky ways.

1. go to kindle.amazon.com, mark all your books as public
2. go to your public profile page. you can find this by going to "Hello, dfriedman" in the top right of the page and choosing Your profile

3. it will only give them to you in sets of 10, but the page number is in the url. For instance, the base url to my profile page is

https://kindle.amazon.com/profile/[my username here]/[my ID num]

4. on the left of that page, i click on "read" or whatever other status i want to download. now i have


https://kindle.amazon.com/profile/[my username here]/[my ID num]/[read]

Clicking on "page 2" at the bottom takes me to

https://kindle.amazon.com/profile/[my username here]/[my ID num]/[read]/2

Note that /1 will get the first page, although they don't show you that at first

So you can hack up a shell script or what have you.


for page in $(seq 1 200)
do
  /usr/bin/curl "https://kindle.amazon.com/profile/[my username here]/[my ID num]/[read]/$page" | grep "some title or book url identifier here" >> /path/to/file/to/store/output
done

Clearly if you know python or some other language, you can do much better.


As far as what to grep for, a title looks like this:

<div class="title"><a href="/work/great-curries-india-camellia-panjabi/B000ACUW56/1904920357">50 Great Curries of India</a></div>

So you can match on the div class="title" up to the /work/ part. After /work/ comes a hyphenated title slug, then the ASIN, then the ISBN.
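If each title div lands on its own line (as in the example above), a sed one-liner can pull the title text out; this is just a sketch against that sample markup, not guaranteed to survive changes to the page:

```shell
# sample line saved from the profile page (lyra4's example above)
line='<div class="title"><a href="/work/great-curries-india-camellia-panjabi/B000ACUW56/1904920357">50 Great Curries of India</a></div>'

# keep only the text between the <a ...> and </a> of a title div
echo "$line" | sed -n 's|.*<div class="title"><a href="/work/[^"]*">\([^<]*\)</a></div>.*|\1|p'
```

Point that at the file of saved pages instead of `echo "$line"` and you get one title per line.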

I did a shell script to scrape out the ISBNs to do a massive goodreads import at one point, and i tackled it like this. sadly i chucked the shell script when I was done because it was so hacky, but that was the general gist of it.
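The ISBN scrape could look something like the following — a sketch that assumes the ISBN is the last segment of the /work/ link, as in the sample markup above:

```shell
# sample line saved from the profile page (same markup as above)
line='<div class="title"><a href="/work/great-curries-india-camellia-panjabi/B000ACUW56/1904920357">50 Great Curries of India</a></div>'

# grab the /work/ path, then keep only its last /-separated segment (the ISBN)
echo "$line" | grep -o '/work/[^"]*' | awk -F/ '{print $NF}'
```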


have fun!
posted by lyra4 at 10:32 AM on March 12, 2012


Thanks for the suggestions.

I think the best choice for my purposes right now is to use brainmouse's link. It only shows 15 titles at a time, and I have about 200 books total, but I can just select 15 books at a time, copy and paste the titles/authors into TextWrangler and then format.

Not ideal, but it works for now....
posted by dfriedman at 10:41 AM on March 12, 2012


This thread is closed to new comments.

