Can the singularity help?
November 10, 2011 2:00 PM Subscribe
TediousTaskHelpFilter: For a project I'm working on, I manually went through 4000+ article abstracts from a literature database and classified each one as either a target article or an irrelevant article (with a broad classification of why it was irrelevant). My advisor has indicated that it's standard practice for me to run this same search on a second database, which will overwhelmingly return repeats from my first search. As this is nowhere near the only thing on my plate, I'd like to streamline the task. Is there a computer-based way to eliminate the duplicate hits between these two databases?
posted by Keter to computers & internet (5 answers total) 2 users marked this as a favorite
For what it's worth, the two databases in question are PubMed/MEDLINE (what I used originally) and PsycINFO. I believe I can manage to get text dumps of each pool of results. It'd be really sweet if I could pare the PsycINFO result list down so it excludes everything already on the PubMed list. I do know some computer science-y folks who might be able to help me out, but if the implementation is simple, I could conceivably do it myself. Any ideas would be great. Thanks!
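If the text dumps can be reduced to one article title per line (an assumption; the exact export format will depend on how PubMed and PsycINFO let you save results), the pruning step is simple enough for a short script. The sketch below normalizes titles (lowercasing, stripping punctuation, collapsing whitespace) so that minor formatting differences between the two databases don't block a match, then keeps only the PsycINFO titles not already seen in the PubMed set. The file names in the comments are hypothetical placeholders.

```python
import re

def normalize(title):
    """Lowercase, drop punctuation, and collapse whitespace so the same
    article matches even if the two databases format its title differently."""
    cleaned = re.sub(r"[^a-z0-9 ]", "", title.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def new_hits(pubmed_titles, psycinfo_titles):
    """Return the PsycINFO titles whose normalized form does not
    already appear among the PubMed titles."""
    seen = {normalize(t) for t in pubmed_titles}
    return [t for t in psycinfo_titles if normalize(t) not in seen]

# Hypothetical usage, assuming one title per line in each dump:
# pubmed = open("pubmed_titles.txt").read().splitlines()
# psycinfo = open("psycinfo_titles.txt").read().splitlines()
# for title in new_hits(pubmed, psycinfo):
#     print(title)
```

Exact title matching will miss records where the databases disagree on the title itself (truncation, subtitles), so any titles this script keeps that look familiar are worth a quick manual check; but it should safely knock out the bulk of the repeats.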