How does a non-expert identify experts and important publications?
March 1, 2010 6:17 PM Subscribe
I'm trying to find a way of determining the "trustworthiness", for lack of a better term, of a scientific study. To do so, I would really like to know if there are systematic methods you can use to map out disciplines in which you are not already an expert. Help me sort this out.
Basically I'm speccing out a lit review for a research project. The review is a "scoping review", not a systematic review, and to do it I'm going to be covering a lot of different subject domains, many of which I don't have much experience in. I'm not interested in exhaustively cataloging all of the primary literature, nor do I have time to do so; instead I would like to get a sense of the expert consensus in each domain as it relates to my topic, where such a consensus exists. Right now I'm writing a proposal and I want to formalize my search process a bit and could use some help.
After some thought I've defined a paper to be "trustworthy" if it is accepted by a consensus of the appropriate community of experts. Some signifiers of this acceptance are favorable citation in other trustworthy papers, publication in a trusted platform, authorship by a trusted author, and peer review.
Evaluating these signifiers is easy if you are in your home domain, since you know who the experts are and you know what the trusted platforms are. However, if you're outside your home domain, it seems to be very difficult if not impossible. This seems to be a pretty fundamental problem in science (see here) and something that everyone has to go through when they need to pull info from a field that's new to them.
Normally I would look for surveys and reviews to start from, but this particular topic is pretty new and there isn't much out there yet. The topic is pretty niche as well and doesn't have its own journals, per se; rather, results that are relevant to this topic are reported in a gaggle of journals from all kinds of different disciplines and fields (hence the need for this scoping review). The only good way forward I can see is to consult experts in all the relevant domains, but this may not always be possible, and in any case there is a chicken-and-egg problem in that I first have to find out who the experts are before I can consult them.
I can state it more simply:
- How can you find out who is an expert in a field?
- How can you find out what are the trusted journals (or other platforms) in a field?
- In particular, are there systematic ways of exploring literature that can help you figure these out?
Um, what is the topic? I would ask the subject-specialist librarians at your university for help. Their contact info can usually be found on the library website.
You can use the database Ulrich's to find out whether a journal is refereed/peer-reviewed. Science and social science journals can also have "impact factors", which are controversial and not at all definitive, but many people use them to judge the trustworthiness or prestige of journals nonetheless.
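To make "impact factor" concrete: the classic version is citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items it published in those two years. A quick sketch, with invented numbers:

```python
# Classic two-year impact factor: citations this year to the
# journal's previous two years of articles, divided by the number
# of citable items published in those two years.
def impact_factor(citations_to_prior_two_years: int, citable_items: int) -> float:
    return citations_to_prior_two_years / citable_items

# Hypothetical journal: its 2008-2009 articles drew 450 citations
# in 2010, from 300 citable items published in 2008-2009.
print(impact_factor(450, 300))  # 1.5
```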
posted by unknowncommand at 6:26 PM on March 1, 2010 [1 favorite]
A good place to start is by looking at recent review articles. These summarize the state of a particular area of inquiry, with numerous citations to the relevant literature. They'll give you a good overview of the topic and an idea of which points reflect current consensus and which are points of contention.
(I'm speaking about biomedical literature here - I have little knowledge of conventions in other fields)
posted by chrisamiller at 6:31 PM on March 1, 2010
Sorry, hit post too soon. The database Web of Science (which actually includes science, social sciences, and humanities indexes, too) lets you search by topic and then quickly chart the results by most frequently cited author or institution. You can also sort your results by number of times cited, and then limit to reviews, which tends to bring the most "central" papers to the top.
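If you'd rather work offline, most databases will export search results in something CSV-like, and the author tally is easy to reproduce yourself. A sketch, assuming a hypothetical export with "author" and "times_cited" columns (an assumption for illustration, not Web of Science's actual export format):

```python
import csv
from collections import Counter

# Tally total citations per author from a hypothetical CSV export.
# The file name and column names ("author", "times_cited") are
# assumptions, not any real database's format.
totals = Counter()
with open("search_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["author"]] += int(row["times_cited"])

# The names dominating this tally are candidate "central" authors.
for author, n in totals.most_common(10):
    print(f"{n:6d}  {author}")
```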
Another option would be to search Dissertation Abstracts to see how other people have done what you're trying to do. Feel free to memail me if you have any questions!
posted by unknowncommand at 6:32 PM on March 1, 2010 [1 favorite]
Citations are one way of determining authority; it's how Google's PageRank works. The problem with raw citation counts is that citations fall into both 'agree' and 'disagree' categories, so a widely rebutted paper could end up in your study. PageRank uses the context of a citation to combat this.
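For what it's worth, the core of PageRank is short enough to sketch. This is a toy power-iteration version over an invented citation graph; note it works purely from link structure and can't see whether a citation is favorable or a rebuttal:

```python
# Toy PageRank by power iteration over an invented citation graph.
# Papers cited by well-cited papers score highest; this sees only
# link structure, not whether a citation agrees or disagrees.
cites = {"A": ["B", "C"], "B": ["C"], "C": [], "D": ["C"]}
papers = list(cites)
rank = {p: 1.0 / len(papers) for p in papers}
d = 0.85  # damping factor

for _ in range(50):
    new = {p: (1 - d) / len(papers) for p in papers}
    for p, outs in cites.items():
        targets = outs or papers  # dangling papers spread rank evenly
        for q in targets:
            new[q] += d * rank[p] / len(targets)
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "C" ranks first
```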
If the field is truly widespread and growing, it might very well be in your interests to contact the people you've identified as potential contributors to the field and ask who they think the experts are. If you gently stroke their egos by asking for other experts in the subject, it should make for good networking. It'd be a great way to organize a conference on the subject.
posted by pwnguin at 6:37 PM on March 1, 2010
This can certainly be a challenge if it's a new field. But most new fields don't just emerge from nowhere fully formed – generally they are an offshoot of a broader field (or possibly several broader fields).
What you might want to look for are (possibly informal) groups formed by researchers/scholars in the field. Meetings, conferences, and mailing lists are generally the things to look for, even streams at broader conferences – basically anywhere that people interested in this new field are hanging out to get to know each other. In my experience, it's when these groups get organised that a peer-reviewed journal or two tends to appear.
If it's not brand new, the next thing to look for will be edited books on the topic. Often these arise out of the meetings/conferences noted above. These books tend to be only as good as their editors, but good editors will try to get solid people to contribute, and will act as a filter for the crap.
If you can track down groups/mailing lists/conferences/whatever, the next step I'd take is looking at the publication history of the main players (the names that keep coming up): what and where are they publishing, even if it's not all directly in the field of interest? Track down the web sites of the journals they're publishing in if you don't know the field well – do they have editors/editorial boards from respected institutions? Are they published by the big publishers? Do they outline what their peer review policy is? Metrics like impact factor may or may not help – some fields have low citation rates, even in good journals, so you really need to compare like with like to get an idea of how journals rank. There are always ways to game the journal system, of course, but it's probably better than nothing.
I don't think there are any easy systematic ways to answer your specific questions – the first step is going to be one of general exploration. The key is to get a few names, and that's where conferences, meetings, mailing lists, and all those things that are usually pretty well documented and searchable online come in handy.
Once you do find some names, you can look at things like where they're publishing, who they're publishing with, and so on. If it's a small, new field, you'll probably find they all know one another anyway, so it might be easier just to pop emails off to a few people and ask them.
posted by damonism at 6:41 PM on March 1, 2010
Seconding impact factor. It's a way of ranking the importance of a journal.
posted by alms at 6:45 PM on March 1, 2010
Response by poster: Thanks for the help so far. I'm familiar with impact factors and citation counts and I'll probably rely on these if I don't get anywhere by other means. The literature is a little too sparse and spread out to do very much with these tools. Personal networking is probably the only way here, either with librarians or other experts.
What's bothering me is the lack of formality to the process. I want to be able to do this in a way that everyone would agree is correct - that the papers I've identified are definitely trustworthy (according to my definition).
If I assert a claim that "Dr. X is an expert in this field" -- how can I demonstrate it? I can make up a few criteria - years of experience, lots of papers published, and so on - but where's the line between expert and, say, dabbler? I'm struggling to find a way to specify this line. I want a process that I can fall back on so that I'm not just making it up as I go - at the very least something I can cite, so that my process for identifying an expert is not controversial. Does that make sense? The more I think about it, the more I suspect such a process doesn't really exist, since "expert" is a subjective label.
posted by PercussivePaul at 6:53 PM on March 1, 2010
If I assert a claim that "Dr. X is an expert in this field" -- how can I demonstrate this claim?
Ultimately, it's your peers who decide whether you're an expert. They do this by citing the papers you've written, inviting you to present your work at conferences/symposia (and there is a difference between just submitting a paper to present at a conference and being an invited speaker), inviting you to submit papers to edited volumes and special issues of journals, electing you to roles in professional societies, putting you on editorial boards of journals, and having you edit books and journals in the topic.
Not everyone who is an expert will fit all criteria (some people just want to get on with their work, and not get involved in the politics of research), but they're all pretty objective, researchable criteria.
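If you need something citable rather than impressionistic, one crude option is to turn those criteria into an explicit checklist and score each candidate against the evidence you can document. A sketch (the criteria wording and the equal weighting are my own simplification, not a standard instrument):

```python
# Crude checklist scoring for the criteria above. Equal weights and
# the exact wording are invented for illustration; a real protocol
# would need to justify both.
CRITERIA = [
    "cited by peers in the area",
    "invited (not merely submitted) conference talks",
    "invited papers in edited volumes / special issues",
    "elected roles in professional societies",
    "editorial board membership",
    "has edited books or journals on the topic",
]

def expertise_score(documented_evidence: set) -> float:
    """Fraction of criteria for which you found documented evidence."""
    return sum(c in documented_evidence for c in CRITERIA) / len(CRITERIA)

print(expertise_score({"editorial board membership",
                       "cited by peers in the area"}))  # 0.333...
```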
A bad way to judge expertise is whether they've appeared in the media as an expert on the topic or in a court as an expert witness. Journalists are lazy and will seek your opinion because your name comes up in a Google search on the topic, and people become expert witnesses if they'll reliably take a predictable line on an issue. Neither is a good, objective criterion for establishing expertise, IMHO.
The other complication you'll get is that the more "expert" someone is, the narrower their expertise usually is. In my discipline, I might be an expert in a very specific topic, but I'm no more than an informed dabbler in the discipline as a whole, and quite uninformed about particular other specialities within the discipline. Most people don't have time to be a renaissance person.
posted by damonism at 8:22 PM on March 1, 2010
I second the idea of looking at impact metrics. You might want to look at this project: http://scholarometer.indiana.edu/. It relies on user-submitted data, so its database may not be complete for your niche field, but it will calculate impact factors for you.
In addition, have you tried using Google Scholar to find the citation count for a paper, and what other papers cite it? The best way of doing what you're trying to do, as far as I know, is to look at the author's prestige and at how the paper is cited, both of which you mention in your message. Are you finding it difficult to obtain the data?
posted by albatross84 at 8:41 PM on March 1, 2010
Yeah, if you're looking to justify and quantify expertise, you're pretty much left with things that are roughly quantifiable (number of times cited, years of experience/tenure, votes of confidence from peers, reputation of institution/journal, grant money awarded, and such). You could try choosing the top X% of people in a field based on these, such that there are a reasonable number of publications for you to look at. If you're concerned that there may be a significant jump in quality (rather than an arbitrary distinction) between dabblers and experts in a particular field, you could graph your results and see if that's the case. Basically it's up to you to consult the literature for operational definitions of expertise, or to come up with your own and see if it flies.
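Here's a sketch of both ideas, assuming you've already assembled one rough score per person (total citations, say). The numbers and the largest-drop heuristic are invented stand-ins for graphing the results and eyeballing them:

```python
# Invented per-person scores (e.g. total citations), one per candidate.
scores = sorted([310, 280, 260, 90, 75, 60, 40, 12, 5], reverse=True)

# Option 1: keep the top X% (here 10%), however arbitrary that is.
top = scores[:max(1, round(0.10 * len(scores)))]
print("top 10%:", top)

# Option 2: look for a natural break -- the largest drop between
# adjacent ranked scores. A crude stand-in for eyeballing a graph.
drops = [a - b for a, b in zip(scores, scores[1:])]
cut = drops.index(max(drops)) + 1
print("natural break after rank", cut, "->", scores[:cut])
```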
posted by unknowncommand at 9:09 PM on March 1, 2010
What's bothering me is the lack of formality to the process. I want to be able to do this in a way that everyone would agree is correct - that the papers I've identified are definitely trustworthy (according to my definition).
There is no way to do this that everyone would agree is correct. No matter how you make the division between expert / non-expert, there will be people who could make a good argument against it.
There is no ultimate science club, where once you're admitted, you don't have to back up your shit anymore. Granted, once you get a certain level of name recognition, you can probably get away with things that an unknown n00b might not, but the scientific community is based on peer review for a reason. No one can say better who is or is not on the level than other people who study the same thing.
Even among top notch scientists in specific sub-disciplines, there is squabbling and disagreement about whose science is or is not tight.
Evaluating these signifiers is easy if you are in your home domain, since you know who the experts are and you know what the trusted platforms are. However, if you're outside your home domain, it seems to be very difficult if not impossible
I think sometimes it's even harder in your home domain, especially if someone has cranked out consistently good work for years and then lets a stinker slip through. (This happens more often than it should once people get to the point where they're more oriented toward running a huge lab than toward getting their hands dirty checking the details of everything they put their name on.)
The whole point of science is that you aren't supposed to just accept arguments from authority. Just because a big name is on a paper, that doesn't mean it's automatically good. You gotta check that shit out. However, it's certainly not possible to chase every single finding down to the most basic assumptions underlying it. It's tough to know where to draw the line between the assumptions you're OK with and the ones you have to personally dissect. There is no easy method.
For your purposes, I'd echo the suggestions that you look at impact factor, or talk to someone you trust in each of the fields you're tackling. You'll still be basing your decisions on criteria other than your own scientific judgment, which isn't really what science should be all about, but if you want to do massively interdisciplinary work, I guess you gotta trust other people. I trust other people, however, a lot more than I trust big names.
posted by solipsophistocracy at 10:13 PM on March 1, 2010 [1 favorite]
There's no bibliometric tool that maps to "trustworthy". Most published research eventually turns out to be false, and even experts in the field can't help you know for sure which papers will be proved correct. You can learn to evaluate the quality of experimental and observational studies (for clinical work, the EQUATOR Network and CEBM are good places to start), but even with the most finely honed sense of methodological propriety your scoping review can't divine the "truth". Your review can only try to encompass competing theories and the plurality of current opinion.
posted by roofus at 4:28 AM on March 2, 2010
posted by roofus at 4:28 AM on March 2, 2010
Annual Reviews (you'll need to purchase them or find them through your academic library) are a very traditional way of seeing what the up-and-coming topics are, and can help identify consensus/non-consensus around a topic.
They summarize current primary research in a critical lit-review style, covering various topics each year. None in the humanities, unfortunately, but they're an awesome old-school resource for the sciences and social sciences.
posted by lillygog at 10:12 AM on March 2, 2010
But if that's impossible for some reason, raw citation volume seems like a pretty good outside-view metric. Most-cited authors will tend to be roughly the most respected; papers with a lot of citations are important; papers in top journals attract more citations.
posted by grobstein at 6:24 PM on March 1, 2010
This thread is closed to new comments.