I see so much nonsense written by people who purport to understand who has a “good” publication record and who does not (no names named, you know who you are). Here are a couple of hints.
If you want to see peer reviewed journal articles and papers in formal working paper series (e.g., NBER), then go to Web of Science, which encompasses the Social Sciences Citation Index (that we used to look at in my grad student days). I don’t know how to tabulate the results easily, but I think this is the most stringent screen, which identifies the works, and then identifies the citations. There is no weighting scheme on the citations, though. And you need access to the database; I get it through my university’s library.
SSCI has the drawback of not covering books and book chapters. Web of Science has added a book search facility, which I have not used myself, so don’t know how comprehensive it is.
For those of us without access to SSCI, there is still no excuse not to have some insight into impact. Google Scholar has a search function; it identifies articles on the web, so it doesn’t screen out articles not published in peer-reviewed journals. However, it does then give you citations in articles on the web, very easily and intuitively. The number then gives you an idea of impact. Google Scholar formalizes impact by tabulating number of citations, h-index, and i10-index, both cumulative and cumulative over the preceding 5 years. If the author has a Google Scholar page, then there will be a very useful and pretty comprehensive listing of works (with citation numbers) there. Here is George Akerlof’s Google Scholar page.
Now, I’ve seen assertions of “frighteningly thin” publication records. Assessing publication records is a hazardous enterprise. One has to be clear about the criteria one is using. Is it number, influence, innovativeness or relevance? Any one of these might (or might not) be correlated with the prominence of the publication venue – recalling that many important topics in economics have traditionally been relegated to what are thought of as “not” the top journals.
Really important commentary, for which I am grateful.
Do we not but sew ideas as seeds, hoping some day they will receive water and grow?
rsm: And here I thought one “sows” seeds, not “sews”.
Menzie,
Not a problem; this is within the very wide error margins that exist around all comments made here by rsm, :-).
I note that there is no perfect measure of scholarly impact. As Menzie notes there are several different entities that try to track citations, the most widely accepted measure of impact, although even that measure has its imperfections as, well, some citations are more important than others. But besides those he lists I also note RePEc. As it is, different sources count different things, so one can get quite different numbers and rankings from different sources.
As it is, Google Scholar is probably the one that gets looked at the most. It generally gives the largest numbers, but it must be noted that it is noisy, if not noise. It is known to be subject to lots of errors, listing pubs by other people under somebody’s list as well as not covering pubs somebody has (my most cited publication does not show up on my list). It is also sloppy, miscounting in various ways: double counting, not correcting for self-citations, and more. Nevertheless, for better or worse, it is the most widely used source for these sorts of things.
Needless to say, George Akerlof has one heck of an impressive google scholar list. But then he is one of those Nobel Prize winners who really deserves it.
Oh, something that perhaps should be explained for those not familiar with the Google Scholar lists is what the h-index is, named for the physicist Jorge Hirsch. It is the largest number h such that one has h publications each cited at least h times, widely viewed as a pretty good measure of the depth of somebody’s influence as well as its aggregate amount. This corrects for people who have one major pub that has gotten lots of citations but do not have much else that anybody has paid much attention to. That latter sort of case may lead to a lot of total citations, but not much of an h-index.
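For readers who want to see the mechanics, here is a minimal sketch in Python; the per-publication citation counts are invented for illustration and are not anyone’s actual record. It just sorts the counts and finds the largest rank that still meets the threshold.

```python
# Minimal sketch of the h-index calculation, assuming a hypothetical list of
# per-publication citation counts (not anyone's real record).

def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# A one-hit wonder: one blockbuster paper, little else that gets cited.
one_hit = [5000, 4, 3, 1, 0]
# A broader record: no single smash hit, but sustained influence.
broad = [60, 55, 40, 33, 28, 25, 22, 18, 15, 12, 11, 10]

print(h_index(one_hit))  # 3  -- lots of total citations, small h-index
print(h_index(broad))    # 11 -- fewer total citations, larger h-index
```

The contrast between the two invented records is exactly the distinction described above: the one-hit record piles up far more total citations but ends up with the smaller h-index.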
BTW, both George Akerlof and Jim Hamilton have specific publications that have really a lot of citations, in George’s case his Nobel Prize winning paper on the market for lemons, in Jim’s case his Time Series Analysis book. But both of them also have very substantial h-indexes as well, showing that neither of them is a one-hit wonder. It also helps h-indexes to have been around for a longer time, and neither of them is a spring chicken, but even correcting for that, they both would have impressive citation records even if one were to remove their respective super big hits.
Thanks for the primer on this h-index. Since Princeton Steve thinks he is now President and has the right to make Uhlig a Fed governor: Uhlig’s h-index is 54, compared to Peter Diamond’s h-index of 81. Google Scholar says Diamond has over 46 thousand citations v. only 20 thousand for Uhlig.
Stevie just learned about Uhlig but now he says Uhlig is his hero. I wonder if it was some of Uhlig’s racist excesses that made Stevie heart him so much. Now I’m sure you know why I raised Diamond but I bet the ranch Stevie has no clue who he is.
BTW, my h-index should be one greater, as my GS list is missing two highly cited pubs (one an article, one a book) while including an item that is not by me, so add two and subtract one to get a net gain of one. That is not all that atypical of what one finds for many people, and a fair indication of the sort of sloppy inaccuracies that are in the Google Scholar lists.
You should go and edit your Wikipedia page. It seems a lot of other people are. Like anyone would rely on Wikipedia for qualifications.
pgl,
The problem is my Google Scholar page, not my Wikipedia page. I am not able to edit my GS page, so unable to correct errors there. I have actually made efforts to communicate with the people who run GS, but those have failed.
Oooooh, I just checked, and Menzie has a higher h-index than any of us: 74, to 69 for Jim, to a mere 63 for George Akerlof, with me dragging behind at a piddly 40, :-). Menzie has a lot of influence, even if he has not had a super smash hit like either Jim or George, and he is younger than all three of the rest of us.
With that h-index, Biden should put you on the Fed’s Board of Governors!
And since I started it, Harald Uhlig has an h-index of 54, behind Menzie, Jim, and George, but ahead of me, with Lisa Cook having one of 13. But, as noted elsewhere, besides being younger, she has a lot of policy experience across a lot of areas and entities, including various parts of the Fed system, as well as knowledge of some topics important to the Fed that others do not have.
“no names named, you know who you are.”
Initials S.K.??
Has anyone done this, for the person of interest? A blogger or the like. Even if imperfect, a decent cut.
Anonymous: Yes. It’s called a “tenure review”.
Can we check on how many publications in the AER and JPE you know who has? That should not take too long.
pgl,
If you are going to fall into the “top journals” crevice, at least expand it to the supposed “top five” one must pub in if one is to get tenure at a top, or even semi-upper-middle, department. The AER may be the worst of them these days, frankly, and the JPE is questionable at times, especially with Uhlig at its helm. The other three are the QJE and Econometrica, both of which are probably ahead of the AER and JPE, and the Review of Economic Studies (RES).
Frankly, on the recursive discounted RePEc impact measure, the one I take most seriously for journal rankings, several of the AEJ field journals are currently ahead of the AER, even though I kind of sneered at them when they first came out, with Macro and Policy both very good and Micro the one further behind.
But, in fact, Menzie is right. A lot of important things get pubbed in non-top journals. The top journals suffer from not allowing anything that anybody can object to, since it is so hard to get into them. As an editor I can say that some of the papers I have published that got the most citations and attention were ones that faced opposition to being published, in some cases from prominent people, because they were controversial.
You’re right of course. My point still stands, as Stevie has zero publications in any economics journal.
“A lot of important things get pubbed in non-top journals. The top journals suffer from not allowing anything that anybody can object to, since it is so hard to get into them.”
There may be another issue with top journals: they (e.g., Nature, Science) have a high impact factor because of a very few excellent papers, while the majority of the papers are cited clearly below the impact factor. There is NO normal distribution of citations around the impact factor. The situation is much better in many non-top journals.
U.,
It is well known that citations are power law distributed, thus highly skewed. That holds at all journals, unless they are so obscure and poor they get no citations.
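To illustrate the skew point, here is a minimal simulation sketch; the Pareto shape and scale are arbitrary assumptions, not fitted to any journal, but the qualitative pattern (the typical article cited well below the mean) is what a heavy right tail produces.

```python
# Minimal sketch, assuming a hypothetical heavy-tailed (Pareto-type) citation
# distribution; the parameters are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
# Heavy-tailed draws loosely standing in for per-article citation counts.
citations = rng.pareto(a=1.5, size=10_000) * 5

mean = citations.mean()          # roughly what an "impact factor" averages over
median = np.median(citations)    # the typical article
share_below_mean = (citations < mean).mean()

print(f"mean ~ {mean:.1f}, median ~ {median:.1f}")
print(f"share of articles cited below the mean ~ {share_below_mean:.0%}")
# With a heavy right tail, the median article sits well below the mean, so
# most papers are cited "clearly below the impact factor."
```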
To be fair, I had not checked this in detail, only for Nature and Science and some chemistry journals that showed a “better” distribution; all were skewed, but the non-top ones were much better.
S.K. touted his academic credentials:
“As for racism and economics departments. I pretty much did the first year of the econ PhD at Columbia and was offered a full ride there. That first year was the singularly least pleasant year of my life. It was both extremely hard — I was good in math in high school, but was all liberal arts in college — and singularly unhelpful.”
I will have to ask the folks I know at Columbia why on earth they would offer a scholarship to S.K. (assuming they really did). Gee, the first year of graduate economics was too hard for this arrogant know-nothing? Boo hoo. And this arrogant know-nothing thinks he has the right to judge the credentials of someone who did get a Ph.D. from UC Berkeley? Seriously?
pgl,
As it is, Steven K. has a Google Scholar page. He has 29 citations and an h-index of 3.
I note that not everybody who publishes has one. I know of some famous people, including even some very prominent people with lots of publications and citations, who do not have such a page. How these are managed and who does it remains a bit murky, frankly.
I think I saw some references to his publications, which seem to include PowerPoint presentations – not exactly refereed academic journals. His latest “publication” was some insane rant on the Ukraine crisis published by The American Thinker. I would be ashamed to have that rag publish anything I happened to pen.
Regarding Steven’s GS page, this is one reason why some people prefer things like Web of Science or RePEc, which only count publications in certain outlets and citations in certain outlets. I am sure Steven does not show up in either of those, although I am not going to check. But, in fact, many people favor GS despite its sloppiness because of the wider net it casts in terms of both what gets counted as being cited and where things are cited.
I note there have been some extreme examples of important papers that did not get published for many years, or even ever, for all sorts of odd and obscure reasons, but that nevertheless came to be highly cited and influential. In some of those cases I am aware of them getting published many years after their initial influence, but most of the citations out there were to the earlier unpublished versions. As some here may be realizing, a lot of weird stuff goes on with regard to publishing in academic journals, and I have written a bit about this in my paper on the problem of plagiarism. But I have many more tales than those, most of which cannot be told, and I am sure Menzie has some as well, being a major journal coeditor as he is. But most of the things we really know about, we cannot speak of, except perhaps in general terms, not specific cases.
BTW, it is kind of funny that Menzie presented George Akerlof’s GS page. He is someone who did not have one as recently as about a year ago. I went looking for his citations and could not find them. But he has one now, and, yeah, over 100,000 citations. But then, he is a famous and super respected Nobel Prize winner, who also happens to be a super nice guy in person, widely regarded to be one of the nicest people in the whole economics profession, and so a good person to have as a husband for probably the most influential woman economist in the world, Janet Yellen, coauthor on many of his widely cited papers.
Based on my experience in economics, a large part of getting a “good” publication record (at least at the beginning of one’s career) is schmoozing with the right people, and writing papers that fit nicely into the mainstream. It helps if one can base it on the seminal work of somebody famous, and praise the seminalness of it strongly enough. So, I would not really want to base my evaluation of somebody’s academic brilliance (or not) on any publication record.
I see so much nonsense written by people who purport to understand who has a “good” publication record and who does not. These records are largely useless as they do not control for the many issues in academic research. E.g., per Google Scholar, Dan Ariely has twice the citations Menzie has. Given recent developments, how much should one deem Ariely’s publication record “good”? The answer is that we do not know.
Econned,
There certainly are plenty of problems and complications in comparing publication records, as the discussion so far has already made clear. But Dan Ariely’s problems are specific to him and tell us nothing at all about these more general issues.
Ariely’s problems are not specific to him. What about his coauthors? What about the journals and questionable peer review process? Hell, it’s an issue with academic research.
Oh, Econned, this is one case, even if others are involved. If you want to make a point about general problems with comparing peoples’ citations or pubs or whatever, there are many other problems that are very serious and much more widespread. One of those is plagiarism, or the more subtle matter of just stealing ideas from other people. This is all over the place and a terrible problem. Editors have to deal with it regularly. I have written on this problem as an editor, and have previously noted here my paper on this matter, which one can access on my website.
As for Ariely’s problems, it remains a matter of debate as to just how bad what he did was. Most of us think he now has an asterisk on his record, but he and some others are still defending him, to varying degrees, and I have zero interest in debating or discussing that particular weird case, which as I said is of zero general interest, especially in the context of this thread and compared to a bunch of other issues that have been raised.
But perhaps you just want to in general raise doubts about the publication record of anybody, since you like to engage in perfervid personal attacks on people here who happen to have publication and citation records that are not all that bad, whereas I gather you have zero of both. But you want to put yourself up on a pedestal of knowing more than people here with publications and citations, even though you almost never make a comment that has any actual intellectual or factual substance, once in a while sort of, but mostly just trollish personal insults.
And I am not going to comment on this in any further replies to you here, given your history here.
Econned might have an ounce of credibility if he shared with us his CV. I’m taking the under as to whether he has even one publication in an academic journal.
Of course this thread was prompted by Princeton Steve’s attacks on the publication record of Dr. Lisa Cook. Stevie’s rants were so incredibly dishonest and/or stupid that no sane person would defend what he did. But here is Econned doing his best to back his good buddy.
http://datacolada.org/98
Here is a detailed account of what Econned alluded to. It seems Econned has no clue WTF he is babbling about (as usual), as Econned thought there were multiple authors of the specific study that had data issues. No, this was Ariely’s paper, which was one of a few studies.
This account also notes it is unclear who manipulated the data. It could have been the insurance company.
So Econned goes off on one of his patented off topic rants, does not provide a link to WTF he was babbling about, and of course gets the story very wrong. Par for the course with this pointless troll.
Given that pgl has provided some information, which makes it clear that it is not at all clear exactly what happened or who was responsible for the data problems in the paper by Ariely et al that has gotten the negative attention, I shall make a few further comments on him and his record, with none on that paper or its problems. I have no further insight or information beyond basically what pgl posted, aside from noting that Dan has gotten the attention because he is by far the most prominent of the five or so coauthors involved on the paper.
For starters, while it was claimed this controversy should put a cloud on the 58,000 or so citations Dan has, that paper has only gotten about 465 citations, less than one percent of all the citations he has received. It is a minor sideshow in his record, even if in the end it turns out he was personally responsible for all the bad things in that paper. He is also quite prominent, as some of his research has gotten public attention. Not only has he done a lot of important research on self-deception and self-control, but he has published some papers on sex that got a lot of attention, although some of these showed obvious things, like if you wave pictures of sexy looking women in front of heterosexual males they are likely to get distracted from performing various tasks, duh. I note he is a joint prof of both econ and psych at Duke.
I happen to know him personally, although not all that well, and published two of his papers in one of the journals I have edited (no, not the one that is in trouble). He is a very lively and personable guy and I am not surprised he has gotten himself into publicly interacting with lots of people in various venues. I also note that one of those papers I published by him was one of those that at least one member of my editorial board opposed publishing, but that has since gotten over 300 citations. I thought and still think it is an excellent paper. I am sorry that he is now in some trouble, but even smart people do make mistakes; see Ken Rogoff, whom I also know personally and respect a lot, who definitely messed up big time and with much more serious policy consequences as a result than this matter with Dan Ariely.
PaGLiacci,
You are an idiot.
https://www.pnas.org/content/109/38/15197.short
A complete idiot.
If you “have zero interest in debating or discussing that particular” case, you should just stfu. You can’t. Your senility will not allow it.
Ariely’s issues are not “of zero general interest, especially in the context of this thread”. They are directly related to this thread because these measures of who has a “good” publication record are useless if they include sh*t for research.
I love the clownish proclamations that Barkley Rosser will not comment further. We all know the reason Barkley makes these proclamations is only because he’s concerned his senility will prevent him from remembering when/where/how/why he makes the comments that he makes. He’s probably worried someone will find the same issues in his “research” as they found in Ariely’s.
Ah yes, of course. I knew that if I let myself comment further on what pgl put here it would trigger Econned to comment further. Shame on me; I apologize to all for getting him going further with his vacuous and worthless posts on this.
So, no, Econned has added zero of any substance, as pretty much always. All we have is a claim that probably I have a problem like Ariely’s after I claimed his problem was unique, as well as yet another repetition of his favorite personal insult for me, which he likes to spout here every chance he gets.
Anyway, Econned, please feel free to go dredging through my over 200 publications to see if you can find me having done anything like what was done in this paper now in trouble by Ariely et al, but as it is I have neither written anything on that topic nor have I done anything that used the methodology in that paper, so you will have a rather disappointing search, I fear, oh ever so productive Econned.
I shall note that the main reason I posted more detail about Ariely was in fact that one of his papers that I published in a journal was an example of what I had mentioned earlier: a paper that got published in a non-top-five journal even though somebody opposed publishing it, which does not happen in those top-five journals. I noted that sometimes such papers are very interesting and get a lot of citations, and that happened to be the case with this paper in particular, although since I published two of them by him, and they both got over 300 citations, you will not be able to figure out which one it was. I also note that this is increasingly rare these days, with many journals being run as decentralized committees, with top editors just rubber-stamping what others say and not exercising their own judgment. It takes a strong editor with some degree of confidence in his own judgment to go against the advice of those around him.
So Econned FINALLY provides a link to the paper he alluded to and fulfilled his usual purpose… hurling childish insults as he claims only he is interested in a serious discussion… which is why everyone is mocking this worthless troll.
Research on honesty relying on fraudulent data is weird, but to be honest for a change, I would like to hear who worked for that insurance company that gave him the data.
It seems SK will duck this as he is busy writing more racist rants for Steve Bannon
“If the author has a Google Scholar page, then there will be a very useful and pretty comprehensive listing of works (with citation numbers) there.”
I do not understand this statement. If you publish, you are found (as a scientist) in Google Scholar; only the h-indices require a Google account.
U.,
I have no idea what you are talking about here. Whether you have a Google Scholar page or an h-index I think does not depend on having one’s own Google account. It is somebody else out there who constructs these things; I still do not know who. I wish I did. In any case, if they put together a Google Scholar page for somebody and start keeping track of it, which they do if they construct one, then they also estimate an h-index as part of it. They also construct an i10-index, which gets less attention, that being the number of publications somebody has that have been cited at least ten times.
In any case, if one has a list of publications and the number of times each one is cited, then it is trivial to calculate both of those indices from that data, and anybody who has a GS page has both of them calculated. The problem is that the basic data on one’s GS page may be inaccurate, which happens to be the case for my page. This accuracy or inaccuracy of the data on somebody’s GS page has nothing to do with whether or not they have a Google account.
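Since the calculation really is trivial, here is a minimal sketch; the citation list and the hand corrections are hypothetical, not taken from any actual GS page, and are only meant to show how a couple of data errors can shift both indices.

```python
# Minimal sketch of computing the h-index and i10-index from a list of
# per-publication citation counts, and of how data errors shift them.
# All counts and "corrections" here are hypothetical, not anyone's real page.

def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citations, reverse=True)
    return max([rank for rank, c in enumerate(counts, start=1) if c >= rank],
               default=0)

def i10_index(citations):
    """Number of publications with at least ten citations."""
    return sum(c >= 10 for c in citations)

# Hypothetical raw Google Scholar list (possibly inaccurate).
raw = [120, 90, 75, 60, 44, 30, 25, 18, 12, 4]

# Hand-corrected version: drop one entry that is not by the author,
# add two cited works the page missed (illustrative numbers only).
corrected = [c for c in raw if c != 25] + [80, 50]

for label, lst in [("raw", raw), ("corrected", corrected)]:
    print(label, "h =", h_index(lst), "i10 =", i10_index(lst))
# raw        h = 9  i10 = 9
# corrected  h = 10 i10 = 10
```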
“Whether you have a Google Scholar page or an h-index I think does not depend on having one’s own Google account.”
Here I am not so sure. I had used GS for years to check some citations and some issues with other databases. I did not have a profile or an h-index.
A few months ago I created a Google account for other reasons and suddenly had a GS profile and h-index. Strange. 🙂
Addendum: I checked some publications with other guys from my university. In GS, the ones with a GS profile are underlined and have an h-index etc.; the ones not underlined are found in GS, but no profile with an h-index etc. is available. Working for the same institution is not sufficient.
Therefore, my bet is that you need a Google account to get a profile.
Ulenspiegel: Yes, your last line was exactly what I wrote in my post, and to my knowledge what a typical academic in the US knows.
Don’t use individual Google Scholar results out of context. The appropriate context compares an individual’s results against others KNOWN TO BE IN EXACTLY THE SAME FIELD. For example, to see how Maury Obstfeld is doing in International Finance, compare him against Ken Rogoff and Jeff Frankel. Different areas have different publication rates. Macro and Labor are very big areas; pure theory is not as big.
Robert Flood: Excellent points! For instance, econometricians or researchers who put together widely used data sets (e.g., in the latter case, me!) will have a lot of citations just because our data set is used, not necessarily because of the innovativeness of the publication.
you see the same thing in the physical and biological sciences. somebody who puts together a lab technique can be cited thousands of times more than somebody with a uniquely innovative and genius idea. everybody who uses that lab technique will cite the article. the most cited biological paper is from decades ago, and describes a procedure that is used countless times in labs every day. so its figure keeps rising. a useful computer algorithm will go nuts with citations for decades. many experimentalists produce high citation papers but at a rather slow pace. some experiments can take years to conduct and analyze. this is one reason tenure is assessed by one’s peers.
Weighing in to agree with Robert Flood: this is also one of the many complications here I did not mention. It is true that theory tends not to get many citations, with history of economic thought also not getting many.
I would note that some areas that also get more citations are some areas that are effectively multi-disciplinary, especially when policy implications are involved. This applies to health economics and environmental economics, to pick on two in particular.
BTW, as this thread more or less fades away, I think the following are the top three most cited economists of all time, although somebody can find an exception if they like: Karl Marx around 402,000, Andrei Shleifer around 369,000, and Amartya Sen around 339,000, with only Sen of those having a Nobel Prize.