In particular, while we prefer to direct our users to the official published version, we know that any free copy will often do in a pinch, and Google Scholar is the #1 tool out there for finding free copies floating around on the web.
With the rise of Open Access, particularly Green Open Access, with researchers depositing preprints into institutional and discipline-specific repositories (not to mention sites like ResearchGate and Academia.edu, which may or may not be legal), this strategy of searching Google Scholar for "free" copies is becoming more and more important.
I recently discovered this Chrome plugin called Lazy Scholar that automates this step of searching for free copies via Google Scholar.
This plugin appears to be created by a PhD student, Colby Vorland, and does not appear to originate from or be influenced by LibraryLand (thanks to Chris Bourg for drawing my attention to it via Twitter), so it is interesting to see how it stacks up against the LibX plugin, which is "a joint project of the University Libraries and the Department of Computer Science at Virginia Tech."
In LibraryLand, we have basically solved the issue of users searching Google Scholar and linking to full text, via the Google Scholar Library Links program.
But what happens if users just Google (or arrive via other means) and end up on the publisher page, or on some other indexing service that has no full text, like RePEc or PubMed?
Our solutions tend to be either adding the proxy (the proxy bookmarklet is the most popular method, though there are many, many ways) or the more powerful LibX plugin, which, among other things, can leverage unique identifiers like the DOI or PMID and use the library's link resolver to find the appropriate copy.
Both these solutions focus on getting access to the official published copy, with searching Google Scholar for free copies as an afterthought (though the link resolver might offer a link to search for a copy in Google Scholar if it can't find a subscribed version).
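The proxy approach, by the way, is simple enough to sketch: the bookmarklet just prefixes the current page's URL with the library's proxy login URL. Here is a minimal illustration in Python, assuming a hypothetical EZproxy-style prefix (every library's prefix is different):

```python
from urllib.parse import quote

# Hypothetical proxy prefix; substitute your own library's EZproxy login URL.
PROXY_PREFIX = "https://login.proxy.example.edu/login?url="

def proxify(url: str) -> str:
    """Rewrite a publisher URL so the request goes through the library proxy."""
    return PROXY_PREFIX + quote(url, safe=":/?&=")

# e.g. turn a publisher page into its proxied equivalent
proxied = proxify("https://www.jstor.org/stable/123456")
```

The actual bookmarklet does the same rewrite in JavaScript against `location.href`; the point is only that the proxy route blindly rewrites whatever page you are on, which is exactly why it fails on metadata-only pages.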
Lazy Scholar's approach is different. It attempts to locate the free copy first, though it does give you the option to add the proxy (similar to the proxy bookmarklet) as well as to leverage the link resolver by scraping library links from Google Scholar. Arguably, in many ways, we can see how this plugin reflects the mindset of a researcher. Why go through the library, with its complicated passwords, when you can get the free copy first?
By default, you need to click the Lazy Scholar button; it will then scan the page you are on for an article and display a notification if it detects one for which full text can be found (via Google Scholar).
What happens if Google Scholar can't locate free full text? Lazy Scholar will display up to two other options (below).
Of course, adding the proxy directly to the page often doesn't work, because either (1) you may have full text elsewhere rather than on the current page, or (2) the page only has the abstract and no full text itself (e.g. PubMed, RePEc, or repositories that list metadata but no full text).
This is where the useful "I noticed you are signed on into institution login ....." link comes into play. Where does that link go?
What happens is that Lazy Scholar checks the Google Scholar result not only for free full text but also for a library link to full text.
This is the normal Findit@.... entry you find next to Google Scholar results, if you have set up library links (see above).
This link is scraped and put behind the "I noticed you are signed on into institution login. Click here to go there" link.
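The scraping step can be sketched roughly like this. The HTML snippet below is made up (Google Scholar's real markup differs and changes over time, and the `gs_ggs` class and "Find It@" label are assumptions for illustration); the idea is just that one result block can yield both a free full-text link and a library link:

```python
import re

# Made-up, simplified Google Scholar result block for illustration only.
result_html = '''
<div class="gs_r">
  <div class="gs_ggs"><a href="https://repo.example.edu/preprint.pdf">[PDF] example.edu</a></div>
  <a href="https://resolver.example.edu/openurl?id=doi:10.1000/xyz">Find It@MyLib</a>
</div>
'''

def extract_links(html: str):
    """Pull a free full-text link (if any) and a library 'Find It@' link from a result."""
    free = re.search(r'<div class="gs_ggs"><a href="([^"]+)"', html)
    library = re.search(r'<a href="([^"]+)">Find It@', html)
    return (free.group(1) if free else None,
            library.group(1) if library else None)

free_url, library_url = extract_links(result_html)
```

A real extension would use the browser's DOM APIs rather than regexes, but the flow is the same: prefer the free link, fall back to the scraped library link.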
Note: there is currently a known bug if you are using Google Scholar outside the United States. Typically, Google Scholar redirects you to a country-specific subdomain; in my case I am always sent to scholar.google.com.sg rather than scholar.google.com. If you set up library links, they will be on the country-subdomain version, and Lazy Scholar won't be able to detect them.
The workaround is to go directly to scholar.google.com/scholar_settings (note the lack of the .sg) and set library links there as well.
You can also turn on auto-detect in the options (right-click on the LS icon and select Options), though you have to set up permissions.
With these recommended settings, when you visit any page that Lazy Scholar detects as an academic article, and it finds free full text via Google Scholar, a popup appears at the top right; clicking on it will send you there.
If no free paper is found on Google Scholar, you will see a different popup.
I could be wrong, but auto-detect doesn't seem to offer the "I noticed you are signed on into institution login. Click here to go there" link.
In any case, while I haven't done a full test, Lazy Scholar seems capable of recognising titles from a wide variety of sites, including but not limited to:
- ACM Digital Library
- Taylor & Francis
- Oxford Journals
- Science AAAS
JSTOR doesn't seem to work at all, and Wiley works only sometimes (there is a bug for some users). I am unsure how the detection works (LibX uses COinS).
In any case, Lazy Scholar's greatest benefit is when used on pages that do not host full-text themselves.
Just for fun, I tried it on our DSpace institutional repository, which currently consists mostly of metadata for articles published by our researchers. Lazy Scholar works beautifully (though where it links to is interesting).
My testing shows auto-detect can be a bit buggy: it may take a while to pop something up, and sometimes it is faster just to click the button manually. Even manually clicking the button will occasionally be slow.
That seems to be the main function, finding free full text via Google Scholar, but there are other features.
Display times cited and altmetrics
Lazy Scholar also tries to assist you in assessing the quality of the paper.
It also displays Google Scholar times-cited and Web of Science times-cited counts (the latter available if you are on campus at an institution that has Web of Science), both scraped from Google Scholar.
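Scraping the times-cited count is straightforward in principle: Google Scholar prints it as a literal "Cited by N" link under each result. A minimal sketch (the footer string below is a made-up example of that result footer):

```python
import re

def times_cited(snippet: str) -> int:
    """Parse the 'Cited by N' count out of a Google Scholar result footer."""
    m = re.search(r"Cited by (\d+)", snippet)
    return int(m.group(1)) if m else 0

# Footer text roughly as it appears under a Scholar result (example values).
footer = "Save  Cite  Cited by 142  Related articles  All 7 versions"
count = times_cited(footer)
```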
Altmetric scores are also included. I am not sure how useful this is; it was probably added because it was easy to do. Note that you have to opt in via the options.
That said the scraping of library links from Google Scholar is outright brilliant.
Ideally, on any page with article metadata, there would be some way to invoke the library's link resolver and be sent to the appropriate copy, or wherever you have access.
Adding the proxy doesn't solve the appropriate-copy problem, so while it works most of the time, it is not the most accurate approach and may fail even when the library has access.
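In OpenURL terms, "invoking the link resolver" just means building a query against the library's resolver base URL from whatever identifier the page exposes. A minimal sketch, assuming a hypothetical resolver hostname and carrying only a DOI:

```python
from urllib.parse import urlencode

# Hypothetical resolver base URL; each library has its own.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def openurl_from_doi(doi: str) -> str:
    """Build a minimal OpenURL (Z39.88-2004) request carrying a DOI."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_id": f"info:doi/{doi}",
    }
    return RESOLVER_BASE + "?" + urlencode(params)

link = openurl_from_doi("10.1000/xyz123")
```

The resolver then decides what the "appropriate copy" is for your institution, which is exactly what blind proxy-prefixing cannot do.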
Both Lazy Scholar and LibX provide ways to activate the link resolver, but in different ways and to different degrees.
LibX supports COinS as well as autolinking of unique identifiers like the DOI and ISSN, and basically leverages the link resolver, with no reliance on Google Scholar (the "magic button" function is the only thing that uses Google Scholar).
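COinS works by embedding an OpenURL ContextObject in the title attribute of an empty span with class Z3988; a plugin only has to find those spans and decode the metadata. A rough sketch, using a made-up COinS span as the input:

```python
import re
from html import unescape
from urllib.parse import parse_qs

# A made-up COinS span, roughly as a publisher page might embed it.
page_html = ('<span class="Z3988" title="ctx_ver=Z39.88-2004&amp;'
             'rft_id=info:doi/10.1000/xyz123&amp;rft.atitle=An%20Example%20Article"></span>')

def parse_coins(html: str) -> dict:
    """Extract and decode the first COinS ContextObject on a page."""
    m = re.search(r'<span class="Z3988" title="([^"]+)"', html)
    if not m:
        return {}
    # The title attribute is HTML-escaped, URL-encoded key/value pairs.
    return parse_qs(unescape(m.group(1)))

metadata = parse_coins(page_html)
```

Once decoded, the metadata can be appended to the resolver base URL, which is how COinS-aware tools turn a static citation into a live link.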
On the other hand, Lazy Scholar relies almost entirely on Google Scholar, pulling in free full text or, via the scraped library links, the library's link resolver.
This caters to users who Google (not Google Scholar) or otherwise manage to get to some page with article metadata and don't know where to get the full text. They can now use the link resolver (in a way) to get to the full text.
The main weakness I can see is that if something isn't covered in Google Scholar's index, Lazy Scholar can't do anything beyond adding the proxy. This is of less concern than you might think, because Google Scholar has one of the largest, if not the largest, indexes of scholarly material, so almost everything you come across in other sources will probably be indexed in it.
It is probably the librarian in me talking, but LibX still feels better to me since it handles books etc. It will be interesting to see whether LibX can incorporate this particular feature, though philosophically you can see the difference between the two.