The Route of Open Science

I want to revisit an issue from back when we were looking at JSTOR. In this post, we look at the practice of ‘peer review’ in scientific journals. There will also be a mini-Spotlight which focuses on ResearchGate.

The Peer-Review Process

The New England Journal of Medicine celebrated its 200th anniversary in 2012. To mark the milestone, the journal published a timeline of scientific advances first described within its pages, starting way back in 1816 with the stethoscope, followed by the use of ether for anesthesia (1846) and the disinfecting of hands and instruments before surgery (1867), among others.

But this isn’t so much the problem. The problem is that scientific journals have operated in one single way: research is done in private, then submitted to science and medical journals to be reviewed by peers and published for the benefit of other researchers and the public at large. But to many scientists, this is something of a thorn.

The system has been criticised by scientists as expensive and elitist. Peer review can take months, journal subscriptions can be prohibitively costly, and a handful of gatekeepers limit the flow of information. We saw in JSTOR how publishers can severely restrict access to content, both for the people who would find it useful and for the people who actually wrote it. It is an ideal system for sharing knowledge, said the quantum physicist Michael Nielsen, only ‘if you’re stuck with 17th-century technology.’ Perhaps a fitting metaphor for how draconian this method is.

There are calls for a more ‘open science’ approach, on the argument that science could accomplish much more, much faster, in an environment of friction-free collaboration over the Internet. These are their words, but perhaps they have a point. And despite a host of obstacles, including the skepticism of many established scientists, their ideas are gaining traction.

Open-access archives and journals like arXiv and the Public Library of Science (PLoS) exist for this sole reason. Another is GalaxyZoo, a ‘citizen-science’ site which has classified millions of objects in space, discovering characteristics that have led to a raft of scientific papers. There are collaborative sites too, such as MathOverflow, where mathematicians earn ‘reputation points’ for contributing to solutions. And who says scientists don’t like to socialise? ResearchGate, a social networking site that offers scientists the opportunity to answer questions, share papers and find collaborators, much like Facebook, is rapidly gaining popularity. So there are those who challenge the original way of working, and it seems they are having success.

A word from the editors of those traditional journals suggests that open science sounds ‘good,’ in theory. In practice, ‘the scientific community itself is quite conservative,’ said Maxine Clarke, executive editor of the commercial journal Nature, who added that the traditional published paper is still viewed as ‘a unit to award grants or assess jobs and tenure.’ Dr. Nielsen, 38, who left a successful science career to write ‘Reinventing Discovery: The New Era of Networked Science,’ agreed that scientists have been ‘very inhibited and slow to adopt a lot of online tools.’ But he added that open science was coalescing into ‘a bit of a movement.’

Scientists can be a ‘closed’ lot when it comes to publishing research findings. Perhaps it’s pride, or jealousy amongst their peers. You could say it’s human nature. Whatever the reason, there is evidence to suggest that the open science direction is gaining ground. At North Carolina State University, 450 bloggers, journalists, students, scientists, librarians and programmers converged for the sixth annual ScienceOnline conference. Bora Zivkovic, ‘chronobiology’ blogger and founder of the conference, stated that ‘science is moving to a collaborative model, because it works better in the current ecosystem, in the Web-connected world.’

Spotlight: ResearchGate

Yes, the internet itself is a key player in broadening the distribution channels (words of wisdom from Gabe Newell). Because of this, Zivkovic adds that scientists who attended the conference should not be seen as competing with one another. An open science approach would benefit everyone, without the need to fight others for publication space in journals.

Others have similar thoughts, such as 31-year-old Ijad Madisch, the Harvard-trained virologist and computer scientist behind the aforementioned ResearchGate. ‘I want to make science more open. I want to change this.’ And change he set out to do. ResearchGate started in 2008 with few features and was reshaped with feedback from scientist participants. Its membership base is now over 1.3 million users, which suggests there are plenty of people who believe this is the way.

To put things into perspective, it has attracted a tasty few million in venture capital from some of the original investors of Twitter, eBay and, funnily enough, Facebook. ResearchGate started with 12 employees; now it boasts 70. The company, based in Berlin, is modeled after Silicon Valley startups (re: Evernote). Lunch, drinks and fruit are free, and every employee owns part of the company. Ah, to work like that….

The site is best described as a hybrid mash-up of Facebook, Twitter and LinkedIn, with profile pages, comments, groups, job listings, and ‘like’ and ‘follow’ buttons, with the added bonus of being streamlined for its audience. No selfies, party photos and definitely no drunken nights either. Only scientists are invited to pose and answer questions, which isn’t difficult to enforce given that discussion threads cover the likes of polymerase chain reactions.

Scientists populate the profiles with their real names, professional details and publications, data that the site uses to suggest connections with other members. Users can create public or private discussion groups, share papers and lecture materials. ResearchGate is also developing a ‘reputation score’ to reward members for online contributions so that the rest of the community knows of their input.
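ResearchGate has not published how its planned ‘reputation score’ would be calculated, but the general idea of rewarding online contributions can be sketched in a few lines. Everything below (the contribution types and their weights) is invented purely for illustration:

```python
# Hypothetical sketch of a contribution-based reputation score.
# ResearchGate has not published its formula; these weights are invented.
CONTRIBUTION_WEIGHTS = {
    "question": 1,      # asking a question
    "answer": 2,        # answering one
    "paper_shared": 3,  # sharing a publication
}

def reputation_score(contributions):
    """Sum weighted contributions; unknown contribution types count for nothing."""
    return sum(CONTRIBUTION_WEIGHTS.get(kind, 0) * count
               for kind, count in contributions.items())

print(reputation_score({"question": 4, "answer": 10, "paper_shared": 2}))  # 30
```

The point of any such scheme is the same as the site's: making a member's input visible to the rest of the community at a glance.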


ResearchGate profile, showing founder Ijad Madisch

Perhaps what this shows most of all is how the site offers a simple yet effective end run around restrictive journal access with its ‘self-archiving repository.’ Since most journals allow scientists to link to their submitted papers on their own sites, Dr. Madisch encourages his users to do so on their ResearchGate profiles. In addition to housing more than 350,000 papers, the platform provides a way to search 40 million abstracts and papers from other science databases.

In 2011, the site reported 1,620,849 connections made, 12,342 questions asked and answered, and 842,179 publications shared. Greg Phelan, chairman of the chemistry department at the State University of New York, Cortland, used it to find new collaborators, get expert advice and read journal articles not available through his small university. Now he spends up to two hours a day, five days a week, on the site. Likewise, Dr. Rajiv Gupta, a radiology instructor who supervised Dr. Madisch at Harvard and was one of ResearchGate’s first investors, called it ‘a great site for serious research and research collaboration,’ adding that he hoped it would never be contaminated ‘with pop culture and chit-chat.’

Challenges to Open Science

Dr Sonke H. Bartling, a researcher at the German Cancer Research Centre who edited a book on ‘Science 2.0,’ asks: if open access is to be achieved through blogs, what good are they ‘if one does not get reputation and money from them?’ He writes that for scientists to move away from what is currently ‘a highly integrated and controlled process,’ a new system for assessing the value of research is needed.

The challenge of changing the status quo — opening data, papers, research ideas and partial solutions to anyone and everyone — is still far more idea than reality. As the established journals argue, they provide a critical service that does not come cheap, and one that has been functioning for centuries.

‘I would love for it (science journals) to be free,’ said Alan Leshner, executive publisher of the journal Science, ‘but we have to cover the costs.’ Those costs hover around $40 million a year to produce his nonprofit flagship journal, with its more than 25 editors and writers, sales and production staff, and offices in North America, Europe and Asia, not to mention print and distribution expenses. Like other media organizations, Science has responded to the decline in advertising revenue by enhancing its website offerings, and most of its growth comes from online subscriptions.

Similarly, Nature employs a large editorial staff to manage the peer-review process and to select and polish ‘startling and new’ papers for publication, said Dr. Clarke, its editor. And it costs money to screen for plagiarism and spot-check data ‘to make sure they haven’t been manipulated.’ Peer-reviewed open-access journals, like Nature Communications and PLoS One, charge their authors publication fees — $5,000 and $1,350, respectively — to defray their more modest expenses.

The largest journal publisher, Elsevier, whose products include The Lancet, Cell and the subscription-based online archive ScienceDirect, has drawn considerable criticism from open-access advocates and librarians, who are especially incensed by its support for the ‘Research Works Act’ that was introduced in Congress. This legislation aims to protect publishers’ rights by effectively restricting access to research papers and data, as we have seen in Spotlight: JSTOR.

Michael Eisen, a molecular biologist at the University of California, Berkeley, and a founder of the Public Library of Science, wrote that if the bill passes, ‘taxpayers who already paid for the research would have to pay again to read the results.’ Again, like the issues we saw in Spotlight: JSTOR. And remember that JSTOR also stores scientific journals and content. This bill would certainly have resonance here.

In response to the criticism, Elsevier’s director of universal access, Alicia Wise, wrote that ‘professional curation and preservation of data is, like professional publishing, neither easy nor inexpensive.’ Tom Reller, a spokesman for Elsevier, supported her words, saying that ‘government mandates that require private-sector information products to be made freely available undermine the industry’s ability to recoup these investments.’

The Future of Open Science

Scott Aaronson, a quantum computing theorist at the Massachusetts Institute of Technology (better known as MIT), has refused to conduct peer review for or submit papers to commercial journals. ‘I got tired of giving free labour,’ he said, to ‘these very rich for-profit companies.’ Dr. Aaronson is also an active member of online science communities like MathOverflow, where he has earned enough reputation points to edit others’ posts. ‘We’re not talking about new technologies that have to be invented,’ he said. ‘Things are moving in that direction. Journals seem noticeably less important than 10 years ago.’

Dr. Leshner, the publisher of Science, agrees that things are moving. ‘Will the model of science magazines be the same 10 years from now? I highly doubt it,’ he said. ‘I believe in evolution. When a better system comes into being that has quality and trustability, it will happen. That’s how science progresses, by doing scientific experiments. We should be doing that with scientific publishing as well.’

Matt Cohler, former vice president of product management at Facebook who now represents Benchmark Capital on ResearchGate’s board, sees a vast untapped market in online science. ‘It’s one of the last areas on the Internet where there really isn’t anything yet that addresses core needs for this group of people,’ he said, adding that ‘trillions’ are spent each year on global scientific research. Investors are betting that a successful site catering to scientists could shave at least a sliver off that enormous pie.

ResearchGate founder Dr Madisch understands that he might never reach many of the established scientists for whom social networking can seem like a foreign language or a waste of time. But wait, he said, until younger scientists weaned on social media and open-source collaboration start running their own labs. ‘If you had said years ago, “One day you will be on Facebook sharing all your photos and personal information with people,” they wouldn’t believe you,’ he said. ‘We’re just at the beginning. The change is coming.’

In conclusion, it appears that money is a root factor in the whole process. I can sympathise somewhat with publishers needing to recoup costs, as it is not a cheap business. The fact that it costs well into the millions to keep a publication running for a year means that many libraries (physical and digital) and other institutions are going to find it tougher to keep a subscription to them.

The efforts of ResearchGate and others can be seen as a start; whether they will bring real change to the system remains to be seen. Scientists are rightly annoyed that their work is restricted and reaching fewer people than hoped. An open science route could be the answer, but the question is: is it the best answer?

To read Thomas Lin’s article ‘Cracking Open the Scientific Process’ that formed the basis of this post:

To read an ‘introductory’ article by Laura McKenna ‘Locked in the Ivory Tower: Why JSTOR Imprisons Academic Research’ and which helped form Spotlight: JSTOR:

Finally, Scott Aaronson, mentioned above, wrote a rather cynical article ‘Review of The Access Principle by John Willinsky’ about the practices of publishing companies:


Spotlight: BuddyPress

This series continues to look at some websites in the vein of sharing and storing. In this post, we look at BuddyPress.


WordPress is a website platform aimed at prospective bloggers wanting to get their word out. Users choose which style and layout they’d like their blog to have and begin blogging away. Fancier styles usually have a cost involved, but the selection is so vast that there is something to suit everyone. WordPress is the most popular blogging system, registering more than 60 million websites. It is also the origin of this particular blog.

BuddyPress is the ‘expansion pack’ to the main game. It is a plugin that can be downloaded to make WordPress more like a social network. Plugins are a very popular area of WordPress; its plugin architecture gives users opportunities to expand and extend WordPress beyond its core. There are over 26,000 plugins, each of which offers custom functions and features enabling users to tailor their sites to their specific needs, such as the addition of widgets and navigation bars. But BuddyPress is perhaps the most prominent.

Conceived in 2008, BuddyPress was created to allow more social networking features to be implemented on top of the core programming, turning the blogging system into something with added ‘flavor.’ The first official stable release was in May 2009. The platform has grown and morphed considerably since then, into the dynamic, easily extensible package you see today.

Both WordPress and BuddyPress are open source, meaning there are no restrictions on their core code or engine. Everything from the core code to the documentation, themes and plugin extensions is built by the BuddyPress community. This means anyone can help the project by contributing their time and knowledge.

Described as ‘social networking in a box,’ BuddyPress is built to bring people together. It works well to enable people with similar interests to connect and communicate…just like a social network would. The developers themselves suggest some ‘fantastic uses’ for the plugin:

  • A campus wide social network for your university, school or college.
  • An internal communication tool for your company.
  • A niche social network for your interest topic.
  • A focused social network for your new product.

Nothing too surprising or unique about any of these uses. But much like Steam adds newer services to meet the demands of its community, so does BuddyPress to give its user base more things to try out. The average user rating for the plugin stands at 4 stars out of 5, and it has been downloaded more than 1,750,000 times.

BuddyPress provides a range of features that work right out of the box. However, you might decide that you only want to make use of a couple of features to start with. This is really simple as you can turn off the features you don’t want with a click of a button. When you disable features, the site’s theme will auto adjust, showing only the menu items, pages and buttons for the features you have enabled.

BuddyPress Capabilities

The plugin lets users sign up and start creating profiles, posting messages, making connections, creating and interacting in groups, and much more. A social network in a box, BuddyPress lets you easily build a community for your company, school, sports team, or other niche community.

As stated above, WordPress likes to take advantage of its numerous plugins to bolster its portfolio. BuddyPress boasts an ever-growing array of new features developed by an awesome plugin development community (which is open source, remember). There are more than 330 BuddyPress plugins available, and the list is growing every day. You can install any of these plugins automatically, using the plugin installer on the WordPress Dashboard.


Example of BuddyPress in use

Criticisms of the System

Not directly ‘criticisms,’ but BuddyPress has been met with some issues in the past that have contributed to people expressing their views.

Firstly, actual development of the plugin has slowed down. Part of the reason is that it happens part-time; however hard the developers try, the job is demanding, and with other commitments, progress has stalled. It appears BuddyPress hasn’t changed in over a year. Whether it needs to change is another matter. This is despite the fact that BuddyPress is open source: getting patches and fixes in from the community is not easy, since a large part of that community are not coders or programmers. I’ll be honest and say computing…hurts the eyes.

Another concern is that third-party developers and designers get little reward for their efforts. Presumptuous as this may sound, the current hierarchy works like this: plugin donations are extremely sparse. Thousands of people download BuddyPress plugins every day, and the developers hardly receive anything in return. Whether the developers actually want money is another story, but if you spend that much time and effort then perhaps you’d be expecting some financial reward. That realization is not present in 95% of the community, and it reflects on the activity around third-party plugin development. One designer, asked what he gets in return, put it like this: ‘I get about $20 bucks of donations, shit loads of feature requests and the occasional hatemail.’

Even contributors who were part of the community from the beginning have given up and moved on. Maybe the most painful example is the way Jeff Sayre’s ‘Privacy Component’ has turned out. Hundreds of hours of development seem to have been for nothing, due to pushed-back release dates, rude community members and overly long, demotivating discussions about the importance of such functionality. Sadly, this is the price you pay for having such a large, hungry community. There are those who give you their time, but generally, people can be so inconsiderate and disgraceful that you wonder why developers go through the trouble in the first place. It certainly was like that for Jeff and the community members who did try their best to get it out there for us to use.

BuddyPress contains a large selection of themes to choose from. And here lies another problem: creating and maintaining themes is a challenge. When you just mess around with the stylesheet and change a few things, it’s all good. But when you start modifying templates or actually want to create your own custom parent theme, you get into a lot of trouble.

The trouble starts with a tough learning curve; existing frameworks already have a base template structure set in stone, and premium BuddyPress themes are sparse. This goes a long way towards explaining the lack of (commercial) themes available for BuddyPress. The importance of ‘Premium Themes’ (the ones you pay for) should not be underestimated; they were a huge part of the success of WordPress.

Some efforts have been made by bigger players in the theming business, but these projects have turned out to be troublesome because of compatibility issues with BuddyPress itself. ThemeForest does not even accept BuddyPress themes, just like ThemeGarden and practically all other theme shops. Third-party plugins are based on the BP-Default template structure, and this has led to great difficulties in getting BuddyPress functional and compatible with other themes and frameworks.

Perhaps to summarize this issue, here’s a quote from Adii, founder of WooThemes, that underlines the position:

‘Our opinion on BuddyPress is divided: whilst we think it is a great platform, we honestly do not see widespread use for it and think that the functionality & features is overkill for 99% of websites. Just because it’s nifty to have your own little social network, doesn’t mean that every website should have this…So our opinion before was that BuddyPress is thus a very niche market and as far as niches go on WooThemes, we’ve always been reluctant to over-commit our internal resources to these. 

We haven’t seen more requests for BuddyPress popping up since we introduced Canvas BuddyPress. This has generally been the case when we’ve introduced something new (like our tumblog themes, which has seen us release multiple popular themes since the first two themes in March 2010). ‘

Besides this, there is a complete lack of new themes being released in the repository. It took months before actual BuddyPress child themes were accepted into the Theme Repository, due to features missing in the BP-Default theme. This meant that all BP-Default child themes were rejected, which in turn led to no new free themes being released for almost a year. In the end this was solved by some great people in the community, but it should have been something addressed and handled by the WordPress Core team. It seems the core team left BuddyPress to the dogs.

Speaking of which, the actual BuddyPress site – – is like a quarantine zone. The community is completely fractured into several streams of communication, and it’s impossible to keep track of everything that is going on. After seeing the site for myself, I can safely say that the ‘support’ section was clunky and cluttered, and several pages seem to have zero content. It seemed like it wasn’t updated regularly, and that is not a good sign. Using the site is like having three email accounts without any of them showing you whether you have new messages: you visit the forums, browse to your profile and check your favorite threads’ RSS feeds. It’s a usability nightmare.

I think what this highlights, more than anything, is the issues with the community itself. Having a large community using your system is a success, but it can also be a major problem. BuddyPress is designed with the community in mind, and if that community senses something is wrong, it spreads like fire. The development team is small, and while there are other contributors who provide support, they are too few to get noticed by the larger community. As I said, the community can be restless. The development team needs to be given more slack and recognition, because this is not an easy job; a huge issue right now is that the community does not see all of this work, with some contributors getting more light than others, leading some to think ‘why bother.’

For the criticisms section, I used bowe’s article ‘The current state of BuddyPress: A critical analysis,’ which is featured here:

Tammie Lister provides a counter-argument. She appears to have had some input in BuddyPress, and even commented on bowe’s article. I don’t know if the problems were actually fixed, but I felt it would be useful to include this to give another side:

Spotlight: Evernote

We continue this series of looking at examples of various sharing and archiving websites. In this post, we look at Evernote.


Founded by Azerbaijan-born Stepan Pachikov, Evernote launched its open beta in June 2008. The current CEO is Phil Libin, whose activities largely contributed to what Evernote is today. He had started two companies before, selling both, and began thinking about a third venture, using the experience from the first two. He knew two things: he didn’t want to be bored, and he didn’t want to merely make money. ‘What’s cool is impacting a billion people. Whatever I ended up doing, I wanted people to get excited about it. I wanted long lines forming for it.’

The earliest form of what would become ‘Evernote’ started out as a back-of-the-mind question: how do we remember something, like the name of a restaurant? The human brain is a complex thing, but it still has limited memory capacity. And that’s a growing problem in an age in which information in all forms comes flying at us at ever-faster rates, and you’re not sure which of it will prove useful.

Laptops and smartphones can compensate for this issue, yet only if people take the time to input the information in the right place, file and format. This is not fail-safe. Is the name of that restaurant on my laptop or phone? Libin began to think about what a better electronic memory would be like. You could put in information in any form, and you could instantly get the information onto any of your devices on the fly, without worrying about how to organise it. More important, you would be able to find whatever it is whenever you need it, as effortlessly and intuitively as we now dig up stuff via Google.

In 2006, he pulled together a crew (the same one from his previous companies) to start a new company called Ribbon. They very quickly discovered an almost obscure Silicon Valley start-up called Evernote, created around tools to extract text from photos, so that you could take pictures of notes and make them searchable. It was led by a brilliant techie named Stepan Pachikov. Liking the idea, Libin arranged to meet with Pachikov, and an agreement was reached that rather than compete, it would be beneficial to both parties if they merged. Libin became CEO of the company; Pachikov remained the founder, although he gradually shifted his focus to other projects.

In 2008, they released a ‘private beta’ version for Silicon Valley insiders. Word started to get around pockets of Silicon Valley about this cool new app that helped you remember stuff. It worked more or less as Libin had envisioned. Most types of information can be added to Evernote in a few seconds, from any computer or smartphone, and in most formats. The software takes it from there, sucking the data into Evernote’s servers as well as storing it on your computer. The system also labels the incoming data with any information that could come in handy, including when it was added and where you were when you added it. Any visible text in a photo becomes searchable. ‘Before I go to the supermarket, I take a snapshot of the list my wife has on the refrigerator,’ says Daniel Kuperman, CEO of Aprix Solutions, a Silicon Valley Web start-up, and an early user of Evernote.

However, there was a big concern. Evernote was being pitched as a so-called ‘freemium’ service; people could either use it for free or upgrade to a paid premium version, which is how the company would make money. The problem, as others saw it, was that Libin refused to ‘downgrade’ the free version: it was full featured, seemingly rendering the point of upgrading void. Libin’s reasoning: ‘The more stuff you put in Evernote, the more important the service would be to you. Who would begrudge $5 a month to a company that was storing your memories and helping you retrieve them?’ He argued the real danger was that people wouldn’t try the service in the first place, or wouldn’t stick with it, because a crippled free version failed to impress. This fell on deaf ears. As more and more people opted for the free version, losses mounted. A potential $10 million investment from a firm collapsed at the last minute, and Libin considered shutting down.

In mid-2009, Libin met with Gary Little of Morgenthaler Ventures, who had become interested after hearing about Evernote from a friend. Libin stunned the group with a series of slides that Little calls ‘one of the best analytical dissections of a business I’ve ever seen.’ Libin showed that Evernote users became more likely to upgrade over time; the upgrade rate was an impressive 8 percent. He also showed how often an average user was actually using Evernote. Most people who try an app abandon it pretty quickly or use it less frequently as time goes on. But for Evernote, the curve was a smile – not only because active users were finding the service more and more useful, but also because those who had stopped using the service were returning to it.

Morgenthaler invested. So did Sequoia Capital, another top Silicon Valley VC firm. Altogether, Evernote has raised $95 million in a short period. ‘We didn’t need most of the money,’ says Libin. Evernote didn’t need it because the company became profitable early in 2011, hitting 10 million users and reaching annual sales of about $16 million. Evernote doesn’t do anything to encourage people to pay it, which is one of the reasons it’s so popular. The irony and genius of that nonsales strategy, of course, are that so far, at least, it has resulted in terrific sales. The long-term conversion rate can now be tracked out to three years, and it turns out to be more than 15 percent.
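That growth in conversion over time makes sense once you model upgrades per cohort: if some small fraction of the remaining free users upgrades each month, a cohort's cumulative conversion rate keeps climbing with account age. Here is a toy sketch of that effect; the 0.5% monthly rate is invented for illustration, not Evernote's actual figure:

```python
# Toy cohort model: each month, a small fraction of the remaining free
# users upgrades, so cumulative conversion rises with account age.
def cumulative_conversion(monthly_rate, months):
    free = 1.0       # fraction of the cohort still on the free plan
    converted = 0.0  # fraction that has upgraded so far
    for _ in range(months):
        upgraded = free * monthly_rate
        converted += upgraded
        free -= upgraded
    return converted

# With a hypothetical 0.5% monthly upgrade rate, conversion keeps climbing:
for months in (12, 24, 36):
    print(months, round(cumulative_conversion(0.005, months), 3))
```

Even a tiny monthly rate compounds: the longer a cohort sticks around, the larger the share that eventually pays, which is the shape Libin's slides described.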

The challenge is to make Evernote into something that is beyond a ‘new fad.’  Part of the answer, Libin believes, is to expand the company beyond simply being a way to remember stuff. ‘We want to go from being one app to being a family of apps, all of which have something to do with memory.’ In the long term, Libin wants to figure out how to do more to help people remember, learn, understand, and communicate. He would like to see Evernote be able to recognize objects and faces in photographs, and perhaps one day even recognize smells. ‘Your own brain,’ says Libin, ‘might end up being the last place you search for information.’

Most of the information came from the article ‘Evernote: 2011 Company of the Year’ by David H Freedman. To see the article:

What is Evernote?

As stated above, Evernote is software designed for notetaking and archiving all kinds of information, from lifelong memories and vital information to daily reminders and to-do lists. Everything stored within your Evernote account is automatically synced across all of your devices, making it easy to capture, browse, search and edit your notes everywhere you have Evernote, including smartphones, tablets, computers and on the Web. Its success stems from a combination of sophisticated note-taking software, ‘Dropbox-like’ cloud storage and an intuitive universal search function.

On supported operating systems, Evernote stores and edits the user’s notes on their local machine. Your work can be accessed on every device or computer you use, meaning you can work at home and in a library, for instance. This allows easier storage and accessibility, as well as making it easier to ‘remember things you like,’ the theme of Libin’s that started Evernote.

You can also save entire webpages to your Evernote account with nifty ‘web clipper’ browser extensions. You get the whole page: text, images and links, as per your preference. Collect information from anywhere into a single place, from text notes to web pages to files to snapshots. You can also share your notes and collaborate with friends, colleagues and classmates on larger projects. A ‘note,’ as you can imagine, can be anything from a piece of formatted text to a full webpage or excerpt, a photograph or a voice recording. These notes can be sorted into folders, then tagged, annotated, edited, given comments, searched and exported as part of a notebook.

Evernote Capabilities

After a series of updates, Evernote has been refined to be more streamlined and easier to use. Evernote 5.0 brings a beautiful, more intuitive UI to the experience. Now you can save, sync, search and share your memories more easily than ever. A new Sidebar feature gives you quick access to everything in an account, like a navigational tool. Shortcuts can be added quickly to the Sidebar, so that the notes and notebooks you use most are always at hand. Its search capabilities, one of Evernote’s strong points, have been upgraded: Type Ahead search makes finding notes even easier by suggesting keywords and phrases, searching all Shared Notebooks, and offering advanced search filtering options.


Evernote Home Page

In terms of creating new notes, users click the ‘+ New Note’ button, which creates a note in the default notebook. Once created, the note is automatically saved to Evernote and synced across all used devices. Just about anything can then be added to these notes as attachments, such as files, images and audio. To add an attachment, users either drag and drop it into the body of the note, or use one of the following buttons in the Note Editor: audio, snapshot or file.

An important issue is that when users make changes to their work, they want those changes applied across all their devices. The Sync feature ensures that everything in an Evernote account is always available to view, edit and search everywhere, including smartphones, tablets, computers and on the Web. Synchronization is automatic, although there is an option to make it manual.

So once you have a collection of notes, there are different ways to go about searching for them. You may browse notes easily by note, notebook, tag, location, and more using the Sidebar. Click ‘Notes’ to browse all notes in an account at once.

‘Notebooks’ are collections of relevant notes kept together (like a diary of sorts). This is the most common way people organise notes, separating them by category, location or purpose. Your private notebooks show the notebook name and the number of notes they contain. For example, you might create one notebook called ‘Work Notes’ and one called ‘Personal Notes’ to keep these types of notes separate and easier to find. A ‘Share Notebook’ feature allows the contents of a notebook to be seen and edited by others; useful when working on a collaborative project. ‘Notebook Stacks’ are an optional way to organise multiple notebooks by grouping them into sets, commonly used for notebooks that share a broad topic or related themes. All Evernote accounts have one default notebook that is automatically created with the account.
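To make the hierarchy concrete, the structure described above (stacks grouping notebooks, notebooks grouping notes) can be sketched as a simple data model. This is a purely hypothetical illustration, not Evernote’s actual implementation; all class and variable names are my own invention:

```python
# Hypothetical sketch of the stack > notebook > note hierarchy.
class Note:
    def __init__(self, title, body=""):
        self.title, self.body = title, body

class Notebook:
    def __init__(self, name):
        self.name, self.notes = name, []
    def add(self, note):
        self.notes.append(note)
    def __len__(self):  # "the number of notes they contain"
        return len(self.notes)

class Stack:
    """Groups notebooks that share a broad topic."""
    def __init__(self, name, notebooks=None):
        self.name = name
        self.notebooks = notebooks or []

# The 'Work Notes' / 'Personal Notes' example from the text:
work = Notebook("Work Notes")
work.add(Note("Meeting minutes", "Discussed the Q3 roadmap."))
personal = Notebook("Personal Notes")
stack = Stack("Everything", [work, personal])

print(len(work))                              # 1
print([nb.name for nb in stack.notebooks])    # ['Work Notes', 'Personal Notes']
```

The point is simply that a stack never contains notes directly; it only groups notebooks, which is why the text calls stacks ‘an optional way to organise multiple notebooks.’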


The ‘Notebook’ Classification

You can also browse using ‘tags’ by selecting the appropriate tag from the Sidebar. Note that if there are no tagged notes in your Evernote account, this option will not be visible in the Sidebar. To view all of the notes associated with a tag, double-click the tag; the notes will be shown in the Note List. Tags are an optional way to associate keywords with notes and improve searchability. One or more tags can be added when a note is created or at a later time. Common uses for tags include associating notes with categories, memories or locations. ‘Shortcuts’ are the easy way to navigate directly to items, like a…shortcut. You can create Shortcuts for notes, notebooks, Notebook Stacks and tags, up to a maximum of 250.

Once you have organised your work, you can also search it. Everything in Evernote is fully searchable, and there are different search parameters you can try depending on your query. The most common is ‘Type Ahead’ search: just start typing in the Search notes field, and a drop-down will display suggestions based on the contents of your Evernote account, including keywords, notebooks, tags and shortcuts. Users can also use the advanced search options, which let searches be narrowed by what is being searched for and where it sits in an account. These options are great if you have hundreds or thousands of notes in your Evernote account, but can be useful for anyone.
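The ‘Type Ahead’ idea above boils down to prefix matching: as the user types, suggest items whose names start with what has been typed so far. A minimal sketch, assuming the candidates are simply the notebook names, tags and keywords in an account (this is an illustration of the concept, not Evernote’s real search code):

```python
# Minimal "type ahead" sketch: suggest candidates matching the typed prefix.
def type_ahead(prefix, candidates):
    """Return candidates starting with `prefix`, case-insensitively, sorted."""
    prefix = prefix.lower()
    return sorted(c for c in candidates if c.lower().startswith(prefix))

# Hypothetical account contents: notebook names, tags, keywords.
candidates = ["Work Notes", "Personal Notes", "wine list", "travel"]

print(type_ahead("w", candidates))    # ['Work Notes', 'wine list']
print(type_ahead("tra", candidates))  # ['travel']
```

A real implementation would use an index such as a trie rather than scanning every candidate, but the user-facing behaviour is the same.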

Criticisms of the System

The big issue for Phil Libin and his team when Evernote was starting up was the difference between the ‘free’ and ‘premium’ accounts. His insistence that the free version be kept the same as the premium version, on the grounds of making first impressions excellent, was a pitfall that nearly sank the company: with a fully-stocked free version there was little point in upgrading to the paid account, and the company started losing money.

That’s changed now. The free online service has monthly usage limitations (60 MB/month as of 2013) and displays a ‘usage’ meter. A premium service is also available at $5 per month or $45 per year for 1,024 MB/month of usage as of 2013. Premium features include faster word recognition in images, greater security, and text searching within PDF files. Another advantage of the premium service is more options in the sharing process. Both free and premium Evernote users can share notebooks privately with other Evernote users. However, notebooks shared by premium users have the added benefit that the premium user can give even non-premium users permission to edit the contents of the shared notebook. Non-premium users can share notebooks, but cannot give others permission to edit them.

The most glaring absence from the free version is the ability to password-protect your notes. This may be a ‘game changer’ for clinicians or researchers, who often keep notes that need to be secured. This, perhaps, should be implemented across the whole model, as no-one likes to have their work corrupted or lost. The issue was highlighted in March 2013 when Evernote was ‘hacked.’ The attackers gained access to the network and to personal information of the user base, including passwords, email addresses and usernames. Evernote urged users to quickly reset their passwords and stated it would investigate and reinforce its security protocols. The other issue is that the free version does not allow offline access to your notes. Given that some places have poor signal, access to your work can be difficult or impossible.

Spotlight: Scrivener

Hello, and we continue our look at some more examples of digital libraries and other similar software that allows sharing. In this post we look at Scrivener.


Keith Blount, the creator and designer, founded a company called Literature and Latte in 2006. He is a writer and author, and the website discusses this theme predominantly. But like so many others in the profession, he often faced creative difficulties with writing long pieces of text, such as writer’s block (his PhD, for example, which he never completed for a number of reasons).

Once he started the company, he decided to develop software to help others facing similar problems. This became Scrivener, set up as a tool to help writers organise their content more effectively. Once Scrivener was released, Keith discovered that he wasn’t alone in wanting such a tool in his workflow. As a result, the company became a small development team of like-minded individuals working towards the same goal.

This team consists of project manager Keith, who is responsible for ongoing development of Scrivener and their other program, Scapple. David manages sales, marketing and accounts – the PR and marketing issues. Ioa works on a freelance basis, often assisting with website interface, support and testing; he was also one of Scrivener’s early beta-testers.

Lee is the Windows developer, helping port the program to Windows. He is also the author of Passion Driven. Julia is a journalist who handles the press releases and sponsorships. She is also Keith’s wife. Jennifer is also freelance, supporting Lee with the Windows development and with the Mac porting. Finally, Tiho also supports Lee with the coding mechanics.

What is Scrivener?

Scrivener was born out of Keith’s own needs as a writer facing the difficulties of writing. Essentially it is a compact database for users to keep track of the work they accumulate, very much designed with fellow writers and authors in mind.

It helps users organise notes, documents, texts, concepts and research with easy access and reference (including text, images, PDFs, audio, video, web pages, etc.). After writing a piece of text, the user may export it to a standard word processor for formatting later.

Scrivener helps writers from their earliest drafts straight through to the final edit. Outline and structure ideas, take notes, view research alongside your writing and compose the constituent pieces of text in isolation or in context. It makes all the tools you have scattered around available in one application.

A ‘binder’ on the left side allows users to navigate between different parts of a manuscript, notes or research materials with ease. Long texts can be broken down into smaller component parts, and restructuring drafts is as simple as drag and drop. Select a single document to edit a section of a manuscript in isolation, or use ‘Scrivenings’ mode to work on multiple sections as though they were one. Scrivener makes it easy to switch between focussing on the details and stepping back to get a wider view of the composition. With access to a powerful underlying text engine, you can add tables, bullet points and images, and mark up your text with comments and footnotes.

Unlike word processors, Scrivener allows you to work in a nonlinear order, as opposed to page 1 through to the end. Users can enter a synopsis for each part on a virtual index card, stacking and shuffling the cards in the corkboard until the most effective sequence is found. These synopses act as prompts as you write. Alternatively, you can write everything down in a first draft and then break it into pieces for rearrangement on the outliner or corkboard. Create collections of documents to read and edit related text without affecting its place in the overall draft; label and track connected documents or mark what still needs to be done.

One of the most annoying things for most people is switching between different windows when working. I know this well: I would have a word processor open along with several webpages of information I needed, and that constant window-switching was not ideal. It appears Keith agreed. There is no more switching between multiple applications to refer to research files: keep all of your background material—images, PDF files, movies, web pages, sound files—right inside Scrivener. And unlike programs that only let you view one document at a time, in Scrivener you can split the editor to view research in one pane while composing your text right alongside it in another. Transcribe an interview or conversation, make notes on an image or article, or just refer back to another chapter, all without leaving the document you’re working on. Keep everything in one single window!

The program also provides tools to help people get their manuscripts ready for publishing (the end product of writing). Once you’re ready to go, control everything from how footnotes, headers and footers appear to fine-tuning the formatting of each level of your draft—or keep it simple by choosing one of Scrivener’s convenient presets. Print a novel using standard manuscript formatting, or export your finished document to a wide variety of file formats, including Microsoft Word, RTF, PDF and HTML—making it easy to share your work with others.

Scrivener Capabilities

As mentioned before, features include a corkboard, the ability to rearrange files by dragging-and-dropping virtual index cards in the corkboard, an outliner, a split screen mode that enables users to edit several documents at once, a full-screen mode, and ‘snapshots’ (ability to save a copy of a particular document prior to any drastic changes). We can say that Scrivener is more than just a word processor – a literary ‘project management tool.’

The cork notice-board (corkboard) is one of the writer’s most familiar organisational tools. Pre-Scrivener, index cards were not connected to anything, meaning that changes to the card sequence would need to be made manually in the draft. With Scrivener, each document is attached to a virtual index card; moving the cards on the corkboard rearranges their associated text in your draft. Mark common themes or content using labels, or stack cards to group related documents together. The corkboard gives you the flexibility of a real notice-board while automatically reflecting any changes you make in your manuscript.
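The key idea above is that each card and its section of text are one object, so reordering the cards reorders the draft. A minimal sketch of that behaviour (my own simplified model, not Scrivener’s actual internals; the card titles and texts are invented):

```python
# Each virtual index card carries its section's text, so reordering the
# card list reorders the assembled draft automatically.
cards = [
    ("Opening scene", "It was a dark and stormy night..."),
    ("Twist",         "The butler did it."),
    ("Finale",        "They all went home."),
]

def assemble_draft(cards):
    """Join the sections in card order to produce the current draft."""
    return "\n\n".join(text for _title, text in cards)

# 'Drag' the twist card to the end of the corkboard:
cards.append(cards.pop(1))

print([title for title, _ in cards])
# ['Opening scene', 'Finale', 'Twist']
print(assemble_draft(cards).endswith("The butler did it."))  # True
```

Pre-Scrivener, the two lists (cards and manuscript) were separate, and a reshuffle of one had to be repeated by hand in the other; binding them together is the whole trick.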


Scrivener ‘Corkboard’

The ‘outliner’ is a powerful tool for editing the synopses and metadata of documents. Organise your ideas using as many or as few levels as you want, and drag and drop to restructure your work. Check word counts and see what’s left to do using the Status column. Scrivener’s outliner is easy on the eyes too, making it ideal for reading and revising an overview of a section, chapter or even the whole draft.

‘Scrivenings’ (nice word invention) allows users to move smoothly between editing a document one piece at a time or together as a whole. Novelists can write each scene in a separate document or whole chapters as one; scriptwriters can work scene-by-scene or act-by-act; academics can break down their ideas into individual arguments. Scrivenings mode allows you to collect the constituent components into a single editor, so that you can edit them as though they were all part of one document.

Everyone makes mistakes, and writers are no different. The ‘snapshot’ feature provides a means for users to return to an earlier version of their work should they make a mistake. Before embarking on a major edit, take a snapshot and you’ll be able to return to the current version any time you want. This is similar to a checkpoint, assuring users that there is an insurance option should things go wrong.

A real perk is the ability to go fullscreen. One click in Scrivener’s toolbar and you can leave the rest of your desktop behind. Fade the background in and out, choose the width of the ‘paper’ and get writing. Prefer an old-school green-text-on-black look, or maybe white text on a blue background? Flexible appearance options mean you can set up full-screen mode as you please. This keeps things focused and clear, without the clutter of something like Word.


Full-screen mode

Where Does Scrivener Stand?

As a writing tool for fellow writers, Scrivener has been keenly taken up by many people. It has received praise from The New York Times, Wired, Macworld, The Seattle Times and AppleGeeks, amongst others. Review websites rate Scrivener very highly, with common criticisms pointing to the myriad of options, which can seem overwhelming to first-time users. However, most agree that once you get used to the system, the capabilities show themselves. Here’s a review I found that sums up the general consensus:

‘But that’s only until you realize the power of Scene-by-scene synopses and the sheer amount of side notes, research links, tags and info you can cram around your document. The left pane is called the Binder, and its structure is entirely up to you. You can create as many folders as you wish, and while you feel a little lost at first, you quickly get used to the freedom. You are provided with two default views for opened Binder documents: Corkboard and Outline, both of which are powerful and concise and provide you with an instant bird’s-eye-view of where your project is going.

But that’s not where this app truly shines. Full screen support is simple to implement and, you ask, pretty hard to get wrong; but Scrivener’s full screen features blows the competition straight out of the water. Upon launching it, you’re presented with a basic page right in the middle of your screen. This page’s width and color can be adjusted and you also get the very cool option of having your document notes, project notes and tags inspector right there besides your manuscript.’    – roelani,

Scrivener is a perfect example of understanding the needs of the consumer. It certainly helped that Keith Blount was a writer himself and knew what others were facing, giving him a unique advantage in knowing what to offer. Scrivener started out as a personal desire to solve a problem, and expanded into a fully-fledged program that keeps refining itself to be as useful as possible to other people.

Spotlight: JSTOR

Following on from digital libraries, I thought it’d be worthwhile to examine an example or two, to see how they operate. One of the most recognisable is JSTOR.

History and Formation

JSTOR (Journal Storage) is a digital library set up in 1995. It now provides digitised books and journals to more than 8,000 institutions in more than 160 countries. Files are stored in PDF format, meaning you need a reader such as Adobe Acrobat to view them; however, most computers come with one pre-installed anyway.

Its history can be traced to a time when physical libraries struggled with the sheer number of academic journals. As more and more journals were published, libraries found it increasingly expensive and time-consuming to maintain the growing collections. JSTOR stepped in with a proposition to help libraries by digitising their massive collections. As a result, JSTOR allowed libraries to outsource storage for the long-term future, while the transition to the internet improved access dramatically. JSTOR access was improved based on feedback from its initial sites, and it became a fully searchable index accessible from any ordinary Web browser.

Due to this initial success, the base of participating journals expanded, although the process was by no means quick. JSTOR content is now provided by more than 900 publishers, with the database containing over 1,900 journal titles spanning 50 disciplines.

Business Model and Operation

JSTOR stresses that it is a not-for-profit business, which was established to help academic libraries and publishers with the workload of storage and converting print to digital content. The company is not a publisher in itself (meaning content is published by either the authors or publishing houses), and also does not take exclusive rights to any content shown.

This is different from the operations of Steam or Facebook, which largely own the rights to content published on their sites. JSTOR acts as an intermediary, allowing owners to store their work while keeping the copyrights. Whatever is shown is subject to the owner’s permission, although consent is implicit in allowing the work to be shown in the first place. Think of it like a safety-deposit box, with the owners holding the key.

To remain in business, JSTOR instead charges fees to libraries and institutions. These organisations are frequent partners of JSTOR, numbering thousands across the world. Fees are scaled according to the type of institution in question: research universities contribute much higher fees than small colleges, while community colleges, public libraries and high schools pay much less for access to the same privileges. For example, small high schools pay only $750 annually, compared to the thousands paid by research universities. Some places even have free access.

The model is designed to provide the widest possible access to scholarship. The money made from fees is used to fund newer technologies that support the use of content on the site, to provide outreach and support for constituents, and to pay license fees to content owners.

Accessibility and Content Provided

One of the perks, and an original goal, of JSTOR is to provide this content to those who are not necessarily part of a university or research institution. Following on from above, aside from paying considerably lower fees, general libraries offer walk-in users free access under a specific scheme. This broadens JSTOR’s services to casual users. In addition, free access is provided to hundreds of thousands of articles in the public domain.

Not only does this broaden access, it also boosts the coverage and exposure of the website. The JSTOR team regularly conducts tests and checks with users on the functionality and accessibility of the website, and PDF files are fully tagged to improve their accessibility. This ensures the best possible chance that people use the services and that they work correctly.

Most of the content actually comes from the social science and humanities disciplines. A common misconception is that these kinds of articles are funded by governments and other interested parties, and should therefore be free to view. Some journals are funded, but there is no easy way of telling which, because content comes from such a range of disciplines.


JSTOR Home Page

Most of the publishers of the journals are themselves non-governmental, not-for-profit organisations. And even where journals are funded, this would not necessarily cover the costs of digitising the content, making it easily searchable, and preserving it for the long-term future.

JSTOR has made steady progress in expanding options for people. With the digital boom, there has been increased awareness of and interest in academic content from both academia and the general public (e.g. via Google search). As early as 1999 – before broadband – JSTOR set up a program in which more than 110 publishers could provide access to their complete collection of 350 journals on JSTOR directly to individuals, some as a benefit of society membership and some for a fee.

Speaking of Google, JSTOR has seen its usefulness in this goal. In 2006, JSTOR made an agreement with Google to index its entire full-text content. This facilitated access for students and faculty using Google for search, and also introduced JSTOR to millions of people around the world: search results within Google now include JSTOR content, allowing people to discover the website this way.

When JSTOR first started establishing itself, the developers could not have anticipated how technological advancements would play out. Worldwide demand for knowledge and information expanded rapidly with the world wide web, especially from those not affiliated with any institution. Having now clearly seen the potential, JSTOR is busy working with partners to meet this demand.

It set up a ‘publisher sales service’ which allows publishers to make individual articles available ‘for sale’ through JSTOR (numbering around 850 currently). The price for purchasing individual articles is set by each publisher and includes a flat fee to cover JSTOR’s costs for providing the service.

It also operates a service known as ‘Data for Research’ (DfR), intended for the research community. DfR is a set of web-based tools for selecting, searching and interacting with content from the JSTOR archive, and also provides the ability to obtain data sets via bulk downloads. DfR allows full-text and fielded search of the entire JSTOR archive via a powerful search interface (it would need to be powerful to cover the range), letting users quickly and easily define content through iterative searching and results filtering.

Criticisms of the System

JSTOR is clearly successful, but it is not without its limitations. Ironically, despite the deep wealth of content in its archives, the availability of many of these journals is controlled by a ‘moving wall’ business model. The moving wall is essentially an embargo: within academic publishing, an embargo refers to the period during which access to content is restricted for certain people.

The purpose of this is to protect the revenue of the publisher. The embargo separates the most recent period (for which a subscription is needed) from an older period, where anyone can view the article. This acts as a virtual barrier so that new, revenue-generating content is not diluted with the old. The period is usually between 2 months and 5 years.

The moving wall is the gap between the last issue of an academic journal shown in an online database or digital library and the most recently published print issue. It is specified by publishers in agreement with databases, and generally ranges from several months to several years. In other words, the moving wall is the agreed-upon delay between the current volume of the journal and the latest volume available on JSTOR.
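The arithmetic of a moving wall is simple: with a wall of N years, the latest volume in the archive always trails the current volume by N years, and everything older is viewable. A small sketch (the journal name and wall length are invented for illustration):

```python
# Sketch of a "moving wall" embargo: the archive always trails the
# current volume by a fixed number of years agreed with the publisher.
def latest_available_year(current_year, wall_years):
    """Newest volume year viewable in the archive."""
    return current_year - wall_years

def is_viewable(volume_year, current_year, wall_years):
    """A volume is behind the wall (viewable) once it is old enough."""
    return volume_year <= latest_available_year(current_year, wall_years)

# Hypothetical journal with a 3-year moving wall, checked in 2013:
print(latest_available_year(2013, 3))       # 2010
print(is_viewable(2009, 2013, 3))           # True  - behind the wall
print(is_viewable(2012, 2013, 3))           # False - still embargoed
```

Because the wall ‘moves’ with the current year, a 2012 volume that is embargoed in 2013 becomes viewable in 2015; a ‘fixed wall,’ by contrast, freezes `latest_available_year` at a set date forever.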

Because JSTOR gives a lot of control to publishers, they can change the period at will. Formerly, publishers could also request that the ‘moving wall’ be changed to a ‘fixed wall’ – a specified date after which JSTOR would not add new volumes to its database. In metaphorical terms, the fence becomes a solid brick wall. These fixed-wall agreements are still in effect. This, of course, seems to go against JSTOR’s goal of increased accessibility for all: there is a difference between how many people can see that content exists, and how many are able to read it.

The publisher is the key player here, and we need to understand their mindset. They need money and distribution channels to sustain their (usually niche) market. To make cash, the publisher sells the rights to an academic search-engine company (JSTOR, for one) and in the process becomes highly profitable. They become highly profitable because, unlike traditional publishing, the publisher does not have to pay the writer or editor; it only has to cover the costs of typesetting, printing and distribution.

JSTOR then sells the content it has digitised back to university libraries, to recoup the costs of digitisation (via the fees model). Remember that these institutions are among those that must pay a very hefty fee, and a substantial part of a university library budget is devoted to subscriptions. It’s not surprising to hear that university libraries can pay a one-off charge of $45,000 and then $8,500 every year after (at least as far as UC San Diego is concerned).

But the point is that the universities which created this academic content for free must pay, quite dearly, to read it. The general public, the very people JSTOR is trying to reach, are often barred from viewing certain content: general libraries cannot afford to pay that much year after year and still stay in business, and the media also face restrictions. There is resentment from academics that, after years of research, their work actually reaches fewer people than they hoped. It is staggering that, by JSTOR’s own account, it turns away on average 150 million attempts to view articles.

We should be careful not to blame this solely on publishers; everyone has to make a profit to continue. But parallels can be drawn with the music industry and the way royalties and fees are distributed down the pyramid. In many ways, publishers act as an inhibitor to accessibility, and quite rightly there are criticisms of this, since it contradicts JSTOR’s own aims. Whilst not all content comes under this scrutiny, a substantial amount is affected. The challenge would be to devise a way in which such restrictions are removed.

Project Discussion Part VI: Digital Libraries

Hello everyone, and we’re nearing Christmas. Thought I’d get that out there; yes, it’s been a while. In this post, we discuss digital libraries.

It was important to talk about this topic because it is essentially what I was proposing as my project: allowing academics to upload their work to a large database. We’ve talked about social networks because I felt they were relevant to this discussion, but at the same time I didn’t want my project to be too much like one. My original plan was to offer people a service that lets them store and share work. That is what a digital library can do.

Q: What is a Digital Library?

Libraries have always been a hub of information for many people, providing professionals trained to distinguish and verify content, build collections and offer a reference and information service. They still are, but as we’ve moved into the technological age, more and more information is being digitised. Therefore, as with most things, libraries are increasingly becoming digital themselves.

With this information boom, there are now more opportunities to build and share knowledge in electronic formats. The concept of a ‘world library for the blind’, for instance, rests on the ability of digital libraries to share and coordinate collection-building resources and to use digital technology to share content. In other words, digital libraries must be designed effectively to do this job.

Technology changes the way a library is organised and delivered. Essentially, the library still functions as a place of storage for organised content; digitisation is a means of ensuring that its collections are preserved and accessible to all, regardless of disability or affiliation. The digital library acts as the critical point of contact between the information provider and the information consumer (the user).

This system is what allows people to navigate their way through the database. Like a physical library, digital/electronic libraries store content, but in a digitised format. Digital libraries are closely associated with academic institutions as a means of storing mass volumes of work. The term ‘born-digital’ means that a work has always existed in a digital format. Many of these works are free for the public to view, with few restrictions. Perhaps the real crux is that they allow people not associated with an institution (non-students) to view these works if necessary.

Digital libraries take advantage of the internet as a source of content and a means of distribution (remember, a ‘broadening of the distribution channels’). The internet has profoundly changed information services for users and libraries. Publishers of content, trade books and magazines, electronic journals and electronic databases all offer new opportunities for acquiring, managing and distributing accessible content.

Q: What Are the Aims of a Digital Library?

Due to the size of their collections, digital libraries commonly integrate a search system within their database to make it easier for people to find what they’re looking for. They often use the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), through which they expose their content to external libraries and search engines. OAI-PMH is used to ‘collect’ metadata descriptions of the records in archives, so that services can be built using information from many archives. It’s why you can see works in services like Google Scholar that exist outside the library.
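As a rough illustration of how a harvester talks to a repository: OAI-PMH is just HTTP requests with parameters such as `verb=ListRecords` and `metadataPrefix=oai_dc` (those two are part of the real protocol), returning XML metadata records. The repository URL below is hypothetical, and the XML is a heavily trimmed stand-in for a real response:

```python
# Hedged sketch of an OAI-PMH harvest: build the request URL, then parse
# the kind of Dublin Core metadata a repository might return.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

BASE = "https://archive.example.org/oai"  # hypothetical repository endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
request_url = f"{BASE}?{urlencode(params)}"
print(request_url)

# A trimmed example of the XML a repository might send back:
sample_response = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <dc xmlns="http://purl.org/dc/elements/1.1/">
        <title>An Example Paper</title>
      </dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

root = ET.fromstring(sample_response)
# Pull out every Dublin Core title, wherever it sits in the tree:
titles = [t.text for t in root.iter("{http://purl.org/dc/elements/1.1/}title")]
print(titles)  # ['An Example Paper']
```

A service like Google Scholar runs requests of this shape against many repositories and merges the harvested metadata, which is how records from archives it does not host end up in its index.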

Generally, there are 5 major functions of a digital library: acquisition, cataloging, retrieval, interpretation and sharing.


Acquisition

Is the process by which consumers select different content to view, borrow, rent, read and buy. This is the e-commerce functionality of the digital library (and a way in which the library gains funds).

Cataloging

Is the management of acquired content, along with accompanying copyrights and permissions from respective authors/owners. Search engines within the library help order content by various factors, such as author or title.

Retrieval

Is the process of searching content and managing the search results. A common search interface simplifies the search process for the user, and at the same time enables the publisher to select the best data format for the task.

Interpretation

This involves viewing information in the context of related resources. It also includes being able to identify relevant connections between sources. Again, a search engine will be able to find topics relevant to the original search, offering the user additional content.

Sharing

Sharing means having tools and processes for annotating resources, extracting from and citing sources, and collaborating with other users. The end product of research is often shared with others.

Q: Where Do Digital Libraries Stand?

In terms of the internet, digital libraries have a great opportunity to get 'out there.' As we have seen countless times, the internet provides the means and channels to get yourself noticed. To gain maximum exposure, careful consideration of the behaviour of the customer/consumer is needed.

The new, competitive situation forces libraries to see things much more from the perspective of the user. First of all, this is an acknowledgement that, particularly at universities, libraries deal with a range of users with often different usage behaviours. An undergraduate has different demands for information than a qualified researcher, and their usage behaviours can vary substantially.

Undergraduates lean much harder on general sources of information, such as internet search engines. I can say from my uni days that searching for something could be quite time-consuming, whereas seasoned researchers usually know where to look for information. The point is that with the range of search engines available, users have a large choice about how they get their information.

One thing I can say as an ex-student is that the majority of us used Google as the preferred search engine for information. In fact, I still do. The simple reason students still use the online catalogue is that, for this type of information, they don't have an available alternative, as internet search engines usually don't cover the so-called 'deep' or 'invisible' web. In any area where students think they can find information, especially when they are looking for documents and full text, general search engines remain much more popular than the databases made available through libraries. We students like to make it easy for ourselves, and Google does the searching for us so usefully and quickly. Just as users like the ease of phrasing and submitting a search query, they also like a flexible and responsive display of result sets. The superior performance and sheer size of internet search indexes are most impressive to them.

Q: Challenges and Limitations?

Despite their usefulness, there are some challenges that digital libraries currently face.

Coverage of data formats, full text search

Most systems focus solely on searching metadata (bibliographic fields, keywords, abstracts). Cross-searching of full text has only recently been introduced and is often restricted to a very limited range of data formats (primarily 'html' and 'txt').
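The gap between the two search modes can be sketched in a few lines: a query term that appears only in a document's full text is invisible to a metadata-only search, and even a full-text search may skip records whose format it cannot index. The records below are hypothetical.

```python
# Toy illustration of metadata-only search vs format-restricted full-text
# search in a digital library portal. Records are invented for the example.
records = [
    {"title": "Library Portals", "keywords": ["portal"], "format": "pdf",
     "fulltext": "Metasearch across catalogues and e-journal databases."},
    {"title": "OAI-PMH in Practice", "keywords": ["metadata"], "format": "html",
     "fulltext": "Harvesting and metasearch compared for digital libraries."},
]

def metadata_search(query, recs):
    """Search only the bibliographic fields (what most portals do)."""
    return [r["title"] for r in recs
            if query in r["title"].lower() or query in r["keywords"]]

def fulltext_search(query, recs, formats=("html", "txt")):
    """Full-text cross-search, restricted to a few indexable formats."""
    return [r["title"] for r in recs
            if r["format"] in formats and query in r["fulltext"].lower()]

print(metadata_search("metasearch", records))  # misses both records
print(fulltext_search("metasearch", records))  # finds only the html record
```

Both records mention 'metasearch' in their text, but metadata search returns nothing, and the full-text search skips the PDF because its format is outside the supported range.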

Coverage of Content types

Digital libraries largely integrate online library catalogues and databases with some full-text repositories (e-journals). Freely available academic online content, as described above, is usually not covered by library portals. Where it is selected at all, it is mainly organised as HTML link lists or specific databases (subject guides) that record reference metadata about web repositories.

Beyond online catalogues, databases and e-journals, researchers have started to place their pre-prints and post-prints on the websites of faculties and research groups. Comprehensive web servers for scientific congresses include online presentations and papers; large international pre-print servers, often organised by the scientific community, store hundreds of thousands of documents; and the creation of e-learning objects is gaining in popularity.

And libraries? They add to the content that is available online. Today we have seen almost 15 years of digitisation activity, starting in the U.S. and spreading from there to other countries. Hundreds, if not thousands, of digital document servers are available today, the majority of them as stand-alone systems. Activities at universities in building institutional repositories have only just started. The long-term goal is to store the research and e-learning output of each institution on self-controlled document servers. While the building of these repositories must especially be welcomed for strategic reasons (e.g. open access to research data, ensuring long-term accessibility), the expected number of additional online hosts requires additional effort on the search side.

Limited scalability / Information Retrieval performance

The majority of the portal systems rely on the metasearch (broadcast search) principle, i.e. a query is translated into the retrieval language of the target repositories (e.g. catalogues, databases) and sent out to selected repositories. The sequentially incoming responses are aggregated and presented in a joint result list.

The problems resulting from this search principle are well-known: due to the sequential response of the target repositories and in particular due to the dependence on the performance of these repositories we get—with an increasing number of target databases—limited scalability and decreasing performance.
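The broadcast-search principle can be sketched as a fan-out and merge: one query goes to every target repository in parallel, and the portal's merged result list is only ready once the slowest repository has answered, which is exactly why adding more targets degrades responsiveness. The repositories below are stand-in dictionaries; a real portal would also translate the query into each target's retrieval language.

```python
# Toy metasearch (broadcast search): fan one query out to several target
# repositories in parallel, then aggregate the partial results into a joint
# result list. Repositories here are hypothetical in-memory lookups.
from concurrent.futures import ThreadPoolExecutor

CATALOGUE = {"digital libraries": ["cat:001", "cat:007"]}
EJOURNALS = {"digital libraries": ["ej:442"]}
PREPRINTS = {"open access": ["pre:90"]}

def search_repository(repo, query):
    """One target's response: record IDs matching the (translated) query."""
    return repo.get(query, [])

def metasearch(query, repositories):
    """Broadcast the query and merge the responses into one result list.

    The merge cannot complete until every repository has responded, so the
    slowest target bounds the overall response time.
    """
    with ThreadPoolExecutor(max_workers=len(repositories)) as pool:
        partials = pool.map(lambda r: search_repository(r, query), repositories)
        return [rec for part in partials for rec in part]

print(metasearch("digital libraries", [CATALOGUE, EJOURNALS, PREPRINTS]))
```

This is the structural weakness described above: each extra target repository adds another response the portal must wait for and another partial list it must merge.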

An Essay Worth Reading…

Social Networks and Ways In Which They Facilitate Online Communities

By Alex Ho

The social network, used by millions, is now as intertwined with everyday life as the internet itself, and has been the source of much research. As a general rule, a site can be considered a social network if it offers the following, as outlined by Boyd and Ellison (2007): the 'ability to construct profiles…ability to identify a host of other users…ability to view and track individual connections.' Taken together, a social network site's (SNS) main goal appears to be allowing users to interact, which results in the creation of a wider online community of fellow users. In order for this to happen, an SNS provides a variety of tools for this functionality. This essay will attempt to explore the notion of an online community, and how social networks facilitate user activity, with reference to Facebook and Steam.

Facebook (FB), although not the first, is arguably the most popular network, with over 500 million users as of 2011. Given its popularity, it seems likely that FB has built upon a formula through which its users can easily interact with each other. Notably, FB was founded in an era of the internet known as Web 2.0. This concept suggests, at its core, that the internet in the early 2000s underwent a significant change that allowed it to be less 'rigid.' As Kaplan and Haenlein (2010: 60-1) state, Web 2.0 describes a new way in 'which software developers and end-users started to utilise the web; that is, as a platform whereby content and applications…are continuously modified by all users in a participatory and collaborative fashion.'

This concept set the groundwork for a more expansive and collaborative web; a web in which the opportunities for ordinary users to communicate and participate became significantly easier and more frequent. This is not to say they weren't there already; however, pre-Web 2.0, most websites were far more 'static' and less collaborative.

Web 2.0 was one of the major reasons why several designers (Mark Zuckerberg included) built their systems the way they did; a canvas was provided for them to work with. With less static websites, names such as Facebook and Twitter, as well as Amazon and IMDb, have taken off. In light of this, it is important to consider more precisely why this is. With a more open web, ordinary users begin to contribute more of their own content. Even something as innocuous as a status post on an SNS or a comment review on Amazon can be considered such.

This user collaboration and content (which is vast) has led to another significant concept known as user-generated content. Once again, as a result of Web 2.0, there has been an influx of content that originates from the user community. Social networks are well known for their ability to 'produce' mass amounts of user-generated content. This is significant for two reasons: a) as an indicator of the influence of SNS; b) the formation of an online community. An online community adheres to the values outlined by Boyd and Ellison (2007), but Erickson and Nielsen (2008: 152) offer a more refined definition: a community can be defined if it allows 'membership, relationships, commitment, collective values, shared goods and duration.'

This is no different from a real-life community (even of animals), yet those within an online culture have changed the way the web looks now. Within FB, users are given tools to help design their 'ideal' profile page. The emphasis is, of course, on encouraging user interactions. Common features include a profile avatar, pictures, videos, status posts and a personal description. Another feature is the news feed, essentially a ticker showing a list of friends' recent activity, with the latest shown at the top. This acts as the de facto home page; users are also able to use the 'like' functionality, which allows them to vote up content they like.

Put these together and it becomes clearer how fluidly users can communicate with like-minded individuals within FB. The purpose of these tools is to allow people to 'communicate with a wide circle of friends and relatives (their 'social network') cheaply and efficiently, sharing personal information…participating in the wider online community' (Gibson et al, 2010: 186). In this sense, FB is essentially a virtual environment capable of sustaining a mass community. A virtual environment like FB survives because it has successfully facilitated interaction with the wider online community, capturing 'the notion of the internet as a location for virtual communities' (Rheingold, 1993). Whilst Rheingold, writing pre-2000, was referring to early virtual worlds rather than online role-playing games such as World of Warcraft, comparisons can be drawn that are not wholly different.

A further consequence is that if SNS (and the web as a whole) provide an online space, that space must also be readily accessible, and accessed daily. Parallels can be drawn again to online role-playing games, in which those 'game worlds are continuously accessible online, which allows for the emergence of complex social structures and economies' (Chan and Vorderer cited in Vorderer and Bryant, 2006: 78). How 'complex' these social structures are within SNS needs investigative work, but as we have seen, there is evidence that some collaborative relationships do exist, helped by regular access.

The clearest evidence of social structures/virtual community lies with Wikipedia. An online encyclopaedia, Wikipedia is remarkable for the fact that its content (on a myriad of subject matters) has been contributed by users. In other words, the content is added and edited by the community of users, who take it upon themselves to maintain the site. The purpose of such projects is that 'the joint effort of many actors leads to a better outcome than any actor could achieve individually' (Fama, cited in Kaplan and Haenlein, 2010: 62). Wikipedia presents itself as an opportunity where many people can upload information for the greater good of the rest.

As content needs updating, this occurs within the specific article. Other sites that function similarly include Delicious, a bookmarking service aimed at the storage and sharing of bookmarks. The relevance of these sites is clear, as they have fast become 'the main source of information for many consumers' (Kaplan and Haenlein, 2010: 62). The community takes responsibility for ensuring its content remains open and functional. Nielsen et al (2008: 154) state, 'we can see how easily the dedicated fans have created thriving…communities…maintain dedicated homepages, and participate in intense discussions.'

Two minds are not twice as good as one; they are many times better. From a psychological perspective, the concept of an online community draws upon Pierre Lévy's work on 'collective intelligence.' Not only does Wikipedia exist for the benefit of all, but the intelligence of its community is amplified a thousandfold. As we live in a world that demands news and information every day, 'consumption has become an increasingly collective process…None of us can know everything; each of us knows something; and we can put the pieces together if we pool our resources and combine our skills' (Jenkins, 2006: 4).

Steam is not seen as a 'true' SNS, as it also provides other services, such as a content delivery system for digital content (i.e. games). However, parallels can still be drawn; Steam allows users to post comments, pictures and videos just as users of Facebook do, although the content may differ. Steam was founded around the time Web 2.0 first started taking off. Valve Corporation, Steam's creators, saw a (risky) opportunity back then that a system such as Steam would be successful. The reasoning was that 'one of the things the internet represents is an enormous broadening of distribution channels…opens the door to titles that would have difficulty finding and developing an audience otherwise' (Newell, 2009).

Ten years later (Steam entered beta in 2003), Steam is one of the largest players in online digital content, at least where online games are concerned. This 'broadening of the distribution channels' is an excellent term to describe the way in which the web now provides sufficient space and environment to get things done. With the refinement of broadband, users now find it easier to gain exposure for their content. This was perfect for Valve, as Steam's target audience was more specific: online gamers, developers, modders and system enthusiasts, people with an interest in Web 2.0 as a new medium. Because of this, Valve placed emphasis on community growth as the critical factor in Steam's future.

Steam provides users with more expressive tools to be creative and innovative through its Workshop component. This element allows users to upload their creations (e.g. content rendered using 3D development kits) and show them off to the rest of the community. The result has been a massive influx of community-made content, extending even to full indie games. Works of exceptionally high quality earn their authors praise, recognition and sometimes even financial reward. This provides a unique incentive for people to collaborate (as they get rewarded for their work), as well as giving them a reason to keep using Steam.

Additionally, one way in which distribution channels have been broadened is through Steam itself. For indie game developers, finding a publisher or even gaining exposure can prove extremely difficult (much like in the music industry). With Steam and its large user base, those developers have a (more specific) space in which to work. Given enough time, their works can go on to become highly successful. A number of critically acclaimed titles originated as indie games, such as Terraria, Audiosurf, Braid and Super Hexagon.

The point of allowing the user base to be this expressive is to create a self-sufficient community. Since the Workshop launched, there have been over 192 million community content contributions. As Newman (2004: 149) explains, 'players indicate the ways in which they learn from others, and helped others to learn, by sharing information on strategy and technique through talk and observing the play of others.' We must refer back to the theme of collective intelligence, where many minds create a singular collective in which information is consumed daily.

Designing 3D models and other modifications is not an easy process, so users will ask questions about the problems they hit. One user's answer can serve many others. Often the answers to people's questions can be found within Steam itself, which becomes a rich source of information for these developers. Such responses are appreciated within the community, where 'contributions are actively sought and graciously accepted and acknowledged' (Newman, 2008: 148). This exchange of comments, criticism and opinions between would-be designers is vital to problem solving.

What we have seen from these examples is that one of the 'ways in which the internet has become so central to contemporary media is through the way in which its symbiotic relationship with media culture has offered audiences participatory opportunities' (Lister et al, 2009: 221). Supporting Newell's (2009) statement in another way, what Lister et al are referring to leads back to the notion of user-generated content. Users creating and uploading digital content to Steam is a prime example of this, and also demonstrates an active community. This ensures a flourishing community; in Steam's case, it has 'attracted new players, reinvigorated veterans and invited significant contribution in the form of user-generated content' (Moore, 2011). As mentioned, something as small as a status post is content from the user; 'every SNS post, or conversation in a chat room, every home page and downloaded MP3 play list facilitates the individual communicating in a pseudo public mode of address. What is clear is that a great deal of web use facilitates a feeling of participation' (Lister et al, 2009: 222).

The modernisation of the internet and the era of Web 2.0 have clearly helped bring about changes to the web, making it significantly easier to maintain communities online. Nowhere is this truer than in SNS; they retain users by promoting a sense of 'belonging' to the community. Other sites such as Wikipedia and IMDb provide a rich source of information which is readily tapped into and, crucially, maintained by community members themselves (i.e. a 'collective process'). Perhaps the biggest indicator of how social networks facilitate online communities is the mass of user-generated content that has followed. The notion of UGC gives reason to suggest that the lifeblood of a network is very much the users themselves. They are no longer just users, but active and collaborative members of a large community. This seemed more evident with Steam and its more technical content creations, but applying it to other networks raises the same point. The fact that users are creating their own content with the available tools demonstrates that the community will continue to grow.

Boyd, D. and Ellison, N. (2007) 'Social Network Sites: Definition, History and Scholarship', Journal of Computer-Mediated Communication, volume 13, pp. 210-230.

Chan, E. and Vorderer, P. (2006) 'Massively Multiplayer Online Games' in Vorderer, P. and Bryant, J. (eds) Playing Video Games. London: Lawrence Erlbaum Associates.

Gibson, L. et al (2010) 'Designing Social Networking Sites for Older Adults'. (Accessed 05.11.13)

Jenkins, H. (2006) Convergence Culture: Where Old and New Media Collide. New York: New York University Press.

Kaplan, A. and Haenlein, M. (2010) 'Users of the world, unite! The challenges and opportunities of Social Media', Business Horizons, volume 53.

Lister, M. et al (2008) New Media: A Critical Introduction. London: Routledge.

Medler, B. (2011) 'Player Dossiers: Analysing Gameplay Data as a Reward', Game Studies, volume 11, issue 1, February 2011. (Accessed 11.06.13)

Nielsen, S.E. et al (2008) Understanding Video Games: An Essential Introduction. London: Routledge.

Newell, G. (2009) Gabe Newell on Good Game. (Accessed 29.10.13)

Newman, J. (2004) Videogames. Oxon: Routledge.

Newman, J. (2008) Playing with Videogames. Oxon: Routledge.