
Tagging

by Pauline M. Rafferty

Table of contents:
1. Introduction
2. Social tagging and folksonomy: definitions, concepts and background
3. Tag types and tag clouds
4. Social tagging as knowledge organization: strengths and weaknesses
5. Disciplining tagging
6. Comparing tagging systems and library based knowledge organization systems
7. Taggers and tagging practice
8. Research fronts
9. Concluding remarks
10. Further readings
References
Appendix: Examples of platforms that use tagging

Abstract:
This article examines tagging as knowledge organization. Tagging is a kind of indexing, a process of labelling and categorizing information made to support resource discovery for users. Social tagging generally means the practice whereby internet users generate keywords to describe, categorise or comment on digital content. The value of tagging comes when social tags within a collection are aggregated and shared through a folksonomy. This article examines definitions of tagging and folksonomy, and discusses the functions, advantages and disadvantages of tagging systems in relation to knowledge organization before discussing studies that have compared tagging and conventional library based knowledge organization systems. Approaches to disciplining tagging practice are examined and tagger motivation discussed. Finally, the article outlines current research fronts.

1. Introduction

The move towards social software and what is generally known as Web 2.0 (O’Reilly 2005), or the social web, has generated interest in shared metadata and social tagging as an approach to resource description. The development of folksonomies and social tagging moves resource description towards a more dialogic communicative practice (Rafferty and Hidderley 2007), where creators, readers, listeners and viewers of documents are encouraged to add their own tags. A number of websites use social tagging, including text-based websites such as CiteULike, music-based websites such as lastfm.com, image-based websites such as Flickr, fan websites such as Archive of Our Own, and social websites such as Facebook and Twitter (see Appendix for URLs).

Twitter, a popular microblogging platform, is an interesting case study. It uses the hashtag as the convention that allows users to describe and, increasingly, to comment on content. Twitter users can very easily create tweets and retweet the initial tweet. Hashtags establish a bi-directional interaction between the user and the information resource, which on the one hand allows people to follow and acquire news, opinions and people’s status updates, and on the other hand allows user participation in the creation of hashtags, facilitating the creation and propagation of content throughout the platform (Ma et al. 2013, 260). Hashtags are user driven and serve as metadata to code and spread ideas and trends quickly and easily; however, it can be difficult to interpret hashtags and discover their relationships because of their free-form nature (Ma et al. 2013). One interesting aspect of hashtag use is that the hashtag very often functions as meta-commentary on the tagged information resource rather than simply as a descriptive tag. This approach to using the hashtag, which goes beyond its initially envisaged official purposes, might point to a creative use of tags and hashtags offering new approaches to search: not only informational search, but also emotional, mood, phatic and, in the use of the #not hashtag, even critical search. It is perhaps in this potential expansion of description and search that the strength of social tagging lies.

2. Social tagging and folksonomy: definitions, concepts and background

Social tagging has its origins in the development of online bookmarking in systems such as itList, which began in 1996, but it was the social bookmarking web service Delicious (then styled del.icio.us), which started up in 2003, that coined the term tagging. Web services change considerably over time, and Delicious was subsequently bought by Pinboard on June 1, 2017, and its service discontinued in favour of Pinboard’s subscription service. Before tagging services began in practice, there was a theoretical literature that explored the idea of social tagging or democratic indexing (see, for example, Hidderley and Rafferty 1997). Golder and Huberman (2006) defined tagging as a process of labelling and categorizing information through which meaning emerges for individual users. Furner (2010, 1858) considered tagging a kind of indexing: “Tagging is the activity of assigning descriptive labels to useful (or potentially useful) resources”. And further (1859):

In the parlance of mid- to late-twentieth-century information science, [a given tag, e.g.] “cat” is an index term, and the activity of assigning index terms (words, phrases, codes, etc.) to resources (books, journal articles, Web pages, blog entries, digital photos, video clips, museum objects, etc.) has long been known as indexing, whether undertaken by people or machines.

Social tagging generally means the practice whereby internet users generate keywords to describe, categorise or comment on digital content. Tagging allows users to record their individual responses to the information objects. Tagging tools are generally formed of a triplet of user, information object and keyword. Tags, documents and users form a tri-partite graph, which means that tags are also connected (see for example, Cattuto et al. 2007). In this environment, users as well as documents are connected. We can think about social tags as being the categorization or description of content filtered through the user's knowledge structures as well as through the lens of other people's tags (Nam and Kannan 2014, 24).
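
To make the tripartite structure concrete, the following minimal Python sketch (with invented data and names, not drawn from any of the systems discussed here) records each tagging act as a (user, resource, tag) triple and derives a simple tag co-occurrence measure from shared resources, which is one way in which tags become connected to one another.

```python
from collections import defaultdict
from itertools import combinations

# Each tagging act is one (user, resource, tag) triple: the "triplet" of
# user, information object and keyword described above. Data are invented.
assignments = [
    ("alice", "doc1", "python"),
    ("alice", "doc1", "programming"),
    ("bob",   "doc1", "python"),
    ("bob",   "doc2", "cats"),
]

# Group tags by the resource they were applied to.
tags_by_resource = defaultdict(set)
for user, resource, tag in assignments:
    tags_by_resource[resource].add(tag)

# Two tags are "connected" here when they co-occur on the same resource;
# users could be linked in an analogous way via shared tags or resources.
cooccurrence = defaultdict(int)
for tags in tags_by_resource.values():
    for a, b in combinations(sorted(tags), 2):
        cooccurrence[(a, b)] += 1

print(dict(cooccurrence))  # {('programming', 'python'): 1}
```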

In the early days, the emerging concepts and vocabulary relating to social tagging were still to be fixed; for example, in 2007, Zauder, Lazić and Zorica wrote that “collaborative tagging is also frequently called social tagging and distributed classification, used as a synonym for folksonomy and even confused with social bookmarking" (437). They emphasized that the term folksonomy (see below) should be used for the totality of tags produced by users through the collaborative tagging process, not for the process itself. Social bookmarking, while often using collaborative tagging, is not synonymous with it. Collaborative tagging is the process by which users of a Web service add natural language keywords to information resources, creating a personalised collection which can be made available to all users. Trant (2009) likewise distinguishes tagging, a “process with a focus on user choice of terminology”, from folksonomy, the “resulting collective vocabulary (with a focus on knowledge organization)”, and from social tagging, the “sociotechnical context within which tagging takes place (with a focus on social computing and networks)”.

What was agreed from the early days is that the value of tagging comes when a collection of social tags is aggregated and shared in a folksonomy. The term folksonomy derives from Vander Wal (2005), who explains that a folksonomy is the result of personal free tagging of information and objects (anything with a URL) for one’s own retrieval within a social tagging environment. It is a portmanteau term created from folk (a favourite word of Vander Wal’s when referring to “regular people”) and taxonomy; however, unlike a hierarchical taxonomy, a folksonomy is a flat, uncontrolled resource organization system (Benz and Hotho 2007). Folksonomies are automatically generated sets of related tags derived from the terms with which a group of users tagged content; they are not a predetermined set of classification terms or labels (Mathes 2004). Folksonomy has been described as an indexing language made up of entities and relationships which can be researched as networks through network analysis (Furner 2010).

Vander Wal distinguished between two types of folksonomy: the broad folksonomy and the narrow folksonomy. In a broad folksonomy, many different people can tag the same object, each person tagging from their own perspective. In this kind of system, the creator makes the information object available to others to tag with their own terms. An example of a site which uses a broad folksonomy is Delicious. Broad folksonomies allow for the emergence of a Long Tail. In a narrow folksonomy, each tag for a document is recorded only once, so that only new tags can be applied and it is not possible to measure tag frequency. In such systems, tagging is often limited to the object’s creator or author, although this is not always the case. This means that the users who are searching the system are not always aware of the reasons behind the author’s tagging practice. Flickr, Technorati and YouTube are examples of narrow folksonomies (Peters and Stock 2009). Fewer tags are likely to be assigned in a narrow folksonomy, and there is less likelihood of the emergence of a Long Tail. The narrow folksonomy is also not as rich in its social aspect as the broad folksonomy. The popularity of an item in a broad folksonomy can be estimated by the number of tags that refer to it. Items in narrow folksonomies have flat tag distributions, as every tag is assigned only once. The aggregation of tags from different users in broad folksonomies generates a non-uniform tag distribution, which can help us to estimate the relevance of a tag from the number of times it is assigned. Within narrow folksonomies, a “bag of tags” is available only for the whole tagging platform rather than for individual documents, as it is in broad folksonomy based systems.
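
The contrast between the two distributions can be illustrated with a small, hedged Python sketch (invented data): aggregating many users' tags for one resource in a broad folksonomy yields a non-uniform distribution from which relative popularity can be estimated, whereas in a narrow folksonomy each tag is recorded at most once per item, so the per-item distribution is flat.

```python
from collections import Counter

# (user, resource, tag) assignments; purely illustrative data.
broad = [                       # many users tag the same photo
    ("u1", "photo1", "sunset"), ("u2", "photo1", "sunset"),
    ("u3", "photo1", "sunset"), ("u4", "photo1", "beach"),
]
narrow = [                      # each tag recorded only once per item
    ("owner", "photo1", "sunset"), ("owner", "photo1", "beach"),
]

def distribution(assignments, resource):
    """Relative frequency of each tag applied to one resource."""
    counts = Counter(t for _, r, t in assignments if r == resource)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

print(distribution(broad, "photo1"))   # {'sunset': 0.75, 'beach': 0.25}
print(distribution(narrow, "photo1"))  # {'sunset': 0.5, 'beach': 0.5} -- flat
```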

Feinberg (2006) commented that while some writers (for example, Golder and Huberman 2005) stress the collaborative nature of social tagging, the cluster of tag terms is in fact an aggregate of individual decisions rather than a cohesive collaboration. A difference between conventional approaches and tagging that was picked up early in the history of tagging is that while a tag-based system supplies the mechanics for defining, assigning and using tags, it does not provide any specific and detailed rules, guidelines or documentation regarding tag semantics or the ways in which organization is to be achieved through collaborative tagging (see, for example, Zauder, Lazić and Zorica 2007). Users are free to decide what to use their tags for, which means that tags are not necessarily informational or subject related keywords, but might be purpose related or might even be quite random. Rattenbury et al. (2007) argued that it is the unstructured nature of the tag that makes it useful: “tags allow for greater flexibility and variation; and tags may naturally evolve to reflect emergent properties of the data” (103).

Quintarelli (2005) refers to tagging as bringing “power to the people”, though researchers later discovered that tagging is often done by a relatively small number of “supertaggers” (see, for example, Lorince et al. 2015). The motivation to tag depends on context, with sites such as Delicious existing principally to bookmark and retrieve information objects, while for other sites, such as LastFM, tagging is arguably of secondary importance. From early on, it was recognised that tags operate as content organizers and discoverers and that they also enable like-minded tag creators with resources to interact and to meet their information needs, potentially facilitating the development of social networks (see, for example, Razikin et al. 2008; Ding et al. 2009). The success of a social tagging system depends on the quality and engagement of its taggers, and recent studies have looked to develop serious play approaches to encourage taggers and to improve crowdsourced tagging (see, for example, Parachakis 2014; Konkova et al. 2014).

Social tagging, and the folksonomies that are created in and through the tagging process, remain important in relation to → knowledge organization in today’s social web. Popular social web sites such as YouTube, Flickr, Facebook, LastFM and Twitter (through its hashtag) have adopted tagging in practice, while social bookmarking sites such as CiteULike, LibraryThing and BibSonomy remain active, the latter being the focus of a number of studies into tag user behaviour (see, for example, Noy et al. 2008; Jaschke et al. 2008; Borrego and Fry 2012; Doerfel 2016). All of these social web sites demonstrate folksonomies at work. There are many other examples of Web 2.0 platforms and services that use collaborative tagging, including projects that use languages other than English as the tagging medium. In France, for example, Moirez (2012) has undertaken a survey of folksonomies in use within French departmental archives.

Lasić-Lazić et al. (2017) present a very useful literature survey of current research approaches to studying folksonomies, updating Peters and Stock’s (2010) survey, and testing out theoretical frameworks constructed by Peters (2009) and Trant (2009). They note that recent studies have examined folksonomies as a new method of enhancing access to resources and search results, or as a basis for various recommender systems. Other studies have examined the potential of user tags in enhancing resource description and complementing standard KOS methods. The third approach is “concerned with extracting meaning from folksonomies, by making explicit the semantics and meaningful relationships in social tagging systems, so they can be transformed to partial ontologies and used to represent knowledge in the Semantic Web environment” (705). Such studies examine tags as viable alternatives to indexing terms assigned by professionals or as a means to complement existing schemes by reflecting user needs in ways that are not always addressed by existing indexing schemes. Recent research has also investigated the ways in which taggers tag (Doerfel et al. 2016) and the ways in which information seekers make use of the navigation options that they are given (Neibler 2016).

Vaidya and Harinarayana (2017) examined the role of social tags in relation to web resource discovery and noted that there has been relatively little research into how and to what extent tagging can be adopted to enhance the search process. They argue that the strength of folksonomy is its collaborative indexing, while its weakness lies in information retrieval performance because of the lack of precision. Their study, a bibliographically oriented project focusing on LibraryThing and LOC, suggests that user tagging might be used to complement indexer assigned controlled vocabularies but would be unlikely to replace them fully. The complementary relationship between conventional information retrieval approaches and social tagging in social web sites is a strong theme throughout the literature.

3. Tag types and tag clouds

Different types of tags are used for different purposes. There has been some research focused on identifying tag typologies with a view to investigating whether they are useful in completing a task or whether they fulfil a specific function (Thom-Santelli 2008). Research in identifying the reasons why participants tag has also been used to generate recommendations for the design of tagging systems (see, for example, Ames 2007). Gupta et al. (2011) provide a categorization of tag types (although there are some overlaps in the typology), which is useful as an overview of tagging practice:

  • Content-Based Tags: to identify the actual content of the resource, e.g. Honda Odyssey.
  • Context-Based Tags: to provide the context in which the object was created or saved, for example tags describing locations and time: San Francisco, Golden Gate Bridge, 2005-10-19.
  • Attribute Tags: inherent attributes of an object which may not be derived from the content directly, e.g. the author of a piece of content, such as Clay Shirky. These tags might also identify who or what the resource is about or can identify qualities or characteristics of the resource, e.g. funny.
  • Ownership Tags: who owns the resource.
  • Subjective Tags: the user’s opinion and emotion, e.g. funny or cool. They can also be recommendation tags or other kinds of self-expression tags.
  • Organizational Tags: to identify personal information, e.g. mypaper or mywork, and to remind the tagger about tasks to undertake, e.g. toread, todo. These are less useful for others and are often time sensitive and concerned with an active engagement with the information object.
  • Purpose Tags: non-content-specific functions relating to an information seeking task of users (e.g. learn about LaTeX, translate text).
  • Factual Tags: “identify facts about an object such as people, places, or concepts. Factual tags help to describe objects and also help to find related objects. Content-based, context-based and objective, attribute tags can be considered as factual” (6).
  • Personal Tags: most often used to organize a user’s objects (item ownership, self-reference, task organization).
  • Self-referential Tags: “they are tags to resources that refer to themselves. e.g., Flickr’s sometaithurts — for “so meta it hurts” is a collection of images regarding Flickr, and people using Flickr. The earliest image is of someone discussing social software, and then subsequent users have posted screenshots of that picture within Flickr, and other similarly self-referential images” (7).
  • Tag Bundles: otherwise known as folksonomies. (pp. 5-7)

Folksonomy datasets are often represented as tag clouds. The tag cloud is a user interface element made up of the list of tags that have been used within a particular system. In some cases, the popularity of tags is displayed typographically (Panke & Gaiser 2009). Tag clouds aggregate tags and their resources and display them in “a visually appealing manner” (Helic et al. 2011). The tag acts like a query mechanism in a conventional system. As the user clicks on a tag within a cloud, the tagging system takes the tag and adds it to the system algorithm as a query. The system then matches the tag with related tags in its system and a new tag cloud is displayed based on the results (Mesnage and Carman 2009). Users are often given the option of filtering out tags from the search. Tag clouds can be attractive and interesting because the representation is compact, the eye is drawn to the largest words, and because the words themselves, their relative importance and their alphabetical order are represented simultaneously. However, it is difficult to compare tags of the same size, and in some clouds the word’s size is conflated with its importance. Another problem is that words of similar meaning might lie far apart so associations and relationships might be missed (Hearst and Rossner 2008).
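
As a rough illustration of how a cloud is typically rendered, the Python sketch below (illustrative numbers and an invented function name) maps aggregated tag frequencies to font sizes using logarithmic scaling, a common way of preventing the most popular tags from dwarfing the rest; actual platforms vary in how they do this.

```python
import math
from collections import Counter

# Aggregated tag frequencies across a collection; invented numbers.
tag_counts = Counter({"jazz": 120, "live": 45, "piano": 30, "rare": 2})

def cloud_sizes(counts, min_px=10, max_px=32):
    """Map each tag's frequency to a font size in pixels.
    Logarithmic scaling compresses the long tail of rare tags."""
    lo, hi = math.log(min(counts.values())), math.log(max(counts.values()))
    span = (hi - lo) or 1.0          # avoid division by zero if all counts are equal
    return {
        tag: round(min_px + (math.log(n) - lo) / span * (max_px - min_px))
        for tag, n in counts.items()
    }

for tag, size in sorted(cloud_sizes(tag_counts).items()):
    print(f"{tag}: {size}px")        # e.g. jazz: 32px ... rare: 10px
```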

A number of studies have examined the effectiveness of tag clouds as knowledge organization and information retrieval tools. Sinclair and Cardew-Hall (2008), in a study that examined the effectiveness of tag clouds as retrieval tools, concluded that when the information search was focused and specific, the traditional search interface was preferred while when the search was more general, users in their experiment preferred the tag cloud. Their overall view was that while the tag cloud is of value, it is not sufficient for navigation through a folksonomy-based dataset.

4. Social tagging as knowledge organization: strengths and weaknesses

The strengths and weakness of tagging as a kind of indexing can partly be inferred from its characteristics relative to other forms of indexing. Furner wrote:

tagging can be characterized as a form of (1) manual, (2) ascriptive [assigned as opposed to derived], (3) natural language [as opposed to controlled vocabularies], (4) democratic indexing, which is typically undertaken by (5) resource creators and (6) resource users who have (7) low levels of indexing expertise, (8) high levels of domain knowledge, and (9) widely varying motivations, and which is commonly used to represent (10) non- or quasi-subject-related properties, and frequently (but far from exclusively) applied to (11) resources such as images that do not contain verbal text. (Furner 2010, 1859)

There have been champions and critics of social tagging as a knowledge organization tool from the early days. Champions (for example, Kroski 2005; Shirky 2005; Merholz 2004) lauded the flexible, participative and collaborative nature of social tagging, which is democratic (Rafferty and Hidderley 2007) in that it involves all users, and emergent in that the tags can change rapidly in response to new content (Feinberg 2006). Early proponents of social tagging took inspiration from James Surowiecki's notion of the "hive mind", or the "wisdom of crowds", or "social intelligence", as a way to explain the advantages and richness that they claimed for social tagging. The idea is that the combined intelligence of a group of people will be more accurate than the knowledge of an individual, even an expert individual. Hidderley and Rafferty (1997), writing along these lines in relation to the theoretical concept of democratic indexing that they developed before the emergence of social tagging on the Web, drew on reader-response and interpretative literary theory to explore the potential inherent in collaborative tagging.

Early in the history of social tagging, Mathes (2004) mapped out some of the useful aspects of folksonomies as knowledge organization tools for the Web. Folksonomy systems are useful because they facilitate serendipitous discovery through browsing, and they allow for tracking “desire lines”. They are useful because of their low entry barriers in relation to cost, education, training and experience. Feedback on tagging systems is immediate. The sharing of tags and the instant feedback that can be derived from user generated tagging facilitates a high level of community interaction that would probably not be possible if decisions had first to be made about codes, conventions and rules, such as might be found in the governance of any tightly controlled taxonomy. There is a question about who is doing the tagging, as the user has to have a certain level of IT literacy before engaging with social software. Mathes also cites as the limitations of these systems their ambiguity, the use of multiple words, and the lack of synonym control.

For Kroski (2005), tagging is inclusive, incorporating no imposed cultural or political bias; its language is current, fluid and capable of incorporating terminology and neologisms; it is non-binary, democratic and self-moderating; it follows desire lines (see also Mathes 2004); it engenders community; and it offers excellent usability. Hammond et al. (2005) add to the list of advantages its flexibility, while Mathes (2004) underlines the opportunity for serendipitous browsing afforded by the flat structure of folksonomy. Porter (2005) emphasises the importance of tagging in resource discovery. Their “freeform” (Shirky 2005) and uncontrolled nature means that tags are able to describe an object authentically in fluent, current and flexible language (Kroski 2007, 95). They can be created and applied “on the fly”; they are inclusive, and give equal weight to “Long Tail” interests (Trant 2009), and, as such, they are fundamentally different from formal or traditional taxonomies, which require language stability and control.

Echoing some of this in a highly cited paper, Shirky (2005) states that while ontologies work well in domains which have a small corpus, formal categories, stable and restricted entities, and clear edges, and whose participants are expert cataloguers and expert, coordinated users looking for authoritative resources, they are less successful in domains which have a large corpus, no formal categories, and unstable, unrestricted entities with no clear edges, and whose participants are uncoordinated amateur users and naïve cataloguers with no clear authority: in other words, Web 2.0. In such an environment, moving towards organic organization through the aggregating of tags is a practical solution.

Disadvantages, or weaknesses, in social tagging have long been recognized in the literature. Martínez-Ávila (2015) summarized Doctorow's (2001) arguments about the weaknesses of social tagging as follows: "people lie in a competitive world, common people are too lazy to do something they do not understand, people refuse to exercise care and diligence in their tag creation, people do not know themselves, schemata are not neutral, metrics influence results, and there is more than one way to describe something".

Amongst the weaknesses of social tagging are a lack of synonym and homonym control, a lack of precision and hierarchy, a “basic level” problem where broad and narrow terms are used interchangeably, and a susceptibility to unethical gaming (Kroski 2005). Their uncontrolled nature has led to charges of imprecision, inexactness and ambiguity (Guy and Tonkin 2006; Rafferty and Hidderley 2007), undermining or disabling their expediency in information retrieval or for universal application. The lack of control (Guy and Tonkin 2006; Kroski 2007) and opportunities for over-personalisation create the potential for chaos and unpredictability. Despite this, Guy and Tonkin (2006) conclude that the benefits of tagging outweigh the costs, and they promote investment in ways of improving tags, both at a systems level (tidying tags, or tag bundles) and user level (tag literacy). The inevitability of tagging is readily evident in Kroski (2005), Quintarelli (2005), and Shirky (2005).

Champions of social tagging emphasised its ability to allow for unbiased tagging of resources (Kroski 2005), but a more nuanced and critical analysis, such as has been undertaken by Feinberg (2006), argues that one of the potential problems of social tagging is that it allows all biases to thrive in a form that lacks clear articulation. Feinberg (2006) draws attention to the limitations of social tagging in relation to the notion of social intelligence with reference to examples drawn from Surowiecki. She argues that while social tagging systems might be democratic in allowing anyone to tag, there is no sense of a community coming together to determine how a resource should be indexed. She suggests that if a political metaphor is to be used to characterise the attitude regarding authority in social tagging systems, then "social classification", as Feinberg calls it, should be likened to libertarianism, "where everyone's whims are allowed to flourish" (6). It is, Feinberg argues, the libertarianism of social tagging that facilitates the "Long Tail" aspect, although she wonders how useful the Long Tail actually is for knowledge discovery. A related issue is that while the folksonomy approach might allow for a wide range of voices to be heard, the burden of judging relevance is then on the information seeker.

Gartner's (2016) critique of the claim that tagging is an unfiltered representation of lived experience echoes elements of Feinberg's argument. Gartner writes that

The great strength of folksonomy is often claimed to be that it has a degree of authority because it comes directly from the people and presents an unfiltered representation of their living culture free of ideology. An appealing idea, but, as has been made clear in earlier chapters, the notion of metadata being devoid of ideology is a utopian one. Folksonomies are as ideological as any other form of metadata and what they present are beliefs about the world that are as value-laden as beliefs always are. (103)

In addition Gartner voices practical concerns about the "free-for-all" of folksonomy. Controlled vocabularies exist, he writes, to bring clarity "to a haze of terms that may describe the same concept; they do this by putting some shape and order into its synonyms, homonyms and alternative spellings". The problem with the free-for-all of folksonomy is that it

abandons attempts to do this, so there will inevitably be multiple ways of talking about the same thing. This is certainly democratic but it does mean low rates of retrieval: searching using a given term will inevitably mean missing records that describe the same concept but are tagged with an alternative one. Without some way of handling these thorny issues, we have to accept that we will miss plenty of material that could be relevant to us. (103)

Peters and Stock (2007) listed a number of strengths that folksonomies could bring, noting that they:

  • represent an authentic use of language,
  • allow multiple interpretations,
  • are cheap methods of indexing,
  • are the only way to index mass information on the Web,
  • are sources for the development of ontologies, → thesauri or → classification systems,
  • give the quality “control” to the masses,
  • allow searching and, perhaps even better, browsing,
  • recognize neologisms,
  • can help to identify communities,
  • are sources for collaborative recommender systems,
  • make people sensitive to information indexing
before detailing the problems, which are:
  • absence of controlled vocabulary,
  • different basic levels,
  • language merging,
  • hidden paradigmatic relations,
  • tags which do not only identify aboutness,
  • spam-tags, user-specific tags, and other misleading keywords,
  • conflation of ofness, aboutness, iconology and isness.

They then suggested some Natural Language Processing techniques to solve the problems. Peters and Stock were not the only ones to suggest methods to improve the performance of social tagging systems: from early on in the history of social tagging there have been arguments in the literature for including some form of discipline within social tagging systems to address weaknesses (see, for example, Schmitz 2006; Schmitz et al. 2006; Benz and Hotho 2007).

5. Disciplining tagging

In a fairly early paper, Rafferty and Hidderley (2007) noted that while the discourse of user-based indexing is one of democracy, organic growth, and user emancipation, there were hints throughout the literature of the need for post hoc disciplining of some sort, and suggested that this reveals a residing doubt amongst information professionals that tagging and folksonomy systems can work without there being some element of control and some form of “representative authority” (Wright 2005). Perhaps, they suggested, all that social tagging heralds is a shift towards user warrant. The interest since then in developing tools and systems to discipline tagging suggests that their thesis has merit. Examples of tag disciplining include tag recommendation systems that encourage consolidation of tagging vocabulary by recommending appropriate tags for a resource (Ding et al. 2010). Other projects that have explored ways to discipline tags include using visualisation techniques to display “interesting” or trending tags (e.g. Dubinko et al. 2007) and, as already noted, designing systems that use Semantic Web technologies such as ontologies to overcome the perceived weaknesses of conventional social tagging systems.

Noruzi (2007) argued that folksonomies should use thesauri to enhance efficiency and improve consistency, and also:

  • to provide a means by which the use of terms in a given subject field may be standardized.
  • to locate new concepts in a way which makes sense to users of the system.
  • to provide classified hierarchies so that a search can be narrowed or broadened systematically, if the first choice of search terms produces either too few or too many results/hits.
  • to provide a choice between singular and plural forms. Some words have two different connotations. Many concepts cannot be adequately represented by single words, and compounds are necessary.
  • to correct typographical errors made by folksonomy users.
  • to provide a guide for folksonomy users and searchers of the system for choosing the correct term for a subject search; this highlights the importance of cross-references. If a folksonomy user uses more than one synonym for the same resource — for example, man, men, male, and human – then that resource is liable to be indexed haphazardly under all of these tags; a searcher who chooses one and finds resources tagged there will assume that s/he has found the correct term and will stop his/her search without knowing that there are other useful resources tagged under the other synonyms.
  • to provide guides to terms which are related to any tag in other ways. Similar terms (related terms) should be linked together by three types of relationships: (i) hierarchical relationships, (ii) associative relationships, and (iii) equivalence relationships. For example, a search for the word "employees" will find records with the word "employees" but not records with words "employee," "worker," "laborer," "laborers," etc. The thesaurus is a way around this problem.

This paper is rather dated now, and the trend might be towards developing and using ontologies rather than thesauri to enhance social tagging systems, but the desire to use conventional information retrieval tools to address the weaknesses of social tagging, while retaining the strengths of such systems, remains strong.

Papers that discuss the design and development of tag ontologies to discipline and to enhance social tagging systems include Gruber 2007 and Kim et al. 2008. Ding et al. (2010) developed an upper level ontology (UTO) for social tagging, which aimed to integrate metadata from one social tagging site with metadata from other social tagging sites. Other semantic knowledge resources, such as WordNet and DBpedia, are sometimes used in projects that seek to map tags to ontologies. Some approaches to the disciplining of tagging seek to enhance or extend existing ontologies by including conceptual and terminological representations of specific domains. As an example, a project undertaken by Font et al. (2014) sought to extend the MUTO (Modular Unified Tagging Ontology) ontology (Lohmann 2011) by representing the semantics of a specific domain, in this case the Freesound collaborative database, which has more than 200,000 uploaded sounds (2). In 2011, Trattner et al. examined the possibilities of enhancing the efficiency of tagging as a resource discovery tool by developing tag-resource taxonomies to support efficient navigation of tagging systems. As with the ontological enhancements to tagging practice, while the taxonomy might enhance efficiency, this type of approach is not perhaps in the spirit of free-form tagging. Another interesting approach was taken by Baldoni et al. (2012), who combined affective computing, social tagging and ontologies in relation to artworks, with the end goal of representing the emotional tags derived from user interactions as emoticons, which could then be used to encourage future user tagging.

Another approach to the disciplining of tags can be seen in the development of UTIs or universal tag identifiers — see for example the OpenID initiative (http://openid.net/) that allows Web users to have one Web account for logging on to different sites, and the MOAT project that provides a framework for taggers to produce semantically annotated content by using URLs of existing resources (see Ding et al. 2009 for a more detailed discussion of UTIs, the UTO, and a comparison of tagging features on Delicious, Flickr and YouTube, as of 2009). Such initiatives fly in the face of the arguments about the freedom that tagging offers its users, but advocates of tag disciplining argue that the social networks are platforms not only for bookmarking or tagging for one’s own use, but for sharing.

Ding et al. (2010) describe FaceTag (available at http://www.facetag.org/), which combines the flat structure of user-generated tags with a faceted vocabulary to enrich the system by incorporating relationships. The four basic facets are resource type, theme, people and purpose, and users can use these facets to supplement their own tags. They argue that “[s]ocial tagging and traditional indexing are similar in that the objective of both activities is to provide access to and support retrieval of a group of resources that share similar features”, and from this perspective the development of tag ontologies and other disciplining methods makes practical sense. They studied the tags taken from Delicious, Flickr and YouTube over three years and showed that behaviour on each of the sites is slightly different, determined by the technological parameters of the site, the content, the purpose and the developing group of users. This, they acknowledge, has implications for the design of system architecture moving forwards.

The EnTag project was a one-year UK JISC funded project that investigated ways to enhance social tagging via controlled vocabularies with a view to improving the quality of tags for increased information discovery and retrieval (Golub et al. 2009, 163). The main focus of attention was the effectiveness of an enhanced tagging system (Matthews et al. 2010). The enhanced system, which had the capability of offering suggestions via a → knowledge organization system, was compared against free social tagging. The enhanced tagging system, EnTag, was tested with experienced IT users, with the results suggesting that users felt that some kind of controlled vocabulary was a “good thing”, and that suggesting or recommending tags would help the usability of a tag based system. They were less happy with the tag cloud as a navigation tool, although the researchers suggest that tag clouds might be made more usable by personalising them and/or using filtering, ranking and clustering design solutions.

Establishing control through using knowledge organization tools in tagging systems has its costs, and determining the value of undertaking such a project is complex and would depend on specific contexts, purposes and players. The EnTag project used DDC to suggest tags to users, concluding that such systems could improve specificity and could help with automatic spell checking (Lykke et al. 2012), while the final study of the EnTag project (Golub et al. 2014) explored whether tagging might be enhanced with suggestions from DDC or another well-established knowledge organization system, and concluded that such enhancements can help taggers, even those who are not professional indexers, especially if their tagging practice is appropriate and altruistic.

Another recent approach to disciplining tags has been the investigation of whether Games With A Purpose (GWAP) might be used to help generate tags. Goker et al. (2014) concluded that the games are more orientated towards describing "what" is in a tagged image, while photo-sharing social networks present a more balanced view of semantic facets (what/when/where/who). Weller and Peters (2008) used the analogy of the tag garden to discuss the management of folksonomy, and suggested that the tag garden could benefit from seeding, weeding and fertilizing, metaphors that Weller expanded upon in her 2010 book, Knowledge Representation in the Semantic Web.

Tag recommendation or tag recommender systems offer another important approach to disciplining tags. Tag recommendation systems aim to support users in the tagging process and to expose different facets of a resource (Jäschke 2007). The goal of these systems is to suggest a relevant set of keywords to assist the user in the tagging process. This is sometimes done by presenting the tags as a tag cloud or by using larger fonts for those tags that are most popular. Jäschke et al., writing in 2007 about using tag recommender systems in the search process, concluded that using “most popular tags” increases relevance and precision. The tag clouds or alternative representations can also be used as navigation tools by information seekers. The process of tag recommendation is that when a user posts a new resource on a Web 2.0 platform, the tag recommender suggests some keywords to tag the resource based on criteria of relevance.
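
As a minimal sketch of the idea, assuming a simple co-occurrence criterion of relevance with a fallback to globally popular tags (invented data and function names; production recommenders use far more sophisticated filtering algorithms, as noted below), a recommender might work as follows.

```python
from collections import Counter, defaultdict

# Past (user, resource, tag) assignments; purely illustrative.
history = [
    ("u1", "r1", "python"), ("u1", "r1", "tutorial"),
    ("u2", "r1", "python"), ("u2", "r2", "python"),
    ("u3", "r2", "snake"),  ("u3", "r3", "cats"),
]

popularity = Counter(tag for _, _, tag in history)
tags_per_resource = defaultdict(set)
for _, resource, tag in history:
    tags_per_resource[resource].add(tag)

# Co-occurrence counts: how often two tags label the same resource.
cooc = defaultdict(Counter)
for tags in tags_per_resource.values():
    for t in tags:
        cooc[t].update(tags - {t})

def recommend(seed_tags, k=3):
    """Suggest up to k tags for a new resource, given tags already entered."""
    scores = Counter()
    for t in seed_tags:
        scores.update(cooc[t])
    for t in seed_tags:
        scores.pop(t, None)
    ranked = scores if scores else popularity   # fall back on "most popular tags"
    return [t for t, _ in ranked.most_common(k + len(seed_tags))
            if t not in seed_tags][:k]

print(recommend({"python"}))   # e.g. ['tutorial', 'snake']
print(recommend({"holiday"}))  # unseen tag -> most popular tags overall
```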

The construction of tag recommender systems is very much dependent on the development of effective and relevant filtering algorithms, and the development of such algorithms has been the focus of much research over recent years (for example, Lee and Chun 2007; Mishne et al. 2006; Musto et al. 2009; Kowald 2014; Wang et al. 2014). While the potential strengths lie in the ability of well-designed recommender systems to help produce relevant and appropriate keywords as tags, there is always a danger that tag recommender systems suggest inappropriate, irrelevant or obscure keywords to taggers. Another danger is that the overzealous implementation of tag recommendation systems will curb the creativity of individual tagging and lead to homogeneous tagging systems, privileging particular worldviews and certain voices, possibly the voices of the “super taggers”.

Research has also focused on the influence of tag recommenders on indexing quality in tagging systems. Dellschaft and Staab (2012), exploring tagging in image retrieval, undertook a study that examined the tags assigned by Mechanical Turk workers to images with an accompanying description, and compared them with tags assigned to images without an accompanying description. They discovered that the taggers who could see the descriptions spent significantly more time on the task. The presence of a description led to increased tag production and global tag diversity, but reduced inter-tagger diversity. The tags in the without-description condition tended to be more general than those in the with-description condition, but showed more diversity. The authors suggest that the ideal system might include tags generated through both methods, and that in soliciting tags from crowd-workers, designers of image tagging systems should ensure that “the tags for each image are provided by indexers who can observe image text description and another part by crowd-workers who can observe only the image itself”. Godoy et al. (2016) provide a useful overview of folksonomy based recommender systems, identifying their role and the advantages that they offer to growing Web 2.0 platforms.

6. Comparing tagging systems and library based knowledge organization systems

Tagging systems have been compared with library based knowledge organization systems to address questions relating to performance, usefulness, synonym control and browsability. Heymann and Garcia-Molina (2008) undertook an experiment in which they compared LibraryThing and Goodreads tags with LC, DDC and MARC 008 tags, and their evaluative framework outlined the features present in library systems that they believe “social cataloguing systems” should emulate:

  1. Objective, content-based annotations.
  2. Appropriate group size frequencies. "A system made up of groups of works where each group contains two works would be difficult to browse, as would a system where all groups are made up of a million works. A system should have the right distribution of these group sizes in order to be usable".
  3. Good coverage of the same groups as the library terms: A group in this instance is made up of the works tagged with a specific tag.
  4. Good recall: "A system should not only have the right groups of works, but it should have enough works annotated in order to be useful. For example, a system with exactly the same groups as libraries, but with only one work per group (rather than, say, thousands) would not be very useful".
  5. Little synonymy in annotations.
  6. Consistent cross-system annotation use. "Across the same type of system, in this case, across tagging systems, we would like to see the systems use the same vocabulary of tags because they are annotating the same type of objects—works".
  7. Consistent cross-system object annotation. "We would like the same work in two different tagging systems to be annotated with the same, or a similar distribution, of tags".

Their work showed that tagging in Goodreads and LibraryThing is predominantly objective and content-based, though many other types of tags exist and are prevalent. Tags have group size frequencies that are similar to those of library terms, suggesting that a similar quality of browsing is facilitated. They also found that the tags had good coverage of many of the same groups as library terms, implying that taggers found the right ways to divide up the books in the system. Tags had acceptable recall, although recall was much better in relation to popular objects, and synonymy was found not to be a big problem. The tags in their data set are equivalent to, or contain, library terms. They do not really explore the taggers themselves on the two literary orientated websites. This might affect the results, given that there is a fair chance that a relatively high percentage of taggers trained in knowledge organization were tagging on these sites; nonetheless, the work is of interest.

Tagging practice has been compared with conventional indexing practice by other scholars, for example Rorissa (2010), who, in examining the similarities and differences between Flickr tags and controlled indexing keywords in a general image collection, aimed to “identify the structure of tags used for describing images on Flickr and empirically test the difference between that and the structure of index terms in general image collections according to categories of attributes of images in frameworks established by previous research” (4). Specifically, the frameworks were those developed and used by Enser & McGregor (1992) and Jörgensen (1998). The study showed that there were differences in structure between tag terms and index terms and that taggers’ behaviour differs from trained indexers’ behaviour. Tags often include the perspective and the context of the person doing the tagging, so they can be richer than the conventional index term. However, the professional indexer may evaluate the information content more thoroughly than the tagger, adding value to the index terms in terms of their precision for retrieval purposes (10). Other examples of this kind of study include Tsui et al. (2009), who looked at folksonomy as a way to augment conventional approaches to content description; Yi and Chan (2009), who linked folksonomy and LCSH; and Lawson (2009), who compared keywords used on OCLC's WorldCat with tags assigned to the same books in LibraryThing and Amazon and concluded that social tagging could enrich conventional resource description.

Other studies that have developed in-depth analysis of tagging as a practical application in the area of knowledge organization include Keshet (2011) who examined tagging as social classification; Morrison (2008) who compared the search information retrieval performance of folksonomies from social bookmarking Web sites against search engines and subject directories; and Spiteri (2007) who evaluated tags selected from three websites against the National Information Standards Organization (NISO) guidelines for the construction of controlled vocabularies.

Empirical evaluative studies of the retrieval performance of tagging systems have been undertaken from early in their history. Morrison (2007) measured the effectiveness of folksonomies as information retrieval tools by conducting a “shoot-out” study between search engines, directories and folksonomies, examining precision, recall and overlap of results. Participants, drawn from Information Studies students, were asked to generate the queries themselves based on their own information needs and to evaluate the relevance of the results. In this study, the folksonomies outperformed directories for news searches in both recall and precision but fell well behind search engines. Folksonomies also fell behind in entertainment searches, although not significantly behind directories, and they performed worst for factual and exact site queries. Search engines had the highest precision and recall scores for all search types. Morrison argued that despite their performance limitations, folksonomies nevertheless show promise and could be used to improve search engine performance as they develop over time and increase their user base.
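
For readers unfamiliar with the measures used in such comparisons, the short sketch below shows how precision and recall would be computed for a single query, using invented result sets and relevance judgments.

```python
# Invented example: documents returned for one query, and the documents
# the participant judged relevant for that query.
retrieved = {"d1", "d2", "d3", "d4", "d5"}
relevant = {"d2", "d4", "d7", "d9"}

hits = retrieved & relevant                 # relevant documents actually returned
precision = len(hits) / len(retrieved)      # how much of the result list is relevant
recall = len(hits) / len(relevant)          # how much of the relevant material was found

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# precision = 0.40, recall = 0.50
```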

Seki, Quin and Uhuera (2010) examined MeSH and CiteULike tags and showed that they performed similarly when searched separately but that when they were combined, performance increased significantly. Another area of interest in relation to user generated content is whether members of a particular group, for example domain experts, tend to tag in similar ways, which in turn might impact on relevance and usability. In relation to this point, Lee and Schleyer (2012) undertook research that investigated MeSH and CiteULike terms and concluded that the terms in MeSH and CiteULike reflected the different understandings of the two groups: that of the professionally trained indexers, and that of the users who are domain experts.

A number of empirical studies comparing user generated content and controlled vocabularies were carried out within the INEX (Initiative for the Evaluation of XML Retrieval) Social Book Search Track. This track was introduced in 2010 and ran until 2014, when it continued as the CLEF Social Book Search Lab. The evaluation studies undertaken on the INEX track evaluated book retrieval on Amazon, LibraryThing and libraries. Koolen, Kamps and Kazai (2012) found that for judging topical relevance, LibraryThing reviews were more important than the core bibliographical elements or the tags. The 2014 Koolen study also found that reviews were more important than the core bibliographic data or the tags. The Social Book Search Track shows that system effectiveness increases when systems include user generated content for a broad range of tasks. The focus of these tasks is on measuring the perceived usefulness of the system, and the results suggest that it is really the reviews that complement more conventional search.

In a large scale empirical study, Bogers and Petras (2015) compared tags and controlled vocabularies in relation to book search. They discovered that tags and controlled vocabularies achieve similar retrieval effectiveness, although significant differences exist in the distribution of tag terms and controlled vocabulary terms in their study. The average number of types is larger for controlled vocabularies than for tags, while the average number of tokens is larger for tags than for controlled vocabularies, which means that there are more unique terms in the controlled vocabularies but more repetition of terms in the tags. They noted that while there was no significant difference in retrieval effectiveness, tags appeared to perform better overall, and they suggest that the difference in type/token averages could offer a possible explanation. One explanation might be that the keywords used in the tags are qualitatively better, while another might be that precision is more important in book search than recall. More terms to match on, that is, more types, is likely to benefit recall, while more repetition of the same term, that is, more tokens, is likely to strengthen precision. This would suggest that controlled vocabularies improve recall while tags have a precision enhancing effect.
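
The type/token distinction can be made concrete with a small sketch using invented term lists: types are distinct terms and tokens are total term occurrences, so aggregated tags typically show heavy repetition (more tokens per type), while controlled vocabulary fields contribute more unique terms.

```python
# Invented term lists attached to one book record.
tags = ["fantasy", "fantasy", "dragons", "fantasy", "dragons", "favourites"]
controlled = [
    "Fantasy fiction",
    "Dragons -- Fiction",
    "Quests (Expeditions) -- Fiction",
    "Coming of age -- Fiction",
]

def types_and_tokens(terms):
    """types = distinct terms; tokens = total term occurrences."""
    return len(set(terms)), len(terms)

for label, terms in (("tags", tags), ("controlled vocabulary", controlled)):
    n_types, n_tokens = types_and_tokens(terms)
    print(f"{label}: {n_types} types, {n_tokens} tokens, "
          f"{n_tokens / n_types:.1f} tokens per type")
# tags: 3 types, 6 tokens, 2.0 tokens per type
# controlled vocabulary: 4 types, 4 tokens, 1.0 tokens per type
```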

7. Taggers and tagging practice

As tagging has become established practice, many studies have explored the motivations of taggers. It was clear from the early days of tagging that motivations range from the selfish to the altruistic (Hammond et al. 2005). Panke and Gaiser (2009) identified four types of taggers: ego taggers, who seek the publicity of being taggers; archivers, who tag to organize their social web activities; broadcasters, who tag to share content; and team players, who use tags to exchange information in personal networks. Nam and Kannan (2014) categorize the motives of tagging as relating to (a) content organization and (b) social communication, which can include (following Ames and Naaman 2007): self-orientated organization, self-orientated communication, social organization and social communication. There has been considerable research investigating “citer motivation” (see, for example, the classic 1965 Garfield paper), and there are some overlaps between the two areas; however, citation and referencing derive from a specific form of communicative practice with a clear and focused purpose, while tagging is less disciplined, domain driven and conventionalized.

Gupta et al. (2011) offer a useful overview of tagger motivation which, they suggest, includes:

  • future retrieval,
  • contribution and sharing,
  • attracting attention,
  • play and competition,
  • self-presentation (self-referential tags),
  • opinion expression,
  • task organization,
  • social signaling,
  • money (e.g. tagging for Amazon Mechanical Turk projects),
  • technological ease.

8. Research fronts

Research in tagging as knowledge organization continues to engage with a range of topics from tag enhancement to tagging behavior. Recent trends include:

And finally, while current tagging practice tends to take the form of inputting individual terms or short phrases, in other words operating mainly on the paradigmatic plane, it may be that operating on the syntagmatic plane, through sentences and stories, would allow us to capture a broader range of interpretations. Employing stories to capture description and affective responses has generated interest in relation to images, and there is some acknowledgement that rich descriptions of images might enhance indexing exhaustivity, and indeed inform indexers’ understanding of users’ seeking behaviour (see, for example, O’Connor, O’Connor and Abbas 1999, 682; Greisdorf and O’Connor 2002). O’Connor, O’Connor and Abbas (1999) noted that users employ stories to describe the content of images (684) and tend to use a narrative style for their descriptions as they become accustomed to the viewing experience offered by an image (687–688), but the possibility of using these stories in image indexing is only just starting to be considered by scholars because of the 'lack of a widely accepted conceptual framework within which to make indexing decisions' among experts (Jörgensen 2003, 252).

A project undertaken by Lieberman, Rosenzweig and Singh (2001) developed a prototype user interface agent, ARIA (Annotation and Retrieval Integration Agent), which can sit in the user's email editor and sift “descriptions of images entered for the purposes of storytelling in e-mail” for annotations and indexing terms. The storytelling that might be done through e-mail communicative practices becomes the raw material for image annotation. More recently, Rafferty and Albinfalah (2014) investigated storytelling in users' descriptions of images using two “writerly” high modality images. Examining a small number of responses in some detail, the investigation established that storytelling plays an important role in how people interpret images and suggested that incorporating elements of storytelling into the indexing process might be valuable in relation to indexing exhaustivity. One of the challenges in tagging is to encourage creativity while at the same time disciplining input. Storytelling is a pervasive and generally pleasurable form of human communicative practice. In addition, storytelling offers a syntagmatic approach to user-based indexing input based on ubiquitous and very human communicative structures.

9. Concluding remarks

Overall, the literature would suggest that while tagging and other forms of user generated content can appear to perform less well than conventional controlled vocabulary search systems in relation to certain retrieval performance measures, such approaches can complement, enrich, and indeed enhance conventional retrieval systems, for example in relation to book search (see Bogers and Petras 2015). Tagging systems have strengths and weaknesses relative to other forms of knowledge organization. In relation to complementing and enriching other forms of knowledge organization, tagging offers opportunities for indexing → aboutness and emotion, particularly in the tagging of non-text based resources. Work in this area has been undertaken by Neal et al. (2009) in relation to musical facets, tags and emotion, and by Lea and Neal (2009) on image searching. Neal et al.'s paper also emphasises the value of a “bottom-up” approach to image descriptions in situations where it is impossible for a few experts to describe numerous images.

Social tagging used alongside semantic web tools such as ontologies has also been shown to enrich access and discovery, and to offer alternative access routes into digital collections, in projects such as Bertola and Patti’s work to develop software tools (ArsEmotica) that allow for emotion-driven access to artworks (see Bertola and Patti 2013; Bertola and Patti 2016). The quality of indexing, however, is necessarily dependent on the taggers who undertake the tagging, and on the quality of keywords generated by taggers, which in turn depends on knowledge and understanding (see, for example, Rafferty (2011) on the dangers of potentially losing cultural and historical knowledge in tagging systems). Like all other knowledge organization systems, tagging will privilege specific worldviews, and may ignore or marginalise other worldviews, with the result that certain concepts and terms are neglected. The crucial element influencing the quality of tagging is, in the end, the quality of the taggers. The strengths and weaknesses of tagging are also dependent on the purpose and scope of particular domains and specific platforms. As Shirky argued early in the history of tagging, within the social web, a domain with a large corpus, no formal categories, unstable and unrestricted entities with no clear edges, uncoordinated amateur users and naïve cataloguers, and no clear authority, tagging is perhaps the most pragmatic solution we have for facilitating some sort of information management.

There seems little doubt that in the context of the social web, social tagging offers novel, interesting and engaging approaches to resource description and discovery, although some challenges are still in the process of being addressed. One of those challenges for information systems designers is to develop systems that can discipline and bring out the very best in tagging practice, while ensuring that certain worldviews and voices do not dominate. In addition to information search and retrieval, social tagging systems allow for social search and retrieval. Communication in these instances is as much to do with belonging, networking and sharing as it is to do with denotative-semantic notions of meaning making. Tagging goes beyond the denotative and the informational and often includes emotional responses, connotative responses and phatic communication (in the form of emoticons). It might be that the social connections which draw together like-minded groups of people, the recommendation systems that emerge, and the broad communicative practices that are facilitated will provide the foundation for enhanced and enriched approaches to search and discovery.

[top of entry]

10. Further readings

[top of entry]

References

Almoqhim, Fahad, David E. Millard, and Nigel Shadbolt. 2014. “Improving on Popularity as a Proxy for Generality when Building Tag Hierarchies from Folksonomies”. In International Conference on Social Informatics. Springer, 95-110.

Ames, Morgan, and Mor Naaman. 2007. “Why We Tag: Motivations for Annotation in Mobile and Online Media”. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 971-980.

Baldoni, Matteo, Cristina Baroglio, Viviana Patti, and Paolo Rena. 2012. “From Tags to Emotions: Ontology-driven Sentiment Analysis in the Social Semantic Web”. Intelligenza artificiale 6, no. 1: 41-54.

Bar-Ilan, Judit, Maayan Zhitomirsky-Geffet, Yitzchak Miller, and Snunith Shoham. 2012. “Tag-based Retrieval of Images through Different Interfaces: A User Study”. Online Information Review 36, no. 5: 739-757

Benz, Dominik, and Andreas Hotho. 2007. "Position Paper: Ontology Learning from Folksonomies". LWA 7: 109-112.

Berman, Sanford. 1971. Prejudices and Antipathies: A Tract on the LC Subject Heads Concerning People. Metuchen, N.J.: Scarecrow Press.

Bertola, Federico, and Viviana Patti. 2013. "Emotional Responses to Artworks in Online Collections". UMAP Workshops, vol. 997. https://aperto.unito.it/retrieve/handle/2318/146314/175030/patch2013_paper_12.pdf

Bertola, Federico, and Viviana Patti. 2016. "Ontology-based Affective Models to Organize Artworks in the Social Semantic Web". Information Processing & Management 52, no. 1: 139-162.

Bogers, Toine, and Vivien Petras. 2015. "Tagging vs. Controlled Vocabulary: Which is More Helpful for Book Search?". In iConference 2015 Proceedings. https://www.ideals.illinois.edu/handle/2142/73673

Borrego, Angel, and Jenny Fry. 2012. “Measuring Researchers’ Use of Scholarly Information Through Social Bookmarking Data: A Case Study of BibSonomy”. Journal of Information Science 38, no. 3: 297-308. DOI: http://dx.doi.org/10.1177/0165551512438353

Cantador, Iván, Ioannis Konstas, and Joemon M. Jose. 2011. "Categorising Social Tags to Improve Folksonomy-based Recommendations". Web Semantics: Science, Services and Agents on the World Wide Web 9, no. 1: 1-15.

Cattuto, Ciro, Christoph Schmitz, Andrea Baldassarri, Vito D.P. Servedio, Vittorio Loreto, Andreas Hotho, Miranda Grahl, and Gerd Stumme. 2007. “Network Properties of Folksonomies”. AI Communications 20: 245-262.

Chinnov, Andrey, Pascal Kerschke, Christian Meske, Stefan Stieglitz, and Heike Trautmann. 2015. "An Overview of Topic Discovery in Twitter Communication through Social Media Analytics". Twenty-first Americas Conference on Information Systems, Puerto Rico 2015. 1-10 https://pdfs.semanticscholar.org/e1c4/fa3554f7eaaa3942b58d51ffc25c1463011f.pdf

Choi, Yunseon. 2014. "Social Indexing: A Solution to the Challenges of Current Information Organization". In New Directions in Information Organization, eds. Jung-Ran Park and Lynne. C. Howarth. Bingley: Emerald. 107-135

Choi, Youngok, and Sue Yeon Syn. 2016. "Characteristics of Tagging Behavior in Digitized Humanities Online Collections". Journal of the Association for Information Science and Technology 67, no.5: 1089-1104.

Ding, Ying, Elin K. Jacob, Zhixiong Zhang, Schubert Foo, Erjia Yan, Nicolas L. George, and Lijiang Guo. 2009. "Perspectives on Social Tagging". Journal of the Association for Information Science and Technology 60, no. 12: 2388-2401.

Ding, Ying, Elin K. Jacob, Michael Fried, Ioan Toma, Erjia Yan, Schubert Foo, and Staša Milojevic. 2010. "Upper Tag Ontology for Integrating Social Tagging Data". Journal of the Association for Information Science and Technology 61, no.3: 505-521.

Doctorow, Cory. 2001. "Metacrap: Putting the Torch to Seven Straw-Men of the Meta-Utopia". Retrieved from https://people.well.com/user/doctorow/metacrap.htm

Doerfel, Stephan, Daniel Zoller, Philipp Singer, Thomas Niebler, Andreas Hotho, and Markus Strohmaier. 2016. "What Users Actually Do in a Social Tagging System: A Study of User Behavior in BibSonomy". ACM Transactions on the Web (TWEB) 10, no. 2: 14.

Dubinko, Micah, Ravi Kumar, Joseph Magnani, Jasmine Novak, Prabhakar Raghavan, and Andrew Tomkins. 2007. "Visualizing Tags over Time". ACM Transactions on the Web (TWEB) 1: 7.

Enser, Peter G.B., and Colin G. McGregor. 1992. "Analysis of Visual Information Retrieval Queries: Report on Project G16412 to the British Library Research and Development Department". London: British Library. https://www.researchgate.net/publication/228820970_Image_Retrieval_Knowledge_and_Art_History_Curriculum_in_the_Digital_Age

Feinberg, Melanie. 2006. "An Examination of Authority in Social Classification Systems". Advances in Classification Research Online 17, no.1: 1-11.

Font, Frederic, Sergio Oramas, György Fazekas, and Xavier Serra. 2014. "Extending Tagging Ontologies with Domain Specific Knowledge". In Proceedings of the 2014 International Conference on Posters & Demonstrations Track-Volume 1272, CEUR-WS. 209-212.

Foster, Allen and Pauline Rafferty. (eds.). 2016. Managing Digital Cultural Objects: Analysis, Discovery and Retrieval. London: Facet.

Fridman Noy, Natalya, Abhita Chugh, and Harith Alani. 2008. “The CKC Challenge: Exploring Tools for Collaborative Knowledge Construction”. IEEE Intelligent Systems 23, no. 1: 64-68. DOI: http://dx.doi.org/10.1109/MIS.2008.14

Furner, Jonathan. 2010. “Folksonomies”. In Encyclopedia of Library and Information Sciences, 3rd ed., eds. Marcia Bates and Mary Niles Maack. New York: Taylor and Francis, 1858-1866.

Garfield, Eugene. 1965. "Can Citation Indexing Be Automated?". Statistical Association Methods for Mechanized Documentation, Symposium Proceedings, Washington, DC: National Bureau of Standards, Miscellaneous Publication 269: 189-192.

Gartner, Richard. 2016. Metadata: Shaping Knowledge from Antiquity to the Semantic Web. Cham: Springer.

Godoy, Daniela, and Alejandro Corbellini. 2016. "Folksonomy-Based Recommender Systems: A State-of-the-Art Review". International Journal of Intelligent Systems 31, no. 4: 314-346.

Golder, Scott A., and Bernardo A. Huberman. 2006. "Usage Patterns of Collaborative Tagging Systems". Journal of Information Science 32, no. 2: 198-208.

Golub, Koraljka, Jim Moon, Douglas Tudhope, Catherine Jones, Brian Matthews, Bartłomiej Puzoń, and Marianne Lykke Nielsen. 2009. "EnTag: Enhancing Social Tagging for Discovery". In Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries, ACM. 163-172.

Golub, Koraljka, Marianne Lykke, and Douglas Tudhope. 2014. "Enhancing Social Tagging with Automated Keywords from the Dewey Decimal Classification". Journal of Documentation 70, no. 5: 801-828.

Greisdorf, Howard, and Brian O'Connor. 2002. "What do Users See? Exploring the Cognitive Nature of Functional Image Retrieval". Proceedings of the Association for Information Science and Technology 39, no. 1: 383-390. doi:10.1002/meet.1450390142

Gruber, Thomas. 2007. "Ontology of Folksonomy: A Mash-up of Apples and Oranges". International Journal on Semantic Web and Information Systems (IJSWIS) 3, no. 1: 1-11.

Gupta, Manish, Rui Li, Zhijun Yin, and Jiawei Han. 2011. "An Overview of Social Tagging and Applications". In Social Network Data Analytics. Springer, 447-497.

Hammond, Tony, Timo Hannay, Ben Lund, and Joanna Scott. 2005. "Social Bookmarking Tools (I)". D-Lib Magazine 11, no. 4: 1082-9873. Available at: http://www.dlib.org/dlib/april05/hammond/04hammond.html

Hearst, Marti A., and Daniela Rosner. 2008. "Tag Clouds: Data Analysis Tool or Social Signaller?". In Hawaii International Conference on System Sciences, Proceedings of the 41st Annual, IEEE, 160-169.

Helic, Denis, Markus Strohmaier, Christoph Trattner, Markus Muhr, and Kristina Lerman. 2011. "Pragmatic Evaluation of Folksonomies". In Proceedings of the 20th International Conference on World Wide Web, ACM. 417-426.

Heymann, Paul and Hector Garcia-Molina. 2008. Can Tagging Organize Human Knowledge? Technical report. Stanford University. http://ilpubs.stanford.edu/878/.

Hidderley, Rob, and Pauline Rafferty. 1997. “Democratic Indexing: An Approach to the Retrieval of Fiction”. Information Services & Use 17, nos 2-3: 101-109.

Jäschke, Robert, Leandro Marinho, Andreas Hotho, Lars Schmidt-Thieme, and Gerd Stumme. 2008. “Tag Recommendations in Social Bookmarking Systems”. AI Communications 21, no. 4: 231-247. DOI: http://dx.doi.org/10.3233/AIC-2008-0438

Jörgensen, Corinne. 1998. "Attributes of Images in Describing Tasks". Information Processing & Management, 34, nos. 2-3: 161-174.

Jörgensen, Corinne. 2003. Image Retrieval: Theories and Research, Lanham, MD: Scarecrow.

Jörgensen, Corinne, Besiki Stvilia, and Shuheng Wu. 2014. "Assessing the Relationships among Tag Syntax, Semantics, and Perceived Usefulness". Journal of the Association for Information Science and Technology 65, no. 4: 836-849.

Keshet, Yael. 2011. "Classification Systems in the Light of Sociology of Knowledge". Journal of Documentation 67, no. 1: 144-158.

Kim, Hak Lae, Alexandre Passant, John G. Breslin, Simon Scerri, and Stefan Decker. 2008. "Review and Alignment of Tag Ontologies for Semantically-linked Data in Collaborative Tagging Spaces". In 2008 IEEE International Conference on Semantic Computing. IEEE, 315-322.

Kipp, Margaret E.I., Jihee Beak, and Ann M. Graf. 2015. "Tagging of Banned and Challenged Books". Knowledge Organization 42, no. 5: 276-283.

Kipp, Margaret E.I., Inkyung Choi, Jihee Beak, Olha Buchel, and Diane Rasmussen. 2014. "User Motivations for Contributing Tags and Local Knowledge to the Library of Congress Flickr Collection". In Proceedings of the Annual Conference of CAIS/Actes du congrès annuel de l'ACSI. http://www.cais-acsi.ca/ojs/index.php/cais/article/view/837

Konkova, Elena, Ayse. S. Goker, Richard Butterworth, and Andrew MacFarlane. 2014. "Social Tagging: Exploring the Image, the Tags, and the Game". Knowledge Organization 41, no. 1: 57-65.

Koolen, Marijn. 2014. "User Reviews in the Search Index? That'll Never Work!". ECIR, vol. 8416: 323-334.

Koolen, Marijn, Jaap Kamps, and Gabriella Kazai. 2012. "Social Book Search: Comparing Topical Relevance Judgements and Book Suggestions for Evaluation". In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, ACM. 185-194.

Körner, Christian, Roman Kern, Hans-Peter Grahsl, and Markus Strohmaier. 2010. "Of Categorizers and Describers: An Evaluation of Quantitative Measures for Tagging Motivation". In Proceedings of the 21st ACM Conference on Hypertext and Hypermedia. 157-166. ACM.

Kowald, Dominik, Emanuel Lacic, and Christoph Trattner. 2014. "Tagrec: Towards a Standardized Tag Recommender Benchmarking Framework". In Proceedings of the 25th ACM Conference on Hypertext and Social Media, ACM, 305-307

Kroski, Ellyssa. 2005. "The Hive Mind: Folksonomies and User-Based Tagging". InfoTangle Blog, December.

Lasić-Lazić, Jadranka, Sonja Špiranec, and Tomislav Ivanjko. 2017. "Tag-Resource-User: A Review of Approaches in Studying Folksonomies". Qualitative and Quantitative Methods in Libraries 4, no. 3: 699-707.

Lawson, Karen G. 2009. "Mining Social Tagging Data for Enhanced Subject Access for Readers and Researchers". The Journal of Academic Librarianship 35, no. 6: 574-582.

Lee, Sigma On Kee, and Andy Hon Wai Chun. 2007. “Automatic Tag Recommendation for the Web 2.0 Blogosphere Using Collaborative Tagging and Hybrid ANN Semantic Structures”. In ACOS’07: Proceedings of the 6th WSEAS International Conference on Applied Computer Science. Stevens Point, Wisconsin: World Scientific and Engineering Academy and Society (WSEAS), 88-93.

Lee, Danielle H., and Titus Schleyer. 2012. "Social Tagging is no Substitute for Controlled Indexing: A Comparison of Medical Subject Headings and CiteULike Tags Assigned to 231,388 papers". Journal of the Association for Information Science and Technology 63, no. 9: 1747-1757.

Lieberman, Henry, Elizabeth Rosenzweig, and Push Singh. 2001. "Aria: An Agent for Annotating and Retrieving Images". Computer 34, no. 7: 57-62.

Lohmann, Steffen, Paloma Díaz, and Ignacio Aedo. 2011. "MUTO: The Modular Unified Tagging Ontology". In Proceedings of the 7th International Conference on Semantic Systems, ACM. 95-104.

Lorince, Jared, Kenneth Joseph, and Peter M. Todd. 2015. "Analysis of Music Tagging and Listening Patterns: Do Tags Really Function as Retrieval Aids?" In International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction, New York, Springer. 141-152

Lykke, Marianne, A. Høj, L. Madsen, Koraljka Golub, and Douglas Tudhope. 2012. "Tagging Behaviour with Support from Controlled Vocabulary". In Facets of Knowledge Organization: Proceedings of the ISKO UK Second Biennial Conference, 4th-5th July 2011, London. Bingley: Emerald, 41-50.

Ma, Xiaoyue, and Jean-Pierre Cahier. 2012. "Visual Distinctive Language: Using a Hypertopic-based Iconic Tagging System for Knowledge Sharing". In 2012 IEEE 21st International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). IEEE, 456-461.

Ma, Xiaoyue, and Jean-Pierre Cahier. 2014. “Graphically Structured Icons for Knowledge Tagging”. Journal of Information Science 40, no. 6: 779-795.

Macgregor, George, and Emma McCulloch. 2006. "Collaborative Tagging as a Knowledge Organization and Resource Discovery Tool". Library Review 55, no. 5: 291-300.

Madden, Amy, Ian Ruthven, and David McMenemy. 2013. "A Classification Scheme for Content Analyses of YouTube Video Comments". Journal of Documentation, 69, no. 5: 693-714.

Martínez-Ávila, Daniel. 2015. "Knowledge Organization in the Intersection with Information Technologies". Knowledge Organization 42, no. 7: 486–498. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=iih&AN=111299953&lang=pt-br&site=ehost-live.

Mathes, Adam. 2004. "Folksonomies - Cooperative Classification and Communication through Shared Metadata". http://www.adammathes.com/academic/computer-mediated-communication/folksonomies.html

Matthews, Brian, Catherine Jones, Bartlomiej Puzon, Jim Moon, Douglas Tudhope, Koraljka Golub, and Marianne Lykke Nielsen. 2010. "An Evaluation of Enhancing Social Tagging with a Knowledge Organization System". Aslib Proceedings, 62 no. 4/5: 447-465.

Merholz, Peter. 2005. Metadata for the Masses, http://adaptivepath.com/ideas/e000361

Mesnage, Cédric S., and Mark J. Carman. 2009. "Tag navigation". In Proceedings of the 2nd International Workshop on Social Software Engineering and Applications, ACM. 29-32.

Mishne, Gilad. 2006. “AutoTag: A Collaborative Approach to Automated Tag Assignment for Weblog Posts”. In WWW ’06: Proceedings of the 15th International Conference on World Wide Web, ACM. 935-954.

Moirez, Pauline. 2012. “Archives Participatives”. Bibliothèques 2.0 à l’heure des médias sociaux, 187-197. http://archivesic.ccsd.cnrs.fr/sic_00725420

Morrison, P. Jason. 2008. "Tagging and Searching: Search Retrieval Effectiveness of Folksonomies on the World Wide Web". Information Processing & Management 44, no. 4: 1562-1579.

Musto, Cataldo, Fedelucio Narducci, Marco De Gemmis, Pasquale Lops, and Giovanni Semeraro. 2009. "STaR: A Social Tag Recommender System". In Proceedings of the ECML/PKDD 2009 Discovery Challenge Workshop, 215-227.

Niebler, Thomas, Martin Becker, Daniel Zoller, Stephan Doerfel, and Andreas Hotho. 2016. "FolkTrails: Interpreting Navigation Behavior in a Social Tagging System". In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, ACM. 2311-2316.

Noruzi, Alireza. 2006. "Folksonomies: (Un)Controlled Vocabulary?". Knowledge Organization 33, no. 4: 199-203.

O’Connor, Brian C., Mary K. O’Connor, and June M. Abbas. 1999. "User Reactions as Access Mechanism: An Exploration Based on Captions for Images". Journal of the Association for Information Science and Technology. 50, no. 8: 681–697.

O'Reilly, Tim. 2005. "What Is Web 2.0?: Design Patterns and Business Models for the Next Generation of Software", http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.

Olson, Hope A. 2013. "Assumptions of Naming in Information Storage and Retrieval: A Deconstruction". In Proceedings of the Annual Conference of CAIS/Actes du congrès annuel de l'ACSI, 110-119.

Panke, Stefanie, and Birgit Gaiser. 2009. ""With My Head Up in the Clouds": Using Social Tagging to Organize Knowledge". Journal of Business and Technical Communication 23, no. 3: 318-349.

Pappas, Dimitrios, and Iraklis Paraskakis. 2016. "Adaptive Knowledge Retrieval Using Semantically Enriched Folksonomies". In 2016 11th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP), IEEE, 100-105.

Paraschakis, Dimitris, and Marie Gustafsson Friberger. 2014. "Playful Crowdsourcing of Archival Metadata through Social Networks". In ASE BigData/SocialCom/Cybersecurity Conference, Stanford University, May 27-31, 1-9

Peters, Isabella, and Wolfgang G. Stock. 2007. "Folksonomy and Information Retrieval". Proceedings of the Association for Information Science and Technology 44, no. 1: 1-28.

Peters, Isabella, and Katrin Weller. 2008. "Tag Gardening for Folksonomy Enrichment and Maintenance". Webology 5, no. 3: 1-18.

Peters, Isabella. 2009. Folksonomies: Indexing and Retrieval in Web 2.0. Berlin, Germany: Walter de Gruyter.

Peters, Isabella, and Wolfgang G. Stock. 2010. "“Power Tags” in Information Retrieval". Library Hi Tech 28, no. 1: 81-93.

Porter, Joshua. 2005. "Controlled Vocabularies Cut off the Long Tail". Bokardo.com: Social Web design [blog] (2005), http://bokardo.com/archives/controlled_vocabularies_long_tail/.

Quintarelli, Emanuele. 2005. “Folksonomies: Power to the People”. In ISKO Italy-UniMIB Meeting, Milan. http://www.iskoi.org/doc/folksonomies.htm.

Rafferty, Pauline. 2011. “Informative Tagging of Images: The Importance of Modality in Interpretation”. Knowledge Organization 38 no. 4: 283-298

Rafferty, Pauline, and Rob Hidderley. 2007. “Flickr and Democratic Indexing: Dialogic Approaches to Indexing”. Aslib Proceedings 59, nos. 4/5: 397-410.

Rattenbury, Tye, Nathaniel Good, and Mor Naaman. 2007. “Towards Automatic Extraction of Event and Place Semantics from Flickr Tags”. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 103-110.

Razikin, Khasfariyati, Dion Hoe-Lian Goh, Alton Y.K. Chua, and Chei Sian Lee. 2008. "Can Social Tags Help You Find What You Want?". In International Conference on Theory and Practice of Digital Libraries. Berlin, Heidelberg: Springer, 50-61.

Rorissa, Abebe. 2010. "A Comparative Study of Flickr Tags and Index Terms in a General Image Collection". Journal of the Association for Information Science and Technology 61, no. 11: 2230-2242.

Schmitz, Christoph, Andreas Hotho, Robert Jäschke, and Gerd Stumme. 2006. "Mining Association Rules in Folksonomies". In Data Science and Classification. Springer, 261-270.

Schmitz, Patrick. 2006. "Inducing Ontology from Flickr Tags". In Collaborative Web Tagging Workshop at WWW2006, Edinburgh, Scotland, 50. http://www.ambuehler.ethz.ch/CDstore/www2006/www.rawsugar.com/www2006/22.pdf

Schultes, Peter, Verena Dorner, and Franz Lehner. 2013. "Leave a Comment! An In-Depth Analysis of User Comments on YouTube". Wirtschaftsinformatik 42: 659-673. https://pdfs.semanticscholar.org/d84e/c961f13ebc56bd45f63ac78a6e07bbba2a63.pdf

Seki, Kazuhiro, Huawei Qin, and Kuniaki Uehara. 2010. "Impact and Prospect of Social Bookmarks for Bibliographic Information Retrieval". In Proceedings of the 10th Annual Joint Conference on Digital Libraries, ACM. 357-360.

Shirky, Clay. 2005. “Ontology is Overrated: Categories, Links, Tags”. http://shirky.com/writings/ontology_overrated.html

Silva, Ana Margarida Dias da. 2017. "Folksonomies in Archives: Controlled Collaboration for Specific Documents". Ariadne 77. https://estudogeral.sib.uc.pt/bitstream/10316/40817/1/Folksonomies%20in%20archives-Ariadne77_SILVA.pdf

Sinclair, James, and Michael Cardew-Hall. 2008. "The Folksonomy Tag Cloud: When is it Useful?" Journal of Information Science 34, no. 1: 15-29.

Springer, Michelle, Beth Dulabahn, Phil Michel, Barbara Natanson, David W. Reser, Nicole B. Ellison, Helena Zinkham, and David Woodward. 2008. "For the Common Good: The Library of Congress Flickr Pilot Project". Library of Congress, Prints and Photographs Division. https://www.loc.gov/rr/print/flickr_report_final.pdf

Thom-Santelli, Jennifer, Michael J. Muller, and David R. Millen. 2008. "Social Tagging Roles: Publishers, Evangelists, Leaders". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM. 1041-1044.

Trant, Jennifer. 2009. "Studying Social Tagging and Folksonomy: A Review and Framework". Journal of Digital Information 10, no. 1:1-44

Trattner, Christoph, Christian Körner, and Denis Helic. 2011. "Enhancing the Navigability of Social Tagging Systems with Tag Taxonomies". In Proceedings of the 11th International Conference on Knowledge Management and Knowledge Technologies, ACM. 18.

Tsui, Eric, Wai Ming Wang, Chi Fai Cheung, and Adela SM Lau. 2010. "A Concept–Relationship Acquisition and Inference Approach for Hierarchical Taxonomy Construction from Tags". Information Processing & Management 46, no. 1: 44-57.

Vaidya, Praveen Kumar, and N. S. Harinarayana. 2016. "The Comparative and Analytical Study of LibraryThing Tags with Library of Congress Subject Headings". Knowledge Organization 43, no. 1:35-43

Vaidya, Praveenkumar, and N. S. Harinarayana. 2017. "The Role of Social Tags in Web Resource Discovery: An Evaluation of User-Generated Keywords". Annals of Library and Information Studies (ALIS) 63, no. 4: 289-297.

Vander Wal, Thomas. 2005. “Explaining and Showing Broad and Narrow Folksonomies”, http://www.vanderwal.net/random/entrysel.php?blog=1635

Wang, Shaowei, David Lo, Bogdan Vasilescu, and Alexander Serebrenik. 2014. "Entagrec: An Enhanced Tag Recommendation System for Software Information Sites". In IEEE International Conference on Software Maintenance and Evolution (ICSME), IEEE, 291-300.

Weller, Katrin. 2010. Knowledge Representation in the Social Semantic Web. Berlin, Germany: Walter de Gruyter.

Weller, Katrin, Isabella Peters, and Wolfgang G. Stock. 2009. “The Collaborative Knowledge Organization System”. In Handbook of Research on Social Interaction Technologies and Collaboration Software: Concepts and Trends. IGI Global, 132-144.

Wright, Alex. 2005. “Folksonomy”. http://www.agwright.com/blog/archives/000900.htm

Yi, Kwan, and Lois Mai Chan. 2009. "Linking Folksonomy to Library of Congress Subject Headings: An Exploratory Study". Journal of Documentation 65, no. 6: 872-900.

Zauder, Kresimir, Jadranka Lasić-Lazić, and Mihaela Banek Zorica. 2007. "Collaborative Tagging Supported Knowledge Discovery". In Proceedings of the 29th International Conference on Information Technology Interfaces (ITI 2007), IEEE, 437-442.

Zubiaga, Arkaitz, Christian Körner, and Markus Strohmaier. 2011. "Tags vs Shelves: From Social Tagging to Social Classification". In Proceedings of the 22nd ACM Conference on Hypertext and Hypermedia, ACM, 93-102.

[top of entry]

Appendix: Examples of platforms that use tagging

BibSonomy https://www.bibsonomy.org/
CiteULike http://www.citeulike.org/group/
Flickr https://www.flickr.com/
LibraryThing https://www.librarything.com/
LastFM https://www.last.fm/
Twitter https://twitter.com/
YouTube https://www.youtube.com/

[top of entry]

 



Version 1.1 (= 1.0 plus reference to Doctorow and Martínez-Ávila); published 2017-11-29, updated 2018-02-01
Article category: KOS Kinds

This article (version 1.0) is published in Knowledge Organization, vol. 45 (2018), Issue 6, pp. 500-516.
How to cite it (version 1.0): Rafferty, Pauline. 2018. “Tagging”. Knowledge Organization 45, no. 6: 500-516. Also available in ISKO Encyclopedia of Knowledge Organization, ed. Birger Hjørland, coed. Claudio Gnoli. http://www.isko.org/cyclo/tagging

©2017 ISKO. All rights reserved.