Posted on 2016/09/30

“The Pirate Bay for Research”: On Sci-Hub and Open Access


Screenshot of Sci-Hub Homepage

Daniel Allington’s statement, in the preamble to a 2013 blog post, that “[i]n the last two or three years, open access to academic journal articles has gone from being something that noisy idealists were unrealistically demanding to something that’s going to happen whether we like it or not” (“On Open Access, and Why It’s Not the Answer,” emphasis added) instantly struck me as eerily prophetic.

For anyone with even a passing interest in academic publishing and Open Access (OA) issues, a gripping drama has been playing out at least since June 2015 when the academic publisher Elsevier filed a complaint against two websites, Sci-Hub and Libgen (an abbreviation for the Library Genesis Project), accusing them of pirating academic (mostly scientific) articles and offering them up for free online (Glance). In my two years working in a library before applying to do my MA, the proliferation of these services was something to behold, and it inspired many a pained water cooler conversation with my librarian colleagues.

As TechCrunch sexily dubbed it back in April, Sci-Hub is, essentially, “The Pirate Bay for research” (Coldewey), making more than 58 million (formerly paywalled) articles available to searchers who supply a “URL, PMID [PubMed Identifier] / DOI [Digital Object Identifier] or search string” (Sci-Hub). According to a fascinating Science piece published earlier this year, in which author John Bohannon worked with Sci-Hub founder Alexandra Elbakyan to analyze six months of Sci-Hub user data, the website served 28 million individual paper downloads[1] to 3 million (itself a conservative estimate) unique IP addresses between September 1st 2015 and February 29th 2016 (“Who’s Downloading Pirated Papers? Everyone”). In a statement from last October, the Association of American Publishers issued its support for Elsevier’s complaint, arguing that Sci-Hub and Libgen were engaged in “compromising the security of colleges, universities, database owners, and individuals’ personal computers for the purpose of engaging in mass criminal computer intrusions and copyright infringement” (Association of American Publishers). Since then, Sci-Hub has bounced between domain names, managing so far to avoid shutdown (Moody 9).


Screenshot of Sci-Hub’s “Ideas” section

There have been – and continue to be – spirited conversations across the various stakeholder groups implicated in the Sci-Hub story. For our purposes, I think it is worth examining Sci-Hub’s seemingly fraught relationship with the OA ethos and its communities. Sci-Hub clearly views itself as acting in accord with an OA philosophy: “The Sci-Hub project supports Open Access movement in science. Research should be published in open access, i.e. be free to read. The Open Access is a new and advanced form of scientific communication, which is going to replace outdated subscription models. We stand against unfair gain that publishers collect by creating limits to knowledge distribution” (Sci-Hub). But is Sci-Hub’s model truly reflective of OA in any way? If so, how might this complicate our understanding of OA? And, if not, how can we reconcile these seemingly incompatible uses of the term in our current informational/cultural moment in a way that clarifies the differing underlying approaches to knowledge creation, copyright, scholarship, and academic publishing?

If we follow directly from Peter Suber, Sci-Hub is most definitely not in line with an OA ethos. After all, Suber’s very first OA stipulation is that it “removes price barriers (subscriptions, licensing fees, pay-per-view fees) and permission barriers (most copyright and licensing restrictions)” (“Open Access Overview”). Sci-Hub forcibly accomplishes the former while flagrantly violating the latter. Slightly later, Suber makes this more explicit by stating that “OA is not Napster for science,” that “[i]t’s about lawful sharing, not sharing in disregard of law,” and that “[t]here is no vigilante OA, no infringing, expropriating, or piratical OA” (“Open Access Overview”). Obtaining the consent of copyright holders is central to the OA project, as conceptualized and expressed by Suber, and, consequently, a scenario in which that consent has not been obtained and is instead violated – on a mass scale, no less – disqualifies Sci-Hub and its ilk from claiming OA status.

Ernesto Priego, writing in a post for The Winnower, seems to largely agree with Suber’s articulation of OA, arguing that, to him, “Sci-Hub is not what Open Access is about” (“Signal, Not Solution: Notes on Why Sci-Hub Is Not Opening Access”). For him, “[t]he ‘anyone-can-access-any-paper system we’ve all been dreaming about for years’… is not merely a technological solution. It is mostly a change of paradigm” (“Signal, Not Solution”). Additionally, for Priego, the imagined/ideal “‘system’ would be the whole scholarly apparatus, its human resources and material infrastructure, organised around the principle of sustainable availability and permission to reuse, to read and use as human beings … and with the machines that help us do our work” (“Signal, Not Solution”). So, for him, Sci-Hub’s self-defined OA aspirations fall short because the access it accomplishes is via “merely a technological solution,” a (flagrantly infringing) quick and dirty solution to what he views to be a much larger set of issues – seemingly above and beyond basic technological access – that require paradigmatic change, presumably the kind of change that the OA ethos can help foster.[2]

Where does this leave us in our consideration of Sci-Hub’s fraught OA status? Well, Balázs Bodó provides us with a useful terminological distinction when he includes Sci-Hub and Libgen as part of the Guerilla Open Access movement, in his “Pirates in the Library: An Inquiry into the Guerilla Open Access Movement.” According to Bodó, “they [Sci-Hub and Libgen] constitute two distinct, but closely related elements of a wider Guerilla Open access movement, which uses piracy as a political tool to address the systemic failures of scholarly publishing” (3). Significantly, for our present considerations, Bodó sees Suber’s – and, by extension, Priego’s – OA conceptualization as an inherently conservative effort in comparison to the radical nature of Guerilla OA, as initially theorized by Aaron Swartz in 2008. Bodó states, after an analysis of Suber’s own articulation of OA, “the radical transformation of the status quo was not amongst the goal [sic] of OA” (7). He continues, “[OA’s] course of action was to develop alternatives that, on the long run would complement, rather than replace the fundamentally commercial, paywall based systems of academic publishing” (7). Therefore, Suber’s OA was, in theory and as borne out in practice, ultimately very conservative in the ways in which it sought to effect changes to academic publishing infrastructure, and, crucially, “proved to be incapable of addressing the challenges that came with the rapid transformation of science and higher education around the globe in the last few decades” (7). It was in part this incapability, argues Bodó, that led to Guerilla OA’s emergence.

After glossing the histories of scholarly piracy and the emergence of piratical shadow libraries, and drawing a thought-provoking comparison between Aaron Swartz and Alexandra Elbakyan, Bodó proffers near the end of his text, perhaps provocatively, “[t]here is a good chance that shadow libraries will prove to be irrepressible” (15). Curiously, this is in line with Elbakyan’s own views, as expressed in her blogged rebuttal to Priego, when she says, “[i]t will be impossible to shut down the website completely, so that change is forever” (“Why Sci-Hub Is the True Solution for Open Access: Reply to Criticism”). For Bodó, systemic issues inherent in scholarly publishing infrastructures that remain unaddressed by dominant publishing hegemonies naturally lead to Guerilla OA, almost as a form of demand left frustrated or unsatisfied by an infrastructurally limited supply.

The question, perhaps crystallized by shadow libraries (like Sci-Hub), then becomes (a deceptively simple) one of reform’s efficacy within a (questionably?) reform-able scholarly publishing infrastructure. Suber and Priego’s understanding of OA, and how it can function to bring about change within this infrastructure, stands in stark contrast to the insurgent ethos that undergirds and animates Guerilla OA, as repeatedly articulated publicly by Elbakyan. In fact, in my opinion, this is precisely why we can have both groups simultaneously laying claim to OA as a descriptor: due to fundamental differences in their understanding of what would constitute open access in the present climate, each sees the other as failing to fulfill the mandate of true open access (however that is defined). I hope, in unpacking some of these issues here, that I haven’t favoured one side in this dialogue over the other, because I feel that it is both fascinating and crucial for us to consider as aspiring scholars navigating a fraught digital knowledge creation landscape. It is important for us to try to grasp these divergent values, ones that we actually have the power to shape – through our own research and publication decisions – as knowledge workers.

[1] According to Bohannon, 9,296,485 of these 28 million were for Elsevier products.

[2] It should be noted that Alexandra Elbakyan has personally responded to Priego on her blog, and maintains that she sees a through-line between OA and her Sci-Hub efforts. She even goes as far as to assert that she was “inspired by [the] Open Access movement” and that “if not [for] argumentation developed by Open Access, then it would be much harder for [her] to defend [that] what Sci-Hub [is] doing is [the] right thing to do” (“Why Sci-Hub Is the True Solution for Open Access: Reply to Criticism”).

Works Cited

Allington, Daniel. “On Open Access, and Why It’s Not the Answer.” DanielAllington.net, 15 Oct. 2013. http://www.danielallington.net/2013/10/open-access-why-not-answer/. Accessed 26 Sep. 2016.

Association of American Publishers. “Statement on Libgen/Sci-Hub Complaint.” Association of American Publishers, 22 Oct. 2015. http://publishers.org/news/statement-libgensci-hub-complaint. Accessed 26 Sep. 2016.

Bodó, Balázs. “Pirates in the Library: An Inquiry into the Guerilla Open Access Movement.” Social Science Research Network, SSRN-id2816925. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2816925. Accessed 26 Sep. 2016.

Bohannon, John. “Who’s Downloading Pirated Papers? Everyone.” Science, 28 Apr. 2016. http://www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone. Accessed 26 Sep. 2016.

Coldewey, Devin. “Sci-Hub Is Providing Science Publishers With Their Napster Moment.” TechCrunch, 29 Apr. 2016. https://techcrunch.com/2016/04/29/sci-hub-is-providing-science-publishers-with-their-napster-moment/. Accessed 26 Sep. 2016.

Elbakyan, Alexandra. “Why Sci-Hub Is the True Solution for Open Access: Reply to Criticism.” Engineuring, 24 Feb. 2016. https://engineuring.wordpress.com/2016/02/24/why-sci-hub-is-the-true-solution-for-open-access-reply-to-criticism/. Accessed 26 Sep. 2016.

Glance, David. “Elsevier Acts Against Research Article Pirate Sites and Claims Irreparable Harm.” The Conversation, 15 Jun. 2015. https://theconversation.com/elsevier-acts-against-research-article-pirate-sites-and-claims-irreparable-harm-43293. Accessed 26 Sep. 2016.

Moody, Glyn. “Open Access: All Human Knowledge Is There—So Why Can’t Everybody Access It?” Ars Technica, 7 Jun. 2016. http://arstechnica.co.uk/science/2016/06/what-is-open-access-free-sharing-of-all-human-knowledge/. Accessed 26 Sep. 2016.

Priego, Ernesto. “Signal, Not Solution: Notes on Why Sci-Hub Is Not Opening Access.” The Winnower, 23 Feb. 2016. https://thewinnower.com/papers/3489-signal-not-solution-notes-on-why-sci-hub-is-not-opening-access. Accessed 26 Sep. 2016.

Sci-Hub. Sci-Hub. http://sci-hub.bz/. Accessed 26 Sep. 2016.

Suber, Peter. “Open Access Overview.” 5 Dec. 2015. http://legacy.earlham.edu/~peters/fos/overview.htm. Accessed 26 Sep. 2016.

 

Posted on 2016/09/29

A Room of Our Own: Constructing and Curating the Open Access Archive for Transformative Works

In 2012, readers, media outlets, and literary critics were alarmed, appalled, and perhaps a little intrigued to find that E. L. James’s erotic trilogy Fifty Shades of Grey had sold over 100 million copies worldwide and become a New York Times best-seller. Equally shocking to its mainstream audience was news of the novels’ scandalous origins: Fifty Shades of Grey began as fanfiction of Stephenie Meyer’s popular fantasy-romance YA series, Twilight (Bertrand). For their part, fanfiction authors had a different—albeit largely unified—response to the commercial success of Fifty Shades, which was to argue that the novels are neither radical nor well-written, and that an abundance of superior fan-generated erotica exists on the internet for free. Scores of articles like Aja Romano and Gavia Baker-Whitelaw’s “Where to Find the Good Fanfiction Porn” sprang up on blogs and digital news sources in an attempt to lead curious readers to popular repositories of high-quality fanfic. One such repository is Archive of Our Own (AO3), an open access database where users can read and post fanfiction. Unlike much academic research on the subject, this probe does not seek to answer whether fanfiction and other transformative works can or should be considered legal under the purview of fair dealing within copyright, but rather embraces the legal shades of grey that fanfiction inhabits in order to explore how repositories like AO3 provide a model for a productive, constructive open access archive.[1]

Fanfiction: What is it and where does it come from?

Fanfiction, often abbreviated as fanfic or fic, refers to narratives written by individual fans or fan communities based on the characters, settings, or plots of a canonical source text or transmedia franchise. Digital technologies scholar Bronwen Thomas posits that fanfiction “has long been the most popular way of concretizing and disseminating [fans’] passion for a particular fictional universe” (1). Far from simply rewriting the source text, authors of fanfic critique and transform the canon, often by queering beloved characters or by giving voice to the women who are marginalized as little more than love interests in popular media.

The origins of fanfic as we understand it now can be traced back to the science fiction fanzines of the late 1960s (Tosenberger 186). Yet one can argue—and indeed many have—that fanfiction has existed as long as stories have been told, and that numerous foundational authors of the English literary canon, such as Milton and Shakespeare, were in effect authors of fanfiction themselves.

The advent of the Internet radically refigured the production and dissemination of fanfiction. In the 1980s, Usenet provided an early public platform permitting fans to connect and share their creative labour without the need for geographical proximity. The digitization of a formerly underground genre made it unprecedentedly accessible to anyone with an Internet connection, but this accessibility came at the cost of a reliable archive. Whereas printed zines had been physically circulated within small communities, posts on Usenet were often lost within days of creation until DejaNews provided the ability to access newsgroup content in 1995. While fanfiction expanded outward into the world, it also entered an age of ephemerality.

It was not until the introduction of the World Wide Web that fanfiction began to develop a system of digital archiving that lent works some measure of permanence. Single-fandom archives, hosted and managed by volunteers, collected, shared, and preserved stories as long as they remained online. Shannon Fay Johnson notes that the “more easily accessed and faster-paced virtual communities […] allowed for not only increased consumption, but also creation” (Johnson), and the downside to this wealth of creation was the organizational problem it posed for volunteer archivists. Fast forward to the late 1990s and early 2000s: the influx of creation produced a demand for massive multi-genre archives out of which were born, among others, FanFiction.net, the largest fanfiction repository in the world, and Archive of Our Own.

Building the Archive

Run by the non-profit Organization for Transformative Works, Archive of Our Own boasts more than 22,720 fandoms, 972,600 users, and 2,549,000 works. Their goal, as listed in their mission statement, is to “maximize inclusiveness of content.” In order to do so, they grant open access to the works they host.

In “Open Access Overview,” Peter Suber quotes the PLoS definition of open access as “free availability and unrestricted use,” that is to say, content free of price and permission barriers. Both elements of open access are present in, and indeed foundational to, AO3. Anyone with an Internet connection can read and leave kudos (similar to Facebook’s ‘like’) on stories in the archive. Although users do need an account to post, comment on, bookmark, and review fic, becoming a member is a free and non-discriminatory process. In terms of permission, guests and users alike are invited to download and save fanfics to their various devices as EPUB, MOBI, PDF, or HTML files, making them accessible both on- and offline. Moreover, fandom encourages authors and artists to excerpt each other’s work, translate it from one medium to another, or otherwise engage in reworkings of each other’s creations.


Screencap of the AO3 homepage.

In their FAQ, the OTW describes the ethos of AO3 as follows:

In the Archive of Our Own, we hope to create a multi-fandom archive with great features and fan-friendly policies, which is customizable and scalable, and will last for a very long time. We’d like to be fandom’s deposit library, a place where people can back up existing work or projects and have stable links, not the only place where anyone ever posts their work. It’s not either/or; it’s more/more!

Like open access journals, the OTW recognizes digital archives as a legitimate method of building a cultural history that is—if not accessible to all—at least accessible to most. Fans nevertheless have a right to be concerned by this intangible method of preservation. In 2002 and again in 2012, FanFiction.net’s owner, Xing Li, banned and removed all works rated NC-17 from the archive, effectively erasing them from fandom history if they had not been cross-posted elsewhere. The immaterial form of the open access archive is both its greatest asset and its greatest liability: the increased accessibility of fanworks only exists insofar as the archive remains free and online. While AO3’s blanket permission to download stories in various formats assuages some of the larger concerns about impermanence, it does not promise an eternal resting place for fanworks despite the creators’ hopes. Even if some works are preserved on hard drives and smartphones around the world, the comments, kudos, bookmarks, and links between fans and fandom would be erased from history.

In terms of accessibility, Archive of Our Own is attempting to move towards a model of universal access. Although most stories are in English, AO3 welcomes works in a variety of languages, and entire fan communities dedicate their time to translating and re-posting fanfics. Many fics are also transformed into podfic, the fan equivalent to the audiobook. Although only a fraction of the works on AO3 have been translated or recorded, the growing practice seeks to render fanfiction increasingly accessible to international fans, as well as to people for whom reading is not a viable mode of cultural consumption.

Framing Constructive Practice

As Peter Suber states of open access with respect to scholarly journals:

The purpose of the campaign for OA is the constructive one of providing OA to a larger and larger body of literature, not the destructive one of putting non‐OA journals or publishers out of business. […] Open‐access and toll‐access literature can coexist. We know that because they coexist now.

It is here that fanfiction’s legal ambiguity as derivative or transformative work comes into play. While an argument can be made that fanfiction is legally unpublishable because it borrows so much from source texts that are still in copyright, the case of Fifty Shades proves that fanfiction can be hugely profitable with the right modifications. Yet most fanfiction authors do not seek financial gain from their labour. When Peter Suber notes that “the campaign for OA focuses on literature that authors give to the world without expectation of payment,” arguing that “they write for impact, not for money,” he could just as well be discussing fanfiction authors rather than academics.

Archive of Our Own therefore provides a clear example of the constructive nature of OA. Fanfiction and other derivative works do not replace or supplant commercial culture, nor do they attempt to. By its very nature, fanfiction requires original novels, films, television shows, and/or other cultural products to engage with. Rather than competing with publishers and producers, fanfiction operates in conversation with cultural objects in a way that is arguably comparable to academic scholarship. In order to enjoy a fic the way an author intended it to be enjoyed, the reader must have a measure of familiarity with the canonical work being addressed. Fanfiction ultimately demands the consumption of commercial culture, and its open access encourages synchronous creation and consumption.

The Gendered Archive

In The Future of Ideas, Lawrence Lessig notes that “[l]urking in the background of our collective thought is a hunch that free resources are somehow inferior” (27), and we see this rhetoric circulating in the discourse surrounding fanfiction. One aspect of fanfiction and fan communities that I have yet to explore in this probe is the question of gender. Unlike content found in open access academic journals, the content found on AO3 is largely generated, read, and validated by women and girls. It is therefore difficult to untangle whether fanfiction is disparaged because it is free, because it is not “original” work (is there even such a thing anymore?), or because of its demographics.

Furthermore, women have historically been the volunteer curators of fanworks. The open access archive therefore raises legitimate concerns about the long history of free labour undertaken by women—be it intellectual or affective. On the one hand, women and girls are building a shared community based on creative practice, and to lock such a community behind a paywall would be to silence those voices. On the other hand, if fanfiction acts as a promotional device directing consumers to a given franchise, open access archives to fanworks may be yet another instance of unpaid women’s work that goes unnoticed and undervalued while corporations and the (often male) guardians of the canon profit from free advertising. It might then be a productive enterprise to examine who benefits from the open access archive, and whether this kind of access ultimately devalues the artistic labour inherent in its creation.

[1] For a comprehensive insight into fanfiction’s fraught relationship with copyright law, see Kate Romanenkova’s article, “The Fandom Problem: A Precarious Intersection of Fanfiction and Copyright.” (Note that Romanenkova’s text engages with U.S. copyright.)

Works Cited

Archive of Our Own. Organization for Transformative Works, 15 Nov. 2009 (beta), archiveofourown.org/. Accessed 24 Sept. 2016.

Bertrand, Natasha. “‘Fifty Shades of Grey’ started out as ‘Twilight’ fan fiction before becoming an international phenomenon.” Business Insider, 17 Feb. 2015, www.businessinsider.com/fifty-shades-of-grey-started-out-as-twilight-fan-fiction-2015-2. Accessed 24 Sept. 2016.

Johnson, Shannon Fay. “Fan fiction metadata creation and utilization within fan fiction archives: Three primary models.” Transformative Works and Cultures, no. 17, 2014. http://dx.doi.org/10.3983/twc.2014.0578.

“Frequently Asked Questions.” Organization for Transformative Works. http://www.transformativeworks.org/faq/. Accessed 24 Sept. 2016.

Lessig, Lawrence. The Future of Ideas: The Fate of the Commons in a Connected World. Random House, 2001.

Li, Xing. FanFiction.net. 15 October 1998, www.fanfiction.net. Accessed 23 Sept. 2016.

Romanenkova, Kate. “The Fandom Problem: A Precarious Intersection of Fanfiction and Copyright.” Intellectual Property Law Bulletin, vol. 18, no. 2, 20 May 2014, pp. 183-312. Social Science Research Network, http://ssrn.com/abstract=2490788.

Romano, Aja and Gavia Baker-Whitelaw. “Where to find the good fanfiction porn.” The Daily Dot. 17 Aug. 2012, www.dailydot.com/parsec/where-to-find-good-fanfic-porn/. Accessed 24 Sept. 2016.

Suber, Peter. “Open Access Overview.” Earlham College. 21 June 2004, legacy.earlham.edu/~peters/fos/overview.htm. Accessed 21 Sept. 2016.

Thomas, Bronwen. “What Is Fanfiction and Why Are People Saying Such Nice Things about It?” StoryWorlds: A Journal of Narrative Studies, vol. 3, no. 1, 2011, pp. 1-24. Project MUSE, muse.jhu.edu/article/432689.

Tosenberger, Catherine. “Homosexuality at the Online Hogwarts: Harry Potter Slash Fanfiction.” Children’s Literature, vol. 36 no. 1, 2008, pp. 185-207. Project MUSE, doi:10.1353/chl.0.0017.

Posted on 2016/09/29

Copyright Simulation; Or, How Is This Text Even Possible

Let’s assume that there is copyright and that language functions. Next, let’s assume that I must write something from the term “copyright”. What is copyright? How can I answer that question? Is it possible to answer that question? Whether it is or not, should I still attempt to answer it? Or should I do something else from that question?

I’m going to assume that my language functions, and that the professor, and the students, who read this text – as I assume that there will be such humans reading these words in the near future… I’ve realized something quite different about the nature of this text. My first thought is that this text and copyright are two concepts graspable within a sentence; e.g. “This text is copyrighted.” I will proceed under the assumption that I possess – if that word functions – the function “copyright”. Why must I refer to that metaphor when stating what I do with that word? Can I speak of “copyright” as if it were immaterial? I assume that when I speak “copyright”, I can also define the word in a way acceptable to the readers; or, that I can violently impose meanings on the readers; or, that readers won’t understand much of the text.

I have to mention the concept “authority” and to affirm that in the sense that I assume that my audience – as I’m speaking to you in the present tense from the past; furthermore, “my audience” assumes that I possess you in some way – or, if I don’t have an audience, the human or humans reading this text, understands it. I either violently impose the meanings of the text in them, or, I have authority in that the audience accepts the meanings in this text. Perhaps I am speaking in this text, perhaps the text isn’t speaking me. Perhaps I won’t be here, or I is not what I think it is.

The copyright concept is present here.

“Subject to this Act, copyright shall subsist in Canada, for the term hereinafter mentioned, in every original literary, dramatic, musical and artistic work if any one of the following conditions is met” (Copyright Act 5(1)).

I do not like the Copyright Act. If I waive my copyright rights, I accept that there was a copyright to be waived. However, the armed police will not protect my copyright if I ignore this text. Furthermore, I cannot prove that there was copyright in this text unless someone infringes on it and I ask the government to enforce it, and they do.

“Don’t copy this without my permission.”

The government might not enforce the copyright if someone infringes on it in a specific way that it terms fair dealing. Maybe certain infringements of copyright do not infringe copyright. Maybe the government in practice tacitly asserts that my copyright was infringed and maintains that it could protect my copyright, but won’t in a specific case of its choosing.

I don’t know what the government is, nor do I know what the law is. But I feel fear at the latter word, as I fear the markings of law enforcement. I’m guessing that government controls the ontology of copyright: what copyright is and what are its features. It would then address it periodically, through legislation and case law.

What is it to guess at the ontology of copyright? Laura J. Murray and Samuel E. Trosow have attempted that guess with Canadian Copyright: A Citizen’s Guide. From past law and legislation, we cannot predict future court decisions. (What is it to know something? What is it to be accurate?)

Maybe there are, effected by self-identified citizens, one or numerous copyright simulations. The effect of these is to generate results similar to what copyright would be if each instance of pseudo-identified copyright infringement were taken to court. The goal of copyright law seems to be the dissemination of simulated instances of government enforcement of copyright law.

Is there copyright here? This text will show up on numerous screens. The moment one of you devious humans enters amplab.ca into your browser and scrolls down, you will actively be copying this text. If you charge a human $100 to spend some time in your room, and he peeks over your shoulder as you scroll down that page, thou hast infringed on mine copyright as per the words of the Copyright Act. However, I don’t know about said event; nor, perhaps, do you. Was copyright infringed?

More obviously, were one of you scoundrels to copy and paste this text and sell it to a publication for a stupendous sum, if I wielded large amounts of money myself I could first threaten to beg the court to bring copyright into existence in the text in my favor and thus erase your previous action by erasing a significant sum from your bank account. Were this successful in that you were willing to avoid paying large legal fees for an instant out-of-court settlement, we would have successfully simulated copyright. Were the case to go to court, the judicial system would decide the matter.

It is unnecessary to define copyright in terms such as intellectual property. It is unnecessary to examine whether a right “to produce or reproduce the work” is ownership of the work. Furthermore, the true ontology of an author as it relates to copyright is the prerogative of the court. In copyright simulations, the author as it relates to copyright is simulated as well. In this last case, the ontology of the simulated author is circumscribed by the mutual interests of the parties, whether these be arrived at through threats or courtesy. Courtesy itself responds to the threat of the breakdown of someone’s safety within the web of social relations in which they assume themselves to live. One could also add that agreements are contracts. Do these always function in the same way as copyright does? (A never-ending cycle of contracts.)

These copyright simulations are backed up by the possibility of legal action. They can be substituted for the real thing and can feel like the real thing, but if a case is taken to court, the court undoubtedly manifests the real copyright.

When both you and I disagree when attempting a copyright simulation, does a copyright simulation subsist within the work in question during the time before the court speaks the truth? Do two copyright simulations for a single work battle until they are dissolved and replaced by the court’s true copyright? How does this affect the truth of the work?

Is there a truth of the work within the simulation of a copyright simulation battle? While the copyright simulation is effective? Is the realm of these questions separate from the realm of questions such as the nature of the author? The nature of art?

Rights, ownership, tangibility, expression, form, ideas: these terms circulate in discourses outside of copyright; within law, they have a concrete reality.

Bibliography
Copyright Act. Revised Statutes of Canada, 1985, c. C-42. Department of Justice. Web. 24 Sep. 2016.
Murray, Laura J. & Samuel E. Trosow. Canadian Copyright: A Citizen’s Guide. Second edition. Toronto: Between the Lines, 2013.

Posted on 2016/09/27

“We Can Transcribe It For You Wholesale”: “Open-Source” Paleography, “Open-Access” Academy

In 1999, the University of California, Berkeley began an ambitious distributed-computing project associated with its longstanding partnership with the SETI (“Search for Extraterrestrial Intelligence”) Institute of California. The joint SETI program had for years been a Sisyphean search for signs of intelligence in the radio waves bombarding the Earth from space. The program originally involved gathering colossal amounts of raw data from radio telescopes and sifting through it with specialized and costly on-site supercomputers. The project that began in 1999, called SETI@home, is an extension of SETI from expert researchers and technicians to the general public, outsourcing the resource-intensive data analysis to thousands of PC chips throughout the world. The SETI@home program sends packets of collected data to a program running on each registered user’s computer to be processed whenever the machine is idle and then sent back to UC Berkeley for inclusion in a master database. This was among the earliest examples of Internet “crowdsourcing,” and the thinking has since informed countless other endeavours from raising seed capital to cleaning up image repositories: if it is impracticable to obtain a large amount of some resource from one contributor, aggregate small amounts of that resource from many contributors. SETI@home offers various incentives, referring to the parcels of distributed data as “workunits” (SETI@home) and assigning “credits” to users’ accounts for “workunits” completed. The implicit message is that home users are in a real sense completing and being compensated and recognized for legitimate scientific work.
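To make the “workunit” mechanics concrete, here is a minimal Python sketch of the fetch-process-report loop described above. The function names, payload format, and credit message are illustrative assumptions of mine, not the actual SETI@home/BOINC client protocol.

```python
# A minimal sketch of a SETI@home-style "workunit" loop.
# Everything here (function names, payload format, the credit message)
# is a hypothetical illustration, not the real SETI@home/BOINC protocol.

import time


def fetch_workunit():
    """Stand-in for downloading a parcel of raw telescope data from the project server."""
    return {"id": 42, "samples": [0.1, 0.5, 0.3, 0.9]}


def machine_is_idle():
    """Stand-in for the client's idle/screensaver check."""
    return True


def analyze(workunit):
    """Toy 'analysis': flag any sample above an arbitrary threshold."""
    return [s for s in workunit["samples"] if s > 0.8]


def report_result(workunit_id, candidates):
    """Stand-in for sending results back for inclusion in the master database."""
    print(f"workunit {workunit_id}: {len(candidates)} candidate signal(s); +1 credit")


if __name__ == "__main__":
    # The home machine contributes cycles only while idle, one workunit at a time.
    while machine_is_idle():
        wu = fetch_workunit()
        report_result(wu["id"], analyze(wu))
        time.sleep(1)
        break  # single pass, for illustration
```

The point worth noticing in even this toy version is that the participating machine supplies only processing power; interpretation of the reported “candidates” happens entirely back at the project’s master database, a division of labour that becomes important in the discussion of Deciphering Secrets below.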

In the same crowdsourcing spirit, a medieval history professor at the University of Colorado-Colorado Springs named Roger Louis Martínez has developed a program called Deciphering Secrets. Part online course, part group research work, part video game, Deciphering Secrets is a multimodal digital-humanities project with an ambitious public-outreach mandate. As Martínez describes the project in a post on Reddit, “We are democratizing discovery! We are crowdsourcing 1,500 pages of medieval manuscripts from the Cathedral of Burgos (Spain). Even better, there is a Massive Open Online Course that accompanies the process” (“We are democratizing discovery!”). Martínez writes that “[o]ne of the huge challenges today is that there is a disconnect between ‘research scholars’ and ‘the public’. Yes, we interact while you are in college, but after, not so much. We shouldn’t let our relationship go sour after you finish your degrees—instead we need to keep learning and discovering together. And we need to invite more people into the fold through free higher education efforts like Massive Open Online Courses.”

Martínez’s MOOC is more than just a course in medieval Spanish history, as he promises to make students into amateur paleographers ready to join the process of “crowdsourcing a previously unseen collection of manuscripts (1000 to 1500 c.e./a.d.) from the Cathedral of Burgos . . . teaching everyday citizens like you to read and transcribe manuscripts” (“We are democratizing discovery!”). To keep the pool of potential labour as large as possible, Martínez has “figured out a way for individuals who do not understand Spanish to work on the manuscript research as well.” One need only learn Martínez’s simple method of identifying and transcribing characters.

Martínez’s website and other promotional material for the course frequently mention the words “open” and “crowdsourcing,” and it might seem as if the project is indeed a kind of open-source and open-access undertaking. Normally, paleography and close textual analysis, like radio astronomy, are done in universities by trained experts, and though the finished product might ultimately be disseminated in publications that manage to reach a broad audience, it is more likely that it will be produced and consumed entirely by members of a small, highly-specialized expert community. Martínez’s project appears to open things up, involving members of the general public not just in the consumption of this work but also its production.

However, by placing Martínez’s course in applied amateur paleography side by side with SETI@home, I am suggesting instead that Deciphering Secrets is not an open-source invitation opening the abstruse, formerly inaccessible enterprise of paleography to exploration by a distributed community of interested amateurs. While in a sense it may be open-access, in that none of its activities or courses involves any real barrier for the user (no tuition, no application process, no pre-requisites, etc.), in an important way true access to the program remains decidedly closed. For example, as Peter Suber points out in his “Open Access Overview,” there are several barriers excluding potential users from otherwise purportedly open-access texts and resources, among them the language barrier that naturally confines the potential audience for any particular text or publication to the people who speak the language in which it is published. This is a vital consideration when evaluating Martínez’s project in terms of the purported “openness” of its avowed public-outreach ambitions, and it is the basis for related questions that might help us interrogate some of the key concepts in play: the roles of producers and consumers, the relationship between academic and more broadly public conceptions of information value, and the politics of inclusion and exclusion in open source and open access.

To begin with, Deciphering Secrets seems to confuse the relationship between producer and consumer. Participants in Martínez’s course follow his instructions carefully and work their way through the transcription of texts, and to the extent that they are successful at producing something accurate and academically useful (it is hard to say, but one can see why there might be doubts), their output will have scholarly value. Theirs, however, is rather like the work that SETI@home participants’ computers do on their behalf in the background while the screensaver is displayed onscreen and participants are in the other room making coffee. The “paleographers” in Martínez’s program provide the power necessary to do the work, but they remain nonetheless removed from much of the production/consumption economy to which they contribute that power. They produce something that they are incapable of consuming: the language barrier prevents them from reading, much less comprehending and analyzing, the materials they are transcribing, and they are not really participating in an open-source undertaking at all because the “source code,” the language within which the textual product has meaning and thus value, remains entirely opaque to these users even as they are generating numberless instances of it. They are unable, for example, to observe variations in the content of the medieval texts and make meaning out of these differences, as beginner HTML coders are able to do with access to the pool of open-source web code (Lessig 57-8); Martínez’s participants are less like the avid web tinkerers of the early internet described by Lessig and more like the machines on which their HTML code was executed, adding processing power but no higher-order engagement with the content.

The “source” is “open” in the sense that it is not a proprietary product locked up behind a paywall or otherwise obscured, and in principle any of the participants in Martínez’s course could obtain the necessary education and gain meaningful access to this mountain of available text and the paleographic processes involved in working with it, but this is something like saying that the code that runs the Windows operating system is open because in principle one can always get a job at Microsoft and thereby gain access to it. This may be true, but it sidesteps the point that Martínez’s project is not itself in any clear way the sort of democratizing force that would be capable of creating rich, productive connections between research scholars and the general public as advertised, “expos[ing] more people to medieval manuscripts” and “improv[ing] the standing of the humanities (history, arts, literature, philosophy, etc) in society” (“We are democratizing discovery!“). Martínez’s distributed pool of amateur paleographers, uncomprehendingly transcribing manuscripts and exploring 3D recreations of fifteenth-century Plasencia, does not seem to constitute a real paleographic/historical/textual “commons”—rather, it seems something like a digital sweatshop in which participants process “workunits” in exchange for the illusion of access to the prestige-economy of academic paleography.

Related to all of this is the way that Martínez’s project prompts us to think about what value means in connection with the products of specialists in the general marketplace and the “open-access” space of something like a MOOC-based research project. Again, Martínez tells us that Deciphering Secrets democratizes the textual analysis that used to belong exclusively to and ultimately benefit only elite members of a tiny, cloistered research community; even the name of the project emphasizes this, framing the objects of the paleographer and historian’s work as “secrets” to be uncovered. Everything here is to do with movements of value: from academe to the general public (gaining access to an intellectual endeavour), and from the general public back to academe (making purportedly meaningful contributions to the scholarly discourse of medieval Spanish textual history). The relatively low-stakes activities in the course syllabus, though, seem to call the former value movement into question as thoroughly as the no-Spanish-required policy does the latter. This calls to mind Daniel Allington’s reference to Paul Fyfe’s suggestion about crowdsourced editing potentially “displacing correction onto the reader or to autocorrecting functions of networks” (qtd. in Allington); indeed Martínez’s project underscores how profoundly even extremely widely distributed networks of very interested participants might fail to “autocorrect” specialized work, which inheres in an intellectual economy of some inaccessibility even as instructors like Martínez strive to open it up and render it accessible.

I wonder, then, whether Suber’s account of open access may actually understate the role of language barriers, literally and figuratively, and thus partly obscure an important point about “open” information cultures. “Sources” are languages, and “access” depends on one’s ability to read and participate in the value exchanges associated with those languages; thus in connection with both open-source software and open-access publication, the “openness” of a document depends a great deal on the “language” it is in, not just literally (it is in English, it is in hypertext markup, it is in a legible hand) but figuratively (it is part of a conversation to which I am party, it is part of a language economy to whose goods and value I have access). We might reflect on such “open questions” as it were, then, as we consider the roles of production and consumption in information economies, and on the types and degrees of value involved.


Works Cited

Allington, Daniel. “On Open Access, and Why It’s Not the Answer.” DanielAllington.net, 15 Oct. 2013. http://www.danielallington.net/2013/10/open-access-why-not-answer/. Accessed 21 Sep. 2016.

Lessig, Lawrence. The Future of Ideas: the Fate of the Commons in a Connected World. New York: Random House, 2001.

SETI@home. http://setiathome.ssl.berkeley.edu/. Accessed 21 Sep. 2016.

Suber, Peter. “Open Access Overview.” 5 Dec. 2015. http://legacy.earlham.edu/~peters/fos/overview.htm. Accessed 21 Sep. 2016.

“We are democratizing discovery! We are crowdsourcing 1,500 pages of medieval manuscripts from the Cathedral of Burgos (Spain). Even better, there is a Massive Open Online Course that accompanies the process.” Reddit, 14 Apr. 2016. https://www.reddit.com/r/history/comments/4etaz8/we_are_democratizing_discovery_we_are/. Accessed 21 Sep. 2016.

Posted on 2016/09/27

I, Algorithm: Canadian Copyright Law & Bots

The term “legal realism”, which appears in Carys J. Craig’s essay “The Canadian Public Domain: What, Where, and to What End?”, seems a bit too much like a euphemism for “hopelessly subjective”. As in other realms of legislation, the continuing procession of legal cases setting new precedents and overturning older ones seems to be the only constant factor affecting the definitions of terms like “public domain” in Canadian copyright law.

Craig points out that the label ‘public domain’ has progressed, in terms of its definition, from a domain, to a discourse, to a series of “uses”, and finally to “a continuum of legal states on a spectrum” (there are now argued to be several ‘species’ of public domain, a multiverse of sorts, if one uses the outmoded spatial metaphor). This perspective is based on the relatively recent view of information, and its various modes of digital exchange, as a community, a network rather than a medium or a technical means-to-an-end(s) (Craig 232).

The advent of the internet and the increasing move toward an economy of information has blurred the line between a piece of information that can be copyrighted, and an idea or concept, which most view as un-copyrightable. A process of doing or creating something, or the use of an existing idea or concept within a new and particular context, still constitutes something to which copyright can be applied.

For any creation to be copyrighted in the digital age, a healthy public domain must still exist from which creators can draw, allowing such creations to cohere through the ‘conduits’, or assemblers, that are their creators. Since the dissolution of the myth of romantic authorship, the notion that copyright requires the public domain in order to exist seems self-explanatory. No matter how strong your rope is and how hard you pull, your bucket won’t draw any water if the well is dry.

The claim to a work’s use and value falls upon whoever claims it first, much like the discovery of a mathematical law that existed before the discovery was made falls to whoever discovered (and proved) that principle first. Such discoveries, creative or otherwise, are extremely important to culture, as is the environment which produces these insights. What if the learning of mathematics were as proprietary as some other information is today? Would mathematical theories still arise as frequently, and from as many cultural and economic backgrounds?

Craig points out that copyright law is scarcely studied in Canada, despite its importance, even by those whose careers depend upon its use and potential modification. While investigating the history of such laws can turn up more of a hazy history of ‘copyright lore’ than concrete ‘copyright law’, the task remains an important one, especially in an era when the authors of ‘unique’ works are mostly non-human.

The question that came to my mind throughout my initial research, and that continues to occupy the forefront of my other inquiries, is how copyright law can adapt to the increasing presence of advanced algorithms or ‘bots’ online, which can trawl sources both in the public domain and under various restrictions, and reassemble this material into supposedly ‘unique’ texts. I wrote a short piece about such bots on another blog, which I am working to transform into a PhD proposal.

These bots use paraphrasing and syntactical rearrangement to accomplish these tasks for a price, as in the case of entrepreneur Philip M. Parker’s bots. Should copyright law be changed in response to these entities? Are they exploiting loopholes that should be closed, or perhaps widened to benefit the public domain(s)? What kinds of changes should be made in light of swarms of non-human, but human-controlled, entities with the ability to rapidly assemble copyrightable works? Does this potentially deprive human beings of participation in the creative process, or does it provide an unparalleled opportunity to assemble information in ways human beings simply can’t do so quickly or efficiently? Perhaps these questions can’t, and won’t, be answered until enough legal cases arise to set a precedent.

Finally, the perspective, elucidated in Craig’s essay, that something falling into the public domain suggests a loss of relevance or quality is also somewhat disturbing. The greatest works of literature, having fallen into the public domain, have allowed people both to scrutinize them with fresh eyes and to study them free of charge, simply to enrich their knowledge of literature. The so-called “cultural stewardship” model described by Craig also seems suspicious, more like a ploy to extend the model of planned obsolescence to knowledge, where that model has thus far only applied to material goods…

Works Cited

Craig, Carys J. “The Canadian Public Domain: What, Where, and to What End?” Canadian Journal of Law & Technology, vol. 7, 2010, p. 221. SSRN, http://ssrn.com/abstract=1567711.

Posted on 2016/09/25

France’s ReLIRE Project: How to Reconcile Mass Digitization & the ‘Droit d’Auteur’

The ruling in a U.S. court case filed in 2011, Authors Guild, Inc. v. HathiTrust, set a precedent in copyright law by protecting the digital archives produced by libraries for preservation purposes under fair use. While the European Copyright Directive allows for the mass digitization of orphan works for non-commercial purposes, in Canadian copyright law the dissemination of such works is subject to the approval of royalties by the Canadian Copyright Board after a “reasonable” search for an unlocatable copyright holder. Commercial mass digitization projects, such as Google Books, have entered into agreements with both European and American publishing guilds. Between the library and the media company, a French digitization project for out-of-print books under copyright circumvents both fair dealing and publishing agreements in an approach that may resolve the question of how to balance “copy-right” with “democratic accessibility.”

In his seminal paper on the eighteenth-century book trade in France, “What is the History of Books?” (1990), Robert Darnton describes the life cycle of a book as a “communications circuit”: a stadial model that maps the material history of a book from its author to its readers and emphasizes the actors involved in its production. Darnton’s model (and the dozens which followed) laid the groundwork for the discipline of book history. While the discipline itself has recently come under fire from its own proponents for the way in which its emphasis on the material subject imagines a projected network of actors, this stadial model has influenced how we study the history of books.

Darnton’s communications circuit

Two decades of book history studies can be organized under a series of categories which correspond to the steps in Darnton’s communications circuit: studies of authorship; studies of publishing, its economy, and its materials; and studies of reading and readerships. Copyright, however, does not figure in this model, or in that of Adams & Barker (1993), whose alternative but equally influential model for book production emphasizes the “whole socio-economic conjuncture” — the intellectual, political, legal, and commercial influences on the book trade — over individual actors. Copyright is absent from these schemas due to the way in which it resists categorization. For those who have studied and written about copyright and the book trade, the political and economic pressures and legislative changes which structure copyright readily affect authors, publishers, booksellers, and readers. Copyright determines the price and availability of books, while it also creates a market for pirated and foreign reprints. Copyright both shapes a freely available public domain and protects a national literary canon.

If we were to place “copyright” or “the public domain” within Darnton’s communications circuit, it would belong along the dotted line that connects the “readers” of a book with its “author.” In this space of reception, the linear trajectory of a book’s transmission breaks down through the free circulation of books among readers and authors, who are also readers. More importantly, this model does not consider the temporal dimension of a book’s reception after publication. Without taking into account the longer history of books, book history studies are quick to disregard the titles and volumes which defy this model of transmission — the used books, rare books, and out-of-print books which have populated the shelves of second-hand and rare book sellers. Copies, reprints, and facsimiles have not been considered to the same extent as the publication of new titles. Even in large-scale studies of literary production, reprints of older titles are routinely removed from the lists generated by the British Library’s English Short Title Catalogue (ESTC) or the American HathiTrust. Aside from a handful of studies concerned with mediation as opposed to the materiality of books — like Leah Price’s book, How to Do Things with Books in Victorian Britain (2012), which discusses the libraries of middle- and working-class readers who purchase and acquire second-hand books — this segment of the cultural record has been relegated to antiquarians and the copyright library.

In many ways, the discourse surrounding the relationship between copyright legislation and the public domain resembles book history’s miasmic treatment of a book’s reception. In her article on copyright and the public domain in Canada, Carys Craig describes the relationship between the two institutions as dynamic rather than complementary and as intrinsically related to creative uses. For Craig and other legal scholars, the link between copyright and cultural production remains elusive:

“The copyright system should be regarded as one element of a larger cultural and social policy aimed at encouraging the process of cultural exchange that new technologies facilitate. The economic and other incentives that copyright offers to creators of original expression are meant to encourage a participatory and interactive society, and to further the social goods that flow through public dialogue. [. . .] The public domain that is irreducibly central to the copyright system (Drassinower 2008: 202) protects the cultural space in which this happens.” (78)

In this context, copyright and fair dealing are both driven by a user-based economy in which access to materials both under copyright and in the public domain becomes an important issue. In their user’s guide to Canadian copyright legislation, Laura Murray and Samuel Trosow discuss the question of access in relation to the current copyright regime through alternative funding models. How can works be made available “without undue constraints to those members of the public who want to engage with it” (233) while also respecting the labor and rights of owners? Answering this question has become an imperative for any copyright economy in the wake of digitization and the possibilities it offers for the dissemination of rare and restricted materials to a larger audience. Fair dealing, Open Access, and Creative Commons licences are only preliminary answers to this question: they offer only a limited solution for unlocatable copyright holders.

Article 5(2)(c) of the European Copyright Directive provides an exception to copyright infringement for non-commercial archives and libraries, educational institutions, or museums; however, this exemption is limited to specific acts of reproduction and only applies to orphan works, not out-of-print works (Borghi & Karapapa 12, 88). While orphan works are works for which the rights holders are unlocatable due to missing information, out-of-print works are published works that are no longer commercially available. Out-of-print works are often still protected by copyright, as rights holders or publishers have withdrawn these books from circulation for commercial or authorial reasons (cf. the French “moral right of withdrawal”). To compensate for the corpus of works that are no longer available either commercially or through fair dealing, France passed an act (loi n˚ 2012-287) which modifies copyright law to incorporate a mechanism regulating the use of unavailable works through mandatory collective management. The regulation of twentieth-century out-of-print works under copyright relies on two main elements: the Registre des Livres Indisponibles en Réédition Électronique (ReLIRE; Register of Unavailable Books for Electronic Republication) and a collective management organization (the Sofia). For the purposes of this discussion, I am most concerned with ReLIRE.

 

The register itself serves to inform authors, publishers, and rights holders that their works may enter collective management. Through this mechanism they may see their works republished and made available without undue financial burden on the rights holders themselves. Since 2013, each year on March 21, the Bibliothèque nationale de France (BnF, France’s copyright library) has published a register of 60,000 commercially unavailable twentieth-century books under copyright that are to enter collective management for electronic republication. The list is available online, and authors, copyright holders, and publishers that hold copyright have six months to opt out of the list. After September 21, the works on the list enter collective management by the Sofia, and publishers that hold the print rights are offered a 10-year exclusive right to distribute the work electronically if they reply within two months. After this window has closed, publishers that hold copyright but are unable to prove their claim, original publishers that fail to respond to the initial inquiry, or publishers that never held copyright originally may hold a 5-year non-exclusive right to distribute the work electronically. In all cases, publishers are required to publish a digital edition of the work within three years. Authors may petition to have their works removed from the list at any time under France’s “moral right of withdrawal.” Publishers who petition to have works removed from the list (whether before or after the September deadline) are then legally obliged to publish the work within two years. Once the digital edition has been republished, it is made available to readers through the publishers’ digital library and Gallica, the BnF’s digital library.
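To keep these overlapping deadlines straight, the following is a minimal illustrative sketch, in Python, of the timeline just described; the function name and the date values are my own shorthand rather than anything prescribed by the law or by ReLIRE itself, and the two-month reply window is approximated as a fixed calendar date.

from datetime import date

def relire_timeline(list_year):
    """Key dates and licence terms for a work appearing on the ReLIRE list
    published in March of list_year, as summarized in the paragraph above."""
    list_published = date(list_year, 3, 21)             # BnF publishes the annual list
    opt_out_deadline = date(list_year, 9, 21)           # six-month opt-out window closes
    print_publisher_reply_by = date(list_year, 11, 21)  # approx. two months to claim print rights
    return {
        "list_published": list_published,
        "opt_out_deadline": opt_out_deadline,
        "print_publisher_reply_by": print_publisher_reply_by,
        "print_publisher_licence": "10-year exclusive, if the reply arrives in time",
        "other_publisher_licence": "5-year non-exclusive",
        "digital_edition_due_within_years": 3,
    }

if __name__ == "__main__":
    for step, value in relire_timeline(2013).items():
        print(step, "->", value)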

To understand how ReLIRE operates differently from fair dealing under Canadian copyright or the European directive on orphan works, two aspects of French copyright legislation should be considered:

  • The loi n˚2012-287 which created ReLIRE relies on a blanket authorization to reproduce out-of-print works under copyright, which places the burden of proof on copyright holders. However, authors, rights holders, and publishers who make themselves known receive remuneration from the Sofia on behalf of the publishers of the new electronic editions.
  • “Le droit de prêt,” or the lending right, which describes the remuneration paid by libraries and other educational institutions when copyrighted material is lent or distributed to patrons.

Inasmuch as the rights of original copyright holders are upheld through ReLIRE, the way in which publishers must prove they still hold the right to publish the unavailable work in print under the natural law system of copyright, or “droit d’auteur,” recalls William St Clair’s description of sixteenth- and seventeenth-century copyright in England, when copyright was explicitly understood as the “right to copy”:

“The owners of manuscripts of the works of Chaucer, Langland, Malory, Gower, and of the other English authors who wrote before the arrival of printing, are unlikely to have realised that, by permitting them to be copied by print, they were allowing the creation of an intellectual property which others would then privately own in perpetuity. [. . .] For a time it seems to have been part of the cooperative arrangements within the print industry that the exclusive right to print and sell copies of a particular text lapsed when all the copies of a particular edition of that text had been sold out. [. . .] Gradually the implied private property in an out-of-print title seems to have become normally regarded as an absolute one, which continued dormant in the hands of the original printer and his or her heirs and assigns even if they were unwilling, or unable, to supply the market by reprinting. If, as happened in the eighteenth century with Milton’s prose works, a text had been so long out of print that no publisher could easily establish an ownership claim, a new property right could be established in a newly printed text.” (49-52)

Bibliography

Adams, Thomas & Nicolas Barker. “A New Model for the Study of the Book.” A Potencie of Life: Books in Society; The Clark Lectures, 1986-1987. London: British Library, 2001.

Borghi, Maurizio & Stavroula Karapapa. Copyright and Mass Digitization. Oxford: Oxford University Press, 2013.

Craig, Carys. “The Canadian Public Domain: What, Where, and to What End?” Dynamic Fair Dealing: Creating Canadian Culture Online. Toronto: University of Toronto Press, 2014. pp. 65-81.

Darnton, Robert. “What is the History of Books?” The Kiss of Lamourette: Reflections in Cultural History. New York: Norton, 1990. pp. 107-36.

McGill, Meredith. “What’s the Matter with the History of the Book?” The Question of Relevance. McGill University: April 7, 2016. Lecture.

Murray, Laura & Samuel Trosow. Canadian Copyright: A Citizen’s Guide. Second edition. Toronto: Between the Lines, 2013.

Price, Leah. How to Do Things with Books in Victorian Britain. Princeton: Princeton University Press, 2012.

St Clair, William. The Reading Nation in the Romantic Period. Cambridge: Cambridge University Press, 2004.

Posted on 2015/12/23 by

Interview with Dr. Maria Gurevich, SHiFT Lab, Ryerson University

The SHiFT Lab (Sexuality Hub: Integrating Feminist Theory) is affiliated with the Department of Psychology at Ryerson University and focuses on integrating feminist poststructuralist and discursive practices into the study of sexual practices, technologies, and messages. Their current research spans a wide spectrum of topics: examples include research into the sexuopharmaceutical industry, discourses of gender transgression, and analysis of mainstream pornography in a postfeminist context.

While I haven’t visited their space in person, I found out about the lab through a quick search on Google. Though their website is still under construction, it showed up within the first results for “sexuality studies lab canada” for me, with their mandate and team of researchers highlighted. The website also outlines many of their current research projects and publications.

I had a chance to ask the lab director, Dr. Maria Gurevich, a few questions over email regarding the practices of the lab. While the SHiFT lab is not a media lab in the traditional sense, Dr. Gurevich defined it as “a critical sexuality scholarship lab. We study many influences on sexuality, including the role of media as a purveyor of messages.” The lab is affiliated with the Psychology Department, and the influence of media is apparent in much of their work — examples of media studied include sex blogs, contemporary queer magazines, pornography and erotica, and sexuopharmaceutical marketing.

In terms of physical space, Dr. Gurevich explained, “Our lab is not defined by a physical space but is rather a community of researchers. We have a couple of rooms where grad students and RAs share study space — that’s what most psych labs look like, unless they are conducting experiments.” (The SHiFT lab does not perform experimental work, but rather qualitative research.) The heterogeneous backgrounds of their research assistants and graduate/undergraduate students are apparent — as the site notes, the team comes from “a variety of backgrounds in addition to psychology, including journalism, art and music theory and practice, film studies, philosophy, sociology, history, and sexual diversity studies.” This contributes to the “multi-disciplinary approach” fostered by the lab, and I think indicates that the relationship between media and sexuality is an omnipresent research interest within a wide spectrum of the humanities.

The questions below were tailored to highlight the driving philosophies behind the lab, as well as the process of integrating technologies, discursive analysis, and various forms of media into research.

 

You mention a feminist discursive approach as one of your central analytic tools in research. Could you briefly touch on how/why you decided to shape your process this way?

We rely on feminist discursive approaches to analyze our data because this epistemic lens acknowledges that knowledge is perpetually negotiated in social interactions and institutional contexts. This approach also questions binarized gendered and sexed categorizations that structure personal and cultural narratives, and calls attention to dominant discourses that construct and constrict available subjectivities. In other words, this epistemic lens questions what is considered legitimate and ‘normal’ based on privilege, power, and access, which may be afforded or barred to specific individuals or groups based on their gender and sexuality markers.

Discourse analysis (DA) is part of a long tradition of discursive psychology, wherein language is viewed as central to identity production and practices. DA treats talk as a type of situated action, acknowledging that language is not a transparent or value-free vehicle for conveying meaning; rather, meaning is created and transmitted through language itself.

 

It’s interesting the way language is evolving around sexual technologies in particular, with sexnologies, teledildonics, etc. becoming ubiquitous terms. As a researcher in this field, do you participate in “coining” new jargon to talk about these technologies or their effects?

Yes, the language is rapidly developing and shifting. Given the lab’s emphasis on identities as historically contingent forms organized through talk, and our view that gender is a ‘practical accomplishment’ (West & Zimmerman, 2009) that is navigated and negotiated under specific cultural conditions, we are very cognisant of how we contribute to the formation of new terminology. One of my chief intellectual and aesthetic pleasures (and they are inseparable for me) is crafting new linguistic structures to describe emerging sexual phenomena. I would not refer to these as jargon, however, as this has a pejorative and/or elitist tinge in some circles. Rather, I think of this as an inevitable part of developing a mobile lexicon for describing shifting body regulation and modification practices, sexual scripts and intimacy norms.

 

On that note, what’s your process in terms of the sexual technologies you choose to study? With newer models, modifications, etc. developing so quickly and constantly, how does a longer research project sustain an understanding of technologies which are somewhat in perpetual “update”?

Because we consider sexual technologies to be broader than physical or virtual platforms, including modalities like pornography, sexual expert advice, and sexuopharmaceuticals, we are not so interested in capturing the very latest X.0 version of a specific technology. Rather, we focus on how ubiquitous some technologies are becoming and how they function to shape gendered subjectivities as situated practices, or ways of ‘doing’ gender. This permits us to focus on meaning making and practices shaped by emerging technologies, rather than the technologies qua technologies.

 

Your project “Intimate Interfaces for People with Disabilities” is developing a working technological prototype to support sexual experimentation for persons with disabilities. Can you talk a bit about the process of moving from research into the realm of creation?  

I can’t speak to this one, as I am a co-investigator on this project and my role is not in technological prototype development. The PI is an engineer, so she is responsible for the actual model building. My focus is on the psychological aspects, such as user experiences.

 

Currently, it looks like you’re researching STAXYN, which is noted to be a “growing but empirically virtually ignored sexual practice” in your abstract. What contributed to your decision to delve into a relatively unresearched phenomenon — and, as I’m guessing there are many such practices out there, what interested you most in this one? 

This is part of a larger project on recreational use of sexuopharmaceuticals, with Staxyn and Stendra being the most recently approved drugs. STAXYN is particularly interesting because its marketing explicitly emphasizes its safety and suitability for younger men without erectile dysfunction (ED), for whom stress is cited as contributing to occasional erectile difficulty (Canada Newswire, 2011). The benefits of STAXYN promoted by both physicians and Bayer Healthcare/GlaxoSmithKline (its manufacturer) for younger men are low cost, efficacy unaffected by alcohol consumption, and sleek packaging. As sexuopharmaceutical marketing expands the definition of ED and its intended users, these drugs are being touted both as performance enhancers and as a preventative measure against sexual failures among younger and younger men. The therapeutic claims of these drugs extend beyond rectifying failing erections, to assertions about enhancing sexual desire and pleasure, repairing relationships, enhancing self-esteem, and bolstering masculinities. These promises are being taken up by an increasingly broader spectrum of users. Recreational use is now steadily growing among those without ED, such as young men between the ages of 17 and 30.

Thanks again to Dr. Gurevich for participating in this interview. To learn more about the upcoming initiatives of the SHiFT lab, you can visit their current projects page here.

Posted on 2015/12/18 by

An Interview with Nick Montfort

Nick Montfort heads up the Trope Tank, a media lab at MIT, where he is also an associate professor specializing in digital media. He has authored several books, including Twisty Little Passages, a study of interactive fiction, and the upcoming Exploratory Programming for the Arts and Humanities. I had the opportunity to correspond with him about his work.

Thanks so much for agreeing to this interview. In your technical report about the Trope Tank, “Creative Material Computing in a Laboratory Context,” you wrote that “in reorganizing the space, [you] considered its primary purpose as a laboratory (rather than as a library or studio).” Your desire to distinguish the Trope Tank from libraries and studios strikes me as an interesting place to start thinking about what a media lab is—by first thinking about what it isn’t. Could you describe how the layout of the Trope Tank sets it apart from those other kinds of spaces?

Libraries are set up to allow people to read and consult collections, typically books but other sorts of media as well. Studios are for artmaking; classically they should have good natural light. Archives are for preserving unique documents, and direct sunlight is undesirable.

By explaining that we’re not an archive, I mean to stress that the materials we have are for use, not to be preserved for decades. The Trope Tank isn’t a library in that the main interactions are not similar to consulting books. And we aren’t mainly trying to produce artworks, either. There are aspects of these, but the main metaphor for us is that of a laboratory where people learn and experiment. So we have systems set up for people to use, not stored in an inaccessible way that will best preserve them. We aren’t worried about managing collections and circulation in the way a library is. It’s okay if the outcome of work in the Trope Tank is a paper rather than a new artwork.

At the same time our model is not a pure innovation — it is based on how labs work.

I’ve had some trouble understanding the concept of media labs. In your report, you effectively sum up my problem: “Humanists are familiar with libraries and their uses, artists know what studios are and some of the ways in which they are used, but a laboratory is not as familiar in the arts and humanities.” Unfortunately, you also state that this lack of familiarity “can, ultimately, only be addressed by doing laboratory-based work that leads to new humanistic insights and significant new artistic developments.” 

I’ve never done lab-based work. Can you help me understand why “laboratory” is an appropriate classification for the Trope Tank? Might “workshop,” with its multiple meanings (it’s a space for working with technology and also a collaborative activity with intellectual, creative, and/or practical components), serve even better? 

Workshops are mainly for making or repairing things; laboratories are for inquiry, but that includes conducting inquiry in a practical way that can involve making.

I’m interested in the dilemma you present: the incommunicable quality of lab work. It reminds me of something Matt Ratto said about how critical making communicates concepts to the body, not just the brain. That material, tactile, experiential aspect strikes me as a fundamental difference between lab work and conventional humanities scholarship. What is your take on that?

There are aspects of traditional humanities scholarship, such as that in the material history of the text, also called book history, which are quite similar to our lab-like approach. With regard to this type of work in the humanities, we’re also learning from a tradition rather than developing an entirely new idea.

What are some of the things, whether tangible or intangible, that the Trope Tank produces?

The Trope Tank is for producing new insights. It isn’t about production in an industrial or consumer sense, or for that matter even mainly in an artistic sense.

In connection with my first question, could you tell me how the insights produced in the Trope Tank differ from those which more traditional humanities scholars might produce in a library and also how the media lab’s creative output compares with what one would expect to come out of a studio?

I think one of the answers is in how our projects sometimes lie outside of standard scholarship or standard artistic production. The Renderings project is a good example of this. We’ve translated and in some cases ported or emulated digital poetry from other languages. Most conventional literary translators have no idea what to make of this literary translation project. It involves study of and reference to earlier projects to translate electronic literature and constrained and avant-garde writing. The result is not well-understood (in the visual art world certainly) as artistic production, though.

In other cases we have studied digital media and art in ways that cut across platforms (the Apple //e) instead of confining themselves to standard categories of videogame, literary work, etc. This makes new connections between quite obviously related digital works that have never been considered alongside each other before.

Could you tell me what a typical day at the Trope Tank looks like? Who uses the space on a daily basis and in what capacity? What is it like for you to work in that space?

I don’t think there are typical days. We host class visits at times, have discussions with visiting artists and researchers at times, engage with software and hardware in quite specific and directed ways at times, and use systems in a more exploratory way at times. We have meetings with larger or smaller numbers of people or work individually. Often the people involved in the Trope Tank work from other places, if they don’t need the material resources of the lab. The Trope Tank isn’t an assembly line or Amazon warehouse in which the same activity happens all the time.

Having very fond memories of playing Infocom games (the Zork and Enchanter trilogies) on my father’s Apple IIe, I was a bit startled to learn that the Trope Tank hosts a community which is still developing the interactive fiction genre. In retrospect, it seems obvious that so much of the genre’s potential was never explored back in the 80’s. Why the enduring interest? What is the relevance of this sort of work in the context of contemporary literary production and game design? 

The question of why interactive fiction is still interesting deserves a book-length answer (Twisty Little Passages, Nick Montfort, MIT Press, 2003) or a documentary film-length answer (Get Lamp, Jason Scott, 2010). The main way interactive fiction relates to contemporary literary production and game design is that it is contemporary literary production and game design. Beyond that, it’s not simple to say how interactive fiction, still being made in very compelling ways, relates to other forms of literature and game. You would do well to consider specific works of interactive fiction and specific people, and how they relate to other sorts of literature and gaming.

Your book is on my holiday reading list, and I’ll see if I can track down that documentary. Thanks for that.

The book is a bit antiquated by now — no coverage of Twine and today’s popular (and sometimes radical) hypertext interactive fictions, for instance. But, I hope it’s still worthwhile.

Your upcoming book is intended to teach basic coding skills to workers in the arts and humanities. What inspired you to take on this project? Who will benefit from it most? More importantly, how can I, an aspiring fiction writer, benefit?

The book was mainly motivated by particular people in the arts and humanities who are interested in programming but who have not been finding the support to learn about it. I also saw that there was little high-level interest (in writing about the digital humanities, in curriculum committees, etc.) in teaching programming — even though millions of people learned how to program just for fun in the 1980s. Exploratory programming is about learning and discovery, not about instrumental uses. So, I would suggest that you and others in the literary arts can benefit by understanding powerful new ways to think and to amplify your thoughts using computation.

Thank you for taking the time to correspond with me.

Posted on 2015/12/17 by

Bums in Seats: Queer Media Database of Canada/Québec

The Uniter, October 15 2015.

The lights dim for the second of two screenings titled Matraques, a special event curated and organized by the Queer Media Database of Canada/Québec in collaboration with the queer film festival Image+Nation. The two-part screening is composed of twenty-one vignettes, each a short film or extract about the history of literal and metaphorical policing of queers in Canada. As the screenings end and the Q&A session starts, two things become evident. One: many people in the room know each other and others are being readily introduced, which makes knowledge of Canada’s queer history emerge out of the realm of shared collective memory, intensifying the already deeply communal nature of the event. Two: the bodies in the seats range from undergrads to seasoned film enthusiasts. Witnesses connect and respond to the programming on a visceral level, which for the younger people in the crowd enhances the immediacy of this history as represented in films that otherwise might have come across as demagogic or didactic. Judging by the fact that no one feels like leaving Concordia University’s Cinema de Sève long after the films have finished, the screening is a resounding success.

A couple of days before the screening, I met up with Dr. Thomas Waugh and Jordan Arseneault. Prof. Waugh is Research Chair in Sexual Representation and Documentary Film at Concordia University’s Mel Hoppenheim School of Cinema and president of the Queer Media Database Canada-Québec. Waugh’s books include the anthology The Perils of Pedagogy: The Works of John Greyson (with Brenda Longfellow and Scott MacKenzie, 2013); the collections The Fruit Machine: Twenty Years of Writings on Queer Cinema (2000) and The Right to Play Oneself: Looking Back on Documentary Film (2011); and the monographs Hard to Imagine: Gay Male Eroticism in Photography and Film from their Beginnings to Stonewall (1996), The Romance of Transgression in Canada: Sexualities, Nations, Moving Images (2006), and Montreal Main (2010). He is also co-editor of the Queer Film Classics book series. Arseneault is the coordinator of the Queer Media Database Canada-Québec, as well as a drag performer, social artist, writer, meeting facilitator, translator, and former editor of 2Bmag, Québec’s only English LGBTTQ monthly magazine.

According to the website, the purpose of the Queer Media Database of Canada/Québec “is to maintain a dynamic and interactive online catalogue of LGBTQ (lesbian, gay, bisexual, transgender, and queer) Canadian film, video and digital works, their makers, and related institutions.”

Thomas Waugh. Source: Concordia University.

 

How did the project come to be?
TW: There is a historic basis for the Queer Media Database. 20 years ago, when I started developing an encyclopedic project on Canadian queer moving image media, I saw and documented everything I could, thousands of short and long works. This documentation ended up in print form in my 2006 book The Romance of Transgression in Canada as an appendix to the main critical and analytic body of work. There are about 350 institutions and individuals embedded in the individual works that were catalogued and described. That print database festered and within five or six years we decided to bring it to life as a kind of living digital archive, using a Wiki model that would be maintained over time. My job is to supervise Jordan and other people working with the project, as well as guide the advisory board and push the project along. I try to empty my brain of data.

What do you mean by “Wiki model?” Does taking the online encyclopedia as a model imply a collaborative, open-source aspect for the archive?
JA: Copyright-wise, we decided to make it Creative Commons, which is different than a lot of academic material. This is a part of our mandate. There is also a submission form on the website. In other words, people can’t live-edit like they do on Wikipedia, but we do regular updates based on the submissions people give us. As the coordinator of the project, I periodically take the submissions that people have made, look at who we need to biographize, and then enter their filmographies, translate them, and so on.

Is there a lack of attention paid to Canadian queer cinema that this project is trying to address?
TW: Absolutely. Canadian work, especially in French, tends to become invisible in the global market. This is why we are committed to maintaining access to and visibility of these works. For this reason, the second phase of the project, once the website was up and running and our funds secured, became about programming. The Sunday event, Matraques, will be our fifth program of short and long films screened in the festival context in Canada. We are going international in 2016, with programs in India, France and Italy.

How is the project funded?
TW: We get funding from Concordia University, Canada Council, Heritage Canada and SSHRC. We also have partnerships with organizations across Canada who contribute moral support, facilities and some money, mainly in the form of accommodations and venues.

How has applying to these institutions and being helped by them shaped the way the project is verbalized?
TW: That is a very clever, Canadian question. Canadian culture and education are very much shaped by grant applications and criteria that everyone is scrambling to meet.

JA: We have been extraordinarily blessed with understanding on the part of these juries. People seem to get it. We applied for project funding that emphasized the national scope, research creation, public access to archival works… These are some of the trends we’ve tapped into. On the other hand, we haven’t been successful with one provincial funder who couldn’t conceive why we weren’t also streaming films. For them, a website hosting descriptions of artists and films did not really mean that much. Having said that, and having done the grant writing on the project since 2013, I find that there has been a very nice wave of understanding about the inherent value of having open-source material available on historic works. The AIDS Activist History Project, which is run by Alexis Shotwell at Carleton University and which collects oral history, interviews and names of activists and artists, has been similarly successful in that people understand how valuable primary source materials are.

TW: Streaming the films would also be great, but we can’t deal with the legality and materiality of rights ownership. Not only is it counter to our philosophy of copyleft and access, but it would also be a full-time industrial activity to maintain rights for 3,000 works. In fact, we want to support the distributors, exhibitors and rights owners who are doing their best to provide access to these works.

What are the project’s other institutional ties across Concordia? Prof. Waugh, you are a Film Studies professor at the Mel Hoppenheim School of Cinema, which also runs the Moving Image Resource Centre here at Concordia.
TW: We are friends with them all. My primary unit is obviously the Mel Hoppenheim School of Cinema, that’s our nerve center in a way. However, it might be bodies that matter rather than institutions. This would not be happening if it wasn’t for individual people’s passions and obsessions.

Jordan Arseneault. Source: Facebook, posted with permission.

We are sitting in Concordia’s Fine Arts PhD Study Space here in the Faubourg Building in downtown Montreal, where the office for the project is located. How does one go about acquiring a space like this? How does it help you achieve your goals?
JA: We had to do an application to the Faculty of Fine Arts, which includes conversations about outcomes, partner grants and byproducts, including how many student employees there are. There is a strong pedagogical component to the project, the training and mentoring of undergraduate and graduate students involved in the project either as interns or employees. That was a requirement for obtaining the space. In that sense, the project is part of the larger Fine Arts pedagogical operation. This space is sort of weird, with all the VHS cassettes and lots of cardboard boxes. There is some technology here, but we don’t really use that stuff (laughs).

JA: When I need to transfer what we’ve received from a filmmaker on Beta to MiniDV to DVD, I have to go to one of our partner organizations. At best, we could cobble together a VHS-to-DVD [converter]. In other words, there could be more technical equipment here, but we are happy to have obtained this space as a place for the project to legally have an address. In a way, the materiality of this space reflects the obsolescence of a lot of material and technology we deal with: from festival catalogues to films on formats ranging from tape to DVD.

Within the next two years, what began as a distant pipe dream for many of the organizations we work with, making the films legally streamable online, will finally be a reality. This is maybe forty percent of the materials we are talking and writing about.

TW: We will host direct links to these works. We will remain necessary because none of the distributors are queer per se. We need to claim kinship with all these objects and people; otherwise, they remain unidentified. The distributors, out of politics of impartiality, do not play the game of naming that is essential for us. The presumption of community and kinship through naming is at the core of the project.

Moreover, I like the concept of materiality, as it segues into corporeality, and the audience’s bums in seats. They are the ultimate matter of this project to me, so it is very exciting to meet them all across the country and see them discovering these works.

The knowledge that emerges out of the database is embodied in the screening events that seem central to the project. Could you tell me more about these events?

JA: We will have organized nine such events across Canada by the end of the inaugural year. In Toronto, for example, we were in the Buddies in Bad Times community theatre in the Village, where, even though we presented with the Inside Out film festival, we screened during Pride rather than compete for attention in the festival context. The event was sponsored by the Canadian Lesbian and Gay Archives, and it was in dialogue with so many other events. In Vancouver, on the other hand, we were in the very chic Vancity Theatre and that was a more traditional cinema experience, with all the trappings of a film festival.

How do you envision these events? What kind of audience engagement are you trying to create, and what kind of “look and feel” do you think is the most conducive to the project?

JA: We do two different things. In Winnipeg, we were showing a film called Prison for Women by Janis Cole and Holly Dale, as well as Claude Jutra’s À tout prendre, a very crypto-gay film. However, I did a salon with local filmmakers, curators and their local distributor called Videopool the day before in order to get input about what we are missing in the catalogue. Before coming to this project, the screenings I’ve attended and organized were about hanging a sheet and acquiring enough folding chairs in order to watch a film that someone physically brought from the Berlin festival, for example. Film festivals are important for the legitimization of queer cinema, and so is the sense of community. That is why I love that sort of artisanal, communal, “pink popcorn” practice of spectatorship where people don’t mind if they don’t have the perfect line of sight. So in Winnipeg we got to do both a venue screening and a salon. We are doing another salon this coming January at Videofag, a queer space in Toronto where we will again be talking about what is missing from the archive and what our next foray of research should be.

What are the requirements for being added to the database?
JA: To be included, the work needs to have been shown twice publicly. Otherwise, the profusion of eligible works would be astronomical. We are still considering self-published work. For example, with web-based work, if it has been seen 1,300 times, that might count as a public showing. Because, let’s be clear, in the Canadian art realm, a majorly distributed work might have been shown fourteen times. It’s all still very indie.

What links everything from the latest Xavier Dolan film to a lesbian stop-motion animation about bunnies is the political significance of self-describing as queer and ascribing queerness to an art object. I wonder when that might be made obsolete. However, a part of me thinks that structural homophobia and misogyny will continue to be present to such a degree that an explicit queer lens on the moving image will always be useful.

Posted on 2015/12/17 by

Bedroom as Beadwork Lab?: An interview with Cedar-Eve Peters

Cedar-Eve Peters is an Anishnaabae visual artist and beader from the Ojibwa nation, currently based in Montreal. Cedar sat down with me to discuss the nature of her workspace and its relationship to her beading practice. We also grappled with a question previously asked on the dhtoph blog: “do we really need a designated space for work that we can just as easily do at home or our favorite coffee house?” 

The transcription has been edited for clarity.


So how would you describe your lab space?

Very messy. Like right now it’s very disorganized.

Could you talk about where it’s located?

Oh yeah. My workspace is also my bedroom, so sometimes that’s annoying because I can’t separate workspace from sleep space. I guess it’s kind of organized. Everything’s in containers at least, but it just seems like things are all over the place right now.

What would you say the workspace itself consists of?

Mm…beads? You mean the materials?

Not so much the materials but the things you’re going to use. Like this chair, and that desk, the way it folds down, the cutting mat and the loom, your boxes of beads; these are all things that you need to get this work done.

Yeah. I guess I don’t think about that. Containers and shelves. Mostly containers I guess. A bunch of lights.

A surface?

Not so much right now. [laughs]

Surface space must be essential, given that you need to be able to see all these tiny beads.

Yeah, if the surface isn’t clean then I feel like I can’t think straight, so that’s annoying. But also, it helps in a way cuz I’m just like, stimulated by everything that’s around.

Is that a positive to working in your bedroom?

No. [laughs] I don’t think so.
