I'm using QGIS 2.8.1 Atlas and creating a mapbook using a grid layer that has an attribute with a URL in it.
I want the URL in that attribute to become a hyperlink in the PDF once it's published.
Is there just an expression I can use to do this?
Unfortunately, QGIS doesn't currently seem to support exporting hyperlinks to PDF. So even if you manage to create those links in the layout, they will not be active.
Related question: Clickable HTML link in QGIS print composer pdf export?
QGIS bug tracker issue.
EDIT: Here's a Python code snippet for creating a PDF mapbook from QGIS Atlas images by adding links to neighbor tiles on each page.
I used this script to add links to images generated from a QGIS atlas and read them on my e-reader as a PDF. It works just fine!
In QGIS 3 (and possibly in your version) you can add an HTML box (Top menu: Add Item > Add HTML) where you can add a hyperlink. If your grid layer is being used as a coverage layer, adding the URL should be simple: you can use the expression [% "name-of-your-url-attribute" %] to pull in the link.
In your HTML Frame properties window, your code would look something like this:
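The code itself didn't survive above; a minimal example of what the HTML frame source might contain is shown below. The field name "url" is an assumption (substitute your own attribute name), and the item's "Evaluate QGIS expressions in HTML source" option needs to be enabled for the expression to be resolved.

```html
<p>
  More information:
  <a href="[% "url" %]">[% "url" %]</a>
</p>
```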
You can just add a text link such as http://www.google.com, and when you export the PDF it is a clickable hyperlink.
Comparing different data sources by examining the associations between surrounding greenspace and children's weight status
Studies on the association between surrounding greenspace and being overweight in childhood show inconsistent results, possibly because they differ widely in their definition and measurement of surrounding greenspace. Our aim was to evaluate whether the association of greenspace with being overweight depends on the measurement of greenspace in different data sources.
Based on data from the school entry examinations of 22,678 children in the city of Hannover, Germany, from 2010 to 2014, the association between greenspace availability and overweight was examined. Three different indicators of greenspace availability were derived for a set of 51 areas of the city: the Normalized Difference Vegetation Index (NDVI), the OpenStreetMap (OSM) dataset, and the European Urban Atlas (UA) dataset. Agreement between the indicators on the quantity of greenspace coverage was compared. The association with children's BMI z-score, including potential interaction terms, was assessed using multilevel regression analysis.
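As a concrete illustration of the NDVI-based indicator (not code from the study itself), greenspace availability can be derived by computing NDVI = (NIR − red) / (NIR + red) per pixel and taking the share of pixels above a greenness threshold. The 0.4 threshold and the toy arrays below are assumptions for illustration:

```python
# Toy sketch of an NDVI-based greenspace-availability indicator.
# red and nir would come from a satellite raster; here they are toy arrays.
import numpy as np

def greenspace_share(red, nir, threshold=0.4):
    """Fraction of pixels whose NDVI exceeds the given threshold."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)  # guard against division by zero
    return float((ndvi > threshold).mean())

red = np.array([[0.1, 0.3], [0.2, 0.5]])
nir = np.array([[0.6, 0.4], [0.7, 0.5]])
print(greenspace_share(red, nir))  # → 0.5 (two of four pixels are "green")
```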
Greenspace availability per district area was on average 42% as derived by NDVI, 29% by OSM, and 22% by UA, with OSM and UA being strongly correlated. Only the greenspace availability derived by NDVI showed an association with children's BMI z-score: the higher the greenspace availability, the lower the BMI. The association tended to be stronger for boys and migrant children than for girls and non-migrants and was restricted to the highest levels of greenspace availability.
Associations of greenspace with children's weight status depend on the greenspace measurement chosen. Surrounding greenspace was measured more comprehensively by NDVI. Data sources based on land-use categories, such as UA and OSM, may be less suitable for reflecting the surrounding greenspace relevant for health outcomes. Potential mechanisms warrant further analysis and investigation.
Unfortunately, I've tried several different things using lock layers, locking various items, etc. It all works for me, so you might want to update your QGIS version if you don't have the most recent one. If you do, then you could report it to QGIS.
As a workaround, if you are trying to create a single composer with multiple pages, you might want to switch to multiple composers, each with a single map. Or you could have the composer 'controlled by atlas'. You can then use your polygon layer as the coverage layer, applying a small margin so each feature fits nicely on its page. Here's the QGIS training material on atlas: https://docs.qgis.org/2.8/en/docs/training_manual/forestry/forest_maps.html
GIS Workshop Registration for Fall 2016 Is Now Open
Registration is now open for the fall semester’s GIS (geographic information systems) Practicum, Introduction to GIS Using Open Source Software (featuring QGIS). The sessions will be held in the GIS Lab at Baruch College:
The day-long workshop runs from 9am to 4:30pm. Current CUNY graduate students, faculty, and staff, and full-time Baruch undergrads are eligible to register. Advance registration is required; the fee is $30 and includes a detailed tutorial manual and a light breakfast. Participants must bring their own laptop with QGIS pre-installed in order to take the class. Visit the GIS Practicum page to learn more and to register: http://guides.newman.baruch.cuny.edu/gis/gisprac.
Baruch librarians: feel free to circulate this info to students and faculty, but please do not post on listservs.
The participants were part of a course on HCI and user studies and had been encouraged to be questioning and critical. No participant had studied Digital Libraries as part of their formal training and Digital Libraries were not specifically mentioned to any participants.
When invited to post a definition of Humanities Computing/Digital Humanities in the online forum “Day of Digital Humanities”:
Keywords: Humanities Computing, Digital Humanities, discipline, interdiscipline, modes of engagement, 2.0 interactivity, visualization, spatialization, code
In an age when many people turn to the Internet for information, keyword searching is a tempting strategy for defining a field. However, the most obvious search term—digital humanities—yields only a partial picture. It is not a recognized subject heading in the U.S. Library of Congress classification system and, Willard McCarty found, near equivalents of “Humanities Computing” appear in conjunction with other terms such as humanities, arts, philosophy, and variations of computing, informatics, technology, data processing, digital, and multi-media (Humanities Computing, 2–3, 215). Some subjects such as “arts” were also outside the scope of early print-dominated Humanities Computing. The words digital and media, Andy Engel found in doing keyword searching for this book, appear often in titles of publications, educational programs, calls for conference papers, and job descriptions. Yet, as they have gained popularity, their usefulness has diluted (e-mail, July 13, 2010). Database sleuthing, then, is only a blunt instrument. A closer analysis of six major statements furnishes a more nuanced picture of how the field is defined. This chapter then situates definition in the context of three major disciplines where new technologies and media are changing the nature of practice—English, history, and archaeology. It closes with a reflection on three trendlines that have emerged in those disciplines and Digital Humanities writ large—visualization, spatialization, and a computational turn in the field.
This collection marks a turning point in the field of digital humanities: for the first time, a wide range of theorists and practitioners, those who have been active in the field for decades, and those recently involved, disciplinary experts, computer scientists, and library and information studies specialists, have been brought together to consider digital humanities as a discipline in its own right, as well as to reflect on how it relates to areas of traditional humanities scholarship.
—Susan Schreibman, Ray Siemens, and John Unsworth, “The Digital Humanities and Humanities Computing: An Introduction,” in A Companion to Digital Humanities (Malden, MA, and Oxford: Blackwell, 2004), xxiii
Publication of a Blackwell anthology in 2004 suggested that Digital Humanities had come of age in a history that is traced conventionally to the search for machines capable of automating linguistic analysis of written texts. The year 1949 is enshrined in most origin stories, benchmarked by Father Robert Busa’s efforts to create an automated index verborum of all words in the works of Thomas Aquinas and related authors. In the opening chapter, Susan Hockey divides the history of the field into four stages: Beginnings (1949–early 1970s), Consolidation (1970s–mid-1980s), New Developments (mid-1980s–early 1990s), and the Era of the Internet (1990s forward). Hockey is mindful of the challenge of writing the history of an interdisciplinary area. Any attempt raises questions of scope, overlap, impact on other disciplines, and the difference between straightforward chronology and digressions from a linear timeline (“The History,” 3). Willard McCarty also warns against the “Billiard Ball Theory of History,” asserting impact for some developments while consigning others to lesser or no importance (Humanities Computing, 212–13). Jan Hajic, for instance, tracks emergence to 1948, citing broader scientific, economic, and political developments prior to and during World War II. Interest in natural language arose in fields distant from linguistics and other humanities disciplines, including computer science, signal processing, and information theory. The year 1948 also marks Claude Shannon’s foundational work in information theory and the probabilistic and statistical description of information contents (80).
Nonetheless, the field has a strong historical identity with linguistics and computer-aided study of texts, signified by the early names computational linguistics and humanities computing. Typical activities included textual informatics, miniaturization, and stylometric analysis of encoded textual material that aided studies of authorship and dating. Vocabulary studies generated by concordance programs were prominent in publications and, during the period of Consolidation, literary and linguistic computing in conference presentations. Yet, papers also accounted for using computers in teaching writing and language instruction, music, art, and archaeology. Overall, emphasis tended to be on input, output, and programming, though early reproduction was more suited to journals and books than poetry and drama. Mathematics for vocabulary counts also exceeded humanists’ traditional skills, and computer-based work was not widely respected in humanities (Hockey, “The History,” 7–10).
The period of New Developments was marked by several advances. By the late 1980s, powerful workstations were affording greater memory, screen resolution, color capacity, and graphical user interface, facilitating display of not only musical notation software but also non-standard characters in Old English, Greek, Cyrillic, and other alphabets. Both textual and visual elements could be incorporated in digital surrogates of manuscripts and documents as well (Hockey, “The History”). Expectations for quality in graphics grew, Burdick et al. also recall, as bandwidth increased, and multimedia forms of humanistic research in digital environments emerged (9, 20). And, Melissa Terras adds, unprecedented investments and development in digitization were apparent in the heritage and cultural sector, along with changes in public policy that increased availability of funding (“Digitization,” 51). The rhetoric of “revolution,” the Companion’s editors caution, was more predictive in some disciplines than others (Schreibman, Siemens, and Unsworth, “The Digital Humanities,” xxiv). Even so, an authoritative historical record could now be compiled for what they alternately called a “field” and a “discipline” with an “interdisciplinary core” located in “Humanities Computing.” That label also marked a strong orientation to tools and methods reinforced in chapters on principles, applications, production, dissemination, and archiving.
The advent of personal computers and e-mail in the “Era of the Internet” ushered in a new relationship of humanities and technology. Burdick et al. characterize the change as acceleration of a transition in digital scholarship from processing to networking (8). The implications were evident in one of the early homes for Humanities Computing. Nancy Ide describes the period from the 1990s forward as a “golden era” in linguistic corpora. Prior to the Internet, the body of literature for stylistic analysis, authorship studies, and corpora for general language in lexicography was typically created and processed at single locations. Increased computer speed and capacity facilitated sharing more and larger texts while expanding possibilities for gathering statistics about patterns of language, and new language-processing software stimulated renewed interest in corpus composition in computational linguistics. Parallel corpora containing the same text in two or more languages also appeared, and automatic techniques were developed for annotating language data with information about linguistic properties. Yet, limits persisted. By 2004, few efforts had been made to compile language samples that were balanced in representing different genres and speech dialects (289–90).
Even with continuing limits, Hockey adds, by the early 1990s new projects in electronic scholarly editions were under way, libraries were putting the content of collections on the Internet, and the Text Encoding Initiative published the first full version of guidelines for representing texts in digital form. Services were being consolidated, and theoretical work in Humanities Computing and new academic programs signaled wider acceptance. And, early multimedia combinations of text with images, audio and video were appearing as well (“History,” 10–16). The sea change prompted by the Internet also became the basis for new periodizations of the field. Cathy Davidson calls the time from 1991 to the dot-com bust in fall 2001 “Humanities 1.0.” It was characterized by moving “from the few to the many.” Websites and tools facilitated massive amounts of archiving, data collection, manipulation, and searching. For the most part, though, tools were created by experts or commercial interests. “Humanities 2.0” was characterized by new tools and relationships between producers and consumers of tools, fostering a “many-to-many” model marked by greater interactivity, user participation, and user-generated content. This shift was apparent in the corporate and social networking of Google and MySpace, collaborative knowledge building of Wikipedia, user-generated photo-sharing of Flickr, video-posting of YouTube, and blogs, wikis, and virtual environments. “If Web 1.0 was about democratizing access,” Davidson sums up, “Web 2.0 was about democratizing participation” (“Humanities 2.0,” 205).
Steven E. Jones highlights a more recent timetable over a ten-year period that gained momentum between 2004 and 2008. New digital products emerged along with social-network platforms and other developments such as Google Books and Google Maps. The change was not so much a “paradigm” shift as a “fork” in Humanities Computing that established a new “branch” of work and a “new, interdisciplinary kind of platform thinking.” Borrowing from William Gibson, Jones styles the shift an “eversion” of cyberspace, a “turning itself inside out” marked by a diverse set of cultural, intellectual, and technological changes. Eversion parallels Katherine Hayles’s conception of a new phase in cybernetics that moved from “virtuality” to a “mixed reality.” This phenomenon is not isolated to the academy: it is part of a larger cultural shift marked by emergence and convergence. The new DH associated with this shift is evident in digital forensics, critical code and platform studies, game studies, and a new phase of research using linguistic data, large corpora of texts, and visualizations documented in the latter half of this chapter in the disciplines of English, history, and archaeology. A more layered and hybrid experience of digital data and digital media, Jones adds, is occurring across contexts, from archived manuscripts to Arduino circuit boards. Conceptualized in terms of Hayles’s notion of “intermediation” of humans and machines in “recursive feedback and feedforward loops,” this experience is evident in new workflows and collaborative relationships examined more fully in chapter 6 (3–5, 11, 13, 31–32, 83, 91, 173).
Statement 2 signals another benchmark event that appeared three years after the Companion was published, the inaugural issue of Digital Humanities Quarterly ( DHQ ):
Digital humanities is by its nature a hybrid domain, crossing disciplinary boundaries and also traditional barriers between theory and practice, technological implementation and scholarly reflection. But over time this field has developed its own orthodoxies, its internal lines of affiliation and collaboration that have become intellectual paths of least resistance. In a world—perhaps scarcely imagined two decades ago—where digital issues and questions are connected with nearly every area of endeavor, we cannot take for granted a position of centrality.
—Julia Flanders, Wendell Piez, and Melissa Terras, “Welcome to Digital Humanities Quarterly ,” Digital Humanities Quarterly 1, no. 1 (2007): ¶3
In welcoming readers to the new journal, Flanders, Piez, and Terras resist defining the field as a discipline. They also defer the underlying question, “What is digital humanities?” Orthodoxies, codifications, and dominant practices had already formed, raising the danger of ossifying the history of a young field prematurely. They argue instead for letting definition emerge from practice, allowing submissions to represent contours of the field in Humanities Computing, other varieties of digital work, and initiatives and individuals not necessarily classified as “digital humanities.” DHQ was conceived as an experimental model. Its innovative technical architecture afforded online, open-access publication under a Creative Commons license that allowed copying, distributing, and transmitting work for non-commercial purposes. Copyright remained with authors, enabling further publication or reuse. Giving all articles detailed XML encoding also facilitated marking genres, names, and citations, while other features fostered more nuanced searching, visualization tools, and other modes of exploration and tracking the evolving nature of the field. Moreover, the editors were looking forward to testing whether the nature of argument would change with the capacity for including interactive media, links to data sets, diagrams, and audiovisual materials.
Mindful of the multiple organizations serving related interests by 2007, the editors also hoped DHQ would become a meeting ground and space of mutual encounter. They hoped to bridge historic constituencies of Digital Humanities represented by the sponsoring Alliance of Digital Humanities Organizations (ADHO) and closely related domains that were emerging at that point. The journal’s commitment to breadth has been borne out in the multidisciplinary scope of articles. Topics have spanned game studies and comic books, digital library resources, time-based digital media, digital editing, visual knowledge and graphics, sound, high-performance computing, copyright, endangered texts, and electronic literature, as well as teaching, learning, and curriculum and the reward system of tenure, promotion, and publication. Special clusters and numbers have also focused on project life cycles, data mining, classical studies, digital textual studies, the literary/studies, e-science for arts and humanities, theorizing connectivity, futures of digital studies, and oral histories of early Humanities Computing.
One year after the launch of Digital Humanities Quarterly , in May 2008, another benchmark of the field’s evolution appeared when the National Endowment for the Humanities elevated a program-level initiative to a full-fledged Office of Digital Humanities (ODH). Brett Bobley, director of the office, addressed the question of definition in a presentation to the National Council on the Humanities:
We use “digital humanities” as an umbrella term for a number of different activities that surround technology and humanities scholarship. Under the digital humanities rubric, I would include topics like open access to materials, intellectual property rights, tool development, digital libraries, data mining, born-digital preservation, multimedia publication, visualization, GIS, digital reconstruction, study of the impact of technology on numerous fields, technology for teaching and learning, sustainability models, and many others.
—Brett Bobley, “Why the Digital Humanities?,” Director, Office of Digital Humanities, National Endowment for the Humanities, http://www.neh.gov/files/odh_why_the_digital_humanities.pdf
The mission of the ODH is to support innovative projects that use new technologies to advance the endowment’s traditional goal of making cultural heritage materials accessible for research, teaching, and public programming. Elevation to a new office was widely considered a sign of maturity, signified as a “tipping” or “turning” point. In her report on DH for 2008, Lisa Spiro calls it a mark of credibility, and, in an article on “The Rise of Digital NEH,” Andy Guess remarks that what began as a “grassroots movement” was now anchored by funding agencies and a network of centers. The impact of technology on humanities, Bobley summed up, is characterized by four major game-changers:
- the changing relationship between a scholar and the materials studied
- the introduction of technology-based tools and methodologies
- the changing relationship among scholars, libraries, and publishers
- the rise of collaborative, interdisciplinary work in the humanities.
The ODH expanded the endowment’s support for digital work significantly. It provides funding for institutes on advanced topics and DH centers. Its Implementation Grants program supports a wide range of activities, including the development of computationally based methods, techniques, or tools; completion and sustainability of existing resources, often in alliance with libraries and archives; studies of philosophical or practical implications of emerging technologies in both disciplinary and interdisciplinary contexts; and digital modes of scholarly communication that facilitate peer review, collaboration, or dissemination of scholarship. The ODH also partners with other funders, branches of government, organizations, and programs abroad. And its Digital Humanities Start-Up Grants program supports smaller-scale prototyping and experimenting. Taking the April 2013 announcement of twenty-three new recipients of Start-Up Grants as a representative set of examples, projects span digital collections of visual, textual, and audio materials from early through modern periods, a mobile museum initiative, games development, and interests intersecting with fields of medieval studies, African American studies, and film studies. Older tools of computational linguistics are also being used in new contexts, and novel ones are being developed for topic modeling, metadata visualization, open-source access, and preservation.
The Digging into Data Challenge, in particular, has accelerated boundary crossing between humanities and social sciences by providing funding for research using massive databases of materials, including digitized books and newspapers, music, transactional data such as web searches, sensor data, and cell-phone records. The “Big Data” initiative has also heightened the need for collaboration and inter-institutional cooperation in working with large data sets of complex topics over time, such as patterns of creativity, authorship, and culture. And, access to data on a large scale enhances prospects for interdisciplinary research and teaching by facilitating more comprehensive views. Describing the multidisciplinary scope of the project Civil War Washington, Kenneth Price lists history, literary studies, geography, urban studies, and computer-aided mapping. One of the reasons so little research had focused on the city during that period, Price speculates, was that the form of scholarship previously available could not represent adequately the complex interplay of literary, political, military, and social elements (293–94). Research on that scale, however, is expensive, rekindling debate about the relationship of humanities with commercial enterprises that set terms of access to and use of data. It has also stimulated a debate on marginalization of smaller projects in the force of “Big Humanities.”
Taken together, statements 1–3 document significant developments in the institutionalization of new fields—a defining literature, a dedicated journal, and funding support. Statements 4 and 5 benchmark an added development, growing debate on definition of the field. Read comparatively, they reveal new positionings.
Speculative computing arose from a productive tension with work in what has come to be known as digital humanities. That field, constituted by work at the intersection of traditional humanities and computation technology, uses digital tools to extend humanistic inquiry. Computational methods rooted in formal logic tend to be granted more authority in this dialogue than methods grounded in subjective judgment. But speculative computing inverts this power relation, stressing the need for humanities tools in digital environments.
—Johanna Drucker, SpecLab: Digital Aesthetics and Projects in Speculative Computing (Chicago: U of Chicago P, 2009), xi
Drucker distinguishes “digital humanities,” characterized by a philosophy of Mathesis, from “speculative computing,” characterized by a philosophy of Aesthesis. Her distinction is based on experiences during the 1990s and early 2000s at the Institute for Advanced Technology in the Humanities, in projects that became the core of the Speculative Computing Laboratory (SpecLab). By privileging principles of objectivity, formal logic, and instrumental applications in Mathesis, Drucker’s formulation of “digital humanities” prioritizes the cultural authority of technical rationality manifested in quantitative method, automated processing, classification, a mechanistic view of analysis, and a dichotomy of subject and object. By privileging subjectivity, aesthetics, interpretation, and emergent phenomena, “speculative computing” prioritizes questions of textuality, rhetorical properties of graphicality in design, visual modes of knowing, and epistemological and ideological critique of how we represent knowledge. Mechanistic claims of truth, purity, and validity are further challenged by a probabilistic view of knowledge and heteroglossic processes, informed by theories of constructivism and post-structuralism, cognitive science, and the fields of culture, media, and visual studies (Drucker, SpecLab, xi–xvi, 5, 19, 22–30; see also Drucker and Nowviskie).
Drucker’s distinction elevates the aesthetics of computational work at the boundary of humanistic interpretation and computer science. In a comparable move, Burdick et al. bring a humanities conception of design—defined by information design, graphics, typography, formal and rhetorical patterning—to the center of the field framed by traditional humanities concerns—defined by subjectivity, ambiguity, contingency, and observer-dependent variables in knowledge production (vii, 92). Like Drucker, they also reconceptualize design from a linear and predictive process to generativity in an iterative and recursive process. Design, Drucker adds, becomes a “form of mediation,” not just transmission and delivery of facts. Information visualization, she notes elsewhere, becomes genuinely humanistic, incorporating critical thought and the rhetorical force of the visual (“Humanistic Theory,” 86). Not everyone, however, equates “digital humanities” narrowly with Mathesis. Drucker’s positioning of speculative computing as the “other” to DH, Katherine Hayles responded, opens up the field. Yet, her stark contrast flattens its diversity. Many would also argue they are doing speculative computing (How We Think, 26). Moreover, Drucker bypasses the boundary work of Statement 5.
Statement 5 emanates from a group affiliated with UCLA’s Digital Humanities and Media Studies program. The group focused directly on the task of definition in a Mellon-funded seminar in 2008–2009 at UCLA, a Digital Humanities Manifesto 2.0 , and a March 2009 White Paper by Todd Presner and Chris Johanson on “The Promise of Digital Humanities.”
Digital Humanities is not a unified field but an array of convergent practices that explore a universe in which: a) print is no longer the exclusive or the normative medium in which knowledge is produced and/or disseminated; instead, print finds itself absorbed into new, multimedia configurations; and b) digital tools, techniques, and media have altered the production and dissemination of knowledge in the arts, human and social sciences.
—Jeffrey Schnapp and Todd Presner, “Digital Humanities Manifesto 2.0,” http://www.humanitiesblast.com/manifesto/Manifesto_V2.pdf
The periodization of the Manifesto and the White Paper parallels Davidson’s distinction between Humanities 1.0 and 2.0. A first wave of Digital Humanities in the late 1990s and early 2000s emphasized large-scale digitization projects and technological infrastructure. It replicated the world that print had codified over five centuries and was quantitative in nature, characterized by mobilizing search and retrieval powers of databases, automating corpus linguistics, and stacking HyperCards into critical arrays. In contrast, the second wave has been qualitative, interpretive, experiential, emotive, and generative in nature. It moved beyond the primacy of text to practices and qualities that can inhere in any medium, including time-based art forms such as film, music, and animation; visual traditions such as graphics and design; spatial practices such as architecture and geography; and curatorial practices associated with museums and galleries. The agenda of the field also expanded to include the cultural and social impact of new technologies and born-digital materials such as electronic literature and web-based artifacts. DH became an umbrella term for a multidisciplinary array of practices that extend beyond traditional humanities departments to include architecture, geography, information studies, film and media studies, anthropology, and other social sciences.
Interdisciplinary is a keyword in the second wave, along with collaborative, socially engaged, global, and open access. Their combination is not a simple sum of the parts. Manifesto 2.0 invokes a “digital revolution,” and the White Paper calls the effect of new media and digital technologies “profoundly transformative.” The authors reject the premise of a unified field in favor of an interplay of tensions and frictions. Schnapp and Presner do not suggest that Digital Humanities replaces or rejects traditional humanities. It is not a new general culture akin to Renaissance humanism either, or a new universal literacy. They see it as a natural outgrowth and expansion in an “emerging transdisciplinary domain” inclusive of both earlier Humanities Computing and new problems, genres, concepts, and capabilities. The vision of a transdisciplinary domain parallels trans-sector Transdisciplinarity. The Manifesto pushes into public spheres of the Web, blogosphere, social networking, and the private sector of game design. At the same time, it parallels the imperative of Critical Interdisciplinarity. If new technologies are dominated and controlled by corporate and entertainment interests, the authors ask, how will our cultural legacy be rendered in new media formats? By whom and for what? Elsewhere, Presner reported being told his HyperCities project using Google Maps and Google Earth puts him “in bed with the devil” (qtd. in Hayles, How We Think, 41).
The transdisciplinary momentum of statement 5 is further apparent in comparable declarations, notable among them the Affiche du Manifeste des Digital Humanities. Circulated at a THATCamp in Paris in May 2010, the French manifesto embraces the totality of social sciences and humanities. It acknowledges reliance on the disciplines but deems Digital Humanities a “transdiscipline” that embodies all methods, systems, and heuristic perspectives linked to the digital within those fields and communities with interdisciplinary goals. Like its U.S. counterpart, the Manifeste covers a wide scope of practices, including encoding textual sources, lexicometry, geographic information systems and web cartography, data mining, 3-D representation, oral archives, digital arts and hypermedia literatures, as well as digitization of cultural, scientific, and technical heritage. The Affiche also calls for integrating digital culture into the definition of general culture in the 21st century.
Statement 6 sketches the broadest picture of the field in Svensson’s typology of five paradigmatic modes of engagement between humanities and information technology or “the digital.”
Svensson’s typology builds on Matthew Ratto’s conception of “epistemic commitments.” Differing commitments influence the identification of study objects, methodological procedures, representative practices, and interpretative frameworks.
Below, I will examine five major modes of engagement in some more detail: information technology as a tool, as a study object, as an expressive medium, as an experimental laboratory and as an activist venue. The first three modes will receive the most attention. Importantly, these should not be seen as mutually exclusive or overly distinct but rather as co-existing and co-dependent layers, and indeed, the boundaries in-between increasingly seem blurry. This does not mean, however, that it may not be fruitful to analyze and discuss them individually as part of charting the digital humanities.
— Patrik Svensson, “The Landscape of Digital Humanities,” Digital Humanities Quarterly 4, no. 1 (2010): ¶102, http://digitalhumanities.org/dhq/vol/4/1/000080/000080.html
In Svensson’s first mode of engagement—as a tool—the field exhibits a strong epistemic investment in tools, methodology, and processes ranging from metadata schemes to project management. There is also a strong focus on text analysis, exemplified by use of text encoding and markup systems in corpus stylistics, digitization, preservation, and curation. This first mode aligns DH with the concept of Methodological Interdisciplinarity. In his book Humanities Computing, McCarty identifies method, not subject, as the defining scholarly platform of the field (5–6). The Wikipedia entry on Digital Humanities retains a strong methodological orientation. Tom Scheinfeldt argues that scholarship at this moment is more about methods than theory (125). And, posters to the “Day of Digital Humanities” online forum on the question “How do you define Humanities Computing/Digital Humanities?” associate the field strongly with “tools” and “application” of technology. McCarty and Harold Short have mapped relations in the “methodological commons” (see fig. 1).
The octagons above the commons in figure 1, McCarty explains in his book, demarcate disciplinary groups of application. The indefinite cloudy shapes below the commons suggest “permeable bodies of knowledge” that are constituted socially, even though lacking departmental or professional aspects. Not all disciplines, however, have the same kind of relationship to the field. McCarty designates history as the primary discipline (especially history of science and technology), along with philosophy and sociology. All the rest are secondary (Humanities Computing, 4, 33, 119, 129). In a speech in March 2013, Raymond Siemens compared versions of the figure. The first version, he recalled, focused on content oriented toward digital modeling (emphasizing digitization). The second version, above, is more inclusive of media types and extra-academic partners while acknowledging process modeling (emphasizing analysis). Looking toward the future, Siemens proposed it is time to focus on problem-based modeling that moves past the rhetoric of revolution to a sustainable action-oriented agenda.
Not all of the shapes in figure 1, it should be said, are strictly “disciplines,” underscoring the need for the fourth major term in the baseline vocabulary for understanding interdisciplinarity: interprofessionalism. The figure also has a mix of traditional disciplines and interdisciplinary areas, in the latter case including cognitive science, performance studies, cultural studies, and the history and philosophy of science and technology. In addition, the profession of engineering appears. The commons in the middle of the figure is a hub for transcending the limits of specialized domains. In a separate though complementary reflection on the relationship of interdisciplinarity and transdisciplinarity in Digital Humanities, Yu-wei Lin calls models and tools for modeling “carriers of interdisciplinarity.” Their carrying capacity fosters projects that may lead to more radical “transdisciplinary” movement beyond parent disciplines through a shared conceptual framework that integrates concepts, theories, and approaches from different areas of expertise in the creation of something new (296–97).
In Svensson’s second mode of engagement—as a study object—the digital is an object of analysis with a strong focus on digital culture and transformative effects of new technologies of communication. Cyberculture studies and critical digital studies, for example, accentuate critical approaches to new media and their contexts. The scope of forms is wide, encompassing networked innovations such as blogging, podcasting, flashmobs, mashups, and RSS feeds as well as social and video-sharing websites such as MySpace and YouTube, Wikipedia, and massively multiplayer online role-playing games (MMORPGs). Creating and developing tools, Svensson adds, are not prominent activities in this mode, and use of information technology does not extend typically beyond standard tools and accessible data in online environments. The difference in the first two modes illustrates how definition varies depending on where the weight of priority falls: the algorithm or critical theory. Even the most fundamental terms, such as access, are used differently. From a technical standpoint, access connotes availability, speed, and ease of use. From the standpoint of cultural analysis, it connotes sharing materials and reinvigorating the notion of “public humanities” on digital ground.
In the experimental-laboratory mode of engagement, DH centers and laboratories are sites for exploring ideas, testing tools, and modifying data sets and complex objects. This kind of environment is familiar in science and technology but is relatively new to humanities. Svensson cites the Stanford Humanities Laboratory (SHL) and his own HUMlab at Umeå University. Digital platforms such as Second Life, he adds, may function as virtual spaces for experiments that are difficult to mount in physical spaces. Svensson likens such structures to Adam Turner’s notion of “paradisciplinary” work born of exchanging ideas, sharing knowledge, and pooling resources. Turner compares modes of interaction and creativity in these spaces to the community collaboration at the heart of “hacker/maker culture.” Whether the site is a shed or a garage, “the space breathes life into the community” (qtd. in Svensson, “Landscape”). In their model of a new Artereality, Schnapp and Michael Shanks call the SHL both “a multimodal and fluid network” and “a diverse ecology of activity and interest.” Established in 2001, the Stanford Lab was modeled on the platform of “Big Science.” Activities within this collaborative environment comprise a form of “craftwork” where participants learn by making.
Comparably, Saklofske, Clements, and Cunningham liken the space of humanities labs to “experimental sandboxes” (325), and Ben Vershbow calls the New York Public Library Lab a kind of “in-house technology startup.” The lab is occupied by “an unlikely crew of artists, hackers and liberal arts refugees” who focus on the library’s public mission and collections. Envisioned as “inherently inter-disciplinary,” their work has empowered curators “to think more like technologists and interaction designers, and vice versa.” Vershbow credits their success to being able “to work agilely and outside the confines of usual institutional structures” (80). Bethany Nowviskie further likens such spaces to skunkworks, a term adopted by small teams of research and development engineers at the Lockheed Martin aeronautics corporation in the 1940s. Library-based DH skunkworks function as semi-independent “prototyping and makerspace labs” where librarians take on new roles as “scholar-practitioners.” In the Scholars’ Lab at the University of Virginia Library, collaborative research and development has led not only to works of innovative digital scholarship but also to technical and social frameworks needed for support and sustainability. The lab was a merger of three existing centers. It opened in 2006 in a renovated area of the humanities and social sciences research library that was conducive to open communication and flexible use of space (53, 56, 61).
In the expressive-medium mode of engagement, increased digitalization has afforded unparalleled access to heterogeneous types of content and media. Much of this content is born digital in multimodal forms that can be manipulated within a single environment, including moving images, text, music, 3-D designs, databases, graphical details, and virtual walk-throughs. Some areas, such as visual, media, and digital studies, have been affected significantly and, Svensson found, work tends to focus on studying objects rather than producing them. Nevertheless, both the laboratory and expressive-medium modes heighten creativity. For builders of tools, Thomas Crombez posted to the 2010 “Day of Digital Humanities,” DH is a “playground for experimentation.” Innovation has led to technological advancements in the form of new software and more powerful platforms for digital archives. It has also fostered new born-digital objects and aesthetic forms of art and literature. Posting to the 2009 forum, Jolanda-Pieta van Arnhem called DH “about discovery and sharing as much as it is about archival and data visualization.” It advances open communication, collaboration, and expression. At the same time it mirrors her own artistic process by incorporating art, research, and technology.
In Svensson’s fifth mode of engagement—as activist venue—digital technology is mobilized in calls for change. He highlights several examples. Public Secrets, Sharon Daniel’s work on women in prison and the prison system, is a hybrid form of scholarship that is simultaneously artistic installation, cultural critique, and activist intervention. Daniel moves from representation to participation, generating context in a database structure that allows self-representation. She describes her companion piece, Blood Sugar, as “transdisciplinary” in its movement beyond new ways of thinking about traditional rubrics to contesting those rubrics in open forms (cited in Balsamo, 87–88). Kimberly Christen’s Mukurtu: Wampurrarni-kari website on aboriginal artifacts, histories, and images provides aboriginal users with an interface that offers more extensive access than is available to the general public. And, another form of activist engagement occurs in conversations about making as a form of thinking about design and use. Preemptive Media is a space for discussing emerging policies and technologies through beta tests, trial runs, and impact assessments. Elizabeth Losh also cites the Electronic Disturbance Theater that adapted principles of the Critical Art Ensemble in virtual sit-ins, the b.a.n.g. lab at the California Institute for Telecommunications and Information Technology, the “Electronic Democracy” network’s research on online practices of political participation, and acts of “political coding” and “performative hacking” by new-media dissidents (168–69, 171).
Svensson does not include Critical Interdisciplinarity and the “transgressive” and “trans-sector” connotations of Transdisciplinarity in the fifth mode. Yet, they can be viewed as activist modes of scholarship. Questions of social justice and democracy are prominent in cultural studies of digital technologies and new media. And, older topics of subjectivity, identity, community, and representation are being reinvigorated. Digital technologies are also sources of empowerment. Indigenous communities, for example, have used geospatial technologies to protect tribal resources, document sovereignty, manage natural resources, create databases, and build networking forums and guidebooks. Yet, the same technologies are sources of surveillance, stereotyping, and subjugation. Amy Earhart has also interrogated the exclusion of non-canonical texts by women, people of color, and the GLBTQ community. Scrutinizing data from NEH Digital Humanities Start-Up Grants between 2007 and 2010, Earhart found that only 29 of the 141 awards focused on diverse communities and only 16 on preservation or recovery of the texts of diverse communities (314).
Distinct as they are, modes of engagement are not airtight categories. They may overlap, and even in the same mode differences arise. In an interview with Svensson, Charles Ess cites tension at a conference of the Association of Internet Researchers (AoIR) between German and philosophical senses of critical theory and radical critiques from the standpoint of race, gender, and sexuality in the Anglophone tradition. Moreover, although most researchers study the Internet as an artifact rather than engaging in experimentation, in Scandinavia there is a strong tradition of design. Internet research, Ess adds, could also be considered a subset of telecom research, digital studies, or other areas when it takes on their identities. Moreover, growing interest in research and instruction in multimedia art, design, and culture has aligned Humanities Computing with visual and performing arts. Svensson’s statistical tracking of the twenty to fifty most frequent words in programs of AoIR conferences from 1999 to 2008 also revealed that the focus in another example of the second mode—Internet studies—was on space, divide, culture, self, politics, and privacy phenomena, as well as cultural artifacts and processes. An activist orientation appeared that is rare in the older discourse of Humanities Computing, where the predominant focus is databases, models, resources, systems, and editions.
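The kind of frequency comparison Svensson performed can be sketched in a few lines of Python. This is only an illustration of the technique; the program snippets and the stopword list below are invented stand-ins, not his data or his actual procedure:

```python
from collections import Counter
import re

# A minimal stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "of", "and", "in", "a", "to", "on"}

def top_terms(text, n=5):
    """Return the n most frequent non-stopword terms in a program text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

# Hypothetical snippets standing in for two bodies of conference programs.
aoir_program = "privacy culture politics privacy self divide culture privacy"
hc_program = "databases models editions systems databases resources models databases"

print(top_terms(aoir_program, 3))  # → ['privacy', 'culture', 'politics']
print(top_terms(hc_program, 3))    # → ['databases', 'models', 'editions']
```

Run over full conference programs, lists like these make the contrast Svensson describes visible at a glance: one discourse clusters around cultural and political phenomena, the other around technical infrastructure.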
That said, DH organizations are opening up to new topics. The annual meeting of the flagship Alliance of Digital Humanities Organizations (ADHO) still emphasizes Humanities Computing over new media and cultural interests that find more space in groups such as HASTAC. Yet, a new “Global Outlook” (GO::DH) special interest group has formed to address barriers that hinder communication and collaboration across arts, humanities, and the cultural heritage sector as well as income levels. Scott Weingart’s analysis of acceptances to the 2013 ADHO conference reveals that literary studies and data/text mining submissions outnumbered historical studies. Archive work and visualizations also appeared more often than multimedia. Even so, despite being small, multimedia beyond text was not an insignificant subgroup. Gender studies also had a high acceptance rate of 85 percent, and the program included a panel on the future of undergraduate Digital Humanities. Traditional topics of text editing, digitization, computational stylistics, and curation are still invited for the Australasian Association’s hosting of the 2015 conference, but so are arts and performance, new media and Internet studies, code studies, gaming, curriculum and pedagogy, and critical perspectives.
The history of Digital Humanities is painted both in broad strokes, revealing shared needs and interests, and in thin strokes, revealing distinct subhistories. Like linguists, classicists have invested in making digital lexica and encyclopedias, and they have benefited from advances in graphic capacity and language technologies that facilitate machine translation, cross-lingual information retrieval, and syntactic databases. Like literary scholars, linguists have also created electronic text editions enhanced by the ability to annotate interpretations and hyperlink resources. And, involved as they are in data-intensive work, classicists, archaeologists, and historians have all gained from increased capacity for record keeping and statistical processing. The introduction of Digital Humanities interests often generates a claim of interdisciplinary identity in a discipline. Yet, identities differ. If there is a tight relationship between a discipline and a digitally inflected study object, Patrik Svensson found in mapping modes of engagement, the work may lack strong identity as “digital humanities.” A media studies scholar interested in news narratives in online media, for example, may consider this work to be anchored within media studies rather than a separate field. In contrast, if digitally mediated language or communicative patterns in Second Life are incorporated as objects of study, a discipline may change to include digital objects and develop intersections with other disciplines and fields. The changing nature of work practices and perceptions of the role of the digital are evident in the examples of English, history, and archaeology.
Digital Humanities and English have a long-standing relationship which Pressman and Swanstrom attribute to the fact that many groundbreaking projects centered on literary subjects. In an oft-cited essay, Matthew Kirschenbaum identifies six reasons why English departments have been favorable homes (“What is Digital Humanities,” 8–9). The first reason is not surprising: “First, after numeric input, text has been by far the most tractable data type for computers to manipulate.” In contrast, images present more complex challenges of representation. The second reason marks the multidisciplinary scope of English. Subfields of literary and cultural studies, rhetoric and composition, and linguistics have attained separate disciplinary status, but they are still typically housed within the same department. Over time, Pressman and Swanstrom add, conception of the “literary” has expanded beyond traditional texts. In welcoming readers to an online “disanthology” of articles on literary studies in the digital age, the editors called literary studies a “confluence of fields and subfields, tools and techniques.” Given that computational approaches come from varied sources, a growing array of methodologies is engaged, and the practices of digital scholarship lead into other fields in the humanities as well as computer science and library and information science.
In defining the second reason, Kirschenbaum highlights, in particular, the long-standing relationship of computers and composition. Teachers of writing and rhetoric, Jay David Bolter recalls, were among the earliest to welcome new technologies into the classroom, initially word processors and then chat rooms, MOOs, wikis, and blogs. They constituted new spaces for pedagogy, and research on computers and composition expanded eventually from text-based literacy and writing to include new digital media, video games, and social networking (“Critical Theory”). By 2011, the relationship to Digital Humanities was the focus of a featured panel at the annual Computers and Writing conference. Panelist Douglas Eyman called himself a “self-confessed digital humanist,” but admitted he is still puzzling over the question of fit for himself and the field of digital rhetoric. On the TechRhet Digest listserv that prompted the session, Dean Rehberger cautioned against equating DH with one area such as composition and writing, or one area subsuming the other. “The trick,” he advised, “will be to untangle the points of intersection and interaction.”
Throughout its history, composition studies has intersected with multiple disciplines and fields, including literary studies and rhetoric, literacy studies, technology studies, and new media studies. One of those intersections, with rhetoric, is also linked with the field of communication studies. Computer-mediated communication was an early site of studies of behavior in online communities, work that continues in both communications and English departments. In a report on the emergence of “digital rhetoric,” Laura Gurak and Smiljana Antonijevic call for a new “interdisciplinary rhetoric” capable of understanding the persuasive functions of digital communications that encompass text, sound, visual, nonverbal cues, material, and virtual spaces. Digital rhetoric, they argue, must assert a new canon that draws on prior constructs while recognizing changes in the 2,000-year-old tradition that constitutes the field of Western rhetoric. “Screen rhetorics,” Gurak and Antonijevic add, are not a sidebar to studies of public discourse and public address. They are at the center of what theorists and critics should be studying, and of interest to linguists, psychologists, and others exploring human communication.
The third reason recognizes the link between English departments and converging conversations around editorial theory and method in the 1980s, amplified by subsequent advances in implementing electronic archives and editions. These discussions cannot be fully understood, Kirschenbaum notes, without considering parallel conversations about the fourth reason—hypertext and other forms of electronic literature. By the 1990s, Bolter recalls, some critics were positioning digital media as an electronic realization of poststructuralist theory. George Landow argued that hypertext had a lot in common with contemporary literary and semiological theories, although it was aligned initially with formalist theory and print continued to dominate (“Theory and Practice,” 19–20, 26). The “revolution” envisioned by early theorists of hypertext and electronic modes of authorship beckoned radical restructuring of textuality, authorship, and readership while fostering analysis of digital material culture. It took time, though, for more transformative practices of hypermediation and multi-modal remixing to become the object of study.
The fifth reason stems from openness to cultural studies. English departments were early homes for related interests, fostering interactions with other interdisciplinary fields such as popular culture studies, identity fields, and postcolonial studies. The scope of study also expanded with new objects. Once confined to print, the underlying notion of a “text” expanded to include verbal, visual, oral, and other forms of expression. Indicative of this trend, the Texas Institute for Literary and Textual Studies (TILTS), affiliated with the University of Texas English Department, focused on a broadening conception of the “literary” and the “textual.” The TILTS 2011 series on “The Digital and the Human(ities)” encompassed traditional works, non-textual forms, and popular genres. Symposium 1—Access, Authority, and Identity—considered older topics of scholarly editing plus social networking, corporatization and Google, and the fracturing of knowledge and undermining of traditional canons. Symposium 2—Digital Humanities, Teaching and Learning—looked at pedagogical innovations and digitally mediated learning, new subjects of games and code, student subjectivities, born-digital materials, and multi-media composition. Symposium 3—The Digital and the Human(ities)—included automation, digital vernacular, the changing nature of argument, justice, and rights of students and of citizens. Kirschenbaum’s sixth and final reason also recognizes the rise of e-reading and e-book devices, as well as large-scale text digitization projects such as Google Books, data mining, and visualization in distant readings.
The discipline of history also has a long-standing involvement with Digital Humanities. In his report in the Blackwell Companion, William G. Thomas identified three phases in historians’ use of computing technologies. During the first phase in the 1940s, some historians used mathematical techniques and built large data sets. During the second phase beginning in the early 1960s, the emerging field of social science history opened up new social, economic, and political histories that drew on massive amounts of data, enabling historians to tell the story “from the bottom up” rather than the elite perspectives that dominated traditional accounts. The third and current phase is marked by greater capacity for communication via the Internet, in a network of systems and data combined with advances in the personal computer and software. Historical geographical information systems (GIS) also hold promise for enhancing computer-aided spatial analysis not only in history and demography but also in archaeology, geography, law, and environmental science. The number and size of born-digital data collections has increased as well, along with tools that enable independent exploration and interpretive association.
Change, however, stirred debate. During the second phase, cliometrics was a flashpoint, with particular criticism aimed at Robert Fogel and Stanley Engerman’s 1974 book Time on the Cross: The Economics of American Negro Slavery. Critics questioned lack of attention to traditional methods, including narrative, textual, and qualitative analysis as well as interdisciplinary study of social and political forces. Another initiative launched in the 1970s, the Philadelphia Social History Project, assembled a multidisciplinary array of data while aiming to create guidelines for large-scale relational databases. It was criticized, though, for falling short of a larger synthesis for urban history. Other projects aggregated multidisciplinary materials. Who Built America?, for example, compiled film, text, audio, images, and maps in social history. Yet, early products were limited to self-contained CD-ROM, VHS-DVD, and print technology lacking Internet connectivity. As new technology became available, the idea of “hypertext history” arose in projects such as The Valley of the Shadow, which brought together Civil War letters, records, and other materials. Thomas speculates that the term digital history originated at the Virginia Center for Digital History, which he directed during the 1997–98 academic year; he and Edward Ayers used the term to describe the center’s work. In 1997 they taught “Digital History of the Civil War” and began calling such courses “digital history seminars.” Subsequently, Steven Mintz started a digital textbook site named Digital History (Thomas, 57–58, 61–63).
Advances heralded new ways of studying and writing history. However, they also raise new questions about the nature of interpretation. In a 2008 online forum on “The Promise of Digital History,” William Thomas cautions that the fluidity or impermanence of the digital medium means scholars may never stop editing, changing, and refining as new evidence and technologies arise. Where, then, do interpretation and salience go in online projects that are continually in motion? And, what impact do technologies have on understanding history as a mode of investigation, meaning and content, and creating knowledge? Douglas Seefeldt joined Thomas in cautioning that expanded access does not answer the question of what history looks like in a digital medium. Production, access, and communication are valuable. Yet, on another level Digital History is a methodological approach framed by the hypertextual power of technologies to make, define, query, and annotate associations in the record of the past and to gain leverage on a problem. The scale and complexity of born-digital sources require more interdisciplinary collaboration and cooperative initiatives, as well as tailored digital resources and exposure for graduate students. Well-defined exemplars, guidelines for best practices, and standards of peer review are also needed. And, the focus must shift from solely product-oriented exhibits or websites toward the process-oriented work of employing new media tools in research and analysis.
Parallel advances are also evident in the third discipline. In his report on “Computing for Archaeologists” in the Blackwell Companion , Harrison Eiteljorg II traces the history of computing and archaeology to record keeping and statistical processing in the late 1950s. Early limits of cost and access, however, impeded progress. Punch cards and tape were the only means of entering data, and results were only available on paper. Archaeologists also had to learn computer languages. By the mid-1970s, database software was making record keeping more efficient, expanding the amount of material collected and ease of retrieving information without needing to learn programming languages. By the 1980s, microcomputers and new easy-to-use software were available, and geographical information systems (GIS) and computer-aided design (CAD) programs were enhancing map-making and capturing the three-dimensionality of archaeological sites and structures. Virtual reality systems based on CAD models also promised greater realism, but accurate representations were still limited by inadequate data. Like other disciplines, archaeology also needed more discipline-specific software and standards for use. Furthermore, the increasing abundance of information and preservation of data collections require careful management, doubts about the acceptability of digital scholarship persist, and not enough scholars are trained in using computers for archaeological purposes. Even with notable advances, Eiteljorg concludes, the transformation from paper-based to digital recording remains incomplete.
In a blog posting on “Defining Digital Archaeology,” Katy Meyers situates “digital archaeology” historically within the recent rise of “Digital Disciplines.” Yet, she reports, archaeologists have not engaged with the most active of them—the interdisciplinary group of Digital Humanities—or the ways technology is changing their work. Digital technologies are widely used and integrated into the discipline to the point that GIS, statistical programs, databases, and CAD are now considered part of the archaeologist’s toolkit. Yet, there is no disciplinary equivalent to “digital humanities” that accounts comprehensively for an archaeology of digital materials, including excavation of code, analysis of early informatics, and interpretation of early web-based materials. Nor is digital archaeology conceived as an approach to studying past human societies through their material remains, rather than as a support tool or method. Meyers also echoes long-standing concerns about the gap between generic approaches and discipline-specific needs, in this case the limits of the Dublin Core standard for metadata. Rather than a separate discipline and approach, the digital may constitute a different specialization such as a focus on ceramics, lithic analysis, or systems theory.
A recently published open-access book, Archaeology 2.0, provides an overview of new approaches taking hold in the discipline. It does not explore digital initiatives outside of North America and the United Kingdom, but it does cover a broad range of topics that cut across disciplinary and geographic boundaries. Archaeology, Eric C. Kansa notes in the introduction, has long been considered “an inherently multidisciplinary enterprise, with one foot in the humanities and interpretive social sciences and another in the natural sciences.” Technological capacity has increased because of more powerful tools for data management, platforms for making cultural artifacts more accessible, and interfaces for making communication more open and collaboration feasible. Yet, these advances have compounded the challenges of archiving, preserving, and sustaining data, while creating information overload. Even with increased use of themed research blogs and field-based communication devices, the peer-reviewed scholarly journal also remains dominant. And, archaeology faces unique challenges in designing computational infrastructure. It deals in longer horizons of “deep time” and complex multidisciplinary projects with data sets for describing complex contextual relations that are generated by different specialists. In addition, it has links to tourism and the marketing of cultural heritage involving commercially controlled mechanisms of communication and information sharing in both professional and public spheres.
Looking back on the trajectory of change in these disciplines, three trend lines stand out: visualization, spatialization, and a computational turn in scholarship. Visualization is not new. Conversations about visuality occur across disciplines and fields. The label visual culture, Nicholas Mirzoeff recounts, gained currency because the contemporary era is saturated with images, from art and multimodal genres to computer-aided design and magnetic resonance imaging (1–3). The most striking development for Digital Humanities has been enhanced capacity to visualize information, fostering a “spatial” and “geographical” turn in the field facilitated by technologies of Google Earth, MapQuest, the Global Positioning System (GPS), and three-dimensional modeling. Patricia Cohen, who covers “Humanities 2.0” for the New York Times, calls this development the foundation of a new field of Spatial Humanities. Advanced mapping tools, she recalls, were first used in the 1960s, primarily for environmental analysis and urban planning. During the late 1980s and 1990s, geographical historical information systems made it possible to plot changes in a location over time using census information and other quantifiable data. By the mid-2000s, technological advances were making it possible to move beyond restricted map formats and to add photos and texts.
The interdisciplinary character of the spatial turn is evident in three other ways. Visualization in the humanities, Burdick et al. report, is based in large part on techniques borrowed from the social sciences, business applications, and the natural sciences (42). The multidisciplinary scope of materials also renders patterns more visible. A project to create a digital atlas of religion in North America, for example, revealed complex changing patterns of political preference, religious affiliation, migration, and cultural influence by linking them geographically. David Bodenhamer, of the Polis Center, calls the results of capturing multiple perspectives “deep maps” (qtd. in Patricia Cohen). Another project, the Mapping Texts partnership of Stanford and the University of North Texas, allows users to map and analyze language patterns embedded in 230,000 pages of digitized historical Texas newspapers spanning the late 1820s through the early 2000s. With two interactive visualizations, users can explore, for any period, geography, or newspaper title, the most common words, named entities such as people and places, and correlated words that produce topic models.
Yet, Drucker admonishes, traditional humanistic skills of cultural and historical interpretation are still needed. Mapping the Republic of Letters is a Stanford-based project that plotted geographic data for senders and receivers of correspondence, making it possible to see patterns of intellectual exchange in the early-modern world. Lines of light expose connections between points of origin and delivery in the 18th century. Drucker cautions that discrepancies of time and flow are disguised by the appearance of a “smooth, seamless, and unitary motion” (“Humanistic Theory,” 91). Nonetheless, the project renders networks visible for interpretation. Another Stanford-based initiative, the Spatial History Project, provides a community for creative visual analysis in the organizational culture of a lab environment and a wide network of partnerships and collaborations. Geospatial databases facilitate integration of spatial and nonspatial data; visual analysis then renders patterns and anomalies visible. These examples underscore the blurred boundaries of data and argument. In the HASTAC Scholars online forum on Visualization Across Disciplines, Dana Solomon calls the practice of information visualization a form of textual analysis with the potential for historicizing and theorizing a technical process. It can also be located within a broader constellation of aesthetic practice and visual representation in the traditions of statistics, computer science, and graphic design, and in the cultural heritage industry through the use of virtual reality and augmented reality in the restoration of sites.
The third trend line is signified by the label computational turn. David Berry calls it a third wave, extending beyond Schnapp and Presner’s first and second waves. The computational turn moves from older notions of information literacy and digital literacy to the literature of the digital and the shared digital culture facilitated by code and software. This development is evident in real-time streams of data, geolocation, real-time databases, Twitter, social media, cell-phone novels, and other processual and rapidly changing digital forms such as the Internet itself. Focusing on the digital component of DH, Berry adds, accentuates not only medium specificity but also the ways that medial changes produce epistemic ones. At the same time, it problematizes underlying premises of “normal” print-based research while refiguring the field as “computational humanities” (4, 15). The translation of all media today into numerical data, Lev Manovich also emphasizes, means that not only texts, graphics, and moving images have become computable but also sounds, shapes, and spaces (5–6).
The names culturomics and cultural analytics accentuate the algorithm-driven analysis of massive amounts of cultural data occurring in the computational turn. In the process, Burdick et al. also note, the canon of objects and cultural material broadens and new models of knowledge beyond print emerge (41, 125). The capacity to analyze “Big Data” makes it possible to construct a picture of voices and works hitherto silent or glimpsed only at a microscale and in isolated segments. The project People of the Founding Era, for instance, provides biographical information about leaders along with facts about lesser-known people, making it possible to know how they changed over time and eventually to visualize social networks of personal and institutional relationships. It combines a biographical glossary with group study of nearly 60,000 native-born and naturalized Americans born between 1713 and 1815, their children, and grandchildren.
Like the visual and spatial turns in scholarship, the computational turn in Digital Humanities is indicative of a larger cultural shift. In defining “Digital Humanities 2.0,” Todd Presner treats computer code as an index of culture more generally, and the medial changes it affords foster a hermeneutics of code and critical approaches to software (“Hypercities”). At the same time, the computational turn has generated new overlapping subfields of code studies, software studies, and platform studies. At the Swansea University workshop on the computational turn, Manovich dated the beginning of the movement to 2008. The use of quantitative analysis and interactive visualization to identify patterns in large cultural data sets enables researchers to grapple with the complexity of cultural processes and artifacts. New techniques, though, must be developed to describe dimensions of artifacts and processes that received scant attention in the past, such as gradual historical changes over long periods. Visualization techniques and interfaces, Manovich added, are also needed for exploring cultural data across multiple scales, ranging from details of a single artifact or process, such as one shot in a film, to massive cultural data sets and flows, such as films made in the 20th century.
Heightened attention to the operations of code and software has also fostered Critical Interdisciplinarity in overlapping fields of race and gender studies. Amy Earhart has questioned the ways technological standards such as the Text Encoding Initiative’s tag selection construct race in textual materials (“Can Information,” 314, 316). Jacqueline Wernimont critiqued the politics of tools and coding practices from a feminist perspective, and Tara McPherson examined the ways early design systems such as the UNIX operating system prioritized modularity and isolated enclaves over intersections, context, relation, and networks. Responding in her blog to the charge of not being inclusive, Melissa Terras addressed the way guidelines in the Text Encoding Initiative assigned sex in a document by encoding 1 for male and a secondary 2 for female. As program chair for a Digital Humanities conference, Terras also aimed to widen protocols beyond consideration of disciplines, interests, and geography to include gender equality as well as economic, ethnic, cultural, and linguistic diversity.
The differing modes of engagement and practices reviewed in this chapter affirm Svensson’s conclusion: “The territory of the digital humanities is currently under negotiation.” It has evolved historically as the body of content expanded, new claims arose, and alternative constructions were asserted. And, as we’re about to see, constructions of the field also took root in differing institutional cultures.
Can a PDF file contain a virus?
There are many features in the PDF format that can be used in malicious ways without exploiting a vulnerability. One example is given by Didier Stevens here. Basically, he embeds an executable and has it launch when the file is opened. I am not sure how today's versions of readers handle this, but it's a good example of using PDF features in malicious ways.
If you want an example of PDF malware, check out pidief.
Generally, PDF malware is predominantly just the dropper, not the payload itself.
To learn more about the vulnerabilities associated with PDF files and ways of detecting them before they do any damage, read the Kali documentation on peepdf.
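To get a feel for what triage tools like pidief detectors, pdfid, and peepdf look for, here is a minimal sketch (not peepdf's actual API) that counts the PDF name tokens most often abused by malicious documents. Note that real malware frequently hex-escapes names (e.g. `/J#61vaScript`), so a naive literal scan like this is only a first-pass heuristic:

```python
import re

# PDF name tokens that commonly appear in malicious documents,
# following the heuristics popularized by pdfid/peepdf.
SUSPICIOUS_NAMES = [
    b"/JavaScript",   # embedded JavaScript
    b"/JS",           # abbreviated JavaScript entry
    b"/OpenAction",   # action triggered automatically on open
    b"/AA",           # additional (event-triggered) actions
    b"/Launch",       # launch an external program or embedded file
    b"/EmbeddedFile", # file attachments (possible dropper payload)
]

def scan_pdf_bytes(data: bytes) -> dict:
    """Count occurrences of suspicious name tokens in raw PDF bytes.

    Limitation: names can be hex-escaped in real PDFs, so a
    production scanner must normalize names before matching;
    this sketch matches only the literal spellings.
    """
    counts = {}
    for name in SUSPICIOUS_NAMES:
        # The token must not be followed by a letter, so that
        # /JS does not also match inside a longer name.
        pattern = re.escape(name) + rb"(?![A-Za-z])"
        counts[name.decode()] = len(re.findall(pattern, data))
    return counts

def looks_suspicious(counts: dict) -> bool:
    """Flag files that both auto-trigger and carry active content."""
    auto = counts["/OpenAction"] + counts["/AA"]
    active = counts["/JavaScript"] + counts["/JS"] + counts["/Launch"]
    return auto > 0 and active > 0
```

The combination rule mirrors the usual analyst intuition: `/OpenAction` plus `/JavaScript` or `/Launch` in one file is a much stronger signal than either token alone.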
Whether a file is malicious does not depend on the file extension (in this case PDF). It depends on the vulnerabilities in the software that will be parsing it. If, for example, the PDF reader you are using contains a buffer overflow vulnerability, an attacker can construct a special PDF file to exploit that vulnerability.
Consequently, guarding against such attacks is also easy: just ensure your PDF reader is up to date.
A simple Google search landed me on the SANS Institute's overview of PDF malware, which seems to be a good place to start.
Inequities in the urban food environment of a Brazilian city
Food environment refers to the physical, social, cultural, economic and political contexts in which people engage with food systems in order to acquire, prepare and consume food. In 2016, we investigated the food environment of districts in Juiz de Fora, Minas Gerais, Brazil, according to different socio-economic levels. We proposed a categorization of food establishments according to the NOVA food classification, devised thematic maps, tested the significance of food retailers' agglomerations using the univariate K function and detected district clusters using variables of interest. A total of 23 districts (19.1%) presented high or very high vulnerability. Establishments only or mainly selling ultra-processed foods were more frequent (52.7%) than other categories throughout the city. The downtown district had the highest number of establishments of all types. Districts of greater vulnerability had fewer establishments. The environmental inequities we have identified reinforce the need to implement public policies that promote healthy urban food environments.
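The univariate K function mentioned above (Ripley's K) tests whether point locations, here food retailers, are more clustered than expected under complete spatial randomness. A minimal sketch of the estimator, without edge correction and with made-up coordinates rather than the study's geocoded establishments:

```python
import math

def ripleys_k(points, r, area):
    """Unadjusted Ripley's K: K(r) = A / (n(n-1)) * number of
    ordered point pairs separated by a distance of at most r."""
    n = len(points)
    close = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                close += 1
    return area * close / (n * (n - 1))

def csr_expectation(r):
    """Expected K(r) under complete spatial randomness."""
    return math.pi * r * r
```

Observed K(r) values above the randomness expectation (in practice, above a simulation envelope) indicate significant agglomeration of retailers at scale r.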
Machine learning methods for landslide susceptibility studies: A comparative overview of algorithm performance
Landslides are one of the catastrophic natural hazards that occur in mountainous areas, leading to loss of life, damage to property, and economic disruption. Landslide susceptibility models prepared in a Geographic Information System (GIS) integrated environment can be key for formulating disaster prevention measures and mitigating future risk. The accuracy and precision of susceptibility models are evolving rapidly from opinion-driven models and statistical learning toward increased use of machine learning techniques. Critical reviews of opinion-driven models and statistical learning in landslide susceptibility mapping have been published, but an overview of current machine learning models for landslide susceptibility studies, including background information on their operation, implementation, and performance, is currently lacking. Here, we present an overview of the most popular machine learning techniques available for landslide susceptibility studies. We find that only a handful of researchers use machine learning techniques in landslide susceptibility mapping studies. Therefore, we present the architecture of various Machine Learning (ML) algorithms in plain language, so as to be understandable to a broad range of geoscientists. Furthermore, a comprehensive study comparing the performance of various ML algorithms is absent from the current literature, making an assessment of comparative performance and predictive capabilities difficult. We therefore undertake an extensive analysis and comparison between different ML techniques using a case study from Algeria. We summarize and discuss the algorithms' accuracies, advantages and limitations using a range of evaluation criteria.
We note that tree-based ensemble algorithms achieve excellent results compared to other machine learning algorithms and that the Random Forest algorithm offers robust performance for accurate landslide susceptibility mapping with only a small number of adjustments required before training the model.
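The tree-based ensemble workflow described above can be sketched in a few lines with scikit-learn. This is an illustrative toy, not the study's pipeline: the conditioning factors (slope, aspect, elevation, distance to stream) and the landslide inventory are synthetic, whereas the real case study would use GIS-derived rasters and a mapped inventory:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Synthetic conditioning factors per mapping unit (e.g., pixel).
X = np.column_stack([
    rng.uniform(0, 60, n),      # slope (degrees)
    rng.uniform(0, 360, n),     # aspect (degrees)
    rng.uniform(100, 2000, n),  # elevation (m)
    rng.uniform(0, 500, n),     # distance to stream (m)
])
# Synthetic inventory: steeper units nearer streams are more prone.
p = 1.0 / (1.0 + np.exp(-(0.1 * X[:, 0] - 0.005 * X[:, 3] - 2.0)))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# Susceptibility = predicted probability of the landslide class;
# AUC is one of the usual evaluation criteria for such maps.
susceptibility = model.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, susceptibility)
```

In a real application, `susceptibility` would be predicted for every mapping unit and written back to a raster for cartographic classification (e.g., low to very high susceptibility).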
2.1 Sampling, measurements, and feather stable isotopes
We temporarily captured myrtle and Audubon's warblers between April 18th and May 10th, 2014, at the Iona Island Bird Observatory, located in riparian habitat at the Fraser River delta in southern British Columbia. Over the 2014 spring migration, 311 Audubon's and 235 myrtle warblers were banded at the observatory (WildResearch 2014). We focused the sampling for the current analysis on 6 days during spring migration (April 19–21st, 23rd, 27th, and 28th) when large numbers of warblers were moving through the region. We determined the age and sex of each individual, and classified each—based on plumage—as Audubon's, myrtle, or hybrid (although we identified only a single hybrid). We took photographs of each bird and measured several morphometric traits, reporting wing chord and tail length here. We focused our analysis on male birds only, as they are the most confidently classified to species based on plumage. To compare the warblers to measurements from individuals across the range, we used data collected by Brelsford and Irwin (2009) as well as Hubbard (1970).
For a subset of Iona Island male warblers, we determined the stable hydrogen ratio (δ2Hf) in their covert feathers (n = 59 individuals, divided approximately equally between myrtle, n = 30, and Audubon's warblers, n = 29) sampled across the migratory period. The stable hydrogen ratio here refers to the relative amounts of the two stable forms of hydrogen (deuterium over protium) divided by that ratio in a standard material; we call this ratio of ratios the “isotope value” of the feather. We took advantage of a distinctive pattern in the molt cycle of these warblers: in the fall, each bird molts all of its feathers during a prebasic molt, which takes place on the breeding grounds (Pyle, 1997). Prior to spring migration, these birds again molt three to four of their inner greater covert feathers on their wintering grounds during their prealternate molt (Gaddis, 2011). Therefore, on any single individual caught during the spring, there are two generations of feathers that can be easily distinguished visually (by the extent of white edging): one set carrying the isotopic values of the previous breeding ground and another from the most recent wintering area. For all but seven individuals, we analyzed paired isotope data (i.e., both basic and alternate feathers from the same individual), resulting in n = 111 feathers with associated hydrogen data. Isotope analysis was carried out at Cornell University's stable isotope laboratory, and isotope corrections were performed using established keratin standards. The samples were run over 2 days. Information on international standards (KHS and CBS), as well as internal keratin standards, is reported in Table S1 (across the sample run, the standard deviation of the internal keratin sample was 2‰). Experimental samples were weighed to an average of 0.848 mg (±0.005 SD). Hydrogen isotope values are reported as the corrected δ2H value measured against Vienna Standard Mean Ocean Water (i.e., δ2HVSMOW).
To compare isotope values between myrtle and Audubon's warblers, we used two-sample t tests as implemented in R 3.4.0 (R Core Team 2017). We also combined isotope and morphometric data to better assign individuals to specific breeding populations. For this, we used a linear regression between wing-plus-tail measures and the hydrogen value of the feathers, using the “lm” function in R.
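For readers who do not work in R, the same two steps, a two-sample t test and a linear regression of feather δ2H on a wing-plus-tail measure, can be reproduced with SciPy. The numbers below are made up purely for illustration and are not the study's data:

```python
from scipy import stats

# Hypothetical feather d2H values (per mil) for the two taxa.
myrtle = [-110.0, -125.5, -98.2, -140.1, -118.7, -131.3]
audubons = [-72.4, -85.0, -91.6, -66.8, -79.9, -88.2]

# Two-sample t test comparing mean isotope values between groups,
# analogous to t.test(myrtle, audubons) in R.
t_stat, p_value = stats.ttest_ind(myrtle, audubons)

# Regression of feather d2H on a combined wing + tail measure (mm),
# analogous to lm(d2h ~ wing_plus_tail) in R.
wing_plus_tail = [130.1, 128.4, 132.0, 126.9, 129.5, 131.2,
                  134.8, 133.5, 132.9, 135.6, 134.1, 133.0]
d2h = myrtle + audubons
fit = stats.linregress(wing_plus_tail, d2h)
```

`fit.slope` and `fit.rvalue` then summarize how strongly morphometrics track the isotope signal across the combined sample.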
2.2 IsoMAP analysis
We estimated the geographic origin of the feathers using IsoMAP, a framework that allows for modeling, predicting, and analyzing “isoscapes” (Bowen et al., 2014; http://www.isomap.org, accessed 4-6-2015). Stable hydrogen ratios correlate strongly with precipitation and vary, at a broad scale, with latitude (with higher latitudes having less deuterium; Meehan, Giermakowski, & Cryan, 2004; Hobson et al., 2012), although there is much local variation related to other biotic and abiotic differences, such as elevation. We employed the same geographic assignment approach as outlined in Toews et al. (2014). In this study, we used a precipitation hydrogen model within a longitudinal range of 168.4° to 51°W and a latitudinal range of 16.6° to 71.5°N (IsoMAP jobkey: 46203).
There is not a 1:1 relationship between hydrogen in precipitation (δ2Hp) and hydrogen in feathers (δ2Hf). For organic samples, such as feather keratin (δ2Hf), it is therefore important to generate an empirically based transfer function between the two (Bowen et al., 2014). We used the two-part linear transfer function from Toews et al. (2014), which was modified from Hobson et al. (2012) and is based on hydrogen isotope values from passerine feathers grown at known locations. For feathers with δ2Hf below −53.6‰, we used δ2Hf = 0.5765*δ2Hp − 61.34 as the transfer function; for higher values, we used δ2Hf = 1.345*δ2Hp − 20.17. With IsoMAP, we then generated a geographic likelihood assignment surface for each feather using the “individual assignment” function, including the standard deviation of the residuals from the water/feather transformation function (9.96‰). The resulting likelihood surfaces were then averaged across individuals—separated by species and feather type—using the raster calculator in QGIS (QGIS Development Team 2017).
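The two-part transfer function above is easy to apply in either direction. A small sketch, written here as the inverse mapping (given a measured feather value, select the branch by the stated feather-value threshold and solve for the precipitation value it implies); the coefficients are taken verbatim from the text, but the function itself is illustrative rather than the study's code:

```python
# Threshold on the feather value (d2Hf), in per mil, as stated above.
FEATHER_THRESHOLD = -53.6

def precip_from_feather(d2hf: float) -> float:
    """Invert the piecewise-linear feather/precipitation transfer
    function to express a measured feather value in precipitation
    terms (d2Hp)."""
    if d2hf < FEATHER_THRESHOLD:
        # Low branch: d2Hf = 0.5765 * d2Hp - 61.34
        return (d2hf + 61.34) / 0.5765
    # High branch: d2Hf = 1.345 * d2Hp - 20.17
    return (d2hf + 20.17) / 1.345
```

For example, a feather at −100‰ maps back to a precipitation value of roughly −67‰, which in an assignment workflow would then be compared against the precipitation isoscape (with the 9.96‰ residual standard deviation setting the likelihood width).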
Discussion and conclusions
We have discussed three lines of research where social big data can complement existing approaches to provide small-area and high-time-resolution methods for the analysis of migration. In terms of estimating flows and stocks, some research already exists that tries to use social big data to now-cast immigration. However, models still need to be refined and validated. An important issue here is that a proper gold standard does not exist: exact current immigration rates are unknown, and those in the past can be noisy, so validation of now-casting models is not straightforward. Finding the relations between policies and immigration could be a step forward in finding means to validate model output. Another big data type that has not been included here, and that can help make predictions about climate-related migration, is satellite data. To measure migrant integration, we believe that several new data types can be used to introduce novel integration indices, based on retail consumer behaviour, mobile data, and OSN language, sentiment and network analysis. Research in this direction is slightly less developed, mostly due to the low availability of ready-to-use data sets. Our consortium is making steps in this direction, using existing data sets, participating in data challenges and collecting new data. Regarding the return of migrants, research is again limited, although potential exists in data such as retail, mobile or OSN data.
In all three dimensions, research has to carefully consider the issues with the data being used. It is important that each study include a well-planned data collection phase in which available data are analysed to identify gaps and strategies are devised to fill those gaps by integrating other types of data, in order to ensure that the problem being studied is thoroughly covered by the data used. In this process, research infrastructures such as SoBigData can be of great help. On the one hand, they provide means to catalogue data, so that new data sets are available to the community for integration. On the other hand, they enable the community to share methods and experiences, so that the gaps identified and the solutions taken to fill them can be reused. This applies not only to traditional data sources, but also to social big data. The complexity of digital demography implies that there is no free lunch with digital traces either. One problem relates to the representativeness of the collected samples. For example, Facebook and Twitter penetration rates differ worldwide and tend to vary with the age of users. Being unable to track specific categories of users can steer policies on migration in a direction that unwittingly perpetuates discrimination or neglects the needs of invisible groups. For the above reasons, the analytical and technical challenges of extracting meaning from this kind of data, in synergy with more traditional data sources, remain an open and very important research area, with some recent efforts made in this direction. Model validation using existing statistics, and the relation to migration policies, is important. Furthermore, careful data integration could help in overcoming some of the selection bias, resulting in novel, multi-level indices based on big data.
A different issue is that related to the ethics of processing personal data, including sensitive personal data, describing human individuals and activities. As others have also stated, the first rule that a researcher must follow is to acknowledge that data are people and can do harm. The context of migration is particularly sensitive to this problem, since individuals described in the data are often especially vulnerable: refugees and their families might be persecuted in their home countries, so avoiding their re-identification is a critical matter. Moreover, mass media and social media affect our society and integration itself, since a negative tone systematically relates to lower acceptance rates of asylum practices, so extreme care has to be taken in publishing results. Nevertheless, migration studies can have a significant impact in improving our society and helping the inclusion process of migrants; thus, encouraging data sharing is one of our main goals for achieving public good.
For all these reasons, it is essential that legal requirements and constraints be complemented by a solid understanding of ethical and legal views and values such as privacy and data protection, composing an actual ethical and legal framework. To this end, a number of infrastructural, organizational and methodological principles have been developed by the SoBigData Project in order to establish a Responsible Research Infrastructure, allowing users to make full use of the functionalities and capabilities that big data can offer to help us solve our problems, while at the same time respecting fundamental rights and accommodating shared values such as privacy, security, safety, fairness, equality, human dignity and autonomy. In particular, we strongly rely on Value Sensitive Design and Privacy-by-Design methodologies in order to develop privacy-enhancing technologies, privacy-aware social data mining processes and privacy risk assessment methodologies. These methods are developed mainly in the fields of mobility data (such as GPS trajectories), mobile and retail data, which are among the (unconventional) big data used in our migration studies. Moreover, some other general tools have been implemented to assist researchers in their activities, create a new class of responsible data scientists and inform the data subjects and society about our work and our goals, such as an online course, ethics briefs and public information documents.