Last weekend, we drove down to DC for a soccer tournament in which my son's team was participating. During the 16-hour trip there and back, we listened to an audiobook, Ready Player One by Ernest Cline. It's a sci-fi novel set in a dystopian future where the main characters spend most of their time in a virtual internet/game world called the OASIS (the Ontologically Anthropocentric Sensory Immersive Simulation). The novel's plot is largely an easter egg hunt, with the prize being ownership of the OASIS itself. The hunt, devised by the OASIS's now-deceased creator, James Halliday, is filled with the nerdy 80s trivia that Halliday loved. I won't say it is a great literary achievement, but it was fun to listen to while driving through central PA. Elsewhere this weekend, I was also revisiting Levi Bryant's Democracy of Objects, which I've been teaching in my Speculative Realism graduate seminar, and we spent some time on Monday discussing Bryant's concept of regimes of attraction.
This may seem like apples and oranges to you, but these are two sides of the concept of virtuality that I explored in The Two Virtuals. Briefly, as we know, the Deleuzian virtual is a monistic substrate. As Bryant writes, "Deleuze's constant references to the virtual as the pre-individual suggests this reading as well, for it implies a transition from an undifferentiated state to a differenciated individual. If the virtual is pre-individual, then it cannot be composed of discrete individual unities or substances. Here the individual would be an effect of the virtual, not primary being itself." The digital virtual is nothing like this. Perhaps virtual reality appears to be made of some monistic, malleable materiality, but it is, of course, code: ones and zeros, or better, voltage intensities across a circuit. Deleuze's virtual lies even further beneath all of that. In Democracy of Objects, Bryant undertakes a motivated reading of Deleuze, identifying particular passages where Deleuze leans in a different direction, toward a more pluralistic, less monistic virtuality, which eventually leads Bryant to his concept of virtual proper being.
This got me wondering if virtual proper being might not be an ontology that is in some respects closer to the way that VR functions, or is imagined to function in a novel like Ready Player One. Bryant writes that
The virtual proper being of an object is what makes an object properly an object. It is that which constitutes an object as a difference engine or generative mechanism. However, no one nor any other thing ever encounters an object qua its virtual proper being, for the substance of an object is perpetually withdrawn or in excess of any of its manifestations.
And then later, he adds, "The virtual proper being of an object is its endo-structure, the manner in which it embodies differential relations and attractors or singularities defining a vector field or field of potentials within a substance." In the move from a common substrate to these virtual/hidden individual "generative mechanisms," this looks to me like the generative mechanisms behind artificial life. Certainly it would be fair to say that the algorithms that drive such objects are only ever simulations or models of the singularities and attractors that operate virtually. I don't want to take this analogy too far. However, I wonder: if we could hypothesize some extra-dimensional position outside of the virtual-actual ontological circuit Bryant describes, would that position offer us a vantage analogous to that of the human user examining the code and manifestations of software objects in VR?
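To make the analogy a little more concrete, here is a toy generative mechanism of the artificial-life sort. To be clear, this is my own invented illustration, not anything Bryant offers: a simple iterated map whose rule and parameter are hidden inside a closure, so that an observer only ever encounters the successive manifestations, never the structure generating them.

```python
# A toy "generative mechanism" (my illustration, not Bryant's): the rule and
# parameter play the role of a hidden endo-structure; observers only ever
# encounter the outputs, never the rule itself.

def make_object(r=2.9, x0=0.5):
    """Return a closure that hides its rule; only manifestations are visible."""
    state = {"x": x0}
    def manifest():
        state["x"] = r * state["x"] * (1 - state["x"])  # hidden differential relation
        return round(state["x"], 4)                     # the local manifestation
    return manifest

thing = make_object()
print([thing() for _ in range(12)])  # values settle toward the attractor near 0.655
```

The point of the sketch is only that the sequence of observable values is drawn toward an attractor that is never itself given in any single observation.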
In any case, the development of virtual spaces offers a useful way to think about how regimes of attraction might operate. Bryant describes these regimes as follows:
Regimes of attraction should thus be thought as interactive networks or, as Timothy Morton has put it, meshes that play an affording and constraining role with respect to the local manifestations of objects. Depending on the sorts of objects or systems being discussed, regimes of attraction can include physical, biological, semiotic, social, and technological components. Within these networks, hierarchies or sub-networks can emerge that constrain the local manifestations available to other nodes or entities within the network.
In short, regimes of attraction place external limits on how the virtual proper being of an object might manifest itself. Every object has an endo-structure that already places limits on what it might be (otherwise, virtual proper being would be the same as monism). Those manifestations are further limited, however, by exo-relations, which form the regimes of attraction. As a result of this intersection, objects are neither entirely free to mutate in any fashion nor overdetermined by their context. To think about this within the analogy of a virtual reality space, one might say that any object one creates will have an endo-structure that limits its operation and its possibilities for manifesting. Bryant uses the extended example of a blue coffee mug to talk about the mug's power "to blue" and how that power manifests differently depending on regimes of attraction such as available light sources and the sensory capacities of the objects perceiving the mug. The same sort of mechanic would operate in a virtual world, just as most virtual worlds have game engines that govern things like "gravity." Those engines are part of the endo-structure of objects. The fact that one (or one's avatar) finds oneself on a large planet with a particular gravitational field (or a virtual version of one) is then part of the regime of attraction that keeps one's feet on the ground and constrains movement in specific ways. Virtual worlds often work this way, even though clearly they don't have to.
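Here is a minimal sketch of that distinction in code, with the mug's reflectance profile standing in for its endo-structure and the light source standing in for the regime of attraction. The names and numbers are hypothetical illustrations of mine, not anything Bryant specifies.

```python
# Endo-structure: the mug's fixed reflectance profile (hypothetical values).
MUG_REFLECTANCE = {"red": 0.05, "green": 0.10, "blue": 0.85}

def manifest_color(reflectance, light):
    """Local manifestation = endo-structure filtered through exo-relations."""
    visible = {band: reflectance[band] * light.get(band, 0.0) for band in reflectance}
    return max(visible, key=visible.get)

# The same mug manifests differently under different regimes of attraction.
print(manifest_color(MUG_REFLECTANCE, {"red": 1.0, "green": 1.0, "blue": 1.0}))  # 'blue' in daylight
print(manifest_color(MUG_REFLECTANCE, {"red": 1.0}))                             # 'red' under a red lamp
```

The mug's power "to blue" is real in both cases; whether it manifests depends on the exo-relations it enters.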
Ready Player One offers a peculiar vision of how the endo-structure of a VR world might develop out of a pre-existing set of regimes of attraction. That is, if we were to imagine for a second that someone actually created the OASIS, at the outset its limits would be defined by things like available computing technologies, processing power, programming languages, etc.: in other words, the material substrate on which any VR world would be built. The OASIS in the novel is further shaped by its creator's obsession with the 1980s: Dungeons and Dragons, early video games, computers, sci-fi novels, anime... the whole geek lexicon. This obsession becomes built into the mechanics of the OASIS itself, along with those other technological elements. Parzival, the novel's protagonist, navigates the OASIS through his deep research into this culture.
How do we want to think about a VR world as a real object? We've seen this conversation among OOO folks about whether or not Popeye is real. In OOO terms, for an object to be real it must be able to exist independent of external relations. So a statue of Popeye is real. However, there is also something like an idea of Popeye that can be trademarked: is that a real object? Can something that is symbolic or encoded be real, independent of its local manifestations in a book or on a screen? Perhaps. So, for example, if a character in World of Warcraft finds a magical sword, which she can sell or trade, even for US dollars: is that sword a real object?
I would say that one potential error to avoid here is in imagining that virtual worlds are separate from the "real world." They aren't worlds inside of worlds. A digital photograph is as real as a printed photograph. An ebook is as real as a printed book. A digital movie file is as real, as material, as a movie on a reel. Virtual characters have a material manifestation as well. They take up physical space on a hard drive somewhere. They should not be mistaken for their local manifestations on a computer screen, though it is only within the particular regimes of attraction of that software and hardware that they become viewable as characters. Otherwise they are just data files.
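A quick sketch of that point, with an invented file format (the field names are hypothetical, not from any actual game): the same bytes are opaque data under one regime and a character under another.

```python
# The same bytes on disk: a mere data file under one regime of attraction,
# a readable character under another. The format and fields are invented.
import json

raw = b'{"name": "Parzival", "level": 10, "inventory": ["magic sword"]}'

print(raw.hex()[:32], "...")    # one regime: opaque hexadecimal data
character = json.loads(raw)     # another regime: a readable character
print(character["name"], "carries a", character["inventory"][0])
```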
Maybe this is a good analogy for a pluralist virtuality like virtual proper being. Without the proper regime of attraction, virtual beings are unreadable, inaccessible. With the right regime, they become manifest in particular ways, not in their true form of course, but in a form that can relate and acquire agency. That would suggest, though, that virtual being can be altered by altering its local manifestation, which is something we already know in OOO, right? Virtual beings cannot directly interact; only their local manifestations (for Bryant) or sensual objects (for Harman) can be encountered. But real objects (their virtual/withdrawn states) can be destroyed through these interactions; there must be feedback. If they can be destroyed, then it makes sense that they can also be altered without being destroyed--not just in their local manifestations but in their virtual/withdrawn states as well, while the object remains the same object.... just different. In other words, we can paint Bryant's blue coffee mug yellow, and it will no longer manifest blueness. It will lose that virtual power. Its virtual proper being will be altered. And yet it is still the same mug. I don't see why that should be a problem. We can have a discussion about how much change is required to make something into a different object, but that's for another time.
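In the toy terms of the mug sketch above, the mutability claim looks like this (again, an invented illustration, not Bryant's formulation): an interaction through a local manifestation rewrites the hidden structure, while the object's identity persists.

```python
# Painting the mug alters its hidden powers while it remains the same object.
mug = {"reflectance": {"red": 0.05, "green": 0.10, "blue": 0.85}}
original = id(mug)

def paint(obj, color):
    """Interact via a manifestation; the change reaches the hidden structure."""
    obj["reflectance"] = {color: 0.85}

paint(mug, "yellow")
assert id(mug) == original   # still the same mug...
print(mug["reflectance"])    # ...but it has lost its power "to blue"
```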
What this indicates to me is that if we think about a pluralist rather than a monist virtuality, then we create an ontology where the particulars of an object's virtual dimension are alterable through its local manifestations, which in turn are alterable through the regimes of attraction in which it emerges. This creates a condition in which we can proceed experimentally in devising different regimes for the purpose of shaping virtual being. Our ability to know the results of those experiments might always be limited but the principle that such mutations are possible seems to make sense.
Life interfered a little last week, so I got off-track on this MOOC; today I will be responding to both weeks 3 and 4 of the eLearning and Digital Culture MOOC. These weeks deal with the topic of posthumanism, a subject on which I've written a great deal. The instructors also offer up transhumanism. I can see the point of comparison on some surface level, but, at least in my view, these are two very different things, and the comparison might lead to some serious misunderstanding of posthumanism. I'll get to those details in a moment, but I want to arrive there through a consideration of this question: do MOOCs need guilds?
What do I mean by that? While I am not a player of MMORPGs like World of Warcraft, I am aware that much of the activity in these games is carried out by collectives of players known as guilds. Many of the missions in these games require the concerted effort of 20 or more players, so a fair amount of coordination is required, both during the live event of adventuring and in between. I realize I am an unusual participant in this MOOC, as I am operating more as an observer than as someone with specific learning goals. However, I started thinking that serious MOOCers might benefit from being part of a small (20-50 person) group of participants who would commit not only to a single MOOC course but might undertake multiple courses over time. Different participants would offer different skills. For example, some would have more or less time to devote to a particular course and bring more or less expertise to the content. As such, their roles might change over time, sometimes being more like a mentor's and sometimes more like a student's. Members could divide the work of investigating different parts of the course and reporting back to the group.
I suppose one's response to this suggestion might have something to do with how one views the ethical practices of learning. Do we really want students taking courses as a group, collaborating and strategizing independently of the instructor? Or do we ultimately want students to be independent in some fundamental way that would make a guild-type strategy unethical?
In part, this also has to do with how one envisions learning on a MOOC. In an email list discussion I had a few weeks back on the potential of a writing MOOC, I suggested that learning in a MOOC couldn't simply be measured by what individual students had learned, that instead the pedagogical activity of the MOOC needs to recognize what the collective network learns. This is not a spurious suggestion. Part of the premise here is that IF MOOCs are the "future of education" then that is partly because the future of professional labor and citizenship will take place through this kind of collective, networked activity where expertise is not so much about what is inside your head but how well you can connect your head to a larger network of cognition.
Of course this brings us to the matter of trans- and posthumanism. Transhumanism, at least as it is represented for the course in this article, is a political position that advocates for technoscientific experimentation and a legal system that promotes wide freedoms to adopt technological innovations. Posthumanism, though, is not well represented, even though the instructors offer this introduction to a collection titled Posthumanism, which largely conflates posthumanism with poststructuralism. Unlike transhumanism, posthumanism is not about something that will or might be made to happen through technoscience. It is not even something that did happen, as in once upon a time we were humanist-type humans and now we are posthuman. Instead, posthumanism is about reconsidering what humans always already were/are. Basically, even though poststructuralism is certainly a critique of humanism, it does not do what posthumanism seeks to do in its attempt to understand the intersection of humans and nonhumans.
As a posthumanist, at least in my version of it, one would look at MOOCs as networks of distributed cognition (which work with varying degrees of success). While the apparent goal of each course is mastering the content, the other, less obvious goal is teaching users to participate in a particular kind of information network, where knowledge is developed through a certain range of techniques. Of course we could say the same thing about the 20th-century industrial classroom. So, for example, 20th-century academic writing (e.g. first-year composition) was about producing a polished, linear text, composed by a solitary author for evaluation by a single reader.
The 21st-century learning environment and digital composition are perhaps more akin to the wiki page: collaboratively authored, provisional, and networked.
I don't mean to suggest that 20th-century skills disappear, but I do believe they must operate within the context of the 21st-century environment. For example, we still need to read closely. However, close reading is no longer enough and must be integrated with an ability to handle more information than one person can be expected to read in the time allotted for any given project. There will still be linear texts, but they will not operate as they once did.
MOOCs can teach students to operate in these new environments. However, I do think something like a guild approach would be useful in making that happen.
Surprisingly, English teachers from K-12 through higher education are not a particularly forward-thinking bunch. Shocking, right? While "schoolmarm grammarian" is uncharitable, it's probably closer to the mark than "future-oriented innovator." So when the National Council of Teachers of English publishes a framework for a 21st-century literacy curriculum that is entirely focused on digital matters, one could almost say this means that one no longer needs to be forward-thinking to recognize digital literacy as a primary site of English education.
I want to combine this with a generally more future-oriented institutional document, the New Media Consortium's Horizon report. The full report isn't out yet, but they have identified six technological trends:
Technology to Watch | Time-to-Adoption Horizon
MOOCs; tablet computing | One Year or Less
Game-based learning; big data (learning analytics) | Two to Three Years
3D printing; wearable technology | Four to Five Years
I think the MOOC and the tablet are fairly obvious, which they should be given the time-to-adoption horizon. They have been reporting some version of game-based learning and 3-D printing for some time, so I'm not sure how those will come about, how broad their impact will be, or what the time frame will be. However, I think big data and wearable technology are good bets.
I don't know if the particular brand of MOOCs we see with Coursera will be around in 5 years, but I'd be willing to bet that there will be millions of "students" taking open, online courses in 2020. I put students in scare quotes because "students" suggests, for some, college credits, and I'm not sure what the relationship will be between open courses and credits. What I do know is that these massive, networked environments will alter the way we learn (and work and socialize). I know this because they already have, and that trend is only going to intensify.
What NCTE recognizes is that English should be the means by which such literacy is acquired (at least in the US, which is the nation in "National Council"). To that I say, "good luck." Good luck providing this professional development for existing teachers, who are not prepared to do this. Good luck finding university English departments with faculty to provide this literacy to the general population of college students, let alone educate preservice K-12 teachers or the graduate students who will become university faculty. Good luck finding English departments that even remotely view digital literacy as a subject that marginally concerns them, let alone one that would be central to their curriculum in the way that print literacy is now. As I suggested above, I think you'd have better luck selling the average college English department on becoming grammar-centric than on becoming digital-centric.
Now if you think I'm just trolling on my own website, well, you might be right. But the truth is that if this were 2003 and a department had recognized that digital literacy was going to become the issue that might make or break its disciplinary future, then by now it might have four or five digital scholars hired and a couple tenured. Maybe it would be in a position to deliver this content today. But few departments did that, which means the transition is likely to be rocky.
Here's my point of comparison. In the mid-19th century, English departments studied oratory and philology: two things contemporary English faculty know little about. Why did English split itself off from speech? Speech still survives, in a way. Most universities have some public speaking course, and speech departments evolved into communication studies. Without wanting to sound techno-determinist, the second industrial revolution had a significant hand in that transformation. I look upon print-centric literary and rhetorical studies in the same way. In hindsight we might say that the 19th-century transition took three or four decades. Things move a little faster now, but the truth is that 2020 will be 25 years after the rise of the modern Internet.
We often see studies of technology adoption by college students, such as those done by Pew. We know from our own classrooms and from walking the campus that Pew's statistics (96% of undergrads have cell phones, 88% have laptops, and 92% have a wireless connection on some device) are reflected in our own observations. The figures for the general adult population are 82%, 52%, and 57% respectively. I wonder what the stats would be for humanities faculty? Lower than for undergrads, I suspect, but I wonder if they would be lower than for the average adult. I don't think anyone would be surprised to discover that humanities professors spend more time with traditional cultural/media activities than the average adult or student: reading print books; going to museums, public lectures, and libraries; listening to classical music, etc. And obviously one should be able to live one's life as one chooses, including in terms of online connection.
Given the nature of our profession however, particularly our professional freedoms, these personal choices then become professional ones. To a certain degree, all faculty have needed to come online in one way or another: library databases, email, online grading and student information, and to a lesser though still significant extent, course management systems. Clearly all faculty have internet access at least through their workplaces. But to what extent have we collectively embraced networked culture? Certainly not to the extent that we have embraced the modern culture that we continue to celebrate through our curriculum.
Why is this an issue? Let's say, for example, that I didn't really care for reading books. I would assign books for my courses because that was expected, but I didn't live a life where books were personally valued. How successful do you think I would be at teaching print literacy? Teaching with digital networks requires a kind of literacy derived from a significant level of immersion. This is, I think, a real stumbling block for our profession in facing up to this challenge. No single decision is crucial here, and I would agree that one can become overly and unproductively immersed in the digital world, but here are a few examples:
Again, there are valid points of concern here and these are all acceptable individual decisions. I have no problem with someone living their life this way, but if one puts this all together then I think one ends up with a faculty member who is not well-suited to meet the challenges of preparing students to live in a world that the faculty member has renounced.
In the eLearning MOOC we've been talking about this in terms of Prensky's digital immigrants and digital natives. In my view this is an unproductive and even damaging perspective. Again, as with the utopian/dystopian discourse, perhaps the idea is to move people away from these positions. Reading the course discussion, I see there are certainly people who have arrived familiar with these terms and unhappy with them. (I would imagine anyone who knows something about these matters knows this is not a productive kind of thinking. It's akin to starting a composition class with grammar instruction because that's what the students expect, even though as an instructor you know that's not the right direction.)
The immigrant/native business sets up a false dichotomy and reinforces an unnecessary conflict. If you identify the faculty as immigrants, then you are really taking a hostile position toward them. But more dangerously you are setting this up as a generational problem wherein the immigrant faculty can just say let the younger generation, the natives, do this work. But that's not what happens because this isn't about generations. It's about disciplinary culture. I have encountered plenty of mid-20s English doctoral students. Yes, many have cell phones and laptops and such, as you would expect. But very few see digital literacy or practices as part of their teaching or disciplinary work. Instead, they are adopting the cultural practices and values of their discipline, which is print-based.
English departments have always claimed that they are the place where people go to become better writers. I have never believed that. I think English attracts students who are already good writers, and I think a literary and print-based curriculum can teach students to read particular genres in particular ways and to write literary criticism with its specific discourse. Increasingly, though, what is taught is a kind of monastic practice, one that clearly prides itself on its removal from the discourses of the marketplace and the larger culture. There is nothing wrong with monasticism.... for monks. However, it doesn't have broad appeal, though there will always be some students who want that experience. We can't expect that there will be a generational shift that will lead to some eventual change in this situation. "Generations" are broad enough, and academic job needs are small enough, that there will always be potential hires and grad students to replicate these disciplinary values, just as there will always be people willing to live as monks.
And of course it's not just English departments. It's all of the humanities and perhaps beyond; that's just the corner of academia where I live. Ultimately, I think meeting the challenges of digital literacy will require university strategies for hiring and supporting faculty that work outside the disciplinary/departmental will to repetition.
With some 40,000 others, I have started this Coursera MOOC on elearning and digital culture. The first unit deals with utopian and dystopian perspectives on technology. Is it really necessary to explain why this is not a worthwhile way to frame this conversation? If we brought someone from 200 or 500 years ago to where I live in Western New York, would they look upon our world as utopian? dystopian? or just hopelessly foreign? I am guessing they would find it both alienating and amazing. Many of their values would simply not make sense in our world. While technologies do not determine culture, they clearly participate in shaping the world (both naturally and culturally, if you wish to make those problematic distinctions). It would be naive to view technological development as a problem-solving activity: necessity isn't always the mother of invention. Technologies do not lead toward utopia. Sometimes technologies solve problems, but I think they're more likely to make old problems irrelevant. Did the automobile "solve" a problem with transportation? Not exactly. Did the light bulb solve a problem of darkness? I guess, but it would be more accurate to say that electric illumination created a new human space with new human activities. Put in the broadest terms, we no longer have the problems that 18th-century Americans had; we have new ones.
When we think of the technologies that are the focus of this MOOC (the social web, mobile devices, etc.), did they solve problems? Were they designed with some utopian impulse? Maybe, partly. Most people don't imagine they are doing evil. We could say that technologies are market-driven, but we wouldn't want to mistakenly believe that the market overdetermines technology. As if the market were some uniform entity. As if the market were not capable of error. In my view it's more accurate to imagine technologies as participating in a discontinuous process whereby new activities are generated, activities among humans and nonhumans. Sometimes these activities solve old problems, sometimes they make old problems obsolete without solving them, and sometimes they create new problems or reshape old ones. Given this frame, a better starting point, for me, is attempting to identify the functionality of these media technologies and the activities that arise from them. Out of that analysis one might begin to think about their place in pedagogies. I don't think that's a strictly technical or rational process, though obviously some technical understanding is necessary. That said, it's more about investigating what people do and the myriad capacities that might emerge when humans and nonhumans interact.
In other words, I think the utopian/dystopian business is a bit of a boondoggle. I realize that we often think of technologies in this way, so maybe it's an attempt to find a broad piece of common ground. OK. But as a teacher I would not want to spend so much time confirming a bias that I had to later undo.
This brings me to the MOOC itself. I'm very early into the experience. The typical thing users say is that they feel overwhelmed. That's not exactly the word I would use. The reading/viewing material is pretty light. There are a lot of potential discussion topics in Coursera, along with Twitter feeds, a Google Plus group, etc., so it's a little hard to figure out where to find one's audience. I feel like my audience is here, connected to the MOOC through the Twitter hashtag. I will post in Coursera some, but I do face a rhetorical quandary there. I'm not sure what the point is. Maybe I'll find out.
I am crawling out from under the #mlasick illness I and so many others picked up in Boston. If you followed the Twitter stream during the conference, it is likely you encountered the conversation surrounding the session "The Dark Side of the Humanities." I wasn't able to attend the session, but the participants have uploaded their comments here. In The Chronicle William Pannapacker summarized the critique offered by the roundtable participants (Richard Grusin, Wendy Hui Kyong Chun, Patrick Jagoda, and Rita Raley) this way:
That DH is insufficiently diverse. That it falsely presents itself as a fast-track to academic jobs (when most of the positions are funded on soft money). That it suffers from “techno-utopianism” and “claims to be the solution for every problem.” That DH is “a blind and vapid embrace of the digital”; it insists upon coding and gamification to the exclusion of more humanistic practices. That it detaches itself from the rest of the humanities (regarding itself as not just “the next big thing,” but “the only thing”). That it allows everyone else in the humanities to sink as long as the DH’ers stay afloat. That DH is complicit with the neoliberal transformation of higher education; it “capitulates to bureaucratic and technocratic logic”; and its strongest support comes from administrators who see DH’ers as successful fundraisers and allies in the “creative destruction” of humanities education. And—most damning—that DH’ers are affiliated with a specter that is haunting the humanities—the specter of MOOCs.
In short, DH is an opportunistic, instrumentalist, mechanized response to the economic crisis—it represents “the dark side of capitalism”—and, as such, it is the enemy of good, organic humanists everywhere: cue the “Imperial March” from Star Wars.
Reading through the presentations online, this summary strikes me as defensive (Pannapacker clearly sees himself as a digital humanist) but accurate. In an online addendum to her presentation, Raley responds to the negative reaction the roundtable generated: "The upset seems in part to derive from a misunderstanding about our critical object: though our roundtable referred in passing to actually existing projects, collectives, and games that we take to be affirmative and inspiring, the 'digital humanities' under analysis was a discursive construction and, I should add, clearly noted as such throughout. That audience members should have professed not to recognize themselves in our presentations is thus to my mind all to the good, even if it somewhat misses the mark." OK. Maybe. But it strikes me as naive to imagine that such attacks on the "discursive construction" of DH are not also unavoidably attacks upon the people who do the work. After all, the argument is that the digital humanities needs to act differently, and does this not necessitate that DHers act differently?
Part of this is a well-covered methodological shift where digital humanities work tends to rely less upon traditional critical theory. This perceived "rise" of DH is thus seen as a threat, and I think this is the source of the greatest hostility: the idea that DHers might not view critical theory as a master discourse for their work. After all, it doesn't really make much sense to conflate DH, which in the end is a fairly narrow and specialized form of research, with the larger digital cultural revolution. It may not even make sense to conflate the emergence of digital technologies with the rise of neoliberal, global capitalism. However those are the links that are being made here. DH=digital culture=neoliberalism. Of course that's not an attack on DHers. No, of course not, because DHers can still accept critical theory as their personal savior; they can still be saved from the dark side. That's not an attack on critical theorists though; that's just an analysis of a discursive construction.
In this talk Wendy Chun makes an interesting observation: "The humanities are sinking—if they are—not because of their earlier embrace of theory or multiculturalism, but because they have capitulated to a bureaucratic technocratic logic." This observation comes out of her talk's discussion of DH "saving" the humanities and appears to imply that DH saves the humanities by moving away from theory and multiculturalism. I actually see this a little differently. I agree that the humanities aren't sinking because of theory or multiculturalism. I actually don't believe either has had much impact on the humanities outside of the relatively narrow sphere of graduate studies and scholarship. It would be difficult to find an English undergraduate major where theory plays a visible role, and multiculturalism in most cases has meant the addition of a "nonwestern" course requirement. When the humanities, English in particular, thrived in the US (up through the 1960s), I would think it was precisely because the curriculum was aligned with the bureaucratic, technocratic logic of the day. Would we not argue that the Eurocentric, patriarchal, canonical curriculum of literary studies supported our mid-century Anglo-American nationalist ideology? Certainly it was not offered as a critical resistance. As far as that goes, that curriculum continues to chug along, probably 80-90% unchanged on a national level (same courses, same books, same assignments). It's just less appealing now to students who have more choices, especially women, who now have far more freedom in their choice of major than 40 years ago.

In a similar vein, I was struck by Richard Grusin's observation of "a class system within DH that generates an almost unbridgeable divide between those on the tenure-track, those in what have come to be called “Alt-Ac” positions, and those in even more precarious and temporary positions." I agree with him that there's a complex problem there. These DH projects require a different kind of infrastructure, including employees in new kinds of positions. And there is a connection to be made with the emphasis on what we might euphemistically term "flexibility" in labor practices. However, in my view, this DH class system is dwarfed by the pyramid scheme that characterizes English Studies. All this "critiquing" stands firmly on the backs of TA and adjunct labor. We all know that. Talk about the pot calling the kettle black.
On a more positive note, Grusin wants to put an end to the division between production and critique, and Chun wants to forge better relations with science and engineering. Raley similarly wants to break down divisions between theory and practice. However, it seems odd to me to point to the DH community as the obstacle to such goals. To the contrary, I would think it is the traditional humanities that have the deepest antipathy for science and engineering. Certainly it is the traditional humanities that have the most invested in maintaining the separation between production/practice and critique/theory with the latter being the master term. All one has to do is look at the treatment of writing instruction in English to see how the humanities have denigrated production/practice over the decades. If DH rejects the role of critical theory as its master discourse (or, heaven forbid, turns to other theories), then it is only because of the way in which the traditional humanities have long established these hierarchies.
So, yes, let's put production and theory on a level playing ground in our curriculum and our research. Let's reshape our faculty and our discipline so that an interest in making is equally valued with an interest in critiquing. Let's embrace the DIY culture, as Raley suggests, that doesn't recognize elitist hierarchies that forbid makers from building their own theoretical views. But we should recognize that this isn't about reforming DH, it's about reforming the humanities.
There's some conceit in the humanities, or at least in English, that one must love the object one studies. It almost feels like an elementary school playground: "if you love social media, then why don't you marry it?" I don't believe I've ever been a technology apologist. I actually don't make much personal use of social media. I guess I don't get enough enjoyment out of Facebook or Twitter to spend much "free time" there. I do, however, find productivity in these tools: I am introduced to people; I encounter new ideas; I get involved in new projects; and I find colleagues to work with on projects of my own devising. Still, I understand the skepticism, even cynicism, regarding social media and its proponents (e.g. Clay Shirky). No one finds the conversation regarding MOOCs less surprising than I do, and I have certainly raised my concerns about them. However, it strikes me that we are only at the very start of a networked society. The problems we identify are challenges to address, not reasons to turn away.
But that's not the humanities' way. I agree with Levi Bryant's recent post on the matter of cynicism. We have every reason to be skeptical of social media. There is no doubt that ideological and capitalistic motives lie behind the arguments for social media. Hell, in most cases, such motives are front and center. Should we be skeptical of Instagram or Facebook or Coursera? I think so. But should we be skeptical about the premise of networked sociality in itself? Or should we be looking to adapt/invent practices for this environment? Levi writes that as a result of cynicism "We thus strangely find ourselves in the same camp as the climate change denialists, the creationists who use their skepticism as a tool to dismiss evolutionary theory, and those that would treat economic theories as mere theories in the pejorative sense and continue to hold to their neoliberal economics despite the existence of any evidence supporting its claims. We critique everything and yet leave everything intact." It's a bold argument perhaps, as it treats what we imagine as the height of intellectual behavior (critique) as functionally equivalent to some of the more blatant examples of what we would term anti-intellectualism. However, I think the same thing could be said for our treatment of social media.
So here's another piece of information to critique. The McKinsey Global Institute produced a report earlier this year on "the social economy." Yes, this institute is a think tank for a global consulting firm. I won't go into great detail about the report, as I don't think the findings are too surprising. One interesting statistic indicated that knowledge workers spend 28 hours a week writing emails, searching for information, and collaborating internally. While these may be online activities, they are typically not done via social media, and the report argues that they might be done more efficiently if social-media functionality were enabled. So that's their argument. Another analysis they offer rates different industries in terms of the value a social media approach might have and the ease of realizing that value. Education comes out looking ripe for such activities, which I don't find surprising either.
This leads me back to a position I've been arguing for some time (in fact, you can find it in my chapter for the Design Discourse collection) that we should be looking to integrate social media into our pedagogy. Here are four examples.
1. Next semester we will have around 2500 students and 80 instructors teaching composition courses at UB. No, they aren't all following a common syllabus, but they are nevertheless encountering common tasks. About three-quarters will be in a second-semester course where they will be writing research papers. Could our students and instructors be more productive if they collaborated in this way? Could they get feedback faster? Could they get answers to questions that are stalling them out? Could they get support and encouragement when they hit writer's block or need to stay on schedule? Could they find an audience for their work and peers who have similar research interests?
2. On a smaller, but more diverse scale, we'll have around 60-70 other graduate and undergraduate courses in English with 40 or so different faculty and instructors. What kinds of conversations and collaborations could arise among them? Would we discover consistent and persistent challenges running through the curriculum that we might collectively address? Would we develop a sense of purpose and communicate that to our students?
3. And what about among the faculty, say all the humanities faculty at UB? Let's just say we did the following. Each semester, you would post in your profile a description of your current research project, the courses you were teaching, and any particular service project you were undertaking. These could be driven by keywords and include lists of books and articles being used (in both research and teaching). Out of this, you could find colleagues across the humanities who might be potential collaborators. Administrators could see trends that might shape policy and budget decisions. One could generate a kind of ground-up vision of what we are doing. The sad thing is that we already do this in a far more pedantic way for our annual reports. However, that information is retrospective (i.e., what we've done rather than what we plan to do) and is not socialized.
4. As far as that goes, why not do this with MLA? You could start just with the convention. In a rudimentary way, you could begin by having participants add some keywords to their proposals (key terms, major authors, etc.). Then you could link these people together and create some recommendations, not just among those who were accepted, but among everyone who applied. I wouldn't mind a list from MLA of other scholars "who appear to be studying related issues" that I might then seek out on Twitter or Fb. Eventually, though, we'd need to build a platform to facilitate disciplinary collaboration. (A crude sketch of this kind of keyword matching follows below.)
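Just to show how little machinery the matching in items 3 and 4 would require at its crudest, here is a sketch with invented names and keywords; a real platform would obviously need far more than this.

```python
# A crude keyword-overlap recommender (names and keywords are invented).
profiles = {
    "Scholar A": {"digital rhetoric", "MOOCs", "composition"},
    "Scholar B": {"Victorian novel", "print culture"},
    "Scholar C": {"MOOCs", "learning analytics", "composition"},
}

def recommend(me, profiles, min_shared=1):
    """Rank other scholars by how many keywords they share with me."""
    matches = [(len(profiles[me] & kws), who)
               for who, kws in profiles.items() if who != me]
    return [who for shared, who in sorted(matches, reverse=True)
            if shared >= min_shared]

print(recommend("Scholar A", profiles))  # ['Scholar C']; Scholar B shares nothing
```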
Anyway, I think you get the point. The common workplace concern is that social media is a distraction. It is the common classroom concern as well. Mine is the perhaps counterintuitive argument that social media could make us more productive workers and learners if the applications were properly designed and we adapted to use them in this fashion. The argument I made in that Design Discourse article (freely available here, pdf) is that one shouldn't make rules up front about how to use the technology. That is, rather than building the paths and forcing people to stay on them, one could wait to see where people move and build paths to facilitate that movement.
In the long term (say a decade out) this shift might be not only in the degree of productivity but also in the nature of productivity (which is more interesting to me). That is, our notion of the kind of work we should be doing will change as the means by which we do our work shifts. By facilitating large-scale collaboration among students, across the curriculum, institutionally among researchers, and through professional organizations, the relatively solitary labor of the typical humanist (a consequence of 20th-century technology) will change. Maybe that change will not be for the better. It depends on what "better" means. If by better one means keeping things as they are, doing the same work as we've done in the past, and asking the same questions in the same ways, then this will obviously not be better. For those of us who are congenitally skeptical, as Bryant suggests, there can never be a better, which results in a de facto argument for the status quo. For me, better for the humanities means increasing our capacity for producing knowledge and practices that have meaning and value beyond our specialized communities and that can speak (at least) to the shape of education and (hopefully) to the concerns of public, democratic discourse on a range of issues. But that's just rhetorician speak, I suppose.
Related to my previous post on the Digital Humanities Interview Project, we will be holding an MLA roundtable conversation bright and early on Sunday morning. Here is our proposal and some related information, though if you are interested in our topic, I suggest you check out our DH interviews site.
Participants
I will be chairing and serving as a respondent.
Over the past summer, with the support of UB, two graduate students (Heather Duncan and Daniel Schweitzer) and I began an interview project and the initial results are now available here at dhinterviews.org. We will be discussing the interview project along with three of our interviewees (Richard Miller, Eileen Joy, and Matt Gold) at MLA.
The project formed in my mind about a year or so ago out of a couple of intersecting questions or concerns.
In short, the project started in what I would term a quasi-scholarly fashion. We were going to conduct and publish interviews with the idea that they might be remixed in the future to form the basis of other scholarly video.
After having conducted six initial videos (Liz Losh, Jamie Skye Bianco, and Levi Bryant, in addition to the three listed above), a more conventionally ethnographic project has begun to form. We are at a watershed, so this seems like a valuable moment to capture in relation to the digital humanities as it moves from being a series of sub-specialties into relation with a far broader humanistic requirement for digital literacy, from the undergrad level right through the professional-academic ranks. What we saw in our interviews (and you can see as well if you watch them) is that these scholars (ranging from assistant to full professors, from two-year schools to R-1 universities) express views of their disciplines, the humanities, and universities as a whole that are quite divergent from those of scholars with more traditional paths. Maybe that isn't surprising. However, if this is generally the case for faculty who have moved heavily into middle-state and digital media as scholarly modes of production, then it would strongly indicate that the digital transition will be intertwined with a substantive, paradigmatic shift in the humanities. So this project becomes a way of trying to characterize those shifts not only through an analysis of the scholarship that is produced but through interviews with the scholars who produce it.
As I have written here many times before, there is nothing natural about the 8000-word journal article or the 300-page monograph or whatever. Humanists don't like to talk about "impact" or track citations because those measures don't put our work in a good light. Instead, once one reaches a certain minimal quality threshold (via blind review at a respectable press or journal), all that really matters is quantity (how many have you published? and the longer the monograph, the better). But what I begin to see in these interviews (and the hypothesis I suppose I am seeking to investigate) is that the shift toward the digital is a shift toward impact and rhetorical effect.
I am often asked if writing this blog helps me be more productive in terms of articles or a second book. I always say, "yes, to some degree." But actually I think about it the other way around. Doing the kind of sustained research that is necessary to produce an article, or especially a book, makes me more the kind of academic blogger I want to be. That is, the larger scholarly and philosophical discoveries of my research lead to better insights on the issues I address here. I would be very happy to have more readers of my book than of my blog, but I think that's unrealistic. I would be ecstatic to have more people cite my book than my blog, but that's unrealistic for much the same reason.
In a recent post, Clay Spinuzzi talks about publications as a kind of exhaust from research activity. We have commodified our scholarly exhaust and attributed an imaginary, symbolic value to it: as in, one monograph equals tenure and two equal full professor. Publications are coins, and we have forgotten about their intrinsic value, which has clearly diminished in general terms, in part because of over-production but mostly because intrinsic value and relevance aren't important for coins. We could easily detect different kinds of exhaust from research activity via middle-state publishing and possibly revive some emphasis on the notion that scholarship ought to do something. I think this is one of the clear messages that comes through from scholars who have shifted toward digital media and middle-state publishing: they want to have an impact and an audience. For some reason, the desire for an audience is practically vilified in the humanities: to me this is the primary challenge we face in reinventing ourselves.