On e-Literate, Elijah Mayfield has a good post addressing some of the myths (his term) going on around the subject of machine grading, particularly in response to the NY Times article that provocatively suggested that "Essay Grading Software Offers Professors a Break." I've been re-reading Manuel DeLanda's Philosophy and Simulation for my speculative realism class, which has me thinking about this from some different angles, I suppose. From Mayfield's perspective, as someone invested in machine learning and developing these kinds of applications, machine grading isn't about replacing professors (or giving them a break) but rather about providing a different kind of feedback to authors. It really isn't grading at all. As Mayfield writes, "If I were to ask whether a computer can grade an essay, many readers will compulsively respond that of course it can’t. If I asked whether that same computer could compile a list of every word, phrase, and element of syntax that shows up in a text, I think many people would nod along happily, and few would be signing petitions denouncing the practice as immoral and impossible."
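Mayfield's tallying point is worth taking literally: the kind of surface inventory a machine compiles from a text is trivial to implement. A minimal sketch (my own illustration of the general idea, not Mayfield's software):

```python
from collections import Counter
import re

def surface_features(text):
    """Tally the surface features a machine can 'see' in a text:
    word frequencies, bigrams, and crude sentence statistics.
    No reading, in any human sense, takes place."""
    words = re.findall(r"[a-z']+", text.lower())
    bigrams = list(zip(words, words[1:]))
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_counts": Counter(words),
        "bigram_counts": Counter(bigrams),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

features = surface_features("The computer tallies words. The computer does not read.")
```

Nothing here is controversial as computation; the controversy begins only when such tallies are treated as grades.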
This would be the point, right? The computer isn't "reading," so it clearly isn't "grading," if grading requires reading in the first place and means establishing relative quality (e.g. this essay is better than this other one).
Mayfield doesn't want to think about machine grading as replacing teaching but rather as a supplement, helping students: "What if, instead of thinking about how this technology makes education cheaper, we think about how it can make education better? What if we lived in a world where students could get scaffolded, detailed feedback to every sentence that they write, as they’re writing it, and it doesn’t require any additional time from a teacher or a TA?" Perhaps it is Pollyannaish to imagine this outcome, but I am interested in different questions here.
First, let's dispense with the grading aspect. The problem with the grading process is that it always underlies the lamest possible writing activity: one where hundreds of students are asked to write essentially the same response to a single, fairly narrow prompt. There is no real purpose of communicating with another human. No one wants to write the texts, and no one wants to read them. As I understand it, the machine can sort responses only because the answers are so uniform and predictable. Mayfield, at one point, uses the example of sorting between photos of ducks and photos of houses. And the reality is that this kind of essay writing is equivalent to asking students to go take a photo of a duck and submit it for a grade. As a result, the computer can tell whether the student has taken a duck photo or not. But if the assignment were to take an "interesting" photo? Well, let's just say that we don't yet have to worry about a computer making that judgment for us.
On the other hand, I think part of the problem with (and resistance to) machine grading lies in a serious misunderstanding of what humans do when they grade. To appropriate William Carlos Williams, a text is a machine made out of words. A text is a machine. For a Deleuzian like DeLanda, a human might be a machine as well. That is, humans reading texts are already machines processing other machines. In machine-to-machine relations, we are talking about capacities: not just the properties of a given text, which are finite, but the interaction of those properties with any possible reader in any given situation, which creates infinite capacities to affect. We already know all the things we do to norm readers to create predictable responses. In other words, grading is always about creating a situation that is unlike reading elsewhere, not only in a large standardized test but in typical composition classroom grading as well. By regularizing one end of the equation, we hope to get better measurements of the other end (i.e., the student). As a grader, I do not and cannot care about what the author says. To care is to invalidate the evaluative mechanism. It doesn't matter if I agree with your politics or not. All I am looking for is to see if the text meets certain objective criteria. Have you ever watched a movie and started focusing on things like directorial or acting decisions (e.g., that's an interesting camera angle, or that was a curious facial expression to match that line of dialog)? It's almost impossible to become affectively invested in the film. That's what grading is like.
That said, when one responds to student writing, one has to offer up real engagement with the text, because writing for the purpose of a grade is even more depressing than reading for the purpose of a grade. You have to generate some real, genuine human response to the subject matter. But this stops being about evaluation of the student then, because we open up the Pandora's box of capacities; we become a chain of machinic interactions. And here the computer is already a welcome participant. Human feedback is valuable, but the network can analyze our text and offer thousands of interesting and useful responses. Today we use the network to uncover plagiarism, but tomorrow we could use it to link our students to hundreds or thousands of other writers and texts that share their interests. Think of a kind of reverse Google: your text is composed of these search terms.
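The "reverse Google" idea can be sketched concretely: treat a text as an implicit query, pull out its most distinctive terms, and rank other texts by how many of those terms they share. A toy illustration (the stopword list, term counts, and sample texts are my own simplifications, with no claim about how a real system would work):

```python
from collections import Counter
import re

# A deliberately tiny stopword list for illustration only.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "on"}

def key_terms(text, n=5):
    """Treat a text as a query: pull its most frequent
    content words to serve as implicit 'search terms'."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return {w for w, _ in Counter(words).most_common(n)}

def related_texts(student_text, corpus):
    """Rank other texts by how many key terms they share with the
    student's text -- a toy version of the 'reverse Google' idea."""
    terms = key_terms(student_text)
    return sorted(corpus, key=lambda t: len(terms & key_terms(t)), reverse=True)

matches = related_texts(
    "fracking fracking water pollution energy drilling",
    ["movies actors cinema film reviews", "fracking drilling energy debate water"],
)
```

A student writing on fracking would surface the other fracking texts in the corpus first, linking her to writers who share her interest.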
Whether the machine is a human or a computer, the mass grading process is a statistical procedure that says that a writer who produces a text with certain measurable qualities is likely to be an "A," "B," or whatever, which in turn means they are statistically likely to "know" the content on which they are being tested. On the other hand, when a text is read, by a human or a computer, the reader establishes links between the text and a larger network of data, which generates a response. Do I mean to equate humans and computers? Not really. I suppose my point is that the issue here is with the practice of grading, not the machine doing it.
Collin Brooke has a recent post revisiting an old CCCC presentation (I was there and posted about it back then). Collin updates his thinking in response to Anil Dash's talk on "The Web We Lost." Jeff Rice also writes about Dash. All three offer views on what we've lost or gained as a culture or discipline in the ongoing churn of digital media networks. Collin in particular talks about fields and streams--the disciplinary field in relation to the social media stream of middle-state publishing (Twitter, FB, blogging, etc.)--and thinks about what his graduate students need from him in balancing the more traditional content of the field with tendencies toward the stream. Three years ago perhaps there was a greater need to recognize the stream; today maybe the stream has become a flood.
I like this ecological trope and want to play it out. It's easy enough to say that our media ecology has experienced its own kind of climate change. The streams have moved, and the fields that once flourished have begun to dry up. But there's new life elsewhere. Fields and streams are part of a larger climate system. To incorporate Dash into this metaphor, we need to add the concept of cultivation and, by extension, property. Ten to fifteen years ago, the web operated on a far more open architecture than it does today. The corporate spaces of Facebook and Instagram have pushed out the more open spaces of blogging and Flickr. As Jeff points out, we can experience nostalgia for the early days of web 2.0 or web 1.0 or even the days before the web, when our discipline's research and pedagogical paradigms still made sense. Of course it's just as easy to have a romantic view of the future. For Jeff this begins with recognizing that the web does have an integral set of values toward being "open" or "closed." Open and closed systems are part of the same ecology, though. All interaction requires a degree of openness, an openness to affect and be affected. Developing strategies for closing and redirecting streams might be understood as means for survival and reproduction (to stay with an ecological theme). Dash argues that we have lost something in moving from open to closed systems, even while we have maybe also gained something in the production of better, though closed, applications. Collin also wonders about what is gained or lost in the shifts from field to stream and back. For Jeff, I think, the question of what is gained or lost is not the right question, even though clearly we must all make decisions based upon values. Collin needs to decide about the curriculum for his graduate course. Elsewhere, we need to have conversations about the values that drive web practices.
The danger with the ecological trope is to mistake this as naturalizing a situation that we should insist on viewing as cultural or social. However, given my Latourian view, I am not inclined to accept the natural-social distinction. Instead, I am trying to get at a sense of the actor-network operating here. What is it that the closed system of Facebook makes us do? (To use Latour's phrase, "makes" here means not that it compels us but that it composes us; this is not about determinism in a zero-sum game where either the individual or society wins.)
From my perspective, the interesting question is what is it that our students need to know or be able to do? Or in Latourian terms, what do they need to be made to do? We can start this question with graduate students, as Collin does, and ask what do students today, graduating in 2020, need? But we might as easily begin with undergraduate education, since today's grads are tomorrow's professors (hopefully): what will undergrads need to be made to do? Or perhaps one wants to think of this in terms of the citizens or professionals those students are and will become? However one phrases this, part of the answer has to be an ability to understand how these digital media networks function, what they do and do not do, and what the consequences of these choices might be. We might think of this as the procedural rhetoric of digital media. The informational-communicational world is far more complex today than 15-20 years ago. Figuring out how to learn to live in it... that's a task for the humanities. Not that humanists should tell people how to live (oops, too late), but rather that the humanities investigates this question and the ways we answer it.
The SUNY Council of Writing's annual conference was held yesterday in Buffalo. There were a number of interesting panels. Richard Miller and Kelly Kinney gave excellent plenary talks. Here I want to think about some of these conversations in relation to what we are doing at UB and my own vision for composition's future.
Kelly is the director of the Writing Initiative at SUNY Binghamton, which recently won an NCTE CCCC award as a program of excellence. What are some of the program's defining features?
If there is one area where the program is lacking, it might be in moving toward digital composing, which is something that Kelly mentioned. Digital composing was a central issue in Richard Miller's talk yesterday. It was a wide-ranging and well-received talk, and I will only settle on a few salient points. Miller discussed the struggles that students have with the directive to "be interested" in something that leads them to write. He observed that the superabundance of information and media stimulation is a significant factor in our students' (and, let's face it, our own) capacity to sustain a focused interest on a subject. We are all familiar with this argument, I think. We are, I think, equally aware of the countervailing argument that our always-connected context facilitates opportunities to do incredible things with our "cognitive surplus." Miller also discussed this. Most importantly, he outlined the ways in which our traditional practices, hinged on the precepts that information is scarce (you have to go to the library and there's only one copy of the book) and that mastery means content expertise, no longer apply. Instead, in a condition of information superabundance, mastery comes in the form of "resourcefulness": having the skills to find the information and put it together for a digital community. For readers here, this message is familiar, but it was very well pitched to the audience yesterday (and still a message that the bulk of my discipline, to say nothing of the rest of the humanities, has failed to hear or understand).
One notable quality of Richard's pedagogy is his emphasis on helping students discover their passions and facilitating their exploration of those passions in a wide range of media and genres. It's a pedagogy that requires flexibility and improvisation from the instructor, which in turn requires a high level of expertise in rhetoric and composition. It also demands a specific pedagogical focus that puts the emphasis on the student, where the connection to rhetoric and compositional processes arrives organically. You can check out some of Richard's students' work here. True, these are not FYC students. My point is not to say "look how well the students write." This isn't a heroic pedagogy narrative. Instead, my point is that the kind of approach that Richard takes would be difficult to accomplish in Kelly's program.
Or at least so we might think: that there is a tension between a standardized program and a classroom that facilitates this kind of open experimentation and pursuit of student interests.
I've been thinking about these issues in our own program as we move to adopt a common textbook for our 101 writing course. The convention in the humanities (and elsewhere in the academy) is that professors choose the topics and readings for their courses. This is based, first, on the principle that professors teach a subject rather than teach students, and second, that professors are masters of the subject content. As such, the depiction of FYC as an experimental zone focused on student interests, while not uncommon in the field (at least once upon a time), is quite atypical otherwise. FYC has become more disciplinary as the field has become more professionalized over the last 20 years (e.g. with the expansion of PhD programs in rhetoric and composition).
However, I don't see these different emphases as necessarily in opposition; they do require some tuning, though. Without expertise in rhet/comp and a commitment to its principles, the open FYC class becomes a free-for-all zone where the "content-less" nature of the course links with the conventional humanities impetus of the professor teaching according to her own content mastery to produce a curriculum where students are asked to write on whatever subject interests the instructor. With less experienced instructors, it is very hard to get around this situation on some level, even if we attempt to negotiate some middle point where the writing topics represent some hoped-for overlap between instructor and student interest/expertise. This is what we typically see with the "readers" publishers produce for FYC.
We have adopted a common 101 textbook, Mike Palmquist's Joining the Conversation, which is a "rhetoric" (i.e. it offers instruction on rhetorical/compositional practices). The text does little, in my view, to limit the capacity of students to pursue their own interests as writers. As a "purpose-driven" rhetoric, it does introduce students to different purposes student-writers might identify in relation to their interests: evaluating, reporting, proposing, etc. As such, students in a class might be asked "to evaluate," which might take the form of different genres--all different kinds of reviews (movies, books, performances, restaurants, a product, etc.), an op-ed evaluation of a politician or law, an evaluation of a piece of research, a progress evaluation, a self-assessment, and so on. We can then make these genres more varied by intersecting them with different media (a slidecast, a blog post, a letter or memo, an internal report, a classroom essay). In theory, beyond the dictate "to evaluate," a student in a class using this text might choose among many genres depending on what suited her interest, purpose, and audience. In practice, an instructor might require all students to write movie reviews (for example). And in a way this might be understandable, especially for the novice instructor, as handling a wide range of topics and genres is challenging. This is particularly the case when an instructor adds topical readings (to extend my example: sample movie reviews and maybe some more academic/intellectual essays on film). At least in this example, we can hope that most FYC students can stir up some personal interest and expertise in movies. I have made this same choice in designing the common syllabus taught by our incoming TAs: identifying specific topics in which I hope students and TAs will have some interest/experience--education, creativity, social media, etc. I have also narrowed the genre of a given assignment to make the task more manageable.
Now I am wondering though if this is really the best decision. I would like to see a curriculum that opened up more opportunities for student interest and experiment, especially across media, while still introducing students to basic rhetorical concepts like purpose, audience, and genre and helping students identify and develop their compositional practices. I am sure the fear would be that chaos would ensue, but my experience is that when FYC students open a word processor they have a hard time writing anything that isn't basically essayistic (i.e. like what they wrote in high school). That's why the digital assignments are so interesting, because they push students into a different compositional network where they can't just replicate what they once did. We would still want students to write essays and draw on academic sources as part of the course. However I think part of the task would be to demonstrate how students can explore their interests and achieve purposes with audiences they seek to address by writing academic essays.
And part of the way I see that happening is in my vision of the future of composition. While I remain skeptical of much of the MOOC bizniz, I am interested in the largely untapped potential of creating real audiences and communities for/of student writers. Skip the 50K random MOOC participants and think for a moment of the 2500 UB FYC students in a given semester. Can we create a digital community that would allow them to share their work with other interested students (and perhaps a larger web public)? Would there be value (for example) in 1000 students writing reviews of current movies, books, products, local hangouts, bands, events, etc.? I think there could be. What if the proposing assignment led 500 students to write proposals to improve some aspect of local university life? What if another 50 wrote proposals to improve a neighborhood? Could something actually emerge from that? Maybe one student in a class wants to analyze fracking and no one else in her class does, but there are 15 other students similarly interested across the program. Could they form a research group together? Give feedback on each other's work? Maybe they could end up producing a collaborative website with academic reports and a multimedia presentation.
This is what I think about when I think about "cognitive surplus." Not untapped brain power necessarily but the way a network creates opportunities for thinking and creating that are not otherwise possible. A common textbook and curriculum would facilitate these kinds of activities, encouraging and empowering experimentation rather than operating in opposition to it. It's an approach that would recognize, as Richard Miller pointed out in his talk, that composing is no longer a cloistered activity.
As Steve Krause has noted, and as has been discussed a fair amount recently on the WPA-list, there is reason to be concerned with the growing role of machine grading of writing. There is a new site and petition (humanreaders.org), and I have added my name to that petition. So it should be clear that I fundamentally share the concerns raised there, because I have confidence in the research behind this position. Essentially the point is that current versions of machine grading software are not capable of "reading." What does that mean? It means that machines do not respond to texts in the way that humans do. It is possible to compose a text that humans would identify as nonsensical and yet receive a high score from a machine. Machines can be trained to look for certain features of texts that tend to correlate with "good writing" from a human perspective, but those features can be easily produced without producing "good writing." The upshot, given the high-stakes nature of many of these tests, is that students will not be taught to produce "good writing" but rather writing that scores well. The horrors of teaching to the test are a commonplace in our culture, so there's no need to take the argument further.
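The worry about teaching to features rather than to writing is easy to demonstrate with a toy scorer. The rubric below is entirely invented for illustration (no claim about how any actual grading product works): it rewards only measurable surface features, and a nonsense text stuffed with those features outscores a short, sensible one.

```python
import re

TRANSITIONS = {"however", "therefore", "moreover", "furthermore", "consequently"}

def machine_score(text):
    """Toy essay scorer built only on measurable surface features
    (an invented rubric, for illustration). Rewards length,
    vocabulary variety, and transition words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    length_score = min(len(words) / 100, 1.0)
    variety_score = len(set(words)) / len(words)
    transition_score = min(sum(w in TRANSITIONS for w in words) / 3, 1.0)
    return round((length_score + variety_score + transition_score) / 3, 2)

# A nonsense text stuffed with the rewarded features outscores
# a short, sensible one -- exactly the gaming worry raised above.
sensible = "Machines score surface features, not meaning."
stuffed = " ".join(f"however therefore moreover word{i}" for i in range(30))
```

The scorer is doing exactly what it was trained to do; the problem is the gap between what it measures and what we mean by good writing.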
And yet, of course, you would not be reading this if your computer (or phone) didn't read it first. If you have arrived at this page via Google, then there have been several levels of machine reading that brought you here. If it seems that Google and other search engines do a fairly good job of finding reliable texts on the subject in which you are interested, then it is because, by some means, they are good readers. No doubt, part of Google's system is reliant upon human evaluators who link and visit pages, including perhaps your own preferences. The same might be said of human readers. How did we figure out what "good writing" was? Do we not rely upon social networks for this insight? In its crudest form (and closer to assessment), don't we "norm" scorers of writing for assessment purposes?
Anyone who has ever done search engine optimization has written explicitly for machines. One of the things that makes SEO tricky, though, is the secret, proprietary nature of Google's search algorithm. Unlike these machine grading mechanisms, Google's search rankings are not easy to game. Perhaps what is required for machine grading is a more complex, harder-to-predict mechanism. In other words, while machines do not need to read in the same way as humans do, they might need to simulate the subjective, unreliable responses of human readers in order to serve our purposes. That last sentence encapsulates two potential errors we encounter in our discussions of machine grading.
1. Because machines don't read the way humans do, they don't understand the meaning of the text. Critics complain that machines can't recognize when a text is nonsense or counterfactual. (One might say the same of humans anyway.) On what basis do we claim that humans are the arbiters of sense? Only on the basis that we only care what humans think, or from a correlationist perspective, that we can only understand texts in terms of ourselves anyway. We don't understand why machines grade texts the way they do sometimes, but we don't say that machines are subjective, which is what we say when human readers disagree. Instead, we say that machines produce error. I say that machines are readers too. Maybe they aren't the readers we want to score our tests, but then we wouldn't want a room full of kindergarteners either. So being human is no guarantee of reliable scoring.
2. Good machines would simulate human readers. This is our basic premise, right? That a machine would give the same score as a human to a given text. That is, we recognize that machines and humans will never read the same way but we need them to provide the same output in terms of scores. This would be like a calculator. A calculator doesn't do math like I do, but it gets the same answer. To make this happen we black box both the human and the calculator: the process is irrelevant; only the answer counts. But that's not really a good analogy for the scoring of human writing.
Unlike calculable equations, there is no right score for a text. What human scoring processes demonstrate is that reading takes place within the context of a complex network of actors that serve to create "interrater reliability" and so on. We begin with the premise that humans typically will not agree on the score for a text, even when you take a fairly similar group of readers (e.g. composition instructors teaching in the same department) and writers (e.g. students in their classes). The readers already are conditioned to a high degree, but then we add on specific conditioning through the norming process and the common conditions in which they are reading. Add to that various social pressures, such as recognizing the seriousness of the scoring and the pressure to grade like other readers so as to reduce the amount of work (discrepancies in scoring lead to additional readings and scorings).
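"Interrater reliability" is itself a statistical construct. Cohen's kappa, a standard measure, corrects the raw agreement between two raters for the agreement they would reach by chance alone; a short sketch with invented scores:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters,
    corrected for the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two normed readers scoring ten essays on a 1-4 scale (toy data).
a = [3, 3, 2, 4, 3, 2, 3, 4, 2, 3]
b = [3, 2, 2, 4, 3, 2, 3, 3, 2, 3]
```

The point of the correction is precisely the one above: agreement is never taken at face value but is measured against what the conditioning of the situation would produce anyway.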
Scoring is not an objective, rational process. Once one abandons the flawed concept of intersubjectivity-- the consensual hallucination that we share thoughts when we agree--one has to come up with another explanation for why two readers give an essay the same score and that explanation, in my view, would involve an investigation of the actor/objects and network-assemblages that operate to produce results. We can complain that machines don't recognize meaning, but that's only because meaning isn't in the text. This has always been the flaw in any form of grading. We evaluate students based upon what their texts do to us as readers. The only reason students have any power to predict what our experience will be is because they participate in a shared network of activity: a network over which they have little control.
So to go back to the original problem of machine grading, I would say that we need to ask what it is that we are trying to determine when we are grading these exams. Do we want to know if students can produce texts that have certain definable features in a testing situation? Do we want to know if students will get good grades on writing assignments in college? Or do we want to know, more nebulously, if students are "good writers"? I think we have proceeded as if these are the same questions. That is, good writers get good grades in college because they can produce texts with certain definable features. But that's not how it works at all, and I think we know that.
In case we don't, just briefly... Good texts don't have certain definable features because the experience of "good" doesn't inhere in the texts we read. This doesn't make the process subjective in the sense of one's reading practice being unpredictable or purely internal. It just makes reading relational. One way of defining rhetorical skill is having the ability to investigate the networks at work and produce texts that respond to those networks. We object to the notion of training students to compose texts that will produce positive responses from machines, but we also object to the notion of training students to compose texts that produce positive responses from normed human scorers.
The real problem, though, is starting with the pedagogical premise that teaching writing means teaching students to reproduce definable textual features without understanding the rhetorical and networked operations underneath. Because what we discover from machine readers is that we can compose texts that have those textual features but are ineffective from our perspective. This is a discovery we have already made a million times, though, as we have all seen many students who diligently replicate the requirements of an assignment and still manage to produce unsatisfactory results. Why? Because they have produced those features without understanding the rhetoricity behind them.
Machines are perfectly good readers. That's not where the problem is. The problem is that we don't understand reading.
Last weekend, we drove down to DC for a soccer tournament in which my son's team was participating. During the 16-hour trip there and back, we listened to an audiobook, Ready Player One by Ernest Cline. It's a sci-fi novel set in a dystopian future where the main characters spend most of their time in a virtual internet/game world called the OASIS (the Ontologically Anthropocentric Sensory Immersive Simulation). The novel's plot is largely an easter egg hunt, with the prize being ownership of the OASIS itself. The hunt, developed by the now-deceased creator of the OASIS, James Halliday, is filled with the nerdy 80s trivia that Halliday loved. I won't say it is a great literary achievement, but it was fun to listen to while driving through central PA. Elsewhere this weekend, I was also revisiting Levi Bryant's Democracy of Objects, which I've been teaching in my Speculative Realism graduate seminar, and we spent some time on Monday discussing Bryant's concept of regimes of attraction.
This may seem like apples and oranges to you, but these are two sides of the concept of virtuality that I explored in The Two Virtuals. Briefly, as we know, the Deleuzian virtual is a monistic substrate. As Bryant writes, "Deleuze's constant references to the virtual as the pre-individual suggests this reading as well, for it implies a transition from an undifferentiated state to a differenciated individual. If the virtual is pre-individual, then it cannot be composed of discrete individual unities or substances. Here the individual would be an effect of the virtual, not primary being itself." The digital virtual is nothing like this. Perhaps virtual reality appears to be made of some monistic malleable materiality but it is, of course, code: ones and zeros, or even better, voltage intensities across a circuit. Deleuze's real virtual lies beneath all that even further. In Democracy of Objects, Bryant undertakes a motivated reading of Deleuze, identifying particular passages where he leans in a different direction, toward a more pluralistic, less monistic, virtuality, which eventually leads Bryant toward his concept of virtual proper being.
This got me wondering if virtual proper being might not be an ontology that is in some respects closer to the way that VR functions, or is imagined to function in a novel like Ready Player One. Bryant writes that
The virtual proper being of an object is what makes an object properly an object. It is that which constitutes an object as a difference engine or generative mechanism. However, no one nor any other thing ever encounters an object qua its virtual proper being, for the substance of an object is perpetually withdrawn or in excess of any of its manifestations.
And then later, he adds, "The virtual proper being of an object is its endo-structure, the manner in which it embodies differential relations and attractors or singularities defining a vector field or field of potentials within a substance." In the move from a common substrate to these virtual/hidden individual "generative mechanisms," this seems to me more like the generative mechanisms behind artificial life. Certainly it would be fair to say that the algorithms that drive such objects are only ever simulations or models of the singularities and attractors that operate virtually. I don't want to take this analogy too far. However, I wonder: if we could hypothesize some extra-dimensional position outside of the virtual-actual ontological circuit Bryant describes, would that position offer us a vantage analogous to that of the human user examining the code and manifestations of software objects in VR?
In any case, the development of virtual spaces offers a useful way to think about how regimes of attraction might operate. Bryant describes these regimes as follows:
Regimes of attraction should thus be thought as interactive networks or, as Timothy Morton has put it, meshes that play an affording and constraining role with respect to the local manifestations of objects. Depending on the sorts of objects or systems being discussed, regimes of attraction can include physical, biological, semiotic, social, and technological components. Within these networks, hierarchies or sub-networks can emerge that constrain the local manifestations available to other nodes or entities within the network.
In short, regimes of attraction place external limits on how the virtual proper being of an object might manifest itself. Every object has an endo-structure that places internal limits on what it might be (otherwise, virtual proper being would be the same as monism). However, those manifestations are further limited by exo-relations, which form the regimes of attraction. As a result of this intersection, objects are neither entirely free to mutate in any fashion nor overdetermined by their context. To think about this within the analogy of a virtual reality space, one might say that any object one creates will have an endo-structure that limits its operation and its possibilities for manifesting. For example, Bryant uses the extended example of a blue coffee mug to talk about the mug's power "to blue" and how that power manifests differently depending on regimes of attraction such as available light sources and the sensory capacities of the objects perceiving the mug. The same sort of mechanic would operate in a virtual world, just as most virtual worlds have game engines that govern things like "gravity." Those engines are part of the endo-structure of objects. The fact that one (or one's avatar) finds herself on a large planet with a particular gravitational field (or a virtual version of such) is then part of the regime of attraction that manifests her feet on the ground and constrains her movement in specific ways. Virtual worlds often work this way, even though clearly they don't have to.
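For readers who think better in code, Bryant's distinction can be sketched as a toy program. This is entirely my own illustration, not anything from Bryant: the `Mug` class and its `manifest` method are hypothetical names, with the internal capacity standing in for endo-structure and the externally supplied conditions standing in for a regime of attraction.

```python
class Mug:
    """A toy object whose endo-structure (an internal capacity to reflect
    blue light) is never encountered directly; it only appears as a local
    manifestation under some regime of attraction."""

    def __init__(self):
        # Endo-structure: an internal generative capacity, withdrawn from
        # direct encounter.
        self.reflects = "blue"

    def manifest(self, regime):
        # The local manifestation depends on exo-relations: available light
        # and a perceiver capable of color vision.
        if regime["light"] == "none":
            return "dark shape"
        if not regime["perceiver_sees_color"]:
            return "gray mug"
        return f"{self.reflects} mug"


mug = Mug()
print(mug.manifest({"light": "daylight", "perceiver_sees_color": True}))   # blue mug
print(mug.manifest({"light": "daylight", "perceiver_sees_color": False}))  # gray mug
print(mug.manifest({"light": "none", "perceiver_sees_color": True}))       # dark shape
```

The point of the sketch is only that the same withdrawn capacity yields different local manifestations as the regime changes, while never appearing "in itself."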
Ready Player One offers a peculiar vision of how the endo-structure of a VR world might develop out of a pre-existing set of regimes of attraction. That is, if we were to imagine, for a second, that someone actually created the Oasis, at the outset its limits would be defined by things like available computing technologies, processing power, programming languages, etc.: in other words, the material substrate on which any VR world would be built. The Oasis in the novel becomes further shaped by its creator's obsession with the 1980s: Dungeons and Dragons, early video games, computers, sci-fi novels, anime... the whole geek lexicon. This obsession becomes built into the mechanics of the Oasis itself, along with those other technological elements. Parzival, the novel's protagonist, navigates the Oasis through his deep research into this culture.
How do we want to think about a VR world as a real object? We've seen this conversation among OOO folks about whether or not Popeye is real. In OOO terms, for an object to be real it must be able to exist independent of external relations. So a statue of Popeye is real. However, there is also something like an idea of Popeye that can be trademarked: is that a real object? Can something that is symbolic or encoded be real, independent of its local manifestations in a book or on a screen? Perhaps. So, for example, if a character in World of Warcraft finds a magical sword, which she can sell or trade, even for US dollars: is that sword a real object?
I would say that one potential error to avoid here is in imagining that virtual worlds are separate from the "real world." They aren't worlds inside of worlds. A digital photograph is as real as a printed photograph. An ebook is as real as a printed book. A digital movie file is as real, as material, as a movie on a reel. Virtual characters have a material manifestation as well. They take up physical space on a hard drive somewhere. They should not be mistaken for their local manifestations on a computer screen, though it is only within the particular regimes of attraction of that software and hardware that they become viewable as characters. Otherwise they are just data files.
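The point that a virtual character is materially just a data file, and only becomes viewable as a character within the right software regime, can be sketched in a few lines. This is my own hypothetical illustration (the character data and names are invented): the same bytes are an opaque blob under one regime and a "character" under another.

```python
import json

# Materially, a "virtual character" exists as bytes on a disk somewhere.
raw = b'{"name": "Parzival", "level": 10}'

# Outside the right regime of attraction, it is just an opaque data file.
print(type(raw).__name__)  # bytes

# Within the right regime (software that can parse it), the same bytes
# manifest as a character with a name and attributes.
character = json.loads(raw)
print(character["name"])   # Parzival
```

Nothing about the bytes changes between the two lines; what changes is the network of relations (here, a parser) through which they become locally manifest.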
Maybe this is a good analogy for a pluralist virtuality, like virtual proper being. Without the proper regime of attraction, virtual beings are unreadable, inaccessible. With the right regime, they become manifest in particular ways, not their true form of course, but a form that can relate and acquire agency. That would suggest though that virtual being can be altered by altering its local manifestation, which is something we already know in OOO, right? Virtual beings cannot directly interact; only their local manifestations (for Bryant) or sensual objects (for Harman) can be encountered. But real objects (their virtual/withdrawn states) can be destroyed through these interactions; there must be feedback. If they can be destroyed, then it makes sense that they can also be altered without destroying them--not just altering their local manifestations but their virtual/withdrawn states as well, while allowing the object to still be the same object.... just different. In other words, we can paint Bryant's blue coffee mug yellow, and it will no longer manifest blueness. It will lose that virtual power. Its virtual proper being will be altered. And yet it is still the same mug. I don't see why that should be a problem. We can have a discussion about how much change is required to make something into a different object, but that's for another time.
What this indicates to me is that if we think about a pluralist rather than a monist virtuality, then we create an ontology where the particulars of an object's virtual dimension are alterable through its local manifestations, which in turn are alterable through the regimes of attraction in which it emerges. This creates a condition in which we can proceed experimentally in devising different regimes for the purpose of shaping virtual being. Our ability to know the results of those experiments might always be limited, but the principle that such mutations are possible seems to make sense.
Life interfered a little last week and got me off-track, so today I will be responding to both weeks 3 and 4 of the eLearning and Digital Culture MOOC. These weeks deal with the topic of posthumanism... something on which I've written a great deal. The instructors also offer up transhumanism. I can see the point of comparison on some surface level, but, at least in my view, these are two very different things, and the comparison might lead to some serious misunderstanding of posthumanism. I'll get to those details in a moment, but I want to arrive there through a consideration of this question: do MOOCs need guilds?
What do I mean by that? While I am not a player of MMORPGs like World of Warcraft, I am aware that much of the activity in these games is carried out by collectives of players known as guilds. Many of the missions in these games require the concerted effort of 20 or more players, so a fair amount of coordination is required both during the live event of adventuring and in between. I realize I am an unusual participant in this MOOC, as I am operating more as an observer than as someone who has specific learning goals. However, I started thinking that serious MOOCers might benefit from being part of a small (20-50 person) group of participants who would commit not only to a single MOOC course but might undertake multiple courses over time. Different participants might offer different skills. For example, some would have more or less time to devote to a particular course and bring more or less expertise to the content. As such their roles might change over time, sometimes being more like a mentor and sometimes more like a student. Members could divide the work of investigating different parts of the course and reporting back to the group.
I suppose one's response to this suggestion might have something to do with how one views the ethical practices of learning. Do we really want students taking courses as a group, collaborating and strategizing independently of the instructor? Or do we ultimately want students to be independent in some fundamental way that would make a guild-type strategy unethical?
In part, this also has to do with how one envisions learning on a MOOC. In an email list discussion I had a few weeks back on the potential of a writing MOOC, I suggested that learning in a MOOC couldn't simply be measured by what individual students had learned, that instead the pedagogical activity of the MOOC needs to recognize what the collective network learns. This is not a spurious suggestion. Part of the premise here is that IF MOOCs are the "future of education" then that is partly because the future of professional labor and citizenship will take place through this kind of collective, networked activity where expertise is not so much about what is inside your head but how well you can connect your head to a larger network of cognition.
Of course this brings us to the matter of trans- and posthumanism. Transhumanism, at least as it is represented for the course in this article, is a political position that advocates for technoscientific experimentation and a legal system that promotes wide freedoms to adopt technological innovations. Posthumanism, though, is not well-represented, even though the instructors offer this introduction to a collection titled Posthumanism, which largely conflates posthumanism with poststructuralism. Unlike transhumanism, posthumanism is not about something that will or might be made to happen through technoscience. It is not even something that did happen, as in once upon a time we were humanist-type humans and now we are posthuman. Instead, posthumanism is about reconsidering what humans always already were/are. Basically, even though poststructuralism is certainly a critique of humanism, it does not do what posthumanism seeks to do in its attempt to understand the intersection of humans and nonhumans.
As a posthumanist, at least my version of it, one would look at MOOCs as networks of distributed cognition (which work with varying degrees of success). While the apparent goal of each course is mastering the content, the other, less obvious goal is teaching users to participate in a particular kind of information network, where knowledge is developed through a certain range of techniques. Of course we could say the same thing about the 20th-century industrial classroom. So, for example, 20th-century academic writing (e.g. first-year composition) was about
The 21st-century learning environment and digital composition is perhaps more akin to the wiki page
I don't mean to suggest that 20th-century skills disappear, but I do believe they must operate within the context of the 21st-century environment. For example, we still need to read closely. However, close reading is no longer enough and must be integrated with an ability to handle more information than one person can be expected to read in the time allotted for any given project. There will still be linear texts, but they will not operate as they once did.
MOOCs can teach students to operate in these new environments. However, I do think something like a guild approach would be useful in making that happen.
Surprisingly, English teachers from K-12 through higher education are not a particularly forward-thinking bunch. Shocking right? While schoolmarm grammarian is uncharitable, it's probably closer to the mark than future-oriented innovator. So when the National Council of Teachers of English publishes a framework for 21st century literacy curriculum that is entirely focused on digital matters, one could almost say this means that one no longer needs to be forward-thinking to recognize digital literacy as a primary site of English education.
I want to combine this with a generally more future-oriented institutional document, the New Media Consortium's Horizon Report. The full report isn't out yet, but they have identified six technological trends across three time-to-adoption horizons:
Technology to Watch
- One Year or Less
- Two to Three Years
- Four to Five Years
I think the MOOC and the tablet are fairly obvious, which they should be, given the time-to-adoption horizon. They've been reporting some version of game-based learning and 3D printing for some time, so I'm not sure how those will come about, how broad their impact will be, or what the time frame will be. However, I think big data and wearable technology are good bets.
I don't know if the particular brand of MOOCs we see with Coursera will be around in 5 years, but I'd be willing to bet that there will be millions of "students" taking open, online courses in 2020. I put students in scare quotes because students suggests, for some, college credits, and I'm not sure what the relationship will be between open courses and credits. What I do know is that these massive, networked environments will alter the way we learn (and work and socialize). I know this because they already have but that trend is only going to intensify.
What NCTE recognizes is that English should be the means by which such literacy is acquired (at least in the US, which is the nation in "National Council"). To that I say, "good luck." Good luck providing this professional development for existing teachers, who are not prepared to do this. Good luck finding university English departments with faculty to provide this literacy to the general population of college students, let alone educate preservice K-12 teachers or graduate students who will become university faculty. Good luck finding English departments who even remotely view digital literacy as a subject that even marginally concerns them, let alone one that would be central to their curriculum in the way that print literacy is now. As I suggested above, I think you'd have better luck selling the average college English department on becoming grammar-centric than you would on becoming digital-centric.
Now if you think I'm just trolling on my own website, well, you might be right. But the truth is that if this was 2003 and a department recognized that digital literacy was going to become the issue that might make or break their disciplinary future, then by now they might have four or five digital scholars hired and a couple tenured. Maybe they'd be in a position to deliver this content today. But few departments did that. This means the transition is likely to be rocky.
Here's my point of comparison. In the mid-19th century, English departments studied oratory and philology: two things contemporary English faculty know little about. Why did English split itself off from speech? Speech still survives, in a way. Most universities have some public speaking course, and speech departments evolved into communication studies. Without wanting to sound techno-determinist, the second industrial revolution had a significant hand in that transformation. I look upon print-centric literary and rhetorical studies in the same way. In hindsight we might say that the 19th-century transition took three or four decades. Things move a little faster now, but the truth is that 2020 will be 25 years after the rise of the modern Internet.
We often see studies of technology adoption by college students, such as those done by Pew. Pew's statistics (96% of undergrads have cell phones, 88% have laptops, and 92% have a wireless connection on some device) match what we see in our own classrooms and walking the campus. The figures for the general adult population are 82%, 52%, and 57% respectively. I wonder what the stats would be for humanities faculty? Lower than undergrads, I suspect, but I wonder if they would be lower than the average adult's. I don't think anyone would be surprised to discover that humanities professors spend more time with traditional cultural/media activities than the average adult or student: reading print books; going to museums, public lectures, and libraries; listening to classical music, etc. And obviously one should be able to live one's life as one chooses, including in terms of online connection.
Given the nature of our profession however, particularly our professional freedoms, these personal choices then become professional ones. To a certain degree, all faculty have needed to come online in one way or another: library databases, email, online grading and student information, and to a lesser though still significant extent, course management systems. Clearly all faculty have internet access at least through their workplaces. But to what extent have we collectively embraced networked culture? Certainly not to the extent that we have embraced the modern culture that we continue to celebrate through our curriculum.
Why is this an issue? Let's say, for example, that I didn't really care for reading books. I would assign books for my courses because that was expected, but I didn't live a life where books were personally valued. How successful do you think I would be at teaching print literacy? Teaching with digital networks requires a kind of literacy derived from a significant level of immersion. This is, I think, a real stumbling block for our profession in facing up to this challenge. No single decision is crucial here, and I would agree that one can become overly and unproductively immersed in the digital world, but here are a few examples:
Again, there are valid points of concern here and these are all acceptable individual decisions. I have no problem with someone living their life this way, but if one puts this all together then I think one ends up with a faculty member who is not well-suited to meet the challenges of preparing students to live in a world that the faculty member has renounced.
In the eLearning MOOC we've been talking about this in terms of Prensky's digital immigrants and digital natives. In my view this is an unproductive and even damaging perspective. Again, as with the utopian/dystopian discourse, perhaps the intent is to move people away from these positions. Reading the course discussion, there are certainly people who have arrived familiar with these terms and unhappy with them. (I would imagine anyone who knows something about these matters knows this is not a productive kind of thinking. It's akin to starting a composition class with grammar instruction because that's what the students expect, even though as an instructor you know that's not the right direction.)
The immigrant/native business sets up a false dichotomy and reinforces an unnecessary conflict. If you identify the faculty as immigrants, then you are really taking a hostile position toward them. But more dangerously you are setting this up as a generational problem wherein the immigrant faculty can just say let the younger generation, the natives, do this work. But that's not what happens because this isn't about generations. It's about disciplinary culture. I have encountered plenty of mid-20s English doctoral students. Yes, many have cell phones and laptops and such, as you would expect. But very few see digital literacy or practices as part of their teaching or disciplinary work. Instead, they are adopting the cultural practices and values of their discipline, which is print-based.
English departments have always claimed that they are the place where people go to become better writers. I have never believed that. I think English attracts students who are already good writers, and I think a literary and print-based curriculum can teach students to read particular genres in particular ways and write literary criticism with its specific discourse. Increasingly though, what is taught is a kind of monastic practice, one that clearly prides itself on its removal from the discourses of the marketplace and the larger culture. There is nothing wrong with monasticism.... for monks. However, it doesn't have broad appeal, though there will always be some students who want that experience. We can't expect that there will be a generational shift that will lead to some eventual change in this situation. "Generations" are broad enough, and the academic job needs are small enough, that there will always be potential hires and grad students to replicate these disciplinary values, just as there will always be people willing to live lives as monks.
And of course it's not just English departments. It's all of the humanities and perhaps beyond. That's just the corner of academia where I live. Ultimately, I think meeting the challenges of digital literacy will require university strategies for hiring and supporting faculty that work outside the disciplinary/departmental will to repetition.
With some 40,000 others I have started on this Coursera MOOC on elearning and digital culture. The first unit deals with utopian and dystopian perspectives on technology. Is it really necessary to explain why this is not a worthwhile way to frame this conversation? If we brought someone from where I live in Western New York from 200 or 500 years ago, would they look upon our world as utopian? dystopian? or just hopelessly foreign? I am guessing they would find it both alienating and amazing. Many of the values they would have would simply not make sense in our world. While technologies do not determine culture, they clearly participate in shaping the world (both naturally and culturally if you wish to make those problematic distinctions). It would be naive to view technological development as a problem solving activity: necessity isn't always the mother of invention. Technologies do not lead toward utopia. Sometimes technologies solve problems, but I think they're more likely to make old problems irrelevant. Did the automobile "solve" a problem with transportation? Not exactly. Did the light bulb solve a problem of darkness? I guess, but it would be more accurate to say that electric illumination created a new human space with new human activities. Put in the broadest terms, we no longer have the problems that 18th-century Americans had; we have new ones.
When we think of the technologies that are the focus of this MOOC--the social web, mobile devices, etc.--did they solve problems? Were they designed with some utopian impulse? Maybe, partly. Most people don't imagine they are doing evil. We could say that technologies are market-driven, but we wouldn't want to mistakenly believe that the market overdetermines technology. As if the market were some uniform entity. As if the market were not capable of error. In my view it's more accurate to imagine technologies as participating in a discontinuous process whereby new activities are generated, activities among humans and nonhumans. Sometimes these activities solve old problems, sometimes they make old problems obsolete without solving them, and sometimes they create new problems or reshape old problems. Given this frame, what seems like a better starting point for me is attempting to identify the functionality of these media technologies and the activities that arise from them. Out of that analysis one might begin to think about how they might be situated in pedagogies. I don't think that's a strictly technical or rational process, though obviously some technical understanding is necessary. That said, it's more about investigating what people do and the myriad capacities that might emerge when humans and nonhumans interact.
In other words, I think the utopian/dystopian business is a bit of a boondoggle. I realize that we often think of technologies in this way, so maybe it's an attempt to find a broad piece of common ground. OK. But as a teacher I would not want to spend so much time confirming a bias that I had to later undo.
This brings me to the MOOC itself. I'm very early into the experience. The typical thing users say is that they feel overwhelmed. That's not exactly the word I would use. The reading/viewing material is pretty light. There are a lot of potential discussion topics in Coursera, along with Twitter feeds, a Google Plus group, etc. etc. So it's a little hard to figure out where to find one's audience. I feel like my audience is here and connected to the MOOC through the Twitter hashtag. I will post in Coursera some, but I do face a rhetorical quandary there. I'm not sure what the point is. Maybe I'll find out.