Criticism Archive


Fighting Words – Episode 9 – Doctor Who Cares Anymore?

So it’s come to this. People are falling over themselves to praise the current season of Doctor Who, and I’m left to wonder if I’m an idiot for not seeing what they see, or if I’m just dead inside. Though, I suppose those two things are not mutually exclusive.

It’s entirely possible that I’m dead inside and an idiot. Though, what little shred of self-esteem I’ve managed to hold on to over the years tells me that I’m probably not an idiot.

Nevertheless, here we are roughly halfway through a season I would describe as an omnishambles, and I think it’s time to ask some serious questions about how we’re supposed to approach Mr. Moffat’s storytelling from a critical point of view.

I mean it’s either that or we can piss and moan about how the show used to be better back when (insert appropriate Doctor Who epoch here).

Remember, you can subscribe to Fighting Words on iTunes and get a new episode downloaded to your iDevice each and every week.

Here’s the audio.

Music Credits

“Pump Sting” Kevin MacLeod

Licensed under Creative Commons: By Attribution 3.0


Babylon 5: The Last Best Hope for Empathy

Sometimes I set out to write one thing, but in the course of putting pen to paper I end up with an entirely different post. This is just such an occasion.

Not long after news spread about a potential film reboot of 90s sci-fi classic Babylon 5, I found myself apologizing to a friend for the series’ first season. It is a truth universally acknowledged among B5 devotees that the series doesn’t find its legs until the second season.

Now suppose we were having this conversation last week, and you asked me, “Adam, what’s the worst part about the first season of Babylon 5?” Last week, I would have answered, without hesitation, that the worst part of the show is Michael O’Hare as Commander Jeffrey Sinclair.

I think the best zinger I ever got off about O’Hare’s performance as Commander Sinclair was that I’d call it cardboard were that not an insult to a useful packing material.

Suffice it to say, my first thoughts when I heard of a potential B5 remake were along the lines of, “Dear god, can we please keep Michael O’Hare away from this.” No cameos. No special guest appearances. Let’s not bring back the guy who makes me preface Babylon 5 conversations with, “don’t worry, Sinclair is only in it for the first season.”

One Google search later and I discovered that Michael O’Hare died last year.  To quote Admiral Kirk, dumbass on me.

After another search I came upon Michael O’Hare’s Wikipedia page. There, I read the following:

As Babylon 5 creator J. Michael Straczynski describes it, during the filming of the first season of Babylon 5, O’Hare began having paranoid delusions. Halfway through filming, his hallucinations worsened. It became increasingly difficult for O’Hare to continue work, his behavior was becoming increasingly erratic and he was often at odds with his colleagues. O’Hare sought treatment for his mental illness, but feared that, as the main character of Babylon 5, taking an extended medical leave of absence would destroy the show just as it was getting off the ground.

Straczynski offered to suspend the show for several months to accommodate O’Hare’s treatment for his mental health; however O’Hare refused to put so many other people’s jobs at risk. Straczynski agreed to keep his condition secret to protect O’Hare’s career. O’Hare agreed to complete the first season but would be written out of the second season so that he could seek treatment. He reappeared in a cameo appearance early in season two and returned in season three for the double episode “War Without End”, which closed his character’s story arc. He made no further appearances on Babylon 5.

Naturally I met this news with some level of skepticism. As I told my undergrads for many years, Wikipedia is not a valid source; dig deeper. After a bit more searching I found the following interview with Babylon 5 creator J. Michael Straczynski.


Double dumbass on me.

In my own defense, I’m not wrong about O’Hare’s performance. Nor am I wrong about the writing being hit and miss in the first season of Babylon 5. As a critic, I stand behind everything I have ever said on Babylon 5’s first season, good or bad. As a human being who attempts to cultivate empathy as a virtue, I feel a measure of regret for my words.

It might be hard to believe, but I have dipped my toe in the waters of acting. While it was always a lot of work, I never found it particularly hard. Listen for your cues, hit your mark, never let the audience see that you’ve mangled a line (live theatre), don’t do weird things with your hands. Because of that experience, I tend to be unsympathetic toward poor performances from actors in television and film. Michael O’Hare’s work in B5 was no exception.

Knowledge of what O’Hare was enduring during his B5 tenure can’t change his performance, but it does change the way I look at it. Now I can see his work in the first season of Babylon 5 as the labour of a man who refused to let people lose their jobs because his mental illness was getting the better of him. I see the half-baked writing in some episodes as the product of J. Michael Straczynski working to help O’Hare keep it together in addition to writing and producing his show.

Babylon 5 stood apart from Star Trek: TNG and Deep Space 9 because in many ways it is a much more honest version of a post-Cold War future. Humanity goes into space and we take all our baggage with us. The series explores mental illness, alcoholism, racism, labour equity, and has a more vibrant political culture than all of the Treks combined. I’d like to think that some of the off-stage personal struggles in the first season informed the character arcs in the remainder of the series, particularly with Garibaldi’s drinking and the former Earthforce officer who reinvents himself as King Arthur to hide from the guilt of his actions during the disastrous first contact between the Minbari Federation and the Earth Alliance. Even if those plot points exist independent of any real-life drama, the revelation of the latter has forced me to reconsider how I look at the first season of Babylon 5.

I don’t know if/how a critical methodology for parsing media should include personal demons as extenuating circumstances. Personally, I think I need to re-watch the first season of Babylon 5 to work my way through this question. I need to filter Michael O’Hare’s work through an empathetic and critical lens. This is not to excuse the work when it is poor, but to try and understand the person who had to hide a mental illness for fear of the consequences it would have on his career and the career of those around him.


Trigger Warnings: A Descent into Tedium

I’ve been avoiding this subject, hoping it would go away on its own before I’d feel compelled to write about it. Yet news, editorial, and opinion pieces on “trigger warnings” seem to be an almost daily affair on Twitter and Facebook. Of particular interest to me is a student resolution from UC Santa Barbara that urges faculty to put trigger warnings on their syllabi. This is to say, if an English professor was teaching Yevgeny Zamyatin’s We, they would put a trigger warning on the syllabus stating that the book contained depictions of forced medical procedures as a warning to students who have potentially suffered through a forced medical procedure of their own.

It’s clear from the New Republic’s article that there’s little in the way of consensus when discussing trigger warnings. The term itself has morphed and mutated as more and more people mobilize the phrase as applied to their own particular agenda. Amid the range of arguments it’s also obvious that many people think trigger warnings ought to be applied with the casual passing of a “spoiler alert.” These warnings would be, simply, an attempt to extend a measure of empathy to a given audience, and in doing so acknowledge that personal experiences cause us to view the world through different lenses. And to a certain extent, I think there’s value to that position.

For example, if I knew that a person was at ground zero during 9/11, I probably wouldn’t recommend they read The Running Man. While I am awestruck by the anti-capitalist symbolism that King mobilized in having Ben Richards crash a plane into a government building, I suspect somebody who saw that happen first hand isn’t going to be in a place where they want to revisit the memories through fiction. That said, my obligation to be a decent human being does not extend to conducting a statistically representative survey of my audience each time I recommend that particular – or any other – book.

To reiterate, I’ll never set out to ruin a person’s day with a recommendation they read Stephen King’s short story The Boogeyman if I know they dealt with neglectful parents as a child. However, in the absence of specific knowledge of an individual reader’s tastes, I will always recommend that people read The Boogeyman before bed, so that they can enjoy the full emotional resonance of the piece. Moreover, I won’t tell you why it works as a horror piece, either. I’ll trust that you – generally speaking – have the wherewithal to know if you’re the type of person who should read Stephen King before bed.

This is where the trigger warning discussion gets stuck in a quagmire, as do most discussions that turn into a debate on what people owe each other in a civil society. And from where I sit, said obligation ends at the intersection of human decency and individual agency.

For example, a university instructor – and I say this having been a teaching assistant for seven years – doesn’t owe their students a specific set of warnings about individual books that might act as triggers. The syllabus itself serves as ample foreshadowing of what is to come in the course. Since the function of an English lit course is to analyze a text – wherein plot is generally unimportant to said discussion – a student should exercise their individual agency, which is in no way compromised through the pursuit of post-secondary education, to see if there’s anything on the syllabus that would cause them severe psychological harm. As far as I’m concerned, a professor could teach an entire course on the literature of incest and rape and not need to offer specific warnings on the texts under review. The syllabus itself, the course title, and the conscious decision of a student to take said course more than meets a means test of basic human decency in the face of challenging works, and that is all our social contract demands – and ought to demand – of each other.

Outside of academia, I think the same test ought to apply. As a critic, I’m engaging in a dialogue with people, not an individual person. Therefore, I’m going to assume that a person is choosing of their own accord to read a book, watch a film, or play a game, and not being subjected to some sort of Ludovico technique. While the media in question might depict something that haunts an individual person, at no point does said media strip a person of the agency to stop what they are doing. And since my job as a critic is primarily in the analysis and evaluation of a work, it’s not my place to burn words on guessing how specific people might or might not respond to something; such intervention is the role of a parent or court-appointed guardian.

Barring the egregious, like the below scene of force feeding torture/murder from the video game Phantasmagoria, it’s not sensible to expect public discourse to contain disclaimers for everything that might set a person off. And ultimately, when it comes to media consumption, a person can always walk away.


Seriously, the above scene is graphic and gruesome. It illustrates my point about agency in the most visceral way possible. Don’t say you weren’t warned. I didn’t make you watch this. You chose to press play, or you chose to move on.

Either way, you were an active agent in the decision to watch – or not watch – this video. I met a basic standard of decency in warning you that this video is particularly hideous, and by those standards there’s no need to slap a trigger warning at the top of this post.


Adam Shaftoe on John Updike on Criticism

In my travels about the internet I happened upon six of John Updike’s rules for literary criticism. Though these rules were originally published in Updike’s 1977 anthology Picked-Up Pieces, I think they remain relevant for experienced contemporary critics as well as those seeking to dip a toe into critical waters. In fact, I suspect these guidelines could be applied as signposts for any sort of critical analysis, regardless of the medium in question; in the interest of brevity, I’ll save said discussion for another day.

A word of warning before we proceed any further. Criticism, like art itself, can often be a very subjective thing. What one person reasonably and rationally justifies as a sound evaluation can just as easily and efficiently be dismissed by another. Therefore, it’s important to take the following rules, as well as my subsequent editorial, with a grain of salt. When art is defined by what the audience is willing to accept as art, the waters of critical consideration can get a bit muddied.

1- Try to understand what the author wished to do, and do not blame him for not achieving what he did not attempt.

In my estimation this is a crucial first step. Reading a text with enough depth to understand the author’s motivations informs the sorts of conclusions that can be drawn from various metaphors, allusions, and sundry literary devices within the book.

Additionally, there is a difference between blaming an author for missing their goals and holding them to account for the self same shortfall. Assume that a dedicated artist is concerned with honing their craft. Showing how a book misses an understood, or explicit, goal is a very good way to help an author reflect on their growth. Moreover, it establishes a meaningful connection between critic and writer.

2- Give him [or her] enough direct quotation – at least one extended passage – of the book’s prose so the review’s reader can form his own impression, can get his own taste.

Generally, I don’t think this is a hard and fast necessity. Between Google Books and Amazon it’s easy enough to find a chapter of a given book. However, there is something to be said of a critic who can find a passage from a book which embodies the novel’s themes as well as the writer’s narrative voice. Alternatively, there are writers whose wholesale style and subject are so unconventional (James Marshall comes to mind) that any paragraph would be sufficient to show off a text’s essential uniqueness.

3- Confirm your description of the book with quotation from the book, if only phrase-long, rather than proceeding by fuzzy précis.

Again, not a rule I think people need to follow. Updike was writing for a literary market with less access to books and book marketing than the world in which we find ourselves today. This said, rule three speaks to something essential and often absent within a quick, fast, and dirty review market: supporting evidence. Regardless of word limits and other such barriers, if a critic puts a judgement on the table they owe the reader and subject under evaluation some level of substantiation.

4- Go easy on plot summary, and do not give away the ending.

Yes. Forever yes. Frankly, it is a poor critic who can’t evaluate a text without giving away the ending. One, if not the ultimate, purpose of a review is to help a reader make an informed choice about whether they will read a particular text. Spoiling the ending, or giving away the resolution to a story’s central conflict, nullifies the need for a review. However, there is a time and place for full-on analysis. In those missives, which can often masquerade as reviews, surprise endings are not sacrosanct.

5- If the book is judged deficient, cite a successful example along the same lines, from the author’s oeuvre or elsewhere. Try to understand the failure. Sure it’s his and not yours?

The ability to compare one text to another is the mark of a great and experienced critic. Bearing this in mind, any poser can use Wikipedia to pad a piece with a couple hundred words of name dropping. Even if those references are genuine, filling a review with oblique comparisons can be worse than useless to somebody using a review as a roadmap to break into a particular genre or style.

I’d also offer there’s nothing wrong with a critic who is willing to admit that their personal taste renders a certain book unpalatable. Again, so long as the reviewer is clearly justifying their position – so much so that the reader can compare their own tastes to the critic’s – they are doing their job.

Updike also offered a sixth potential rule.

6. To these concrete five might be added a vaguer sixth, having to do with maintaining a chemical purity in the reaction between product and appraiser. Do not accept for review a book you are predisposed to dislike, or committed by friendship to like. Do not imagine yourself a caretaker of any tradition, an enforcer of any party standards, a warrior in an ideological battle, a corrections officer of any kind. Never, never (John Aldridge, Norman Podhoretz) try to put the author ‘in his place,’ making him a pawn in a contest with other reviewers. Review the book, not the reputation. Submit to whatever spell, weak or strong, is being cast. Better to praise and share than blame and ban. The communion between reviewer and his public is based upon the presumption of certain possible joys in reading, and all our discriminations should curve toward that end.

Much of this rule exists to protect the critic. Just as authors put a part of themselves on public display through their work, so too do critics. It’s a different sort of vulnerability, but it exists in the moments when readers go out of their way to tell a critic they are wrong/stupid/tasteless.

Also, Updike’s sixth rule speaks to a maxim made famous by Wil Wheaton, “Don’t be a dick.”

Sage advice for all critics.


Critic for Hire: A Thought Exercise

Since the New York Times broke the story on Todd Rutherford’s reviewer-for-hire service, there’s been no shortage of justifiable outrage orbiting critics who get paid to write glowing reviews regardless of the actual quality of the text under examination. Over at The Mad Reviewer, Carrie Slager calls the transaction of cash for disingenuous praise “unethical on the reviewer’s part and laughably pathetic on the author’s.” Elsewhere, Erin Keane says that buying rave reviews is “lazy and counter to the true indie spirit.” Do a few more Google searches on “paying for book reviews” and a theme of near universal condemnation for said practice will emerge.

And generally, I agree with their sentiments. Any critic worth their salt knows not to intentionally misrepresent a given work. So please, dear readers, go forth in this missive knowing that I do not advocate being paid to spread lies.

However, I would ask whether it is sensible to expect non-professional critics (that is, critics not paid a fixed salary to review life, the universe, and everything) to maintain a level of integrity and impartiality akin to that of the Jedi Knights. Idealism is all well and good, but more often than not it loses out to pragmatism. Allow me to draw a comparison.

During the glory days of the Roman Republic, the Senate thought it best not to pay citizens who held various public offices. The dominant rationale was that civic virtue and a sense of service to Rome were ample reward for a year spent as a city magistrate, judge, or septic engineer. This idealism was bolstered by the fact that any Roman who had the means and social connections to consider a life in politics was, by and large, of an old moneyed family. Yet this reality did not prevent a measure of corruption from winding its way into even the lowest levels of Roman bureaucracy.

So as an artistic community, are we not setting ourselves up for disappointment by expecting critics, who probably can’t claim the same sort of financial backing as a Roman aristocrat, to act as beacons of virtue? Indie authors, depending on how they market themselves and the quality of their work, can at least hope to find some form of eventual compensation for their labour. Even if those returns on a self-published novel amount to a penny per word, it’s more than most critics can hope to see given this now explicit expectation of monastic purity and poverty; at least monks received a warm bed, food, and beer in exchange for their daily scribbles.

Once again, I’m not advocating for the sort of disingenuous practices that Todd Rutherford was employing in the name of supply and demand. Yet Rutherford reminds us all that words arranged in a pleasing fashion are a commodity, fiction and editorial alike. Therein he and his ilk illustrate what I see as the crux of this issue. While writers, and perhaps even artists at large, now have venues to create for free with the expectation of deferred compensation, no such effective model exists for independent critics. Those few reviewers who are alternatively audacious or desperate enough to sell their services can now look forward to being anathematized by the writing community, and subjected to the digital age’s equivalent of being pilloried and flogged in the town square.

Still, prosecuting these offenders, at least within a Western tradition of justice, involves more than condemning a guilty act; intent matters as well. Ask yourself whether an extreme reaction against buying and selling reviews is warranted when the mens rea is simply a writer wanting to be paid for their words. Show me a writer, any writer, who doesn’t crave a vision of the future where they can write for a living, or even write for the occasional bottle of scotch, and I’ll gladly kiss your ring.

If we take it at face value that the digital age has democratized publication, and further assume that the ideal state for publication 2.0 is a meritocracy wherein good independent authors are rewarded with sales, then shouldn’t it follow that there is a similar revolution awaiting criticism and reviewing? Treating critics and reviewers as a limitless gestalt of free opinions is the same sort of tack some elements in the publishing industry took with writers prior to the rise of e-publication. We need only look for diminishing sales and stock values to see how mindless adherence to tradition is faring in those quarters.

More to the point, can we really expect this new system to flourish if there are no incentives for reviewers to resist the temptation to mindlessly shill? Perhaps a good place to start is asking if there is a way to reconcile buying a critic’s services without buying the critic wholesale? Because as the indie publishing market grows, the problem of reviewers as paid PR stooges is only going to grow with it.


The Saturday Shaft: Random Thoughts

On Art,

Minecraft is a fantastic tool for having fun while creating digital art. But creating art with unlimited resources can diminish the experience. Minecraft builders, save for those undertaking “super” projects, would do well to play in survival mode from time to time. For one, there’s a unique satisfaction in creating a piece of digital art when you’ve had to scrounge for materials. For another, defending one’s project from creepers, zombies, and skeletons is much like defending one’s non-digital art. There are always going to be philistines who want to destroy, dismantle, or piss on the things that you care about. Accept that as a reality to be managed, rather than a thing to be avoided, and the experience will be that much more rewarding.

On Criticism,

Every writer a critic, and every critic a writer; that’s my solution to what Jacob Silverman calls “an epidemic of niceness in online book culture”. Within the academic world, peer review is an essential part of the writing process. The creative sphere needs a little more of that. Granted, getting published, save for writing a blog, self-publication, or hooking up with a vanity press, is not easy. Once a writer has climbed that mountain, the last thing they might want to hear from a colleague is that their work is wanting. But if we maintain that “critic” is the root of “critical thought” rather than simply “criticism”, we should welcome the feedback of our peers even if it’s not quite what we want to hear. In doing so, everybody grows as a creator and consumer.

Every writer a critic, and every critic a writer.

On Film,

The story of Pygmalion and Galatea is terrible; stop making movies that use it as the central conceit. I’ve met/talked to a lot of writers/artists over the last couple of years, and 96% of them are well adjusted and generally pleasant people. They can also be laser focused, incredibly determined, and passionate in ways that some folk might find surprising. This does not make them misanthropic narcissists who are incapable of holding down relationships with anybody who they have not created – I’m talking to you, Ruby Sparks and to a lesser extent Stranger than Fiction.

On America,

There are a lot of things that America does well. There is one thing America does very poorly: leaving issues of civil rights up to individual states and then a popular vote therein. Civil rights should never be a question of popular opinion; that’s why we call them rights. If they are not universal to all, then they are not worth the paper upon which they are printed. Who thinks that the 13th, 14th, and 15th amendments to the US Constitution would have been better left to regional/state opinion? Such is the stupidity of putting marriage equality in a similar light.


The Thin Plaid Line of Negative Reviews

In recent months, I’ve heard a great many thoughts on negative reviews. I’ll credit one of the best ideas to both Ryan Oakley, author of Technicolour Ultra Mall, and Leah Petersen, author of Fighting Gravity. During a panel on criticism at Ad Astra 2012, a panel moderated by yours truly, both Oakley and Petersen suggested that leaving an inferior novel/film/game/whatever to languish in obscurity can sometimes be a better course of action than constructing a negative, albeit fair, review.

At that same convention I had a quick conversation with Sandra Kasturi, co-editor of ChiZine Publications, on the subject of criticism. Therein, she told me a critic should not be afraid to speak their mind as it is their job to assign value to a given work.

When I began writing reviews I found guidance in the writings of W.H. Auden, who framed criticism as something most useful when it calls attention to things worth attending to.

In an alternate timeline where I’m a “professional” critic, in the sense that I work for a publication and use words which elucidate critical thought for a living, I think it would be easy to craft the above mentioned philosophies into a set of grand critical principles. As an “amateur” critic, the issue is slightly more complicated.

Where “professional” critics are always going to receive gratis review materials, not to mention a fortified buffer zone from the subject under review via their publication, the amateur critic’s inclusion in the great game is dependent upon the goodwill of the publishers. Thus the “amateur” must walk the thin plaid line of negative reviews. In theory, a well-crafted review, either positive or negative, speaks to the talent, experience, and ability of the critic in question. In reality, the publisher-critic relationship is about advertising, and no publisher is going to want to work with an “amateur” who makes one of their authors/game studios/clients look like a douche.

So that’s when you default to something that resembles the aforementioned Petersen-Oakley approach of occasionally leaving bad things to rot without comment, right?

Well, maybe. If a critic receives unsolicited review materials, they have every right to say thanks but no thanks. I did that once when a porn studio asked me to review one of their movies.

However, if said critic opts not to put pen to paper, then they probably aren’t going to see anything in the future from that publisher. To my previous example, nobody from the adult entertainment industry has solicited me since I said no to writing a detailed review of their Spartacus porno. (Come on, what would I say in a porn review? Offer commentary on the grunting and thrusting?)

If a critic asks a publisher for a review copy of a book/game/whatever, there’s an implicit, bordering on explicit, expectation that the work in question is going to get reviewed. A failure to complete the transaction on the part of the critic will likely yield the same result as producing a negative review: the end of the association between critic and publisher.

The equation is further complicated when self-published authors and independent productions enter the fray. Suppose a self-published author sends an “amateur” critic their debut novel. The critic then uses their review to demonstrate the inherent flaws of the text, warning potential readers away from investments in time and money. Under the Kasturi model, the critic has done their job. In theory, this is a good thing. In reality, a person who can write has held a person who can’t write to task for their inability to write. Some people (friends, family, fans of the author in question, and bored internet trolls) might be inclined to label that sort of treatment as a very public bullying.

Scenarios such as these contribute to what I see as a troublesome culture of positivity among the ranks of “amateur” critics. Beyond tiring both body and mind with a perpetual goodwill truffle-shuffle, this positive culture can cripple an “amateur” critic’s transition into the “professional” realm. Ask yourself this: how long would a New York Times book critic last if they loved everything? Would Ebert still be writing film reviews for the Chicago Sun-Times if he praised every movie? Such a perpetually positive critic would quickly forfeit their perceived position as arbiter of taste, instead becoming something of a fanboy/girl, or worse, a paid stooge. Why would any editor want to hire such a writer? Furthermore, and even if people don’t want to admit it, readers love a scathing review.

A good negative review, that is to say one that knows how to challenge art without attacking the artist, is a spectacle in and of itself. It’s the reader-writer equivalent of a trip to the Coliseum. The negative review allows opportunities for revelling in collective contempt; those requiring evidence on that point should check the view count on Gilbert Gottfried’s reading from Fifty Shades of Grey. Similarly, the negative review can turn fans of the lampooned work into the most fervent advocates. At that point, a debate about the quality of the work often becomes a secondary concern. The focus shifts to proving the critic wrong and in the process mobilizing/recruiting others to that end. Go ahead and call out Stephenie Meyer and see just how quickly the Twihards assemble and more importantly proselytize. The same could be said for Community fans – myself among them. Say something bad about Dan Harmon within earshot of me and I’ll spend the next twenty minutes explaining why he is a visionary.


Thus we return to the Petersen-Oakley model where the bad review still promotes the work in question. Despite that reality, and the truism that any press is good press, there exists a limiting structure that shifts dominion over negative reviews to “professional” critics. The internet may have mobilized an army of well trained and highly skilled “amateurs”, but those critics risk biting the hand that feeds them should they dare to do their jobs and write like “professionals”.


Second Person Narratives: I’m not your monster.

As a reader, critic, and occasional writer, I don’t much care for the second person narrative style. That isn’t to say that I think it’s a pointless thing that needs to die an eternal death in the darkest, foulest bowels of hell’s antiquated septic system, not at all. Second person can be quite useful. Choose-your-own-adventure novels demand a second person narrative structure. Decades of Dungeons and Dragons DMs have forged elaborate worlds using the second person. Within more “conventional” storytelling (novels, short stories, modern video games), however, second person becomes problematic.

First, a quick refresher for the benefit of anybody who doesn’t know what I’m talking about.

First person narrative: I couldn’t stand the sound of his voice for another minute. The way he went on and on, and the way everybody listened as if he were God’s chosen prophet. So I did what any coward would do; I kicked him in the nuts.

Note that the narrator is telling the story from their own perspective. First person narration is life as you live it every day.

Third person narrative: Adam’s practiced poker face was about to shatter. For three years he listened to his boss drone on and on about a managerial style that increased ROI each quarter. For three years Adam watched his colleagues genuflect to the pontification of a blowhard who outsourced his work to unpaid interns. At exactly eight minutes into the 10am meeting, Adam stood up from the boardroom table, walked to the front of the room, and smiled as he kicked his boss in the balls.

Notice here that the story is being told from a perspective external to the character in question. Both first and third person perspectives should be quite familiar to anybody who has ever read a novel. Now things get weird.

Second person narrative: You can’t stand listening to him any longer. He’s taken so much credit for other people’s hard work. Everything he’s done is built on the backs of people half his age but twice his intelligence. You didn’t go to business school for this. You know going to Human Resources won’t solve anything. You don’t think about your next action, really. All you do is stand up, square yourself to the man who has stolen your life, and drive a size ten-and-a-half wingtip firmly between his legs.

Hilarious as crotch shots may be, these vignettes illustrate an essential problem with second person narrative. “Adam” might be the sort of guy who kicks his boss in the junk, but what if “You” are not?

What if the story is about something less cathartic than avenging oneself against a boss? What if a reader is being told that they are standing over a dead body, licking a blood-stained knife as a crimson pool slowly wraps around their feet? I don’t know about you, but sometimes I don’t feel like giving up who I am to become somebody else’s monster.

A narrative built in the more conventional first or third person style can safely assume that a reader wants an experience removed from their own world. To read Dune or The Adventures of Sherlock Holmes is to ride on a worm behind Paul Atreides or follow along in the cab next to Holmes and Watson. At no point does the story ask the reader to do anything other than maintain their suspension of disbelief. Who the reader is remains irrelevant to the issues at hand. Second person narratives depend on a reader’s willingness to abandon their sense of self. If you, the reader, are not willing to become the serial killer, the half-demon spawn of a fallen angel, or the sexy horse vampire, then the story falls apart.

I live in fear of this moment every time I see a play

So what’s the problem? For my part, I hate audience participation (save for choose your own adventure novels, especially the ones that offer a hierarchy of endings [Hierarchy of Endings is the name of my next band]) in printed text just as much as I do in theatre. As a reader, I’m looking to be entertained. As a critic, I’m looking for subtexts and themes. As a writer, I’m looking to see what I can learn from the words in front of me. How can I do any of those things if I’m spending the lion’s share of my mental energy turning myself into someone who is compatible with the narration?

What am I gaining by undertaking this effort? Who is the writer to make me think I would even want to become this person? After all, the reader is the consummate and professional voyeur.

When evaluating recent encounters with second person narratives, the reader in me invokes Benedict Cumberbatch as he sighs “Bored”; the Jay Sherman in me says, “This is weak character construction masquerading as high-concept bullshit”; and Adam, the scribbler of words, desperate for the approval of his peers, moves on to a new teacher, as the story appears to be the ultimate form of telling without showing.

So to the writers of the world, I would offer these words, for whatever devalued Hellenic currency they are worth: it is infinitely easier to appropriate a concept, culture, historical figure, political ideology, or religious doctrine than it is to bank on your reader’s desire to abandon their sense of self so as to give your story its necessary cohesion. If a reader is unable, or outright refuses, to participate in what you, the writer, and possibly your editor and publisher, think is a transcendent experience, then all of the deeply layered metaphor and allegory in the world won’t amount to jack.


The Daily Shaft: Critical Disclosure: It’s not necessary, so let’s stop doing it.

Lately, I’ve noticed a trend within some critical circles. This trend suggests that if you are reviewing a given thing (book, movie, game, CD, graphic novel, adult novelty) you need to disclose that you were given said thing as a gratis review copy. When I did some asking around, nobody could give me a justification that seemed to match the magnitude of the “Thou Shalt Disclose” commandment. Therefore, I would offer the following thought to the critics of the world.

You don’t need to disclose if you got something for “free”; it’s part of the job.

Go ahead and look at any of Roger Ebert’s reviews, or scan through a game review on Kotaku or Game Informer. You won’t find those critics marring their prose with a gaudy “we got this for free” disclaimer. If the professionals don’t do this, why should the rest of us take up an action that would brand us as rank amateurs in the eyes of our readers and our betters?

In combating this cult of disclosure, perhaps we as critics need to do more to end the myth that we get things for “free”. Forgive me for invoking the ghost of Heinlein when I say this, but there is no such thing as a free lunch. When an organization gives something to a critic, it’s not a present; it’s an investment. Said investment only pays dividends when/if a critic produces a positive review, which, theoretically, yields greater sales and exposure. More importantly to the critical process, the no-cost review copy is the best way to make sure a critic isn’t letting a financial bias influence their review.

I know that I have, from time to time, reviewed things that I paid for out of pocket. I suspect other critics have done it too, as few of us get into this game with corporate sponsorship from day one. The most important thing to note here is that no-cost access to the subject under review buttresses professional detachment between critic and object. Once coin is parted from hand, that relationship begins to crumble. In a worst case scenario, the critical voice is all but lost in a sea of consumer outrage. I made this mistake with one particular game review (no, I won’t tell you which one, but I’ve left it up on the website as a lesson to myself). I doubt that the bottom line of my review would have changed if I had evaluated the game off a review copy, but my piss and vinegar outrage probably would have been a little more subdued. In short, it’s hard to be objective if a person feels that they have been ripped off.

Therefore, the question should be one of what is gained from expecting critics to disclose the origins of their review copies. To those who think it adds a measure of transparency to a critic’s words, I would say that a critic who gets paid to say nice things about shitty products is not going to feel any compunction about sticking in a line to appear on the up and up. How can disclosure do anything to curb the actions of those mercenary critics? Given the way a critic should operate, at least under ideal conditions, foisting an expectation of disclosure upon them feels like a counter-intuitive effort to codify amateurism.


The Daily Shaft: Free Advice on Critical Methodology

Right then, a planned trip to a neuro-ophthalmologist (I’m fine, I was playing wheelman/seeing eye dog) saw me in the city from about 9AM to 8PM today. During that time I managed to handwrite the post that I wanted to do today, but after spending a day enduring hospital folk, sitting on commuter rail, and driving to and from the rail station, I’m not in the mood to try to translate my chicken scratch rough draft into a polished post.  Instead, I want to share something that I read today.

In between moving from one waiting room to the next, I read through a book that one of my colleagues is considering for use as a guide for upper-year history students who want to gain a better footing in discussing art.  The introductory chapter of the book includes an attempt by W. H. Auden – an English-born poet of the twentieth century – at framing the role of the critic.  Although his words have aged nearly a half century, I find that they remain supremely poignant.

What is the function of a critic?  So far as I am concerned, he can do me one or more of the following services.

1 – Introduce me to authors or works of which I was hitherto unaware.

2 – Convince me that I have undervalued an author or work because I had not read them carefully enough.

3 – Show me relations between works of different ages and cultures which I never could have seen for myself because I do not know enough and never shall.

4 – Give a “reading” of a work which increases my understanding of it.

5 – Throw light upon the process of artistic “Making.”

6 – Throw light upon the relation of art to life, to science, economics, ethics, religion, etc.

From The Dyer’s Hand (1963), 8-9.

I try not to offer a lot of unsolicited advice to other people, mostly because I approach life the same way I did Kendo.  There, there are three types of people: the students, the advanced students, and the teachers.  Despite teaching for a living, I far too often feel like a first among learners, hence my reluctance to dole out wisdom.  However, I feel absolutely confident in suggesting that any aspiring critics should frame and mount Auden’s words above their desk.  For my part, I’m happy to see that my attempts at criticism have quite often connected to one or more facets of Auden’s epistemology.  The obvious caveat is when I ply my sharp tongue and razor wit to eviscerating a particular travesty of genre media.  Those reviews are a necessary tool for demonstrating to magazine/website editors that I know the difference between slander and the sort of theatrical writing that will draw in an audience.

I’d also offer that Auden’s thoughts on critical writing need not be limited to academic discourse.  Language and background information can always be tailored to suit a broad audience. Bearing that in mind, a given work need not be stuffy or pretentious to introduce a potential audience to something new or to demonstrate connections between an author and the culture in which they are writing. Moreover, where high school English teachers spend years training students to ignore their personal biases, Auden’s third point, when applied with a modicum of common sense, encourages a critic to engage their unique experiences as they explore a given text.  Within an increasingly “democratic” critical sphere, an application of any of these approaches could lead to some very interesting relationships between artists and critics.

And that is my free advice on critical methodology.  Tomorrow, my thoughts on 2011’s game of the year.