Sunday, June 28, 2015

We Live in Intriguing Times

Art by Bob Englehart of the Hartford Courant

Well, this has been quite a week, so I feel obliged to say something about it. 


FLAGGING SUPPORT FOR A SYMBOL OF DIVISION

First of all, the Confederate battle flag, which once had present-day (white) Southerners falling all over themselves to insist it wasn’t a symbol of racism, has now fallen into disrepute due to white man Dylann Roof’s — I suppose I need to insert the word “alleged” — racially motivated attack on the Emanuel AME Church of Charleston, South Carolina, on June 17, which killed nine black parishioners. 

Given the fervent support that the battle flag has enjoyed for decades, what is amazing to me is the speed at which many people, including some white Southerners, are calling for the flag’s removal from official grounds throughout the South.  The haste of some Southerners at least to keep the Confederate battle flag at arm’s length, if not to consign it to history, is something that I never thought I’d see in my lifetime.  Many Americans go out of their way to rationalize the Confederate States of America.  About a week ago, I was posting on Facebook, and the topic of Robert E. Lee came up.  A Facebook friend with more conservative views instantly jumped in to say how great Lee’s service in the U.S. army was in the antebellum years, and that this is what he should be remembered for. 

Now, I am not the most qualified person to cast judgment on Robert E. Lee’s life.  To my limited knowledge, Lee’s service in the U.S. army in the years before 1861 was one of distinction, and I understand that he, on the whole, led a very honorable life.  But the fact remains that he was also a traitor who took up arms against the United States of America primarily in order to keep a race of people enslaved.  Shouldn’t that be the headline of Lee’s life, rather than how gentlemanly and honorable he was?  The mere fact that I need to ask this question speaks volumes about the United States in the post-Civil War years. 


I spent much of my time growing up in central Virginia.  As a grade-schooler just learning about the Civil War, I remember being struck by the sight of a sunbather at Virginia Beach lying on a Confederate-battle-flag beach towel.  I remember thinking what a casual use that was for the flag of an enemy (one on the wrong side of history) that the U.S. army fought and defeated.  While lying on top of a flag may not be the most respectful way to treat it, the fact that the sunbather seemed so accepting of this enemy’s ensign struck me as disregard for the values that the Union victory in the Civil War stood for — keeping the country together and ending slavery in particular.  But since I was just a grade-schooler at the time, I didn’t say anything about it. 

Afterwards, I started noticing the ubiquity of the Confederate battle flag and the esteem in which many seemed to hold it.  As a child, I was especially disquieted by my first ride as a passenger in a car down Richmond’s Monument Avenue.  I rode past towering statues of Confederate historical figures: Robert E. Lee, Jefferson Davis, Stonewall Jackson, and Matthew F. Maury.  Not only did it strike me as strange that the people of Richmond took such pride in these turncoats, but the grandeur and defiance of these statues communicated the following message: the South didn’t really lose the Civil War.  Monument Avenue filled me with apprehension.  (Since then, a statue of black tennis player and Richmond native Arthur Ashe has been added, apparently to offset partially the pro-Confederate signal sent by the other sculptures.) 

R.M.T. Hunter (1809-1887)
As I got older, it also became clearer to me that the Southern side of my family also held the Confederacy in somewhat high regard.  Here’s an example: A distant family relative is Robert Mercer Taliaferro Hunter (1809-1887 — the family pronounces “Taliaferro” as “Tolliver”), a lawyer and statesman who built a large farm that my family still uses.  Growing up, I remember a history-buff uncle telling me with a twinkle in his eye how R.M.T. Hunter served as the Confederate Secretary of State.  Only later did I discover that Hunter had also been a statesman for the United States, and at one point, he not only became Speaker of the U.S. House of Representatives, but he remains the youngest ever to have held that distinguished position.  But that kind of accomplishment apparently wasn’t worth mentioning, only his involvement with the C.S.A. 

In the decades since, of course, I came to see in what great regard the Confederate battle flag — and, to some degree, the idea of the Confederate States themselves — was held in much of not only the South, but the northern United States as well.  Over the years, I have heard many rationales for people embracing the Confederate flag: it being a symbol of heritage, history, and any kind of against-the-grain rebellion.  But, I thought to myself, shouldn’t the fact that it was a treasonous symbol for perpetuating slavery, going against the founding notion that “all men are created equal,” trump any other kind of meaning the flag might convey?  I gradually got the idea that most people who wave the Confederate flag don’t believe in all people being created equal; waving it is a way for them implicitly to signal that African Americans are still inherently one-down in this country. 

Perhaps because of the gradualness of my discovery and my family’s warmth (at least in part) toward the idea of the Confederacy, I kept my qualms about the battle flag — and other celebrations of the Southern secession, such as Monument Avenue — to myself.  Could I be overreacting to the Confederate flag?  Could the flag be a more benign symbol than my negative visceral reactions to it told me?  Whatever the answer, seeing how widespread the esteem for the flag and for the Confederacy was in Virginia, I didn’t think that there was anything I could say about the subject that would change anyone’s mind. 

After the June 17 shooting of nine black parishioners in their Charleston church by a white gunman, the flags of the United States and South Carolina flew at half-staff.  But the Confederate flag overlooking the Confederate monument within sight of the statehouse flew at full staff, an image truly worth a thousand words.

For a long time, I’ve wanted to ask Confederate flag supporters why it was only this particular symbol of the South, and not another, that could adequately express their pride or heritage or whatever.  Knowing that the Confederate battle flag came into widespread use in the South in the 1940s and ‘50s as a symbol of resistance to racial desegregation, I have a feeling that the answer to my question would ultimately be — regardless of what I would be told — that such flag supporters didn’t truly believe in racial equality. 

Now, many white Southerners are apparently coming to regard the Confederate battle flag as an undesirable object.  My youthful negative reaction to it appears to be vindicated.  No, taking down the flag won’t magically undo racism — or even the lingering legacy of the Confederacy — in the United States.  But it’s a good start.  


ONE PLUS ONE EQUALS... 

The other major event this week was the Supreme Court’s ruling, in Obergefell v. Hodges, that same-sex marriage — or more exactly, marriage equality — is a constitutional right in all 50 states.  Many who disagree with this 5-4 decision are criticizing it for supposedly stretching the bounds of what is protected by the Constitution.  Others are finding fault with Justice Anthony Kennedy’s flowery language in his majority opinion (I have not read the full text), which goes on at length about how ennobling marriage is.  Although I support marriage equality, I can understand, to an extent, the criticism of Kennedy’s opinion. 

For me, the entire case in favor of marriage equality boils down to one issue.  According to Wikipedia, married couples have access to 1,138 rights that unmarried people don’t have.  If the government allows one segment of its population access to certain rights — such as the absence of inheritance taxes upon the death of a spouse — via marriage to the consenting adult of their choice, but denies those rights to another segment, then the government is relegating that latter segment to second-class status.  And the government shouldn’t be doing that.  That’s it.  Everything else, including any “ennobling” qualities of matrimony, is just embellishment. 

Some have also objected that the basis for this opinion is nowhere to be found in the Constitution.  But if the Constitution protects those rights and responsibilities for heterosexual spouses, it should protect them for gay couples, too. 


Marriage equality for gay couples and a newfound ignominy for the Confederate battle flag — yes, this has been a very historic week. 

Friday, June 12, 2015

Film Noir: The Darkness Returns

Jane Greer and Robert Mitchum in ‘Out of the Past’ (1947)

Okay, I’ll spill.  I promised four long years ago to write some follow-up posts on film noir after my first one, saying what I think does and does not make a movie “noir.”  Well, time got away from me like an escaped con high-tailing it from the heat.  And I didn’t think that I had very much to add to Alain Silver and Elizabeth Ward’s explanation of why they excluded gangster films, period pieces, and comedies from Film Noir: An Encyclopedic Reference to the American Style.  Plus, after several blogposts about what does or does not constitute a particular genre, I started feeling like a member of the genre police.  Still, I thought that a few more ramblings from me about film noir (unlike most of the movies themselves) wouldn’t kill anyone. 

First, one reason why so many film buffs have so many different definitions of what film noir is and isn’t is that the concept of “film noir” was established virtually after the fact.  French critics in the late 1940s assigned the label film noir (‘black film’) to a number of American movies that these critics saw as darker and more cynical than the typical Hollywood fare.  The filmmakers who produced these movies didn’t see their offerings as related (except in the most obvious ways, of course) and therefore didn’t see any need to ensure that any of these films possessed one attribute or another. 

In his excellent book More Than Night: Film Noir in Its Contexts (which I recommend to any reader of an academic bent), James Naremore writes that “film noir” is an idea more than it is a body of film texts.  So, “film noir,” in this view, can mean anything that anyone wants the term to mean.  Moreover, Naremore points out that when French critics first applied the label “film noir” to American movies, they also attached it to non-crime motion pictures, such as Billy Wilder’s The Lost Weekend (1945), and only later was the term seen to apply exclusively to crime films.  So, the term itself has evolved over time, and it will probably evolve some more, making any attempt (like this one) to ascertain a hard-and-fast definition of “film noir” a fool’s errand, much like trying to determine the identity of the first rock & roll record. 

At the same time, if the mantle of “film noir” can be applied to anything, that renders the term virtually meaningless.  If you type the phrase “best noir films” into Google, a row of movie posters for works described as such on the Web appears at the top of your screen.  In addition to such widely accepted noir titles as Billy Wilder’s Double Indemnity (1944), Jacques Tourneur’s Out of the Past (1947), and Joseph H. Lewis’ Gun Crazy (1950), there appears a poster for Ridley Scott’s 1982 science-fiction film Blade Runner.  Is Blade Runner a true example of noir?  If so, why?  Yes, Blade Runner has many of noir’s trappings: the relentless investigator, the hardboiled voiceover narration, shadowy photography, etc.  But is this enough?  If a category of film can encompass both Gun Crazy and Blade Runner, is that category helpful?  Let’s take a closer look. 
 
Humphrey Bogart and Lauren Bacall in ‘The Big Sleep’ (1946)

In my inaugural essay, I refer to film noir as a subgenre.  I realize now that isn’t the word that I was looking for.  Noir films can be made in any crime genre: a number are whodunits (The Big Sleep, Black Angel, etc.), suspense thrillers (Sleep, My Love; The Window; Alfred Hitchcock’s works, etc.), and gangster films (most notably, White Heat, which, while not a “classic” rise-and-fall story, is still about a gangster).  So, film noir is something that can permeate genres, not a subset of one.  Therefore, I think that we should retire the word “genre” and call noir something else.  Since film noir is a vague concept, I can’t think of anything better than the equally vague word “cycle.”  Film noir — something that I think lasted only in American-centered crime movies from the 1940s until the late 1950s — was a collection of styles and motifs that evolved, flourished, and then ran its course.  From here on out, noir is a “cycle,” not a “subgenre.”

My earlier definition of film noir, for the most part, still holds: “a specifically Hollywood [or American-centered] crime drama made sometime between the mid-1940s and late 1950s, characterized by cinematography with shadowy low-key lighting and an urban-inflected story with the strong potential to unnerve its audience.”  The key phrase is “unnerve its audience.”  The best noir films seem to pose some kind of existential dilemma to the audience.  The best tell stories that, at least for a moment, unmoor the audience from a sense of moral certainty and a sense of a steady place in the world around them.  Silver and Ward say that one of film noir’s most “consistent” attributes is the paranoid protagonist.  They illustrate their point by quoting dialogue spoken by detective Bradford Galt (Mark Stevens) in The Dark Corner (1946): “I feel all dead inside.  I’m backed up in a dark corner, and I don’t know who’s hitting me.”  Silver and Ward write:

With its simple graphic language, Galt’s statement captures the basic emotion of the noir figure.  The assailant is not a person but an unseen force.  The pain is more often mental than physical: the plunge into spiritual darkness, the sense of being “dead inside.”  For Galt in his dark corner the mere fact of being outside the law is neither new nor terrifying.  It is the loss of order, the inability either to discover or to control the underlying cause of his distress, that is mentally intolerable.  (p. 4)

This component of uncertainty — however fleeting or however weakly contradicted by the Production Code-approved happy endings — is key.  If a 1940s-’50s crime drama doesn’t do something to unsettle the audience, aficionados are unlikely to embrace the film as an example of noir.  In a DVD review of the by-the-numbers police-procedural Union Station (1950) for the magazine Sight & Sound, Tim Lucas says:

True noir is something specific, tales of existential entrapment, drenched in irony and fatality.  Films such as Union Station — monochromatic tales of trenchcoated dicks and sadistic criminals staying resolutely on their own sides of the moral fence in a world where good wholesomely prevails — cry out for a category all their own.  So why not call them ‘near-noir’?  (Sight & Sound, XX, 10, p. 88)

Sounds good to me.  One film that I propose would be better branded as “near-noir” is a title often extolled as an exemplar of the film-noir cycle: Jules Dassin’s The Naked City (1948).  Critically praised for, among other things, its pioneering use of location photography, The Naked City is often one of the first titles mentioned as a pre-eminent specimen of the cycle.  However, there’s little sense of moral ambiguity in Dassin’s film.  It’s a straight-ahead police-procedural starring Barry Fitzgerald as an avuncular police investigator whose twinkling presence soothes rather than unsettles.  His younger plainclothes sidekick, played by Don Taylor, is likewise uncomplicated: the biggest moral quandary he faces is a boys-will-be-boys problem with his young son at home, and his pinup-worthy wife (Anne Sargent) suggests that all is basically well within the household.  (Is there any doubt that such a blissfully wedded and photogenic couple would have great sex?)  In short, there’s nothing about The Naked City that implies any ethical abstruseness: we know who the good guys and the bad guys are, and justice prevails.  Why do so many movie-savvy critics regard The Naked City as a film noir? 


Mark Stevens (right) in ‘The Dark Corner’ (1946)

One point of contention among noir enthusiasts is whether or not a particular movie succeeds in unsettling its audience and, if so, to what degree.  Two pictures often labeled as film noir are crime dramas with strong racial themes: Joseph L. Mankiewicz’s No Way Out (1950) and Samuel Fuller’s The Crimson Kimono (1959).  However, from where I stand, these two anti-racism tracts take such pains to paint their minority co-leads (Sidney Poitier in the former and James Shigeta in the latter) as exemplars of all that is right and good that this leaves very little room for moral ambiguity or psychological dislocation.  So, I have great difficulty accepting No Way Out and The Crimson Kimono as examples of film noir.  But I’m sure that other movie mavens would disagree with me. 

Similarly, if anything else about a noir-era crime film intervenes between the audience and an inchoate sense of dread, such a movie would have a hard time being seen as part of the cycle.  Silver and Ward list some elements that would likely keep the audience at arm’s length from the “true” noir experience.  Building on their criteria, here are some further requirements that I see as necessary for film noir:

A crime: Film noir is, first and foremost, a type of crime drama.  The element of crime decisively ruptures the veneer of a placid, morally secure society, and this usually snowballs into noir’s murky interrogation of humanity’s dark side.  So, if no criminal conduct is present in a movie, it’s not a film noir.  For all of its pioneering narrative and visual stylistics that would eventually be absorbed by film noir, Orson Welles’s Citizen Kane (1941) isn’t an example of the cycle: no crime is committed.  On the other hand, such a requisite crime may be large or small: it may be a vicious murder; it may merely be a robbery that is set right before it is discovered, as in The Steel Trap (1952); it may be only the nominal “kidnapping” of a child in the next hotel room, as in Don’t Bother to Knock (1952); or it may be a frame-up capped by an implied murder, as in Sweet Smell of Success (1957).  Any crime will do.  But no crime, no film noir. 

John McGuire (left) and Peter Lorre in ‘Stranger on the Third Floor’ (1940)

A film made during the 1940s and 1950s: While some commentators have seen so-called “neo-noir” films of later decades as a direct extension of film noir into the present day, most critics agree that the “classic” period for film noir lasted only from the 1940s to the 1950s.  As Foster Hirsch puts it in Film Noir: The Dark Side of the Screen:

Film noir erupted in full creative force during a comparatively concentrated period.  In an early and influential article, “Notes on Film Noir” (1972), Paul Schrader places its outer limits from The Maltese Falcon in 1941 to Touch of Evil in 1958.  In a more strict dating, Amir Karimi, in Toward a Definition of American Film Noir, limits the period from 1941 to 1949.  Later critics suggest that the true heyday of noir lasted only a few years, from Wilder’s Double Indemnity in 1944 to the same director’s Sunset Boulevard in 1950.  But the long-range view, with noir extending from the early forties to the late fifties, is the most sensible, for the crime films of this period are noticeably different in theme and style from those made before and after.  
Films noirs share a vision and sensibility, indicated by their echoing titles: No Way Out, Detour, Street with No Name, Scarlet Street, Panic in the Streets, The Naked City, Cry of the City, The Dark Past, The Dark Corner, The Dark Mirror, Night and the City, Phenix City Story, They Live by Night, The Black Angel, The Window, Rear Window, The Woman in the Window, D.O.A., Kiss of Death, Killer’s Kiss, The Killing, The Big Sleep, Murder[,] My Sweet, Caught, The Narrow Margin, Edge of Doom, Ruthless, Possessed, Jeopardy.  These wonderfully evocative titles conjure up a dark, urban world of neurotic entrapment leading to delirium.  The repetition of key words (street, city, dark, death, murder) and things (windows, mirrors) points up the thematic and tonal similarities among the films.  (p. 10)
 
The broadest consensus among movie commentators that I’ve seen is that the first film noir is Boris Ingster’s Stranger on the Third Floor (1940 — with its European director, its “wrongly accused murderer” story, its expressionistic dream sequences, and its strong suggestion of sexual desire), and that the cycle ends with such unease-inducing films as Robert Wise’s Odds Against Tomorrow, Irving Lerner’s City of Fear, and John Cromwell’s The Scavengers (all 1959). 

As I said in my first essay, film noir was largely shaped by the constraints of the Hollywood Production Code, a sanitizing set of rules which compelled filmmakers merely to imply disturbing issues (such as losing one’s sanity or the desirability of social transgression) between the lines of a censor-approved optimistic story.  This created a disconnect between the disturbing themes and the movies’ reassuring veneer, a disconnect that fragmented the perceived wholeness and self-containment of the filmic text.  By 1960, the weakening grip of the Production Code meant that disturbing, impolite themes no longer needed to be hidden, no longer ran the risk of potentially bursting the bounds of a bowdlerized story.  By the time Alfred Hitchcock made Psycho in 1960, the film’s openness about such heretofore verboten themes as adultery, non-marital sex, unambiguous gender ambiguity, all-but-shown nudity, and the grisly gore of murder removed the need merely to hint at their existence between the lines of a sanitized movie, thus eliminating the danger of fracturing the film via such suggestive indirection.  So, like many others, I set the timeframe of “true” film noir between 1940 and 1959. 
 
Orson Welles and Rita Hayworth in ‘Lady from Shanghai’ (1947)

American protagonists or an American milieu:  Film noir intrigues its audience because it questions the optimism — and, some would say, the naïveté — of the American dream and the American mythos.  Noir films are stories of moral scarcity in the land of plenty.  This is what gives film noir its disquieting edge.  So, a film noir must either be set in the U.S. or be about Americans living abroad, as in Charles Vidor’s Gilda (1946), Carol Reed’s The Third Man (1949, a British film), and Jules Dassin’s Night and the City (1950).  Lewis Milestone’s Arch of Triumph (1948) tells a sinister story of intrigue with low-key lighting and high-contrast black & white photography, but its French (and wartime) setting and French characters shield it from any unsettling implications for an American audience.  Two films often associated with noir are Fritz Lang’s M (1931) and Luchino Visconti’s Ossessione (1942), but since these are European productions with European characters and European content (German and Italian, respectively), they don’t fit the bill for noir.  If a film noir is going to have a non-American protagonist, the setting should still be in or around the United States, as in The Lady from Shanghai (1947), The Other Woman (1954), and Touch of Evil (1958).

A contemporary setting: To really shake up viewers, a film should make them feel that their sense of security could be yanked out from under them at any moment.  Setting a film in the recognizable past removes this aura of urgency.  I say “recognizable” past because films set in the recent past (e.g., Double Indemnity [1944] is set six years before the movie was made, probably to avoid any reference to World War Two) are usually indistinguishable from films with a here-and-now setting and don’t have this problem.  Therefore, a crime film like Hangover Square (1945), with its Victorian London setting, reassures the audience that its unpleasant story is safely secured in the unreachable past — have no fear.  For this reason (and its English characters), Hangover Square would not be considered noir. 

However, one period piece is often cited as an important film noir: Charles Laughton’s The Night of the Hunter (1955), set in the 1930s, some 20 years in the past.  This period setting, the Southern Gothic trappings, and Robert Mitchum’s flamboyant take on the lead character cushion the audience from any sense of dread caused by the morally ambiguous plot or shadowy, low-key lighting.  As Silver and Ward put it: “[T]he period context [in the film] insulates [any noir] elements, as well as perverse sexuality or character alienation, and mitigates the immediacy of their impact” (p. 330).  So, I don’t regard the canonized The Night of the Hunter as noir. 

No supernatural story element: A story instigated by a magical or paranormal problem can easily be resolved by a magical or paranormal solution.  A film noir should give its audience the sense that a recognizable, real-life, not-easily-rectifiable dilemma may be just around the corner.  A movie featuring an out-of-this-world problem cushions any sense of immediacy, any sense that the viewer might soon face the same problem.  So, for all of their noir-ish trappings, a horror film like Cat People (1942) and a science-fiction movie like Invasion of the Body Snatchers (1956) don’t count as the real deal.  (I hope that I have now given my reasons why Blade Runner, a science-fiction film from the 1980s, isn’t a film noir.) 

Joseph Cotten and Marilyn Monroe in ‘Niagara’ (1953)

Black & white photography?: And speaking solely for myself — and if you follow my blog at all, you could probably guess this — I prefer a film noir to be in black & white.  Some color films are championed as film noir because of their quasi-expressionistic use of a many-pigmented palette.  Films frequently held up as color noirs include John M. Stahl’s Leave Her to Heaven (1945), Henry Hathaway’s Niagara (1953), Raoul Walsh’s The Revolt of Mamie Stover (1956), Allan Dwan’s Slightly Scarlet (1956), and Alfred Hitchcock’s polychrome productions of the 1940s and ’50s.  But I’ve only seen a few of these movies.  When I’m in the mood for film noir, I want to see the shadowy patterns on the screen shaped by the interplay of blacks, whites, and grays.  Those are the kinds of movies that come to mind when I hear the words “film noir.”  However, I wouldn’t want to rule out the possibility of a noir film shot in color.  While such a movie wouldn’t be my first choice when I’m in the mood for a film noir, if color can abet feelings of unease or disquiet in a crime drama, I would be interested to see how it’s done.  A film noir in color is like life on other planets: it’s not something I’m likely to see anytime soon, but I wouldn’t want to say it doesn’t exist.  

Sunday, June 7, 2015

Why Bill Maher’s ‘New Rule’ Will Fall on Deaf Ears



I enjoyed Bill Maher’s tirade on his show Real Time Friday night, during its “New Rules” segment, about the ridiculous notion of Christians being persecuted in the United States.  He started off by quoting a number of influential conservatives on the subject of supposed Christian oppression and showing just how over the top their words were. 

Rick Santorum says that the treatment of Christians in America is so bad, we should keep in mind Nazi Germany: “…where you go from Christians — Jews, obviously, but also Christians — being not just persecuted but put to death.”  Again, 70% of America is Christian.  Who’s going to put them to death?  The Hindus? 

Yes, once again, some conservative Christians are using hyperbolic language that perceives a slippery slope from a loss of Christian privilege to mass martyrdom.  The idea is ridiculous, and I was glad to hear Maher (as usual) skewer such egregious overstatement.  But as spot-on as Maher’s comments were, I know that they will, alas, not get this particular brand of conservative Christian to reconsider their claims.  For I am certain that a conservative Christian (CC) will accuse Maher of quoting Santorum and company out of context.  

CCs look at the world differently than a lot of other people do.  To them, their faith isn’t just something that they practice on Sunday and then compartmentalize to live and work in the secular world for the rest of the week.  CCs see their faith as pervading their entire life, especially the moments that they don’t spend in church.  For this reason, they see everything they do as an extension of their religion, and if anything compels them to do something they believe is against their faith, they will protest against doing it. 

CCs see themselves as put upon for a variety of reasons, but the two most prominent at the moment are the growing rights of LGBT people and the Affordable Care Act’s mandate of certain forms of birth control, which they feel infringe on what they consider moral.  When Ted Cruz says, “There’s no room for Christians in today’s Democratic Party,” what he likely means is that there is “no room” (actually, there is) for Democrats against marriage equality and against Obamacare (among other issues).  Of course, that’s a narrow definition of “Christian,” but it gets the red meat delivered to a conservative political audience. 

Science is increasingly telling us that LGBT people are born with their sexual orientation, so their homosexuality is part of who they are.  Consequently, when someone discriminates against a gay person, the government more and more sees that as prejudice against an individual for something that can’t be controlled.  However, many CCs say that they don’t discriminate against gay people as individuals but against “the homosexual lifestyle,” a lifestyle that to them is manifested by immoral acts.  In this way, CCs view gay people’s homosexuality as what they do, not who they are.  For this reason, CCs bristle at comparisons between the gay-rights movement and the racial-equality movement of the 1960s. 

So, when a CC is asked to do something that (in however small a way) furthers the acceptance, equality, or visibility of LGBT people, they see that as an infringement upon their religious beliefs.  If the government has mechanisms in place that penalize anyone for discriminating against gay people, religious conservatives see that as the heavy hand of government forcing a person to violate his or her faith.  Devout right-wingers probably think about instances like this when they use the term “anti-Christian fascism.” 

The same thing goes for the Affordable Care Act’s requiring large employers to cover birth control — including forms that some believers regard as abortifacients — for their female employees: this kind of scenario would be seen as the government “coercing” a business owner of faith who is against contraception to violate his or her religious beliefs.  The CCs codified that perspective in the Supreme Court’s Hobby Lobby case, which turned on the notion that the owner of an establishment may see the business as an extension of his or her faith — that faith is not something merely professed in church.

When Sean Hannity speaks of the “liberal” media as anti-Christian, he probably means, in part, the information industry’s recent acknowledgement and dissemination of the views of the so-called “New Atheists,” like Sam Harris and Richard Dawkins, whose views CCs find offensive and inflammatory.  He probably also means the news media constantly couching the gains for same-sex marriage as a civil-rights issue for LGBTs and not as a governmental appropriation of an exclusively opposite-sex institution revered by most Christians.  

And meanwhile, efforts to ensure that the government isn’t favoring one religion over another (say, by removing the Ten Commandments from a courtroom) are seen as another governmental attack upon Christians.  While the First Amendment protects most forms of religious practice and speech, CCs feel put upon if they can’t use the government as some kind of vehicle for Christianity (or at least monotheism), so they view the government affirming its secular status as intolerance against religion in general, and Christianity in particular.

If everything is the work of the CC’s deity, then everything is an extension of their religion, and anything that impinges on their everything may be seen to negatively affect their First Amendment religious rights. 

In short, many conservative Christians want to be treated as though their religion is akin to race — or for that matter, to sexual orientation — as something that is inherently part of their biology.  So, CCs strive to portray disrespecting or criticizing religion as something tantamount to racial discrimination. It’s issues like this that conservatives of faith — misguidedly, I believe — think about when they say Christianity is under attack in the good ole U.S.A. 

I wish that there were something to say to this kind of conservative Christian to reorient their view of the non-(devoutly-)Christian world as something (for all intents and purposes) contaminated by sin.  Not everyone shares that view, and conservative Christians, in this officially secular society, need to get along with people outside their denomination as best they can — without feeling that doing so violates their faith. 

I wish that there were something to say to this kind of Christian conservative to make them see just how hyperbolic and unnecessary such slippery-slope and argumentum ad Hitlerum rhetoric is.  If there were, then maybe the conservative Christians and the rest of America would at least be on the same page and have something politically sensible to argue about — and maybe even to agree on.   

Sunday, May 31, 2015

‘Aloha’: No Film Is an Island

Left: Sony/Fox’s official European poster for Cameron Crowe’s ‘Aloha’.
Right: A parody of the poster from Imgur. ‘Haole’ is the Hawaiian word for Caucasian.

Once again, I’m going to write about a movie that I haven’t seen, so if you want to reckon this blogpost as being without any merit whatsoever, I’ll understand.  But more than the movie itself, I want to concentrate on the idea of a movie, and why that questionable idea was seemingly never questioned when the movie itself was given the go-ahead by its producing studio, Sony Pictures Entertainment. 

The film is Aloha (2015), writer-director Cameron Crowe’s Hawaii-set romantic comedy about, in the words of one writer, “a military contractor ([Bradley] Cooper) who moves to Hawaii for work and falls for an energetic Air Force member ([Emma] Stone).” The movie was released in the U.S. last Friday by Sony’s Columbia Pictures division (it will be released abroad by 20th Century Fox).  For the most part, Aloha has received an unenviable pummeling from the critics.  To mention just one, Devin Faraci of Birth.Movies.Death says: “The movie’s just a jumble, a total mess, and that plays out in both macro and micro ways.” 

Again, I haven’t seen Aloha, which in addition to Cooper and Stone, also stars Rachel McAdams, Alec Baldwin, and Bill Murray.  For all I know, I might disagree with Faraci and the rest of the clobbering critical crowd.  For all I know, I might just agree with Los Angeles Times critic Mark Olsen, who writes: “Even with its off-balance, overstuffed storytelling, [Aloha] maintains a charm and energy that never flags, with brisk pacing and generally engaging performances from its deep-bench cast.”  So, I can’t say for sure what my reaction to the storytelling and performances in Aloha would be.  But I have a feeling that I wouldn’t be able to get past the movie’s concept and casting. 

As far as the concept goes, I have some questions: If you wanted to make a movie primarily about Caucasian characters, would you set your story in Harlem?  If you wanted to make a movie primarily about Caucasian characters, would you set your story in East Los Angeles?  I would hope that the answer to both questions would be an obvious and resounding “no,” because Harlem is overwhelmingly African American and East L.A. is overwhelmingly Latino.  If you want to film a story primarily about white people, wouldn’t the logical strategy be to set the story in one of the plentiful U.S. locations where white communities dominate?  And if you wanted to set your story in Harlem or East L.A., wouldn’t you feel the need to reflect the settings’ disproportionately non-white populations in your lead cast? 

I’m not so certain about the answers that I would get to those last couple of questions.  At this moment, I can hear gainsayers telling me that Caucasian people set foot in Harlem or East L.A. all the time.  So, such devil’s advocates might ask, why shouldn’t stories be told about white characters in those primarily non-white settings?  I’m not denying the right of filmmakers or other storytellers to spin whatever yarn they want.  But stories about white lead characters already abound in U.S. media, and making them the focal point in a minority-majority community hazards relegating the people who dwell there to the backgrounds of their own histories and lived experiences.  You also run the risk of alienating minority audiences, who would be yet again deprived of “seeing themselves” and their own experiences in the settings where they live. 

Bradley Cooper and Rachel McAdams in ‘Aloha’

So, I would also add this question: If you wanted to make a movie primarily about Caucasian characters, would you set your story in Hawaii, a state with an Asian/Pacific American supermajority population?  I would hope that the answer to this question would be an equally obvious “no,” but this is what writer-director Crowe (Jerry Maguire, Vanilla Sky) does in Aloha.  However, if you did set a story with a Caucasian primary protagonist in the 50th state, wouldn’t you still want to reflect its Asian/Pacific majority in your lead cast (the way the current TV reboot of Hawaii Five-0 does, where two of its four lead regulars are played by Asian American actors)?  Apparently, Crowe’s answer to this question would be no.  His Hawaii-set Aloha is clearly about a white protagonist, and none of the film’s top-billed cast members is recognizably Asian/Pacific.  This understandably prompted an angry press release from the watchdog group the Media Action Network for Asian Americans (MANAA):

Taking place in the 50th state, the movie features mostly white actors … and barely any Asian American or Pacific Islanders.  “60% of Hawaii’s population is [Asian/Pacific American],” says MANAA Founding President and former Hawaii resident Guy Aoki.  “Caucasians only make up 30% of the population, but from watching this film, you’d think they made up 90%.  This comes in a long line of films (The Descendants, 50 First Dates, Blue Crush, Pearl Harbor) that uses Hawaii for its exotic backdrop but goes out of its way to exclude the very people who live there. ... It’s an insult to the diverse culture and fabric of Hawaii.”

(Full disclosure: Guy Aoki is a personal friend of mine, and I have been a member of MANAA for several years, but I was not a part of its Aloha campaign.)  If I had seen Aloha, that issue would likely be my foremost thought as well.  But something else would probably be on my mind…

I really thought that, by now, the American entertainment industry, including Hollywood, had absorbed the lesson of the Miss Saigon casting controversy of 1990, which I have written about elsewhere.  I thought that the dispute over that Broadway musical casting its Asian male lead role with a Caucasian actor had ultimately gotten across this fact:

Asian American actors do not have equal opportunities to play lead roles on Broadway or in Hollywood

Historically, white actors have always been able to play lead Asian roles — from Charlie Chan to The King and I to Kung Fu — while Asian American actors have never played white leads and continue to struggle to play Asian leads.  So, however talented the actor and however well intentioned the decision, any time a U.S. production casts a white actor in an Asian lead role, it diminishes already scarce opportunities for Asian American actors and perpetuates a racially discriminatory double standard.  The best solution to this predicament, in my opinion and that of many others, is to reserve Asian roles, especially lead Asian roles, for recognizably (i.e., visibly) Asian performers.  Thespians who are part-Asian but can pass as wholly non-Asian (such as Keanu Reeves or Hailee Steinfeld or, for that matter, Yul Brynner) don’t have the same constraints on their careers that recognizably Asian actors do, and thus don’t need this particular kind of consideration.  I thought that Hollywood had gotten the memo.  Apparently, I was wrong. 

Emma Stone as Capt. Allison Ng in ‘Aloha’

In an article titled “The Unbearable Whiteness of Cameron Crowe’s Aloha,” Jen Yamato, writing for The Daily Beast, quotes the MANAA press release and expresses the same misgivings about the film, as well as skepticism about its casting:

MANAA and other Aloha critics didn’t get to see the film before issuing their statements; Sony didn’t conduct a press day for the movie (translation: no stars did interviews) and hid the film from everyone, including journalists, until three days before it opened. If they had [seen the movie], they might be even more perplexed. Because Aloha actually features one of the more prominent Asian/mixed-heritage female leads in any studio movie in recent memory. 
She just happens to be played by Emma Stone. 
The Amazing Spider-Man star locks horns and lips with Cooper as Allison Ng, a promising pilot moving up the ranks. She loves the stars. She’s focused on her career. She impressed Hillary Clinton with her discipline that one time! And she’s all about her native culture. Native, because the blond, green-eyed Ng is one-quarter Chinese, one-quarter Hawaiian, and one-quarter [sic] Swedish….

Of course, Stone is not recognizably Asian/Pacific, and I’d wager that she’s not Asian/Pacific at all.  She certainly has many opportunities to play non-Asian roles — playing them is how she became a star — and I would also bet that she turns down more job offers than she accepts, another luxury that Asian American thespians seldom have.  There are plenty of half-Asian actresses in Hollywood, including some well-known ones, and I’m sure that any of them would have made a more believable quarter-Chinese, quarter-Native Hawaiian, half-white female lead than the blonde-haired Emma Stone. (I might need to add an extra entry to my blogpost “Yellowface Top Ten.”)

As Erin Keane put it in her Salon article “8 Things About Aloha That Bugged Amy Pascal More Than Casting Emma Stone as an Asian Character” (Pascal being the former Sony Pictures executive):

But with all of the [unrelated] objections Pascal herself raised with the film during its production, you’d think her emails would produce even one “um, you guys?” moment about the choice of Stone to play this particular character instead of an actress with actual Chinese and/or Pacific Islander heritage.  Maybe there is one lurking in the Wikileaks repository, but I couldn’t find anything.

Or as blogger Shanee Edwards sums up:

[T]he fact that Emma Stone’s character is supposed to be a quarter Native Hawaiian, creates a bit of a disconnect. We’re not saying a person of Hawaiian descent can't have blond hair or blue eyes, but it does seem unfair not to cast someone who is the real deal.

Of course, even if Aloha had indeed cast the role of Allison Ng with a recognizably Asian actress, this would have perpetuated the shopworn paradigm of the Asian female love interest to a white male lead, a paradigm that has already marked scores of Hollywood releases from Sayonara (1957) to Year of the Dragon (1985) to Broken Trail (2006).  

Cooper (left) gives the Hawaiian shaka sign (with Stone, right).

Controversy also surrounds the very use of the word “aloha” as a title for Crowe’s film, or perhaps as a title for anything else.  For example, Hawaii-born culture critic Janet Mock writes on her blog:

Hawaii lives vividly in people’s minds as nothing more than a weeklong vacation – a space of escape, fantasy and paradise. But Hawaii is much more than a tropical destination or a pretty movie backdrop — just as Aloha is way more than a greeting. 
The ongoing appropriation and commercialization of all things Hawaiian only makes it clearer as to why it is inappropriate for those with no ties to Hawaii, its language, culture and people to invoke the Hawaiian language. This is uniquely true for aloha – a term that has been bastardized and diminished with its continual use…. 
When writer-director Cameron Crowe uses the language of a marginalized, indigenous people whose land, culture and sovereignty have been stripped from them, he contributes to a long tradition of reducing Native Hawaiians to his own limited imaginings – and this is dangerous…. 
A message to those in Hollywood: If you are not [Native Hawaiian] or a person from the Hawaiian Islands, you do not get to spread the message of aloha through your product because [the message] is not yours. It is not yours for appropriation or profit…. 

Poster for ‘Aloha Summer’ (1988)
This seems rather restrictive, and I can imagine some on the other side of the issue crying censorship.  However, I believe that this sensitivity over Crowe titling his movie Aloha is compounded by his choice of visibly white lead characters and an exclusively white lead cast.  If Crowe had done more to feature Hawaii’s diverse ethnic culture in his lead cast and main story line (as opposed to a mere subplot) — the way that the 1988 independent film Aloha Summer did — I think that the titling of the Crowe movie with a word sacred to Hawaiians wouldn’t have stung the local population so much. 

But what dismays me the most is Sony Pictures’ response to the controversy.  As quoted by the Los Angeles Times, the studio released a statement that read in part:

While some have been quick to judge a movie they haven't seen and a script they haven't read, the film Aloha respectfully showcases the spirit and culture of the Hawaiian people. … Filmmaker Cameron Crowe spent years researching this project and many months on location in Hawaii, cultivating relationships with leading local voices. He earned the trust of many Hawaiian community leaders, including Dennis “Bumpy” Kanahele, who plays a key role in the film. 
The tone of Sony’s press release strikes me as defensive and caught off guard.  From reading it, I get the idea that Sony felt blindsided by the ethnic criticisms surrounding a supposedly innocent romantic comedy.  If this was the spirit in which the press release was written, I’m dumbfounded.  Critiques of Hollywood’s depiction (or lack thereof) of racial minorities and other historically underrepresented communities are going on all the time.  Now, the studio has presented a movie of visibly white lead characters, played by a visibly white lead cast, in a setting with an Asian/Pacific supermajority.  How could Sony not have seen this controversy coming? 

Trailer for Cameron Crowe’s ‘Aloha’


Update, June 3, 2015: Cameron Crowe has written a post on his blog regarding the casting of Emma Stone as Allison Ng in Aloha.  It reads in part:

Thank you so much for all the impassioned comments regarding the casting of the wonderful Emma Stone in the part of Allison Ng. I have heard your words and your disappointment, and I offer you a heart-felt apology to all who felt this was an odd or misguided casting choice. As far back as 2007, Captain Allison Ng was written to be a super-proud ¼ Hawaiian who was frustrated that, by all outward appearances, she looked nothing like one.  A half-Chinese father was meant to show the surprising mix of cultures often prevalent in Hawaii.  Extremely proud of her unlikely heritage, she feels personally compelled to over-explain every chance she gets. The character was based on a real-life, red-headed local who did just that.

I think it’s bewildering that Crowe would try to “show the surprising mix of cultures often prevalent in Hawaii” by erasing that very mix of cultures in the casting process.  As we have seen, when the first publicity for the film appeared, viewers believed that all of Aloha’s lead characters, played by a white lead cast, were Caucasian.  Nothing in those posters, trailers, or press pieces suggested — much less “show[ed]” — a mix of cultures.  They suggested a story about a verdant, magical, Other-inhabited land as seen from inside a Caucasian cocoon.  That’s precisely why early critics thought that the film was entirely about Caucasian characters: no mix of cultures was promised to be shown. 

I’m also still wondering why it’s a Hawaiian story like this — and not something else — that gets the Hollywood treatment.  Why is Hollywood pouring its money into a Hawaiian story where the lead minority character “look[s] nothing like one,” where her status as a person of color is visibly erased?  Meanwhile, projects that do indeed show a mix of cultures in Hawaii (the Disney Channel’s Johnny Tsunami [1999] comes to mind) are given a much lower profile (even the new Hawaii Five-0 has a white lead character as its top-billed role).  A Native Hawaiian-centered major Hollywood release like Disney’s Lilo & Stitch (2002) is very much the exception.  Crowe goes on:

Whether that story point felt hurtful or humorous has been, of course, the topic of much discussion. However I am so proud that in the same movie, we employed many Asian-American, Native-Hawaiian and Pacific-Islanders, both before and behind the camera … including Dennis “Bumpy” Kanahele, and his village, and many other locals who worked closely in our crew and with our script to help ensure authenticity.  (ellipses in original)

This reminds me of Welsh-born Jonathan Pryce in 1991 thanking Miss Saigon’s “multiracial cast” when accepting his Tony for playing Broadway’s first Asian male lead in 15 years: “As long as this minority-majority-set project gives jobs lower on the list to people of color,” this kind of rationalization seems to say, “keeping the above-the-line talent Caucasian is acceptable.”  (The Tonys considered B.D. Wong to be a “featured” [i.e., supporting] actor in 1988’s M. Butterfly.)  If Crowe had really learned about Hawaii and its people during his purported “years of research” in the writing of Aloha, I get the idea that he would have come up with a story where he wouldn’t need to disclose the involvement of Hawaiian locals separately from the movie’s advertising — because this fact would be self-evident in the film’s trailer.  

With all of my dismay about Aloha, I can say that, at the very least, I’m glad this issue is one that Cameron Crowe felt the need to address on his blog and express a modicum of remorse for.  As for Aloha itself, reviews and word-of-mouth for the film were so cripplingly bad that it opened in sixth place at the weekend box office, a disastrous placing, in the eyes of the industry, for a new major-studio offering in wide release.  Whether Hollywood at large will learn anything — beyond the mercantile — from the Aloha controversy remains to be seen.  


From BuzzFeed Videos

Monday, May 18, 2015

Classicism, Modernism, Christopher Booker & ‘L’Eclisse’

Michelangelo Antonioni’s ‘L’Eclisse’ (1962):
Monica Vitti (partially obscured, left) and Alain Delon

One of the most illuminating books I’ve read in the past few years is The Seven Basic Plots: Why We Tell Stories (2004) by Christopher Booker, the conservative British journalist.  But that’s not a recommendation.  I consider the book “illuminating” because of what it told me about the conservative worldview.  Booker’s 700-plus-page tome attempts to distill world storytelling (or at least the storytelling of the Western part) into seven fundamental paradigms.  However, I’m not writing to review the soundness of Booker’s take on the number of plots, nor do I want to detail the characteristics of each, which would take a rather long time.  The Internet is already crowded with reviews of this particular book that look at it from this perspective.  Instead, I’m writing about The Seven Basic Plots because of what it told me — sometimes inadvertently — about the major differences between classicism and modernism, and also about the preference for classicism among conservatives.

By “classicism,” I mean those works (especially in storytelling media) that convey, via generally accepted conventions, an idea or subject in a clear, straightforward way, giving a sense of completion to the narrative and giving a sense of wholeness to the work.  And by “modernism,” I mean works that call those conventions into question so that the idea or subject isn’t so straightforward, thus challenging the audience’s sense of “completion” and “wholeness” and the world around them.  These aren’t the only ways to use the words “classicism” and “modernism,” but they’ll do for now.

One primary concern for The Seven Basic Plots is how the lead character(s) of a story is (are) portrayed.  To Booker, the fundamental kind of protagonist, the “hero” (largely assumed to be male), begins the story in an immature state of incompletion.  As the story progresses, the “light” protagonist encounters one or more “dark forces” (usually the malevolent antagonist[s]), which challenge the hero’s sense of himself and become part of his conflict.  If male, the protagonist, while struggling through his conflict, will also encounter a representative of his anima, his gentler female side, with whom he usually must unite spiritually in order to achieve completion; the anima is most commonly represented by a female love interest for the hero.  Exactly what shape the hero’s struggle takes depends on what kind of plot the work has — to use Booker’s titles for these plots: “Overcoming the Monster,” “Rags to Riches,” “Voyage and Return,” “The Quest,” among others.  Ideally, the hero overcomes the “dark force(s)” by realizing what Booker calls the protagonist’s “Self”: his mature, unselfish identity that is at peace with the world.

In fact, to Booker, it’s this coming into Self-hood that, in large part, enables the hero to overcome his adversarial forces.  The hero ends the story by defeating his antagonist, which also (ideally) marks his ascension into society as an adult.  And by the hero uniting in the end with his anima (e.g., “getting the girl”), the story promises, if only by implication, that this exact society — at least the way that it exists by the “happy” end of the narrative — will be perpetuated via procreation.  There are other forms that a story like this can take, but the hero’s realizing his unselfish Self and his helping to perpetuate a fruitful, benevolent society are crucial elements.

Booker refers to the most obvious antagonist archetype as the “monster”:

[P]hysically, morally and psychologically, the monster in storytelling … represents everything in human nature that is somehow twisted and less than perfect.  Above all, and it is the supreme characteristic of every monster who has ever been portrayed in a story, he or she is egocentric.  The monster is heartless; totally unable to feel for others, although this may sometimes be disguised beneath a deceptively charming, kindly or solicitous exterior; its only real concern is to look after its own interests, at the expense of everyone else in the world.  (p. 33, emphasis in original)

To a conservative like Booker, that is the ideal kind of plot, the kind with a “complete, fully formed happy ending.”  A different kind of plot is a story told from the perspective of the “monster,” told from the perspective of an anti-hero, and Booker calls this kind of plot “Tragedy.”  In other words, Booker’s “Tragedy” is a plot whose lead character would be the antagonist in a more ideal story.  But the ending is the same: the “dark force” protagonist is overcome, allowing a harmonious society to be perpetuated.  For example, Macbeth is told from the perspective of the murderous Scottish usurper, but once he is defeated, Caledonia can unite under a more beneficent ruler. 
 
Christopher Booker in 2011
Another kind of “Tragedy” is one with one or more ideal protagonists whose ascension to Self-hood in society is incomplete by the story’s end, often because of death.  But — and this is crucial to Booker in this kind of tragedy — this frustration of the hero’s objective must lead to a greater good.  For example, Romeo and Juliet’s deaths bring their feuding families together by the play’s end.  Even in A Midsummer Night’s Dream’s farcical play-within-the-play of Pyramus and Thisbe, Bottom takes the time to mention that the wall that once divided the two lovers’ families has been torn down because of the couple’s suicides.  So, the death of the tragic hero (as distinct from the tragic anti-hero) has not been in vain, and the good society endures once again. 

In this way, Booker sees stories as allegories of each audience member’s life.  The struggle for the ideal hero to reach some important goal and ascend into a benevolent society as a fully formed adult correlates to the listeners’ own individual struggles to meet their own important goals.  And the hero “getting the girl” at the end of the story analogizes the audience members finding their own soulmates and ascending into society themselves as fully formed, procreating adults: 

What we see symbolically represented [in archetypal stories] … is the idealised pattern of how any human being can [like the stories’ heroes] travel on the long, tortuous journey of inner growth, finally emerging to a state of complete self-realisation.  (p. 222)

This parallel between the story-hero’s fictional struggles and the spectator’s own non-fictional struggles is sometimes compactly expressed by the aphorism, “If you want to win the princess, you have to fight the dragon.”  Booker explores other kinds of ideal or “light” stories (as opposed to “dark” ones), but the one of a male hero coming into his full and harmonious adult sense of Self by overcoming the egotistical “monster” (broadly defined) and winning the love of the female lead (as the author puts it, “uniting with his anima”) is, to him, “the most basic.” 

However, Booker identifies a kind of plot outside the ideal, one that he claims has come to mark narratives for the past couple of centuries.  In archetypal “light” stories, the monster is the egotistical force, but Booker inveighs against a kind of storytelling where he locates the egotism within the lead character:

[I]n countless modern stories, a fundamental shift has taken place in the psychological ‘centre of gravity’ from which they have been told.  They have become detached from their underlying archetypal purpose.  Instead of being fully integrated with the objective [!] values embodied in the archetypal structure, such stories have taken on a fragmented, subjective character, becoming more like personal dreams or fantasies.  (p. 348)

Booker takes some seemingly archetypal narratives to task for not transforming their lead characters’ inner lives meaningfully, for not bringing them to a complete and integrated sense of Self.  One such tale is Charlie Chaplin’s The Gold Rush (1925), where, in Booker’s view, the Little Tramp is too passive and doesn’t do enough to earn the riches or the woman’s love that he gets by the end of the film.  Another is George Lucas’s Star Wars (1977), because hero Luke Skywalker liberates anima Princess Leia before defeating “monster” Darth Vader: “This misses the very essence of what the archetypal symbolism is about.  The anima can only properly be liberated at the moment when the monster is finally overcome” (p. 382).  Booker terms such insufficiently archetypal tales “romantic” stories. 

While the writer sternly chides works that are “romantic,” his tone grows more agitated over narratives that call the archetypes into question.  One of the earliest that Booker mentions is Mary Shelley’s novel Frankenstein (1818).  By making the novel’s monster (at least at first) sympathetic, Booker charges, Shelley broke a covenant with the reader.  Instead of being the unambiguous figure of dark forces that the hero (presumably Dr. Frankenstein) must overcome to realize his own “light” forces, the monster becomes the object of pity, thereby turning the archetypes on their heads and initiating a story that can only lead to the doctor’s miserable destruction.  Or as Booker puts it, the story “ends with the hero being overcome by the monster, rather than the other way around” (pp. 356-57).  After denouncing the archetypal inversions of Frankenstein, Booker subjects Mary Shelley to a kind of retrotemporal psychoanalysis to figure out why she would have written such an unorthodox book, concluding that it was the product of a troubled mind.  Booker puts other authors whose works go against his archetypal paradigm on the couch as well, with similar results.  This is the most condescending aspect of The Seven Basic Plots: if there are stories that deviate from the book’s ideal, the fault can’t lie with Booker’s paradigm; something must be wrong with the authors he criticizes. 

Booker sees such stories as being deleterious to the audience because the writers’ egos — which ought to have been overcome to achieve a more mature sense of Self — have taken over the narration of the stories, creating stunted “heroes” who likewise give in to their egotism and thereby populate stories with pessimistic or cynical endings.  In discussing such stories, Booker’s tone is stern, but he saves his most caustic venom for the modernist narrative. 

To Booker, the questioning of classical or “archetypal” forms — which is the hallmark of modernism — has been the result of a series of psychological traumas throughout the last 200 years of history (basically since the beginning of the Industrial Revolution).  From the Napoleonic Wars to the automation of the early 20th century to the promiscuity of Bill Clinton, each new change chipped away at Western civilization’s mature sense of Self, giving birth to stories with egotistical and malformed heroes, which, in turn, fed the degenerative cycle of destructive, non-archetypal narratives.  And these egotistical stories, be they “romantic” (merely insufficiently archetypal) or more sinister dark inversions of the paradigm (for all intents and purposes, modernist narratives), have created a fragmented, chaotic world — in other words, a conservative’s dystopian nightmare:

Up to the late 1950s Western society had still managed to preserve an idealised image of its own totality, corresponding to the Self.   Vital to this had been those ruling masculine principles of order, discipline and hierarchy which archetypally constituted the ‘values of Father’.  The institutions and conventions traditionally regarded as essential to holding society together had generally remained intact.  Importance was still attached to such concepts as ‘duty’, ‘responsibility’ and ‘good manners’.  The social order still rested on the respect accorded to ‘authority figures’: from parents to political leaders, from teachers to policemen.  A framework of sanctions still existed to uphold sexual discipline and the central importance of marriage, from laws prohibiting homosexuality to social taboos on promiscuity and adultery.  
One of the more obvious features of the change which came over society after the late 1950s had been the extent to which all this was rejected.  All that complex of ‘masculine’ principles associated with duty, discipline, hierarchy, tradition and authority came to be perceived as oppressive and life-denying.  The new ruling consciousness was one which promoted ‘below the line’ [i.e., plebeian] values at the expense of those ‘above the line’; the attributes of youth over those of maturity; liberation over constraint; ‘lower class’ over ‘upper’; the future over the past.  A dominant archetype of the age — personified in such hero-figures as Elvis Presley or the Beatles — became that of the rebellious puer aeternus, ‘the boy hero’ frozen in immaturity.  No longer was it generally taken for granted that the ultimate goal of life was to work towards the wisdom of age.  What mattered in an age of incessant change was to remain in touch with the new: to aspire to a state of perpetual youth.  (pp. 680-81)

Passages such as these gave me a profound insight into the differences between classicism (Booker’s preferred form of storytelling) and its opposite, modernism.  I don’t mean to suggest a hard-and-fast binarism between classicism and modernism: even Booker himself sees something like the “romantic” (insufficiently archetypal) narrative as something in-between.  Nor do I intend to make an absolute binarism out of liberalism and conservatism: most political views are more complex than party-line moieties.  But if we provisionally treat “classicism” and “modernism,” “liberalism” and “conservatism,” as antipodes of each other, The Seven Basic Plots indirectly told me how the two camps see the world. 

The classical narrative basically views the world as — all things considered — a benevolent place, with civilized societies worth preserving.  If a particular society is unjust, the cause is a tyrannical “ruler” (authority figure) whose individual defeat or change of heart can restore or bring about a more ideal environment that deserves perpetuating.  And in those stories where a malignant, oppressive society still exists after the climax (such as Casablanca [1943] or Mad Max 2 [1981]), the hero’s smaller-scale victory against one of the tyrant’s surrogates suggests that a better world is just over the horizon.  This is the environment of the classical narrative: the hero’s ultimate fitting into, upholding, and propagating this kindhearted society serves as an allegory of the audience member fitting into, upholding, and propagating his or her own society — more or less as that society presently exists — as well.  Because of this kind of story’s (however indirect) proselytizing mission, it should be told in as clear and as easy-to-follow a manner as possible; hence the classical narrative’s usual reliance on established conventions. 

By contrast, modernist works see some deeply ingrained flaw in the societies that their characters inhabit.  A setting that a classically oriented audience might view as (on the whole) unproblematic, the modernist narrative views as problematic, more problematic than anything that could be reversed by the mere “overthrow” of an individual tyrant figure.  Indeed, there might even be something inherent in this milieu that is inconspicuously malignant.  For this reason, the characters in a modernist work have nothing to gain by upholding and propagating their societies. 

So, the modernist creator’s job is to make such a society’s problems and malignancies more discernible.  And because accepted artistic conventions do much to hold perceptions of this society in place, the modernist’s most direct tactic is to interrogate those conventions; by troubling them, the creator exposes, in a very indirect fashion, at least a portion of what is virulent in this setting.  Characters in a modernist work often come to ends that are unhappy or worse.  Such a work allegorizes the destructive forces in the world around its audience, as it also implies that this deleterious society isn’t worth conforming to or propagating.  This, I believe, is why pessimistic endings (as opposed to the bittersweet endings of works that are more classical) are so prevalent in stories of a modernist bent. 
‘L’Eclisse’

The modernist text that best illustrates this divide between the classical story and the modernist one is Michelangelo Antonioni’s L’Eclisse (a.k.a. The Eclipse, 1962).  This film even makes a brief appearance in The Seven Basic Plots.  Booker disapprovingly writes:  “One of the most acclaimed ‘art films’ of 1962 was the Italian director Michelangelo Antonioni’s Eclipse, a drifting nightmare which ended in a cloud covering the sun, throwing the world into a silent twilight” (p. 676).  (More accurately, the film ends with various shots of the big city as night falls.)  

L’Eclisse tells the very loose story of Vittoria (Monica Vitti) and Piero (Alain Delon), who meet in contemporary cosmopolitan Rome, date each other, and eventually become lovers.  Piero is a stockbroker on the Rome exchange, and we see scenes of him shouting along with the other brokers on the exchange’s chaotic floor.  The exchange is a deeply depersonalizing space where brokers scramble and shout among themselves in a frantic chase for phantom fortunes.  They are so riotous that they can barely contain themselves to honor the recent death of a colleague, after which the exchange goes back to its usual pandemonium.  When the market crashes one day, Vittoria asks Piero where all the money went, and he answers, “Nowhere,” leading us to wonder whether the cash that the brokers are frantically chasing is, in fact, real. 

Vittoria, Piero, and the citizens of modern Rome inhabit a fragmented environment sometimes marked by inorganic geometrical shapes and structures.  This is the space shaped by the depersonalizing form of capitalism that the anarchic stock market represents, a space equally depersonalizing to its inhabitants.  In other words, Rome is an alienating environment where the citizens are unable to live truly meaningful lives.  Vittoria’s encounter with a white colonial Kenyan raises the issue of imperialism’s role in shaping this environment.  And hints of nuclear anxiety appear throughout the film — from the mushroom-cloud shape of the hovering E.U.R. tower, to a man carrying a newspaper with the headline “Peace Is Fragile,” to the blinding white light of a streetlamp bulb that fills the final shot. 

Because of its creation and sustenance by a depersonalizing capitalism, hierarchical imperialism, and dread-inducing nuclear weapons, the Rome of L’Eclisse is as pitiless as any tyrannical realm, but it’s pitiless in a more subtle way — and Antonioni’s camera tries to shed light on these subtleties.  In such a cold, unfeeling environment, any romance between Vittoria and Piero seems doomed from the start.  Vittoria seems hesitant to commit to a relationship, and we wonder if Piero’s intentions are all that honorable.  But even in those moments when Vittoria and Piero come together in an embrace, both have a faraway look of dissatisfaction in their eyes, as though each thinks that their relationship is really a substitute for something better that they haven’t found.  Of course, L’Eclisse is best known for its final seven minutes, in which Vittoria and Piero agree to meet later but then never appear again.  The camera wanders the streets of Rome, as if in search of them, driving home the alienating aspects of the city, which are more visible and tangible when there isn’t a romantic, photogenic, story-shaping couple to distract our attention. 

To reference Booker’s paradigm, Vittoria and Piero don’t come together in the closing moments of L’Eclisse because the society of contemporary Rome is sterile and doesn’t deserve to be propagated.  Any coming-together by the couple in the film’s closing moments would have implied the opposite.  Our “heroes” don’t ascend to their mature, fully formed societal roles because their society — with its deep roots in an impersonal capitalism, colonialism, and unease over the nuclear bomb — is not worth ascending to.  And the mere “overthrow” of some “tyrant” isn’t going to change their environment in any meaningful way: L’Eclisse has no individual tyrant figure; the “tyrant” is the society itself. 

Although Antonioni implicitly criticizes the world of 1962 Italy, he also intimates the possibility of escape.  In the alienating atmosphere of Rome, Vittoria is at home neither in her own new-fashioned apartment nor in Piero’s ancestral house, neither in the modern nor the traditional.  Her ideal environment is yet to be found, although inklings that it exists are implied by such things as Vittoria’s revivifying visit to the airport in Verona (with its indications of a world beyond Italy) and her appreciation of the artwork that decorates her apartment.  Even the wind, as it rustles the trees or rattles a line of metal flagpoles, hints at a more organic, life-giving state of existence elsewhere or elsewise.  (The last shot we see of Vittoria is a close-up of her head against a cluster of tree branches.)  As awkward as Vittoria’s “blackface” dance (she imitates an African woman in the white Kenyan’s flat) may look today, it’s yet another enactment of Vittoria’s desire to escape her confining, discomforting world; her African make-up implies that the world to which she must escape will be one of her own making, not necessarily a geographical destination.  And the idea that she eventually will succeed in finding such a space is reflected in her triumphant name: Vittoria — victory.

This is what I gleaned from The Seven Basic Plots: To a conservative like Booker, the (more or less) exemplary society — as allegorized in fiction — does indeed already exist (however marred it may be at the moment), so a story’s unformed hero (like each citizen) must become worthy of this society through personal transformation into a mature, conformist adult.  But to a progressive like Antonioni, the current society must be dramatically transformed to be worthy of its people, so the characters in his films (surrounded by destructive environments) always come to unsatisfactory or unhappy ends.  To the conservative, transformation must be personal.  To the liberal, transformation must be societal. 

I have depicted this idea in relatively broad strokes, of course, and this issue may be examined in other ways that don’t depend on dichotomies.  But by and large, I think that my observation after reading The Seven Basic Plots marks one important distinction between classical works, modernist works, and their often divergent audiences.