Sunday, June 28, 2015

We Live in Intriguing Times

Art by Bob Englehart of the Hartford Courant

Well, this has been quite a week, so I feel obliged to say something about it. 


FLAGGING SUPPORT FOR A SYMBOL OF DIVISION

First of all, the Confederate battle flag, which once had present-day (white) Southerners falling all over themselves to say how it wasn’t a symbol of racism, has now fallen into disrepute due to white man Dylann Roof’s — I suppose I need to insert the word “alleged” — racially motivated attack on the Emanuel AME church of Charleston, South Carolina, on June 17, which killed nine black parishioners. 

Given the fervent support that the battle flag has enjoyed for decades, what is amazing to me is the speed at which many people, including some white Southerners, are calling for the flag’s removal from official grounds throughout the South.  The haste of some Southerners at least to keep the Confederate battle flag at arm’s length, if not to consign it to history, is something that I never thought I’d see in my lifetime.  Many Americans go out of their way to rationalize the Confederate States of America.  About a week ago, I was posting on Facebook, and the topic of Robert E. Lee came up.  A Facebook friend with more conservative views instantly jumped in to say how great Lee’s antebellum service in the U.S. Army was, and that this is what he should be remembered for. 

Now, I am not the most qualified person to pass judgment on Robert E. Lee’s life.  To my limited knowledge, Lee’s service in the U.S. Army in the years before 1861 was one of distinction, and I understand that he, on the whole, led a very honorable life.  But the fact remains that he was also a traitor who took up arms against the United States of America primarily in order to keep a race of people enslaved.  Shouldn’t that be the headline of Lee’s life, rather than how gentlemanly and honorable he was?  The mere fact that I need to ask this question speaks volumes about the United States in the post-Civil War years. 


I spent much of my time growing up in central Virginia.  As a grade-schooler just learning about the Civil War, I remember being struck by the sight of a sunbather at Virginia Beach lying on a Confederate-flag beach towel.  I remember thinking what a casual use that was for the flag of an enemy (one on the wrong side of history) that the U.S. Army fought and defeated.  While lying on top of a flag may not be the most respectful use one can make of it, the fact that the sunbather seemed so accepting of this enemy’s ensign struck me as disregard for the values that the Union victory in the Civil War stood for — keeping the country together and ending slavery in particular.  But since I was just a grade-schooler at the time, I didn’t say anything about it. 

Afterwards, I started noticing the ubiquity of the Confederate battle flag and the esteem in which many seemed to hold it.  As a child, I was especially disquieted by my first ride as a passenger in a car down Richmond’s Monument Avenue.  I rode past towering statues of Confederate historical figures: Robert E. Lee, Jefferson Davis, Stonewall Jackson, and Matthew F. Maury.  Not only did it strike me as strange that the people of Richmond took such pride in these turncoats, but the grandeur and defiance of these statues communicated the following message: the South didn’t really lose the Civil War.  Monument Avenue filled me with apprehension.  (Since then, a statue of black tennis player and Richmond native Arthur Ashe has been added, apparently to offset partially the pro-Confederate signal sent by the other sculptures.) 

R.M.T. Hunter (1809-1887)
As I got older, it also became clearer to me that the Southern side of my family also held the Confederacy in somewhat high regard.  Here’s an example: A distant family relative is Robert Mercer Taliaferro Hunter (1809-1887 — the family pronounces “Taliaferro” as “Tolliver”), a lawyer and statesman who built a large farm that my family still uses.  Growing up, I remember a history-buff uncle telling me with a twinkle in his eye how R.M.T. Hunter served as the Confederate Secretary of State.  Only later did I discover that Hunter had also been a statesman for the United States, and at one point, he not only became Speaker of the U.S. House of Representatives, but he remains the youngest ever to have held that distinguished position.  But that kind of accomplishment apparently wasn’t worth mentioning, only his involvement with the C.S.A. 

In the decades since, of course, I came to see what great regard the Confederate battle flag — and, to some degree, the idea of the Confederate States themselves — enjoyed not only in much of the South, but in the northern United States as well.  Over the years, I have heard many rationales for people embracing the Confederate flag: that it is a symbol of heritage, of history, of any kind of against-the-grain rebellion.  But, I thought to myself, shouldn’t the fact that it was a treasonous symbol for perpetuating slavery, going against the founding notion that “all men are created equal,” trump any other kind of meaning the flag might convey?  I gradually got the idea that most people who wave the Confederate flag don’t believe in all people being created equal; it was a way for them implicitly to signal that African Americans are still inherently one-down in this country. 

Perhaps because of the gradualness of my discovery and my family’s (at least partial) warmth toward the idea of the Confederacy, I kept my qualms about the battle flag — and other celebrations of the Southern secession, such as Monument Avenue — to myself.  Could I be overreacting to the Confederate flag?  Could the flag be a more benign symbol than my negative visceral reactions to it told me?  Whatever the answer, seeing how widespread the esteem for the flag and for the Confederacy was in Virginia, I didn’t think that there was anything I could say about the subject that would change anyone’s mind. 

After the June 17 shooting of nine black parishioners in their Charleston church by a white gunman, the flags of the United States and South Carolina flew at half-staff.  But the Confederate flag overlooking the Confederate monument within sight of the statehouse flew at full staff, an image truly worth a thousand words.

For a long time, I’ve wanted to ask Confederate flag supporters why it was only this particular symbol of the South, and not another, that could adequately express their pride or heritage or whatever.  Knowing that the Confederate battle flag came into widespread use in the South in the 1940s and ‘50s as a symbol of resistance to racial desegregation, I have a feeling that the answer to my question would ultimately be — regardless of what I would be told — that such flag supporters didn’t truly believe in racial equality. 

Now, many white Southerners are apparently regarding the Confederate battle flag as an undesirable object.  My youthful negative reaction to it appears to be vindicated.  No, taking down the flag won’t magically undo racism — or even the lingering legacy of the Confederacy — in the United States.  But it’s a good start.  


ONE PLUS ONE EQUALS... 

The other major event this week was the Supreme Court’s ruling, in Obergefell v. Hodges, that the Constitution guarantees same-sex couples the right to marry — marriage equality, in other words — in all 50 states.  Many who disagree with this 5-4 decision are criticizing it for supposedly stretching the bounds of what the Constitution protects.  Others are finding fault with Justice Anthony Kennedy’s flowery language in his majority opinion (I have not read the full text), which goes on at length about how ennobling marriage is.  Although I support marriage equality, I can understand, to an extent, the criticism of Kennedy’s opinion. 

For me, the entire case in favor of marriage equality boils down to one issue.  According to Wikipedia, married couples have access to 1,138 federal rights and benefits that unmarried people don’t have.  If the government allows one segment of its population access to certain rights — such as the absence of inheritance taxes upon the death of a spouse — via marriage to the consenting adult of their choice, but denies those rights to another segment, then the government is relegating that latter segment to second-class status.  And the government shouldn’t be doing that.  That’s it.  Everything else, including any “ennobling” qualities of matrimony, is just embellishment. 

Some have also argued that the basis for this opinion is nowhere to be found in the Constitution.  But if the Constitution protects those rights and responsibilities for heterosexual spouses, it should protect them for gay couples, too. 


Marriage equality for gay couples and a newfound ignominy for the Confederate battle flag — yes, this has been a very historic week. 



Update, July 11, 2015: My head is spinning.  Yesterday, less than one month after the tragic shooting at Emanuel AME church in Charleston, South Carolina, the Confederate battle flag that flew in view of the statehouse was taken down.  It took a two-thirds vote from both legislative chambers in South Carolina — as well as the signature of Indian-American governor Nikki Haley — for the flag to be removed (or moved in any way, which is why the Confederate flag wasn’t lowered when the national and state flags flew at half-staff for the shooting victims).  Events have been moving at such a breakneck clip that I’d probably be dizzy even if I didn’t have vertigo.  

I’m glad to see so many people finally agree with me (or finally acknowledge) that the Confederate battle flag is a symbol of treason and racial oppression, and that it has no place in or near the official halls of the federal, state, and local governments.  I’m also glad that I got to see such impassioned speeches against the flag, such as the one delivered by South Carolina state representative Jenny Horne during the chamber’s debate on taking down the flag.



I also think that the Confederate battle flag is a poor choice as a symbol of Southern pride and heritage, as so many Southerners have claimed over the decades.  I believe that if Southerners need a symbol of their pride and their heritage, they should choose one without the treasonous and racist associations of the banner that Robert E. Lee flew in battle.  

In fact, the Confederate flag has fallen into such infamy that it is being withdrawn from other contexts as well.  Most remarkably, the basic-cable rerun channel TV Land has pulled episodes of The Dukes of Hazzard from its schedule because the flag is painted on the roof of the show’s featured car, nicknamed “The General Lee.”  I’m not sure that I would go so far as to pull The Dukes of Hazzard from a TV schedule just for showing the Confederate flag (as opposed to, say, pulling it for being a stupid show), but I’m glad that the culture at large now agrees with me and no longer views the flag as a benign symbol.  Others are being pressured to stop displaying the flag as well, such as Kid Rock, who features the flag at some of his concerts.  Whether Kid Rock or any other individual flies the Confederate flag as part of their self-expression is their decision to make.  What’s important is for the flag to stop being flown as part of state or federal governmental pomp.  

As I said above, removing the Confederate flag from the halls of government won’t magically make racism disappear, and there are other struggles ahead against racial disparity.  But removing the flag from South Carolina state grounds is a good first step.  Some may say that the Confederate flag is “only” a symbol, but symbols can be powerful.  If the states opposed to desegregation in the 1950s and ’60s thought that the flag was important enough to fly in order to express their belief in racial inequality, then the flag’s removal becomes a symbol of racial equality, and that is important, too.  

Friday, June 12, 2015

Film Noir: The Darkness Returns

Jane Greer and Robert Mitchum in ‘Out of the Past’ (1947)

Okay, I’ll spill.  I promised four long years ago to write some follow-up posts on film noir after my first one, saying what I think does and does not make a movie “noir.”  Well, time got away from me like an escaped con high-tailing it from the heat.  And I didn’t think that I had very much to add to Alain Silver and Elizabeth Ward’s explanation of why they excluded gangster films, period pieces, and comedies from Film Noir: An Encyclopedic Reference to the American Style.  Plus, after several blogposts about what does or does not constitute a particular genre, I started feeling like a member of the genre police.  Still, I thought that a few more ramblings from me about film noir (unlike the tales told by the movies themselves) wouldn’t kill anyone. 

First, one reason why so many film buffs have so many different definitions of what film noir is and isn’t is that the concept of “film noir” was established virtually after the fact.  French critics in the late 1940s assigned the label film noir (‘black film’) to a number of American movies that these critics saw as darker and more cynical than the typical Hollywood fare.  The filmmakers who produced these movies didn’t see their offerings as related (except in the most obvious ways, of course) and therefore didn’t see any need to ensure that any of these films possessed one attribute or another. 

In his excellent book More Than Night: Film Noir in Its Contexts (which I recommend to any reader of an academic bent), James Naremore writes that “film noir” is an idea more than it is a body of film texts.  So, “film noir,” in this view, can mean anything that anyone wants the term to mean.  Moreover, Naremore points out that when French critics first applied the label “film noir” to American movies, they also attached it to non-crime motion pictures, such as Billy Wilder’s The Lost Weekend (1945), and only later was the term seen to apply exclusively to crime films.  So, the term itself has evolved over time, and it will probably evolve some more, making any attempt (like this one) to ascertain a hard-and-fast definition of “film noir” a fool’s errand, much like trying to determine the identity of the first rock & roll record. 

At the same time, if the mantle of “film noir” can be applied to anything, that renders the term virtually meaningless.  If you type the phrase “best noir films” into Google, a row of movie posters for works described as such around the Web appears at the top of your screen.  In addition to such widely accepted noir titles as Billy Wilder’s Double Indemnity (1944), Jacques Tourneur’s Out of the Past (1947), and Joseph H. Lewis’ Gun Crazy (1950), there appears a poster for Ridley Scott’s 1982 science-fiction film Blade Runner.  Is Blade Runner a true example of noir?  If so, why?  Yes, Blade Runner has many of noir’s trappings: the relentless investigator, the hardboiled voiceover narration, shadowy photography, etc.  But is this enough?  If a category of film can encompass both Gun Crazy and Blade Runner, is that category helpful?  Let’s take a closer look. 
 
Humphrey Bogart and Lauren Bacall in ‘The Big Sleep’ (1946)

In my inaugural essay, I refer to film noir as a subgenre.  I realize now that isn’t the word that I was looking for.  Noir films can come from any crime genre: a number are whodunits (The Big Sleep, Black Angel, etc.), suspense thrillers (Sleep, My Love; The Window; Alfred Hitchcock’s works, etc.), and gangster films (most notably, White Heat, which, while not a “classic” rise-and-fall story, is still about a gangster).  So, film noir is something that permeates genres, not a subset of one.  Therefore, I think that we should retire the word “genre” and call noir something else.  Since film noir is a vague concept, I can’t think of anything better than the equally vague word “cycle.”  Film noir — something that I think lasted only in American-centered crime movies from the 1940s until the end of the 1950s — was a collection of styles and motifs that evolved, flourished, and then ran its course.  From here on out, noir is a “cycle,” not a “subgenre.”

My earlier definition of film noir, for the most part, still holds: “a specifically Hollywood [or American-centered] crime drama made sometime between the mid-1940s to late 1950s, characterized by cinematography with shadowy low-key lighting and an urban-inflected story with the strong potential to unnerve its audience.”  The key phrase is “unnerve its audience.”  The best noir films seem to pose some kind of existential dilemma to the audience.  The best tell stories that, at least for a moment, unmoor the audience from a sense of moral certainty and a sense of a steady place in the world around them.  Silver and Ward say that one of film noir’s most “consistent” attributes is the paranoid protagonist.  They illustrate their point by quoting dialogue spoken by detective Bradford Galt (Mark Stevens) in The Dark Corner (1946): “I feel all dead inside.  I’m backed up in a dark corner, and I don’t know who’s hitting me.”  Silver and Ward write:

With its simple graphic language, Galt’s statement captures the basic emotion of the noir figure.  The assailant is not a person but an unseen force.  The pain is more often mental than physical: the plunge into spiritual darkness, the sense of being “dead inside.”  For Galt in his dark corner the mere fact of being outside the law is neither new nor terrifying.  It is the loss of order, the inability either to discover or to control the underlying cause of his distress, that is mentally intolerable.  (p. 4)

This component of uncertainty — however fleeting or however weakly contradicted by the Production Code-approved happy endings — is key.  If a 1940s-’50s crime drama doesn’t do something to unsettle the audience, aficionados are unlikely to embrace the film as an example of noir.  In a DVD review of the by-the-numbers police procedural Union Station (1950) for the magazine Sight & Sound, Tim Lucas says:

True noir is something specific, tales of existential entrapment, drenched in irony and fatality.  Films such as Union Station — monochromatic tales of trenchcoated dicks and sadistic criminals staying resolutely on their own sides of the moral fence in a world where good wholesomely prevails — cry out for a category all their own.  So why not call them ‘near-noir’?  (Sight & Sound, XX, 10, p. 88)

Sounds good to me.  One film that I propose would be better branded as “near-noir” is a title often extolled as an exemplar of the film-noir cycle: Jules Dassin’s The Naked City (1948).  Critically praised for, among other things, its pioneering use of location photography, The Naked City is often one of the first titles mentioned as a pre-eminent specimen of the cycle.  However, there’s little sense of moral ambiguity in Dassin’s film.  It’s a straight-ahead police procedural starring Barry Fitzgerald as an avuncular police investigator whose twinkling presence soothes rather than unsettles.  His younger plainclothes sidekick, played by Don Taylor, is likewise uncomplicated: the biggest moral quandary he faces is a boys-will-be-boys problem with his young son at home, and his pinup-worthy wife (Anne Sargent) suggests that all is basically well within the household.  (Is there any doubt that such a blissfully wedded and photogenic couple would have great sex?)  In short, there’s nothing about The Naked City that implies any ethical murkiness: we know who the good guys are and who the bad guys are, and justice prevails.  Why, then, do so many movie-savvy critics regard The Naked City as a film noir? 


Mark Stevens (right) in ‘The Dark Corner’ (1946)

One point of contention among noir enthusiasts is whether or not a particular movie succeeds in unsettling its audience and, if so, to what degree.  Two pictures often labeled as film noir are crime dramas with strong racial themes: Joseph L. Mankiewicz’s No Way Out (1950) and Samuel Fuller’s The Crimson Kimono (1959).  However, from where I stand, these two anti-racism tracts take such pains to paint their minority co-leads as exemplars of all that is right and good (Sidney Poitier in the former and James Shigeta in the latter) that this leaves very little room for moral ambiguity or psychological dislocation.  So, I have great difficulty accepting No Way Out and The Crimson Kimono as examples of film noir.  But I’m sure that other movie mavens would disagree with me. 

Similarly, if there is anything else about a noir-era crime film that intervenes between the audience and an inchoate sense of dread, such a movie would have a hard time being seen as part of the cycle.  Silver and Ward list some elements that would likely keep the audience at arm’s length from the “true” noir experience.  Building on their observations, here are what I see as the other necessary requirements for film noir:

A crime: Film noir is, first and foremost, a type of crime drama.  The element of crime decisively ruptures the veneer of a placid, morally secure society, and this rupture usually snowballs into noir’s murky interrogation of humanity’s dark side.  So, if no criminal conduct is present in a movie, it’s not a film noir.  For all of its pioneering narrative and visual stylistics that would eventually be absorbed by film noir, Orson Welles’s Citizen Kane (1941) isn’t an example of the cycle: no crime is committed.  On the other hand, such a requisite crime may be large or small: it may be a vicious murder; it may merely be a robbery that is set right before it is discovered, as in The Steel Trap (1952); it may be only the nominal “kidnapping” of a child in the next hotel room, as in Don’t Bother to Knock (1952); or it may be a frame-up and an implied murder at the end, as in Sweet Smell of Success (1957).  Any crime will do.  But no crime, no film noir. 

John McGuire (left) and Peter Lorre in ‘Stranger on the Third Floor’ (1940), which many critics consider the first film noir

A film made during the 1940s or 1950s: While some commentators have seen so-called “neo-noir” films of later decades as a direct extension of film noir into the present day, most critics agree that the “classic” period for film noir lasted only from the 1940s to the 1950s.  As Foster Hirsch puts it in Film Noir: The Dark Side of the Screen:

Film noir erupted in full creative force during a comparatively concentrated period.  In an early and influential article, “Notes on Film Noir” (1972), Paul Schrader places its outer limits from The Maltese Falcon in 1941 to Touch of Evil in 1958.  In a more strict dating, Amir Karimi, in Toward a Definition of American Film Noir, limits the period from 1941 to 1949.  Later critics suggest that the true heyday of noir lasted only a few years, from Wilder’s Double Indemnity in 1944 to the same director’s Sunset Boulevard in 1950.  But the long-range view, with noir extending from the early forties to the late fifties, is the most sensible, for the crime films of this period are noticeably different in theme and style from those made before and after.  
Films noirs share a vision and sensibility, indicated by their echoing titles: No Way Out, Detour, Street with No Name, Scarlet Street, Panic in the Streets, The Naked City, Cry of the City, The Dark Past, The Dark Corner, The Dark Mirror, Night and the City, Phenix City Story, They Live by Night, The Black Angel, The Window, Rear Window, The Woman in the Window, D.O.A., Kiss of Death, Killer’s Kiss, The Killing, The Big Sleep, Murder[,] My Sweet, Caught, The Narrow Margin, Edge of Doom, Ruthless, Possessed, Jeopardy.  These wonderfully evocative titles conjure up a dark, urban world of neurotic entrapment leading to delirium.  The repetition of key words (street, city, dark, death, murder) and things (windows, mirrors) points up the thematic and tonal similarities among the films.  (p. 10)
 
The largest consensus among movie commentators that I’ve seen seems to be that the first film noir is Boris Ingster’s Stranger on the Third Floor (1940 — with its European director, its “wrongly accused murderer” story, its expressionistic dream sequences, and its strong suggestion of sexual desire), and that the cycle ends with such unease-inducing films as Robert Wise’s Odds Against Tomorrow, Irving Lerner’s City of Fear, and John Cromwell’s The Scavengers (all 1959). 

As I said in my first essay, film noir was largely shaped by the constraints of the Hollywood Production Code, a sanitizing set of rules which compelled filmmakers merely to imply disturbing issues (such as losing one’s sanity or the desirability of social transgression) between the lines of a censor-approved optimistic story.  This created a disconnect between the disturbing themes and the movies’ reassuring veneer, a disconnect that fragmented the perceived wholeness and self-containment of the filmic text.  By 1960, the weakening grip of the Production Code meant that disturbing, impolite themes no longer needed to be hidden, no longer ran the risk of potentially bursting the bounds of a bowdlerized story.  By the time Alfred Hitchcock made Psycho in 1960, the film’s openness about such heretofore verboten themes as adultery, non-marital sex, openly depicted gender ambiguity, all-but-shown nudity, and the grisly gore of murder removed the need merely to hint at their existence between the lines of a sanitized movie, thus eliminating the danger of fracturing the film via such suggestive indirection.  So, like many others, I set the timeframe of “true” film noir between 1940 and 1959. 
 
Orson Welles and Rita Hayworth in ‘Lady from Shanghai’ (1947)

American protagonists or an American milieu:  Film noir intrigues its audience because it questions the optimism — and, some would say, the naïveté — of the American dream and the American mythos.  Noir films are stories of moral scarcity in the land of plenty.  This is what gives film noir its disquieting edge.  So, a film noir must either be set in the U.S. or be about Americans living abroad, such as Charles Vidor’s Gilda (1946), Carol Reed’s The Third Man (1949, a British film), and Jules Dassin’s Night and the City (1950).  Lewis Milestone’s Arch of Triumph (1948) tells a sinister story of intrigue with low-key lighting and high-contrast black & white photography, but its French (and wartime) setting and French characters shield it from any unsettling implications for an American audience.  Two films often associated with noir are Fritz Lang’s M (1931) and Luchino Visconti’s Ossessione (1942), but since these are European productions with European characters and European content (German and Italian, respectively), they don’t fit the bill for noir.  If a film noir is going to have a non-American protagonist, the setting should still be in or around the United States, as in Lady from Shanghai (1947), The Other Woman (1954), and Touch of Evil (1958).

A contemporary setting: To really shake up an audience, a film should make viewers feel that their sense of security could be pulled out from under them at any moment.  When a film is set in the recognizable past, it loses this aura of urgency.  I say “recognizable” past because films set in the recent past (e.g., Double Indemnity [1944] is set six years before the movie was made, probably to avoid any reference to World War Two) are usually indistinguishable from films with a here-and-now setting and don’t have this problem.  Therefore, a crime film like Hangover Square (1945), with its Victorian London setting, reassures the audience that its unpleasant story is safely secured in the unreachable past — have no fear.  For this reason (and its English characters), Hangover Square would not be considered noir. 

However, one period piece is often cited as an important film noir: Charles Laughton’s The Night of the Hunter (1955), set in the 1930s, some 20 years in the past.  This period setting, the Southern Gothic trappings, and Robert Mitchum’s flamboyant take on the lead character cushion the audience from any sense of dread caused by the morally ambiguous plot or the shadowy, low-key lighting.  As Silver and Ward put it: “[T]he period context [in the film] insulates [any noir] elements, as well as perverse sexuality or character alienation, and mitigates the immediacy of their impact” (p. 330).  So, despite its canonization, I don’t regard The Night of the Hunter as noir. 

No supernatural story element: A story instigated by a magical or paranormal problem can easily be resolved by a magical or paranormal solution.  A film noir should give its audience the sense that a recognizable, real-life, not easily rectifiable dilemma may be just around the corner.  A movie featuring an out-of-this-world problem cushions any sense of immediacy, any sense that the viewer might soon face the same problem.  So, for all of their noir-ish trappings, a horror film like Cat People (1942) and a science-fiction movie like Invasion of the Body Snatchers (1956) don’t count as the real deal.  (I hope that I have now given my reasons why Blade Runner, a science-fiction film from the 1980s, isn’t a film noir.) 

Joseph Cotten and Marilyn Monroe in ‘Niagara’ (1953)

Black & white photography?: And speaking solely for myself — and if you follow my blog at all, you could probably guess this — I prefer a film noir to be in black & white.  Some color films are championed as film noir because of their quasi-expressionistic use of a many-pigmented palette.  Films frequently held up as color noirs include John M. Stahl’s Leave Her to Heaven (1945), Henry Hathaway’s Niagara (1953), Samuel Fuller’s House of Bamboo (1955), Raoul Walsh’s The Revolt of Mamie Stover (1956), Allan Dwan’s Slightly Scarlet (1956), and Alfred Hitchcock’s polychrome productions of the 1940s and ’50s.  But I’ve only seen a few of these movies.  When I’m in the mood for film noir, I want to see the shadowy patterns on the screen shaped by the interplay of blacks, whites, and grays.  These are the kinds of movies that come to mind when I hear the words “film noir.”  However, I wouldn’t want to rule out the possibility of a noir film shot in color.  While such a movie wouldn’t be my first choice when I’m in the mood for a film noir, if color can abet feelings of unease or disquiet in a crime drama, I would be interested to see how it’s done.  A film noir in color is like life on other planets: it’s not something I’m likely to see anytime soon, but I wouldn’t want to say it doesn’t exist.  

Sunday, June 7, 2015

Why Bill Maher’s ‘New Rule’ Will Fall on Deaf Ears



I enjoyed Bill Maher’s tirade on his show Real Time Friday night, during its “New Rules” segment, about the ridiculous notion of Christians being persecuted in the United States.  He started off by quoting a number of influential conservatives on the subject of supposed Christian oppression and showing just how over the top their words were. 

Rick Santorum says that the treatment of Christians in America is so bad, we should keep in mind Nazi Germany: “…where you go from Christians — Jews, obviously, but also Christians — being not just persecuted but put to death.”  Again, 70% of America is Christian.  Who’s going to put them to death?  The Hindus? 

Yes, once again, some conservative Christians are using hyperbolic language that imagines a slippery slope from a loss of Christian privilege to mass martyrdom.  The idea is ridiculous, and I was glad to hear Maher (as usual) skewer such egregious overstatement.  But as spot-on as Maher’s comments were, I know that they will, alas, not get this particular brand of conservative Christian to reconsider their claims.  For I am certain that a conservative Christian (CC) will accuse Maher of quoting Santorum and company out of context.  

CCs look at the world differently than a lot of other people do.  To them, their faith isn’t just something that they practice on Sunday and then compartmentalize to live and work in the secular world for the rest of the week.  CCs see their faith as pervading their entire life, especially the moments that they don’t spend in church.  For this reason, they see everything they do as an extension of their religion, and if anything compels them to do something they believe is against their faith, they will protest against doing it. 

CCs see themselves as put upon for a variety of reasons, but the two most prominent at the moment are the growing rights of LGBT people and the Affordable Care Act’s contraception mandate, both of which they feel encroach on their moral convictions.  When Ted Cruz says, “There’s no room for Christians in today’s Democratic Party,” what he likely means is that there is “no room” (actually, there is) for Democrats against marriage equality and against Obamacare (among other issues).  Of course, that’s a narrow definition of “Christian,” but it gets the red meat delivered to a conservative political audience. 

Research increasingly suggests that sexual orientation is innate, so a gay person’s homosexuality is part of who he or she is.  Consequently, when someone discriminates against a gay person, the government more and more sees that as prejudice against an individual for something that can’t be controlled.  However, many CCs say that they don’t discriminate against gay people as individuals but against “the homosexual lifestyle,” a lifestyle that to them is manifested by immoral acts.  In this way, CCs view homosexuality as something gay people do, not something they are.  For this reason, CCs bristle at comparisons of the gay-rights movement to the racial-equality movement of the 1960s. 

So, when a CC is asked to do something that (in however small a way) furthers the acceptance, equality, or visibility of LGBTs, they see that as an infringement upon their religious beliefs.  If the government has mechanisms in place that penalize anyone for discriminating against gay people, religious conservatives see that as the heavy hand of government forcing a person to violate his or her faith.  Devout right-wingers probably think about instances like this when they use the term “anti-Christian fascism.” 

The same thing goes for the Affordable Care Act’s requirement that large employers cover certain forms of birth control (some of which opponents regard as abortifacients) for their female employees: this kind of scenario is seen as the government “coercing” a business owner of faith who objects to contraception to violate his or her religious beliefs.  That perspective gained legal standing in the Supreme Court’s Hobby Lobby decision, which affirmed the idea that the owners of a closely held business may see the business as an extension of their faith, a faith not merely professed in church.

When Sean Hannity speaks of the “liberal” media as anti-Christian, he probably means, in part, the information industry’s recent acknowledgement and dissemination of the views of the so-called “New Atheists,” like Sam Harris and Richard Dawkins, whose views CCs find offensive and inflammatory.  He probably also means the news media constantly couching the gains for same-sex marriage as a civil-rights issue for LGBTs and not as a governmental appropriation of an exclusively opposite-sex institution revered by most Christians.  

And meanwhile, efforts to ensure that the government isn’t favoring one religion over another (say, by removing the Ten Commandments from a courtroom) are seen as another governmental attack upon Christians.  While the First Amendment protects most forms of religious practice and speech, CCs feel put upon if they can’t use the government as some kind of vehicle for Christianity (or at least monotheism), so they view the government affirming its secular status as intolerance against religion in general, and Christianity in particular.

If everything is the work of the CC’s deity, then everything is an extension of their religion, and anything that impinges on their everything may be seen to negatively affect their First Amendment religious rights. 

In short, many conservative Christians want to be treated as though their religion is akin to race — or for that matter, to sexual orientation — as something that is inherently part of their biology.  So, CCs strive to portray disrespecting or criticizing religion as something tantamount to racial discrimination. It’s issues like this that conservatives of faith — misguidedly, I believe — think about when they say Christianity is under attack in the good ole U.S.A. 

I wish that there were something to say to this kind of conservative Christian to get them to rethink their view of the non-(devoutly-)Christian world as a place that is (for all intents and purposes) contaminated by sin.  That view is not one that everyone shares, and conservative Christians, in this officially secular society, need to get along with people outside their denomination as best they can — without feeling that doing so violates their faith. 

I wish that there were something to say to this kind of Christian conservative to make them see just how hyperbolic and unnecessary such slippery-slope and argumentum ad Hitlerum rhetoric is.  If there were, then maybe the conservative Christians and the rest of America would at least be on the same page and have something politically sensible to argue about — and maybe even to agree on.