![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxE-tf8lW1Ftu6-WIpmcU-Cs0lfFCY0Fr0hPt8qiwRZvdMYdPQiT9nwaR-Pz6OvuHRDSZ93MV6Vxu7-_9Eb74YJWt8tLhKFf2lhmJVJ_yU4nOjSKJ3Z9iV0GS4wAsr-qOs45REMh61pdQ/s320/120_minute_hate.jpg)
Jonah Lehrer starts with an extended lamentation from New Yorker film critic David Denby on the state of basic film narrative:
"State of Play," which was directed by Kevin Macdonald, is both overstuffed and inconclusive. As is the fashion now, the filmmakers develop the narrative in tiny fragments. Something is hinted at - a relationship, a motive, an event in the past - then the movie rushes ahead and produces another fragment filled with hints, and then another. The filmmakers send dozens of clues into the air at once, but they feel no obligation to resolve what they tell us. Recent movies like "Syriana," "Quantum of Solace," and "Duplicity" are scripted and edited as overly intricate puzzles, and I've heard many people complain that the struggle to understand the plot becomes the principal experience of watching such films.
He agrees somewhat with Denby's jeremiad, then he brings the science:
Here's the requisite scientific reference, which comes from a study led by Rafael Malach. The experiment was simple: he showed subjects a vintage Clint Eastwood movie ("The Good, The Bad and the Ugly") and watched what happened to the cortex in a scanner. To make a long story short, he found that when adults were watching the film their brains showed a peculiar pattern of activity, which was virtually universal. (The title of the study is "Intersubject Synchronization of Cortical Activity During Natural Vision".) In particular, people showed a remarkable level of similarity when it came to the activation of areas including the visual cortex (no surprise there), fusiform gyrus (it was turned on when the camera zoomed in on a face), areas related to the processing of touch (they were activated during scenes involving physical contact) and so on. Here's the nut graf from the paper:
"This strong intersubject correlation shows that, despite the completely free viewing of dynamical, complex scenes, individual brains "tick together" in synchronized spatiotemporal patterns when exposed to the same visual environment."
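The "intersubject correlation" the paper relies on is, at bottom, just the correlation of brain-activity time courses across different viewers of the same film. Here's a minimal sketch of the idea using made-up data (not the study's actual pipeline, which involves full fMRI preprocessing):

```python
import numpy as np

# Hypothetical illustration of intersubject correlation (ISC):
# each "subject" is one voxel's activity over time while watching the same film.
rng = np.random.default_rng(0)

stimulus = rng.standard_normal(200)  # shared film-driven signal
subjects = [stimulus + 0.5 * rng.standard_normal(200) for _ in range(5)]

# ISC for this voxel: average pairwise Pearson correlation across subjects
pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]
isc = np.mean([np.corrcoef(subjects[i], subjects[j])[0, 1] for i, j in pairs])
print(round(isc, 2))  # high ISC: the brains "tick together" under a shared stimulus
```

When viewers share a strong stimulus-driven signal, the pairwise correlations are high; regions driven mostly by each viewer's private, internal activity (like the prefrontal cortex discussed below) would show low ISC under the same calculation.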
But it's also worth pointing out which brain areas didn't "tick together" in the movie theater. The most notable of these "non-synchronous" regions is the prefrontal cortex, an area associated with logic, deliberative analysis, and self-awareness. (It carries a hefty computational burden.) Subsequent work by Malach and colleagues has found that, when we're engaged in intense "sensorimotor processing" - and nothing is more intense than staring at a massive screen with Dolby surround sound - we actually inhibit these prefrontal areas. The scientists argue that such "inactivation" allows us to lose ourselves in the movie.
Finally, Lehrer floats a trial-balloon hypothesis of his own:
What does this have to do with tricky cinematic narratives? I'd argue that the constant confusion makes it harder for us to dissolve into the spectacle on screen. We're so busy trying to understand the plot that our prefrontal cortex can't turn off. To repeat: this isn't necessarily a bad thing, but it does go against the fundamental experience of watching a movie. It's a formal innovation that contradicts the essence of the form. We can't afford to "lose ourselves" in the movie because we're already lost.
I'm not sure I find this preliminary hypothesis entirely convincing, but I do think it's interesting. What I like about it is that it shifts attention from content to form. So many of the debates in the horror blog-twit pro-am focus on content issues: the divisions between genres, the use of the supernatural, "showing the monster," and so on. It would be interesting instead to consider flicks that, regardless of their content, share the same narrative structure. This isn't to say that content is irrelevant or not worth discussing, but it may be overvalued in our current conversations.