It Seemed Like a Bad Idea at the Time


My first job after college was at a health insurance company, making copies, printing out letters to reject claim appeals, filing, sending out mailings, and answering phones…and those were the fun parts. For this position, across from Westlake Park in downtown Seattle, I’d turned down an offer for the proofreading night shift at a tiny publisher situated in a bad part of town and a position assisting a quadriplegic entrepreneur, which sounded fascinating until every last member of his staff made a point of telling me he regularly insulted them until they cried.

By the time I capitulated and took the health insurance job, I was feeling the strain of hunching over a computer on a flea-infested rug in an apartment without furniture (yes, in the same building where I was later threatened by a shotgun-wielding retiree), living in a city where I knew no one and couldn’t afford to go out. I took a part-time telemarketing gig that involved calling up unemployed people without health insurance and trying to get them to buy season theater tickets. A bubbly blonde actress who started the same night I did began reeling in customers almost immediately, while I rapidly proved myself to be the world’s worst telemarketer. I didn’t even make it through the training, and after two nights on the phone I stopped coming, too humiliated to ask if I would have a paycheck.

I did not have much more aptitude for clerical work than I did for telemarketing. I learned that my high GPA and undergraduate degree did not necessarily mean I could survive in an office without losing my mind. I thought health insurance was ethically wrong – insurance companies profited from people’s vulnerabilities and fears, weaseled out of paying for illnesses, and had an entire department of registered nurses and doctors whose sole purpose was to ensure that treatments were medically necessary, which turned out to be code for finding reasons to deny coverage – but by then I needed the money.

Essentially, my first lesson after graduation was one that most people grow up knowing: idealism is hard to maintain when you need to pay the rent. Though I looked down on my coworkers for their unquestioning loyalty to a company that made its money through the pain and suffering of its customers, I quickly realized that I had fallen into the same moral compromise as the insurance profiteers. I dragged slowly through the filing, stealing chances to read the documents as I went, and was then justifiably chastised for my lack of productivity.

I found a fiction workshop to take at night, but the couple of hours a week devoted to writing withered under the pressure of 40-hour weeks devoted to work I hated. The job was spectacularly dull, but, worse than that, it sucked me into a bottomless pit of self-loathing. My supervisor, a truly kind person trying to do the right thing, began to express concern about my emotional state. One morning, while I was filling in for the receptionist on her coffee break, she told me that my voice was too soft on the intercom and asked me to speak more loudly. When she turned away from the reception desk, my eyes filled with tears. I can’t even answer the phone right, I thought. How am I supposed to get a decent job?

A moment later, though, I had one of those merciful thoughts that from time to time have saved me from myself: What does answering the phone right have to do with anything? A day or so later, I gave notice. My last day was Valentine’s Day, some five months after I’d started, and I celebrated with a date at the Pink Door with a man from the writer’s workshop.

Those five months, however, ended up more than paying for themselves. Because I had played Harriet the Spy with the filing, I understood the labyrinthine rules of insurance games, even as they grew more convoluted in the decades after I graduated from college. I knew definitions, exclusions, preexisting conditions, capitation; I knew the difference between copays, coinsurance, deductibles, and lifetime maxes. I knew that, as people suspected, the regulations were, in fact, meant to give insurance companies reasons not to pay. And, from doing filing in the provider services department, I saw the kinds of malpractice and sexual harassment that the medical establishment would tolerate without serious repercussions. (When, a dozen or so years later, a gynecologist faced criminal charges for drugging and then raping female patients, I thought of those files.)

As I’ve been absorbing today’s news that the Affordable Care Act defied the pundits and survived its constitutional challenge in the Supreme Court, I have thought a lot about my insurance years. Yesterday, Sarah Palin repeated her erroneous and much-debunked claim that “death panels” would determine whether patients received care. Meanwhile, those who would most benefit from health care reform misunderstand the legislation, and those who oppose reform actively distort and mischaracterize its provisions. Even as the quality and availability of American health care have fallen below those of every other advanced Western nation, factual information has failed to counter ideological misrepresentations.

One reason for the public’s confusion may be that the law’s advocates have not adequately explained the ACA or persuaded citizens of its benefits. I think the main reason, though, is that insurance in general is difficult to understand unless you have spent substantial time learning how it works. The whole system is a sleight of hand, meant to fool the unwary, and, despite the victory in the Supreme Court, a large segment of the public is still too easily fooled.

A Failed History of Flight

Nearly a year ago, partway through my recovery from a neck and shoulder injury, my orthopedist waved me out of his office with vague instructions to avoid car accidents, roller coasters, and hang gliding. When I asked for more detail about what might happen if I went hang gliding, he dodged specifics, then fumed, and then finally, when I explained that I had always planned to try hang gliding someday and wanted to know the risks, told me that a bad landing might result in my needing neck surgery.

“Thank you,” I said. “That’s all I wanted to know.”

I can’t say I mind trying to avoid car accidents. I will miss roller coasters, although they are not so important to me that I am willing to risk catastrophic injury. Hang gliding, though, is another story.

For as long as I can remember, I have wanted to fly. I think I was eight years old, inspired by a summer-camp reading of Jonathan Livingston Seagull, when I decided that humans had failed to fly only because they had convinced themselves they couldn’t. With proper concentration and determination, I insisted, I would be the first human child ever to take to the air unaided.

I put myself on a rigorous program of fierce concentration and practice. A friend and I constructed cardboard wings, which we tied to our arms with string, and then ascended the steep hill at the top of the local park and ran down it as fast as we could, our arms outstretched. My friend, who broke her arm attempting a different flight-related activity on her own, missed the next stage of training, which involved jumping off a branch of the biggest California oak tree in the park, then about eight or ten feet off the ground.

Nearly every day that summer, I ran down the hill and jumped off the branch, thinking every time, This time this time this time this time. One day, several weeks into my regimen, I tipped off balance and sprained my ankle when I landed. The sky – or maybe my hopes – had, for the first time, betrayed me.

I had heard that you should get back on a horse after you fall, so I jumped out of the tree one more time, as a farewell to the impossible. Part of me believed that it could not possibly be true that something I wanted so badly could be forever unattainable. I jumped, and gravity, the same as always, pulled me back to earth.

At a fiction reading I attended shortly before The Brief Wondrous Life of Oscar Wao won the Pulitzer Prize, Junot Díaz said, almost as an aside, that all creative people have personal origin myths about their creativity, but they don’t as often have myths about the origins of their inner critics. (He said many amazing things, but most of them have nothing to do with this blog entry.) My own creation myth is one of flying and falling, making uneasy negotiations with hope and then having to accept my own limitations.

What did I learn from my failed experiment?

Basically, I learned almost nothing. Over the years, I have learned nothing over and over again. I can’t watch even the scruffiest sparrow dip in and out of shrubs without feeling the same old longing for flight. I wrecked my knee in a dance class, then danced again; when I fly in airplanes, I always ask for the window seat so that I can imagine what the wind would feel like if my arms were wings. All I have learned is that hope is very, very difficult to kill.

If being what I am not is impossible, though, being what I am is not all that easy, either, nor is it really as prosaic as it sounds. The first year I had a vegetable garden (in Seattle’s P-Patch community gardens), I was astonished that plants could so thoroughly be themselves. The first thing a carrot seed did was send down a long, fragile root; the first thing a lettuce seed did was make leaves. At the time I started gardening, I was recovering from knee surgery and had not yet been cleared to go to dance class. I was just starting to know what my reconstructed knee would be able to do, and just beginning to understand that its abilities would fluctuate from day to day. It is humbling to become the student of a radish, but the plants had an insouciant self-acceptance that I did not.

I hope that eventually I will have the opportunity to decide whether to try hang gliding despite the risks. One side of the balance represents fulfillment of a dream of flight; the other side offers a promise of seeing the world – without embellishment – precisely as it is. Both possibilities, in the end, seem equally profound; and both still seem almost, but not quite, beyond my reach.

Pete and Repeat Were Walking Down the Street

The year I finished graduate school, I moved into a squat, well-proportioned brick building on a street in the Wallingford neighborhood of Seattle. Within a couple of years, the entire building needed to be repointed, a term I’d never heard but which meant removing and replacing every bit of mortar around every single brick. A couple of very nice men spent months hauling massive sacks of concrete up scaffolding and painstakingly repairing the masonry.

The nice men had a radio, and all day the radio played “oldies” from the sixties and seventies. One Saturday afternoon, with the radio blasting right outside my window, I vowed that if in twenty years I was still listening only to the exact same music I had listened to in my teens, someone should just kill me right then, since clearly I would be half-dead already. (I realize that sounds extreme, but those are the words I said to myself.)

An editorial by David Hajdu (an associate professor of journalism at Columbia) in the New York Times hypothesized that so many music greats turned 70 around 2011 because musical tastes form at age fourteen, and when 2011’s seventy-year-olds were fourteen, they listened to Elvis. Elvis’s primacy during these septuagenarians’ formative years, the theory goes, seeded an enchanted forest of newly planted musical trees. It’s an interesting idea, but I disagree with the article’s premise that taste and identity are basically a done deal by fourteen. The best practitioners of every profession continue to evolve and pursue what lies beyond their comfort zone – and the artists, writers, and musicians whose work I most respect have reimagined themselves over and over again.

The idea that we mature into a permanent shape bothers me not only in terms of artistic potential but also human potential. People love to say that people can’t change, but what they actually mean is that true change demands the kind of boundless devotion it takes to compete in the Olympics; rather than admit they don’t want to expend that energy, they prefer to believe the task is impossible. Having made profound changes in my own life, and having had the honor of seeing a large number of friends and students also make profound changes, I know that the naysayers are just plain wrong: The quest for change may take every bit of strength you have, but it is always, always possible.

All this is to explain why I believe that replaying the broken records of my youth is a form of premature death. Though I am relatively tolerant of stasis in others, I oppose it in general and loathe it in myself. I actually enjoy music from all time periods and nearly all genres; I just refuse to write “THE END” on the last page of my musical tastes, or any other aspect of myself or my life, at least while I have any say in the matter.

Calcification, on the other hand, has no place in the classroom, politics, or the arts – to name just three areas where I abhor it – which may be why I am one of the only movie lovers in America who disliked Woody Allen’s film Midnight in Paris enough to turn it off partway through.

My dislike of the film probably says more about me than about the movie. On Rotten Tomatoes, I saw only one negative review, although I suspect that most of the people who watch Woody Allen movies these days are already loyal fans (maybe they saw his films at fourteen years old and couldn’t let them go).

I have felt wary of Woody Allen ever since his 1992 affair with his partner Mia Farrow’s adopted daughter Soon-Yi, who was thirty-five years younger than he was, and whom he married in 1997. More to the point, I also found his shtick outdated, annoying, not particularly funny, and a sign of a general failure to adapt to changing times. As he continued to play the same character in film after film, always paired romantically with a young, beautiful woman, I felt – even though I have friends who will consider my opinion heresy – bored.

Once Allen scaled back on acting in his films and focused the main action on younger Hollywood hotties, I regained some interest…for a while. Javier Bardem, Patricia Clarkson, and Penélope Cruz are always interesting, so I enjoyed Vicky Cristina Barcelona, for example. I had high hopes for Midnight in Paris: good reviews, Owen Wilson, a lively cast of expat literary icons from the 1920s.

From the opening frames, with the same jazz track, black screen, and white font Allen has used for as long as I can remember, followed by a series of shots of Paris tourist landmarks that flashed and lingered in tempo with the soundtrack, I felt irritated. My annoyance escalated as the screen filled with bickering characters: Gil, a screenwriter writing a novel about the owner of a nostalgia shop, a premise that has about as light a touch as Versailles, which the characters are touring as Gil explains his idea; Inez, his shallow and materialistic fiancée; her parents, who are even shallower and more materialistic than Inez; and Inez’s pretentious, blowhard intellectual friend.

Every scene reminded me of being in an elevator with a jabbering neurotic who won’t shut up for love or money. Seeing Owen Wilson mimic Woody Allen’s characters, with their stammering, relentlessly self-referential monologues, I felt like I was watching something not quite obscene, like photos of JonBenét Ramsey or Allen’s own marriage to Soon-Yi. John Lahr, writing in The New Yorker (December 9, 1996), observes, “Allen admits that in fact he was never a nebbish, never that schlub in his stand-up routine.” Along the same lines, Lahr adds, “Allen does not stammer. He is not uncertain of what he thinks. He is not full of jokes or bon mots…”

Of course, it is possible that at some point after I hit the eject button, Allen made some postmodern use of a film with a nostalgic protagonist whose time travels lead him to 1920s writers who seem to be two parts artifice to one part history, in a Paris obscured by Gil’s dreams of Paris, just as the opening frames seem to be a visual riff on tourist postcards. The trouble is that this postmodern postcard arrives three decades late, and that Wilson, playing Gil, comes off as an imitation of a persona that was itself an artifice.

If the artifice had been interesting and the characters more relevant, I would have kept watching. In the shadow of the uproar over Allen’s relationship with Soon-Yi, again in The New Yorker, Adam Gopnik argues, “By the early eighties, the distance between the comedian and his audience was becoming more noticeable, and the tension between high-modern fastidiousness and Upper West Side middlebrow life was becoming more and more attenuated…It was said that the problem was that the comedian had been cut off from real life. Woody himself had lived in a penthouse for a long time, and that’s not a place from which to make shrewdly gauged social observations.”

Rather than conceding this disconnect, Gopnik goes on to blame – surprise! – feminism, lamenting that it has led the public to unfairly condemn things like lechery and “even mildly predatory desire…Our present situation is bad for everyone, but it is cruelly bad for Woody Allen. The loss of lechery as an acceptable emotion robbed him of his comic subject.” I’ve seen this argument before, from the pedophile Humbert Humbert, in Lolita. There is a lot I love about The New Yorker, but the sexism embedded in such sentiments – and especially the gender stereotyping in the cartoons – seems to me not only anachronistic but alienating. So, Gopnik’s argument implies, if I didn’t find Midnight in Paris funny, it’s the feminists’ fault.

With all due respect, it wasn’t the unholy stew of Zelda Fitzgerald, gold-digging Inez, and snarky Mom that made the movie too interminable to watch. It was seeing the same themes, characters, and stereotypes (which were funny a couple of decades ago) regifted and recycled. Intellectually, I can appreciate a nostalgic film about nostalgia, but it is possible for a piece of art to become too much of what it purports to satirize.

It’s like the old riddle: “Pete and Repeat were walking down the street. Pete fell down and broke his feet. Who was left?” “Repeat…” I’ll say again that my reaction probably says at least as much about me as it does about the film. So, in that spirit, Pete, I ditched Repeat – get back on your feet.


Pass on the Robo-Pet…and Hold the Animals

In a gorgeous scene early in Marilynne Robinson’s novel Gilead, children bring a litter of kittens to a river and baptize some of them before an adult stops them. The kittens all find homes, but nobody remembers which were baptized. The narrator, a minister, always wonders whether there is any theological difference between the baptized and unbaptized kittens. I don’t find this dilemma troubling at all, since I have never for a moment doubted that animals have souls.

My fifth-grade teacher, who was also a former minister (and, as far as I’m concerned, a saint as well), tried unsuccessfully to convince me otherwise. He’d filled his classroom with assorted fauna – rats, tarantulas, gopher snakes, chicks, lizards, crawfish – and allowed us to hold them during lessons. I was nonchalant about snakes, couldn’t bring myself to hold a tarantula, and fell in love with the rats, especially after reading Robert C. O’Brien’s classic, Mrs. Frisby and the Rats of NIMH.

Trying to turn me into a good scientist, he showed me several science books that presented it as fact that the capacity to feel emotion is one of the things that distinguishes humans from animals. He taught me a word that felt clunky and collegiate on my tongue: anthropomorphize, to attribute human qualities to animals and other things that are not human.

His contention that the rats had no personalities and no emotions was the one thing he ever told me that I didn’t believe. Research since then has suggested that I was right: the divide scientists drew between humans and animals was artificial and anthropocentric. One of the experts on animal emotion, Jaak Panksepp, a professor and researcher at Washington State University, says, “people don’t have a monopoly on emotion; rather, despair, joy and love are ancient, elemental responses that have helped all sorts of creatures survive and thrive in the natural world.”

The human tendency to anthropomorphize inanimate objects, however, is also well documented. In a famous 1960s experiment I studied in college, students confided in a computer program, ELIZA, that spat out responses based on Rogerian therapy. Many participants mistook ELIZA for a human and grew emotionally attached to “her.” More recently, a New York Times editorial by branding consultant Martin Lindstrom contended (controversially and possibly falsely) that brain scans revealed that “the subjects’ brains responded to the sound of their phones as they would respond to the presence or proximity of a girlfriend, boyfriend or family member…they loved their iPhones.”

The implication here is something like, “Humans anthropomorphize both objects and animals; objects don’t have emotions; therefore it is likely that animals don’t have emotions either.”

For those of us who spend time around them, though, it seems glaringly obvious that animals have an emotional life. Pets clearly show jealousy, anger, affection, and joy; and when their owners feel strong emotions, they rarely fail to appear with a cold nose, a warm tongue, or a snuggle. Our pets declare their personalities and desires as plainly as any child, and they share the child’s impulse to touch and give comfort in the face of human emotions they don’t understand. Researchers point out that only humans have the ability to think about their own emotions…at least as far as they know.

And then there’s Amy Hempel’s story, “In the Cemetery Where Al Jolson Is Buried,” about a woman’s failure to acknowledge her best friend’s terminal illness, which ends with this devastating passage about a real-life chimpanzee who has been taught sign language:

I think of the chimp, the one with the talking hands.

In the course of the experiment, that chimp had a baby. Imagine how her trainers must have thrilled when the mother, without prompting, began to sign to her newborn.

Baby, drink milk.

Baby, play ball.

And when the baby died, the mother stood over the body, her wrinkled hands moving with animal grace, forming again and again the words: Baby, come hug, Baby, come hug, fluent now in the language of grief.

I have never been able to read this story – or, I have just discovered, write about it – without crying.

So, when I read Clay Risen’s brief article in the New York Times Magazine, citing “Robo-Petting” as an innovation we can anticipate in the next four years or so, I couldn’t help thinking that the idea was not just disgusting but a poor approximation of the love of a good animal:

Petting a living animal has long been known to lower blood pressure and release a flood of mood-lifting endorphins. But for various reasons — you’re at work, or you’re in a hospital, or your spouse is allergic to dogs — you can’t always have a pet around to improve your mental health. So researchers at the University of British Columbia have created something called “smart fur.” It’s weird-looking (essentially just a few inches of faux fur) but its sensors allow it to mimic the reaction of a live animal whether you give it a nervous scratch or a slow, calm rub. Creepy? Yes. But effective.

Right. I prefer my “smart fur” on a live animal, thank you very much…and I would be willing to bet that animals do, too.