No Props for ‘Bones’ Because #notanactuallivingscientist

No lie, I totally bought this t-shirt.

Bah, it's the end of the term so I'm grumpy. I'm surviving on chocolate, popcorn twists, Dr. Pepper, Tylenol, and coffee. Most of the archaeologists I know or want to know are at the Society for American Archaeology meetings in Vancouver, and I'm dealing with end-of-term insanity. I'm also bombarded by students in my office (which gives me joy and life) who have questions about what classes to take next year, what a career in archaeology or anthropology looks like, and what they need to do if they want to be a forensic anthropologist (!!!) when they grow up. I mention all of this not to solicit your pity nor to brag about how overworked or tired I am, because I hate that ideology too, but just to provide context for the rant that follows.

Fuck 'Bones'. Yes, some anthropologists love that damn show, and there is even an excellent blog by an actual anthropologist (this will become important shortly) that posts on each and every episode, breaking down the good and the bad of the forensic anthropology depicted, but I'm not one of those anthropologists. The American Anthropological Association (AAA) recently shared a story by CNN commemorating the vital role that 'Bones' (and other shows) have played in encouraging females to become scientists, referred to as the "Scully Effect". Before you get all ragey with me, know that I am not angry about female scientists (nor about X-Files, because no, I love the X-Files), but I've decided in my current ragey state to take a stand against encouraging and promoting stereotypical representations of who a female scientist is and what scientists do, as if the ends (people becoming scientists) justify the means (misrepresentation to the extreme of depicting really bad, unethical science).

Hell, I admit that Indiana Jones sparked my interest in archaeology, BUT so did the Nova specials I used to watch with my poppa, as did the stories of anthropologists I read about in National Geographic. See, I'm getting old and I'm starting to become concerned, even as a huge popular culture consumer, about representations of my discipline and of science more broadly. Why do we need to make science palatable in the form of popular culture? Why are there not enough real science shows with real scientists? Do NOT tell me that it's because people won't watch them, that they have to be dumbed down or sensationalized for people to watch, because where have we heard that before? Oh yeah – people won't watch films with female leads or with people of colour leads or with LGBTQA* people in them… but are you kidding me, WE DO! *cough* Rogue One, Hidden Figures *cough* We are begging for diversity on our screens! I bet if you gave us shows with actual living scientists we would consume them greedily too and beg for more.

This concern around actual living scientists is a thing; I know it's a thing because it has its own hashtag, #actuallivingscientist. It became a thing on social media in response to the growing anti-science/science-as-elitist position of the populist movements in the United States, Canada, and Britain (among others), which were inflamed by the re-circulation of the results of a 2013 survey that found most Americans could not name a living scientist. So scientists on social media began using the hashtag to introduce themselves and their work, to challenge stereotypes, to humanize science, and to connect with the public. This response and its associated hashtag are widely accepted because of how they integrate with intersectionality – a clear demonstration of who a scientist is in terms of gender, ethnicity, age, etc., but not only or simply one of those things. Teachers picked up on this, creating #actuallivingscientists boards in their classrooms to show their students that they too could do exciting, interesting work in science – that science is not what they see on their screens, it is not just what they learn in their classrooms, and it is certainly NOT done exclusively by the "old, white dudes" textbooks celebrate.

I know that there is a whole organization devoted to ensuring the use of accurate science in the entertainment industry, which helps combat pseudoscience. This is great, but my concerns around representation remain – pat on the back for consulting an #actuallivingscientist, but have you actually written a role or cast an actor who represents what a scientist is, or are you just doing it because you've realized that people get more out of their experience when it is real (a huge motivating factor driving the use of conlangs. Ed: another post!?)?

Why then am I so angry about 'Bones'? Because I'm told by CNN and the AAA that I should celebrate (mis)representations of people in my field (broadly) because it gets people interested in the field. Sure, I love that people think anthropology or archaeology is cool because they saw something about it somewhere. I love that students take anthropology courses because they saw something they connect with. But I struggle with the letdown, the reality check that being a forensic anthropologist isn't what 'Bones' promised it would be (i.e., more bones, poor access to cool tools, and very few explicit "forensic anthropologist, inquire within" job opportunities). And listen, I'm not saying we can't have our wonderful shows or movies or books; I will not give up my Dr. Dana Scully. I guess I just want my students to be inspired by #actuallivingscientists like Dr. Kristina Killgrove (who won an award for her public outreach), as just ONE example, instead of the fictional Dr. Temperance Brennan, even if she's "based on" Kathy Reichs. This means we need to not only make sure that we have real science in our shows, or celebrate portrayals for the token representations they are, but also argue for actual scientists doing actual science. And don't tell me no one wants it – Bill Nye is coming back!


Thanks, Mayim Bialik!

So Mayim Bialik, whose Blossom nerdery inspired wardrobe choices for me in early teenhood and whose role on some other show I choose to mostly ignore, is pretty smart IRL. She actually has a PhD in neuroscience, which shows somewhat in her recent video, currently being shared by everybody and their dog, on why you shouldn't call grown adult women "girls".

This image is one of many of adult women that came up on pexels.com when I searched for “girls”. WAY TO PROVE MY POINT STOCK PHOTO PPL. 

Bialik bases her argument on the claim that the language we use influences the way we think, briefly name-dropping Sapir-Whorf and encouraging readers to Google that to get an explanation for it. I would advocate against googling those names and that term, though, as you're likely to land in a morass of pseudoscience, possibly getting lost in the blizzard of "words for snow" debates. It can, or should, be taken as nearly axiomatic that, as Bialik says, language matters. It neither comes from nowhere nor does nothing, and if you continue to see language as a neutral descriptor of an objectively existing world, well, I'm not sure this blog is for you.

And in this specific example, Bialik hits on a major issue: women are construed as inferior to men through the not-so-subtle use of language. This language is not the cause of women's inferiority, nor is a shift in word choice the be-all-and-end-all of feminism, but it is meaningful. Taking a term whose primary reference point is small children and applying it to unambiguously adult women, whether they are in a bar (as in the initial example Bialik gives) or acting in power positions (as in the CEO she mentions, or in a recent example I heard, university professors [Ed: Ouch]), is an act of infantilization. The semantics of the word "girl" continue to include not only female, but female + child, and using it repeatedly reinforces the notion that women are not as capable, not as intellectually advanced, and not to be taken as seriously as men. It's a solid four minutes of feminist linguistics in pop culture action, to be honest.

But as the maxim goes, don't read the comments. As I've seen this video shared several times on my social media feeds, I'm coming across some repeated arguments used to counter Bialik's ideas, and they are hitting all my feminist linguist buttons all at once, leaving me needing to set the giant grading pile aside and get some thoughts about them out.

  • We call men “boys” too. Isn’t that the same thing?  It’s true, there are times when grown adult men are called “boys”, but there are definite contrasts between these uses and the ubiquity of calling adult women “girls”. A key aspect is that “boys” is used in contexts where adding the connotation of youthful play or even childishness isn’t seen as an inherent negative – they’re the “boys” on one’s sports team, for example. Bad behaviour among adult males may even be excused using the colloquial phrase “boys will be boys”, where yes, being a “boy” is a bad thing, but paradoxically, that “boyness” is something that we just have to tolerate and doesn’t preclude the male in question from a position of authority or responsibility. It’s also clearly used in a way that distinguishes adulthood from childhood, as in “separating the men from the boys”, which just doesn’t work when you try to feminize the expression into “separating the women from the girls”. That in itself is kind of telling, because Bialik’s whole point is that we erase the separation between women and girls. “Boy” is not generally used to refer to adult men in their regular, everyday lives (except: see the last point in this section), and you don’t hear someone asking to speak to the “boy in charge of this office” in the same way that you would often hear them refer to “the girl at the desk”. The diminishing of women’s authority and capability is generalized, not based on behaviour, and it’s pervasive. Yes, Bialik says it “never” happens. Yes, she’s wrong about that. But no, that doesn’t erase her point or make it okay for you to dismiss everything else she says, and taking her error that way is simply making an excuse not to listen to women.
  • Well, what about "guys"? We use the phrase "guys and girls" for everybody, so isn't that the same? Not so much, no. It's true that this has become a paired set (which reinforces both a binary notion of gender, erasing the many forms of "neither", and also places identity focus on gender as a relevant enough category to use as a standard, necessary differentiator [Ed: Wow, that's a lot happening in a couple of words]), but the connotations of the terms are fundamentally different. Only one of them includes the sense of "small child". When applied to young kids at a school, you don't actually hear "guys and girls" – you hear "boys and girls". Washrooms for male-identified kids aren't labeled "guys", they're labeled "boys". And so on. So we give a substitution for adult (or even teenage) men, but the women's term stays the same. Boys get to grow up and change, while girls don't. See how that's not equal?
  • I’m a woman and I refer to my friends as “girls”. Yep. Stop doing that. This isn’t a matter of “only men treat women like children”. It’s a matter of “women are socially constructed as lesser than and our interests are dismissed and diminished”. It’s pervasive, societal, and structural.
  • There isn’t a better word. Sure there is – “women”. We find some of these terms awkward to use in everyday conversation because we’re not used to using them in everyday conversation. The only way that changes is by habit.
  • Wait a minute, are there really no times when using "boy" to refer to adult men is offensive? I'm glad you asked that, fictional comment writer whom I haven't actually seen, because there is one damn important point that Bialik misses and that I wanted to detach from the earlier points because it deserves to be more than a side note. "Boy" is regularly attached to adult men…if they're black. And in this way, it is clearly infantilizing, diminishing, and reinforcing white supremacy. A quick Google search turns up several discussions of why (see here, here, and here – that last one has a whole bunch of legal discussion and analysis of racism that deserves its own post, but still highlights the basic point). That first linked article reacts to an incident where then-Senator Barack Obama was referred to as a "boy" by a white Republican Congressman, and includes this passionate articulation of the problem with that label:

    it’s the ultimate sign of disrespect, and is often more offensive than calling them the N-word. For years black men were summarily dismissed and treated with disregard. It was as if their stature was diminished when someone white called them a boy. I’ve heard black men describe the hurt and pain of growing up and having someone white call them a boy in front of their own child.

    In this context, "boy" is a means of diminishing, dismissing, and infantilizing specific types of men, of deeming them less than, and of establishing a racial power hierarchy. The semantic and pragmatic properties of "girl" have a lot in common with this dynamic in terms of power (and we should absolutely add discussion of differential usage patterns that emerge based on other lines of privilege and power, including especially race and disability).

  • Why should we care what a TV star thinks? Here's an interesting angle on this discussion, to my mind. Bialik is famous because she's an actress on TV, yes. But she also has a PhD in neuroscience. She's not a specialist on the relationship between language and cognition, but it's close enough to her general area of expertise that she's able to bring that background to bear on her interpretations, in much the same way as I'm doing here. I'm honestly tired of seeing popular posts by, say, Neil deGrasse Tyson where he comments on language or culture in ways that are totally ignorant because that's not even remotely close to his area of expertise (Ed: Seriously, stay in your lane, NdGT. You're so good in your lane), but that are attached to his authority as a scholar and serious thinker, while women's expertise is ignored. So maybe this should be linked as "Dr. Mayim Bialik, neuroscientist, discusses the relationship between language and thought", and I would still critique some of her points, but also – Respect to scholarly women saying scholarly things.

To close this now long and ranty post off, if language, gender, and power are your jam, you can find much better and more authoritative commentary on this and other related issues at debuk.wordpress.com.

Translating “Mansplaining”

This article on The Establishment has been thoroughly linked in the rounds of linguist Twitter (sidenote: my favourite Twitter [ed: wow, you really are a nerd]), and for good reason. It contains several fun and informative things – an account of how useful new terms work, crowdsourcing, and creative multilingual language play. On the one hand, it speaks for itself, but I want to add to a few of its points, and then be a killjoy just for a minute.

  1. The ‘splain morpheme as a wondrous piece of semantic change. While the article covers the origins, meaning, and spread of the term “mansplaining” quite well, it only briefly touches on how productive the “splain” morpheme has become. There are widespread examples of it with any form of dominant identity as the prefix –
    Grateful acknowledgment for this meme goes to Femina Invicta

    whitesplaining, cis-splaining, profsplaining, etc etc etc. It can even be used on its own, as simple “splaining”. Although this Merriam Webster [ed: the go-to dictionary of the resistance…because who knew that would be a thing?] post argues that ‘splain’ predates mansplaining, in the sense of a reduction of the original term “explain” (as in the famous “Lucy, you got some splaining to do” formation), its current use does shift that meaning. “Splaining” is not just “explaining” – it’s a condescending, unnecessary explanation based on the presumption that the splainer knows things and the splainee doesn’t. It’s such a great word that captures such a clear meaning, it’s almost hard to believe it’s not even a decade old.

  2. Semantic traveling. 'Splaining', and mansplaining in particular, is also a concept with legs, and as it was likely born on the internet in an age of internet communication, it's only natural that it should strike some of those who encounter it in English that it may be useful in their native languages as well. Two different types of such applications were documented naturally, as Swedes comfortably borrowed the English term, while Icelandic speakers created a translation with relevant nativized terms and metaphors. Both are excellent strategies for different contexts. The later "crowdsourced" list also includes a few examples that have developed on their own (as in, they weren't made up just for the sake of making the suggestion), like the French "mecspliquer". As a reasonably decent French speaker, I particularly like this one, because it captures the "guy + explain" basic structure, but has the added bonus of punning on the reflexive "m'expliquer" (explain to me).
  3. THAT CROWDSOURCED LIST, OMG. It makes me happy for so many reasons. First, it reveals the varied strategies and selections from homophones to make the words fun and flowing. The Chinese correspondent used discourse-level markers (the wind character) to reinforce the perception of a haughty attitude. Some of the correspondents hesitated because their language lacks some key features – like, say, gender marking in Swahili – that are necessary to capture the translation. It wasn't impossible to convey the term, you'll note (the trope of 'untranslatable terms' is one for another day), but the structures of the language really do create different ways of expressing ideas.
  4. Inclusion of unusual languages. This deserves its own marker – there are even some endangered and marginalized languages on that list of only 34, which is something distinctly rare. The Mohegan example is particularly striking – the language had its own term for a concept like this, and in response to the inquiry about ‘mansplaining’, a correspondent brought it forward to illustrate a similar concept with different cultural roots. Irish and Welsh are also nice inclusions. Language endangerment contexts often involve a lot of opportunity to think creatively about the languages, developing new forms that sound and feel natural on the languages’ own terms, so it’s nice to see that represented here as well.
  5. We are all verbal artists. One more highlight – it's worth noting the extensive engagement with the way the words sound. It might be easy to think of new word creation as a somewhat utilitarian enterprise, but as these show, it's also fun because of semantic play, and it's poetic. The words take hold because they capture something not just in their meaning, but in the way they sound/feel as we say them. We don't always pay much attention to this fun side of language, treating it as something that professional wordsmiths get but normal people don't. In fact, normal people are pretty linguistically fun, which is why I like paying attention to them.
  6. It's all fun and games, except… Finally, my killjoy moment – yes, it's presumably intended to be cheeky, but I hate when "cultural universal" is demonstrated by a few dozen examples, the vast majority of which come from Indo-European cultures or a couple of major non-European ones like Chinese or Arabic. This one admittedly goes farther than most, with the inclusion of Swahili, Mohegan, Tagalog, and Indonesian… but please stop with the use of "universal". Please?

Language Change, Racism, and White Ideologies

*Content note: This post is explicitly about language that some consider racist, and it’s extremely difficult to talk about that language without using the terms themselves. While I will endeavor to avoid some of the slurs when I can make my point without them, some will end up being used.

A week or so ago, a friend linked this post on social media. It’s a common type of post, really – here are some words you might be saying that actually maybe you should think about *not* saying, because racist. And as happens in many instances when this type of point is raised, some (generally white) people respond with some questions about whether all of these terms really are, in fact, racist. As I’ve observed the way these conversations happen, I notice two types of arguments that are raised:

  • Look, this term wasn’t originally intended to be offensive. Here’s an etymological dictionary that says it meant something innocuous. Therefore, it’s not racist.
  • But…language changes, doesn’t it? So just because this term originates as a racist insult, does that still matter if we no longer know about that original association?

If you noticed that these are inherently contradictory, ten points for you! They are, of course, not applied to the same terms, nor are they necessarily arguments used by the same people. But it is worth comparing and contrasting the ideas, assumptions, and beliefs about how meaning works that underlie each of them.

The first comes up a lot in relation to the controversy surrounding the Washington NFL team name. This NPR article does a nice job outlining the point in detail. The work of linguist/historian Ives Goddard is the authoritative reference point invoked in these arguments, and

Source website: Indian Country Media Network (image uncredited)

as that article outlines, it is possible to conclude that indeed, the word was not a hateful slur from its origin. There is, however, an alternative possible origin story for this word, which is indeed very offensive and violent, and which is cited by many Indigenous people as what they were taught about the word’s origin. It’s worth recognizing how this works as yet another example where Western academic knowledge is prioritized over Indigenous knowledge, but at the same time, I want to make the case that even if the benign story is the accurate one, using that as justification to keep the name and logo is still racist.

A very similar type of argument (complete with another etymological trace done by Goddard himself) is outlined in Jane Hill's fantastic book The Everyday Language of White Racism, where she refers to this as a "baptismal ideology". The idea is that the most authoritative definition of a word is found in its original meaning, and it's one we use in a number of different contexts beyond debates about racism. In academia, for example, we often explicitly try to return to the original coinage of a term in order to ensure that we aren't relying on misinterpretations or a kind of "broken telephone" effect. In addition to the weight of the origin, this argument treats linguists like Goddard as authorities not just about the history of certain forms of language, but about the actual meaning of particular words (to my knowledge, Goddard has never commented on these ways of deploying his name in support of the continued use of these words). This is rooted in assumptions not only about etymology, but also about authority, in establishing meaning.

The opposite comes up with respect to words like "gyp", meaning "to shortchange, rip off". As the article linked above notes, this is derived from the word "gypsy", which is itself a slur applied to Roma people (who remain a highly marginalized group of people living primarily in Europe). That meaning is, to a degree, opaque at this point, so the argument goes – if the vast majority of people using a term are not only not trying to be offensive, they're not even aware that there is a semantic connection to this other word, has the meaning drifted enough from its source that we don't have to call it racist anymore? This perspective is rooted in the (correct) notion that language changes, and it places authority over what a word "really" means in the intentions and knowledge of the speakers who use it.

What is important about this argument, to me, isn't to decide which of these two views is more "correct" than the other. Both contain some elements of truth, in a historical as well as a broad theoretical sense. Both also contain some ideological bases that assume meaning works in specific and limited types of ways. The overall picture of how meaning does work, especially in regard to heated and complex areas like linguistic racism and what constitutes a 'slur', is far more complicated than either of these positions can single-handedly capture. As someone who is very much invested in expanding people's acceptance of language change (because refusal to allow it to change is so often a tool used by the privileged to put down those most likely to change it), I will admit I wrestled for a while with what was wrong with the second one. And then I saw it – the position used changes depending on what is the most efficient ideological approach to allowing dominant folks to feel okay about using terms that are, at best, problematic (and at worst, overtly racist). Though they are, on their face, opposite to each other, they work to accomplish the same task. That task, at its core, is about the maintenance of privilege.

Slurs – whether racist, sexist, homophobic, or any other mode of oppression – are particularly potent forms of language. Because their meaning and the weight of the different connotations they carry are subject to such constant commentary and debate, these meanings are enhanced – we don't just hear them when they are used, we hear them when they are discussed, and analyzed, and discussed again (like I am doing right here, yes). To an extent, this is why the negative meaning is almost always going to outweigh any neutral version. At the same time, the very fact that such heated debates emerge whenever people point out that specific words are hurtful or upsetting to them illustrates how hard privileged people work to protect their privilege. The loss of a few (or even a lot of) words from my repertoire doesn't really hinder my communicative creativity all that much – it limits me verbally to about the same degree that not being allowed to hit people limits my range of acceptable arm motions. The fact that we strive for an ideology of the maximum offensiveness allowed is yet another ugly feature of a structurally racist society.

On New Planets, Mars, & Colonization


I want to be an astronaut when I grow up. I am a child of the Challenger era and I vividly recall watching the Challenger explosion in 1986. Pluto was a planet when I was a kid and I've now come to terms with why we no longer classify it as such (thanks NdGT). I would not call myself an emotional person, but things related to space just get me good. I openly wept watching Commander Hadfield's performance of "Space Oddity" on the ISS. I cheered out loud watching the Dragon 1 successfully dock with the ISS, receiving odd looks from my co-workers (I should have been working, but the ability to now watch via live-streaming is just so awesome). My kid recently asked me why I was crying at the computer – I was watching the Falcon 9's successful landing. I also cried seeing Atlantis in person at the Kennedy Space Center last year, was on the verge of tears the whole time I was there because it was so overwhelming to actually be at Cape Canaveral, and was devastated to miss an actual launch by just two days. Sarcastic Rover is easily one of my favourite Twitter accounts and I honestly feel sad knowing Curiosity is alone on Mars. I have always known what Earth looks like from space through photographs and have wanted few things more than the experience of seeing it first-hand. As I'm unlikely to go in this lifetime, I've asked my family to send a small portion of my cremains to space. However, a love for all things space (and of course science fiction) does not make me any kind of expert in astronomy (#notarealdoctor), but my anthropological perspective can add some insight into recent discussions regarding new planets, colonization, and human evolution. Side note: I'm pissed I didn't figure out that I could be a space archaeologist sooner, because wtf, that's actually a real thing!

So here are some thoughts I've been trying to process lately. Note that most of what follows has little to do with space and more to do with the intersection of science fiction and anthropology. I'm reading and watching a lot of science fiction right now, so it's really up in my brain.

First, there seems to be a long history of anthropologists, or children of anthropologists, or people who took anthropology courses as writers of science fiction. H.G. Wells, Ursula K. Le Guin, and C.J. Cherryh all have links to anthropology (note: there's a whole wiki page on anthropological science fiction ftw! [Ed: filed under "things I will have to make you nerds review at some point"]). This makes so much sense to me. Culture contact, race, gender, technology, and evolution are all major themes in science fiction; who better to write about them than the discipline of scholars who explicitly study these topics? In particular, I think anthropological contributions to understanding what happens when cultures come into contact with each other are invaluable in science fiction and speculative fiction. C.J. Cherryh's (1983) "40,000 in Gehenna" explicitly asks what happens when a new planet is colonized by humans and then "lost" for forty thousand years before being contacted again. The answer – culture change and speciation.

So here's my second point: evolution IS occurring now and will continue to occur, so it's not unreasonable to consider the consequences of interplanetary travel on human evolution. Mars is far away. It will take time to get there and back. It will take time to establish a colony; those first Martians will be unlikely to ever return to Earth. While colonizing Mars seems less like science fiction these days, colonizing further out is tougher for me to accept as possible, mainly because humans are so short-lived and space is so very vast. That said, it's not just a matter of time and distance. Speciation is not inevitable; it's reasonable, but space exploration does not necessarily mean that it will occur. Unless we get into a "lost colony" situation as in Cherryh's work, we'll still likely be in enough *ahem* contact to continue to interbreed. Along these lines, I like the recent television show The Expanse's portrayal of the "reality" of colonizing Mars and beyond – that eventually there will be changes to, and therefore differences in, the populations found on Earth versus Mars versus the "Belters", but that ongoing contact between the populations will keep us a single species. What The Expanse gets "right" is not just the anatomical changes; I also appreciate the linguistic changes, AND how these changes all increase tension between populations.

Because colonialism is not good. While I am SO excited about the prospect of humans becoming an interplanetary species, I also find it really hard to be excited about colonizing other planets when I see so many problems caused by colonialism and the colonial mindset here on Earth. The Expanse gets this "right" too – that humans are exceptionally good at dividing ourselves into "Us" and "Them". I worry that we'll use it as an opportunity to further marginalize humans we've already othered on Earth – if we don't care about you as a human on Earth, why would we care about you as a Martian or as a Belter? Would we privilege the Earth way of life? Would all other populations be Earth's inferiors, labourers who procure the resources we are rapidly depleting here on Earth? Probably.

Cultures change. Populations change. Technologies change. These are facts. Change is also good. Variation is very good, so we should not fear change. We should reach for the stars, but I have two (final) concerns. First, we need to be cautious of framing technological change as advancement; this is a whole other can of worms, but while we clearly need different technologies than the ones we have now to get to space, we must be critical of positions (and leaders) that firmly state advancement is progress and therefore inherently superior. Second, we need to think about what we are taking with us. Some values (xenophobia, racism, sexism, ableism, etc.) have no place here on Earth; space doesn't need them either.