Posts Tagged ‘Politics’

As social media platforms enter their collective adolescence – Facebook is fifteen, YouTube fourteen, Twitter thirteen, tumblr twelve – I find myself thinking about how little we really understand their cultural implications, both ongoing and for the future. At this point, the idea that being online is completely optional in the modern world ought to be absurd, and yet multiple friends, having spoken to their therapists about the impact of digital abuse on their mental health, were told straight up to just stop using the internet. Even if this were a viable option for some, the idea that we can neatly sidestep the problem of bad behaviour in any non-utilitarian sphere by telling those impacted to simply quit is baffling at best and a tacit form of victim-blaming at worst. The internet might be a liminal space, but object permanence still applies to what happens here: the trolls don’t vanish if we close our eyes, and if we vanquish one digital hydra-domain for Toxicity Crimes without caring to fathom the whys and hows of what went wrong, we merely ensure that three more will spring up in its place.

Is the internet a private space, a government space or a public space? Yes.

Is it corporate, communal or unaffiliated? Yes.

Is it truly global or bound by local legal jurisdictions? Yes.

Does the internet reflect our culture or create it? Yes.

Is what people say on the internet reflective of their true beliefs, or is it a constant shell-game of digital personas, marketing ploys, intrusive thoughts, growth-in-progress, personal speculation and fictional exploration? Yes.

The problem with the internet is that it takes up all three areas on a Venn diagram depicting the overlap between speech and action, and while this has always been the case, we’re only now admitting that it’s a bug as well as a feature. Human interaction cannot be usefully monitored using an algorithm, but our current conception of What The Internet Is has been engineered specifically to shortcut existing forms of human oversight, the better to maximise both accessibility (good to neutral) and profits (neutral to bad). Uber and Lyft are cheaper, frequently more convenient alternatives to a traditional taxi service, for instance, but that’s because the apps themselves are functionally predicated on the removal of meaningful customer service and worker protections that were hard-won elsewhere. Sites like tumblr are free to use, but the lack of revenue generated by those users means that, past a certain point, profits can only hope to outstrip expenses by selling access to those users and/or their account data, which means in turn that paying to effectively monitor their content creation becomes vastly less important than monetising it.

Small wonder, then, that individual users of social media platforms have learned to place a high premium on their ability to curate what they see, how they see it, and who sees them in turn. When I first started blogging, the largely unwritten rule of the blogosphere was that, while particular webforums dedicated to specific topics could have rules about content and conduct, blogs and their comment pages should be kept Free. Moderating comments was viewed as a sign of narrow-minded fearfulness: even if a participant was aggressive or abusive, the enlightened path was to let them speak, because anything else was Censorship. This position held out for a good long while, until the collective frustration of everyone who’d been graphically threatened with rape, torture and death, bombarded with slurs, exhausted by sealioning or simply fed up with nitpicking and bad faith arguments finally boiled over.

Particularly in progressive circles, the relief people felt at being told that actually, we were under no moral obligation to let assholes grandstand in the comments or repeatedly explain basic concepts to only theoretically invested strangers was overwhelming. Instead, you could simply delete them, or block them, or maybe even mock them, if the offence or initial point of ignorance seemed silly enough. But as with the previous system, this one-size-fits-all approach soon developed a downside. Thanks to the burnout so many of us felt after literal years of trying to treat patiently with trolls playing Devil’s Advocate, liberal internet culture shifted sharply towards immediate shows of anger, derision and flippancy to anyone who asked a 101 question, or who didn’t use the right language, or who did anything other than immediately agree with whatever position was explained to them, however simply.

I don’t exempt myself from this criticism, but knowing why I was so goddamn tired doesn’t change my conviction that, cumulatively, the end result did more harm than good. Without wanting to sidetrack into a lengthy dissertation on digital activism in the post-aughties decade, it seems evident in hindsight that the then-fledgling alliance between trolls, MRAs, PUAs, Redditors and 4channers to deliberately exhaust left-wing goodwill via sealioning and bad faith arguments was only the first part of a two-pronged attack. The second part, when the left had lost all patience with explaining its own beliefs and was snappily telling anyone who asked about feminism, racism or anything else to just fucking Google it, was to swoop in and persuade the rebuffed party that we were all irrational, screeching harridans who didn’t want to answer because we knew our answers were bad, and why not consider reading Roosh V instead?

The fallout of this period, I would argue, is still ongoing. In an ideal world, drawing a link between online culture wars about ownership of SFF and geekdom and the rise of far-right fascist, xenophobic extremism should be a bow so long that not even Odysseus himself could draw it. But this world, as we’ve all had frequent cause to notice, is far from ideal at the best of times – which these are not – and yet another featurebug of the internet is the fluid interpermeability of its various spaces. We talk, for instance – as I am talking here – about social media as a discrete concept, as though platforms like Twitter or Facebook are functionally separate from the other sites to which their users link; as though there is no relationship between or bleed-through from the viral Facebook post screencapped and shared on BuzzFeed, which is then linked and commented upon on Reddit, which thread is then linked to on Twitter, where an entirely new conversation emerges and subsequently spawns an article in The Huffington Post, which is shared again on Facebook and the replies to that shared on tumblr, and so on like some grisly perpetual mention machine.

But I digress. The point here is that internet culture is best understood as a pattern of ripples, each new iteration a reaction to the previous one, spreading out until it dissipates and a new shape takes its place. Having learned that slamming the virtual door in everyone’s face was a bad idea, the online left tried establishing a better, calmer means of communication; the flipside was a sudden increase in tone-policing, conversations in which presentation was vaunted over substance and where, once again, particular groups were singled out as needing to conform to the comfort-levels of others. Overlapping with this was the move towards discussing things as being problematic, rather than using more fixed and strident language to decry particular faults – an attempt to acknowledge the inherent fallibility of human works while still allowing for criticism. A sensible goal, surely, but once again, attempting to apply the dictum universally proved a double-edged sword: if everything is problematic, then how to distinguish grave offences from trifling ones? How can anyone enjoy anything if we’re always expected to thumb the rosary of its failings first?

When everything is problematic and everyone has the right to say so, being online as any sort of creator or celebrity is like being nibbled to death by ducks. The well-meaning promise of various organisations, public figures or storytellers to take criticism on board – to listen to the fanbase and do right by their desires – was always going to stumble over the problem of differing tastes. No group is a hivemind: what one person considers bad representation or in poor taste, another might find enlightening, while yet a third party is more concerned with something else entirely. Even in cases with a clear majority opinion, it’s physically impossible to please everyone and a type of folly to try, but that has yet to stop the collective internet from demanding it be so. Out of this comes a new type of ironic frustration: having once rejoiced in being allowed to simply block trolls or timewasters, we now cast judgement on those who block us in turn, viewing them, as we once were viewed, as being fearful of criticism.

Are we creating echo chambers by curating what we see online, or are we acting in pragmatic acknowledgement of the fact that we neither have time to read everything nor an obligation to see all perspectives as equally valid? Yes.

Even if we did have the time and ability to wade through everything, is the signal-to-noise ratio of truth to lies on the internet beyond our individual ability to successfully measure, such that outsourcing some of our judgement to trusted sources is fundamentally necessary, or should we be expected to think critically about everything we encounter, even if it’s only intended as entertainment? Yes.

If something or someone online acts in a way that’s antithetical to our values, are we allowed to tune them out thereafter, knowing full well that there’s a nearly infinite supply of as-yet undisappointing content and content-creators waiting to take their place, or are we obliged to acknowledge that Doing A Bad doesn’t necessarily ruin a person forever? Yes.

And thus we come to cancel culture, the current – but by no means final – culmination of previous internet discourse waves. In this iteration, burnout at critical engagement dovetails with a new emphasis on collective content curation courtesies (try saying that six times fast), but ends up hamstrung once again by differences in taste. Or, to put it another way: someone fucks up and it’s the last straw for us personally, so we try to remove them from our timelines altogether – but unless our friends and mutuals, who we still want to engage with, are convinced to do likewise, then we haven’t really removed them at all, such that we’re now potentially willing to make failure to cancel on demand itself a cancellable offence.

Which brings us right back around to the problem of how the modern internet is fundamentally structured – which is to say, the way in which it’s overwhelmingly meant to rely on individual curation instead of collective moderation. Because the one thing each successive mode of social media discourse has in common with its predecessors is a central, and currently unanswerable question: what universal code of conduct exists that I, an individual on the internet, can adhere to – and expect others to adhere to – while we communicate across multiple different platforms?

In the real world, we understand about social behavioural norms: even if we don’t talk about them in those terms, we broadly recognise them when we see them. Of course, we also understand that those norms can vary from place to place and context to context, but as we can only ever be in one physical place at a time, it’s comparatively easy to adjust as appropriate.

But the internet, as stated, is a liminal space: it’s real and virtual, myriad and singular, private and public all at once. It confuses our sense of which rules might apply under which circumstances, jumbles the normal behavioural cues by obscuring the identity of our interlocutors, and even though we don’t acknowledge it nearly as often as we should, written communication – like spoken communication – is a skill that not everyone has, just as tone, whether spoken or written, isn’t always received (or executed, for that matter) in the way it was intended. And when it comes to politics, in which the internet and its doings now play no small role, there’s the continual frustration that comes from observing, with more and more frequency, how many literal, real-world crimes and abuses go without punishment, and how that lack of consequences contributes in turn to the fostering of abuse and hostility towards vulnerable groups online.

This is what comes of occupying a transitional period in history: one in which laws are changed and proposed to reflect our changing awareness of the world, but where habit, custom, ignorance, bias and malice still routinely combine, both institutionally and more generally, to see those laws enacted only in part, or tokenistically, or not at all. To take one of the most egregious and well-publicised instances that ultimately presaged the #MeToo movement, the laughably meagre sentence handed down to Brock Turner, who was caught in the act of raping an unconscious woman, combined with the emphasis placed by both the judge and much of the media coverage on his swimming talents and family standing as a means of exonerating him, made it very clear that sexual violence against women is frequently held to be less important than the perceived ‘bright futures’ of its perpetrators.

Knowing this, then – knowing that the story was spread, discussed and argued about on social media, along with thousands of other, similar accounts; knowing that, even in this context, some people still freely spoke up in defence of rapists and issued misogynistic threats against their female interlocutors – is it any wonder that, in the absence of consistent legal justice in such cases, the internet tried, and is still trying, to fill the gap? Is it any wonder, when instances of racist police brutality are constantly filmed and posted online, only for the perpetrators to receive no discipline, that we lose patience for anyone who wants to debate the semantics of when, exactly, extrajudicial murder is “acceptable”?

We cannot control the brutality of the world from the safety of our keyboards, but when it exhausts or threatens us, we can at least click a button to mute its seeming adherents. We don’t always have the energy to decry the same person we’ve already argued against a thousand times before, but when a friend unthinkingly puts them back on our timeline for some new reason, we can tell them that person is cancelled and hope they take the hint not to do it again. Never mind that there is far too often no subtlety, no sense of scale or proportion to how the collective, viral internet reacts in each instance, until all outrage is rendered flat and the outside observer could be forgiven for worrying what’s gone wrong with us all, that using a homophobic trope in a TV show is thought to merit the same online response as an actual hate crime. So long as the war is waged with words alone, there’s only a finite number of outcomes that boycotting, blocking, blacklisting, cancelling, complaining and critiquing can achieve, and while some of those outcomes in particular are well worth fighting for, so many words are poured towards so many attempts that it’s easy to feel numbed to the process; or, conversely, easy to think that one response fits all contexts.

I’m tired of cancel culture, just as I was dully tired of everything that preceded it and will doubtless grow tired of everything that comes after it in turn, until our fundamental sense of what the internet is and how it should be managed finally changes. Like it or not, the internet both is and is of the world, and that is too much for any one person to sensibly try and curate at an individual level. Where nothing is moderated for us, everything must be moderated by us; and wherever people form communities, those communities will grow cultures, which will develop rules and customs that spill over into neighbouring communities, both digitally and offline, with mixed and ever-changing results. Cancel culture is particularly tricky in this regard, as the ease with which we block someone online can seldom be replicated offline, which makes it all the more intoxicating a power to wield when possible: we can’t do anything about the awful coworker who rants at us in the breakroom, but by God, we can block every person who reminds us of them on Twitter.

The thing about participating in internet discourse is, it’s like playing Civilisation in real-time, only it’s not a game and the world keeps progressing even when you log off. Things change so fast on the internet – memes, etiquette, slang, dominant opinions – and yet the changes spread so organically and so quickly that we frequently adapt without keeping conscious track of when and why they shifted. Social media is like the Hotel California: we can check out any time we like, but we can never meaningfully leave – not when world leaders are still threatening nuclear war on Twitter, or when Facebook is using friendly memes to test facial recognition software, or when corporate accounts are creating multi-staffed humansonas to engage with artists on tumblr, or when YouTube algorithms are accidentally-on-purpose steering kids towards white nationalist propaganda because it makes them more money.

Of course we try and curate our time online into something finite, comprehensible, familiar, safe: the alternative is to embrace the near-infinite, incomprehensible, alien, dangerous gallimaufry of our fractured global mindscape. Of course we want to try and be critical, rational, moral in our convictions and choices; it’s just that we’re also tired and scared and everyone who wants to argue with us about anything can, even if they’re wrong and angry and also our relative, or else a complete stranger, and sometimes you just want to turn off your brain and enjoy a thing without thinking about it, or give yourself some respite, or exercise a tiny bit of autonomy in the only way you can.

It’s human nature to want to be the most amount of right for the least amount of effort, but unthinkingly taking our moral cues from internet culture the same way we’re accustomed to doing in offline contexts doesn’t work: digital culture shifts too fast and too asymmetrically to be relied on moment to moment as anything like a universal touchstone. Either you end up preaching to the choir, or you run a high risk of aggravation, not necessarily due to any fundamental ideological divide, but because your interlocutor is leaning on a different, false-universal jargon overlying alternate 101 and 201 concepts to the ones you’re using, and modern social media platforms – in what is perhaps the greatest irony of all – are uniquely poorly suited to coherent debate.

Purity wars in fandom, arguments about diversity in narrative and whether its proponents have crossed the line from criticism into bullying: these types of arguments are cyclical now, dying out and rekindling with each new wave of discourse. We might not yet be in a position to stop it, but I have some hope that being aware of it can mitigate the worst of the damage, if only because I’m loath to watch yet another fandom steadily talk itself into hating its own core media for the sake of literal argument.

For all its flaws – and with all its potential – the internet is here to stay. Here’s hoping we figure out how to fix it before its ugliest aspects make us give up on ourselves.


I have a lot of thoughts right now, and I’m not sure how to express them. There’s so much going wrong in the world that on one level, it feels insincere or trivial to focus on anything other than the worst, most visceral horrors; but on another, there’s a point past which grief and fury become numbing. The angriest part of me wants to wade into the wrench of things and wrangle sense from chaos, but my rational brain knows it’s impossible. I hate that I know it’s impossible, because what else but this do the people who could really change things think, to justify their inaction? I have words, and they feel empty. The world is full of indifferent walls and the tyrants who seek to build more of them; words, no matter how loudly intoned, bounce off them and fade into echoes.

Our governments are torturing children.

I could write essays detailing why particular policies and rhetoric being favoured by Australia, the UK and the US right now are inhumane, but I don’t have the strength for it. Some actions are so clearly evil that the prospect of explaining why to people claiming confusion about the matter makes me want to walk into the sea. I can’t go online without encountering adults who want to split hairs over why, in their view, it’s completely justifiable to steal the children of refugees and incarcerate them away from the parents they mean to deport, because even though they don’t want adult refugees they see no contradiction in keeping their babies indefinitely, in conditions that are proven to cause severe psychological damage, because – why? What the fuck is the end-game, here? People don’t seek asylum on a goddamn whim; they’re fleeing violence and terror, persecution and war and destruction; yet somehow the powers that be think that word of stolen kids will pass through some non-existent refugee grapevine and stop people coming in future? And even if it did, which it manifestly can’t and won’t, what the fuck do they plan to do with the ones they’ve taken?

Our governments are torturing children.

Refugees caged on Manus Island are committing suicide, their families left to learn of their deaths through the media. Disabled people of all ages are dying and will continue to die in the UK of gross neglect. None of this is conscionable; none of it need happen. Billionaires are privately funding enterprises that ought to be public because they can’t conceive of a better use for that much money while workers employed by their companies die sleeping in cars or collapse on the job from gross overwork or subsist on food stamps.

I want to say that the world can’t continue like this, but I know it can. It has before; we’re at a familiar crossroads, and the path down which we’re headed is slick with history’s blood. That’s why it’s so goddamn terrifying.

Please, let this be the turning point. Let’s fix this before it’s too late.

Our governments are torturing children.

When I think about the state of global politics, I often imagine how it’s going to be viewed in the future.  My reflex is to think in terms of high school history textbooks, but that phrase evokes a specific type of educational setup that already feels anachronistic – that of overpriced, physical volumes written specifically for teaching teenagers a set curriculum, rather than because they represent good historical summaries in their own right. I think about our penchant for breaking the past down into neatly labelled epochs, and wonder how long it will take for some sharp-tongued future historian to look at the self-professed Information Age, as we once optimistically termed it, examine its trajectory through the first two decades of the new millennium, and conclude that it should be more fittingly known as the Disinformation Age.

With that in mind, here’s my hot take on what a sample chapter from such a historical summary might look like:

Chapter 9: Perprofial Media, Propaganda and Power

Perprofial, adj: something which is simultaneously personal, professional and political. 

When Twitter, the first widespread micro-blogging platform, was launched in 2006, no one could have predicted that, barely eleven years later, this new perprofial medium would have irrevocably changed the political landscape. Earlier social media sites, such as Facebook, were foremost a digital extension of existing personal networks, with aspirational connections an afterthought; traditional blogging, by contrast, began as a form of mass broadcast diarising which steadily – though not without hiccups – osmosed the digital remnants of print-era journalism. But from the outset, Twitter was a platform whose users could both listen and be listened to, a sea of Janus-headed audience-performers whose fame might as easily precede that particular medium as be enabled by it, unless it was both or neither. The draw of enabling the unknown, the upcoming, the newly-minted and the long-established to all rub shoulders at the same party – or at least, to shout around each other from the variegated levels of an infinite, Escheresque ballroom – was considered just that: a draw, instead of – as it more properly was – a Brownian mob-theory engine running in 24/7 real time without anything like a Chinese wall, a fact-checker or a control group to filter the variables.

The true point at which Twitter stopped being a social media outlet and became a Trojan horse at the gates of the Fifth Estate is now a Sorites paradox. We might not be able to pinpoint the exact time and date of the transition, but such coordinates are vastly less important than the fact that the switch itself happened. What we can identify, however, is the moment when the extrajudicial nature of the power wielded by perprofial platforms became clear at a global level.

Though Donald Trump’s provocative online statements long preceded his tenure as president, and while they had consistently drawn commentary from all corners, the point at which his tweets were publicly categorised as a declaration of war by North Korean authorities was a definite Rubicon crossing. If Twitter could – and did – ban users for issuing threats of violence in violation of its Terms of Service, it was argued, then why should it allow a world leader to openly threaten war? If the “draw” of the platform was truly a democratising of the powerful and the powerless, then surely powerful figures should be held to the same standards as everyone else – or even, potentially, to more rigorous ones, given the far greater scope of the consequences afforded them by their fame.

But first, some context. At a time of resurgent global fascism and with educational institutions increasingly hampered by the anti-intellectual siege begun some sixty years earlier, when the theory of “creationism” was first pitched as a scientific alternative in American public schools, the zeitgeist was saturated with the steady repositioning of expertise as toxic to democracy. Early experiments in perprofial media, then called “reality television,” had steadily acclimated the public to the idea that personal narratives, no matter how uninformed, could be a professional undertaking – provided, of course, that they fit within an accepted sociopolitical framework, such as radical weight loss or the quest for fame. At the same time, the rise of the internet as a lawless space where anyone could create and promote their own content, regardless of its quality, created an explosion of self-serving informational feedback loops which, both intentionally and by accident, preyed on the uncritical fact-absorption of generations taught to accept that anything written down in an approved book – of which the screen was seemingly just a faster, more convenient extension – was necessarily true.

The commensurate decline of print-based journalism was the final nail in the coffin. To combat the sharp loss of revenue caused by the jump from an industry financed by a cornered market, lavish advertising revenue and a locked-in pay-per-issue model to the still-nebulous vagaries of digital journalism, where paid professional content existed on the same apparent footing as free amateur blogging, corners were cut. Specialists and sub-editors were let go, journalists were alternately asked or forced to become jacks of all trades, and content was recycled across multiple outlets. All of these changes were drastic enough to be noticeable even to the uninitiated; even so, the situation might still have been salvageable if not for the fact that, in looking to compete in this new environment, the bulk of traditional outlets made the mistake of assuming that the many digital amateurs of the blogosphere were, in aggregate, equivalent to their old nemesis, the tabloid press.

Scandal-sheets are a tradition as old as print journalism, with plenty of historical overlap between the one and the other. At some time or another, even the most reputable papers had all resorted to sensationalism – or at least, to real journalism layered with editorial steering – in an effort to wrest their readerships back from the tabloids, but always on the understanding that their legacy, their trustworthiness as institutions, was established enough to take the moral hit. But when this same tactic was tried again in digital environs, the effect was vastly different. Still struggling with web layouts and paywalls, most traditional papers were demonstrably harder and less intuitive to navigate than upstart blogs, and with not much more to boast in the way of originality (since they’d sacked so many writers) or technical accuracy (since they’d sacked so many editors), the decision to switch to tabloid, clickbait content – often by hiring from the same pool of amateur bloggers they were ultimately competing with, leveraging their decaying reputations as compensation for no or meagre pay in a job market newly seething with desperate, unemployed writers – backfired badly. Rather than reclaiming readerships, the effect was to cement the idea that the only real difference between professional news and amateur opinion wasn’t facts, or training, or integrity, but a simple matter of where you preferred to shop.

The internet had become an information marketplace – quite literally, in the case of Russia bulk-purchasing ads on Facebook in the lead-up to the 2016 US presidential election. In Britain, the success of the Leave vote in the Brexit referendum was attributed in part to voters having “had enough of experts” – the implication being that, contrary to the famous assertion of Isaac Asimov, many people really did think their ignorance was just as good as someone else’s knowledge. Though Asimov was speaking specifically of American anti-intellectualism and a false perception of democracy in the 1980s, his fears were just as applicable some forty years later, and arguably more so, given the rise of perprofial media.

In the months prior to his careless declaration of war, then-president Trump made a point of lambasting what he called the “fake news media”, which label eventually came to encompass every and any publication, whether traditional or digital, which dared to criticise him; even his former ally, Fox News, was not exempt. In the immediate, messy aftermath of the collapse of print journalism, this claim rang just nebulously true enough to many that, with so many trusted papers having perjured themselves with tabloid tactics, Trump was able to situate himself as the One True Authority his followers could trust.

It’s important to note, however, that not just any politician, no matter how sociopathic or self-serving, could have pulled off the same trick. The ace in Trump’s sleeve was his pre-existing status as a king in the perprofial arena of reality television, which had already helped to re-contextualise democracy – or the baseline concept of a democratic institution, rather – as something in which expertise was only to be trusted if supported by success, where “success” meant “celebrity”. Under this doctrine, those who preached expertise, but whom the listener had never heard of, were considered suspect: true success meant fame, and if you weren’t famous for what you knew, then you must not really be knowledgeable. By the same token, celebrities who claimed expertise in fields beyond those for which they were famous were also criticised: it was fine to play football or act, for instance, but as neither skill was seen to have anything to do with politics, the act of speaking “out of turn” on such topics was dismissed as mere self-aggrandising. Actual facts had nothing to do with the matter, because “actual facts” as a concept was rendered temporarily liminal by the struggle between amateur and professional media.

With such “logic” to support him, Trump couldn’t lose. What did his lack of political qualifications matter? He’d still succeeded at getting into politics, which meant he must have learned by doing, which meant in turn that his fame, unlike that of other celebrities, made him an inviolate authority on political matters. Despite how fiercely he was opposed and resisted, his repeated, defensive cries of “fake news!” rang just true enough to sow doubt among those who might otherwise have opposed him.

And so to Twitter, and a declaration of war. By historical assumption, Trump as president ought to have been the most powerful man in the world, but by investing so much of that power in a perprofial platform – one to whose rules of conduct he was personally bound, without any exemption or extenuation on account of his office – he had, quite unthinkingly, agreed to let an international corporation place extrajudicial sanctions not only on the office of the presidency but, through Trump as an individual and his investiture as head of state, on a declaration of war.

In the next chapter: racism, dogwhistles and spinning the Final Solution.

*

History is, of course, what we make of it. Right now, I just wish we weren’t making quite so much.


Warning: spoilers for Shin Godzilla.

I’ve been wanting to see Shin Godzilla since it came out last year, and now that it’s available on iTunes, I’ve finally had the chance. Aside from the obvious draw inherent to any Godzilla movie, I’d been keen to see a new Japanese interpretation of an originally Japanese concept, given the fact that every other recent take has been American. As I loaded up the film, I acknowledged the irony in watching a disaster flick as a break from dealing with real-world disasters, but even so, I didn’t expect the film itself to be quite so bitingly apropos.

While Shin Godzilla pokes some fun at the foibles of Japanese bureaucracy, it also reads as an unsubtle fuck you to American disaster films in general and their Godzilla films in particular. From the opening scenes where the creature appears, the contrast with American tropes is pronounced. In so many natural disaster films – 2012, The Day After Tomorrow, Deep Impact, Armageddon, San Andreas – the Western narrative style centres by default on a small, usually ragtag band of outsiders collaborating to survive and, on occasion, figure things out, all while being thwarted by or acting beyond the government. There’s frequently a capitalist element where rich survivors try to edge out the poor, sequestering themselves in their own elite shelters: chaos and looting are depicted up close, as are their consequences. While you’ll occasionally see a helpful local authority figure, like a random policeman, trying to do good (however misguidedly), it’s always at a remove from any higher, more coordinated relief effort, and particularly in more SFFnal films, a belligerent army command is shown to pose nearly as much of a threat as the danger itself.

To an extent, this latter trope appears in Shin Godzilla, but to much more moderated effect. When Japanese command initially tries to use force, the strike is aborted because of a handful of civilians in range of the blast, and even when a new attempt is made, there’s still an emphasis on chain of command, on minimising collateral damage and keeping the public safe. At the same time, there are almost no on-the-ground civilian elements to the story: we see the public in flashes, their online commentary and mass evacuations, a few glimpses of individual suffering, but otherwise, the story stays with the people in charge of managing the disaster. Yes, the team brought together to work out a solution – which is ultimately scientific rather than military – are described as “pains-in-the-bureaucracy,” but they’re never in the position of having to hammer, bloody-fisted, on the doors of power in order to rate an audience. Rather, their assemblage is expedited and authorised the minute the established experts are proven inadequate.

When the Japanese troops mobilise to attack, we view them largely at a distance: as a group being addressed and following orders, not as individuals liable to jump the chain of command on a whim. As such, the contrast with American films is stark: there’s no hotshot awesome commander and his crack marine team to save the day, no sneering at the red tape that gets in the way of shooting stuff, no casual acceptance of casualties as a necessary evil, no yahooing about how the Big Bad is going to get its ass kicked, no casual discussion of nuking from the army. There’s just a lot of people working tirelessly in difficult conditions to save as many people as possible – and, once America and the UN sign a resolution to drop a nuclear bomb on Godzilla, and therefore Tokyo, if the Japanese can’t defeat it within a set timeframe, a bleak and furious terror at their country once more being subject to the evils of radiation.

In real life, Japan is a nation with extensive and well-practised disaster protocols; America is not. In real life, Japan has a wrenchingly personal history with nuclear warfare; America, despite being the cause of that history, does not.

Perhaps my take on Shin Godzilla would be different if I’d managed to watch it last year, but in the immediate wake of Hurricane Harvey, with Hurricane Irma already wreaking unprecedented damage in the Caribbean, and huge tracts of Washington, Portland and Los Angeles now on fire, I find myself unable to detach my viewing from the current political context. Because what the film brought home to me – what I couldn’t help but notice by comparison – is the deep American conviction that, when disaster strikes, the people are on their own. The rich will be prioritised, local services will be overwhelmed, and even when there’s ample scientific evidence to support an imminent threat, the political right will try to suppress it as dangerous, partisan nonsense.

In The Day After Tomorrow, which came out in 2004, an early plea to announce what’s happening and evacuate those in danger is summarily waved off by the Vice President, who’s more concerned about what might happen to the economy, and who thinks the scientists are being unnecessarily alarmist. This week, in the real America of 2017, Republican Rush Limbaugh told reporters that the threat of Hurricane Irma, now the largest storm ever recorded over the Atlantic Ocean, was being exaggerated by the “corrupted and politicised” media so that they and other businesses could profit from the “panic”.

In my latest Foz Rants piece for the Geek Girl Riot podcast, which I recorded weeks ago, I talk about how we’re so acclimated to certain political threats and plotlines appearing in blockbuster movies that, when they start to happen in real life, we’re conditioned to think of them as being fictional first, which leads us to view the truth as hyperbolic. Now that I’ve watched Shin Godzilla, which flash-cuts to a famous black-and-white photo of the aftermath of Hiroshima when the spectre of a nuclear strike is raised, I’m more convinced than ever of the vital, two-way link between narrative on the one hand and our collective cultural, historical consciousness on the other. I can’t imagine any Japanese equivalent to the moment in Independence Day when cheering American soldiers nuke the alien ship over Los Angeles, the consequences never discussed again despite the strike’s failure, because the pain of that legacy is too fully, too personally understood to be taken lightly.

At a cultural level, Japan is a nation that knows how to prepare for and respond to natural disasters. Right now, a frightening number of Americans – and an even more frightening number of American politicians – are still convinced that climate change is a hoax, that scientists are biased, and that only God is responsible for the weather. How can a nation prepare for a threat it won’t admit exists? How can it rebuild from the aftermath if it doubts there’ll be a next time?

Watching Shin Godzilla, I was most strongly reminded, not of any of the recent American versions, but of The Martian. While the science in Shin Godzilla is clearly more handwavium than hard, it’s nonetheless a film in which scientific collaboration, teamwork and international cooperation save the day. The last, despite a denouement that pits Japan against an internationally imposed deadline, is of particular importance, as global networking still takes place across scientific and diplomatic back-channels. It’s a rare American disaster movie that acknowledges the existence or utility of other countries, especially non-Western ones, beyond shots of collapsing monuments, and even then, it’s usually in the context of the US naturally taking the global lead once they figure out a plan. The fact that the US routinely receives international aid in the wake of its own disasters is seemingly little-known in the country itself; that Texas’s Secretary of State recently appeared to turn down Canadian aid in the wake of Harvey – a response now being called a misunderstanding – is nonetheless suggestive of confusion over this point.

As a film, Shin Godzilla isn’t without its weaknesses: the monster design is a clear homage to the original Japanese films, which means it occasionally looks more stop-motion comical than is ideal; there’s a bit too much cutting dramatically between office scenes at times; and the few sections of English-language dialogue are hilariously awkward in the mouths of American actors, because the word-choice and use of idiom remains purely Japanese. Even so, these are ultimately small complaints: there’s a dry, understated sense of humour evident throughout, even during some of the heavier moments, and while it’s not an action film in the American sense, I still found it both engaging and satisfying.

But above all, at this point in time – as I spend each morning worriedly checking the safety of various friends endangered by hurricane and flood and fire; as my mother calls to worry about the lack of rain as our own useless government dithers on climate science – what I found most refreshing was a film in which the authorities, despite their faults and foibles, were assumed and proven competent, even in the throes of crisis, and in which scientists were trusted rather than dismissed. Earlier this year, in response to an article we both read, my mother bought me a newly-released collection of the works of children’s poet Misuzu Kaneko, whose poem “Are You An Echo?” was used to buoy the Japanese public in the aftermath of the 2011 tsunami. Watching Shin Godzilla, it came back to me, and so I feel moved to end with it here.

May we all build better futures; may we all write better stories.

Are You An Echo?

If I say, “Let’s play?”
you say, “Let’s play!”

If I say, “Stupid!”
you say, “Stupid!”

If I say, “I don’t want to play anymore,”
you say, “I don’t want to play anymore.”

And then, after a while,
becoming lonely

I say, “Sorry.”
You say, “Sorry.”

Are you just an echo?
No, you are everyone.


A poem by me, with apologies to Dylan Thomas:

Nevertheless, She Persisted

Nevertheless, she persisted.

Live women fighting we shall be one

With la Liberté and the French Joan;

When their hearts are picked clean and the clean hearts gone,

She shall wear laws at elbow and foot;

Though she go mad she will be sane,

Though she flees through the sea she shall rise again;

Though justice be lost the just shall not;

For nevertheless, she persisted.

.

Nevertheless, she persisted.

Over the whinings of their greed

Men lying long have now lied windily;

Changing their tacks when stories give way,

Stacking their courts, yet we shall not break;

Faith in our hands shall snap in two

And the unicorn evils run them through;

Split all ends up she shan’t crack;

And nevertheless, she persisted.

.

Nevertheless, she persisted.

No more may Foxes cry in decline

Or news break loud to a silenced room;

Where fawned a follower may a follower no more

Bow his head to the blows of this reign;

Though she be mad and tough as nails,

Her headlines in characters hammer the dailies;

Break in the Sun ‘till the Sun breaks down,

As nevertheless, she persisted.

For the last few weeks or so, I’ve seen the same video endlessly going around on Facebook: a snippet of an interview with Simon Sinek, who lays out what he believes to be the key problems with millennials in the workplace. Every time I see it shared, my blood pressure rises slightly, until today – joy of joys! – I finally saw and shared a piece rebutting it. As often happens on Facebook, a friend asked me why I disagreed with Sinek’s piece, as he’d enjoyed his TED talks. This is my response.

In his talk, Sinek touches on what he believes to be the four core issues handicapping millennials: internet addiction, bad parenting, an unfulfilled desire for meaningful work and a desire to have everything instantly. Now: demonstrably, some people are products of bad parenting, and the pernicious, lingering consequences of helicopter parenting – wherein overzealous, overprotective adults so rob their children of autonomy and instil in them such a fear of failure that they can’t healthily function as adults – are very real. Specifically in reference to Sinek’s claims about millennials all getting participation awards in school (which, ugh: not all of us fucking did, I don’t know a single person for whom that’s true, shut up with this goddamn trope), the psychological impact of praising children equally regardless of their actual achievements, such that they come to view all praise as meaningless and lose self-confidence as a result, is a well-documented phenomenon. But the idea that you can successfully accuse an entire global generation of suffering from the same hang-ups as a result of the same bad parenting stratagems, such that all millennials can be reasonably assumed to have this problem? That, right there, is some Grade-A bullshit.

Bad parenting isn’t a new thing. Plenty of baby boomers and members of older generations have been impacted by the various terrible fads and era-accepted practices their own parents fell prey to (like trying to electrocute the gay out of teenagers, for fucking instance), but while that might be a salient point to make in individual cases or in the specific context of tracking said parenting fads, it doesn’t actually set millennials apart in any meaningful way. Helicopter parenting might be comparatively new, but other forms of damage are not, and to act as though we’re the only generation to have ever dealt with the handicap of bad parenting, whether collectively or individually, is fucking absurd. But more to the point, the very specific phenomenon of helicopter parenting? Is, overwhelmingly, a product of white, well-off, middle- and upper-class America, developed specifically in response to educational environments where standardised testing rules all futures and there isn’t really a viable social safety net if you fuck up, which leads to increased anxiety for children and parents both. While it undeniably appears in other countries and local contexts, and while it’s still a thing that happens to kids now, trying to erase its origins does no favours to anyone.

Similarly, the idea that millennials have all been ruined by the internet and don’t know how to have patience because we grew up with smartphones and social media is – you guessed it – bullshit. This is really a two-pronged point, tying into two of Sinek’s arguments: that we’re internet addicts who don’t know how to socialise properly, and that we’re obsessed with instant gratification, and as such, I’m going to address them together.

Yes, internet addiction is a problem for some, but it’s crucial to note it can and does affect people of all ages rather than being a millennial-only issue, just as it’s equally salient to point out that millennials aren’t the only ones using smartphones. I shouldn’t have to make such an obvious qualification, but apparently, I fucking do. That being said, the real problem here is that Sinek has seemingly no awareness of what social media actually is. I mean, the key word is right there in the title: social media, and yet he’s acting like it involves no human interaction whatsoever – as though we’re just playing with digital robots or complete strangers all the time instead of texting our parents about dinner or FaceTiming with friends or building professional networks on Twitter or interacting with our readerships on AO3 (for instance).

The idea, too, that millennials have their own social conventions different to his own, many of which reference a rich culture of online narratives, memes, debates and communities, does not seem to have occurred to him; his complaint is that we’re not learning to do it face to face. Except that, uh, we fucking are, on account of how we still inhabit physical bodies and go to physical places every fucking day of our goddamn lives, do I really have to explain that this is a thing? Do I really have to explain the appeal of maintaining friendships where you’re emotionally close but the person lives hundreds or thousands of kilometres away? Do I really have to spell out the fact that proximal connections aren’t always meaningful ones, and that it actually makes a great deal of human sense to want to socialise with people we care about and who share our interests where possible rather than relying solely on the random admixture of people who share our schools and workplaces for fun?

The fact that Sinek talks blithely about how all millennials grew up with the internet and social media, as though those of us now in our fucking thirties don’t remember a time before home PCs were common (I first learned to type on an actual typewriter), is just ridiculous: Facebook started in 2004, YouTube in 2005, Twitter in 2006, tumblr in 2007 and Instagram in 2010. Meaning, most millennials – who, recall, were born between 1980 and 1995, which makes the youngest of us 21/22 and the eldest nearly forty – didn’t grow up with what is now considered social media throughout our teenage years, as Sinek asserts, because it didn’t really get started until we were out of high school. Before that, we had internet messageboards that were as likely to die overnight as to flourish, IRC chat, and the wild west of MSN forums, which was a whole different thing altogether. (Remember the joys of being hit on by adults as an underage teen in your first chatroom and realising only years later that those people were fucking paedophiles? Because I DO.)

And then he pulls out the big guns, talking about how we get a dopamine rush when we post about ourselves online, and how this is the same brain chemical responsible for addiction, and this is why young people are glued to their phones and civilisation is ending. Which, again, yes: dopamine does what he says it does, but that is some fucking misleading bullshit, Simon Says, and do you know why? Because you also get a goddamn dopamine rush from talking about yourself in real life, too, Jesus fucking Christ, the internet is not the culprit here, to say nothing of the fact that smartphones do more than one goddamn thing. Sinek lambasts the idea of using your phone in bed, for instance, but I doubt he holds a similar grudge against reading in bed, which – surprise! – is what quite a lot of us are doing when we have our phones out of an evening, whether in the form of blogs or books or essays. If I was using a paperback book or a physical Kindle rather than the Kindle app on my iPhone, would he give a fuck? I suspect not.

Likewise, I doubt he has any particular grudge against watching movies (or TED talks, for that matter) in bed, which phones can also be used for. Would he care if I brought in my Nintendo DS or any other handheld system to bed and caught a few Pokemon before lights out? Would he care if I played Scrabble with a physical board instead of using Words With Friends? Would he care if I used the phone as a phone to call my mother and say goodnight instead of checking her Facebook and maybe posting a link to something I know will make her laugh? I don’t know, but unless you view a smartphone as something that’s wholly disconnected from people – which, uh, is kind of the literal antithesis of what a smartphone is and does – I don’t honestly see how you can claim that they’re tools for disconnection. Again, yes: some people can get addicted or overuse their phones, but that is not a millennial-exclusive problem, and fuck you very much for suggesting it magically is Because Reasons.

And do not even get me started on the total fuckery of millennials being accustomed to instant gratification because of the internet. Never mind the fact that, once again, people of any age are equally likely to become accustomed to fast internet as a thing and to update their expectations accordingly – bitch, do you know how long it used to take to download music with Kazaa using a 56k modem? Do you know how long it still takes to download entire games, or patches for games, or – for that matter – drive through fucking peak-hour traffic to get to and from work, or negotiate your toddler into not screaming because he can’t have a third juicebox? Because – oh, yeah – remember that thing where millennials stopped being teenagers quite a fucking while ago, and a fair few of us are now parents ourselves? Yeah. Apparently our interpersonal skills aren’t so completely terrible as to prevent us all from finding spouses and partners and co-parents for our tiny, screaming offspring, and if Mr Sinek would like to argue that learning patience is incompatible with being a millennial, I would like to cordially invite him to listen to a video, on loop, of my nearly four-year-old saying, “Mummy, look! A lizard! Mummy, there’s a lizard! Come look!” and see what it does for his temperament. (We live in Brisbane, Australia. There are geckos everywhere.)

But what really pisses me off about Sinek’s millennial-blaming is the idea that we’re all willing to quit our jobs because we don’t find meaning in them. Listen to me, Simon Sinek. Listen to me closely. You are, once again, confusing the very particular context of middle-class, predominantly white Americans from affluent backgrounds – which is to say, the kind of people who can afford to fucking quit in this economy – for a universal phenomenon. Ignore the fact that the global economy collapsed in 2008 without ever fully recovering: Brexit just happened in the UK, Australia is run by a coalition of racist dickheads and you’ve just elected a talking Cheeto who’s hellbent on stripping away your very meagre social safety nets as his first order of business – oh, and none of us can afford to buy houses and we’re the first generation not to earn more than our predecessors in quite a while, university costs in the States are an actual goddamn crime and most of us can’t make a living wage or even get a job in the fields we trained in.

But yeah, sure: let’s talk about the wealthy few who can afford to quit their corporate jobs because they feel unfulfilled. What do they have to feel unhappy about, really? It’s not like they’re working for corporations whose idea of HR is to hire oblivious white dudes like you to figure out why their younger employees, working longer hours for less pay in tightly monitored environments that strip their individuality and hate on unions as a sin against capitalism, in a context where the glass ceiling and wage gaps remain a goddamn issue, in a first world country that still doesn’t have guaranteed maternity leave and where quite literally nobody working minimum wage can afford to pay rent, which is fucking terrifying to consider if you’re worried about being fired, aren’t fitting in. Nah, bro – must be the fucking internet’s fault.

Not that long ago, Gen X was the one getting pilloried as a bunch of ambitionless slackers who didn’t know the meaning of hard work, but time is linear and complaining about the failures of younger generations is a habit as old as humanity, so now it’s apparently our turn. Bottom line: there’s a huge fucking difference between saying “there’s value in turning your phone off sometimes” and “millennials don’t know how to people because TECHNOLOGY”, and until Simon Sinek knows what it is, I’m frankly not interested in whatever it is he thinks he has to say.


And lo, in the leadup to Christmas, because it has been A Year and 2016 is evidently not content to go quietly into that good night, there has come the requisite twitter shitshow about diversity in YA. Specifically: user @queen_of_pages (hereinafter referred to as QOP) recently took great exception to teenage YouTube reviewer Whitney Atkinson acknowledging the fact that white and straight characters are more widely represented in SFF/YA than POC and queer characters, with bonus ad hominem attacks on Atkinson herself. As far as I can make out, the bulk of QOP’s ire hinges on the fact that Atkinson discusses characters with specific reference to various aspects of their identity – calling a straight character straight or a brown character brown, for instance – while advocating for greater diversity. To quote QOP:

[Atkinson] is separating races, sexuality and showing off her white privilege… she wants diversity so ppl need to be titled by their race, disability or sexuality. I want them to be titled PEOPLE… I’m Irish. I’ve been oppressed but I don’t let it separate me from other humans.

*sighs deeply and pinches bridge of nose*

Listen. I could rant, at length, about the grossness of a thirtysomething woman, as QOP appears to be, insulting a nineteen-year-old girl about her appearance and love life for any reason, let alone because of something she said about YA books on the internet. I could point out the internalised misogyny which invariably underlies such insults – the idea that a woman’s appearance is somehow inherently tied to her value, such that calling her ugly is a reasonable way to shut down her opinions at any given time – or go into lengthy detail about the hypocrisy of using the term “white privilege” (without, evidently, understanding what it means) while complaining in the very same breath about “separating races”. I could, potentially, say a lot of things.

But what I want to focus on here – the reason I’m bothering to say anything at all – is QOP’s conflation of mentioning race with being racist, and why that particular attitude is both so harmful and so widespread.

Like QOP, I’m a thirtysomething person, which means that she and I grew up in the same period, albeit on different continents. And what I remember from my own teenage years is a genuine, quiet anxiety about ever raising the topic of race, because of the particular way my generation was taught about multiculturalism on the one hand and integration on the other. Migrant cultures were to be celebrated, we were told, because Australian culture was informed by their meaningful contributions to the character of our great nation. At the same time, we were taught to view Australian culture as a monoculture, though it was seldom expressed as baldly as that; instead, we were taught about the positive aspects of cultural assimilation. Australia might benefit from the foods and traditions migrants brought with them, this logic went, but our adoption of those things was part of a social exchange: in return for our absorption of some aspects of migrant culture, migrants were expected to give up any identity beyond Australian and integrate into a (vaguely homogeneous) populace. Multiculturalism was a drum to beat when you wanted to praise the component parts that made Australia great, but suggesting those parts were great in their own right, or in combinations reflective of more complex identities? That was how you made a country weaker.

Denying my own complicity in racism at that age would be a lie. I was surrounded by it in much the same way that I was surrounded by car fumes, a toxic thing taken into the body unquestioningly, without any real understanding of what it meant or was doing to me internally. At my first high school, two of my first “boyfriends” (in the tweenage sense) were POC, as were various friends, but because race was never really discussed, I had no idea of the ways in which it mattered: to them, to others, to how they were judged and treated. The first time I learned anything about Chinese languages was when one of those early boyfriends explained it in class. I remember being fascinated to learn that Chinese – not Mandarin or Cantonese: the distinction wasn’t referenced – was a tonal language, but I also recall that the boy himself didn’t volunteer this information. Instead, our white teacher had singled him out as the only Chinese Australian present and asked him to explain his heritage: she assumed he spoke Chinese, and he had to explain that he didn’t, not fluently, though he still knew enough to satisfy her question. That exchange didn’t strike me as problematic at the time, but now? Now, it bothers me.

At my second high school, I was exposed to more overt racism, not least because it was a predominantly white, Anglican private school, as opposed to the more diversely populated public school I’d come from. As an adult, I’m ashamed to think how much of it I let pass simply because I didn’t know what to say, or because I didn’t realise at the time how noxious it was. Which isn’t to say I never successfully identified racism and called it out – I was widely perceived as the token argumentative lefty in my white male, familially right-wing friend group, which meant I spent a lot of time excoriating them for their views about refugees – but it wasn’t a social dealbreaker the way it would be now. The fact that I had another friend group that was predominantly POC – and where, again, I was the only girl – meant that I also saw people discussing their own race for the first time, forcing me to examine the question more openly than before.

Even so, it never struck me as anomalous back then that whereas the POC kids discussed their own identities in terms of race and racism, the white kids had no concept of their whiteness as an identity: that race, as a concept, informed their treatment of others, but not how they saw themselves. The same boys who joked about my biracial crush being a half-caste and who dressed up as “terrorists” in tea robes and tea towels for our final year scavenger hunt never once talked about whiteness, or about being white, unless it was in specific relation to white South African students or staff members, of which the school historically had a large number. (The fact that we had no POC South African students didn’t stop anyone from viewing “white” as a necessary qualifier: vocally, the point was always to make clear that, when you were talking about South Africans, you didn’t mean anyone black.)

Which is why, for a long time, the topic of race always felt fraught to me. I had no frame of reference for white people discussing race in a way that wasn’t saturated with racism, which made it easy to conflate the one with the other. More than that, it had the paradoxical effect of making any reference to race seem irrelevant: if race was only ever brought up by racists, why mention it at all? Why not just treat everyone equally, without mentioning what made them different? I never committed fully to that perspective, but it still tempted me – because despite all the racism I’d witnessed, I had no real understanding of how its prevalence impacted individuals or groups, either internally or in terms of their wider treatment.

My outrage about the discriminatory treatment of refugees ought to have given me some perspective on it, but I wasn’t insightful enough to make the leap on my own. At the time, detention centres and boat people were the subject of constant political discourse: it was easy to make the connection between things politicians and their supporters said about refugees and how those refugees were treated, because that particular form of cause and effect wasn’t in question. The real debate, such as it was, revolved around whether it mattered: what refugees deserved, or didn’t deserve, and whether that fact should change how we treated them. But there were no political debates about the visceral upset another boyfriend, who was Indian, felt at knowing how many classmates thought it was logical for him to date the only Indian girl in our grade, “because we both have melanin in our skins”. (I’ve never forgotten him saying that, nor have I forgotten the guilt I felt at knowing he was right. The two of them ran in completely different social circles, had wildly different personalities and barely ever interacted, and yet the expectation that they’d end up dating was still there, still discussed.) I knew it was upsetting to him, and I knew vaguely that the assumption was racist in origin, but my own privilege prevented me from understanding it as a microaggression that was neither unique to him nor the only one of its kind that he had to deal with. I didn’t see the pattern.

One day, I will sit down and write an essay about how the failure of white Australians and Americans in particular to view our post-colonial whiteness as an active cultural and racial identity unless we’re being super fucking racist about other groups is a key factor in our penchant for cultural appropriation. In viewing particular aspects of our shared experiences, not as cultural identifiers, but as normal, unspecial things that don’t really have any meaning, we fail to connect with them personally: we’re raised to view them as something that everyone does, not as something we do, and while we still construct other identities from different sources – the regions we’re from, the various flavours of Christianity we prefer – it leaves us prone to viewing other traditions as exciting, new things with no equivalent in our own milieu while simultaneously failing to see their deeper cultural meaning. This is why so many white people get pissed off at jokes about suburban dads who can’t barbecue or soccer moms with Can I Speak To The Manager haircuts: far too many of us have never bothered to introspect on our own sociocultural peculiarities, and so get uppity the second anyone else identifies them for us. At base, we’re just not used to considering whiteness as an identity in its own right unless we’re really saying not-black or acting like white supremacists – which means, in turn, that many of us conflate any open acknowledgement of whiteness with some truly ugly shit. In that context, whiteness is either an invisible, neutral default or a racist call to arms: there is no in between.

Which is why, returning to the matter of QOP and Whitney Atkinson, pro-diversity advocates are so often forced to contend with people who think that “separating races” and like identifiers – talking specifically about white people or disabled people or queer people, instead of just people – is equivalent to racism and bigotry. Whether they recognise it or not, they’re coming from a perspective that values diverse perspectives for what they bring to the melting pot – for how they help improve the dominant culture via successful assimilation – but not in their own right, as distinct and special and non-homogenised. In that context, race isn’t something you talk about unless you’re being racist: it’s rude to point out people’s differences, because those differences shouldn’t matter to their personhood. The problem with this perspective is that it doesn’t allow for the celebration of difference: instead, it codes “difference” as inequality, because deep down, the logic of cultural assimilation is predicated on the idea of Western cultural superiority. A failure or refusal to assimilate is therefore tantamount to a declaration of inequality: I’m not the same as you is understood as I don’t want to be as good as you, and if someone doesn’t want to be the best they can be (this logic goes) then either they’re stupid, or they don’t deserve the offer of equality they’ve been so generously extended in the first place.

Talking about race isn’t the same as racism. Asking for more diversity in YA and SFF isn’t the same as saying personhood matters less than the jargon of identity, but is rather an acknowledgement of the fact that, for many people, personhood is materially informed by their experience of identity, both in terms of self-perception and in how they’re treated by others at the individual, familial and collective levels. And thanks to various studies into the social impact of colour-blindness as an ideology, we already know that claiming not to see race doesn’t undo the problem of racism; it just means adherents fail to understand what racism actually is and what it looks like, even – or perhaps especially – when they’re the ones perpetuating it.

So, no, QOP: you can’t effectively advocate for diversity without talking in specifics about issues like race and sexual orientation. Want the tl;dr reason? Because saying I want more stories with PEOPLE in them isn’t actually asking for more than what we already have, and the whole point of advocating for change is that what we have isn’t enough. You might as well try and work to decrease the overall number of accidental deaths in the population without putting any focus on the specific ways in which people are dying. Generalities are inclusive at the macro level, but it’s specificity that gets shit done at the micro – and ultimately, that’s what we’re aiming for.

 

 

Let me tell you what I wish I’d known
When I was young and dreamed of glory:
You have no control
Who lives, who dies, who tells your story.
– Lin-Manuel Miranda, “History Has Its Eyes On You”, Hamilton
.
As the Brexit vote and its consequences reverberate through the internet, I listen to Hamilton’s “History Has Its Eyes On You”, and of all possible things, I find myself remembering the morning of 9/11. I was fifteen years old, and as I stumbled through my parents’ bedroom to their en-suite to get ready for school, despite my habitual bleariness, I was conscious that they were both unnaturally still, frozen in bed as they listened to the radio. I remember my mother saying, unprompted, “Something terrible has happened in the world,” my stomach lurching at the graveness of her tone. I remember how, at school that day, the attacks were all anyone could talk about; how even the most diffident students begged our history teacher for permission to watch George Bush’s address on the TV in our classroom. Above all else, I remember the sense, not of fear, but of irrevocable change beyond my control: the knowledge that something material to all our futures had happened – was in the process of happening still – and yet we were expected to carry on as usual.
 .
I remember thinking about documentaries we’d watched in history or which I’d seen at home, segments where various adults were asked to give their eyewitness accounts of events that happened in their youth, and imagined being one day called on to do likewise. Where were you when it happened? How did you feel? What did you say? Did you know, then, what stretched out before you? What were the details? I was years away from wanting children, but I still wondered what I might say to my own hypothetical offspring, if some future history teacher asked them to interview a parent about the momentous events which they (meaning I) had lived through. And I thought of the propaganda posters I’d so recently studied for my own modern history class – that classic image of the beslippered pater familias sitting in his armchair, two cherubic children at his feet, and a pained, distant expression on his face as his daughter asked, “Daddy, what did YOU do in the Great War?”
.
Neither one is a comparable situation to the Brexit vote, of course. (I hope.) But that feeling of history having its eyes on me – on all of us – is one of which I’ve felt increasingly conscious ever since the neo-fascist Golden Dawn party gained unprecedented power in Greece in 2015. I find myself thinking again of those high school history lessons, where Edmund Burke’s adage that those who don’t know history are doomed to repeat it first became a part of my awareness, a tritely profound observation that nonetheless remains relevant, and of all the early warning signs that preceded both world wars. Perhaps it’s just the consequence of having grown to adulthood under the spectre of 9/11, American politics the long shadow cast constantly over my Australian shoulder, but since then, I’ve never lost the awareness that my local world is only an engine part in a bigger and more complex machine.
 .
I have plenty of scathing things to say about the current state of secondary education in Australia, but it was a modern history unit on the Israel-Palestine conflict that inspired the core of my early university studies: Arabic as a language, the Arab World, Islam and the Middle East, Biblical Studies. Then as now, I understood that, whatever my personal atheism, it was the religious, political and cultural schisms developed over centuries between Judaism, Christianity and Islam that had ultimately shaped the modern world, and in light of that fateful day in September 2001 – in light of the history assignment for which I subsequently won a school award, cutting endless newspaper clippings on conflict in the Middle East to explain how each event went back to what we’d studied about pogroms and Zionism, the UN and oil and the Sykes-Picot Agreement – I wanted to try and understand the foundations of the present.
 .
I am, by nature, a storyteller. Narrative has patterns, and though we construct them knowingly in fiction, still they echo naturally in life, whose permutations are often far stranger than anything we can dream up. I look at where the world is poised, on the brink of men like Donald Trump and Nigel Farage, Malcolm Turnbull and Boris Johnson, and for an instant the very air is textbook paper, a blurring of time and distance and a whisper of darkest timelines. I am a mater familias in an armchair as yet unbuilt, and as my son looks up from whatever device goes on to replace the common iPad, I hear him ask, “Who did YOU vote for in 2016, mummy?” I imagine a homework sheet that asks him to list the date of Jo Cox’s death the same way I once listed Emily Davison’s, and with as much bland dispassion. I wonder if his history module will cover the Orlando massacre, assuming it isn’t deemed too volatile for junior study, the same way I never learned in school that the queer prisoners in Nazi concentration camps, once freed, were immediately rearrested – homosexuality was still considered a crime, you see, even by the Allies.
 .
I wonder how many teenagers throughout the UK and Europe checked the results of the Brexit vote on their phones today, on laptops, in class, and watched it all with that same sense of wrenching helplessness I once felt, as their future was altered without their say-so. Overwhelmingly, it was young people who voted Remain, and older folks who voted Leave, and while the result is a tragedy for all of them alike – regardless of how or whether they voted, older Britons have just been screwed out of their pensions as the pound falls to a 31-year low, a span longer than my lifetime – it’s the young whose futures have just been overwritten. Scotland might yet break from the UK, just as other countries might yet break from the EU; Canada is a lone bastion of Western political sanity right now, but everything else is disintegrating. To quote another poet:
 .
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity. 
.
I don’t know where the world is headed; only that it scares me. We’ve fought through so much to achieve even the fragments of unity and fellowship we now possess, and yet the backlash against it has been so violent, so continuous, as to defy belief. It’s barely been two months since I left the UK, and already it appears unrecognisable, a distorted funhouse reflection trying desperately to possess the body that cast it. Perhaps the only way out is through, but at this point, I can’t imagine that we’re going to get there painlessly.
 .
I wonder who’ll live, who’ll die. Who’ll tell this story.

On the phone from the Middle East, where he is currently deployed, Torgersen lamented what he called “the cognitive dissonance of people saying, ‘No, the Hugos are about quality,’ and then at the same time they’re like: ‘Ooh, we can vote for this author because they’re gay, or for this story because it’s got gay characters,’ or, ‘Ooh, we’re going to vote for this author because they’re not white.’ As soon as that becomes the criteria, well, quality goes out the window.”

– Who Won Science Fiction’s Hugo Awards, and Why It Matters, by Amy Wallace

In light of this year’s Hugo Award results, and with particular reference to Amy Wallace’s excellent rundown on the Puppies affair, I feel moved to address the Sad, rather than the Rabid, contingent. Per Torgersen’s remarks above, and setting aside every other aspect of the debate that renders me alternately queasy or enraged, I can’t shake the feeling that there’s something fairly fundamental to the problem that’s consistently misunderstood by the Puppies, and which, once explained, might go a long way towards accounting for the dissonance between what they think is happening and what is actually happening. Not that I particularly expect Torgersen or Correia to listen to me at this point – if they did, I’d be greatly surprised. Even so, the point seems worth stating, if only for the sake of greater clarity.

When it comes to debating strangers with radically different perspectives, you sometimes encounter what I refer to as Onion Arguments: seemingly simple questions that can’t possibly be answered to either your satisfaction or your interlocutor’s, because their grasp of concepts vital to whatever you might say is so lacking, so fundamentally incorrect, that there’s no way to answer the first point without first explaining eight other things in detail. There are layers to what’s being misunderstood, to what’s missing from the conversation, and unless you’ve got the time and inclination to dig down to the onion-core of where your perspectives ultimately diverge, there’s precious little chance of the conversation progressing peacefully. After all, if your interlocutor thinks they’ve asked a reasonable, easy question, your inability to answer it plainly is likely to make them think they’ve scored a point. It’s like a cocky first-year student asking a 101 question and feeling smug when their professor can’t condense the four years of study needed to understand why it’s a 101 question into a three-sentence answer. The problem is one as much of attitude as ignorance: having anticipated a quick response, your interlocutor has to be both willing and interested enough to want to hear what might, at least initially, sound like an explanation of a wholly unrelated issue – and that’s assuming you’re able to intuit the real sticking point straight off the bat.

So: inasmuch as any of the Puppies can be said to have a reasonable concern at the bottom of all their rhetoric, which often comes off as little more than “we think books about people who aren’t straight white dudes are boring”, it’s the worry that certain stories are being rewarded because they contain X character or are written by Y author rather than because they’re actually good. And given the way such books are often discussed and lauded by those who love them, where these aspects are openly stated as pros, you can see where the concern comes from. After all, the difference between saying “this book is great because it had a queer protagonist” and “this book is great because it had a well-written protagonist” seems, on the surface, pretty obvious: one is concerned with a single aspect of characterisation regardless of its execution, and the other is concerned with execution alone. So clearly, if you’re vaunting queerness (for instance) as though it’s a synonym for quality, you’re at risk of recommending mediocre stories on a tokenistic, uninformed basis.

Right?

Wrong.

But in order to explain why this is so, there are six onion layers we need to peel back: context, experience, awareness, representation, language and taste.

Let’s start with layer one: context. While there’s always been an element of diversity in SFF – you can’t ignore the contributions of writers like Ursula K. Le Guin or Octavia Butler, or pretend that the Golden Age greats never wrote about politics – as the Puppies themselves agree, it’s only comparatively recently that a movement in favour of promoting diversity has flourished. Setting aside the question of whether this is a good or a bad thing, or merely just a thing, at a practical level, increased diversity in narrative means you’re adding a new set of variables to any critical equation, which in turn requires a new way to discuss them. For example: if the vast majority of protagonists in a given genre are straight, white men, then critically speaking, there’s little need to mention their straightness/whiteness/maleness when making reviews or recommendations, because none of these details are relevant in distinguishing Story A from Story B, or Character A from Character B. Instead, you talk about other things – the quality of the characterisation, for instance – and consider it a job well done.

Which, contextually, it is. And somewhat understandably, if this is what you’re used to, it can be easy to assume that ever mentioning race or gender or sexuality in a review is irrelevant – even when the characters are more diverse – because these details, whatever else they might indicate, have no bearing on the quality of the story.

Except, of course, they do, as per the evidence of layer two: experience. Who we are and where we’ve come from shapes our construction: our beliefs, our personalities. Returning to a situation where straight white male characters are the default, a reviewer would be within their rights – would, indeed, be doing a good job – to discuss how Character A’s working-class upbringing informs his personality, especially when compared with Character B’s more aristocratic heritage and attitudes. A veteran soldier will have a different perspective on combat to someone who’s only ever studied tactics at a remove, just as an old man who’s recently outlived the love of his life will think differently about romance to a teenager in the throes of his first infatuation. These details are critically pertinent because they demonstrate how and why two characters can experience the same story in radically different ways, and if we as readers happen to have some points in common with Character A or Character B, we’re always going to compare our own experiences with theirs, no matter how fantastical or futuristic the setting, because it helps us gauge whether, in our opinion, the writer has done a good job of portraying their thoughts and feelings realistically.

And so it is with details like race and gender and sexuality. A queer character will have different experiences to a straight one, particularly if they live in a homophobic culture; someone who’s religious will have a different outlook on life to someone who’s an atheist; a person from a racial and cultural minority will experience their surroundings differently to someone from the racial and cultural majority; someone who grows up poor will approach wealth differently to someone who’s always had it. How relevant these details are to individual characterisation and worldbuilding – and how successfully they’re executed within a given story – will, of course, vary on a case by case basis; but of necessity, they matter more often than not, and therefore deserve to be mentioned.

Which means that, if the straight white man is no longer the default character, but is rather just one of a number of options, his straightness, whiteness and maleness will be subject to new scrutiny, both in the present and as a retroactive phenomenon. This is layer three: awareness. All stories, no matter how fantastic or futuristic, are ultimately the product of their times, because their writers are the product of their times, too. We might envisage new worlds, but what we consider new depends as much on what we think is old as what we think is possible; our taboos change with the decade or century or according to cultural context; particular writing styles go in and out of vogue; and audiences, depending on their tastes and when they’re raised, expect a range of different things from narrative.

The retroactive criticism and analysis of old works has always been part of literary tradition; what changes is the present-day yardstick against which we measure them. Right now, we’re in the middle of a cultural shift spanning multiple fronts, both political and creative: we’re aware that stories are being told now which, for various reasons, haven’t often been told before, or which didn’t receive much prominence when they were, and which are consequently being told by a wider range of people. Depending on your personal political stance, and as with the question of diversity in the context layer, you might view this as a good thing, a bad thing, or merely a thing – but regardless of your beliefs, you can’t deny that it’s happening, and that it’s having an impact. As a direct result of this, many of us are now looking at old stories – and at old defaults – in a new light, which means that certain narratives and narrative elements which, by dint of once being so common as to void discussion, were considered thematically neutral, are now being treated as political. (Which, really, they always were – but more on that later.)

As our cultural taboos have shifted – as queerness has become decriminalised (if not always accepted) and rights extended to everyone regardless of race and gender (though still often enacted with prejudice) – the types of stories it’s acceptable to tell have changed, just as it’s now possible for a wider range of storytellers to be heard. We’re all aware of these changes, and whether we like them or not, their visibility makes us question our stories in ways we haven’t before. Thus: while there is nothing noteworthy in choosing to write a straight, white male protagonist in a cultural milieu where almost all protagonists share these qualities, the same act carries more meaning when the combination is understood to be just one of a number of possible choices; and especially where, of all those choices, it’s the one we’ve seen most often, and is therefore, in one sense, the least original. Which doesn’t make such characters inherently bad, or boring, or anything like that; nor does the presence of such characters – or the success of such writers – preclude the simultaneous presence of diversity. It simply means we have an increased awareness of the fact that, up until very recently, a certain type of character was the narrative default, and now that he’s not – or at least, now that he’s not to some people – it’s worth asking whether his presence is a sign that the writer, whether consciously or unconsciously, is perpetuating that default, and what that says about the story in either case.

Which brings us to the fourth layer: representation. Following on from the issue of awareness, consider that, as a wider variety of stories are now being told by a wider variety of people, a wider range of protagonists has consequently entered the narrative market. As with context and awareness, you might think this is a good thing, a bad thing, or merely a thing: regardless, it is happening, and will doubtless continue to happen. As such, a wider percentage of the audience is now having stories written both by and about them – or at least, about people like them – than in previous years; which means that, in response to the former dearth of such narratives, there’s been a corresponding rise in people requesting or recommending them primarily or prominently on the basis of their representational elements.

Ignoring for the moment all questions of quality – which, yes; I’m aware that’s the discussion we’re ultimately having, but bear with me – it should be a point of basic human empathy to understand why this is important; or at the very least, why representation matters to so many people. Despite our ability to empathise and connect with characters whose lives and experiences are utterly different to our own, we still like to see ourselves represented in fiction from time to time, not only as a form of validation – you’re worth telling stories about – but because, amidst so much difference, it’s a point of connection, affirmation, identity. Yet because straight white male characters were so long the default – and because that default, by virtue of its ubiquity, was considered politically neutral – changing the recipe, as it were, is still a visibly deliberate act: it makes the reader aware that the author chose for the character to be male or female, queer or straight, black or white (to give the simplest binary permutations), which awareness refutes the mythical idea of characters as the immaculate, fully-fledged gifts of some inviolable Muse, beyond the writer’s ability to pick or alter; and as such, there’s a reflexive tendency to conflate deliberate with forced, where the latter term carries implications of artificial, false, arbitrary, tokenistic. When these attributes don’t describe us, it’s easy to forget that actually, people like that do exist in the real world, and in considerable numbers; they’re not just something the author has made up out of whole cloth, and the fact that we might be surprised to see them in a given story doesn’t mean, ipso facto, that they’re incongruous within it.

As such, there’s a developing trend towards recommending stories which feature traditionally under-represented groups, not just as some arbitrary exercise, but because we’re aware that members of those groups might actually want to read those stories, and will, as a consequence, have a material interest in that aspect of the contents. But for precisely this reason, such recommendations are seldom indiscriminate, based, as Torgersen and the Puppies fear, solely on the presence of Character A regardless of execution or context – because even though protagonists have long defaulted to being straight, white and male, there’s an equally long tradition of other groups being portrayed badly. The fact that a book contains multiple female characters is no guarantee that those characters are written well, let alone inoffensively, just as the presence of POC within a classic text doesn’t mean their portrayal and treatment isn’t screamingly racist – which is why, when you see diversity advocates recommending books on the basis that Character A is queer (for instance), the implication is that the filtering for quality has already taken place; that Character A both exists in a well-written narrative and isn’t a walking stereotype. The entire point of the exercise is to promote stories, not on the basis of token or forced diversity alone, but because they portray diversity well. And because an author writing from their personal, in-depth experience is likely to have an extensive understanding of the topic, this support naturally extends to mentioning if, for instance, the author of a story starring multiple queer characters is queer themselves – not because there’s an assumption that straight people can’t write excellent stories about queer individuals, but because within any field or group, there’s always going to be a degree of insight or insider knowledge that can only be understood through personal experience, and it’s worth recognising which books are likely to replicate it, especially if we’re insiders, too, and are therefore more likely to notice if those perspectives are missing.

Consider, for instance, the probable insights contained in a military SF novel written by a serving soldier, as distinct from one written by a military historian, as distinct again from one whose author’s knowledge of combat, tactics and fighting comes primarily from what they’ve read or seen in other fictional stories. The different backgrounds and knowledge-bases of these hypothetical authors say nothing about how well they write fiction, or how skilled they might be at other aspects of storytelling; they might have wildly different narrative styles and work within very different worlds, such that comparing their books, for all that they ostensibly share a genre, is a tricky proposition. All three books could be excellent in different ways, and all three books could be poor. But if someone you knew to be both a good judge of fiction and possessed of actual combat experience – let’s call them Sam – handed you the first writer’s book and said, “Read this! The author actually served overseas!”, you’d probably deduce from context that, having served themselves, Sam was telling you that this writer gets it; their experience is my experience, or close enough to mine to be recognisable, and they know what they’re talking about.

Similarly, if Sam praised either of the other two books for the military content, you’d understand that they were speaking from a position of personal experience: that, to someone with firsthand knowledge of fighting, the tactical/combat elements didn’t feel unrealistic or forced. By the same token, if Sam disliked the first book, you might take the criticism seriously while considering that, as the author was writing from their own first-hand perspective, too, a lack of realism wasn’t necessarily at fault, so much as a clash of opinions. But if Sam told you categorically that the third writer had no idea what they were talking about – that, regardless of any other qualities the book might have, the military aspect was hopeless – you’d be inclined to take that criticism more seriously than if a civilian friend with no grasp of tactics recommended it wholeheartedly; but depending on your own status as civilian, historian or soldier – and how badly you wanted to read the book for other reasons – your own reaction could be different again.

What I mean to say is this: seen from the outside, it’s easy to look at the members of a community recommending stories on what seems to you a superficial basis, and to conclude that, actually, nobody in that conversation is concerned with quality at all. But as per the fifth layer – language – what you’re really witnessing is a collectively understood shorthand: a way of signalling quickly that this book or that is worthy of attention based on a deeper awareness of commonly-held priorities, with respect accorded to those whose recommendations are supported by their personal experiences. Particularly on Twitter, where conversations between small groups are visible to non-participants and where character limitations make exposition difficult, it makes sense that bloggers, writers and critics alike try to be as succinct and powerful in their advocacy as possible. Just as I would accord a greater critical weight to the judgement of a soldier recommending a military SF novel, if a person of colour praises a book for its positive racial representation – or, conversely, criticises its lack thereof – I’m going to consider that relevant.

Which all ties in neatly to the final layer: taste. I’ve said before, and will say again, that I’m a firm believer in the value of negative reviews. Not only do they serve an important critical function, but as another person’s taste is seldom identical to our own, they help us construct a more useful idea of where our interests overlap with the critic’s, and where they diverge. Demonstrably, there’s an audience right now for diverse fiction: for stories which reject the old defaults and showcase a wider range of people, themes and places. The fact that some people enjoy such works does not, in and of itself, make them good works, just as popularity is no guarantee of goodness, either. The Venn diagram of why we love something and its objective strengths – inasmuch as such strengths can reasonably be said to exist – is seldom a perfect circle: creative endeavours are funny like that. There’s always going to be a sort of tension between technique and passion, skill and heart, not because those things are in any way diametric opposites, but because we can never quite agree on whether one is more important than the other, or if you can really have one without the other, or where the distinction between them lies if, for instance, the most heartfelt aspects of a story are only so because of their technically sound expression.

As such, creative awards are contentious creatures – have always been so; will always be so – inasmuch as presenting one represents the imposition of an objective judgement into a fundamentally subjective medium; and because all claims to objectivity are inherently political, so must awards be political, too. This isn’t new information, though some people, like the Puppies, have become mightily outraged at the revelation that what they’ve historically perceived as a lack of politics was, in fact, merely a political bias towards their own comfort. That they are no longer predominantly catered to, they perceive as being under attack; what they call the forced introduction of politics into a formerly neutral space is rather the revelation of existing politics through a natural process of change. A sandbar might be solid for years, but when it shifts with the ocean and so makes new waves, it hasn’t betrayed the people standing on it – though possibly, it might have collapsed sooner beneath their weight, especially if they mistook it for solid and made it the foundation of an improbable edifice.

I guess what I want to say is this: despite what the Puppies think, the rest of us aren’t interested in diversity without quality, and as we’re all acutely aware, the failure mode of diversity is stereotype, which concept isn’t exactly on handshake terms with quality in the first place. That we want to celebrate historically silenced voices and perspectives doesn’t mean we’re doing so purely to spite you, or that we’ve lost all sense of judgement: if our tastes extend to seeing in fiction those versions of ourselves you’re disinclined to write, then who are you to tell us we aren’t entitled to our preferences? Nobody is saying you can’t tell your stories; we just might not want to read them, the same as you evidently have no desire to read ours. That’s not the genre being attacked – it’s the genre changing, and whether you change with it or not, we’re still going to like what we like.

Stop fighting the riptide, Puppies. As any Australian could tell you, it’s the surest way to drown.

For a while now, I’ve been hearing chatter about Seth Dickinson’s upcoming debut, The Traitor Baru Cormorant, due for release in September this year. Some of what I’ve heard has been extremely positive; some has been less so. Either way, I was intrigued enough to be interested, and today I finally read the first two chapters, which are currently available online at Tor.com.

My gut reaction thus far: creeping unease.

At a technical level, Dickinson writes extremely well. His prose is clean and sharp and compelling with a good sense of pace, and he has a knack for conveying great scope with few words. He’s also telling a story about queer people, people of colour, women, imperialism, politics and colonialism, which is always going to interest me at a visceral level, and as such, I was never bored.

However.

The thing about writing SFFnal stories is that, no matter how fantastic the setting or distant the future we might write, they’re still ultimately shaped by our very real, very human now: by our cultures, past and present, with all the attendant histories and contexts that entails. Sometimes, the connection is more obvious than others, as when we’re deliberately trying to evoke the shadow of ancient Rome or Renaissance Italy, but however we might invent, dissemble, hybridise, paraphrase or otherwise imagine new worlds, we’re not making anything out of whole cloth. Our fingerprints pattern the weave, reminding us of the reality we’re trying, however briefly, to escape, and whether we do it consciously or not, the process still occurs, as inevitable as sunrise.

Thus: when Dickinson writes about the Empire of Masks, with its paper money, bureaucratic service exam and sterile hatred of unhygienic behaviour, which here means homosexuality in all its permutations, what I think of is a cross between Imperial Britain and Imperial China, the language and bigotry of the former married to the institutions and scale of the latter. Adding to this impression, the denizens of Falcrest, home of this chimerical empire, are described as follows:

“This was the first impression Baru had of the Falcrest people: stubborn jaws, flat noses, deep folded eyes, their skin a paler shade of brown or copper or oat. At the time they hardly seemed so different.” 

Anglophone language and epicanthic folds: it’s not a subtle marriage, and in these two chapters, it feels like Dickinson has smashed imperial China and Britain together without much regard for the consequences of the fit. Which, ordinarily, might raise my eyebrow without stirring complaint – generally speaking, I’m a fan of cultural mashups, especially incongruous or startling ones. But here, given the prominent focus on homophobia and queer persecution, I can’t get past the real world implications; or, more specifically, the real world history.

Because beyond the horrific history between Britain and China, which frequently involves the former exploiting the latter, there’s the inescapable fact that Imperial China didn’t have anything even vaguely resembling the institutional homophobia Dickinson is describing, because in China – as in so many other parts of the world impacted by white colonialism – the sort of scientific, medicalised, systematic homophobia that situated being queer as an illness was a Western import. Nor is this a difficult fact to ascertain, as per the very first paragraph of the Wikipedia entry on homosexuality in China:

“The existence of homosexuality in China has been well documented since ancient times. According to one study, homosexuality in China was regarded as a normal facet of life in China, prior to the Western impact of 1840 onwards. However, this has been disputed. Many early Chinese emperors are speculated to have had homosexual relationships, accompanied by heterosexual ones. Opposition to homosexuality, according to the study by Hinsch, did not become firmly established in China until the 19th and 20th centuries, through the Westernization efforts of the late Qing Dynasty and early Republic of China. On the other hand, Gulik’s influential study argued that the Mongol Yuan dynasty introduced a more ascetic attitude to sexuality in general… Either way, it is indisputable that homosexual sex was banned in the People’s Republic of China from at least the twentieth century, until it was legalized in 1997.”

By comparison, the first British anti-sodomy law was the Buggery Act of 1533, which gave the crown the power to deal with an offence that had previously been handled exclusively by the Christian ecclesiastical courts. Consider this excerpt, for instance, from the Wikipedia article on homosexuality and psychiatry in a Western context:

“The view of homosexuality as a psychological disorder has been seen in literature since research on homosexuality first began. However, psychology as a discipline has evolved over the years in its position on homosexuality. Current attitudes have their roots in religious, legal and cultural underpinnings. In the early Middle Ages the Christian Church tolerated, or at least ignored homosexuality in secular cultures outside the Church. However, by the end of the 12th century hostility towards homosexuality began to emerge and spread through Europe’s secular and religious institutions. There were official expressions condemning the “unnatural” nature of homosexual behavior in the works of Thomas Aquinas and others. Unti the 19th century, homosexual activity was referred to as “unnatural, crimes against nature”, sodomy or buggery and was punishable by law, and even death. As people became more interested in discovering the causes of homosexuality, Medicine and Psychiatry began competing with the law and religion for jurisdiction. In the beginning of the 19th century, people began studying homosexuality scientifically. At this time, most theories regarded homosexuality as a disease, which had a great influence on how it was viewed culturally.”

With these two different narratives in mind, here’s the view of homosexuality held by Dickinson’s fictitious imperial Falcrest, as described in Chapter One:

“She went into the school, with her own uniform and her own bed in the crowded dormitory, and there in her first class on Scientific Society and Incrasticism she learned the words sodomite and tribadist and social crime and sanitary inheritance, and even the mantra of rule: order is preferable to disorder. There were rhymes and syllogisms to learn, the Qualms of revolutionary philosophy, readings from a child’s version of the Falcresti Handbook of Manumission.”

Clearly, then, this type of homophobia is far more in the British mould than the Chinese. And thus my unease: because while Dickinson’s Masquerade, as his empire is externally known, is a fictional culture, what it evokes, in terms of real world comparisons, is a narrative wherein an undeniably white, colonial, homophobic agenda is being utilised by POC against other POC. Throw in the fact that, post-Western influence, modern China was, for a period, intensely homophobic – something the casual reader is more likely to know about than, say, the passion of the cut sleeve – and you have a narrative that, whether intentionally or not, subtly reinforces the stereotype of homophobia as a predominantly non-Western, non-white problem.

Further complicating matters is the planned trajectory of the titular protagonist – that is, of Baru Cormorant – as a woman from a formerly queer-friendly culture having to repress that part of her identity in order to rise through the Falcresti ranks, the better to one day change their ideology. To be clear: I have absolutely nothing against the idea of a story where a secret outsider strives to change a toxic system from within; that’s good stuff. The problem is that, by the end of Chapter Two, Baru – now eighteen – is set to leave her home island of Taranoke for life in the imperial service, having aged eleven years since the start of Chapter One. And while, as stated, Dickinson writes with great technical skill, for a story that’s being set up to portray Baru as the intended saviour of Taranoke culture, it’s troubling that we see her behaviour almost exclusively through the lens of Falcresti mores.

By which I mean: beyond its queer and polyamorous acceptance, we’re shown very little about Taranoke culture, and thus don’t have the proper sense of what Baru is setting out to avenge or protect beyond a deeply simplistic narrative of Homophobia Is Wrong. Baru’s time at the Falcresti school under the sponsorship of her patron, Cairdine Farrier, is the kind of thing I could easily read whole books about in its own right, but which in any case demands far more attention than two brief chapters can supply, no matter how well written they might be. Instead, we see far more of Baru’s acceptance of Falcresti logic than we do comparisons or conflicts with what she was taught before then; even the other students seem to have accepted the colonial mandate that the families and family structures they’ve known all their lives are wrong, as per this section in Chapter Two:

“Children began to vanish from the school, sent back out onto the island, into the plague. “Their behaviour was not hygienic,” the teachers said. Social conditions, the students whispered. He was found playing the game of fathers –

The teachers watched them coldly as their puberty came, waiting for unhygienic behaviour to manifest itself. Baru saw why Cairdine Farrier had advised her on her friendships. Some of the students collaborated in the surveillance.”

This level of indoctrination and complicity, presented in the absence of any compelling reason as to why the Taranoke students are so quick to abandon their own culture, is utterly jarring. We don’t get a sense of fear or coercion or other social changes beyond the plague and its impact; the children are seemingly cut off from their parents and families long before then, and it’s all glossed so quickly that what should be a nuanced explanation of cultural change and colonialism – but which is still the apparent heart of the novel, given that Baru is meant to be motivated by her time here to come back and fix everything – is instead rendered in brief, like an unimportant aside before the real story starts.

As a queer reader, the portrait Dickinson paints of Falcresti homophobia is genuinely unsettling, which is why the commensurate lack of attention paid to Taranoke customs feels like such an imbalance. Two chapters in, and all we know of queerness so far is that people suffer for it: Baru loses one of her fathers to the invaders, her cousin is threatened with molestation under the guise of corrective rape, Taranoke is colonised, and Baru’s two external allies both abandon her when they learn what she did to try and protect her cousin.

It’s queer tragedy porn in a fantasy context, and from what I’ve been told about how the book ends, that never really changes; arguably gets worse, in fact. And while I applaud Seth Dickinson for wanting to tell a story about how Homophobia Is Bad, complete with a cast of characters who are queer and female and POC, I can’t applaud his apparent decision to do so by making said characters suffer unbearably because of their orientation, the better to let the audience know that Homophobia Is Wrong.

The problem, then, is that The Traitor Baru Cormorant comes across as being a novel about queer oppression that is – whether intentionally or not – written for a straight audience: that is, for people who can find novelty and drama in stories about unrelenting queer oppression because they’ve never personally experienced it, whereas those of us who have just want, by and large, to read about queer people being people, preferably complex ones who get their fair share of happy endings rather than the traditional tragedy.

So, yeah. I’ll reserve full judgement for when (and if) I make it through the rest of the book, but right now, it doesn’t bode well.