Posts Tagged ‘Politics’

As social media platforms enter their collective adolescence – Facebook is fifteen, YouTube fourteen, Twitter thirteen, tumblr twelve – I find myself thinking about how little we really understand their cultural implications, both ongoing and for the future. At this point, the idea that being online is completely optional in the modern world ought to be absurd, and yet multiple friends, having spoken to their therapists about the impact of digital abuse on their mental health, were told straight up to just stop using the internet. Even if this were a viable option for some, the idea that we can neatly sidestep the problem of bad behaviour in any non-utilitarian sphere by telling those impacted to simply quit is baffling at best and a tacit form of victim-blaming at worst. The internet might be a liminal space, but object permanence still applies to what happens here: the trolls don’t vanish if we close our eyes, and if we vanquish one digital hydra-domain for Toxicity Crimes without caring to fathom the whys and hows of what went wrong, we merely ensure that three more will spring up in its place.

Is the internet a private space, a government space or a public space? Yes.

Is it corporate, communal or unaffiliated? Yes.

Is it truly global or bound by local legal jurisdictions? Yes.

Does the internet reflect our culture or create it? Yes.

Is what people say on the internet reflective of their true beliefs, or is it a constant shell-game of digital personas, marketing ploys, intrusive thoughts, growth-in-progress, personal speculation and fictional exploration? Yes.

The problem with the internet is that it takes up all three areas on a Venn diagram depicting the overlap between speech and action, and while this has always been the case, we’re only now admitting that it’s a bug as well as a feature. Human interaction cannot be usefully monitored using an algorithm, but our current conception of What The Internet Is has been engineered specifically to shortcut existing forms of human oversight, the better to maximise both accessibility (good to neutral) and profits (neutral to bad). Uber and Lyft are cheaper, frequently more convenient alternatives to a traditional taxi service, for instance, but that’s because the apps themselves are functionally predicated on the removal of meaningful customer service and worker protections that were hard-won elsewhere. Sites like tumblr are free to use, but the lack of revenue generated by those users means that, past a certain point, profits can only hope to outstrip expenses by selling access to those users and/or their account data, which means in turn that paying to effectively monitor their content creation becomes vastly less important than monetising it.

Small wonder, then, that individual users of social media platforms have learned to place a high premium on their ability to curate what they see, how they see it, and who sees them in turn. When I first started blogging, the largely unwritten rule of the blogosphere was that, while particular webforums dedicated to specific topics could have rules about content and conduct, blogs and their comment pages should be kept Free. Monitoring comments was viewed as a sign of narrow-minded fearfulness: even if a participant was aggressive or abusive, the enlightened path was to let them speak, because anything else was Censorship. This position held out for a good long while, until the collective frustration of everyone who’d been graphically threatened with rape, torture and death, bombarded with slurs, exhausted by sealioning or simply fed up with nitpicking and bad faith arguments finally boiled over.

Particularly in progressive circles, the relief people felt at being told that actually, we were under no moral obligation to let assholes grandstand in the comments or repeatedly explain basic concepts to only theoretically invested strangers was overwhelming. Instead, you could simply delete them, or block them, or maybe even mock them, if the offence or initial point of ignorance seemed silly enough. But as with the previous system, this one-size-fits-all approach soon developed a downside. Thanks to the burnout so many of us felt after literal years of trying to treat patiently with trolls playing Devil’s Advocate, liberal internet culture shifted sharply towards immediate shows of anger, derision and flippancy to anyone who asked a 101 question, or who didn’t use the right language, or who did anything other than immediately agree with whatever position was explained to them, however simply.

I don’t exempt myself from this criticism, but knowing why I was so goddamn tired doesn’t change my conviction that, cumulatively, the end result did more harm than good. Without wanting to sidetrack into a lengthy dissertation on digital activism in the post-aughties decade, it seems evident in hindsight that the then-fledgling alliance between trolls, MRAs, PUAs, Redditors and 4channers to deliberately exhaust left-wing goodwill via sealioning and bad faith arguments was only the first part of a two-pronged attack. The second part, when the left had lost all patience with explaining its own beliefs and was snappily telling anyone who asked about feminism, racism or anything else to just fucking Google it, was to swoop in and persuade the rebuffed party that we were all irrational, screeching harridans who didn’t want to answer because we knew our answers were bad, and why not consider reading Roosh V instead?

The fallout of this period, I would argue, is still ongoing. In an ideal world, drawing a link between online culture wars about ownership of SFF and geekdom and the rise of far-right fascist, xenophobic extremism should be a bow so long that not even Odysseus himself could draw it. But this world, as we’ve all had frequent cause to notice, is far from ideal at the best of times – which these are not – and yet another featurebug of the internet is the fluid interpermeability of its various spaces. We talk, for instance – as I am talking here – about social media as a discrete concept, as though platforms like Twitter or Facebook are functionally separate from the other sites to which their users link; as though there is no relationship between or bleed-through from the viral Facebook post screencapped and shared on BuzzFeed, which is then linked and commented upon on Reddit, which thread is then linked to on Twitter, where an entirely new conversation emerges and subsequently spawns an article in The Huffington Post, which is shared again on Facebook and the replies to that shared on tumblr, and so on like some grisly perpetual mention machine.

But I digress. The point here is that internet culture is best understood as a pattern of ripples, each new iteration a reaction to the previous one, spreading out until it dissipates and a new shape takes its place. Having learned that slamming the virtual door in everyone’s face was a bad idea, the online left tried establishing a better, calmer means of communication; the flipside was a sudden increase in tone-policing, conversations in which presentation was vaunted over substance and where, once again, particular groups were singled out as needing to conform to the comfort-levels of others. Overlapping with this was the move towards discussing things as being problematic, rather than using more fixed and strident language to decry particular faults – an attempt to acknowledge the inherent fallibility of human works while still allowing for criticism. A sensible goal, surely, but once again, attempting to apply the dictum universally proved a double-edged sword: if everything is problematic, then how to distinguish grave offences from trifling ones? How can anyone enjoy anything if we’re always expected to thumb the rosary of its failings first?

When everything is problematic and everyone has the right to say so, being online as any sort of creator or celebrity is like being nibbled to death by ducks. The well-meaning promise of various organisations, public figures or storytellers to take criticism on board – to listen to the fanbase and do right by their desires – was always going to stumble over the problem of differing tastes. No group is a hivemind: what one person considers bad representation or in poor taste, another might find enlightening, while yet a third party is more concerned with something else entirely. Even in cases with a clear majority opinion, it’s physically impossible to please everyone and a type of folly to try, but that has yet to stop the collective internet from demanding it be so. Out of this comes a new type of ironic frustration: having once rejoiced in being allowed to simply block trolls or timewasters, we now cast judgement on those who block us in turn, viewing them, as we once were viewed, as being fearful of criticism.

Are we creating echo chambers by curating what we see online, or are we acting in pragmatic acknowledgement of the fact that we neither have time to read everything nor an obligation to see all perspectives as equally valid? Yes.

Even if we did have the time and ability to wade through everything, is the signal-to-noise ratio of truth to lies on the internet beyond our individual ability to successfully measure, such that outsourcing some of our judgement to trusted sources is fundamentally necessary, or should we be expected to think critically about everything we encounter, even if it’s only intended as entertainment? Yes.

If something or someone online acts in a way that’s antithetical to our values, are we allowed to tune them out thereafter, knowing full well that there’s a nearly infinite supply of as-yet undisappointing content and content-creators waiting to take their place, or are we obliged to acknowledge that Doing A Bad doesn’t necessarily ruin a person forever? Yes.

And thus we come to cancel culture, the current – but by no means final – culmination of previous internet discourse waves. In this iteration, burnout at critical engagement dovetails with a new emphasis on collective content curation courtesies (try saying that six times fast), but ends up hamstrung once again by differences in taste. Or, to put it another way: someone fucks up and it’s the last straw for us personally, so we try to remove them from our timelines altogether – but unless our friends and mutuals, who we still want to engage with, are convinced to do likewise, then we haven’t really removed them at all, such that we’re now potentially willing to make failure to cancel on demand itself a cancellable offence.

Which brings us right back around to the problem of how the modern internet is fundamentally structured – which is to say, the way in which it’s overwhelmingly meant to rely on individual curation instead of collective moderation. Because the one thing each successive mode of social media discourse has in common with its predecessors is a central, and currently unanswerable question: what universal code of conduct exists that I, an individual on the internet, can adhere to – and expect others to adhere to – while we communicate across multiple different platforms?

In the real world, we understand about social behavioural norms: even if we don’t talk about them in those terms, we broadly recognise them when we see them. Of course, we also understand that those norms can vary from place to place and context to context, but as we can only ever be in one physical place at a time, it’s comparatively easy to adjust as appropriate.

But the internet, as stated, is a liminal space: it’s real and virtual, myriad and singular, private and public all at once. It confuses our sense of which rules might apply under which circumstances, jumbles the normal behavioural cues by obscuring the identity of our interlocutors, and even though we don’t acknowledge it nearly as often as we should, written communication – like spoken communication – is a skill that not everyone has, just as tone, whether spoken or written, isn’t always received (or executed, for that matter) in the way it was intended. And when it comes to politics, in which the internet and its doings now play no small role, there’s the continual frustration that comes from observing, with more and more frequency, how many literal, real-world crimes and abuses go without punishment, and how that lack of consequences contributes in turn to the fostering of abuse and hostility towards vulnerable groups online.

This is what comes of occupying a transitional period in history: one in which laws are changed and proposed to reflect our changing awareness of the world, but where habit, custom, ignorance, bias and malice still routinely combine, both institutionally and more generally, to see those laws enacted only in part, or tokenistically, or not at all. To take one of the most egregious and well-publicised instances that ultimately presaged the #MeToo movement, the laughably meagre sentence handed down to Brock Turner, who was caught in the act of raping an unconscious woman, combined with the emphasis placed by both the judge and much of the media coverage on his swimming talents and family standing as a means of exonerating him, made it very clear that sexual violence against women is frequently held to be less important than the perceived ‘bright futures’ of its perpetrators.

Knowing this, then – knowing that the story was spread, discussed and argued about on social media, along with thousands of other, similar accounts; knowing that, even in this context, some people still freely spoke up in defence of rapists and issued misogynistic threats against their female interlocutors – is it any wonder that, in the absence of consistent legal justice in such cases, the internet tried, and is still trying, to fill the gap? Is it any wonder, when instances of racist police brutality are constantly filmed and posted online, only for the perpetrators to receive no discipline, that we lose patience for anyone who wants to debate the semantics of when, exactly, extrajudicial murder is “acceptable”?

We cannot control the brutality of the world from the safety of our keyboards, but when it exhausts or threatens us, we can at least click a button to mute its seeming adherents. We don’t always have the energy to decry the same person we’ve already argued against a thousand times before, but when a friend unthinkingly puts them back on our timeline for some new reason, we can tell them that person is cancelled and hope they take the hint not to do it again. Never mind that there is far too often no subtlety, no sense of scale or proportion to how the collective, viral internet reacts in each instance, until all outrage is rendered flat and the outside observer could be forgiven for worrying what’s gone wrong with us all, that using a homophobic trope in a TV show is thought to merit the same online response as an actual hate crime. So long as the war is waged with words alone, there’s only a finite number of outcomes that boycotting, blocking, blacklisting, cancelling, complaining and critiquing can achieve, and while some of those outcomes in particular are well worth fighting for, so many words are poured towards so many attempts that it’s easy to feel numbed to the process; or, conversely, easy to think that one response fits all contexts.

I’m tired of cancel culture, just as I was dully tired of everything that preceded it and will doubtless grow tired of everything that comes after it in turn, until our fundamental sense of what the internet is and how it should be managed finally changes. Like it or not, the internet both is and is of the world, and that is too much for any one person to sensibly try and curate at an individual level. Where nothing is moderated for us, everything must be moderated by us; and wherever people form communities, those communities will grow cultures, which will develop rules and customs that spill over into neighbouring communities, both digitally and offline, with mixed and ever-changing results. Cancel culture is particularly tricky in this regard, as the ease with which we block someone online can seldom be replicated offline, which makes it all the more intoxicating a power to wield when possible: we can’t do anything about the awful coworker who rants at us in the breakroom, but by God, we can block every person who reminds us of them on Twitter.

The thing about participating in internet discourse is, it’s like playing Civilisation in real-time, only it’s not a game and the world keeps progressing even when you log off. Things change so fast on the internet – memes, etiquette, slang, dominant opinions – and yet the changes spread so organically and so quickly that we frequently adapt without keeping conscious track of when and why they shifted. Social media is like the Hotel California: we can check out any time we like, but we can never meaningfully leave – not when world leaders are still threatening nuclear war on Twitter, or when Facebook is using friendly memes to test facial recognition software, or when corporate accounts are creating multi-staffed humansonas to engage with artists on tumblr, or when YouTube algorithms are accidentally-on-purpose steering kids towards white nationalist propaganda because it makes them more money.

Of course we try and curate our time online into something finite, comprehensible, familiar, safe: the alternative is to embrace the near-infinite, incomprehensible, alien, dangerous gallimaufry of our fractured global mindscape. Of course we want to try and be critical, rational, moral in our convictions and choices; it’s just that we’re also tired and scared and everyone who wants to argue with us about anything can, even if they’re wrong and angry and also our relative, or else a complete stranger, and sometimes you just want to turn off your brain and enjoy a thing without thinking about it, or give yourself some respite, or exercise a tiny bit of autonomy in the only way you can.

It’s human nature to want to be the most amount of right for the least amount of effort, but unthinkingly taking our moral cues from internet culture the same way we’re accustomed to doing in offline contexts doesn’t work: digital culture shifts too fast and too asymmetrically to be relied on moment to moment as anything like a universal touchstone. Either you end up preaching to the choir, or you run a high risk of aggravation, not necessarily due to any fundamental ideological divide, but because your interlocutor is leaning on a different, false-universal jargon overlying alternate 101 and 201 concepts to the ones you’re using, and modern social media platforms – in what is perhaps the greatest irony of all – are uniquely poorly suited to coherent debate.

Purity wars in fandom, arguments about diversity in narrative and whether its proponents have crossed the line from criticism into bullying: these types of arguments are cyclical now, dying out and rekindling with each new wave of discourse. We might not yet be in a position to stop it, but I have some hope that being aware of it can mitigate the worst of the damage, if only because I’m loath to watch yet another fandom steadily talk itself into hating its own core media for the sake of literal argument.

For all its flaws – and with all its potential – the internet is here to stay. Here’s hoping we figure out how to fix it before its ugliest aspects make us give up on ourselves.



I have a lot of thoughts right now, and I’m not sure how to express them. There’s so much going wrong in the world that on one level, it feels insincere or trivial to focus on anything other than the worst, most visceral horrors; but on another, there’s a point past which grief and fury become numbing. The angriest part of me wants to wade into the wrench of things and wrangle sense from chaos, but my rational brain knows it’s impossible. I hate that I know it’s impossible, because what else but this do the people who could really change things think, to justify their inaction? I have words, and they feel empty. The world is full of indifferent walls and the tyrants who seek to build more of them; words, no matter how loudly intoned, bounce off them and fade into echoes.

Our governments are torturing children.

I could write essays detailing why particular policies and rhetoric being favoured by Australia, the UK and the US right now are inhumane, but I don’t have the strength for it. Some actions are so clearly evil that the prospect of explaining why to people claiming confusion about the matter makes me want to walk into the sea. I can’t go online without encountering adults who want to split hairs over why, in their view, it’s completely justifiable to steal the children of refugees and incarcerate them away from the parents they mean to deport, because even though they don’t want adult refugees they see no contradiction in keeping their babies indefinitely, in conditions that are proven to cause severe psychological damage, because – why? What the fuck is the end-game, here? People don’t seek asylum on a goddamn whim; they’re fleeing violence and terror, persecution and war and destruction; yet somehow the powers that be think that word of stolen kids will pass through some non-existent refugee grapevine and stop people coming in future? And even if it did, which it manifestly can’t and won’t, what the fuck do they plan to do with the ones they’ve taken?

Our governments are torturing children.

Refugees caged on Manus Island are committing suicide, their families left to learn of their deaths through the media. Disabled people of all ages are dying and will continue to die in the UK of gross neglect. None of this is conscionable; none of it need happen. Billionaires are privately funding enterprises that ought to be public because they can’t conceive of a better use for that much money while workers employed by their companies die sleeping in cars or collapse on the job from gross overwork or subsist on food stamps.

I want to say that the world can’t continue like this, but I know it can. It has before; we’re at a familiar crossroads, and the path down which we’re headed is slick with history’s blood. That’s why it’s so goddamn terrifying.

Please, let this be the turning point. Let’s fix this before it’s too late.

Our governments are torturing children.

When I think about the state of global politics, I often imagine how it’s going to be viewed in the future. My reflex is to think in terms of high school history textbooks, but that phrase evokes a specific type of educational setup that already feels anachronistic – that of overpriced, physical volumes written specifically for teaching teenagers a set curriculum, rather than because they represent good historical summaries in their own right. I think about our penchant for breaking the past down into neatly labelled epochs, and wonder how long it will take for some sharp-tongued future historian to look at the self-professed Information Age, as we once optimistically termed it, examine its trajectory through the first two decades of the new millennium, and conclude that it should be more fittingly known as the Disinformation Age.

With that in mind, here’s my hot take on what a sample chapter from such a historical summary might look like:

Chapter 9: Perprofial Media, Propaganda and Power

Perprofial, adj: something which is simultaneously personal, professional and political. 

When Twitter, the first widespread micro-blogging platform, was launched in 2006, no one could have predicted that, barely eleven years later, this new perprofial medium would have irrevocably changed the political landscape. Earlier social media sites, such as Facebook, were foremost a digital extension of existing personal networks, with aspirational connections an afterthought; traditional blogging, by contrast, began as a form of mass broadcast diarising which steadily – though not without hiccups – osmosed the digital remnants of print-era journalism. But from the outset, Twitter was a platform whose users could both listen and be listened to, a sea of Janus-headed audience-performers whose fame might as easily precede that particular medium as be enabled by it, unless it was both or neither. The draw of enabling the unknown, the upcoming, the newly-minted and the long-established to all rub shoulders at the same party – or at least, to shout around each other from the variegated levels of an infinite, Escheresque ballroom – was considered just that: a draw, instead of – as it more properly was – a Brownian mob-theory engine running in 24/7 real time without anything like a Chinese wall, a fact-checker or a control group to filter the variables.

The true point at which Twitter stopped being a social media outlet and became a Trojan horse at the gates of the Fifth Estate is now a Sorites paradox. We might not be able to pinpoint the exact time and date of the transition, but such coordinates are vastly less important than the fact that the switch itself happened. What we can identify, however, is the moment when the extrajudicial nature of the power wielded by perprofial platforms became clear at a global level.

Though Donald Trump’s provocative online statements long preceded his tenure as president, and while they had consistently drawn commentary from all corners, the point at which his tweets were publicly categorised as a declaration of war by North Korean authorities was a definite Rubicon crossing. If Twitter could – and did – ban users for issuing threats of violence in violation of its Terms of Service, it was argued, then why should it allow a world leader to openly threaten war? If the “draw” of the platform was truly a democratising of the powerful and the powerless, then surely powerful figures should be held to the same standards as everyone else – or even, potentially, to more rigorous ones, given the far greater scope of the consequences afforded them by their fame.

But first, some context. At a time of resurgent global fascism and with educational institutions increasingly hampered by the anti-intellectual siege begun some sixty years earlier, when the theory of “creationism” was first pitched as a scientific alternative in American public schools, the zeitgeist was saturated with the steady repositioning of expertise as toxic to democracy. Early experiments in perprofial media, then called “reality television,” had steadily acclimated the public to the idea that personal narratives, no matter how uninformed, could be a professional undertaking – provided, of course, that they fit within an accepted sociopolitical framework, such as radical weight loss or the quest for fame. At the same time, the rise of the internet as a lawless space where anyone could create and promote their own content, regardless of its quality, created an explosion of self-serving informational feedback loops which, both intentionally and by accident, preyed on the uncritical fact-absorption of generations taught to accept that anything written down in an approved book – of which the screen was seemingly just a faster, more convenient extension – was necessarily true.

The commensurate decline of print-based journalism was the final nail in the coffin. To combat the sharp loss of revenue caused by the jump from an industry financed by a cornered market, lavish advertising revenue and a locked-in pay-per-issue model to the still-nebulous vagaries of digital journalism, where paid professional content existed on the same apparent footing as free amateur blogging, corners were cut. Specialists and sub-editors were let go, journalists were alternately asked or forced to become jacks of all trades, and content was recycled across multiple outlets. All of these changes were drastic enough to be noticeable even to the uninitiated; even so, the situation might still have been salvageable if not for the fact that, in looking to compete in this new environment, the bulk of traditional outlets made the mistake of assuming that the many digital amateurs of the blogosphere were, in aggregate, equivalent to their old nemesis, the tabloid press.

Scandal-sheets are a tradition as old as print journalism, with plenty of historical overlap between the one and the other. At some time or another, even the most reputable papers had all resorted to sensationalism – or at least, to real journalism layered with editorial steering – in an effort to wrest their readerships back from the tabloids, but always on the understanding that their legacy, their trustworthiness as institutions, was established enough to take the moral hit. But when this same tactic was tried again in digital environs, the effect was vastly different. Still struggling with web layouts and paywalls, most traditional papers were demonstrably harder and less intuitive to navigate than upstart blogs, and with not much more to boast in the way of originality (since they’d sacked so many writers) or technical accuracy (since they’d sacked so many editors), the decision to switch to tabloid, clickbait content – often by hiring from the same pool of amateur bloggers they were ultimately competing with, leveraging their decaying reputations as compensation for no or meagre pay in a job market newly seething with desperate, unemployed writers – backfired badly. Rather than reclaimed readerships, the effect was to cement the idea that the only real difference between professional news and amateur opinion wasn’t facts, or training, or integrity, but a simple matter of where you preferred to shop.

The internet had become an information marketplace – quite literally, in the case of Russia bulk-purchasing ads on Facebook in the lead-up to the 2016 US presidential election. In Britain, the success of the Leave vote in the Brexit referendum was attributed in part to voters having “had enough of experts” – the implication being that, contrary to the famous assertion of Isaac Asimov, many people really did think their ignorance was just as good as someone else’s knowledge. Though Asimov was speaking specifically of American anti-intellectualism and a false perception of democracy in the 1980s, his fears were just as applicable some forty years later, and arguably more so, given the rise of perprofial media.

In the months prior to his careless declaration of war, then-president Trump made a point of lambasting what he called the “fake news media”, which label eventually came to encompass any and every publication, whether traditional or digital, which dared to criticise him; even his former ally, Fox News, was not exempt. In the immediate, messy aftermath of the collapse of print journalism, this claim rang just nebulously true enough to many that, with so many trusted papers having perjured themselves with tabloid tactics, Trump was able to situate himself as the One True Authority his followers could trust.

It’s important to note, however, that not just any politician, no matter how sociopathic or self-serving, could have pulled off the same trick. The ace up Trump’s sleeve was his pre-existing status as a king in the perprofial arena of reality television, which had already helped to re-contextualise democracy – or the baseline concept of a democratic institution, rather – as something in which expertise was only to be trusted if supported by success, where “success” meant “celebrity”. Under this doctrine, those who preached expertise, but whom the listener had never heard of, were considered suspect: true success meant fame, and if you weren’t famous for what you knew, then you must not really be knowledgeable. By the same token, celebrities who claimed expertise in fields beyond those for which they were famous were also criticised: it was fine to play football or act, for instance, but as neither skill was seen to have anything to do with politics, the act of speaking “out of turn” on such topics was dismissed as mere self-aggrandising. Actual facts had nothing to do with the matter, because “actual facts” as a concept was rendered temporarily liminal by the struggle between amateur and professional media.

With such “logic” to support him, Trump couldn’t lose. What did his lack of political qualifications matter? He’d still succeeded at getting into politics, which meant he must have learned by doing, which meant in turn that his fame, unlike that of other celebrities, made him an inviolate authority on political matters. Despite how fiercely he was opposed and resisted, his repeated, defensive cries of “fake news!” rang just true enough to sow doubt among those who might otherwise have opposed him.

And so to Twitter, and a declaration of war. By historical assumption, Trump as president ought to have been the most powerful man in the world, but by investing so much of that power in a perprofial platform – one to whose rules of conduct he was personally bound, without any exemption or extenuation on account of his office – he had, quite unthinkingly, agreed to let an international corporation place extrajudicial sanctions, not only on the office of the presidency, but through Trump as an individual and his investiture as the head of state, on a declaration of war.

In the next chapter: racism, dogwhistles and spinning the Final Solution.

*

History is, of course, what we make of it. Right now, I just wish we weren’t making quite so much.


Warning: spoilers for Shin Godzilla.

I’ve been wanting to see Shin Godzilla since it came out last year, and now that it’s available on iTunes, I’ve finally had the chance. Aside from the obvious draw inherent to any Godzilla movie, I’d been keen to see a new Japanese interpretation of an originally Japanese concept, given the fact that every other recent take has been American. As I loaded up the film, I acknowledged the irony in watching a disaster flick as a break from dealing with real-world disasters, but even so, I didn’t expect the film itself to be quite so bitingly apropos.

While Shin Godzilla pokes some fun at the foibles of Japanese bureaucracy, it also reads as an unsubtle fuck you to American disaster films in general and their Godzilla films in particular. From the opening scenes where the creature appears, the contrast with American tropes is pronounced. In so many natural disaster films – 2012, The Day After Tomorrow, Deep Impact, Armageddon, San Andreas – the Western narrative style centres by default on a small, usually ragtag band of outsiders collaborating to survive and, on occasion, figure things out, all while being thwarted by or acting beyond the government. There’s frequently a capitalist element where rich survivors try to edge out the poor, sequestering themselves in their own elite shelters: chaos and looting are depicted up close, as are their consequences. While you’ll occasionally see a helpful local authority figure, like a random policeman, trying to do good (however misguidedly), it’s always at a remove from any higher, more coordinated relief effort, and particularly in more SFFnal films, a belligerent army command is shown to pose nearly as much of a threat as the danger itself.

To an extent, this latter trope appears in Shin Godzilla, but to a much more moderated effect. When Japanese command initially tries to use force, the strike is aborted because of a handful of civilians in range of the blast, and even when a new attempt is made, there’s still an emphasis on chain of command, on minimising collateral damage and keeping the public safe. At the same time, there are almost no on-the-ground civilian elements to the story: we see the public in flashes, their online commentary and mass evacuations, a few glimpses of individual suffering, but otherwise, the story stays with the people in charge of managing the disaster. Yes, the team brought together to work out a solution – which is ultimately scientific rather than military – are described as “pains-in-the-bureaucracy,” but they’re never in the position of having to hammer, bloody-fisted, on the doors of power in order to rate an audience. Rather, their assemblage is expedited and authorised the minute the established experts are proven inadequate.

When the Japanese troops mobilise to attack, we view them largely at a distance: as a group being addressed and following orders, not as individuals liable to jump the chain of command on a whim. As such, the contrast with American films is stark: there’s no hotshot awesome commander and his crack marine team to save the day, no sneering at the red tape that gets in the way of shooting stuff, no casual acceptance of casualties as a necessary evil, no yahooing about how the Big Bad is going to get its ass kicked, no casual discussion of nuking from the army. There’s just a lot of people working tirelessly in difficult conditions to save as many people as possible – and, once America and the UN sign a resolution to drop a nuclear bomb on Godzilla, and therefore Tokyo, if the Japanese can’t defeat it within a set timeframe, a bleak and furious terror at their country once more being subject to the evils of radiation.

In real life, Japan is a nation with extensive and well-practised disaster protocols; America is not. In real life, Japan has a wrenchingly personal history with nuclear warfare; America, despite being the cause of that history, does not.

Perhaps my take on Shin Godzilla would be different if I’d managed to watch it last year, but in the immediate wake of Hurricane Harvey, with Hurricane Irma already wreaking unprecedented damage in the Caribbean, and huge tracts of Washington, Portland and Los Angeles now on fire, I find myself unable to detach my viewing from the current political context. Because what the film drove home to me – what I couldn’t help but notice by comparison – is the deep American conviction that, when disaster strikes, the people are on their own. The rich will be prioritised, local services will be overwhelmed, and even when there’s ample scientific evidence to support an imminent threat, the political right will try to suppress it as dangerous, partisan nonsense.

In The Day After Tomorrow, which came out in 2004, an early plea to announce what’s happening and evacuate those in danger is summarily waved off by the Vice President, who’s more concerned about what might happen to the economy, and who thinks the scientists are being unnecessarily alarmist. This week, in the real America of 2017, Republican Rush Limbaugh told reporters that the threat of Hurricane Irma, now the largest storm ever recorded over the Atlantic Ocean, was being exaggerated by the “corrupted and politicised” media so that they and other businesses could profit from the “panic”.

In my latest Foz Rants piece for the Geek Girl Riot podcast, which I recorded weeks ago, I talk about how we’re so acclimated to certain political threats and plotlines appearing in blockbuster movies that, when they start to happen in real life, we’re conditioned to think of them as being fictional first, which leads us to view the truth as hyperbolic. Now that I’ve watched Shin Godzilla, which flash-cuts to a famous black-and-white photo of the aftermath of Hiroshima when the spectre of a nuclear strike is raised, I’m more convinced than ever of the vital, two-way link between narrative on the one hand and our collective cultural, historical consciousness on the other. I can’t imagine any Japanese equivalent to the moment in Independence Day when cheering American soldiers nuke the alien ship over Los Angeles, the consequences never discussed again despite the strike’s failure, because the pain of that legacy is too fully, too personally understood to be taken lightly.

At a cultural level, Japan is a nation that knows how to prepare for and respond to natural disasters. Right now, a frightening number of Americans – and an even more frightening number of American politicians – are still convinced that climate change is a hoax, that scientists are biased, and that only God is responsible for the weather. How can a nation prepare for a threat it won’t admit exists? How can it rebuild from the aftermath if it doubts there’ll be a next time?

Watching Shin Godzilla, I was most strongly reminded, not of any of the recent American versions, but of The Martian. While the science in Shin Godzilla is clearly more handwavium than hard, it’s nonetheless a film in which scientific collaboration, teamwork and international cooperation save the day. The last, despite a denouement that pits Japan against an internationally imposed deadline, is of particular importance, as global networking still takes place across scientific and diplomatic back-channels. It’s a rare American disaster movie that acknowledges the existence or utility of other countries, especially non-Western ones, beyond shots of collapsing monuments, and even then, it’s usually in the context of the US naturally taking the global lead once they figure out a plan. The fact that the US routinely receives international aid in the wake of its own disasters is seemingly little-known in the country itself; that Texas’s Secretary of State recently appeared to turn down Canadian aid in the wake of Harvey – though this has since been called a misunderstanding – is nonetheless suggestive of confusion over this point.

As a film, Shin Godzilla isn’t without its weaknesses: the monster design is a clear homage to the original Japanese films, which means it occasionally looks more stop-motion comical than is ideal; there’s a bit too much cutting dramatically between office scenes at times; and the few sections of English-language dialogue are hilariously awkward in the mouths of American actors, because the word-choice and use of idiom remains purely Japanese. Even so, these are ultimately small complaints: there’s a dry, understated sense of humour evident throughout, even during some of the heavier moments, and while it’s not an action film in the American sense, I still found it both engaging and satisfying.

But above all, at this point in time – as I spend each morning worriedly checking the safety of various friends endangered by hurricane and flood and fire; as my mother calls to worry about the lack of rain as our own useless government dithers on climate science – what I found most refreshing was a film in which the authorities, despite their faults and foibles, were assumed and proven competent, even in the throes of crisis, and in which scientists were trusted rather than dismissed. Earlier this year, in response to an article we both read, my mother bought me a newly-released collection of the works of children’s poet Misuzu Kaneko, whose poem “Are You An Echo?” was used to buoy the Japanese public in the aftermath of the 2011 tsunami. Watching Shin Godzilla, it came back to me, and so I feel moved to end with it here.

May we all build better futures; may we all write better stories.

Are You An Echo?

If I say, “Let’s play?”
you say, “Let’s play!”

If I say, “Stupid!”
you say, “Stupid!”

If I say, “I don’t want to play anymore,”
you say, “I don’t want to play anymore.”

And then, after a while,
becoming lonely

I say, “Sorry.”
You say, “Sorry.”

Are you just an echo?
No, you are everyone.


A poem by me, with apologies to Dylan Thomas:

Nevertheless, She Persisted

Nevertheless, she persisted.

Live women fighting we shall be one

With la Liberté and the French Joan;

When their hearts are picked clean and the clean hearts gone,

She shall wear laws at elbow and foot;

Though she go mad she will be sane,

Though she flees through the sea she shall rise again;

Though justice be lost the just shall not;

For nevertheless, she persisted.

.

Nevertheless, she persisted.

Over the whinings of their greed

Men lying long have now lied windily;

Changing their tacks when stories give way,

Stacking their courts, yet we shall not break;

Faith in our hands shall snap in two

And the unicorn evils run them through;

Split all ends up she shan’t crack;

And nevertheless, she persisted.

.

Nevertheless, she persisted.

No more may Foxes cry in decline

Or news break loud to a silenced room;

Where fawned a follower may a follower no more

Bow his head to the blows of this reign;

Though she be mad and tough as nails,

Her headlines in characters hammer the dailies;

Break in the Sun ‘till the Sun breaks down,

As nevertheless, she persisted.

For the last few weeks or so, I’ve seen the same video endlessly going around on Facebook: a snippet of an interview with Simon Sinek, who lays out what he believes to be the key problems with millennials in the workplace. Every time I see it shared, my blood pressure rises slightly, until today – joy of joys! – I finally saw and shared a piece rebutting it. As often happens on Facebook, a friend asked me why I disagreed with Sinek’s piece, as he’d enjoyed Sinek’s TED talks. This is my response.

In his talk, Sinek touches on what he believes to be the four core issues handicapping millennials: internet addiction, bad parenting, an unfulfilled desire for meaningful work and a desire to have everything instantly. Now: demonstrably, some people are products of bad parenting, and the pernicious, lingering consequences of helicopter parenting – wherein overzealous, overprotective adults so rob their children of autonomy and instil in them such a fear of failure that they can’t healthily function as adults – are very real. Specifically in reference to Sinek’s claims about millennials all getting participation awards in school (which, ugh: not all of us fucking did, I don’t know a single person for whom that’s true, shut up with this goddamn trope), the psychological impact of praising children equally regardless of their actual achievements, such that they come to view all praise as meaningless and lose self-confidence as a result, is a well-documented phenomenon. But the idea that you can successfully accuse an entire global generation of suffering from the same hang-ups as a result of the same bad parenting stratagems, such that all millennials can be reasonably assumed to have this problem? That, right there, is some Grade-A bullshit.

Bad parenting isn’t a new thing. Plenty of baby boomers and members of older generations have been impacted by the various terrible fads and era-accepted practices their own parents fell prey to (like trying to electrocute the gay out of teenagers, for fucking instance), but while that might be a salient point to make in individual cases or in the specific context of tracking said parenting fads, it doesn’t actually set millennials apart in any meaningful way. Helicopter parenting might be comparatively new, but other forms of damage are not, and to act as though we’re the only generation to have ever dealt with the handicap of bad parenting, whether collectively or individually, is fucking absurd. But more to the point, the very specific phenomenon of helicopter parenting? Is, overwhelmingly, a product of white, well-off, middle- and upper-class America, developed specifically in response to educational environments where standardised testing rules all futures and there isn’t really a viable social safety net if you fuck up, which leads to increased anxiety for children and parents both. While it undeniably appears in other countries and local contexts, and while it’s still a thing that happens to kids now, trying to erase its origins does no favours to anyone.

Similarly, the idea that millennials have all been ruined by the internet and don’t know how to have patience because we grew up with smartphones and social media is – you guessed it – bullshit. This is really a two-pronged point, tying into two of Sinek’s arguments: that we’re internet addicts who don’t know how to socialise properly, and that we’re obsessed with instant gratification, and as such, I’m going to address them together.

Yes, internet addiction is a problem for some, but it’s crucial to note it can and does affect people of all ages rather than being a millennial-only issue, just as it’s equally salient to point out that millennials aren’t the only ones using smartphones. I shouldn’t have to make such an obvious qualification, but apparently, I fucking do. That being said, the real problem here is that Sinek has seemingly no awareness of what social media actually is. I mean, the key word is right there in the title: social media, and yet he’s acting like it involves no human interaction whatsoever – as though we’re just playing with digital robots or complete strangers all the time instead of texting our parents about dinner or FaceTiming with friends or building professional networks on Twitter or interacting with our readerships on AO3 (for instance).

The idea, too, that millennials have their own social conventions different to his own, many of which reference a rich culture of online narratives, memes, debates and communities, does not seem to have occurred to him, because – as he sees it – we’re not learning to do it face to face. Except that, uh, we fucking are, on account of how we still inhabit physical bodies and go to physical places every fucking day of our goddamn lives, do I really have to explain that this is a thing? Do I really have to explain the appeal of maintaining friendships where you’re emotionally close but the person lives hundreds or thousands of kilometres away? Do I really have to spell out the fact that proximal connections aren’t always meaningful ones, and that it actually makes a great deal of human sense to want to socialise with people we care about and who share our interests where possible rather than relying solely on the random admixture of people who share our schools and workplaces for fun?

The fact that Sinek talks blithely about how all millennials grew up with the internet and social media, as though those of us now in our fucking thirties don’t remember a time before home PCs were common (I first learned to type on an actual typewriter), is just ridiculous: Facebook started in 2004, YouTube in 2005, Twitter in 2006, tumblr in 2007 and Instagram in 2010. Meaning, most millennials – who, recall, were born between 1980 and 1995, which makes the youngest of us 21/22 and the eldest nearly forty – didn’t grow up with what is now considered social media throughout our teenage years, as Sinek asserts, because it didn’t really get started until we were out of high school. Before that, we had internet messageboards that were as likely to die overnight as to flourish, IRC chat, and the wild west of MSN forums, which was a whole different thing altogether. (Remember the joys of being hit on by adults as an underage teen in your first chatroom and realising only years later that those people were fucking paedophiles? Because I DO.)

And then he pulls out the big guns, talking about how we get a dopamine rush when we post about ourselves online, and how this is the same brain chemical responsible for addiction, and this is why young people are glued to their phones and civilisation is ending. Which, again, yes: dopamine does what he says it does, but that is some fucking misleading bullshit, Simon Says, and do you know why? Because you also get a goddamn dopamine rush from talking about yourself in real life, too, Jesus fucking Christ, the internet is not the culprit here, to say nothing of the fact that smartphones do more than one goddamn thing. Sinek lambasts the idea of using your phone in bed, for instance, but I doubt he holds a similar grudge against reading in bed, which – surprise! – is what quite a lot of us are doing when we have our phones out of an evening, whether in the form of blogs or books or essays. If I was using a paperback book or a physical Kindle rather than the Kindle app on my iPhone, would he give a fuck? I suspect not.

Likewise, I doubt he has any particular grudge against watching movies (or TED talks, for that matter) in bed, which phones can also be used for. Would he care if I brought in my Nintendo DS or any other handheld system to bed and caught a few Pokemon before lights out? Would he care if I played Scrabble with a physical board instead of using Words With Friends? Would he care if I used the phone as a phone to call my mother and say goodnight instead of checking her Facebook and maybe posting a link to something I know will make her laugh? I don’t know, but unless you view a smartphone as something that’s wholly disconnected from people – which, uh, is kind of the literal antithesis of what a smartphone is and does – I don’t honestly see how you can claim that they’re tools for disconnection. Again, yes: some people can get addicted or overuse their phones, but that is not a millennial-exclusive problem, and fuck you very much for suggesting it magically is Because Reasons.

And do not even get me started on the total fuckery of millennials being accustomed to instant gratification because of the internet. Never mind the fact that, once again, people of any age are equally likely to become accustomed to fast internet as a thing and to update their expectations accordingly – bitch, do you know how long it used to take to download music with Kazaa using a 56k modem? Do you know how long it still takes to download entire games, or patches for games, or – for that matter – drive through fucking peak-hour traffic to get to and from work, or negotiate your toddler into not screaming because he can’t have a third juicebox? Because – oh, yeah – remember that thing where millennials stopped being teenagers quite a fucking while ago, and a fair few of us are now parents ourselves? Yeah. Apparently our interpersonal skills aren’t so completely terrible as to prevent us all from finding spouses and partners and co-parents for our tiny, screaming offspring, and if Mr Sinek would like to argue that learning patience is incompatible with being a millennial, I would like to cordially invite him to listen to a video, on loop, of my nearly four-year-old saying, “Mummy, look! A lizard! Mummy, there’s a lizard! Come look!” and see what it does for his temperament. (We live in Brisbane, Australia. There are geckos everywhere.)

But what really pisses me off about Sinek’s millennial-blaming is the idea that we’re all willing to quit our jobs because we don’t find meaning in them. Listen to me, Simon Sinek. Listen to me closely. You are, once again, confusing the very particular context of middle-class, predominantly white Americans from affluent backgrounds – which is to say, the kind of people who can afford to fucking quit in this economy – for a universal phenomenon. Ignore the fact that the global economy collapsed in 2008 without ever fully recovering: Brexit just happened in the UK, Australia is run by a coalition of racist dickheads and you’ve just elected a talking Cheeto who’s hellbent on stripping away your very meagre social safety nets as his first order of business – oh, and none of us can afford to buy houses and we’re the first generation not to earn more than our predecessors in quite a while, university costs in the States are an actual goddamn crime and most of us can’t make a living wage or even get a job in the fields we trained in.

But yeah, sure: let’s talk about the wealthy few who can afford to quit their corporate jobs because they feel unfulfilled. What do they have to feel unhappy about, really? It’s not like they’re working for corporations whose idea of HR is to hire oblivious white dudes like you to figure out why their younger employees, working longer hours for less pay in tightly monitored environments that strip their individuality and hate on unions as a sin against capitalism, in a context where the glass ceiling and wage gaps remain a goddamn issue, in a first world country that still doesn’t have guaranteed maternity leave and where quite literally nobody working minimum wage can afford to pay rent, which is fucking terrifying to consider if you’re worried about being fired, aren’t fitting in. Nah, bro – must be the fucking internet’s fault.

Not that long ago, Gen X was the one getting pilloried as a bunch of ambitionless slackers who didn’t know the meaning of hard work, but time is linear and complaining about the failures of younger generations is a habit as old as humanity, so now it’s apparently our turn. Bottom line: there’s a huge fucking difference between saying “there’s value in turning your phone off sometimes” and “millennials don’t know how to people because TECHNOLOGY”, and until Simon Sinek knows what it is, I’m frankly not interested in whatever it is he thinks he has to say.


And lo, in the leadup to Christmas, because it has been A Year and 2016 is evidently not content to go quietly into that good night, there has come the requisite twitter shitshow about diversity in YA. Specifically: user @queen_of_pages (hereinafter referred to as QOP) recently took great exception to teenage YouTube reviewer Whitney Atkinson acknowledging the fact that white and straight characters are more widely represented in SFF/YA than POC and queer characters, with bonus ad hominem attacks on Atkinson herself. As far as I can make out, the brunt of QOP’s ire hinges on the fact that Atkinson discusses characters with specific reference to various aspects of their identity – calling a straight character straight or a brown character brown, for instance – while advocating for greater diversity. To quote QOP:

[Atkinson] is separating races, sexuality and showing off her white privilege… she wants diversity so ppl need to be titled by their race, disability or sexuality. I want them to be titled PEOPLE… I’m Irish. I’ve been oppressed but I don’t let it separate me from other humans.

*sighs deeply and pinches bridge of nose*

Listen. I could rant, at length, about the grossness of a thirtysomething woman, as QOP appears to be, insulting a nineteen-year-old girl about her appearance and love life for any reason, let alone because of something she said about YA books on the internet. I could point out the internalised misogyny which invariably underlies such insults – the idea that a woman’s appearance is somehow inherently tied to her value, such that calling her ugly is a reasonable way to shut down her opinions at any given time – or go into lengthy detail about the hypocrisy of using the term “white privilege” (without, evidently, understanding what it means) while complaining in the very same breath about “separating races”. I could, potentially, say a lot of things.

But what I want to focus on here – the reason I’m bothering to say anything at all – is QOP’s conflation of mentioning race with being racist, and why that particular attitude is both so harmful and so widespread.

Like QOP, I’m a thirtysomething person, which means that she and I grew up in the same period, albeit on different continents. And what I remember from my own teenage years is a genuine, quiet anxiety about ever raising the topic of race, because of the particular way my generation was taught about multiculturalism on the one hand and integration on the other. Migrant cultures were to be celebrated, we were told, because Australian culture was informed by their meaningful contributions to the character of our great nation. At the same time, we were taught to view Australian culture as a monoculture, though it was seldom expressed as baldly as that; instead, we were taught about the positive aspects of cultural assimilation. Australia might benefit from the foods and traditions migrants brought with them, this logic went, but our adoption of those things was part of a social exchange: in return for our absorption of some aspects of migrant culture, migrants were expected to give up any identity beyond Australian and integrate into a (vaguely homogeneous) populace. Multiculturalism was a drum to beat when you wanted to praise the component parts that made Australia great, but suggesting those parts were great in their own right, or in combinations reflective of more complex identities? That was how you made a country weaker.

Denying my own complicity in racism at that age would be a lie. I was surrounded by it in much the same way that I was surrounded by car fumes: a toxic thing taken unquestioningly into the body, without any real understanding of what it meant or was doing to me internally. At my first high school, two of my first “boyfriends” (in the tweenage sense) were POC, as were various friends, but because race was never really discussed, I had no idea of the ways in which it mattered: to them, to others, to how they were judged and treated. The first time I learned anything about Chinese languages was when one of those early boyfriends explained it in class. I remember being fascinated to learn that Chinese – not Mandarin or Cantonese: the distinction wasn’t referenced – was a tonal language, but I also recall that the boy himself didn’t volunteer this information. Instead, our white teacher had singled him out as the only Chinese Australian present and asked him to explain his heritage: she assumed he spoke Chinese, and he had to explain that he didn’t, not fluently, though he still knew enough to satisfy her question. That exchange didn’t strike me as problematic at the time, but now? Now, it bothers me.

At my second high school, I was exposed to more overt racism, not least because it was a predominantly white, Anglican private school, as opposed to the more diversely populated public school I’d come from. As an adult, I’m ashamed to think how much of it I let pass simply because I didn’t know what to say, or because I didn’t realise at the time how noxious it was. Which isn’t to say I never successfully identified racism and called it out – I was widely perceived as the token argumentative lefty in my white, male, familially right-wing friend group, which meant I spent a lot of time excoriating them for their views about refugees – but it wasn’t a social dealbreaker the way it would be now. The fact that I had another friend group that was predominantly POC – and where, again, I was the only girl – meant that I also saw people discussing their own race for the first time, forcing me to examine the question more openly than before.

Even so, it never struck me as anomalous back then that whereas the POC kids discussed their own identities in terms of race and racism, the white kids had no concept of their whiteness as an identity: that race, as a concept, informed their treatment of others, but not how they saw themselves. The same boys who joked about my biracial crush being a half-caste and who dressed up as “terrorists” in tea robes and tea towels for our final year scavenger hunt never once talked about whiteness, or about being white, unless it was in specific relation to white South African students or staff members, of which the school historically had a large number. (The fact that we had no POC South African students didn’t stop anyone from viewing “white” as a necessary qualifier: vocally, the point was always to make clear that, when you were talking about South Africans, you didn’t mean anyone black.)

Which is why, for a long time, the topic of race always felt fraught to me. I had no frame of reference for white people discussing race in a way that wasn’t saturated with racism, which made it easy to conflate the one with the other. More than that, it had the paradoxical effect of making any reference to race seem irrelevant: if race was only ever brought up by racists, why mention it at all? Why not just treat everyone equally, without mentioning what made them different? I never committed fully to that perspective, but it still tempted me – because despite all the racism I’d witnessed, I had no real understanding of how its prevalence impacted individuals or groups, either internally or in terms of their wider treatment.

My outrage about the discriminatory treatment of refugees ought to have given me some perspective on it, but I wasn’t insightful enough to make the leap on my own. At the time, detention centres and boat people were the subject of constant political discourse: it was easy to make the connection between things politicians and their supporters said about refugees and how those refugees were treated, because that particular form of cause and effect wasn’t in question. The real debate, such as it was, revolved around whether it mattered: what refugees deserved, or didn’t deserve, and whether that fact should change how we treated them. But there were no political debates about the visceral upset another boyfriend, who was Indian, felt at knowing how many classmates thought it was logical for him to date the only Indian girl in our grade, “because we both have melanin in our skins”. (I’ve never forgotten him saying that, nor have I forgotten the guilt I felt at knowing he was right. The two of them ran in completely different social circles, had wildly different personalities and barely ever interacted, and yet the expectation that they’d end up dating was still there, still discussed.) I knew it was upsetting to him, and I knew vaguely that the assumption was racist in origin, but my own privilege prevented me from understanding it as a microaggression that was neither unique to him nor the only one of its kind that he had to deal with. I didn’t see the pattern.

One day, I will sit down and write an essay about how the failure of white Australians and Americans in particular to view our post-colonial whiteness as an active cultural and racial identity – unless we’re being super fucking racist about other groups – is a key factor in our penchant for cultural appropriation. In viewing particular aspects of our shared experiences, not as cultural identifiers, but as normal, unspecial things that don’t really have any meaning, we fail to connect with them personally: we’re raised to view them as something that everyone does, not as something we do, and while we still construct other identities from different sources – the regions we’re from, the various flavours of Christianity we prefer – it leaves us prone to viewing other traditions as exciting, new things with no equivalent in our own milieu, while simultaneously failing to see their deeper cultural meaning. This is why so many white people get pissed off at jokes about suburban dads who can’t barbecue or soccer moms with Can I Speak To The Manager haircuts: far too many of us have never bothered to introspect on our own sociocultural peculiarities, and so get uppity the second anyone else identifies them for us. At base, we’re just not used to considering whiteness as an identity in its own right unless we’re really saying not-black or acting like white supremacists – which means, in turn, that many of us conflate any open acknowledgement of whiteness with some truly ugly shit. In that context, whiteness is either an invisible, neutral default or a racist call to arms: there is no in between.

Which is why, returning to the matter of QOP and Whitney Atkinson, pro-diversity advocates are so often forced to contend with people who think that “separating races” and like identifiers – talking specifically about white people or disabled people or queer people, instead of just people – is equivalent to racism and bigotry. Whether they recognise it or not, such people are coming from a position that values diverse perspectives for what they bring to the melting pot – for how they help improve the dominant culture via successful assimilation – but not in their own right, as distinct and special and non-homogenised. In that context, race isn’t something you talk about unless you’re being racist: it’s rude to point out people’s differences, because those differences shouldn’t matter to their personhood. The problem with this perspective is that it doesn’t allow for the celebration of difference: instead, it codes “difference” as inequality, because deep down, the logic of cultural assimilation is predicated on the idea of Western cultural superiority. A failure or refusal to assimilate is therefore tantamount to a declaration of inequality: I’m not the same as you is understood as I don’t want to be as good as you, and if someone doesn’t want to be the best they can be (this logic goes), then either they’re stupid, or they don’t deserve the offer of equality they’ve been so generously extended in the first place.

Talking about race isn’t the same as racism. Asking for more diversity in YA and SFF isn’t the same as saying personhood matters less than the jargon of identity, but is rather an acknowledgement of the fact that, for many people, personhood is materially informed by their experience of identity, both in terms of self-perception and in how they’re treated by others at the individual, familial and collective levels. And thanks to various studies into the social impact of colour-blindness as an ideology, we already know that claiming not to see race doesn’t undo the problem of racism; it just means adherents fail to understand what racism actually is and what it looks like, even – or perhaps especially – when they’re the ones perpetuating it.

So, no, QOP: you can’t effectively advocate for diversity without talking in specifics about issues like race and sexual orientation. Want the tl;dr reason? Because saying I want more stories with PEOPLE in them isn’t actually asking for more than what we already have, and the whole point of advocating for change is that what we have isn’t enough. You might as well try to decrease the overall number of accidental deaths in the population without putting any focus on the specific ways in which people are dying. Generalities are inclusive at the macro level, but it’s specificity that gets shit done at the micro – and ultimately, that’s what we’re aiming for.