I wasn’t going to write this but then Witcher 2 made a period joke so

I spent my early teenage years on an online forum dedicated to video games. It was predominantly male, and I very rarely felt unwanted or uncomfortable. When I did, it was usually from a new poster, not the friends I had made.

I remember having several conversations about gender and video games. This was one of the areas where I felt least likely to find common ground. However, I never felt my disagreement was met with anything more than further debate. No one called me an unkind name or refused to play games with me afterwards. I’ve stayed in contact with these people — most of them, anyway — and it’s been interesting to see how we’ve all evolved on this issue as we’ve grown older.

I’m bringing it up because of a Wired.com piece by Patton Oswalt. It’s an old piece, but one I’ve only recently seen, and I think it unintentionally demonstrates an attitude that, when adopted by people who are not as nice or thoughtful as Oswalt, is problematic. I want to be clear that Oswalt is and always has been an ally re: Gamergate and similar fights. His piece struck me in large part because the strand that runs through it tracks closely with the sense of geek-culture ownership that, in my experience, spurs a lot of resentment toward allegedly non-traditional gamers.

This article is not about video games and gender — it’s barely even about video games — but it is about nerd culture writ large. I don’t want to put words in Oswalt’s mouth, but I would categorize it as an articulate evaluation of how this sub-culture has become less of a sub-culture and more of a mainstream experience. I would say it discusses the “democratization” of nerd culture, and how that can have negative impacts on those who spent early years marinating in the minutiae of a medium or lore or whatever else.

I come a bit later to this scene than Oswalt, so perhaps I feel this change less acutely. What struck me most was not the lamentation about this change in experience, but rather the sense of ownership Oswalt seemingly feels over the culture, and I find it difficult to sympathize because it is this ownership that has contributed to the toxification of nerd culture, generally, and gaming culture, specifically.

I am not imputing this to Oswalt; I think he is making a fair point, but its origin has also led to views that are less well stated and less sympathetic. That feeling of ownership, for example, has popped up when we talk about the objectification of women in video games, and how women are purportedly ruining gaming. It comes up when we talk about toxic masculinity in gaming, where the gold standard for heroism is the traditional male power fantasy, and any attempt to fiddle with that is viewed as an infringement on one’s desired simulation. Its relevance is magnified when you view video games as the bridge between would-be trash talk and “lighthearted” trolling to legitimate threats and online harassment.

The word “entitlement” comes to mind. I can’t help but wonder what it’s like to live in a world where you get to feel as though something as ethereal and limitless as a culture should exist in stasis. Where is the property right in a sub-culture?

Oswalt’s exploration is benign, but for the bad actors out there, the belief that nerd culture is theirs is an animating force behind the hate and vitriol they spew at those outside their preferred norm. It’s the reason why they will grief or troll. I honestly wonder if Oswalt’s daughter, whom he cites towards the end of his piece, would be better off if that ownership remained. More likely, I suspect she would be shut out by her peers in this culture.

I’m not sure I can say that, because obviously my experience predates women being viewed as a threat to nerd culture; it wasn’t until boys and men in this sub-culture felt challenged that they really began lashing out. But I also wonder how many women wanted to be part of that culture and felt shut out and turned off — the opportunity cost of narrow cultural participation and ownership.

I personally sign onto the thesis that the newfound mass appeal of previously minority pastimes has led to a ton of sub-par creations. It’s becoming clear in comics, video games, and other nerdy hobbies that the possibility of larger-scale endeavors like movies is a driving force in the creative process. On the flip side, broad swaths of people clamoring for superheroes and playable characters that look like them is a product of this democratization — Black Panther is a testament to this.

Ultimately, it seems as though the sense of sub-cultural ownership results in, at best, interesting internecine disputes. At worst, as we’re witnessing now, it results in a vituperative rebuff by those who feel as though they are losing something. In the middle is a sense of stagnation. Yes, the trade-off to scaling nerdiness is that we get a lot more shitty works out there and nowhere near proportional debate over small details that we might see at a smaller level. But the pay-off is that we get a lot more creatives feeling emboldened to explore with these newer markets who maybe aren’t the same guys speaking code to each other at a bar table, you know?

Level up – Week of March 27th 2018

Stuff I’ve been reading:

Regulations won’t fix “move fast and break things.”


Another Facebook inspired topic, the subject of which is the value of regulation. As usual, I’m writing this on the road so I’ll come back to correct typos.

First we need to unpack the purpose of regulation, its value, and the intended trade-offs. To understand this requires a bit of background on litigation — mostly, but not exclusively, negligence suits. I’ll focus on this lens for now.

Back in the day, if a company hurt a person in the course of doing business, that person needed to sue the company to recoup their losses. The harm – anything from physical and emotional injury to financial loss – was compensated in the form of damages (money).

This created a situation where you commonly had:

  • An individual litigating against a company;
  • A lawyer or small firm representing that individual versus a large group of retained corporate counsel;
  • An individual’s purse versus a corporate treasury;
  • A burden of proof on the individual to create a prima facie (face value) case for a cause of action (legally recognized harm for which the court can provide remedy);
  • Reactive, compensatory damages as opposed to proactive, harm-reduction policies.

Regulation’s value is primarily creating an even(ish) landscape for redress and pitting similarly situated parties against each other.

For example, instead of making an individual fund a suit with the hopes of damages (minus legal fees), the government is in the same weight class as larger conglomerates. It is in a better place to enforce a duty of care.

Regulations create a clearer line for a duty of care and, coupled with a better ability to enforce, make it more likely companies will meet that duty. If you’re building the cost of harm into your business model, it’s easier to dismiss a single person who might not know their rights and has other parts of their life to tend to than a government body whose sole purpose is to enforce that duty.

Because these higher enforcement probabilities make it riskier to be non-compliant, companies are less likely to wait until someone gets hurt to act. To go to court, you need to have experienced a harm. The old model allows behavior to continue until someone has already experienced the prohibited harm AND the person hurt decides to sue AND they can prove they should be heard in court. In contrast, the regulatory model generally makes it more cost-effective for companies to assess the risk of harm to their users before a product or service goes to market.

(If this sounds familiar, you’ve probably seen Fight Club’s breakdown of Judge Learned Hand’s famous negligence formula.)
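For context, Judge Learned Hand’s formula, as it is usually rendered from United States v. Carroll Towing (1947), holds a defendant negligent when the burden of taking precautions is less than the expected loss:

```latex
B < P \cdot L
```

where \(B\) is the burden (cost) of precautions, \(P\) is the probability of harm, and \(L\) is the gravity of the resulting loss. Regulation works on the right-hand side of that comparison: by making enforcement more likely and penalties larger, it raises the expected cost of skipping precautions.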

Regulation is less valuable when viewed as something that magically gets companies to behave a certain way. Companies can take risks. Corporate cultures that value high risk-taking might nevertheless decide to play it fast and loose with consumer outcomes for a number of reasons: moving so fast that they’re unaware a regulation applies, a belief that they can get away with that behavior, and so on.

Occupying a grey area is what we call “reasonable risk,” and typically entails behavior that may or may not violate a regulation. Companies with a high risk appetite might look at that and decide it’s worth it to move forward because they can make an argument they were compliant. Others might be more risk-averse and play it safe even if it costs them more to take a product or service to market.

On the flip side, not all risk is regulated, but it should nevertheless be accounted for. An example of this is the hoverboard fiasco a few years ago. No regulations were on the books, but once companies realized the products could be dangerous, it behooved them to recall and fix them. Morally and economically it is bad to have visibly and egregiously harmful products on the market, and I think most people would agree the argument “but it’s not against the law” would have little sway in light of material knowledge that a product is so harmful.

Pivoting back to Facebook, I definitely think regulations have a place here in that they force Facebook to answer to government bodies whose sole task is to monitor a specific kind of consumer risk. This is especially pertinent when it comes to the sometimes technical or absurd terms of service digital platforms create.

Additionally, we don’t have to wait for a breach/misuse to give Facebook et al reason to err on the side of caution. Regulations create knowledge of a would-be harm and companies have less room to argue that a reasonable person in their position would have done the same. The regulation tells them what, at minimum, is reasonable.

Here is where I’m skeptical: Facebook was already subject to regulation. My last post briefly discusses the FTC consent agreement to which Facebook was party. Either through ignorance or high risk appetite, Facebook arguably failed to comply with it by leaning primarily on a trust-and-verify document rather than robust biennial audits.

This “move fast and break things” culture is at the root of most of the Facebook-specific malfeasance we’ve seen revealed as of late. The agility required for start-ups can be carved out in regulatory exceptions for small firms, but Facebook is a large and sophisticated global conglomerate that has a high-risk mentality embedded in it. If you’re small and scaling, your impact is minimal and this is probably fine; but Facebook is huge, and its risks hurt broad swaths of people in sensitive and personal ways.

None of this is to say that regulation isn’t worthwhile. I mentioned the concept of information fiduciaries in my last post and that is a good place to start. What I am saying is that regulation shuffles considerations and burdens around in a way that will only matter if the company’s risk calculus internalizes them as too big to ignore. Europe may have found a way to do this with their high penalties for non-compliance, hence the entire digital world from Slack to Google changing their terms of service. For regulation to matter, it will take a concerted effort to reform cultures at reckless tech giants in conjunction with the aforementioned enforcement methods.
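To make that risk-calculus point concrete, here is a toy sketch (all numbers invented for illustration, not drawn from any real case) of how a purely cost-minimizing firm might weigh compliance costs against an expected penalty:

```python
def expected_penalty(p_enforcement: float, fine: float) -> float:
    """Expected cost of non-compliance: chance of getting caught times the fine."""
    return p_enforcement * fine

def rational_to_comply(compliance_cost: float, p_enforcement: float, fine: float) -> bool:
    """A purely risk-calculating firm complies only when compliance is the cheaper option."""
    return compliance_cost < expected_penalty(p_enforcement, fine)

# Weak enforcement and a modest fine: ignoring the rule "pays."
print(rational_to_comply(10_000_000, 0.05, 50_000_000))     # False

# A GDPR-style regime: higher enforcement odds and a fine scaled to global revenue.
print(rational_to_comply(10_000_000, 0.30, 1_000_000_000))  # True
```

The point of the sketch is the flip between the two calls: the same compliance cost goes from irrational to rational once penalties and enforcement odds are large enough to internalize.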

The problem is that Zuckerberg’s mismanagement of and response to the issue suggest he welcomes regulation as a stand-in for much-needed tough decisions — calls only Facebook can make for itself, whether the product is digital privacy or toys. Zuckerberg would be the CEO putting consumers at risk, because moving fast and breaking things has an implied risk calculation in it: take the risk, fix it after. If Facebook made toys, they would probably have choking hazards and unsafe lead levels even after the CPSC caught them, much like the FTC did in our present “breach” case. Zuckerberg would be doing interviews lamenting the situation but wondering why he of all people should be in charge of these decisions. This is, at its core, about culture in Silicon Valley writ large, and leadership at the CEO level, specifically.

Zuckerberg can we not

Inspired by this tweet storm – my tweet storm, sorry not sorry.

A lot of hot takes out there on Facebook. Here’s mine:

Breach versus Misuse

The technical distinction between breach and misuse is academic so far as the majority of Facebook users are concerned. I know for developers and engineers it’s important, so I won’t harp on this too much, but in the court of public opinion it matters little. For user trust it matters little. In a court of law, I don’t know how much it would matter because I don’t know the cause of action; I assume it would be probative but not necessarily determinative. For regulatory bodies, that also depends — which regulatory body? In the US? The EU? Regardless, you start out with your strongest talking point, and this was the weakest.


Facebook as victim

Facebook is framing themselves as the victim. As noted in the tweet, this is absurd. Sure, Facebook was duped, but they were duped because they took a business risk and it blew up in their face. Trust and verify – that is, sending a document to be signed confirming that another party did something – is a low-cost way of enforcing your terms of doing business. This can work well enough if (a) the other party is being honest, or (b) you don’t get caught when they’re not honest. Facebook had the alternative of auditing their vendor, as is their right per their own terms, and their obligation per a 2011 FTC agreement.


The FTC told Facebook to police this in 2011

Yes, you read the above correctly. The reason why Facebook is doubly on the hook is that they had material knowledge of how this information was misused as far back as 2011, and the FTC consent agreement they struck in response to that breach/misuse involved biennial audits.


Facebook’s terms only have as much force as Facebook wants

Facebook says this was “legally binding.” Well, I suppose we’ll see if that’s true. They’d have to enforce that agreement; that is, a court or arbitrator would need to acknowledge it as binding. My own experience with such agreements is that they function more or less like those “I promise not to cheat” attestations at the beginning of your SATs. You sign it, but (a) the College Board, not the government, enforces the terms, and (b) you can still get in trouble for the underlying offense. Through the lens of Facebook, they would be the party enforcing their own terms, and they can still be on the hook for malfeasance committed by their third-party developers.

More likely, there is an indemnity clause nestled in there somewhere, and Facebook would turn to Cambridge Analytica to recoup some losses, but it’s probably not “legally binding” in the sense that CA broke anything more than The Law of Facebook. Again, it was on Facebook to check up on their partners, much like manufacturers need to check on their supply chain.


Facebook’s business model is not really the point. Their recklessness is.

The core issue here is not that Facebook collects our data or that Cambridge Analytica purports to brainwash people into voting a certain way. Data collection is a feature, not a bug, of Facebook’s business model. The issue is that they had so little control over secondary and tertiary uses of that data, i.e., what happened after Facebook provided that information to other parties.

Jack Balkin has argued that Facebook and other digital information brokers should be seen as information fiduciaries, and I agree. I think this episode provides a good argument for that view. Being a fiduciary comes with a heightened standard of care and exercise of due diligence. Facebook, for example, has a fiduciary duty to its shareholders. I’m fine with Facebook having a business model that revolves around ad revenue and data collection; I’m most definitely not fine with them being reckless in the maintenance and control of that information’s distribution.


Zuck needs to draw a line.

Zuckerberg needs to grow up. In his interviews – and I’m paraphrasing but here’s the Recode transcript – Zuckerberg wants to know who chose him to make these decisions. Here’s the answer: Zuckerberg chose himself. Being CEO of a company means you’re in charge of everything that company does. I don’t think that Facebook’s impacts are so reasonably foreseeable that we can reach back in time and hold the company accountable for every failing that has popped up in the past year. Very few people envisioned prominent, US-based social media platforms as the hinge for disinformation campaigns by hostile foreign parties.

But Facebook definitely could have foreseen that people lie, and sometimes they lie on the internet, and sometimes they are greedy, and that a concoction of greed and lying and people’s personal data equals a high risk of possible misuse and abuse. It means that sophisticated companies with armies of legal compliance attorneys should do their due diligence to ensure, at the least, that they are being good stewards of that information (i.e., that they have some record of the information collected, to whom it was sold, for what purpose(s) it was being used, and that they follow up on deletion requests with regular audits of their choosing).


This is some heavy shit

I have some thoughts on digital privacy, generally, that I’ll likely write in another post since it is its own beast. Obviously I run and use GPS tracking apps. I share this information with friends and family. It puts me at risk in some ways, perhaps more so than Facebook, depending on the nature of the risk we’re talking about. But what really ground my gears this time around was how Facebook handled the situation.

There is a demonstrable pattern of data mismanagement, at best. During each iteration, there is an apology (good) and an attempt to feign innocence or victimhood. As I said above, there are some major issues percolating in social media and national security that I think are fundamentally unfair to plop at the feet of Facebook, Twitter, etc. — at least wholly at their feet. But this particular problem was entirely within Facebook’s house, and their first reaction, which took five days, was to explain how really they were the victims.

Mark Zuckerberg in particular seems woefully ill-suited for the kind of lateral public-policy thinking that is and will continue to be necessary to manage a data-collecting social media company in an age where this is a primary mode of communication. For some countries, it is the Internet (note: I disagree with the author’s main argument, but she’s correct in pointing out the reliance on Facebook by many other groups). I really don’t think the world can handle a person unwilling to understand that inaction is not neutrality. Facebook has, at this point, been used not only to meddle in US elections, but also to sow discord in developing countries, including genocide against the Rohingya. It’s a big deal, and sometimes neutrality is just a facade for indifference and/or enabling very bad actors. He and his company need to think very hard about what his role at Facebook should be. Founders don’t necessarily need to be driving every facet of company management, especially if their imprudence has an impact on the scale of losing $50 billion in market cap, disorienting American policy writ large, and wiping out whole swaths of people.


Ruth Bader Ginsburg is the hero we need.

She is bringing back scrunchies and has a working ranked list:

“I have been wearing scrunchies for years,” she said during a recent interview with the Wall Street Journal all about her go-to hair accessory.

Her scrunchie collection comes from far and wide. The world-traveling justice has picked up accessories from her stops in countries around the world. During that same interview, Ginsburg ranked her scrunchies and where they come from.
“My best scrunchies come from Zurich. Next best, London, and third best, Rome,” she told the Wall Street Journal.

The fashion-forward justice acknowledges that her scrunchie assortment may be vast, but it doesn’t stack up to the number of other accessories in her collection.
“My scrunchie collection is not as large as my collar and glove collections, but scrunchies are catching up,” she said.

The only way this gets better is if her briefs come in Lisa Frank folders.

Nancy Drew and the case of the missing blog posts.

A while back I was encouraged to post on Medium. Personally, I’m not a fan of the platform so I’m sticking to WordPress. I don’t want to lose my posts there. For posterity, I’m linking them here.

Oldie but a goodie: Mad Max feminist manifesto

Political Gabfest: Exploring the Anatomy of a Lie.

/r/NeutralPolitics Response: Trump’s Executive Order

Double X: “Milo v. Lena” Edition

The good, the bad and the CMV ugly.

This is something I plan to post as a moderator for /r/ChangeMyView. However, I take a personal interest in online curation and content moderation. Therefore I’m posting it here as well.


There’s a topic I’ve been thinking about since the lead-up to the 2016 election¹: does CMV create a safe space for abhorrent views? Do we normalize and reinvigorate conversations already rejected by society? What is our responsibility, as a platform, with respect to each of these issues?

A competing response from CMV users is that we create a platform for some really bad views — the kinds of views traditionally considered shameful — even through the lens of diversified perspectives. Some examples are white supremacy, “biotruths” regarding race and gender, and categorizing homosexuality as a mental disorder. I would personally argue these views are rightly considered shameful. My personal views, however, don’t answer the question of how best to moderate.

These are conversations that have been explored copiously, from nearly every angle imaginable and, in some cases, ratified by law only to be overturned by democratic processes or formal adjudication. Many people, myself included, wonder whether there is any value in relitigating these issues. In fact, there is a heightened concern that creating a space for these conversations allows them back into civil discourse, legitimizing and normalizing them. This, in turn, potentially gives them space to breathe and ultimately flourish.

Caning of Sumner
Editorial lithograph depicting the caning of Charles Sumner in the US Senate, 1856. Image taken from Senate.gov.

I will flatly admit that I share this concern, especially in 2018. Political and cultural hostility is a tale as old as time (see also Adams and Franklin phhhbbtt). Internet platforms have come to the forefront, I believe, for two main reasons: first, the speed at which we are bombarded with information has dramatically increased; second, the barrier to publishing information has significantly diminished, creating a wider breadth of comments and claims. The latter is extremely hard to moderate logistically, let alone discretionally. Taken together, this creates a unique manifestation of an otherwise old problem.


We have preliminary data showing that some previously shunned views are now emboldened. While polarization is not direct evidence for specific views, it is a useful proxy from which we can make fair extrapolations. Other online forums for political debate are deeply partisan. White people and black people have fundamental disagreements on police brutality. Pew Research indicates Muslims fear intimidation – defined as a reasonable expectation of bodily harm – in numbers surpassing the immediate post-9/11 era. Increased hostilities have not been limited to the United States.

None of these flash-points points to egregious views per se, but I’m highlighting them to indicate that, to the extent people hold views, those views are trending more extreme, and that these extreme views in turn paint a portrait for others, the cumulative effect of which is to make them fearful or, at the least, deeply anxious about their relative place in society.

Pivoting back to reddit, various other subreddits have been deconstructed with an eye toward low-quality and/or highly polarized online discussion. Mainstream outlets have explored the intersection between extremist, often “alt-right” political views and reddit:

(Unfortunately but topically, the FiveThirtyEight article warns of slurs, as will I.)

And that top five isn’t exactly pretty, though it does support the theory that at least a subset of Trump’s supporters are motivated by racism. The presence of r/fatpeoplehate at the top of the list echoes some of President Trump’s own behavior, including his referring to 1996 Miss Universe winner Alicia Machado as “Miss Piggy” and insulting Rosie O’Donnell about her weight. The second-closest result, r/TheRedPill, describes itself in its sidebar as a place for “discussion of sexual strategy in a culture increasingly lacking a positive identity for men”; named after a scene from the “The Matrix,” the group believes that women run the world and men are an oppressed class, and from that belief springs an ideology that has been described as “the heart of modern misogyny.” r/Mr_Trump self-describes as “the #1 Alt-Right, most uncucked subreddit” — referring to a populist white-nationalist movement and an increasingly all-purpose insult meant to denigrate others’ masculinity — and the appallingly named r/coontown is the now-banned but previously central home to unrepentant racism on Reddit. Finally, coming in at No. 5 is r/4chan, a subreddit dedicated to posting screenshots of threads found on 4chan, where many users supported Trump for president and where the /pol/ board in particular has a strongly racist bent.

Emphasis mine.

To CMV’s credit, our media deep dives have been positive², but that’s not self-executing. It takes a lot of introspection about our role, rules, and moderation framework to create the necessary forum – and I do believe it is necessary to have these conversations – where we can talk about all of these tough issues without glorifying the underlying views.


The risk of validating views absolutely exists. I do believe that, as a moderation team, we should be mindful of how our curation impacts the perception of what reddit audiences believe to be fair-minded, critical conversations. I don’t purport to represent the whole of the moderation team in this respect, but I do think it’s safe to say we share an acute awareness of how our impact here grows as our subreddit does.

The place I’ve come to is this: people very rarely want benevolent views changed. CMV could not exist as a place for critical discourse if we tried to create a superficial impression that people only hold pedestrian or typical views. On the contrary, we consider it more likely that someone is trolling or looking to push a view (“soapbox”) if they come in with a view most people share. It raises the question of why one wants that view changed. I get why a self-identified white supremacist wonders why s/he might be wrong; most people, at least publicly, advocate for equality. That is a clear catalyst for questioning one’s position.

But why would someone want to change their view on, to use a real example, the equality of races? Is it possible? Sure. Is it probable? I don’t think so, and while we wouldn’t remove this thread automatically, it would likely deserve higher scrutiny. It is suspicious in the sense that there is a greater probability that this person is posting the opposite of their actual view in the original post so they can bolster and increase the visibility of that opposite view in the comments.

With this heightened likelihood of publicly sharing a “shameful” view, I would rather create a place of deliberate contemplation and critical thinking where this view is exposed to contrary evidence. Our value system ultimately drills down to engendering an ethos of thoughtful critique. We don’t always succeed at this but the moderators do try to be self-critical, regularly step back from our work, and revise our method based on shared learned experiences navigating these waters.

It certainly creates a risk of normalizing these views and creating a “safe” space, but only if we insulate these users from the civic consequences of their behavior. That is, it is only “safe” if we allow them to share without engagement. Ours is not a safe space for like-minded people in the sense that we advocate the same substantive views. It is a safe space for people to talk candidly and respectfully, to forcefully push back on ideas so long as one recognizes this is a two-way street of presuming good faith until proven otherwise. It is perhaps our most valued first principle, and it is the compact made between users and moderators each time we participate in CMV.

My view hasn’t changed 180 degrees, but that also captures an element we embed in our delta system: having a good conversation doesn’t require a “gotcha” moment where you lose and your entire worldview is toppled; it simply requires gaining an additional insight that impacts your view. Where I’ve landed is that I am cautiously guarded when I see these threads advocating a “bad” view, but I nevertheless operate by the Principle of Charity: I assume good faith and rationality until a poster gives me a reason not to, and leave it to our users to thoughtfully and creatively critique those views. My responsibility is to curate and cultivate an environment that reinforces an understanding of nuance, but also a willingness to speak up when something is clearly abhorrent and explain why someone should change their mind.



¹ Posting about these thoughts was spurred, in part, by this podcast conversation between Katie Couric and Recode’s Kara Swisher. It echoes a lamentation I hear frequently about Internet conversations, which is that they lack nuance and fail to appreciate the multi-faceted qualities of just about every human being.

² “Our Best Hope for Civil Discourse Online Is…Reddit” by Virginia Heffernan, Wired.com; “On the Other Hand” by Tim Adams, The Guardian; “Our Minds Have Been Hijacked by Our Phones. Tristan Harris Wants to Rescue Them” by Nicholas Thompson, Wired.com.