
Post-Run Thoughts No. 1.

I need a place to toss all the things I think about while running. Oh, a blog. Well then.

I don’t have any novel or sophisticated thoughts regarding the Supreme Court nomination. Suffice it to say I agree that Merrick Garland’s seat was stolen, Justice Kennedy hung LGBTQ and women’s reproductive rights out to dry, and we will see the most conservative Court in recent history after Trump’s nominee is inevitably confirmed by the Senate.

My first thought is in response to a lot of podcasts I’ve listened to opining on whether the Court is truly non-partisan. I can say that when I was in law school, non-partisanship was the aspirational goal portrayed to me, but there was a realistic acknowledgment that checking the political branches necessarily requires amassing some political capital to preserve the appearance of non-partisanship. I suppose the distinction here is the notion that the Court will take politics into account without taking a preferential side.

Of course this doesn’t answer the question about the individual political inclinations of each justice. What most people mean when they talk about the Court being political is this: is a given Justice working backwards from a preferred political outcome? Are they using the law to bridge the gap between their desired policy and legality?

I do believe there are Justices who do this. The fairest interpretation I have, however, and the one I believe to be mostly true of judges generally, is that judicial philosophy is inevitably intertwined with political philosophy, in large part due to substantive due process.

The “penumbra” of rights associated with the right to privacy – including reproductive and sexual autonomy – is not explicitly called out in the Constitution. Rather, these rights are inferred from the 14th Amendment, under a doctrine called substantive due process. It’s hardly coincidental, I think, that narrower interpretations of the Constitution would find most encumbrances on these rights a-okay (and vice versa — broader interpretations supporting substantive due process would find the same burdens unconstitutional).

The question, therefore, is whether you believe the judicial philosophy is the chicken or the egg. My cynical view these days is that many judges have settled on a policy and adopt the judicial philosophy most likely to justify an opinion supporting that policy. But I could see a more optimistic world where judges truly do call balls and strikes, deciding that a given interpretation is best and letting policy flow (via the legislature) from there.

Obviously these mullings are separate and apart from what is otherwise clearly political: the Heritage Foundation and Federalist Society have groomed and vetted these candidates in such a way that, whether by the cynical or the optimistic interpretation, they will rule with the conservative side of the Court that is fundamentally opposed to substantive due process and all its umbrella rights and precedent. This includes Roe v. Wade / Planned Parenthood v. Casey. Even if they don’t outright overrule, they will assuredly find burden after burden placed on women to be constitutional, until there are separate reproductive rights regimes for upper- and middle-class women versus their lower-income and working-class counterparts.

Again, nothing new, just waiting between meetings.

Sometimes I share things on Strava.

I deleted my Facebook page because I have qualms about their handling of personal data. Of course, lots of things ask me for personal information, and I share a lot of information too, which raises the question of why it’s okay in one scenario and not the other.

I suppose the first part is a distinction: I conceded a while back that applications would want my personal data. This made me uneasy at first but eventually I relented, and I do like a lot of the services and customer experiences I get as a result.

However, I also take seriously how that data is managed. If I give Facebook permission to do something, and if I give an app permission to use that data, that’s it; these are the only groups I want handling that information unless otherwise specified in the terms (which are cumbersome to read, even for a lawyer). Facebook’s breach of trust was not the mystifying terms of service to which I agreed, but rather that they simply did not have a handle on the information at all, or on with whom it was shared.

Think of it like giving your social security number to a bank: maybe you don’t like sharing that information but it’s part-and-parcel with a lot of bank services, so you do it, but then you find out they’ve been loaning it out to a bunch of people — and they don’t know which people and it was accidentally on purpose and they can’t trace the audit trail back. Bad.

The next part is the kind of information I am used to sharing. I run, and I use Strava. I use Strava because my friends are on Strava. It tracks my routes and my times. I don’t love some of its metrics but it’s fine. Garmin and Smashrun do the same, but it’s the Strava link I share. I did wonder: why am I so willing to share my physical location and typical running routes?

After thinking it over – on my runs, go figure – I realized it had a lot to do with the fact that I am weighing the risk of being out in public every day. Some of this we all do. Men, women, old, young — we know that leaving our house poses a set of risks and we index on them. Women in particular, however, have a more acute awareness of what it means to be out in public, especially alone, particularly at night. So I find myself accounting for this kind of risk all the time and have been since I was old enough for my school to give me the “Here is how you hold your keys like Wolverine” talk.

Granted, sharing might increase that risk, and maybe I should rethink that much. If I find it somewhat unsafe to be on my own in a situation, it might be more unsafe if I am taking the same regular route, and more unsafe still if I post something on Twitter that a person doesn’t like. It could create an opportunity to harass or harm, and I would argue that this is a heightened risk online for women over men, as the former tend to be harassed as women and not simply because of the things they say, but I digress.

The kind of risk, though? It might seem counterintuitive, but I’m more or less used to it. A vulnerability like that runs through the entire part of my life that begins around adolescence. My calculus might change, but its existence doesn’t faze me. The landscape of how we deal with these issues is also far more established. All kinds of victim-blaming can occur around physical crime or privacy violations, but we’ve also been dealing with them for centuries and have some sense of what’s actionable and fair versus what is not.

Contrast that with digital privacy. The information I’m handing over is new. The way in which it is used against me is likewise novel, sometimes simply unknown. Legislators and courts are at a loss on how to counter abuses and correct course. There is minimal to no accountability for the Googles and Facebooks of the world save for market accountability (one reason why I deleted Facebook), even if I am only a drop in the bucket. I have a much greater sense and familiarity with what I’m putting on the line when I post my runs on Strava than when I allow an app to access my photos on Facebook; I know where to go if something bad happens to me; and I have a basic sense of potential avenues for redress.

Finally, there is something particularly galling about Facebook’s apology tour – or series of apology tours – that makes it that much harder to trust the company. Strava has had its own privacy kerfuffles, but it has also never positioned itself as caring all that much beyond basic control mechanisms. It knows you can triangulate a person’s position from its GPS data. Whenever there’s a substantive grievance, it takes some remediation step, which is arguably sub-par, but never presented as an angelic “oopsie.”

This doesn’t excuse Strava, in my view; I know they are also sharing information. Nevertheless, I do feel like I have a much more realistic expectation of how their app is properly used and what level of work I need to engage in to keep myself and my information safe. The same cannot be said of Facebook, whose only reliable quality is that its privacy protections and customer care are a constant moving target.

I wasn’t going to write this but then Witcher 2 made a period joke so

I spent my early teenage years on an online forum dedicated to video games. It was predominantly male, and I very rarely felt unwanted or uncomfortable. When I did, it was usually from a new poster, not the friends I had made.

I remember having several conversations about gender and video games. This was one of the areas where I felt least likely to find common ground. However, I never felt my disagreement was met with anything more than further debate. No one called me an unkind name or failed to play games with me afterwards. I’ve stayed in contact with these people — most of them, anyway — and it’s been interesting to see how we’ve all evolved on this issue as we’ve grown older.

I’m bringing it up in light of a Wired.com piece by Patton Oswalt. It’s an old piece, but one I’ve only recently seen, and I think it unintentionally demonstrates an attitude that, when adopted by people who are not as nice or thoughtful as Oswalt, is problematic. I want to be clear that Oswalt is and always has been an ally re: Gamergate and other hurdles. His piece struck me in large part because the strand that runs through it tracks closely with the sense of geek-culture ownership that, in my experience, spurs a lot of resentment toward allegedly non-traditional gamers.

This article is not about video games and gender — it’s barely even about video games — but it is about nerd culture writ large. I don’t want to put words in Oswalt’s mouth, but I would categorize it as an articulate evaluation of how this sub-culture has become less of a sub-culture and more of a mainstream experience. I would say it discusses the “democratization” of nerd culture, and how that can have negative impacts on those who spent early years marinating in the minutiae of a medium or lore or whatever else.

I come a bit later to this scene than Oswalt, so perhaps I feel this change less acutely. What struck me most was not the lamentation about this change in experience, but rather the sense of ownership Oswalt seemingly feels over the culture, and I find it difficult to sympathize because it is this ownership that has contributed to the toxification of nerd culture, generally, and gaming culture, specifically.

I am not imputing this to Oswalt; I think he is making a fair point, but the same impulse has also led to views that are far less well stated and far less sympathetic. That feeling of ownership, for example, has popped up when we talk about the objectification of women in video games, and how women are purportedly ruining gaming. It comes up when we talk about toxic masculinity in gaming, where the gold standard for heroism is the traditional male power fantasy and any attempt to fiddle with that is viewed as an infringement on one’s desired simulation. Its relevance is magnified when you view video games as the bridge from would-be trash talk and “lighthearted” trolling to legitimate threats and online harassment.

The word “entitlement” comes to mind. I can’t help but wonder what it’s like to live in a world where you get to feel as though something as ethereal and limitless as a culture should exist in stasis. Where is the property right in a sub-culture?

Oswalt’s exploration is benign, but for the bad actors out there, the belief that nerd culture is theirs is an animating force behind the hate and vitriol they spew at those outside their preferred norm. It’s the reason why they will grief or troll. I honestly wonder if Oswalt’s daughter, whom he cites towards the end of his piece, would be better off if that ownership remained. More likely, I suspect she would be shut out by her peers in this culture.

I’m not sure I can say that, because obviously my experience prior to women being viewed as a threat to nerd culture was a good one; it wasn’t until boys and men in this sub-culture felt challenged that they really leaned into lashing out. But I also wonder how many women wanted to be part of that culture and felt shut out and turned off — the opportunity cost of narrow cultural participation and ownership.

I personally sign onto the thesis that the newfound mass appeal of previously niche pastimes has led to a ton of sub-par creations. It’s becoming clear in comics, video games, and other nerdy hobbies that the possibility of larger-scale endeavors like movies is a driving force in the creative process. On the flip side, broad swaths of people clamoring for superheroes and playable characters that look like them is a product of this democratization — Black Panther is a testament to this.

Ultimately, it seems as though the sense of sub-cultural ownership results in, at best, interesting internecine disputes. At worst, as we’re witnessing now, it results in a vituperative rebuff by those who feel as though they are losing something. In the middle is a sense of stagnation. Yes, the trade-off to scaling nerdiness is that we get a lot more shitty works out there and nowhere near the proportional debate over small details that we might see at a smaller scale. But the pay-off is that a lot more creatives feel emboldened to explore with these newer markets, which aren’t just the same guys speaking code to each other at a bar table, you know?

Level up – Week of March 27th 2018

Stuff I’ve been reading:

Regulations won’t fix “move fast and break things.”


Another Facebook-inspired topic, the subject of which is the value of regulation. As usual, I’m writing this on the road, so I’ll come back to correct typos.

First we need to unpack the purpose of regulation, its value, and the intended trade-offs. To understand this requires a bit of background on litigation — mostly, but not exclusively, negligence suits. I’ll focus on this lens for now.

Back in the day, if a company hurt a person in the course of doing business, that person needed to sue the company to recoup their losses. The harm – defined as anything ranging from physical and emotional harm to financial harm – was compensated in the form of damages (money).

This created a situation where you commonly had:

  • An individual litigating against a company;
  • A lawyer or small firm representing that individual versus a large group of retained corporate counsel;
  • An individual’s purse versus a corporate treasury;
  • A burden of proof on the individual to create a prima facie (face value) case for a cause of action (legally recognized harm for which the court can provide remedy);
  • Reactive, compensatory damages as opposed to proactive, harm-reduction policies.

Regulation’s value is primarily creating an even(ish) landscape for redress and pitting similarly situated parties against each other.

For example, instead of making an individual fund a suit with the hopes of damages (minus legal fees), the government is in the same weight class as larger conglomerates. It is in a better place to enforce a duty of care.

Regulations create a clearer line for a duty of care and, coupled with a better ability to enforce, make it more likely companies will meet that duty. If you’re building the cost of harm into your business model, it’s easier to dismiss the likelihood of a suit from a single person who might not know their rights and has other parts of their life to tend to than from a government body whose sole purpose is to enforce that duty.

Because these higher enforcement probabilities make it riskier to be non-compliant, companies are less likely to wait until someone gets hurt to act. To go to court, you need to have experienced a harm. The old model allows behavior to continue until someone has already experienced the prohibited harm AND the person hurt decides to sue AND they can prove they should be heard in court. In contrast, the regulatory model generally makes it more cost-effective for companies to assess the risk of harm to their users before a product or service goes to market.

(If this sounds familiar, you’ve probably seen Fight Club’s breakdown of Judge Learned Hand’s famous negligence formula.)
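
For anyone who hasn’t seen it, the Hand formula (from United States v. Carroll Towing, 1947) compares the burden of taking a precaution against the expected loss it would prevent:

    B < P × L

where B is the burden (cost) of the precaution, P is the probability of the harm, and L is the gravity of the resulting loss. If B is less than P × L, failing to take the precaution is negligent. Roughly speaking, regulation and its enforcement raise the expected cost of skipping precautions, which tips that calculus toward prevention.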

Where regulation is less valuable is in the expectation that it will magically get companies to behave a certain way. Companies can take risks. Corporate cultures that value high risk-taking might nevertheless decide to play it fast and loose with consumer outcomes for a number of reasons: moving so fast that they’re unaware a regulation applies, believing that they can get away with the behavior, and so on.

Occupying a grey area is what we call “reasonable risk,” which typically entails behavior that may or may not violate a regulation. Companies with a high risk appetite might look at that and decide it’s worth it to move forward because they can make an argument they were compliant. Others might be more risk-averse and play it safe even if it costs them more to take a product or service to market.

On the flip side, not all risk is regulated, but it should nevertheless be accounted for. An example of this is the hoverboard fiasco a few years ago. No regulations were on the books, but once companies realized the devices could be dangerous, it behooved them to recall and fix their products. Morally and economically it is bad to have visibly and egregiously harmful products on the market, and I think most people would agree the argument “but it’s not against the law” would have little sway in light of material knowledge that a product is so harmful.

Pivoting back to Facebook, I definitely think regulations have a place here in that they force Facebook to answer to government bodies whose sole task is to monitor a specific kind of consumer risk. This is especially pertinent when it comes to the sometimes technical or absurd terms of service digital platforms create.

Additionally, we don’t have to wait for a breach/misuse to give Facebook et al reason to err on the side of caution. Regulations create knowledge of a would-be harm and companies have less room to argue that a reasonable person in their position would have done the same. The regulation tells them what, at minimum, is reasonable.

Here is where I’m skeptical: Facebook was already subject to regulation. My last post briefly discusses the FTC consent agreement to which Facebook was a party. Either through ignorance or a high risk appetite, Facebook arguably failed to comply with it by leaning primarily on a trust-and-verify document rather than robust biennial audits.

This “move fast and break things” culture is at the root of most of the Facebook-specific malfeasance that has come to light of late. The agility required for start-ups can be carved out in regulatory exceptions for small firms, but Facebook is a large and sophisticated global conglomerate with a high-risk mentality embedded in it. If you’re small and scaling, your impact is minimal and this is probably fine, but Facebook is huge and its risks hurt broad swaths of people in sensitive and personal ways.

None of this is to say that regulation isn’t worthwhile. I mentioned the concept of information fiduciaries in my last post and that is a good place to start. What I am saying is that regulation shuffles considerations and burdens around in a way that will only matter if the company’s risk calculus internalizes them as too big to ignore. Europe may have found a way to do this with their high penalties for non-compliance, hence the entire digital world from Slack to Google changing their terms of service. For regulation to matter, it will take a concerted effort to reform cultures at reckless tech giants in conjunction with the aforementioned enforcement methods.

The problem is that Zuckerberg’s mismanagement of and response to the issue suggest that he welcomes regulation as a stand-in for much-needed tough decisions — calls only Facebook can make for itself. Whether the product is digital privacy or toys, Zuckerberg would be the CEO putting consumers at risk, because moving fast and breaking things has an implied risk calculation in it: take the risk, fix it after. If Facebook made toys, they would probably have choking hazards and unsafe lead levels even after the CPSC caught them, much like the FTC caught Facebook in our present “breach” case. Zuckerberg would be doing interviews lamenting the situation but wondering why he of all people should be in charge of these decisions. This is, at its core, about culture in Silicon Valley writ large, and about leadership at the CEO level, specifically.

Zuckerberg can we not

Inspired by this tweet storm – my tweet storm, sorry not sorry.

A lot of hot takes out there on Facebook. Here’s mine:

Breach versus Misuse

The technical distinction between breach and misuse is academic so far as the majority of Facebook users are concerned. I know for developers and engineers it’s important, so I won’t harp on this too much, but in the court of public opinion it matters little. For user trust it matters little. In a court of law, I don’t know how much it would matter because I don’t know the cause of action; I assume it would be probative but not necessarily determinative. For regulatory bodies, that also depends — which regulatory body? In the US? The EU? Regardless, you start out with your strongest talking point, and this was the weakest.


Facebook as victim

Facebook is framing themselves as the victim. As noted in the tweet, this is absurd. Sure, Facebook was duped, but they were duped because they took a business risk and it blew up in their face. Trust and verify – that is, sending a document to be signed, confirming that another party did something – is a low-cost way of enforcing your terms of doing business. This can work well enough if (a) the other party is being honest, or (b) you don’t get caught when they’re not honest. Facebook had the alternative of auditing their vendor, as is their right per their own terms and their obligation per a 2011 FTC agreement.


The FTC told Facebook to police this in 2011

Yes, you read the above correctly. The reason why Facebook is doubly on the hook is that they had material knowledge of how this information was misused as far back as 2011, and the FTC consent agreement they struck in light of that breach/misuse involved biennial audits.


Facebook’s terms only have as much force as Facebook wants

Facebook says this was “legally binding.” Well, I suppose we’ll see if that’s true. They’d have to enforce that agreement. That is, a court or arbitrator would need to acknowledge it as binding. My own experience with such agreements is that they function more or less like those “I promise not to cheat” attestations at the beginning of your SATs. You sign it, but (a) the board, not the government, enforces the terms, and (b) you can still get in trouble for the underlying offense. Through the lens of Facebook, they would be the party to enforce their own terms, and they can still be on the hook for malfeasance carried out on their behalf by their third-party developers.

Perhaps, and more likely, there is an indemnity clause nestled in there somewhere and Facebook would turn to Cambridge Analytica to recoup some losses, but it’s probably not “legally binding” in the sense that CA broke anything more than The Law of Facebook. Again, it was on Facebook to check up on their partners, much like manufacturers need to check on their supply chain.


Facebook’s business model is not really the point. Their recklessness is.

The core issue here is not that Facebook collects our data or that Cambridge Analytica purports to brainwash people into voting a certain way. Data collection is a feature, not a bug, of Facebook’s business model. The issue is that they had so little control over secondary and tertiary uses of that data, i.e., what happened after Facebook provided that information to other parties.

Jack Balkin has argued that Facebook and other digital information brokers should be seen as information fiduciaries, and I agree; this episode provides a good argument for that view. Being a fiduciary comes with a heightened standard of care and exercise of due diligence. Facebook, for example, has a fiduciary duty to its shareholders. I’m fine with Facebook having a business model that revolves around ad revenue and data collection; I’m most definitely not fine with them being reckless in the maintenance and control of how that information is distributed.


Zuck needs to draw a line.

Zuckerberg needs to grow up. In his interviews – and I’m paraphrasing but here’s the Recode transcript – Zuckerberg wants to know who chose him to make these decisions. Here’s the answer: Zuckerberg chose himself. Being CEO of a company means you’re in charge of everything that company does. I don’t think that Facebook’s impacts are so reasonably foreseeable that we can reach back in time and hold the company accountable for every failing that has popped up in the past year. Very few people envisioned prominent, US-based social media platforms as the hinge for disinformation campaigns by hostile foreign parties.

But Facebook definitely could have foreseen that people lie, and sometimes they lie on the internet, and sometimes they are greedy, and that a concoction of greed and lying and people’s personal data equals a high risk of possible misuse and abuse. It means that sophisticated companies with armies of legal compliance attorneys should do their due diligence to ensure, at the least, that they are being good stewards of that information (i.e., that they keep some record of what information was collected, to whom it was sold, and for what purpose(s) it was used, and that they follow up on deletion requests with regular audits of their choosing).


This is some heavy shit

I have some thoughts on digital privacy, generally, that I’ll likely write in another post since it is its own beast. Obviously I run and use GPS tracking apps. I share this information with friends and family. It puts me at risk in some ways, perhaps more so than Facebook, depending on the nature of the risk we’re talking about. But what really ground my gears this time around was how Facebook handled the situation.

There is a demonstrable pattern of data mismanagement, at best. During each iteration, there is an apology (good) and an attempt to feign innocence or victimhood. As I said above, there are some major issues percolating in social media and national security that I think are fundamentally unfair to plop at the feet of Facebook, Twitter, etc. — at least wholly at their feet. But this particular problem was entirely within Facebook’s own house, their first reaction was to explain how they were really the victims, and they waited five days to do even that.

Mark Zuckerberg in particular seems woefully ill-suited for the kind of lateral public policy thinking that is and will continue to be necessary to manage a data-collecting social media company in an age where this is a primary mode of communication. For some countries, it is the Internet (note: I disagree with the author’s main argument, but she’s correct in pointing out the reliance on Facebook by many other groups). I really don’t think the world can handle a person unwilling to understand that inaction is not neutrality. Facebook has, at this point, been used not only to meddle in US elections, but also to sow discord in developing countries, including fueling the genocide against the Rohingya. It’s a big deal, and sometimes neutrality is just a facade for indifference and/or enabling very bad actors. He and his company need to think very hard about what his role at Facebook should be. Founders don’t necessarily need to be driving every facet of company management, especially if their imprudence has an impact on the scale of losing $50 billion in market cap, disorienting American policy writ large, and wiping out whole swaths of people.


Ruth Bader Ginsburg is the hero we need.

She is bringing back scrunchies and has a working ranked list:

“I have been wearing scrunchies for years,” she said during a recent interview with the Wall Street Journal all about her go-to hair accessory.

Her scrunchie collection comes from far and wide. The world-traveling justice has picked up accessories from her stops in countries around the world. During that same interview, Ginsburg ranked her scrunchies and where they come from.
“My best scrunchies come from Zurich. Next best, London, and third best, Rome,” she told the Wall Street Journal.

The fashion-forward justice acknowledges that her scrunchie assortment may be vast, but it doesn’t stack up to the number of other accessories in her collection.
“My scrunchie collection is not as large as my collar and glove collections, but scrunchies are catching up,” she said.

The only way this gets better is if her briefs come in Lisa Frank folders.