Inspired by this tweet storm – my tweet storm, sorry not sorry.
A lot of hot takes out there on Facebook. Here’s mine:
Breach versus Misuse
The technical distinction between breach and misuse is academic so far as the majority of Facebook users are concerned. I know for developers and engineers it’s important, so I won’t harp on this too much, but in the court of public opinion it matters little. For user trust it matters little. In a court of law, I don’t know how much it would matter because I don’t know the cause of action; I assume it would be probative but not necessarily determinative. For regulatory bodies, that also depends — which regulatory body? In the US? The EU? Regardless, you’re supposed to lead with your strongest talking point, and this was Facebook’s weakest.
Facebook as victim
Facebook is framing themselves as the victim. As noted in the tweet, this is absurd. Sure, Facebook was duped, but they were duped because they took a business risk and it blew up in their face. Trust-and-verify – that is, sending a document for the other party to sign attesting that they did something – is a low-cost way of enforcing your terms of doing business. This can work well enough if (a) the other party is being honest, or (b) you don’t get caught when they’re not. Facebook had the alternative of auditing their vendor, as is their right per their own terms, and as is their obligation per a 2011 FTC agreement.
The FTC told Facebook to police this in 2011
Yes, you read the above correctly. The reason Facebook is doubly on the hook is that they had material knowledge of how this information was misused as far back as 2011, and the FTC consent agreement they struck in response to that breach/misuse required biennial audits.
Facebook’s terms only have as much force as Facebook wants
Facebook says this was “legally binding.” Well, I suppose we’ll see if that’s true. They’d have to enforce that agreement; that is, a court or arbitrator would need to acknowledge it as binding. My own experience with such agreements is that they function more or less like those “I promise not to cheat” attestations at the beginning of your SATs. You sign it, but (a) the board, not the government, enforces the terms, and (b) you can still get in trouble for the underlying offense. Through the lens of Facebook, they would be the party enforcing their own terms, and they can still be on the hook for malfeasance committed on their behalf by their third-party developers.
Perhaps, and more likely, there is an indemnity clause nestled in there somewhere and Facebook would turn to Cambridge Analytica to recoup some losses, but it’s probably not “legally binding” in the sense that CA broke anything more than The Law of Facebook. Again, it was on Facebook to check up on their partners, much like manufacturers need to check on their supply chain.
Facebook’s business model is not really the point. Their recklessness is.
The core issue here is not that Facebook collects our data or that Cambridge Analytica purports to brainwash people into voting a certain way. Data collection is a feature, not a bug, of Facebook’s business model. The issue is that they had so little control over secondary and tertiary uses of that data, i.e., what happened after Facebook provided that information to other parties.
Jack Balkin has argued that Facebook and other digital information brokers should be seen as information fiduciaries, and I agree; this episode makes a good case for that view. Being a fiduciary comes with a heightened standard of care and exercise of due diligence. Facebook, for example, has a fiduciary duty to its shareholders. I’m fine with Facebook having a business model that revolves around ad revenue and data collection; I’m most definitely not fine with them being reckless in how they maintain and control the distribution of that information.
Zuck needs to draw a line.
Zuckerberg needs to grow up. In his interviews – and I’m paraphrasing but here’s the Recode transcript – Zuckerberg wants to know who chose him to make these decisions. Here’s the answer: Zuckerberg chose himself. Being CEO of a company means you’re in charge of everything that company does. I don’t think that Facebook’s impacts are so reasonably foreseeable that we can reach back in time and hold the company accountable for every failing that has popped up in the past year. Very few people envisioned prominent, US-based social media platforms as the hinge for disinformation campaigns by hostile foreign parties.
But Facebook definitely could have foreseen that people lie, and sometimes they lie on the internet, and sometimes they are greedy, and that a concoction of greed and lying and people’s personal data equals a high risk of possible misuse and abuse. It means that sophisticated companies with armies of legal compliance attorneys should do their due diligence to ensure, at the least, that they are being good stewards of that information (i.e., that they have some record of information collected, to whom it was sold, for what purpose(s) it was being used, and that they follow up on deletion requests with regular audits of their choosing).
This is some heavy shit
I have some thoughts on digital privacy, generally, that I’ll likely write in another post since it is its own beast. Obviously I run and use GPS tracking apps. I share this information with friends and family. It puts me at risk in some ways, perhaps more so than Facebook, depending on the nature of the risk we’re talking about. But what really ground my gears this time around was how Facebook handled the situation.
There is a demonstrable pattern of data mismanagement, at best. With each iteration, there is an apology (good) and an attempt to feign innocence or victimhood (bad). As I said above, there are some major issues percolating in social media and national security that I think are fundamentally unfair to plop at the feet of Facebook, Twitter, etc. — at least wholly at their feet. But this particular problem was entirely within Facebook’s own house, and their first reaction was to explain how they were really the victims, and even that took five days.
Mark Zuckerberg in particular seems woefully ill-suited for the kind of lateral public policy thinking that is and will continue to be necessary to manage a data-collecting social media company in an age where this is a primary mode of communication. For some countries, it is the Internet (note: I disagree with the author’s main argument, but she’s correct in pointing out the reliance on Facebook by many other groups). I really don’t think the world can handle a person unwilling to understand that inaction is not neutrality. Facebook has, at this point, been used not only to meddle in US elections, but also to sow discord in developing countries, including the genocide against the Rohingya. It’s a big deal, and sometimes neutrality is just a facade for indifference and/or enabling very bad actors. He and his company need to think very hard about what his role at Facebook should be. Founders don’t necessarily need to be driving every facet of company management, especially if their imprudence has an impact on the scale of losing $50 billion in market cap, disorienting American policy writ large, and wiping out whole swaths of people.