Another Facebook-inspired topic, the subject of which is the value of regulation. As usual, I’m writing this on the road so I’ll come back to correct typos.
First we need to unpack the purpose of regulation, its value, and the intended trade-offs. To understand this requires a bit of background on litigation — mostly, but not exclusively, negligence suits. I’ll focus on this lens for now.
Back in the day, if a company hurt a person in the course of doing business, one needed to sue the company to recoup their losses. The harm – defined as anything ranging from physical and emotional harm to financial harm – was compensated in the form of damages (money).
This created a situation where you commonly had:
- An individual litigating against a company;
- A lawyer or small firm representing that individual versus a large group of retained corporate counsel;
- An individual’s purse versus a corporate treasury;
- A burden of proof on the individual to create a prima facie (face value) case for a cause of action (legally recognized harm for which the court can provide remedy);
- Reactive, compensatory damages as opposed to proactive, harm-reduction policies.
Regulation’s value is primarily creating an even(ish) landscape for redress and pitting similarly situated parties against each other.
For example, instead of making an individual fund a suit with the hopes of damages (minus legal fees), the government is in the same weight class as larger conglomerates. It is in a better place to enforce a duty of care.
Regulations create a clearer line for a duty of care and, coupled with better ability to enforce, make it more likely companies will meet that duty. If you’re building the cost of harm into your business model, it’s easier to dismiss the likelihood of a suit from a single person, who might not know their rights and has other parts of their life to tend to, than from a government body whose sole purpose is to enforce that duty.
Because these higher enforcement probabilities make it riskier to be non-compliant, companies are less likely to wait until someone gets hurt to act. To go to court, you need to have experienced a harm. The old model allows behavior to continue until someone already experienced the prohibited harm AND the person hurt decides to sue AND they can prove they should be heard in court. In contrast, the regulatory model generally makes it more cost-effective for companies to assess the risk of harm to their users before a product or service goes to market.
Regulation is less valuable when viewed as something that magically gets companies to behave a certain way. Companies can take risks. Corporate cultures that value high risk-taking might nevertheless decide to play it fast and loose with consumer outcomes for a number of reasons: moving so fast that they’re unaware a regulation applies, belief that they can get away with that behavior, and so on.
Occupying a grey area is what we call “reasonable risk,” and typically entails behavior that may or may not violate a regulation. Companies with a high risk appetite might look at that and decide it’s worth it to move forward because they can make an argument they were compliant. Others might be more risk averse and play it safe even if it costs them more to take a product or service to market.
On the flip side, not all risk is regulated but should nevertheless be accounted for. An example of this is the hoverboard fiasco a few years ago. No regulations were on the books, but once companies realized the products could be dangerous, it behooved them to recall and fix them. Morally and economically it is bad to have visibly and egregiously harmful products on the market, and I think most people would agree the argument “but it’s not against the law” would have little sway in light of material knowledge that a product is so harmful.
Pivoting back to Facebook, I definitely think regulations have a place here in that they force Facebook to answer to government bodies whose sole task is to monitor a specific kind of consumer risk. This is especially pertinent when it comes to the sometimes technical or absurd terms of service digital platforms create.
Additionally, we don’t have to wait for a breach/misuse to give Facebook et al reason to err on the side of caution. Regulations create knowledge of a would-be harm and companies have less room to argue that a reasonable person in their position would have done the same. The regulation tells them what, at minimum, is reasonable.
Here is where I’m skeptical: Facebook was already subject to regulation. My last post discusses briefly the FTC Consent Agreement to which Facebook was party. Either through ignorance or high risk appetite, Facebook arguably failed to comply with it by leaning primarily on a trust-and-verify document rather than robust biannual audits.
This “move fast and break things” culture is at the root of most of the Facebook-specific malfeasance we’ve seen revealed as of late. The agility required for start-ups can be carved out in regulatory exceptions for small firms, but Facebook is a large and sophisticated global conglomerate with a high-risk mentality embedded in it. If you’re small and scaling, your impact is minimal and this is probably fine; Facebook, however, is huge, and its risks hurt broad swaths of people in sensitive and personal ways.
None of this is to say that regulation isn’t worthwhile. I mentioned the concept of information fiduciaries in my last post and that is a good place to start. What I am saying is that regulation shuffles considerations and burdens around in a way that will only matter if the company’s risk calculus internalizes them as too big to ignore. Europe may have found a way to do this with their high penalties for non-compliance, hence the entire digital world from Slack to Google changing their terms of service. For regulation to matter, it will take a concerted effort to reform cultures at reckless tech giants in conjunction with the aforementioned enforcement methods.
The problem is that Zuckerberg’s mismanagement of and response to the issue suggest that he welcomes regulation as a stand-in for much-needed tough decisions — calls only Facebook can make for itself. Whether the product is digital privacy or toys, Zuckerberg would be the CEO putting consumers at risk, because moving fast and breaking things has an implied risk calculation in it: take the risk, fix it after. If Facebook made toys, they would probably have choking hazards and unsafe lead levels even after the CPSC caught them, much like the FTC did in our present “breach” case. Zuckerberg would be doing interviews lamenting the situation but wondering why he of all people should be in charge of these decisions. This is, at its core, about culture in Silicon Valley writ large, and leadership at the CEO level, specifically.