Regulations won’t fix “move fast and break things.”

Another Facebook-inspired topic, the subject of which is the value of regulation. As usual, I’m writing this on the road, so I’ll come back to correct typos.

First we need to unpack the purpose of regulation, its value, and the intended trade-offs. Understanding this requires a bit of background on litigation – mostly, but not exclusively, negligence suits. I’ll focus on that lens for now.

Back in the day, if a company hurt a person in the course of doing business, that person needed to sue the company to recoup their losses. The harm – anything from physical or emotional injury to financial loss – was compensated in the form of damages (money).

This created a situation where you commonly had:

  • An individual litigating against a company;
  • A lawyer or small firm representing that individual versus a large group of retained corporate counsel;
  • An individual’s purse versus a corporate treasury;
  • A burden of proof on the individual to create a prima facie (face value) case for a cause of action (legally recognized harm for which the court can provide remedy);
  • Reactive, compensatory damages as opposed to proactive, harm-reduction policies.

Regulation’s value lies primarily in creating an even(ish) landscape for redress and in pitting similarly situated parties against each other.

For example, instead of making an individual fund a suit in the hopes of recovering damages (minus legal fees), the government is in the same weight class as larger conglomerates. It is better placed to enforce a duty of care.

Regulations create a clearer line for a duty of care and, coupled with better enforcement, make it more likely companies will meet that duty. If you’re building the cost of harm into your business model, it’s easier to dismiss a single person – who might not know their rights and has other parts of their life to tend to – than a government body whose sole purpose is to enforce that duty.

Because these higher enforcement probabilities make it riskier to be non-compliant, companies are less likely to wait until someone gets hurt to act. To go to court, you need to have experienced a harm. The old model allows behavior to continue until someone has already experienced the prohibited harm AND the person hurt decides to sue AND they can prove they should be heard in court. In contrast, the regulatory model generally makes it more cost-effective for companies to assess the risk of harm to their users before a product or service goes to market.

(If this sounds familiar, you’ve probably seen Fight Club’s breakdown of Judge Learned Hand’s famous negligence formula.)
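
For the unfamiliar, Hand’s formula (from United States v. Carroll Towing, 1947) says a defendant is negligent when the burden of precautions is less than the expected harm. A minimal illustration in Python – the variable names are the canonical B, P, and L; the numbers are invented:

```python
# Judge Learned Hand's negligence formula: negligent if B < P * L, where
# B = burden (cost) of precautions, P = probability of harm, L = gravity of loss.

def negligent(B: float, P: float, L: float) -> bool:
    """Hand formula: liability attaches when precautions cost less than expected harm."""
    return B < P * L

# Invented example: skipping a $1M safeguard against a 10% chance of a $50M harm
# fails the Hand test, because 1.0 < 0.1 * 50.0.
assert negligent(B=1.0, P=0.10, L=50.0)
```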

Where regulation is less valuable is in the view that it magically gets companies to behave a certain way. Companies can take risks. Corporate cultures that value high risk-taking might nevertheless decide to play it fast and loose with consumer outcomes for a number of reasons: moving so fast that they’re unaware a regulation applies, believing they can get away with that behavior, and so on.

Occupying a grey area is what we call “reasonable risk,” and it typically entails behavior that may or may not violate a regulation. Companies with a high risk appetite might look at that and decide it’s worth it to move forward because they can make an argument they were compliant. Others might be more risk-averse and play it safe even if it costs them more to take a product or service to market.

On the flip side, not all risk is regulated, but it should nevertheless be accounted for. An example is the hoverboard fiasco a few years ago. No regulations were on the books, but once companies realized the products could be dangerous, it behooved them to recall and fix them. Morally and economically it is bad to have visibly and egregiously harmful products on the market, and I think most people would agree the argument “but it’s not against the law” would have little sway in the face of material knowledge that a product is so harmful.

Pivoting back to Facebook, I definitely think regulations have a place here in that they force Facebook to answer to government bodies whose sole task is to monitor a specific kind of consumer risk. This is especially pertinent when it comes to the sometimes technical or absurd terms of service digital platforms create.

Additionally, we don’t have to wait for a breach/misuse to give Facebook et al. reason to err on the side of caution. Regulations create knowledge of a would-be harm, and companies have less room to argue that a reasonable person in their position would have done the same. The regulation tells them what, at minimum, is reasonable.

Here is where I’m skeptical: Facebook was already subject to regulation. My last post briefly discusses the FTC consent agreement to which Facebook was party. Either through ignorance or high risk appetite, Facebook arguably failed to comply with it by leaning primarily on a trust-and-verify document rather than robust biennial audits.

This “move fast and break things” culture is at the root of most of the Facebook-specific malfeasance that has come to light of late. The agility required for start-ups can be carved out in regulatory exceptions for small firms, but Facebook is a large and sophisticated global conglomerate with a high-risk mentality embedded in it. If you’re small and scaling, your impact is minimal and this is probably fine; but Facebook is huge, and its risks hurt broad swaths of people in sensitive and personal ways.

None of this is to say that regulation isn’t worthwhile. I mentioned the concept of information fiduciaries in my last post, and that is a good place to start. What I am saying is that regulation shuffles considerations and burdens around in a way that will only matter if the company’s risk calculus internalizes them as too big to ignore. Europe may have found a way to do this with its high penalties for non-compliance – hence the entire digital world, from Slack to Google, changing their terms of service. For regulation to matter, it will take a concerted effort to reform cultures at reckless tech giants in conjunction with the aforementioned enforcement methods.

The problem is that Zuckerberg’s mismanagement of and response to the issue suggest that he welcomes regulation as a stand-in for much-needed tough decisions – calls only Facebook can make for itself. Whether the product is digital privacy or toys, Zuckerberg would be the CEO putting consumers at risk, because moving fast and breaking things has an implied risk calculation in it: take the risk, fix it after. If Facebook made toys, they would probably have choking hazards and unsafe lead levels even if the CPSC caught them, much like the FTC did in our present “breach” case. Zuckerberg would be doing interviews lamenting the situation but wondering why he of all people should be in charge of these decisions. This is, at its core, about culture in Silicon Valley writ large, and about leadership at the CEO level, specifically.

Zuckerberg can we not

Inspired by this tweet storm – my tweet storm, sorry not sorry.

A lot of hot takes out there on Facebook. Here’s mine:

Breach versus Misuse

The technical distinction between breach and misuse is academic so far as the majority of Facebook users are concerned. I know it’s important for developers and engineers, so I won’t harp on it too much, but in the court of public opinion it matters little. For user trust it matters little. In a court of law, I don’t know how much it would matter because I don’t know the cause of action; I assume it would be probative but not necessarily determinative. For regulatory bodies, it also depends – which regulatory body? In the US? The EU? Regardless, you start out with your strongest talking point, and this was the weakest.


Facebook as victim

Facebook is framing themselves as the victim. As noted in the tweet, this is absurd. Sure, Facebook was duped, but they were duped because they took a business risk and it blew up in their face. Trust and verify – that is, sending a document to be signed confirming that another party did something – is a low-cost way of enforcing your terms of doing business. This can work well enough if (a) the other party is being honest, or (b) you don’t get caught when they’re not. Facebook had the alternative of auditing their vendor, which was their right per their own terms and their obligation per a 2011 FTC consent agreement.


The FTC told Facebook to police this in 2011

Yes, you read the above correctly. The reason Facebook is doubly on the hook is that they had material knowledge of how this information was misused as far back as 2011, and the FTC consent agreement they struck in light of that breach/misuse involved biennial audits.


Facebook’s terms only have as much force as Facebook wants

Facebook says this was “legally binding.” Well, I suppose we’ll see if that’s true. They’d have to enforce that agreement – that is, a court or arbitrator would need to acknowledge it as binding. My own experience with such agreements is that they function more or less like those “I promise not to cheat” attestations at the beginning of your SATs. You sign it, but (a) the board, not the government, enforces the terms, and (b) you can still get in trouble for the underlying offense. Through the lens of Facebook: they would be the party to enforce their terms, and they can still be on the hook for malfeasance committed on their behalf by their third-party developers.

Perhaps, and more likely, there is an indemnity clause nestled in there somewhere and Facebook would turn to Cambridge Analytica to recoup some losses, but it’s probably not “legally binding” in the sense that CA broke anything more than The Law of Facebook. Again, it was on Facebook to check up on their partners, much like manufacturers need to check on their supply chain.


Facebook’s business model is not really the point. Their recklessness is.

The core issue here is not that Facebook collects our data or that Cambridge Analytica purports to brainwash people into voting a certain way. Data collection is a feature, not a bug, of Facebook’s business model. The issue is that Facebook had so little control over secondary and tertiary uses of that data, i.e., what happened after it provided that information to other parties.

Jack Balkin has argued that Facebook and other digital information brokers should be seen as information fiduciaries, and I agree; this episode provides a good argument for that view. Being a fiduciary comes with a heightened standard of care and exercise of due diligence. Facebook, for example, has a fiduciary duty to its shareholders. I’m fine with Facebook having a business model that revolves around ad revenue and data collection; I’m most definitely not fine with them being reckless in maintaining and controlling how that information is distributed.


Zuck needs to draw a line.

Zuckerberg needs to grow up. In his interviews – and I’m paraphrasing but here’s the Recode transcript – Zuckerberg wants to know who chose him to make these decisions. Here’s the answer: Zuckerberg chose himself. Being CEO of a company means you’re in charge of everything that company does. I don’t think that Facebook’s impacts are so reasonably foreseeable that we can reach back in time and hold the company accountable for every failing that has popped up in the past year. Very few people envisioned prominent, US-based social media platforms as the hinge for disinformation campaigns by hostile foreign parties.

But Facebook definitely could have foreseen that people lie, and sometimes they lie on the internet, and sometimes they are greedy, and that a concoction of greed and lying and people’s personal data equals a high risk of misuse and abuse. It means that sophisticated companies with armies of legal compliance attorneys should do their due diligence to ensure, at the least, that they are being good stewards of that information (i.e., that they have some record of information collected, to whom it was sold, for what purpose(s) it was being used, and that they follow up on deletion requests with regular audits of their choosing).
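
To make that concrete: here is a minimal sketch, in Python, of the kind of stewardship ledger described above. Every name and field is hypothetical – this implies nothing about Facebook’s actual systems.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class SharingRecord:
    """One grant of user data to a third party (all fields hypothetical)."""
    recipient: str                         # to whom the data was sold or given
    data_categories: List[str]             # what was collected/shared
    purpose: str                           # the approved use
    granted_on: date
    deletion_requested_on: Optional[date] = None
    deletion_confirmed_on: Optional[date] = None
    last_audited_on: Optional[date] = None

def overdue_for_audit(r: SharingRecord, today: date, max_days: int = 365) -> bool:
    """Flag grants never audited, or not audited within max_days."""
    return r.last_audited_on is None or (today - r.last_audited_on).days > max_days

def unverified_deletions(records: List[SharingRecord]) -> List[SharingRecord]:
    """Deletion requests sent but never confirmed – the trust-and-verify gap."""
    return [r for r in records
            if r.deletion_requested_on is not None
            and r.deletion_confirmed_on is None]
```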


This is some heavy shit

I have some thoughts on digital privacy, generally, that I’ll likely write in another post since it is its own beast. Obviously I run and use GPS tracking apps. I share this information with friends and family. It puts me at risk in some ways, perhaps more so than Facebook, depending on the nature of the risk we’re talking about. But what really ground my gears this time around was how Facebook handled the situation.

There is a demonstrable pattern of data mismanagement, at best. During each iteration there is an apology (good) and an attempt to feign innocence or victimhood. As I said above, there are some major issues percolating in social media and national security that I think are fundamentally unfair to plop at the feet of Facebook, Twitter, etc. – at least wholly at their feet. But this particular problem was entirely within Facebook’s own house, and their first reaction was to explain how they were really the victims – and they waited five days to do even that.

Mark Zuckerberg in particular seems woefully ill-suited for the kind of lateral public policy thinking that is and will continue to be necessary to manage a data-collecting social media company in an age where this is a primary mode of communication. For some countries, it is the Internet (note: I disagree with the author’s main argument, but she’s correct in pointing out the reliance on Facebook by many other groups). I really don’t think the world can handle a person unwilling to understand that inaction is not neutrality. Facebook has, at this point, been used not only to meddle in US elections, but also to sow discord in developing countries, including genocide against the Rohingya. It’s a big deal, and sometimes neutrality is just a facade for indifference and/or enabling very bad actors. He and his company need to think very hard about what his role at Facebook should be. Founders don’t necessarily need to be driving every facet of company management, especially if their imprudence has an impact on the scale of losing $50 billion in market cap, disorienting American policy writ large, and wiping out whole swaths of people.

An impromptu primer on how to deal with YouTube tosspots.

Listen to Kara Swisher’s interview with Susan Wojcicki (CEO of YouTube). There is a portion regarding rules/codes of conduct/community guidelines. I want to memorialize my thoughts here.

I’m not going to put words in Wojcicki’s mouth, but there’s an interaction where she tries to put contours around what it means to create community guidelines and rules. What she’s talking about, at least as I hear it, is due process. That term is tossed around a lot, but it has meaning: namely, that we have a set of rules created before anything happens, a process we use to enforce those rules, and a commitment to apply that process to everybody.

Due process is important because it creates a sense of fair play and justice. This is more important when we’re talking about laws dealing with peoples’ lives and liberties, but it can be just as meaningful in cultivating a sense of community – a community that people buy into and of which they want to be a part.

Businesses with poor communities often have poor user experiences, so your service had better be exclusive or so exceptional that people eat the cost of having a bad experience with others. Tech companies should take note, in my opinion, that people are being offered more alternatives and developing higher expectations in this area, so I wouldn’t hang my hat on riding this out. Get community moderators, folks. /end self-promotion

In law – the “codifying” to which Wojcicki refers – due process can be boiled down to two core components:

  1. Notice – have you told people how they’re supposed to behave before you enforce that standard of behavior?
  2. Hearing – is there an opportunity for them to make their case to whoever is enforcing that standard of behavior?

When we talk about how to enforce a law, we apply the code to a specific set of facts. Let’s make our use case Logan Paul, who I believe is, at best, an irresponsible and juvenile opportunist who needs to grow up and, at worst, an utterly insensitive knuckle-dragger with no understanding of common decency towards other people.

Nevertheless, he was a user of YouTube and presumably agreed to abide by its Terms of Service (ToS), which outlined an assortment of rules to which he was bound, including a three-strikes rule, which is exactly what it sounds like. Swisher asks (paraphrasing): why don’t you just get rid of him? You make the rules; change the rules.
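
A three-strikes rule is easy to model, which is part of its appeal as a community guideline. A minimal sketch in Python – the 90-day expiry window is my assumption, based on YouTube’s published strike policy, and none of this reflects their actual implementation:

```python
from datetime import datetime, timedelta
from typing import List

# Assumed policy parameters – not YouTube's actual code.
STRIKE_WINDOW = timedelta(days=90)   # how long a strike stays "active"
MAX_STRIKES = 3

def active_strikes(strike_dates: List[datetime], now: datetime) -> int:
    """Count strikes that have not yet expired."""
    return sum(1 for s in strike_dates if now - s < STRIKE_WINDOW)

def should_terminate(strike_dates: List[datetime], now: datetime) -> bool:
    """Terminate once three unexpired strikes accumulate."""
    return active_strikes(strike_dates, now) >= MAX_STRIKES
```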

Agreed, but that doesn’t solve our Logan Paul problem, at least not right away. Paul was bound to a set of rules that, in my opinion, exposed a gap in behavioral expectations for the YouTube platform. It behooves YouTube to change the rules, capture this behavior, and close the gap.

What it doesn’t mean is that it’s a good idea to retroactively apply this rule. Remember “notice”: you want to tell people before they act what the expectation is. Removing content that is abhorrent without a codified rationale undermines this principle. There are always going to be exceptions such as an imminent and credible threat to a person’s life, or something so grossly vulgar that the better risk calculation is to take it down and eat the cost of dealing with the aftermath. But, these are exceptions, and we don’t make rules based on exceptional behavior. We make rules based on things that are commonplace and easily understood such that most people find it possible to comply with them.

Additionally, we don’t create rules to target a specific person. It would be dubious to create a rule that seems neutral but, in application, only results in the removal of Logan Paul. Sure, it’s YouTube’s prerogative to remove whomever it wants, but I’m coming from a place that assumes YouTube wants to (1) create a consistent user experience; (2) brand itself as a media platform that doesn’t pick favorites, and; (3) provide a cogent rationale to its stakeholders and users such that it doesn’t come off as frivolous or erratic.

I’m losing steam since I need to prepare for a meeting, but my roadmap would essentially boil down to the following:

  • Do a gap assessment on YouTube’s rules as of the date of the Logan Paul suicide forest controversy. He may have engaged in questionable behavior in the past but this is the clear marker of what crossed the line in such a way as to enter the cultural zeitgeist and create national controversy.
  • Once you’ve discovered the gap, ask yourself if a rule would have captured this. Sometimes the behavior is so extraordinary that you could make a rule, but it wouldn’t, in practice, police anything because it was such a one-off. Other times it’s behavior that defies codification. This doesn’t preclude policing it, but it probably does mean you need to preserve in your ToS a level of discretion for content moderators (which you should have) and training for those moderators to spot red flags, etc.
  • Amend the ToS as needed. Make Logan Paul and others click “agree” to participate on your not-a-media-service platform.
  • Penalize all users for non-compliance, including Logan Paul should he run afoul after the new ToS have been socialized.

I don’t have much to say about hearings here; they’re a lot more important in traditional legal situations. From what I hear, you can appeal after three strikes and so forth, and this is, frankly, a marketplace. You have options to go elsewhere, even if they’re shitty options; YouTube isn’t a basic human necessity to which you have some inherent claim.

I can spell “Wojcicki” off-hand now. Boom.

Can the refugee ban help us mitigate the access to justice problem?

[Image: Kamala Khan as Ms. Marvel]

Obviously the above picture is not mine. Kamala Khan is Ms. Marvel and belongs to, you guessed it, Marvel. I thought it looked cool and was appropriate in tone and representation.

Spend enough time in law school or at the bar (the boring bar) and you’ll hear about the “access to justice” problem. The access to justice problem, at its simplest, is that the legal profession cannot sufficiently meet the needs of those who require legal services, and that this disproportionately impacts the poor and indigent. More and more, it also affects the working class and lower middle class, who cannot afford even middling legal services but make too much money to qualify for legal aid.

The salient point is that supply cannot meet demand. There are a lot of reasons for this and we can dive down that rabbit hole another day but, suffice to say, it’s a problem.

Solving problems, in my experience, and I would say in most folks’ experiences, happens incrementally. You make something 3/5/10% better and the eventual outcome is a larger accrued benefit over time.

Likewise, working for a metrics-centric company that values scalable, “machine learning” solutions has changed the way I approach a problem. I’m a lot less interested in the romantic, sweeping proposal and a lot more interested in finding gaps that are overlooked when we focus on the main problem. Often, smaller improvements in the process can better the quality of life for those within it and bolster our ability to isolate the “real” problem. Instead of getting lost in a sea of possible problems and, consequently, possible solutions, we focus on the immediate hurdles we can overcome and then dramatically increase our ability to solve the problem at large by process of isolation and/or elimination.

This isn’t how I’ve seen the legal community solve its problems. For one, it’s averse to change overall. It’s traditional and, by its nature, quite married to precedent, and not just in case law. Moreover, as I’ll touch on later, every solution to a problem must be at peace with the overarching ethical rules to which lawyers are beholden. Another hurdle is that such guidelines are set by state bars, meaning what is ethical in one state might not be in another. Creative solutions to non-substantive problems are thus discouraged not only tacitly, by culture, but also, as a trade-off for ethical lawyering, by genuine rules.

The Refugee Ban as a Pilot for Better Legal Services

This past weekend, President Trump signed an executive order (“EO”) that “suspended entry of all refugees to the United States for 120 days, barred Syrian refugees indefinitely, and blocked entry into the United States for 90 days for citizens of seven predominantly Muslim countries: Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen.”

Lawyers quickly camped out at airports. I was not among them. Regretfully, I am licensed in NY and currently unsure of my ability to practice law – even at a federal level – under these circumstances. I’m attempting to clarify it, but nevertheless my participation was limited to Facebook posts and a local rally.

I do have friends who participated. Many have signed up for an Airport Triage list. This and similar initiatives seek to compile a roster of attorneys willing to represent anyone detained or otherwise affected by the EO, along with the times they are available, how they can be contacted, and additional information such as whether they can file a habeas corpus petition and what areas of law they typically practice.

Essentially, it’s an intake form. A small, rather thankless but necessary description of who your potential client is, what they need, and how you two can communicate and get together.
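
As a thought experiment, that intake record is simple enough to sketch in code. A minimal Python version, with hypothetical fields inferred from the triage list described above:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical intake record; every field mirrors the triage list above.
@dataclass
class AttorneyIntake:
    name: str
    contact: str                    # phone or email
    available_times: List[str]      # e.g. ["Sat 2-6pm, JFK Terminal 4"]
    can_file_habeas: bool           # able to file a habeas corpus petition?
    practice_areas: List[str]       # e.g. ["immigration", "asylum"]
    languages: List[str] = field(default_factory=list)
```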

It got me wondering about what sort of phone applications one could create. As I mentioned above, this is a very small thing — the sort of thing we really don’t think about when it comes to access to justice. But one hurdle of access to justice is that the people who need lawyers often won’t ask or don’t know where to go. Another is that there are a lot of attorneys who simply cannot afford to do pro bono.

But what about those people who want to do pro bono work and mean to do pro bono work, but it gets lost in the sea of other obligations? What if we could lower the barrier to entry for requesting legal aid while providing the sort of information that comes as a push notification with a very short list of facts that, like the refugee ban, galvanizes an attorney?

This issue is sexy to us this weekend, but even after the stay, and as this is litigated, people will be affected by whatever policies the Trump administration settles on. Those policies are unlikely to change in intent even if they change in process. Programs that keep this and other issues in the minds of attorneys, and that continue aligning those needs with attorney skills, are worth exploring.

Obviously “I need help with my Earned Income Tax Credit and my husband/wife is a crazy bitch” is not as enticing as protecting the due process rights of a suffering class of people. But the basic thrust is that (1) there are people out there who need lawyers and have engaging/sympathetic/whatever cases, and; (2) there are also a lot of lawyers who do, in fact, want to donate their skills but, like all of us, put it on the backburner until something provides an obvious motivator.

Basically, legal aid societies typically function as the broker for volunteer services. My proposal here, not yet having considered the ethics, is to remove the middleman by creating a functional intake app that allows lawyers to screen possible volunteer cases. It’s also a format that is easily accessible to potential clients and is, perhaps, not as daunting as walking into a law firm.
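
The screening step is the most obviously automatable part. A minimal sketch under the same assumptions as the intake record above – hypothetical types, nothing resembling a real product:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical screening step for the brokerage app proposed above.
# All field names are assumptions, mirroring the earlier intake sketch.

@dataclass
class CaseRequest:
    summary: str
    needed_areas: List[str]          # e.g. ["immigration"]
    language: str = "English"

@dataclass
class Attorney:
    name: str
    practice_areas: List[str]
    languages: List[str]

def screen(case: CaseRequest, attorneys: List[Attorney]) -> List[Attorney]:
    """Return attorneys whose practice areas and languages fit the case –
    the brokering step legal aid societies do by hand today."""
    return [a for a in attorneys
            if set(case.needed_areas) & set(a.practice_areas)
            and case.language in a.languages]
```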

I’m still hashing this out in my head, and I don’t want this post to be too long, so here is a quick list of other offshoots of this:

  • I’m focusing on a sort of brokerage because apps that focus on being a substantive resource tread too closely to providing legal advice. Providing legal advice can create client relationships one doesn’t want, and depending on who makes the app and populates its information, could run afoul of lawful-practice-of-law, well, laws.
  • Another potential is to use it as a means for attorneys to seek clients who want to operate outside the bounds of a traditional firm. These are probably clients who occupy that working poor/lower middle class range. I’m not sure what the ethical implications are since there are rules about billable hours, but it could function as a means to broker a billable hour they can afford. I’m sure that such a thing would have to be blessed by a state’s ethics bar before this could be done. We don’t want non-lawyers to overshoot how much they should pay only to have a more savvy lawyer take advantage, for example.
  • The unspoken hurdle, particularly in my last point, is cost. Not just cost in development, but cost of offering services. A major pain point in the legal community is that (1) it’s expensive to become an attorney, such that; (2) it’s often cost-prohibitive to work for less than the kind of billable hour that precludes representing kinda-poor people.
  • In general, I’m interested in seeing what other organic solutions to client/attorney hurdles come out of this event. If there’s a silver lining, it might be that necessity begets inventions, or otherwise jerry-rigged systems, that reveal small solutions to larger problems.
  • Of course, my bias is that of a lawyer. We can also consider that refugees and immigrants, as two distinct classes, have some overlapping needs that must be met. Such an app could be used to find people – attorneys or otherwise – who speak their language or are some other kind of advocate.

Since I’m not a developer, that is a major gap in my thinking, but I’m certainly very interested in what other lessons we can learn here – hopefully something more positive, long-term, and scalable – to improve access to justice and client experiences. I say this as someone who, as another disclaimer, no longer has to work with clients, so that is yet another blind spot of mine.