Regulation could protect Facebook, not punish it (techcrunch.com)
181 points by shahocean on March 25, 2018 | hide | past | favorite | 70 comments


I can think of three relevant regulations for (large) internet companies:

1. Strong anti-monopoly regulation. There is no reason for Facebook to own WhatsApp & Instagram.

2. Once your revenue on ad-supported services that are free for end users exceeds x million USD, you must also offer the service for money, with a reasonable margin on top of the actual production costs and strong guarantees that the data of end users who opt for the paid service is kept private and not monetized in any way.

3. Once your revenue on ad-supported services that are free to end users exceeds y million USD, you must make every single use of the end user's data easily visible. The user must also have the right to prohibit any specific type of use of their data.


> Once your revenue on ad-supported services that are free for end users exceeds x million USD, you must also offer the service for money, with a reasonable margin on top of the actual production costs and strong guarantees that the data of end users who opt for the paid service is kept private and not monetized in any way

I like the size threshold but not the solution. Toying with business models should be a solution of last resort.

Simpler: for companies with more than X users or Y gross revenues, an American GDPR. Consumers get an absolute right to audit and delete their data. Explicit consent is required for each instance of third-party sharing. Companies are liable to their users for breaches, with a minimum amount directly claimable by users through an easily accessible regulator.


Why a size threshold? An American GDPR seems like a fine idea, especially if it's similar enough to the GDPR that complying with one satisfies the other for free.


Size threshold is presumably to allow small startups to grow and explore models without crushing compliance costs. That’s the kind of regulation that protects incumbents who can afford the lawyers and software tools to do compliance properly.

There tends to be an inverse relationship between regulation in a space and new entrants springing up, for better or for worse.


There's a fine line between "grow and explore models without crushing compliance costs" and "break all the rules so you can outcompete established players". Hi, Uber!


Correctly handling user data is much cheaper if you bake it in from the start. The real costs are in figuring out how to follow GDPR when you already have private user data saved all over the place and suddenly need to be able to do things like permanently remove all identifiable information for a single user.
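The "bake it in from the start" point can be made concrete: if every record is keyed by user ID from day one, "permanently remove all identifiable information for a single user" becomes a one-call purge instead of an archaeology project across scattered stores. A toy sketch in Python (the store, table names, and API are all hypothetical, purely to illustrate the design):

```python
from collections import defaultdict

class UserScopedStore:
    """Toy data store in which every record is keyed by user ID,
    so a full per-user purge is trivial by construction."""

    def __init__(self):
        # table name -> user_id -> list of records
        self._tables = defaultdict(lambda: defaultdict(list))

    def put(self, table, user_id, record):
        self._tables[table][user_id].append(record)

    def records_for(self, user_id):
        # Right of access: every record attributable to this user.
        return {t: rows[user_id] for t, rows in self._tables.items()
                if user_id in rows}

    def forget_user(self, user_id):
        # Right to erasure: remove the user from every table at once.
        for rows in self._tables.values():
            rows.pop(user_id, None)

store = UserScopedStore()
store.put("posts", "alice", {"text": "hello"})
store.put("likes", "alice", {"post": 42})
store.put("posts", "bob", {"text": "hi"})

store.forget_user("alice")
print(store.records_for("alice"))  # {} -- nothing left to find
print(store.records_for("bob"))    # bob's data is untouched
```

The expensive retrofit the comment describes is exactly the opposite case: user data spread across systems with no shared key, so each new table makes erasure harder.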

It isn't a huge loss if we no longer explore models that require violating privacy protections to get off the ground.


The overhead cost of compliance with something like GDPR would probably kill most small startups that had a data-backed business model.


There isn't really that much overhead, especially if you keep the GDPR in mind from the beginning. The big costs happen at older and larger companies that had very lax procedures before.

I bet it will not inhibit startups in the EU.


To some extent, I think the GDPR is intended to make it easier for startups in Europe to compete with US services on the EU market.

These days, being a privacy-aware service is a selling point, and EU services can easily advertise themselves as privacy-first to US citizens, while US companies will have trouble doing so (considering SESTA/FOSTA and the CLOUD Act).


Do we want startups to have a "data-backed business model"?


Yeah we do. While I don't like Google tracking where I am from my cell phone when location-sharing is off, I love listening to music on Pandora and Spotify and I would never discover the new songs I do if there was no option to like or ban songs and evolve the station.


I don't see how either of those are "data-backed business models". Pandora/Spotify have a very clear draw for people which exists regardless of how much data they store on people. They absolutely can and do use data to _enhance_ their core offering, but if they stopped overnight it would not cripple their business immediately.


Both also offer a very clear service (streaming music), and both only really collect data to improve your music experience.

Facebook seems to grab whatever it can, just to make money.


People that are willing to pay are your ad targets. Such a system devalues the rest of your base.


What about a syndicated network of N companies, each with X-1 users and Y-1 gross revenue? It sounds like what you're describing would be easy to game.


> What about a syndicated network of N companies, each with X-1 users and Y-1 gross revenue?

Size-dependent thresholds exist across our tax and regulatory codes. Concepts like beneficial ownership [1] and affiliation [2], amongst others, address these issues.

[1] https://en.wikipedia.org/wiki/Beneficial_ownership

[2] https://www.investopedia.com/terms/a/affiliate.asp


I agree that I don't exactly like forcing business models on companies. But I think Facebook and Google in particular are currently so difficult to avoid that their users should be given a fair choice in how to pay for their services: with money or with data.


2. would be an extremely bad law. Even if the purely "social" factors were worked out and we came up with a perfect definition of when and how this should apply — how the fuck do you mean to enforce it? It would only create more trouble and bureaucracy for tech people (I mean actual programmers) with no useful output, because you cannot realistically enforce it. Data either belongs to somebody, or it doesn't. Either you come up with a technological solution for keeping user data secure from anybody, including yourself (which would be equivalent to not storing it at all, and I'm not sure that's possible for every application), or you make it the responsibility of some shady but "reputable" auditors to make sure there's no crime going on, which is a dead end: even if there are no bribes and no conspiracy (which there surely will be), it only passes the responsibility along and assumes the end user "trusts" the regulator, the government — well, just "someone else". Which is just silly.

The same applies to 3. to some extent. Only 1. can be assumed to be reasonably transparent to the public (even then not "easily", as in practice it will just produce more work for lawyers and accountants).


The original comment said:

"[...] with strong guarantees that the data of end user who has opted for paid service is kept private and not monetized anyhow."

As an "actual programmer" myself, I don't see how that would be so burdensome to implement. The data is somehow segregated so that it's excluded from being sold to advertisers in any form. The user can have their data deleted permanently at any time.

It obviously wouldn't prevent someone from authorizing a 3rd-party app to access all of my Facebook data, but that is more about Facebook educating people about what they're consenting to than a philosophical debate about ownership of data.
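One way to read "somehow segregated" is a hard filter at the boundary of the advertiser-facing pipeline: paid users sit on a deny-list, and any export touching them is dropped. A toy sketch (all names and the event shape are hypothetical); note how shared content that merely mentions a paid user has to be excluded too, which is exactly the complication raised downthread:

```python
# Users who opted out of monetization by paying for the service.
PAID_USERS = {"carol"}

def ad_export(events):
    """Yield only events that may reach advertisers: anything
    touching a paid user is excluded entirely before export."""
    for event in events:
        involved = {event["user"], *event.get("mentions", [])}
        if involved & PAID_USERS:
            continue  # a paid user is involved; drop the event
        yield event

events = [
    {"user": "dave", "action": "like", "mentions": []},
    {"user": "carol", "action": "post", "mentions": []},
    {"user": "dave", "action": "tag", "mentions": ["carol"]},
]
exported = list(ad_export(events))
print(exported)  # only dave's standalone like survives the filter
```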


> The data is somehow segregated so that it's excluded from being sold to advertisers in any form.

That doesn't mix with there being free users whose data is sold to advertisers, because it's the same data. You share your photo with another user who in turn shares it with another user etc. That's what a social network is.

So what happens when the vast majority of everything is shared with both types of users? If touching a free user means it can be subject to facial recognition and tagging and everything else for advertising then there is little point in paying. If touching a paid user means it can't then it destroys the revenue model for free users and the price of paying would be high enough to discourage everyone from doing it, because you're de facto paying for all the free users you share anything with.


> As an "actual programmer" myself, I don't see how that would be so burdensome to implement

Oh, but you will see, when your manager (who doesn't really understand the implications of various technical decisions, by the way) orders you to do something completely meaningless (in your opinion), because the regulatory documents (made by people who understand and care about the implications of technical decisions far, far less than your manager) say something meaningless, contradictory and hard to interpret in the first place. If you aren't familiar with this situation, it's probably because you don't work in a regulated industry, not because these things "solve themselves" in practice. No, in practice they don't.


I'm not sure what you consider "no useful output"? Is it offering your services for a fee instead of ads, or guaranteeing the privacy of your users' data?

Because to me, both are quite useful outputs. The first is quite common practice, and for the second there are other industries (e.g. financial, medical) where we can see how this is already done in legislation and practice.

(I would think that the majority of Google's and Facebook's technical challenges exist because they want to utilize customer data.)


There are tons of rules that are not immediately enforceable but are still in place and make society better. Make it illegal, find and punish the people who don't follow the rules, and things will improve. Many people are not willing to do outright illegal stuff.


That's a curious opinion, and I have yet to see the facts to back it up; perhaps you can provide them. Because, as a matter of fact, I've seen exactly the opposite: laws that are not enforceable are the worst laws ever, disastrous even. If they are not "important" in their consequences, they just leave the simple folks (small businesses, for example — and hurting small business is a really unwise thing to do, IMO) in a state of FUD, because it takes a professional con man (e.g. an experienced lawyer) to know how to deal with this bullshit. It puts "honest" people in an uncomfortable position while doing nothing to people who are used to operating in a perpetual state of moral hazard, which means it is beneficial to the people most likely to exploit the law (or its absence) to start with. If the law is more "important" and involves a lot of regulation, it fuels corruption because, again, it's beneficial for dishonest people: it's up to somebody's opinion whether you are breaking the law, and opinions can be swayed with some money…

In the most general and seemingly "harmless" case, these laws do exactly nothing: nobody follows them and nobody tries to punish people for not following them. This just makes the legal system more complicated (and yes, there are plenty of such laws, which is one of the reasons so many legal systems are shit) and provokes a disrespectful, hand-waving attitude towards the law, which isn't a good thing either.


No. 2 is guaranteed cost-plus pricing. Quite a few fiascos have been built on that principle.

Regulation is harder than it seems. The legal system we use just doesn't really work that way, and it doesn't fail only because of corruption and such. That said, here's one idea: whatever they collect, users can elect to get a copy and easily share it with a 3rd party. At least that way we could watch them watching us.


I am confident that Facebook makes much more money on the ads it shows me than any reasonable margin on the actual cost of serving me the pages and storing my status updates. So in this case guaranteed cost-plus would be much less than what Facebook currently makes.


Some form of analytics that can report each downstream access.
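That "report each downstream access" idea is essentially an append-only audit log: every time a user's data leaves the system, the event is recorded, and the user (or a third-party watchdog) can query their slice of it. A minimal sketch, with all names and the entry format being hypothetical:

```python
import time

class AccessLog:
    """Append-only log of every downstream use of a user's data."""

    def __init__(self):
        self._entries = []

    def record(self, user_id, accessor, purpose):
        self._entries.append({
            "user": user_id,
            "accessor": accessor,  # who received the data
            "purpose": purpose,    # why (ad targeting, analytics, ...)
            "ts": time.time(),
        })

    def report(self, user_id):
        # What the user or an external watchdog gets to see.
        return [e for e in self._entries if e["user"] == user_id]

log = AccessLog()
log.record("erin", "ad-network-x", "ad targeting")
log.record("erin", "analytics-y", "engagement metrics")
log.record("frank", "ad-network-x", "ad targeting")

for entry in log.report("erin"):
    print(entry["accessor"], "->", entry["purpose"])
```

The hard part isn't the data structure, of course; it's forcing every downstream consumer to actually write to the log.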


Or just make the law so that the customers own their own private details, and not the company.

If the company wishes to use private data for anything other than providing the service to the customer, it must ask for permission.

They can still pick ads based on the private data and personalise the user experience but they can not share or otherwise disclose it to anyone else.


On points 2 and 3, why limit this to companies with revenue exceeding x or y million USD?


I assume it is specifically to avoid killing startups in the crib.


If your business model requires that, it should be killed in the crib.


So current free to user companies just get to hold their market forever? Are you that happy with them?


To continue the analogy, they should be euthanized


Because it's the same principle we apply to other mass communication providers (i.e. telecom providers). Except there it's based on #users, not revenue.


To not create unnecessary barriers to entry.


#3 sounds like a cool idea. Knowing how your data is being used is important and allows users to make more informed decisions about how they give their data away.

I think #2 is over-regulation; given #3, users can decide whether or not they want to support the product by using it.

Also, if you deactivate your account companies should no longer be able to use your data.


I think we are currently beyond the point where most people can actually choose whether or not to use some services. So I think it would be appropriate to have the option to pay for these services while keeping your privacy.


It's well known at this point that regulations tend to protect incumbents in many industries by making it more expensive for startups to compete.

Industry support for regulations can seem counter-intuitive -- why would a company want to increase its cost of doing business? But it makes sense when you consider that the costs are an anti-competitive weapon. One example is when Philip Morris supported FDA regulations of cigarettes: http://www.slate.com/articles/business/moneybox/2002/07/smok...

So, we should be very careful about how we regulate companies' use of customer data, lest we make it even more difficult for Facebook's competitors.


No. Privacy should never be second priority.

Citizens' right to privacy trumps startup creation.


> Citizens' right to privacy trumps startup creation.

You're assuming that regulatory capture protects privacy better than competition would.

The evidence is to the contrary. Debian and Open Whisper Systems are better at it than AT&T and Comcast.


Competition only works if the customer has relevant information. For a cable network's customers it's instantly apparent when their internet speeds suck; it's not apparent when someone sells your private data.

It doesn't matter that the information eventually comes to light, because they're the dominant social media company by that point, buying up their competitors, and it takes a lot more to get rid of them.


> For a cable network's customers it's instantly apparent when their internet speeds suck; it's not apparent when someone sells your private data.

There is no way to tell when they sell it but there is a way to tell when they have it to begin with.

If you're using software with public source code that you know is end-to-end encrypting the message to the recipient, you know there is no third party doing nefarious things with the data, because no third party has the data to begin with.

And that's the distinction that matters. Prohibiting a company from selling your data to someone who will do evil things with it is no use if the company itself can still do the exact same evil things with it. Or just get itself hacked like Equifax and reveal all the data to the attacker.


We should be very wary of regulation that has support from those it regulates.


I really don’t care if building codes, automotive safety regulations, and standards of medical care make it hard on new businesses. My priorities are not dying in a car crash, my home not falling on my head, and avoiding incompetent doctors. Regulation on tobacco may have helped incumbents stay incumbent, but it also led to record judgments against them and the rapid decline of their industry in the developed world. That it’s harder for a tobacco startup to get off the ground is a bonus.


Except Facebook isn’t cigarettes, dude. No matter how much you want it to be cigarettes, it just isn’t.

Facebook is a boredom cure at best, and at times it can be a version of the cold, hard, cringey truth. You might complain that Facebook harms mental health, but the reality is that sometimes the truth is the most depressing thing to learn.

If Facebook does harm by preventing enough escapism, there’s no regulatory cure to fix that. You’ll just have to wait for the tide to shift. But hey, look at that, today’s your lucky day...


Facebook, and large scale social media in general, does harm by substituting for genuine real-world social interaction, by warping our perception of other people's lives (highlight-reel effect on steroids), by pushing us to become egotistical "self-marketers", and by hijacking our attention via insidious manipulation of the lizard brain.


Hmmm, not quite. See, that’s the part about the boredom cure I mentioned.

People really are that bored. People’s lives are frequently very lackluster and uneventful. This is plainly obvious, because whenever someone delightfully ordinary (yet comfortable with that) comes along, it seems like such a breath of fresh air amongst all the puffed out try-hards that grab so much attention.

The cringey reality check is the depressing part that creates the anxiety of never wanting to leave the house. People have a hard time differentiating between bullshit and not bullshit.

It’s not entirely correct to blame Facebook in particular for the inadequacy felt when comparing oneself to someone else’s carefully curated (yet seemingly natural) photo album.

That part of the debate is just the bulimia-caused-by-waif-fashion-models debate redux. You can’t fix that by removing Facebook from the equation, because people will never stop beautifying their profile pics and cherry-picking exclusive-yet-swollen friend lists.

So we’ll cure it by all jumping ship to Mastodon, where the photo albums will be crappier and less enviable, and we’ll all be less cringey? That’s the plan?

This isn’t about elections at all, is it?


That's why I mentioned broader social media in general. This problem isn't specific to Facebook alone.


The constant posting of people salivating for regulations like yourself is also hijacking my attention and warping my perception of how misguided society at large really is. You are insidiously manipulating my lizard brain to want to destroy you, while I should be productively working. Please stop your egotistical wish marketing and regulate yourself so I can not be distracted.


I'm not the person several comments up who brought up regulations. I don't yet have a strong opinion about what regulations, if any, might be beneficial.

I was merely responding to a comment that I thought didn't accurately portray the harms that social media can cause. Perhaps you could address those points?

> to want to destroy you

That doesn't sound healthy...


I'm not so sure Facebook is in the same category as vehicles, medical care and addictive carcinogens.


I would say it's more comparable to a casino. Psychology is analysed and exploited to the benefit of the house. To some it's harmless entertainment, and to others it's devastating.

Support for regulation of casinos is born largely out of the idea that many people are not fully responsible for their psychology.

Federated social media (e.g. Mastodon) only solves part of the problem. It's like going from the casino to your friends' houses: ill effects still exist, but the domain is not controlled by a profiteer who may have incentives to maximise and exploit those ill effects.


Eh, maybe. But worst case scenario at a casino is you blow your life savings. Worst case on Facebook is you waste a few hours of your time being bored.

Additionally, if we do introduce social media regulations (shudder) you can guarantee Facebook will have a hand in writing/influencing them to benefit itself and hurt any startup competitors, as the article argues.

I wouldn't be opposed to basic data protection laws, but I'm always wary of "oh just regulate them" as an automatic cure-all for when we don't like how a company behaves. There's almost always unintended consequences.


https://www.eff.org/deeplinks/2018/03/how-fosta-will-get-hol...

"The not-so-secret goal of SESTA and FOSTA is made even more clear in a letter from Oracle. “Any start-up has access to low cost and virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software,” wrote Oracle Senior VP Kenneth Glueck. In his view, Internet companies shouldn’t “blindly run platforms with no control of the content.”

That comment helps explain why we’re seeing support for FOSTA and SESTA from odd corners of the economy: some companies will prosper if online speech is subject to tight control. An Internet that’s policed by “copyright bots” is what major film and record studios have advocated for more than a decade now. Algorithms and artificial intelligence have made major advances in recent years, and some content companies have used those advances as part of a push for mandatory, proactive filters. That’s what they mean by phrases like “notice-and-stay-down,” and that’s what messages like the Oracle letter are really all about."


People who think like this shouldn’t be VP of anything; I’m not surprised he’s at Oracle.


Techies take the job because they wake up each day believing that they’re having a massive positive influence by connecting the world.

I don't think this is really the case anymore - at least from the FB engineers I know.

Great engineers are at FB because they get to deploy really interesting projects on the largest platforms in the world with best in class technologies, or get to work on cutting edge AI research (FAIR) or interfaces (Oculus).

The whole "connect the world" thing seems to be an afterthought at best for most.


I don't find that to be true for the people I know who work at FB though...they all deeply believe in connecting the world.

Deploying code to a billion people is only thrilling the first time anyways


A good way to think about regulation and Facebook might be: would you rather continue using Facebook, knowing the government is supposedly regulating it (but you can never know for sure), or would you rather move on to a new and more interesting platform where privacy and security defaults are integrated from day one? Regulation that Facebook can point to and say, "See, we are following the rules!" will make it harder to convince your friends to leave Facebook.


That's implying that your friends would consider leaving Facebook at all because of privacy issues. If, instead, your choice is between "friends on unregulated FB" and "friends on regulated FB", I'd prefer the latter option.


Then how do you confirm Facebook is complying, and how do you confirm that the regulation they need to follow is sufficient, and if it's not, how do you really create any change to make it sufficient?


In investment parlance the government would build them a moat. Companies with capital to maintain compliance will have a head start on any startup looking to unseat them. Should this come, it will be a lot harder to start a startup.


If regulators really wanted to stop the abuse by Facebook and every ad company, they would outlaw the collection of any tracking data of users on any website except the ones you own (website operators have security needs). Embedded content in other websites (via any means including iframes) not owned by your company could not record tracking data or insert data into the user's browser (e.g. cookies, custom urls, Web SQL Databases). Any sharing of user data with other companies is illegal. Cannot abuse what you don't have.
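The first-party-only rule above could be enforced mechanically: before any tracking state is written, compare the domain serving the embedded content with the domain the user actually visited, and refuse on mismatch. A toy check (the function and policy are hypothetical; the naive two-label suffix match stands in for real registrable-domain matching, which needs the Public Suffix List):

```python
def may_set_tracking_state(visited_host, embedding_host):
    """Allow tracking state only for first-party content, i.e. when
    the embedded resource is served from the site the user visited.
    NOTE: naive suffix matching; real code must consult the
    Public Suffix List to find the registrable domain."""
    def base_domain(host):
        parts = host.lower().rstrip(".").split(".")
        return ".".join(parts[-2:]) if len(parts) >= 2 else host

    return base_domain(visited_host) == base_domain(embedding_host)

print(may_set_tracking_state("news.example.com", "cdn.example.com"))
# first-party: allowed
print(may_set_tracking_state("news.example.com", "tracker.adtech.io"))
# third-party: refused
```

Browsers already implement a softer version of this idea via third-party cookie restrictions; the proposal here would make the refusal a legal requirement rather than a client-side default.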

I expect any legislation to make competition with Facebook very difficult.


privacy regulations = Facebook loses. compliance regulations = Facebook wins.


I think that future (social) data-mining startups will need to address these things in order to be regulatory agnostic (on top of the general product/market fit):

- solving decentralized infrastructure: as nodes are taken offline, they can replace themselves and the service remains usable

- solving payments for services to third parties that aren't directly the users: they cannot rely on traditional payment infrastructure, since that is easily co-opted by regulatory bodies and used against the developers/owners of such services, because payment information has traditionally been tied to one's identity.


Privacy regulations have value independent of their effects on competition. Making a Rube Goldberg argument about how they might end up benefiting Facebook ignores that fact.

The solution to market dominance is antitrust enforcement.


I wanted to make this comment but you made it for me.

Only breaking up at least one tech giant will ever change the status quo. In FB’s case that could be very simple, in theory: restrict them to specific geographic regions (say, states in the US) and split them by feature (IM separate from profile separate from gaming). In practice, though, they’re probably too big and global to be broken up.


That is the typical situation. Regulation protects the entrenched large organizations who are mostly 'grandfathered' in and set the standard for what is permissible. And it then shelters the giants from competition coming from smaller organizations. This is how companies ossify themselves into the larger structure of government.


The whole "we shouldn't regulate Facebook because it will help Facebook" thing really seems like an underhanded attempt at tricking privacy advocates and others to fight against their own best interests.

In reality, it depends entirely on the details of the regulations.


danah boyd wrote in a 2010 essay (“Facebook is a utility; utilities get regulated”, http://www.zephoria.org/thoughts/archives/2010/05/15/faceboo...):

“In my post yesterday, I emphasized that what’s at stake with Facebook today is not about privacy or publicity but informed consent and choice. Facebook speaks of itself as a utility while also telling people they have a choice. But there’s a conflict here. We know this conflict deeply in the United States. When it comes to utilities like water, power, sewage, Internet, etc., I am constantly told that I have a choice. But like hell I’d choose Comcast if I had a choice. Still, I subscribe to Comcast. Begrudgingly. Because the “choice” I have is Internet or no Internet.

“I hate all of the utilities in my life. Venomous hatred. And because they’re monopolies, they feel no need to make me appreciate them. Cuz they know that I’m not going to give up water, power, sewage, or the Internet out of spite. Nor will most people give up Facebook, regardless of how much they grow to hate them.”

and

“Thus far, in the world of privacy, when a company oversteps its hand, people flip out, governments threaten regulation, and companies back off. This is not what’s happening with Facebook. Why? Because they know people won’t leave and Facebook doesn’t think that regulators matter. In our public discourse, we keep talking about the former and ignoring the latter. We can talk about alternatives to Facebook until we’re blue in the face and we can point to the handful of people who are leaving as “proof” that Facebook will decline, but that’s because we’re fooling ourselves. If Facebook is a utility – and I strongly believe it is – the handful of people who are building cabins in the woods to get away from the evil utility companies are irrelevant in light of all of the people who will suck up and deal with the utility to live in the city. This is going to come down to regulation, whether we like it or not.

“The problem is that we in the tech industry don’t like regulation. Not because we’re evil but because we know that regulation tends to make a mess of things. We like the threat of regulation and we hope that it will keep things at bay without actually requiring stupidity. So somehow, the social norm has been to push as far as possible and then pull back quickly when regulatory threats emerge. Of course, there have been exceptions. And I work for one of them. Two decades ago, Microsoft was as arrogant as they come and they didn’t balk at the threat of regulation. As a result, the company spent years mired in regulatory hell. And being painted as evil. The company still lives with that weight and the guilt wrt the company’s historical hubris is palpable throughout the industry.

“I cannot imagine that Facebook wants to be regulated, but I fear that it thinks that it won’t be. There’s cockiness in the air. Personally, I don’t care whether or not Facebook alone gets regulated, but regulation’s impact tends to extend much further than one company. And I worry about what kinds of regulation we’ll see. Don’t get me wrong: I think that regulators will come in with the best of intentions; they often (but not always) do. I just think that what they decide will have unintended consequences that are far more harmful than helpful and this makes me angry at Facebook for playing chicken with them. I’m not a libertarian but I’ve come to respect libertarian fears of government regulation because regulation often does backfire in some of the most frustrating ways. (A few weeks ago, I wrote a letter to be included in the COPPA hearings outlining why the intention behind COPPA was great and the result dreadful.) The difference is that I’m not so against regulation as to not welcome it when people are being screwed. And sadly, I think that we’re getting there. I just wish that Facebook would’ve taken a more responsible path so that we wouldn’t have to deal with what’s coming. And I wish that they’d realize that the people they’re screwing are those who are most vulnerable already, those whose voices they’ll never hear if they don’t make an effort.”


What other reasons would there be for Zuckerberg to invite regulation of the social networks his company dominates?



