Amazon, Facebook, Google discussed on The Lawfare Podcast
So what sort of tools does privacy law have to help address these kinds of negative externalities to privacy — harms that go beyond what the individual feels about what's happening, where we have societal damage from all of this data accumulating?

That's really funny — this is something I've also gotten into trouble with. I have gone on record saying that, for example, Amazon Ring may not be the most terrible technology ever invented in the history of human technology. There are many types of consumer technologies that seem a little creepy and have bad externalities but are very useful, and people like convenience, people like technology. Sometimes we like targeted ads; they show you brands that you may actually want to buy. Sometimes we like technologies like Siri or Google Home — digital assistants that allow us to do things without actually typing into a computer or a phone. It's true that consumers want technologies that are occasionally privacy-invasive. I think what's important here is twofold: first, companies have to realize their responsibilities, both legal and otherwise, to protect consumer data so that they can still have a strong market advantage; but also, we need to make sure that we regulate the technology sector generally, and data collection specifically, to protect consumers even when they do want these technologies. I think we can have these new technologies — we just need to make sure we implement and develop them in a way that is protective of privacy.

So that leads to a much bigger question, which is: what are the big changes in technology in recent years that privacy regulation has failed to adapt to, on a high level?

That is a key question. The question of how society has changed is really important in understanding how the law should change, and one of my mentors is Professor Jack Balkin at Yale Law School.
He has always spoken about the importance of technology law not just dealing with new technologies simply because they're new, but looking at what has changed in society as these technologies have changed. When you understand what has changed about society, then you can figure out how the law should change. Here, I think we have a few societal changes that are important to understand when we think about how to change privacy laws, or update them, to allow for protection against disinformation, for example. One of the greatest societal changes is simply the proliferation of data. The big data world is one of the biggest changes — the fact that you walk outside and data is being collected about you by numerous actors, many of which you don't know about. That's something that's new. In the past, you generally knew when data was being collected about you; today, you don't, and there's almost no way to control how the data is later shared or transferred or aggregated.

It strikes me, on that point about the massive accumulation of data, that we're actually in this very unique moment in history where we could study things like disinformation in a way that we could never do before. Platforms have so much information about how it spreads, who spreads it, what is effective counter-speech, et cetera. But here's where we run into privacy from a different angle. In the study of disinformation, or when these campaigns are happening in real time, the platforms are concerned about privacy — about letting too much data out to researchers or counter-speech campaigns and the like. It seems like there's a real tension there, and you can understand, and have some sympathy for, the platforms being very nervous about privacy harms, especially in the wake of Cambridge Analytica and similar things. So how do you conceive of that trade-off, and is there any way to move forward from this point where we have a bit of a stalemate?
It's really interesting when you talk to people who are trying to do research on the platforms — which includes many of us on this podcast right now. It's difficult to get data from the platforms. If you want hard data on how people are talking to each other and how they're interacting, this involves either very creative use of the platforms to scrape various types of data, or talking to the platforms themselves and asking for data directly from them. Of course, post-Cambridge Analytica, many platforms have largely locked down their access for researchers, and this is a good and a bad thing. It could be good in that we could prevent the next Cambridge Analytica from happening — we could limit a bad-faith researcher from taking too much data. It's also potentially bad, because there are probably many more good-faith researchers who could use that data — even data that could potentially identify a person — solely in a research capacity, and this research could be on topics that relate to disinformation. It could be about information actors, about the spread and the amplification of disinfo — all of those things that are very important for us to study right now. So I am concerned about the over-application of privacy issues and privacy law in stopping research on tech platforms.

What I have found fascinating about the individual user side of this is that individuals also seem to be starting to realize that their personal data is worth money, and in a way they're learning that they have something to sell. We saw this pretty clearly in the Ukrainian elections that took place over the spring and summer, where there was a confession by a Russian agent who said he was sent to Ukraine around the elections to try to get people to rent out their Facebook accounts, and then they would have these organic accounts that they could use to post local ads or publish all kinds of false stories, or what have you.
Right now, if you Google "rent out my Facebook account," there are plenty of companies that specialize in this, and this to me seems like the next phase of the gig economy, where people are realizing that their data is worth something — it's not just worth something to the companies, and then to advertisers trying to sell products; it also could be worth something to them. And I just wonder where all of this is heading. One thing that I have put out there — and nobody has really yelled at me about it, although I don't necessarily think it's a good idea, but I'd love to get your thoughts — is to suggest that it would be really useful to require, through disclosures, that companies basically tell you, if you request the information, how much you're worth to them, because we all have a dollar sign on us when it comes to the kinds of advertising they're stretching toward us and how appealing we are to certain advertisers. I mean, is this just a completely crazy idea, or do you think this is something that might at least raise awareness about how data is collected, how much of it is collected, and what its worth or value is to the individual?

I think that's an interesting idea. I don't know if it's crazy; it's certainly not something that we've seen implemented in the law. I think, though, that what you're talking about here — this idea of the worth of a user's data — is not so much about how much the data costs; it's about the user account, or really the user identity. And we've seen this: we've seen different companies buy user accounts in order to then flip them and use them for disinformation or for advertising. I just read an article recently about this phenomenon where people are now becoming Amazon reviewers for products for pay. What they essentially do is sign up on Amazon, so they have an account, and they also sign up on an external website to be a paid reviewer. This external website pays them, completely off Amazon, for reviews written on products.
These people then go on Amazon, actually buy the product, and then review it as a verified purchaser. So there's no money being exchanged between Amazon and the reviewer, but the reviewer is getting paid based on the appearance of having a real Amazon account. I think that's very similar to what we're talking about — the issue of people selling their appearance, or their identity, or their account. It's not so much selling your data, which I think is different and scarier. If you sell a user account on Facebook, or a user account on Twitter or Amazon, that's one discrete thing, and that's something we can put a price on — we could have a marketing company just price how much a certain account is worth. You can think of this in terms of a proxy: for example, how much an influencer with a certain follower count is paid for a certain advertisement. The idea of getting paid for your data generally is scarier, though, because it gets into this really scary concept to me, which is simply the idea of data brokers — aggregators owning more data than anyone can really understand. I think we understand that we each have one social media account here and one account there, and we have discrete data collected by those companies in each of those accounts. What we probably don't grasp is the enormity of how much data on us exists out there, controlled by the various aggregators.

So what solutions does privacy law have for that, or does it have any?

There are various privacy laws that are trying to protect against this idea of big-data aggregation. Many privacy laws, though, are not really set up in a way to protect against those secondary, or let's say downstream, harms. Privacy laws protect against the collection of data.
They limit the collection of data, they limit the first use of data, they limit the first sale or transfer of data, and they sometimes allow for certain rights, like the right to access and the right to deletion. What we should be more concerned about, though, are the harms of aggregation and the harms that come from these large data brokers, and what I haven't really seen so far are any good laws that really regulate the data brokers. I've seen a few proposals and a few laws that try to identify them, but nothing that really regulates against their specific harms.

So I want to jump back to a question that I think combines two topics and what we were just discussing about selling accounts, though not selling data itself. If you're talking about someone who sort of rents out their Amazon account or rents out their Facebook account, that strikes me as another example of something we've talked about a lot on this podcast, which is a sort of degradation in trust. Now, if I look at Amazon reviews, I don't necessarily trust that all these five-star reviews are legitimate, even though, say, a few years ago I might have been a little more naive and said, wow, everyone loves this lightbulb so much. And bringing that into the political realm, one thing that I've seen recently that's really interesting is the Pete Buttigieg campaign: his campaign security officer announced that they're basically filming the presidential candidate all the time, at least when he's awake, so they can disprove any deepfakes that anyone makes of him. On the one hand, this is really interesting as a way of getting back that aspect of trust — of saying, we can prove that he did or didn't do X; we can prove that this is legitimate, that it's not faked. On the other hand, that's obviously really invasive. And so there's a privacy benefit and a privacy harm at the same time.
I mean, what do you think about this as a hint of the road we…