New working hypothesis. If correct, the entire concept of market incentives for privacy needs to be substantially altered. Not that creating such a market is impossible per se, but the belief that disclosure means consumers can "vote with their feet" is provably invalid, because the rational consumer -- like the rational used car buyer -- must assume that the device is, or will become, a "privacy lemon" rather than a "privacy creampuff."
Not gonna bother to unpack that unless someone is actually interested. But I need to park it somewhere or two weeks from now I'm going to be looking for the stupid scrap of paper.
no subject
Date: 2018-02-09 12:05 pm (UTC)
Well, to unpack my work here a bit. I have been migrating into the policy field over the last few years. In Policyland, we still have a raging debate on whether people care about privacy (and how much), based on the fact that people use the Internet anyway. So despite the fact that survey after survey demonstrates that people really hate the status quo, we get lots of argument that they don't care or are unwilling to make the effort.
My argument is one grounded in well established economic theory: people are simply being rational actors. It is based on Akerlof's seminal economics paper "The Market for Lemons." To summarize: Akerlof examined the used car market. Although everyone would like to buy good used cars ("creampuffs") and avoid bad ones ("lemons"), the market failed to produce any reliable mechanism for distinguishing between the two. Why? As Akerlof explained, consumers had no way to verify a dealer's claim that a used car was a creampuff rather than a lemon. Accordingly, consumers act as if all cars are lemons. And because consumers act as if all cars are lemons, no used car dealer has any incentive to develop a way to sell only creampuffs. No matter what the merchant does, it is not rational for the consumer to believe it, because the consumer cannot verify the claim, and every contract has fine print that somehow lets the merchant wiggle out. No matter what I as an individual merchant may do to try to convince you that I am different, you rationally should not believe me, because that is *precisely* what a dishonest used car dealer would say or do.
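As a toy illustration of why "assume it's a lemon" is the rational endpoint, here is a minimal Python sketch of the unraveling logic. The specific numbers (quality uniform on [0, 1], buyers valuing a car at 1.5x its true quality) are my assumptions for the example, not Akerlof's exact parameters; the point is only the spiral.

# Minimal sketch of adverse-selection unraveling. Assumptions (mine, for
# illustration): car quality q is uniform on [0, 1], a seller values a car
# at q, a buyer values it at 1.5 * q, and the buyer can only see the
# AVERAGE quality of the cars actually offered at a given price.

def market_at_price(price, buyer_premium=1.5):
    """Average quality offered at `price`, and what buyers will pay for it."""
    # Sellers only offer cars worth no more than the price (q <= price),
    # so the average quality on the lot is price / 2 under a uniform prior.
    avg_quality_offered = price / 2
    return avg_quality_offered, buyer_premium * avg_quality_offered

price = 1.0
for _ in range(10):
    avg_q, buyers_pay = market_at_price(price)
    print(f"price={price:.4f}  avg quality={avg_q:.4f}  buyers pay={buyers_pay:.4f}")
    if buyers_pay >= price:
        break           # buyers would accept; the market clears
    price = buyers_pay  # buyers bid down, better cars withdraw, repeat

# The price falls by a factor of 0.75 each round and spirals toward zero:
# every price cut chases the remaining creampuffs off the lot, so treating
# every car as a lemon is the rational endpoint, not paranoia.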
Akerlof's solution was to find a way for the consumer to trust the vendor. He proposed a "Lemon Law" that would allow a consumer to return the used car within 30 days for any defect. The consumer no longer had to prove that the dealer knew about the defect, and there was no way for the dealer to force the buyer to waive the right to return the car if it proved defective. The result was to radically change the operation of the used car market.
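Extending the same toy model: a return right makes each individual car's quality verifiable after the fact, so the buyer no longer has to price against the average of the whole lot. This is my own illustrative encoding of the lemon-law effect, under the same assumed numbers as above:

# Same toy market, now with a no-questions return window. Simplifying
# assumption (mine): the return right lets the buyer learn the true
# quality q before being stuck, so a specific car can be priced on ITS
# quality rather than on the average of everything offered.

import random

random.seed(0)

def trades_with_return_right(q, buyer_premium=1.5):
    """With a return right, the buyer risks nothing by paying up to
    1.5 * q for this particular car: if it proves worse, back it goes."""
    price = (q + buyer_premium * q) / 2  # split the surplus between the two
    return price > q                     # seller profits, so the sale happens

for q in [random.random() for _ in range(5)]:
    print(f"quality={q:.2f}  sale happens: {trades_with_return_right(q)}")

# Every car trades, creampuffs included. What restored the market was not
# dealer honesty but a mechanism that made the dealer's claims verifiable.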
Privacy policy has a similar problem. Market incentives are not a bad idea, but we are going about them all wrong. Because a privacy policy can always be unilaterally changed without notice, because there is no private right of action or other private enforcement mechanism, and because the vendor itself may not know of the future privacy "defect," the rational consumer treats all privacy policies as an unenforceable joke. No market for privacy can develop under the current legal regime.
Accordingly, if we want market incentives to enhance privacy, we need to adopt policy measures that address the factors preventing consumers from trusting strong privacy policies. The "privacy market" that industry keeps insisting would emerge if people genuinely cared can be proven to be not merely a practical impossibility, but a theoretical one.
no subject
Date: 2018-02-09 02:45 pm (UTC)
There's a big problem: the industry does not generally know how to build secure systems, where "secure" is minimally defined as "gives only the appropriate information to the right people, and does not give inappropriate information to anyone". There are special cases which may be secure, but most complex systems which make general security claims are not.
We can improve incentives: for instance, a bank could guarantee that no inadvertent disclosure of your information would put you at risk of more than $100 loss, in the same way that liability for unauthorized credit card use tops out at $50. But the fact of the matter is that the bank would not have any particular internal assurance that they were doing things correctly.
New salescritters occasionally contact me in my $work capacity, making assurances about how secure their cloud environments are. I ask them if they are willing to indemnify us for the complete value of loss of information, assuming that the loss is their fault. New salescritters are sure something can be worked out, and bring in their lawyers... who need about fifteen seconds to say no, not a chance. So we don't increase our attackable surface, and remain hunkered down trying to do the right thing.