Politics is a distracting affair which I generally believe is best to stay out of if you want to concentrate on research. Nevertheless, the US presidential election looks like something that directly politicizes the idea and process of research by damaging the association of scientists & students and funding for basic research, and by creating political censorship.
A core question here is: What to do? Today’s March for Science is a good step, but I’m not sure it will change many minds. Unlike most scientists, I grew up in a county (Linn) which voted overwhelmingly for Trump. As a consequence, I feel like I must translate the mindset a bit. For the median household left behind over my lifetime, a march by relatively affluent people protesting the government cutting expenses will not elicit much sympathy. Discussion of the overwhelming value of science may also fall on deaf ears simply because they have not seen the economic value personally. On the contrary, they have seen their economic situation stay flat or worsen for four decades with little prospect of things getting better. Similarly, I don’t expect history lessons on anti-intellectualism to make much of a dent. Fundamentally, scientists and science fans are a small fraction of the population.
What’s needed is a campaign that achieves broad agreement across the population and which will help. One of the roots of the March for Science is a belief in facts over fiction which may have the requisite scope. In particular, there seems to be a good case that the right to engage in mass disinformation has been enormously costly to the United States and is now a significant threat to civil stability. Internally, disinformation is a preferred tool for starting wars or for wealthy companies to push a deadly business model. Externally, disinformation is now being actively used to sway elections and is self-funding.
The election outcome is actually less important than the endemic disagreement that disinformation creates. When people simply believe in different facts about the world, how can you expect them to agree? There probably are some good uses of mass disinformation somewhere, but I’m extremely skeptical the value exceeds the cost.
Is opposition to mass disinformation broad enough that it makes a good organizing principle? If mass disinformation were eliminated or greatly reduced, it would be of enormous value to society, particularly to the disinformed. It would not address the fundamental economic stagnation of the median household in the United States, but it would remove a significant threat to civil society, which may be necessary for such progress. Given a choice between the right to mass disinform and democracy, I choose democracy.
A real question is: how? We are discussing an abridgment of freedom of speech, so from a legal perspective the basis must rest on the balance between freedom of speech and other constitutional rights. Many abridgments already exist, like penalizing an unnecessary yell of “fire” in a crowded theater.
Voluntary efforts (as Facebook and Twitter have undertaken) are a start, but they seem unlikely to go far enough as many other “news” organizations have made no such commitments. A system where companies commit to informing over disinforming, and in return become both more trusted and simultaneously liable for disinformation damages (owed to the disinformed) as assessed by civil law, may make sense. Right now organizations are mostly free to engage in disinformation as long as it is not directed at an individual, where libel laws apply. Penalizing an organization for individual mistakes seems absurd, but a pattern of errors, backed by scientific surveys verifying an anomalously misinformed status of viewers/readers/listeners, is cause for action. Getting this right is obviously a tricky thing: we want a solution that a real news organization with an existing memetic immune system prefers to the status quo because it curbs competitors that disinform. At the same time, there must be enough teeth to make disinformation uneconomical or the problem only grows.
Should disinformation have criminal penalties? One existing approach here uses RICO laws to counter disinformation from tobacco companies. Reading the history, this took an amazing amount of time, enough that it was ineffective for a generation. It seems plausible that an act directly addressing disinformation may be helpful.
What about technical solutions? Technical solutions seem necessary for success, perhaps with changes to law incentivizing them. It’s important to understand that anything going before the courts is inherently slow, particularly because courts tend to be deeply overloaded. Any approach that significantly increases the number of cases going before courts is nonviable in practice.
Would we regret this? There is a long history of governments abusing laws to censor inconvenient news sources, so caution is warranted. Structuring new laws in a manner such that they cannot be abused is an important consideration. It is obviously important to leave satire fully intact, which seems entirely possible by making the fact that it is satire unmistakable. This entire discussion is also not relevant to individuals speaking to other individuals; that is not what creates a problem.
Is this possible? It might seem obvious that mass disinformation should be curbed, but there should be no doubt that powerful forces will work to preserve it by subtle and unethical means.
Overall, I fundamentally believe that people in a position to inform or disinform have a responsibility to inform. If they don’t want that responsibility, then they should abdicate the position to someone who does, similar in effect to the proposed fiduciary rule for investments. I’m open to any approach towards achieving this.
Edit: also at CACM.
This is a thoughtful post, but why do you feel it necessary to say: “What to do? Today’s March for Science is a good step, but I’m not sure it will change many minds.” This common refrain feels lazy to me; a massive demonstration in support of science serves many more purposes than any one person can enumerate, and in any case how do you know what will change minds, or when, or whether that might even be possible?
Organized demonstrations allow one to publicly register dissent from actions and policies, and when there’s broad agreement they can be massive: the Women’s March, the demonstrations against the Iraq War and the travel ban, and yes, the March for Science. That means something in the world of ideas and for understanding broad popular opinion around the issues of the day.
I do say “It is a good step”, and I fully believe that. Whether or not minds are changed is an empirical question for which there is no simple answer in the short term.
If you think many minds were changed, then maybe you are content. If you don’t (as I don’t), then I’m suggesting an approach which channels the energy of the protest towards a policy which lobotomizes disinformation strategies. Achieving that seems pretty worthwhile. It’s the kind of change which might have a great long term benefit.
This is frightening. Can an apparently thoughtful and intelligent person really think that abandoning a core principle of the Enlightenment is a good idea? Can they really be so, frankly, STUPID as to think that this is “not relevant to individuals speaking to other individuals”? Can they really think that “caution is warranted” is going to be enough of a guard against the entirely predictable endpoint of this?
The toxic nature of current politics, especially in the US, is very disheartening. You are not thinking coherently. You are part of the problem, not the solution.
Do you have an alternative? Or is it just “live with disinformation”?
Of course one must live with disinformation, countering it with correct information as best one can. There is no alternative to that. Outlawing “disinformation” has historically been the primary means of ensuring that disinformation is NOT countered – think of the Inquisition, for example. You have to be extremely naive to think that disinformation is never propagated by the people with the political power to control what is and is not outlawed. In fact, I’m at a loss to understand how anyone could possibly think that at the present time. Are you suffering from delusions that “your side” will always be the ones in control, and that “your side” does not engage in disinformation?
In the last few hundred years, free and democratic societies have evolved a set of norms and institutions that have on the whole led, at least internally, to relative peace and prosperity. These norms include freedom of speech, a non-political judiciary, acceptance of election results, a separation of politics from business, and civil behaviour towards people even if you disagree with their views. There have always been people who don’t abide by these norms, of course, but on the whole they have been preserved by people who realize that destroying them for short-term political gain, or simply because it makes them feel good, is a very bad idea.
Very, very bad – as in millions of people dead in a civil war.
Are you opposed to the RICO settlement with Tobacco companies? The Fiduciary Rule? Prosecuting someone for inciting a panic in a crowded space? Campaign contribution limits? Libel laws? All of these abridge free speech in one sense or another, so I expect the answer is ‘yes, you are opposed’, but please clarify if not, particularly because that position seems extreme.
I’m not familiar with whatever RICO settlement with US tobacco companies you’re referring to, but I suspect the settlement scheme is at least as corrupt as the tobacco companies. I do recall that some US lawyers got a contingency fee of BILLIONS of dollars in one such legal action, which was essentially a legal trick in order to avoid the legislature taking responsibility for passing whatever laws they wanted (which may have been unconstitutional). The logic of such a settlement seems absurd, since they are essentially paid by the customers, who, keep in mind, are the victims.
Regarding the Fiduciary Rule, if you’re talking about how members of the board of a corporation are supposed to act in the interests of the shareholders, rather than pilfer the company coffers for their own benefit, then of course I support it, but I don’t see how it has anything to do with free speech. If you’re talking about the recent attempt to prosecute Exxon for having supposedly deceived shareholders about climate change, then I’d say the case is ridiculous, and cannot be taken seriously by anyone who is not engaging in motivated reasoning to get to the conclusion they want. In other words, it’s disinformation.
Deliberately inciting a dangerous panic should indeed be illegal, as should uttering the words “I’ll pay you $100,000 if you kill my wife”. Nobody disputes this. But note that the “fire in a crowded theatre” quote was part of an opinion that, if I recall correctly, upheld a law criminalizing opposition to the military draft. Do you think people who voice opposition to the draft should be put in jail?
Campaign contribution limits are a violation of the core purpose of freedom of speech in a democracy, which is to allow full political debate. If there was a problem of one party buying up all the trees so that the other parties couldn’t get any paper to print their pamphlets on, that might justify limits, but of course nothing even remotely like that is happening in practice. If you think that voters are stupid and vote based on how many times they see each party’s advertisements, rather than by considering the content of these advertisements, then why would you think that evening out the numbers, so the voters effectively choose randomly, would be better? Maybe you should instead have a bit more faith in voter intelligence.
I’m not entirely decided about libel laws, but they should certainly be restricted more than at present.
In any case, even those who favour more limits than I do on freedom of speech should be able to recognize that your proposal effectively throws out the whole concept, and can be used to justify any restriction at all that those in power decide on. It is tyranny. And if it is implemented, will likely lead to civil war.
The Fiduciary Rule is a proposed United States rule (not yet in effect, but already having an effect on the industry) that investment advisors should provide advice in the best interest of clients. Look here under 2016.
I think people should be free to oppose the draft. For the right war though, I would support a draft even while agreeing that people should be able to speak against it.
It seems like your preferred boundary for free speech is looser than what exists at present, but not unlimited. For example, you support allowing tobacco companies to advertise to kids? (Nixing that was a part of the RICO settlement.)
I’m not following the step from “intentionally engaging in mass disinformation should be illegal” to civil war. Maybe you are trying to make a slippery slope argument? Can you make it explicit? As it is, this seems like “any abridgment of freedom of speech leads to civil war”. Can you explain why the abridgments that exist which you do not like have not led to civil war? What is different here?
I’m comfortable with whatever party is in control prosecuting cases against mass disinformation. They might not choose the best cases to prosecute, but I just don’t see the harm in decimating mass disinformation campaigns. Why should we allow and enrich people for knowingly deceiving others on a massive scale?
Doesn’t it bother you that the first thing pro-censorship people always say – “there’s no right to shout FIRE in a crowded theatre” – was actually used as a justification for censorship that most people (including current US courts) would consider core protected speech? (See https://www.theatlantic.com/national/archive/2012/11/its-time-to-stop-using-the-fire-in-a-crowded-theater-quote/264449 for details.) Does that not make you suspect that anything other than a “bright line” has little chance of restraining the impulses of governments to shut up criticisms? (Of course, the bright line failed in that case, but it did get overturned later.)
Purporting to be offering investment advice to a client while actually advising them to do things that aren’t to their benefit sounds like fraud to me, so I don’t see why some special rule is needed. (I assume that none of these advisors are up front about it, saying “I think that you should buy shares in X mutual fund, because while it’s likely that you’ll lose money, it will certainly help me!”)
Issues involving kids are often treated differently than for adults – I don’t think I want to get into that debate.
The link from suppressing free speech to civil war is pretty direct. You’d be foolish to start a civil war if you can peacefully vote out the government in four (or two) years. But you’re not going to be able to do that if you’re not allowed to tell people why they should vote against the government. Once there is no free speech, the norms of democracy are gone, and civil war is definitely an option. Of course, many governments get away with this for some years, if they have substantial support, and are suppressing the speech of only a small minority of dissenters. Trying to censor the opposition when the opposition is just as numerous as your side is total madness.
And that’s what you’re talking about. Saying you only want to suppress “disinformation” is baloney. What is or is not true is the subject of much debate. It might help if you gave examples of disinformation that you think has been recently propagated, and widely believed. Maybe “The Russians hacked the election”? Or “Trump wants to shut down Meals on Wheels”? Or how about “CO2 is a pollutant”?
The example’s context does not bother me much. It’s a good example of an abridgment of free speech which seems to have survived its original context.
The situation w.r.t. financial advisers is much more subtle. It’s more like they say “I think this is a good investment for you”, never mentioning other plausibly better investments or the fact that they get a kickback when you invest. And since it’s an investment, it might even turn out right (but probably not). Proving fraud in this case is either impossible or impossible in practice.
I’m skeptical that all possible attempts to suppress disinformation are doomed to totalitarianism as long as a sufficiently strong evidentiary standard is required. For your examples:
“The Russians hacked the election”—Maybe this could be proved false to high certainty in a court, but I’m not aware of anyone with a significant audience trying to promote this?
“Trump wants to shut down Meals on Wheels”—I’m very skeptical this could be proved either true or false.
“CO2 is a pollutant”—This seems way too vague.
I’m thinking about things like this. I’m comfortable with whoever propagates such disinformation owing sufficient damages (due to those they disinformed in a class action style) to make this sort of thing noneconomic. I see no value to society of creating or propagating lies of this sort, and I see a significant tax. To the extent that people end up shooting each other because they have been disinformed, curbs on disinformation seem wise as your right to lie should not exceed my right to live. Crafting the right rule takes some care—and it should absolutely be done under the assumption that whatever party you most dislike is in power at the time it is used.
But I don’t see all possible solutions as doomed to end in civil war.
The pizzagate thing is certainly bizarre, and probably actionable under current libel laws, I would have thought. But do you really think that greater barriers to publicizing supposed child sex abuse than are presently in place are needed? There have, after all, been real cases not much less bizarre than pizzagate.
I think I must not be understanding what you’re saying with regards to “The Russians hacked the election”. Surely you’ve heard this? Here’s a Reuters story: http://www.reuters.com/article/us-usa-trump-russia-cyber-idUSKBN15X0OE with title “U.S. inquiries into Russian election hacking include three FBI probes”. The actual factual reference is to supposedly-Russian hackers stealing and releasing some mildly-embarrassing internal Democratic party emails, which might have had some minuscule influence on voters. But the phrasing of the headline and opening paragraph is pretty clearly designed to make a casual reader think that the Russians hacked the voting machines to alter the vote totals. To be clear, I DON’T think Reuters reporters should be put in jail for this, nor should Reuters be forced to pay millions in damages. But if you don’t think that, then I’m at a loss to know what you think the point of your proposal is.
Democrats losing the election by running a stupid campaign -> blame russia and internet trolls -> get rid of free speech -> hillary 2020
True story!
Generally I may be considered a realist, which sometimes could be interpreted as pessimism. So this resonates with me quite well.
What I think is more interesting from a machine learning standpoint isn’t the lies and the lying liars who tell them (who have always been around) but the algorithms used by Cambridge Analytica et al. that are being used to put the right lies in front of the right eyes. While such clustering algorithms have a tremendous power to inform and encourage voting (as the Obama campaign did in 2008 and 2012), they are also obviously tremendous weapons to disinform and discourage voting.
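For concreteness, here is a minimal sketch of the kind of clustering-based targeting being described, using k-means over invented user feature vectors; the feature dimensions, segment count, and message variants are all hypothetical, and real targeting pipelines are proprietary and far more elaborate.

```python
# Minimal sketch of clustering-based message targeting (all data hypothetical).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-user features, e.g. inferred interests or engagement signals.
user_features = rng.normal(size=(1000, 8))

# Group users into audience segments.
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(user_features)

# One message variant per segment; the same machinery serves get-out-the-vote
# reminders or targeted disinformation equally well.
message_variants = {k: f"variant_{k}" for k in range(5)}
targeted = [message_variants[s] for s in segments]

print({m: targeted.count(m) for m in message_variants.values()})
```

The point is only that the segmentation machinery itself is agnostic about whether each per-segment message informs or disinforms.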
How do we possibly fight that?
Especially in an increasingly insulated online world where different “clusters” not only never talk to each other and see the lies the others are being fed, but actively exclude other ideas from their online spaces, preventing any useful comparing of notes outside of slinging “fake news” claims.
There was a time when having a large corporation push the government to fund neural network research would have been considered “against the scientific consensus that neural networks are not the path to AI”.
No one needs to speak out for e=mc². It doesn’t matter if large companies push that e=.4mc². People can figure it out for themselves. Teaching people how to think does not involve “protecting them from falsehoods and telling them what the consensus is”. It involves teaching people how to test things for themselves. It’s nonsense to say that “regular people just won’t understand and need to be protected from lies”.
The freedom to get together with a group of people, agree on an idea, and pool your money toward spreading your beliefs in an organized way is at the core of how civilization advances. Burning “books of misinformation” is for the dark ages.
I generally agree, except where people or organizations willfully lie on a massive scale for personal benefit. If they can be proved to lie, then I see nothing wrong with repercussions similar to what you get from a libel suit.
Ever since working on the FakeNewsChallenge I’ve been wondering about a technical solution. I agree that courts are currently too costly for this, both in time and money spent. And laws should be changed to reflect modern times and renewed insights: information can have a negative value. Instead of making things easier to predict/compress (the goal of valuable information), disinformation adds noise/randomness/waste for the receiver. It takes time and energy to debunk disinformation. Both of these could be measured. In an abstract sense, disinformation is stealing/wasting brain power. We have laws against wasting resources, polluting the environment, and stealing money. We don’t have clear laws against wasting brain power, polluting the minds of a populace, or stealing mental energy. Maybe there should be in the future.
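One toy way to make “negative value” concrete under this compression view (made-up numbers, not a proposed legal standard): score a message by how much it changes the expected code length, in bits, that the receiver pays to encode the truth. A misleading report that moves beliefs away from the truth scores negative.

```python
# Toy illustration: a message's value as the change in expected code length
# (bits) the receiver pays to encode the true state of the world.
import numpy as np

def cross_entropy(p_true, q_belief):
    """Expected bits to encode outcomes from p_true using a code built from q_belief."""
    p_true, q_belief = np.asarray(p_true, float), np.asarray(q_belief, float)
    return -np.sum(p_true * np.log2(q_belief))

p_truth     = [0.9, 0.1]    # actual distribution over some binary fact
prior       = [0.5, 0.5]    # receiver's belief before any message
informed    = [0.85, 0.15]  # belief after an accurate report
disinformed = [0.2, 0.8]    # belief after a misleading report

baseline = cross_entropy(p_truth, prior)
print("value of accurate report:  ", baseline - cross_entropy(p_truth, informed))
print("value of misleading report:", baseline - cross_entropy(p_truth, disinformed))
# The misleading report has negative value: the receiver is now *more* surprised
# by reality (pays more bits) than before hearing it.
```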
I gravitate to a purely technical solution, based on cryptographic principles. In the paper “Covert Two-Party Computation” (Ahn, Hopper, Langford) http://hunch.net/~jl/projects/covert_mpc/f240-ahn.pdf there is a clause imposing a fine for not following the protocol:
> If Alice is concerned that Bob might fabricate data to try and learn something from her logs, the computation could be modified so that when an attacker is identified, the output is both an attacker and a signed contract stating that Alice is due a prohibitively large fine (for instance, $1 Billion US) if she can determine that Bob falsified his log, and vice-versa.
Perhaps information providers can be held to the same standard: smart Ethereum-based contracts put a price on providing fact-checked news. Once something is proven to be disinformation, either the contract breaks (the news agency loses the right to charge for subscriptions) or a set of small fines is automatically deducted from a pool held by a third party. News agencies and social networks could put a price on their trustworthiness and fact-checking. Consumers could gauge the trustworthiness of a news provider by comparing the amounts of money providers have put up against breaching the truth contract.
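As a rough illustration of the mechanism (plain Python rather than an actual smart contract), here is a simulation of such a “truth bond”; the class name, stake, fine, and good-standing rule are all invented for the example.

```python
# Hypothetical "truth bond": a provider stakes funds with a neutral escrow,
# each adjudicated disinformation finding deducts a fixed fine, and readers
# can inspect the remaining stake as a trust signal. All numbers are invented.
from dataclasses import dataclass

@dataclass
class TruthBond:
    provider: str
    stake: float        # funds posted up front
    fine: float         # deducted per adjudicated violation
    violations: int = 0

    def report_violation(self) -> None:
        """Record one adjudicated disinformation finding and deduct the fine."""
        self.violations += 1
        self.stake = max(0.0, self.stake - self.fine)

    @property
    def in_good_standing(self) -> bool:
        # A provider whose stake is exhausted loses the right to charge, per the proposal above.
        return self.stake > 0.0

bond = TruthBond(provider="ExampleNews", stake=1_000_000.0, fine=250_000.0)
for _ in range(3):
    bond.report_violation()
print(bond.provider, bond.stake, bond.in_good_standing)  # ExampleNews 250000.0 True
```

The stake a provider is willing to post, and how much of it survives adjudication, is the trust signal gestured at above.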