In response to ‘fake news’ many media have installed ‘fact checking’ columns and the European Commission (EC) has launched a plan to tackle online disinformation. These approaches have fundamental flaws, argues Dennis Zeilstra in this pungent longread on truth and contemporary society. Zeilstra was trained in the mathematics of aviation engineering and is now a professional health researcher.
Summary
In response to ‘fake news’ many media have installed ‘fact checking’ columns and the European Commission (EC) has launched a plan to tackle online disinformation. These approaches have fundamental flaws. They assume that there are clear, objective, true facts, a position known in philosophy as a realist world view. This view assumes that reality exists as is and we just have to discover it, but it ignores that every judgement is colored. Science is often invoked as the ultimate tool for determining the truth, but it is hardly ever acknowledged that science, too, involves presuppositions. Rather than statements being true or false, they merely have lower or higher degrees of certainty. There may be consensus, but that as such does not increase the degree of certainty that something is an accurate description of reality. The example of Galileo Galilei’s theory on Earth’s position in the solar system shows that holding on to the prevailing paradigm can be dangerous and prevent the advancement of our understanding.
The EC plan to tackle disinformation is dangerous. First, the plan requires internet platforms to ‘dilute’ disinformation, which implies that the platforms somehow must distinguish false from true information. They would effectively become a ministry of truth by proxy. This will run into the aforementioned fundamental problems with ‘truth’ and may reinforce (potentially incorrect) paradigms. Second, this new task for the online platforms is in direct contradiction with their business and algorithm goals. These goals involve things like engagement, which appeals to the human tendency to favor information that is close to, but preferably slightly more extreme than, one’s own ideas. The algorithms that cause polarization are exactly those that drive user engagement, which the platforms need in order to meet another goal: commercialization.
Solutions that do not rely on the fundamentally problematic identification of truth are possible. First, we should abandon the flawed realist worldview. Second, it is better to take others seriously instead of categorically dismissing anything they say. The details of a story may sound iffy, but the underlying feelings that such stories express are real-world concerns. Third, algorithms causing polarization can be regulated, similar to the way the international code for journalists regulates traditional media. Fourth, platforms could stimulate the best-known way to bring people together: exposure to other ideas. Examples of implementations exist. Fifth, and most important, we could and should educate people in epistemology, the branch of philosophy that asks the question ‘what can we know?’. In an open society, people should be capable of judging information for themselves.
In conclusion, the dangers associated with restricting exposure to information are just as real and threatening as disinformation. This approach reinforces prevailing paradigms, which may be wrong. As the truth does not exist, the best defense against disinformation is to educate people in making their own judgement. It is imperative to recognize that defining what is true and what isn't is a human activity and that in a democracy this must be exercised by all people rather than a few.
In the past few years, terms like ‘fake news’ and ‘alternative facts’ have been used by all kinds of media. Many major news media have sections in which ‘fact checking’ occurs. In Europe there is even an official plan, created by the European Commission (EC), to ‘tackle online disinformation’. The formal EC Communication published in April 2018 states that “The exposure of citizens to large scale disinformation, including misleading or outright false information, is a major challenge for Europe.” Each of these initiatives seems to be based on a rather binary view of information: regarding it as either factual or fictional. The term ‘disinformation’ adds an intention to spreading fictional information, namely to purposely mislead people.
There are a couple of fundamental problems with the concept of fake news and disinformation and with the way news sources and governments try to neutralize it. The approach taken by the EC poses a severe risk to people’s sovereignty in forming their own opinions. In this longread I will touch upon several key issues that play a role and that, in my opinion, make the EC’s plan a rather dangerous approach. I will discuss some of the methods employed by online platforms and I will attempt to formulate some alternative options to deal with the overwhelming stream of online information of varying quality.
Facts
Let’s first look at the concept of ‘fact’. Oxford Learner’s Dictionaries provides two definitions: (1) “a thing that is known to be true, especially when it can be proved”, and (2) “things that are true rather than things that have been invented”. Both definitions use the distinction between things that are true and things that are not. But here is the problem: how can we determine whether or not something is true?
Square A is darker than square B, or is it? Well-known optical illusion in which squares A and B have the same shade of grey (credits: wikimedia)
You might think that in many cases it is obvious whether or not something is true. You know something to be true when you have seen it with your own eyes. Or do you? As Beau Lotto, reader in neurobiology at University College London, explains in his TED talk, context, perspective, expectations, and past experiences all influence our interpretation of the information that we gather through our senses. Even something as simple as the color of a surface cannot be determined as a mere fact, but depends on the context.
The notion that we cannot know with 100% certainty that what we observe through our senses is a correct reflection of reality shows that a binary distinction between true and false cannot be made. Instead, we can only draw conclusions with a lower or higher degree of certainty. Even things such as the laws of physics are not laws in the sense that they represent reality with 100% certainty. Rather, they are models of reality that have a very high degree of certainty under many circumstances.
Science as the road to truth
So what about scientific conclusions? In today’s world, science has gained the status of providing the final verdict. If politicians are not sure what the best next step is, they order a scientific investigation. If a newspaper wants to ‘factcheck’ something, it refers to scientific publications or asks scientific researchers. Worse still, if anyone wants to ‘prove’ their point, on any topic, they often claim it is backed by science. But scientific research is not a truth machine; it is just a means (though the best we know) to investigate things.
Much of the misconception about science comes from a realist worldview. This is based on the idea that reality exists as is and we just have to discover it. From such a world view it makes sense that scientific research will reveal reality and provide us with the truth. An opposing worldview is constructivism, which is based on the idea that everything we see, hear, or conclude about reality is in essence a creation (a construct) of our mind. As shown by the optical illusion example, our brains indeed construct a virtual image of reality, and this image might be deceiving. This and other examples show that it is fundamentally impossible to be completely sure that our perception actually reflects reality.
As a downside, a constructivist worldview may render one skeptical about anything and everything, which is not very helpful in dealing with real-world problems. A third world view, which accepts the limitations of our perception yet aims to deal with real-life issues anyway, is known as pragmatism. This worldview does not pretend that we can fully comprehend reality, but aims to use observations, conclusions, concepts, and theories to deal with practical matters. In other words, in the pragmatic (and the related instrumentalist) worldview truth is not defined as an accurate representation of reality, but by how well a concept predicts the outcome of a specific practical matter.
Pragmatism may be a way forward in dealing with controversies of facts versus fiction. Limiting our expectations of ideas to the question of how well they predict observations may remove the irrational binary classification and put cases in their specific context. It can place scientific conclusions on a scale from a very low to a very high degree of certainty of providing an accurate prediction for a particular problem.
Science is a human exercise
Our world view may influence the picture that we paint of the totality of evidence, but what about the conclusions of a particular scientific investigation? If a study is conducted according to the best standards, its conclusions simply follow from solid logic and evidence and there can’t be much debate about them, right? This seems to be a widespread idea about scientific conclusions, especially among people who have no experience with conducting research. It fuels the aforementioned realist world view and the corresponding idea that well-conducted science will eventually provide a single answer to any question. The problem is that it ignores an important factor that plays a role in scientific conclusions: presuppositions.
Scientific conclusions are governed by more than evidence and logic alone. Presuppositions form the frame within which all other ingredients of the scientific method become meaningful (source: Advances in Nutrition).
Presuppositions are underlying assumptions that cannot be proven but are presumed to be true. They shape the way studies are designed and how their outcomes are processed and interpreted in order to draw conclusions. An excellent example of a presupposition is voiced by the American Association for the Advancement of Science (AAAS), which, amongst others, publishes the famous journal Science. It is one of the presuppositions underlying science itself: “science presumes that the things and events in the universe occur in consistent patterns that are comprehensible through careful, systematic study.” As with all presuppositions, this must be accepted by faith as it cannot be proven (nor disproven) that things in the universe indeed occur in patterns, nor that systematic study will make them comprehensible. As a second example, physicists draw conclusions about certain properties of the universe by presupposing that the speed of light in vacuum is constant throughout space and time. Again, this cannot be proven nor disproven.
If you are now starting to doubt science itself, that is not necessary. The fact that presuppositions are always involved does not mean that anything goes. Presuppositions must follow common sense within a particular field of expertise in order to be acceptable. In other words, if the audience does not share the presuppositions, they may interpret things differently. Even with exactly the same data and logic, different presuppositions may lead one to draw (entirely) different conclusions.
Take a moment to let the last sentence sink in.
No wonder that there can be so much debate on certain topics! Especially when the degree of certainty is low, which is quite often the case in life sciences such as nutritional research, there can be widely different interpretations of the meaning of scientific outcomes. This also explains why it doesn’t help if debaters overload their opponents with more evidence that supposedly backs up their own conclusion: the opponent will interpret that new evidence through the same divergent frame of mind that led to their different conclusions in the first place.
Consensus
One reason that presuppositions are often overlooked as a factor influencing conclusions is that in many cases they remain implicit. The more a certain presupposition is shared within a given field of expertise, the less it will be explicitly mentioned in publications or presentations. This has an interesting effect when it comes to a tool that has gained popularity during the last decades: consensus statements.
Consensus is an opinion that all members of a group agree with. However, consensus is not part of the scientific procedure and does not increase the degree of certainty. Instead it expresses which interpretation is held by the majority. Consensus statements have made their way into dietary guidelines, medical treatment recommendations, and e.g. policies to manage Covid-19.
In a democracy it is not a bad thing to aim for consensus. The problem is that ‘expert consensus’ or ‘scientific consensus’ is often used as an argument of the highest order to uphold a certain recommendation. Such an argument is often posed as if consensus were irrefutable. This is problematic, because it presumes that the majority opinion leads to a higher degree of certainty about some aspect of our world. History shows that there are many cases where the majority opinion — even of experts — turned out not to be the most accurate representation of reality. One famous example is Galileo Galilei’s argument against the consensus theory that Earth is the center of the universe, for which he was sentenced and his books were banned.
Paradigms suppress progress
The dominating world view can be an incorrect representation of reality. Reinforcing it by accepting consensus as a sound argument may be counterproductive. In fact, alternative worldviews are crucial for the advancement of knowledge.
If Robin Warren and Barry J. Marshall had not ignored the 1980s consensus view on the cause of gastric ulcers, they would not have discovered that the bacterium Helicobacter pylori is the causative agent in most cases. Their original work was not received very well, because it challenged the medical wisdom of the time, which held that gastric ulcers were caused by spicy foods and stress. In the eyes of their colleagues, Warren and Marshall were considered quacks. That may sound ridiculous today, but it is easy to judge things in hindsight.
Suppressing alternative worldviews can be just as dangerous as distributing false information. Warren and Marshall received the Nobel Prize for their discovery, and one of the reasons mentioned in the announcement was that their ideas went “against prevailing knowledge and dogmas”. Their work has saved many people from suffering from gastric ulcers.
What about the EC plan?
Back to the EC. In the 2018 Communication the EC recognizes that there is a difference between online information and traditional media, as the latter “is subject to a wide range of rules on impartiality, pluralism, cultural diversity, harmful content, advertising and sponsored content”. Since these checks and balances are absent for online content, the online platforms play a large role in spreading disinformation, according to the EC. The EC defines disinformation as: “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm.”
If there is any take-home message from the previous sections, I hope it is that there is no such thing as a clear distinction between fact and fiction. Obviously, things that are figments of imagination are fictional, but even these may contain elements that are in line with reality. You only have to watch science fiction movies from a couple of decades ago to recognize that some of the then-imaginary technologies have become reality today. More importantly, everything is subject to perception, including the blurry line between fact and fiction. At one point in time it was considered a fact that gastric ulcers are caused by stress and spicy foods, but not many would still consider this a fact today.
To use a topical example: in a March 2020 video on disinformation the president of the EC, Ursula von der Leyen, starts with a firm statement that vitamin C cannot cure Covid-19. She uses the vitamin C ‘cure’ as an example of disinformation, but in fact she could not know whether or not vitamin C can cure Covid-19, as it had not even been properly studied at that time. Firmly stating that it cannot cure Covid-19 is just as invalid as firmly stating the opposite. Moreover, several investigators have published scientific articles arguing that vitamin C may actually save lives. Several clinical trials are currently ongoing to properly investigate this, and only when high-quality evidence becomes available may we be able to make any firm statement.
This example, used by the president of the EC herself, shows how difficult it is to determine whether or not something can be considered disinformation. Yet precisely such an invalid binary view seems to be at the core of the EC’s plans.
How does the EC plan to tackle disinformation?
Although the Communication mentions that the Commission wants to foster education and media literacy, much of the effort focuses on dealing with the online content itself. This focus also becomes clear from the continued efforts outlined by the EC.
Timeline of EC actions to tackle online disinformation, as published by the EC.
Instead of removing content (which is limited to illegal content only), the Commission intends to “dilute the visibility of disinformation by improving the findability of trustworthy content”. One may argue that not promoting information is not the same as blocking it. However, this is similar to a library that does have a book, but stores it out of sight of visitors on some bookshelf in the back. To find the book, a visitor has to know that it exists and where exactly it is located. In the vast depths of the internet this poses a much bigger challenge. Often the only way to find something, even if you know it exists, is through search engines such as Google.
The EC strategy has been translated into a Code of Practice that has been signed by major players such as Facebook and Google. This Code includes aspects on commercial advertising and political advertising, but also more general aspects to protect users from disinformation. In the Annex the best practices of the companies that signed the Code are outlined. For example, Google claims to protect the integrity of the information that it provides via “improvements to algorithms in Search to prioritize authoritative sources”.
Let that sink in for a moment. What does this actually mean? How does a company such as Google distinguish authoritative sources from others?
A ministry of truth by proxy
Upon the release of its ‘Democracy Action Plan’ on December 3rd, 2020, the EC stated that there won’t be a ‘Ministry of Truth’. This plan, which is part of the forthcoming Digital Services Act (a follow-up to the Code of Practice), will move from self-regulation to ‘co-regulation’. Yet rather than putting a Ministry of Truth in place, the EC’s efforts to make media platforms responsible for diluting disinformation have effectively delegated exactly that role to the media companies. This raises major concerns, for two reasons.
First, as argued above, in many cases it is impossible to make a binary distinction between facts and fiction. During the Covid-19 pandemic, social media and search engines promoted information from national and international authorities as a means to dilute disinformation. The problem is that an authority’s interpretation does not necessarily provide a better proxy for ‘truth’ than other interpretations.
The issues discussed above are at the heart of the problem. There will always be different interpretations of the data and the minority view may turn out to be a better description of reality. This is illustrated by the fact that one authority may not have the same interpretation as another. For example, during the first months of the Covid-19 crisis the Dutch health authorities adopted the position that non-medical facemasks are ineffective, whereas the health authorities in the neighboring country Germany held a different view and the German government made it mandatory to wear facemasks in e.g. public transport and shops. These opposite recommendations show that promoting information from authoritative sources does not necessarily lead to more reliable information. After all, opposing views cannot both be true.
In addition, there is evidence that politicians and governments have interfered with scientific findings during the Covid-19 crisis, as the executive editor of the renowned scientific journal The BMJ wrote. One of the examples given is the inappropriate involvement of government advisers in one of the many authoritative organizations, in this case the Scientific Advisory Group for Emergencies (SAGE) in the UK. In February 2021 the German newspaper Die Welt found evidence of government interference in Germany as well. What does such political interference mean for the validity and reliability of information provided by ‘authoritative sources’?
After the 2009 swine flu pandemic, the Council of Europe, the leading human rights organization of Europe, investigated the course of events. A committee of the Parliamentary Assembly of the Council of Europe concluded that “the handling of the H1N1 pandemic by the World Health Organization (WHO), EU agencies and national governments led to a ‘waste of large sums of public money, and unjustified scares and fears about the health risks faced by the European public’”. Moreover, the committee concluded that the WHO failed to provide sufficient transparency and that conflicts of interest may have played a role in WHO’s recommendations.
The WHO is arguably the most authoritative source of information on health issues. If even the information from this authority might be biased in some cases, how would prioritizing information from ‘authoritative sources’ protect citizens from disinformation? How would this ensure that citizens have access to independent information so that they can, for example, hold their government to account? What does this mean for the very definition of disinformation?
Conflicting goals
The second major concern about the EC plans is the prominent role they envision for the media platforms. The EC has two angles of attack. The first is that the platforms should alter their algorithms such that disinformation is suppressed. The second is that the EC wants to gain insight into how and which algorithms determine the information that a user sees. Both are problematic.
To start with the second angle, announced in December 2020: this may not even be possible. In the Netflix documentary ‘The Social Dilemma’ former employees of companies like Facebook, Google, Pinterest and others state that no one really knows how the algorithms work, not even senior engineers. That is not surprising, given that many of these algorithms are based on self-learning routines and interact with each other. This creates an extremely complex decision-making process, which in many cases is beyond human comprehension. It is impossible to understand why a certain user is served a certain result, let alone to report on it transparently.
A much bigger problem, however, is that the platforms’ goals conflict with the EC’s goals. According to Tristan Harris, former design ethicist at Google, many, if not all, platforms are built around three goals: an engagement goal, a growth goal, and an advertising goal. Note that none of these goals has anything to do with providing relevant and objective information. You may argue that the engagement goal, which aims to keep users’ attention and ensure that they come back, is linked to the information served to the user. However, this only means that the information being served must be in line with the user’s perception of good results. In other words, to meet the engagement goal, a search engine only has to appeal to confirmation bias.
For social media, the feed only needs to address the user’s interests and spark their imagination. For the online platforms there is no other measure for the relevance of the results than the user’s clicks. As the author of ‘Weapons of Math Destruction’, Cathy O’Neil, phrases it: “they don’t have a proxy for truth that is better than a click.” This meets the engagement goal by definition, because as long as users get results that are in line with their expectations they will keep using a platform.
In both cases, meeting the engagement goal provides no incentive for the results to offer a balanced view. In fact, feeds and search results that are less biased towards the particular user are likely to decrease engagement, which is in direct conflict with the platforms’ business interests.
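To make this incentive structure concrete, here is a deliberately naive sketch of a ranker whose only objective is the predicted click. Everything in it is hypothetical: the one-dimensional ‘stance’ feature, the crude engagement model, and the item titles; no platform publishes its real ranking code. The point it illustrates is simply that when the click is the only proxy for relevance, items resembling what the user already prefers always rank first, and a balanced view never gets a chance.

```python
# Toy sketch of a click-optimized ranker (illustrative only).
# If predicted clicks are the only objective, items that resemble what the
# user already clicked on always win: confirmation bias by construction.

from dataclasses import dataclass


@dataclass
class Item:
    title: str
    stance: float  # hypothetical one-dimensional 'viewpoint' feature, -1..+1


def predicted_click(user_profile: float, item: Item) -> float:
    """Crude engagement model: the closer an item's stance is to the user's
    historical preference, the higher the estimated click probability."""
    return 1.0 - abs(user_profile - item.stance) / 2.0


def rank(user_profile: float, candidates: list[Item]) -> list[Item]:
    # The only 'proxy for truth' is the expected click, nothing else.
    return sorted(candidates, key=lambda it: predicted_click(user_profile, it), reverse=True)


if __name__ == "__main__":
    items = [
        Item("Masks are useless", -0.9),
        Item("The evidence on masks is mixed", 0.0),
        Item("Masks definitely save lives", 0.9),
    ]
    sceptic = -0.8  # a user whose past clicks lean against masks
    for it in rank(sceptic, items):
        print(f"{predicted_click(sceptic, it):.2f}  {it.title}")
    # The sceptical user sees the sceptical item first; the balanced item
    # scores lower because balance is not part of the objective.
```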
Alternative options to deal with alternative views
Obviously, some information is, in most people’s world view, extremely unlikely to be true. In some cases information may implicate persons, and this could lead to verbal or even physical attacks by people who rely on it and trust the source. Clearly, information can harm people. The problem is that this can apply to both ‘alternative facts’ and prevailing paradigms, as the example of gastric ulcers shows. If the solution cannot come from the online platforms, as I and others argue, what can we do to prevent harm?
First, we should abandon the realist view that there is a single truth that we only have to uncover. This world view is at the core of many beliefs and the root cause of the idea that the world can be divided into facts and fiction. Embracing the realization that concepts and ideas range from very unlikely to very likely, never reaching either end of the spectrum, helps foster the understanding that it all boils down to judging the degree of certainty. It helps us understand that such judgement is by definition a human activity. This is in stark contrast with the idea that concepts are true or false on their own, and that is precisely the point. Ursula von der Leyen’s statement that vitamin C does not cure the new coronavirus is a judgement, not an absolute truth.
Discuss, don’t divide
Second, we should seek ways to discuss matters rather than categorizing people into those who are wrong and those who are right. Some ideas can be described as conspiracy theories, but that doesn’t make everyone who tends to believe such a theory a nutcase. In fact, there are many examples of actual conspiracies that took place. In a recent article in the New York Times, Yuval Noah Harari mentions, for example, the Soviet Union, which did conspire “to ignite communist revolutions throughout the world” in the 1930s. Or take the Watergate scandal. It is neither strange nor wrong that people think that some things are conspiracies.
Instead of categorically dismissing anything that conspiracy theorists say or suppressing information they share, we should take them seriously, argues Charles Eisenstein. That does not necessarily apply to the content of their ideas, but it certainly applies to the underlying reasons people have for believing them. Eisenstein argues that conspiracy theories are in essence stories; stories that are not to be taken literally but that do express feelings and do contain a message below the surface. If we keep the door open (both ways) to talk to one another, we may start to understand each other’s feelings. Can this be done online? I believe so, as long as we start with the intention to question ourselves before anything or anyone else and respect other opinions. Fact checkers do exactly the opposite: they declare any contrasting opinion to be false, thereby closing the door to a meaningful conversation.
Regulate algorithms that cause polarization
Third, governments could start regulating the core problem of online platforms: algorithms whose sole aim is to keep people engaged. Various people interviewed in ‘The Social Dilemma’ explain that the optimization algorithms, in particular those directed towards the engagement goal, cause polarization. This is not intentional, but simply a result of self-learning routines that gauge human preferences. The human mind gets caught up by content that is close to, but a tiny fraction more extreme than, one’s beliefs, and self-learning algorithms simply pick up this human tendency. By appealing to these kinds of tendencies, platforms succeed in their goal of increasing users’ engagement time. The algorithms do not even need to ‘know’ what kind of content they are serving or what that content means; they can make decisions based on ‘fingerprints’ of the content that may be meaningless to any human observer. For example, how often a certain piece of content is watched by people who previously watched some other piece of content says nothing about the content itself.
Since the algorithms are so powerful, they can apply this at the level of the individual user, and this creates a non-static situation. If a user has liked one piece of content that was slightly more extreme than previous content, the next piece of content on this particular topic is likely to be slightly more extreme again. Thus, users’ preferences change along with the content they are served, all in favor of engagement. For the population at large this means that, over time, various groups tend to believe in opposing and increasingly extreme views.
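A toy simulation can make this feedback loop tangible. The step size, the learning rate, and the 0-to-1 ‘position’ scale below are invented for illustration; this is not a model of any actual recommender, only of the mechanism described above: each recommendation is marginally more extreme than the user’s current position, and the position then drifts toward what was consumed.

```python
# Toy simulation of the drift described above (purely illustrative,
# not a model of any real recommender). The 'recommender' always offers
# content slightly more extreme than the user's current position, because
# that is what its engagement estimate rewards; the user's position then
# shifts a little toward the content just consumed.

def recommend(position: float, step: float = 0.05) -> float:
    """Serve content a tiny fraction more extreme than the user's position."""
    return min(1.0, position + step)


def consume(position: float, content: float, learning_rate: float = 0.5) -> float:
    """The user's position drifts toward the content just consumed."""
    return position + learning_rate * (content - position)


position = 0.1  # mild initial doubt on some topic (0 = neutral, 1 = extreme)
for week in range(1, 31):
    content = recommend(position)
    position = consume(position, content)
    if week % 10 == 0:
        print(f"week {week:2d}: position = {position:.2f}")

# Each individual recommendation was only marginally more extreme than the
# previous one, yet after thirty rounds the position has crept most of the
# way towards the extreme end of the scale.
```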
To translate this into a Covid example: one may start out with some doubts about the effectiveness of facemasks and, over time, by viewing increasingly extreme yet convincing information, become convinced that masks do more harm than good. Another person may start off with the idea that if facemasks help even a tiny bit, they will contribute to less spreading of the virus. Over time this position may change and create a fierce proponent of face masks who considers anyone expressing doubts to be very irresponsible.
Another hurdle is enforcement. If platforms are unwilling or unable to provide insight into the way their algorithms work, enforcement must be based on an external view of their operation. It may be possible to design smart enforcement algorithms that detect when a platform does not comply with the regulation. However, that may result in an arms race like the one we know from malware developers versus antivirus developers. If this is the way regulation is enforced, it may become a never-ending story of adjustments to the latest methods employed by social media.
Despite these hurdles, regulating the very core of the problem is something to consider. It might be done in a way similar to how the international code for journalists regulates traditional media. At the very least, such regulation spares platforms from having to introduce a bias towards authoritative sources, with all the problems that brings. It is one way, and perhaps the only way, to address the core issue of online platforms.
Get to know your bias
A fourth option that I can imagine is based on the very thing that helps people become more open-minded: conversation with people who have a different background and perspective. In real-life situations, exposure to other views is the one thing that interventions which bring people closer together and decrease polarization have in common. A similar approach may help online, for example by showing (links to) one or more alternative views with every piece of content; a sketch of one possible implementation follows below. Several other implementations of such diversity exposure have been proposed, and solutions like these could be enforced through regulation.
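As a minimal sketch of that idea (my own illustration, based on the assumption that each item carries some comparable ‘viewpoint’ score; nothing here is an existing platform feature or an EC requirement), a rule like ‘show an alternative view with every piece of content’ could be as simple as pairing each served item with the candidate whose viewpoint differs from it the most.

```python
# Minimal sketch of diversity exposure: pair every item that the ranker wants
# to serve with the candidate from the pool whose viewpoint differs most.
# The stance scores and titles are hypothetical.

from typing import List, Tuple

Item = Tuple[str, float]  # (title, hypothetical stance score in -1..+1)


def with_alternative(served: List[Item], pool: List[Item]) -> List[Tuple[Item, Item]]:
    """Attach to each served item the most dissimilar item from the pool."""
    pairs = []
    for item in served:
        alternative = max(pool, key=lambda other: abs(other[1] - item[1]))
        pairs.append((item, alternative))
    return pairs


pool = [
    ("Masks are useless", -0.9),
    ("The evidence on masks is mixed", 0.0),
    ("Masks definitely save lives", 0.9),
]
served = [("Masks are useless", -0.9)]

for item, alt in with_alternative(served, pool):
    print(f"served: {item[0]!r}  ->  also shown: {alt[0]!r}")
```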
This and other solutions in which diverse views are provided could be called a truly democratic way to make information accessible to anyone. It will, however, conflict with the engagement goals that many online platforms have. Engagement is stimulated more when people are exposed to views close to their own, so intentionally showing alternative views could lower users’ engagement. This probably means that some form of regulation is needed to ensure that this line of solutions is actually implemented. However, it is worthwhile, as it replaces any Ministry of Truth type of implementation with a system that simply provides diversity.
Epistemologically empowering people
The fifth and, in my view, by far the most important step that we can and should take is to empower people to judge things for themselves. This has nothing to do with teaching facts or accepted theories, as traditional school systems usually do, but everything to do with making people familiar with the basics of epistemology. Epistemology is the branch of philosophy that in essence studies the question: what can we know? This question is much more difficult to answer than many people seem to realize, as illustrated by the popularity of fact checking.
If we realize that the things we think we know are not based purely on objective observation but are always colored by constructs of human thinking, and if we are aware of where our own thinking colors our views, we may become a bit more humble. We may be able to first question our own beliefs and start to understand that other people’s views may be rooted in a different perspective rather than in a lack of knowledge.
In conclusion
I hope that the above discussion shows that the dangers associated with restricting exposure to information are just as real and threatening as disinformation. This approach reinforces prevailing paradigms, which may be wrong. As the truth does not exist, the best defense against disinformation is to educate people in making their own judgement. It is imperative to recognize that defining what is true and what isn't is a human activity and that in a democracy this must be exercised by all people rather than a few.
I also hope that the EC is willing to reconsider its approach and that the European Parliament critically reviews any future plans and draft regulations on this topic. Moreover, I hope that you draw your own conclusions from this text. If you are a European citizen and you feel that the EC plans are moving in the wrong direction, please stand up and make your objections known to European politicians. If we as a society fail to address the dangers posed by both the algorithms of online platforms and the EC plans, we may soon be unable to sharpen our own ideas against a variety of information.
In response to ‘fake news’ many media have installed ‘fact checking’ columns and the European Commission (EC) has launched a plan to tackle online disinformation. These approaches have fundamental flaws. They assume that there are clear, objective, true facts, which in philosophy is known as a realist world view. This view assumes that reality exists as is and we just have to discover it, but ignores that every judgement is colored. Often science is invoked as the ultimate tool to determine the truth, but hardly ever it is acknowledged that science too involves presuppositions. Rather than statements being true or false they merely have lower or higher degrees of certainty. There may be consensus, but that as such does not increase the degree of certainty that something is an accurate description of the reality. The example of Galileo Galilei’s theory on Earth’s position in the solar system shows that holding on to the paradigm view can be dangerous and prevent advancement of our understanding.
The dangers associated with restricting exposure to information are just as real and threatening as disinformationThe EC plan to tackle disinformation is dangerous. First, the plan involves that internet platforms are required to ‘dilute’ disinformation, which implies that the platforms somehow must distinguish false from true information. They would effectively become a ministry of truth by proxy. This will encounter the aforementioned fundamental problems with ‘truth’ and may cause reinforcement of (potentially incorrect) paradigms. Second, this new task of the online platforms is in direct contradiction with their business and algorithm goals. These goals involve things like engagement, which appeals to the human tendency to favor information that is close to, but preferably slightly more extreme than their own ideas. The algorithms that cause polarization are exactly those that cause user engagement, which the platforms need to be able to meet another goal: commercialization.
Solutions that do not rely on the fundamentally problematic identification of truth are possible. First, we should abandon the flawed realist worldview. Second, it is better to take others seriously instead of categorically dismissing anything they say. Often details of a story may sound iffy, but the underlying feelings that the stories express are real world concerns. Third, algorithms causing polarization can be regulated, similar to the way the international code for journalists regulates traditional media. Fourth, platforms could stimulate the best-known way to bring people together: exposure to other ideas. Examples of implementations exist. Fifth, and most important, we could and should educate people in epistemology, the branch of philosophy that asks the question ‘what can we know’? In an open society, people should be capable and able to judge information for themselves.
In conclusion, the dangers associated with restricting exposure to information are just as real and threatening as disinformation. This approach reinforces prevailing paradigms, which may be wrong. As the truth does not exist, the best defense against disinformation is to educate people in making their own judgement. It is imperative to recognize that defining what is true and what isn't is a human activity and that in a democracy this must be exercised by all people rather than a few.
In the past few years terms like ‘fake news’ or ‘alternative facts’ are being used by all kinds of media. Many major news media have sections in which ‘fact checking’ occurs. In Europe there is even an official plan, created by the European Commission (EC), to ‘tackle online disinformation’. The formal EC Communication published in April 2018 states that “The exposure of citizens to large scale disinformation, including misleading or outright false information, is a major challenge for Europe.” Each of these initiatives seems to be based on a rather binary view of information: regarding it either as factual or fictional. The term ‘disinformation’ adds an intention to spreading fictional information, namely to purposely mislead people.
There are a couple of fundamental problems with the concept of fake news and disinformation and the way news sources and governments try to neutralize it. The approach being taken by the EC poses a severe risk to peoples’ sovereignty of opinion-forming. In this longread I will touch upon several key issues that play a role and which, in my opinion, make the EC’s plan a rather dangerous approach. I will discuss some of the methods employed by online platforms and I will attempt to formulate some alternative options to deal with the overwhelming stream of online information of varying quality.
Facts
Let’s first look at the concept of ‘fact’. Oxford Learner’s Dictionaries provides two definitions: (1) “a thing that is known to be true, especially when it can be proved”, and (2) “things that are true rather than things that have been invented”. Both definitions use the distinction between things that are true and things that are not. But here is the problem: how can we determine whether or not something is true?
You might think that in many cases it is obvious whether or not something is true. You know something to be true when you have seen it with your own eyes. Or is it? As Beau Lotto, reader of neurobiology at University College London, explains in his TED talk, context, perspective, expectations, and past experiences, all influence our interpretation of the information that we gather through our senses. Even something as simple as the color of a surface cannot be determined as a mere fact, but depends on the context.
The notion that we cannot know with a 100% certainty that what we observe through our senses is a correct reflection of reality shows that a binary distinction between true and false cannot be made. Instead, we can only draw conclusions with a lower or higher degree of certainty. Even things such as the laws of physics are not laws in the sense that they represent reality with a 100% certainty. Rather, they are models of reality that have a very high degree of certainty under many circumstances.
In the pragmatic (and the related instrumentalist) worldview truth is not defined as an accurate representation of reality, but by how well a concept predicts an outcome of a specific practical matter.Science as the road to truth
So what about scientific conclusions? In today’s world, science has gained a status of providing the final verdict. If politicians are not sure what is the best next step they order a scientific investigation. If a newspaper wants to ‘factcheck’ something, they refer to scientific publications or ask scientific researchers. Worse still, if anyone wants to ‘prove’ their point, on any topic, they often claim it is backed by science. But scientific research is not a truth machine, it is just a means (though the best we know) to investigate things.
Much of the misconception about science comes from a realist worldview. This is based on the idea that the reality exists as is and we just have to discover it. From such a world view it makes sense that scientific research will reveal reality and provide us the truth. An opposing worldview is constructivism which is based on the idea that everything we see, hear, or conclude about reality is in essence a creation (a construct) of our mind. As shown by the optical illusion example, our brains indeed do construct a virtual image of reality and this might be deceiving. This and other examples show that it is fundamentally impossible to be completely sure that our perception actually reflects reality.
As a downside, a constructivist worldview may render one skeptical about anything and everything, which is not very helpful in dealing with real world problems. A third world view, that accepts the limitations of our perception yet aims to deal with real life issues anyway, is known as pragmatism. This worldview does not pretend that we can fully comprehend reality, but aims to use observations, conclusions, concepts, and theories to deal with practical matters. In other words, in the pragmatic (and the related instrumentalist) worldview truth is not defined as an accurate representation of reality, but by how well a concept predicts an outcome of a specific practical matter.
Pragmatism may be a way forward in dealing with controversies of facts versus fiction. Limiting our expectations of ideas to the question how well it predicts observations may remove the irrational binary classification and put cases in their specific context. It can place scientific conclusions on a scale from a very low to very high degree of certainty of providing an accurate prediction of a particular problem.
Science is a human exercise
Our world view may influence the picture that we paint of the totality of evidence, but what about the conclusions of a particular scientific investigation? If a study is conducted according to the best standards, its conclusions simply follow from solid logic and evidence and there can’t be much debate on its conclusions, right? This seems to be a widespread idea about scientific conclusions, especially among people who do not have experience with conducting research. It fuels the aforementioned realist world view and corresponding idea that well-conducted science eventually will provide a single answer to any question. The problem is that it ignores an important factor that plays a role in scientific conclusions: presuppositions.
Presuppositions are underlying assumptions that cannot be proven but are presumed to be true. They shape the way studies are designed and how their outcomes are processed and interpreted in order to draw conclusions. An excellent example of a presupposition is voiced by the American Association for the Advancement of Science (AAAS), which, amongst others, publishes the famous journal Science. It is one of the presuppositions underlying science itself: “science presumes that the things and events in the universe occur in consistent patterns that are comprehensible through careful, systematic study.” As with all presuppositions, this must be accepted by faith as it cannot be proven (nor disproven) that things in the universe indeed occur in patterns, nor that systematic study will make them comprehensible. As a second example, physicists draw conclusions about certain properties of the universe by presupposing that the speed of light in vacuum is constant throughout space and time. Again, this cannot be proven nor disproven.
History shows that there are many cases where the majority opinion — even of experts — turned out not to be the most accurate representation of reality.If you now are starting to doubt science itself, that is not necessary. The fact that presuppositions are always involved doesn’t mean that anything goes. Presuppositions must follow common sense within a particular field of expertise in order to be acceptable. In other words, if the audience does not share the presuppositions they may interpret things differently. Even with exactly the same data and logic, different presuppositions may cause one to draw (entirely) different conclusions.
Take a moment to let the last sentence sink in.
No wonder that there can be so much debate one certain topics! Especially when the degree of certainty is low, which is quite often the case in life sciences such as nutritional research, there can be widely different interpretations on the meaning of scientific outcomes. This also explains why it doesn’t help if debaters overload their opponents with more evidence that supposedly backs up their own conclusion: the opponent will interpret that new evidence through the same deviating frame of mind that led to their different conclusions in the first place.
Consensus
One reason that presuppositions are often overlooked as a factor influencing conclusions is that in many cases they remain implicit. The more a certain presupposition is shared within a given field of expertise, the less it will be explicitly mentioned in publications or presentations. This has an interesting effect when it comes to a tool that has gained popularity during the last decades: consensus statements.
Consensus is an opinion that all members of a group agree with. However, consensus is not part of the scientific procedure and does not increase the degree of certainty. Instead it expresses which interpretation is held by the majority. Consensus statements have made their way into dietary guidelines, medical treatment recommendations, and e.g. policies to manage Covid-19.
In a democracy it is not a bad thing to aim for reaching consensus. The problem is that often ‘expert consensus’ or ‘scientific consensus’ is being used as a rationale of the highest degree to uphold a certain recommendation. Such an argument is often posed as if consensus is irrefutable. This is problematic, because it presumes that the majority opinion leads to a higher degree of certainty about some aspect of our world. History shows that there are many cases where the majority opinion — even of experts — turned out not to be the most accurate representation of reality. One famous example is Galileo Galilei’s argument against the consensus theory that Earth is the center of the universe, for which he was sentenced and his books were banned.
If Robin Warren and Barry J. Marshall would not have ignored the 1980’s consensus view on the cause of gastric ulcers, they would not have discovered that the bacterium Helicobacter pylori is the causative agent in most casesParadigms suppress progress
The dominating world view can be an incorrect representation of reality. Reinforcing it by accepting consensus as a sound argument may be counterproductive. In fact, alternative worldviews are crucial for the advancement of knowledge.
If Robin Warren and Barry J. Marshall would not have ignored the 1980’s consensus view on the cause of gastric ulcers, they would not have discovered that the bacterium Helicobacter pylori is the causative agent in most cases. Their original work was not received very well, because it challenged the medical wisdom of the time which considered gastric ulcers to be caused by spicy foods and stress. In the eyes of their colleagues, Warren and Marshall were considered to be a kind of quacks. That may sound ridiculous today, but it is easy to judge things in hindsight.
Suppressing alternative worldviews can be just as dangerous as distributing false information. Warren and Marshall received the Nobel prize for their discovery and one of the reasons that was mentioned in the announcement was that their ideas went “Against prevailing knowledge and dogmas''. Their work has saved many people the suffer from gastric ulcers.
What about the EC plan?
Back to the EC. In the 2018 Communication the EC recognizes that there is a difference between online information and traditional media as the latter “is subject to a wide range of rules on impartiality, pluralism, cultural diversity, harmful content, advertising and sponsored content”. Since these checks and balances are absent for online content the online platforms play a large role in spreading disinformation, according to the EC. The EC defines disinformation as: “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm.”
If there is any take home message from the previous sections that has stuck I hope it is that there is no such thing as a clear distinction between fact and fiction. Obviously things that are figments of imagination are fictional, but even these may contain elements that are in line with reality. You only have to watch science fiction movies from a couple of decades ago to recognize that some of the then-imaginary technologies have become reality today. More importantly, everything is subject to perception including the opaque line between fact and fiction. At one point in time it was considered a fact that gastric ulcers are caused by stress and spicy foods, but not many would still consider this a fact today.
To use a topical example, in a video on disinformation of March 2020 the president of the EC, Ursula von der Leyen, starts with a firm statement that vitamin C cannot cure Covid-19. She uses the vitamin C ‘cure’ as an example of disinformation, but in fact she could not know whether or not vitamin C can cure Covid-19 as it wasn’t even properly studied at that time. Firmly stating that it can’t cure Covid-19 is equally invalid as firmly stating the opposite. Moreover, several investigators have published scientific articles arguing that vitamin C may actually save lives. Several clinical trials are currently ongoing to properly investigate this and only when high quality evidence becomes available we may be able to make any firm statement.
This example, used by the president of the EC herself, shows how difficult it is to determine whether or not something can be considered disinformation. Yet, it is such an invalid binary view that seems to be at the core of EC’s plans.
How does the EC plan to tackle disinformation?
Although the Communication mentions that the Commission wants to foster education and media literacy, much of the efforts focus on dealing with the online content itself. This focus also becomes clear from the continued efforts as outlined by the EC.
How does a company such as Google distinguish authoritative sources from others?Instead of removal of content (which is limited to illegal content only) the Commission intents to “dilute the visibility of disinformation by improving the findability of trustworthy content”. One may argue that not promoting information is not the same as blocking it. However, this is similar to a library that does have a book, but stores outside the sight of visitors on some bookshelf in the back. To find the book, a visitor has to know that it exists and its exact location. In the vast depths of the internet this poses a much bigger challenge. Often the only way to find something, even if you know it exists, is through search engines such as Google.
The EC strategy has been translated into a Code of Practice that has been signed by major players such as Facebook and Google. This Code includes aspects on commercial advertising and political advertising, but also more general aspects to protect users from disinformation. In the Annex the best practices of the companies that signed the Code are outlined. For example, Google claims to protect the integrity of the information that it provides via “improvements to algorithms in Search to prioritize authoritative sources”
Let that sink in for a moment. What does this actually mean? How does a company such as Google distinguish authoritative sources from others?
A ministry of truth by proxy
Upon the release of its ‘Democracy Action Plan’ on December 3rd, 2020, the EC stated that there won’t be a ‘Ministry of Truth’. This plan, which is part of the forthcoming Digital Services Act (a follow-up of the Code of Practice), will move from self-regulation to ‘co-regulation’. Rather than putting in place a Ministry of Truth, the EC’s efforts to make media platforms responsible for diluting disinformation has effectively delegated exactly that to the media companies. This is reason for major concerns for two reasons.
First, as argued above, in many cases it is impossible to binary distinguish facts from fiction. During the Covid-19 pandemic, social media and search engines promoted information from national and international authorities as a means to dilute disinformation. The problem is: an authority interpretation does not necessarily provide a better proxy for ‘truth’ than other interpretations.
The issues discussed above are at the heart of the problem. There will always be different interpretations of the data and the minority view may turn out to be a better description of reality. This is illustrated by the fact that one authority may not have the same interpretation as another. For example, during the first months of the Covid-19 crisis the Dutch health authorities adopted the position that non-medical facemasks are ineffective, whereas the health authorities in the neighboring country Germany held a different view and the German government made it mandatory to wear facemasks in e.g. public transport and shops. These opposite recommendations show that promoting information from authoritative sources does not necessarily lead to more reliable information. After all, opposing views cannot both be true.
In addition, there is evidence that politicians and governments have interfered with scientific findings during the Covid-19 crisis, as the executive director of the renounced scientific journal The BMJ wrote. One of the examples given is the inappropriate involvement of government advisers in one of the many authoritative organizations, in this case the Scientific Advisory Group for Emergencies (SAGE) in the UK. In February 2021 the German newspaper Die Welt found evidence of government interference in Germany as well. What does such political interference mean for the validity and reliability of information provided by ‘authoritative sources’?
After the 2009 swine flu pandemic, the Council of Europe, the leading human rights organization of Europe, investigated the course of things. A committee of the Parliamentary Assembly of the Council of Europe concluded that “the handling of the H1N1 pandemic by the World Health Organization (WHO), EU agencies and national governments led to a ‘waste of large sums of public money, and unjustified scares and fears about the health risks faced by the European public’”. Moreover the committee concluded that the WHO failed to provide sufficient transparency and conflicts of interest may have played a role in WHO’s recommendations.
WHO is arguably considered the most authoritative source of information on health issues. If even the information from this authority might be biased in some cases, how would prioritizing information from ‘authoritative sources’ protect civilians from disinformation? How would this ensure that civilians have access to independent information so that they can e.g. supervise their government? What does this mean for the very definition of disinformation?
Conflicting goals
The second major concern about the EC plans is the prominent role that they envision for the media platforms. The EC has two angles of attack. The first is that the platforms should alter their algorithms such that disinformation is suppressed. The second is that the EC wants to gain insights in how and which algorithms result in information that a user sees. Both are problematic.
To start with the second angle, announced in December 2020, this may not even be possible. In the Netflix documentary ‘The social dilemma’ former employees of companies like Facebook, Google, Pinterest and others state that no-one really knows how the algorithms work, not even senior engineers. That is not surprising, given the fact that many of these algorithms are based on self-learning routines and interact with each other. This creates an extremely complex decision making process, which in many cases is beyond human comprehension. It is impossible to understand why a certain user is served a certain result, let alone transparently report this.
A much bigger problem, however, is that the platforms’ goals are conflicting with the EC goals. According to Tristan Harris, former design ethicist at Google, many, if not all, platforms are based on three goals: an engagement goal, a growth goal, and an advertising goal. Note that none of these goals has anything to do with providing relevant and objective information. You may argue that the engagement goal, which aims to keep the users’ attention and ensure that they come back, is linked to the information served to the user. However, this only means that the information that is being served must be in line with the users’ perception of good results. In other words, to meet the engagement goal, the search engines only have to appeal to confirmation bias.
For social media, the feed only needs to address the users’ interests and spark their imagination. For the online platforms there is no other measure for the relevance of the results than the users’ clicks. As the author of ‘Weapons of math destruction’, Cathy O’Neill, phrases it: “they don’t have a proxy for truth that is better than a click.” This meets the engagement goal by definition, because as long as users get the results that are in line with their expectations they will keep using a platform.
For both examples meeting the engagement goal does not contain any incentive for the results to provide a balanced view. In fact, feeds and search results that are less biased towards the particular user are likely to decrease the engagement, which is in direct conflict with the platforms’ business interests.
Alternative options to deal with alternative views
Obviously some information is, in most peoples’ world view, extremely unlikely to be true. In some cases information may implicate persons and this could lead to verbal or even physical attacks by people that rely on it and trust the source. Clearly, information can harm people. The problem is that this can apply to both ‘alternative facts’ and prevailing paradigms, as the example of gastric ulcers shows. If the solution cannot come from the online platforms, as I and others argue, what can we do to prevent harm?
First, we should abandon the realist view that there is a single truth that we only have to uncover. This world view is at the core of many beliefs and is the root cause of the idea that the world can be divided into facts and fiction. Embracing the realization that concepts and ideas range from very unlikely to very likely, never reaching either end of the spectrum, helps us to see that everything boils down to judging the degree of certainty. It also helps us to see that such judgement is by definition a human activity. This is in stark contrast with the idea that concepts are true or false on their own, and that is precisely the point. Ursula von der Leyen’s statement that vitamin C does not cure the new coronavirus is a judgement, not an absolute truth.
Discuss don’t divide
Second, we should seek ways to discuss matters rather than categorizing people into those who are wrong and those who are right. Some ideas can be described as conspiracy theories, but that does not make everyone who tends to believe such a theory a nutcase. In fact, there are many examples of actual conspiracies. In a recent article in the New York Times, Yuval Noah Harari mentions, for example, the Soviet Union, which in the 1930s did conspire “to ignite communist revolutions throughout the world”. Or take the Watergate scandal. It is neither strange nor wrong that people think that some things are conspiracies.
Instead of categorically dismissing anything that conspiracy theorists say or suppressing information they share, we should take them seriously, argues Charles Eisenstein. That does not necessarily apply to the content of their ideas, but it certainly applies to the underlying reasons people have for believing them. Eisenstein argues that conspiracy theories are in essence stories; stories that are not to be taken literally but that do express feelings and do carry a message below the surface. If we keep the door open (both ways) to talk to one another, we may start to understand each other’s feelings. Can this be done online? I believe so, as long as we start with the intention to question ourselves before anything or anyone else and respect other opinions. Fact checkers are doing exactly the opposite: by declaring any contrasting opinion to be false, they close the door to a meaningful conversation.
Regulate algorithms that cause polarization
Third, governments could start regulating the core problem of online platforms: algorithms whose sole aim is to keep people engaged. Various people interviewed in ‘The social dilemma’ explain that the optimization algorithms, in particular those directed towards the engagement goal, cause polarization. This is not intentional, but simply a result of self-learning routines that gauge human preferences. The human mind gets caught up in content that is close to, but a tiny fraction more extreme than, one’s beliefs, and self-learning algorithms simply pick up this human tendency. By appealing to such tendencies, platforms succeed in their goal of increasing users’ engagement time. The algorithms do not even need to ‘know’ what kind of content they are serving or what that content means; they can make decisions based on ‘fingerprints’ of the content that may be meaningless to any human observer. For example, how often a certain piece of content is watched by people who previously watched some other content says nothing about the content itself.
Since the algorithms are so powerful, they can apply this at the level of the individual user, and that creates a dynamic situation. If a user has liked one piece of content that was slightly more extreme than previous content, the next piece of content on that topic is likely to be slightly more extreme still. Thus, users’ preferences change along with the content they are served, all in favor of engagement. For the population at large this means that, over time, various groups tend to believe in opposing and increasingly extreme views.
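The drift can be illustrated with a toy model. The sketch below is my own simplification, not anything taken from ‘The social dilemma’: the system serves content one small step more extreme than the user’s current position, engaging with it pulls the position part of the way toward that content, and the next recommendation starts from there.

```python
# Toy feedback-loop simulation (illustrative assumptions only): each
# recommendation is marginally more extreme than the user's current position,
# and engaging with it shifts that position slightly.

def recommend(position: float, step: float = 0.05) -> float:
    """Serve content slightly more extreme than the user's current position."""
    return min(position + step, 1.0)

def update_belief(position: float, content: float, pull: float = 0.5) -> float:
    """Engaging with content pulls the user's position part of the way toward it."""
    return position + pull * (content - position)

position = 0.1  # 0 = moderate, 1 = extreme, on some topic
for week in range(1, 21):
    content = recommend(position)
    position = update_belief(position, content)
    if week % 5 == 0:
        print(f"week {week:2d}: belief position = {position:.2f}")

# No single step is dramatic, yet after twenty iterations the position has
# drifted a long way toward the extreme end of the scale.
```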
To translate this into a Covid example: one person may start out with some doubts about the effectiveness of face masks and, by viewing increasingly extreme yet convincing information, over time become convinced that masks do more harm than good. Another person may start off with the idea that if face masks help even a tiny bit, they will contribute to less spreading of the virus. Over time this position may harden into that of a fierce proponent of face masks who considers anyone expressing doubts to be highly irresponsible.
Obviously, algorithms that drive people into ever more extreme corners make it hard to simply talk to each other. Pulling the engagement-optimization plug could address this. That is, regulation could be put in place to prohibit algorithms that ‘prey’ on the human tendency to be intrigued by biased content. A major problem is that appealing to human tendencies happens everywhere. Ads are based on it, traditional media select headlines that appeal to it, and even if you write a motivation letter for a job you probably try to appeal to the addressee’s preferences. That means that any regulation must identify the key difference between outcomes that are due to algorithms and the engagement tactics used by traditional channels.
A second hurdle is enforcement. If platforms are not willing or not able to provide insight into the way their algorithms work, enforcement must be based on an external view of their operation. It may be possible to design smart enforcement algorithms that detect when a platform does not comply with the regulation. However, that may result in an arms race like the one between malware developers and antivirus developers. If this is the way regulation is enforced, it may become a never-ending story of adjustments to the latest methods employed by social media.
Despite these hurdles, regulation of the very core of the problem is something to consider. It could be done in a way similar to how the international code for journalists regulates traditional media. At the very least, regulation spares platforms from having to introduce a bias towards authoritative sources, with all the problems that brings. It is one way, perhaps the only way, to address the core issue of online platforms.
Get to know your bias
A fourth option that I can imagine is based on the very thing that helps people become more open-minded: conversation with people who have a different background and perspective. In real-life settings, exposure to other views is the one element common to interventions that bring people closer together and decrease polarization. A similar approach may help online, for example by showing (links to) one or more alternative views with every piece of content. Several other implementations of such diversity exposure have been proposed, and solutions like these could be enforced through regulation.
It cannot be too difficult for online platforms to show other perspectives with every result that is served to users. A Google result may be shown with one or more alternative views on the matter. A Facebook post criticizing face masks may be accompanied by a (link to a) post that defends the opposite, and vice versa. The last three words of the previous sentence are essential: if this approach were applied only to content that is considered disinformation, we would again stumble upon the aforementioned pitfalls of determining what is right and what is not. It is up to the user (as it would be, in the end, in any system) to decide what they consider reasonable and what not. Letting the user make up their own mind removes the fundamental problem of a Ministry of truth type of solution.
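As a rough sketch of what such a rule could look like, consider the code below. The stance labels and pairing logic are hypothetical placeholders; real posts obviously do not come with neat ‘pro’ and ‘con’ tags. The essential property is that the pairing is applied to every post, in both directions, so no one has to decide beforehand which posts count as disinformation.

```python
# Minimal illustration of 'show an alternative view with every result'.
# Topics and stances are assumed to be available somehow; how they are
# derived is outside the scope of this sketch.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    topic: str
    stance: str  # e.g. "pro" or "con" on the topic

def find_alternative(post: Post, corpus: list[Post]) -> Optional[Post]:
    """Pick any post on the same topic with a different stance."""
    for other in corpus:
        if other.topic == post.topic and other.stance != post.stance:
            return other
    return None

def render(post: Post, corpus: list[Post]) -> str:
    alt = find_alternative(post, corpus)
    suffix = f"  [another view: {alt.text}]" if alt else ""
    return post.text + suffix

corpus = [
    Post("Face masks barely help", "masks", "con"),
    Post("Face masks reduce transmission", "masks", "pro"),
]
for post in corpus:
    print(render(post, corpus))  # every post links to an opposing one, both ways
```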
This or other solutions in which diverse views are being provided could be called a truly democratic way to make information accessible to anyone. It will, however, be in conflict with the engagement goals that many online platforms have. Engagement is stimulated more when people are exposed to views closer to their own, so intentionally showing alternative views could lower users’ engagement. This probably means that some form of regulation is needed to ensure that this line of solutions is actually being implemented. However, this is worthwhile as it replaces any Ministry of truth type of implementation with a system that just provides diversity.
Epistemologically empowering people
The fifth and, in my view, by far the most important step that we can and should take is to empower people to judge things for themselves. This has nothing to do with teaching facts or accepted theories, as traditional school systems usually do, but everything to do with making people familiar with the basics of epistemology. Epistemology is the branch of philosophy that in essence studies the question: what can we know? This question is much more difficult to answer than many people seem to realize, as illustrated by the popularity of fact checking.
If we realize that the things we think we know are not based on pure and objective observation alone but are always colored by constructs of human thinking, and if we are aware of where our own thinking colors our own views, we may become a bit more humble. We may be able to question our own beliefs first and start to understand that other people’s views may be rooted in a different perspective rather than in a lack of knowledge.
Moreover, epistemology gives us the tools to judge the information that we encounter. Even if something sounds plausible to us, how can we know whether it is true? Why does it appeal to us, and how could that appeal cloud our judgement? If an understanding of epistemology is combined with awareness that our mind can trick us and that online media exploit these human pitfalls, we may be armed against the many platforms that lead us into our own bubbles.
In conclusion
I hope that the above discussion shows that the dangers associated with restricting exposure to information are just as real and threatening as disinformation. This approach reinforces prevailing paradigms, which may be wrong. As the truth does not exist, the best defense against disinformation is to educate people in making their own judgement. It is imperative to recognize that defining what is true and what isn't is a human activity and that in a democracy this must be exercised by all people rather than a few.
I also hope that the EC is willing to reconsider its approach and that the European Parliament critically reviews any future plans and draft regulations on this topic. Moreover, I hope that you draw your own conclusions from this text. If you are a European citizen and you feel that the EC plans are moving in the wrong direction, please stand up and make your objections known to European politicians. If we as a society fail to address the dangers posed by both the algorithms of online platforms and the EC plans, we may soon be unable to sharpen our own ideas against a variety of information.
#4: Wow Dick! You really go into depth, thanks.
(We could discuss the truths of the ‘Frankfurter Schule’... but let’s leave that ;)... on Habermas we disagreed earlier....) (‘Legitimationsprobleme im Spätkapitalismus’ was part of my research education at university at the time.... together with Popper and others - philosophy of science then being an essential part of a critically minded five-year university education.... and that has changed)
Arnold, can't understand a word of what you're saying! We'll have to remove comments in national languages.
I see Dennis’ contribution as a very valuable and realistic warning to us, and above all as a call for wise people to exchange thoughts about it.
Algorithms can take on very dangerous forms. We already pointed this out some six years ago by projecting examples, and we now see it happening in the food-producing sector, but commerce will push it through. Of course, they have arranged the conditions in such a way that they always come out of the battle as the winner. But time will tell, through bitter experience; mark my words.
This morning on Buitenhof I heard Tjeerd Willink repeat his warning of four years ago; what do we learn from that? And he aired his thoughts on how to deal with the matter, very recognizable to the discerning listener.
Jan Bransen, we would be much obliged if you would care to express your thoughts on Dennis' fears about the book burnings that have started again.
Frank, the advantage of mileage is mellowness. I pick my battles. This one will take well over a lifetime.
When I grew up intellectually, I was shocked that great modern minds I got to know personally couldn’t agree with the kind of criticism the French structuralists and post-structuralists had developed. Please note I am using the word ‘criticism’ in Kant’s sense. All they did was pursue a great tradition that eventually developed out of phenomenology, which earlier had been born of new pursuits of Kant’s criticism of the human mind.
I guess it is something very deep in our culture: we still don't understand and cannot grasp what the European/Western Enlightenment in science did to our idea of the good.
In my early twenties I read Alasdair MacIntyre’s After Virtue (1981) almost as soon as the book came out. He points out - just as Bruno Latour would do more recently, when it came to climate change - that we need to act the right way. But modernity gets in the way. That is why MacIntyre wrote a sentence in that 1981 book that has run through my mind ever since: ‘The Dark Ages that are already upon us’. To put it differently: science breeds paralysis and doubt at times when we cannot avoid acting.
MacIntyre’s Dark Ages are definitely here, not just amongst intellectuals (as in the ’80s) but in society as a whole. And we don’t know how to cope with uncertainty as pre-moderns could.
Next to Stephen Toulmin (Cosmopolis, Return to Reason) and John Ralston Saul (Voltaire’s Bastards), MacIntyre was one of the foremost intellectuals of our era in the English-speaking world. Yet the three of them are hardly - if at all - known by the professionals, politicians and administrators who do science and govern us.
Some consider me a postmodernist, because of my intellectual link with Lyotard and Foucault. I was never a great fan of Derrida, although his work stems from the same kind of Critique reinvented in the ’70s and ’80s of the 20th century. I am a late modernist, I used to say.
Now, all of the above has been discussed and repeated time and again over the past 40 years in professional and intellectual circles. Very eloquently by some, less so by others. Taking the misunderstandings into account, and having tried to understand why it is so difficult to couple scientific culture, God’s death and the need to live, decide and act as a socius, has made me an observer rather than an activist. It is a hot and dangerous battle, because we are ‘menschlich’ (as Nietzsche says; MacIntyre hates Nietzsche, as he considers his philosophy a typical excess of the problem of modernity).
To put it very briefly: this battle is endemic and pops up more visibly now because science and politics don’t blend well in governments’ management of the current pandemic. Just as historian and popular sociology writer Yuval Harari wrote in the Financial Times last month: Lessons from a year of Covid.