Written by Benjamin Mencer
Some modern scientific theories are very complex and involve the effects of multiple variables operating simultaneously. Scientists use computer simulations to make predictions from these models. Will computer simulations ever be able to predict Black Swan events?
Futurist R. Buckminster Fuller estimated in the mid-1900s that human knowledge doubled roughly every century. By contrast, some experts now claim that our total knowledge may be doubling every 12 hours (Sorokin, 2019). How certain should we be about Sorokin's claim? Knowledge made from computer simulations may be among the fastest-growing parts of this total. Moreover, while the rate at which knowledge grows is increasing, our ability to validate new knowledge may not have kept pace. For example, it took nearly a century to validate certain parts of Einstein's theory of relativity (Gohd, 2018), and computer simulations of climate change have still not been fully validated.
Predictions from computer simulations are attempts to predict the future. If our world were completely deterministic, we could predict the future from complete historical knowledge. However, if our world is indeed non-deterministic, as modern particle physics suggests, the future arises from a mixture of present actions, patterns from the past, and random future events. For example, a company's stock can go up or down, and sometimes its movement is impossible to predict. The rise of Google was nearly impossible to foresee because nothing in the past or present (when Google first started) suggested that its stock would rise so much. Therefore, predictions about the future can never be fully reliable.
In the area of knowledge of the natural sciences, scientific theories create an abstract understanding of our environment, supported by experiments conducted using the scientific method. One model of scientific inquiry proceeds through characterization, prediction, experimentation, analysis of the conclusion, evaluation, and validation. In my opinion, since validation decides whether a theory will be accepted by the scientific community, it tells us how certain we should be about the resulting knowledge.
For example, when Galileo, known as the father of empirical science, first began to experiment, he observed the night sky, where the complexity of the method was very low. As a result, a computer simulation today can accurately predict the movement of the planets based on his findings.
On the other hand, when the complexity of the method is high, as in climate modeling, clear and straightforward algorithms do not always provide clear answers, so AI has been recruited to help predict some of the effects of climate change (Cho, 2018). But what is AI? Neural networks, one branch of AI, are trained by a technique called gradient descent, which adjusts the variables inside the model to fit past data; training a neural network therefore teaches it to extrapolate from the past to the future. This allows scientists to predict complex processes without simulating every intricate step: they only need the initial environment and the resulting impact on the climate. Because such models are not built from arguments that explain why a prediction occurs, they may remove the confirmation biases of the people involved in their creation, relying on computation rather than intuition.
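The core idea of gradient descent can be sketched in a few lines. The following toy example (the function name, data, and constants are all hypothetical, invented for illustration) fits a single variable w so that the prediction w · x matches past data:

```python
# Toy example: fit one variable w so that the prediction w * x matches past data y.
def gradient_descent(xs, ys, learning_rate=0.01, steps=1000):
    w = 0.0  # arbitrary starting guess
    for _ in range(steps):
        # Gradient of the mean squared error sum((w*x - y)^2) / n with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad  # step "downhill" along the gradient
    return w

# Past data generated by the rule y = 3x; the fitted w should approach 3.
w = gradient_descent([1.0, 2.0, 3.0, 4.0], [3.0, 6.0, 9.0, 12.0])
print(round(w, 3))  # → 3.0
```

No one tells the program that the answer is 3; it is extracted automatically from the past data, which is the sense in which such models bypass human argument.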
As climate change has created a great deal of political turmoil, biases are rampant and can, consciously or unconsciously, invalidate a computer simulation whose equations are written by hand from a theory. When optimization methods such as gradient descent are used instead, the variables are calculated automatically from the data, so there should be higher certainty in knowledge made this way.
However, as these AI algorithms are very general, the user must set parameters to tune the algorithm to a particular challenge. One such parameter is the learning rate, which can have an immense effect on the prediction. Furthermore, while in theory an AI would have access to all past data, in practice not all historical data is available, and the user still chooses the learning rate. This raises the question: which data and which constant values do you pick to guide the computation? The data has an inherent problem: the information it represents needs to cause the prediction. In climate modeling, the data would need to include information about the environment that directly causes climate change. So many things could represent the factors that cause the climate to change that it is difficult to know which data would allow for the most accurate predictions. Furthermore, David Hume argues that causation cannot be proven and that only correlation can be observed, because there is no way to prove that one thing causes another (Lorkowski, n.d.). Therefore, it may be prudent for past datasets to contain data that merely correlates with climate change, in the hope of more accurate predictions.
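The sensitivity to the learning rate can be illustrated with a toy objective (hypothetical, chosen only for illustration): the same descent either settles at the minimum or overshoots and diverges, depending on this one user-chosen constant.

```python
# Minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3),
# using two different learning rates chosen by the user.
def descend(learning_rate, steps=50):
    w = 0.0
    for _ in range(steps):
        w -= learning_rate * 2 * (w - 3)
    return w

print(round(descend(0.1), 3))        # → 3.0: a small rate settles at the minimum
print(abs(descend(1.1) - 3) > 1000)  # → True: a too-large rate overshoots and diverges
```

The objective and the data are identical in both runs; only the user's choice of constant separates a sensible prediction from a useless one.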
Moreover, these predictions can induce confirmation bias in programmers. Gradient descent finds a minimum of a function describing the difference between the model's predictions and the past data, but it is hard to know whether that optimum is global or merely local. Programmers may therefore trust the outcomes of their programs even though a more accurate prediction may exist elsewhere. Also, if some constants change slightly, or the amount of past data increases or decreases, the program may end up in a different local optimum.
In the area of knowledge of the human sciences, being certain of knowledge made by computer simulations may have substantial implications. In the stock market, for example, the very act of making a prediction, whether by a human or a program, decreases the chance of it coming true, because people may act on the predicted knowledge and thereby change the future. Therefore, as computer simulations become accessible to more people, the certainty of the original prediction decreases. If an AI shows that the price of oil will increase by 20%, many people may buy oil, so the price instead increases by 50%; the simulation was inaccurate precisely because people knew its prediction. This argument applies only to the human sciences, where humans change their behavior when given a prediction. In the natural sciences, predictions themselves do not change the future: the weather will not change in response to a weather forecast.
Another issue with AI programs lies in their past data, because accurately quantifying qualitative data is problematic. In the stock market, there is no way to accurately quantify the mood of a stockbroker on any given day. The feelings and behavior of humans are personal rather than shared knowledge, and the classification of the same emotion varies from person to person because of their schemas and past experiences. Furthermore, it is difficult to know whether people are lying, consciously or unconsciously. This can be due to the observer effect, where the researcher asking questions induces demand characteristics: the participant changes their answers to suit the scientist. Therefore, the past data used for predicting things like the stock market or someone's mood will be deeply unreliable, and the resulting predictions uncertain.
On the other hand, computer simulations can be accurate because, unlike a scientific theory, computer code can analyze what I would like to call a "Black Duck" event, related to the historical "Black Swan" event. Both are rare, but an event is a Black Duck when no conventional theory predicts it and it happens due to an unknown, non-deterministic factor. While we may not be able to predict Black Duck events with our common knowledge of a topic, an AI may pick up on complex patterns between seemingly unrelated Black Duck events and give us an idea of when one could be expected. For example, an AI could predict that a company's stock will drop, even when current theories say it will rise, because it identified a complex pattern of past Black Duck events and predicted another would occur. In such cases, the knowledge made from computer simulations may be accurate.
To conclude, there is considerable uncertainty in the results of computer simulations, yet the certainty is still high enough for them to be considered fairly trustworthy, limited mainly by the quality of past data. Some sources claim that AI can accurately predict climate 97% of the time (AI and Climate Change - Plan A Academy, 2020), but such percentages are calculated purely from past data: the AI learns from a large portion of the data and is then tested on the remainder. The 97% should therefore be treated as a data point, not a conclusive verification. Also, a Black Duck event that the simulation did not catch can always appear due to a truly random event. Being certain about knowledge from computer simulations becomes an ethical issue when AI is used in medicine: the risk of misclassifying a tumor as benign rather than malignant is catastrophic, as someone may die of cancer. AI could help this process and may catch Black Duck events, but it is important to use method triangulation, combining different AI algorithms with theory, before making any decision. Another implication is that if we become certain of knowledge made by computer simulations, we may create a "Data Religion" (Harari, 2017) that removes the privacy of the individual and may lead to a police state.
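How such a percentage is produced can be sketched with toy data (the labels, thresholds, and split here are all hypothetical, not the Plan A methodology): the model learns from most of the past data and is scored on the held-out remainder, so the score only measures agreement with the past.

```python
import random
random.seed(0)  # fixed seed so the sketch is reproducible

# Toy "past data": readings above 0.1 are labelled 1 (a made-up rule).
xs = [random.uniform(-1, 1) for _ in range(200)]
data = [(x, 1 if x > 0.1 else 0) for x in xs]
random.shuffle(data)
train, test = data[:160], data[160:]  # learn from 80%, score on the held-out 20%

# "Learn" a threshold: the midpoint between the average reading of each class.
ones = [x for x, y in train if y == 1]
zeros = [x for x, y in train if y == 0]
threshold = (sum(ones) / len(ones) + sum(zeros) / len(zeros)) / 2

accuracy = sum((x > threshold) == (y == 1) for x, y in test) / len(test)
print(accuracy)  # agreement with held-out past data, not with the future
```

The printed accuracy is high, yet every number involved comes from the past; a Black Duck event would lie outside both the training and the test sets.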
Cho, R., 2018. Artificial Intelligence—A Game Changer For Climate Change And The Environment. [online] State of the Planet. Available at: <https://blogs.ei.columbia.edu/2018/06/05/artificial-intelligence-climate-environment/> [Accessed 31 October 2020].
Plan A Academy. 2020. AI and Climate Change - Plan A Academy. [online] Available at: <https://plana.earth/academy/ai-climate-change/#:~:text=Without%20any%20other%20data%2C%20the,the%20causes%20of%20climate%20change.> [Accessed 31 October 2020].
Harari, Y., 2017. Homo Deus. Albin Michel.
Lorkowski, C., n.d. Hume, David: Causation | Internet Encyclopedia of Philosophy. [online] Iep.utm.edu. Available at: <https://iep.utm.edu/hume-cau/#:~:text=The%20relation%20of%20cause%20and,relations%20between%20objects%20of%20comparison.&text=Causation%20is%20a%20relation%20between,world%20beyond%20our%20immediate%20impressions> [Accessed 8 November 2020].
Sorokin, S., 2019. Thriving In A World Of “Knowledge Half-Life”. [online] CIO. Available at: <https://www.cio.com/article/3387637/thriving-in-a-world-of-knowledge-half-life.html#:~:text=Buckminster%20Fuller%20estimated%20that%20up,doubling%20every%2012%2D13%20months.> [Accessed 8 November 2020].
Gohd, C., 2018. Einstein Was Right! Scientists Confirm General Relativity Works With Distant Galaxy. [online] Space.com. Available at: <https://www.space.com/40958-einstein-general-relativity-test-distant-galaxy.html> [Accessed 8 November 2020].