
Humans versus algorithms: When minds meet machines in the domain of forecasting

Renato Frey

Next session: Thursday 18.04.2024 - 12.15h


Overview

Drawing inferences about the future – that is, forecasting – has been a human desire, need, and challenge for ages. Whereas people originally had to rely on their own or experts' opinions about the future, in modern days such "judgmental forecasts" are increasingly supplemented, if not substituted, by statistical (i.e., algorithmic) forecasts. But how can judgmental and/or statistical forecasts best be made, and under which conditions should humans and machines interact to profit from each other? In this course we will address these and related questions, and we will implement and compare various forecasting methods in R as part of a real-life prediction tournament.

Learning goals

  1. To understand the central concepts of judgmental and statistical/algorithmic forecasting
  2. To be able to compare and evaluate different forecasting methods concerning their accuracy and related metrics
  3. To be able to independently implement statistical (and hybrid) forecasting models using R
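As a small preview of the kind of evaluation involved in goal 2, the sketch below compares two hypothetical forecasts using two common accuracy metrics (mean absolute error and root mean squared error). All data and function names are invented for illustration; they are not part of the course material.

```r
# Illustrative example: comparing two forecasting methods on accuracy metrics.
# 'actual' holds observed values; 'fc_a' and 'fc_b' are forecasts from two methods.
actual <- c(10, 12, 11, 14, 13)
fc_a   <- c(11, 11, 12, 13, 14)   # e.g., a judgmental forecast
fc_b   <- c(10, 13, 11, 15, 12)   # e.g., a statistical forecast

# Mean absolute error: average size of the forecast errors
mae  <- function(actual, forecast) mean(abs(actual - forecast))
# Root mean squared error: penalizes large errors more strongly
rmse <- function(actual, forecast) sqrt(mean((actual - forecast)^2))

mae(actual, fc_a)   # -> 1 (every forecast of method A is off by exactly 1)
rmse(actual, fc_b)
```

Which metric is appropriate depends on how costly large errors are relative to small ones, a question taken up in the sessions on scoring rules.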

Forecasting tournament

There will be an online "forecasting tournament" throughout the semester. This tournament requires uploading various forecasting algorithms and regularly updating them with new data. We will implement the respective forecasting algorithms using R. If you do not feel confident using R and/or RStudio, please take a refresher prior to the course. For instance, you can find introductions at DataCamp or in YaRrr! The Pirate's Guide to R.
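To give a flavor of what a tournament entry might look like, here is a minimal sketch of two baseline forecasting functions in R. The function names and data are illustrative assumptions, not the actual tournament interface.

```r
# Hypothetical baselines: a naive forecast that predicts the last observed
# value, and a mean forecast that averages all observations to date.
naive_forecast <- function(history) tail(history, 1)
mean_forecast  <- function(history) mean(history)

history <- c(5, 7, 6, 8)   # observations received so far
naive_forecast(history)    # predicts 8
mean_forecast(history)     # predicts 6.5
```

Simple baselines like these are surprisingly hard to beat and provide a useful benchmark against which to judge more elaborate algorithms submitted over the semester.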

Proceed to the forecasting tournament.

Literature

The following books cover the basic principles of forecasting and are generally recommended. We will cover specific chapters from these books in the various sessions of the course; please see below.

  • Armstrong, J. S. (Ed.). (2001). Principles of forecasting: A handbook for researchers and practitioners. Norwell, MA: Kluwer Academic.
  • Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The art and science of prediction. New York: Crown.
  • Tetlock, P. E. (2006). Expert political judgment: How good is it? How can we know? Princeton University Press.

Sessions

22.02.2024 - S1: Introduction and overview

29.02.2024 - S2: Judgmental Forecasting I

Literature:

  • Arkes, H. R. (2001). Overconfidence in judgmental forecasting. In J. S. Armstrong (Ed.), Principles of forecasting: A handbook for researchers and practitioners (pp. 495–515). Norwell, MA: Kluwer Academic.

07.03.2024 - S3: Judgmental Forecasting II

Literature:

  • Primary: Armstrong, J. S., Green, K. C., & Graefe, A. (2015). Golden rule of forecasting: Be conservative. Journal of Business Research, 68(8), 1717–1731.

14.03.2024 - S4: Judgmental Forecasting III / Scoring Rules

Literature:

  • Primary: Hastie, R., & Dawes, R. M. (2010). A general framework for judgment. In Rational choice in an uncertain world: The psychology of judgment and decision making (pp. 47–72). SAGE.
  • Supplementary: Hogarth, R. M. (2006). On ignoring scientific evidence: The bumpy road to enlightenment. SSRN eLibrary. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1002512

21.03.2024 - S5: Statistical Forecasting I / Scoring rules

Literature:

  • Babyak, M. A. (2004). What you see may not be what you get: a brief, nontechnical introduction to overfitting in regression-type models. Psychosomatic Medicine, 66(3), 411–421.
  • Supplementary: Armstrong, J. S., & Collopy, F. (1992). Error measures for generalizing about forecasting methods: Empirical comparisons. International Journal of Forecasting, 8(1), 69–80. http://doi.org/10.1016/0169-2070(92)90008-W


28.03.2024 - S6: Statistical Forecasting II / Overfitting

Literature:

  • Babyak, M. A. (2004). What you see may not be what you get: a brief, nontechnical introduction to overfitting in regression-type models. Psychosomatic Medicine, 66(3), 411–421.

04.04.2024 - NO SESSION

11.04.2024 - S7: Statistical Forecasting III

Literature:

  • Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34(7), 571–582. http://doi.org/10.1037/0003-066X.34.7.571
  • Supplementary: Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674.

18.04.2024 - S8: Extrapolation / Time-series I

Literature:

  • Armstrong, J. S. (2001). Extrapolation. In Principles of forecasting: A handbook for researchers and practitioners (pp. 217–243). Norwell, MA: Kluwer Academic.

25.04.2024 - S9: Extrapolation / Time-series II

Literature:

  • Armstrong, J. S. (2001). Extrapolation. In Principles of forecasting: A handbook for researchers and practitioners (pp. 217–243). Norwell, MA: Kluwer Academic.


02.05.2024 - S10: Combining Forecasts / Ensemble Methods

Literature:

  • Armstrong, J. S. (2001). Combining forecasts. In Principles of forecasting: A handbook for researchers and practitioners. Norwell, MA: Kluwer Academic.

09.05.2024 - NO SESSION

16.05.2024 - S11: Online: Group forecasting / Delphi

Literature:

  • Hastie, R., & Kameda, T. (2005). The robust beauty of majority rules in group decisions. Psychological Review, 112(2), 494–508.

23.05.2024 - S12: Forecasting tournament: Presentations

Literature:

  • Armstrong, J. S. (2006). Findings from evidence-based forecasting: Methods for reducing forecast error. International Journal of Forecasting, 22(3), 583–598. http://doi.org/10.1016/j.ijforecast.2006.04.006

30.05.2024 - S13: Forecasting tournament: Presentations

Literature:

  • Armstrong, J. S. (2006). Findings from evidence-based forecasting: Methods for reducing forecast error. International Journal of Forecasting, 22(3), 583–598. http://doi.org/10.1016/j.ijforecast.2006.04.006