
Seminars of the Department of Epistemology

Department seminars take place on Wednesdays at 11:00 in room 39 and are also streamed on MS Teams.


There seems to be an important asymmetry between the past and the future. Perhaps you will order the pumpkin spice latte tomorrow at the coffee shop. Perhaps not. Regardless of what you will actually do, you can still choose which coffee you will drink tomorrow, if any. You have genuine options ahead of you, future possibilities whose realization is still up to you. This is not so with what has already happened. You did order the pumpkin spice latte last week, and now you regret it. There is nothing you can do now about the fact that last week you ordered the pumpkin spice latte. There are no past possibilities in the same sense as there are future possibilities. The past is ‘closed’ and there is nothing you can do to change it. In fact, if we consider the genuine future possibilities ahead of you, it seems they must all come with a certain baggage, namely, the past itself. For whatever you are able to do in the future, those seem to be circumstances that would only add to the given past. This intuition, namely, that the past is somehow ‘fixed’ and beyond our control, can be, and has been, articulated more carefully in numerous ways. One way of doing so is by the following principle of the ‘fixity of the past’:

Fixity of the past: For every action A, agent S, times t and t* (where t ≤ t*), and possible world w, S is able at t to A at t* in w only if there is a possible world w* with the same past as that of w up to t in which S A-s at t*.
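On one natural first-order regimentation (the notation below is mine, not part of the abstract), the principle can be written as:

```latex
\forall A\,\forall S\,\forall t\,\forall t^{*}\!\geq\! t\,\forall w\;
\bigl(\mathrm{Able}_{t}(S, A, t^{*}, w) \rightarrow
  \exists w^{*}\,(\, w^{*} \approx_{t} w \;\wedge\; \mathrm{Does}(S, A, t^{*}, w^{*}) \,)\bigr)
```

where $w^{*} \approx_{t} w$ abbreviates “$w^{*}$ has the same past as $w$ up to $t$”, and $\mathrm{Able}$ and $\mathrm{Does}$ are placeholder predicates for the ability and performance relations quantified over in the principle.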

This principle captures the relevant sense in which the past differs from the future. According to it, the past remains fixed in whatever circumstances witness one’s ability to do something. And although this principle is indeed intuitive, and has been defended by many philosophers, it has disastrous and well-known consequences when conjoined with the assumption that there is an omniscient being who infallibly believes every truth. For these assumptions seem to entail that, necessarily, no one is, or ever has been, able to do otherwise than what an omniscient being has foreknown, and thus believed, one would do. If, moreover, being able to do otherwise is required for having free will, it follows that no one has, or ever had, free will. Theological fatalism is true. In this talk, I will argue for a novel way out of theological fatalism. There are plausible assumptions about logic and language, inspired by the works of Kripke and Kaplan, which are widely accepted by philosophers and which, when conjoined with other widely accepted principles about knowledge, lead to the conclusion that there is a priori knowledge of contingent truths. One example is a priori knowledge of contingent trivialities such as ‘p iff actually p’, where p is a contingent truth and ‘actually’ is formalized with a modal operator. But there are other relevant examples not involving the ‘actually’ operator. I will argue that some such instances of a priori knowledge are inconsistent with the principle of the Fixity of the past. If the theist wishes to hold on to a traditional view of divine omniscience, in conjunction with the view that there is free will, she might do well to reject the Fixity of the past. There is an asymmetry between the future and the past, but the latter plays no role in constraining what free agents are able to do in a given situation — or so theists should believe.

One strand of Putnam’s (1967) argument is that there is no objective difference between the past, present, and future. This casts doubt on the legitimacy of the so-called real possibilities, which require a clear-cut distinction between the future and the remaining tenses. After discussing real possibilities, I will analyse Putnam’s argument by outlining a semantic theory for the language in which the argument is stated. Within this semantic theory I construct counter-examples to the assumptions of the argument. As it stands, the argument is thus unsound.

I will present the geometric reading of Dutch Book arguments given in [1]. One of its main claims is a partial correspondence between Dutch Book results and accuracy-dominance results. The correspondence is partial because, while every accuracy-dominance result against a belief function generates a Dutch Book against that function, the converse is not generally true. I will voice additional doubts regarding the weakened version of the converse that Williams does establish: he shows that for some Dutch Books it is their scaled versions that correspond to accuracy-dominance results, whereas I point out that in some cases the accuracy-dominance results in question are too weak to establish that the given belief function is irrational.

[1] Williams, J.R.G. (2012). ‘Generalized Probabilism: Dutch Books and Accuracy Domination’. Journal of Philosophical Logic 41(5): 811–840.
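As a toy illustration of the Dutch Book side of the correspondence (my own sketch, not Williams’s construction), consider a belief function with b(A) + b(¬A) > 1; the prices 0.7 and 0.6 below are purely illustrative assumptions:

```python
# Dutch Book against an incoherent belief function with
# b(A) = 0.7 and b(not-A) = 0.6, so b(A) + b(not-A) = 1.3 > 1.
# The agent buys, at its own credences, a $1 bet on A and a $1 bet on not-A.

def payoff(world_A, price_A, price_not_A):
    """Net payoff in a world where A has truth value world_A."""
    win = 1.0 if world_A else 0.0        # bet on A pays $1 iff A is true
    win += 0.0 if world_A else 1.0       # bet on not-A pays $1 iff A is false
    return win - (price_A + price_not_A)

# Exactly one bet wins in each world, so the payoff is 1 - 1.3 in both:
for world in (True, False):
    print(world, payoff(world, 0.7, 0.6))  # negative in both worlds: a sure loss
```

Since the agent’s total payoff is negative in every world, the bets constitute a sure loss; an accuracy-dominance argument would instead show, geometrically, that such a belief function is dominated in accuracy by a coherent one.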

Enduring objects, events, and processes are typically listed as the main inhabitants of our world. A priority question then asks which of these categories is prior to the others. A minority view (Bergson, Whitehead, and recently Simons and Dupré) points to processes as the fundamental category. This view faces the hard task of explaining how objects emerge from processes. Building on P. Simons’s analysis (1987, 2018), I will explicate the relation between processes and their phases, and discuss the concept of objects as precipitates of processes.

We consider classes of (possibly nonassociative) relation algebras defined by simple conditions, such as an abstract counterpart of forbidding monochromatic triangles. We call algebras satisfying such conditions chromatic. Representability of chromatic algebras amounts to the existence of edge colourings of complete graphs satisfying (i) translations of the abstract defining conditions of the algebras and (ii) a tiny amount of homogeneity.

Qualitative representability is a weaker form of representability. For chromatic algebras it amounts to the existence of edge colourings satisfying (i) above.

We will survey representability and qualitative representability of chromatic algebras. The representability results are well known; the qualitative representability results are new.
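As a concrete rendering of the monochromatic-triangle condition mentioned above (a hypothetical toy check, not part of the talk), one can test whether a given edge colouring of a complete graph contains a monochromatic triangle:

```python
from itertools import combinations

def has_monochromatic_triangle(n, colour):
    """colour(i, j) gives the colour of edge {i, j} of the complete graph K_n."""
    return any(
        colour(a, b) == colour(b, c) == colour(a, c)
        for a, b, c in combinations(range(n), 3)
    )

# K_5 coloured by parity of i + j: vertices 0, 2, 4 span a monochromatic
# triangle (all three edge sums are even), so the check returns True.
print(has_monochromatic_triangle(5, lambda i, j: (i + j) % 2))  # True
```

A colouring witnessing qualitative representability of the corresponding chromatic algebra would be one for which such checks of the translated defining conditions all come out negative.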

We show a surprising result concerning the relationship between the Principal Principle and its allegedly generalized form. Then, we formulate a few desiderata concerning chance-credence norms and argue that none of the norms widely discussed in the literature satisfies all of them. We suggest a minimally painful way out of the mess, involving the New Principle as proposed by Thau, Lewis and others in 1994.
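As a minimal sketch of the kind of norm at issue (the numbers are illustrative assumptions, not from the talk), the Principal Principle constrains an agent’s credence in a proposition A to equal its expected chance of A:

```python
# Hypothetical credences over two chance hypotheses: the chance of A is
# either 0.2 or 0.8, and the agent believes each hypothesis to degree 0.5.
hypotheses = {0.2: 0.5, 0.8: 0.5}  # chance-of-A value -> credence in hypothesis

# The Principal Principle then fixes the credence in A as the expected chance:
credence_A = sum(chance * credence for chance, credence in hypotheses.items())
print(credence_A)  # 0.5
```

The norms discussed in the talk (the generalized principle, the New Principle) differ over what may be conditioned on in such calculations, e.g. inadmissible information about A itself.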

Whether the (human) agent is independent of nature, and if so, in what way, is one of the great questions of philosophy. In recent decades it has resurfaced in the foundations of quantum mechanics: Einstein’s EPR argument and the proofs of Bell’s theorem assume that the experimenter’s choice of measurement settings is independent of the physical situation. This assumption has been investigated from various perspectives in recent years (see, e.g., the “BIG Bell Test Collaboration”, Nature 2018), but, unlike the other assumptions of Bell’s theorem, it has not yet received a rigorous analysis. I propose such an analysis in this talk.

The starting point is a distinction between two cases of indeterminism: one brought about by agents (S) and one resulting from the multiple possible outcomes of a measurement (E). I define the dependence between S and E using the notion of modal correlations (“modal funny business”). I apply this machinery to the analysis of theorems on the non-existence of hidden-variable models for the GHZ experiment. I will show what such a theorem means from the perspective of S–E independence: in a model without hidden variables for GHZ, S and E are independent; extending this model by introducing hidden variables (contextual or not) makes S and E dependent on each other.
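The hidden-variable side of the GHZ argument can be checked by brute force. The sketch below (my illustration using the standard GHZ sign conventions, not the speaker’s S–E formalism) verifies that no noncontextual assignment of ±1 outcomes satisfies all four GHZ correlation constraints at once:

```python
from itertools import product

# For the GHZ state (|000> + |111>)/sqrt(2), quantum mechanics predicts
# X1*X2*X3 = +1 and X1*Y2*Y3 = Y1*X2*Y3 = Y1*Y2*X3 = -1. A noncontextual
# hidden-variable model would assign fixed values xi, yi in {+1, -1};
# multiplying the last three constraints forces x1*x2*x3 = -1, a contradiction.
solutions = [
    (x1, x2, x3, y1, y2, y3)
    for x1, x2, x3, y1, y2, y3 in product((+1, -1), repeat=6)
    if x1 * x2 * x3 == +1
    and x1 * y2 * y3 == -1
    and y1 * x2 * y3 == -1
    and y1 * y2 * x3 == -1
]
print(len(solutions))  # 0: no assignment exists
```

The talk’s point is then about what such non-existence results presuppose: how the agent-sourced indeterminism S and the outcome-sourced indeterminism E are, or fail to be, independent in models with and without hidden variables.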