Well, first, a Merry Christmas and a Happy New Year!

Let *L* be a bounded distributive lattice. Given any elements *u* and *v* of *L* such that *u*≰*v*, there is a (join-)prime filter that contains *u* and does not contain *v*. That is an easy application of Zorn’s Lemma. We consider the set **Filt**(*L*) of filters in *L*, ordered by inclusion. This is a dcpo, where directed suprema are computed as unions. The subset of those filters *F* ∈ **Filt**(*L*) such that *u* ∈ *F* and *v* ∉ *F* is closed under those directed suprema, hence has a maximal element (Zorn). Let us call that maximal element *F*. That *F* is prime is a standard exercise, using the fact that *L* is distributive in a crucial way. (Exercise!)
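In a finite bounded distributive lattice, no Zorn is needed, and the separation property can be checked by brute force. Here is a toy Python sketch (all names are mine, chosen for illustration) on the lattice of divisors of 30 ordered by divisibility, with meet = gcd and join = lcm:

```python
from itertools import combinations
from math import gcd

# The divisors of 30, ordered by divisibility, form a bounded
# distributive lattice: meet = gcd, join = lcm, bottom = 1, top = 30.
L = [1, 2, 3, 5, 6, 10, 15, 30]
def lcm(a, b): return a * b // gcd(a, b)
def leq(a, b): return b % a == 0      # a ≤ b iff a divides b

def is_filter(F):
    # non-empty, upward closed, closed under binary meets
    return bool(F) \
        and all(b in F for a in F for b in L if leq(a, b)) \
        and all(gcd(a, b) in F for a in F for b in F)

def is_prime_filter(F):
    # a proper filter such that a ∨ b ∈ F implies a ∈ F or b ∈ F
    return is_filter(F) and set(F) != set(L) \
        and all(lcm(a, b) not in F or a in F or b in F
                for a in L for b in L)

def separating_prime_filter(u, v):
    # brute force: a prime filter containing u but not v (exists when u ≰ v)
    for r in range(1, len(L) + 1):
        for S in combinations(L, r):
            F = set(S)
            if u in F and v not in F and is_prime_filter(F):
                return F
    return None

print(sorted(separating_prime_filter(6, 10)))   # 6 does not divide 10 → [3, 6, 15, 30]
```

Up to the isomorphism with the powerset of {2, 3, 5}, the only prime filters here are the principal filters of the atoms 2, 3, 5; the search returns ↑3 = {3, 6, 15, 30}, which contains 6 but not 10.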

Can we ask for more properties of that prime filter *F*? For example, can we ask it to be Scott-open?

## Spatial frames

If *L* is a *complete* distributive lattice, then a prime, Scott-open filter is a completely prime filter, namely a *point* of *L*, or again, an element of the space **pt**(*L*) obtained through Stone duality from *L*. Proposition 8.1.17 of the book says that we can indeed require *F* to be Scott-open (and prime, containing *u* but not *v*) provided *L* is a spatial frame. In fact, when *L* is a frame, requiring that all pairs of elements *u*≰*v* of *L* be separated by a prime, Scott-open filter *F* (*u* ∈ *F* and *v* ∉ *F*) is equivalent to requiring that *L* be spatial.

But what happens when *L* is not spatial, or when *L* is not even complete?

## The Rasiowa-Sikorski Lemma

The *Rasiowa-Sikorski Lemma* [1] states that, if *L* is a Boolean algebra, then for every pair *u*≰*v* in *L*, one can find a prime filter *F* separating *u* and *v* and satisfying a local form of Scott-openness.

Hence, as promised, we no longer assume *L* to be complete. Requiring *L* to be a Boolean algebra is more demanding than requiring just a bounded distributive lattice, but we will see later that this extra assumption can be dispensed with.

Here is what this local form of Scott-openness is. A *sup situation* is a family *D* of elements of *L* that has a supremum sup *D* in *L*. (In a Boolean algebra, we cannot assume that all families, even just directed families, have a supremum.) *F* *respects the sup situation D* if and only if (sup *D* ∈ *F* implies that some element of *D* is in *F*). Hence *F* is Scott-open if and only if it respects every directed sup situation. *F* is completely prime if and only if it respects every sup situation.
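As a toy illustration (a sketch; the lattice and names are my own choices), take the lattice of divisors of 30 under divisibility, where sup = lcm. The following function tests whether a given set *F* respects a given sup situation *D*:

```python
from functools import reduce
from math import gcd

def lcm(a, b): return a * b // gcd(a, b)

def respects(F, D):
    # F respects the sup situation D iff  sup D ∈ F  implies  d ∈ F for some d ∈ D
    sup_D = reduce(lcm, D)
    return sup_D not in F or any(d in F for d in D)

# In the lattice of divisors of 30 (sup = lcm):
print(respects({30}, {2, 15}))            # False: sup{2,15} = 30 is in F, but neither 2 nor 15 is
print(respects({3, 6, 15, 30}, {2, 15}))  # True: 15 is in F
print(respects({5, 10, 15, 30}, {2, 3}))  # True, vacuously: sup{2,3} = 6 is not in F
```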

The Rasiowa-Sikorski Lemma states that, given any *countable* family of sup situations in a Boolean algebra *L*, we can require the prime filter *F* separating *u* and *v* to respect all the sup situations from that countable family. That is crucial in the applications to logic that Rasiowa and Sikorski were interested in, and I will say a brief word about it in the next section.

In the meantime, notice that this is yet another case where countability enters the stage in a surprising way.

The surprise should dwindle down once you realize that the Rasiowa-Sikorski Lemma is a consequence of the Baire property, as realized by R. Goldblatt [2]… and I will explain how this works.

## First-order logic

Please allow me not to be overly explicit here. This is not a blog on logic, and I wish to shy away from the technical details unrelated to topology. I assume you all know what a formula is. The formulae of first-order logic are built from so-called atomic formulae *P*(*t*_{1},…,*t*_{n}) (where *P* is a so-called predicate symbol of arity *n*, and *t*_{1},…,*t*_{n} are so-called terms) using conjunction ⋀, disjunction ⋁, negation ¬, truth ⊤, falsity ⊥, universal quantification ∀ and existential quantification ∃. Terms are built from variables and applications *f*(*t*_{1},…,*t*_{n}) (where *f* is a so-called function symbol of arity *n*, and *t*_{1},…,*t*_{n} are terms, recursively). A *sentence* is a formula whose variables are all quantified. For example, ∀*x*.∃*y*.*P*(*x*,*y*) is a sentence, and ∃*y*.*P*(*x*,*y*) is a formula that is not a sentence.

I will assume that there are at most countably many predicate symbols and function symbols. This entails that there are only countably many sentences.

Importantly, formulae have no meaning *per se*. They are just inert pieces of syntax. To give a meaning to a formula, we need to define a *semantics*.

The semantics of first-order logic is given by so-called first-order *structures*. A structure *S* is the data of: (1) a non-empty set *X*, over which terms are interpreted, (2) for each predicate symbol *P* (of arity *n*), an *n*-ary relation over *X*, (3) for each function symbol (of arity *n*), a function from *X*^{n} to *X*. We can then define a satisfaction relation ⊨, so that *S* ⊨ *F* (for every formula *F*), if and only if *F* is true in *S*. For example, *S* ⊨ ∀*x*.∃*y*.*P*(*x*,*y*) if and only if for every value *a* (of *x*) in *X*, there is a *b* in *X* such that (*a*,*b*) is related by the relation associated with the symbol *P* by *S*.

Formulae are proved in a so-called *proof system*, given by *deduction rules*. There are plenty of proof systems available. All reasonable proof systems are *sound*: if you can prove *F* in this proof system, then *S* ⊨ *F* for every first-order structure *S*, namely, *F* is *valid*. The real question is *completeness*: is every valid sentence provable?

This question was solved first by Kurt Gödel [6] in 1930. That obviously depends on your proof system, but here is a generic completeness argument. By generic, I mean that it will apply to any proof system, provided it satisfies some assumptions which I will enumerate along the way.

We start by assuming that our proof system specifies an *entailment* relation, which I will write as ≤, and that entailment is a preordering. This induces an equivalence relation ≡ on sentences (*F*≡*G* if and only if *F*≤*G* and *G*≤*F*; known as logical equivalence). Let [*F*] be the equivalence class of *F*. The quotient of the set of all sentences by ≡ is a poset *L*, with (the trace of) ≤ as ordering. *L* is called the *Lindenbaum algebra* of the proof system.

We will assume that the proof system has enough deduction rules so that *L* is a Boolean algebra, with bottom element [⊥], top element [⊤], complement implemented by ¬ (i.e., the complement of [*F*] is [¬*F*]), binary infimum implemented by ∧ (i.e., the inf of [*F*] and [*G*] is [*F* ∧ *G*]), and binary supremum implemented by ∨ (the sup of [*F*] and [*G*] is [*F* ∨ *G*]). Explicitly, for infima, we should have rules that allow us to show that *F* ∧ *G* entails *F*, that *F* ∧ *G* entails *G*, and that if *H* entails *F* and also entails *G*, then *H* entails *F* ∧ *G*. (The first two are the so-called elimination rules for conjunction, and the third one is the introduction rule for conjunction, in natural deduction.)

This is enough to deal with the propositional connectives ∧, ∨, ¬, ⊤, ⊥. In order to deal with the quantifiers, we will also assume that [∀*x*.*F*(*x*)] is the infimum of the family of elements [*F*(*t*)], where *t* ranges over the so-called closed terms (closed meaning without any variable, e.g., *f*(*a*,*b*), where *a* and *b* are constants, namely function symbols of arity 0), and that [∃*x*.*F*(*x*)] is the supremum of the family of elements [*F*(*t*)], where *t* ranges over the closed terms. (Technically, that requires one to add so-called *Henkin constants* to the language. This is a clever, but rather hackish trick. Please allow me not to say anything more about it.)

Because we have negation, we do not need to consider, say, the first constraint on universal quantifications, and we simply assume that [∀*x*.*F*(*x*)] = [¬∃*x*.¬*F*(*x*)], and that [∃*x*.*F*(*x*)] = sup {[*F*(*t*)] | *t* closed}, instead.

Now we are set. Imagine that *F* is an unprovable sentence. We need to find a first-order structure *S* such that *S* ⊨ *F* is wrong. The set *X* is simply the set of all closed terms (the so-called *Herbrand universe*). Instead of finding the relations associated with each predicate symbol *P*, we will directly define the family of all sentences *T* that are true in *S*. Here are our requirements on that family:

1. *T* should not contain [*F*];
2. *T* should be closed under entailment, namely: if [*G*] is in *T* and *G*≤*H* then [*H*] is in *T*;
3. *T* should mimic the semantics: it should contain [⊤] but not [⊥] (not containing [⊥] is automatic from 1 and 2); it contains [*G* ∧ *H*] if and only if it contains [*G*] and [*H*] (hence *T* should be a filter); it contains [*G* ∨ *H*] if and only if it contains [*G*] or [*H*] (hence *T* should be a prime filter); it contains [¬*G*] if and only if it does not contain [*G*] (hence *T* is an ultrafilter; a useless requirement since, in a Boolean algebra, ultrafilters and prime filters coincide); and it contains [∃*x*.*G*(*x*)] if and only if it contains [*G*(*t*)] for some closed term *t*.

Note that, since I have assumed that [∀*x*.*G*(*x*)] = [¬∃*x*.¬*G*(*x*)], I do not need to assume the symmetric statement that *T* contains [∀*x*.*G*(*x*)] if and only if *T* contains [*G*(*t*)] for every closed term *t*.

If you look at it closely, you will realize that the requirement “*T* contains [∃*x*.*G*(*x*)] if and only if it contains [*G*(*t*)] for some closed term *t*” is equivalent to saying “*T* respects the sup situations {[*G*(*t*)] | *t* closed}, for every formula *G*(*x*) with at most one unquantified variable *x*”.

There are only countably many formulae, so there are only countably many such sup situations to respect. Recapitulating, we need to find a prime filter *T* that contains [⊤], does not contain [*F*], and which respects countably many sup situations {[*G*(*t*)] | *t* closed}. This is exactly what the Rasiowa-Sikorski lemma states the existence of. *T* describes a first-order structure *S* such that *S* ⊨ *F* is wrong. Hence our proof system is complete.

Oh, before I go on, you will almost never find this argument in papers or books of logic. Completeness is usually proved by building the ultrafilter *T* by hand, directly, using the fact that *L* itself is countable. The process is rather ad hoc: essentially we initialize *T* to {¬*F*}, then we add new formulae to *T* as long as they do not contradict the current set of formulae in *T*. There are additional difficulties: notably, all formulae must be dealt with eventually, and the quantifications present specific difficulties as well. That argument looks more like a hack, but it is elementary, and this is perhaps why it is preferred nowadays.

But we are interested in topology, right? The point here is that, as R. Goldblatt noticed [2], the Rasiowa-Sikorski lemma is a consequence of Stone duality and the Baire property.

## The Görnemann-Rauszer-Sabalski Lemma

I will in fact explain a more general result than the Rasiowa-Sikorski result, and that is (half of) a result by Görnemann [3] and Rauszer and Sabalski [4]. My exposition is a variant on Goldblatt’s [2], as you may have guessed.

Here is the setting. We return to the case of a bounded distributive lattice *L*. A sup situation *D* in *L* is *distributive* if and only if, for every *u* in *L*, *u* ⋀ sup *D* = sup {*u* ⋀ *v* | *v* ∈ *D*}. (The theorem will not work without that extra assumption. But it will be automatic in Boolean algebras.) Then we have:
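To make the distributivity condition concrete, here is a quick machine check (a sketch; the lattice and function names are mine) that in the Boolean algebra of divisors of 30, every sup situation is distributive, as announced:

```python
from functools import reduce
from itertools import combinations
from math import gcd

# The divisors of 30 form a (finite, hence complete) Boolean algebra:
# meet = gcd, sup = lcm.
L = [1, 2, 3, 5, 6, 10, 15, 30]
def lcm(a, b): return a * b // gcd(a, b)
def sup(D): return reduce(lcm, D)

def is_distributive_sup_situation(D):
    # D is distributive iff u ∧ sup D = sup {u ∧ d | d ∈ D} for every u in L
    return all(gcd(u, sup(D)) == sup([gcd(u, d) for d in D]) for u in L)

# every non-empty family is a distributive sup situation here
assert all(is_distributive_sup_situation(set(S))
           for r in range(1, len(L) + 1)
           for S in combinations(L, r))
print("all sup situations are distributive")
```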

**Theorem [3,4].** Let *L* be a bounded distributive lattice, and *E* be a countable family of distributive sup situations in *L*. For every pair *u*≰*v* in *L*, there is a prime filter *F* that contains *u*, does not contain *v*, and respects all the sup situations in *E*.

I said “half of” the Görnemann-Rauszer-Sabalski theorem. The authors of [3] and [4] also consider so-called distributive *inf* situations. An inf situation is a family *A* with a greatest lower bound inf *A*. It is distributive if and only if, for every *u* in *L*, *u* ⋁ inf *A* = inf {*u* ⋁ *v* | *v* ∈ *A*}. *F* respects an inf situation *A* if and only if (*A* ⊆ *F* implies inf *A* ∈ *F*). The full result is that we can find a prime filter *F* that contains *u*, does not contain *v*, and respects both the sup situations and the inf situations from two countable sets, one of distributive sup situations, the other one of distributive inf situations. I will not bother to consider inf situations here. The argument is similar.

### The proof (1/5): Stone enters the stage

We now prove this. First, instead of looking for a (join-)prime filter, we will look for a (meet-)prime ideal. This is equivalent, in the sense that the complement of a meet-prime ideal is a join-prime filter. (Exercise!)
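The exercise can at least be machine-checked on a finite example (a brute-force sketch with invented names, again on the divisors of 30): complementing each meet-prime ideal yields a join-prime filter.

```python
from math import gcd

# Divisors of 30 under divisibility: meet = gcd, join = lcm.
L = [1, 2, 3, 5, 6, 10, 15, 30]
def lcm(a, b): return a * b // gcd(a, b)
def leq(a, b): return b % a == 0

def is_prime_ideal(I):
    # proper, downward closed, closed under binary joins, and meet-prime:
    # a ∧ b ∈ I implies a ∈ I or b ∈ I
    return bool(I) and set(I) != set(L) \
        and all(b in I for a in I for b in L if leq(b, a)) \
        and all(lcm(a, b) in I for a in I for b in I) \
        and all(gcd(a, b) not in I or a in I or b in I
                for a in L for b in L)

def is_prime_filter(F):
    # proper, upward closed, closed under binary meets, and join-prime
    return bool(F) and set(F) != set(L) \
        and all(b in F for a in F for b in L if leq(a, b)) \
        and all(gcd(a, b) in F for a in F for b in F) \
        and all(lcm(a, b) not in F or a in F or b in F
                for a in L for b in L)

# the meet-prime ideals here are ↓6, ↓10, ↓15; each complement is a join-prime filter
for p in (6, 10, 15):
    I = {a for a in L if leq(a, p)}
    assert is_prime_ideal(I) and is_prime_filter(set(L) - I)
print("ok")
```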

Now, what we are looking for, modulo that change of point of view, is a (meet-)prime ideal *x* that contains *v*, does not contain *u*, and respects all the sup situations *D* in *L*, in the sense that (*D* ⊆ *x* implies sup *D* ∈ *x*).

What is a (meet-)prime ideal? That is an element of the space **pt**(**I**(*L*)), the Stone dual of **I**(*L*), if you follow Corollary 8.1.21 of the book. (This is why I wrote it as *x* above: this is really a point of a topological space.)

If you have a look at Exercise 9.5.12 in the book, you will see that **I**(*L*) is an algebraic fully arithmetic complete lattice, and Exercise 9.5.19 says (among other things) that **pt**(**I**(*L*)) is a spectral space. Goldblatt looks directly at its patch space, which is a Priestley space (see Theorem 9.5.24 in the book). We will make good use of that wealth of structure. Notably, **pt**(**I**(*L*))^{patch} is compact Hausdorff.

### The proof (2/5): the sets O_{a}

The open subsets of the topology on **pt**(**I**(*L*)) are the sets O_{*I*} = {*x* ∈ **pt**(**I**(*L*)) | *I* ⊈ *x*}, where *I* ranges over **I**(*L*), still following Corollary 8.1.21. Let me write O_{*a*} for O_{↓*a*}, for every *a* in *L*. Then O_{*a*} is {*x* ∈ **pt**(**I**(*L*)) | *a* ∉ *x*}, and the sets O_{*a*} form a basis of the topology on **pt**(**I**(*L*)). Indeed, for every ideal *I*, O_{*I*} is the union of the sets O_{*a*} where *a* ranges over the elements of *I*.

Those sets O_{*a*} are not just open. They are *compact open*. The short argument is as follows. The finite elements of the lattice of open sets **O**(*Y*) of any topological space *Y* are its compact open subsets. When *Y* = **pt**(**I**(*L*)), **O**(*Y*) is isomorphic to **I**(*L*), because **O** ⊣ **pt** is an equivalence (Exercise 9.5.5). Explicitly, *I* ↦ O_{*I*} is an order isomorphism. But the finite elements of **I**(*L*) are the principal ideals ↓*a*, *a* in *L*, and we are done.

### The proof (3/5): translating the statement into topological terms

Now what does it mean for a prime ideal *x* to contain *v*? That means that *x* is not in O_{*v*}. And what does it mean to fail to contain *u*? That means that *x* is in O_{*u*}. Hence we are looking for an element *x* of **pt**(**I**(*L*)) in O_{*u*}–O_{*v*}, and which respects all the sup situations in the countable family *E*.

Note, for future reference, that since O_{*u*} is open and O_{*v*} is compact saturated, O_{*u*}–O_{*v*} is patch-open. Recall that a patch-open set is one that is open in the patch topology. We will freely use the names “patch-closed”, “patch-compact”, “patch-dense”, and so on as well.

Note also that O_{*u*}–O_{*v*} is non-empty. Otherwise, O_{*u*} would be included in O_{*v*}. Since *I* ↦ O_{*I*} is an order isomorphism, that would imply ↓*u* ⊆ ↓*v*, hence *u* ≤ *v*, which would contradict our assumption that *u*≰*v*.

Finally, what does it mean for a prime ideal *x* to respect a sup situation *D*? This is the condition (*D* ⊆ *x* implies sup *D* ∈ *x*), or equivalently (if for every *d* in *D*, *x* is not in O_{*d*}, then *x* is not in O_{sup *D*}). By taking contrapositives, this means (if *x* is in O_{sup *D*} then *x* is in O_{*d*} for some *d* in *D*), or alternatively, (*x* is in *U*_{*D*} **imp** *V*_{*D*}), where *U*_{*D*} is the open set O_{sup *D*} and *V*_{*D*} is the open set obtained as the union of the open sets O_{*d*}, *d* in *D*. Recall the notation *U* **imp** *V*, meaning the set of points *x* such that (*x* is in *U* implies *x* is in *V*), which we have introduced for so-called UCO sets in the post on countably presented locales.

We will reflect on the fact that *U*_{*D*} **imp** *V*_{*D*} is a UCO set later (and very briefly). For now, we remark that *U*_{*D*} **imp** *V*_{*D*} is the union of the open set *V*_{*D*} and of the complement of *U*_{*D*} = O_{sup *D*}. That last complement is the complement of a compact open set, hence it is open in the de Groot dual of **pt**(**I**(*L*)). It follows that both *V*_{*D*} and the complement of *U*_{*D*} are patch-open. Hence *U*_{*D*} **imp** *V*_{*D*} is also patch-open.

### The proof (4/5): patch-density

*U*_{*D*} **imp** *V*_{*D*} is a UCO set, it is patch-open, good. It is also *patch-dense*, as we now claim. This is where we use that *D* is a distributive sup situation.

In order to see this, we consider a non-empty patch-open subset *V* of **pt**(**I**(*L*)), and we must show that it intersects *U*_{*D*} **imp** *V*_{*D*}. We use Theorem 9.5.24 of the book: the patch topology of **pt**(**I**(*L*)) has a base of sets of the form *K*–*K’*, where *K* and *K’* range over the compact open subsets of **pt**(**I**(*L*)). We remember that the compact open sets are the sets O_{*a*}, *a* in *L*. Hence *V* is a union of sets of the form O_{*a*}–O_{*b*}, with *a*, *b* ∈ *L*. At least one of them must be non-empty. We pick one, O_{*a*}–O_{*b*}. It is non-empty, so there is a point *x* in O_{*a*}–O_{*b*}. We need to show that O_{*a*}–O_{*b*} intersects *U*_{*D*} **imp** *V*_{*D*}.

_{D}We imagine it does not. If that is the case, then (O* _{a}*–O

*) ∩ (*

_{b}*U*

_{D}**imp**

*V*) is empty. By distributing the union implicit in the

_{D}**imp**operator with intersection, this means: (1) O

*–O*

_{a}*⊆*

_{b}*U*, and (2) O

_{D}*–O*

_{a}*∩*

_{b}*V*is empty. Equivalently: (1) O

_{D}*⊆ O*

_{a}*∪*

_{b}*U*, and (2) O

_{D}*∩*

_{a}*V*⊆ O

_{D}*.*

_{b}We rewrite (2) as follows. *V _{D}* is the union of the open sets O

_{d,}

*d*in

*D*, so (2) states that O

*∩ O*

_{a}_{d}⊆ O

*for every*

_{b}*d*in

*D*. Since

*I*↦ O

*is an order isomorphism, O*

_{I}*∩ O*

_{a}_{d}= O

*∩ O*

_{↓a}_{↓d}= O

_{↓a}_{ ∩ }

*= O*

_{↓d}

_{↓}_{(}

_{a}_{ ⋀ }

_{d}_{)}is included in O

*if and only if*

_{b}*a*⋀

*d*≤

*b*. Hence (2) states that

*a*⋀

*d*≤

*b*for every

*d*in

*D*.

Oh, but *D* is a *distributive* sup situation! So *a* ⋀ sup *D* = sup {*a* ⋀ *d* | *d* ∈ *D*}, and since every *a* ⋀ *d* is below *b*, we obtain *a* ⋀ sup *D* ≤ *b*.

We haven’t used (1) yet. Since *x* is in O_{*a*}, by (1) it is in O_{*b*} or in *U*_{*D*}. We claim that *x* is in O_{*b*}. It suffices to show this if *x* is in *U*_{*D*} = O_{sup *D*}. If so, then *x* is in O_{*a*} ∩ O_{sup *D*} = O_{*a* ⋀ sup *D*}. Since *a* ⋀ sup *D* ≤ *b*, O_{*a* ⋀ sup *D*} is included in O_{*b*}, so *x* is in O_{*b*}. All right, so *x* is in O_{*b*}, whichever case is true. But that is impossible, since we have taken *x* from O_{*a*}–O_{*b*}.

Hence *U*_{*D*} **imp** *V*_{*D*} is patch-dense, as we claimed.

_{D}### The proof (5/5): invoking Baire

Let us repeat a few of the things we have already said. First, **pt**(**I**(*L*))^{patch} is compact Hausdorff. Second, O_{*u*}–O_{*v*} is patch-open and non-empty. Third, *U*_{*D*} **imp** *V*_{*D*} is patch-open, and patch-dense for every distributive sup situation *D*. We are also given a countable family *E* of distributive sup situations.

Every compact Hausdorff space is Baire. In fact, every sober locally compact space is Choquet-complete, hence Baire (Theorem 8.3.24 in the book). Therefore **pt**(**I**(*L*))^{patch} is Baire.

The intersection of the sets *U*_{*D*} **imp** *V*_{*D*}, where *D* ranges over *E*, is a countable intersection of dense open subsets of **pt**(**I**(*L*))^{patch}, hence it is dense, by the Baire property. Hence it intersects the non-empty open subset O_{*u*}–O_{*v*} of **pt**(**I**(*L*))^{patch}.

Let *x* be a point in the intersection. It is in O_{*u*}–O_{*v*}, so *u* is not in *x*, and *v* is in *x*, as we have already argued. For every sup situation *D* in *E*, *x* is in *U*_{*D*} **imp** *V*_{*D*}, and we have already said that this means that *x* respects the sup situation *D*. This finishes the proof. ☐

## The Rasiowa-Sikorski lemma as a corollary

In summary, we have shown: Let *L* be a bounded distributive lattice, and *E* be a countable family of distributive sup situations in *L*; for every pair *u*≰*v* in *L*, there is a prime filter *F* that contains *u*, does not contain *v*, and respects all the sup situations in *E*.

If *L* is a Boolean algebra, then *L* is, in particular, a bounded distributive lattice. In fact, all suprema that exist automatically distribute over ⋀. This is because every Boolean algebra is a Heyting algebra (see Exercise 9.5.22 in the book), with *u* ⇒ *v* defined as ¬*u* ⋁ *v*. Recall that residuation ⇒ is uniquely determined by the fact that for every *d* in *L*, *u* ⋀ *d* ≤ *v* if and only if *d* ≤ *u* ⇒ *v*. In other words, *u* ⋀ _ is left adjoint to *u* ⇒ _. And left adjoints preserve all colimits (all suprema, here). Explicitly, imagine that *D* is a family with a supremum sup *D*, and let *u* be any element of *L*. Then *u* ⋀ sup *D* is an upper bound of {*u* ⋀ *d* | *d* ∈ *D*}. Given any other upper bound *v* of {*u* ⋀ *d* | *d* ∈ *D*}, we have *u* ⋀ *d* ≤ *v* for every *d* in *D*, hence *d* ≤ *u* ⇒ *v* for every *d* in *D*, namely sup *D* ≤ *u* ⇒ *v*, and therefore u ⋀ sup *D* ≤ *v*.
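One can sanity-check that adjunction on a small Boolean algebra (a sketch; nothing here comes from the text beyond the formula *u* ⇒ *v* = ¬*u* ⋁ *v*), again on the divisors of 30, where the complement of *u* is 30/*u*:

```python
from math import gcd

# The divisors of 30 form the Boolean algebra of subsets of {2, 3, 5}:
# meet = gcd, join = lcm, and the complement of u is 30/u (30 is square-free).
L = [1, 2, 3, 5, 6, 10, 15, 30]
def lcm(a, b): return a * b // gcd(a, b)
def leq(a, b): return b % a == 0        # a ≤ b iff a divides b
def neg(u): return 30 // u              # ¬u
def imp(u, v): return lcm(neg(u), v)    # u ⇒ v = ¬u ∨ v

# the adjunction: u ∧ d ≤ v  if and only if  d ≤ u ⇒ v
assert all(leq(gcd(u, d), v) == leq(d, imp(u, v))
           for u in L for d in L for v in L)
print("adjunction verified")
```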

It follows that, if *L* is a Boolean algebra, then all sup situations are distributive. We obtain the Rasiowa-Sikorski lemma:

**Corollary [1].** Let *L* be a Boolean algebra, and *E* be a countable family of sup situations in *L*. For every pair *u*≰*v* in *L*, there is a prime filter *F* that contains *u*, does not contain *v*, and respects all the sup situations in *E*.

## And beyond…

Let us look back at the proof of (our “half of”) the Görnemann-Rauszer-Sabalski theorem. We have built a subset *Y* of **pt**(**I**(*L*)) as the intersection of the sets *U*_{*D*} **imp** *V*_{*D*}, where *D* ranges over a given countable set *E* of distributive sup situations. We have seen that *Y* is patch-dense, and that was the key to the proof.

*Y* is also a **Π**^{0}_{2} subset of **pt**(**I**(*L*)). Indeed, recall that sets of the form *U*_{*D*} **imp** *V*_{*D*} are UCO (union of a closed and an open subset), and that **Π**^{0}_{2} subsets are countable intersections of UCO subsets by definition.

**pt**(**I**(*L*)) is a spectral space, hence in particular sober and locally compact. If *L* is countable (as in the case of the Lindenbaum algebra of first-order logic), then the base of open sets O_{*a*} of **pt**(**I**(*L*)) is countable, hence **pt**(**I**(*L*)) is second countable. Every second countable, sober locally compact space is quasi-Polish, and every **Π**^{0}_{2} subset of a quasi-Polish space is quasi-Polish [5]. That is extra structure that allows one to prove some more completeness results, especially on probabilistic logics. A remarkable paper in that respect is [7], which uses a variant of this (and other things…) in a crucial way.

1. Helena Rasiowa and Roman Sikorski. The Mathematics of Metamathematics. PWN–Polish Scientific Publishers, Warsaw, 1963.
2. Robert Goldblatt. Topological proofs of some Rasiowa-Sikorski lemmas. Studia Logica 100, 2012, 1–18.
3. Sabine Görnemann. A logic stronger than intuitionism. Journal of Symbolic Logic 36(2), 1971, 249–261.
4. Cecylia Rauszer and Bogdan Sabalski. Notes on the Rasiowa-Sikorski lemma. Studia Logica 34(3), 1975, 265–268.
5. Matthew de Brecht. Quasi-Polish spaces. Annals of Pure and Applied Logic 164(3), March 2013, 356–381.
6. Kurt Gödel. Die Vollständigkeit der Axiome des logischen Funktionenkalküls. *Monatshefte für Mathematik* 37(1), 1930, 349–360.
7. Dexter Kozen, Kim G. Larsen, Radu Mardare, and Prakash Panangaden. Stone duality for Markov processes. In 28th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2013). IEEE Computer Society Press, 2013, 321–330.