Ulam matrix

Term in mathematical set theory

In mathematical set theory, an Ulam matrix is an array of subsets of a cardinal number in which the sets in each row are pairwise disjoint and the union of each column is large in the sense of a given filter. Ulam matrices were introduced by Stanisław Ulam in his 1930 work on measurable cardinals; they may be used, for example, to show that a real-valued measurable cardinal is weakly inaccessible.

Definition

Suppose that $\kappa$ and $\lambda$ are cardinal numbers, and let $\mathcal{F}$ be a $\lambda$-complete filter on $\lambda$. An Ulam matrix is a collection of subsets $A_{\alpha\beta}$ of $\lambda$, indexed by $\alpha \in \kappa$ and $\beta \in \lambda$, such that

  • If $\beta \neq \gamma$ are elements of $\lambda$, then $A_{\alpha\beta}$ and $A_{\alpha\gamma}$ are disjoint.
  • For each $\beta \in \lambda$, the union over $\alpha \in \kappa$ of the sets $A_{\alpha\beta}$, that is, $\bigcup \{A_{\alpha\beta} : \alpha \in \kappa\}$, belongs to the filter $\mathcal{F}$.
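
A concrete example may make the definition clearer. The following is a sketch of the classical construction for $\kappa = \aleph_0$ and $\lambda = \aleph_1$, along the lines of the construction in Jech (2003); the injections $f_\beta$ below are arbitrary choices rather than canonical objects. For each ordinal $\beta < \omega_1$, fix an injection $f_\beta \colon \beta \to \omega$, which exists because $\beta$ is countable, and define

$$A_{n\xi} = \{\beta < \omega_1 : \xi < \beta \text{ and } f_\beta(\xi) = n\}, \qquad n < \omega,\ \xi < \omega_1.$$

For a fixed row index $n$, the sets $A_{n\xi}$ and $A_{n\xi'}$ with $\xi \neq \xi'$ are disjoint, because the injection $f_\beta$ cannot send both $\xi$ and $\xi'$ to $n$. For a fixed column index $\xi$, the union $\bigcup_{n<\omega} A_{n\xi} = \{\beta : \xi < \beta < \omega_1\}$ has countable complement, so it belongs to the $\omega_1$-complete filter of co-countable subsets of $\omega_1$. Such a matrix underlies Ulam's original argument: if $\mu$ were a countably additive measure defined on all subsets of $\omega_1$ with $\mu(\omega_1) = 1$ and $\mu(\{\beta\}) = 0$ for every $\beta$, then every column union would have measure 1, so every column would contain an entry of positive measure; since there are uncountably many columns but only countably many rows, some row would contain uncountably many pairwise disjoint sets of positive measure, which is impossible for a finite measure.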

References

  1. Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (Third Millennium ed.), Berlin, New York: Springer-Verlag, p. 131, ISBN 978-3-540-44085-7, Zbl 1007.03002

