A programmatic methodology in semantics streamlines the otherwise computationally demanding process of developing and testing novel semantic theories. By easing the investigation of increasingly complex semantic theories, this methodology aims to support the growth and maturity of semantics as a discipline. Instead of developing a model-theoretic semantics and working out its consequences by hand, or not at all, this project provides resources for writing semantic clauses with Z3’s Python API, supplying a host of functionality along with a flexible template that can be easily adapted so that users need not start from scratch.

Although computational resources are commonplace (e.g., one’s laptop), the ability to use them to develop and explore the implications of novel semantic theories remains limited. For instance, Prover9 and Mace4 are restricted to first-order and equational statements. For the purposes of semantics, however, it is desirable to: (1) introduce a range of primitive operators; (2) specify novel semantic clauses for each operator; (3) define the space of models for the resulting language; (4) set up unit tests that make it easy to evaluate which sentences follow from which upon making changes to the semantics; and (5) print readable countermodels if there are any. After developing and testing a semantics for a language in this way, one can develop the corresponding model theory and proof theory with a much better sense of the range of its theorems before establishing soundness and attempting to complete the logic.
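To give a concrete sense of steps (4) and (5), here is a minimal sketch that calls Z3’s Python API directly; the helper `check_consequence` is illustrative rather than part of the model-checker’s interface:

```python
# A minimal sketch of checking logical consequence with Z3: encode the
# premises and the negation of the conclusion, ask Z3 for a satisfying
# assignment, and print it as a countermodel if one exists.
from z3 import And, Bool, Implies, Not, Solver, sat

def check_consequence(premises, conclusion):
    """Search for a model of the premises that falsifies the conclusion."""
    solver = Solver()
    solver.add(And(*premises), Not(conclusion))
    if solver.check() == sat:
        print("Countermodel found:", solver.model())
    else:
        print("No countermodel: the inference is valid.")

A, B = Bool("A"), Bool("B")
check_consequence([Implies(A, B), A], B)  # valid (modus ponens)
check_consequence([Implies(A, B), B], A)  # invalid: prints a countermodel
```

Z3 reports the first inference valid, since no assignment satisfies the premises while falsifying the conclusion, and prints a countermodel (A false, B true) for the second.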

Whereas the current version of the model-checker provides a programmatic semantics for a range of hyperintensional operators, the next release (currently under development and expected by the end of the year) abstracts away from the semantic theory included in the model-checker so that users may develop semantic clauses for operators of their own. Instead of computing by hand whether a given sentence is a logical consequence of a set of sentences, these resources allow users to search for countermodels or establish logical consequence up to a finite level of complexity specified by the user. Although computational systems cannot search the space of all models (typically a proper class), proving that there are no finite countermodels up to a user-specified level of complexity provides evidence that the logical consequence in question holds in full generality. Although computation time increases exponentially with the complexity of the admitted models, establishing logical consequence over even a small set of models provides significant evidence that it holds in general; the broader the range of models considered, the stronger this evidence. Logical consequences can then be established by hand or, in the next phase of this project, using the Lean proof assistant. Additionally, if finite countermodels exist, users will be able to generate and print them rather than attempting to construct countermodels by hand, which is not only limited but liable to include errors.
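As a rough illustration of such bounded search, the sketch below models states as bitvectors of width n (so 2**n states at a time) with parthood defined from bitwise OR; this encoding and the example inference (transitivity of parthood) are assumptions chosen for illustration, not the model-checker’s own implementation:

```python
# A hedged sketch of bounded validity checking: look for countermodels to
# the transitivity of parthood at each bitvector width up to a bound.
from z3 import And, BitVec, Implies, Not, Solver, sat

def countermodel_at_width(n):
    """Return a countermodel to parthood-transitivity at width n, if any."""
    s, t, u = BitVec("s", n), BitVec("t", n), BitVec("u", n)
    part_st = (s | t) == t  # s is part of t iff their fusion is t
    part_tu = (t | u) == u
    part_su = (s | u) == u
    solver = Solver()
    solver.add(Not(Implies(And(part_st, part_tu), part_su)))
    return solver.model() if solver.check() == sat else None

max_width = 4  # user-specified bound on model complexity
if all(countermodel_at_width(n) is None for n in range(1, max_width + 1)):
    print(f"No countermodels up to width {max_width}: evidence of validity.")
```

Since no countermodel turns up at any width within the bound, the check counts as defeasible evidence that the inference holds in full generality.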

Model-Checker

Whereas intensional semantic theories model propositions by subsets of a set of worlds, hyperintensional state semantic theories model propositions by ordered pairs of sets of states: a set of exact verifier states and a set of exact falsifier states (I motivate and defend such an account in “Identity and Aboutness” and “Counterfactual Worlds”). Although the state semantics provides the much richer theory of content needed to give semantic clauses for counterfactual conditional, constitutive explanatory, and relevance operators, the resulting space of models for the language is much more complex than the space of intensional models. In order to facilitate the exploration of the space of hyperintensional models, the model-checker draws on the SMT solver Z3 to provide tooling for finding hyperintensional countermodels and for establishing validity over models up to a user-specified level of complexity for inferences in a propositional language with these operators.
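To give a flavor of how such hyperintensional clauses can be encoded, here is a minimal sketch under illustrative assumptions (none of these names belong to the model-checker’s API): states are bitvectors, fusion is bitwise OR, and exact verification of each atom is an uninterpreted predicate. The clause below says that a state exactly verifies a conjunction just in case it is the fusion of a verifier of each conjunct:

```python
from z3 import (And, BitVec, BitVecSort, BitVecVal, BoolSort, Exists,
                Function, Solver)

N = 3                                          # width: a space of 2**3 states
State = BitVecSort(N)
verify_A = Function("verify_A", State, BoolSort())  # exact verifiers of A
verify_B = Function("verify_B", State, BoolSort())  # exact verifiers of B

def verifies_conjunction(s):
    """s exactly verifies A-and-B iff s fuses a verifier of A with one of B."""
    t, u = BitVec("t", N), BitVec("u", N)
    return Exists([t, u], And(verify_A(t), verify_B(u), s == t | u))

solver = Solver()
solver.add(verify_A(BitVecVal(0b001, N)), verify_B(BitVecVal(0b010, N)))
solver.add(verifies_conjunction(BitVecVal(0b011, N)))
print(solver.check())  # sat: the state 011 fuses the verifiers 001 and 010
```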

Accessible installation instructions are provided in the GitHub repository. If you have any trouble installing or using the model-checker, or if you find any surprising or unnatural results, feel free to open an issue there.