A programmatic methodology in semantics streamlines the otherwise computationally grueling process of developing and testing novel semantic theories. By easing the investigation of increasingly complex semantic theories, this methodology aims to support the growth and maturity of semantics as a discipline. Instead of only developing a model-theoretic version of a semantics and working out its consequences by hand (or not at all), this project provides tooling for writing semantic clauses using Z3's Python API. You can find more information in the GitHub repository.
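For illustration, here is a minimal sketch of what a semantic clause might look like when written against Z3's Python API. The sort, the predicates, and the `conj` helper are hypothetical simplifications chosen for brevity, not the package's actual interface.

```python
from z3 import DeclareSort, Function, BoolSort, Const, And, Not, Solver, sat

World = DeclareSort('World')             # an abstract sort of worlds (illustrative)
A = Function('A', World, BoolSort())     # atomic sentences as predicates on worlds
B = Function('B', World, BoolSort())
w = Const('w', World)

def conj(X, Y, v):
    # Semantic clause for conjunction: 'X and Y' is true at v iff both conjuncts are.
    return And(X(v), Y(v))

# Search for a countermodel to the inference from 'A and B' to 'B':
# the premise true and the conclusion false at some world w.
s = Solver()
s.add(conj(A, B, w), Not(B(w)))
print("countermodel found" if s.check() == sat else "no countermodel")
```

Since no valuation makes the premise true and the conclusion false, Z3 reports that no countermodel exists, confirming the inference over the encoded models.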

Programmatic Semantics

Although computational resources are commonplace, the ability to use those resources to develop and explore the implications of novel semantic theories remains limited. For instance, Prover9 and Mace4 are restricted to first-order and equational statements. However, for the purposes of developing new semantic theories and studying their corresponding logics, it is desirable to: (1) introduce a range of primitive operators; (2) specify semantic clauses for each operator; (3) define the space of models for the resulting language; (4) set up unit tests for easily re-evaluating which sentences are a logical consequence of which upon making changes to the semantics; and (5) print readable countermodels if there are any. After developing and testing a semantics for a language in this way, the corresponding model theory and proof theory can be developed with a much better sense of the range of theorems before establishing soundness and attempting to prove completeness.
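As a rough illustration of step (4), unit tests can record which inferences should or should not admit countermodels, so that the whole suite can be re-run after any change to the semantics. The `has_countermodel` helper below is an illustrative assumption, restricted to propositional logic for brevity, and not the package's API.

```python
from z3 import Solver, Bools, And, Not, Implies, sat

def has_countermodel(premises, conclusion):
    """True iff Z3 finds a valuation making the premises true and the conclusion false."""
    s = Solver()
    s.add(And(*premises), Not(conclusion))
    return s.check() == sat

A, B = Bools('A B')

def test_and_elimination():
    assert not has_countermodel([And(A, B)], A)      # valid: no countermodel

def test_affirming_the_consequent():
    assert has_countermodel([Implies(A, B), B], A)   # invalid: countermodel exists

test_and_elimination()
test_affirming_the_consequent()
print("all tests passed")
```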

Whereas the current version of the model-checker provides a programmatic semantics for a range of hyperintensional operators, the next release (expected by the end of the year) abstracts away from the semantic theory included in the current version so that users may declare their own primitive operators and provide semantic clauses for them. Instead of computing by hand whether a given sentence is a logical consequence of some set of sentences, these resources allow users to search for countermodels or establish logical consequence up to a finite level of complexity specified by the user. If finite countermodels exist, users will be able to generate and print those models rather than attempting to do so by hand, which is not only time-consuming but prone to error.
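The sketch below illustrates what such a bounded search might look like for a toy inference (from "A is true at some world" to "A is true at every world"), assuming worlds are simply indices 0 to n−1 and each atom receives one Boolean per world; the function name and encoding are hypothetical rather than the package's implementation.

```python
from z3 import Solver, Bool, And, Or, Not, sat

def countermodel_up_to(max_worlds):
    """Search for a countermodel over model sizes 1..max_worlds; return the first found."""
    for n in range(1, max_worlds + 1):
        A = [Bool(f'A_{i}') for i in range(n)]   # truth value of atom A at each world
        s = Solver()
        # Premise: A holds at some world; negated conclusion: A fails at some world.
        s.add(Or(*A), Not(And(*A)))
        if s.check() == sat:
            return n, s.model()
    return None                                   # no countermodel up to the bound

result = countermodel_up_to(4)
print(result)   # for this toy inference, the smallest countermodel has 2 worlds
```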

Although computational systems cannot search the space of all models (typically a proper class), proving that there are no finite countermodels up to a user-specified level of complexity provides evidence that the logical consequence in question holds in full generality. The broader the range of models considered, the stronger this evidence. Given sufficient evidence, logical consequences can then be established by hand. In the next phase of this project, I intend to incorporate the Lean proof assistant into the methodology in order to streamline the process of axiomatizing novel primitives, providing new tooling for pursuing projects in conceptual engineering.

Model-Checker

Whereas intensional semantic theories model propositions as subsets of a set of worlds, hyperintensional state semantic theories model propositions as ordered pairs of sets of states: a set of exact verifier states and a set of exact falsifier states (I motivate and defend such an account in “Identity and Aboutness” and “Counterfactual Worlds”). Although the state semantics provides the much richer theory of content that is needed to give semantic clauses for counterfactual conditional, constitutive explanatory, and relevance operators, the resulting space of models for the language is much more complex than the space of intensional models. In order to facilitate the exploration of the space of hyperintensional models, the model-checker draws on the SMT solver Z3 to provide tooling for finding hyperintensional countermodels and establishing validity over models up to a user-specified level of complexity for inferences in a propositional language that includes these operators.
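To make this setup concrete, here is a hedged sketch of one natural encoding, assuming states are represented as bitvectors so that fusion is bitwise OR and parthood is bitmask inclusion; the names `fusion`, `is_part_of`, `verify_A`, and `falsify_A` are illustrative rather than the package's actual implementation.

```python
from z3 import BitVec, BitVecSort, BoolSort, Function, Solver, Not, unsat

N = 4                                    # number of atomic states (assumed bound)
x, y = BitVec('x', N), BitVec('y', N)

def fusion(s, t):
    return s | t                         # the fusion of two states combines their parts

def is_part_of(s, t):
    return (s | t) == t                  # s is part of t iff fusing s into t adds nothing

# A proposition is modeled by a pair of predicates on states:
# its exact verifiers and its exact falsifiers.
verify_A = Function('verify_A', BitVecSort(N), BoolSort())
falsify_A = Function('falsify_A', BitVecSort(N), BoolSort())

# Check a basic law of the mereology: every state is part of its fusion
# with any other state. Z3 reports unsat when no counterexample exists.
s = Solver()
s.add(Not(is_part_of(x, fusion(x, y))))
print("law holds" if s.check() == unsat else "counterexample found")
```

Representing states as bitvectors keeps the mereology decidable for Z3, since fusion and parthood reduce to bitwise operations over a fixed number of atomic states.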

Accessible installation instructions are provided in the GitHub repository. If you have any trouble installing or using the model-checker, or find any surprising or unnatural results, feel free to open an issue there.