I'm building neural networks that generate provably correct code, and the software infrastructure for training them.
I work on three fronts:
- New architectures: Understanding existing neural networks, and designing new ones, that consume and produce structured data.
- Mathematical foundations: Formalising what it even means to generalise when your inputs are not numbers but inductive datatypes.
- Infrastructure: Building the stack required to train neural networks in dependently-typed languages: tensor processing, automatic differentiation, and elaborator integration.
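To give a flavour of the first two fronts, here is a minimal sketch (a toy of my own, not code from this research programme) of a "network" that consumes an inductive datatype: a binary tree is folded into a vector by structural recursion, with one map per constructor. In a real model, `embed` and `combine` would be learned layers; here they are hypothetical stand-ins.

```python
# Illustrative toy: encode an inductive datatype (a binary tree)
# by structural recursion, one case per constructor.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    value: float

@dataclass
class Node:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

def encode(tree: Tree, embed, combine):
    """Fold the tree into a vector: embed handles Leaf, combine handles Node."""
    if isinstance(tree, Leaf):
        return embed(tree.value)
    return combine(encode(tree.left, embed, combine),
                   encode(tree.right, embed, combine))

# Hypothetical "parameters": in a trained model these would be learned.
embed = lambda x: [x, x * x]
combine = lambda a, b: [u + v for u, v in zip(a, b)]

print(encode(Node(Leaf(1.0), Leaf(2.0)), embed, combine))  # [3.0, 5.0]
```

The point of the sketch is that the encoder's shape mirrors the datatype's definition, which is exactly the kind of structure a categorical account of generalisation can exploit.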
In all of these, I use category theory, the mathematics of structure and composition, as a central glue. To read more about my research programme, check out this link. To find my academic work, click here.


