Hi, I'm Nick.
I’m a researcher in AI code generation at Microsoft Research, Cambridge. I’m interested in code as an output modality for generative AI because of its executability, a property that distinguishes it from free-form text generation. Executable code excels at tasks involving calculation, planning, tool use, and verifiable reasoning, with which purely text-based LLM approaches struggle. Further, generated code can exercise deep control over the computing environment, which I believe will enable a future of more powerful computing experiences for end users.
My past research centered on the formal semantics of natural language and computational modeling of meaning, using LLMs, Knowledge Graphs, Entailment Graphs, and bespoke neural architectures. I have done award-winning research showing that although LLMs lack a human’s understanding of language, they learn a latent space of concepts organized in a way that can approximate linguistic reasoning.
I hold a Ph.D. from the University of Edinburgh, where I was a member of the Institute for Language, Cognition, and Computation and was advised by Mark Steedman. I also hold a B.Sc. in Computer Science from Brown University.
Recent News:
| Date | News |
| --- | --- |
| Feb '24 | I started a postdoc position with Microsoft Research, Cambridge. |
| Nov '23 | Our paper, Smoothing Entailment Graphs with Language Models, won the "Best Paper" award at AACL 2023. |
| Oct '23 | Our paper, Sources of Hallucination by Large Language Models on Inference Tasks, was accepted to EMNLP Findings 2023. |
| Oct '23 | I defended my Ph.D. thesis, Inference of Natural Language Predicates in the Open Domain. |