Will Thompson


Hey 👋🏽, my name is Will. I work as a Machine Learning Scientist at Tempus, where I focus on developing medically focused AI with language models. See our recent paper “Large Language Models with Retrieval-Augmented Generation for Zero-Shot Disease Phenotyping” (DGM4H, NeurIPS 2023) to get a sense of the type of problems my group is trying to solve.

In a previous life, I worked in quantitative research/engineering in trading of the high-frequency variety (i.e. “HFT”). Prior to that, I was a PhD student in Applied Math at Northwestern University.

Writing

📂 Large Language Models

These are my notes on Large Language Model/Transformer research. I started an earlier set of notes here. Inspired by Lilian Weng’s blog. I am writing these for the sake of writing.

→ Emergent Abilities of LLMs 🪴 (August 22, 2023)

📓 Papers

Note: I don’t have a background in academia, so writing papers hasn’t been a focus.

→ Will Thompson, David Vidmar, Jessica De Freitas, Gabriel Altay, Kabir Manghnani, Andrew Nelsen, Kellie Morland, John Pfeifer, Brandon Fornwalt, RuiJun Chen, Martin Stumpe, and Riccardo Miotto. 2023. “Large Language Models with Retrieval-Augmented Generation for Zero-Shot Disease Phenotyping.” DGM4H, NeurIPS 2023.

→ David Vidmar, Will Thompson, Ruijun Chen, Dustin Hartzel, Daniel Rocha, Joseph Leader, Brandon Fornwalt, and Christopher M Haggerty. 2022. “Abstract 11934: Natural Language Processing Models Can Be Trained to Accurately Recognize the Presence of Disease Within Clinical Notes.” Circulation 146 (Suppl_1): A11934–A11934.

Miscellanea

🎨 Repos
  1. microstructure-plotter (2023) - a tool to visualize market microstructure data.
  2. tldr-transformers (2021) - an earlier set of notes comparing transformer research threads.
  3. deeplearning-nlp-models (2020) - a few deep learning models built from scratch in PyTorch (with tests), runnable on GPUs.
