Harsha Nori, PhD
Speaker: Harsha Nori, PhD, Director of Research Engineering for Aether, Microsoft's internal group on AI, Engineering, and Ethics.
CHIP Monthly AI Journal Club
Dr. Nori will discuss his work at Microsoft and two papers:
Capabilities of GPT-4 on Medical Challenge Problems (arXiv:2303.13375)
Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine (arXiv:2311.16452)
Harsha Nori, PhD, is Director of Research Engineering for Aether, Microsoft's internal group on AI, Engineering and Ethics. His team focuses on putting Responsible AI research into the hands of practitioners through open-source tools, libraries, and integrations into ML platforms.
He co-founded the InterpretML framework, which is widely used by data scientists and ML engineers to build interpretable models and explain opaque model predictions, and he has contributed to a number of other open-source machine learning libraries across the Python ecosystem. Lately he has focused on Guidance, a library that helps developers build better prompts and control the outputs of large language models (LLMs).
His current research interests are interpretability, privacy-preserving machine learning (via differential privacy), fairness, and machine learning for healthcare. He has published on these topics at conferences including ICML, NeurIPS, KDD, CHI, AAAI, and USENIX ATC (see his Google Scholar page for details).
Prior to joining Aether, Dr. Nori worked as an applied scientist on problems such as malware detection, large-scale experimentation, and time-series forecasting. He is a graduate of the Georgia Institute of Technology.