Repo · arXiv Paper · CLunch Slides (.key) · CLunch Slides (.pdf)

Abstract

GPT-Vision has impressed us on a range of vision-language tasks, but it comes with the familiar new challenge: we have little idea of its capabilities and limitations. In our study, we formalize a process that many have already been attempting instinctively: developing a “grounded intuition” of this new model. Inspired by the recent movement away from benchmarking in favor of example-driven qualitative evaluation, we draw upon grounded theory and thematic analysis in social science and human-computer interaction to establish a rigorous framework for qualitative evaluation in natural language processing. We use our technique to examine alt text generation for scientific figures, finding that GPT-Vision is particularly sensitive to prompting, counterfactual text in images, and relative spatial relationships. Our method and analysis aim to help researchers ramp up their own grounded intuitions of new models while exposing how GPT-Vision can be applied to make information more accessible.
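
Example: Prompting GPT-Vision for Alt Text

For readers who want to try the alt text task themselves, below is a minimal sketch of how one might ask a vision-capable OpenAI model to describe a scientific figure. The model name, prompt wording, and file path are illustrative assumptions, not the exact setup used in the paper.

# A minimal sketch (not the paper's exact setup): ask a vision-capable
# OpenAI model to draft alt text for a scientific figure.
# Assumptions: the openai Python SDK (v1.x) is installed, OPENAI_API_KEY
# is set in the environment, and "figure.png" is a local image file.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the figure as a base64 data URL so it can be sent inline.
with open("figure.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Write alt text describing this scientific figure "
                            "for a blind or low-vision reader.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)

As the paper finds, GPT-Vision is sensitive to prompting, so small changes to the text instruction above can noticeably change the generated description.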

Media and Impact

  • A Peek into the Future of Visual Data Interpretation, Penn Today
  • ChatGPT-Maker OpenAI Hosts its First Big Tech Showcase as the AI Startup Faces Growing Competition, Associated Press
    Reposted by 162 US news outlets, including The Independent, ABC News, Washington Post, U.S. News & World Report, The Business Journal, and CBS News.
  • As OpenAI’s Multimodal API Launches Broadly, Research Shows It’s Still Flawed, TechCrunch

Suggested Citation

@misc{hwang_grounded_2023,
      title={Grounded Intuition of GPT-Vision's Abilities with Scientific Images}, 
      author={Alyssa Hwang and Andrew Head and Chris Callison-Burch},
      year={2023},
      eprint={2311.02069},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}