HEAL: Human-centered Evaluation and Auditing of Language Models

CHI 2024 Workshop
Sunday, May 12, 2024

Honolulu, Hawaii, USA (Hybrid)

→ Submission Site

Overview

This workshop aims to address the current "evaluation crisis" in LLM research and practice by bringing together HCI and AI researchers and practitioners to rethink LLM evaluation and auditing from a human-centered perspective. Recent advancements in Large Language Models (LLMs) have significantly impacted numerous real-world applications, and will impact many more. However, these models also pose significant risks to individuals and society. To mitigate these issues and guide future model development, responsible evaluation and auditing of LLMs are essential.

The CHI 2024 Workshop on Human-centered Evaluation and Auditing of Language Models (HEAL@CHI'24) will explore topics around understanding stakeholders' needs and goals in evaluating and auditing LLMs, establishing human-centered evaluation and auditing methods, developing tools and resources to support these methods, building community, and fostering collaboration.

Call for Participation

We welcome participants who work on topics related to supporting human-centered evaluation and auditing of language models. Interested participants will be asked to contribute a short paper to the workshop. Topics of interest include, but are not limited to:

  • Empirical understanding of stakeholders' needs and goals for LLM evaluation and auditing
  • Human-centered evaluation and auditing methods for LLMs
  • Tools, processes, and guidelines for LLM evaluation and auditing
  • Discussion of regulatory measures and public policies for LLM auditing
  • Ethics in LLM evaluation and auditing

Submission Format: 2–6 pages in the ACM double-column format, excluding references.

Submission Types: Position papers, full or in-progress empirical studies, literature reviews, system demos, method descriptions, or encores of published work. Submissions are non-archival.

Review Process: Double-blind. Papers will be selected based on the quality of the submission and the diversity of perspectives, to allow for a meaningful exchange of knowledge among a broad range of stakeholders.

Templates: [Word] [LaTeX] [Overleaf]

Notes:

  • We encourage submitting authors to also help with the review process.
  • Encore submissions do not need to be anonymized; they will go through a jury review process.
  • If you are using LaTeX, please use \documentclass[manuscript,review,anonymous]{acmart} for your submission (see the minimal preamble sketch after this list).
  • For the camera-ready version, please use \documentclass[sigconf]{acmart}.
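
For reference, here is a minimal sketch of an anonymized submission using the options above. The title, abstract text, and section are placeholders for illustration, not part of the official template:

% Minimal sketch of an anonymous HEAL submission, assuming the standard acmart class.
% Title, abstract, and section content are placeholders.
\documentclass[manuscript,review,anonymous]{acmart}

\begin{document}

\title{Your Paper Title}
\author{Anonymous Author(s)}

\begin{abstract}
  Abstract text here; the paper itself should be 2–6 pages, excluding references.
\end{abstract}

\maketitle

\section{Introduction}
% Paper body goes here.

\end{document}

With the anonymous option, acmart suppresses author information in the compiled PDF; for the camera-ready version, switch the options to sigconf as noted above and restore full author and affiliation details.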

→ Submission Site

Key Information

Submission deadline: Mar 1, 2024 (AoE) (extended from Feb 23, 2024)

Notification of acceptance: Mar 24, 2024 (extended from Mar 22, 2024)

Workshop date: Sunday, May 12, 2024

Workshop location: Honolulu, Hawaii, USA (Hybrid)

Contact: heal.workshop@gmail.com

Agenda

The primary goal of this one-day workshop is to bring together HCI and AI researchers from academia, industry, and non-profits to share their ongoing efforts around evaluating and auditing language models.

Organizers