This paper presents two self-contained tutorials on stance detection in Twitter data using BERT fine-tuning and prompting large language models (LLMs). The first tutorial explains the BERT architecture and tokenization, guiding users through training, tuning, and evaluating standard and domain-specific BERT models with the Hugging Face Transformers library.
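To make the first tutorial's workflow concrete, the following is a minimal sketch of fine-tuning BERT for three-way stance classification (FAVOR / AGAINST / NONE) with the Hugging Face Trainer API; the checkpoint, data files, column names, and hyperparameters are illustrative assumptions rather than the tutorials' exact settings.

    # Minimal BERT fine-tuning sketch for 3-way stance detection.
    # File names, column names, and hyperparameters are assumptions.
    from datasets import load_dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=3)  # 0=against, 1=favor, 2=none

    # Hypothetical CSVs with "text" and "label" columns.
    data = load_dataset("csv", data_files={"train": "train.csv",
                                           "test": "test.csv"})

    def tokenize(batch):
        # Truncate/pad tweets to a fixed length so batches stack cleanly.
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=128)

    data = data.map(tokenize, batched=True)

    args = TrainingArguments(output_dir="bert-stance", num_train_epochs=3,
                             per_device_train_batch_size=16, learning_rate=2e-5)
    Trainer(model=model, args=args, train_dataset=data["train"],
            eval_dataset=data["test"]).train()

Swapping "bert-base-uncased" for a domain-specific checkpoint, such as a Twitter-pretrained BERT, is the only change the domain-specific variant requires.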
The second tutorial focuses on constructing prompts and few-shot examples to elicit stances from ChatGPT and the open-source FLAN-T5 without fine-tuning.
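As one concrete instance of such a prompt, here is a minimal few-shot sketch against the open-source FLAN-T5; the checkpoint name, prompt wording, and in-context examples are illustrative assumptions, not the tutorials' exact prompts.

    # Few-shot stance prompting with FLAN-T5; no fine-tuning involved.
    # Prompt template and demonstrations are illustrative assumptions.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

    prompt = (
        "Classify the tweet's stance toward the target as favor, against, or none.\n\n"
        "Target: climate change is a real concern\n"
        "Tweet: Carbon taxes can't come soon enough.\n"
        "Stance: favor\n\n"
        "Target: feminist movement\n"
        "Tweet: Another march blocking traffic downtown. Ugh.\n"
        "Stance: against\n\n"
        "Target: legalization of abortion\n"
        "Tweet: Equal pay for equal work.\n"
        "Stance:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=5)
    print(tokenizer.decode(output[0], skip_special_tokens=True))  # e.g. "none"

The same prompt string can be sent to ChatGPT through the OpenAI chat API; only the model call changes, not the prompting strategy.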
Various prompting strategies are implemented and evaluated using confusion matrices and macro F1 scores.
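The evaluation step can be sketched with scikit-learn (an assumed tool choice; toy labels shown for illustration): confusion_matrix tabulates per-class errors, and macro F1 averages the per-class F1 scores so that minority stances weigh equally.

    # Confusion matrix and macro F1 over gold vs. predicted stance labels.
    # y_true / y_pred values here are toy data, not real results.
    from sklearn.metrics import confusion_matrix, f1_score

    labels = ["against", "favor", "none"]
    y_true = ["favor", "against", "none", "favor", "against"]
    y_pred = ["favor", "against", "favor", "favor", "none"]

    print(confusion_matrix(y_true, y_pred, labels=labels))
    print(f1_score(y_true, y_pred, labels=labels, average="macro"))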
The tutorials provide code, visualizations, and insights revealing the strengths of few-shot ChatGPT and FLAN-T5, which outperform the fine-tuned BERT models. By covering both model fine-tuning and prompting-based techniques in an accessible, hands-on manner, these tutorials enable learners to gain applied experience with cutting-edge methods for stance detection.