"Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions. (arXiv:2205.00415v2 [cs.CL] UPDATED)" — A hypothesis that annotators pick up on patterns in the crowdsourcing instructions, which bias them to write many similar examples that are then over-represented in the collected data and a study of this bias in 14 recent Natural Language Understanding (NLU) benchmarks.