Got a survey from a Berkeley student group who are “researching how AI-powered tools can support educators like yourself by improving teaching efficiency and the learning experience,” and started filling it out just out of curiosity…
…but bailed on the survey when I reached this (mandatory) question. The correct answer for me, not present, is “I rarely use it •because• I am very familiar with it.”
I feel like there’s a classic social science and/or philosophy term for the way a question can circumscribe its possible answers, and thus skew our thinking. What is that? It escapes me. Whatever it’s called, this is a classic example.
You can see the students’ assumption / agenda plain as day: “The only barrier to AI utility is unfamiliarity! Therefore we can help these poor benighted educators by making them familiar with it!”
“People tried it and it sucks” is not a thinkable thought.
@inthehands See “framing” à la George Lakoff. The Wikipedia article is super long, so here is a pithier one:
https://commonslibrary.org/frame-the-debate-insights-from-dont-think-of-an-elephant/
@aarbrk
Yes, Lakoff for sure. Lakoff and…something out of Wittgenstein, probably?