SECTION: EXHIBITION TEXT
But even when it seems like models possess humanlike common-sense reasoning, they don't. Instead, they make statistically probable predictions about patterns in language. This means they sometimes make mistakes and can behave unpredictably. When models generate false or misleading output that does not accurately reflect the facts, patterns, or associations in their training data, they are said to be "hallucinating": they are "seeing" something that isn't actually there.*