Glaut published a study by the University of Mannheim that looked into exactly that. Commendable: with all this new ResTech coming onto the market, we need some proper Research-on-Research to understand what works, where, when and why. And when not.
Based on this study (and in line with expectations...), AI-moderated interviews (AIMI) outperform standard open-ended survey answers due to:
• Richer responses: AIMI generated longer answers, more unique words, and higher lexical diversity (a toy sketch of these metrics follows below the list)
• Broader insights: Participants mentioned 36% more unique themes
• Cleaner data: The static survey showed a 10% gibberish rate; AIMI had none (see my endnote...)
• Better experience: Respondents found the AI format more conversational, less repetitive, and more trustworthy.
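Since "lexical diversity" can sound abstract, here is a toy sketch (mine, not the paper's; the authors may well use a length-corrected measure, since raw type-token ratio shrinks as answers get longer) of the kind of per-response metrics behind that first bullet:

```python
# Toy versions of the richness metrics above: answer length, unique
# words, and lexical diversity as a raw type-token ratio (TTR).
# Illustration only; the paper may use a more robust, length-corrected
# measure, since raw TTR drops as responses get longer.

def richness(response: str) -> dict:
    tokens = response.lower().split()
    types = set(tokens)
    return {
        "length": len(tokens),
        "unique_words": len(types),
        "lexical_diversity": len(types) / len(tokens) if tokens else 0.0,
    }

print(richness("I like the app because the app is fast"))
# -> {'length': 9, 'unique_words': 7, 'lexical_diversity': 0.777...}
```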
The paper is an interesting read. One thing that caught my attention in the analysis section, though, is this: "In the AI-moderated interview condition, gibberish entries were excluded, resulting in n = 100 valid responses. In the static survey condition, gibberish responses were retained so that the sample size remained at n = 100."
Unless I'm interpreting this incorrectly, it looks like the comparison was between a pre-cleaned dataset and an uncleaned one, which might explain some of the differences...
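If I'm reading that correctly, the like-for-like version would run one and the same gibberish filter over both conditions before computing any metrics. A minimal sketch of what I mean, with a deliberately crude placeholder filter since the paper's actual detection rule isn't published here:

```python
# Symmetric cleaning: apply one gibberish filter to BOTH conditions,
# then compute all comparison metrics on the survivors only.
# 'looks_like_gibberish' is a crude placeholder, not the study's rule.

def looks_like_gibberish(text: str) -> bool:
    stripped = text.strip()
    # Stand-in heuristic: near-empty answers or answers with no letters.
    return len(stripped) < 3 or not any(c.isalpha() for c in stripped)

def clean(responses: list[str]) -> list[str]:
    return [r for r in responses if not looks_like_gibberish(r)]

# Hypothetical raw data for both conditions.
aimi_raw = ["The checkout felt slow on my phone.", "Support was friendly."]
survey_raw = ["Good.", "...", "12345", "Too expensive for what you get."]

aimi_clean, survey_clean = clean(aimi_raw), clean(survey_raw)
print(len(aimi_clean), len(survey_clean))  # -> 2 2
# Only now compare word counts, themes, diversity, etc. across conditions.
```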
Link to paper:
https://research.glaut.com/hubfs/Paper/Glaut%20Research/Glaut%20vs.%20Survey%2c%20University%20of%20Manneheim..pdf
Lastly, not mentioned in the paper, but it would be interesting to know the impact of the AIMI on overall length of interview (LOI) as well as on dropout rates, since both shape the overall economics of adopting a new approach.
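To make that economics point concrete, a back-of-envelope sketch; every figure below is invented for illustration, none come from the paper:

```python
# Hypothetical cost-per-complete model: dropouts still consume sample
# (cost_per_start) but yield no usable interview, and incentives tend
# to scale with length of interview (LOI). All figures are made up.

def cost_per_complete(loi_minutes: float, incentive_per_minute: float,
                      cost_per_start: float, dropout_rate: float) -> float:
    starts_per_complete = 1 / (1 - dropout_rate)
    incentive = loi_minutes * incentive_per_minute  # paid to completes only
    return incentive + cost_per_start * starts_per_complete

print(cost_per_complete(loi_minutes=6, incentive_per_minute=0.25,
                        cost_per_start=1.0, dropout_rate=0.10))  # ~2.61
print(cost_per_complete(loi_minutes=12, incentive_per_minute=0.25,
                        cost_per_start=1.0, dropout_rate=0.25))  # ~4.33
```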


