Massive open online courses are inspiring many to think anew about how to teach at scale. This talk will introduce a new approach to teaching the design of information visualizations, in which the process of assessing student designs can itself lead to a better understanding of how to design complex, high-dimensional visualizations.
Students are challenged to create visualizations that can answer a wide range of questions about a given dataset. They are then asked to make both subjective (expert) and objective (usability) assessments of other students' designs, and we compare these assessments in parallel along metric scales. Several useful consequences result: innovative designs that might challenge an expert can prove themselves with strong showings on objective scores; instructor coding errors can be detected; and a database of designs scored along two different dimensions should be more reliable for inferring future design heuristics than one consisting of only a single type of assessment. This is similar to the "checks and balances" seen in ensemble learning.