Abstract
Peer assessment has at least a 50-year history in academia, and online applications for peer assessment have been available for more than 20 years. Until recently, online applications simply relayed feedback between classmates. But in the past decade, facilities have been incorporated to automatically recognize good reviews. This helps authors know which suggestions to follow and helps reviewers improve their reviews. It can also aid in assigning peer grades. Several types of data can be used to estimate review quality. These metrics can be combined using machine-learning and neural-network models to produce better estimates of review quality, and hence better estimates of the quality of the reviewed work. This paper discusses past work in automatically assessing reviews, and summarizes our current efforts to build on that work.
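As a rough illustration of the idea of combining several review-quality metrics into one estimate, the sketch below scores a peer review on a few hypothetical features (length, a crude suggestion count, and overlap with domain keywords) and combines them with a logistic function. The metrics, weights, and keyword set are all illustrative assumptions, not the features or models used in the paper.

```python
import math

# Hypothetical review-quality metrics (illustrative only, not the paper's features):
# review length, a crude count of suggestion phrases, and keyword overlap.
def review_metrics(review: str, keywords: set) -> list:
    words = [w.strip(".,;:!?") for w in review.lower().split()]
    length_score = min(len(words) / 100.0, 1.0)              # longer reviews, capped at 1
    suggestion_score = min(review.count("should") / 3.0, 1.0)  # crude proxy for suggestions
    overlap = len(set(words) & keywords) / max(len(keywords), 1)
    return [length_score, suggestion_score, overlap]

def quality_estimate(metrics, weights=(1.5, 2.0, 2.5), bias=-2.0) -> float:
    # Logistic combination of the metrics into a 0..1 quality estimate.
    z = bias + sum(w * m for w, m in zip(weights, metrics))
    return 1.0 / (1.0 + math.exp(-z))

keywords = {"thesis", "evidence", "citation", "structure"}
good = ("The thesis needs stronger evidence; you should add a citation, "
        "and the structure should be clearer.")
weak = "Nice job."
print(quality_estimate(review_metrics(good, keywords)) >
      quality_estimate(review_metrics(weak, keywords)))  # the detailed review scores higher
```

In practice the weights would be learned from labeled reviews rather than fixed by hand; the paper's keywords suggest its models go further, using natural-language-processing features and convolutional neural networks.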
| Original language | English |
| --- | --- |
| Journal | ASEE Annual Conference and Exposition, Conference Proceedings |
| Volume | 2018-June |
| State | Published - Jun 23 2018 |
| Externally published | Yes |
| Event | 125th ASEE Annual Conference and Exposition - Salt Lake City, United States. Duration: Jun 23 2018 → Jun 27 2018 |
Keywords
- Convolutional neural networks
- Natural language processing
- Peer assessment
- Peer feedback
- Peer review
- Tensorflow