Table of Links
1 Introduction
2 Course Structure and Conditions
2.1 Conditions
2.2 Syllabus
3 Lectures
4 Practical Part
4.1 Engagement Mechanisms
4.2 Technical Setup and Automated Assessment
4.3 Selected Exercises and Tools
5 Check Your Proof by Example
6 Exams
7 Related Work
8 Conclusion, Acknowledgements, and References
4 Practical Part
4.1 Engagement Mechanisms
Students spend the majority of their time on the practical part of the course. This is where they apply the theory explained in the lecture to tutorial and homework exercises in the form of programming tasks, proof exercises, and miscellaneous other assignments (type inference, transformation of programs into tail-recursive form, etc.). As each student has unique interests, strengths and weaknesses, and different levels of commitment, we employed a diverse set of mechanisms to engage them.
As outlined in the introduction, keeping up engagement is particularly challenging in courses that are taught remotely. We experienced this first-hand when teaching a course in theoretical computer science during the first semester affected by the COVID-19 pandemic: homework and tutorial participation decreased significantly more over the course of that semester than in previous years. We thus put a particular emphasis on engaging teaching methods for the functional programming course in WS20.
We want to emphasise that engagement does not simply increase by offering more things – this may even increase stress – but by offering things that serve neglected needs. Effective engagement mechanisms do not simply keep students busy but genuinely make the course more interesting and fun: they engage students with the content, the instructors, and with each other [10, 23, 40]. We now describe the mechanisms that proved particularly valuable to us:
Grade Bonus For both course iterations, students were able to obtain a bonus of one grade step on their final exam provided that they achieved certain goals during the semester. This incentive had already been used in a previous iteration but was subsequently dropped due to negative experiences with plagiarism; after it was dropped, however, participation in the homework exercises decreased severely [4]. Moreover, the student council reported to us that a grade bonus is one of the wishes most frequently voiced by students.
We hence re-introduced the bonus with some changes. First, instead of requiring 40% of all achievable points, we switched to a pass-or-fail system per exercise sheet: students passed a sheet if they passed ≈ 70% of its tests and obtained the bonus if they passed ≈ 70% of all sheets. We made this change so that students could not secure the grade bonus early in the semester and then stop participating, which would lead to cramming. Second, in WS20, we introduced additional ways to obtain bonus points, for example by participating in programming contests or workshops by industry partners. This diversified the system and particularly increased the engagement of students who were struggling with programming tasks but were nevertheless interested in the course.
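For concreteness, the pass-or-fail rule can be summarised by the following minimal Haskell sketch; the data representation, the hard-coded 70% thresholds, and the example sheets are illustrative only and not the course's actual grading code.

-- Illustrative sketch of the bonus rule; thresholds and data are hypothetical.
type Sheet = (Int, Int)  -- (tests passed, total tests) for one exercise sheet

passesSheet :: Sheet -> Bool
passesSheet (passed, total) = fromIntegral passed >= 0.7 * fromIntegral total

obtainsBonus :: [Sheet] -> Bool
obtainsBonus sheets =
  fromIntegral (length (filter passesSheet sheets)) >= 0.7 * fromIntegral (length sheets)

main :: IO ()
main = print (obtainsBonus [(7, 10), (9, 12), (3, 10), (10, 10)])  -- True: 3 of 4 sheets passed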
In WS20, 298 of the 802 students who interacted with the homework system obtained the grade bonus (37%). More than 96% of the students who obtained the bonus passed the final exam, whereas more than half of those who did not obtain it failed. Similar numbers can be reported for WS19. Finally, in contrast to previous years, we have not seen any severe cases of plagiarism despite running all submissions through a plagiarism-checking tool [8].
Instant Feedback An observation we made in Section 3 extends to the practical part of the course: feedback must come fast. The benefit of prompt feedback is well supported in the literature [23, 26]. Again, an asynchronous Q&A forum helps in this regard, at least when dealing with questions of a general nature. Problems specific to a student’s submission (e.g. a bug or error in a proof), however, must be fixed by the student herself as 1) it is a critical skill of computer scientists to discover bugs and 2) code/proofs may not be shared before the submission deadline due to the grade bonus.
Automated tests can fill this gap: they provide prompt feedback without giving away too much information (e.g. by only showing a failing input and expected output pair). Needless to say, they are also crucial to scale the homework system to a large number of students. We describe our testing infrastructure in more detail in Section 4.2.
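To give a flavour of such feedback (our actual infrastructure is described in Section 4.2), below is a small property-based test in the style of QuickCheck; the submission, reference solution, and property are invented for this sketch. When the property fails, the student only sees the (shrunk) failing input and the expected output, not the reference code.

import Test.QuickCheck

-- Hypothetical (buggy) student submission: supposed to remove duplicates
-- while keeping the first occurrence of each element.
studentNub :: [Int] -> [Int]
studentNub []       = []
studentNub (x : xs) = x : studentNub (filter (/= x) (reverse xs))  -- bug: reverses the tail

-- Reference solution, kept secret on the server side.
referenceNub :: [Int] -> [Int]
referenceNub []       = []
referenceNub (x : xs) = x : referenceNub (filter (/= x) xs)

-- On failure, only the shrunk input and the expected output are reported.
prop_matchesReference :: [Int] -> Property
prop_matchesReference xs =
  counterexample ("input: " ++ show xs ++ ", expected: " ++ show (referenceNub xs))
                 (studentNub xs == referenceNub xs)

main :: IO ()
main = quickCheck prop_matchesReference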
However, in the first iteration of the course, we also let student assistants manually review all final submissions to provide feedback not covered by automation, in particular regarding code quality. To our dismay, this feedback achieved very little, and most of it was probably ignored. In part, this is because it took 1–2 weeks after each submission deadline to provide feedback to all students; by that point, students had already moved on to a fresh set of exercises and were probably not motivated to revisit their old submissions. Some may also only care about passing the tests and not be particularly interested in feedback about code quality.
In our second iteration, we hence reallocated resources: instead of grading submissions, student assistants now supported us by creating engaging exercises and offering new content (e.g. supervising workshops run by industry partners), while we focused on writing exhaustive tests with good feedback and on extending our automated proof-checking facilities (see Section 5). To provide at least some feedback on code quality, we instructed students to use a linter (see Section 4.2).
We can report very positively on this decision: we were able to offer a more diverse set of exercises and had the resources to offer new content, while code quality did not seem to suffer. Indeed, the linter even seemed to make students more aware of not only writing correct code but also using good coding patterns. This seems to be due to the fact that 1) the linter provides instant feedback and 2) it visually highlights affected code fragments and provides quick fixes.
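As an illustration of this kind of feedback, the snippet below shows two patterns that a Haskell linter such as HLint typically flags (the tool we used is discussed in Section 4.2), together with the suggested replacements; the functions themselves are invented for this sketch.

-- Hypothetical student code; the comments paraphrase typical linter hints,
-- each of which comes with a one-click quick fix in the editor.

-- Hint: "Use concatMap" -- suggested fix: wordsOfAll ls = concatMap words ls
wordsOfAll :: [String] -> [String]
wordsOfAll ls = concat (map words ls)

-- Hint: "Redundant if" -- suggested fix: isLong w = length w > 10
isLong :: String -> Bool
isLong w = if length w > 10 then True else False

main :: IO ()
main = print (wordsOfAll ["hello world", "foo bar"], isLong "functional")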
Competition and Awards Due to positive feedback, we continued the tradition of running an opt-in weekly programming competition as introduced in [4]. Each week, one homework assignment was selected as a competition problem and a criterion for ranking the submissions was fixed. Participation was optional: students could pass the exercise without optimising their code or submitting it to the competition. The set of competition problems was diverse, including code golf challenges, optimisation problems, game strategy competitions, an ACM-ICPC-like programming contest, and creative tasks like music composition and computer art (see Section 4.3). The top 30 entries received points and were presented to the public on a blog [9], written in the ironic, self-important third-person style established in previous semesters.
The overall top 30 students received awards at a humorously organised award ceremony at the end of the semester. We cooperated with industry partners to offer prizes such as tickets to functional programming conferences, Haskell workshops and programming books, as well as cash and material prizes. This initial contact with industry partners also sparked the idea to offer Haskell workshops run by software engineers from industry in WS20 (explained further below).
The competition in WS20 greatly benefitted from incorporating the work of our student assistants: At the beginning of the semester, we brainstormed for competition ideas. We then formed teams, each one being responsible for the implementation of one idea to be published as a competition exercise during the semester. This allowed us to create more extensive, diverse, and practical exercises than in previous years, where all tasks were created by the course organisers.
We can confirm the observation reported in [4] that the competition works extremely well to motivate talented students: they go well beyond what is taught as part of the course when devising their competitive solutions. Many of them became major drivers in the team of student assistants in follow-up iterations. Indeed, after the competition was offered in WS19, the number of applications for student assistant positions in WS20 more than doubled. In each iteration of the course, 144 different students ranked among the top 30 of the week at least once. We also received testimonies from students that even though they did not perform well in the competition (or did not participate at all), they nevertheless enjoyed the blog posts and the advanced material discussed there.
The competition combines multiple effective engagement mechanisms [30, 40]: it is challenging, often practical, humorous, and integrates gamification aspects. Despite the help of our student assistants, running the competition remained enormously labour-intensive, in particular the evaluation of submissions and the writing of blog posts. We envisage further help from student assistants in these respects, cutting the competition down to a bi-weekly format, or replacing it with more efficient mechanisms that motivate talented students.
Workshops with Industry Partners Many students at TUM have questioned the applicability and value of functional programming for real-world applications. Obviously, there is not much use in us academics promising them otherwise. Instead, we had the idea to invite people from industry to offer functional programming workshops about practical topics not covered in our course.
In WS20, we hosted three workshops on 1) design patterns for functional programs, 2) networking and advanced I/O, and 3) user interfaces and functional reactive programming. We limited participation to 35 students per workshop, and to our delight, demand exceeded supply (more than 120 students applied). Industry partners and workshop participants alike reported back to us very positively. In some cases, workshops were even extended by multiple hours due to the students' great curiosity. Moreover, the organisational overhead was small: we merely had to communicate the syllabus to our partners and coordinate time and place. We envisage offering more workshops in future iterations and highly recommend this mechanism to other instructors.
Social Interactions Studies have confirmed that the COVID-19 pandemic worsened students' social lives, leading to higher levels of stress, anxiety, loneliness, and symptoms of depression [12]. In WS20, we hence investigated mechanisms to foster social interaction and exchange between students – which also play an important pedagogical role in general [18]. Crucially, there is no one-size-fits-all solution; multiple forms of social interaction are needed to increase engagement [10, 23].
Firstly, we decided to employ pair programming (in groups of 3–4 students) in our online tutorials. The technical setup for this is described in Section 4.2. This not only made social interactions an integral part of the tutorial but also had positive effects on knowledge sharing. We can report very positively on this policy: it received 13 positive and 2 negative comments in the course evaluation form.
Secondly, we hosted two informal get-together sessions, one at the beginning and one at the end of the semester. Each session was joined by ≈ 50 students. We started with icebreaker sessions in breakout rooms, randomly allocating students and at least one student assistant to each group. We then opened thematic breakout rooms where students could freely talk about a given topic. Some preferred to talk about the course, others had light-hearted conversations about university life, yet others started to play online games. All in all, we received very positive feedback for these sessions.
Thirdly, we organised an ACM-ICPC-like programming contest where students participated in teams, followed again by a light-hearted get-together session for participants.
Authors:
(1) Kevin Kappelmann, Department of Informatics, Technical University of Munich, Germany ([email protected]);
(2) Jonas Rädle, Department of Informatics, Technical University of Munich, Germany ([email protected]);
(3) Lukas Stevens, Department of Informatics, Technical University of Munich, Germany ([email protected]).
[8] We used Moss: https://theory.stanford.edu/~aiken/moss/
[9] https://www21.in.tum.de/teaching/fpv/WS20/wettbewerb.html (WS20) and https://www21.in.tum.de/teaching/fpv/WS19/wettbewerb.html (WS19)