Research

Computing Education Research

Developing Integrated Teaching Platforms to Enhance Blended Learning in STEM

Funding: National Science Foundation [$597,529]

Principal Investigators: Collin Lynch, Tiffany Barnes, Sarah Heckman

Summary: In most college courses, students use multiple online tools to support collaboration and learning. However, little is known about how students navigate and integrate their use of these tools, or about the collective impact of using a specific set of them. This project aims to address this knowledge gap by developing an open platform to collect, integrate, and analyze data from students’ use of multiple online tools. This platform, called Concert, will actively track student progress and allow instructors to identify students’ help-seeking and collaboration behaviors. It will also enable research toward a model of how students use the online resources available to them. Results of this project are expected to increase understanding of students’ help-seeking behaviors, study behaviors, and social relationships within classes, and of how these behaviors and relationships affect student performance.

Using open application programming interfaces, the Concert platform will gather data from commonly used systems, such as the Piazza forum, the Jenkins Automated Grader, the GitHub submission system, MyDigitalHand, and Moodle. It will integrate data from these online tools and provide a single student interface for notifications and help seeking, as well as a single instructor interface for data analysis and student evaluation. It will monitor students’ use of the online tools and their study habits, and respond with automated guidance. Although the project will initially focus on computer science courses, it is designed to support students in any STEM field. The Concert platform will collect large sets of detailed, anonymized data about students’ online actions and class performance, providing a rich dataset to support further educational research. If successful, this project has the potential to empower STEM students and broaden participation by reducing the complexity of selecting and using online tools, thus supporting increased student engagement and learning.
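
As a concrete illustration of this open-API approach, the minimal sketch below pulls recent commit activity for a hypothetical student repository from the GitHub REST API, the kind of raw event data Concert would normalize alongside Piazza, Moodle, and grader events. The repository coordinates and the unified event-record idea are illustrative assumptions, not the actual Concert implementation.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch of open-API data collection in the style described above: fetch
 * recent commits for one student repository from the GitHub REST API.
 * The owner/repo names are hypothetical placeholders.
 */
public class GitHubEventCollector {
    public static void main(String[] args) throws Exception {
        String owner = "example-course-org"; // hypothetical course organization
        String repo = "team-project";        // hypothetical student repository

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/repos/" + owner + "/" + repo + "/commits"))
                .header("Accept", "application/vnd.github+json")
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // In a real pipeline, the JSON body would be parsed and each commit
        // normalized into a common event record (student, tool, timestamp,
        // action) alongside forum posts, LMS views, and autograder runs.
        System.out.println(response.body());
    }
}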

Publications

  • N. Gitinabard, Y. Xu, S. Heckman, T. Barnes, C. F. Lynch, “How Widely Can Prediction Models Be Generalized? Performance Prediction in Blended Courses,” IEEE Transactions on Learning Technologies, pp. 184-197. Impact Factor: 1.869
  • N. Gitinabard, T. Barnes, S. Heckman, C. F. Lynch, “What will you do next? A sequence analysis on the student transitions between online platforms in blended courses,” Educational Data Mining 2019, to appear.

Collaborative Research: Transforming Computer Science Education Research Through Use of Appropriate Empirical Research Methods: Mentoring and Tutorials

Website: http://empiricalcsed.org/

Funding: National Science Foundation [$406,557]

Principal Investigators: Jeffrey Carver (University of Alabama), Sarah Heckman (NC State), and Mark Sherriff (University of Virginia)

Summary: The computer science education (CSEd) research community consists of a large group of passionate CS educators who often also contribute to other disciplines of CS research. Other disciplines have trended toward more rigorous and empirical evaluation of hypotheses. However, many of the practices we apply to demonstrate rigor in our disciplinary research are ignored or actively avoided when performing research in CSEd. This suggests that CSEd is "theory scarce" because most publications are not research and do not provide the evidence or replication required for meta-analysis and theory building. An increase in empiricism in CSEd research will move the field from "scholarly teaching" to the "scholarship of teaching and learning" (SoTL), providing the foundation for meta-analysis and the generation of theories about teaching and learning in computer science. We propose the creation of training workshops and tutorials to educate the educators about appropriate research design and evaluation of educational interventions. The creation of laboratory packages, "research-in-a-box," will support sound evaluation and replication, leading to meta-analysis and theory building in the CSEd community.

Status: We have mentored three cohorts of 8-15 participants each. Our participants have completed, or are currently conducting, an educational study.

Publications

  • Our Publications
    • A. Al-Zubidy, J. Carver, S. Heckman, M. Sherriff, "A (Updated) Review of Empiricism at the SIGCSE Technical Symposium," 2016 SIGCSE Technical Symposium, Memphis, TN, March 2-5, 2016, pp. 120-125. (Acceptance Rate: 35.4%)
  • Participant Publications
    • Lina Battestilli, Apeksha Awasthi, and Yingjun Cao, "Two-Stage Programming Projects: Individual Work Followed by Peer Collaboration," Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE '18), ACM, New York, NY, USA, pp. 479-484. [DOI]
    • Ankur Chattopadhyay, Bobby Chindaphone, "A Nifty Inter-Class Peer Learning Model for Enhancing Student-Centered Computing Education, and for Generating Student Interests in Co-Curricular Professional Development," Proceedings of the 2018 IEEE Frontiers in Education Conference, San Jose, CA, USA, October 3-6, 2018. [Paper]
    • Yeajin Ham, Brandon Myers, "Supporting Guided Inquiry with Cooperative Learning in Computer Organization," Proceedings of the 50th ACM Technical Symposium on Computer Science Education, Minneapolis, MN, USA, February 27-March 2, 2019, pp. 273-279. [Paper]
    • John M. Edwards, Erika K. Fulton, Jonathan D. Holmes, Joseph L. Valentin, David V. Beard, Kevin R. Parker, "Separation of Syntax and Problem Solving in Introductory Computer Programming," Proceedings of the 2018 IEEE Frontiers in Education Conference, San Jose, CA, USA, October 3-6, 2018. [Paper]

Research Triangle Peer Teaching Fellows: Scalable Evidence-Based Peer Teaching for Improving CS Capacity and Diversity

Website: Peer Teaching Fellows Program

Funding: Google Computer Science Capacity Award

Principal Investigators: Jeff Forbes (Duke University), Ketan Mayer-Patel (UNC Chapel Hill), Kristy Boyer (University of Florida), Sarah Heckman (NC State)

Summary: This project seeks to increase undergraduate retention and diversity in introductory programming courses by creating a scalable, effective, and evidence-based peer teacher training program at NC State, Duke University, and UNC Chapel Hill.

Publications

  • M. Vellukunnel, P. Buffum, K. E. Boyer, J. Forbes, S. Heckman, K. Mayer-Patel, "Deconstructing the Discussion Forum: Student Questions and Computer Science Learning," SIGCSE 2017, pp. 603-608. (Acceptance Rate: 30%)
  • A. Smith, K. E. Boyer, J. Forbes, S. Heckman, K. Mayer-Patel, "My Digital Hand: A Tool for Scaling Up One-to-One Peer Teaching in Support of Computer Science Learning," SIGCSE 2017, pp. 549-554. (Acceptance Rate: 30%)

Jenkins: Automated Grading while Reinforcing Software Engineering Best Practices

Summary: Automated grading systems provide students with feedback early and often. Automated grading systems and other software engineering tooling can also reinforce software engineering best practices during development. We have developed a custom configuration of Jenkins, a continuous integration server, to automatically grade assignments in CSC216, CSC230, and CSC326 while reinforcing the software engineering best practices of test-driven development, continuous integration, version control, code coverage, and static analysis.
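
The sketch below illustrates, in simplified form, how an autograder of this kind might combine the signals named above (unit test results, code coverage, and static analysis findings) into a grade. The weights and thresholds are hypothetical and are not taken from the actual CSC216/CSC230/CSC326 grading configuration.

/**
 * Illustrative sketch of combining build signals into a grade. All weights
 * and thresholds below are hypothetical assumptions for exposition only.
 */
public class AutoGrader {
    public static double grade(int testsPassed, int testsTotal,
                               double statementCoverage, int staticAnalysisFindings) {
        // Correctness from unit test results (e.g., JUnit), weighted heaviest.
        double testScore = 70.0 * testsPassed / testsTotal;
        // Full coverage credit for meeting a bar; partial credit below it.
        double coverageScore = statementCoverage >= 0.80
                ? 20.0
                : 20.0 * statementCoverage / 0.80;
        // Deduct one point per static analysis finding, floored at zero.
        double styleScore = Math.max(0.0, 10.0 - staticAnalysisFindings);
        return testScore + coverageScore + styleScore;
    }

    public static void main(String[] args) {
        // e.g., 38/40 tests passing, 85% statement coverage, 2 findings
        System.out.printf("Grade: %.1f%n", grade(38, 40, 0.85, 2));
    }
}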

Publications

  • S. Heckman and J. King, “Developing Software Engineering Skills using Real Tools for Automated Grading,” SIGCSE 2018, pp. 794-799. (Acceptance Rate: 35%)
  • S. Heckman, J. King, M. Winters, "Automating Software Engineering Best Practices Using an Open Source Continuous Integration Framework," Poster: 2015 SIGCSE Technical Symposium, 2015, p. 677.

CPATH II: Incorporating Communication Outcomes into the Computer Science Curriculum

Funding: National Science Foundation

Role: Senior Personnel

Summary: In partnership with industry and faculty from across the country, this project will develop a transformative approach to developing the communication abilities (writing, speaking, teaming, and reading) of Computer Science and Software Engineering students. We will integrate communication instruction and activities throughout the curriculum in ways that enhance, rather than replace, students’ learning of technical content and that support the development of their computational thinking abilities. We will implement the approach at two institutions. By creating concepts and resources that can be adapted by all CS and SE programs, this project also has the potential to increase higher education’s capacity nationwide to meet industry’s need for CS and SE graduates with much stronger communication abilities than is, on average, the case today. In addition, by using the concepts and resources developed in this project, CS and SE programs will be able to increase their graduates’ mastery of technical content and computational thinking.

Publications

  • Paul V. Anderson, Sarah Heckman, Mladen Vouk, David Wright, Michael Carter, Janet E. Burge, and Gerald C. Gannod, "CS/SE Instructors Can Improve Student Writing without Reducing Class Time Devoted to Technical Content: Experimental Results," Joint Software Engineering Education and Training (JSEET), pp. 455-464.
  • Michael Carter, Robert Fornaro, Sarah Heckman, Margaret Heil, "Creating a Progression of Writing, Speaking, & Teaming Learning Outcomes in Undergraduate Computer Science/Software Engineering Curricula," World Engineering Education Forum (WEEF), Buenos Aires, Argentina, October 15-18, 2012.
  • Michael Carter, Robert Fornaro, Sarah Heckman, and Margaret Heil, "Developing a Learning Progression that Integrates Communication into an Undergraduate CS/SE Curriculum," NCSU Technical Report, TR-2012-7, May 25, 2012.

Course Redesigns

CSC326 Course Redesign – Creating an Agile Course to Support Software Engineering Process

Funding: NC State DELTA Course Redesign Grant [$36,400 + $9,500 supplement]

Principal Investigators: Sarah Heckman, Katie Stolee, Chris Parnin

Summary: CSC 326: Software Engineering is a required course for CSC majors, typically taken in the junior year, and is a prerequisite for CSC 492: Senior Design. Students find CSC 326 challenging due to the new technologies, required collaboration, and use of software engineering practices to complete programming assignments on a large, legacy system. We added a credit hour to CSC 326 starting in Fall 2017, which increases the twice-weekly lecture meetings from 50 minutes to 75 minutes and better reflects the actual student workload. We want to redesign the course to 1) better support students in learning new technologies, both for completing the course project and in preparation for work in industry, 2) provide better team training to support stronger collaborative experiences, and 3) align the delivery of CSC 326 with earlier courses in the curriculum that have also been redesigned with DELTA support (CSC 116, CSC 216, and CSC 316).

Publications

  • S. Heckman, K. T. Stolee, C. Parnin, “10+ Years of Teaching Software Engineering with iTrust: the Good, the Bad, and the Ugly,” ICSE-SEET 2018, pp. 1-4. (Acceptance Rate: 28%)

Course Projects

Incorporating Software Engineering Best Practices into CSC216

Summary: Students in CSC216, a second-semester programming course at NC State, struggle with the transition from a small, integrated lecture-lab classroom to a large lecture class. While I use active learning in the classroom, the exercises are small (e.g., write a single method) and do not demonstrate how the piece might fit into a larger solution. Student evaluations have consistently requested additional class time to practice programming larger solutions. I revised a linear data structure unit to use in-class laboratory assignments. Before class, students watch short videos about the topic; during class, they engage with the topic in small teams. While working on the in-class labs, students are encouraged to follow software engineering best practices like test-driven development (a minimal sketch of this test-first style appears after the research questions below). I have several research questions:

  • Do in-class labs on linear data structures in CSC216 increase student learning, engagement, and efficacy on linear data structures when compared with standard active learning lectures?
  • Does requiring good software engineering practices during in-class labs promote better software engineering practices during development of independent or team projects?
  • Do in-class labs in CSC216 reduce the number of students repeating the course?
  • Do in-class labs in CSC216 increase student success in the follow-on courses CSC230 and CSC326?
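
As referenced above, here is a minimal, hypothetical sketch of the test-first style encouraged in the labs: a JUnit test written against a small singly linked list of the kind students build. The class and method names are illustrative and are not taken from the actual lab materials.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

/** Hypothetical lab example: the test drives the linked list design. */
public class LinkedListLabTest {

    /** Minimal singly linked list of Strings built during the lab. */
    static class LinkedStringList {
        private Node front;
        private int size;

        private static class Node {
            String value;
            Node next;
            Node(String value, Node next) { this.value = value; this.next = next; }
        }

        /** Appends a value at the back of the list. */
        public void add(String value) {
            if (front == null) {
                front = new Node(value, null);
            } else {
                Node current = front;
                while (current.next != null) { current = current.next; }
                current.next = new Node(value, null);
            }
            size++;
        }

        /** Returns the value at the given index by walking the chain. */
        public String get(int index) {
            Node current = front;
            for (int i = 0; i < index; i++) { current = current.next; }
            return current.value;
        }

        public int size() { return size; }
    }

    @Test
    public void testAddMaintainsOrder() {
        LinkedStringList list = new LinkedStringList();
        list.add("apple");
        list.add("banana");
        assertEquals(2, list.size());
        assertEquals("apple", list.get(0));   // first element stays at the front
        assertEquals("banana", list.get(1));  // new elements append to the back
    }
}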

Current findings show no major difference in student learning between in-class labs and standard active learning lectures. Pass rates are lower than what we might expect from active learning classes [Freeman2014], but are higher than traditional lecture pass rates [Freeman2014] and the CS1 pass rates reported in the literature [Bennedsen2007, Watson2014].

The DELTA Course Redesign grant is supporting work to move CSC216 to a lab-based course. The in-class labs will be transitioned into an affiliated, Peer Teaching Fellow-run lab of 20-24 students starting in Fall 2016.

Publications

  • S. Heckman, "An Empirical Study of In-Class Laboratories on Student Learning of Linear Data Structures," International Computing Education Research Conference, 2015, pp. 217-225.
  • S. Heckman, "A Continuous Integration Framework for Promoting Software Engineering Best Practices," Poster: International Computer Education Research Conference, 2015.
  • S. Heckman, "An Investigation of In-class Labs on Student Learning of Linear Data Structures," Poster: NC State 2015 Teaching and Learning Symposium, 2015.

Software Engineering Research

SHF: Small: Enabling Scalable and Expressive Program Analysis Notifications

Funding: National Science Foundation [$265,853] (a continuation of earlier work)

Principal Investigators: Emerson Murphy-Hill and Sarah Heckman

Summary: Program analysis tools are necessary for high-quality software. The goal of this research is to understand how expressiveness and scalability can be increased within and across these tools, which is important to advancing knowledge by transforming how software development environments converse with the developers who use them. There will be three outcomes: a framework designed to enable toolsmiths to create program analysis tools that are expressive and scalable, three re-engineered program analysis tools that use the framework, and validation that program analysis tools built using our framework provide positive results. The work will have significant benefits to society by enabling developers to fully reap the benefits of program analysis tools: more correct, more reliable, and more on-time software systems.

Program analysis tools such as static analysis tools, restructuring tools, and code coverage tools communicate with the software developer through notifications, but these notifications must balance two competing priorities. First, they must be expressive enough that software developers can understand what the notification is trying to convey. Second, they must be scalable enough that, as understanding a notification becomes increasingly cognitively demanding, the developer does not abandon the tool in favor of an error-prone process of manual diagnosis. The project designs a new integrated development environment (IDE) framework for notifications. Many program analysis notifications share a common structure, which can be leveraged to enable expressiveness and scalability for IDE notifications.
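
A minimal sketch of the common-structure idea, under our own illustrative reading: a notification record that an IDE could render at increasing levels of detail, trading scalability (a bare margin glyph) against expressiveness (a full explanation). The fields and rendering levels are assumptions, not the framework's actual API.

/** Illustrative uniform notification record for program analysis tools. */
public class AnalysisNotification {
    /** How much detail the developer has asked the IDE to show. */
    public enum Detail { GLYPH, SUMMARY, EXPLANATION }

    private final String tool;        // e.g., a static analyzer or coverage tool
    private final String location;    // file and line the notification points at
    private final String summary;     // one-line message
    private final String explanation; // longer rationale and suggested fix

    public AnalysisNotification(String tool, String location,
                                String summary, String explanation) {
        this.tool = tool;
        this.location = location;
        this.summary = summary;
        this.explanation = explanation;
    }

    /** Render only as much detail as the requested level calls for. */
    public String render(Detail level) {
        switch (level) {
            case GLYPH:   return "!";                       // margin marker only
            case SUMMARY: return location + ": " + summary; // compact but expressive
            default:      return location + ": " + summary
                    + "\n" + explanation + " [" + tool + "]";
        }
    }
}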

Expressive and Scalable Notifications from Program Analysis Tools

Funding: National Science Foundation

Principal Investigators: Emerson Murphy-Hill (NC State) and Sarah Heckman (NC State)

Summary: A wide variety of program analysis tools have been created to help software developers do their jobs, yet the output of these tools is often difficult to understand and varies significantly from tool to tool. As a result, software developers may waste time trying to interpret the output of these tools instead of making their software more capable and reliable. This proposal suggests a broad investigation of several types of program analysis tools, with the end goal of an improved understanding of how program analysis tools can inform developers in the most expressive and uniform way possible. Once this goal is reached, we can create program analysis tools that enable developers to make tremendous strides toward more correct, more reliable, and more on-time software systems.

Resources

The resources below are used in CSC216: Programming Concepts - Java, a second-semester programming course for CSC majors. The resources relate directly to exercises and materials that were added to CSC216 as part of this grant. The PPTX files and the original Google Forms are available upon request. Please note that some of the slides are modified from Reges and Stepp’s Building Java Programs website. See the available supplements.

Lecture exercises and code downloads:

  • Code Coverage and Static Analysis: Exercises 05.01, 05.02, 05.03, 05.04, 05.05
  • Linked Lists: Exercises 11.03 and 11.06 (Lists.zip)
  • Inspections: Exercises 12.01, 12.02, and 12.03 (InspectionExercises.zip)
  • Iterators and Inner Classes: Exercise 13.03 (Lists.zip)
  • Recursion: Exercise 18.03 (Recursion.zip)

Publications

  • B. Johnson, R. Pandita, J. Smith, D. Ford, S. Elder, E. Murphy-Hill, S. Heckman, C. Sadowski, "A Cross-Tool Communication Study on Program Analysis Tool Notifications," Foundations of Software Engineering, Seattle, WA, USA, November 13-18, 2016, pp. 73-84.
  • Brittany Johnson, Rahul Pandita, Emerson Murphy-Hill, Sarah Heckman, "Bespoke Tools: Adapted to the Concepts Developers Know," 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, NIER Track, Bergamo, Italy, August 30-September 4, 2015, pp. 878-881.

Predicting Actionable Static Analysis Alerts

Funding:

  • IBM PhD Fellowship
  • CACC Grant

Summary: Automated static analysis (ASA) can identify potential source code anomalies, like null pointer dereferences, that could lead to field failures. Each alert or notification from an ASA tool requires inspection by a developer to determine whether it is important enough to fix; an alert worth fixing is called an actionable alert. Automated identification of actionable alerts generated by ASA could reduce this inspection overhead. Actionable alert identification techniques (AAITs) supplement ASA by using ASA alerts and other information about the software under analysis, called artifact characteristics, to prioritize or classify alerts.

We proposed an AAIT that applies machine learning to the history of a software development project to classify alerts as actionable or unactionable. Using the FAULTBENCH benchmark, we completed a comparative evaluation between our AAIT and other AAITs from the literature and found that, for the subject programs, our AAIT tended to better classify actionable and unactionable alerts when focusing on the goal of reducing false positives.
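
A simplified sketch of the classification idea: given artifact characteristics mined from a project's history, label each alert actionable or unactionable. A hand-tuned rule stands in for the learned model here; the specific characteristics and the rule are illustrative assumptions, not the published technique.

import java.util.List;

/** Illustrative actionable-alert classification from history-based features. */
public class AlertClassifier {

    /** Hypothetical artifact characteristics recorded for one ASA alert. */
    static class Alert {
        String type;                 // alert type reported by the ASA tool
        int priorFixesOfType;        // how often alerts of this type were fixed
        int priorSuppressionsOfType; // how often they were suppressed or ignored
        int fileChurn;               // recent changes to the containing file

        Alert(String type, int fixes, int suppressions, int churn) {
            this.type = type;
            this.priorFixesOfType = fixes;
            this.priorSuppressionsOfType = suppressions;
            this.fileChurn = churn;
        }
    }

    /** A trained model would replace this hand-tuned rule. */
    static boolean isActionable(Alert a) {
        boolean historicallyFixed = a.priorFixesOfType > a.priorSuppressionsOfType;
        boolean activeCode = a.fileChurn > 0;
        return historicallyFixed && activeCode;
    }

    public static void main(String[] args) {
        List<Alert> alerts = List.of(
                new Alert("NULL_DEREF", 12, 3, 5),    // likely actionable
                new Alert("STYLE_NAMING", 1, 40, 0)); // likely unactionable
        for (Alert a : alerts) {
            System.out.println(a.type + " -> "
                    + (isActionable(a) ? "actionable" : "unactionable"));
        }
    }
}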

Publications

  • Sarah Heckman and Laurie Williams, "A Systematic Literature Review of Actionable Alert Identification Techniques for Automated Static Code Analysis," Information and Software Technology, vol. 53, no. 4, April 2011, pp. 363-387.
  • Sarah Heckman and Laurie Williams, "A Comparative Evaluation of Static Analysis Actionable Alert Identification Techniques," 9th International Conference on Predictive Models in Software Engineering (PROMISE), Baltimore, Maryland, USA, October 9, 2013, Article No. 4.
  • Sarah Heckman and Laurie Williams, "A Model Building Process for Identifying Actionable Static Analysis Alerts," 2nd IEEE International Conference on Software Testing, Verification, and Validation (ICST), Denver, CO, USA, April 1-4, 2009, pp. 161-170.
  • Sarah Heckman and Laurie Williams, "On Establishing a Benchmark for Evaluating Static Analysis Alert Prioritization and Classification Techniques," Proceedings of the 2nd International Symposium on Empirical Software Engineering and Measurement (ESEM), Kaiserslautern, Germany, October 9-10, 2008, pp. 41-50.
  • Sarah Smith Heckman, A Systematic Model Building Process for Predicting Actionable Static Analysis Alerts, Dissertation, Computer Science, North Carolina State University, 2009.