TY - GEN
T1 - Are All Code Reviews the Same? Identifying and Assessing the Impact of Merge Request Deviations
AU - Kansab, Samah
AU - Bordeleau, Francis
AU - Tizghadam, Ali
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Code review is a fundamental practice in software engineering, ensuring code quality, fostering collaboration, and reducing defects. While research has extensively examined various aspects of this process, most studies assume that all code reviews follow a standardized evaluation workflow. However, our industrial partner, which uses the Merge Request (MR) mechanism for code review, reports that this assumption does not always hold in practice. Many MRs serve alternative purposes beyond rigorous code evaluation. These MRs often bypass the standard review process, requiring minimal oversight. We refer to these cases as deviations, as they disrupt expected workflow patterns. For example, work-in-progress (WIP) MRs may be used as draft implementations without any intention of being reviewed, MRs with very large changes are often created for code rebases, and library updates typically involve dependency version changes that require minimal or no review effort. We hypothesize that overlooking MR deviations can lead to biased analytics and reduced reliability of machine learning (ML) models used to explain the code review process. This study addresses these challenges by first identifying MR deviations. Our findings show that deviations occur in up to 37.02% of MRs across seven distinct categories. In addition, we develop a detection approach leveraging few-shot learning, achieving up to 91% accuracy in identifying these deviations. Furthermore, we examine the impact of removing MR deviations on ML models that predict code review completion time. Removing deviations significantly enhances model performance in 53.33% of cases, with improvements of up to 2.25 times. Their exclusion also significantly affects model interpretation, strongly altering overall feature importance rankings in 47% of cases and top-k rankings in 60% of cases. Our contributions include: (1) a clear definition and categorization of MR deviations, (2) a novel AI-based detection method leveraging few-shot learning, and (3) an empirical analysis of the impact of excluding deviations on ML models explaining code review completion time. Our approach helps practitioners streamline review workflows, allocate reviewer effort more effectively, and derive more reliable insights from MR analytics.
KW - Code Review
KW - Deviations
KW - Few-shot Learning
KW - Machine Learning
KW - Merge Requests
UR - https://www.scopus.com/pages/publications/105022431231
U2 - 10.1109/ICSME64153.2025.00036
DO - 10.1109/ICSME64153.2025.00036
M3 - Contribution to conference proceedings
AN - SCOPUS:105022431231
T3 - Proceedings - 2025 IEEE International Conference on Software Maintenance and Evolution, ICSME 2025
SP - 308
EP - 320
BT - Proceedings - 2025 IEEE International Conference on Software Maintenance and Evolution, ICSME 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 41st IEEE International Conference on Software Maintenance and Evolution, ICSME 2025
Y2 - 7 September 2025 through 12 September 2025
ER -