Subscribe: Software Engineering, IEEE Transactions on - new TOC
http://ieeexplore.ieee.org/rss/TOC32.XML

IEEE Transactions on Software Engineering - new TOC



TOC Alert for Publication #32



 



Language Inclusion Checking of Timed Automata with Non-Zenoness

Nov. 1 2017

Given a timed automaton $\mathcal{P}$ modeling an implementation and a timed automaton $\mathcal{S}$ as a specification, the language inclusion checking problem is to decide whether the language of $\mathcal{P}$ is a subset of that of $\mathcal{S}$. The problem is known to be undecidable, and it becomes more complicated when non-Zenoness is taken into consideration. A run is Zeno if it permits infinitely many actions within finite time; otherwise it is non-Zeno. Zeno runs may be present in both $\mathcal{P}$ and $\mathcal{S}$, so it is necessary to check whether a run is Zeno in order to avoid reporting Zeno runs as counterexamples to language inclusion. In this work, we propose a zone-based semi-algorithm for language inclusion checking with non-Zenoness, further improved with simulation reduction based on LU-simulation. Though our approach is not guaranteed to terminate, an empirical study shows that it does in many cases. Our approach has been incorporated into the PAT model checker and applied to multiple systems to demonstrate its usefulness.
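
For intuition only, the sketch below (plain Python, not the paper's zone-based semi-algorithm, which operates on symbolic zones rather than concrete delays) classifies a lasso-shaped run, i.e., a finite prefix followed by a cycle repeated forever, as Zeno or non-Zeno: with fixed per-step delays, time diverges exactly when one iteration of the cycle consumes positive time. The run encoding is an illustrative assumption.

# Minimal sketch: classifying a lasso-shaped run of a timed automaton.
# Each step is an (action, delay) pair with delay >= 0; the cycle
# repeats forever after the prefix.

def is_non_zeno(prefix, cycle):
    """Return True iff the infinite run prefix.cycle^omega is non-Zeno."""
    if not cycle:
        raise ValueError("an infinite run needs a non-empty cycle")
    cycle_delay = sum(delay for _, delay in cycle)
    # Infinitely many actions within finite total time <=> zero delay
    # accumulated per cycle iteration.
    return cycle_delay > 0

# A run looping on action 'b' with delay 0 performs infinitely many
# actions without time passing, hence it is Zeno:
print(is_non_zeno([("a", 1.5)], [("b", 0.0)]))  # False (Zeno)
print(is_non_zeno([("a", 1.5)], [("b", 0.5)]))  # True  (non-Zeno)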



Model Transformation Modularization as a Many-Objective Optimization Problem

Nov. 1 2017

Model transformation programs are iteratively refined, restructured, and evolved for many reasons, such as fixing bugs and adapting existing transformation rules to new metamodel versions. Modular design is therefore a desirable property for model transformations, as it can significantly improve their evolution, comprehensibility, maintainability, reusability, and thus their overall quality. Although language support for the modularization of model transformations is emerging, model transformations are typically created as monolithic artifacts: programs written in transformation languages such as ATL are implemented as one main module comprising a large number of rules. To the best of our knowledge, the problem of automatically modularizing model transformation programs has not been addressed in the literature. To tackle this problem and improve the quality and maintainability of model transformation programs, we propose an automated search-based approach that modularizes model transformations using higher-order transformations. Their application and execution are guided by our search framework, which combines an in-place transformation engine and a search-based algorithm framework. We demonstrate the feasibility of our approach by using ATL as the concrete transformation language and NSGA-III as the search algorithm to find a trade-off between well-known conflicting design metrics used as fitness functions to evaluate the generated modularized solutions. To validate our approach, we apply it to a comprehensive dataset of model transformations. As the study shows, ATL transformations can be modularized automatically, efficiently, and effectively by our approach. We found that, on average, the majority of the modules recommended by NSGA-III for all the ATL programs are considered correct, with more than 84 percent precision and 86 percent recall when compared to manual solutions provided by active developers. The statistical analysis of our experiments over several runs shows that NSGA-III performed significantly better than multi-objective algorithms and random search. We were not able to compare with existing model transformation modularization approaches since our study is the first to address this problem. The software developers considered in our experiments confirm the relevance of the recommended modularization solutions for several maintenance activities, based on different scenarios and interviews.
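
As a rough illustration of the kind of objectives such a search balances (the paper's actual fitness functions and NSGA-III machinery are richer), the hypothetical sketch below scores a candidate modularization of transformation rules by cohesion (dependencies kept inside a module) versus coupling (dependencies crossing modules); rule and module names are invented.

# Hypothetical fitness sketch for a rule modularization: rules are
# nodes, dependencies between rules are edges, and a candidate solution
# assigns each rule to a module.

def cohesion_and_coupling(dependencies, module_of):
    """dependencies: list of (src_rule, dst_rule); module_of: rule -> module."""
    cohesion = coupling = 0
    for src, dst in dependencies:
        if module_of[src] == module_of[dst]:
            cohesion += 1   # dependency stays inside one module
        else:
            coupling += 1   # dependency crosses module boundaries
    return cohesion, coupling

deps = [("Rule1", "Rule2"), ("Rule2", "Rule3"), ("Rule3", "Rule4")]
assignment = {"Rule1": "M1", "Rule2": "M1", "Rule3": "M2", "Rule4": "M2"}
print(cohesion_and_coupling(deps, assignment))  # (2, 1)

A many-objective search such as NSGA-III would evolve many such assignments at once, maximizing cohesion while minimizing coupling alongside other conflicting metrics.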



Testing from Partial Finite State Machines without Harmonised Traces

Nov. 1 2017

This paper concerns the problem of testing from a partial, possibly non-deterministic, finite state machine (FSM) $\mathcal{S}$. Two notions of correctness (quasi-reduction and quasi-equivalence) have previously been defined for partial FSMs, but these, and the corresponding test generation techniques, only apply to FSMs that have harmonised traces. We show how quasi-reduction and quasi-equivalence can be generalised to all partial FSMs. We also consider the problem of generating an $m$-complete test suite from a partial FSM $\mathcal{S}$: a test suite that is guaranteed to determine correctness as long as the system under test has no more than $m$ states. We prove that we can complete $\mathcal{S}$ to form a completely-specified non-deterministic FSM $\mathcal{S}'$ such that any $m$-complete test suite generated from $\mathcal{S}'$ can be converted into an $m$-complete test suite for $\mathcal{S}$. We also show that there is a correspondence between test suites that are reduced for $\mathcal{S}$ and $\mathcal{S}'$, and likewise between test suites that are minimal for $\mathcal{S}$ and $\mathcal{S}'$.
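
To illustrate what "completing" a partial FSM means (this is a naive chaos-completion sketch, not the specific construction the paper proves preserves $m$-complete test suites), the hypothetical Python below redirects every undefined (state, input) pair to a fresh state from which any behaviour is possible, yielding a completely specified non-deterministic FSM.

# Naive completion sketch: transitions map (state, input) to a set of
# (output, next_state) pairs; undefined pairs are sent to a "chaos"
# state that permits every output.

def complete_fsm(states, inputs, outputs, transitions):
    chaos = "CHAOS"
    completed = {k: set(v) for k, v in transitions.items()}
    for s in list(states) + [chaos]:
        for i in inputs:
            if not completed.get((s, i)):
                # Undefined behaviour: anything may happen from here on.
                completed[(s, i)] = {(o, chaos) for o in outputs}
    return states + [chaos], completed

states = ["s0", "s1"]
trans = {("s0", "a"): {("x", "s1")}}  # ("s0","b"), ("s1",...) undefined
print(complete_fsm(states, ["a", "b"], ["x", "y"], trans))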



Using Natural Language Processing to Automatically Detect Self-Admitted Technical Debt

Nov. 1 2017

The metaphor of technical debt was introduced to express the trade-off between productivity and quality, i.e., when developers take shortcuts or perform quick hacks. More recently, our work has shown that it is possible to detect technical debt using source code comments (i.e., self-admitted technical debt), and that the most common types of self-admitted technical debt are design and requirement debt. However, all approaches thus far depend heavily on the manual classification of source code comments. In this paper, we present an approach to automatically identify design and requirement self-admitted technical debt using Natural Language Processing (NLP). We study 10 open source projects: Ant, ArgoUML, Columba, EMF, Hibernate, JEdit, JFreeChart, JMeter, JRuby and SQuirrel SQL, and find that 1) we are able to accurately identify self-admitted technical debt, significantly outperforming the current state of the art based on fixed keywords and phrases; 2) words related to sloppy code or mediocre source code quality are the best indicators of design debt, whereas words related to the need to complete a partially implemented requirement in the future are the best indicators of requirement debt; and 3) we can achieve 90 percent of the best classification performance using as little as 23 percent of the comments for both design and requirement self-admitted technical debt, and 80 percent of the best performance using as little as 9 and 5 percent of the comments for design and requirement self-admitted technical debt, respectively. The last finding shows that the proposed approach can achieve good accuracy even with a relatively small training dataset.
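
In the same spirit (though the authors' actual classifier and feature engineering differ), a generic bag-of-words text classifier over comments might look like the sketch below; the tiny inline dataset is invented purely for illustration.

# Generic comment-classification sketch with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = [
    "TODO: this is a quick hack, clean up later",   # design debt
    "ugly workaround for the listener ordering",    # design debt
    "returns the number of active connections",     # no debt
    "iterate over the sorted keys",                 # no debt
]
labels = ["design_debt", "design_debt", "no_debt", "no_debt"]

# Bag-of-words features feeding a standard Naive Bayes text classifier.
clf = make_pipeline(CountVectorizer(lowercase=True), MultinomialNB())
clf.fit(comments, labels)
print(clf.predict(["temporary hack until the parser is rewritten"]))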



When and Why Your Code Starts to Smell Bad (and Whether the Smells Go Away)

Nov. 1 2017

Technical debt is a metaphor introduced by Cunningham to indicate “not quite right code which we postpone making it right”. One noticeable symptom of technical debt is code smells, defined as symptoms of poor design and implementation choices. Previous studies have shown the negative impact of code smells on the comprehensibility and maintainability of code. While the repercussions of smells on code quality have been empirically assessed, there is still only anecdotal evidence on when and why bad smells are introduced, how long they survive, and how they are removed by developers. To empirically corroborate such anecdotal evidence, we conducted a large empirical study over the change history of 200 open source projects. This study required the development of a strategy to identify smell-introducing commits, the mining of over half a million commits, and the manual analysis and classification of over 10K of them. Our findings mostly contradict common wisdom, showing that most smell instances are introduced when an artifact is created, not as a result of its evolution. At the same time, 80 percent of smells survive in the system, and among the 20 percent of removed instances, only 9 percent are removed as a direct consequence of refactoring operations.
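
To make the notion of a smell-introducing commit concrete (the paper's identification strategy is considerably more elaborate), a naive sketch walks a file's commits oldest-first and reports the first one a detector flags; detect_smell here is a hypothetical placeholder, not the study's detector.

# Naive sketch: find the first commit in which a smell appears in a file.
import subprocess

def detect_smell(source: str) -> bool:
    # Hypothetical stand-in for a real smell detector, e.g., flagging
    # an oversized class as a "Blob" candidate.
    return len(source.splitlines()) > 500

def smell_introducing_commit(repo, path):
    # List the commits touching `path`, oldest first.
    log = subprocess.run(
        ["git", "-C", repo, "log", "--reverse", "--format=%H", "--", path],
        capture_output=True, text=True, check=True).stdout.split()
    for sha in log:
        show = subprocess.run(
            ["git", "-C", repo, "show", f"{sha}:{path}"],
            capture_output=True, text=True)
        if show.returncode == 0 and detect_smell(show.stdout):
            return sha  # first commit where the detector fires
    return None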



Clarifications on the Construction and Use of the ManyBugs Benchmark

Nov. 1 2017

High-quality research requires timely dissemination and the incorporation of feedback. Since the publication of the ManyBugs benchmark and its release on http://repairbenchmarks.cs.umass.edu/, researchers have provided feedback on the benchmark's construction and use. Here, we describe that feedback and our subsequent improvements to the ManyBugs benchmark.



Comments on ScottKnottESD in Response to “An Empirical Comparison of Model Validation Techniques for Defect Prediction Models”

Nov. 1 2017

In this article, we discuss the ScottKnottESD test, which was proposed in the recent paper “An Empirical Comparison of Model Validation Techniques for Defect Prediction Models” published in this journal. We discuss the implications and the empirical impact of ScottKnottESD's proposed normality correction and conclude that this correction does not necessarily fulfill the assumptions of the original Scott-Knott test and may cause problems with the statistical analysis.
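
The crux of the argument, that a normality correction need not actually yield normal data, can be illustrated with invented data (not the paper's): bimodal performance scores remain clearly non-normal after a log transformation, a common normality correction.

# Illustration only: a log transform does not guarantee normality.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
# Invented bimodal performance scores (e.g., AUCs from two regimes).
scores = np.concatenate([rng.normal(0.60, 0.01, 50),
                         rng.normal(0.90, 0.01, 50)])

for name, data in [("raw", scores), ("log-transformed", np.log(scores))]:
    stat, p = shapiro(data)          # Shapiro-Wilk normality test
    print(f"{name}: Shapiro-Wilk p = {p:.2e}")  # both reject normality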