Workshop on Automated Software Testing, 2015
Co-located with the 10th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 2015
Software testing is currently the most important and most widely used quality assurance technique in industry. Among the activities that make up the testing life-cycle, test case design, selection, and evaluation determines the quality and effectiveness of the whole testing process: the test cases define the kind and scope of the test. Test case design, selection, and evaluation is, however, one of the most difficult, time-consuming, and error-prone activities in testing. Moreover, to date there are few tools available to support test case design, and most of it is still done manually. Consequently, industry still spends substantial effort and money on testing, and the quality of the resulting tests is sometimes low, as they fail to find important errors in the system.
The A-TEST workshop aims to provide a venue for researchers and industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated test case design, selection, and evaluation. Submissions will be peer-reviewed, and those accepted for presentation at the workshop will also be published in the workshop proceedings. A-TEST emphasizes discussion, as reflected in its program, since we believe this is a more effective way to share and improve each other's work than traditional one-way presentations.
Topics
We invite you to submit a paper to the workshop, and to present and discuss it at the event itself, on topics related to:
- Techniques and tools for automating test case design and selection, e.g. model-based, combinatorial, or search-based approaches.
- Test case optimization.
- Test case evaluation and metrics.
- Test case design, selection, and evaluation in emerging test domains, e.g. graphical user interfaces, social networks, the cloud, games, or security.
- Case studies that evaluate an existing technique or tool on real systems, not only toy problems, and show the quality of the resulting test cases compared to other approaches.
Types of submissions
- Position paper (2 pages) that analyzes trends in automated software testing and raises issues of importance. Position papers are intended to generate discussion and debate during the workshop, and will be reviewed with respect to relevance and their ability to start up fruitful discussions.
- Work-in-progress paper (4 pages) that describes novel, interesting, and promising work in progress that has not necessarily reached full completion.
- Full paper (10 pages) describing original and completed research -- either empirical or theoretical -- in the above topics, or an industrial case study.
- Tool demo (4 pages)
Organizers:
Tanja Vos (Univ. Poli. Valencia, ES)
Sigrid Eldh (Ericsson, SE)
Wishnu Prasetya (Univ. Utrecht, NL)
Anna Esparcia (Univ. Poli. Valencia, ES)
Programme Committee Members:
Pekka Aho (VTT, FI)
Emil Alégroth (Chalmers University, SE)
Shaukat Ali (Simula, NO)
Steve Counsell (Brunel University, UK)
Maria Fernanda Granda (Univ. Poli. Valencia, ES)
Sheikh Umar Farooq (University of Kashmir, IN)
Mark Harman (Univ. College London, UK)
Peter M. Kruse (Berner & Mattner, DE)
Yvan Labiche (Carleton University, CA)
Jenny Li (Kean University, USA)
Atif Memon (Univ. of Maryland, USA)
John Penix (Google, USA)
Simon Poulding (Univ. of York, UK)
Onn Shehory (IBM, IL)
Daniel Sundmark (Malardalen Univ., SE)
Paolo Tonella (FBK, IT)
Important dates:
- paper submission: June 5, 2015
- author notification: June 29, 2015
- camera-ready: July 15, 2015
- workshop: August 30-31, 2015
Format and submission
All accepted papers will be published as part of the ESEC/FSE proceedings. Camera-ready versions of papers must follow the guidelines received from Conference Publishing Support by email.
Submitted papers will be reviewed by 3 members of the Program/Organizing Committee (or their sub-reviewers), and the selection of accepted papers will be based on the relevance, quality, and originality of the submissions.
If a paper is accepted, at least one of its (co-)authors is expected to attend the workshop and present the paper. All papers submitted to the workshop must be unpublished original work and must not be under review or submitted elsewhere while under consideration for A-TEST.