VISSOFT 2015: 3rd IEEE Working Conference on Software Visualization – Preliminary Call for Papers

VISSOFT 2015

http://vissoft.info

Software visualization is a broad research area encompassing concepts, methods, tools, and techniques that assist in a range of software engineering and software development activities. Covered aspects include the development and evaluation of approaches for visually analyzing software and software systems, including their structure, execution behavior, and evolution.

The VISSOFT IEEE Working Conference on Software Visualization continues the history of the ACM SOFTVIS Symposium on Software Visualization and the IEEE VISSOFT International Workshop on Visualizing Software for Understanding and Analysis. The conference focuses on visualization techniques that target aspects of software maintenance and evolution, program comprehension, reverse engineering, and reengineering, i.e., how visualization helps professionals to understand, analyze, test and evolve software. We aim to gather tool developers, experts, users, and researchers from software engineering, information visualization, computer graphics, and human-computer interaction to discuss theoretical foundations, algorithms, techniques, tools, and applications related to software visualization. We seek technical papers, empirical studies, applications, or case studies and provide a platform for presenting novel research ideas and tools.

Topics of interest include, but are not limited to:

  • Innovative visualization and visual analytics techniques for software engineering data, such as:
    • source code
    • static and dynamic dependencies
    • software evolution and repositories
    • software documentation
    • web services
    • protocol, log, and performance data
    • parallel techniques
    • database schemas
    • software security and privacy issues
    • workflow and business processes
  • Visualization to support program comprehension, software testing, and debugging
  • Interaction techniques and algorithms for software visualization
  • Visualization-based techniques in computer science and software engineering education
  • Integration of software visualization tools and development environments
  • Empirical evaluation of software visualization
  • Industrial experience on using software visualization

Papers are solicited that present original, unpublished research results and will be rigorously reviewed by an international program committee. In addition to technical papers, VISSOFT features a New Ideas or Emerging Results (NIER) track and a Tool track related to the same list of topics suggested above. All accepted submissions will appear in the conference proceedings and the IEEE Digital Library.

Technical papers

These contributions describe in-depth, mature research results in the above-mentioned areas of interest. The submission of a video (up to 5 minutes in length) to accompany the paper is highly encouraged to show interaction possibilities. Authors who wish to submit such a video should provide a URL to it. Technical papers may be at most 10 pages long (including bibliography and annexes).

Abstract submission date: April 27, 2015
Full paper submission date: May 4, 2015
Author response period: June 8 – 12, 2015
Notification: June 18, 2015

Artifacts: Traditionally, technical research papers are published without any accompanying artifacts (such as tools, data, models, or videos), even though these artifacts may serve as crucial and detailed evidence for the quality of the results the paper offers. Following the effort initiated at ESEC/FSE’11, authors of accepted technical papers at VISSOFT 2015 can have their artifacts evaluated by the program committee. Positively evaluated artifacts will be reflected in the paper publication and presentation. More information about artifact evaluation can be found at http://www.artifact-eval.org.

Artifact submission for accepted papers: June 24, 2015
Artifact notification: July 31, 2015

Awards: VISSOFT 2015 will award distinguished technical papers. Monetary awards will be sponsored by ObjectProfile.com

Special issue: We plan to invite the authors of selected technical papers accepted at VISSOFT 2015 to submit an extended version to a journal.

NIER papers

The NIER contributions (New Ideas and Emerging Results) describe work in progress and exciting preliminary results. Authors should include open questions and even provocative hypotheses to get early feedback on their research ideas or even support through new research collaborations. NIER papers may be at most 5 pages long (including bibliography and annexes).

Paper submission date: June 15, 2015
Notification: July 31, 2015

Tool papers

Tool contributions describe the design or actual utilization of software visualization tools, with a focus on relevant tool construction aspects or on the use of the tool for gaining new insights. Authors should be prepared to demonstrate their tool at the conference. The submission may also contain a link to a screencast (video). Tool papers may be at most 5 pages long (including bibliography and annexes).

Paper submission date: June 15, 2015
Notification: July 31, 2015

General Chair:
Jürgen Döllner, Hasso-Plattner-Institut, Germany — http://www.hpi.uni-potsdam.de/doellner/

Program Co-Chairs:
Fabian Beck, University of Stuttgart, Germany — http://research.fbeck.com
Alexandre Bergel, University of Chile, Chile — http://bergel.eu

Please visit http://vissoft.info for updates.


High Performance Data Analysis and Visualization (HPDAV) 2015

An IPDPS 2015 Workshop, May 25-29, 2015, Hyderabad, India

=== Summary ===

- Workshop focus:  high performance data analysis, visualization, and
related data-intensive methods and techniques for evolving architectures
and large, complex datasets.

- Papers/panels: long papers (8-10 pages), short papers (4-5 pages), and
a panel.

- Due dates: paper/panel submissions due 5 Jan 2015, author notification
9 Feb 2015, camera-ready due 27 Feb 2015.

- Workshop dates: HPDAV 2015 is a one-day workshop that will be held in
conjunction with IPDPS 2015, which is May 25-29, 2015, in Hyderabad, India.

- Workshop web page: http://vis.lbl.gov/Events/HPDAV-IPDPS-2015/.

=== Workshop Theme ===

While the purpose of visualization and analysis is insight, realizing
that objective requires solving complex problems: crafting or adapting
algorithms and applications to take advantage of evolving architectures,
and solving increasingly complex data-understanding problems for ever
larger and more complex data. These architectures, and the systems built
from them, have increasingly deep memory hierarchies, increasing
concurrency, decreasing relative per-core/per-node I/O capacity, and
less memory per core; they are also increasingly prone to failures and
face power limitations.

The purpose of this workshop is to bring together researchers,
engineers, and architects of data-intensive computing technologies,
which span visualization, analysis, and data management, to present and
discuss research topics germane to high performance data analysis and
visualization. Specifically, this workshop focuses on research topics
related to adapting/creating algorithms, technologies, and applications
for use on emerging computational architectures and platforms.

The workshop format includes traditional research papers (8-10 pages)
for in-depth topics, short papers (4 pages) for works in progress, and a
panel discussion.

=== Paper Topics ===

We invite papers on original, unpublished research in the following
topic areas under the general umbrella of high performance visualization
and analysis:

- Increasing concurrency at the node level, and at the system-wide level.

- Optimizations for improving performance, e.g., decreasing runtime,
leveraging a deepening memory hierarchy, reducing data movement, and
reducing power consumption.

- Applications of visualization and analysis with a strong thematic
element of solving a larger or more complex problem through algorithmic
or design advances that take advantage of increasing concurrency,
architectural features, etc.

- Data analysis and/or visualization systems, designs, and architectures
with an emphasis on scalability, resilience, and high throughput/high
capacity, and that are able to take advantage of emerging architectures.

We anticipate that a portion of the program will be dedicated to
20-minute research talks, and a portion to 10-minute short talks.

Paper format:

- Long papers: 8-10 pages, to provide a full problem description,
background and related work, methodology, and results.

- Short papers: 4 pages, for works in progress, vignettes, and topics of
more limited scope.

LaTeX and other templates may be found via http://www.ipdps.org.

=== Panel Discussion ===

We solicit proposals for a panel that would present position statements
on topics related to HPDAV and be of interest to a broad audience.

Guidelines for panel submissions:

Content: Panel proposal statements should include the title of the
panel, the names of the panelists, an overall panel statement about the
focus and thesis of the panel, along with a brief position statement
from each of the prospective panelists.

Length: The panel proposal should be of sufficient length to convey the
main objective of the panel, along with a clear statement of each
panelist’s position. The following guidelines are not strict, but may
help give an idea of the expected level of detail: panel overview, about
500 words; each panelist’s statement, 200-400 words.

Format: please submit a single PDF containing all of the panel proposal
content.

This workshop anticipates having one panel discussion, which would
consist of 40 minutes of panelist presentations and 20 minutes of
audience discussion.

=== Peer review process ===

All submissions (long papers, short papers, and panel proposals) will
undergo peer review by at least three reviewers.

=== Important dates ===

Workshop submissions: 5 Jan 2015. All submissions (long papers, short
papers, and panel proposals) are due Monday, 5 Jan 2015, 23:59 Anywhere
On Earth. Please submit your paper or panel proposal via the EDAS
website used by IPDPS (http://www.edas.info/) to one of the following
three tracks: full papers, short papers, or panel.

Author notification: 9 Feb 2015. Authors of all submissions (long
papers, short papers, and panel proposals) will be notified of the
review results via email by 9 Feb 2015.

Camera-ready copy: 27 Feb 2015. Authors are expected to revise their
submissions and produce camera-ready copy, which is due by 27 Feb 2015.

=== Presentation at the workshop ===

It is expected that each accepted submission will be presented at the
workshop, which will be held in conjunction with IPDPS 2015, May 25-29,
2015, in Hyderabad, India.

=== Program Committee ===

Wes Bethel, Lawrence Berkeley National Laboratory (organizer)
Randall Frank, Applied Research Associates
Kelly Gaither, Texas Advanced Computing Center
Berk Geveci, Kitware
Alex Gray, Skytree
Ken Joy, UC Davis
Pat McCormick, Los Alamos National Laboratory
Peter Nugent, Lawrence Berkeley National Laboratory
George Ostrouchov, Oak Ridge National Laboratory
Rob Ross, Argonne National Laboratory
John Shalf, Lawrence Berkeley National Laboratory
Dale Southard, NVIDIA
Craig Tull, Lawrence Berkeley National Laboratory
Venkat Vishwanath, Argonne National Laboratory
John Wu, Lawrence Berkeley National Laboratory


Postdoctoral Position on Visualizing Multicore Performance – Dublin

Position for postdoctoral HCI researcher on Understanding and Visualizing Multicore Performance

The School of Computer Science and Statistics at Trinity College Dublin (http://www.tcd.ie) invites applications for a post-doctoral research position in the area of HCI and Visualization. The position is part of Lero, the Irish Software Engineering Research Centre, with collaboration from IBM Research.

Whereas previously only a small minority of programmers would deal with parallel programming, the shift towards multi-core has meant that a much wider proportion of programmers will need to produce parallel programs. The successful candidate will work as part of a small team on the ManyCore project, which aims to support this activity through visualizations which help programmers understand and improve the performance of their programs.

Following initial qualitative work (http://www.scss.tcd.ie/ManyCore), a data collection framework is already in place, along with a real-time data visualization framework, and software for supporting experiments. This postdoctoral researcher will thus focus on continuing analysis and design work, coupled with experimental work and associated preparation of publications.

Given the nature of the domain, a strong background in HCI and an interest in programming are essential. Previous experience of experimental work with visualizations would be desirable.

The researcher will be based in the School of Computer Science and Statistics at Trinity College Dublin. Situated in the centre of Dublin, Trinity College is a 400-year-old University with a large and active research profile in Computer Science.

The post is being offered on a full-time basis for an initial twelve months, with potential to extend the position by a further 9 months subject to satisfactory performance.

To apply, please email your CV and contact details for two references to Gavin.Doherty@tcd.ie, quoting MANYCORE in the subject line. Please use this address for queries also.


Dr. Gavin Doherty,
School of Computer Science and Statistics,
Trinity College Dublin.
Office: O’Reilly Institute LG.19
Tel: +353 1 8963858
Web: http://www.scss.tcd.ie/Gavin.Doherty/


State of the Art of Performance Visualization

This is a guest post by Kate Isaacs, UC Davis, who is one of the authors of the paper presented.

Software visualization for performance is about helping developers find inefficiencies slowing down their code. Performance can have significant effects on the usability, feasibility, and cost of running software. At EuroVis 2014, we presented a State-of-the-Art Report (STAR) on performance visualization, which I’ll go over below. See here for the full report, slides, and literature website.

Performance is generally measured in terms of time, e.g., time to completion or throughput. Power consumption is another performance measure of interest, but there are fewer tools for gathering such data. Since neither can be determined statically, some component of performance data must be gathered during execution (or possibly a simulation thereof). This includes calling contexts and state information from the software, as well as performance counters like cycles, flops, packets, and cache misses from the hardware. Collected data generally falls into one of two formats: profiles aggregate the data over time, offering low overhead but less detail; traces record each event separately as it occurs and thus quickly grow in size, so they must be limited in scope.
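
To make the distinction concrete, here is a minimal sketch (in Python, with made-up function names and timings) of the two formats: a trace keeps one timestamped record per event, while a profile aggregates those events into one record per function.

    from collections import defaultdict

    # Trace: one timestamped record per event; grows with execution length.
    trace = [
        # (timestamp in seconds, event, function)
        (0.000, "enter", "main"),
        (0.010, "enter", "solve"),
        (0.250, "exit",  "solve"),
        (0.260, "enter", "solve"),
        (0.480, "exit",  "solve"),
        (0.500, "exit",  "main"),
    ]

    # Profile: aggregate over the whole run; size is independent of run length.
    profile = defaultdict(lambda: {"calls": 0, "inclusive_time": 0.0})
    stack = []  # (function, entry timestamp)
    for timestamp, event, function in trace:
        if event == "enter":
            stack.append((function, timestamp))
        else:
            func, entered = stack.pop()
            profile[func]["calls"] += 1
            profile[func]["inclusive_time"] += timestamp - entered

    for func, stats in profile.items():
        print(func, stats)   # e.g. solve {'calls': 2, 'inclusive_time': 0.46}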

Figure: Trace visualization showing a call-stack timeline, by Trümper et al. Bar horizontal position and width indicate a function’s start time and duration; vertical position indicates call-stack depth.

We’ve broken down the use of visualization here into three main tasks. First, developers want to gain an overall understanding of what actions the software takes and how it uses resources during execution. Second, developers want help in detecting performance problems — they want to be able to quickly find anomalies, bottlenecks, load imbalance, and misuse of resources. Finally, they want to attribute these problems either to the software itself or some interaction between the software and the system on which it runs. Going beyond line-of-code attribution is a major challenge in helping developers truly understand causes of poor performance.

Though many tools employ visual analytics approaches to meet these tasks, we focused our STAR on the unique visualizations that may be part of these systems or stand alone. We categorized the visualizations by the context they provide to the performance measurements:

The software context is that of the code itself. Call graphs are a popular sub-context for performance visualization. The need to show time or counter data makes indented trees with attached tables, or node-link diagrams with color encoding, popular avenues.

Performance data has also been displayed on the code itself. Serial traces, like the one by Trümper et al. above, often focus on displaying the call stack over time or other code information, so they fall in the software context as well.

Figure: Indented-tree call graph with tabular attributes, by Lin et al. Each row is a unique call path; the data associated with that call path appears in the same row of an adjoined table.
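
As a rough sketch of the indented-tree idea (not any particular tool’s format; the call tree and timings below are invented), each call path becomes one row, with indentation encoding depth and an inclusive-time column adjoined on the right.

    # Each node: (function name, inclusive time in seconds, children).
    call_tree = ("main", 1.00, [
        ("load_data", 0.20, []),
        ("solve", 0.75, [
            ("assemble", 0.30, []),
            ("linear_solve", 0.45, []),
        ]),
    ])

    def print_indented(node, depth=0):
        """Print one row per call path: indented name plus a metric column."""
        name, seconds, children = node
        label = "  " * depth + name       # indentation encodes call depth
        print(f"{label:<24}{seconds:>6.2f} s")
        for child in children:
            print_indented(child, depth + 1)

    print_indented(call_tree)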

Threads and parallel processes are the fundamental units of the tasks context. Visualizing traces is a large area of research in this context, with challenges due to the sheer number of tasks. Representing the interactions between these tasks and their creation and deletion in time adds even more difficulty. Gantt-like representations and node-link diagrams are widely used here.

Figure: Gantt-like per-process timelines with messages overlaid, from Vampir. Each row represents a different process’s timeline, with bars showing function start time and duration; lines drawn between rows show messages between processes.
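
A minimal sketch of such a view, using matplotlib and a few invented intervals and messages: each process rank gets a Gantt-like row of bars (start, duration), and messages are drawn as lines from the send time on one row to the receive time on another.

    import matplotlib.pyplot as plt

    # Invented per-process function intervals: (start time, duration).
    intervals = {
        0: [(0.0, 2.0), (2.5, 1.5)],
        1: [(0.2, 1.0), (1.8, 2.0)],
    }
    # Invented messages: (sender rank, send time, receiver rank, receive time).
    messages = [(0, 1.9, 1, 2.1), (1, 3.0, 0, 3.2)]

    fig, ax = plt.subplots()
    for rank, bars in intervals.items():
        # One row per process; bar position and width encode time and duration.
        ax.broken_barh(bars, (rank - 0.3, 0.6))
    for sender, t_send, receiver, t_recv in messages:
        # Lines between rows show messages exchanged between processes.
        ax.plot([t_send, t_recv], [sender, receiver], color="black")

    ax.set_yticks(list(intervals))
    ax.set_xlabel("time (s)")
    ax.set_ylabel("process rank")
    plt.show()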

The system on which the software runs is the hardware context. This can include the individual CPU cores or GPUs running the code and their scheduling of instructions, traces of memory-hierarchy usage, and representations of compute nodes and their interconnection network. We also included the operating system in this context, as it is rarely changed by application developers. When possible, natural representations have been used, but the scale and complexity of modern architectures have largely removed this option.

Figure: Multiprocessor nodes connected in a 2D planar network, from Haynes et al. Ports are spheroids colored by performance data; nodes are surrounded by these ports; network links are shown as lines between ports, colored to show traffic.

The application context is the domain of what is computed by the software. In scientific simulations, this is often a physical domain; in linear algebra libraries, it would be the matrices involved. A lot of work has been done in the SciVis community on visualizing the former, but few tools have integrated a mapping of performance data onto those visualizations.

Figure: Matrix-multiply visualization showing the matrices involved in the computation and their memory accesses, from Choudhury et al. Memory accesses are shown both on the matrices (application context) and on the 1D arrays representing memory and caches (hardware context).
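
A toy sketch of this mapping, assuming per-element access counts are available for a small matrix (the data below is randomly generated): the performance data is drawn directly on the application-domain structure as a heat map.

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented application-context data: per-element memory access counts.
    rng = np.random.default_rng(seed=0)
    access_counts = rng.integers(low=0, high=100, size=(8, 8))

    # Map the performance data onto the matrix layout itself.
    fig, ax = plt.subplots()
    image = ax.imshow(access_counts)
    fig.colorbar(image, ax=ax, label="memory accesses")
    ax.set_title("Per-element accesses, shown in the application context")
    plt.show()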

There are several challenges to address in performance visualization, the largest one being scale. Representing growing numbers of parallel operations or multivariate data from counters, function calls, and static context information is a major part of this problem. However, simply managing and compressing the large amount of data that can be collected is also a problem. Another challenge is handling ensembles of data taken from multiple executions, so developers can better determine the effects of their changes. As mentioned before, sophisticated attribution and depictions of complicated architectures are also in demand. These challenges demonstrate the pressing need for innovative performance visualization.

Interested? The first Workshop on Visual Performance Analysis will be held at Supercomputing 2014 — regular and short papers are due July 28th.


VPA 2014 Call for Papers – 1st Workshop on Visual Performance Analysis

1st Workshop on Visual Performance Analysis (VPA)

Held in conjunction with SC14: The International Conference on High Performance Computing, Networking, Storage and Analysis

New Orleans, LA, USA
November 21, 2014

Submission Deadline: July 28, 2014

Over the last decades an incredible amount of resources has been devoted to building ever more powerful supercomputers. However, exploiting the full capabilities of these machines is becoming exponentially more difficult with each new generation of hardware. To help understand and optimize the behavior of massively parallel simulations the performance analysis community has created a wide range of tools and APIs to collect performance data, such as flop counts, network traffic or cache behavior at the largest scale. However, this success has created a new challenge, as the resulting data is far too large and too complex to be analyzed in a straightforward manner. Therefore, new automatic analysis approaches must be developed to allow application developers to intuitively understand the multiple, interdependent effects that their algorithmic choices have on the final performance.

This workshop will bring together researchers and practitioners from the areas of performance analysis, application optimization, visualization, and data analysis and provide a forum to discuss novel ideas on how to improve performance understanding, analysis and optimization through novel techniques in scientific and information visualization.

Workshop Topics

  • Scalable displays of performance data
  • Interactive visualization of performance data
  • Data models to enable data analysis and visualization
  • Graph representation of unstructured performance data
  • Collection and representation of metadata to enable fine-grained attribution
  • Message trace visualization
  • Memory and network traffic visualization
  • Representation of hardware architectures

Paper Submission

We solicit two types of papers, both covering original and previously unpublished ideas: 8-page regular papers and 4-page short papers. To be considered, your manuscript should be formatted according to the double-column IEEE format for conference proceedings (IEEEtran LaTeX class/template V1.8 and IEEEtran BibTeX V1.12). Margins and font sizes should not be modified. The templates can be found at http://www.ieee.org/conferences_events/conferences/publishing/templates.html

All papers must be submitted through Easychair at: https://www.easychair.org/conferences/?conf=vpa14

Logistics

All logistics, including registration, hotel reservations, and visa requests, will be handled by SC14.

Important Dates

  • July 28th: submission deadline for full papers
  • September 15th: notification of acceptance
  • October 6th: final paper and copyrights due

Workshop Organizers

  • Peer-Timo Bremer, Lawrence Livermore National Laboratory
  • Bernd Mohr, Jülich Supercomputing Centre
  • Valerio Pascucci, University of Utah
  • Martin Schulz, Lawrence Livermore National Laboratory

Contact

Program Committee

  • Carlos Scheidegger, AT&T
  • Naoya Maruyama, RIKEN AICS
  • Felix Wolf, German Research School for Simulation Sciences
  • Matthias Mueller, RWTH Aachen University
  • Holger Brunst, ZIH / TU Dresden
  • Joshua Levine, Clemson University
  • Derek Wang, Charlotte Visualization Center, UNCC
  • Todd Gamblin, Lawrence Livermore National Laboratory
  • Hank Childs, University of Oregon
  • Markus Geimer, Jülich Supercomputing Centre
  • Judit Gimenez, Barcelona Supercomputing Center / Universitat Politècnica de Catalunya
  • Remco Chang, Tufts University

VISSOFT 2014 Call For Papers


http://vissoft.iro.umontreal.ca/

The second IEEE Working Conference on Software Visualization (VISSOFT 2014) builds upon the success of the first edition of VISSOFT in Eindhoven, which in turn followed six editions of the IEEE International Workshop on Visualizing Software for Understanding and Analysis (VISSOFT) and five editions of the ACM Symposium on Software Visualization (SOFTVIS). In 2014, VISSOFT will again be co-located with ICSME in Victoria, BC, Canada.

Software visualization is a broad research area encompassing techniques that assist in a range of software engineering activities, such as specification, design, programming, testing, maintenance, reverse engineering, and reengineering. Covered methods include the development and evaluation of approaches for visually analyzing software and software systems, including their structure, execution behavior, and evolution.

In this conference, we focus on visualization techniques that target aspects of software maintenance and evolution, program comprehension, reverse engineering, and reengineering, i.e., how visualization helps programmers to understand, analyze, and evolve software. We aim to gather tool developers, users and researchers from software engineering, information visualization, and human-computer interaction to discuss theoretical foundations, algorithms, techniques, tools, and applications related to software visualization. We seek theoretical, as well as practical papers on applications, techniques, tools, case studies, and empirical studies.

Topics of interest include, but are not limited to:

  • Program visualization
  • Visual software analytics
  • Network visualizations in software engineering
  • Visualization of software documentation
  • Visualization of parallel programs
  • Visualization-based software in computer science and software engineering education
  • Visualization of workflow and business processes
  • Integration of software visualization tools and development environments
  • Visualization of web services
  • Visualization of software evolution
  • Visualization of database schemas
  • Protocol and log visualization (security, trust)
  • Graph algorithms for software visualization
  • Layout algorithms for software visualization
  • Visual debugging
  • Software visualization on the internet
  • Empirical evaluation of software visualization
  • Visualization to support program comprehension
  • Visualization to support software testing
  • Visualization of software repositories
  • Social media visualization

Papers are solicited that present original, unpublished research results and will be rigorously reviewed by an international program committee. In addition to full papers, VISSOFT features a New Ideas or Emerging Results (NIER) track and a tool demo track related to the same list of topics suggested above. All accepted submissions will appear in the conference proceedings and the IEEE Digital Library. Hints for writing software visualization research papers are available online (http://www.st.uni-trier.de/~diehl/softvis/org/softvis06/hints.html).

Submission Information

Authors should prepare and electronically submit their papers or abstracts via the EasyChair submission site (https://www.easychair.org/conferences/?conf=vissoft2014). Take care that you provide all required information in EasyChair. At least one author of an accepted paper must attend the conference to present the work. The review process will be single-blind.

All papers must conform, at time of submission, to the IEEE Formatting Guidelines. Make sure that you use this MS Word template (http://www.conference-publishing.com/templates/MSW_USltr_format.doc) and this LaTeX class (http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/IEEEtran.cls updated January 3rd, 2013). Submissions must be in PDF format. Make sure that you are using the correct IEEE style file: the title should be typeset in 24pt font and the body of the paper should be typeset in 10pt font. LaTeX users: please use \documentclass[conference]{IEEEtran} (without option compsoc or compsocconf).

Submission Types

Technical papers (up to 10 pages): These contributions describe in-depth, mature research results in the above-mentioned areas of interest. The submission of a video (up to 5 minutes in length) to accompany the paper is highly encouraged to show interaction possibilities. Authors who wish to submit such a video should provide a URL to the video at the end of the abstract input box in EasyChair (not in the submitted paper itself!).

NIER papers (up to 5 pages): The NIER contributions describe work in progress and exciting preliminary results. Authors should include open questions and even provocative hypotheses to get early feedback on their research ideas or even support through new research collaborations.

Tool papers (up to 4 pages): Tool contributions describe the design or actual utilization of software visualization tools, with a focus on relevant tool construction aspects or the use of the tool for gaining new insights. Authors should be prepared to demonstrate their tool at the conference.

Challenge papers (up to 4 pages): In the software visualization challenge, authors demonstrate the usefulness of their visualization tools on the data provided. The paper should provide an introduction to the problem, the data used, the methods and tools applied, the results and their implications, and conclusions.

Important Dates

Main track (technical papers)

Abstract Submission: May 9, 2014
Paper Submission: May 16, 2014
Author Notification: June 20, 2014
Camera-ready Copies: July 14, 2014

NIER, Tool Demo, and Tool Challenge tracks:

Abstract Submission: June 24, 2014 => extended to July 1
Paper Submission: July 1, 2014 => extended to July 8
Author Notification: July 25, 2014
Camera-ready Copies: August 7, 2014

Organizing Committee

General Chair: Houari Sahraoui (University of Montreal, CA)

Program Co-Chairs: Bonita Sharif (Youngstown State University, US) and Andy Zaidman (Delft University of Technology, NL)

NIER & Tool-Demo Track Co-Chairs: Fabian Beck (University of Stuttgart, DE) and Mircea Lungu (University of Bern, CH)

Publicity & Web Chair: Daniel Limberger (Hasso Plattner Institute, DE)


Survey on the Use of Sketches and Diagrams

This is a guest post by Sebastian Baltes, University of Trier. Please consider taking part in his short study (details below).

Over the last decade, studies have shown that, despite the dominance of source code, sketches and diagrams play a major role in software engineering practice.

The focus of our current research is to expand our knowledge on the use of sketches and diagrams in software engineering practice. We are particularly interested in how these visual artifacts are related to source code. We do not focus exclusively on software developers, but on all “software practitioners”, including testers, architects, and project managers, as well as researchers and consultants.

If you think that you belong to this group of software practitioners, please help us gain deeper insights into your work practice by participating in our short survey. It takes just five to ten minutes of your valuable time:

http://www.st.uni-trier.de/survey

For more information, don’t hesitate to contact me.
Thanks in advance for your participation!
