PLEng Seminar Day 13th May

Time: Monday 13th May, 9:45 – 18:00       Venue: BTH, Karlskrona, Lövsalen (Library)

Welcome to the PLEng Mini-conference, a full day of presentations of interest to industrial partners and researchers. The program includes seven sessions in which PLEng licentiate candidates present their research areas and future approaches, addressing challenges and innovative solutions to specific problems in industry. The PLEng Mini-conference offers an opportunity for industry and researchers to meet and discuss applied research in industry.

The event is free for SERL/SERT partner companies. Register as soon as possible here. For any questions, send an email to Anna Eriksson, aes@bth.se

We are also live-streaming all the presentations during the seminar day for virtual attendees. Join in via https://bth.zoom.us/j/310852993

Welcome!

Best regards,

Prof. Dr. Tony Gorschek

Senior Research Leader SERT…

Program

Coffee/tea and sandwiches are served from 09:45 outside J1610

Welcome and intro talk – Tony Gorschek

The Agile and Lean movements (some would say bandwagons) have shaken the software industry for 18 years now and have collectively changed a lot of old truths. Or have they? Or are there other, under-researched areas that contribute to the purported success stories of Agile and Lean projects? I will also propose future study angles and report some initial results from an ongoing case study.

Context: Continuous integration (CI) is a practice that aims to continuously verify quality aspects of a software-intensive system to give fast feedback to the developers, both for functional and non-functional requirements (NFRs). Functional requirements are the direct result of development and can be tested in isolation, using either manual or automated unit, integration, or system tests. In contrast, some NFRs are hard to test without functionality, as NFRs are often aspects of functionality. This lack of testability makes NFR testing complicated and therefore underrepresented in industrial practice. However, the emergence of CI has radically affected how software-intensive systems are developed and has created new avenues for software quality evaluation and quality information acquisition. Research has therefore been devoted to utilizing this additional information for more efficient and effective NFR verification to preserve resources and time. Objective: We aim to identify the state-of-the-art (SOTA) research utilizing the CI environment for NFR testing, hereafter referred to as CI-NFR testing, and provide a synthesis of open challenges for CI-NFR testing.

Method: We conducted a systematic literature review (SLR). Through rigorous selection, from an initial set of 747 papers, we identified 47 papers that describe how NFRs are tested in a CI environment. Evidence-based analysis, through coding, was performed on the identified papers. Results: First, the selected papers describe ten different CI approaches, each using different tools, and a total of nine different types of NFRs were reported as tested. Second, although possible, CI-NFR testing is associated with at least ten challenges that adversely affect its adoption, use, and maintenance costs. Third, the identified CI-NFR testing processes are tool-driven, but there is currently a lack of NFR testing tools that can be used in the CI environment. Finally, we propose a CI framework for NFR testing. Conclusion: A synthesized CI framework is proposed for testing various NFRs, and associated CI tools are mapped to the components of the framework. This contribution is valuable, as the results of the study also show that CI-NFR testing can help improve NFR testing quality in industrial practice. However, the results also indicate that CI-NFR testing is currently associated with several challenges that need to be addressed through future research.
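To make CI-NFR testing concrete, below is a minimal sketch (not taken from the reviewed papers) of a performance NFR expressed as a Python test that a CI pipeline could execute on every commit; the service URL, latency budget, and sample count are hypothetical.

```python
# Minimal sketch of an NFR check a CI pipeline could run on every commit.
# The endpoint, latency budget, and sample count are hypothetical examples.
import time
import urllib.request

LATENCY_BUDGET_MS = 200                       # hypothetical NFR: p95 under 200 ms
SERVICE_URL = "http://localhost:8080/health"  # hypothetical service under test

def measure_latency_ms(url: str, samples: int = 20) -> list[float]:
    """Issue repeated requests and record wall-clock latency in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as response:
            response.read()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

def test_p95_latency_within_budget():
    """Fail the build if the 95th-percentile latency exceeds the NFR budget."""
    latencies = sorted(measure_latency_ms(SERVICE_URL))
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    assert p95 <= LATENCY_BUDGET_MS, f"p95 latency {p95:.1f} ms exceeds budget"
```

Run with pytest in the pipeline, such a test turns the NFR into a gating check: the build fails whenever the measured 95th-percentile latency drifts over the budget.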

12 – 13 Lunch at Bistro J.

The requirements on Authentication, Authorization and Accounting (AAA) for IoT systems are to a large extent context-dependent. The context for IoT systems can span a wide spectrum, ranging across the 3GPP classification of IoT systems from “critical communications” IoT systems (like a connected heart pacemaker or a glucometer), to “Enhanced Mobile Broadband” communication, to “massive IoT” systems (like connected bulbs). Moreover, a single IoT system may itself have varying AAA requirements based on its own context, such as the time of day, the type of activities being performed by the device, the geographical location, and the power/battery state, to name a few. There are therefore various AAA characteristics and challenges for IoT systems. A substantial amount of literature exists on the AAA characteristics of IoT systems and their challenges, so a Systematic Literature Review (SLR) is needed to investigate this area. Through this SLR, we shall identify the characteristics and challenges of AAA for IoT systems. With this, we shall also identify the gaps and the need for further research to meet the posed challenges and the various requirements.
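As a toy illustration of context-dependent authorization (invented for this summary, not drawn from the reviewed literature), the sketch below derives a device's access decision from contextual attributes such as device class, time of day, location, and battery state; all attribute names and policy rules are assumptions.

```python
# Toy sketch of context-dependent IoT authorization.
# Attribute names and policy thresholds are invented for illustration.
from dataclasses import dataclass
from datetime import time

@dataclass
class DeviceContext:
    device_class: str      # e.g. "critical", "broadband", "massive"
    local_time: time
    in_home_region: bool
    battery_percent: int

def authorize(ctx: DeviceContext, action: str) -> bool:
    """Grant or deny an action based on the device's current context."""
    # Critical-communications devices (e.g. a pacemaker) may always report.
    if ctx.device_class == "critical":
        return True
    # Deny configuration changes while the device is outside its home region.
    if action == "reconfigure" and not ctx.in_home_region:
        return False
    # Defer bulk uploads on low battery to off-peak night hours.
    if action == "bulk_upload" and ctx.battery_percent < 20:
        return time(1, 0) <= ctx.local_time <= time(5, 0)
    return True

# Example: a massive-IoT bulb asking to upload telemetry at 14:00 on low battery.
ctx = DeviceContext("massive", time(14, 0), True, 15)
print(authorize(ctx, "bulk_upload"))  # False: deferred to night hours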

Developments in the digital gig economy, often signified by firms using atypical work arrangements to mediate remunerated work via online labour platforms, have led to discussions over appropriate regulation in a number of countries. The labour platform debate touches on a broader discussion about labour market resilience in the form of widening income gaps, skill-biased technical change, and a declining labour share of income in OECD countries. These developments may be explained by declines in trade union density and collective bargaining coverage. Sweden remains an exception, with 69 percent union density and non-artificial collective bargaining coverage of 90 percent (2017). In this setting, we have observed labour platforms signing sectoral collective agreements. Here, we investigate the rationale for platform firms to sign collective agreements and conduct a law and economics analysis of these agreements to shed light on how they regulate a number of issues relating to overall labour market resilience.

Originating from large web players such as Google, Amazon and Facebook, the concept of cloud-native applications (CNAs) is spreading across every industry. CNAs adapt quickly and cost-effectively to unpredicted changes. Because of the benefits provided by CNAs, Ericsson is in the process of transforming its existing digital services portfolio to CNAs. This transformation is significantly challenging, as the existing products and services, as well as the surrounding environments such as development organizations and delivery pipelines, are built around monolithic applications. In our first investigation, we presented challenges and research directions associated with monitoring and maintaining a large telecom system at Ericsson that was developed with a high degree of legacy application reuse. In the second study, we zoomed into one of the reused applications and investigated its architectural evolution over three releases. The goal of this research is to investigate the quality of a system architecture as it evolves over releases. As a quality criterion, we investigated a specific type of technical debt (TD) called architectural technical debt (ATD), by introducing a bug classification framework to identify ATD hints. ATD is a type of debt that is difficult to identify and measure with existing automatic code analysis tools. The bug classification framework aims to bridge this identification gap by providing a practical classification scheme. Once bugs can be classified into ATD categories, different stakeholders, such as architects and managers, can make appropriate decisions on the management of ATD. In the third study, we are investigating the challenges of migrating an Ericsson subscriber provisioning system to CNAs from multiple perspectives. The study attempts to systematically identify and validate specific areas impacted by the migration, against cloud-native migration challenges already identified by prior research.
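As a rough illustration of how a bug classification framework for ATD hints might operate (the categories and keywords below are invented, not the framework from the study), bug reports could be pre-screened with simple keyword heuristics before manual classification:

```python
# Rough sketch of keyword-based tagging of bug reports with ATD-hint
# categories. Categories and keywords are invented for illustration and
# are not the classification framework from the study.
ATD_HINTS = {
    "dependency": ["circular dependency", "tight coupling", "cannot upgrade"],
    "interface": ["api mismatch", "breaking change", "incompatible contract"],
    "structure": ["god class", "duplicated logic", "workaround"],
}

def classify_bug(description: str) -> list[str]:
    """Return the ATD-hint categories whose keywords appear in a bug report."""
    text = description.lower()
    return [category for category, keywords in ATD_HINTS.items()
            if any(keyword in text for keyword in keywords)]

print(classify_bug("Login fails due to API mismatch after the breaking change"))
# ['interface']
```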

Energy consumption reduction has been a growing trend in machine learning over the past few years due to its socio-ecological importance. In new, challenging areas such as edge computing, energy consumption and predictive accuracy are key variables during algorithm design and implementation. State-of-the-art stream mining algorithms are able to make highly accurate real-time predictions on evolving datasets while adhering to the low computational requirements needed to run on edge devices. This is the case for the Hoeffding Adaptive Tree algorithm. This algorithm achieves high levels of predictive accuracy on evolving datasets by increasing the amount of computation, thus increasing its energy consumption. This paper proposes extending the Hoeffding Adaptive Tree algorithm to a more energy-efficient version, named Green Hoeffding Adaptive Tree (GHAT). GHAT uses a per-node energy growth adaptation approach that has already been implemented and tested in a recent Hoeffding tree algorithm, showing promising energy reductions.

Keywords: GreenAI · Hoeffding Trees · Data Stream Mining · Energy Efficiency
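GHAT itself is not shown here, but the baseline it extends can be tried with open-source stream-mining libraries. Below is a minimal sketch of prequential (test-then-train) evaluation of a Hoeffding Adaptive Tree using the river library, assuming river is installed; the dataset choice is arbitrary.

```python
# Minimal prequential (test-then-train) evaluation of a Hoeffding Adaptive
# Tree using the open-source `river` library.
from river import datasets, metrics, tree

model = tree.HoeffdingAdaptiveTreeClassifier(seed=42)
metric = metrics.Accuracy()

for x, y in datasets.Phishing():   # built-in binary classification stream
    y_pred = model.predict_one(x)  # test on the instance first...
    if y_pred is not None:         # (no prediction before any training)
        metric.update(y, y_pred)
    model.learn_one(x, y)          # ...then train on it

print(f"Prequential accuracy: {metric.get():.3f}")
```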

10 min coffee break / stretch legs

Technical Debt (TD) items are manifestations of poor quality introduced by sub-optimal design decisions. Refactoring is a practice that developers perform to improve code quality and increase its maintainability and understandability. Some refactoring operations aim to improve code quality by removing specific types of TD items. Other refactoring operations help developers improve code understandability or produce cleaner code. While the motivations behind the refactoring operations used by developers have been investigated before, there is a lack of empirical evidence retrospectively investigating which files are the usual suspects for refactoring (i.e., are big files more prone to being refactored, or are files with more TD items?). To fill this gap, we conducted an empirical study on three open source systems to investigate what matters when it comes to refactoring. We analyzed 16,150 commits in total to identify whether refactorings are more likely to happen in files containing more TD items or in bigger files. The main result is that size was found to be a significant factor in all the systems under analysis, whilst the number of TD items was not found to be significant in any of the systems.
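As an illustration of the kind of retrospective analysis involved (a sketch on synthetic data, not the study's actual script or results), one could fit a logistic regression relating whether a file was refactored to its size and its TD-item count:

```python
# Sketch (not the study's script) of testing whether file size or TD-item
# count is associated with a file being refactored, via logistic regression
# on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
loc = rng.lognormal(5, 1, n)      # hypothetical file sizes (lines of code)
td_items = rng.poisson(3, n)      # hypothetical TD-item counts per file
# Synthetic outcome constructed so refactoring odds grow with size only.
p = 1 / (1 + np.exp(-(0.002 * loc - 2)))
refactored = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([loc, td_items]))
result = sm.Logit(refactored, X).fit(disp=0)
print(result.summary(xname=["const", "size_loc", "td_items"]))
```

On such data, the coefficient for size comes out significant while the TD-item coefficient does not, mirroring the shape of the reported finding.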

SERT RESEARCH and INDUSTRY KICK-OFF EVENT

Time: Monday 1st of October, 9:00 – 17:00 (..ish)       Venue: Telia, Stjärntorget 1, Solna, STOCKHOLM

The SERT Research and Industry event offers an extensive full-day program of interest to industrial partners, a chance to discuss and highlight bleeding-edge research and have a great kick-off!

The first part of the conference includes presentations of SERT's six sub-projects, as they address both specific and overall challenges identified in dialogue with our partners thus far. The second part of the conference aims at an interactive mapping between the sub-projects, industry challenges, and each industrial partner; this will be organised as an open discussion forum with mingling and poster islands, where you as a participant can move between areas. It will be an opportunity for researchers/sub-project leaders to meet with industry representatives to discuss and identify challenges and starting points for the upcoming research and collaboration in SERT.

The event is free for SERT partner companies only (max 5 participants per company). The agenda (subject to change) can be seen below. The full program and other conference information will be sent out in early September. In addition to physical attendance, the initial seminars will also be streamed online for partners only (more information to come). PLEASE take time to register as soon as possible, as there are many on the waiting list.

Watch the SERT Kick-off live

We are also live-streaming all the presentations during the event for virtual attendees. 

Please help us share the live-streaming link with anyone interested!

Join via: https://www.bth.se/events/kick-off-sert/

 

Program

We arrive at Telia's great, modern venue around nine so that we have time to register and proceed to the event in time for the 9:30 start. Breakfast is served from 07:30 if you drop in early.

Project manager and senior research scientist Prof. Dr. Tony Gorschek gives an introduction to the research profile, with an overview of utilizing multi-vocal research in combination with third-generation empirical software engineering to solve tomorrow's challenges today!

SP1: Augmented Automated Testing: leveraging human-machine symbiosis for high-level test automation

The software market has over the last couple of years been spurred on by a need-for-speed that shows no sign of slowing down. This trend has fostered a culture of test automation since manual testing has been unable to scale with the size and speed of modern software development practices. Further, automation is requested on all levels of system abstraction, from small unit tests of individual software components to large-scale end-to-end system tests on a GUI level of abstraction.

However, traditional testing, manual as well as automated, has relied on human users to define the test scenarios. It thus acts in a parasitic manner that forces the user to define the tests, decide how and when to run them, and analyze the test results as the final oracle. In fact, some test purposes, like test exploration, cannot even be fully automated due to the lack of test oracles. Simply put, a cognitive human being is required today to identify correct and incorrect system behavior. However, what if we could change this dynamic?

In this research, the ultimate goal is to find ways to leverage the cognitive power of users to explore and find defects, faults and tests, whilst letting machines perform the repetitive and boring tasks that are error-prone when executed by humans. This will be achieved by utilizing advances in machine learning and artificial intelligence (AI) to foster a mutualistic (from mutualism) collaboration between tool and user rather than the parasitic relationship. Mutualism will enable new and smarter tools to learn from the user, process the learnings and provide the user with feedback to improve the user's capabilities. These improvements would, through reinforcement learning, make the tool even smarter and more capable, which in turn positively affects the tool's capability to guide the user, creating a positive feedback loop that fosters joint, mutualistic improvement of both user and tool.

However, this new technology brings many new challenges, questions, and concerns, such as:

  • How do we construct a system with these characteristics?
  • How do we efficiently train such a system?
  • How is user trust affected when the system fails and how can it be reacquired?
  • How do we maintain such a system?
  • Where in the continuous pipeline does such a system fit to optimize its value?
SP2: Heterogeneous multi-source requirements engineering

Companies are currently exposed to large amounts of heterogeneous data originating from business intelligence, product usage data, reviews and other forms of feedback. This challenges requirements identification and concretization and creates demands for revisiting requirements management activities. A growing trend is also that a substantial amount of this data is generated by machine-learning components integrated into software products that are self-adaptive (e.g., systems with deep learning algorithms). This means that these software products not only continuously provide data about the changing environment, but also self-adapt and change their behaviour based on contextual fluctuations (so-called non-deterministic behaviour). This sub-project focuses on how to support the inception, realization and evolution phases of software systems development through efficient data acquisition and analysis approaches and machine learning.

In this talk, we will revisit requirements engineering activities and focus on how we can transform them to be more data-intensive and to better support:

i) data collection and problem formulation (intelligence, identifying relevant data sources, filtering relevant information from non-relevant) – requirements analysts cannot analyze all available data, so intelligent filtering and triage are needed to support requirements screening and the early removal of irrelevant information (a minimal triage sketch follows this list)

ii) development of requirements realization alternatives (prioritizing these opinions and presenting them to decision-makers) – requirements prioritization can be data-driven and complement expert opinions by using analogy and product usage data

iii) evaluation of these alternatives (semi-automated analysis of product usage data and user feedback), where product usage data helps to understand the consequences and model the projected customer response to new functionality.
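To make point i) concrete, here is a minimal sketch of feedback triage as supervised text classification; the snippets, labels, and model choice are invented for illustration:

```python
# Minimal sketch of triaging user feedback into relevant vs. irrelevant
# for requirements analysts. Snippets and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

feedback = [
    "App crashes when exporting a report",      # relevant
    "Please add dark mode to the dashboard",    # relevant
    "Great app, five stars!",                   # irrelevant
    "First post!!!",                            # irrelevant
]
labels = [1, 1, 0, 0]  # 1 = worth an analyst's attention

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(feedback, labels)

print(triage.predict(["The export button does nothing on Android"]))  # likely [1]
```

In practice, the classifier would be trained on thousands of labelled items and used only to screen out clearly irrelevant feedback, leaving the borderline cases to the analyst.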

SP3: Value-Oriented Strategy to Detect and Minimize Waste

Software companies are immersed in a competitive market in which changes must be made under aggressive deadlines, deadlines that are sometimes in themselves a company's competitive advantage. This time pressure might force companies to make ineffective use of resources by generating waste or overhead, e.g., investing time and money in activities that do not produce any value. Examples include investing in the analysis and prototyping of requirements that will never be included in a product, or barely-good-enough decisions that might have a great impact on several areas of product development. The consequences are severe and include lower efficiency of requirements processing and decision making, code and architectural erosion, and sub-optimal usage of testing resources.

The goal of this research is to identify and mitigate the different types of waste and overhead at the different stages of the software development process, allowing organizations to focus on value creation. Most of the work so far has focused on development and maintenance activities, but we still need a broader view of the problem, including the as-yet unexplored waste in requirements and testing.

In this talk we will introduce, and illustrate with examples, the different types of waste and overhead in the inception, realization and maintenance stages of the development process. Some of these types have a clear impact on waste, like the requirements prioritization problem: choosing the “right” features in the inception phase. The problem is that some types of overhead can be mistaken for waste, like intra- and inter-team communication, and when they are minimized, even more waste is introduced (lack of understanding due to lack of communication).

The concerns that remain open are:

How can we identify waste? What is overhead? How can we avoid them in order to focus on value creation?

All these questions are covered in the SERT subproject “Value-Oriented Strategy to Detect and Minimize Waste”

Lunch served at the venue

SP4: Cognitive software engineering development models
SP5: Study and Improve LeaGile handling of organizational and team interfaces

Staying competitive in today's software market requires a high degree of flexibility and the ability to adapt to changing market conditions. Agile and Lean software development have been widely adopted as solutions for reducing needless work and increasing flexibility. Open Source approaches have revolutionised software development through the use of open collaboration platforms with advanced tools and methods for code sharing and communication. However, these methods have issues with scaling and do not inform companies about how to deal with the cognitive and organisational limitations that prevent high work performance and flexibility from being realised in practice.

Traditionally, software development processes are seen as pipelines that transform inputs to outputs – for example, requirements become specifications, and specifications become code. Part of what moves through the process is software architecture in the traditional sense – high-level blueprints that specify how a software system is structured and how the software components fit together. The process steps and the components of the architecture can be mapped to units in the software organisation. We know that this relationship is an important factor that affects communication and coordination in the organisation. We also know that it influences cognitive load, social interactions, and motivation among individual software developers. A well-designed system of processes, architectures, and organisational and team interfaces translates to higher performance and flexibility, and can better support developers' strengths and help them overcome their weaknesses.

In this talk, we illustrate why processes, architectures, and organisational and team interfaces form a crucial system of work design in software development and how they are related to developers’ cognition, feelings, and motivation. We give examples of how improvements in the work design have led to increased performance and flexibility, and discuss what could be possible in terms of creating intelligent automation that makes processes and architectures interactive rather than static pipelines and blueprints.

These topics are covered in the SERT sub-projects “Cognitive software engineering development models” and “Study and Improve LeaGile handling of organizational and team interfaces”.

SP6: Verification of Software Requirements in Dynamic, Complex and Regulated Markets

Software engineering is a data- and people-intensive activity. Data is accumulated, analyzed and transformed in order to drive activities such as requirements analysis, software design and implementation, testing and long-term maintenance. While task specialization and processes help to cope with the demands of the growing complexity of today's software products, a lot can be gained by supplementing human intelligence with computational intelligence. In the past years, we have identified, studied and analyzed human software engineering processes and designed support systems that help engineers perform their tasks more effectively and efficiently.

One example is interactive support for writing requirements specifications that indicates adherence to certain quality rules. Identifying defects in requirements as they are written reduces reviewing costs and frees up resources to verify the quality aspects for which humans are still the best judges.
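As a simplified illustration of such interactive quality checks (the rules below are generic, well-known requirements smells, not the specific rule set of this work), a checker might flag vague terms and weak verbs as a requirement is typed:

```python
# Simplified requirements-smell checker. The rules are generic examples of
# well-known smells, not this project's actual rule set.
import re

SMELLS = {
    "vague term": re.compile(r"\b(user-friendly|fast|efficient|appropriate|etc\.?)\b", re.I),
    "weak verb": re.compile(r"\b(should|might|could)\b", re.I),
    "loophole": re.compile(r"\b(if possible|as far as possible)\b", re.I),
}

def check_requirement(text: str) -> list[str]:
    """Return the names of the quality rules the requirement violates."""
    return [name for name, pattern in SMELLS.items() if pattern.search(text)]

print(check_requirement("The system should be fast and user-friendly."))
# ['vague term', 'weak verb']
```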

Another example is the semi-automated identification of domain-specific synonyms. In large organizations that collaborate with external suppliers and customers, agreeing on a common terminology is often tedious and cost-intensive. Creating a common glossary can reduce ambiguity and misunderstandings internally, but also when interfacing with external partners.
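One simplified way to bootstrap such a glossary (a sketch assuming the organization's documents are available as tokenized text, not the project's actual approach) is to mine candidate synonyms with word embeddings:

```python
# Sketch of mining candidate domain synonyms from an organization's own
# documents with word embeddings (gensim); the corpus here is a toy example.
from gensim.models import Word2Vec

# In practice: tokenized requirements, specs, and support tickets.
corpus = [
    ["the", "subscriber", "record", "is", "updated", "nightly"],
    ["each", "customer", "record", "is", "updated", "nightly"],
    ["the", "subscriber", "profile", "stores", "billing", "data"],
    ["each", "customer", "profile", "stores", "billing", "data"],
] * 50  # repeat the toy corpus so the model has something to learn from

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=1)
# Words used in the same contexts surface as candidate synonyms.
print(model.wv.most_similar("subscriber", topn=3))
```

Words that occur in the same contexts, such as "subscriber" and "customer" in this toy corpus, surface as candidates that a domain expert can then confirm or reject for the glossary.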

In this talk we will illustrate practical examples of how human and computational intelligence can be combined to improve software engineering activities. In addition, we give an outlook on how these technologies could be utilized to check conformance to external requirements.

After the introduction to the base sub-projects and areas, each sub-project team will be available for discussions, questions and details. This session is CRITICAL as an initial meet-and-greet, and the discussions give the opportunity for new ideas and initial bookings of meetings and workshops. We really hope you will be active and contribute!

This session is short but gives an overview of the next steps planned in the SERT research profile and, more importantly, gives YOU the opportunity to ask questions and share ideas. The session lasts as long as we need it to!
